jcapik pushed to kubernetes (master). "Initial commit"

notifications at fedoraproject.org
Wed Jun 10 15:11:55 UTC 2015


From c623ade0261e994e0113fb36103832a8c1f54bc8 Mon Sep 17 00:00:00 2001
From: Jan Chaloupka <jchaloup at redhat.com>
Date: Thu, 9 Oct 2014 23:18:28 +0200
Subject: Initial commit - resolves: #1122176


diff --git a/0001-remove-all-third-party-software.patch b/0001-remove-all-third-party-software.patch
new file mode 100644
index 0000000..f43b4d6
--- /dev/null
+++ b/0001-remove-all-third-party-software.patch
@@ -0,0 +1,108430 @@
+From 125ed116ea43230e3d26b9dffb352e18a06a712c Mon Sep 17 00:00:00 2001
+From: Eric Paris <eparis at redhat.com>
+Date: Thu, 21 Aug 2014 13:55:27 -0400
+Subject: [PATCH] remove all third party software
+
+---
+ Godeps/Godeps.json                                 |   104 -
+ Godeps/Readme                                      |     5 -
+ Godeps/_workspace/.gitignore                       |     2 -
+ .../_workspace/src/code.google.com/p/gcfg/LICENSE  |    57 -
+ .../_workspace/src/code.google.com/p/gcfg/README   |     7 -
+ .../_workspace/src/code.google.com/p/gcfg/doc.go   |   118 -
+ .../src/code.google.com/p/gcfg/example_test.go     |   132 -
+ .../_workspace/src/code.google.com/p/gcfg/go1_0.go |     7 -
+ .../_workspace/src/code.google.com/p/gcfg/go1_2.go |     9 -
+ .../src/code.google.com/p/gcfg/issues_test.go      |    63 -
+ .../_workspace/src/code.google.com/p/gcfg/read.go  |   181 -
+ .../src/code.google.com/p/gcfg/read_test.go        |   333 -
+ .../src/code.google.com/p/gcfg/scanner/errors.go   |   121 -
+ .../code.google.com/p/gcfg/scanner/example_test.go |    46 -
+ .../src/code.google.com/p/gcfg/scanner/scanner.go  |   342 -
+ .../code.google.com/p/gcfg/scanner/scanner_test.go |   417 -
+ .../_workspace/src/code.google.com/p/gcfg/set.go   |   281 -
+ .../code.google.com/p/gcfg/testdata/gcfg_test.gcfg |     3 -
+ .../p/gcfg/testdata/gcfg_unicode_test.gcfg         |     3 -
+ .../src/code.google.com/p/gcfg/token/position.go   |   435 -
+ .../code.google.com/p/gcfg/token/position_test.go  |   181 -
+ .../src/code.google.com/p/gcfg/token/serialize.go  |    56 -
+ .../code.google.com/p/gcfg/token/serialize_test.go |   111 -
+ .../src/code.google.com/p/gcfg/token/token.go      |    83 -
+ .../src/code.google.com/p/gcfg/types/bool.go       |    23 -
+ .../src/code.google.com/p/gcfg/types/doc.go        |     4 -
+ .../src/code.google.com/p/gcfg/types/enum.go       |    44 -
+ .../src/code.google.com/p/gcfg/types/enum_test.go  |    29 -
+ .../src/code.google.com/p/gcfg/types/int.go        |    86 -
+ .../src/code.google.com/p/gcfg/types/int_test.go   |    67 -
+ .../src/code.google.com/p/gcfg/types/scan.go       |    23 -
+ .../src/code.google.com/p/gcfg/types/scan_test.go  |    36 -
+ .../src/code.google.com/p/go-uuid/uuid/LICENSE     |    27 -
+ .../src/code.google.com/p/go-uuid/uuid/dce.go      |    84 -
+ .../src/code.google.com/p/go-uuid/uuid/doc.go      |     8 -
+ .../src/code.google.com/p/go-uuid/uuid/hash.go     |    53 -
+ .../src/code.google.com/p/go-uuid/uuid/node.go     |   101 -
+ .../src/code.google.com/p/go-uuid/uuid/time.go     |   132 -
+ .../src/code.google.com/p/go-uuid/uuid/util.go     |    43 -
+ .../src/code.google.com/p/go-uuid/uuid/uuid.go     |   163 -
+ .../code.google.com/p/go-uuid/uuid/uuid_test.go    |   390 -
+ .../src/code.google.com/p/go-uuid/uuid/version1.go |    41 -
+ .../src/code.google.com/p/go-uuid/uuid/version4.go |    25 -
+ .../src/code.google.com/p/go.net/html/atom/atom.go |    78 -
+ .../p/go.net/html/atom/atom_test.go                |   109 -
+ .../src/code.google.com/p/go.net/html/atom/gen.go  |   636 -
+ .../code.google.com/p/go.net/html/atom/table.go    |   694 -
+ .../p/go.net/html/atom/table_test.go               |   341 -
+ .../p/go.net/html/charset/charset.go               |   227 -
+ .../p/go.net/html/charset/charset_test.go          |   200 -
+ .../code.google.com/p/go.net/html/charset/gen.go   |   107 -
+ .../code.google.com/p/go.net/html/charset/table.go |   235 -
+ .../go.net/html/charset/testdata/HTTP-charset.html |    48 -
+ .../html/charset/testdata/HTTP-vs-UTF-8-BOM.html   |    48 -
+ .../charset/testdata/HTTP-vs-meta-charset.html     |    49 -
+ .../charset/testdata/HTTP-vs-meta-content.html     |    49 -
+ .../charset/testdata/No-encoding-declaration.html  |    47 -
+ .../p/go.net/html/charset/testdata/README          |     1 -
+ .../go.net/html/charset/testdata/UTF-16BE-BOM.html |   Bin 2670 -> 0 bytes
+ .../go.net/html/charset/testdata/UTF-16LE-BOM.html |   Bin 2682 -> 0 bytes
+ .../testdata/UTF-8-BOM-vs-meta-charset.html        |    49 -
+ .../testdata/UTF-8-BOM-vs-meta-content.html        |    48 -
+ .../charset/testdata/meta-charset-attribute.html   |    48 -
+ .../charset/testdata/meta-content-attribute.html   |    48 -
+ .../src/code.google.com/p/go.net/html/const.go     |   100 -
+ .../src/code.google.com/p/go.net/html/doc.go       |   106 -
+ .../src/code.google.com/p/go.net/html/doctype.go   |   156 -
+ .../src/code.google.com/p/go.net/html/entity.go    |  2253 ---
+ .../code.google.com/p/go.net/html/entity_test.go   |    29 -
+ .../src/code.google.com/p/go.net/html/escape.go    |   258 -
+ .../code.google.com/p/go.net/html/escape_test.go   |    97 -
+ .../code.google.com/p/go.net/html/example_test.go  |    40 -
+ .../src/code.google.com/p/go.net/html/foreign.go   |   226 -
+ .../src/code.google.com/p/go.net/html/node.go      |   193 -
+ .../src/code.google.com/p/go.net/html/node_test.go |   146 -
+ .../src/code.google.com/p/go.net/html/parse.go     |  2092 ---
+ .../code.google.com/p/go.net/html/parse_test.go    |   388 -
+ .../src/code.google.com/p/go.net/html/render.go    |   271 -
+ .../code.google.com/p/go.net/html/render_test.go   |   156 -
+ .../p/go.net/html/testdata/go1.html                |  2237 ---
+ .../p/go.net/html/testdata/webkit/README           |    28 -
+ .../p/go.net/html/testdata/webkit/adoption01.dat   |   194 -
+ .../p/go.net/html/testdata/webkit/adoption02.dat   |    31 -
+ .../p/go.net/html/testdata/webkit/comments01.dat   |   135 -
+ .../p/go.net/html/testdata/webkit/doctype01.dat    |   370 -
+ .../p/go.net/html/testdata/webkit/entities01.dat   |   603 -
+ .../p/go.net/html/testdata/webkit/entities02.dat   |   249 -
+ .../go.net/html/testdata/webkit/html5test-com.dat  |   246 -
+ .../p/go.net/html/testdata/webkit/inbody01.dat     |    43 -
+ .../p/go.net/html/testdata/webkit/isindex.dat      |    40 -
+ .../pending-spec-changes-plain-text-unsafe.dat     |   Bin 115 -> 0 bytes
+ .../html/testdata/webkit/pending-spec-changes.dat  |    52 -
+ .../html/testdata/webkit/plain-text-unsafe.dat     |   Bin 4166 -> 0 bytes
+ .../p/go.net/html/testdata/webkit/scriptdata01.dat |   308 -
+ .../html/testdata/webkit/scripted/adoption01.dat   |    15 -
+ .../html/testdata/webkit/scripted/webkit01.dat     |    28 -
+ .../p/go.net/html/testdata/webkit/tables01.dat     |   212 -
+ .../p/go.net/html/testdata/webkit/tests1.dat       |  1952 ---
+ .../p/go.net/html/testdata/webkit/tests10.dat      |   799 -
+ .../p/go.net/html/testdata/webkit/tests11.dat      |   482 -
+ .../p/go.net/html/testdata/webkit/tests12.dat      |    62 -
+ .../p/go.net/html/testdata/webkit/tests14.dat      |    74 -
+ .../p/go.net/html/testdata/webkit/tests15.dat      |   208 -
+ .../p/go.net/html/testdata/webkit/tests16.dat      |  2299 ---
+ .../p/go.net/html/testdata/webkit/tests17.dat      |   153 -
+ .../p/go.net/html/testdata/webkit/tests18.dat      |   269 -
+ .../p/go.net/html/testdata/webkit/tests19.dat      |  1237 --
+ .../p/go.net/html/testdata/webkit/tests2.dat       |   763 -
+ .../p/go.net/html/testdata/webkit/tests20.dat      |   455 -
+ .../p/go.net/html/testdata/webkit/tests21.dat      |   221 -
+ .../p/go.net/html/testdata/webkit/tests22.dat      |   157 -
+ .../p/go.net/html/testdata/webkit/tests23.dat      |   155 -
+ .../p/go.net/html/testdata/webkit/tests24.dat      |    79 -
+ .../p/go.net/html/testdata/webkit/tests25.dat      |   219 -
+ .../p/go.net/html/testdata/webkit/tests26.dat      |   313 -
+ .../p/go.net/html/testdata/webkit/tests3.dat       |   305 -
+ .../p/go.net/html/testdata/webkit/tests4.dat       |    59 -
+ .../p/go.net/html/testdata/webkit/tests5.dat       |   191 -
+ .../p/go.net/html/testdata/webkit/tests6.dat       |   663 -
+ .../p/go.net/html/testdata/webkit/tests7.dat       |   390 -
+ .../p/go.net/html/testdata/webkit/tests8.dat       |   148 -
+ .../p/go.net/html/testdata/webkit/tests9.dat       |   457 -
+ .../html/testdata/webkit/tests_innerHTML_1.dat     |   741 -
+ .../p/go.net/html/testdata/webkit/tricky01.dat     |   261 -
+ .../p/go.net/html/testdata/webkit/webkit01.dat     |   610 -
+ .../p/go.net/html/testdata/webkit/webkit02.dat     |   159 -
+ .../src/code.google.com/p/go.net/html/token.go     |  1219 --
+ .../code.google.com/p/go.net/html/token_test.go    |   748 -
+ .../code.google.com/p/go.net/websocket/client.go   |    98 -
+ .../p/go.net/websocket/exampledial_test.go         |    31 -
+ .../p/go.net/websocket/examplehandler_test.go      |    26 -
+ .../src/code.google.com/p/go.net/websocket/hybi.go |   564 -
+ .../p/go.net/websocket/hybi_test.go                |   590 -
+ .../code.google.com/p/go.net/websocket/server.go   |   114 -
+ .../p/go.net/websocket/websocket.go                |   411 -
+ .../p/go.net/websocket/websocket_test.go           |   341 -
+ .../compute/serviceaccount/serviceaccount.go       |   172 -
+ .../p/goauth2/oauth/example/oauthreq.go            |   100 -
+ .../oauth/jwt/example/example.client_secrets.json  |     1 -
+ .../p/goauth2/oauth/jwt/example/example.pem        |    20 -
+ .../p/goauth2/oauth/jwt/example/main.go            |   114 -
+ .../src/code.google.com/p/goauth2/oauth/jwt/jwt.go |   511 -
+ .../p/goauth2/oauth/jwt/jwt_test.go                |   486 -
+ .../src/code.google.com/p/goauth2/oauth/oauth.go   |   405 -
+ .../code.google.com/p/goauth2/oauth/oauth_test.go  |   214 -
+ .../compute/v1/compute-api.json                    |  9229 ------------
+ .../google-api-go-client/compute/v1/compute-gen.go | 15027 -------------------
+ .../p/google-api-go-client/googleapi/googleapi.go  |   377 -
+ .../googleapi/googleapi_test.go                    |   351 -
+ .../googleapi/transport/apikey.go                  |    38 -
+ .../p/google-api-go-client/googleapi/types.go      |   150 -
+ .../p/google-api-go-client/googleapi/types_test.go |    44 -
+ .../github.com/coreos/go-etcd/etcd/add_child.go    |    23 -
+ .../coreos/go-etcd/etcd/add_child_test.go          |    73 -
+ .../src/github.com/coreos/go-etcd/etcd/client.go   |   435 -
+ .../github.com/coreos/go-etcd/etcd/client_test.go  |    96 -
+ .../src/github.com/coreos/go-etcd/etcd/cluster.go  |    51 -
+ .../coreos/go-etcd/etcd/compare_and_delete.go      |    34 -
+ .../coreos/go-etcd/etcd/compare_and_delete_test.go |    46 -
+ .../coreos/go-etcd/etcd/compare_and_swap.go        |    36 -
+ .../coreos/go-etcd/etcd/compare_and_swap_test.go   |    57 -
+ .../src/github.com/coreos/go-etcd/etcd/debug.go    |    55 -
+ .../github.com/coreos/go-etcd/etcd/debug_test.go   |    28 -
+ .../src/github.com/coreos/go-etcd/etcd/delete.go   |    40 -
+ .../github.com/coreos/go-etcd/etcd/delete_test.go  |    81 -
+ .../src/github.com/coreos/go-etcd/etcd/error.go    |    48 -
+ .../src/github.com/coreos/go-etcd/etcd/get.go      |    27 -
+ .../src/github.com/coreos/go-etcd/etcd/get_test.go |   131 -
+ .../src/github.com/coreos/go-etcd/etcd/options.go  |    72 -
+ .../src/github.com/coreos/go-etcd/etcd/requests.go |   377 -
+ .../src/github.com/coreos/go-etcd/etcd/response.go |    89 -
+ .../coreos/go-etcd/etcd/set_curl_chan_test.go      |    42 -
+ .../coreos/go-etcd/etcd/set_update_create.go       |   137 -
+ .../coreos/go-etcd/etcd/set_update_create_test.go  |   241 -
+ .../src/github.com/coreos/go-etcd/etcd/version.go  |     3 -
+ .../src/github.com/coreos/go-etcd/etcd/watch.go    |   103 -
+ .../github.com/coreos/go-etcd/etcd/watch_test.go   |   119 -
+ .../github.com/fsouza/go-dockerclient/.travis.yml  |    13 -
+ .../src/github.com/fsouza/go-dockerclient/AUTHORS  |    41 -
+ .../fsouza/go-dockerclient/DOCKER-LICENSE          |     6 -
+ .../src/github.com/fsouza/go-dockerclient/LICENSE  |    22 -
+ .../fsouza/go-dockerclient/README.markdown         |    43 -
+ .../github.com/fsouza/go-dockerclient/change.go    |    36 -
+ .../fsouza/go-dockerclient/change_test.go          |    26 -
+ .../github.com/fsouza/go-dockerclient/client.go    |   536 -
+ .../fsouza/go-dockerclient/client_test.go          |   290 -
+ .../github.com/fsouza/go-dockerclient/container.go |   693 -
+ .../fsouza/go-dockerclient/container_test.go       |  1416 --
+ .../src/github.com/fsouza/go-dockerclient/env.go   |   168 -
+ .../github.com/fsouza/go-dockerclient/env_test.go  |   349 -
+ .../src/github.com/fsouza/go-dockerclient/event.go |   278 -
+ .../fsouza/go-dockerclient/event_test.go           |    93 -
+ .../fsouza/go-dockerclient/example_test.go         |   168 -
+ .../src/github.com/fsouza/go-dockerclient/image.go |   351 -
+ .../fsouza/go-dockerclient/image_test.go           |   712 -
+ .../src/github.com/fsouza/go-dockerclient/misc.go  |    59 -
+ .../github.com/fsouza/go-dockerclient/misc_test.go |   159 -
+ .../github.com/fsouza/go-dockerclient/signal.go    |    49 -
+ .../github.com/fsouza/go-dockerclient/stdcopy.go   |    91 -
+ .../fsouza/go-dockerclient/stdcopy_test.go         |   255 -
+ .../fsouza/go-dockerclient/testing/data/Dockerfile |    15 -
+ .../go-dockerclient/testing/data/container.tar     |   Bin 2048 -> 0 bytes
+ .../go-dockerclient/testing/data/dockerfile.tar    |   Bin 2560 -> 0 bytes
+ .../fsouza/go-dockerclient/testing/server.go       |   668 -
+ .../fsouza/go-dockerclient/testing/server_test.go  |   965 --
+ .../fsouza/go-dockerclient/testing/writer.go       |    43 -
+ .../_workspace/src/github.com/golang/glog/LICENSE  |   191 -
+ .../_workspace/src/github.com/golang/glog/README   |    44 -
+ .../_workspace/src/github.com/golang/glog/glog.go  |  1034 --
+ .../src/github.com/golang/glog/glog_file.go        |   124 -
+ .../src/github.com/golang/glog/glog_test.go        |   333 -
+ .../github.com/google/cadvisor/client/client.go    |   106 -
+ .../google/cadvisor/client/client_test.go          |   113 -
+ .../src/github.com/google/cadvisor/info/advice.go  |    34 -
+ .../github.com/google/cadvisor/info/container.go   |   312 -
+ .../google/cadvisor/info/container_test.go         |   101 -
+ .../src/github.com/google/cadvisor/info/machine.go |    42 -
+ .../google/cadvisor/info/test/datagen.go           |    78 -
+ .../src/github.com/google/cadvisor/info/version.go |    18 -
+ .../src/github.com/google/gofuzz/.travis.yml       |    12 -
+ .../src/github.com/google/gofuzz/CONTRIBUTING.md   |    67 -
+ .../src/github.com/google/gofuzz/LICENSE           |   202 -
+ .../src/github.com/google/gofuzz/README.md         |    71 -
+ .../_workspace/src/github.com/google/gofuzz/doc.go |    18 -
+ .../src/github.com/google/gofuzz/example_test.go   |   225 -
+ .../src/github.com/google/gofuzz/fuzz.go           |   366 -
+ .../src/github.com/google/gofuzz/fuzz_test.go      |   258 -
+ .../src/github.com/mitchellh/goamz/aws/attempt.go  |    74 -
+ .../github.com/mitchellh/goamz/aws/attempt_test.go |    57 -
+ .../src/github.com/mitchellh/goamz/aws/aws.go      |   423 -
+ .../src/github.com/mitchellh/goamz/aws/aws_test.go |   203 -
+ .../src/github.com/mitchellh/goamz/aws/client.go   |   125 -
+ .../github.com/mitchellh/goamz/aws/client_test.go  |   121 -
+ .../src/github.com/mitchellh/goamz/ec2/ec2.go      |  2599 ----
+ .../src/github.com/mitchellh/goamz/ec2/ec2_test.go |  1243 --
+ .../github.com/mitchellh/goamz/ec2/ec2i_test.go    |   203 -
+ .../github.com/mitchellh/goamz/ec2/ec2t_test.go    |   580 -
+ .../mitchellh/goamz/ec2/ec2test/filter.go          |    84 -
+ .../mitchellh/goamz/ec2/ec2test/server.go          |   993 --
+ .../github.com/mitchellh/goamz/ec2/export_test.go  |    22 -
+ .../mitchellh/goamz/ec2/responses_test.go          |   854 --
+ .../src/github.com/mitchellh/goamz/ec2/sign.go     |    45 -
+ .../github.com/mitchellh/goamz/ec2/sign_test.go    |    68 -
+ .../src/github.com/stretchr/objx/.gitignore        |    22 -
+ .../src/github.com/stretchr/objx/README.md         |     3 -
+ .../src/github.com/stretchr/objx/accessors.go      |   179 -
+ .../src/github.com/stretchr/objx/accessors_test.go |   145 -
+ .../stretchr/objx/codegen/array-access.txt         |    14 -
+ .../github.com/stretchr/objx/codegen/index.html    |    86 -
+ .../github.com/stretchr/objx/codegen/template.txt  |   286 -
+ .../stretchr/objx/codegen/types_list.txt           |    20 -
+ .../src/github.com/stretchr/objx/constants.go      |    13 -
+ .../src/github.com/stretchr/objx/conversions.go    |   117 -
+ .../github.com/stretchr/objx/conversions_test.go   |    94 -
+ .../_workspace/src/github.com/stretchr/objx/doc.go |    72 -
+ .../src/github.com/stretchr/objx/fixture_test.go   |    98 -
+ .../_workspace/src/github.com/stretchr/objx/map.go |   222 -
+ .../src/github.com/stretchr/objx/map_for_test.go   |    10 -
+ .../src/github.com/stretchr/objx/map_test.go       |   147 -
+ .../src/github.com/stretchr/objx/mutations.go      |    81 -
+ .../src/github.com/stretchr/objx/mutations_test.go |    77 -
+ .../src/github.com/stretchr/objx/security.go       |    14 -
+ .../src/github.com/stretchr/objx/security_test.go  |    12 -
+ .../stretchr/objx/simple_example_test.go           |    41 -
+ .../src/github.com/stretchr/objx/tests.go          |    17 -
+ .../src/github.com/stretchr/objx/tests_test.go     |    24 -
+ .../stretchr/objx/type_specific_codegen.go         |  2881 ----
+ .../stretchr/objx/type_specific_codegen_test.go    |  2867 ----
+ .../src/github.com/stretchr/objx/value.go          |    13 -
+ .../src/github.com/stretchr/objx/value_test.go     |     1 -
+ .../stretchr/testify/assert/assertions.go          |   490 -
+ .../stretchr/testify/assert/assertions_test.go     |   401 -
+ .../src/github.com/stretchr/testify/assert/doc.go  |    74 -
+ .../github.com/stretchr/testify/assert/errors.go   |    10 -
+ .../src/github.com/stretchr/testify/mock/doc.go    |    43 -
+ .../src/github.com/stretchr/testify/mock/mock.go   |   505 -
+ .../github.com/stretchr/testify/mock/mock_test.go  |   657 -
+ .../src/github.com/vaughan0/go-ini/LICENSE         |    14 -
+ .../src/github.com/vaughan0/go-ini/README.md       |    70 -
+ .../src/github.com/vaughan0/go-ini/ini.go          |   123 -
+ .../github.com/vaughan0/go-ini/ini_linux_test.go   |    43 -
+ .../src/github.com/vaughan0/go-ini/ini_test.go     |    89 -
+ .../src/github.com/vaughan0/go-ini/test.ini        |     2 -
+ Godeps/_workspace/src/gopkg.in/v1/yaml/LICENSE     |   185 -
+ .../src/gopkg.in/v1/yaml/LICENSE.libyaml           |    31 -
+ Godeps/_workspace/src/gopkg.in/v1/yaml/README.md   |   128 -
+ Godeps/_workspace/src/gopkg.in/v1/yaml/apic.go     |   742 -
+ Godeps/_workspace/src/gopkg.in/v1/yaml/decode.go   |   538 -
+ .../_workspace/src/gopkg.in/v1/yaml/decode_test.go |   648 -
+ Godeps/_workspace/src/gopkg.in/v1/yaml/emitterc.go |  1682 ---
+ Godeps/_workspace/src/gopkg.in/v1/yaml/encode.go   |   226 -
+ .../_workspace/src/gopkg.in/v1/yaml/encode_test.go |   386 -
+ Godeps/_workspace/src/gopkg.in/v1/yaml/parserc.go  |  1096 --
+ Godeps/_workspace/src/gopkg.in/v1/yaml/readerc.go  |   391 -
+ Godeps/_workspace/src/gopkg.in/v1/yaml/resolve.go  |   148 -
+ Godeps/_workspace/src/gopkg.in/v1/yaml/scannerc.go |  2710 ----
+ Godeps/_workspace/src/gopkg.in/v1/yaml/sorter.go   |   104 -
+ .../_workspace/src/gopkg.in/v1/yaml/suite_test.go  |    12 -
+ Godeps/_workspace/src/gopkg.in/v1/yaml/writerc.go  |    89 -
+ Godeps/_workspace/src/gopkg.in/v1/yaml/yaml.go     |   306 -
+ Godeps/_workspace/src/gopkg.in/v1/yaml/yamlh.go    |   712 -
+ .../src/gopkg.in/v1/yaml/yamlprivateh.go           |   173 -
+ 302 files changed, 105918 deletions(-)
+ delete mode 100644 Godeps/Godeps.json
+ delete mode 100644 Godeps/Readme
+ delete mode 100644 Godeps/_workspace/.gitignore
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/gcfg/LICENSE
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/gcfg/README
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/gcfg/doc.go
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/gcfg/example_test.go
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/gcfg/go1_0.go
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/gcfg/go1_2.go
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/gcfg/issues_test.go
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/gcfg/read.go
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/gcfg/read_test.go
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/gcfg/scanner/errors.go
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/gcfg/scanner/example_test.go
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/gcfg/scanner/scanner.go
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/gcfg/scanner/scanner_test.go
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/gcfg/set.go
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/gcfg/testdata/gcfg_test.gcfg
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/gcfg/testdata/gcfg_unicode_test.gcfg
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/gcfg/token/position.go
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/gcfg/token/position_test.go
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/gcfg/token/serialize.go
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/gcfg/token/serialize_test.go
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/gcfg/token/token.go
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/gcfg/types/bool.go
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/gcfg/types/doc.go
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/gcfg/types/enum.go
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/gcfg/types/enum_test.go
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/gcfg/types/int.go
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/gcfg/types/int_test.go
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/gcfg/types/scan.go
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/gcfg/types/scan_test.go
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go-uuid/uuid/LICENSE
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go-uuid/uuid/dce.go
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go-uuid/uuid/doc.go
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go-uuid/uuid/hash.go
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go-uuid/uuid/node.go
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go-uuid/uuid/time.go
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go-uuid/uuid/util.go
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go-uuid/uuid/uuid.go
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go-uuid/uuid/uuid_test.go
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go-uuid/uuid/version1.go
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go-uuid/uuid/version4.go
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go.net/html/atom/atom.go
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go.net/html/atom/atom_test.go
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go.net/html/atom/gen.go
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go.net/html/atom/table.go
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go.net/html/atom/table_test.go
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go.net/html/charset/charset.go
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go.net/html/charset/charset_test.go
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go.net/html/charset/gen.go
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go.net/html/charset/table.go
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go.net/html/charset/testdata/HTTP-charset.html
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go.net/html/charset/testdata/HTTP-vs-UTF-8-BOM.html
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go.net/html/charset/testdata/HTTP-vs-meta-charset.html
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go.net/html/charset/testdata/HTTP-vs-meta-content.html
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go.net/html/charset/testdata/No-encoding-declaration.html
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go.net/html/charset/testdata/README
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go.net/html/charset/testdata/UTF-16BE-BOM.html
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go.net/html/charset/testdata/UTF-16LE-BOM.html
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go.net/html/charset/testdata/UTF-8-BOM-vs-meta-charset.html
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go.net/html/charset/testdata/UTF-8-BOM-vs-meta-content.html
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go.net/html/charset/testdata/meta-charset-attribute.html
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go.net/html/charset/testdata/meta-content-attribute.html
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go.net/html/const.go
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go.net/html/doc.go
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go.net/html/doctype.go
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go.net/html/entity.go
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go.net/html/entity_test.go
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go.net/html/escape.go
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go.net/html/escape_test.go
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go.net/html/example_test.go
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go.net/html/foreign.go
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go.net/html/node.go
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go.net/html/node_test.go
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go.net/html/parse.go
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go.net/html/parse_test.go
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go.net/html/render.go
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go.net/html/render_test.go
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/go1.html
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/README
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/adoption01.dat
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/adoption02.dat
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/comments01.dat
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/doctype01.dat
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/entities01.dat
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/entities02.dat
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/html5test-com.dat
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/inbody01.dat
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/isindex.dat
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/pending-spec-changes-plain-text-unsafe.dat
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/pending-spec-changes.dat
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/plain-text-unsafe.dat
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/scriptdata01.dat
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/scripted/adoption01.dat
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/scripted/webkit01.dat
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tables01.dat
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests1.dat
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests10.dat
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests11.dat
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests12.dat
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests14.dat
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests15.dat
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests16.dat
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests17.dat
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests18.dat
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests19.dat
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests2.dat
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests20.dat
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests21.dat
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests22.dat
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests23.dat
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests24.dat
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests25.dat
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests26.dat
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests3.dat
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests4.dat
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests5.dat
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests6.dat
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests7.dat
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests8.dat
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests9.dat
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests_innerHTML_1.dat
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tricky01.dat
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/webkit01.dat
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/webkit02.dat
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go.net/html/token.go
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go.net/html/token_test.go
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go.net/websocket/client.go
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go.net/websocket/exampledial_test.go
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go.net/websocket/examplehandler_test.go
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go.net/websocket/hybi.go
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go.net/websocket/hybi_test.go
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go.net/websocket/server.go
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go.net/websocket/websocket.go
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/go.net/websocket/websocket_test.go
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/goauth2/compute/serviceaccount/serviceaccount.go
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/goauth2/oauth/example/oauthreq.go
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/goauth2/oauth/jwt/example/example.client_secrets.json
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/goauth2/oauth/jwt/example/example.pem
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/goauth2/oauth/jwt/example/main.go
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/goauth2/oauth/jwt/jwt.go
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/goauth2/oauth/jwt/jwt_test.go
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/goauth2/oauth/oauth.go
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/goauth2/oauth/oauth_test.go
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/google-api-go-client/compute/v1/compute-api.json
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/google-api-go-client/compute/v1/compute-gen.go
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/google-api-go-client/googleapi/googleapi.go
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/google-api-go-client/googleapi/googleapi_test.go
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/google-api-go-client/googleapi/transport/apikey.go
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/google-api-go-client/googleapi/types.go
+ delete mode 100644 Godeps/_workspace/src/code.google.com/p/google-api-go-client/googleapi/types_test.go
+ delete mode 100644 Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/add_child.go
+ delete mode 100644 Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/add_child_test.go
+ delete mode 100644 Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/client.go
+ delete mode 100644 Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/client_test.go
+ delete mode 100644 Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/cluster.go
+ delete mode 100644 Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/compare_and_delete.go
+ delete mode 100644 Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/compare_and_delete_test.go
+ delete mode 100644 Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/compare_and_swap.go
+ delete mode 100644 Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/compare_and_swap_test.go
+ delete mode 100644 Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/debug.go
+ delete mode 100644 Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/debug_test.go
+ delete mode 100644 Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/delete.go
+ delete mode 100644 Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/delete_test.go
+ delete mode 100644 Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/error.go
+ delete mode 100644 Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/get.go
+ delete mode 100644 Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/get_test.go
+ delete mode 100644 Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/options.go
+ delete mode 100644 Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/requests.go
+ delete mode 100644 Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/response.go
+ delete mode 100644 Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/set_curl_chan_test.go
+ delete mode 100644 Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/set_update_create.go
+ delete mode 100644 Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/set_update_create_test.go
+ delete mode 100644 Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/version.go
+ delete mode 100644 Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/watch.go
+ delete mode 100644 Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/watch_test.go
+ delete mode 100644 Godeps/_workspace/src/github.com/fsouza/go-dockerclient/.travis.yml
+ delete mode 100644 Godeps/_workspace/src/github.com/fsouza/go-dockerclient/AUTHORS
+ delete mode 100644 Godeps/_workspace/src/github.com/fsouza/go-dockerclient/DOCKER-LICENSE
+ delete mode 100644 Godeps/_workspace/src/github.com/fsouza/go-dockerclient/LICENSE
+ delete mode 100644 Godeps/_workspace/src/github.com/fsouza/go-dockerclient/README.markdown
+ delete mode 100644 Godeps/_workspace/src/github.com/fsouza/go-dockerclient/change.go
+ delete mode 100644 Godeps/_workspace/src/github.com/fsouza/go-dockerclient/change_test.go
+ delete mode 100644 Godeps/_workspace/src/github.com/fsouza/go-dockerclient/client.go
+ delete mode 100644 Godeps/_workspace/src/github.com/fsouza/go-dockerclient/client_test.go
+ delete mode 100644 Godeps/_workspace/src/github.com/fsouza/go-dockerclient/container.go
+ delete mode 100644 Godeps/_workspace/src/github.com/fsouza/go-dockerclient/container_test.go
+ delete mode 100644 Godeps/_workspace/src/github.com/fsouza/go-dockerclient/env.go
+ delete mode 100644 Godeps/_workspace/src/github.com/fsouza/go-dockerclient/env_test.go
+ delete mode 100644 Godeps/_workspace/src/github.com/fsouza/go-dockerclient/event.go
+ delete mode 100644 Godeps/_workspace/src/github.com/fsouza/go-dockerclient/event_test.go
+ delete mode 100644 Godeps/_workspace/src/github.com/fsouza/go-dockerclient/example_test.go
+ delete mode 100644 Godeps/_workspace/src/github.com/fsouza/go-dockerclient/image.go
+ delete mode 100644 Godeps/_workspace/src/github.com/fsouza/go-dockerclient/image_test.go
+ delete mode 100644 Godeps/_workspace/src/github.com/fsouza/go-dockerclient/misc.go
+ delete mode 100644 Godeps/_workspace/src/github.com/fsouza/go-dockerclient/misc_test.go
+ delete mode 100644 Godeps/_workspace/src/github.com/fsouza/go-dockerclient/signal.go
+ delete mode 100644 Godeps/_workspace/src/github.com/fsouza/go-dockerclient/stdcopy.go
+ delete mode 100644 Godeps/_workspace/src/github.com/fsouza/go-dockerclient/stdcopy_test.go
+ delete mode 100644 Godeps/_workspace/src/github.com/fsouza/go-dockerclient/testing/data/Dockerfile
+ delete mode 100644 Godeps/_workspace/src/github.com/fsouza/go-dockerclient/testing/data/container.tar
+ delete mode 100644 Godeps/_workspace/src/github.com/fsouza/go-dockerclient/testing/data/dockerfile.tar
+ delete mode 100644 Godeps/_workspace/src/github.com/fsouza/go-dockerclient/testing/server.go
+ delete mode 100644 Godeps/_workspace/src/github.com/fsouza/go-dockerclient/testing/server_test.go
+ delete mode 100644 Godeps/_workspace/src/github.com/fsouza/go-dockerclient/testing/writer.go
+ delete mode 100644 Godeps/_workspace/src/github.com/golang/glog/LICENSE
+ delete mode 100644 Godeps/_workspace/src/github.com/golang/glog/README
+ delete mode 100644 Godeps/_workspace/src/github.com/golang/glog/glog.go
+ delete mode 100644 Godeps/_workspace/src/github.com/golang/glog/glog_file.go
+ delete mode 100644 Godeps/_workspace/src/github.com/golang/glog/glog_test.go
+ delete mode 100644 Godeps/_workspace/src/github.com/google/cadvisor/client/client.go
+ delete mode 100644 Godeps/_workspace/src/github.com/google/cadvisor/client/client_test.go
+ delete mode 100644 Godeps/_workspace/src/github.com/google/cadvisor/info/advice.go
+ delete mode 100644 Godeps/_workspace/src/github.com/google/cadvisor/info/container.go
+ delete mode 100644 Godeps/_workspace/src/github.com/google/cadvisor/info/container_test.go
+ delete mode 100644 Godeps/_workspace/src/github.com/google/cadvisor/info/machine.go
+ delete mode 100644 Godeps/_workspace/src/github.com/google/cadvisor/info/test/datagen.go
+ delete mode 100644 Godeps/_workspace/src/github.com/google/cadvisor/info/version.go
+ delete mode 100644 Godeps/_workspace/src/github.com/google/gofuzz/.travis.yml
+ delete mode 100644 Godeps/_workspace/src/github.com/google/gofuzz/CONTRIBUTING.md
+ delete mode 100644 Godeps/_workspace/src/github.com/google/gofuzz/LICENSE
+ delete mode 100644 Godeps/_workspace/src/github.com/google/gofuzz/README.md
+ delete mode 100644 Godeps/_workspace/src/github.com/google/gofuzz/doc.go
+ delete mode 100644 Godeps/_workspace/src/github.com/google/gofuzz/example_test.go
+ delete mode 100644 Godeps/_workspace/src/github.com/google/gofuzz/fuzz.go
+ delete mode 100644 Godeps/_workspace/src/github.com/google/gofuzz/fuzz_test.go
+ delete mode 100644 Godeps/_workspace/src/github.com/mitchellh/goamz/aws/attempt.go
+ delete mode 100644 Godeps/_workspace/src/github.com/mitchellh/goamz/aws/attempt_test.go
+ delete mode 100644 Godeps/_workspace/src/github.com/mitchellh/goamz/aws/aws.go
+ delete mode 100644 Godeps/_workspace/src/github.com/mitchellh/goamz/aws/aws_test.go
+ delete mode 100644 Godeps/_workspace/src/github.com/mitchellh/goamz/aws/client.go
+ delete mode 100644 Godeps/_workspace/src/github.com/mitchellh/goamz/aws/client_test.go
+ delete mode 100644 Godeps/_workspace/src/github.com/mitchellh/goamz/ec2/ec2.go
+ delete mode 100644 Godeps/_workspace/src/github.com/mitchellh/goamz/ec2/ec2_test.go
+ delete mode 100644 Godeps/_workspace/src/github.com/mitchellh/goamz/ec2/ec2i_test.go
+ delete mode 100644 Godeps/_workspace/src/github.com/mitchellh/goamz/ec2/ec2t_test.go
+ delete mode 100644 Godeps/_workspace/src/github.com/mitchellh/goamz/ec2/ec2test/filter.go
+ delete mode 100644 Godeps/_workspace/src/github.com/mitchellh/goamz/ec2/ec2test/server.go
+ delete mode 100644 Godeps/_workspace/src/github.com/mitchellh/goamz/ec2/export_test.go
+ delete mode 100644 Godeps/_workspace/src/github.com/mitchellh/goamz/ec2/responses_test.go
+ delete mode 100644 Godeps/_workspace/src/github.com/mitchellh/goamz/ec2/sign.go
+ delete mode 100644 Godeps/_workspace/src/github.com/mitchellh/goamz/ec2/sign_test.go
+ delete mode 100644 Godeps/_workspace/src/github.com/stretchr/objx/.gitignore
+ delete mode 100644 Godeps/_workspace/src/github.com/stretchr/objx/README.md
+ delete mode 100644 Godeps/_workspace/src/github.com/stretchr/objx/accessors.go
+ delete mode 100644 Godeps/_workspace/src/github.com/stretchr/objx/accessors_test.go
+ delete mode 100644 Godeps/_workspace/src/github.com/stretchr/objx/codegen/array-access.txt
+ delete mode 100644 Godeps/_workspace/src/github.com/stretchr/objx/codegen/index.html
+ delete mode 100644 Godeps/_workspace/src/github.com/stretchr/objx/codegen/template.txt
+ delete mode 100644 Godeps/_workspace/src/github.com/stretchr/objx/codegen/types_list.txt
+ delete mode 100644 Godeps/_workspace/src/github.com/stretchr/objx/constants.go
+ delete mode 100644 Godeps/_workspace/src/github.com/stretchr/objx/conversions.go
+ delete mode 100644 Godeps/_workspace/src/github.com/stretchr/objx/conversions_test.go
+ delete mode 100644 Godeps/_workspace/src/github.com/stretchr/objx/doc.go
+ delete mode 100644 Godeps/_workspace/src/github.com/stretchr/objx/fixture_test.go
+ delete mode 100644 Godeps/_workspace/src/github.com/stretchr/objx/map.go
+ delete mode 100644 Godeps/_workspace/src/github.com/stretchr/objx/map_for_test.go
+ delete mode 100644 Godeps/_workspace/src/github.com/stretchr/objx/map_test.go
+ delete mode 100644 Godeps/_workspace/src/github.com/stretchr/objx/mutations.go
+ delete mode 100644 Godeps/_workspace/src/github.com/stretchr/objx/mutations_test.go
+ delete mode 100644 Godeps/_workspace/src/github.com/stretchr/objx/security.go
+ delete mode 100644 Godeps/_workspace/src/github.com/stretchr/objx/security_test.go
+ delete mode 100644 Godeps/_workspace/src/github.com/stretchr/objx/simple_example_test.go
+ delete mode 100644 Godeps/_workspace/src/github.com/stretchr/objx/tests.go
+ delete mode 100644 Godeps/_workspace/src/github.com/stretchr/objx/tests_test.go
+ delete mode 100644 Godeps/_workspace/src/github.com/stretchr/objx/type_specific_codegen.go
+ delete mode 100644 Godeps/_workspace/src/github.com/stretchr/objx/type_specific_codegen_test.go
+ delete mode 100644 Godeps/_workspace/src/github.com/stretchr/objx/value.go
+ delete mode 100644 Godeps/_workspace/src/github.com/stretchr/objx/value_test.go
+ delete mode 100644 Godeps/_workspace/src/github.com/stretchr/testify/assert/assertions.go
+ delete mode 100644 Godeps/_workspace/src/github.com/stretchr/testify/assert/assertions_test.go
+ delete mode 100644 Godeps/_workspace/src/github.com/stretchr/testify/assert/doc.go
+ delete mode 100644 Godeps/_workspace/src/github.com/stretchr/testify/assert/errors.go
+ delete mode 100644 Godeps/_workspace/src/github.com/stretchr/testify/mock/doc.go
+ delete mode 100644 Godeps/_workspace/src/github.com/stretchr/testify/mock/mock.go
+ delete mode 100644 Godeps/_workspace/src/github.com/stretchr/testify/mock/mock_test.go
+ delete mode 100644 Godeps/_workspace/src/github.com/vaughan0/go-ini/LICENSE
+ delete mode 100644 Godeps/_workspace/src/github.com/vaughan0/go-ini/README.md
+ delete mode 100644 Godeps/_workspace/src/github.com/vaughan0/go-ini/ini.go
+ delete mode 100644 Godeps/_workspace/src/github.com/vaughan0/go-ini/ini_linux_test.go
+ delete mode 100644 Godeps/_workspace/src/github.com/vaughan0/go-ini/ini_test.go
+ delete mode 100644 Godeps/_workspace/src/github.com/vaughan0/go-ini/test.ini
+ delete mode 100644 Godeps/_workspace/src/gopkg.in/v1/yaml/LICENSE
+ delete mode 100644 Godeps/_workspace/src/gopkg.in/v1/yaml/LICENSE.libyaml
+ delete mode 100644 Godeps/_workspace/src/gopkg.in/v1/yaml/README.md
+ delete mode 100644 Godeps/_workspace/src/gopkg.in/v1/yaml/apic.go
+ delete mode 100644 Godeps/_workspace/src/gopkg.in/v1/yaml/decode.go
+ delete mode 100644 Godeps/_workspace/src/gopkg.in/v1/yaml/decode_test.go
+ delete mode 100644 Godeps/_workspace/src/gopkg.in/v1/yaml/emitterc.go
+ delete mode 100644 Godeps/_workspace/src/gopkg.in/v1/yaml/encode.go
+ delete mode 100644 Godeps/_workspace/src/gopkg.in/v1/yaml/encode_test.go
+ delete mode 100644 Godeps/_workspace/src/gopkg.in/v1/yaml/parserc.go
+ delete mode 100644 Godeps/_workspace/src/gopkg.in/v1/yaml/readerc.go
+ delete mode 100644 Godeps/_workspace/src/gopkg.in/v1/yaml/resolve.go
+ delete mode 100644 Godeps/_workspace/src/gopkg.in/v1/yaml/scannerc.go
+ delete mode 100644 Godeps/_workspace/src/gopkg.in/v1/yaml/sorter.go
+ delete mode 100644 Godeps/_workspace/src/gopkg.in/v1/yaml/suite_test.go
+ delete mode 100644 Godeps/_workspace/src/gopkg.in/v1/yaml/writerc.go
+ delete mode 100644 Godeps/_workspace/src/gopkg.in/v1/yaml/yaml.go
+ delete mode 100644 Godeps/_workspace/src/gopkg.in/v1/yaml/yamlh.go
+ delete mode 100644 Godeps/_workspace/src/gopkg.in/v1/yaml/yamlprivateh.go
+
+diff --git a/Godeps/Godeps.json b/Godeps/Godeps.json
+deleted file mode 100644
+index b680328..0000000
+--- a/Godeps/Godeps.json
++++ /dev/null
+@@ -1,104 +0,0 @@
+-{
+-	"ImportPath": "github.com/GoogleCloudPlatform/kubernetes",
+-	"GoVersion": "go1.3",
+-	"Packages": [
+-		"./..."
+-	],
+-	"Deps": [
+-		{
+-			"ImportPath": "code.google.com/p/gcfg",
+-			"Rev": "c2d3050044d05357eaf6c3547249ba57c5e235cb"
+-		},
+-		{
+-			"ImportPath": "code.google.com/p/go-uuid/uuid",
+-			"Comment": "null-12",
+-			"Rev": "7dda39b2e7d5e265014674c5af696ba4186679e9"
+-		},
+-		{
+-			"ImportPath": "code.google.com/p/go.net/html",
+-			"Comment": "null-144",
+-			"Rev": "ad01a6fcc8a19d3a4478c836895ffe883bd2ceab"
+-		},
+-		{
+-			"ImportPath": "code.google.com/p/go.net/websocket",
+-			"Comment": "null-144",
+-			"Rev": "ad01a6fcc8a19d3a4478c836895ffe883bd2ceab"
+-		},
+-		{
+-			"ImportPath": "code.google.com/p/goauth2/compute/serviceaccount",
+-			"Comment": "weekly-50",
+-			"Rev": "7fc9d958c83464bd7650240569bf93a102266e6a"
+-		},
+-		{
+-			"ImportPath": "code.google.com/p/goauth2/oauth",
+-			"Comment": "weekly-50",
+-			"Rev": "7fc9d958c83464bd7650240569bf93a102266e6a"
+-		},
+-		{
+-			"ImportPath": "code.google.com/p/google-api-go-client/compute/v1",
+-			"Comment": "release-96",
+-			"Rev": "0923cdda5b82a7dd0dd5c689f824ca5e7d9b60de"
+-		},
+-		{
+-			"ImportPath": "code.google.com/p/google-api-go-client/googleapi",
+-			"Comment": "release-96",
+-			"Rev": "0923cdda5b82a7dd0dd5c689f824ca5e7d9b60de"
+-		},
+-		{
+-			"ImportPath": "github.com/coreos/go-etcd/etcd",
+-			"Comment": "v0.2.0-rc1-120-g23142f6",
+-			"Rev": "23142f6773a676cc2cae8dd0cb90b2ea761c853f"
+-		},
+-		{
+-			"ImportPath": "github.com/fsouza/go-dockerclient",
+-			"Comment": "0.2.1-241-g0dbb508",
+-			"Rev": "0dbb508e94dd899a6743d035d8f249c7634d26da"
+-		},
+-		{
+-			"ImportPath": "github.com/golang/glog",
+-			"Rev": "d1c4472bf2efd3826f2b5bdcc02d8416798d678c"
+-		},
+-		{
+-			"ImportPath": "github.com/google/cadvisor/client",
+-			"Comment": "0.4.0",
+-			"Rev": "5a6d06c02600b1e57e55a9d9f71dbac1bfc9fe6c"
+-		},
+-		{
+-			"ImportPath": "github.com/google/cadvisor/info",
+-			"Comment": "0.4.0",
+-			"Rev": "5a6d06c02600b1e57e55a9d9f71dbac1bfc9fe6c"
+-		},
+-		{
+-			"ImportPath": "github.com/google/gofuzz",
+-			"Rev": "aef70dacbc78771e35beb261bb3a72986adf7906"
+-		},
+-		{
+-			"ImportPath": "github.com/mitchellh/goamz/aws",
+-			"Rev": "9cad7da945e699385c1a3e115aa255211921c9bb"
+-		},
+-		{
+-			"ImportPath": "github.com/mitchellh/goamz/ec2",
+-			"Rev": "9cad7da945e699385c1a3e115aa255211921c9bb"
+-		},
+-		{
+-			"ImportPath": "github.com/stretchr/objx",
+-			"Rev": "d40df0cc104c06eae2dfe03d7dddb83802d52f9a"
+-		},
+-		{
+-			"ImportPath": "github.com/stretchr/testify/assert",
+-			"Rev": "37614ac27794505bf7867ca93aac883cadb6a5f7"
+-		},
+-		{
+-			"ImportPath": "github.com/stretchr/testify/mock",
+-			"Rev": "37614ac27794505bf7867ca93aac883cadb6a5f7"
+-		},
+-		{
+-			"ImportPath": "github.com/vaughan0/go-ini",
+-			"Rev": "a98ad7ee00ec53921f08832bc06ecf7fd600e6a1"
+-		},
+-		{
+-			"ImportPath": "gopkg.in/v1/yaml",
+-			"Rev": "1b9791953ba4027efaeb728c7355e542a203be5e"
+-		}
+-	]
+-}
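The Godeps.json removed above is godep's manifest: "Packages" lists what to scan, and each entry under "Deps" pins one import path to an exact commit via "Rev", with an optional "Comment" recording a human-readable tag. A minimal manifest with a single pinned dependency, reusing the glog entry from the file above, would look like this (the compact one-entry form is a sketch for illustration, not taken from this tree):

```json
{
	"ImportPath": "github.com/GoogleCloudPlatform/kubernetes",
	"GoVersion": "go1.3",
	"Packages": ["./..."],
	"Deps": [
		{
			"ImportPath": "github.com/golang/glog",
			"Rev": "d1c4472bf2efd3826f2b5bdcc02d8416798d678c"
		}
	]
}
```

godep reads this file to restore each listed import path to the recorded revision, which is exactly the pinning this patch discards along with the vendored copies.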
+diff --git a/Godeps/Readme b/Godeps/Readme
+deleted file mode 100644
+index 4cdaa53..0000000
+--- a/Godeps/Readme
++++ /dev/null
+@@ -1,5 +0,0 @@
+-This directory tree is generated automatically by godep.
+-
+-Please do not edit.
+-
+-See https://github.com/tools/godep for more information.
+diff --git a/Godeps/_workspace/.gitignore b/Godeps/_workspace/.gitignore
+deleted file mode 100644
+index f037d68..0000000
+--- a/Godeps/_workspace/.gitignore
++++ /dev/null
+@@ -1,2 +0,0 @@
+-/pkg
+-/bin
+diff --git a/Godeps/_workspace/src/code.google.com/p/gcfg/LICENSE b/Godeps/_workspace/src/code.google.com/p/gcfg/LICENSE
+deleted file mode 100644
+index b0a9e76..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/gcfg/LICENSE
++++ /dev/null
+@@ -1,57 +0,0 @@
+-Copyright (c) 2012 Péter Surányi. All rights reserved.
+-
+-Redistribution and use in source and binary forms, with or without
+-modification, are permitted provided that the following conditions are
+-met:
+-
+-   * Redistributions of source code must retain the above copyright
+-notice, this list of conditions and the following disclaimer.
+-   * Redistributions in binary form must reproduce the above
+-copyright notice, this list of conditions and the following disclaimer
+-in the documentation and/or other materials provided with the
+-distribution.
+-
+-THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+-"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+-LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+-A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+-OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+-SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+-LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+-DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+-THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+-(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+-OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+-
+-----------------------------------------------------------------------
+-Portions of gcfg's source code have been derived from Go, and are
+-covered by the following license:
+-----------------------------------------------------------------------
+-
+-Copyright (c) 2009 The Go Authors. All rights reserved.
+-
+-Redistribution and use in source and binary forms, with or without
+-modification, are permitted provided that the following conditions are
+-met:
+-
+-   * Redistributions of source code must retain the above copyright
+-notice, this list of conditions and the following disclaimer.
+-   * Redistributions in binary form must reproduce the above
+-copyright notice, this list of conditions and the following disclaimer
+-in the documentation and/or other materials provided with the
+-distribution.
+-   * Neither the name of Google Inc. nor the names of its
+-contributors may be used to endorse or promote products derived from
+-this software without specific prior written permission.
+-
+-THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+-"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+-LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+-A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+-OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+-SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+-LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+-DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+-THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+-(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+-OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+diff --git a/Godeps/_workspace/src/code.google.com/p/gcfg/README b/Godeps/_workspace/src/code.google.com/p/gcfg/README
+deleted file mode 100644
+index 8f621c3..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/gcfg/README
++++ /dev/null
+@@ -1,7 +0,0 @@
+-Gcfg reads INI-style configuration files into Go structs;
+-supports user-defined types and subsections.
+-
+-Project page: https://code.google.com/p/gcfg
+-Package docs: http://godoc.org/code.google.com/p/gcfg
+-
+-My other projects: https://speter.net
+diff --git a/Godeps/_workspace/src/code.google.com/p/gcfg/doc.go b/Godeps/_workspace/src/code.google.com/p/gcfg/doc.go
+deleted file mode 100644
+index 99687b4..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/gcfg/doc.go
++++ /dev/null
+@@ -1,118 +0,0 @@
+-// Package gcfg reads "INI-style" text-based configuration files with
+-// "name=value" pairs grouped into sections (gcfg files).
+-//
+-// This package is still a work in progress; see the sections below for planned
+-// changes.
+-//
+-// Syntax
+-//
+-// The syntax is based on that used by git config:
+-// http://git-scm.com/docs/git-config#_syntax .
+-// There are some (planned) differences compared to the git config format:
+-//  - improve data portability:
+-//    - must be encoded in UTF-8 (for now) and must not contain the 0 byte
+-//    - the 'include' directive and "path" type are not supported
+-//      (path type may be implementable as a user-defined type)
+-//  - internationalization
+-//    - section and variable names can contain unicode letters, unicode digits
+-//      (as defined in http://golang.org/ref/spec#Characters ) and hyphens
+-//      (U+002D), starting with a unicode letter
+-//  - disallow potentially ambiguous or misleading definitions:
+-//    - `[sec.sub]` format is not allowed (deprecated in gitconfig)
+-//    - `[sec ""]` is not allowed
+-//      - use `[sec]` for section name "sec" and empty subsection name
+-//    - (planned) within a single file, definitions must be contiguous for each:
+-//      - section: '[secA]' -> '[secB]' -> '[secA]' is an error
+-//      - subsection: '[sec "A"]' -> '[sec "B"]' -> '[sec "A"]' is an error
+-//      - multivalued variable: 'multi=a' -> 'other=x' -> 'multi=b' is an error
+-//
+-// Data structure
+-//
+-// The functions in this package read values into a user-defined struct.
+-// Each section corresponds to a struct field in the config struct, and each
+-// variable in a section corresponds to a data field in the section struct.
+-// The mapping of each section or variable name to fields is done either based
+-// on the "gcfg" struct tag or by matching the name of the section or variable,
+-// ignoring case. In the latter case, hyphens '-' in section and variable names
+-// correspond to underscores '_' in field names.
+-// Fields must be exported; to use a section or variable name starting with a
+-// letter that is neither upper- nor lower-case, prefix the field name with 'X'.
+-// (See https://code.google.com/p/go/issues/detail?id=5763#c4 .)
+-//
+-// For sections with subsections, the corresponding field in config must be a
+-// map, rather than a struct, with string keys and pointer-to-struct values.
+-// Values for subsection variables are stored in the map with the subsection
+-// name used as the map key.
+-// (Note that unlike section and variable names, subsection names are case
+-// sensitive.)
+-// When using a map, and there is a section with the same section name but
+-// without a subsection name, its values are stored with the empty string used
+-// as the key.
+-//
+-// The functions in this package panic if config is not a pointer to a struct,
+-// or when a field is not of a suitable type (either a struct or a map with
+-// string keys and pointer-to-struct values).
+-//
+-// Parsing of values
+-//
+-// The section structs in the config struct may contain single-valued or
+-// multi-valued variables. Variables of unnamed slice type (that is, a type
+-// starting with `[]`) are treated as multi-value; all others (including named
+-// slice types) are treated as single-valued variables.
+-//
+-// Single-valued variables are handled based on the type as follows.
+-// Unnamed pointer types (that is, types starting with `*`) are dereferenced,
+-// and if necessary, a new instance is allocated.
+-//
+-// For types implementing the encoding.TextUnmarshaler interface, the
+-// UnmarshalText method is used to set the value. Implementing this method is
+-// the recommended way for parsing user-defined types.
+-//
+-// For fields of string kind, the value string is assigned to the field, after
+-// unquoting and unescaping as needed.
+-// For fields of bool kind, the field is set to true if the value is "true",
+-// "yes", "on" or "1", and set to false if the value is "false", "no", "off" or
+-// "0", ignoring case. In addition, single-valued bool fields can be specified
+-// with a "blank" value (variable name without equals sign and value); in such
+-// case the value is set to true.
+-//
+-// Predefined integer types [u]int(|8|16|32|64) and big.Int are parsed as
+-// decimal, or as hexadecimal if the value has a '0x' prefix. (This prevents
+-// zero-padded numbers from being unintuitively treated as octal.) Other types
+-// with [u]int* as the underlying type, such as os.FileMode and uintptr, allow
+-// decimal, hexadecimal, or octal values.
+-// Parsing mode for integer types can be overridden using the struct tag option
+-// ",int=mode" where mode is a combination of the 'd', 'h', and 'o' characters
+-// (each standing for decimal, hexadecimal, and octal, respectively.)
+-//
+-// All other types are parsed using fmt.Sscanf with the "%v" verb.
+-//
+-// For multi-valued variables, each individual value is parsed as above and
+-// appended to the slice. If the first value is specified as a "blank" value
+-// (variable name without equals sign and value), a new slice is allocated;
+-// that is, any values previously set in the slice will be ignored.
+-//
+-// The types subpackage provides helpers for parsing "enum-like" and integer
+-// types.
+-//
+-// TODO
+-//
+-// The following is a list of changes under consideration:
+-//  - documentation
+-//    - self-contained syntax documentation
+-//    - more practical examples
+-//    - move TODOs to issue tracker (eventually)
+-//  - syntax
+-//    - reconsider valid escape sequences
+-//      (gitconfig doesn't support \r in value, \t in subsection name, etc.)
+-//  - reading / parsing gcfg files
+-//    - define internal representation structure
+-//    - support multiple inputs (readers, strings, files)
+-//    - support declaring encoding (?)
+-//    - support varying fields sets for subsections (?)
+-//  - writing gcfg files
+-//  - error handling
+-//    - make error context accessible programmatically?
+-//    - limit input size?
+-//
+-package gcfg
+diff --git a/Godeps/_workspace/src/code.google.com/p/gcfg/example_test.go b/Godeps/_workspace/src/code.google.com/p/gcfg/example_test.go
+deleted file mode 100644
+index 884f3fb..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/gcfg/example_test.go
++++ /dev/null
+@@ -1,132 +0,0 @@
+-package gcfg_test
+-
+-import (
+-	"fmt"
+-	"log"
+-)
+-
+-import "code.google.com/p/gcfg"
+-
+-func ExampleReadStringInto() {
+-	cfgStr := `; Comment line
+-[section]
+-name=value # comment`
+-	cfg := struct {
+-		Section struct {
+-			Name string
+-		}
+-	}{}
+-	err := gcfg.ReadStringInto(&cfg, cfgStr)
+-	if err != nil {
+-		log.Fatalf("Failed to parse gcfg data: %s", err)
+-	}
+-	fmt.Println(cfg.Section.Name)
+-	// Output: value
+-}
+-
+-func ExampleReadStringInto_bool() {
+-	cfgStr := `; Comment line
+-[section]
+-switch=on`
+-	cfg := struct {
+-		Section struct {
+-			Switch bool
+-		}
+-	}{}
+-	err := gcfg.ReadStringInto(&cfg, cfgStr)
+-	if err != nil {
+-		log.Fatalf("Failed to parse gcfg data: %s", err)
+-	}
+-	fmt.Println(cfg.Section.Switch)
+-	// Output: true
+-}
+-
+-func ExampleReadStringInto_hyphens() {
+-	cfgStr := `; Comment line
+-[section-name]
+-variable-name=value # comment`
+-	cfg := struct {
+-		Section_Name struct {
+-			Variable_Name string
+-		}
+-	}{}
+-	err := gcfg.ReadStringInto(&cfg, cfgStr)
+-	if err != nil {
+-		log.Fatalf("Failed to parse gcfg data: %s", err)
+-	}
+-	fmt.Println(cfg.Section_Name.Variable_Name)
+-	// Output: value
+-}
+-
+-func ExampleReadStringInto_tags() {
+-	cfgStr := `; Comment line
+-[section]
+-var-name=value # comment`
+-	cfg := struct {
+-		Section struct {
+-			FieldName string `gcfg:"var-name"`
+-		}
+-	}{}
+-	err := gcfg.ReadStringInto(&cfg, cfgStr)
+-	if err != nil {
+-		log.Fatalf("Failed to parse gcfg data: %s", err)
+-	}
+-	fmt.Println(cfg.Section.FieldName)
+-	// Output: value
+-}
+-
+-func ExampleReadStringInto_subsections() {
+-	cfgStr := `; Comment line
+-[profile "A"]
+-color = white
+-
+-[profile "B"]
+-color = black
+-`
+-	cfg := struct {
+-		Profile map[string]*struct {
+-			Color string
+-		}
+-	}{}
+-	err := gcfg.ReadStringInto(&cfg, cfgStr)
+-	if err != nil {
+-		log.Fatalf("Failed to parse gcfg data: %s", err)
+-	}
+-	fmt.Printf("%s %s\n", cfg.Profile["A"].Color, cfg.Profile["B"].Color)
+-	// Output: white black
+-}
+-
+-func ExampleReadStringInto_multivalue() {
+-	cfgStr := `; Comment line
+-[section]
+-multi=value1
+-multi=value2`
+-	cfg := struct {
+-		Section struct {
+-			Multi []string
+-		}
+-	}{}
+-	err := gcfg.ReadStringInto(&cfg, cfgStr)
+-	if err != nil {
+-		log.Fatalf("Failed to parse gcfg data: %s", err)
+-	}
+-	fmt.Println(cfg.Section.Multi)
+-	// Output: [value1 value2]
+-}
+-
+-func ExampleReadStringInto_unicode() {
+-	cfgStr := `; Comment line
+-[甲]
+-乙=丙 # comment`
+-	cfg := struct {
+-		X甲 struct {
+-			X乙 string
+-		}
+-	}{}
+-	err := gcfg.ReadStringInto(&cfg, cfgStr)
+-	if err != nil {
+-		log.Fatalf("Failed to parse gcfg data: %s", err)
+-	}
+-	fmt.Println(cfg.X甲.X乙)
+-	// Output: 丙
+-}
+diff --git a/Godeps/_workspace/src/code.google.com/p/gcfg/go1_0.go b/Godeps/_workspace/src/code.google.com/p/gcfg/go1_0.go
+deleted file mode 100644
+index 6670210..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/gcfg/go1_0.go
++++ /dev/null
+@@ -1,7 +0,0 @@
+-// +build !go1.2
+-
+-package gcfg
+-
+-type textUnmarshaler interface {
+-	UnmarshalText(text []byte) error
+-}
+diff --git a/Godeps/_workspace/src/code.google.com/p/gcfg/go1_2.go b/Godeps/_workspace/src/code.google.com/p/gcfg/go1_2.go
+deleted file mode 100644
+index 6f5843b..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/gcfg/go1_2.go
++++ /dev/null
+@@ -1,9 +0,0 @@
+-// +build go1.2
+-
+-package gcfg
+-
+-import (
+-	"encoding"
+-)
+-
+-type textUnmarshaler encoding.TextUnmarshaler
+diff --git a/Godeps/_workspace/src/code.google.com/p/gcfg/issues_test.go b/Godeps/_workspace/src/code.google.com/p/gcfg/issues_test.go
+deleted file mode 100644
+index 796dd10..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/gcfg/issues_test.go
++++ /dev/null
+@@ -1,63 +0,0 @@
+-package gcfg
+-
+-import (
+-	"fmt"
+-	"math/big"
+-	"strings"
+-	"testing"
+-)
+-
+-type Config1 struct {
+-	Section struct {
+-		Int    int
+-		BigInt big.Int
+-	}
+-}
+-
+-var testsIssue1 = []struct {
+-	cfg      string
+-	typename string
+-}{
+-	{"[section]\nint=X", "int"},
+-	{"[section]\nint=", "int"},
+-	{"[section]\nint=1A", "int"},
+-	{"[section]\nbigint=X", "big.Int"},
+-	{"[section]\nbigint=", "big.Int"},
+-	{"[section]\nbigint=1A", "big.Int"},
+-}
+-
+-// Value parse error should:
+-//  - include plain type name
+-//  - not include reflect internals
+-func TestIssue1(t *testing.T) {
+-	for i, tt := range testsIssue1 {
+-		var c Config1
+-		err := ReadStringInto(&c, tt.cfg)
+-		switch {
+-		case err == nil:
+-			t.Errorf("%d fail: got ok; wanted error", i)
+-		case !strings.Contains(err.Error(), tt.typename):
+-			t.Errorf("%d fail: error message doesn't contain type name %q: %v",
+-				i, tt.typename, err)
+-		case strings.Contains(err.Error(), "reflect"):
+-			t.Errorf("%d fail: error message includes reflect internals: %v",
+-				i, err)
+-		default:
+-			t.Logf("%d pass: %v", i, err)
+-		}
+-	}
+-}
+-
+-type confIssue2 struct{ Main struct{ Foo string } }
+-
+-var testsIssue2 = []readtest{
+-	{"[main]\n;\nfoo = bar\n", &confIssue2{struct{ Foo string }{"bar"}}, true},
+-	{"[main]\r\n;\r\nfoo = bar\r\n", &confIssue2{struct{ Foo string }{"bar"}}, true},
+-}
+-
+-func TestIssue2(t *testing.T) {
+-	for i, tt := range testsIssue2 {
+-		id := fmt.Sprintf("issue2:%d", i)
+-		testRead(t, id, tt)
+-	}
+-}
+diff --git a/Godeps/_workspace/src/code.google.com/p/gcfg/read.go b/Godeps/_workspace/src/code.google.com/p/gcfg/read.go
+deleted file mode 100644
+index 4719c2b..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/gcfg/read.go
++++ /dev/null
+@@ -1,181 +0,0 @@
+-package gcfg
+-
+-import (
+-	"fmt"
+-	"io"
+-	"io/ioutil"
+-	"os"
+-	"strings"
+-)
+-
+-import (
+-	"code.google.com/p/gcfg/scanner"
+-	"code.google.com/p/gcfg/token"
+-)
+-
+-var unescape = map[rune]rune{'\\': '\\', '"': '"', 'n': '\n', 't': '\t'}
+-
+-// no error: invalid literals should be caught by scanner
+-func unquote(s string) string {
+-	u, q, esc := make([]rune, 0, len(s)), false, false
+-	for _, c := range s {
+-		if esc {
+-			uc, ok := unescape[c]
+-			switch {
+-			case ok:
+-				u = append(u, uc)
+-				fallthrough
+-			case !q && c == '\n':
+-				esc = false
+-				continue
+-			}
+-			panic("invalid escape sequence")
+-		}
+-		switch c {
+-		case '"':
+-			q = !q
+-		case '\\':
+-			esc = true
+-		default:
+-			u = append(u, c)
+-		}
+-	}
+-	if q {
+-		panic("missing end quote")
+-	}
+-	if esc {
+-		panic("invalid escape sequence")
+-	}
+-	return string(u)
+-}
+-
+-func readInto(config interface{}, fset *token.FileSet, file *token.File, src []byte) error {
+-	var s scanner.Scanner
+-	var errs scanner.ErrorList
+-	s.Init(file, src, func(p token.Position, m string) { errs.Add(p, m) }, 0)
+-	sect, sectsub := "", ""
+-	pos, tok, lit := s.Scan()
+-	errfn := func(msg string) error {
+-		return fmt.Errorf("%s: %s", fset.Position(pos), msg)
+-	}
+-	for {
+-		if errs.Len() > 0 {
+-			return errs.Err()
+-		}
+-		switch tok {
+-		case token.EOF:
+-			return nil
+-		case token.EOL, token.COMMENT:
+-			pos, tok, lit = s.Scan()
+-		case token.LBRACK:
+-			pos, tok, lit = s.Scan()
+-			if errs.Len() > 0 {
+-				return errs.Err()
+-			}
+-			if tok != token.IDENT {
+-				return errfn("expected section name")
+-			}
+-			sect, sectsub = lit, ""
+-			pos, tok, lit = s.Scan()
+-			if errs.Len() > 0 {
+-				return errs.Err()
+-			}
+-			if tok == token.STRING {
+-				sectsub = unquote(lit)
+-				if sectsub == "" {
+-					return errfn("empty subsection name")
+-				}
+-				pos, tok, lit = s.Scan()
+-				if errs.Len() > 0 {
+-					return errs.Err()
+-				}
+-			}
+-			if tok != token.RBRACK {
+-				if sectsub == "" {
+-					return errfn("expected subsection name or right bracket")
+-				}
+-				return errfn("expected right bracket")
+-			}
+-			pos, tok, lit = s.Scan()
+-			if tok != token.EOL && tok != token.EOF && tok != token.COMMENT {
+-				return errfn("expected EOL, EOF, or comment")
+-			}
+-		case token.IDENT:
+-			if sect == "" {
+-				return errfn("expected section header")
+-			}
+-			n := lit
+-			pos, tok, lit = s.Scan()
+-			if errs.Len() > 0 {
+-				return errs.Err()
+-			}
+-			blank, v := tok == token.EOF || tok == token.EOL || tok == token.COMMENT, ""
+-			if !blank {
+-				if tok != token.ASSIGN {
+-					return errfn("expected '='")
+-				}
+-				pos, tok, lit = s.Scan()
+-				if errs.Len() > 0 {
+-					return errs.Err()
+-				}
+-				if tok != token.STRING {
+-					return errfn("expected value")
+-				}
+-				v = unquote(lit)
+-				pos, tok, lit = s.Scan()
+-				if errs.Len() > 0 {
+-					return errs.Err()
+-				}
+-				if tok != token.EOL && tok != token.EOF && tok != token.COMMENT {
+-					return errfn("expected EOL, EOF, or comment")
+-				}
+-			}
+-			err := set(config, sect, sectsub, n, blank, v)
+-			if err != nil {
+-				return err
+-			}
+-		default:
+-			if sect == "" {
+-				return errfn("expected section header")
+-			}
+-			return errfn("expected section header or variable declaration")
+-		}
+-	}
+-	panic("never reached")
+-}
+-
+-// ReadInto reads gcfg formatted data from reader and sets the values into the
+-// corresponding fields in config.
+-func ReadInto(config interface{}, reader io.Reader) error {
+-	src, err := ioutil.ReadAll(reader)
+-	if err != nil {
+-		return err
+-	}
+-	fset := token.NewFileSet()
+-	file := fset.AddFile("", fset.Base(), len(src))
+-	return readInto(config, fset, file, src)
+-}
+-
+-// ReadStringInto reads gcfg formatted data from str and sets the values into
+-// the corresponding fields in config.
+-func ReadStringInto(config interface{}, str string) error {
+-	r := strings.NewReader(str)
+-	return ReadInto(config, r)
+-}
+-
+-// ReadFileInto reads gcfg formatted data from the file filename and sets the
+-// values into the corresponding fields in config.
+-func ReadFileInto(config interface{}, filename string) error {
+-	f, err := os.Open(filename)
+-	if err != nil {
+-		return err
+-	}
+-	defer f.Close()
+-	src, err := ioutil.ReadAll(f)
+-	if err != nil {
+-		return err
+-	}
+-	fset := token.NewFileSet()
+-	file := fset.AddFile(filename, fset.Base(), len(src))
+-	return readInto(config, fset, file, src)
+-}
+diff --git a/Godeps/_workspace/src/code.google.com/p/gcfg/read_test.go b/Godeps/_workspace/src/code.google.com/p/gcfg/read_test.go
+deleted file mode 100644
+index 4a7d8e1..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/gcfg/read_test.go
++++ /dev/null
+@@ -1,333 +0,0 @@
+-package gcfg
+-
+-import (
+-	"fmt"
+-	"math/big"
+-	"os"
+-	"reflect"
+-	"testing"
+-)
+-
+-const (
+-	// 64 spaces
+-	sp64 = "                                                                "
+-	// 512 spaces
+-	sp512 = sp64 + sp64 + sp64 + sp64 + sp64 + sp64 + sp64 + sp64
+-	// 4096 spaces
+-	sp4096 = sp512 + sp512 + sp512 + sp512 + sp512 + sp512 + sp512 + sp512
+-)
+-
+-type cBasic struct {
+-	Section           cBasicS1
+-	Hyphen_In_Section cBasicS2
+-	unexported        cBasicS1
+-	Exported          cBasicS3
+-	TagName           cBasicS1 `gcfg:"tag-name"`
+-}
+-type cBasicS1 struct {
+-	Name  string
+-	Int   int
+-	PName *string
+-}
+-type cBasicS2 struct {
+-	Hyphen_In_Name string
+-}
+-type cBasicS3 struct {
+-	unexported string
+-}
+-
+-type nonMulti []string
+-
+-type unmarshalable string
+-
+-func (u *unmarshalable) UnmarshalText(text []byte) error {
+-	s := string(text)
+-	if s == "error" {
+-		return fmt.Errorf("%s", s)
+-	}
+-	*u = unmarshalable(s)
+-	return nil
+-}
+-
+-var _ textUnmarshaler = new(unmarshalable)
+-
+-type cUni struct {
+-	X甲       cUniS1
+-	XSection cUniS2
+-}
+-type cUniS1 struct {
+-	X乙 string
+-}
+-type cUniS2 struct {
+-	XName string
+-}
+-
+-type cMulti struct {
+-	M1 cMultiS1
+-	M2 cMultiS2
+-	M3 cMultiS3
+-}
+-type cMultiS1 struct{ Multi []string }
+-type cMultiS2 struct{ NonMulti nonMulti }
+-type cMultiS3 struct{ MultiInt []int }
+-
+-type cSubs struct{ Sub map[string]*cSubsS1 }
+-type cSubsS1 struct{ Name string }
+-
+-type cBool struct{ Section cBoolS1 }
+-type cBoolS1 struct{ Bool bool }
+-
+-type cTxUnm struct{ Section cTxUnmS1 }
+-type cTxUnmS1 struct{ Name unmarshalable }
+-
+-type cNum struct {
+-	N1 cNumS1
+-	N2 cNumS2
+-	N3 cNumS3
+-}
+-type cNumS1 struct {
+-	Int    int
+-	IntDHO int `gcfg:",int=dho"`
+-	Big    *big.Int
+-}
+-type cNumS2 struct {
+-	MultiInt []int
+-	MultiBig []*big.Int
+-}
+-type cNumS3 struct{ FileMode os.FileMode }
+-type readtest struct {
+-	gcfg string
+-	exp  interface{}
+-	ok   bool
+-}
+-
+-func newString(s string) *string {
+-	return &s
+-}
+-
+-var readtests = []struct {
+-	group string
+-	tests []readtest
+-}{{"scanning", []readtest{
+-	{"[section]\nname=value", &cBasic{Section: cBasicS1{Name: "value"}}, true},
+-	// hyphen in name
+-	{"[hyphen-in-section]\nhyphen-in-name=value", &cBasic{Hyphen_In_Section: cBasicS2{Hyphen_In_Name: "value"}}, true},
+-	// quoted string value
+-	{"[section]\nname=\"\"", &cBasic{Section: cBasicS1{Name: ""}}, true},
+-	{"[section]\nname=\" \"", &cBasic{Section: cBasicS1{Name: " "}}, true},
+-	{"[section]\nname=\"value\"", &cBasic{Section: cBasicS1{Name: "value"}}, true},
+-	{"[section]\nname=\" value \"", &cBasic{Section: cBasicS1{Name: " value "}}, true},
+-	{"\n[section]\nname=\"va ; lue\"", &cBasic{Section: cBasicS1{Name: "va ; lue"}}, true},
+-	{"[section]\nname=\"val\" \"ue\"", &cBasic{Section: cBasicS1{Name: "val ue"}}, true},
+-	{"[section]\nname=\"value", &cBasic{}, false},
+-	// escape sequences
+-	{"[section]\nname=\"va\\\\lue\"", &cBasic{Section: cBasicS1{Name: "va\\lue"}}, true},
+-	{"[section]\nname=\"va\\\"lue\"", &cBasic{Section: cBasicS1{Name: "va\"lue"}}, true},
+-	{"[section]\nname=\"va\\nlue\"", &cBasic{Section: cBasicS1{Name: "va\nlue"}}, true},
+-	{"[section]\nname=\"va\\tlue\"", &cBasic{Section: cBasicS1{Name: "va\tlue"}}, true},
+-	{"\n[section]\nname=\\", &cBasic{}, false},
+-	{"\n[section]\nname=\\a", &cBasic{}, false},
+-	{"\n[section]\nname=\"val\\a\"", &cBasic{}, false},
+-	{"\n[section]\nname=val\\", &cBasic{}, false},
+-	{"\n[sub \"A\\\n\"]\nname=value", &cSubs{}, false},
+-	{"\n[sub \"A\\\t\"]\nname=value", &cSubs{}, false},
+-	// broken line
+-	{"[section]\nname=value \\\n value", &cBasic{Section: cBasicS1{Name: "value  value"}}, true},
+-	{"[section]\nname=\"value \\\n value\"", &cBasic{}, false},
+-}}, {"scanning:whitespace", []readtest{
+-	{" \n[section]\nname=value", &cBasic{Section: cBasicS1{Name: "value"}}, true},
+-	{" [section]\nname=value", &cBasic{Section: cBasicS1{Name: "value"}}, true},
+-	{"\t[section]\nname=value", &cBasic{Section: cBasicS1{Name: "value"}}, true},
+-	{"[ section]\nname=value", &cBasic{Section: cBasicS1{Name: "value"}}, true},
+-	{"[section ]\nname=value", &cBasic{Section: cBasicS1{Name: "value"}}, true},
+-	{"[section]\n name=value", &cBasic{Section: cBasicS1{Name: "value"}}, true},
+-	{"[section]\nname =value", &cBasic{Section: cBasicS1{Name: "value"}}, true},
+-	{"[section]\nname= value", &cBasic{Section: cBasicS1{Name: "value"}}, true},
+-	{"[section]\nname=value ", &cBasic{Section: cBasicS1{Name: "value"}}, true},
+-	{"[section]\r\nname=value", &cBasic{Section: cBasicS1{Name: "value"}}, true},
+-	{"[section]\r\nname=value\r\n", &cBasic{Section: cBasicS1{Name: "value"}}, true},
+-	{";cmnt\r\n[section]\r\nname=value\r\n", &cBasic{Section: cBasicS1{Name: "value"}}, true},
+-	// long lines
+-	{sp4096 + "[section]\nname=value\n", &cBasic{Section: cBasicS1{Name: "value"}}, true},
+-	{"[" + sp4096 + "section]\nname=value\n", &cBasic{Section: cBasicS1{Name: "value"}}, true},
+-	{"[section" + sp4096 + "]\nname=value\n", &cBasic{Section: cBasicS1{Name: "value"}}, true},
+-	{"[section]" + sp4096 + "\nname=value\n", &cBasic{Section: cBasicS1{Name: "value"}}, true},
+-	{"[section]\n" + sp4096 + "name=value\n", &cBasic{Section: cBasicS1{Name: "value"}}, true},
+-	{"[section]\nname" + sp4096 + "=value\n", &cBasic{Section: cBasicS1{Name: "value"}}, true},
+-	{"[section]\nname=" + sp4096 + "value\n", &cBasic{Section: cBasicS1{Name: "value"}}, true},
+-	{"[section]\nname=value\n" + sp4096, &cBasic{Section: cBasicS1{Name: "value"}}, true},
+-}}, {"scanning:comments", []readtest{
+-	{"; cmnt\n[section]\nname=value", &cBasic{Section: cBasicS1{Name: "value"}}, true},
+-	{"# cmnt\n[section]\nname=value", &cBasic{Section: cBasicS1{Name: "value"}}, true},
+-	{" ; cmnt\n[section]\nname=value", &cBasic{Section: cBasicS1{Name: "value"}}, true},
+-	{"\t; cmnt\n[section]\nname=value", &cBasic{Section: cBasicS1{Name: "value"}}, true},
+-	{"\n[section]; cmnt\nname=value", &cBasic{Section: cBasicS1{Name: "value"}}, true},
+-	{"\n[section] ; cmnt\nname=value", &cBasic{Section: cBasicS1{Name: "value"}}, true},
+-	{"\n[section]\nname=value; cmnt", &cBasic{Section: cBasicS1{Name: "value"}}, true},
+-	{"\n[section]\nname=value ; cmnt", &cBasic{Section: cBasicS1{Name: "value"}}, true},
+-	{"\n[section]\nname=\"value\" ; cmnt", &cBasic{Section: cBasicS1{Name: "value"}}, true},
+-	{"\n[section]\nname=value ; \"cmnt", &cBasic{Section: cBasicS1{Name: "value"}}, true},
+-	{"\n[section]\nname=\"va ; lue\" ; cmnt", &cBasic{Section: cBasicS1{Name: "va ; lue"}}, true},
+-	{"\n[section]\nname=; cmnt", &cBasic{Section: cBasicS1{Name: ""}}, true},
+-}}, {"scanning:subsections", []readtest{
+-	{"\n[sub \"A\"]\nname=value", &cSubs{map[string]*cSubsS1{"A": &cSubsS1{"value"}}}, true},
+-	{"\n[sub \"b\"]\nname=value", &cSubs{map[string]*cSubsS1{"b": &cSubsS1{"value"}}}, true},
+-	{"\n[sub \"A\\\\\"]\nname=value", &cSubs{map[string]*cSubsS1{"A\\": &cSubsS1{"value"}}}, true},
+-	{"\n[sub \"A\\\"\"]\nname=value", &cSubs{map[string]*cSubsS1{"A\"": &cSubsS1{"value"}}}, true},
+-}}, {"syntax", []readtest{
+-	// invalid line
+-	{"\n[section]\n=", &cBasic{}, false},
+-	// no section
+-	{"name=value", &cBasic{}, false},
+-	// empty section
+-	{"\n[]\nname=value", &cBasic{}, false},
+-	// empty subsection
+-	{"\n[sub \"\"]\nname=value", &cSubs{}, false},
+-}}, {"setting", []readtest{
+-	{"[section]\nname=value", &cBasic{Section: cBasicS1{Name: "value"}}, true},
+-	// pointer
+-	{"[section]", &cBasic{Section: cBasicS1{PName: nil}}, true},
+-	{"[section]\npname=value", &cBasic{Section: cBasicS1{PName: newString("value")}}, true},
+-	// section name not matched
+-	{"\n[nonexistent]\nname=value", &cBasic{}, false},
+-	// subsection name not matched
+-	{"\n[section \"nonexistent\"]\nname=value", &cBasic{}, false},
+-	// variable name not matched
+-	{"\n[section]\nnonexistent=value", &cBasic{}, false},
+-	// hyphen in name
+-	{"[hyphen-in-section]\nhyphen-in-name=value", &cBasic{Hyphen_In_Section: cBasicS2{Hyphen_In_Name: "value"}}, true},
+-	// ignore unexported fields
+-	{"[unexported]\nname=value", &cBasic{}, false},
+-	{"[exported]\nunexported=value", &cBasic{}, false},
+-	// 'X' prefix for non-upper/lower-case letters
+-	{"[甲]\n乙=丙", &cUni{X甲: cUniS1{X乙: "丙"}}, true},
+-	//{"[section]\nxname=value", &cBasic{XSection: cBasicS4{XName: "value"}}, false},
+-	//{"[xsection]\nname=value", &cBasic{XSection: cBasicS4{XName: "value"}}, false},
+-	// name specified as struct tag
+-	{"[tag-name]\nname=value", &cBasic{TagName: cBasicS1{Name: "value"}}, true},
+-}}, {"multivalue", []readtest{
+-	// unnamed slice type: treat as multi-value
+-	{"\n[m1]", &cMulti{M1: cMultiS1{}}, true},
+-	{"\n[m1]\nmulti=value", &cMulti{M1: cMultiS1{[]string{"value"}}}, true},
+-	{"\n[m1]\nmulti=value1\nmulti=value2", &cMulti{M1: cMultiS1{[]string{"value1", "value2"}}}, true},
+-	// "blank" empties multi-valued slice -- here same result as above
+-	{"\n[m1]\nmulti\nmulti=value1\nmulti=value2", &cMulti{M1: cMultiS1{[]string{"value1", "value2"}}}, true},
+-	// named slice type: do not treat as multi-value
+-	{"\n[m2]", &cMulti{}, true},
+-	{"\n[m2]\nmulti=value", &cMulti{}, false},
+-	{"\n[m2]\nmulti=value1\nmulti=value2", &cMulti{}, false},
+-}}, {"type:string", []readtest{
+-	{"[section]\nname=value", &cBasic{Section: cBasicS1{Name: "value"}}, true},
+-	{"[section]\nname=", &cBasic{Section: cBasicS1{Name: ""}}, true},
+-}}, {"type:bool", []readtest{
+-	// explicit values
+-	{"[section]\nbool=true", &cBool{cBoolS1{true}}, true},
+-	{"[section]\nbool=yes", &cBool{cBoolS1{true}}, true},
+-	{"[section]\nbool=on", &cBool{cBoolS1{true}}, true},
+-	{"[section]\nbool=1", &cBool{cBoolS1{true}}, true},
+-	{"[section]\nbool=tRuE", &cBool{cBoolS1{true}}, true},
+-	{"[section]\nbool=false", &cBool{cBoolS1{false}}, true},
+-	{"[section]\nbool=no", &cBool{cBoolS1{false}}, true},
+-	{"[section]\nbool=off", &cBool{cBoolS1{false}}, true},
+-	{"[section]\nbool=0", &cBool{cBoolS1{false}}, true},
+-	{"[section]\nbool=NO", &cBool{cBoolS1{false}}, true},
+-	// "blank" value handled as true
+-	{"[section]\nbool", &cBool{cBoolS1{true}}, true},
+-	// bool parse errors
+-	{"[section]\nbool=maybe", &cBool{}, false},
+-	{"[section]\nbool=t", &cBool{}, false},
+-	{"[section]\nbool=truer", &cBool{}, false},
+-	{"[section]\nbool=2", &cBool{}, false},
+-	{"[section]\nbool=-1", &cBool{}, false},
+-}}, {"type:numeric", []readtest{
+-	{"[section]\nint=0", &cBasic{Section: cBasicS1{Int: 0}}, true},
+-	{"[section]\nint=1", &cBasic{Section: cBasicS1{Int: 1}}, true},
+-	{"[section]\nint=-1", &cBasic{Section: cBasicS1{Int: -1}}, true},
+-	{"[section]\nint=0.2", &cBasic{}, false},
+-	{"[section]\nint=1e3", &cBasic{}, false},
+-	// primitive [u]int(|8|16|32|64) and big.Int is parsed as dec or hex (not octal)
+-	{"[n1]\nint=010", &cNum{N1: cNumS1{Int: 10}}, true},
+-	{"[n1]\nint=0x10", &cNum{N1: cNumS1{Int: 0x10}}, true},
+-	{"[n1]\nbig=1", &cNum{N1: cNumS1{Big: big.NewInt(1)}}, true},
+-	{"[n1]\nbig=0x10", &cNum{N1: cNumS1{Big: big.NewInt(0x10)}}, true},
+-	{"[n1]\nbig=010", &cNum{N1: cNumS1{Big: big.NewInt(10)}}, true},
+-	{"[n2]\nmultiint=010", &cNum{N2: cNumS2{MultiInt: []int{10}}}, true},
+-	{"[n2]\nmultibig=010", &cNum{N2: cNumS2{MultiBig: []*big.Int{big.NewInt(10)}}}, true},
+-	// set parse mode for int types via struct tag
+-	{"[n1]\nintdho=010", &cNum{N1: cNumS1{IntDHO: 010}}, true},
+-	// octal allowed for named type
+-	{"[n3]\nfilemode=0777", &cNum{N3: cNumS3{FileMode: 0777}}, true},
+-}}, {"type:textUnmarshaler", []readtest{
+-	{"[section]\nname=value", &cTxUnm{Section: cTxUnmS1{Name: "value"}}, true},
+-	{"[section]\nname=error", &cTxUnm{}, false},
+-}},
+-}
+-
+-func TestReadStringInto(t *testing.T) {
+-	for _, tg := range readtests {
+-		for i, tt := range tg.tests {
+-			id := fmt.Sprintf("%s:%d", tg.group, i)
+-			testRead(t, id, tt)
+-		}
+-	}
+-}
+-
+-func TestReadStringIntoMultiBlankPreset(t *testing.T) {
+-	tt := readtest{"\n[m1]\nmulti\nmulti=value1\nmulti=value2", &cMulti{M1: cMultiS1{[]string{"value1", "value2"}}}, true}
+-	cfg := &cMulti{M1: cMultiS1{[]string{"preset1", "preset2"}}}
+-	testReadInto(t, "multi:blank", tt, cfg)
+-}
+-
+-func testRead(t *testing.T, id string, tt readtest) {
+-	// get the type of the expected result
+-	restyp := reflect.TypeOf(tt.exp).Elem()
+-	// create a new instance to hold the actual result
+-	res := reflect.New(restyp).Interface()
+-	testReadInto(t, id, tt, res)
+-}
+-
+-func testReadInto(t *testing.T, id string, tt readtest, res interface{}) {
+-	err := ReadStringInto(res, tt.gcfg)
+-	if tt.ok {
+-		if err != nil {
+-			t.Errorf("%s fail: got error %v, wanted ok", id, err)
+-			return
+-		} else if !reflect.DeepEqual(res, tt.exp) {
+-			t.Errorf("%s fail: got value %#v, wanted value %#v", id, res, tt.exp)
+-			return
+-		}
+-		if !testing.Short() {
+-			t.Logf("%s pass: got value %#v", id, res)
+-		}
+-	} else { // !tt.ok
+-		if err == nil {
+-			t.Errorf("%s fail: got value %#v, wanted error", id, res)
+-			return
+-		}
+-		if !testing.Short() {
+-			t.Logf("%s pass: got error %v", id, err)
+-		}
+-	}
+-}
+-
+-func TestReadFileInto(t *testing.T) {
+-	res := &struct{ Section struct{ Name string } }{}
+-	err := ReadFileInto(res, "testdata/gcfg_test.gcfg")
+-	if err != nil {
+-		t.Errorf(err.Error())
+-	}
+-	if "value" != res.Section.Name {
+-		t.Errorf("got %q, wanted %q", res.Section.Name, "value")
+-	}
+-}
+-
+-func TestReadFileIntoUnicode(t *testing.T) {
+-	res := &struct{ X甲 struct{ X乙 string } }{}
+-	err := ReadFileInto(res, "testdata/gcfg_unicode_test.gcfg")
+-	if err != nil {
+-		t.Errorf(err.Error())
+-	}
+-	if "丙" != res.X甲.X乙 {
+-		t.Errorf("got %q, wanted %q", res.X甲.X乙, "丙")
+-	}
+-}
+diff --git a/Godeps/_workspace/src/code.google.com/p/gcfg/scanner/errors.go b/Godeps/_workspace/src/code.google.com/p/gcfg/scanner/errors.go
+deleted file mode 100644
+index 4ff920a..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/gcfg/scanner/errors.go
++++ /dev/null
+@@ -1,121 +0,0 @@
+-// Copyright 2009 The Go Authors. All rights reserved.
+-// Use of this source code is governed by a BSD-style
+-// license that can be found in the LICENSE file.
+-
+-package scanner
+-
+-import (
+-	"fmt"
+-	"io"
+-	"sort"
+-)
+-
+-import (
+-	"code.google.com/p/gcfg/token"
+-)
+-
+-// In an ErrorList, an error is represented by an *Error.
+-// The position Pos, if valid, points to the beginning of
+-// the offending token, and the error condition is described
+-// by Msg.
+-//
+-type Error struct {
+-	Pos token.Position
+-	Msg string
+-}
+-
+-// Error implements the error interface.
+-func (e Error) Error() string {
+-	if e.Pos.Filename != "" || e.Pos.IsValid() {
+-		// don't print "<unknown position>"
+-		// TODO(gri) reconsider the semantics of Position.IsValid
+-		return e.Pos.String() + ": " + e.Msg
+-	}
+-	return e.Msg
+-}
+-
+-// ErrorList is a list of *Errors.
+-// The zero value for an ErrorList is an empty ErrorList ready to use.
+-//
+-type ErrorList []*Error
+-
+-// Add adds an Error with given position and error message to an ErrorList.
+-func (p *ErrorList) Add(pos token.Position, msg string) {
+-	*p = append(*p, &Error{pos, msg})
+-}
+-
+-// Reset resets an ErrorList to no errors.
+-func (p *ErrorList) Reset() { *p = (*p)[0:0] }
+-
+-// ErrorList implements the sort Interface.
+-func (p ErrorList) Len() int      { return len(p) }
+-func (p ErrorList) Swap(i, j int) { p[i], p[j] = p[j], p[i] }
+-
+-func (p ErrorList) Less(i, j int) bool {
+-	e := &p[i].Pos
+-	f := &p[j].Pos
+-	if e.Filename < f.Filename {
+-		return true
+-	}
+-	if e.Filename == f.Filename {
+-		return e.Offset < f.Offset
+-	}
+-	return false
+-}
+-
+-// Sort sorts an ErrorList. *Error entries are sorted by position,
+-// other errors are sorted by error message, and before any *Error
+-// entry.
+-//
+-func (p ErrorList) Sort() {
+-	sort.Sort(p)
+-}
+-
+-// RemoveMultiples sorts an ErrorList and removes all but the first error per line.
+-func (p *ErrorList) RemoveMultiples() {
+-	sort.Sort(p)
+-	var last token.Position // initial last.Line is != any legal error line
+-	i := 0
+-	for _, e := range *p {
+-		if e.Pos.Filename != last.Filename || e.Pos.Line != last.Line {
+-			last = e.Pos
+-			(*p)[i] = e
+-			i++
+-		}
+-	}
+-	(*p) = (*p)[0:i]
+-}
+-
+-// An ErrorList implements the error interface.
+-func (p ErrorList) Error() string {
+-	switch len(p) {
+-	case 0:
+-		return "no errors"
+-	case 1:
+-		return p[0].Error()
+-	}
+-	return fmt.Sprintf("%s (and %d more errors)", p[0], len(p)-1)
+-}
+-
+-// Err returns an error equivalent to this error list.
+-// If the list is empty, Err returns nil.
+-func (p ErrorList) Err() error {
+-	if len(p) == 0 {
+-		return nil
+-	}
+-	return p
+-}
+-
+-// PrintError is a utility function that prints a list of errors to w,
+-// one error per line, if the err parameter is an ErrorList. Otherwise
+-// it prints the err string.
+-//
+-func PrintError(w io.Writer, err error) {
+-	if list, ok := err.(ErrorList); ok {
+-		for _, e := range list {
+-			fmt.Fprintf(w, "%s\n", e)
+-		}
+-	} else if err != nil {
+-		fmt.Fprintf(w, "%s\n", err)
+-	}
+-}
+diff --git a/Godeps/_workspace/src/code.google.com/p/gcfg/scanner/example_test.go b/Godeps/_workspace/src/code.google.com/p/gcfg/scanner/example_test.go
+deleted file mode 100644
+index 05eadf5..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/gcfg/scanner/example_test.go
++++ /dev/null
+@@ -1,46 +0,0 @@
+-// Copyright 2012 The Go Authors. All rights reserved.
+-// Use of this source code is governed by a BSD-style
+-// license that can be found in the LICENSE file.
+-
+-package scanner_test
+-
+-import (
+-	"fmt"
+-)
+-
+-import (
+-	"code.google.com/p/gcfg/scanner"
+-	"code.google.com/p/gcfg/token"
+-)
+-
+-func ExampleScanner_Scan() {
+-	// src is the input that we want to tokenize.
+-	src := []byte(`[profile "A"]
+-color = blue ; Comment`)
+-
+-	// Initialize the scanner.
+-	var s scanner.Scanner
+-	fset := token.NewFileSet()                      // positions are relative to fset
+-	file := fset.AddFile("", fset.Base(), len(src)) // register input "file"
+-	s.Init(file, src, nil /* no error handler */, scanner.ScanComments)
+-
+-	// Repeated calls to Scan yield the token sequence found in the input.
+-	for {
+-		pos, tok, lit := s.Scan()
+-		if tok == token.EOF {
+-			break
+-		}
+-		fmt.Printf("%s\t%q\t%q\n", fset.Position(pos), tok, lit)
+-	}
+-
+-	// output:
+-	// 1:1	"["	""
+-	// 1:2	"IDENT"	"profile"
+-	// 1:10	"STRING"	"\"A\""
+-	// 1:13	"]"	""
+-	// 1:14	"\n"	""
+-	// 2:1	"IDENT"	"color"
+-	// 2:7	"="	""
+-	// 2:9	"STRING"	"blue"
+-	// 2:14	"COMMENT"	"; Comment"
+-}
+diff --git a/Godeps/_workspace/src/code.google.com/p/gcfg/scanner/scanner.go b/Godeps/_workspace/src/code.google.com/p/gcfg/scanner/scanner.go
+deleted file mode 100644
+index f65a4f5..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/gcfg/scanner/scanner.go
++++ /dev/null
+@@ -1,342 +0,0 @@
+-// Copyright 2009 The Go Authors. All rights reserved.
+-// Use of this source code is governed by a BSD-style
+-// license that can be found in the LICENSE file.
+-
+-// Package scanner implements a scanner for gcfg configuration text.
+-// It takes a []byte as source which can then be tokenized
+-// through repeated calls to the Scan method.
+-//
+-// Note that the API for the scanner package may change to accommodate new
+-// features or implementation changes in gcfg.
+-//
+-package scanner
+-
+-import (
+-	"fmt"
+-	"path/filepath"
+-	"unicode"
+-	"unicode/utf8"
+-)
+-
+-import (
+-	"code.google.com/p/gcfg/token"
+-)
+-
+-// An ErrorHandler may be provided to Scanner.Init. If a syntax error is
+-// encountered and a handler was installed, the handler is called with a
+-// position and an error message. The position points to the beginning of
+-// the offending token.
+-//
+-type ErrorHandler func(pos token.Position, msg string)
+-
+-// A Scanner holds the scanner's internal state while processing
+-// a given text.  It can be allocated as part of another data
+-// structure but must be initialized via Init before use.
+-//
+-type Scanner struct {
+-	// immutable state
+-	file *token.File  // source file handle
+-	dir  string       // directory portion of file.Name()
+-	src  []byte       // source
+-	err  ErrorHandler // error reporting; or nil
+-	mode Mode         // scanning mode
+-
+-	// scanning state
+-	ch         rune // current character
+-	offset     int  // character offset
+-	rdOffset   int  // reading offset (position after current character)
+-	lineOffset int  // current line offset
+-	nextVal    bool // next token is expected to be a value
+-
+-	// public state - ok to modify
+-	ErrorCount int // number of errors encountered
+-}
+-
+-// Read the next Unicode char into s.ch.
+-// s.ch < 0 means end-of-file.
+-//
+-func (s *Scanner) next() {
+-	if s.rdOffset < len(s.src) {
+-		s.offset = s.rdOffset
+-		if s.ch == '\n' {
+-			s.lineOffset = s.offset
+-			s.file.AddLine(s.offset)
+-		}
+-		r, w := rune(s.src[s.rdOffset]), 1
+-		switch {
+-		case r == 0:
+-			s.error(s.offset, "illegal character NUL")
+-		case r >= 0x80:
+-			// not ASCII
+-			r, w = utf8.DecodeRune(s.src[s.rdOffset:])
+-			if r == utf8.RuneError && w == 1 {
+-				s.error(s.offset, "illegal UTF-8 encoding")
+-			}
+-		}
+-		s.rdOffset += w
+-		s.ch = r
+-	} else {
+-		s.offset = len(s.src)
+-		if s.ch == '\n' {
+-			s.lineOffset = s.offset
+-			s.file.AddLine(s.offset)
+-		}
+-		s.ch = -1 // eof
+-	}
+-}
+-
+-// A mode value is a set of flags (or 0).
+-// They control scanner behavior.
+-//
+-type Mode uint
+-
+-const (
+-	ScanComments Mode = 1 << iota // return comments as COMMENT tokens
+-)
+-
+-// Init prepares the scanner s to tokenize the text src by setting the
+-// scanner at the beginning of src. The scanner uses the file set file
+-// for position information and it adds line information for each line.
+-// It is ok to re-use the same file when re-scanning the same file as
+-// line information which is already present is ignored. Init causes a
+-// panic if the file size does not match the src size.
+-//
+-// Calls to Scan will invoke the error handler err if they encounter a
+-// syntax error and err is not nil. Also, for each error encountered,
+-// the Scanner field ErrorCount is incremented by one. The mode parameter
+-// determines how comments are handled.
+-//
+-// Note that Init may call err if there is an error in the first character
+-// of the file.
+-//
+-func (s *Scanner) Init(file *token.File, src []byte, err ErrorHandler, mode Mode) {
+-	// Explicitly initialize all fields since a scanner may be reused.
+-	if file.Size() != len(src) {
+-		panic(fmt.Sprintf("file size (%d) does not match src len (%d)", file.Size(), len(src)))
+-	}
+-	s.file = file
+-	s.dir, _ = filepath.Split(file.Name())
+-	s.src = src
+-	s.err = err
+-	s.mode = mode
+-
+-	s.ch = ' '
+-	s.offset = 0
+-	s.rdOffset = 0
+-	s.lineOffset = 0
+-	s.ErrorCount = 0
+-	s.nextVal = false
+-
+-	s.next()
+-}
+-
+-func (s *Scanner) error(offs int, msg string) {
+-	if s.err != nil {
+-		s.err(s.file.Position(s.file.Pos(offs)), msg)
+-	}
+-	s.ErrorCount++
+-}
+-
+-func (s *Scanner) scanComment() string {
+-	// initial [;#] already consumed
+-	offs := s.offset - 1 // position of initial [;#]
+-
+-	for s.ch != '\n' && s.ch >= 0 {
+-		s.next()
+-	}
+-	return string(s.src[offs:s.offset])
+-}
+-
+-func isLetter(ch rune) bool {
+-	return 'a' <= ch && ch <= 'z' || 'A' <= ch && ch <= 'Z' || ch >= 0x80 && unicode.IsLetter(ch)
+-}
+-
+-func isDigit(ch rune) bool {
+-	return '0' <= ch && ch <= '9' || ch >= 0x80 && unicode.IsDigit(ch)
+-}
+-
+-func (s *Scanner) scanIdentifier() string {
+-	offs := s.offset
+-	for isLetter(s.ch) || isDigit(s.ch) || s.ch == '-' {
+-		s.next()
+-	}
+-	return string(s.src[offs:s.offset])
+-}
+-
+-func (s *Scanner) scanEscape(val bool) {
+-	offs := s.offset
+-	ch := s.ch
+-	s.next() // always make progress
+-	switch ch {
+-	case '\\', '"':
+-		// ok
+-	case 'n', 't':
+-		if val {
+-			break // ok
+-		}
+-		fallthrough
+-	default:
+-		s.error(offs, "unknown escape sequence")
+-	}
+-}
+-
+-func (s *Scanner) scanString() string {
+-	// '"' opening already consumed
+-	offs := s.offset - 1
+-
+-	for s.ch != '"' {
+-		ch := s.ch
+-		s.next()
+-		if ch == '\n' || ch < 0 {
+-			s.error(offs, "string not terminated")
+-			break
+-		}
+-		if ch == '\\' {
+-			s.scanEscape(false)
+-		}
+-	}
+-
+-	s.next()
+-
+-	return string(s.src[offs:s.offset])
+-}
+-
+-func stripCR(b []byte) []byte {
+-	c := make([]byte, len(b))
+-	i := 0
+-	for _, ch := range b {
+-		if ch != '\r' {
+-			c[i] = ch
+-			i++
+-		}
+-	}
+-	return c[:i]
+-}
+-
+-func (s *Scanner) scanValString() string {
+-	offs := s.offset
+-
+-	hasCR := false
+-	end := offs
+-	inQuote := false
+-loop:
+-	for inQuote || s.ch >= 0 && s.ch != '\n' && s.ch != ';' && s.ch != '#' {
+-		ch := s.ch
+-		s.next()
+-		switch {
+-		case inQuote && ch == '\\':
+-			s.scanEscape(true)
+-		case !inQuote && ch == '\\':
+-			if s.ch == '\r' {
+-				hasCR = true
+-				s.next()
+-			}
+-			if s.ch != '\n' {
+-				s.error(offs, "unquoted '\\' must be followed by new line")
+-				break loop
+-			}
+-			s.next()
+-		case ch == '"':
+-			inQuote = !inQuote
+-		case ch == '\r':
+-			hasCR = true
+-		case ch < 0 || inQuote && ch == '\n':
+-			s.error(offs, "string not terminated")
+-			break loop
+-		}
+-		if inQuote || !isWhiteSpace(ch) {
+-			end = s.offset
+-		}
+-	}
+-
+-	lit := s.src[offs:end]
+-	if hasCR {
+-		lit = stripCR(lit)
+-	}
+-
+-	return string(lit)
+-}
+-
+-func isWhiteSpace(ch rune) bool {
+-	return ch == ' ' || ch == '\t' || ch == '\r'
+-}
+-
+-func (s *Scanner) skipWhitespace() {
+-	for isWhiteSpace(s.ch) {
+-		s.next()
+-	}
+-}
+-
+-// Scan scans the next token and returns the token position, the token,
+-// and its literal string if applicable. The source end is indicated by
+-// token.EOF.
+-//
+-// If the returned token is a literal (token.IDENT, token.STRING) or
+-// token.COMMENT, the literal string has the corresponding value.
+-//
+-// If the returned token is token.ILLEGAL, the literal string is the
+-// offending character.
+-//
+-// In all other cases, Scan returns an empty literal string.
+-//
+-// For more tolerant parsing, Scan will return a valid token if
+-// possible even if a syntax error was encountered. Thus, even
+-// if the resulting token sequence contains no illegal tokens,
+-// a client may not assume that no error occurred. Instead it
+-// must check the scanner's ErrorCount or the number of calls
+-// of the error handler, if there was one installed.
+-//
+-// Scan adds line information to the file added to the file
+-// set with Init. Token positions are relative to that file
+-// and thus relative to the file set.
+-//
+-func (s *Scanner) Scan() (pos token.Pos, tok token.Token, lit string) {
+-scanAgain:
+-	s.skipWhitespace()
+-
+-	// current token start
+-	pos = s.file.Pos(s.offset)
+-
+-	// determine token value
+-	switch ch := s.ch; {
+-	case s.nextVal:
+-		lit = s.scanValString()
+-		tok = token.STRING
+-		s.nextVal = false
+-	case isLetter(ch):
+-		lit = s.scanIdentifier()
+-		tok = token.IDENT
+-	default:
+-		s.next() // always make progress
+-		switch ch {
+-		case -1:
+-			tok = token.EOF
+-		case '\n':
+-			tok = token.EOL
+-		case '"':
+-			tok = token.STRING
+-			lit = s.scanString()
+-		case '[':
+-			tok = token.LBRACK
+-		case ']':
+-			tok = token.RBRACK
+-		case ';', '#':
+-			// comment
+-			lit = s.scanComment()
+-			if s.mode&ScanComments == 0 {
+-				// skip comment
+-				goto scanAgain
+-			}
+-			tok = token.COMMENT
+-		case '=':
+-			tok = token.ASSIGN
+-			s.nextVal = true
+-		default:
+-			s.error(s.file.Offset(pos), fmt.Sprintf("illegal character %#U", ch))
+-			tok = token.ILLEGAL
+-			lit = string(ch)
+-		}
+-	}
+-
+-	return
+-}
+diff --git a/Godeps/_workspace/src/code.google.com/p/gcfg/scanner/scanner_test.go b/Godeps/_workspace/src/code.google.com/p/gcfg/scanner/scanner_test.go
+deleted file mode 100644
+index 33227c1..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/gcfg/scanner/scanner_test.go
++++ /dev/null
+@@ -1,417 +0,0 @@
+-// Copyright 2009 The Go Authors. All rights reserved.
+-// Use of this source code is governed by a BSD-style
+-// license that can be found in the LICENSE file.
+-
+-package scanner
+-
+-import (
+-	"os"
+-	"strings"
+-	"testing"
+-)
+-
+-import (
+-	"code.google.com/p/gcfg/token"
+-)
+-
+-var fset = token.NewFileSet()
+-
+-const /* class */ (
+-	special = iota
+-	literal
+-	operator
+-)
+-
+-func tokenclass(tok token.Token) int {
+-	switch {
+-	case tok.IsLiteral():
+-		return literal
+-	case tok.IsOperator():
+-		return operator
+-	}
+-	return special
+-}
+-
+-type elt struct {
+-	tok   token.Token
+-	lit   string
+-	class int
+-	pre   string
+-	suf   string
+-}
+-
+-var tokens = [...]elt{
+-	// Special tokens
+-	{token.COMMENT, "; a comment", special, "", "\n"},
+-	{token.COMMENT, "# a comment", special, "", "\n"},
+-
+-	// Operators and delimiters
+-	{token.ASSIGN, "=", operator, "", "value"},
+-	{token.LBRACK, "[", operator, "", ""},
+-	{token.RBRACK, "]", operator, "", ""},
+-	{token.EOL, "\n", operator, "", ""},
+-
+-	// Identifiers
+-	{token.IDENT, "foobar", literal, "", ""},
+-	{token.IDENT, "a۰۱۸", literal, "", ""},
+-	{token.IDENT, "foo६४", literal, "", ""},
+-	{token.IDENT, "bar9876", literal, "", ""},
+-	{token.IDENT, "foo-bar", literal, "", ""},
+-	{token.IDENT, "foo", literal, ";\n", ""},
+-	// String literals (subsection names)
+-	{token.STRING, `"foobar"`, literal, "", ""},
+-	{token.STRING, `"\""`, literal, "", ""},
+-	// String literals (values)
+-	{token.STRING, `"\n"`, literal, "=", ""},
+-	{token.STRING, `"foobar"`, literal, "=", ""},
+-	{token.STRING, `"foo\nbar"`, literal, "=", ""},
+-	{token.STRING, `"foo\"bar"`, literal, "=", ""},
+-	{token.STRING, `"foo\\bar"`, literal, "=", ""},
+-	{token.STRING, `"foobar"`, literal, "=", ""},
+-	{token.STRING, `"foobar"`, literal, "= ", ""},
+-	{token.STRING, `"foobar"`, literal, "=", "\n"},
+-	{token.STRING, `"foobar"`, literal, "=", ";"},
+-	{token.STRING, `"foobar"`, literal, "=", " ;"},
+-	{token.STRING, `"foobar"`, literal, "=", "#"},
+-	{token.STRING, `"foobar"`, literal, "=", " #"},
+-	{token.STRING, "foobar", literal, "=", ""},
+-	{token.STRING, "foobar", literal, "= ", ""},
+-	{token.STRING, "foobar", literal, "=", " "},
+-	{token.STRING, `"foo" "bar"`, literal, "=", " "},
+-	{token.STRING, "foo\\\nbar", literal, "=", ""},
+-	{token.STRING, "foo\\\r\nbar", literal, "=", ""},
+-}
+-
+-const whitespace = "  \t  \n\n\n" // to separate tokens
+-
+-var source = func() []byte {
+-	var src []byte
+-	for _, t := range tokens {
+-		src = append(src, t.pre...)
+-		src = append(src, t.lit...)
+-		src = append(src, t.suf...)
+-		src = append(src, whitespace...)
+-	}
+-	return src
+-}()
+-
+-func newlineCount(s string) int {
+-	n := 0
+-	for i := 0; i < len(s); i++ {
+-		if s[i] == '\n' {
+-			n++
+-		}
+-	}
+-	return n
+-}
+-
+-func checkPos(t *testing.T, lit string, p token.Pos, expected token.Position) {
+-	pos := fset.Position(p)
+-	if pos.Filename != expected.Filename {
+-		t.Errorf("bad filename for %q: got %s, expected %s", lit, pos.Filename, expected.Filename)
+-	}
+-	if pos.Offset != expected.Offset {
+-		t.Errorf("bad position for %q: got %d, expected %d", lit, pos.Offset, expected.Offset)
+-	}
+-	if pos.Line != expected.Line {
+-		t.Errorf("bad line for %q: got %d, expected %d", lit, pos.Line, expected.Line)
+-	}
+-	if pos.Column != expected.Column {
+-		t.Errorf("bad column for %q: got %d, expected %d", lit, pos.Column, expected.Column)
+-	}
+-}
+-
+-// Verify that calling Scan() provides the correct results.
+-func TestScan(t *testing.T) {
+-	// make source
+-	src_linecount := newlineCount(string(source))
+-	whitespace_linecount := newlineCount(whitespace)
+-
+-	index := 0
+-
+-	// error handler
+-	eh := func(_ token.Position, msg string) {
+-		t.Errorf("%d: error handler called (msg = %s)", index, msg)
+-	}
+-
+-	// verify scan
+-	var s Scanner
+-	s.Init(fset.AddFile("", fset.Base(), len(source)), source, eh, ScanComments)
+-	// epos is the expected position
+-	epos := token.Position{
+-		Filename: "",
+-		Offset:   0,
+-		Line:     1,
+-		Column:   1,
+-	}
+-	for {
+-		pos, tok, lit := s.Scan()
+-		if lit == "" {
+-			// no literal value for non-literal tokens
+-			lit = tok.String()
+-		}
+-		e := elt{token.EOF, "", special, "", ""}
+-		if index < len(tokens) {
+-			e = tokens[index]
+-		}
+-		if tok == token.EOF {
+-			lit = "<EOF>"
+-			epos.Line = src_linecount
+-			epos.Column = 2
+-		}
+-		if e.pre != "" && strings.ContainsRune("=;#", rune(e.pre[0])) {
+-			epos.Column = 1
+-			checkPos(t, lit, pos, epos)
+-			var etok token.Token
+-			if e.pre[0] == '=' {
+-				etok = token.ASSIGN
+-			} else {
+-				etok = token.COMMENT
+-			}
+-			if tok != etok {
+-				t.Errorf("bad token for %q: got %q, expected %q", lit, tok, etok)
+-			}
+-			pos, tok, lit = s.Scan()
+-		}
+-		epos.Offset += len(e.pre)
+-		if tok != token.EOF {
+-			epos.Column = 1 + len(e.pre)
+-		}
+-		if e.pre != "" && e.pre[len(e.pre)-1] == '\n' {
+-			epos.Offset--
+-			epos.Column--
+-			checkPos(t, lit, pos, epos)
+-			if tok != token.EOL {
+-				t.Errorf("bad token for %q: got %q, expected %q", lit, tok, token.EOL)
+-			}
+-			epos.Line++
+-			epos.Offset++
+-			epos.Column = 1
+-			pos, tok, lit = s.Scan()
+-		}
+-		checkPos(t, lit, pos, epos)
+-		if tok != e.tok {
+-			t.Errorf("bad token for %q: got %q, expected %q", lit, tok, e.tok)
+-		}
+-		if e.tok.IsLiteral() {
+-			// no CRs in value string literals
+-			elit := e.lit
+-			if strings.ContainsRune(e.pre, '=') {
+-				elit = string(stripCR([]byte(elit)))
+-				epos.Offset += len(e.lit) - len(lit) // correct position
+-			}
+-			if lit != elit {
+-				t.Errorf("bad literal for %q: got %q, expected %q", lit, lit, elit)
+-			}
+-		}
+-		if tokenclass(tok) != e.class {
+-			t.Errorf("bad class for %q: got %d, expected %d", lit, tokenclass(tok), e.class)
+-		}
+-		epos.Offset += len(lit) + len(e.suf) + len(whitespace)
+-		epos.Line += newlineCount(lit) + newlineCount(e.suf) + whitespace_linecount
+-		index++
+-		if tok == token.EOF {
+-			break
+-		}
+-		if e.suf == "value" {
+-			pos, tok, lit = s.Scan()
+-			if tok != token.STRING {
+-				t.Errorf("bad token for %q: got %q, expected %q", lit, tok, token.STRING)
+-			}
+-		} else if strings.ContainsRune(e.suf, ';') || strings.ContainsRune(e.suf, '#') {
+-			pos, tok, lit = s.Scan()
+-			if tok != token.COMMENT {
+-				t.Errorf("bad token for %q: got %q, expected %q", lit, tok, token.COMMENT)
+-			}
+-		}
+-		// skip EOLs
+-		for i := 0; i < whitespace_linecount+newlineCount(e.suf); i++ {
+-			pos, tok, lit = s.Scan()
+-			if tok != token.EOL {
+-				t.Errorf("bad token for %q: got %q, expected %q", lit, tok, token.EOL)
+-			}
+-		}
+-	}
+-	if s.ErrorCount != 0 {
+-		t.Errorf("found %d errors", s.ErrorCount)
+-	}
+-}
+-
+-func TestScanValStringEOF(t *testing.T) {
+-	var s Scanner
+-	src := "= value"
+-	f := fset.AddFile("src", fset.Base(), len(src))
+-	s.Init(f, []byte(src), nil, 0)
+-	s.Scan()              // =
+-	s.Scan()              // value
+-	_, tok, _ := s.Scan() // EOF
+-	if tok != token.EOF {
+-		t.Errorf("bad token: got %s, expected %s", tok, token.EOF)
+-	}
+-	if s.ErrorCount > 0 {
+-		t.Error("scanning error")
+-	}
+-}
+-
+-// Verify that initializing the same scanner more than once works correctly.
+-func TestInit(t *testing.T) {
+-	var s Scanner
+-
+-	// 1st init
+-	src1 := "\nname = value"
+-	f1 := fset.AddFile("src1", fset.Base(), len(src1))
+-	s.Init(f1, []byte(src1), nil, 0)
+-	if f1.Size() != len(src1) {
+-		t.Errorf("bad file size: got %d, expected %d", f1.Size(), len(src1))
+-	}
+-	s.Scan()              // \n
+-	s.Scan()              // name
+-	_, tok, _ := s.Scan() // =
+-	if tok != token.ASSIGN {
+-		t.Errorf("bad token: got %s, expected %s", tok, token.ASSIGN)
+-	}
+-
+-	// 2nd init
+-	src2 := "[section]"
+-	f2 := fset.AddFile("src2", fset.Base(), len(src2))
+-	s.Init(f2, []byte(src2), nil, 0)
+-	if f2.Size() != len(src2) {
+-		t.Errorf("bad file size: got %d, expected %d", f2.Size(), len(src2))
+-	}
+-	_, tok, _ = s.Scan() // [
+-	if tok != token.LBRACK {
+-		t.Errorf("bad token: got %s, expected %s", tok, token.LBRACK)
+-	}
+-
+-	if s.ErrorCount != 0 {
+-		t.Errorf("found %d errors", s.ErrorCount)
+-	}
+-}
+-
+-func TestStdErrorHandler(t *testing.T) {
+-	const src = "@\n" + // illegal character, cause an error
+-
+-	var list ErrorList
+-	eh := func(pos token.Position, msg string) { list.Add(pos, msg) }
+-
+-	var s Scanner
+-	s.Init(fset.AddFile("File1", fset.Base(), len(src)), []byte(src), eh, 0)
+-	for {
+-		if _, tok, _ := s.Scan(); tok == token.EOF {
+-			break
+-		}
+-	}
+-
+-	if len(list) != s.ErrorCount {
+-		t.Errorf("found %d errors, expected %d", len(list), s.ErrorCount)
+-	}
+-
+-	if len(list) != 3 {
+-		t.Errorf("found %d raw errors, expected 3", len(list))
+-		PrintError(os.Stderr, list)
+-	}
+-
+-	list.Sort()
+-	if len(list) != 3 {
+-		t.Errorf("found %d sorted errors, expected 3", len(list))
+-		PrintError(os.Stderr, list)
+-	}
+-
+-	list.RemoveMultiples()
+-	if len(list) != 2 {
+-		t.Errorf("found %d one-per-line errors, expected 2", len(list))
+-		PrintError(os.Stderr, list)
+-	}
+-}
+-
+-type errorCollector struct {
+-	cnt int            // number of errors encountered
+-	msg string         // last error message encountered
+-	pos token.Position // last error position encountered
+-}
+-
+-func checkError(t *testing.T, src string, tok token.Token, pos int, err string) {
+-	var s Scanner
+-	var h errorCollector
+-	eh := func(pos token.Position, msg string) {
+-		h.cnt++
+-		h.msg = msg
+-		h.pos = pos
+-	}
+-	s.Init(fset.AddFile("", fset.Base(), len(src)), []byte(src), eh, ScanComments)
+-	if src[0] == '=' {
+-		_, _, _ = s.Scan()
+-	}
+-	_, tok0, _ := s.Scan()
+-	_, tok1, _ := s.Scan()
+-	if tok0 != tok {
+-		t.Errorf("%q: got %s, expected %s", src, tok0, tok)
+-	}
+-	if tok1 != token.EOF {
+-		t.Errorf("%q: got %s, expected EOF", src, tok1)
+-	}
+-	cnt := 0
+-	if err != "" {
+-		cnt = 1
+-	}
+-	if h.cnt != cnt {
+-		t.Errorf("%q: got cnt %d, expected %d", src, h.cnt, cnt)
+-	}
+-	if h.msg != err {
+-		t.Errorf("%q: got msg %q, expected %q", src, h.msg, err)
+-	}
+-	if h.pos.Offset != pos {
+-		t.Errorf("%q: got offset %d, expected %d", src, h.pos.Offset, pos)
+-	}
+-}
+-
+-var errors = []struct {
+-	src string
+-	tok token.Token
+-	pos int
+-	err string
+-}{
+-	{"\a", token.ILLEGAL, 0, "illegal character U+0007"},
+-	{"/", token.ILLEGAL, 0, "illegal character U+002F '/'"},
+-	{"_", token.ILLEGAL, 0, "illegal character U+005F '_'"},
+-	{`…`, token.ILLEGAL, 0, "illegal character U+2026 '…'"},
+-	{`""`, token.STRING, 0, ""},
+-	{`"`, token.STRING, 0, "string not terminated"},
+-	{"\"\n", token.STRING, 0, "string not terminated"},
+-	{`="`, token.STRING, 1, "string not terminated"},
+-	{"=\"\n", token.STRING, 1, "string not terminated"},
+-	{"=\\", token.STRING, 1, "unquoted '\\' must be followed by new line"},
+-	{"=\\\r", token.STRING, 1, "unquoted '\\' must be followed by new line"},
+-	{`"\z"`, token.STRING, 2, "unknown escape sequence"},
+-	{`"\a"`, token.STRING, 2, "unknown escape sequence"},
+-	{`"\b"`, token.STRING, 2, "unknown escape sequence"},
+-	{`"\f"`, token.STRING, 2, "unknown escape sequence"},
+-	{`"\r"`, token.STRING, 2, "unknown escape sequence"},
+-	{`"\t"`, token.STRING, 2, "unknown escape sequence"},
+-	{`"\v"`, token.STRING, 2, "unknown escape sequence"},
+-	{`"\0"`, token.STRING, 2, "unknown escape sequence"},
+-}
+-
+-func TestScanErrors(t *testing.T) {
+-	for _, e := range errors {
+-		checkError(t, e.src, e.tok, e.pos, e.err)
+-	}
+-}
+-
+-func BenchmarkScan(b *testing.B) {
+-	b.StopTimer()
+-	fset := token.NewFileSet()
+-	file := fset.AddFile("", fset.Base(), len(source))
+-	var s Scanner
+-	b.StartTimer()
+-	for i := b.N - 1; i >= 0; i-- {
+-		s.Init(file, source, nil, ScanComments)
+-		for {
+-			_, tok, _ := s.Scan()
+-			if tok == token.EOF {
+-				break
+-			}
+-		}
+-	}
+-}
+diff --git a/Godeps/_workspace/src/code.google.com/p/gcfg/set.go b/Godeps/_workspace/src/code.google.com/p/gcfg/set.go
+deleted file mode 100644
+index 4e15604..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/gcfg/set.go
++++ /dev/null
+@@ -1,281 +0,0 @@
+-package gcfg
+-
+-import (
+-	"fmt"
+-	"math/big"
+-	"reflect"
+-	"strings"
+-	"unicode"
+-	"unicode/utf8"
+-
+-	"code.google.com/p/gcfg/types"
+-)
+-
+-type tag struct {
+-	ident   string
+-	intMode string
+-}
+-
+-func newTag(ts string) tag {
+-	t := tag{}
+-	s := strings.Split(ts, ",")
+-	t.ident = s[0]
+-	for _, tse := range s[1:] {
+-		if strings.HasPrefix(tse, "int=") {
+-			t.intMode = tse[len("int="):]
+-		}
+-	}
+-	return t
+-}
+-
+-func fieldFold(v reflect.Value, name string) (reflect.Value, tag) {
+-	var n string
+-	r0, _ := utf8.DecodeRuneInString(name)
+-	if unicode.IsLetter(r0) && !unicode.IsLower(r0) && !unicode.IsUpper(r0) {
+-		n = "X"
+-	}
+-	n += strings.Replace(name, "-", "_", -1)
+-	f, ok := v.Type().FieldByNameFunc(func(fieldName string) bool {
+-		if !v.FieldByName(fieldName).CanSet() {
+-			return false
+-		}
+-		f, _ := v.Type().FieldByName(fieldName)
+-		t := newTag(f.Tag.Get("gcfg"))
+-		if t.ident != "" {
+-			return strings.EqualFold(t.ident, name)
+-		}
+-		return strings.EqualFold(n, fieldName)
+-	})
+-	if !ok {
+-		return reflect.Value{}, tag{}
+-	}
+-	return v.FieldByName(f.Name), newTag(f.Tag.Get("gcfg"))
+-}
+-
+-type setter func(destp interface{}, blank bool, val string, t tag) error
+-
+-var errUnsupportedType = fmt.Errorf("unsupported type")
+-var errBlankUnsupported = fmt.Errorf("blank value not supported for type")
+-
+-var setters = []setter{
+-	typeSetter, textUnmarshalerSetter, kindSetter, scanSetter,
+-}
+-
+-func textUnmarshalerSetter(d interface{}, blank bool, val string, t tag) error {
+-	dtu, ok := d.(textUnmarshaler)
+-	if !ok {
+-		return errUnsupportedType
+-	}
+-	if blank {
+-		return errBlankUnsupported
+-	}
+-	return dtu.UnmarshalText([]byte(val))
+-}
+-
+-func boolSetter(d interface{}, blank bool, val string, t tag) error {
+-	if blank {
+-		reflect.ValueOf(d).Elem().Set(reflect.ValueOf(true))
+-		return nil
+-	}
+-	b, err := types.ParseBool(val)
+-	if err == nil {
+-		reflect.ValueOf(d).Elem().Set(reflect.ValueOf(b))
+-	}
+-	return err
+-}
+-
+-func intMode(mode string) types.IntMode {
+-	var m types.IntMode
+-	if strings.ContainsAny(mode, "dD") {
+-		m |= types.Dec
+-	}
+-	if strings.ContainsAny(mode, "hH") {
+-		m |= types.Hex
+-	}
+-	if strings.ContainsAny(mode, "oO") {
+-		m |= types.Oct
+-	}
+-	return m
+-}
+-
+-var typeModes = map[reflect.Type]types.IntMode{
+-	reflect.TypeOf(int(0)):    types.Dec | types.Hex,
+-	reflect.TypeOf(int8(0)):   types.Dec | types.Hex,
+-	reflect.TypeOf(int16(0)):  types.Dec | types.Hex,
+-	reflect.TypeOf(int32(0)):  types.Dec | types.Hex,
+-	reflect.TypeOf(int64(0)):  types.Dec | types.Hex,
+-	reflect.TypeOf(uint(0)):   types.Dec | types.Hex,
+-	reflect.TypeOf(uint8(0)):  types.Dec | types.Hex,
+-	reflect.TypeOf(uint16(0)): types.Dec | types.Hex,
+-	reflect.TypeOf(uint32(0)): types.Dec | types.Hex,
+-	reflect.TypeOf(uint64(0)): types.Dec | types.Hex,
+-	// use default mode (allow dec/hex/oct) for uintptr type
+-	reflect.TypeOf(big.Int{}): types.Dec | types.Hex,
+-}
+-
+-func intModeDefault(t reflect.Type) types.IntMode {
+-	m, ok := typeModes[t]
+-	if !ok {
+-		m = types.Dec | types.Hex | types.Oct
+-	}
+-	return m
+-}
+-
+-func intSetter(d interface{}, blank bool, val string, t tag) error {
+-	if blank {
+-		return errBlankUnsupported
+-	}
+-	mode := intMode(t.intMode)
+-	if mode == 0 {
+-		mode = intModeDefault(reflect.TypeOf(d).Elem())
+-	}
+-	return types.ParseInt(d, val, mode)
+-}
+-
+-func stringSetter(d interface{}, blank bool, val string, t tag) error {
+-	if blank {
+-		return errBlankUnsupported
+-	}
+-	dsp, ok := d.(*string)
+-	if !ok {
+-		return errUnsupportedType
+-	}
+-	*dsp = val
+-	return nil
+-}
+-
+-var kindSetters = map[reflect.Kind]setter{
+-	reflect.String:  stringSetter,
+-	reflect.Bool:    boolSetter,
+-	reflect.Int:     intSetter,
+-	reflect.Int8:    intSetter,
+-	reflect.Int16:   intSetter,
+-	reflect.Int32:   intSetter,
+-	reflect.Int64:   intSetter,
+-	reflect.Uint:    intSetter,
+-	reflect.Uint8:   intSetter,
+-	reflect.Uint16:  intSetter,
+-	reflect.Uint32:  intSetter,
+-	reflect.Uint64:  intSetter,
+-	reflect.Uintptr: intSetter,
+-}
+-
+-var typeSetters = map[reflect.Type]setter{
+-	reflect.TypeOf(big.Int{}): intSetter,
+-}
+-
+-func typeSetter(d interface{}, blank bool, val string, tt tag) error {
+-	t := reflect.ValueOf(d).Type().Elem()
+-	setter, ok := typeSetters[t]
+-	if !ok {
+-		return errUnsupportedType
+-	}
+-	return setter(d, blank, val, tt)
+-}
+-
+-func kindSetter(d interface{}, blank bool, val string, tt tag) error {
+-	k := reflect.ValueOf(d).Type().Elem().Kind()
+-	setter, ok := kindSetters[k]
+-	if !ok {
+-		return errUnsupportedType
+-	}
+-	return setter(d, blank, val, tt)
+-}
+-
+-func scanSetter(d interface{}, blank bool, val string, tt tag) error {
+-	if blank {
+-		return errBlankUnsupported
+-	}
+-	return types.ScanFully(d, val, 'v')
+-}
+-
+-func set(cfg interface{}, sect, sub, name string, blank bool, value string) error {
+-	vPCfg := reflect.ValueOf(cfg)
+-	if vPCfg.Kind() != reflect.Ptr || vPCfg.Elem().Kind() != reflect.Struct {
+-		panic(fmt.Errorf("config must be a pointer to a struct"))
+-	}
+-	vCfg := vPCfg.Elem()
+-	vSect, _ := fieldFold(vCfg, sect)
+-	if !vSect.IsValid() {
+-		return fmt.Errorf("invalid section: section %q", sect)
+-	}
+-	if vSect.Kind() == reflect.Map {
+-		vst := vSect.Type()
+-		if vst.Key().Kind() != reflect.String ||
+-			vst.Elem().Kind() != reflect.Ptr ||
+-			vst.Elem().Elem().Kind() != reflect.Struct {
+-			panic(fmt.Errorf("map field for section must have string keys and "+
+-				"pointer-to-struct values: section %q", sect))
+-		}
+-		if vSect.IsNil() {
+-			vSect.Set(reflect.MakeMap(vst))
+-		}
+-		k := reflect.ValueOf(sub)
+-		pv := vSect.MapIndex(k)
+-		if !pv.IsValid() {
+-			vType := vSect.Type().Elem().Elem()
+-			pv = reflect.New(vType)
+-			vSect.SetMapIndex(k, pv)
+-		}
+-		vSect = pv.Elem()
+-	} else if vSect.Kind() != reflect.Struct {
+-		panic(fmt.Errorf("field for section must be a map or a struct: "+
+-			"section %q", sect))
+-	} else if sub != "" {
+-		return fmt.Errorf("invalid subsection: "+
+-			"section %q subsection %q", sect, sub)
+-	}
+-	vVar, t := fieldFold(vSect, name)
+-	if !vVar.IsValid() {
+-		return fmt.Errorf("invalid variable: "+
+-			"section %q subsection %q variable %q", sect, sub, name)
+-	}
+-	// vVal is either single-valued var, or newly allocated value within multi-valued var
+-	var vVal reflect.Value
+-	// multi-value if unnamed slice type
+-	isMulti := vVar.Type().Name() == "" && vVar.Kind() == reflect.Slice
+-	if isMulti && blank {
+-		vVar.Set(reflect.Zero(vVar.Type()))
+-		return nil
+-	}
+-	if isMulti {
+-		vVal = reflect.New(vVar.Type().Elem()).Elem()
+-	} else {
+-		vVal = vVar
+-	}
+-	isDeref := vVal.Type().Name() == "" && vVal.Type().Kind() == reflect.Ptr
+-	isNew := isDeref && vVal.IsNil()
+-	// vAddr is address of value to set (dereferenced & allocated as needed)
+-	var vAddr reflect.Value
+-	switch {
+-	case isNew:
+-		vAddr = reflect.New(vVal.Type().Elem())
+-	case isDeref && !isNew:
+-		vAddr = vVal
+-	default:
+-		vAddr = vVal.Addr()
+-	}
+-	vAddrI := vAddr.Interface()
+-	err, ok := error(nil), false
+-	for _, s := range setters {
+-		err = s(vAddrI, blank, value, t)
+-		if err == nil {
+-			ok = true
+-			break
+-		}
+-		if err != errUnsupportedType {
+-			return err
+-		}
+-	}
+-	if !ok {
+-		// in case all setters returned errUnsupportedType
+-		return err
+-	}
+-	if isNew { // set reference if it was dereferenced and newly allocated
+-		vVal.Set(vAddr)
+-	}
+-	if isMulti { // append if multi-valued
+-		vVar.Set(reflect.Append(vVar, vVal))
+-	}
+-	return nil
+-}
+diff --git a/Godeps/_workspace/src/code.google.com/p/gcfg/testdata/gcfg_test.gcfg b/Godeps/_workspace/src/code.google.com/p/gcfg/testdata/gcfg_test.gcfg
+deleted file mode 100644
+index cddff29..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/gcfg/testdata/gcfg_test.gcfg
++++ /dev/null
+@@ -1,3 +0,0 @@
+-; Comment line
+-[section]
+-name=value # comment
+diff --git a/Godeps/_workspace/src/code.google.com/p/gcfg/testdata/gcfg_unicode_test.gcfg b/Godeps/_workspace/src/code.google.com/p/gcfg/testdata/gcfg_unicode_test.gcfg
+deleted file mode 100644
+index 3762a20..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/gcfg/testdata/gcfg_unicode_test.gcfg
++++ /dev/null
+@@ -1,3 +0,0 @@
+-; Comment line
+-[甲]
+-乙=丙 # comment
+diff --git a/Godeps/_workspace/src/code.google.com/p/gcfg/token/position.go b/Godeps/_workspace/src/code.google.com/p/gcfg/token/position.go
+deleted file mode 100644
+index fc45c1e..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/gcfg/token/position.go
++++ /dev/null
+@@ -1,435 +0,0 @@
+-// Copyright 2010 The Go Authors. All rights reserved.
+-// Use of this source code is governed by a BSD-style
+-// license that can be found in the LICENSE file.
+-
+-// TODO(gri) consider making this a separate package outside the go directory.
+-
+-package token
+-
+-import (
+-	"fmt"
+-	"sort"
+-	"sync"
+-)
+-
+-// -----------------------------------------------------------------------------
+-// Positions
+-
+-// Position describes an arbitrary source position
+-// including the file, line, and column location.
+-// A Position is valid if the line number is > 0.
+-//
+-type Position struct {
+-	Filename string // filename, if any
+-	Offset   int    // offset, starting at 0
+-	Line     int    // line number, starting at 1
+-	Column   int    // column number, starting at 1 (character count)
+-}
+-
+-// IsValid returns true if the position is valid.
+-func (pos *Position) IsValid() bool { return pos.Line > 0 }
+-
+-// String returns a string in one of several forms:
+-//
+-//	file:line:column    valid position with file name
+-//	line:column         valid position without file name
+-//	file                invalid position with file name
+-//	-                   invalid position without file name
+-//
+-func (pos Position) String() string {
+-	s := pos.Filename
+-	if pos.IsValid() {
+-		if s != "" {
+-			s += ":"
+-		}
+-		s += fmt.Sprintf("%d:%d", pos.Line, pos.Column)
+-	}
+-	if s == "" {
+-		s = "-"
+-	}
+-	return s
+-}
+-
+-// Pos is a compact encoding of a source position within a file set.
+-// It can be converted into a Position for a more convenient, but much
+-// larger, representation.
+-//
+-// The Pos value for a given file is a number in the range [base, base+size],
+-// where base and size are specified when adding the file to the file set via
+-// AddFile.
+-//
+-// To create the Pos value for a specific source offset, first add
+-// the respective file to the current file set (via FileSet.AddFile)
+-// and then call File.Pos(offset) for that file. Given a Pos value p
+-// for a specific file set fset, the corresponding Position value is
+-// obtained by calling fset.Position(p).
+-//
+-// Pos values can be compared directly with the usual comparison operators:
+-// If two Pos values p and q are in the same file, comparing p and q is
+-// equivalent to comparing the respective source file offsets. If p and q
+-// are in different files, p < q is true if the file implied by p was added
+-// to the respective file set before the file implied by q.
+-//
+-type Pos int
+-
+-// The zero value for Pos is NoPos; there is no file and line information
+-// associated with it, and NoPos.IsValid() is false. NoPos is always
+-// smaller than any other Pos value. The corresponding Position value
+-// for NoPos is the zero value for Position.
+-//
+-const NoPos Pos = 0
+-
+-// IsValid returns true if the position is valid.
+-func (p Pos) IsValid() bool {
+-	return p != NoPos
+-}
+-
+-// -----------------------------------------------------------------------------
+-// File
+-
+-// A File is a handle for a file belonging to a FileSet.
+-// A File has a name, size, and line offset table.
+-//
+-type File struct {
+-	set  *FileSet
+-	name string // file name as provided to AddFile
+-	base int    // Pos value range for this file is [base...base+size]
+-	size int    // file size as provided to AddFile
+-
+-	// lines and infos are protected by set.mutex
+-	lines []int
+-	infos []lineInfo
+-}
+-
+-// Name returns the file name of file f as registered with AddFile.
+-func (f *File) Name() string {
+-	return f.name
+-}
+-
+-// Base returns the base offset of file f as registered with AddFile.
+-func (f *File) Base() int {
+-	return f.base
+-}
+-
+-// Size returns the size of file f as registered with AddFile.
+-func (f *File) Size() int {
+-	return f.size
+-}
+-
+-// LineCount returns the number of lines in file f.
+-func (f *File) LineCount() int {
+-	f.set.mutex.RLock()
+-	n := len(f.lines)
+-	f.set.mutex.RUnlock()
+-	return n
+-}
+-
+-// AddLine adds the line offset for a new line.
+-// The line offset must be larger than the offset for the previous line
+-// and smaller than the file size; otherwise the line offset is ignored.
+-//
+-func (f *File) AddLine(offset int) {
+-	f.set.mutex.Lock()
+-	if i := len(f.lines); (i == 0 || f.lines[i-1] < offset) && offset < f.size {
+-		f.lines = append(f.lines, offset)
+-	}
+-	f.set.mutex.Unlock()
+-}
+-
+-// SetLines sets the line offsets for a file and returns true if successful.
+-// The line offsets are the offsets of the first character of each line;
+-// for instance for the content "ab\nc\n" the line offsets are {0, 3}.
+-// An empty file has an empty line offset table.
+-// Each line offset must be larger than the offset for the previous line
+-// and smaller than the file size; otherwise SetLines fails and returns
+-// false.
+-//
+-func (f *File) SetLines(lines []int) bool {
+-	// verify validity of lines table
+-	size := f.size
+-	for i, offset := range lines {
+-		if i > 0 && offset <= lines[i-1] || size <= offset {
+-			return false
+-		}
+-	}
+-
+-	// set lines table
+-	f.set.mutex.Lock()
+-	f.lines = lines
+-	f.set.mutex.Unlock()
+-	return true
+-}
+-
+-// SetLinesForContent sets the line offsets for the given file content.
+-func (f *File) SetLinesForContent(content []byte) {
+-	var lines []int
+-	line := 0
+-	for offset, b := range content {
+-		if line >= 0 {
+-			lines = append(lines, line)
+-		}
+-		line = -1
+-		if b == '\n' {
+-			line = offset + 1
+-		}
+-	}
+-
+-	// set lines table
+-	f.set.mutex.Lock()
+-	f.lines = lines
+-	f.set.mutex.Unlock()
+-}
+-
+-// A lineInfo object describes alternative file and line number
+-// information (such as provided via a //line comment in a .go
+-// file) for a given file offset.
+-type lineInfo struct {
+-	// fields are exported to make them accessible to gob
+-	Offset   int
+-	Filename string
+-	Line     int
+-}
+-
+-// AddLineInfo adds alternative file and line number information for
+-// a given file offset. The offset must be larger than the offset for
+-// the previously added alternative line info and smaller than the
+-// file size; otherwise the information is ignored.
+-//
+-// AddLineInfo is typically used to register alternative position
+-// information for //line filename:line comments in source files.
+-//
+-func (f *File) AddLineInfo(offset int, filename string, line int) {
+-	f.set.mutex.Lock()
+-	if i := len(f.infos); i == 0 || f.infos[i-1].Offset < offset && offset < f.size {
+-		f.infos = append(f.infos, lineInfo{offset, filename, line})
+-	}
+-	f.set.mutex.Unlock()
+-}
+-
+-// Pos returns the Pos value for the given file offset;
+-// the offset must be <= f.Size().
+-// f.Pos(f.Offset(p)) == p.
+-//
+-func (f *File) Pos(offset int) Pos {
+-	if offset > f.size {
+-		panic("illegal file offset")
+-	}
+-	return Pos(f.base + offset)
+-}
+-
+-// Offset returns the offset for the given file position p;
+-// p must be a valid Pos value in that file.
+-// f.Offset(f.Pos(offset)) == offset.
+-//
+-func (f *File) Offset(p Pos) int {
+-	if int(p) < f.base || int(p) > f.base+f.size {
+-		panic("illegal Pos value")
+-	}
+-	return int(p) - f.base
+-}
+-
+-// Line returns the line number for the given file position p;
+-// p must be a Pos value in that file or NoPos.
+-//
+-func (f *File) Line(p Pos) int {
+-	// TODO(gri) this can be implemented much more efficiently
+-	return f.Position(p).Line
+-}
+-
+-func searchLineInfos(a []lineInfo, x int) int {
+-	return sort.Search(len(a), func(i int) bool { return a[i].Offset > x }) - 1
+-}
+-
+-// info returns the file name, line, and column number for a file offset.
+-func (f *File) info(offset int) (filename string, line, column int) {
+-	filename = f.name
+-	if i := searchInts(f.lines, offset); i >= 0 {
+-		line, column = i+1, offset-f.lines[i]+1
+-	}
+-	if len(f.infos) > 0 {
+-		// almost no files have extra line infos
+-		if i := searchLineInfos(f.infos, offset); i >= 0 {
+-			alt := &f.infos[i]
+-			filename = alt.Filename
+-			if i := searchInts(f.lines, alt.Offset); i >= 0 {
+-				line += alt.Line - i - 1
+-			}
+-		}
+-	}
+-	return
+-}
+-
+-func (f *File) position(p Pos) (pos Position) {
+-	offset := int(p) - f.base
+-	pos.Offset = offset
+-	pos.Filename, pos.Line, pos.Column = f.info(offset)
+-	return
+-}
+-
+-// Position returns the Position value for the given file position p;
+-// p must be a Pos value in that file or NoPos.
+-//
+-func (f *File) Position(p Pos) (pos Position) {
+-	if p != NoPos {
+-		if int(p) < f.base || int(p) > f.base+f.size {
+-			panic("illegal Pos value")
+-		}
+-		pos = f.position(p)
+-	}
+-	return
+-}
+-
+-// -----------------------------------------------------------------------------
+-// FileSet
+-
+-// A FileSet represents a set of source files.
+-// Methods of file sets are synchronized; multiple goroutines
+-// may invoke them concurrently.
+-//
+-type FileSet struct {
+-	mutex sync.RWMutex // protects the file set
+-	base  int          // base offset for the next file
+-	files []*File      // list of files in the order added to the set
+-	last  *File        // cache of last file looked up
+-}
+-
+-// NewFileSet creates a new file set.
+-func NewFileSet() *FileSet {
+-	s := new(FileSet)
+-	s.base = 1 // 0 == NoPos
+-	return s
+-}
+-
+-// Base returns the minimum base offset that must be provided to
+-// AddFile when adding the next file.
+-//
+-func (s *FileSet) Base() int {
+-	s.mutex.RLock()
+-	b := s.base
+-	s.mutex.RUnlock()
+-	return b
+-
+-}
+-
+-// AddFile adds a new file with a given filename, base offset, and file size
+-// to the file set s and returns the file. Multiple files may have the same
+-// name. The base offset must not be smaller than the FileSet's Base(), and
+-// size must not be negative.
+-//
+-// Adding the file will set the file set's Base() value to base + size + 1
+-// as the minimum base value for the next file. The following relationship
+-// exists between a Pos value p for a given file offset offs:
+-//
+-//	int(p) = base + offs
+-//
+-// with offs in the range [0, size] and thus p in the range [base, base+size].
+-// For convenience, File.Pos may be used to create file-specific position
+-// values from a file offset.
+-//
+-func (s *FileSet) AddFile(filename string, base, size int) *File {
+-	s.mutex.Lock()
+-	defer s.mutex.Unlock()
+-	if base < s.base || size < 0 {
+-		panic("illegal base or size")
+-	}
+-	// base >= s.base && size >= 0
+-	f := &File{s, filename, base, size, []int{0}, nil}
+-	base += size + 1 // +1 because EOF also has a position
+-	if base < 0 {
+-		panic("token.Pos offset overflow (> 2G of source code in file set)")
+-	}
+-	// add the file to the file set
+-	s.base = base
+-	s.files = append(s.files, f)
+-	s.last = f
+-	return f
+-}
+-
+-// Iterate calls f for the files in the file set in the order they were added
+-// until f returns false.
+-//
+-func (s *FileSet) Iterate(f func(*File) bool) {
+-	for i := 0; ; i++ {
+-		var file *File
+-		s.mutex.RLock()
+-		if i < len(s.files) {
+-			file = s.files[i]
+-		}
+-		s.mutex.RUnlock()
+-		if file == nil || !f(file) {
+-			break
+-		}
+-	}
+-}
+-
+-func searchFiles(a []*File, x int) int {
+-	return sort.Search(len(a), func(i int) bool { return a[i].base > x }) - 1
+-}
+-
+-func (s *FileSet) file(p Pos) *File {
+-	// common case: p is in last file
+-	if f := s.last; f != nil && f.base <= int(p) && int(p) <= f.base+f.size {
+-		return f
+-	}
+-	// p is not in last file - search all files
+-	if i := searchFiles(s.files, int(p)); i >= 0 {
+-		f := s.files[i]
+-		// f.base <= int(p) by definition of searchFiles
+-		if int(p) <= f.base+f.size {
+-			s.last = f
+-			return f
+-		}
+-	}
+-	return nil
+-}
+-
+-// File returns the file that contains the position p.
+-// If no such file is found (for instance for p == NoPos),
+-// the result is nil.
+-//
+-func (s *FileSet) File(p Pos) (f *File) {
+-	if p != NoPos {
+-		s.mutex.RLock()
+-		f = s.file(p)
+-		s.mutex.RUnlock()
+-	}
+-	return
+-}
+-
+-// Position converts a Pos in the fileset into a general Position.
+-func (s *FileSet) Position(p Pos) (pos Position) {
+-	if p != NoPos {
+-		s.mutex.RLock()
+-		if f := s.file(p); f != nil {
+-			pos = f.position(p)
+-		}
+-		s.mutex.RUnlock()
+-	}
+-	return
+-}
+-
+-// -----------------------------------------------------------------------------
+-// Helper functions
+-
+-func searchInts(a []int, x int) int {
+-	// This function body is a manually inlined version of:
+-	//
+-	//   return sort.Search(len(a), func(i int) bool { return a[i] > x }) - 1
+-	//
+-	// With better compiler optimizations, this may not be needed in the
+-	// future, but at the moment this change improves the go/printer
+-	// benchmark performance by ~30%. This has a direct impact on the
+-	// speed of gofmt and thus seems worthwhile (2011-04-29).
+-	// TODO(gri): Remove this when compilers have caught up.
+-	i, j := 0, len(a)
+-	for i < j {
+-		h := i + (j-i)/2 // avoid overflow when computing h
+-		// i ≤ h < j
+-		if a[h] <= x {
+-			i = h + 1
+-		} else {
+-			j = h
+-		}
+-	}
+-	return i - 1
+-}
+diff --git a/Godeps/_workspace/src/code.google.com/p/gcfg/token/position_test.go b/Godeps/_workspace/src/code.google.com/p/gcfg/token/position_test.go
+deleted file mode 100644
+index 160107d..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/gcfg/token/position_test.go
++++ /dev/null
+@@ -1,181 +0,0 @@
+-// Copyright 2010 The Go Authors. All rights reserved.
+-// Use of this source code is governed by a BSD-style
+-// license that can be found in the LICENSE file.
+-
+-package token
+-
+-import (
+-	"fmt"
+-	"testing"
+-)
+-
+-func checkPos(t *testing.T, msg string, p, q Position) {
+-	if p.Filename != q.Filename {
+-		t.Errorf("%s: expected filename = %q; got %q", msg, q.Filename, p.Filename)
+-	}
+-	if p.Offset != q.Offset {
+-		t.Errorf("%s: expected offset = %d; got %d", msg, q.Offset, p.Offset)
+-	}
+-	if p.Line != q.Line {
+-		t.Errorf("%s: expected line = %d; got %d", msg, q.Line, p.Line)
+-	}
+-	if p.Column != q.Column {
+-		t.Errorf("%s: expected column = %d; got %d", msg, q.Column, p.Column)
+-	}
+-}
+-
+-func TestNoPos(t *testing.T) {
+-	if NoPos.IsValid() {
+-		t.Errorf("NoPos should not be valid")
+-	}
+-	var fset *FileSet
+-	checkPos(t, "nil NoPos", fset.Position(NoPos), Position{})
+-	fset = NewFileSet()
+-	checkPos(t, "fset NoPos", fset.Position(NoPos), Position{})
+-}
+-
+-var tests = []struct {
+-	filename string
+-	source   []byte // may be nil
+-	size     int
+-	lines    []int
+-}{
+-	{"a", []byte{}, 0, []int{}},
+-	{"b", []byte("01234"), 5, []int{0}},
+-	{"c", []byte("\n\n\n\n\n\n\n\n\n"), 9, []int{0, 1, 2, 3, 4, 5, 6, 7, 8}},
+-	{"d", nil, 100, []int{0, 5, 10, 20, 30, 70, 71, 72, 80, 85, 90, 99}},
+-	{"e", nil, 777, []int{0, 80, 100, 120, 130, 180, 267, 455, 500, 567, 620}},
+-	{"f", []byte("package p\n\nimport \"fmt\""), 23, []int{0, 10, 11}},
+-	{"g", []byte("package p\n\nimport \"fmt\"\n"), 24, []int{0, 10, 11}},
+-	{"h", []byte("package p\n\nimport \"fmt\"\n "), 25, []int{0, 10, 11, 24}},
+-}
+-
+-func linecol(lines []int, offs int) (int, int) {
+-	prevLineOffs := 0
+-	for line, lineOffs := range lines {
+-		if offs < lineOffs {
+-			return line, offs - prevLineOffs + 1
+-		}
+-		prevLineOffs = lineOffs
+-	}
+-	return len(lines), offs - prevLineOffs + 1
+-}
+-
+-func verifyPositions(t *testing.T, fset *FileSet, f *File, lines []int) {
+-	for offs := 0; offs < f.Size(); offs++ {
+-		p := f.Pos(offs)
+-		offs2 := f.Offset(p)
+-		if offs2 != offs {
+-			t.Errorf("%s, Offset: expected offset %d; got %d", f.Name(), offs, offs2)
+-		}
+-		line, col := linecol(lines, offs)
+-		msg := fmt.Sprintf("%s (offs = %d, p = %d)", f.Name(), offs, p)
+-		checkPos(t, msg, f.Position(f.Pos(offs)), Position{f.Name(), offs, line, col})
+-		checkPos(t, msg, fset.Position(p), Position{f.Name(), offs, line, col})
+-	}
+-}
+-
+-func makeTestSource(size int, lines []int) []byte {
+-	src := make([]byte, size)
+-	for _, offs := range lines {
+-		if offs > 0 {
+-			src[offs-1] = '\n'
+-		}
+-	}
+-	return src
+-}
+-
+-func TestPositions(t *testing.T) {
+-	const delta = 7 // a non-zero base offset increment
+-	fset := NewFileSet()
+-	for _, test := range tests {
+-		// verify consistency of test case
+-		if test.source != nil && len(test.source) != test.size {
+-			t.Errorf("%s: inconsistent test case: expected file size %d; got %d", test.filename, test.size, len(test.source))
+-		}
+-
+-		// add file and verify name and size
+-		f := fset.AddFile(test.filename, fset.Base()+delta, test.size)
+-		if f.Name() != test.filename {
+-			t.Errorf("expected filename %q; got %q", test.filename, f.Name())
+-		}
+-		if f.Size() != test.size {
+-			t.Errorf("%s: expected file size %d; got %d", f.Name(), test.size, f.Size())
+-		}
+-		if fset.File(f.Pos(0)) != f {
+-			t.Errorf("%s: f.Pos(0) was not found in f", f.Name())
+-		}
+-
+-		// add lines individually and verify all positions
+-		for i, offset := range test.lines {
+-			f.AddLine(offset)
+-			if f.LineCount() != i+1 {
+-				t.Errorf("%s, AddLine: expected line count %d; got %d", f.Name(), i+1, f.LineCount())
+-			}
+-			// adding the same offset again should be ignored
+-			f.AddLine(offset)
+-			if f.LineCount() != i+1 {
+-				t.Errorf("%s, AddLine: expected unchanged line count %d; got %d", f.Name(), i+1, f.LineCount())
+-			}
+-			verifyPositions(t, fset, f, test.lines[0:i+1])
+-		}
+-
+-		// add lines with SetLines and verify all positions
+-		if ok := f.SetLines(test.lines); !ok {
+-			t.Errorf("%s: SetLines failed", f.Name())
+-		}
+-		if f.LineCount() != len(test.lines) {
+-			t.Errorf("%s, SetLines: expected line count %d; got %d", f.Name(), len(test.lines), f.LineCount())
+-		}
+-		verifyPositions(t, fset, f, test.lines)
+-
+-		// add lines with SetLinesForContent and verify all positions
+-		src := test.source
+-		if src == nil {
+-			// no test source available - create one from scratch
+-			src = makeTestSource(test.size, test.lines)
+-		}
+-		f.SetLinesForContent(src)
+-		if f.LineCount() != len(test.lines) {
+-			t.Errorf("%s, SetLinesForContent: expected line count %d; got %d", f.Name(), len(test.lines), f.LineCount())
+-		}
+-		verifyPositions(t, fset, f, test.lines)
+-	}
+-}
+-
+-func TestLineInfo(t *testing.T) {
+-	fset := NewFileSet()
+-	f := fset.AddFile("foo", fset.Base(), 500)
+-	lines := []int{0, 42, 77, 100, 210, 220, 277, 300, 333, 401}
+-	// add lines individually and provide alternative line information
+-	for _, offs := range lines {
+-		f.AddLine(offs)
+-		f.AddLineInfo(offs, "bar", 42)
+-	}
+-	// verify positions for all offsets
+-	for offs := 0; offs <= f.Size(); offs++ {
+-		p := f.Pos(offs)
+-		_, col := linecol(lines, offs)
+-		msg := fmt.Sprintf("%s (offs = %d, p = %d)", f.Name(), offs, p)
+-		checkPos(t, msg, f.Position(f.Pos(offs)), Position{"bar", offs, 42, col})
+-		checkPos(t, msg, fset.Position(p), Position{"bar", offs, 42, col})
+-	}
+-}
+-
+-func TestFiles(t *testing.T) {
+-	fset := NewFileSet()
+-	for i, test := range tests {
+-		fset.AddFile(test.filename, fset.Base(), test.size)
+-		j := 0
+-		fset.Iterate(func(f *File) bool {
+-			if f.Name() != tests[j].filename {
+-				t.Errorf("expected filename = %s; got %s", tests[j].filename, f.Name())
+-			}
+-			j++
+-			return true
+-		})
+-		if j != i+1 {
+-			t.Errorf("expected %d files; got %d", i+1, j)
+-		}
+-	}
+-}
+diff --git a/Godeps/_workspace/src/code.google.com/p/gcfg/token/serialize.go b/Godeps/_workspace/src/code.google.com/p/gcfg/token/serialize.go
+deleted file mode 100644
+index 4adc8f9..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/gcfg/token/serialize.go
++++ /dev/null
+@@ -1,56 +0,0 @@
+-// Copyright 2011 The Go Authors. All rights reserved.
+-// Use of this source code is governed by a BSD-style
+-// license that can be found in the LICENSE file.
+-
+-package token
+-
+-type serializedFile struct {
+-	// fields correspond 1:1 to fields with same (lower-case) name in File
+-	Name  string
+-	Base  int
+-	Size  int
+-	Lines []int
+-	Infos []lineInfo
+-}
+-
+-type serializedFileSet struct {
+-	Base  int
+-	Files []serializedFile
+-}
+-
+-// Read calls decode to deserialize a file set into s; s must not be nil.
+-func (s *FileSet) Read(decode func(interface{}) error) error {
+-	var ss serializedFileSet
+-	if err := decode(&ss); err != nil {
+-		return err
+-	}
+-
+-	s.mutex.Lock()
+-	s.base = ss.Base
+-	files := make([]*File, len(ss.Files))
+-	for i := 0; i < len(ss.Files); i++ {
+-		f := &ss.Files[i]
+-		files[i] = &File{s, f.Name, f.Base, f.Size, f.Lines, f.Infos}
+-	}
+-	s.files = files
+-	s.last = nil
+-	s.mutex.Unlock()
+-
+-	return nil
+-}
+-
+-// Write calls encode to serialize the file set s.
+-func (s *FileSet) Write(encode func(interface{}) error) error {
+-	var ss serializedFileSet
+-
+-	s.mutex.Lock()
+-	ss.Base = s.base
+-	files := make([]serializedFile, len(s.files))
+-	for i, f := range s.files {
+-		files[i] = serializedFile{f.name, f.base, f.size, f.lines, f.infos}
+-	}
+-	ss.Files = files
+-	s.mutex.Unlock()
+-
+-	return encode(ss)
+-}
+diff --git a/Godeps/_workspace/src/code.google.com/p/gcfg/token/serialize_test.go b/Godeps/_workspace/src/code.google.com/p/gcfg/token/serialize_test.go
+deleted file mode 100644
+index 4e925ad..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/gcfg/token/serialize_test.go
++++ /dev/null
+@@ -1,111 +0,0 @@
+-// Copyright 2011 The Go Authors. All rights reserved.
+-// Use of this source code is governed by a BSD-style
+-// license that can be found in the LICENSE file.
+-
+-package token
+-
+-import (
+-	"bytes"
+-	"encoding/gob"
+-	"fmt"
+-	"testing"
+-)
+-
+-// equal returns nil if p and q describe the same file set;
+-// otherwise it returns an error describing the discrepancy.
+-func equal(p, q *FileSet) error {
+-	if p == q {
+-		// avoid deadlock if p == q
+-		return nil
+-	}
+-
+-	// not strictly needed for the test
+-	p.mutex.Lock()
+-	q.mutex.Lock()
+-	defer q.mutex.Unlock()
+-	defer p.mutex.Unlock()
+-
+-	if p.base != q.base {
+-		return fmt.Errorf("different bases: %d != %d", p.base, q.base)
+-	}
+-
+-	if len(p.files) != len(q.files) {
+-		return fmt.Errorf("different number of files: %d != %d", len(p.files), len(q.files))
+-	}
+-
+-	for i, f := range p.files {
+-		g := q.files[i]
+-		if f.set != p {
+-			return fmt.Errorf("wrong fileset for %q", f.name)
+-		}
+-		if g.set != q {
+-			return fmt.Errorf("wrong fileset for %q", g.name)
+-		}
+-		if f.name != g.name {
+-			return fmt.Errorf("different filenames: %q != %q", f.name, g.name)
+-		}
+-		if f.base != g.base {
+-			return fmt.Errorf("different base for %q: %d != %d", f.name, f.base, g.base)
+-		}
+-		if f.size != g.size {
+-			return fmt.Errorf("different size for %q: %d != %d", f.name, f.size, g.size)
+-		}
+-		for j, l := range f.lines {
+-			m := g.lines[j]
+-			if l != m {
+-				return fmt.Errorf("different offsets for %q", f.name)
+-			}
+-		}
+-		for j, l := range f.infos {
+-			m := g.infos[j]
+-			if l.Offset != m.Offset || l.Filename != m.Filename || l.Line != m.Line {
+-				return fmt.Errorf("different infos for %q", f.name)
+-			}
+-		}
+-	}
+-
+-	// we don't care about .last - it's just a cache
+-	return nil
+-}
+-
+-func checkSerialize(t *testing.T, p *FileSet) {
+-	var buf bytes.Buffer
+-	encode := func(x interface{}) error {
+-		return gob.NewEncoder(&buf).Encode(x)
+-	}
+-	if err := p.Write(encode); err != nil {
+-		t.Errorf("writing fileset failed: %s", err)
+-		return
+-	}
+-	q := NewFileSet()
+-	decode := func(x interface{}) error {
+-		return gob.NewDecoder(&buf).Decode(x)
+-	}
+-	if err := q.Read(decode); err != nil {
+-		t.Errorf("reading fileset failed: %s", err)
+-		return
+-	}
+-	if err := equal(p, q); err != nil {
+-		t.Errorf("filesets not identical: %s", err)
+-	}
+-}
+-
+-func TestSerialization(t *testing.T) {
+-	p := NewFileSet()
+-	checkSerialize(t, p)
+-	// add some files
+-	for i := 0; i < 10; i++ {
+-		f := p.AddFile(fmt.Sprintf("file%d", i), p.Base()+i, i*100)
+-		checkSerialize(t, p)
+-		// add some lines and alternative file infos
+-		line := 1000
+-		for offs := 0; offs < f.Size(); offs += 40 + i {
+-			f.AddLine(offs)
+-			if offs%7 == 0 {
+-				f.AddLineInfo(offs, fmt.Sprintf("file%d", offs), line)
+-				line += 33
+-			}
+-		}
+-		checkSerialize(t, p)
+-	}
+-}
+diff --git a/Godeps/_workspace/src/code.google.com/p/gcfg/token/token.go b/Godeps/_workspace/src/code.google.com/p/gcfg/token/token.go
+deleted file mode 100644
+index b3c7c83..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/gcfg/token/token.go
++++ /dev/null
+@@ -1,83 +0,0 @@
+-// Copyright 2009 The Go Authors. All rights reserved.
+-// Use of this source code is governed by a BSD-style
+-// license that can be found in the LICENSE file.
+-
+-// Package token defines constants representing the lexical tokens of the gcfg
+-// configuration syntax and basic operations on tokens (printing, predicates).
+-//
+-// Note that the API for the token package may change to accommodate new
+-// features or implementation changes in gcfg.
+-//
+-package token
+-
+-import "strconv"
+-
+-// Token is the set of lexical tokens of the gcfg configuration syntax.
+-type Token int
+-
+-// The list of tokens.
+-const (
+-	// Special tokens
+-	ILLEGAL Token = iota
+-	EOF
+-	COMMENT
+-
+-	literal_beg
+-	// Identifiers and basic type literals
+-	// (these tokens stand for classes of literals)
+-	IDENT  // section-name, variable-name
+-	STRING // "subsection-name", variable value
+-	literal_end
+-
+-	operator_beg
+-	// Operators and delimiters
+-	ASSIGN // =
+-	LBRACK // [
+-	RBRACK // ]
+-	EOL    // \n
+-	operator_end
+-)
+-
+-var tokens = [...]string{
+-	ILLEGAL: "ILLEGAL",
+-
+-	EOF:     "EOF",
+-	COMMENT: "COMMENT",
+-
+-	IDENT:  "IDENT",
+-	STRING: "STRING",
+-
+-	ASSIGN: "=",
+-	LBRACK: "[",
+-	RBRACK: "]",
+-	EOL:    "\n",
+-}
+-
+-// String returns the string corresponding to the token tok.
+-// For operators and delimiters, the string is the actual token character
+-// sequence (e.g., for the token ASSIGN, the string is "="). For all other
+-// tokens the string corresponds to the token constant name (e.g. for the
+-// token IDENT, the string is "IDENT").
+-//
+-func (tok Token) String() string {
+-	s := ""
+-	if 0 <= tok && tok < Token(len(tokens)) {
+-		s = tokens[tok]
+-	}
+-	if s == "" {
+-		s = "token(" + strconv.Itoa(int(tok)) + ")"
+-	}
+-	return s
+-}
+-
+-// Predicates
+-
+-// IsLiteral returns true for tokens corresponding to identifiers
+-// and basic type literals; it returns false otherwise.
+-//
+-func (tok Token) IsLiteral() bool { return literal_beg < tok && tok < literal_end }
+-
+-// IsOperator returns true for tokens corresponding to operators and
+-// delimiters; it returns false otherwise.
+-//
+-func (tok Token) IsOperator() bool { return operator_beg < tok && tok < operator_end }
+diff --git a/Godeps/_workspace/src/code.google.com/p/gcfg/types/bool.go b/Godeps/_workspace/src/code.google.com/p/gcfg/types/bool.go
+deleted file mode 100644
+index 8dcae0d..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/gcfg/types/bool.go
++++ /dev/null
+@@ -1,23 +0,0 @@
+-package types
+-
+-// BoolValues defines the name and value mappings for ParseBool.
+-var BoolValues = map[string]interface{}{
+-	"true": true, "yes": true, "on": true, "1": true,
+-	"false": false, "no": false, "off": false, "0": false,
+-}
+-
+-var boolParser = func() *EnumParser {
+-	ep := &EnumParser{}
+-	ep.AddVals(BoolValues)
+-	return ep
+-}()
+-
+-// ParseBool parses bool values according to the definitions in BoolValues.
+-// Parsing is case-insensitive.
+-func ParseBool(s string) (bool, error) {
+-	v, err := boolParser.Parse(s)
+-	if err != nil {
+-		return false, err
+-	}
+-	return v.(bool), nil
+-}
+diff --git a/Godeps/_workspace/src/code.google.com/p/gcfg/types/doc.go b/Godeps/_workspace/src/code.google.com/p/gcfg/types/doc.go
+deleted file mode 100644
+index 9f9c345..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/gcfg/types/doc.go
++++ /dev/null
+@@ -1,4 +0,0 @@
+-// Package types defines helpers for type conversions.
+-//
+-// The API for this package is not finalized yet.
+-package types
+diff --git a/Godeps/_workspace/src/code.google.com/p/gcfg/types/enum.go b/Godeps/_workspace/src/code.google.com/p/gcfg/types/enum.go
+deleted file mode 100644
+index 1a0c7ef..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/gcfg/types/enum.go
++++ /dev/null
+@@ -1,44 +0,0 @@
+-package types
+-
+-import (
+-	"fmt"
+-	"reflect"
+-	"strings"
+-)
+-
+-// EnumParser parses "enum" values; i.e. a predefined set of strings to
+-// predefined values.
+-type EnumParser struct {
+-	Type      string // type name; if not set, use type of first value added
+-	CaseMatch bool   // if true, matching of strings is case-sensitive
+-	// PrefixMatch bool
+-	vals map[string]interface{}
+-}
+-
+-// AddVals adds strings and values to an EnumParser.
+-func (ep *EnumParser) AddVals(vals map[string]interface{}) {
+-	if ep.vals == nil {
+-		ep.vals = make(map[string]interface{})
+-	}
+-	for k, v := range vals {
+-		if ep.Type == "" {
+-			ep.Type = reflect.TypeOf(v).Name()
+-		}
+-		if !ep.CaseMatch {
+-			k = strings.ToLower(k)
+-		}
+-		ep.vals[k] = v
+-	}
+-}
+-
+-// Parse parses the string and returns the value or an error.
+-func (ep EnumParser) Parse(s string) (interface{}, error) {
+-	if !ep.CaseMatch {
+-		s = strings.ToLower(s)
+-	}
+-	v, ok := ep.vals[s]
+-	if !ok {
+-		return false, fmt.Errorf("failed to parse %s %#q", ep.Type, s)
+-	}
+-	return v, nil
+-}
+diff --git a/Godeps/_workspace/src/code.google.com/p/gcfg/types/enum_test.go b/Godeps/_workspace/src/code.google.com/p/gcfg/types/enum_test.go
+deleted file mode 100644
+index 4bf135e..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/gcfg/types/enum_test.go
++++ /dev/null
+@@ -1,29 +0,0 @@
+-package types
+-
+-import (
+-	"testing"
+-)
+-
+-func TestEnumParserBool(t *testing.T) {
+-	for _, tt := range []struct {
+-		val string
+-		res bool
+-		ok  bool
+-	}{
+-		{val: "tRuE", res: true, ok: true},
+-		{val: "False", res: false, ok: true},
+-		{val: "t", ok: false},
+-	} {
+-		b, err := ParseBool(tt.val)
+-		switch {
+-		case tt.ok && err != nil:
+-			t.Errorf("%q: got error %v, want %v", tt.val, err, tt.res)
+-		case !tt.ok && err == nil:
+-			t.Errorf("%q: got %v, want error", tt.val, b)
+-		case tt.ok && b != tt.res:
+-			t.Errorf("%q: got %v, want %v", tt.val, b, tt.res)
+-		default:
+-			t.Logf("%q: got %v, %v", tt.val, b, err)
+-		}
+-	}
+-}
+diff --git a/Godeps/_workspace/src/code.google.com/p/gcfg/types/int.go b/Godeps/_workspace/src/code.google.com/p/gcfg/types/int.go
+deleted file mode 100644
+index af7e75c..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/gcfg/types/int.go
++++ /dev/null
+@@ -1,86 +0,0 @@
+-package types
+-
+-import (
+-	"fmt"
+-	"strings"
+-)
+-
+-// An IntMode is a mode for parsing integer values, representing a set of
+-// accepted bases.
+-type IntMode uint8
+-
+-// IntMode values for ParseInt; can be combined using binary or.
+-const (
+-	Dec IntMode = 1 << iota
+-	Hex
+-	Oct
+-)
+-
+-// String returns a string representation of IntMode; e.g. `IntMode(Dec|Hex)`.
+-func (m IntMode) String() string {
+-	var modes []string
+-	if m&Dec != 0 {
+-		modes = append(modes, "Dec")
+-	}
+-	if m&Hex != 0 {
+-		modes = append(modes, "Hex")
+-	}
+-	if m&Oct != 0 {
+-		modes = append(modes, "Oct")
+-	}
+-	return "IntMode(" + strings.Join(modes, "|") + ")"
+-}
+-
+-var errIntAmbig = fmt.Errorf("ambiguous integer value; must include '0' prefix")
+-
+-func prefix0(val string) bool {
+-	return strings.HasPrefix(val, "0") || strings.HasPrefix(val, "-0")
+-}
+-
+-func prefix0x(val string) bool {
+-	return strings.HasPrefix(val, "0x") || strings.HasPrefix(val, "-0x")
+-}
+-
+-// ParseInt parses val using mode into intptr, which must be a pointer to an
+-// integer kind type. Non-decimal value require prefix `0` or `0x` in the cases
+-// when mode permits ambiguity of base; otherwise the prefix can be omitted.
+-func ParseInt(intptr interface{}, val string, mode IntMode) error {
+-	val = strings.TrimSpace(val)
+-	verb := byte(0)
+-	switch mode {
+-	case Dec:
+-		verb = 'd'
+-	case Dec + Hex:
+-		if prefix0x(val) {
+-			verb = 'v'
+-		} else {
+-			verb = 'd'
+-		}
+-	case Dec + Oct:
+-		if prefix0(val) && !prefix0x(val) {
+-			verb = 'v'
+-		} else {
+-			verb = 'd'
+-		}
+-	case Dec + Hex + Oct:
+-		verb = 'v'
+-	case Hex:
+-		if prefix0x(val) {
+-			verb = 'v'
+-		} else {
+-			verb = 'x'
+-		}
+-	case Oct:
+-		verb = 'o'
+-	case Hex + Oct:
+-		if prefix0(val) {
+-			verb = 'v'
+-		} else {
+-			return errIntAmbig
+-		}
+-	}
+-	if verb == 0 {
+-		panic("unsupported mode")
+-	}
+-	return ScanFully(intptr, val, verb)
+-}
+diff --git a/Godeps/_workspace/src/code.google.com/p/gcfg/types/int_test.go b/Godeps/_workspace/src/code.google.com/p/gcfg/types/int_test.go
+deleted file mode 100644
+index b63dbcb..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/gcfg/types/int_test.go
++++ /dev/null
+@@ -1,67 +0,0 @@
+-package types
+-
+-import (
+-	"reflect"
+-	"testing"
+-)
+-
+-func elem(p interface{}) interface{} {
+-	return reflect.ValueOf(p).Elem().Interface()
+-}
+-
+-func TestParseInt(t *testing.T) {
+-	for _, tt := range []struct {
+-		val  string
+-		mode IntMode
+-		exp  interface{}
+-		ok   bool
+-	}{
+-		{"0", Dec, int(0), true},
+-		{"10", Dec, int(10), true},
+-		{"-10", Dec, int(-10), true},
+-		{"x", Dec, int(0), false},
+-		{"0xa", Hex, int(0xa), true},
+-		{"a", Hex, int(0xa), true},
+-		{"10", Hex, int(0x10), true},
+-		{"-0xa", Hex, int(-0xa), true},
+-		{"0x", Hex, int(0x0), true},  // Scanf doesn't require digit behind 0x
+-		{"-0x", Hex, int(0x0), true}, // Scanf doesn't require digit behind 0x
+-		{"-a", Hex, int(-0xa), true},
+-		{"-10", Hex, int(-0x10), true},
+-		{"x", Hex, int(0), false},
+-		{"10", Oct, int(010), true},
+-		{"010", Oct, int(010), true},
+-		{"-10", Oct, int(-010), true},
+-		{"-010", Oct, int(-010), true},
+-		{"10", Dec | Hex, int(10), true},
+-		{"010", Dec | Hex, int(10), true},
+-		{"0x10", Dec | Hex, int(0x10), true},
+-		{"10", Dec | Oct, int(10), true},
+-		{"010", Dec | Oct, int(010), true},
+-		{"0x10", Dec | Oct, int(0), false},
+-		{"10", Hex | Oct, int(0), false}, // need prefix to distinguish Hex/Oct
+-		{"010", Hex | Oct, int(010), true},
+-		{"0x10", Hex | Oct, int(0x10), true},
+-		{"10", Dec | Hex | Oct, int(10), true},
+-		{"010", Dec | Hex | Oct, int(010), true},
+-		{"0x10", Dec | Hex | Oct, int(0x10), true},
+-	} {
+-		typ := reflect.TypeOf(tt.exp)
+-		res := reflect.New(typ).Interface()
+-		err := ParseInt(res, tt.val, tt.mode)
+-		switch {
+-		case tt.ok && err != nil:
+-			t.Errorf("ParseInt(%v, %#v, %v): fail; got error %v, want ok",
+-				typ, tt.val, tt.mode, err)
+-		case !tt.ok && err == nil:
+-			t.Errorf("ParseInt(%v, %#v, %v): fail; got %v, want error",
+-				typ, tt.val, tt.mode, elem(res))
+-		case tt.ok && !reflect.DeepEqual(elem(res), tt.exp):
+-			t.Errorf("ParseInt(%v, %#v, %v): fail; got %v, want %v",
+-				typ, tt.val, tt.mode, elem(res), tt.exp)
+-		default:
+-			t.Logf("ParseInt(%v, %#v, %s): pass; got %v, error %v",
+-				typ, tt.val, tt.mode, elem(res), err)
+-		}
+-	}
+-}
+diff --git a/Godeps/_workspace/src/code.google.com/p/gcfg/types/scan.go b/Godeps/_workspace/src/code.google.com/p/gcfg/types/scan.go
+deleted file mode 100644
+index db2f6ed..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/gcfg/types/scan.go
++++ /dev/null
+@@ -1,23 +0,0 @@
+-package types
+-
+-import (
+-	"fmt"
+-	"io"
+-	"reflect"
+-)
+-
+-// ScanFully uses fmt.Sscanf with verb to fully scan val into ptr.
+-func ScanFully(ptr interface{}, val string, verb byte) error {
+-	t := reflect.ValueOf(ptr).Elem().Type()
+-	// attempt to read extra bytes to make sure the value is consumed
+-	var b []byte
+-	n, err := fmt.Sscanf(val, "%"+string(verb)+"%s", ptr, &b)
+-	switch {
+-	case n < 1 || n == 1 && err != io.EOF:
+-		return fmt.Errorf("failed to parse %q as %v: %v", val, t, err)
+-	case n > 1:
+-		return fmt.Errorf("failed to parse %q as %v: extra characters %q", val, t, string(b))
+-	}
+-	// n == 1 && err == io.EOF
+-	return nil
+-}
+diff --git a/Godeps/_workspace/src/code.google.com/p/gcfg/types/scan_test.go b/Godeps/_workspace/src/code.google.com/p/gcfg/types/scan_test.go
+deleted file mode 100644
+index a8083e0..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/gcfg/types/scan_test.go
++++ /dev/null
+@@ -1,36 +0,0 @@
+-package types
+-
+-import (
+-	"reflect"
+-	"testing"
+-)
+-
+-func TestScanFully(t *testing.T) {
+-	for _, tt := range []struct {
+-		val  string
+-		verb byte
+-		res  interface{}
+-		ok   bool
+-	}{
+-		{"a", 'v', int(0), false},
+-		{"0x", 'v', int(0), true},
+-		{"0x", 'd', int(0), false},
+-	} {
+-		d := reflect.New(reflect.TypeOf(tt.res)).Interface()
+-		err := ScanFully(d, tt.val, tt.verb)
+-		switch {
+-		case tt.ok && err != nil:
+-			t.Errorf("ScanFully(%T, %q, '%c'): want ok, got error %v",
+-				d, tt.val, tt.verb, err)
+-		case !tt.ok && err == nil:
+-			t.Errorf("ScanFully(%T, %q, '%c'): want error, got %v",
+-				d, tt.val, tt.verb, elem(d))
+-		case tt.ok && err == nil && !reflect.DeepEqual(tt.res, elem(d)):
+-			t.Errorf("ScanFully(%T, %q, '%c'): want %v, got %v",
+-				d, tt.val, tt.verb, tt.res, elem(d))
+-		default:
+-			t.Logf("ScanFully(%T, %q, '%c') = %v; *ptr==%v",
+-				d, tt.val, tt.verb, err, elem(d))
+-		}
+-	}
+-}
+diff --git a/Godeps/_workspace/src/code.google.com/p/go-uuid/uuid/LICENSE b/Godeps/_workspace/src/code.google.com/p/go-uuid/uuid/LICENSE
+deleted file mode 100644
+index ab6b011..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/go-uuid/uuid/LICENSE
++++ /dev/null
+@@ -1,27 +0,0 @@
+-Copyright (c) 2009 Google Inc. All rights reserved.
+-
+-Redistribution and use in source and binary forms, with or without
+-modification, are permitted provided that the following conditions are
+-met:
+-
+-   * Redistributions of source code must retain the above copyright
+-notice, this list of conditions and the following disclaimer.
+-   * Redistributions in binary form must reproduce the above
+-copyright notice, this list of conditions and the following disclaimer
+-in the documentation and/or other materials provided with the
+-distribution.
+-   * Neither the name of Google Inc. nor the names of its
+-contributors may be used to endorse or promote products derived from
+-this software without specific prior written permission.
+-
+-THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+-"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+-LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+-A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+-OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+-SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+-LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+-DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+-THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+-(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+-OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+diff --git a/Godeps/_workspace/src/code.google.com/p/go-uuid/uuid/dce.go b/Godeps/_workspace/src/code.google.com/p/go-uuid/uuid/dce.go
+deleted file mode 100644
+index 50a0f2d..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/go-uuid/uuid/dce.go
++++ /dev/null
+@@ -1,84 +0,0 @@
+-// Copyright 2011 Google Inc.  All rights reserved.
+-// Use of this source code is governed by a BSD-style
+-// license that can be found in the LICENSE file.
+-
+-package uuid
+-
+-import (
+-	"encoding/binary"
+-	"fmt"
+-	"os"
+-)
+-
+-// A Domain represents a Version 2 domain
+-type Domain byte
+-
+-// Domain constants for DCE Security (Version 2) UUIDs.
+-const (
+-	Person = Domain(0)
+-	Group  = Domain(1)
+-	Org    = Domain(2)
+-)
+-
+-// NewDCESecurity returns a DCE Security (Version 2) UUID.
+-//
+-// The domain should be one of Person, Group or Org.
+-// On a POSIX system the id should be the user's UID for the Person
+-// domain and the user's GID for the Group domain.  The meaning of id for
+-// the Org domain or on non-POSIX systems is site defined.
+-//
+-// For a given domain/id pair the same token may be returned for up to
+-// 7 minutes and 10 seconds.
+-func NewDCESecurity(domain Domain, id uint32) UUID {
+-	uuid := NewUUID()
+-	if uuid != nil {
+-		uuid[6] = (uuid[6] & 0x0f) | 0x20 // Version 2
+-		uuid[9] = byte(domain)
+-		binary.BigEndian.PutUint32(uuid[0:], id)
+-	}
+-	return uuid
+-}
+-
+-// NewDCEPerson returns a DCE Security (Version 2) UUID in the person
+-// domain with the id returned by os.Getuid.
+-//
+-//  NewDCEPerson(Person, uint32(os.Getuid()))
+-func NewDCEPerson() UUID {
+-	return NewDCESecurity(Person, uint32(os.Getuid()))
+-}
+-
+-// NewDCEGroup returns a DCE Security (Version 2) UUID in the group
+-// domain with the id returned by os.Getgid.
+-//
+-//  NewDCEGroup(Group, uint32(os.Getgid()))
+-func NewDCEGroup() UUID {
+-	return NewDCESecurity(Group, uint32(os.Getgid()))
+-}
+-
+-// Domain returns the domain for a Version 2 UUID or false.
+-func (uuid UUID) Domain() (Domain, bool) {
+-	if v, _ := uuid.Version(); v != 2 {
+-		return 0, false
+-	}
+-	return Domain(uuid[9]), true
+-}
+-
+-// Id returns the id for a Version 2 UUID or false.
+-func (uuid UUID) Id() (uint32, bool) {
+-	if v, _ := uuid.Version(); v != 2 {
+-		return 0, false
+-	}
+-	return binary.BigEndian.Uint32(uuid[0:4]), true
+-}
+-
+-func (d Domain) String() string {
+-	switch d {
+-	case Person:
+-		return "Person"
+-	case Group:
+-		return "Group"
+-	case Org:
+-		return "Org"
+-	}
+-	return fmt.Sprintf("Domain%d", int(d))
+-}
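For reference, the layout the deleted dce.go uses — the domain in byte 9 and the 32-bit id overwriting the first four bytes — can be reproduced standalone. The helper names below are illustrative, not the package's API:

```go
package main

import (
	"crypto/rand"
	"encoding/binary"
	"fmt"
)

// newDCESketch builds a 16-byte value laid out like a DCE Security
// (Version 2) UUID: random bytes with the version nibble forced to 2,
// the RFC 4122 variant bits set, the domain in byte 9 and the 32-bit
// id in bytes 0-3. A sketch of the deleted NewDCESecurity, not the
// original implementation.
func newDCESketch(domain byte, id uint32) []byte {
	u := make([]byte, 16)
	if _, err := rand.Read(u); err != nil {
		panic(err)
	}
	u[6] = (u[6] & 0x0f) | 0x20 // Version 2
	u[8] = (u[8] & 0x3f) | 0x80 // RFC 4122 variant
	u[9] = domain
	binary.BigEndian.PutUint32(u[0:], id)
	return u
}

// domainID reads the two fields back, as Domain() and Id() did.
func domainID(u []byte) (byte, uint32) {
	return u[9], binary.BigEndian.Uint32(u[0:4])
}

func main() {
	u := newDCESketch(0, 1000) // Person domain, uid 1000
	d, id := domainID(u)
	fmt.Printf("version=%d domain=%d id=%d\n", u[6]>>4, d, id) // version=2 domain=0 id=1000
}
```

Overwriting bytes 0-3 with the id destroys the low bits of the embedded timestamp, which is why the doc comment above warns that the same token may be returned for up to 7 minutes and 10 seconds per domain/id pair.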
+diff --git a/Godeps/_workspace/src/code.google.com/p/go-uuid/uuid/doc.go b/Godeps/_workspace/src/code.google.com/p/go-uuid/uuid/doc.go
+deleted file mode 100644
+index d8bd013..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/go-uuid/uuid/doc.go
++++ /dev/null
+@@ -1,8 +0,0 @@
+-// Copyright 2011 Google Inc.  All rights reserved.
+-// Use of this source code is governed by a BSD-style
+-// license that can be found in the LICENSE file.
+-
+-// The uuid package generates and inspects UUIDs.
+-//
+-// UUIDs are based on RFC 4122 and DCE 1.1: Authentication and Security Services.
+-package uuid
+diff --git a/Godeps/_workspace/src/code.google.com/p/go-uuid/uuid/hash.go b/Godeps/_workspace/src/code.google.com/p/go-uuid/uuid/hash.go
+deleted file mode 100644
+index cdd4192..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/go-uuid/uuid/hash.go
++++ /dev/null
+@@ -1,53 +0,0 @@
+-// Copyright 2011 Google Inc.  All rights reserved.
+-// Use of this source code is governed by a BSD-style
+-// license that can be found in the LICENSE file.
+-
+-package uuid
+-
+-import (
+-	"crypto/md5"
+-	"crypto/sha1"
+-	"hash"
+-)
+-
+-// Well known Name Space IDs and UUIDs
+-var (
+-	NameSpace_DNS  = Parse("6ba7b810-9dad-11d1-80b4-00c04fd430c8")
+-	NameSpace_URL  = Parse("6ba7b811-9dad-11d1-80b4-00c04fd430c8")
+-	NameSpace_OID  = Parse("6ba7b812-9dad-11d1-80b4-00c04fd430c8")
+-	NameSpace_X500 = Parse("6ba7b814-9dad-11d1-80b4-00c04fd430c8")
+-	NIL            = Parse("00000000-0000-0000-0000-000000000000")
+-)
+-
+-// NewHash returns a new UUID derived from the hash of space concatenated with
+-// data generated by h.  The hash should be at least 16 bytes in length.  The
+-// first 16 bytes of the hash are used to form the UUID.  The version of the
+-// UUID will be the lower 4 bits of version.  NewHash is used to implement
+-// NewMD5 and NewSHA1.
+-func NewHash(h hash.Hash, space UUID, data []byte, version int) UUID {
+-	h.Reset()
+-	h.Write(space)
+-	h.Write([]byte(data))
+-	s := h.Sum(nil)
+-	uuid := make([]byte, 16)
+-	copy(uuid, s)
+-	uuid[6] = (uuid[6] & 0x0f) | uint8((version&0xf)<<4)
+-	uuid[8] = (uuid[8] & 0x3f) | 0x80 // RFC 4122 variant
+-	return uuid
+-}
+-
+-// NewMD5 returns a new MD5 (Version 3) UUID based on the
+-// supplied name space and data.
+-//
+-//  NewHash(md5.New(), space, data, 3)
+-func NewMD5(space UUID, data []byte) UUID {
+-	return NewHash(md5.New(), space, data, 3)
+-}
+-
+-// NewSHA1 returns a new SHA1 (Version 5) UUID based on the
+-// supplied name space and data.
+-//
+-//  NewHash(sha1.New(), space, data, 5)
+-func NewSHA1(space UUID, data []byte) UUID {
+-	return NewHash(sha1.New(), space, data, 5)
+-}
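The NewHash recipe above (hash the namespace followed by the name, keep the first 16 bytes, stamp the version and variant bits) can be reproduced with nothing but the standard library. The DNS namespace constant is the RFC 4122 appendix C value used in NameSpace_DNS above; the expected output matches the deleted package's own TestSHA1:

```go
package main

import (
	"crypto/sha1"
	"encoding/hex"
	"fmt"
)

// nsDNS is the RFC 4122 DNS namespace UUID as raw hex.
const nsDNS = "6ba7b8109dad11d180b400c04fd430c8"

// newSHA1Sketch mirrors NewSHA1: sha1(space || name), truncated to
// 16 bytes, with the version set to 5 and the RFC 4122 variant set.
func newSHA1Sketch(name string) string {
	space, _ := hex.DecodeString(nsDNS)
	h := sha1.New()
	h.Write(space)
	h.Write([]byte(name))
	sum := h.Sum(nil)
	u := make([]byte, 16)
	copy(u, sum)
	u[6] = (u[6] & 0x0f) | 0x50 // Version 5
	u[8] = (u[8] & 0x3f) | 0x80 // RFC 4122 variant
	return fmt.Sprintf("%x-%x-%x-%x-%x", u[0:4], u[4:6], u[6:8], u[8:10], u[10:16])
}

func main() {
	fmt.Println(newSHA1Sketch("python.org")) // 886313e1-3b8a-5372-9b90-0c9aee199e5d
}
```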
+diff --git a/Godeps/_workspace/src/code.google.com/p/go-uuid/uuid/node.go b/Godeps/_workspace/src/code.google.com/p/go-uuid/uuid/node.go
+deleted file mode 100644
+index dd0a8ac..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/go-uuid/uuid/node.go
++++ /dev/null
+@@ -1,101 +0,0 @@
+-// Copyright 2011 Google Inc.  All rights reserved.
+-// Use of this source code is governed by a BSD-style
+-// license that can be found in the LICENSE file.
+-
+-package uuid
+-
+-import "net"
+-
+-var (
+-	interfaces []net.Interface // cached list of interfaces
+-	ifname     string          // name of interface being used
+-	nodeID     []byte          // hardware for version 1 UUIDs
+-)
+-
+-// NodeInterface returns the name of the interface from which the NodeID was
+-// derived.  The interface "user" is returned if the NodeID was set by
+-// SetNodeID.
+-func NodeInterface() string {
+-	return ifname
+-}
+-
+-// SetNodeInterface selects the hardware address to be used for Version 1 UUIDs.
+-// If name is "" then the first usable interface found will be used or a random
+-// Node ID will be generated.  If a named interface cannot be found then false
+-// is returned.
+-//
+-// SetNodeInterface never fails when name is "".
+-func SetNodeInterface(name string) bool {
+-	if interfaces == nil {
+-		var err error
+-		interfaces, err = net.Interfaces()
+-		if err != nil && name != "" {
+-			return false
+-		}
+-	}
+-
+-	for _, ifs := range interfaces {
+-		if len(ifs.HardwareAddr) >= 6 && (name == "" || name == ifs.Name) {
+-			if setNodeID(ifs.HardwareAddr) {
+-				ifname = ifs.Name
+-				return true
+-			}
+-		}
+-	}
+-
+-	// We found no interfaces with a valid hardware address.  If name
+-	// does not specify a specific interface generate a random Node ID
+-	// (section 4.1.6)
+-	if name == "" {
+-		if nodeID == nil {
+-			nodeID = make([]byte, 6)
+-		}
+-		randomBits(nodeID)
+-		return true
+-	}
+-	return false
+-}
+-
+-// NodeID returns a slice of a copy of the current Node ID, setting the Node ID
+-// if not already set.
+-func NodeID() []byte {
+-	if nodeID == nil {
+-		SetNodeInterface("")
+-	}
+-	nid := make([]byte, 6)
+-	copy(nid, nodeID)
+-	return nid
+-}
+-
+-// SetNodeID sets the Node ID to be used for Version 1 UUIDs.  The first 6 bytes
+-// of id are used.  If id is less than 6 bytes then false is returned and the
+-// Node ID is not set.
+-func SetNodeID(id []byte) bool {
+-	if setNodeID(id) {
+-		ifname = "user"
+-		return true
+-	}
+-	return false
+-}
+-
+-func setNodeID(id []byte) bool {
+-	if len(id) < 6 {
+-		return false
+-	}
+-	if nodeID == nil {
+-		nodeID = make([]byte, 6)
+-	}
+-	copy(nodeID, id)
+-	return true
+-}
+-
+-// NodeID returns the 6 byte node id encoded in uuid.  It returns nil if uuid is
+-// not valid.  The NodeID is only well defined for version 1 and 2 UUIDs.
+-func (uuid UUID) NodeID() []byte {
+-	if len(uuid) != 16 {
+-		return nil
+-	}
+-	node := make([]byte, 6)
+-	copy(node, uuid[10:])
+-	return node
+-}
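A minimal sketch of the node-ID selection logic above: take the first interface with a usable hardware address, otherwise fall back to 6 random bytes as RFC 4122 section 4.1.6 allows. pickNodeID is a hypothetical helper, not part of this package:

```go
package main

import (
	"crypto/rand"
	"fmt"
	"net"
)

// pickNodeID returns a 6-byte node ID: the hardware address of the
// first interface that has one, or random bytes if none is found.
func pickNodeID() []byte {
	id := make([]byte, 6)
	if ifs, err := net.Interfaces(); err == nil {
		for _, i := range ifs {
			if len(i.HardwareAddr) >= 6 {
				copy(id, i.HardwareAddr)
				return id
			}
		}
	}
	// No usable hardware address: random node ID (RFC 4122 4.1.6).
	if _, err := rand.Read(id); err != nil {
		panic(err)
	}
	return id
}

func main() {
	id := pickNodeID()
	fmt.Printf("node id: %x (len %d)\n", id, len(id))
}
```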
+diff --git a/Godeps/_workspace/src/code.google.com/p/go-uuid/uuid/time.go b/Godeps/_workspace/src/code.google.com/p/go-uuid/uuid/time.go
+deleted file mode 100644
+index b9369c2..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/go-uuid/uuid/time.go
++++ /dev/null
+@@ -1,132 +0,0 @@
+-// Copyright 2014 Google Inc.  All rights reserved.
+-// Use of this source code is governed by a BSD-style
+-// license that can be found in the LICENSE file.
+-
+-package uuid
+-
+-import (
+-	"encoding/binary"
+-	"sync"
+-	"time"
+-)
+-
+-// A Time represents a time as the number of 100's of nanoseconds since 15 Oct
+-// 1582.
+-type Time int64
+-
+-const (
+-	lillian    = 2299160          // Julian day of 15 Oct 1582
+-	unix       = 2440587          // Julian day of 1 Jan 1970
+-	epoch      = unix - lillian   // Days between epochs
+-	g1582      = epoch * 86400    // seconds between epochs
+-	g1582ns100 = g1582 * 10000000 // 100s of a nanoseconds between epochs
+-)
+-
+-var (
+-	mu        sync.Mutex
+-	lasttime  uint64 // last time we returned
+-	clock_seq uint16 // clock sequence for this run
+-
+-	timeNow = time.Now // for testing
+-)
+-
+-// UnixTime converts t to the number of seconds and nanoseconds since the Unix
+-// epoch of 1 Jan 1970.
+-func (t Time) UnixTime() (sec, nsec int64) {
+-	sec = int64(t - g1582ns100)
+-	nsec = (sec % 10000000) * 100
+-	sec /= 10000000
+-	return sec, nsec
+-}
+-
+-// GetTime returns the current Time (100s of nanoseconds since 15 Oct 1582) and
+-// adjusts the clock sequence as needed.  An error is returned if the current
+-// time cannot be determined.
+-func GetTime() (Time, error) {
+-	defer mu.Unlock()
+-	mu.Lock()
+-	return getTime()
+-}
+-
+-func getTime() (Time, error) {
+-	t := timeNow()
+-
+-	// If we don't have a clock sequence already, set one.
+-	if clock_seq == 0 {
+-		setClockSequence(-1)
+-	}
+-	now := uint64(t.UnixNano()/100) + g1582ns100
+-
+-	// If time has gone backwards with this clock sequence then we
+-	// increment the clock sequence
+-	if now <= lasttime {
+-		clock_seq = ((clock_seq + 1) & 0x3fff) | 0x8000
+-	}
+-	lasttime = now
+-	return Time(now), nil
+-}
+-
+-// ClockSequence returns the current clock sequence, generating one if not
+-// already set.  The clock sequence is only used for Version 1 UUIDs.
+-//
+-// The uuid package does not use global static storage for the clock sequence
+-// or the last time a UUID was generated.  Unless it is set explicitly with
+-// SetClockSequence, a new random clock sequence is generated the first time
+-// one is requested by ClockSequence, GetTime, or NewUUID, as described in
+-// RFC 4122 section 4.2.1.1.
+-func ClockSequence() int {
+-	defer mu.Unlock()
+-	mu.Lock()
+-	return clockSequence()
+-}
+-
+-func clockSequence() int {
+-	if clock_seq == 0 {
+-		setClockSequence(-1)
+-	}
+-	return int(clock_seq & 0x3fff)
+-}
+-
+-// SetClockSequence sets the clock sequence to the lower 14 bits of seq.  Setting to
+-// -1 causes a new sequence to be generated.
+-func SetClockSequence(seq int) {
+-	defer mu.Unlock()
+-	mu.Lock()
+-	setClockSequence(seq)
+-}
+-
+-func setClockSequence(seq int) {
+-	if seq == -1 {
+-		var b [2]byte
+-		randomBits(b[:]) // clock sequence
+-		seq = int(b[0])<<8 | int(b[1])
+-	}
+-	old_seq := clock_seq
+-	clock_seq = uint16(seq&0x3fff) | 0x8000 // Set our variant
+-	if old_seq != clock_seq {
+-		lasttime = 0
+-	}
+-}
+-
+-// Time returns the time in 100s of nanoseconds since 15 Oct 1582 encoded in
+-// uuid.  It returns false if uuid is not valid.  The time is only well defined
+-// for version 1 and 2 UUIDs.
+-func (uuid UUID) Time() (Time, bool) {
+-	if len(uuid) != 16 {
+-		return 0, false
+-	}
+-	time := int64(binary.BigEndian.Uint32(uuid[0:4]))
+-	time |= int64(binary.BigEndian.Uint16(uuid[4:6])) << 32
+-	time |= int64(binary.BigEndian.Uint16(uuid[6:8])&0xfff) << 48
+-	return Time(time), true
+-}
+-
+-// ClockSequence returns the clock sequence encoded in uuid.  It returns false
+-// if uuid is not valid.  The clock sequence is only well defined for version 1
+-// and 2 UUIDs.
+-func (uuid UUID) ClockSequence() (int, bool) {
+-	if len(uuid) != 16 {
+-		return 0, false
+-	}
+-	return int(binary.BigEndian.Uint16(uuid[8:10])) & 0x3fff, true
+-}
+diff --git a/Godeps/_workspace/src/code.google.com/p/go-uuid/uuid/util.go b/Godeps/_workspace/src/code.google.com/p/go-uuid/uuid/util.go
+deleted file mode 100644
+index de40b10..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/go-uuid/uuid/util.go
++++ /dev/null
+@@ -1,43 +0,0 @@
+-// Copyright 2011 Google Inc.  All rights reserved.
+-// Use of this source code is governed by a BSD-style
+-// license that can be found in the LICENSE file.
+-
+-package uuid
+-
+-import (
+-	"io"
+-)
+-
+-// randomBits completely fills slice b with random data.
+-func randomBits(b []byte) {
+-	if _, err := io.ReadFull(rander, b); err != nil {
+-		panic(err.Error()) // rand should never fail
+-	}
+-}
+-
+-// xvalues returns the value of a byte as a hexadecimal digit or 255.
+-var xvalues = []byte{
+-	255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255,
+-	255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255,
+-	255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255,
+-	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 255, 255, 255, 255, 255, 255,
+-	255, 10, 11, 12, 13, 14, 15, 255, 255, 255, 255, 255, 255, 255, 255, 255,
+-	255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255,
+-	255, 10, 11, 12, 13, 14, 15, 255, 255, 255, 255, 255, 255, 255, 255, 255,
+-	255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255,
+-	255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255,
+-	255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255,
+-	255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255,
+-	255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255,
+-	255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255,
+-	255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255,
+-	255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255,
+-	255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255,
+-}
+-
+-// xtob converts the first two hex bytes of x into a byte.
+-func xtob(x string) (byte, bool) {
+-	b1 := xvalues[x[0]]
+-	b2 := xvalues[x[1]]
+-	return (b1 << 4) | b2, b1 != 255 && b2 != 255
+-}
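The hand-written xvalues table above can equivalently be built at startup; a sketch (names suffixed Sketch to make clear they are not the package's):

```go
package main

import "fmt"

// xvaluesSketch maps ASCII hex digits to their value and everything
// else to 255, mirroring the lookup-table approach above.
var xvaluesSketch [256]byte

func init() {
	for i := range xvaluesSketch {
		xvaluesSketch[i] = 255
	}
	for c := byte('0'); c <= '9'; c++ {
		xvaluesSketch[c] = c - '0'
	}
	for c := byte('a'); c <= 'f'; c++ {
		xvaluesSketch[c] = c - 'a' + 10
		xvaluesSketch[c-32] = c - 'a' + 10 // 'A'..'F'
	}
}

// xtobSketch decodes the first two hex characters of x into one byte,
// returning false if either character is not a hex digit.
func xtobSketch(x string) (byte, bool) {
	b1, b2 := xvaluesSketch[x[0]], xvaluesSketch[x[1]]
	return b1<<4 | b2, b1 != 255 && b2 != 255
}

func main() {
	b, ok := xtobSketch("7d")
	fmt.Printf("%#02x %v\n", b, ok) // 0x7d true
}
```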
+diff --git a/Godeps/_workspace/src/code.google.com/p/go-uuid/uuid/uuid.go b/Godeps/_workspace/src/code.google.com/p/go-uuid/uuid/uuid.go
+deleted file mode 100644
+index 2920fae..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/go-uuid/uuid/uuid.go
++++ /dev/null
+@@ -1,163 +0,0 @@
+-// Copyright 2011 Google Inc.  All rights reserved.
+-// Use of this source code is governed by a BSD-style
+-// license that can be found in the LICENSE file.
+-
+-package uuid
+-
+-import (
+-	"bytes"
+-	"crypto/rand"
+-	"fmt"
+-	"io"
+-	"strings"
+-)
+-
+-// A UUID is a 128 bit (16 byte) Universal Unique IDentifier as defined in RFC
+-// 4122.
+-type UUID []byte
+-
+-// A Version represents a UUIDs version.
+-type Version byte
+-
+-// A Variant represents a UUIDs variant.
+-type Variant byte
+-
+-// Constants returned by Variant.
+-const (
+-	Invalid   = Variant(iota) // Invalid UUID
+-	RFC4122                   // The variant specified in RFC4122
+-	Reserved                  // Reserved, NCS backward compatibility.
+-	Microsoft                 // Reserved, Microsoft Corporation backward compatibility.
+-	Future                    // Reserved for future definition.
+-)
+-
+-var rander = rand.Reader // random function
+-
+-// New returns a new random (version 4) UUID as a string.  It is a convenience
+-// function for NewRandom().String().
+-func New() string {
+-	return NewRandom().String()
+-}
+-
+-// Parse decodes s into a UUID or returns nil.  Both the UUID form of
+-// xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx and
+-// urn:uuid:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx are decoded.
+-func Parse(s string) UUID {
+-	if len(s) == 36+9 {
+-		if strings.ToLower(s[:9]) != "urn:uuid:" {
+-			return nil
+-		}
+-		s = s[9:]
+-	} else if len(s) != 36 {
+-		return nil
+-	}
+-	if s[8] != '-' || s[13] != '-' || s[18] != '-' || s[23] != '-' {
+-		return nil
+-	}
+-	uuid := make([]byte, 16)
+-	for i, x := range []int{
+-		0, 2, 4, 6,
+-		9, 11,
+-		14, 16,
+-		19, 21,
+-		24, 26, 28, 30, 32, 34} {
+-		if v, ok := xtob(s[x:]); !ok {
+-			return nil
+-		} else {
+-			uuid[i] = v
+-		}
+-	}
+-	return uuid
+-}
+-
+-// Equal returns true if uuid1 and uuid2 are equal.
+-func Equal(uuid1, uuid2 UUID) bool {
+-	return bytes.Equal(uuid1, uuid2)
+-}
+-
+-// String returns the string form of uuid, xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
+-// , or "" if uuid is invalid.
+-func (uuid UUID) String() string {
+-	if uuid == nil || len(uuid) != 16 {
+-		return ""
+-	}
+-	b := []byte(uuid)
+-	return fmt.Sprintf("%08x-%04x-%04x-%04x-%012x",
+-		b[:4], b[4:6], b[6:8], b[8:10], b[10:])
+-}
+-
+-// URN returns the RFC 2141 URN form of uuid,
+-// urn:uuid:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx,  or "" if uuid is invalid.
+-func (uuid UUID) URN() string {
+-	if uuid == nil || len(uuid) != 16 {
+-		return ""
+-	}
+-	b := []byte(uuid)
+-	return fmt.Sprintf("urn:uuid:%08x-%04x-%04x-%04x-%012x",
+-		b[:4], b[4:6], b[6:8], b[8:10], b[10:])
+-}
+-
+-// Variant returns the variant encoded in uuid.  It returns Invalid if
+-// uuid is invalid.
+-func (uuid UUID) Variant() Variant {
+-	if len(uuid) != 16 {
+-		return Invalid
+-	}
+-	switch {
+-	case (uuid[8] & 0xc0) == 0x80:
+-		return RFC4122
+-	case (uuid[8] & 0xe0) == 0xc0:
+-		return Microsoft
+-	case (uuid[8] & 0xe0) == 0xe0:
+-		return Future
+-	default:
+-		return Reserved
+-	}
+-	panic("unreachable")
+-}
+-
+-// Version returns the version of uuid.  It returns false if uuid is not
+-// valid.
+-func (uuid UUID) Version() (Version, bool) {
+-	if len(uuid) != 16 {
+-		return 0, false
+-	}
+-	return Version(uuid[6] >> 4), true
+-}
+-
+-func (v Version) String() string {
+-	if v > 15 {
+-		return fmt.Sprintf("BAD_VERSION_%d", v)
+-	}
+-	return fmt.Sprintf("VERSION_%d", v)
+-}
+-
+-func (v Variant) String() string {
+-	switch v {
+-	case RFC4122:
+-		return "RFC4122"
+-	case Reserved:
+-		return "Reserved"
+-	case Microsoft:
+-		return "Microsoft"
+-	case Future:
+-		return "Future"
+-	case Invalid:
+-		return "Invalid"
+-	}
+-	return fmt.Sprintf("BadVariant%d", int(v))
+-}
+-
+-// SetRand sets the random number generator to r, which implements io.Reader.
+-// If r.Read returns an error when the package requests random data then
+-// a panic will be issued.
+-//
+-// Calling SetRand with nil sets the random number generator to the default
+-// generator.
+-func SetRand(r io.Reader) {
+-	if r == nil {
+-		rander = rand.Reader
+-		return
+-	}
+-	rander = r
+-}
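The variant test above keys entirely off byte 8 of the UUID. The same classification as a standalone sketch:

```go
package main

import "fmt"

// variantOf classifies a UUID's byte 8 the way Variant() above does:
// RFC 4122 is the two-bit pattern 10 in the top bits, Microsoft is
// 110, Future is 111, and everything else is the reserved NCS range.
func variantOf(b byte) string {
	switch {
	case b&0xc0 == 0x80:
		return "RFC4122"
	case b&0xe0 == 0xc0:
		return "Microsoft"
	case b&0xe0 == 0xe0:
		return "Future"
	default:
		return "Reserved"
	}
}

func main() {
	for _, b := range []byte{0x80, 0xc0, 0xe0, 0x00} {
		fmt.Printf("%#02x -> %s\n", b, variantOf(b))
	}
}
```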
+diff --git a/Godeps/_workspace/src/code.google.com/p/go-uuid/uuid/uuid_test.go b/Godeps/_workspace/src/code.google.com/p/go-uuid/uuid/uuid_test.go
+deleted file mode 100644
+index 417ebeb..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/go-uuid/uuid/uuid_test.go
++++ /dev/null
+@@ -1,390 +0,0 @@
+-// Copyright 2011 Google Inc.  All rights reserved.
+-// Use of this source code is governed by a BSD-style
+-// license that can be found in the LICENSE file.
+-
+-package uuid
+-
+-import (
+-	"bytes"
+-	"fmt"
+-	"os"
+-	"strings"
+-	"testing"
+-	"time"
+-)
+-
+-type test struct {
+-	in      string
+-	version Version
+-	variant Variant
+-	isuuid  bool
+-}
+-
+-var tests = []test{
+-	{"f47ac10b-58cc-0372-8567-0e02b2c3d479", 0, RFC4122, true},
+-	{"f47ac10b-58cc-1372-8567-0e02b2c3d479", 1, RFC4122, true},
+-	{"f47ac10b-58cc-2372-8567-0e02b2c3d479", 2, RFC4122, true},
+-	{"f47ac10b-58cc-3372-8567-0e02b2c3d479", 3, RFC4122, true},
+-	{"f47ac10b-58cc-4372-8567-0e02b2c3d479", 4, RFC4122, true},
+-	{"f47ac10b-58cc-5372-8567-0e02b2c3d479", 5, RFC4122, true},
+-	{"f47ac10b-58cc-6372-8567-0e02b2c3d479", 6, RFC4122, true},
+-	{"f47ac10b-58cc-7372-8567-0e02b2c3d479", 7, RFC4122, true},
+-	{"f47ac10b-58cc-8372-8567-0e02b2c3d479", 8, RFC4122, true},
+-	{"f47ac10b-58cc-9372-8567-0e02b2c3d479", 9, RFC4122, true},
+-	{"f47ac10b-58cc-a372-8567-0e02b2c3d479", 10, RFC4122, true},
+-	{"f47ac10b-58cc-b372-8567-0e02b2c3d479", 11, RFC4122, true},
+-	{"f47ac10b-58cc-c372-8567-0e02b2c3d479", 12, RFC4122, true},
+-	{"f47ac10b-58cc-d372-8567-0e02b2c3d479", 13, RFC4122, true},
+-	{"f47ac10b-58cc-e372-8567-0e02b2c3d479", 14, RFC4122, true},
+-	{"f47ac10b-58cc-f372-8567-0e02b2c3d479", 15, RFC4122, true},
+-
+-	{"urn:uuid:f47ac10b-58cc-4372-0567-0e02b2c3d479", 4, Reserved, true},
+-	{"URN:UUID:f47ac10b-58cc-4372-0567-0e02b2c3d479", 4, Reserved, true},
+-	{"f47ac10b-58cc-4372-0567-0e02b2c3d479", 4, Reserved, true},
+-	{"f47ac10b-58cc-4372-1567-0e02b2c3d479", 4, Reserved, true},
+-	{"f47ac10b-58cc-4372-2567-0e02b2c3d479", 4, Reserved, true},
+-	{"f47ac10b-58cc-4372-3567-0e02b2c3d479", 4, Reserved, true},
+-	{"f47ac10b-58cc-4372-4567-0e02b2c3d479", 4, Reserved, true},
+-	{"f47ac10b-58cc-4372-5567-0e02b2c3d479", 4, Reserved, true},
+-	{"f47ac10b-58cc-4372-6567-0e02b2c3d479", 4, Reserved, true},
+-	{"f47ac10b-58cc-4372-7567-0e02b2c3d479", 4, Reserved, true},
+-	{"f47ac10b-58cc-4372-8567-0e02b2c3d479", 4, RFC4122, true},
+-	{"f47ac10b-58cc-4372-9567-0e02b2c3d479", 4, RFC4122, true},
+-	{"f47ac10b-58cc-4372-a567-0e02b2c3d479", 4, RFC4122, true},
+-	{"f47ac10b-58cc-4372-b567-0e02b2c3d479", 4, RFC4122, true},
+-	{"f47ac10b-58cc-4372-c567-0e02b2c3d479", 4, Microsoft, true},
+-	{"f47ac10b-58cc-4372-d567-0e02b2c3d479", 4, Microsoft, true},
+-	{"f47ac10b-58cc-4372-e567-0e02b2c3d479", 4, Future, true},
+-	{"f47ac10b-58cc-4372-f567-0e02b2c3d479", 4, Future, true},
+-
+-	{"f47ac10b158cc-5372-a567-0e02b2c3d479", 0, Invalid, false},
+-	{"f47ac10b-58cc25372-a567-0e02b2c3d479", 0, Invalid, false},
+-	{"f47ac10b-58cc-53723a567-0e02b2c3d479", 0, Invalid, false},
+-	{"f47ac10b-58cc-5372-a56740e02b2c3d479", 0, Invalid, false},
+-	{"f47ac10b-58cc-5372-a567-0e02-2c3d479", 0, Invalid, false},
+-	{"g47ac10b-58cc-4372-a567-0e02b2c3d479", 0, Invalid, false},
+-}
+-
+-var constants = []struct {
+-	c    interface{}
+-	name string
+-}{
+-	{Person, "Person"},
+-	{Group, "Group"},
+-	{Org, "Org"},
+-	{Invalid, "Invalid"},
+-	{RFC4122, "RFC4122"},
+-	{Reserved, "Reserved"},
+-	{Microsoft, "Microsoft"},
+-	{Future, "Future"},
+-	{Domain(17), "Domain17"},
+-	{Variant(42), "BadVariant42"},
+-}
+-
+-func testTest(t *testing.T, in string, tt test) {
+-	uuid := Parse(in)
+-	if ok := (uuid != nil); ok != tt.isuuid {
+-		t.Errorf("Parse(%s) got %v expected %v\n", in, ok, tt.isuuid)
+-	}
+-	if uuid == nil {
+-		return
+-	}
+-
+-	if v := uuid.Variant(); v != tt.variant {
+-		t.Errorf("Variant(%s) got %d expected %d\n", in, v, tt.variant)
+-	}
+-	if v, _ := uuid.Version(); v != tt.version {
+-		t.Errorf("Version(%s) got %d expected %d\n", in, v, tt.version)
+-	}
+-}
+-
+-func TestUUID(t *testing.T) {
+-	for _, tt := range tests {
+-		testTest(t, tt.in, tt)
+-		testTest(t, strings.ToUpper(tt.in), tt)
+-	}
+-}
+-
+-func TestConstants(t *testing.T) {
+-	for x, tt := range constants {
+-		v, ok := tt.c.(fmt.Stringer)
+-		if !ok {
+-			t.Errorf("%x: %v: not a stringer", x, v)
+-		} else if s := v.String(); s != tt.name {
+-			v, _ := tt.c.(int)
+-			t.Errorf("%x: Constant %T:%d gives %q, expected %q\n", x, tt.c, v, s, tt.name)
+-		}
+-	}
+-}
+-
+-func TestRandomUUID(t *testing.T) {
+-	m := make(map[string]bool)
+-	for x := 1; x < 32; x++ {
+-		uuid := NewRandom()
+-		s := uuid.String()
+-		if m[s] {
+-			t.Errorf("NewRandom returned duplicated UUID %s\n", s)
+-		}
+-		m[s] = true
+-		if v, _ := uuid.Version(); v != 4 {
+-			t.Errorf("Random UUID of version %s\n", v)
+-		}
+-		if uuid.Variant() != RFC4122 {
+-			t.Errorf("Random UUID is variant %d\n", uuid.Variant())
+-		}
+-	}
+-}
+-
+-func TestNew(t *testing.T) {
+-	m := make(map[string]bool)
+-	for x := 1; x < 32; x++ {
+-		s := New()
+-		if m[s] {
+-			t.Errorf("New returned duplicated UUID %s\n", s)
+-		}
+-		m[s] = true
+-		uuid := Parse(s)
+-		if uuid == nil {
+-			t.Errorf("New returned %q which does not decode\n", s)
+-			continue
+-		}
+-		if v, _ := uuid.Version(); v != 4 {
+-			t.Errorf("Random UUID of version %s\n", v)
+-		}
+-		if uuid.Variant() != RFC4122 {
+-			t.Errorf("Random UUID is variant %d\n", uuid.Variant())
+-		}
+-	}
+-}
+-
+-func clockSeq(t *testing.T, uuid UUID) int {
+-	seq, ok := uuid.ClockSequence()
+-	if !ok {
+-		t.Fatalf("%s: invalid clock sequence\n", uuid)
+-	}
+-	return seq
+-}
+-
+-func TestClockSeq(t *testing.T) {
+-	// Fake time.Now for this test to return a monotonically advancing time; restore it at end.
+-	defer func(orig func() time.Time) { timeNow = orig }(timeNow)
+-	monTime := time.Now()
+-	timeNow = func() time.Time {
+-		monTime = monTime.Add(1 * time.Second)
+-		return monTime
+-	}
+-
+-	SetClockSequence(-1)
+-	uuid1 := NewUUID()
+-	uuid2 := NewUUID()
+-
+-	if clockSeq(t, uuid1) != clockSeq(t, uuid2) {
+-		t.Errorf("clock sequence %d != %d\n", clockSeq(t, uuid1), clockSeq(t, uuid2))
+-	}
+-
+-	SetClockSequence(-1)
+-	uuid2 = NewUUID()
+-
+-	// Just on the very off chance we generated the same sequence
+-	// two times we try again.
+-	if clockSeq(t, uuid1) == clockSeq(t, uuid2) {
+-		SetClockSequence(-1)
+-		uuid2 = NewUUID()
+-	}
+-	if clockSeq(t, uuid1) == clockSeq(t, uuid2) {
+-		t.Errorf("Duplicate clock sequence %d\n", clockSeq(t, uuid1))
+-	}
+-
+-	SetClockSequence(0x1234)
+-	uuid1 = NewUUID()
+-	if seq := clockSeq(t, uuid1); seq != 0x1234 {
+-		t.Errorf("%s: expected seq 0x1234 got 0x%04x\n", uuid1, seq)
+-	}
+-}
+-
+-func TestCoding(t *testing.T) {
+-	text := "7d444840-9dc0-11d1-b245-5ffdce74fad2"
+-	urn := "urn:uuid:7d444840-9dc0-11d1-b245-5ffdce74fad2"
+-	data := UUID{
+-		0x7d, 0x44, 0x48, 0x40,
+-		0x9d, 0xc0,
+-		0x11, 0xd1,
+-		0xb2, 0x45,
+-		0x5f, 0xfd, 0xce, 0x74, 0xfa, 0xd2,
+-	}
+-	if v := data.String(); v != text {
+-		t.Errorf("%x: encoded to %s, expected %s\n", data, v, text)
+-	}
+-	if v := data.URN(); v != urn {
+-		t.Errorf("%x: urn is %s, expected %s\n", data, v, urn)
+-	}
+-
+-	uuid := Parse(text)
+-	if !Equal(uuid, data) {
+-		t.Errorf("%s: decoded to %s, expected %s\n", text, uuid, data)
+-	}
+-}
+-
+-func TestVersion1(t *testing.T) {
+-	uuid1 := NewUUID()
+-	uuid2 := NewUUID()
+-
+-	if Equal(uuid1, uuid2) {
+-		t.Errorf("%s:duplicate uuid\n", uuid1)
+-	}
+-	if v, _ := uuid1.Version(); v != 1 {
+-		t.Errorf("%s: version %s expected 1\n", uuid1, v)
+-	}
+-	if v, _ := uuid2.Version(); v != 1 {
+-		t.Errorf("%s: version %s expected 1\n", uuid2, v)
+-	}
+-	n1 := uuid1.NodeID()
+-	n2 := uuid2.NodeID()
+-	if !bytes.Equal(n1, n2) {
+-		t.Errorf("Different nodes %x != %x\n", n1, n2)
+-	}
+-	t1, ok := uuid1.Time()
+-	if !ok {
+-		t.Errorf("%s: invalid time\n", uuid1)
+-	}
+-	t2, ok := uuid2.Time()
+-	if !ok {
+-		t.Errorf("%s: invalid time\n", uuid2)
+-	}
+-	q1, ok := uuid1.ClockSequence()
+-	if !ok {
+-		t.Errorf("%s: invalid clock sequence\n", uuid1)
+-	}
+-	q2, ok := uuid2.ClockSequence()
+-	if !ok {
+-		t.Errorf("%s: invalid clock sequence", uuid2)
+-	}
+-
+-	switch {
+-	case t1 == t2 && q1 == q2:
+-		t.Errorf("time stopped\n")
+-	case t1 > t2 && q1 == q2:
+-		t.Errorf("time reversed\n")
+-	case t1 < t2 && q1 != q2:
+-		t.Errorf("clock sequence changed unexpectedly\n")
+-	}
+-}
+-
+-func TestNodeAndTime(t *testing.T) {
+-	// Time is February 5, 1998 12:30:23.136364800 AM GMT
+-
+-	uuid := Parse("7d444840-9dc0-11d1-b245-5ffdce74fad2")
+-	node := []byte{0x5f, 0xfd, 0xce, 0x74, 0xfa, 0xd2}
+-
+-	ts, ok := uuid.Time()
+-	if ok {
+-		c := time.Unix(ts.UnixTime())
+-		want := time.Date(1998, 2, 5, 0, 30, 23, 136364800, time.UTC)
+-		if !c.Equal(want) {
+-			t.Errorf("Got time %v, want %v", c, want)
+-		}
+-	} else {
+-		t.Errorf("%s: bad time\n", uuid)
+-	}
+-	if !bytes.Equal(node, uuid.NodeID()) {
+-		t.Errorf("Expected node %v got %v\n", node, uuid.NodeID())
+-	}
+-}
+-
+-func TestMD5(t *testing.T) {
+-	uuid := NewMD5(NameSpace_DNS, []byte("python.org")).String()
+-	want := "6fa459ea-ee8a-3ca4-894e-db77e160355e"
+-	if uuid != want {
+-		t.Errorf("MD5: got %q expected %q\n", uuid, want)
+-	}
+-}
+-
+-func TestSHA1(t *testing.T) {
+-	uuid := NewSHA1(NameSpace_DNS, []byte("python.org")).String()
+-	want := "886313e1-3b8a-5372-9b90-0c9aee199e5d"
+-	if uuid != want {
+-		t.Errorf("SHA1: got %q expected %q\n", uuid, want)
+-	}
+-}
+-
+-func TestNodeID(t *testing.T) {
+-	nid := []byte{1, 2, 3, 4, 5, 6}
+-	SetNodeInterface("")
+-	s := NodeInterface()
+-	if s == "" || s == "user" {
+-		t.Errorf("NodeInterface %q after SetNodeInterface\n", s)
+-	}
+-	node1 := NodeID()
+-	if node1 == nil {
+-		t.Errorf("NodeID nil after SetNodeInterface\n")
+-	}
+-	SetNodeID(nid)
+-	s = NodeInterface()
+-	if s != "user" {
+-		t.Errorf("Expected NodeInterface %q got %q\n", "user", s)
+-	}
+-	node2 := NodeID()
+-	if node2 == nil {
+-		t.Errorf("NodeID nil after SetNodeID\n")
+-	}
+-	if bytes.Equal(node1, node2) {
+-		t.Errorf("NodeID not changed after SetNodeID\n")
+-	} else if !bytes.Equal(nid, node2) {
+-		t.Errorf("NodeID is %x, expected %x\n", node2, nid)
+-	}
+-}
+-
+-func testDCE(t *testing.T, name string, uuid UUID, domain Domain, id uint32) {
+-	if uuid == nil {
+-		t.Errorf("%s failed\n", name)
+-		return
+-	}
+-	if v, _ := uuid.Version(); v != 2 {
+-		t.Errorf("%s: %s: expected version 2, got %s\n", name, uuid, v)
+-		return
+-	}
+-	if v, ok := uuid.Domain(); !ok || v != domain {
+-		if !ok {
+-			t.Errorf("%s: %d: Domain failed\n", name, uuid)
+-		} else {
+-			t.Errorf("%s: %s: expected domain %d, got %d\n", name, uuid, domain, v)
+-		}
+-	}
+-	if v, ok := uuid.Id(); !ok || v != id {
+-		if !ok {
+-			t.Errorf("%s: %d: Id failed\n", name, uuid)
+-		} else {
+-			t.Errorf("%s: %s: expected id %d, got %d\n", name, uuid, id, v)
+-		}
+-	}
+-}
+-
+-func TestDCE(t *testing.T) {
+-	testDCE(t, "NewDCESecurity", NewDCESecurity(42, 12345678), 42, 12345678)
+-	testDCE(t, "NewDCEPerson", NewDCEPerson(), Person, uint32(os.Getuid()))
+-	testDCE(t, "NewDCEGroup", NewDCEGroup(), Group, uint32(os.Getgid()))
+-}
+-
+-type badRand struct{}
+-
+-func (r badRand) Read(buf []byte) (int, error) {
+-	for i, _ := range buf {
+-		buf[i] = byte(i)
+-	}
+-	return len(buf), nil
+-}
+-
+-func TestBadRand(t *testing.T) {
+-	SetRand(badRand{})
+-	uuid1 := New()
+-	uuid2 := New()
+-	if uuid1 != uuid2 {
+-		t.Errorf("expected duplicates, got %q and %q\n", uuid1, uuid2)
+-	}
+-	SetRand(nil)
+-	uuid1 = New()
+-	uuid2 = New()
+-	if uuid1 == uuid2 {
+-		t.Errorf("unexpected duplicate, got %q\n", uuid1)
+-	}
+-}
+diff --git a/Godeps/_workspace/src/code.google.com/p/go-uuid/uuid/version1.go b/Godeps/_workspace/src/code.google.com/p/go-uuid/uuid/version1.go
+deleted file mode 100644
+index 6358004..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/go-uuid/uuid/version1.go
++++ /dev/null
+@@ -1,41 +0,0 @@
+-// Copyright 2011 Google Inc.  All rights reserved.
+-// Use of this source code is governed by a BSD-style
+-// license that can be found in the LICENSE file.
+-
+-package uuid
+-
+-import (
+-	"encoding/binary"
+-)
+-
+-// NewUUID returns a Version 1 UUID based on the current NodeID and clock
+-// sequence, and the current time.  If the NodeID has not been set by SetNodeID
+-// or SetNodeInterface then it will be set automatically.  If the NodeID cannot
+-// be set NewUUID returns nil.  If clock sequence has not been set by
+-// SetClockSequence then it will be set automatically.  If GetTime fails to
+-// return the current time, NewUUID returns nil.
+-func NewUUID() UUID {
+-	if nodeID == nil {
+-		SetNodeInterface("")
+-	}
+-
+-	now, err := GetTime()
+-	if err != nil {
+-		return nil
+-	}
+-
+-	uuid := make([]byte, 16)
+-
+-	time_low := uint32(now & 0xffffffff)
+-	time_mid := uint16((now >> 32) & 0xffff)
+-	time_hi := uint16((now >> 48) & 0x0fff)
+-	time_hi |= 0x1000 // Version 1
+-
+-	binary.BigEndian.PutUint32(uuid[0:], time_low)
+-	binary.BigEndian.PutUint16(uuid[4:], time_mid)
+-	binary.BigEndian.PutUint16(uuid[6:], time_hi)
+-	binary.BigEndian.PutUint16(uuid[8:], clock_seq)
+-	copy(uuid[10:], nodeID)
+-
+-	return uuid
+-}
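The deleted `NewUUID` above packs a 60-bit timestamp into the version-1 wire layout: `time_low` (32 bits), `time_mid` (16 bits), and `time_hi` (12 bits) with the version nibble OR-ed into the top of `time_hi`. A minimal standalone sketch of that packing (`packTime` is a hypothetical helper, not part of the removed package):

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// packTime splits a 60-bit UUID timestamp into the version-1 layout:
// time_low (32 bits), time_mid (16 bits), time_hi (12 bits), with the
// version nibble 0x1 OR-ed into the high bits of time_hi.
func packTime(now uint64) [8]byte {
	var b [8]byte
	timeLow := uint32(now & 0xffffffff)
	timeMid := uint16((now >> 32) & 0xffff)
	timeHi := uint16((now>>48)&0x0fff) | 0x1000 // Version 1
	binary.BigEndian.PutUint32(b[0:], timeLow)
	binary.BigEndian.PutUint16(b[4:], timeMid)
	binary.BigEndian.PutUint16(b[6:], timeHi)
	return b
}

func main() {
	b := packTime(0x123456789abcdef)
	// the version nibble lives in the high 4 bits of byte 6
	fmt.Printf("version=%x bytes=% x\n", b[6]>>4, b)
}
```

The remaining 8 bytes of a full version-1 UUID carry the clock sequence and node ID, which the removed code copied in after these fields.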
+diff --git a/Godeps/_workspace/src/code.google.com/p/go-uuid/uuid/version4.go b/Godeps/_workspace/src/code.google.com/p/go-uuid/uuid/version4.go
+deleted file mode 100644
+index b3d4a36..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/go-uuid/uuid/version4.go
++++ /dev/null
+@@ -1,25 +0,0 @@
+-// Copyright 2011 Google Inc.  All rights reserved.
+-// Use of this source code is governed by a BSD-style
+-// license that can be found in the LICENSE file.
+-
+-package uuid
+-
+-// Random returns a Random (Version 4) UUID or panics.
+-//
+-// The strength of the UUIDs is based on the strength of the crypto/rand
+-// package.
+-//
+-// A note about uniqueness derived from the UUID Wikipedia entry:
+-//
+-//  Randomly generated UUIDs have 122 random bits.  One's annual risk of being
+-//  hit by a meteorite is estimated to be one chance in 17 billion, that
+-//  means the probability is about 0.00000000006 (6 × 10^-11),
+-//  equivalent to the odds of creating a few tens of trillions of UUIDs in a
+-//  year and having one duplicate.
+-func NewRandom() UUID {
+-	uuid := make([]byte, 16)
+-	randomBits([]byte(uuid))
+-	uuid[6] = (uuid[6] & 0x0f) | 0x40 // Version 4
+-	uuid[8] = (uuid[8] & 0x3f) | 0x80 // Variant is 10
+-	return uuid
+-}
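The deleted `NewRandom` above is just 16 random bytes with the RFC 4122 version and variant bits stamped in. A self-contained sketch of the same bit twiddling (`newV4` is a hypothetical name, not the removed API):

```go
package main

import (
	"crypto/rand"
	"fmt"
)

// newV4 fills 16 bytes from crypto/rand and stamps the RFC 4122
// version (4) and variant (10x) bits, as the removed NewRandom did.
func newV4() ([16]byte, error) {
	var u [16]byte
	if _, err := rand.Read(u[:]); err != nil {
		return u, err
	}
	u[6] = (u[6] & 0x0f) | 0x40 // version 4 in the high nibble of byte 6
	u[8] = (u[8] & 0x3f) | 0x80 // variant bits 10 in the top of byte 8
	return u, nil
}

func main() {
	u, err := newV4()
	if err != nil {
		panic(err)
	}
	fmt.Printf("%x-%x-%x-%x-%x\n", u[0:4], u[4:6], u[6:8], u[8:10], u[10:16])
}
```

The stamping costs 6 of the 128 bits, which is where the "122 random bits" figure in the comment comes from.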
+diff --git a/Godeps/_workspace/src/code.google.com/p/go.net/html/atom/atom.go b/Godeps/_workspace/src/code.google.com/p/go.net/html/atom/atom.go
+deleted file mode 100644
+index 227404b..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/go.net/html/atom/atom.go
++++ /dev/null
+@@ -1,78 +0,0 @@
+-// Copyright 2012 The Go Authors. All rights reserved.
+-// Use of this source code is governed by a BSD-style
+-// license that can be found in the LICENSE file.
+-
+-// Package atom provides integer codes (also known as atoms) for a fixed set of
+-// frequently occurring HTML strings: tag names and attribute keys such as "p"
+-// and "id".
+-//
+-// Sharing an atom's name between all elements with the same tag can result in
+-// fewer string allocations when tokenizing and parsing HTML. Integer
+-// comparisons are also generally faster than string comparisons.
+-//
+-// The value of an atom's particular code is not guaranteed to stay the same
+-// between versions of this package. Neither is any ordering guaranteed:
+-// whether atom.H1 < atom.H2 may also change. The codes are not guaranteed to
+-// be dense. The only guarantees are that e.g. looking up "div" will yield
+-// atom.Div, calling atom.Div.String will return "div", and atom.Div != 0.
+-package atom
+-
+-// Atom is an integer code for a string. The zero value maps to "".
+-type Atom uint32
+-
+-// String returns the atom's name.
+-func (a Atom) String() string {
+-	start := uint32(a >> 8)
+-	n := uint32(a & 0xff)
+-	if start+n > uint32(len(atomText)) {
+-		return ""
+-	}
+-	return atomText[start : start+n]
+-}
+-
+-func (a Atom) string() string {
+-	return atomText[a>>8 : a>>8+a&0xff]
+-}
+-
+-// fnv computes the FNV hash with an arbitrary starting value h.
+-func fnv(h uint32, s []byte) uint32 {
+-	for i := range s {
+-		h ^= uint32(s[i])
+-		h *= 16777619
+-	}
+-	return h
+-}
+-
+-func match(s string, t []byte) bool {
+-	for i, c := range t {
+-		if s[i] != c {
+-			return false
+-		}
+-	}
+-	return true
+-}
+-
+-// Lookup returns the atom whose name is s. It returns zero if there is no
+-// such atom. The lookup is case sensitive.
+-func Lookup(s []byte) Atom {
+-	if len(s) == 0 || len(s) > maxAtomLen {
+-		return 0
+-	}
+-	h := fnv(hash0, s)
+-	if a := table[h&uint32(len(table)-1)]; int(a&0xff) == len(s) && match(a.string(), s) {
+-		return a
+-	}
+-	if a := table[(h>>16)&uint32(len(table)-1)]; int(a&0xff) == len(s) && match(a.string(), s) {
+-		return a
+-	}
+-	return 0
+-}
+-
+-// String returns a string whose contents are equal to s. In that sense, it is
+-// equivalent to string(s) but may be more efficient.
+-func String(s []byte) string {
+-	if a := Lookup(s); a != 0 {
+-		return a.String()
+-	}
+-	return string(s)
+-}
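The atom encoding in the deleted package packs an offset and a length into one `uint32`: `offset<<8 | length` into a shared text blob, so decoding is a single slice. A toy sketch of that encoding (the blob and offsets here are made up for illustration, not the real generated table):

```go
package main

import "fmt"

// text is a toy stand-in for the package's generated atomText blob.
const text = "divspanid"

// Atom packs offset<<8 | length into text, as in the deleted package.
type Atom uint32

// String decodes the atom back into its name by slicing the blob.
func (a Atom) String() string {
	start := uint32(a >> 8)
	n := uint32(a & 0xff)
	if start+n > uint32(len(text)) {
		return ""
	}
	return text[start : start+n]
}

func main() {
	div := Atom(0<<8 | 3) // "div": offset 0, length 3
	id := Atom(7<<8 | 2)  // "id": offset 7, length 2
	fmt.Println(div.String(), id.String())
}
```

Because the length lives in the low byte, `Lookup` can reject a candidate slot with a cheap `int(a&0xff) == len(s)` check before comparing bytes.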
+diff --git a/Godeps/_workspace/src/code.google.com/p/go.net/html/atom/atom_test.go b/Godeps/_workspace/src/code.google.com/p/go.net/html/atom/atom_test.go
+deleted file mode 100644
+index 6e33704..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/go.net/html/atom/atom_test.go
++++ /dev/null
+@@ -1,109 +0,0 @@
+-// Copyright 2012 The Go Authors. All rights reserved.
+-// Use of this source code is governed by a BSD-style
+-// license that can be found in the LICENSE file.
+-
+-package atom
+-
+-import (
+-	"sort"
+-	"testing"
+-)
+-
+-func TestKnown(t *testing.T) {
+-	for _, s := range testAtomList {
+-		if atom := Lookup([]byte(s)); atom.String() != s {
+-			t.Errorf("Lookup(%q) = %#x (%q)", s, uint32(atom), atom.String())
+-		}
+-	}
+-}
+-
+-func TestHits(t *testing.T) {
+-	for _, a := range table {
+-		if a == 0 {
+-			continue
+-		}
+-		got := Lookup([]byte(a.String()))
+-		if got != a {
+-			t.Errorf("Lookup(%q) = %#x, want %#x", a.String(), uint32(got), uint32(a))
+-		}
+-	}
+-}
+-
+-func TestMisses(t *testing.T) {
+-	testCases := []string{
+-		"",
+-		"\x00",
+-		"\xff",
+-		"A",
+-		"DIV",
+-		"Div",
+-		"dIV",
+-		"aa",
+-		"a\x00",
+-		"ab",
+-		"abb",
+-		"abbr0",
+-		"abbr ",
+-		" abbr",
+-		" a",
+-		"acceptcharset",
+-		"acceptCharset",
+-		"accept_charset",
+-		"h0",
+-		"h1h2",
+-		"h7",
+-		"onClick",
+-		"λ",
+-		// The following string has the same hash (0xa1d7fab7) as "onmouseover".
+-		"\x00\x00\x00\x00\x00\x50\x18\xae\x38\xd0\xb7",
+-	}
+-	for _, tc := range testCases {
+-		got := Lookup([]byte(tc))
+-		if got != 0 {
+-			t.Errorf("Lookup(%q): got %d, want 0", tc, got)
+-		}
+-	}
+-}
+-
+-func TestForeignObject(t *testing.T) {
+-	const (
+-		afo = Foreignobject
+-		afO = ForeignObject
+-		sfo = "foreignobject"
+-		sfO = "foreignObject"
+-	)
+-	if got := Lookup([]byte(sfo)); got != afo {
+-		t.Errorf("Lookup(%q): got %#v, want %#v", sfo, got, afo)
+-	}
+-	if got := Lookup([]byte(sfO)); got != afO {
+-		t.Errorf("Lookup(%q): got %#v, want %#v", sfO, got, afO)
+-	}
+-	if got := afo.String(); got != sfo {
+-		t.Errorf("Atom(%#v).String(): got %q, want %q", afo, got, sfo)
+-	}
+-	if got := afO.String(); got != sfO {
+-		t.Errorf("Atom(%#v).String(): got %q, want %q", afO, got, sfO)
+-	}
+-}
+-
+-func BenchmarkLookup(b *testing.B) {
+-	sortedTable := make([]string, 0, len(table))
+-	for _, a := range table {
+-		if a != 0 {
+-			sortedTable = append(sortedTable, a.String())
+-		}
+-	}
+-	sort.Strings(sortedTable)
+-
+-	x := make([][]byte, 1000)
+-	for i := range x {
+-		x[i] = []byte(sortedTable[i%len(sortedTable)])
+-	}
+-
+-	b.ResetTimer()
+-	for i := 0; i < b.N; i++ {
+-		for _, s := range x {
+-			Lookup(s)
+-		}
+-	}
+-}
+diff --git a/Godeps/_workspace/src/code.google.com/p/go.net/html/atom/gen.go b/Godeps/_workspace/src/code.google.com/p/go.net/html/atom/gen.go
+deleted file mode 100644
+index 9958a71..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/go.net/html/atom/gen.go
++++ /dev/null
+@@ -1,636 +0,0 @@
+-// Copyright 2012 The Go Authors. All rights reserved.
+-// Use of this source code is governed by a BSD-style
+-// license that can be found in the LICENSE file.
+-
+-// +build ignore
+-
+-package main
+-
+-// This program generates table.go and table_test.go.
+-// Invoke as
+-//
+-//	go run gen.go |gofmt >table.go
+-//	go run gen.go -test |gofmt >table_test.go
+-
+-import (
+-	"flag"
+-	"fmt"
+-	"math/rand"
+-	"os"
+-	"sort"
+-	"strings"
+-)
+-
+-// identifier converts s to a Go exported identifier.
+-// It converts "div" to "Div" and "accept-charset" to "AcceptCharset".
+-func identifier(s string) string {
+-	b := make([]byte, 0, len(s))
+-	cap := true
+-	for _, c := range s {
+-		if c == '-' {
+-			cap = true
+-			continue
+-		}
+-		if cap && 'a' <= c && c <= 'z' {
+-			c -= 'a' - 'A'
+-		}
+-		cap = false
+-		b = append(b, byte(c))
+-	}
+-	return string(b)
+-}
+-
+-var test = flag.Bool("test", false, "generate table_test.go")
+-
+-func main() {
+-	flag.Parse()
+-
+-	var all []string
+-	all = append(all, elements...)
+-	all = append(all, attributes...)
+-	all = append(all, eventHandlers...)
+-	all = append(all, extra...)
+-	sort.Strings(all)
+-
+-	if *test {
+-		fmt.Printf("// generated by go run gen.go -test; DO NOT EDIT\n\n")
+-		fmt.Printf("package atom\n\n")
+-		fmt.Printf("var testAtomList = []string{\n")
+-		for _, s := range all {
+-			fmt.Printf("\t%q,\n", s)
+-		}
+-		fmt.Printf("}\n")
+-		return
+-	}
+-
+-	// uniq - lists have dups
+-	// compute max len too
+-	maxLen := 0
+-	w := 0
+-	for _, s := range all {
+-		if w == 0 || all[w-1] != s {
+-			if maxLen < len(s) {
+-				maxLen = len(s)
+-			}
+-			all[w] = s
+-			w++
+-		}
+-	}
+-	all = all[:w]
+-
+-	// Find hash that minimizes table size.
+-	var best *table
+-	for i := 0; i < 1000000; i++ {
+-		if best != nil && 1<<(best.k-1) < len(all) {
+-			break
+-		}
+-		h := rand.Uint32()
+-		for k := uint(0); k <= 16; k++ {
+-			if best != nil && k >= best.k {
+-				break
+-			}
+-			var t table
+-			if t.init(h, k, all) {
+-				best = &t
+-				break
+-			}
+-		}
+-	}
+-	if best == nil {
+-		fmt.Fprintf(os.Stderr, "failed to construct string table\n")
+-		os.Exit(1)
+-	}
+-
+-	// Lay out strings, using overlaps when possible.
+-	layout := append([]string{}, all...)
+-
+-	// Remove strings that are substrings of other strings
+-	for changed := true; changed; {
+-		changed = false
+-		for i, s := range layout {
+-			if s == "" {
+-				continue
+-			}
+-			for j, t := range layout {
+-				if i != j && t != "" && strings.Contains(s, t) {
+-					changed = true
+-					layout[j] = ""
+-				}
+-			}
+-		}
+-	}
+-
+-	// Join strings where one suffix matches another prefix.
+-	for {
+-		// Find best i, j, k such that layout[i][len-k:] == layout[j][:k],
+-		// maximizing overlap length k.
+-		besti := -1
+-		bestj := -1
+-		bestk := 0
+-		for i, s := range layout {
+-			if s == "" {
+-				continue
+-			}
+-			for j, t := range layout {
+-				if i == j {
+-					continue
+-				}
+-				for k := bestk + 1; k <= len(s) && k <= len(t); k++ {
+-					if s[len(s)-k:] == t[:k] {
+-						besti = i
+-						bestj = j
+-						bestk = k
+-					}
+-				}
+-			}
+-		}
+-		if bestk > 0 {
+-			layout[besti] += layout[bestj][bestk:]
+-			layout[bestj] = ""
+-			continue
+-		}
+-		break
+-	}
+-
+-	text := strings.Join(layout, "")
+-
+-	atom := map[string]uint32{}
+-	for _, s := range all {
+-		off := strings.Index(text, s)
+-		if off < 0 {
+-			panic("lost string " + s)
+-		}
+-		atom[s] = uint32(off<<8 | len(s))
+-	}
+-
+-	// Generate the Go code.
+-	fmt.Printf("// generated by go run gen.go; DO NOT EDIT\n\n")
+-	fmt.Printf("package atom\n\nconst (\n")
+-	for _, s := range all {
+-		fmt.Printf("\t%s Atom = %#x\n", identifier(s), atom[s])
+-	}
+-	fmt.Printf(")\n\n")
+-
+-	fmt.Printf("const hash0 = %#x\n\n", best.h0)
+-	fmt.Printf("const maxAtomLen = %d\n\n", maxLen)
+-
+-	fmt.Printf("var table = [1<<%d]Atom{\n", best.k)
+-	for i, s := range best.tab {
+-		if s == "" {
+-			continue
+-		}
+-		fmt.Printf("\t%#x: %#x, // %s\n", i, atom[s], s)
+-	}
+-	fmt.Printf("}\n")
+-	datasize := (1 << best.k) * 4
+-
+-	fmt.Printf("const atomText =\n")
+-	textsize := len(text)
+-	for len(text) > 60 {
+-		fmt.Printf("\t%q +\n", text[:60])
+-		text = text[60:]
+-	}
+-	fmt.Printf("\t%q\n\n", text)
+-
+-	fmt.Fprintf(os.Stderr, "%d atoms; %d string bytes + %d tables = %d total data\n", len(all), textsize, datasize, textsize+datasize)
+-}
+-
+-type byLen []string
+-
+-func (x byLen) Less(i, j int) bool { return len(x[i]) > len(x[j]) }
+-func (x byLen) Swap(i, j int)      { x[i], x[j] = x[j], x[i] }
+-func (x byLen) Len() int           { return len(x) }
+-
+-// fnv computes the FNV hash with an arbitrary starting value h.
+-func fnv(h uint32, s string) uint32 {
+-	for i := 0; i < len(s); i++ {
+-		h ^= uint32(s[i])
+-		h *= 16777619
+-	}
+-	return h
+-}
+-
+-// A table represents an attempt at constructing the lookup table.
+-// The lookup table uses cuckoo hashing, meaning that each string
+-// can be found in one of two positions.
+-type table struct {
+-	h0   uint32
+-	k    uint
+-	mask uint32
+-	tab  []string
+-}
+-
+-// hash returns the two hashes for s.
+-func (t *table) hash(s string) (h1, h2 uint32) {
+-	h := fnv(t.h0, s)
+-	h1 = h & t.mask
+-	h2 = (h >> 16) & t.mask
+-	return
+-}
+-
+-// init initializes the table with the given parameters.
+-// h0 is the initial hash value,
+-// k is the number of bits of hash value to use, and
+-// x is the list of strings to store in the table.
+-// init returns false if the table cannot be constructed.
+-func (t *table) init(h0 uint32, k uint, x []string) bool {
+-	t.h0 = h0
+-	t.k = k
+-	t.tab = make([]string, 1<<k)
+-	t.mask = 1<<k - 1
+-	for _, s := range x {
+-		if !t.insert(s) {
+-			return false
+-		}
+-	}
+-	return true
+-}
+-
+-// insert inserts s in the table.
+-func (t *table) insert(s string) bool {
+-	h1, h2 := t.hash(s)
+-	if t.tab[h1] == "" {
+-		t.tab[h1] = s
+-		return true
+-	}
+-	if t.tab[h2] == "" {
+-		t.tab[h2] = s
+-		return true
+-	}
+-	if t.push(h1, 0) {
+-		t.tab[h1] = s
+-		return true
+-	}
+-	if t.push(h2, 0) {
+-		t.tab[h2] = s
+-		return true
+-	}
+-	return false
+-}
+-
+-// push attempts to push aside the entry in slot i.
+-func (t *table) push(i uint32, depth int) bool {
+-	if depth > len(t.tab) {
+-		return false
+-	}
+-	s := t.tab[i]
+-	h1, h2 := t.hash(s)
+-	j := h1 + h2 - i
+-	if t.tab[j] != "" && !t.push(j, depth+1) {
+-		return false
+-	}
+-	t.tab[j] = s
+-	return true
+-}
+-
+-// The lists of element names and attribute keys were taken from
+-// http://www.whatwg.org/specs/web-apps/current-work/multipage/section-index.html
+-// as of the "HTML Living Standard - Last Updated 30 May 2012" version.
+-
+-var elements = []string{
+-	"a",
+-	"abbr",
+-	"address",
+-	"area",
+-	"article",
+-	"aside",
+-	"audio",
+-	"b",
+-	"base",
+-	"bdi",
+-	"bdo",
+-	"blockquote",
+-	"body",
+-	"br",
+-	"button",
+-	"canvas",
+-	"caption",
+-	"cite",
+-	"code",
+-	"col",
+-	"colgroup",
+-	"command",
+-	"data",
+-	"datalist",
+-	"dd",
+-	"del",
+-	"details",
+-	"dfn",
+-	"dialog",
+-	"div",
+-	"dl",
+-	"dt",
+-	"em",
+-	"embed",
+-	"fieldset",
+-	"figcaption",
+-	"figure",
+-	"footer",
+-	"form",
+-	"h1",
+-	"h2",
+-	"h3",
+-	"h4",
+-	"h5",
+-	"h6",
+-	"head",
+-	"header",
+-	"hgroup",
+-	"hr",
+-	"html",
+-	"i",
+-	"iframe",
+-	"img",
+-	"input",
+-	"ins",
+-	"kbd",
+-	"keygen",
+-	"label",
+-	"legend",
+-	"li",
+-	"link",
+-	"map",
+-	"mark",
+-	"menu",
+-	"meta",
+-	"meter",
+-	"nav",
+-	"noscript",
+-	"object",
+-	"ol",
+-	"optgroup",
+-	"option",
+-	"output",
+-	"p",
+-	"param",
+-	"pre",
+-	"progress",
+-	"q",
+-	"rp",
+-	"rt",
+-	"ruby",
+-	"s",
+-	"samp",
+-	"script",
+-	"section",
+-	"select",
+-	"small",
+-	"source",
+-	"span",
+-	"strong",
+-	"style",
+-	"sub",
+-	"summary",
+-	"sup",
+-	"table",
+-	"tbody",
+-	"td",
+-	"textarea",
+-	"tfoot",
+-	"th",
+-	"thead",
+-	"time",
+-	"title",
+-	"tr",
+-	"track",
+-	"u",
+-	"ul",
+-	"var",
+-	"video",
+-	"wbr",
+-}
+-
+-var attributes = []string{
+-	"accept",
+-	"accept-charset",
+-	"accesskey",
+-	"action",
+-	"alt",
+-	"async",
+-	"autocomplete",
+-	"autofocus",
+-	"autoplay",
+-	"border",
+-	"challenge",
+-	"charset",
+-	"checked",
+-	"cite",
+-	"class",
+-	"cols",
+-	"colspan",
+-	"command",
+-	"content",
+-	"contenteditable",
+-	"contextmenu",
+-	"controls",
+-	"coords",
+-	"crossorigin",
+-	"data",
+-	"datetime",
+-	"default",
+-	"defer",
+-	"dir",
+-	"dirname",
+-	"disabled",
+-	"download",
+-	"draggable",
+-	"dropzone",
+-	"enctype",
+-	"for",
+-	"form",
+-	"formaction",
+-	"formenctype",
+-	"formmethod",
+-	"formnovalidate",
+-	"formtarget",
+-	"headers",
+-	"height",
+-	"hidden",
+-	"high",
+-	"href",
+-	"hreflang",
+-	"http-equiv",
+-	"icon",
+-	"id",
+-	"inert",
+-	"ismap",
+-	"itemid",
+-	"itemprop",
+-	"itemref",
+-	"itemscope",
+-	"itemtype",
+-	"keytype",
+-	"kind",
+-	"label",
+-	"lang",
+-	"list",
+-	"loop",
+-	"low",
+-	"manifest",
+-	"max",
+-	"maxlength",
+-	"media",
+-	"mediagroup",
+-	"method",
+-	"min",
+-	"multiple",
+-	"muted",
+-	"name",
+-	"novalidate",
+-	"open",
+-	"optimum",
+-	"pattern",
+-	"ping",
+-	"placeholder",
+-	"poster",
+-	"preload",
+-	"radiogroup",
+-	"readonly",
+-	"rel",
+-	"required",
+-	"reversed",
+-	"rows",
+-	"rowspan",
+-	"sandbox",
+-	"spellcheck",
+-	"scope",
+-	"scoped",
+-	"seamless",
+-	"selected",
+-	"shape",
+-	"size",
+-	"sizes",
+-	"span",
+-	"src",
+-	"srcdoc",
+-	"srclang",
+-	"start",
+-	"step",
+-	"style",
+-	"tabindex",
+-	"target",
+-	"title",
+-	"translate",
+-	"type",
+-	"typemustmatch",
+-	"usemap",
+-	"value",
+-	"width",
+-	"wrap",
+-}
+-
+-var eventHandlers = []string{
+-	"onabort",
+-	"onafterprint",
+-	"onbeforeprint",
+-	"onbeforeunload",
+-	"onblur",
+-	"oncancel",
+-	"oncanplay",
+-	"oncanplaythrough",
+-	"onchange",
+-	"onclick",
+-	"onclose",
+-	"oncontextmenu",
+-	"oncuechange",
+-	"ondblclick",
+-	"ondrag",
+-	"ondragend",
+-	"ondragenter",
+-	"ondragleave",
+-	"ondragover",
+-	"ondragstart",
+-	"ondrop",
+-	"ondurationchange",
+-	"onemptied",
+-	"onended",
+-	"onerror",
+-	"onfocus",
+-	"onhashchange",
+-	"oninput",
+-	"oninvalid",
+-	"onkeydown",
+-	"onkeypress",
+-	"onkeyup",
+-	"onload",
+-	"onloadeddata",
+-	"onloadedmetadata",
+-	"onloadstart",
+-	"onmessage",
+-	"onmousedown",
+-	"onmousemove",
+-	"onmouseout",
+-	"onmouseover",
+-	"onmouseup",
+-	"onmousewheel",
+-	"onoffline",
+-	"ononline",
+-	"onpagehide",
+-	"onpageshow",
+-	"onpause",
+-	"onplay",
+-	"onplaying",
+-	"onpopstate",
+-	"onprogress",
+-	"onratechange",
+-	"onreset",
+-	"onresize",
+-	"onscroll",
+-	"onseeked",
+-	"onseeking",
+-	"onselect",
+-	"onshow",
+-	"onstalled",
+-	"onstorage",
+-	"onsubmit",
+-	"onsuspend",
+-	"ontimeupdate",
+-	"onunload",
+-	"onvolumechange",
+-	"onwaiting",
+-}
+-
+-// extra are ad-hoc values not covered by any of the lists above.
+-var extra = []string{
+-	"align",
+-	"annotation",
+-	"annotation-xml",
+-	"applet",
+-	"basefont",
+-	"bgsound",
+-	"big",
+-	"blink",
+-	"center",
+-	"color",
+-	"desc",
+-	"face",
+-	"font",
+-	"foreignObject", // HTML is case-insensitive, but SVG-embedded-in-HTML is case-sensitive.
+-	"foreignobject",
+-	"frame",
+-	"frameset",
+-	"image",
+-	"isindex",
+-	"listing",
+-	"malignmark",
+-	"marquee",
+-	"math",
+-	"mglyph",
+-	"mi",
+-	"mn",
+-	"mo",
+-	"ms",
+-	"mtext",
+-	"nobr",
+-	"noembed",
+-	"noframes",
+-	"plaintext",
+-	"prompt",
+-	"public",
+-	"spacer",
+-	"strike",
+-	"svg",
+-	"system",
+-	"tt",
+-	"xmp",
+-}
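The generator above builds its lookup table with cuckoo hashing: every string hashes to two candidate slots (the low and high halves of one FNV hash, masked), inserts evict and re-place a displaced key, and lookups probe at most two slots. A small self-contained sketch of that scheme under those assumptions (toy table size; not the generated code):

```go
package main

import "fmt"

// cuckoo is a tiny string table mirroring gen.go's scheme: two
// candidate slots per key, eviction on insert, two-probe lookup.
type cuckoo struct {
	tab  []string
	mask uint32
}

// fnv is the FNV-1a style mix used above, with the standard offset basis.
func fnv(s string) uint32 {
	h := uint32(2166136261)
	for i := 0; i < len(s); i++ {
		h ^= uint32(s[i])
		h *= 16777619
	}
	return h
}

// slots derives both candidate positions from one hash value.
func (c *cuckoo) slots(s string) (uint32, uint32) {
	h := fnv(s)
	return h & c.mask, (h >> 16) & c.mask
}

// insert places s, evicting and re-placing an occupant if needed;
// the depth bound rejects unplaceable configurations.
func (c *cuckoo) insert(s string, depth int) bool {
	if depth > len(c.tab) {
		return false
	}
	h1, h2 := c.slots(s)
	for _, h := range []uint32{h1, h2} {
		if c.tab[h] == "" {
			c.tab[h] = s
			return true
		}
	}
	old := c.tab[h1] // evict and try to re-place the displaced key
	c.tab[h1] = s
	if c.insert(old, depth+1) {
		return true
	}
	c.tab[h1] = old
	return false
}

// lookup checks only the two candidate slots.
func (c *cuckoo) lookup(s string) bool {
	h1, h2 := c.slots(s)
	return c.tab[h1] == s || c.tab[h2] == s
}

func main() {
	c := &cuckoo{tab: make([]string, 16), mask: 15}
	for _, s := range []string{"div", "span", "id", "href", "class"} {
		if !c.insert(s, 0) {
			panic("table too small for " + s)
		}
	}
	fmt.Println(c.lookup("div"), c.lookup("nope"))
}
```

This is why the generator retries many `hash0` seeds and table sizes: a given seed may fail to place all keys, and a fresh seed usually succeeds at the same size.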
+diff --git a/Godeps/_workspace/src/code.google.com/p/go.net/html/atom/table.go b/Godeps/_workspace/src/code.google.com/p/go.net/html/atom/table.go
+deleted file mode 100644
+index 20b8b8a..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/go.net/html/atom/table.go
++++ /dev/null
+@@ -1,694 +0,0 @@
+-// generated by go run gen.go; DO NOT EDIT
+-
+-package atom
+-
+-const (
+-	A                Atom = 0x1
+-	Abbr             Atom = 0x4
+-	Accept           Atom = 0x2106
+-	AcceptCharset    Atom = 0x210e
+-	Accesskey        Atom = 0x3309
+-	Action           Atom = 0x21b06
+-	Address          Atom = 0x5d507
+-	Align            Atom = 0x1105
+-	Alt              Atom = 0x4503
+-	Annotation       Atom = 0x18d0a
+-	AnnotationXml    Atom = 0x18d0e
+-	Applet           Atom = 0x2d106
+-	Area             Atom = 0x31804
+-	Article          Atom = 0x39907
+-	Aside            Atom = 0x4f05
+-	Async            Atom = 0x9305
+-	Audio            Atom = 0xaf05
+-	Autocomplete     Atom = 0xd50c
+-	Autofocus        Atom = 0xe109
+-	Autoplay         Atom = 0x10c08
+-	B                Atom = 0x101
+-	Base             Atom = 0x11404
+-	Basefont         Atom = 0x11408
+-	Bdi              Atom = 0x1a03
+-	Bdo              Atom = 0x12503
+-	Bgsound          Atom = 0x13807
+-	Big              Atom = 0x14403
+-	Blink            Atom = 0x14705
+-	Blockquote       Atom = 0x14c0a
+-	Body             Atom = 0x2f04
+-	Border           Atom = 0x15606
+-	Br               Atom = 0x202
+-	Button           Atom = 0x15c06
+-	Canvas           Atom = 0x4b06
+-	Caption          Atom = 0x1e007
+-	Center           Atom = 0x2df06
+-	Challenge        Atom = 0x23e09
+-	Charset          Atom = 0x2807
+-	Checked          Atom = 0x33f07
+-	Cite             Atom = 0x9704
+-	Class            Atom = 0x3d905
+-	Code             Atom = 0x16f04
+-	Col              Atom = 0x17603
+-	Colgroup         Atom = 0x17608
+-	Color            Atom = 0x18305
+-	Cols             Atom = 0x18804
+-	Colspan          Atom = 0x18807
+-	Command          Atom = 0x19b07
+-	Content          Atom = 0x42c07
+-	Contenteditable  Atom = 0x42c0f
+-	Contextmenu      Atom = 0x3480b
+-	Controls         Atom = 0x1ae08
+-	Coords           Atom = 0x1ba06
+-	Crossorigin      Atom = 0x1c40b
+-	Data             Atom = 0x44304
+-	Datalist         Atom = 0x44308
+-	Datetime         Atom = 0x25b08
+-	Dd               Atom = 0x28802
+-	Default          Atom = 0x5207
+-	Defer            Atom = 0x17105
+-	Del              Atom = 0x4d603
+-	Desc             Atom = 0x4804
+-	Details          Atom = 0x6507
+-	Dfn              Atom = 0x8303
+-	Dialog           Atom = 0x1b06
+-	Dir              Atom = 0x9d03
+-	Dirname          Atom = 0x9d07
+-	Disabled         Atom = 0x10008
+-	Div              Atom = 0x10703
+-	Dl               Atom = 0x13e02
+-	Download         Atom = 0x40908
+-	Draggable        Atom = 0x1a109
+-	Dropzone         Atom = 0x3a208
+-	Dt               Atom = 0x4e402
+-	Em               Atom = 0x7f02
+-	Embed            Atom = 0x7f05
+-	Enctype          Atom = 0x23007
+-	Face             Atom = 0x2dd04
+-	Fieldset         Atom = 0x1d508
+-	Figcaption       Atom = 0x1dd0a
+-	Figure           Atom = 0x1f106
+-	Font             Atom = 0x11804
+-	Footer           Atom = 0x5906
+-	For              Atom = 0x1fd03
+-	ForeignObject    Atom = 0x1fd0d
+-	Foreignobject    Atom = 0x20a0d
+-	Form             Atom = 0x21704
+-	Formaction       Atom = 0x2170a
+-	Formenctype      Atom = 0x22c0b
+-	Formmethod       Atom = 0x2470a
+-	Formnovalidate   Atom = 0x2510e
+-	Formtarget       Atom = 0x2660a
+-	Frame            Atom = 0x8705
+-	Frameset         Atom = 0x8708
+-	H1               Atom = 0x13602
+-	H2               Atom = 0x29602
+-	H3               Atom = 0x2c502
+-	H4               Atom = 0x30e02
+-	H5               Atom = 0x4e602
+-	H6               Atom = 0x27002
+-	Head             Atom = 0x2fa04
+-	Header           Atom = 0x2fa06
+-	Headers          Atom = 0x2fa07
+-	Height           Atom = 0x27206
+-	Hgroup           Atom = 0x27a06
+-	Hidden           Atom = 0x28606
+-	High             Atom = 0x29304
+-	Hr               Atom = 0x13102
+-	Href             Atom = 0x29804
+-	Hreflang         Atom = 0x29808
+-	Html             Atom = 0x27604
+-	HttpEquiv        Atom = 0x2a00a
+-	I                Atom = 0x601
+-	Icon             Atom = 0x42b04
+-	Id               Atom = 0x5102
+-	Iframe           Atom = 0x2b406
+-	Image            Atom = 0x2ba05
+-	Img              Atom = 0x2bf03
+-	Inert            Atom = 0x4c105
+-	Input            Atom = 0x3f605
+-	Ins              Atom = 0x1cd03
+-	Isindex          Atom = 0x2c707
+-	Ismap            Atom = 0x2ce05
+-	Itemid           Atom = 0x9806
+-	Itemprop         Atom = 0x57e08
+-	Itemref          Atom = 0x2d707
+-	Itemscope        Atom = 0x2e509
+-	Itemtype         Atom = 0x2ef08
+-	Kbd              Atom = 0x1903
+-	Keygen           Atom = 0x3906
+-	Keytype          Atom = 0x51207
+-	Kind             Atom = 0xfd04
+-	Label            Atom = 0xba05
+-	Lang             Atom = 0x29c04
+-	Legend           Atom = 0x1a806
+-	Li               Atom = 0x1202
+-	Link             Atom = 0x14804
+-	List             Atom = 0x44704
+-	Listing          Atom = 0x44707
+-	Loop             Atom = 0xbe04
+-	Low              Atom = 0x13f03
+-	Malignmark       Atom = 0x100a
+-	Manifest         Atom = 0x5b608
+-	Map              Atom = 0x2d003
+-	Mark             Atom = 0x1604
+-	Marquee          Atom = 0x5f207
+-	Math             Atom = 0x2f704
+-	Max              Atom = 0x30603
+-	Maxlength        Atom = 0x30609
+-	Media            Atom = 0xa205
+-	Mediagroup       Atom = 0xa20a
+-	Menu             Atom = 0x34f04
+-	Meta             Atom = 0x45604
+-	Meter            Atom = 0x26105
+-	Method           Atom = 0x24b06
+-	Mglyph           Atom = 0x2c006
+-	Mi               Atom = 0x9b02
+-	Min              Atom = 0x31003
+-	Mn               Atom = 0x25402
+-	Mo               Atom = 0x47a02
+-	Ms               Atom = 0x2e802
+-	Mtext            Atom = 0x31305
+-	Multiple         Atom = 0x32108
+-	Muted            Atom = 0x32905
+-	Name             Atom = 0xa004
+-	Nav              Atom = 0x3e03
+-	Nobr             Atom = 0x7404
+-	Noembed          Atom = 0x7d07
+-	Noframes         Atom = 0x8508
+-	Noscript         Atom = 0x28b08
+-	Novalidate       Atom = 0x2550a
+-	Object           Atom = 0x21106
+-	Ol               Atom = 0xcd02
+-	Onabort          Atom = 0x16007
+-	Onafterprint     Atom = 0x1e50c
+-	Onbeforeprint    Atom = 0x21f0d
+-	Onbeforeunload   Atom = 0x5c90e
+-	Onblur           Atom = 0x3e206
+-	Oncancel         Atom = 0xb308
+-	Oncanplay        Atom = 0x12709
+-	Oncanplaythrough Atom = 0x12710
+-	Onchange         Atom = 0x3b808
+-	Onclick          Atom = 0x2ad07
+-	Onclose          Atom = 0x32e07
+-	Oncontextmenu    Atom = 0x3460d
+-	Oncuechange      Atom = 0x3530b
+-	Ondblclick       Atom = 0x35e0a
+-	Ondrag           Atom = 0x36806
+-	Ondragend        Atom = 0x36809
+-	Ondragenter      Atom = 0x3710b
+-	Ondragleave      Atom = 0x37c0b
+-	Ondragover       Atom = 0x3870a
+-	Ondragstart      Atom = 0x3910b
+-	Ondrop           Atom = 0x3a006
+-	Ondurationchange Atom = 0x3b010
+-	Onemptied        Atom = 0x3a709
+-	Onended          Atom = 0x3c007
+-	Onerror          Atom = 0x3c707
+-	Onfocus          Atom = 0x3ce07
+-	Onhashchange     Atom = 0x3e80c
+-	Oninput          Atom = 0x3f407
+-	Oninvalid        Atom = 0x3fb09
+-	Onkeydown        Atom = 0x40409
+-	Onkeypress       Atom = 0x4110a
+-	Onkeyup          Atom = 0x42107
+-	Onload           Atom = 0x43b06
+-	Onloadeddata     Atom = 0x43b0c
+-	Onloadedmetadata Atom = 0x44e10
+-	Onloadstart      Atom = 0x4640b
+-	Onmessage        Atom = 0x46f09
+-	Onmousedown      Atom = 0x4780b
+-	Onmousemove      Atom = 0x4830b
+-	Onmouseout       Atom = 0x48e0a
+-	Onmouseover      Atom = 0x49b0b
+-	Onmouseup        Atom = 0x4a609
+-	Onmousewheel     Atom = 0x4af0c
+-	Onoffline        Atom = 0x4bb09
+-	Ononline         Atom = 0x4c608
+-	Onpagehide       Atom = 0x4ce0a
+-	Onpageshow       Atom = 0x4d90a
+-	Onpause          Atom = 0x4e807
+-	Onplay           Atom = 0x4f206
+-	Onplaying        Atom = 0x4f209
+-	Onpopstate       Atom = 0x4fb0a
+-	Onprogress       Atom = 0x5050a
+-	Onratechange     Atom = 0x5190c
+-	Onreset          Atom = 0x52507
+-	Onresize         Atom = 0x52c08
+-	Onscroll         Atom = 0x53a08
+-	Onseeked         Atom = 0x54208
+-	Onseeking        Atom = 0x54a09
+-	Onselect         Atom = 0x55308
+-	Onshow           Atom = 0x55d06
+-	Onstalled        Atom = 0x56609
+-	Onstorage        Atom = 0x56f09
+-	Onsubmit         Atom = 0x57808
+-	Onsuspend        Atom = 0x58809
+-	Ontimeupdate     Atom = 0x1190c
+-	Onunload         Atom = 0x59108
+-	Onvolumechange   Atom = 0x5990e
+-	Onwaiting        Atom = 0x5a709
+-	Open             Atom = 0x58404
+-	Optgroup         Atom = 0xc008
+-	Optimum          Atom = 0x5b007
+-	Option           Atom = 0x5c506
+-	Output           Atom = 0x49506
+-	P                Atom = 0xc01
+-	Param            Atom = 0xc05
+-	Pattern          Atom = 0x6e07
+-	Ping             Atom = 0xab04
+-	Placeholder      Atom = 0xc70b
+-	Plaintext        Atom = 0xf109
+-	Poster           Atom = 0x17d06
+-	Pre              Atom = 0x27f03
+-	Preload          Atom = 0x27f07
+-	Progress         Atom = 0x50708
+-	Prompt           Atom = 0x5bf06
+-	Public           Atom = 0x42706
+-	Q                Atom = 0x15101
+-	Radiogroup       Atom = 0x30a
+-	Readonly         Atom = 0x31908
+-	Rel              Atom = 0x28003
+-	Required         Atom = 0x1f508
+-	Reversed         Atom = 0x5e08
+-	Rows             Atom = 0x7704
+-	Rowspan          Atom = 0x7707
+-	Rp               Atom = 0x1eb02
+-	Rt               Atom = 0x16502
+-	Ruby             Atom = 0xd104
+-	S                Atom = 0x2c01
+-	Samp             Atom = 0x6b04
+-	Sandbox          Atom = 0xe907
+-	Scope            Atom = 0x2e905
+-	Scoped           Atom = 0x2e906
+-	Script           Atom = 0x28d06
+-	Seamless         Atom = 0x33308
+-	Section          Atom = 0x3dd07
+-	Select           Atom = 0x55506
+-	Selected         Atom = 0x55508
+-	Shape            Atom = 0x1b505
+-	Size             Atom = 0x53004
+-	Sizes            Atom = 0x53005
+-	Small            Atom = 0x1bf05
+-	Source           Atom = 0x1cf06
+-	Spacer           Atom = 0x30006
+-	Span             Atom = 0x7a04
+-	Spellcheck       Atom = 0x33a0a
+-	Src              Atom = 0x3d403
+-	Srcdoc           Atom = 0x3d406
+-	Srclang          Atom = 0x41a07
+-	Start            Atom = 0x39705
+-	Step             Atom = 0x5bc04
+-	Strike           Atom = 0x50e06
+-	Strong           Atom = 0x53406
+-	Style            Atom = 0x5db05
+-	Sub              Atom = 0x57a03
+-	Summary          Atom = 0x5e007
+-	Sup              Atom = 0x5e703
+-	Svg              Atom = 0x5ea03
+-	System           Atom = 0x5ed06
+-	Tabindex         Atom = 0x45c08
+-	Table            Atom = 0x43605
+-	Target           Atom = 0x26a06
+-	Tbody            Atom = 0x2e05
+-	Td               Atom = 0x4702
+-	Textarea         Atom = 0x31408
+-	Tfoot            Atom = 0x5805
+-	Th               Atom = 0x13002
+-	Thead            Atom = 0x2f905
+-	Time             Atom = 0x11b04
+-	Title            Atom = 0x8e05
+-	Tr               Atom = 0xf902
+-	Track            Atom = 0xf905
+-	Translate        Atom = 0x16609
+-	Tt               Atom = 0x7002
+-	Type             Atom = 0x23304
+-	Typemustmatch    Atom = 0x2330d
+-	U                Atom = 0xb01
+-	Ul               Atom = 0x5602
+-	Usemap           Atom = 0x4ec06
+-	Value            Atom = 0x4005
+-	Var              Atom = 0x10903
+-	Video            Atom = 0x2a905
+-	Wbr              Atom = 0x14103
+-	Width            Atom = 0x4e205
+-	Wrap             Atom = 0x56204
+-	Xmp              Atom = 0xef03
+-)
+-
+-const hash0 = 0xc17da63e
+-
+-const maxAtomLen = 16
+-
+-var table = [1 << 9]Atom{
+-	0x1:   0x4830b, // onmousemove
+-	0x2:   0x5a709, // onwaiting
+-	0x4:   0x5bf06, // prompt
+-	0x7:   0x5b007, // optimum
+-	0x8:   0x1604,  // mark
+-	0xa:   0x2d707, // itemref
+-	0xb:   0x4d90a, // onpageshow
+-	0xc:   0x55506, // select
+-	0xd:   0x1a109, // draggable
+-	0xe:   0x3e03,  // nav
+-	0xf:   0x19b07, // command
+-	0x11:  0xb01,   // u
+-	0x14:  0x2fa07, // headers
+-	0x15:  0x44308, // datalist
+-	0x17:  0x6b04,  // samp
+-	0x1a:  0x40409, // onkeydown
+-	0x1b:  0x53a08, // onscroll
+-	0x1c:  0x17603, // col
+-	0x20:  0x57e08, // itemprop
+-	0x21:  0x2a00a, // http-equiv
+-	0x22:  0x5e703, // sup
+-	0x24:  0x1f508, // required
+-	0x2b:  0x27f07, // preload
+-	0x2c:  0x21f0d, // onbeforeprint
+-	0x2d:  0x3710b, // ondragenter
+-	0x2e:  0x4e402, // dt
+-	0x2f:  0x57808, // onsubmit
+-	0x30:  0x13102, // hr
+-	0x31:  0x3460d, // oncontextmenu
+-	0x33:  0x2ba05, // image
+-	0x34:  0x4e807, // onpause
+-	0x35:  0x27a06, // hgroup
+-	0x36:  0xab04,  // ping
+-	0x37:  0x55308, // onselect
+-	0x3a:  0x10703, // div
+-	0x40:  0x9b02,  // mi
+-	0x41:  0x33308, // seamless
+-	0x42:  0x2807,  // charset
+-	0x43:  0x5102,  // id
+-	0x44:  0x4fb0a, // onpopstate
+-	0x45:  0x4d603, // del
+-	0x46:  0x5f207, // marquee
+-	0x47:  0x3309,  // accesskey
+-	0x49:  0x5906,  // footer
+-	0x4a:  0x2d106, // applet
+-	0x4b:  0x2ce05, // ismap
+-	0x51:  0x34f04, // menu
+-	0x52:  0x2f04,  // body
+-	0x55:  0x8708,  // frameset
+-	0x56:  0x52507, // onreset
+-	0x57:  0x14705, // blink
+-	0x58:  0x8e05,  // title
+-	0x59:  0x39907, // article
+-	0x5b:  0x13002, // th
+-	0x5d:  0x15101, // q
+-	0x5e:  0x58404, // open
+-	0x5f:  0x31804, // area
+-	0x61:  0x43b06, // onload
+-	0x62:  0x3f605, // input
+-	0x63:  0x11404, // base
+-	0x64:  0x18807, // colspan
+-	0x65:  0x51207, // keytype
+-	0x66:  0x13e02, // dl
+-	0x68:  0x1d508, // fieldset
+-	0x6a:  0x31003, // min
+-	0x6b:  0x10903, // var
+-	0x6f:  0x2fa06, // header
+-	0x70:  0x16502, // rt
+-	0x71:  0x17608, // colgroup
+-	0x72:  0x25402, // mn
+-	0x74:  0x16007, // onabort
+-	0x75:  0x3906,  // keygen
+-	0x76:  0x4bb09, // onoffline
+-	0x77:  0x23e09, // challenge
+-	0x78:  0x2d003, // map
+-	0x7a:  0x30e02, // h4
+-	0x7b:  0x3c707, // onerror
+-	0x7c:  0x30609, // maxlength
+-	0x7d:  0x31305, // mtext
+-	0x7e:  0x5805,  // tfoot
+-	0x7f:  0x11804, // font
+-	0x80:  0x100a,  // malignmark
+-	0x81:  0x45604, // meta
+-	0x82:  0x9305,  // async
+-	0x83:  0x2c502, // h3
+-	0x84:  0x28802, // dd
+-	0x85:  0x29804, // href
+-	0x86:  0xa20a,  // mediagroup
+-	0x87:  0x1ba06, // coords
+-	0x88:  0x41a07, // srclang
+-	0x89:  0x35e0a, // ondblclick
+-	0x8a:  0x4005,  // value
+-	0x8c:  0xb308,  // oncancel
+-	0x8e:  0x33a0a, // spellcheck
+-	0x8f:  0x8705,  // frame
+-	0x91:  0x14403, // big
+-	0x94:  0x21b06, // action
+-	0x95:  0x9d03,  // dir
+-	0x97:  0x31908, // readonly
+-	0x99:  0x43605, // table
+-	0x9a:  0x5e007, // summary
+-	0x9b:  0x14103, // wbr
+-	0x9c:  0x30a,   // radiogroup
+-	0x9d:  0xa004,  // name
+-	0x9f:  0x5ed06, // system
+-	0xa1:  0x18305, // color
+-	0xa2:  0x4b06,  // canvas
+-	0xa3:  0x27604, // html
+-	0xa5:  0x54a09, // onseeking
+-	0xac:  0x1b505, // shape
+-	0xad:  0x28003, // rel
+-	0xae:  0x12710, // oncanplaythrough
+-	0xaf:  0x3870a, // ondragover
+-	0xb1:  0x1fd0d, // foreignObject
+-	0xb3:  0x7704,  // rows
+-	0xb6:  0x44707, // listing
+-	0xb7:  0x49506, // output
+-	0xb9:  0x3480b, // contextmenu
+-	0xbb:  0x13f03, // low
+-	0xbc:  0x1eb02, // rp
+-	0xbd:  0x58809, // onsuspend
+-	0xbe:  0x15c06, // button
+-	0xbf:  0x4804,  // desc
+-	0xc1:  0x3dd07, // section
+-	0xc2:  0x5050a, // onprogress
+-	0xc3:  0x56f09, // onstorage
+-	0xc4:  0x2f704, // math
+-	0xc5:  0x4f206, // onplay
+-	0xc7:  0x5602,  // ul
+-	0xc8:  0x6e07,  // pattern
+-	0xc9:  0x4af0c, // onmousewheel
+-	0xca:  0x36809, // ondragend
+-	0xcb:  0xd104,  // ruby
+-	0xcc:  0xc01,   // p
+-	0xcd:  0x32e07, // onclose
+-	0xce:  0x26105, // meter
+-	0xcf:  0x13807, // bgsound
+-	0xd2:  0x27206, // height
+-	0xd4:  0x101,   // b
+-	0xd5:  0x2ef08, // itemtype
+-	0xd8:  0x1e007, // caption
+-	0xd9:  0x10008, // disabled
+-	0xdc:  0x5ea03, // svg
+-	0xdd:  0x1bf05, // small
+-	0xde:  0x44304, // data
+-	0xe0:  0x4c608, // ononline
+-	0xe1:  0x2c006, // mglyph
+-	0xe3:  0x7f05,  // embed
+-	0xe4:  0xf902,  // tr
+-	0xe5:  0x4640b, // onloadstart
+-	0xe7:  0x3b010, // ondurationchange
+-	0xed:  0x12503, // bdo
+-	0xee:  0x4702,  // td
+-	0xef:  0x4f05,  // aside
+-	0xf0:  0x29602, // h2
+-	0xf1:  0x50708, // progress
+-	0xf2:  0x14c0a, // blockquote
+-	0xf4:  0xba05,  // label
+-	0xf5:  0x601,   // i
+-	0xf7:  0x7707,  // rowspan
+-	0xfb:  0x4f209, // onplaying
+-	0xfd:  0x2bf03, // img
+-	0xfe:  0xc008,  // optgroup
+-	0xff:  0x42c07, // content
+-	0x101: 0x5190c, // onratechange
+-	0x103: 0x3e80c, // onhashchange
+-	0x104: 0x6507,  // details
+-	0x106: 0x40908, // download
+-	0x109: 0xe907,  // sandbox
+-	0x10b: 0x42c0f, // contenteditable
+-	0x10d: 0x37c0b, // ondragleave
+-	0x10e: 0x2106,  // accept
+-	0x10f: 0x55508, // selected
+-	0x112: 0x2170a, // formaction
+-	0x113: 0x2df06, // center
+-	0x115: 0x44e10, // onloadedmetadata
+-	0x116: 0x14804, // link
+-	0x117: 0x11b04, // time
+-	0x118: 0x1c40b, // crossorigin
+-	0x119: 0x3ce07, // onfocus
+-	0x11a: 0x56204, // wrap
+-	0x11b: 0x42b04, // icon
+-	0x11d: 0x2a905, // video
+-	0x11e: 0x3d905, // class
+-	0x121: 0x5990e, // onvolumechange
+-	0x122: 0x3e206, // onblur
+-	0x123: 0x2e509, // itemscope
+-	0x124: 0x5db05, // style
+-	0x127: 0x42706, // public
+-	0x129: 0x2510e, // formnovalidate
+-	0x12a: 0x55d06, // onshow
+-	0x12c: 0x16609, // translate
+-	0x12d: 0x9704,  // cite
+-	0x12e: 0x2e802, // ms
+-	0x12f: 0x1190c, // ontimeupdate
+-	0x130: 0xfd04,  // kind
+-	0x131: 0x2660a, // formtarget
+-	0x135: 0x3c007, // onended
+-	0x136: 0x28606, // hidden
+-	0x137: 0x2c01,  // s
+-	0x139: 0x2470a, // formmethod
+-	0x13a: 0x44704, // list
+-	0x13c: 0x27002, // h6
+-	0x13d: 0xcd02,  // ol
+-	0x13e: 0x3530b, // oncuechange
+-	0x13f: 0x20a0d, // foreignobject
+-	0x143: 0x5c90e, // onbeforeunload
+-	0x145: 0x3a709, // onemptied
+-	0x146: 0x17105, // defer
+-	0x147: 0xef03,  // xmp
+-	0x148: 0xaf05,  // audio
+-	0x149: 0x1903,  // kbd
+-	0x14c: 0x46f09, // onmessage
+-	0x14d: 0x5c506, // option
+-	0x14e: 0x4503,  // alt
+-	0x14f: 0x33f07, // checked
+-	0x150: 0x10c08, // autoplay
+-	0x152: 0x202,   // br
+-	0x153: 0x2550a, // novalidate
+-	0x156: 0x7d07,  // noembed
+-	0x159: 0x2ad07, // onclick
+-	0x15a: 0x4780b, // onmousedown
+-	0x15b: 0x3b808, // onchange
+-	0x15e: 0x3fb09, // oninvalid
+-	0x15f: 0x2e906, // scoped
+-	0x160: 0x1ae08, // controls
+-	0x161: 0x32905, // muted
+-	0x163: 0x4ec06, // usemap
+-	0x164: 0x1dd0a, // figcaption
+-	0x165: 0x36806, // ondrag
+-	0x166: 0x29304, // high
+-	0x168: 0x3d403, // src
+-	0x169: 0x17d06, // poster
+-	0x16b: 0x18d0e, // annotation-xml
+-	0x16c: 0x5bc04, // step
+-	0x16d: 0x4,     // abbr
+-	0x16e: 0x1b06,  // dialog
+-	0x170: 0x1202,  // li
+-	0x172: 0x47a02, // mo
+-	0x175: 0x1fd03, // for
+-	0x176: 0x1cd03, // ins
+-	0x178: 0x53004, // size
+-	0x17a: 0x5207,  // default
+-	0x17b: 0x1a03,  // bdi
+-	0x17c: 0x4ce0a, // onpagehide
+-	0x17d: 0x9d07,  // dirname
+-	0x17e: 0x23304, // type
+-	0x17f: 0x21704, // form
+-	0x180: 0x4c105, // inert
+-	0x181: 0x12709, // oncanplay
+-	0x182: 0x8303,  // dfn
+-	0x183: 0x45c08, // tabindex
+-	0x186: 0x7f02,  // em
+-	0x187: 0x29c04, // lang
+-	0x189: 0x3a208, // dropzone
+-	0x18a: 0x4110a, // onkeypress
+-	0x18b: 0x25b08, // datetime
+-	0x18c: 0x18804, // cols
+-	0x18d: 0x1,     // a
+-	0x18e: 0x43b0c, // onloadeddata
+-	0x191: 0x15606, // border
+-	0x192: 0x2e05,  // tbody
+-	0x193: 0x24b06, // method
+-	0x195: 0xbe04,  // loop
+-	0x196: 0x2b406, // iframe
+-	0x198: 0x2fa04, // head
+-	0x19e: 0x5b608, // manifest
+-	0x19f: 0xe109,  // autofocus
+-	0x1a0: 0x16f04, // code
+-	0x1a1: 0x53406, // strong
+-	0x1a2: 0x32108, // multiple
+-	0x1a3: 0xc05,   // param
+-	0x1a6: 0x23007, // enctype
+-	0x1a7: 0x2dd04, // face
+-	0x1a8: 0xf109,  // plaintext
+-	0x1a9: 0x13602, // h1
+-	0x1aa: 0x56609, // onstalled
+-	0x1ad: 0x28d06, // script
+-	0x1ae: 0x30006, // spacer
+-	0x1af: 0x52c08, // onresize
+-	0x1b0: 0x49b0b, // onmouseover
+-	0x1b1: 0x59108, // onunload
+-	0x1b2: 0x54208, // onseeked
+-	0x1b4: 0x2330d, // typemustmatch
+-	0x1b5: 0x1f106, // figure
+-	0x1b6: 0x48e0a, // onmouseout
+-	0x1b7: 0x27f03, // pre
+-	0x1b8: 0x4e205, // width
+-	0x1bb: 0x7404,  // nobr
+-	0x1be: 0x7002,  // tt
+-	0x1bf: 0x1105,  // align
+-	0x1c0: 0x3f407, // oninput
+-	0x1c3: 0x42107, // onkeyup
+-	0x1c6: 0x1e50c, // onafterprint
+-	0x1c7: 0x210e,  // accept-charset
+-	0x1c8: 0x9806,  // itemid
+-	0x1cb: 0x50e06, // strike
+-	0x1cc: 0x57a03, // sub
+-	0x1cd: 0xf905,  // track
+-	0x1ce: 0x39705, // start
+-	0x1d0: 0x11408, // basefont
+-	0x1d6: 0x1cf06, // source
+-	0x1d7: 0x1a806, // legend
+-	0x1d8: 0x2f905, // thead
+-	0x1da: 0x2e905, // scope
+-	0x1dd: 0x21106, // object
+-	0x1de: 0xa205,  // media
+-	0x1df: 0x18d0a, // annotation
+-	0x1e0: 0x22c0b, // formenctype
+-	0x1e2: 0x28b08, // noscript
+-	0x1e4: 0x53005, // sizes
+-	0x1e5: 0xd50c,  // autocomplete
+-	0x1e6: 0x7a04,  // span
+-	0x1e7: 0x8508,  // noframes
+-	0x1e8: 0x26a06, // target
+-	0x1e9: 0x3a006, // ondrop
+-	0x1ea: 0x3d406, // srcdoc
+-	0x1ec: 0x5e08,  // reversed
+-	0x1f0: 0x2c707, // isindex
+-	0x1f3: 0x29808, // hreflang
+-	0x1f5: 0x4e602, // h5
+-	0x1f6: 0x5d507, // address
+-	0x1fa: 0x30603, // max
+-	0x1fb: 0xc70b,  // placeholder
+-	0x1fc: 0x31408, // textarea
+-	0x1fe: 0x4a609, // onmouseup
+-	0x1ff: 0x3910b, // ondragstart
+-}
+-
+-const atomText = "abbradiogrouparamalignmarkbdialogaccept-charsetbodyaccesskey" +
+-	"genavaluealtdescanvasidefaultfootereversedetailsampatternobr" +
+-	"owspanoembedfnoframesetitleasyncitemidirnamediagroupingaudio" +
+-	"ncancelabelooptgrouplaceholderubyautocompleteautofocusandbox" +
+-	"mplaintextrackindisabledivarautoplaybasefontimeupdatebdoncan" +
+-	"playthrough1bgsoundlowbrbigblinkblockquoteborderbuttonabortr" +
+-	"anslatecodefercolgroupostercolorcolspannotation-xmlcommandra" +
+-	"ggablegendcontrolshapecoordsmallcrossoriginsourcefieldsetfig" +
+-	"captionafterprintfigurequiredforeignObjectforeignobjectforma" +
+-	"ctionbeforeprintformenctypemustmatchallengeformmethodformnov" +
+-	"alidatetimeterformtargeth6heightmlhgroupreloadhiddenoscripth" +
+-	"igh2hreflanghttp-equivideonclickiframeimageimglyph3isindexis" +
+-	"mappletitemrefacenteritemscopeditemtypematheaderspacermaxlen" +
+-	"gth4minmtextareadonlymultiplemutedoncloseamlesspellcheckedon" +
+-	"contextmenuoncuechangeondblclickondragendondragenterondragle" +
+-	"aveondragoverondragstarticleondropzonemptiedondurationchange" +
+-	"onendedonerroronfocusrcdoclassectionbluronhashchangeoninputo" +
+-	"ninvalidonkeydownloadonkeypressrclangonkeyupublicontentedita" +
+-	"bleonloadeddatalistingonloadedmetadatabindexonloadstartonmes" +
+-	"sageonmousedownonmousemoveonmouseoutputonmouseoveronmouseupo" +
+-	"nmousewheelonofflinertononlineonpagehidelonpageshowidth5onpa" +
+-	"usemaponplayingonpopstateonprogresstrikeytypeonratechangeonr" +
+-	"esetonresizestrongonscrollonseekedonseekingonselectedonshowr" +
+-	"aponstalledonstorageonsubmitempropenonsuspendonunloadonvolum" +
+-	"echangeonwaitingoptimumanifestepromptoptionbeforeunloaddress" +
+-	"tylesummarysupsvgsystemarquee"
+diff --git a/Godeps/_workspace/src/code.google.com/p/go.net/html/atom/table_test.go b/Godeps/_workspace/src/code.google.com/p/go.net/html/atom/table_test.go
+deleted file mode 100644
+index db016a1..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/go.net/html/atom/table_test.go
++++ /dev/null
+@@ -1,341 +0,0 @@
+-// generated by go run gen.go -test; DO NOT EDIT
+-
+-package atom
+-
+-var testAtomList = []string{
+-	"a",
+-	"abbr",
+-	"accept",
+-	"accept-charset",
+-	"accesskey",
+-	"action",
+-	"address",
+-	"align",
+-	"alt",
+-	"annotation",
+-	"annotation-xml",
+-	"applet",
+-	"area",
+-	"article",
+-	"aside",
+-	"async",
+-	"audio",
+-	"autocomplete",
+-	"autofocus",
+-	"autoplay",
+-	"b",
+-	"base",
+-	"basefont",
+-	"bdi",
+-	"bdo",
+-	"bgsound",
+-	"big",
+-	"blink",
+-	"blockquote",
+-	"body",
+-	"border",
+-	"br",
+-	"button",
+-	"canvas",
+-	"caption",
+-	"center",
+-	"challenge",
+-	"charset",
+-	"checked",
+-	"cite",
+-	"cite",
+-	"class",
+-	"code",
+-	"col",
+-	"colgroup",
+-	"color",
+-	"cols",
+-	"colspan",
+-	"command",
+-	"command",
+-	"content",
+-	"contenteditable",
+-	"contextmenu",
+-	"controls",
+-	"coords",
+-	"crossorigin",
+-	"data",
+-	"data",
+-	"datalist",
+-	"datetime",
+-	"dd",
+-	"default",
+-	"defer",
+-	"del",
+-	"desc",
+-	"details",
+-	"dfn",
+-	"dialog",
+-	"dir",
+-	"dirname",
+-	"disabled",
+-	"div",
+-	"dl",
+-	"download",
+-	"draggable",
+-	"dropzone",
+-	"dt",
+-	"em",
+-	"embed",
+-	"enctype",
+-	"face",
+-	"fieldset",
+-	"figcaption",
+-	"figure",
+-	"font",
+-	"footer",
+-	"for",
+-	"foreignObject",
+-	"foreignobject",
+-	"form",
+-	"form",
+-	"formaction",
+-	"formenctype",
+-	"formmethod",
+-	"formnovalidate",
+-	"formtarget",
+-	"frame",
+-	"frameset",
+-	"h1",
+-	"h2",
+-	"h3",
+-	"h4",
+-	"h5",
+-	"h6",
+-	"head",
+-	"header",
+-	"headers",
+-	"height",
+-	"hgroup",
+-	"hidden",
+-	"high",
+-	"hr",
+-	"href",
+-	"hreflang",
+-	"html",
+-	"http-equiv",
+-	"i",
+-	"icon",
+-	"id",
+-	"iframe",
+-	"image",
+-	"img",
+-	"inert",
+-	"input",
+-	"ins",
+-	"isindex",
+-	"ismap",
+-	"itemid",
+-	"itemprop",
+-	"itemref",
+-	"itemscope",
+-	"itemtype",
+-	"kbd",
+-	"keygen",
+-	"keytype",
+-	"kind",
+-	"label",
+-	"label",
+-	"lang",
+-	"legend",
+-	"li",
+-	"link",
+-	"list",
+-	"listing",
+-	"loop",
+-	"low",
+-	"malignmark",
+-	"manifest",
+-	"map",
+-	"mark",
+-	"marquee",
+-	"math",
+-	"max",
+-	"maxlength",
+-	"media",
+-	"mediagroup",
+-	"menu",
+-	"meta",
+-	"meter",
+-	"method",
+-	"mglyph",
+-	"mi",
+-	"min",
+-	"mn",
+-	"mo",
+-	"ms",
+-	"mtext",
+-	"multiple",
+-	"muted",
+-	"name",
+-	"nav",
+-	"nobr",
+-	"noembed",
+-	"noframes",
+-	"noscript",
+-	"novalidate",
+-	"object",
+-	"ol",
+-	"onabort",
+-	"onafterprint",
+-	"onbeforeprint",
+-	"onbeforeunload",
+-	"onblur",
+-	"oncancel",
+-	"oncanplay",
+-	"oncanplaythrough",
+-	"onchange",
+-	"onclick",
+-	"onclose",
+-	"oncontextmenu",
+-	"oncuechange",
+-	"ondblclick",
+-	"ondrag",
+-	"ondragend",
+-	"ondragenter",
+-	"ondragleave",
+-	"ondragover",
+-	"ondragstart",
+-	"ondrop",
+-	"ondurationchange",
+-	"onemptied",
+-	"onended",
+-	"onerror",
+-	"onfocus",
+-	"onhashchange",
+-	"oninput",
+-	"oninvalid",
+-	"onkeydown",
+-	"onkeypress",
+-	"onkeyup",
+-	"onload",
+-	"onloadeddata",
+-	"onloadedmetadata",
+-	"onloadstart",
+-	"onmessage",
+-	"onmousedown",
+-	"onmousemove",
+-	"onmouseout",
+-	"onmouseover",
+-	"onmouseup",
+-	"onmousewheel",
+-	"onoffline",
+-	"ononline",
+-	"onpagehide",
+-	"onpageshow",
+-	"onpause",
+-	"onplay",
+-	"onplaying",
+-	"onpopstate",
+-	"onprogress",
+-	"onratechange",
+-	"onreset",
+-	"onresize",
+-	"onscroll",
+-	"onseeked",
+-	"onseeking",
+-	"onselect",
+-	"onshow",
+-	"onstalled",
+-	"onstorage",
+-	"onsubmit",
+-	"onsuspend",
+-	"ontimeupdate",
+-	"onunload",
+-	"onvolumechange",
+-	"onwaiting",
+-	"open",
+-	"optgroup",
+-	"optimum",
+-	"option",
+-	"output",
+-	"p",
+-	"param",
+-	"pattern",
+-	"ping",
+-	"placeholder",
+-	"plaintext",
+-	"poster",
+-	"pre",
+-	"preload",
+-	"progress",
+-	"prompt",
+-	"public",
+-	"q",
+-	"radiogroup",
+-	"readonly",
+-	"rel",
+-	"required",
+-	"reversed",
+-	"rows",
+-	"rowspan",
+-	"rp",
+-	"rt",
+-	"ruby",
+-	"s",
+-	"samp",
+-	"sandbox",
+-	"scope",
+-	"scoped",
+-	"script",
+-	"seamless",
+-	"section",
+-	"select",
+-	"selected",
+-	"shape",
+-	"size",
+-	"sizes",
+-	"small",
+-	"source",
+-	"spacer",
+-	"span",
+-	"span",
+-	"spellcheck",
+-	"src",
+-	"srcdoc",
+-	"srclang",
+-	"start",
+-	"step",
+-	"strike",
+-	"strong",
+-	"style",
+-	"style",
+-	"sub",
+-	"summary",
+-	"sup",
+-	"svg",
+-	"system",
+-	"tabindex",
+-	"table",
+-	"target",
+-	"tbody",
+-	"td",
+-	"textarea",
+-	"tfoot",
+-	"th",
+-	"thead",
+-	"time",
+-	"title",
+-	"title",
+-	"tr",
+-	"track",
+-	"translate",
+-	"tt",
+-	"type",
+-	"typemustmatch",
+-	"u",
+-	"ul",
+-	"usemap",
+-	"value",
+-	"var",
+-	"video",
+-	"wbr",
+-	"width",
+-	"wrap",
+-	"xmp",
+-}
+diff --git a/Godeps/_workspace/src/code.google.com/p/go.net/html/charset/charset.go b/Godeps/_workspace/src/code.google.com/p/go.net/html/charset/charset.go
+deleted file mode 100644
+index 39dc268..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/go.net/html/charset/charset.go
++++ /dev/null
+@@ -1,227 +0,0 @@
+-// Package charset provides common text encodings for HTML documents.
+-//
+-// The mapping from encoding labels to encodings is defined at
+-// http://encoding.spec.whatwg.org.
+-package charset
+-
+-import (
+-	"bytes"
+-	"io"
+-	"mime"
+-	"strings"
+-	"unicode/utf8"
+-
+-	"code.google.com/p/go.net/html"
+-	"code.google.com/p/go.text/encoding"
+-	"code.google.com/p/go.text/encoding/charmap"
+-	"code.google.com/p/go.text/transform"
+-)
+-
+-// Lookup returns the encoding with the specified label, and its canonical
+-// name. It returns nil and the empty string if label is not one of the
+-// standard encodings for HTML. Matching is case-insensitive and ignores
+-// leading and trailing whitespace.
+-func Lookup(label string) (e encoding.Encoding, name string) {
+-	label = strings.ToLower(strings.Trim(label, "\t\n\r\f "))
+-	enc := encodings[label]
+-	return enc.e, enc.name
+-}
+-
+-// DetermineEncoding determines the encoding of an HTML document by examining
+-// up to the first 1024 bytes of content and the declared Content-Type.
+-//
+-// See http://www.whatwg.org/specs/web-apps/current-work/multipage/parsing.html#determining-the-character-encoding
+-func DetermineEncoding(content []byte, contentType string) (e encoding.Encoding, name string, certain bool) {
+-	if len(content) > 1024 {
+-		content = content[:1024]
+-	}
+-
+-	for _, b := range boms {
+-		if bytes.HasPrefix(content, b.bom) {
+-			e, name = Lookup(b.enc)
+-			return e, name, true
+-		}
+-	}
+-
+-	if _, params, err := mime.ParseMediaType(contentType); err == nil {
+-		if cs, ok := params["charset"]; ok {
+-			if e, name = Lookup(cs); e != nil {
+-				return e, name, true
+-			}
+-		}
+-	}
+-
+-	if len(content) > 0 {
+-		e, name = prescan(content)
+-		if e != nil {
+-			return e, name, false
+-		}
+-	}
+-
+-	// Try to detect UTF-8.
+-	// First eliminate any partial rune at the end.
+-	for i := len(content) - 1; i >= 0 && i > len(content)-4; i-- {
+-		b := content[i]
+-		if b < 0x80 {
+-			break
+-		}
+-		if utf8.RuneStart(b) {
+-			content = content[:i]
+-			break
+-		}
+-	}
+-	hasHighBit := false
+-	for _, c := range content {
+-		if c >= 0x80 {
+-			hasHighBit = true
+-			break
+-		}
+-	}
+-	if hasHighBit && utf8.Valid(content) {
+-		return encoding.Nop, "utf-8", false
+-	}
+-
+-	// TODO: change default depending on user's locale?
+-	return charmap.Windows1252, "windows-1252", false
+-}
+-
+-// NewReader returns an io.Reader that converts the content of r to UTF-8.
+-// It calls DetermineEncoding to find out what r's encoding is.
+-func NewReader(r io.Reader, contentType string) (io.Reader, error) {
+-	preview := make([]byte, 1024)
+-	n, err := io.ReadFull(r, preview)
+-	switch {
+-	case err == io.ErrUnexpectedEOF:
+-		preview = preview[:n]
+-		r = bytes.NewReader(preview)
+-	case err != nil:
+-		return nil, err
+-	default:
+-		r = io.MultiReader(bytes.NewReader(preview), r)
+-	}
+-
+-	if e, _, _ := DetermineEncoding(preview, contentType); e != encoding.Nop {
+-		r = transform.NewReader(r, e.NewDecoder())
+-	}
+-	return r, nil
+-}
+-
+-func prescan(content []byte) (e encoding.Encoding, name string) {
+-	z := html.NewTokenizer(bytes.NewReader(content))
+-	for {
+-		switch z.Next() {
+-		case html.ErrorToken:
+-			return nil, ""
+-
+-		case html.StartTagToken, html.SelfClosingTagToken:
+-			tagName, hasAttr := z.TagName()
+-			if !bytes.Equal(tagName, []byte("meta")) {
+-				continue
+-			}
+-			attrList := make(map[string]bool)
+-			gotPragma := false
+-
+-			const (
+-				dontKnow = iota
+-				doNeedPragma
+-				doNotNeedPragma
+-			)
+-			needPragma := dontKnow
+-
+-			name = ""
+-			e = nil
+-			for hasAttr {
+-				var key, val []byte
+-				key, val, hasAttr = z.TagAttr()
+-				ks := string(key)
+-				if attrList[ks] {
+-					continue
+-				}
+-				attrList[ks] = true
+-				for i, c := range val {
+-					if 'A' <= c && c <= 'Z' {
+-						val[i] = c + 0x20
+-					}
+-				}
+-
+-				switch ks {
+-				case "http-equiv":
+-					if bytes.Equal(val, []byte("content-type")) {
+-						gotPragma = true
+-					}
+-
+-				case "content":
+-					if e == nil {
+-						name = fromMetaElement(string(val))
+-						if name != "" {
+-							e, name = Lookup(name)
+-							if e != nil {
+-								needPragma = doNeedPragma
+-							}
+-						}
+-					}
+-
+-				case "charset":
+-					e, name = Lookup(string(val))
+-					needPragma = doNotNeedPragma
+-				}
+-			}
+-
+-			if needPragma == dontKnow || needPragma == doNeedPragma && !gotPragma {
+-				continue
+-			}
+-
+-			if strings.HasPrefix(name, "utf-16") {
+-				name = "utf-8"
+-				e = encoding.Nop
+-			}
+-
+-			if e != nil {
+-				return e, name
+-			}
+-		}
+-	}
+-}
+-
+-func fromMetaElement(s string) string {
+-	for s != "" {
+-		csLoc := strings.Index(s, "charset")
+-		if csLoc == -1 {
+-			return ""
+-		}
+-		s = s[csLoc+len("charset"):]
+-		s = strings.TrimLeft(s, " \t\n\f\r")
+-		if !strings.HasPrefix(s, "=") {
+-			continue
+-		}
+-		s = s[1:]
+-		s = strings.TrimLeft(s, " \t\n\f\r")
+-		if s == "" {
+-			return ""
+-		}
+-		if q := s[0]; q == '"' || q == '\'' {
+-			s = s[1:]
+-			closeQuote := strings.IndexRune(s, rune(q))
+-			if closeQuote == -1 {
+-				return ""
+-			}
+-			return s[:closeQuote]
+-		}
+-
+-		end := strings.IndexAny(s, "; \t\n\f\r")
+-		if end == -1 {
+-			end = len(s)
+-		}
+-		return s[:end]
+-	}
+-	return ""
+-}
+-
+-var boms = []struct {
+-	bom []byte
+-	enc string
+-}{
+-	{[]byte{0xfe, 0xff}, "utf-16be"},
+-	{[]byte{0xff, 0xfe}, "utf-16le"},
+-	{[]byte{0xef, 0xbb, 0xbf}, "utf-8"},
+-}
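The first step of DetermineEncoding in the charset.go deleted above is a byte-order-mark check against the boms table at the end of the file; a BOM match is the only "certain" result. A self-contained sketch of just that step (the function name is ours):

```go
package main

import (
	"bytes"
	"fmt"
)

// Same BOM table as the deleted charset.go. Order does not matter
// here because no entry is a prefix of another.
var boms = []struct {
	bom []byte
	enc string
}{
	{[]byte{0xfe, 0xff}, "utf-16be"},
	{[]byte{0xff, 0xfe}, "utf-16le"},
	{[]byte{0xef, 0xbb, 0xbf}, "utf-8"},
}

// sniffBOM returns the encoding label implied by a leading
// byte-order mark, or "" if the content starts with none.
func sniffBOM(content []byte) string {
	for _, b := range boms {
		if bytes.HasPrefix(content, b.bom) {
			return b.enc
		}
	}
	return ""
}

func main() {
	fmt.Println(sniffBOM([]byte{0xef, 0xbb, 0xbf, 'h', 'i'})) // utf-8
	fmt.Println(sniffBOM([]byte("plain ascii")))              // (empty: fall through to Content-Type, <meta> prescan, UTF-8 heuristic)
}
```

In the real function, only when this check fails does control fall through to the Content-Type charset parameter, the meta-tag prescan, and finally the UTF-8/windows-1252 heuristics, each returning certain=false.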
+diff --git a/Godeps/_workspace/src/code.google.com/p/go.net/html/charset/charset_test.go b/Godeps/_workspace/src/code.google.com/p/go.net/html/charset/charset_test.go
+deleted file mode 100644
+index a656dd9..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/go.net/html/charset/charset_test.go
++++ /dev/null
+@@ -1,200 +0,0 @@
+-package charset
+-
+-import (
+-	"bytes"
+-	"io/ioutil"
+-	"strings"
+-	"testing"
+-
+-	"code.google.com/p/go.text/transform"
+-)
+-
+-func transformString(t transform.Transformer, s string) (string, error) {
+-	r := transform.NewReader(strings.NewReader(s), t)
+-	b, err := ioutil.ReadAll(r)
+-	return string(b), err
+-}
+-
+-var testCases = []struct {
+-	utf8, other, otherEncoding string
+-}{
+-	{"Résumé", "Résumé", "utf8"},
+-	{"Résumé", "R\xe9sum\xe9", "latin1"},
+-	{"これは漢字です。", "S0\x8c0o0\"oW[g0Y0\x020", "UTF-16LE"},
+-	{"これは漢字です。", "0S0\x8c0oo\"[W0g0Y0\x02", "UTF-16BE"},
+-	{"Hello, world", "Hello, world", "ASCII"},
+-	{"Gdańsk", "Gda\xf1sk", "ISO-8859-2"},
+-	{"Ââ Čč Đđ Ŋŋ Õõ Šš Žž Åå Ää", "\xc2\xe2 \xc8\xe8 \xa9\xb9 \xaf\xbf \xd5\xf5 \xaa\xba \xac\xbc \xc5\xe5 \xc4\xe4", "ISO-8859-10"},
+-	{"สำหรับ", "\xca\xd3\xcb\xc3\u047a", "ISO-8859-11"},
+-	{"latviešu", "latvie\xf0u", "ISO-8859-13"},
+-	{"Seònaid", "Se\xf2naid", "ISO-8859-14"},
+-	{"€1 is cheap", "\xa41 is cheap", "ISO-8859-15"},
+-	{"românește", "rom\xe2ne\xbate", "ISO-8859-16"},
+-	{"nutraĵo", "nutra\xbco", "ISO-8859-3"},
+-	{"Kalâdlit", "Kal\xe2dlit", "ISO-8859-4"},
+-	{"русский", "\xe0\xe3\xe1\xe1\xda\xd8\xd9", "ISO-8859-5"},
+-	{"ελληνικά", "\xe5\xeb\xeb\xe7\xed\xe9\xea\xdc", "ISO-8859-7"},
+-	{"Kağan", "Ka\xf0an", "ISO-8859-9"},
+-	{"Résumé", "R\x8esum\x8e", "macintosh"},
+-	{"Gdańsk", "Gda\xf1sk", "windows-1250"},
+-	{"русский", "\xf0\xf3\xf1\xf1\xea\xe8\xe9", "windows-1251"},
+-	{"Résumé", "R\xe9sum\xe9", "windows-1252"},
+-	{"ελληνικά", "\xe5\xeb\xeb\xe7\xed\xe9\xea\xdc", "windows-1253"},
+-	{"Kağan", "Ka\xf0an", "windows-1254"},
+-	{"עִבְרִית", "\xf2\xc4\xe1\xc0\xf8\xc4\xe9\xfa", "windows-1255"},
+-	{"العربية", "\xc7\xe1\xda\xd1\xc8\xed\xc9", "windows-1256"},
+-	{"latviešu", "latvie\xf0u", "windows-1257"},
+-	{"Việt", "Vi\xea\xf2t", "windows-1258"},
+-	{"สำหรับ", "\xca\xd3\xcb\xc3\u047a", "windows-874"},
+-	{"русский", "\xd2\xd5\xd3\xd3\xcb\xc9\xca", "KOI8-R"},
+-	{"українська", "\xd5\xcb\xd2\xc1\xa7\xce\xd3\xd8\xcb\xc1", "KOI8-U"},
+-	{"Hello 常用國字標準字體表", "Hello \xb1`\xa5\u03b0\xea\xa6r\xbc\u0437\u01e6r\xc5\xe9\xaa\xed", "big5"},
+-	{"Hello 常用國字標準字體表", "Hello \xb3\xa3\xd3\xc3\x87\xf8\xd7\xd6\x98\xcb\x9c\xca\xd7\xd6\xf3\x77\xb1\xed", "gbk"},
+-	{"Hello 常用國字標準字體表", "Hello \xb3\xa3\xd3\xc3\x87\xf8\xd7\xd6\x98\xcb\x9c\xca\xd7\xd6\xf3\x77\xb1\xed", "gb18030"},
+-	{"עִבְרִית", "\x81\x30\xfb\x30\x81\x30\xf6\x34\x81\x30\xf9\x33\x81\x30\xf6\x30\x81\x30\xfb\x36\x81\x30\xf6\x34\x81\x30\xfa\x31\x81\x30\xfb\x38", "gb18030"},
+-	{"㧯", "\x82\x31\x89\x38", "gb18030"},
+-	{"これは漢字です。", "\x82\xb1\x82\xea\x82\xcd\x8a\xbf\x8e\x9a\x82\xc5\x82\xb7\x81B", "SJIS"},
+-	{"Hello, 世界!", "Hello, \x90\xa2\x8aE!", "SJIS"},
+-	{"イウエオカ", "\xb2\xb3\xb4\xb5\xb6", "SJIS"},
+-	{"これは漢字です。", "\xa4\xb3\xa4\xec\xa4\u03f4\xc1\xbb\xfa\xa4\u01e4\xb9\xa1\xa3", "EUC-JP"},
+-	{"Hello, 世界!", "Hello, \x1b$B@$3&\x1b(B!", "ISO-2022-JP"},
+-	{"네이트 | 즐거움의 시작, 슈파스(Spaβ) NATE", "\xb3\xd7\xc0\xcc\xc6\xae | \xc1\xf1\xb0\xc5\xbf\xf2\xc0\xc7 \xbd\xc3\xc0\xdb, \xbd\xb4\xc6\xc4\xbd\xba(Spa\xa5\xe2) NATE", "EUC-KR"},
+-}
+-
+-func TestDecode(t *testing.T) {
+-	for _, tc := range testCases {
+-		e, _ := Lookup(tc.otherEncoding)
+-		if e == nil {
+-			t.Errorf("%s: not found", tc.otherEncoding)
+-			continue
+-		}
+-		s, err := transformString(e.NewDecoder(), tc.other)
+-		if err != nil {
+-			t.Errorf("%s: decode %q: %v", tc.otherEncoding, tc.other, err)
+-			continue
+-		}
+-		if s != tc.utf8 {
+-			t.Errorf("%s: got %q, want %q", tc.otherEncoding, s, tc.utf8)
+-		}
+-	}
+-}
+-
+-func TestEncode(t *testing.T) {
+-	for _, tc := range testCases {
+-		e, _ := Lookup(tc.otherEncoding)
+-		if e == nil {
+-			t.Errorf("%s: not found", tc.otherEncoding)
+-			continue
+-		}
+-		s, err := transformString(e.NewEncoder(), tc.utf8)
+-		if err != nil {
+-			t.Errorf("%s: encode %q: %s", tc.otherEncoding, tc.utf8, err)
+-			continue
+-		}
+-		if s != tc.other {
+-			t.Errorf("%s: got %q, want %q", tc.otherEncoding, s, tc.other)
+-		}
+-	}
+-}
+-
+-// TestNames verifies that you can pass an encoding's name to Lookup and get
+-// the same encoding back (except for "replacement").
+-func TestNames(t *testing.T) {
+-	for _, e := range encodings {
+-		if e.name == "replacement" {
+-			continue
+-		}
+-		_, got := Lookup(e.name)
+-		if got != e.name {
+-			t.Errorf("got %q, want %q", got, e.name)
+-			continue
+-		}
+-	}
+-}
+-
+-var sniffTestCases = []struct {
+-	filename, declared, want string
+-}{
+-	{"HTTP-charset.html", "text/html; charset=iso-8859-15", "iso-8859-15"},
+-	{"UTF-16LE-BOM.html", "", "utf-16le"},
+-	{"UTF-16BE-BOM.html", "", "utf-16be"},
+-	{"meta-content-attribute.html", "text/html", "iso-8859-15"},
+-	{"meta-charset-attribute.html", "text/html", "iso-8859-15"},
+-	{"No-encoding-declaration.html", "text/html", "utf-8"},
+-	{"HTTP-vs-UTF-8-BOM.html", "text/html; charset=iso-8859-15", "utf-8"},
+-	{"HTTP-vs-meta-content.html", "text/html; charset=iso-8859-15", "iso-8859-15"},
+-	{"HTTP-vs-meta-charset.html", "text/html; charset=iso-8859-15", "iso-8859-15"},
+-	{"UTF-8-BOM-vs-meta-content.html", "text/html", "utf-8"},
+-	{"UTF-8-BOM-vs-meta-charset.html", "text/html", "utf-8"},
+-}
+-
+-func TestSniff(t *testing.T) {
+-	for _, tc := range sniffTestCases {
+-		content, err := ioutil.ReadFile("testdata/" + tc.filename)
+-		if err != nil {
+-			t.Errorf("%s: error reading file: %v", tc.filename, err)
+-			continue
+-		}
+-
+-		_, name, _ := DetermineEncoding(content, tc.declared)
+-		if name != tc.want {
+-			t.Errorf("%s: got %q, want %q", tc.filename, name, tc.want)
+-			continue
+-		}
+-	}
+-}
+-
+-func TestReader(t *testing.T) {
+-	for _, tc := range sniffTestCases {
+-		content, err := ioutil.ReadFile("testdata/" + tc.filename)
+-		if err != nil {
+-			t.Errorf("%s: error reading file: %v", tc.filename, err)
+-			continue
+-		}
+-
+-		r, err := NewReader(bytes.NewReader(content), tc.declared)
+-		if err != nil {
+-			t.Errorf("%s: error creating reader: %v", tc.filename, err)
+-			continue
+-		}
+-
+-		got, err := ioutil.ReadAll(r)
+-		if err != nil {
+-			t.Errorf("%s: error reading from charset.NewReader: %v", tc.filename, err)
+-			continue
+-		}
+-
+-		e, _ := Lookup(tc.want)
+-		want, err := ioutil.ReadAll(transform.NewReader(bytes.NewReader(content), e.NewDecoder()))
+-		if err != nil {
+-			t.Errorf("%s: error decoding with hard-coded charset name: %v", tc.filename, err)
+-			continue
+-		}
+-
+-		if !bytes.Equal(got, want) {
+-			t.Errorf("%s: got %q, want %q", tc.filename, got, want)
+-			continue
+-		}
+-	}
+-}
+-
+-var metaTestCases = []struct {
+-	meta, want string
+-}{
+-	{"", ""},
+-	{"text/html", ""},
+-	{"text/html; charset utf-8", ""},
+-	{"text/html; charset=latin-2", "latin-2"},
+-	{"text/html; charset; charset = utf-8", "utf-8"},
+-	{`charset="big5"`, "big5"},
+-	{"charset='shift_jis'", "shift_jis"},
+-}
+-
+-func TestFromMeta(t *testing.T) {
+-	for _, tc := range metaTestCases {
+-		got := fromMetaElement(tc.meta)
+-		if got != tc.want {
+-			t.Errorf("%q: got %q, want %q", tc.meta, got, tc.want)
+-		}
+-	}
+-}
+diff --git a/Godeps/_workspace/src/code.google.com/p/go.net/html/charset/gen.go b/Godeps/_workspace/src/code.google.com/p/go.net/html/charset/gen.go
+deleted file mode 100644
+index 25a9eb6..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/go.net/html/charset/gen.go
++++ /dev/null
+@@ -1,107 +0,0 @@
+-// +build ignore
+-
+-package main
+-
+-// Download http://encoding.spec.whatwg.org/encodings.json and use it to
+-// generate table.go.
+-
+-import (
+-	"encoding/json"
+-	"fmt"
+-	"log"
+-	"net/http"
+-	"strings"
+-)
+-
+-type enc struct {
+-	Name   string
+-	Labels []string
+-}
+-
+-type group struct {
+-	Encodings []enc
+-	Heading   string
+-}
+-
+-const specURL = "http://encoding.spec.whatwg.org/encodings.json"
+-
+-func main() {
+-	resp, err := http.Get(specURL)
+-	if err != nil {
+-		log.Fatalf("error fetching %s: %s", specURL, err)
+-	}
+-	if resp.StatusCode != 200 {
+-		log.Fatalf("error fetching %s: HTTP status %s", specURL, resp.Status)
+-	}
+-	defer resp.Body.Close()
+-
+-	var groups []group
+-	d := json.NewDecoder(resp.Body)
+-	err = d.Decode(&groups)
+-	if err != nil {
+-		log.Fatalf("error reading encodings.json: %s", err)
+-	}
+-
+-	fmt.Println("// generated by go run gen.go; DO NOT EDIT")
+-	fmt.Println()
+-	fmt.Println("package charset")
+-	fmt.Println()
+-
+-	fmt.Println("import (")
+-	fmt.Println(`"code.google.com/p/go.text/encoding"`)
+-	for _, pkg := range []string{"charmap", "japanese", "korean", "simplifiedchinese", "traditionalchinese", "unicode"} {
+-		fmt.Printf("\"code.google.com/p/go.text/encoding/%s\"\n", pkg)
+-	}
+-	fmt.Println(")")
+-	fmt.Println()
+-
+-	fmt.Println("var encodings = map[string]struct{e encoding.Encoding; name string} {")
+-	for _, g := range groups {
+-		for _, e := range g.Encodings {
+-			goName, ok := miscNames[e.Name]
+-			if !ok {
+-				for k, v := range prefixes {
+-					if strings.HasPrefix(e.Name, k) {
+-						goName = v + e.Name[len(k):]
+-						break
+-					}
+-				}
+-				if goName == "" {
+-					log.Fatalf("unrecognized encoding name: %s", e.Name)
+-				}
+-			}
+-
+-			for _, label := range e.Labels {
+-				fmt.Printf("%q: {%s, %q},\n", label, goName, e.Name)
+-			}
+-		}
+-	}
+-	fmt.Println("}")
+-}
+-
+-var prefixes = map[string]string{
+-	"iso-8859-": "charmap.ISO8859_",
+-	"windows-":  "charmap.Windows",
+-}
+-
+-var miscNames = map[string]string{
+-	"utf-8":          "encoding.Nop",
+-	"ibm866":         "charmap.CodePage866",
+-	"iso-8859-8-i":   "charmap.ISO8859_8",
+-	"koi8-r":         "charmap.KOI8R",
+-	"koi8-u":         "charmap.KOI8U",
+-	"macintosh":      "charmap.Macintosh",
+-	"x-mac-cyrillic": "charmap.MacintoshCyrillic",
+-	"gbk":            "simplifiedchinese.GBK",
+-	"gb18030":        "simplifiedchinese.GB18030",
+-	"hz-gb-2312":     "simplifiedchinese.HZGB2312",
+-	"big5":           "traditionalchinese.Big5",
+-	"euc-jp":         "japanese.EUCJP",
+-	"iso-2022-jp":    "japanese.ISO2022JP",
+-	"shift_jis":      "japanese.ShiftJIS",
+-	"euc-kr":         "korean.EUCKR",
+-	"replacement":    "encoding.Replacement",
+-	"utf-16be":       "unicode.UTF16(unicode.BigEndian, unicode.IgnoreBOM)",
+-	"utf-16le":       "unicode.UTF16(unicode.LittleEndian, unicode.IgnoreBOM)",
+-	"x-user-defined": "charmap.XUserDefined",
+-}
+diff --git a/Godeps/_workspace/src/code.google.com/p/go.net/html/charset/table.go b/Godeps/_workspace/src/code.google.com/p/go.net/html/charset/table.go
+deleted file mode 100644
+index 66f8af1..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/go.net/html/charset/table.go
++++ /dev/null
+@@ -1,235 +0,0 @@
+-// generated by go run gen.go; DO NOT EDIT
+-
+-package charset
+-
+-import (
+-	"code.google.com/p/go.text/encoding"
+-	"code.google.com/p/go.text/encoding/charmap"
+-	"code.google.com/p/go.text/encoding/japanese"
+-	"code.google.com/p/go.text/encoding/korean"
+-	"code.google.com/p/go.text/encoding/simplifiedchinese"
+-	"code.google.com/p/go.text/encoding/traditionalchinese"
+-	"code.google.com/p/go.text/encoding/unicode"
+-)
+-
+-var encodings = map[string]struct {
+-	e    encoding.Encoding
+-	name string
+-}{
+-	"unicode-1-1-utf-8":   {encoding.Nop, "utf-8"},
+-	"utf-8":               {encoding.Nop, "utf-8"},
+-	"utf8":                {encoding.Nop, "utf-8"},
+-	"866":                 {charmap.CodePage866, "ibm866"},
+-	"cp866":               {charmap.CodePage866, "ibm866"},
+-	"csibm866":            {charmap.CodePage866, "ibm866"},
+-	"ibm866":              {charmap.CodePage866, "ibm866"},
+-	"csisolatin2":         {charmap.ISO8859_2, "iso-8859-2"},
+-	"iso-8859-2":          {charmap.ISO8859_2, "iso-8859-2"},
+-	"iso-ir-101":          {charmap.ISO8859_2, "iso-8859-2"},
+-	"iso8859-2":           {charmap.ISO8859_2, "iso-8859-2"},
+-	"iso88592":            {charmap.ISO8859_2, "iso-8859-2"},
+-	"iso_8859-2":          {charmap.ISO8859_2, "iso-8859-2"},
+-	"iso_8859-2:1987":     {charmap.ISO8859_2, "iso-8859-2"},
+-	"l2":                  {charmap.ISO8859_2, "iso-8859-2"},
+-	"latin2":              {charmap.ISO8859_2, "iso-8859-2"},
+-	"csisolatin3":         {charmap.ISO8859_3, "iso-8859-3"},
+-	"iso-8859-3":          {charmap.ISO8859_3, "iso-8859-3"},
+-	"iso-ir-109":          {charmap.ISO8859_3, "iso-8859-3"},
+-	"iso8859-3":           {charmap.ISO8859_3, "iso-8859-3"},
+-	"iso88593":            {charmap.ISO8859_3, "iso-8859-3"},
+-	"iso_8859-3":          {charmap.ISO8859_3, "iso-8859-3"},
+-	"iso_8859-3:1988":     {charmap.ISO8859_3, "iso-8859-3"},
+-	"l3":                  {charmap.ISO8859_3, "iso-8859-3"},
+-	"latin3":              {charmap.ISO8859_3, "iso-8859-3"},
+-	"csisolatin4":         {charmap.ISO8859_4, "iso-8859-4"},
+-	"iso-8859-4":          {charmap.ISO8859_4, "iso-8859-4"},
+-	"iso-ir-110":          {charmap.ISO8859_4, "iso-8859-4"},
+-	"iso8859-4":           {charmap.ISO8859_4, "iso-8859-4"},
+-	"iso88594":            {charmap.ISO8859_4, "iso-8859-4"},
+-	"iso_8859-4":          {charmap.ISO8859_4, "iso-8859-4"},
+-	"iso_8859-4:1988":     {charmap.ISO8859_4, "iso-8859-4"},
+-	"l4":                  {charmap.ISO8859_4, "iso-8859-4"},
+-	"latin4":              {charmap.ISO8859_4, "iso-8859-4"},
+-	"csisolatincyrillic":  {charmap.ISO8859_5, "iso-8859-5"},
+-	"cyrillic":            {charmap.ISO8859_5, "iso-8859-5"},
+-	"iso-8859-5":          {charmap.ISO8859_5, "iso-8859-5"},
+-	"iso-ir-144":          {charmap.ISO8859_5, "iso-8859-5"},
+-	"iso8859-5":           {charmap.ISO8859_5, "iso-8859-5"},
+-	"iso88595":            {charmap.ISO8859_5, "iso-8859-5"},
+-	"iso_8859-5":          {charmap.ISO8859_5, "iso-8859-5"},
+-	"iso_8859-5:1988":     {charmap.ISO8859_5, "iso-8859-5"},
+-	"arabic":              {charmap.ISO8859_6, "iso-8859-6"},
+-	"asmo-708":            {charmap.ISO8859_6, "iso-8859-6"},
+-	"csiso88596e":         {charmap.ISO8859_6, "iso-8859-6"},
+-	"csiso88596i":         {charmap.ISO8859_6, "iso-8859-6"},
+-	"csisolatinarabic":    {charmap.ISO8859_6, "iso-8859-6"},
+-	"ecma-114":            {charmap.ISO8859_6, "iso-8859-6"},
+-	"iso-8859-6":          {charmap.ISO8859_6, "iso-8859-6"},
+-	"iso-8859-6-e":        {charmap.ISO8859_6, "iso-8859-6"},
+-	"iso-8859-6-i":        {charmap.ISO8859_6, "iso-8859-6"},
+-	"iso-ir-127":          {charmap.ISO8859_6, "iso-8859-6"},
+-	"iso8859-6":           {charmap.ISO8859_6, "iso-8859-6"},
+-	"iso88596":            {charmap.ISO8859_6, "iso-8859-6"},
+-	"iso_8859-6":          {charmap.ISO8859_6, "iso-8859-6"},
+-	"iso_8859-6:1987":     {charmap.ISO8859_6, "iso-8859-6"},
+-	"csisolatingreek":     {charmap.ISO8859_7, "iso-8859-7"},
+-	"ecma-118":            {charmap.ISO8859_7, "iso-8859-7"},
+-	"elot_928":            {charmap.ISO8859_7, "iso-8859-7"},
+-	"greek":               {charmap.ISO8859_7, "iso-8859-7"},
+-	"greek8":              {charmap.ISO8859_7, "iso-8859-7"},
+-	"iso-8859-7":          {charmap.ISO8859_7, "iso-8859-7"},
+-	"iso-ir-126":          {charmap.ISO8859_7, "iso-8859-7"},
+-	"iso8859-7":           {charmap.ISO8859_7, "iso-8859-7"},
+-	"iso88597":            {charmap.ISO8859_7, "iso-8859-7"},
+-	"iso_8859-7":          {charmap.ISO8859_7, "iso-8859-7"},
+-	"iso_8859-7:1987":     {charmap.ISO8859_7, "iso-8859-7"},
+-	"sun_eu_greek":        {charmap.ISO8859_7, "iso-8859-7"},
+-	"csiso88598e":         {charmap.ISO8859_8, "iso-8859-8"},
+-	"csisolatinhebrew":    {charmap.ISO8859_8, "iso-8859-8"},
+-	"hebrew":              {charmap.ISO8859_8, "iso-8859-8"},
+-	"iso-8859-8":          {charmap.ISO8859_8, "iso-8859-8"},
+-	"iso-8859-8-e":        {charmap.ISO8859_8, "iso-8859-8"},
+-	"iso-ir-138":          {charmap.ISO8859_8, "iso-8859-8"},
+-	"iso8859-8":           {charmap.ISO8859_8, "iso-8859-8"},
+-	"iso88598":            {charmap.ISO8859_8, "iso-8859-8"},
+-	"iso_8859-8":          {charmap.ISO8859_8, "iso-8859-8"},
+-	"iso_8859-8:1988":     {charmap.ISO8859_8, "iso-8859-8"},
+-	"visual":              {charmap.ISO8859_8, "iso-8859-8"},
+-	"csiso88598i":         {charmap.ISO8859_8, "iso-8859-8-i"},
+-	"iso-8859-8-i":        {charmap.ISO8859_8, "iso-8859-8-i"},
+-	"logical":             {charmap.ISO8859_8, "iso-8859-8-i"},
+-	"csisolatin6":         {charmap.ISO8859_10, "iso-8859-10"},
+-	"iso-8859-10":         {charmap.ISO8859_10, "iso-8859-10"},
+-	"iso-ir-157":          {charmap.ISO8859_10, "iso-8859-10"},
+-	"iso8859-10":          {charmap.ISO8859_10, "iso-8859-10"},
+-	"iso885910":           {charmap.ISO8859_10, "iso-8859-10"},
+-	"l6":                  {charmap.ISO8859_10, "iso-8859-10"},
+-	"latin6":              {charmap.ISO8859_10, "iso-8859-10"},
+-	"iso-8859-13":         {charmap.ISO8859_13, "iso-8859-13"},
+-	"iso8859-13":          {charmap.ISO8859_13, "iso-8859-13"},
+-	"iso885913":           {charmap.ISO8859_13, "iso-8859-13"},
+-	"iso-8859-14":         {charmap.ISO8859_14, "iso-8859-14"},
+-	"iso8859-14":          {charmap.ISO8859_14, "iso-8859-14"},
+-	"iso885914":           {charmap.ISO8859_14, "iso-8859-14"},
+-	"csisolatin9":         {charmap.ISO8859_15, "iso-8859-15"},
+-	"iso-8859-15":         {charmap.ISO8859_15, "iso-8859-15"},
+-	"iso8859-15":          {charmap.ISO8859_15, "iso-8859-15"},
+-	"iso885915":           {charmap.ISO8859_15, "iso-8859-15"},
+-	"iso_8859-15":         {charmap.ISO8859_15, "iso-8859-15"},
+-	"l9":                  {charmap.ISO8859_15, "iso-8859-15"},
+-	"iso-8859-16":         {charmap.ISO8859_16, "iso-8859-16"},
+-	"cskoi8r":             {charmap.KOI8R, "koi8-r"},
+-	"koi":                 {charmap.KOI8R, "koi8-r"},
+-	"koi8":                {charmap.KOI8R, "koi8-r"},
+-	"koi8-r":              {charmap.KOI8R, "koi8-r"},
+-	"koi8_r":              {charmap.KOI8R, "koi8-r"},
+-	"koi8-u":              {charmap.KOI8U, "koi8-u"},
+-	"csmacintosh":         {charmap.Macintosh, "macintosh"},
+-	"mac":                 {charmap.Macintosh, "macintosh"},
+-	"macintosh":           {charmap.Macintosh, "macintosh"},
+-	"x-mac-roman":         {charmap.Macintosh, "macintosh"},
+-	"dos-874":             {charmap.Windows874, "windows-874"},
+-	"iso-8859-11":         {charmap.Windows874, "windows-874"},
+-	"iso8859-11":          {charmap.Windows874, "windows-874"},
+-	"iso885911":           {charmap.Windows874, "windows-874"},
+-	"tis-620":             {charmap.Windows874, "windows-874"},
+-	"windows-874":         {charmap.Windows874, "windows-874"},
+-	"cp1250":              {charmap.Windows1250, "windows-1250"},
+-	"windows-1250":        {charmap.Windows1250, "windows-1250"},
+-	"x-cp1250":            {charmap.Windows1250, "windows-1250"},
+-	"cp1251":              {charmap.Windows1251, "windows-1251"},
+-	"windows-1251":        {charmap.Windows1251, "windows-1251"},
+-	"x-cp1251":            {charmap.Windows1251, "windows-1251"},
+-	"ansi_x3.4-1968":      {charmap.Windows1252, "windows-1252"},
+-	"ascii":               {charmap.Windows1252, "windows-1252"},
+-	"cp1252":              {charmap.Windows1252, "windows-1252"},
+-	"cp819":               {charmap.Windows1252, "windows-1252"},
+-	"csisolatin1":         {charmap.Windows1252, "windows-1252"},
+-	"ibm819":              {charmap.Windows1252, "windows-1252"},
+-	"iso-8859-1":          {charmap.Windows1252, "windows-1252"},
+-	"iso-ir-100":          {charmap.Windows1252, "windows-1252"},
+-	"iso8859-1":           {charmap.Windows1252, "windows-1252"},
+-	"iso88591":            {charmap.Windows1252, "windows-1252"},
+-	"iso_8859-1":          {charmap.Windows1252, "windows-1252"},
+-	"iso_8859-1:1987":     {charmap.Windows1252, "windows-1252"},
+-	"l1":                  {charmap.Windows1252, "windows-1252"},
+-	"latin1":              {charmap.Windows1252, "windows-1252"},
+-	"us-ascii":            {charmap.Windows1252, "windows-1252"},
+-	"windows-1252":        {charmap.Windows1252, "windows-1252"},
+-	"x-cp1252":            {charmap.Windows1252, "windows-1252"},
+-	"cp1253":              {charmap.Windows1253, "windows-1253"},
+-	"windows-1253":        {charmap.Windows1253, "windows-1253"},
+-	"x-cp1253":            {charmap.Windows1253, "windows-1253"},
+-	"cp1254":              {charmap.Windows1254, "windows-1254"},
+-	"csisolatin5":         {charmap.Windows1254, "windows-1254"},
+-	"iso-8859-9":          {charmap.Windows1254, "windows-1254"},
+-	"iso-ir-148":          {charmap.Windows1254, "windows-1254"},
+-	"iso8859-9":           {charmap.Windows1254, "windows-1254"},
+-	"iso88599":            {charmap.Windows1254, "windows-1254"},
+-	"iso_8859-9":          {charmap.Windows1254, "windows-1254"},
+-	"iso_8859-9:1989":     {charmap.Windows1254, "windows-1254"},
+-	"l5":                  {charmap.Windows1254, "windows-1254"},
+-	"latin5":              {charmap.Windows1254, "windows-1254"},
+-	"windows-1254":        {charmap.Windows1254, "windows-1254"},
+-	"x-cp1254":            {charmap.Windows1254, "windows-1254"},
+-	"cp1255":              {charmap.Windows1255, "windows-1255"},
+-	"windows-1255":        {charmap.Windows1255, "windows-1255"},
+-	"x-cp1255":            {charmap.Windows1255, "windows-1255"},
+-	"cp1256":              {charmap.Windows1256, "windows-1256"},
+-	"windows-1256":        {charmap.Windows1256, "windows-1256"},
+-	"x-cp1256":            {charmap.Windows1256, "windows-1256"},
+-	"cp1257":              {charmap.Windows1257, "windows-1257"},
+-	"windows-1257":        {charmap.Windows1257, "windows-1257"},
+-	"x-cp1257":            {charmap.Windows1257, "windows-1257"},
+-	"cp1258":              {charmap.Windows1258, "windows-1258"},
+-	"windows-1258":        {charmap.Windows1258, "windows-1258"},
+-	"x-cp1258":            {charmap.Windows1258, "windows-1258"},
+-	"x-mac-cyrillic":      {charmap.MacintoshCyrillic, "x-mac-cyrillic"},
+-	"x-mac-ukrainian":     {charmap.MacintoshCyrillic, "x-mac-cyrillic"},
+-	"chinese":             {simplifiedchinese.GBK, "gbk"},
+-	"csgb2312":            {simplifiedchinese.GBK, "gbk"},
+-	"csiso58gb231280":     {simplifiedchinese.GBK, "gbk"},
+-	"gb2312":              {simplifiedchinese.GBK, "gbk"},
+-	"gb_2312":             {simplifiedchinese.GBK, "gbk"},
+-	"gb_2312-80":          {simplifiedchinese.GBK, "gbk"},
+-	"gbk":                 {simplifiedchinese.GBK, "gbk"},
+-	"iso-ir-58":           {simplifiedchinese.GBK, "gbk"},
+-	"x-gbk":               {simplifiedchinese.GBK, "gbk"},
+-	"gb18030":             {simplifiedchinese.GB18030, "gb18030"},
+-	"hz-gb-2312":          {simplifiedchinese.HZGB2312, "hz-gb-2312"},
+-	"big5":                {traditionalchinese.Big5, "big5"},
+-	"big5-hkscs":          {traditionalchinese.Big5, "big5"},
+-	"cn-big5":             {traditionalchinese.Big5, "big5"},
+-	"csbig5":              {traditionalchinese.Big5, "big5"},
+-	"x-x-big5":            {traditionalchinese.Big5, "big5"},
+-	"cseucpkdfmtjapanese": {japanese.EUCJP, "euc-jp"},
+-	"euc-jp":              {japanese.EUCJP, "euc-jp"},
+-	"x-euc-jp":            {japanese.EUCJP, "euc-jp"},
+-	"csiso2022jp":         {japanese.ISO2022JP, "iso-2022-jp"},
+-	"iso-2022-jp":         {japanese.ISO2022JP, "iso-2022-jp"},
+-	"csshiftjis":          {japanese.ShiftJIS, "shift_jis"},
+-	"ms_kanji":            {japanese.ShiftJIS, "shift_jis"},
+-	"shift-jis":           {japanese.ShiftJIS, "shift_jis"},
+-	"shift_jis":           {japanese.ShiftJIS, "shift_jis"},
+-	"sjis":                {japanese.ShiftJIS, "shift_jis"},
+-	"windows-31j":         {japanese.ShiftJIS, "shift_jis"},
+-	"x-sjis":              {japanese.ShiftJIS, "shift_jis"},
+-	"cseuckr":             {korean.EUCKR, "euc-kr"},
+-	"csksc56011987":       {korean.EUCKR, "euc-kr"},
+-	"euc-kr":              {korean.EUCKR, "euc-kr"},
+-	"iso-ir-149":          {korean.EUCKR, "euc-kr"},
+-	"korean":              {korean.EUCKR, "euc-kr"},
+-	"ks_c_5601-1987":      {korean.EUCKR, "euc-kr"},
+-	"ks_c_5601-1989":      {korean.EUCKR, "euc-kr"},
+-	"ksc5601":             {korean.EUCKR, "euc-kr"},
+-	"ksc_5601":            {korean.EUCKR, "euc-kr"},
+-	"windows-949":         {korean.EUCKR, "euc-kr"},
+-	"csiso2022kr":         {encoding.Replacement, "replacement"},
+-	"iso-2022-kr":         {encoding.Replacement, "replacement"},
+-	"iso-2022-cn":         {encoding.Replacement, "replacement"},
+-	"iso-2022-cn-ext":     {encoding.Replacement, "replacement"},
+-	"utf-16be":            {unicode.UTF16(unicode.BigEndian, unicode.IgnoreBOM), "utf-16be"},
+-	"utf-16":              {unicode.UTF16(unicode.LittleEndian, unicode.IgnoreBOM), "utf-16le"},
+-	"utf-16le":            {unicode.UTF16(unicode.LittleEndian, unicode.IgnoreBOM), "utf-16le"},
+-	"x-user-defined":      {charmap.XUserDefined, "x-user-defined"},
+-}
+diff --git a/Godeps/_workspace/src/code.google.com/p/go.net/html/charset/testdata/HTTP-charset.html b/Godeps/_workspace/src/code.google.com/p/go.net/html/charset/testdata/HTTP-charset.html
+deleted file mode 100644
+index 9915fa0..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/go.net/html/charset/testdata/HTTP-charset.html
++++ /dev/null
+@@ -1,48 +0,0 @@
+-<!DOCTYPE html>
+-<html  lang="en" >
+-<head>
+-  <title>HTTP charset</title>
+-<link rel='author' title='Richard Ishida' href='mailto:ishida at w3.org'>
+-<link rel='help' href='http://www.w3.org/TR/html5/syntax.html#the-input-byte-stream'>
+-<link rel="stylesheet" type="text/css" href="./generatedtests.css">
+-<script src="http://w3c-test.org/resources/testharness.js"></script>
+-<script src="http://w3c-test.org/resources/testharnessreport.js"></script>
+-<meta name='flags' content='http'>
+-<meta name="assert" content="The character encoding of a page can be set using the HTTP header charset declaration.">
+-<style type='text/css'>
+-.test div { width: 50px; }</style>
+-<link rel="stylesheet" type="text/css" href="the-input-byte-stream/support/encodingtests-15.css">
+-</head>
+-<body>
+-<p class='title'>HTTP charset</p>
+-
+-
+-<div id='log'></div>
+-
+-
+-<div class='test'><div id='box' class='ýäè'>&#xA0;</div></div>
+-
+-
+-
+-
+-
+-<div class='description'>
+-<p class="assertion" title="Assertion">The character encoding of a page can be set using the HTTP header charset declaration.</p>
+-<div class="notes"><p><p>The test contains a div with a class name that contains the following sequence of bytes: 0xC3 0xBD 0xC3 0xA4 0xC3 0xA8. These represent different sequences of characters in ISO 8859-15, ISO 8859-1 and UTF-8. The external, UTF-8-encoded stylesheet contains a selector <code>.test div.&#x00C3;&#x0153;&#x00C3;&#x20AC;&#x00C3;&#x0161;</code>. This matches the sequence of bytes above when they are interpreted as ISO 8859-15. If the class name matches the selector then the test will pass.</p><p>The only character encoding declaration for this HTML file is in the HTTP header, which sets the encoding to ISO 8859-15.</p></p>
+-</div>
+-</div>
+-<div class="nexttest"><div><a href="generate?test=the-input-byte-stream-003">Next test</a></div><div class="doctype">HTML5</div>
+-<p class="jump">the-input-byte-stream-001<br /><a href="/International/tests/html5/the-input-byte-stream/results-basics#basics" target="_blank">Result summary &amp; related tests</a><br /><a href="http://w3c-test.org/framework/details/i18n-html5/the-input-byte-stream-001" target="_blank">Detailed results for this test</a><br/>	<a href="http://www.w3.org/TR/html5/syntax.html#the-input-byte-stream" target="_blank">Link to spec</a></p>
+-<div class='prereq'>Assumptions: <ul><li>The default encoding for the browser you are testing is not set to ISO 8859-15.</li>
+-				<li>The test is read from a server that supports HTTP.</li></ul></div>
+-</div>
+-<script>
+-test(function() {
+-assert_equals(document.getElementById('box').offsetWidth, 100);
+-}, " ");
+-</script>
+-
+-</body>
+-</html>
+-
+-
+diff --git a/Godeps/_workspace/src/code.google.com/p/go.net/html/charset/testdata/HTTP-vs-UTF-8-BOM.html b/Godeps/_workspace/src/code.google.com/p/go.net/html/charset/testdata/HTTP-vs-UTF-8-BOM.html
+deleted file mode 100644
+index 26e5d8b..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/go.net/html/charset/testdata/HTTP-vs-UTF-8-BOM.html
++++ /dev/null
+@@ -1,48 +0,0 @@
+-<!DOCTYPE html>
+-<html  lang="en" >
+-<head>
+-  <title>HTTP vs UTF-8 BOM</title>
+-<link rel='author' title='Richard Ishida' href='mailto:ishida at w3.org'>
+-<link rel='help' href='http://www.w3.org/TR/html5/syntax.html#the-input-byte-stream'>
+-<link rel="stylesheet" type="text/css" href="./generatedtests.css">
+-<script src="http://w3c-test.org/resources/testharness.js"></script>
+-<script src="http://w3c-test.org/resources/testharnessreport.js"></script>
+-<meta name='flags' content='http'>
+-<meta name="assert" content="A character encoding set in the HTTP header has lower precedence than the UTF-8 signature.">
+-<style type='text/css'>
+-.test div { width: 50px; }</style>
+-<link rel="stylesheet" type="text/css" href="the-input-byte-stream/support/encodingtests-utf8.css">
+-</head>
+-<body>
+-<p class='title'>HTTP vs UTF-8 BOM</p>
+-
+-
+-<div id='log'></div>
+-
+-
+-<div class='test'><div id='box' class='ýäè'>&#xA0;</div></div>
+-
+-
+-
+-
+-
+-<div class='description'>
+-<p class="assertion" title="Assertion">A character encoding set in the HTTP header has lower precedence than the UTF-8 signature.</p>
+-<div class="notes"><p><p>The HTTP header attempts to set the character encoding to ISO 8859-15. The page starts with a UTF-8 signature.</p><p>The test contains a div with a class name that contains the following sequence of bytes: 0xC3 0xBD 0xC3 0xA4 0xC3 0xA8. These represent different sequences of characters in ISO 8859-15, ISO 8859-1 and UTF-8. The external, UTF-8-encoded stylesheet contains a selector <code>.test div.&#x00FD;&#x00E4;&#x00E8;</code>. This matches the sequence of bytes above when they are interpreted as UTF-8. If the class name matches the selector then the test will pass.</p><p>If the test is unsuccessful, the characters &#x00EF;&#x00BB;&#x00BF; should appear at the top of the page.  These represent the bytes that make up the UTF-8 signature when encountered in the ISO 8859-15 encoding.</p></p>
+-</div>
+-</div>
+-<div class="nexttest"><div><a href="generate?test=the-input-byte-stream-022">Next test</a></div><div class="doctype">HTML5</div>
+-<p class="jump">the-input-byte-stream-034<br /><a href="/International/tests/html5/the-input-byte-stream/results-basics#precedence" target="_blank">Result summary &amp; related tests</a><br /><a href="http://w3c-test.org/framework/details/i18n-html5/the-input-byte-stream-034" target="_blank">Detailed results for this test</a><br/>	<a href="http://www.w3.org/TR/html5/syntax.html#the-input-byte-stream" target="_blank">Link to spec</a></p>
+-<div class='prereq'>Assumptions: <ul><li>The default encoding for the browser you are testing is not set to ISO 8859-15.</li>
+-				<li>The test is read from a server that supports HTTP.</li></ul></div>
+-</div>
+-<script>
+-test(function() {
+-assert_equals(document.getElementById('box').offsetWidth, 100);
+-}, " ");
+-</script>
+-
+-</body>
+-</html>
+-
+-
+diff --git a/Godeps/_workspace/src/code.google.com/p/go.net/html/charset/testdata/HTTP-vs-meta-charset.html b/Godeps/_workspace/src/code.google.com/p/go.net/html/charset/testdata/HTTP-vs-meta-charset.html
+deleted file mode 100644
+index 2f07e95..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/go.net/html/charset/testdata/HTTP-vs-meta-charset.html
++++ /dev/null
+@@ -1,49 +0,0 @@
+-<!DOCTYPE html>
+-<html  lang="en" >
+-<head>
+- <meta charset="iso-8859-1" > <title>HTTP vs meta charset</title>
+-<link rel='author' title='Richard Ishida' href='mailto:ishida at w3.org'>
+-<link rel='help' href='http://www.w3.org/TR/html5/syntax.html#the-input-byte-stream'>
+-<link rel="stylesheet" type="text/css" href="./generatedtests.css">
+-<script src="http://w3c-test.org/resources/testharness.js"></script>
+-<script src="http://w3c-test.org/resources/testharnessreport.js"></script>
+-<meta name='flags' content='http'>
+-<meta name="assert" content="The HTTP header has a higher precedence than an encoding declaration in a meta charset attribute.">
+-<style type='text/css'>
+-.test div { width: 50px; }.test div { width: 90px; }
+-</style>
+-<link rel="stylesheet" type="text/css" href="the-input-byte-stream/support/encodingtests-15.css">
+-</head>
+-<body>
+-<p class='title'>HTTP vs meta charset</p>
+-
+-
+-<div id='log'></div>
+-
+-
+-<div class='test'><div id='box' class='ýäè'>&#xA0;</div></div>
+-
+-
+-
+-
+-
+-<div class='description'>
+-<p class="assertion" title="Assertion">The HTTP header has a higher precedence than an encoding declaration in a meta charset attribute.</p>
+-<div class="notes"><p><p>The HTTP header attempts to set the character encoding to ISO 8859-15. The page contains an encoding declaration in a meta charset attribute that attempts to set the character encoding to ISO 8859-1.</p><p>The test contains a div with a class name that contains the following sequence of bytes: 0xC3 0xBD 0xC3 0xA4 0xC3 0xA8. These represent different sequences of characters in ISO 8859-15, ISO 8859-1 and UTF-8. The external, UTF-8-encoded stylesheet contains a selector <code>.test div.&#x00C3;&#x0153;&#x00C3;&#x20AC;&#x00C3;&#x0161;</code>. This matches the sequence of bytes above when they are interpreted as ISO 8859-15. If the class name matches the selector then the test will pass.</p></p>
+-</div>
+-</div>
+-<div class="nexttest"><div><a href="generate?test=the-input-byte-stream-037">Next test</a></div><div class="doctype">HTML5</div>
+-<p class="jump">the-input-byte-stream-018<br /><a href="/International/tests/html5/the-input-byte-stream/results-basics#precedence" target="_blank">Result summary &amp; related tests</a><br /><a href="http://w3c-test.org/framework/details/i18n-html5/the-input-byte-stream-018" target="_blank">Detailed results for this test</a><br/>	<a href="http://www.w3.org/TR/html5/syntax.html#the-input-byte-stream" target="_blank">Link to spec</a></p>
+-<div class='prereq'>Assumptions: <ul><li>The default encoding for the browser you are testing is not set to ISO 8859-15.</li>
+-				<li>The test is read from a server that supports HTTP.</li></ul></div>
+-</div>
+-<script>
+-test(function() {
+-assert_equals(document.getElementById('box').offsetWidth, 100);
+-}, " ");
+-</script>
+-
+-</body>
+-</html>
+-
+-
+diff --git a/Godeps/_workspace/src/code.google.com/p/go.net/html/charset/testdata/HTTP-vs-meta-content.html b/Godeps/_workspace/src/code.google.com/p/go.net/html/charset/testdata/HTTP-vs-meta-content.html
+deleted file mode 100644
+index 6853cdd..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/go.net/html/charset/testdata/HTTP-vs-meta-content.html
++++ /dev/null
+@@ -1,49 +0,0 @@
+-<!DOCTYPE html>
+-<html  lang="en" >
+-<head>
+- <meta http-equiv="content-type" content="text/html;charset=iso-8859-1" > <title>HTTP vs meta content</title>
+-<link rel='author' title='Richard Ishida' href='mailto:ishida at w3.org'>
+-<link rel='help' href='http://www.w3.org/TR/html5/syntax.html#the-input-byte-stream'>
+-<link rel="stylesheet" type="text/css" href="./generatedtests.css">
+-<script src="http://w3c-test.org/resources/testharness.js"></script>
+-<script src="http://w3c-test.org/resources/testharnessreport.js"></script>
+-<meta name='flags' content='http'>
+-<meta name="assert" content="The HTTP header has a higher precedence than an encoding declaration in a meta content attribute.">
+-<style type='text/css'>
+-.test div { width: 50px; }.test div { width: 90px; }
+-</style>
+-<link rel="stylesheet" type="text/css" href="the-input-byte-stream/support/encodingtests-15.css">
+-</head>
+-<body>
+-<p class='title'>HTTP vs meta content</p>
+-
+-
+-<div id='log'></div>
+-
+-
+-<div class='test'><div id='box' class='ýäè'>&#xA0;</div></div>
+-
+-
+-
+-
+-
+-<div class='description'>
+-<p class="assertion" title="Assertion">The HTTP header has a higher precedence than an encoding declaration in a meta content attribute.</p>
+-<div class="notes"><p><p>The HTTP header attempts to set the character encoding to ISO 8859-15. The page contains an encoding declaration in a meta content attribute that attempts to set the character encoding to ISO 8859-1.</p><p>The test contains a div with a class name that contains the following sequence of bytes: 0xC3 0xBD 0xC3 0xA4 0xC3 0xA8. These represent different sequences of characters in ISO 8859-15, ISO 8859-1 and UTF-8. The external, UTF-8-encoded stylesheet contains a selector <code>.test div.&#x00C3;&#x0153;&#x00C3;&#x20AC;&#x00C3;&#x0161;</code>. This matches the sequence of bytes above when they are interpreted as ISO 8859-15. If the class name matches the selector then the test will pass.</p></p>
+-</div>
+-</div>
+-<div class="nexttest"><div><a href="generate?test=the-input-byte-stream-018">Next test</a></div><div class="doctype">HTML5</div>
+-<p class="jump">the-input-byte-stream-016<br /><a href="/International/tests/html5/the-input-byte-stream/results-basics#precedence" target="_blank">Result summary &amp; related tests</a><br /><a href="http://w3c-test.org/framework/details/i18n-html5/the-input-byte-stream-016" target="_blank">Detailed results for this test</a><br/>	<a href="http://www.w3.org/TR/html5/syntax.html#the-input-byte-stream" target="_blank">Link to spec</a></p>
+-<div class='prereq'>Assumptions: <ul><li>The default encoding for the browser you are testing is not set to ISO 8859-15.</li>
+-				<li>The test is read from a server that supports HTTP.</li></ul></div>
+-</div>
+-<script>
+-test(function() {
+-assert_equals(document.getElementById('box').offsetWidth, 100);
+-}, " ");
+-</script>
+-
+-</body>
+-</html>
+-
+-
+diff --git a/Godeps/_workspace/src/code.google.com/p/go.net/html/charset/testdata/No-encoding-declaration.html b/Godeps/_workspace/src/code.google.com/p/go.net/html/charset/testdata/No-encoding-declaration.html
+deleted file mode 100644
+index 612e26c..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/go.net/html/charset/testdata/No-encoding-declaration.html
++++ /dev/null
+@@ -1,47 +0,0 @@
+-<!DOCTYPE html>
+-<html  lang="en" >
+-<head>
+-  <title>No encoding declaration</title>
+-<link rel='author' title='Richard Ishida' href='mailto:ishida at w3.org'>
+-<link rel='help' href='http://www.w3.org/TR/html5/syntax.html#the-input-byte-stream'>
+-<link rel="stylesheet" type="text/css" href="./generatedtests.css">
+-<script src="http://w3c-test.org/resources/testharness.js"></script>
+-<script src="http://w3c-test.org/resources/testharnessreport.js"></script>
+-<meta name='flags' content='http'>
+-<meta name="assert" content="A page with no encoding information in HTTP, BOM, XML declaration or meta element will be treated as UTF-8.">
+-<style type='text/css'>
+-.test div { width: 50px; }</style>
+-<link rel="stylesheet" type="text/css" href="the-input-byte-stream/support/encodingtests-utf8.css">
+-</head>
+-<body>
+-<p class='title'>No encoding declaration</p>
+-
+-
+-<div id='log'></div>
+-
+-
+-<div class='test'><div id='box' class='ýäè'>&#xA0;</div></div>
+-
+-
+-
+-
+-
+-<div class='description'>
+-<p class="assertion" title="Assertion">A page with no encoding information in HTTP, BOM, XML declaration or meta element will be treated as UTF-8.</p>
+-<div class="notes"><p><p>The test on this page contains a div with a class name that contains the following sequence of bytes: 0xC3 0xBD 0xC3 0xA4 0xC3 0xA8. These represent different sequences of characters in ISO 8859-15, ISO 8859-1 and UTF-8. The external, UTF-8-encoded stylesheet contains a selector <code>.test div.&#x00FD;&#x00E4;&#x00E8;</code>. This matches the sequence of bytes above when they are interpreted as UTF-8. If the class name matches the selector then the test will pass.</p></p>
+-</div>
+-</div>
+-<div class="nexttest"><div><a href="generate?test=the-input-byte-stream-034">Next test</a></div><div class="doctype">HTML5</div>
+-<p class="jump">the-input-byte-stream-015<br /><a href="/International/tests/html5/the-input-byte-stream/results-basics#basics" target="_blank">Result summary &amp; related tests</a><br /><a href="http://w3c-test.org/framework/details/i18n-html5/the-input-byte-stream-015" target="_blank">Detailed results for this test</a><br/>	<a href="http://www.w3.org/TR/html5/syntax.html#the-input-byte-stream" target="_blank">Link to spec</a></p>
+-<div class='prereq'>Assumptions: <ul><li>The test is read from a server that supports HTTP.</li></ul></div>
+-</div>
+-<script>
+-test(function() {
+-assert_equals(document.getElementById('box').offsetWidth, 100);
+-}, " ");
+-</script>
+-
+-</body>
+-</html>
+-
+-
+diff --git a/Godeps/_workspace/src/code.google.com/p/go.net/html/charset/testdata/README b/Godeps/_workspace/src/code.google.com/p/go.net/html/charset/testdata/README
+deleted file mode 100644
+index a8e1fa4..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/go.net/html/charset/testdata/README
++++ /dev/null
+@@ -1 +0,0 @@
+-These test cases come from http://www.w3.org/International/tests/html5/the-input-byte-stream/results-basics
+diff --git a/Godeps/_workspace/src/code.google.com/p/go.net/html/charset/testdata/UTF-16BE-BOM.html b/Godeps/_workspace/src/code.google.com/p/go.net/html/charset/testdata/UTF-16BE-BOM.html
+deleted file mode 100644
+index 3abf7a9343c20518e57dfea58b374fb0f4fb58a1..0000000000000000000000000000000000000000
+GIT binary patch
+literal 0
+HcmV?d00001
+
+literal 2670
+zcmcJR?QRoS5Qc}JAoU&=BQ-(7b^;2j8i*i3RV1JlO@;VXIsPurV!WHiDdLW}i`*CO
+z^UnC>tih=KsVr;H&Y7?C&O3AV(?534uG?e##U9y_y|!QNi4``n+D>d{2lky^LnFNx
+z?9HrarH$>rwQR_$g)Hk0*&STI*EYq|47~&U9sfUB+ji})9eR{QqCUra7oDsZ5obtB
+zdxP%<)-$4Q;rSHJiM>U(#ZI=;?n^BC?Dp6lu=~_1-lnX3u03&2BlmQIY>L+!Uq7<S
+znh)&E?pViTjIm26+mz45Gn;?mU1-%d$8(q8ng2R#e!F1tlD&lM9_z}^IdM&9OX8=U
+z8%PwVO_n7-g+SYm(XCxt at f1Qm>XoytKw^Q#oZSM?3*J?)&ojG&yzQRkC!M<M9xE_7
+zb;}_hR3kl=jSw#Vt-|I{q%Ck#9h-3za!uL)n~QLmd*yVN|H|tGZJ}Lo7NIwEW{jNQ
+zV@@K5_3@^fi08HMCj^^V*Hl9s7bDNfAUw%xiKL5{%KZf*9rq_B3%EJ8zj(gqf5v)%
+zbOLV*+p`@!Ep4CmhgBB}-xMo+eXUno4NY--$glQJ%^9|ktY at fB&Rr7SEd-RMIzBO=
+z at -E&3<2aeBADM{J>l5JE?ax;lp_NYEcdUht`ZswOviB~L5hmJ|pXI71nn20w;>vG!
+zQGB$EE9&wC``&J#_Ym~<oskhM*qPSKA~LzoN!pzH1>PgRu-Bd>1!pOp0||k`kr=VJ
+zfH6I6rmRaeHA7U-A^OTsT+|d2a^i(>DePzZ{)ibXoCBvJnuYrd-3kkN$u<La`*flh
+zDi+>y{qQK;=*Y;S87ro12aTgu^i*%f8zC3>a}9DIe4cfxOzsCw&(cqvP9{ud{N6f`
+z#TNDY(B6 at Gpr|uN+%&x^XZjBHdc at 2vsM(Tyc2=vshHQ5w+obmp>tuWT(t4BTUGAQw
+zxeI$UGSLUBg=WFbF;4f at 4=^P2AgY at CFn8A`bcC=_&~)fiDe)#cUARRBzJ^k|%X)69
+z+{Cb`wq}Rsg%B62CC_tK!AV(W{(MV?#mndR46CU#BUN<{8e?*oT+!pE5wF#O#TR#a
+z$9qRT)tpbw8zAI~QQJg2C3|6$I%(T(;`zOMy6SO+&;pG=c#2P|P-WZn$$DpWJlC3U
+z3*nvm<q%|^qPyLgA~&hNxH!U(CgUrj$Lv*i?ZToRve;kc at WJ`8#Z)Pn$q5nRA5|>z
+zwP{u~r$L?-m3uqp9I1+#3yE|3M$(s-BE<Joa8PqdUta}ZQ2KUivf!ALM1?f7$7oIM
+sZ)BUR)d7uk!p%4L`mByQto|PReD2~`cUQB{U7yke at NV7*jW5Z60Z{<B#sB~S
+
+diff --git a/Godeps/_workspace/src/code.google.com/p/go.net/html/charset/testdata/UTF-16LE-BOM.html b/Godeps/_workspace/src/code.google.com/p/go.net/html/charset/testdata/UTF-16LE-BOM.html
+deleted file mode 100644
+index 76254c980c29954f1bbd96e145c79f64c43e3355..0000000000000000000000000000000000000000
+GIT binary patch
+literal 0
+HcmV?d00001
+
+literal 2682
+zcmcJR?QRoS5Qc}JMdBV7Bat9sI{^h%O^8Y;RgubvAPDiRa{S#oi<{jL2gDuqE^=SM
+z^UnC>tih=LQ>`qYoiktOop<K!=TCcf-F~rW_RtRPjXk$VR at lU9JGPna+cmptdzbG8
+zdo$}<X=A%@EgQ0GA<KG0b_bX5wN3FfLvP<+;r~}_+qT`a-#y9!QJ>(wi%!;yh%+Rm
+z{e|xntY<{q!1F1Z6MKtngPm-p-4|H&+3m4AVE3_AyiHm6Tzlf4M(*ht*%YrezJ6kr
+zHGj4<yK5bfF~%;PY+XJR&uspUccE9?9M4^zGk-cOe!F1tg1v<E4(rO!IdM&93*x7p
+z8%PwVO_n7-g+SYm(5+os@h^mW)GKFOfy4<Gb9M_npYX1FeVy4|<ZbsPKk3w6_gI0!
+zsap>5pc?64*$Cm%-zseWMA`x;)v*~jA=i}szqts9xmQkS`M11|(H7bTXAycsXU53+
+zJ?120SRZeyiFjW7enPN`bxk$IaWV3o48oJF7D&2ysoY;6(s6%6vVfaYd&mC=erK!)
+zNGI^7upQgN)53OHe_VE<@J+G8*Y|p*)zB2Thdi}+YR<5QWHm!|a_*AoZXuv7)$xe|
+zm3Q$D7{|#}{m4X&UY!6(ZhyYi2(5JLzGE$H)W6BQklnjPMwn<<eiqA`XaXgx3wwFx
+z!u}~PtanA0H|+*`4?u6%85yyHooTHsB9rT!q|K?H;yvOEd+kY5aF)_JkPs*wi4l7z
+zFs6sily!-wW{B!JL|^%di<&}0PP`B<h5bg~A2MTwbKo>Yvv7Z*TVWwD*=E3QpH37*
+z#lqXJA0A~J9T_<^W5smspmDg2p6ac5Bjn<Ku0igDud_~-$^D?|S^A07$%M&_=dJTt
+zY*DWd?Qb#<6m_PEo2FOgOy8nj51F|IHCvF+)^fGekZmtz>+~LAoow%1TCdZ*$K8`O
+zw_$HaCi+0N&@7la#_7KL5r$+QL{)Pi=I&aDjt~|Knht#`CEi4*3%97i_fSfAS<fw%
+zn-~_=*6h%{5aL3$<o}#ia8j0;KmVn|;^h-=WmR6xNL8JK#+ckCSM<1P#A|h6v2v#$
+zaHn^?chpnO`P94t_OVibB~EP;@09$7PU@viyM@*V*V7k=p6Ga?P}?75Bwndfm2J{5
+zs~ytuoNMwC?x}AMK<F{Ln~iC5i;Ts|5q>lwUz0=3V0GCxY}z81UC-nP=CGt2OqYV$
+zoRCo+qM9YX*3FFORLC=<a&JeRBULkVB5_aOO8Vkbg!qmME@~d>E3B~S@+KROyk4r5
+yX7?DaslDfIebqXgC!KKp4IYy+W~X?ddE6o=`A+x#x0AK&6MF#W&AXxbRrv+SX}PNa
+
+diff --git a/Godeps/_workspace/src/code.google.com/p/go.net/html/charset/testdata/UTF-8-BOM-vs-meta-charset.html b/Godeps/_workspace/src/code.google.com/p/go.net/html/charset/testdata/UTF-8-BOM-vs-meta-charset.html
+deleted file mode 100644
+index 83de433..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/go.net/html/charset/testdata/UTF-8-BOM-vs-meta-charset.html
++++ /dev/null
+@@ -1,49 +0,0 @@
+-<!DOCTYPE html>
+-<html  lang="en" >
+-<head>
+- <meta charset="iso-8859-15"> <title>UTF-8 BOM vs meta charset</title>
+-<link rel='author' title='Richard Ishida' href='mailto:ishida@w3.org'>
+-<link rel='help' href='http://www.w3.org/TR/html5/syntax.html#the-input-byte-stream'>
+-<link rel="stylesheet" type="text/css" href="./generatedtests.css">
+-<script src="http://w3c-test.org/resources/testharness.js"></script>
+-<script src="http://w3c-test.org/resources/testharnessreport.js"></script>
+-<meta name='flags' content='http'>
+-<meta name="assert" content="A page with a UTF-8 BOM will be recognized as UTF-8 even if the meta charset attribute declares a different encoding.">
+-<style type='text/css'>
+-.test div { width: 50px; }.test div { width: 90px; }
+-</style>
+-<link rel="stylesheet" type="text/css" href="the-input-byte-stream/support/encodingtests-utf8.css">
+-</head>
+-<body>
+-<p class='title'>UTF-8 BOM vs meta charset</p>
+-
+-
+-<div id='log'></div>
+-
+-
+-<div class='test'><div id='box' class='ýäè'>&#xA0;</div></div>
+-
+-
+-
+-
+-
+-<div class='description'>
+-<p class="assertion" title="Assertion">A page with a UTF-8 BOM will be recognized as UTF-8 even if the meta charset attribute declares a different encoding.</p>
+-<div class="notes"><p><p>The page contains an encoding declaration in a meta charset attribute that attempts to set the character encoding to ISO 8859-15, but the file starts with a UTF-8 signature.</p><p>The test contains a div with a class name that contains the following sequence of bytes: 0xC3 0xBD 0xC3 0xA4 0xC3 0xA8. These represent different sequences of characters in ISO 8859-15, ISO 8859-1 and UTF-8. The external, UTF-8-encoded stylesheet contains a selector <code>.test div.&#x00FD;&#x00E4;&#x00E8;</code>. This matches the sequence of bytes above when they are interpreted as UTF-8. If the class name matches the selector then the test will pass.</p></p>
+-</div>
+-</div>
+-<div class="nexttest"><div><a href="generate?test=the-input-byte-stream-024">Next test</a></div><div class="doctype">HTML5</div>
+-<p class="jump">the-input-byte-stream-038<br /><a href="/International/tests/html5/the-input-byte-stream/results-basics#precedence" target="_blank">Result summary &amp; related tests</a><br /><a href="http://w3c-test.org/framework/details/i18n-html5/the-input-byte-stream-038" target="_blank">Detailed results for this test</a><br/>	<a href="http://www.w3.org/TR/html5/syntax.html#the-input-byte-stream" target="_blank">Link to spec</a></p>
+-<div class='prereq'>Assumptions: <ul><li>The default encoding for the browser you are testing is not set to ISO 8859-15.</li>
+-				<li>The test is read from a server that supports HTTP.</li></ul></div>
+-</div>
+-<script>
+-test(function() {
+-assert_equals(document.getElementById('box').offsetWidth, 100);
+-}, " ");
+-</script>
+-
+-</body>
+-</html>
+-
+-
+diff --git a/Godeps/_workspace/src/code.google.com/p/go.net/html/charset/testdata/UTF-8-BOM-vs-meta-content.html b/Godeps/_workspace/src/code.google.com/p/go.net/html/charset/testdata/UTF-8-BOM-vs-meta-content.html
+deleted file mode 100644
+index 501aac2..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/go.net/html/charset/testdata/UTF-8-BOM-vs-meta-content.html
++++ /dev/null
+@@ -1,48 +0,0 @@
+-<!DOCTYPE html>
+-<html  lang="en" >
+-<head>
+- <meta http-equiv="content-type" content="text/html; charset=iso-8859-15"> <title>UTF-8 BOM vs meta content</title>
+-<link rel='author' title='Richard Ishida' href='mailto:ishida@w3.org'>
+-<link rel='help' href='http://www.w3.org/TR/html5/syntax.html#the-input-byte-stream'>
+-<link rel="stylesheet" type="text/css" href="./generatedtests.css">
+-<script src="http://w3c-test.org/resources/testharness.js"></script>
+-<script src="http://w3c-test.org/resources/testharnessreport.js"></script>
+-<meta name='flags' content='http'>
+-<meta name="assert" content="A page with a UTF-8 BOM will be recognized as UTF-8 even if the meta content attribute declares a different encoding.">
+-<style type='text/css'>
+-.test div { width: 50px; }</style>
+-<link rel="stylesheet" type="text/css" href="the-input-byte-stream/support/encodingtests-utf8.css">
+-</head>
+-<body>
+-<p class='title'>UTF-8 BOM vs meta content</p>
+-
+-
+-<div id='log'></div>
+-
+-
+-<div class='test'><div id='box' class='ýäè'>&#xA0;</div></div>
+-
+-
+-
+-
+-
+-<div class='description'>
+-<p class="assertion" title="Assertion">A page with a UTF-8 BOM will be recognized as UTF-8 even if the meta content attribute declares a different encoding.</p>
+-<div class="notes"><p><p>The page contains an encoding declaration in a meta content attribute that attempts to set the character encoding to ISO 8859-15, but the file starts with a UTF-8 signature.</p><p>The test contains a div with a class name that contains the following sequence of bytes: 0xC3 0xBD 0xC3 0xA4 0xC3 0xA8. These represent different sequences of characters in ISO 8859-15, ISO 8859-1 and UTF-8. The external, UTF-8-encoded stylesheet contains a selector <code>.test div.&#x00FD;&#x00E4;&#x00E8;</code>. This matches the sequence of bytes above when they are interpreted as UTF-8. If the class name matches the selector then the test will pass.</p></p>
+-</div>
+-</div>
+-<div class="nexttest"><div><a href="generate?test=the-input-byte-stream-038">Next test</a></div><div class="doctype">HTML5</div>
+-<p class="jump">the-input-byte-stream-037<br /><a href="/International/tests/html5/the-input-byte-stream/results-basics#precedence" target="_blank">Result summary &amp; related tests</a><br /><a href="http://w3c-test.org/framework/details/i18n-html5/the-input-byte-stream-037" target="_blank">Detailed results for this test</a><br/>	<a href="http://www.w3.org/TR/html5/syntax.html#the-input-byte-stream" target="_blank">Link to spec</a></p>
+-<div class='prereq'>Assumptions: <ul><li>The default encoding for the browser you are testing is not set to ISO 8859-15.</li>
+-				<li>The test is read from a server that supports HTTP.</li></ul></div>
+-</div>
+-<script>
+-test(function() {
+-assert_equals(document.getElementById('box').offsetWidth, 100);
+-}, " ");
+-</script>
+-
+-</body>
+-</html>
+-
+-
+diff --git a/Godeps/_workspace/src/code.google.com/p/go.net/html/charset/testdata/meta-charset-attribute.html b/Godeps/_workspace/src/code.google.com/p/go.net/html/charset/testdata/meta-charset-attribute.html
+deleted file mode 100644
+index 2d7d25a..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/go.net/html/charset/testdata/meta-charset-attribute.html
++++ /dev/null
+@@ -1,48 +0,0 @@
+-<!DOCTYPE html>
+-<html  lang="en" >
+-<head>
+- <meta charset="iso-8859-15"> <title>meta charset attribute</title>
+-<link rel='author' title='Richard Ishida' href='mailto:ishida@w3.org'>
+-<link rel='help' href='http://www.w3.org/TR/html5/syntax.html#the-input-byte-stream'>
+-<link rel="stylesheet" type="text/css" href="./generatedtests.css">
+-<script src="http://w3c-test.org/resources/testharness.js"></script>
+-<script src="http://w3c-test.org/resources/testharnessreport.js"></script>
+-<meta name='flags' content='http'>
+-<meta name="assert" content="The character encoding of the page can be set by a meta element with charset attribute.">
+-<style type='text/css'>
+-.test div { width: 50px; }</style>
+-<link rel="stylesheet" type="text/css" href="the-input-byte-stream/support/encodingtests-15.css">
+-</head>
+-<body>
+-<p class='title'>meta charset attribute</p>
+-
+-
+-<div id='log'></div>
+-
+-
+-<div class='test'><div id='box' class='ýäè'>&#xA0;</div></div>
+-
+-
+-
+-
+-
+-<div class='description'>
+-<p class="assertion" title="Assertion">The character encoding of the page can be set by a meta element with charset attribute.</p>
+-<div class="notes"><p><p>The only character encoding declaration for this HTML file is in the charset attribute of the meta element, which declares the encoding to be ISO 8859-15.</p><p>The test contains a div with a class name that contains the following sequence of bytes: 0xC3 0xBD 0xC3 0xA4 0xC3 0xA8. These represent different sequences of characters in ISO 8859-15, ISO 8859-1 and UTF-8. The external, UTF-8-encoded stylesheet contains a selector <code>.test div.&#x00C3;&#x0153;&#x00C3;&#x20AC;&#x00C3;&#x0161;</code>. This matches the sequence of bytes above when they are interpreted as ISO 8859-15. If the class name matches the selector then the test will pass.</p></p>
+-</div>
+-</div>
+-<div class="nexttest"><div><a href="generate?test=the-input-byte-stream-015">Next test</a></div><div class="doctype">HTML5</div>
+-<p class="jump">the-input-byte-stream-009<br /><a href="/International/tests/html5/the-input-byte-stream/results-basics#basics" target="_blank">Result summary &amp; related tests</a><br /><a href="http://w3c-test.org/framework/details/i18n-html5/the-input-byte-stream-009" target="_blank">Detailed results for this test</a><br/>	<a href="http://www.w3.org/TR/html5/syntax.html#the-input-byte-stream" target="_blank">Link to spec</a></p>
+-<div class='prereq'>Assumptions: <ul><li>The default encoding for the browser you are testing is not set to ISO 8859-15.</li>
+-				<li>The test is read from a server that supports HTTP.</li></ul></div>
+-</div>
+-<script>
+-test(function() {
+-assert_equals(document.getElementById('box').offsetWidth, 100);
+-}, " ");
+-</script>
+-
+-</body>
+-</html>
+-
+-
+diff --git a/Godeps/_workspace/src/code.google.com/p/go.net/html/charset/testdata/meta-content-attribute.html b/Godeps/_workspace/src/code.google.com/p/go.net/html/charset/testdata/meta-content-attribute.html
+deleted file mode 100644
+index 1c3f228..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/go.net/html/charset/testdata/meta-content-attribute.html
++++ /dev/null
+@@ -1,48 +0,0 @@
+-<!DOCTYPE html>
+-<html  lang="en" >
+-<head>
+- <meta http-equiv="content-type" content="text/html; charset=iso-8859-15"> <title>meta content attribute</title>
+-<link rel='author' title='Richard Ishida' href='mailto:ishida@w3.org'>
+-<link rel='help' href='http://www.w3.org/TR/html5/syntax.html#the-input-byte-stream'>
+-<link rel="stylesheet" type="text/css" href="./generatedtests.css">
+-<script src="http://w3c-test.org/resources/testharness.js"></script>
+-<script src="http://w3c-test.org/resources/testharnessreport.js"></script>
+-<meta name='flags' content='http'>
+-<meta name="assert" content="The character encoding of the page can be set by a meta element with http-equiv and content attributes.">
+-<style type='text/css'>
+-.test div { width: 50px; }</style>
+-<link rel="stylesheet" type="text/css" href="the-input-byte-stream/support/encodingtests-15.css">
+-</head>
+-<body>
+-<p class='title'>meta content attribute</p>
+-
+-
+-<div id='log'></div>
+-
+-
+-<div class='test'><div id='box' class='ýäè'>&#xA0;</div></div>
+-
+-
+-
+-
+-
+-<div class='description'>
+-<p class="assertion" title="Assertion">The character encoding of the page can be set by a meta element with http-equiv and content attributes.</p>
+-<div class="notes"><p><p>The only character encoding declaration for this HTML file is in the content attribute of the meta element, which declares the encoding to be ISO 8859-15.</p><p>The test contains a div with a class name that contains the following sequence of bytes: 0xC3 0xBD 0xC3 0xA4 0xC3 0xA8. These represent different sequences of characters in ISO 8859-15, ISO 8859-1 and UTF-8. The external, UTF-8-encoded stylesheet contains a selector <code>.test div.&#x00C3;&#x0153;&#x00C3;&#x20AC;&#x00C3;&#x0161;</code>. This matches the sequence of bytes above when they are interpreted as ISO 8859-15. If the class name matches the selector then the test will pass.</p></p>
+-</div>
+-</div>
+-<div class="nexttest"><div><a href="generate?test=the-input-byte-stream-009">Next test</a></div><div class="doctype">HTML5</div>
+-<p class="jump">the-input-byte-stream-007<br /><a href="/International/tests/html5/the-input-byte-stream/results-basics#basics" target="_blank">Result summary &amp; related tests</a><br /><a href="http://w3c-test.org/framework/details/i18n-html5/the-input-byte-stream-007" target="_blank">Detailed results for this test</a><br/>	<a href="http://www.w3.org/TR/html5/syntax.html#the-input-byte-stream" target="_blank">Link to spec</a></p>
+-<div class='prereq'>Assumptions: <ul><li>The default encoding for the browser you are testing is not set to ISO 8859-15.</li>
+-				<li>The test is read from a server that supports HTTP.</li></ul></div>
+-</div>
+-<script>
+-test(function() {
+-assert_equals(document.getElementById('box').offsetWidth, 100);
+-}, " ");
+-</script>
+-
+-</body>
+-</html>
+-
+-
+diff --git a/Godeps/_workspace/src/code.google.com/p/go.net/html/const.go b/Godeps/_workspace/src/code.google.com/p/go.net/html/const.go
+deleted file mode 100644
+index d7cc8bb..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/go.net/html/const.go
++++ /dev/null
+@@ -1,100 +0,0 @@
+-// Copyright 2011 The Go Authors. All rights reserved.
+-// Use of this source code is governed by a BSD-style
+-// license that can be found in the LICENSE file.
+-
+-package html
+-
+-// Section 12.2.3.2 of the HTML5 specification says "The following elements
+-// have varying levels of special parsing rules".
+-// http://www.whatwg.org/specs/web-apps/current-work/multipage/parsing.html#the-stack-of-open-elements
+-var isSpecialElementMap = map[string]bool{
+-	"address":    true,
+-	"applet":     true,
+-	"area":       true,
+-	"article":    true,
+-	"aside":      true,
+-	"base":       true,
+-	"basefont":   true,
+-	"bgsound":    true,
+-	"blockquote": true,
+-	"body":       true,
+-	"br":         true,
+-	"button":     true,
+-	"caption":    true,
+-	"center":     true,
+-	"col":        true,
+-	"colgroup":   true,
+-	"command":    true,
+-	"dd":         true,
+-	"details":    true,
+-	"dir":        true,
+-	"div":        true,
+-	"dl":         true,
+-	"dt":         true,
+-	"embed":      true,
+-	"fieldset":   true,
+-	"figcaption": true,
+-	"figure":     true,
+-	"footer":     true,
+-	"form":       true,
+-	"frame":      true,
+-	"frameset":   true,
+-	"h1":         true,
+-	"h2":         true,
+-	"h3":         true,
+-	"h4":         true,
+-	"h5":         true,
+-	"h6":         true,
+-	"head":       true,
+-	"header":     true,
+-	"hgroup":     true,
+-	"hr":         true,
+-	"html":       true,
+-	"iframe":     true,
+-	"img":        true,
+-	"input":      true,
+-	"isindex":    true,
+-	"li":         true,
+-	"link":       true,
+-	"listing":    true,
+-	"marquee":    true,
+-	"menu":       true,
+-	"meta":       true,
+-	"nav":        true,
+-	"noembed":    true,
+-	"noframes":   true,
+-	"noscript":   true,
+-	"object":     true,
+-	"ol":         true,
+-	"p":          true,
+-	"param":      true,
+-	"plaintext":  true,
+-	"pre":        true,
+-	"script":     true,
+-	"section":    true,
+-	"select":     true,
+-	"style":      true,
+-	"summary":    true,
+-	"table":      true,
+-	"tbody":      true,
+-	"td":         true,
+-	"textarea":   true,
+-	"tfoot":      true,
+-	"th":         true,
+-	"thead":      true,
+-	"title":      true,
+-	"tr":         true,
+-	"ul":         true,
+-	"wbr":        true,
+-	"xmp":        true,
+-}
+-
+-func isSpecialElement(element *Node) bool {
+-	switch element.Namespace {
+-	case "", "html":
+-		return isSpecialElementMap[element.Data]
+-	case "svg":
+-		return element.Data == "foreignObject"
+-	}
+-	return false
+-}
+diff --git a/Godeps/_workspace/src/code.google.com/p/go.net/html/doc.go b/Godeps/_workspace/src/code.google.com/p/go.net/html/doc.go
+deleted file mode 100644
+index fac0f54..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/go.net/html/doc.go
++++ /dev/null
+@@ -1,106 +0,0 @@
+-// Copyright 2010 The Go Authors. All rights reserved.
+-// Use of this source code is governed by a BSD-style
+-// license that can be found in the LICENSE file.
+-
+-/*
+-Package html implements an HTML5-compliant tokenizer and parser.
+-
+-Tokenization is done by creating a Tokenizer for an io.Reader r. It is the
+-caller's responsibility to ensure that r provides UTF-8 encoded HTML.
+-
+-	z := html.NewTokenizer(r)
+-
+-Given a Tokenizer z, the HTML is tokenized by repeatedly calling z.Next(),
+-which parses the next token and returns its type, or an error:
+-
+-	for {
+-		tt := z.Next()
+-		if tt == html.ErrorToken {
+-			// ...
+-			return ...
+-		}
+-		// Process the current token.
+-	}
+-
+-There are two APIs for retrieving the current token. The high-level API is to
+-call Token; the low-level API is to call Text or TagName / TagAttr. Both APIs
+-allow optionally calling Raw after Next but before Token, Text, TagName, or
+-TagAttr. In EBNF notation, the valid call sequence per token is:
+-
+-	Next {Raw} [ Token | Text | TagName {TagAttr} ]
+-
+-Token returns an independent data structure that completely describes a token.
+-Entities (such as "&lt;") are unescaped, tag names and attribute keys are
+-lower-cased, and attributes are collected into a []Attribute. For example:
+-
+-	for {
+-		if z.Next() == html.ErrorToken {
+-			// Returning io.EOF indicates success.
+-			return z.Err()
+-		}
+-		emitToken(z.Token())
+-	}
+-
+-The low-level API performs fewer allocations and copies, but the contents of
+-the []byte values returned by Text, TagName and TagAttr may change on the next
+-call to Next. For example, to extract an HTML page's anchor text:
+-
+-	depth := 0
+-	for {
+-		tt := z.Next()
+-		switch tt {
+-		case ErrorToken:
+-			return z.Err()
+-		case TextToken:
+-			if depth > 0 {
+-				// emitBytes should copy the []byte it receives,
+-				// if it doesn't process it immediately.
+-				emitBytes(z.Text())
+-			}
+-		case StartTagToken, EndTagToken:
+-			tn, _ := z.TagName()
+-			if len(tn) == 1 && tn[0] == 'a' {
+-				if tt == StartTagToken {
+-					depth++
+-				} else {
+-					depth--
+-				}
+-			}
+-		}
+-	}
+-
+-Parsing is done by calling Parse with an io.Reader, which returns the root of
+-the parse tree (the document element) as a *Node. It is the caller's
+-responsibility to ensure that the Reader provides UTF-8 encoded HTML. For
+-example, to process each anchor node in depth-first order:
+-
+-	doc, err := html.Parse(r)
+-	if err != nil {
+-		// ...
+-	}
+-	var f func(*html.Node)
+-	f = func(n *html.Node) {
+-		if n.Type == html.ElementNode && n.Data == "a" {
+-			// Do something with n...
+-		}
+-		for c := n.FirstChild; c != nil; c = c.NextSibling {
+-			f(c)
+-		}
+-	}
+-	f(doc)
+-
+-The relevant specifications include:
+-http://www.whatwg.org/specs/web-apps/current-work/multipage/syntax.html and
+-http://www.whatwg.org/specs/web-apps/current-work/multipage/tokenization.html
+-*/
+-package html
+-
+-// The tokenization algorithm implemented by this package is not a line-by-line
+-// transliteration of the relatively verbose state-machine in the WHATWG
+-// specification. A more direct approach is used instead, where the program
+-// counter implies the state, such as whether it is tokenizing a tag or a text
+-// node. Specification compliance is verified by checking expected and actual
+-// outputs over a test suite rather than aiming for algorithmic fidelity.
+-
+-// TODO(nigeltao): Does a DOM API belong in this package or a separate one?
+-// TODO(nigeltao): How does parsing interact with a JavaScript engine?
+diff --git a/Godeps/_workspace/src/code.google.com/p/go.net/html/doctype.go b/Godeps/_workspace/src/code.google.com/p/go.net/html/doctype.go
+deleted file mode 100644
+index c484e5a..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/go.net/html/doctype.go
++++ /dev/null
+@@ -1,156 +0,0 @@
+-// Copyright 2011 The Go Authors. All rights reserved.
+-// Use of this source code is governed by a BSD-style
+-// license that can be found in the LICENSE file.
+-
+-package html
+-
+-import (
+-	"strings"
+-)
+-
+-// parseDoctype parses the data from a DoctypeToken into a name,
+-// public identifier, and system identifier. It returns a Node whose Type
+-// is DoctypeNode, whose Data is the name, and which has attributes
+-// named "system" and "public" for the two identifiers if they were present.
+-// quirks is whether the document should be parsed in "quirks mode".
+-func parseDoctype(s string) (n *Node, quirks bool) {
+-	n = &Node{Type: DoctypeNode}
+-
+-	// Find the name.
+-	space := strings.IndexAny(s, whitespace)
+-	if space == -1 {
+-		space = len(s)
+-	}
+-	n.Data = s[:space]
+-	// The comparison to "html" is case-sensitive.
+-	if n.Data != "html" {
+-		quirks = true
+-	}
+-	n.Data = strings.ToLower(n.Data)
+-	s = strings.TrimLeft(s[space:], whitespace)
+-
+-	if len(s) < 6 {
+-		// It can't start with "PUBLIC" or "SYSTEM".
+-		// Ignore the rest of the string.
+-		return n, quirks || s != ""
+-	}
+-
+-	key := strings.ToLower(s[:6])
+-	s = s[6:]
+-	for key == "public" || key == "system" {
+-		s = strings.TrimLeft(s, whitespace)
+-		if s == "" {
+-			break
+-		}
+-		quote := s[0]
+-		if quote != '"' && quote != '\'' {
+-			break
+-		}
+-		s = s[1:]
+-		q := strings.IndexRune(s, rune(quote))
+-		var id string
+-		if q == -1 {
+-			id = s
+-			s = ""
+-		} else {
+-			id = s[:q]
+-			s = s[q+1:]
+-		}
+-		n.Attr = append(n.Attr, Attribute{Key: key, Val: id})
+-		if key == "public" {
+-			key = "system"
+-		} else {
+-			key = ""
+-		}
+-	}
+-
+-	if key != "" || s != "" {
+-		quirks = true
+-	} else if len(n.Attr) > 0 {
+-		if n.Attr[0].Key == "public" {
+-			public := strings.ToLower(n.Attr[0].Val)
+-			switch public {
+-			case "-//w3o//dtd w3 html strict 3.0//en//", "-/w3d/dtd html 4.0 transitional/en", "html":
+-				quirks = true
+-			default:
+-				for _, q := range quirkyIDs {
+-					if strings.HasPrefix(public, q) {
+-						quirks = true
+-						break
+-					}
+-				}
+-			}
+-			// The following two public IDs only cause quirks mode if there is no system ID.
+-			if len(n.Attr) == 1 && (strings.HasPrefix(public, "-//w3c//dtd html 4.01 frameset//") ||
+-				strings.HasPrefix(public, "-//w3c//dtd html 4.01 transitional//")) {
+-				quirks = true
+-			}
+-		}
+-		if lastAttr := n.Attr[len(n.Attr)-1]; lastAttr.Key == "system" &&
+-			strings.ToLower(lastAttr.Val) == "http://www.ibm.com/data/dtd/v11/ibmxhtml1-transitional.dtd" {
+-			quirks = true
+-		}
+-	}
+-
+-	return n, quirks
+-}
+-
+-// quirkyIDs is a list of public doctype identifiers that cause a document
+-// to be interpreted in quirks mode. The identifiers should be in lower case.
+-var quirkyIDs = []string{
+-	"+//silmaril//dtd html pro v0r11 19970101//",
+-	"-//advasoft ltd//dtd html 3.0 aswedit + extensions//",
+-	"-//as//dtd html 3.0 aswedit + extensions//",
+-	"-//ietf//dtd html 2.0 level 1//",
+-	"-//ietf//dtd html 2.0 level 2//",
+-	"-//ietf//dtd html 2.0 strict level 1//",
+-	"-//ietf//dtd html 2.0 strict level 2//",
+-	"-//ietf//dtd html 2.0 strict//",
+-	"-//ietf//dtd html 2.0//",
+-	"-//ietf//dtd html 2.1e//",
+-	"-//ietf//dtd html 3.0//",
+-	"-//ietf//dtd html 3.2 final//",
+-	"-//ietf//dtd html 3.2//",
+-	"-//ietf//dtd html 3//",
+-	"-//ietf//dtd html level 0//",
+-	"-//ietf//dtd html level 1//",
+-	"-//ietf//dtd html level 2//",
+-	"-//ietf//dtd html level 3//",
+-	"-//ietf//dtd html strict level 0//",
+-	"-//ietf//dtd html strict level 1//",
+-	"-//ietf//dtd html strict level 2//",
+-	"-//ietf//dtd html strict level 3//",
+-	"-//ietf//dtd html strict//",
+-	"-//ietf//dtd html//",
+-	"-//metrius//dtd metrius presentational//",
+-	"-//microsoft//dtd internet explorer 2.0 html strict//",
+-	"-//microsoft//dtd internet explorer 2.0 html//",
+-	"-//microsoft//dtd internet explorer 2.0 tables//",
+-	"-//microsoft//dtd internet explorer 3.0 html strict//",
+-	"-//microsoft//dtd internet explorer 3.0 html//",
+-	"-//microsoft//dtd internet explorer 3.0 tables//",
+-	"-//netscape comm. corp.//dtd html//",
+-	"-//netscape comm. corp.//dtd strict html//",
+-	"-//o'reilly and associates//dtd html 2.0//",
+-	"-//o'reilly and associates//dtd html extended 1.0//",
+-	"-//o'reilly and associates//dtd html extended relaxed 1.0//",
+-	"-//softquad software//dtd hotmetal pro 6.0::19990601::extensions to html 4.0//",
+-	"-//softquad//dtd hotmetal pro 4.0::19971010::extensions to html 4.0//",
+-	"-//spyglass//dtd html 2.0 extended//",
+-	"-//sq//dtd html 2.0 hotmetal + extensions//",
+-	"-//sun microsystems corp.//dtd hotjava html//",
+-	"-//sun microsystems corp.//dtd hotjava strict html//",
+-	"-//w3c//dtd html 3 1995-03-24//",
+-	"-//w3c//dtd html 3.2 draft//",
+-	"-//w3c//dtd html 3.2 final//",
+-	"-//w3c//dtd html 3.2//",
+-	"-//w3c//dtd html 3.2s draft//",
+-	"-//w3c//dtd html 4.0 frameset//",
+-	"-//w3c//dtd html 4.0 transitional//",
+-	"-//w3c//dtd html experimental 19960712//",
+-	"-//w3c//dtd html experimental 970421//",
+-	"-//w3c//dtd w3 html//",
+-	"-//w3o//dtd w3 html 3.0//",
+-	"-//webtechs//dtd mozilla html 2.0//",
+-	"-//webtechs//dtd mozilla html//",
+-}
+diff --git a/Godeps/_workspace/src/code.google.com/p/go.net/html/entity.go b/Godeps/_workspace/src/code.google.com/p/go.net/html/entity.go
+deleted file mode 100644
+index af8a007..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/go.net/html/entity.go
++++ /dev/null
+@@ -1,2253 +0,0 @@
+-// Copyright 2010 The Go Authors. All rights reserved.
+-// Use of this source code is governed by a BSD-style
+-// license that can be found in the LICENSE file.
+-
+-package html
+-
+-// All entities that do not end with ';' are 6 or fewer bytes long.
+-const longestEntityWithoutSemicolon = 6
+-
+-// entity is a map from HTML entity names to their values. The semicolon matters:
+-// http://www.whatwg.org/specs/web-apps/current-work/multipage/named-character-references.html
+-// lists both "amp" and "amp;" as two separate entries.
+-//
+-// Note that the HTML5 list is larger than the HTML4 list at
+-// http://www.w3.org/TR/html4/sgml/entities.html
+-var entity = map[string]rune{
+-	"AElig;":                           '\U000000C6',
+-	"AMP;":                             '\U00000026',
+-	"Aacute;":                          '\U000000C1',
+-	"Abreve;":                          '\U00000102',
+-	"Acirc;":                           '\U000000C2',
+-	"Acy;":                             '\U00000410',
+-	"Afr;":                             '\U0001D504',
+-	"Agrave;":                          '\U000000C0',
+-	"Alpha;":                           '\U00000391',
+-	"Amacr;":                           '\U00000100',
+-	"And;":                             '\U00002A53',
+-	"Aogon;":                           '\U00000104',
+-	"Aopf;":                            '\U0001D538',
+-	"ApplyFunction;":                   '\U00002061',
+-	"Aring;":                           '\U000000C5',
+-	"Ascr;":                            '\U0001D49C',
+-	"Assign;":                          '\U00002254',
+-	"Atilde;":                          '\U000000C3',
+-	"Auml;":                            '\U000000C4',
+-	"Backslash;":                       '\U00002216',
+-	"Barv;":                            '\U00002AE7',
+-	"Barwed;":                          '\U00002306',
+-	"Bcy;":                             '\U00000411',
+-	"Because;":                         '\U00002235',
+-	"Bernoullis;":                      '\U0000212C',
+-	"Beta;":                            '\U00000392',
+-	"Bfr;":                             '\U0001D505',
+-	"Bopf;":                            '\U0001D539',
+-	"Breve;":                           '\U000002D8',
+-	"Bscr;":                            '\U0000212C',
+-	"Bumpeq;":                          '\U0000224E',
+-	"CHcy;":                            '\U00000427',
+-	"COPY;":                            '\U000000A9',
+-	"Cacute;":                          '\U00000106',
+-	"Cap;":                             '\U000022D2',
+-	"CapitalDifferentialD;":            '\U00002145',
+-	"Cayleys;":                         '\U0000212D',
+-	"Ccaron;":                          '\U0000010C',
+-	"Ccedil;":                          '\U000000C7',
+-	"Ccirc;":                           '\U00000108',
+-	"Cconint;":                         '\U00002230',
+-	"Cdot;":                            '\U0000010A',
+-	"Cedilla;":                         '\U000000B8',
+-	"CenterDot;":                       '\U000000B7',
+-	"Cfr;":                             '\U0000212D',
+-	"Chi;":                             '\U000003A7',
+-	"CircleDot;":                       '\U00002299',
+-	"CircleMinus;":                     '\U00002296',
+-	"CirclePlus;":                      '\U00002295',
+-	"CircleTimes;":                     '\U00002297',
+-	"ClockwiseContourIntegral;":        '\U00002232',
+-	"CloseCurlyDoubleQuote;":           '\U0000201D',
+-	"CloseCurlyQuote;":                 '\U00002019',
+-	"Colon;":                           '\U00002237',
+-	"Colone;":                          '\U00002A74',
+-	"Congruent;":                       '\U00002261',
+-	"Conint;":                          '\U0000222F',
+-	"ContourIntegral;":                 '\U0000222E',
+-	"Copf;":                            '\U00002102',
+-	"Coproduct;":                       '\U00002210',
+-	"CounterClockwiseContourIntegral;": '\U00002233',
+-	"Cross;":                    '\U00002A2F',
+-	"Cscr;":                     '\U0001D49E',
+-	"Cup;":                      '\U000022D3',
+-	"CupCap;":                   '\U0000224D',
+-	"DD;":                       '\U00002145',
+-	"DDotrahd;":                 '\U00002911',
+-	"DJcy;":                     '\U00000402',
+-	"DScy;":                     '\U00000405',
+-	"DZcy;":                     '\U0000040F',
+-	"Dagger;":                   '\U00002021',
+-	"Darr;":                     '\U000021A1',
+-	"Dashv;":                    '\U00002AE4',
+-	"Dcaron;":                   '\U0000010E',
+-	"Dcy;":                      '\U00000414',
+-	"Del;":                      '\U00002207',
+-	"Delta;":                    '\U00000394',
+-	"Dfr;":                      '\U0001D507',
+-	"DiacriticalAcute;":         '\U000000B4',
+-	"DiacriticalDot;":           '\U000002D9',
+-	"DiacriticalDoubleAcute;":   '\U000002DD',
+-	"DiacriticalGrave;":         '\U00000060',
+-	"DiacriticalTilde;":         '\U000002DC',
+-	"Diamond;":                  '\U000022C4',
+-	"DifferentialD;":            '\U00002146',
+-	"Dopf;":                     '\U0001D53B',
+-	"Dot;":                      '\U000000A8',
+-	"DotDot;":                   '\U000020DC',
+-	"DotEqual;":                 '\U00002250',
+-	"DoubleContourIntegral;":    '\U0000222F',
+-	"DoubleDot;":                '\U000000A8',
+-	"DoubleDownArrow;":          '\U000021D3',
+-	"DoubleLeftArrow;":          '\U000021D0',
+-	"DoubleLeftRightArrow;":     '\U000021D4',
+-	"DoubleLeftTee;":            '\U00002AE4',
+-	"DoubleLongLeftArrow;":      '\U000027F8',
+-	"DoubleLongLeftRightArrow;": '\U000027FA',
+-	"DoubleLongRightArrow;":     '\U000027F9',
+-	"DoubleRightArrow;":         '\U000021D2',
+-	"DoubleRightTee;":           '\U000022A8',
+-	"DoubleUpArrow;":            '\U000021D1',
+-	"DoubleUpDownArrow;":        '\U000021D5',
+-	"DoubleVerticalBar;":        '\U00002225',
+-	"DownArrow;":                '\U00002193',
+-	"DownArrowBar;":             '\U00002913',
+-	"DownArrowUpArrow;":         '\U000021F5',
+-	"DownBreve;":                '\U00000311',
+-	"DownLeftRightVector;":      '\U00002950',
+-	"DownLeftTeeVector;":        '\U0000295E',
+-	"DownLeftVector;":           '\U000021BD',
+-	"DownLeftVectorBar;":        '\U00002956',
+-	"DownRightTeeVector;":       '\U0000295F',
+-	"DownRightVector;":          '\U000021C1',
+-	"DownRightVectorBar;":       '\U00002957',
+-	"DownTee;":                  '\U000022A4',
+-	"DownTeeArrow;":             '\U000021A7',
+-	"Downarrow;":                '\U000021D3',
+-	"Dscr;":                     '\U0001D49F',
+-	"Dstrok;":                   '\U00000110',
+-	"ENG;":                      '\U0000014A',
+-	"ETH;":                      '\U000000D0',
+-	"Eacute;":                   '\U000000C9',
+-	"Ecaron;":                   '\U0000011A',
+-	"Ecirc;":                    '\U000000CA',
+-	"Ecy;":                      '\U0000042D',
+-	"Edot;":                     '\U00000116',
+-	"Efr;":                      '\U0001D508',
+-	"Egrave;":                   '\U000000C8',
+-	"Element;":                  '\U00002208',
+-	"Emacr;":                    '\U00000112',
+-	"EmptySmallSquare;":         '\U000025FB',
+-	"EmptyVerySmallSquare;":     '\U000025AB',
+-	"Eogon;":                    '\U00000118',
+-	"Eopf;":                     '\U0001D53C',
+-	"Epsilon;":                  '\U00000395',
+-	"Equal;":                    '\U00002A75',
+-	"EqualTilde;":               '\U00002242',
+-	"Equilibrium;":              '\U000021CC',
+-	"Escr;":                     '\U00002130',
+-	"Esim;":                     '\U00002A73',
+-	"Eta;":                      '\U00000397',
+-	"Euml;":                     '\U000000CB',
+-	"Exists;":                   '\U00002203',
+-	"ExponentialE;":             '\U00002147',
+-	"Fcy;":                      '\U00000424',
+-	"Ffr;":                      '\U0001D509',
+-	"FilledSmallSquare;":        '\U000025FC',
+-	"FilledVerySmallSquare;":    '\U000025AA',
+-	"Fopf;":                     '\U0001D53D',
+-	"ForAll;":                   '\U00002200',
+-	"Fouriertrf;":               '\U00002131',
+-	"Fscr;":                     '\U00002131',
+-	"GJcy;":                     '\U00000403',
+-	"GT;":                       '\U0000003E',
+-	"Gamma;":                    '\U00000393',
+-	"Gammad;":                   '\U000003DC',
+-	"Gbreve;":                   '\U0000011E',
+-	"Gcedil;":                   '\U00000122',
+-	"Gcirc;":                    '\U0000011C',
+-	"Gcy;":                      '\U00000413',
+-	"Gdot;":                     '\U00000120',
+-	"Gfr;":                      '\U0001D50A',
+-	"Gg;":                       '\U000022D9',
+-	"Gopf;":                     '\U0001D53E',
+-	"GreaterEqual;":             '\U00002265',
+-	"GreaterEqualLess;":         '\U000022DB',
+-	"GreaterFullEqual;":         '\U00002267',
+-	"GreaterGreater;":           '\U00002AA2',
+-	"GreaterLess;":              '\U00002277',
+-	"GreaterSlantEqual;":        '\U00002A7E',
+-	"GreaterTilde;":             '\U00002273',
+-	"Gscr;":                     '\U0001D4A2',
+-	"Gt;":                       '\U0000226B',
+-	"HARDcy;":                   '\U0000042A',
+-	"Hacek;":                    '\U000002C7',
+-	"Hat;":                      '\U0000005E',
+-	"Hcirc;":                    '\U00000124',
+-	"Hfr;":                      '\U0000210C',
+-	"HilbertSpace;":             '\U0000210B',
+-	"Hopf;":                     '\U0000210D',
+-	"HorizontalLine;":           '\U00002500',
+-	"Hscr;":                     '\U0000210B',
+-	"Hstrok;":                   '\U00000126',
+-	"HumpDownHump;":             '\U0000224E',
+-	"HumpEqual;":                '\U0000224F',
+-	"IEcy;":                     '\U00000415',
+-	"IJlig;":                    '\U00000132',
+-	"IOcy;":                     '\U00000401',
+-	"Iacute;":                   '\U000000CD',
+-	"Icirc;":                    '\U000000CE',
+-	"Icy;":                      '\U00000418',
+-	"Idot;":                     '\U00000130',
+-	"Ifr;":                      '\U00002111',
+-	"Igrave;":                   '\U000000CC',
+-	"Im;":                       '\U00002111',
+-	"Imacr;":                    '\U0000012A',
+-	"ImaginaryI;":               '\U00002148',
+-	"Implies;":                  '\U000021D2',
+-	"Int;":                      '\U0000222C',
+-	"Integral;":                 '\U0000222B',
+-	"Intersection;":             '\U000022C2',
+-	"InvisibleComma;":           '\U00002063',
+-	"InvisibleTimes;":           '\U00002062',
+-	"Iogon;":                    '\U0000012E',
+-	"Iopf;":                     '\U0001D540',
+-	"Iota;":                     '\U00000399',
+-	"Iscr;":                     '\U00002110',
+-	"Itilde;":                   '\U00000128',
+-	"Iukcy;":                    '\U00000406',
+-	"Iuml;":                     '\U000000CF',
+-	"Jcirc;":                    '\U00000134',
+-	"Jcy;":                      '\U00000419',
+-	"Jfr;":                      '\U0001D50D',
+-	"Jopf;":                     '\U0001D541',
+-	"Jscr;":                     '\U0001D4A5',
+-	"Jsercy;":                   '\U00000408',
+-	"Jukcy;":                    '\U00000404',
+-	"KHcy;":                     '\U00000425',
+-	"KJcy;":                     '\U0000040C',
+-	"Kappa;":                    '\U0000039A',
+-	"Kcedil;":                   '\U00000136',
+-	"Kcy;":                      '\U0000041A',
+-	"Kfr;":                      '\U0001D50E',
+-	"Kopf;":                     '\U0001D542',
+-	"Kscr;":                     '\U0001D4A6',
+-	"LJcy;":                     '\U00000409',
+-	"LT;":                       '\U0000003C',
+-	"Lacute;":                   '\U00000139',
+-	"Lambda;":                   '\U0000039B',
+-	"Lang;":                     '\U000027EA',
+-	"Laplacetrf;":               '\U00002112',
+-	"Larr;":                     '\U0000219E',
+-	"Lcaron;":                   '\U0000013D',
+-	"Lcedil;":                   '\U0000013B',
+-	"Lcy;":                      '\U0000041B',
+-	"LeftAngleBracket;":         '\U000027E8',
+-	"LeftArrow;":                '\U00002190',
+-	"LeftArrowBar;":             '\U000021E4',
+-	"LeftArrowRightArrow;":      '\U000021C6',
+-	"LeftCeiling;":              '\U00002308',
+-	"LeftDoubleBracket;":        '\U000027E6',
+-	"LeftDownTeeVector;":        '\U00002961',
+-	"LeftDownVector;":           '\U000021C3',
+-	"LeftDownVectorBar;":        '\U00002959',
+-	"LeftFloor;":                '\U0000230A',
+-	"LeftRightArrow;":           '\U00002194',
+-	"LeftRightVector;":          '\U0000294E',
+-	"LeftTee;":                  '\U000022A3',
+-	"LeftTeeArrow;":             '\U000021A4',
+-	"LeftTeeVector;":            '\U0000295A',
+-	"LeftTriangle;":             '\U000022B2',
+-	"LeftTriangleBar;":          '\U000029CF',
+-	"LeftTriangleEqual;":        '\U000022B4',
+-	"LeftUpDownVector;":         '\U00002951',
+-	"LeftUpTeeVector;":          '\U00002960',
+-	"LeftUpVector;":             '\U000021BF',
+-	"LeftUpVectorBar;":          '\U00002958',
+-	"LeftVector;":               '\U000021BC',
+-	"LeftVectorBar;":            '\U00002952',
+-	"Leftarrow;":                '\U000021D0',
+-	"Leftrightarrow;":           '\U000021D4',
+-	"LessEqualGreater;":         '\U000022DA',
+-	"LessFullEqual;":            '\U00002266',
+-	"LessGreater;":              '\U00002276',
+-	"LessLess;":                 '\U00002AA1',
+-	"LessSlantEqual;":           '\U00002A7D',
+-	"LessTilde;":                '\U00002272',
+-	"Lfr;":                      '\U0001D50F',
+-	"Ll;":                       '\U000022D8',
+-	"Lleftarrow;":               '\U000021DA',
+-	"Lmidot;":                   '\U0000013F',
+-	"LongLeftArrow;":            '\U000027F5',
+-	"LongLeftRightArrow;":       '\U000027F7',
+-	"LongRightArrow;":           '\U000027F6',
+-	"Longleftarrow;":            '\U000027F8',
+-	"Longleftrightarrow;":       '\U000027FA',
+-	"Longrightarrow;":           '\U000027F9',
+-	"Lopf;":                     '\U0001D543',
+-	"LowerLeftArrow;":           '\U00002199',
+-	"LowerRightArrow;":          '\U00002198',
+-	"Lscr;":                     '\U00002112',
+-	"Lsh;":                      '\U000021B0',
+-	"Lstrok;":                   '\U00000141',
+-	"Lt;":                       '\U0000226A',
+-	"Map;":                      '\U00002905',
+-	"Mcy;":                      '\U0000041C',
+-	"MediumSpace;":              '\U0000205F',
+-	"Mellintrf;":                '\U00002133',
+-	"Mfr;":                      '\U0001D510',
+-	"MinusPlus;":                '\U00002213',
+-	"Mopf;":                     '\U0001D544',
+-	"Mscr;":                     '\U00002133',
+-	"Mu;":                       '\U0000039C',
+-	"NJcy;":                     '\U0000040A',
+-	"Nacute;":                   '\U00000143',
+-	"Ncaron;":                   '\U00000147',
+-	"Ncedil;":                   '\U00000145',
+-	"Ncy;":                      '\U0000041D',
+-	"NegativeMediumSpace;":      '\U0000200B',
+-	"NegativeThickSpace;":       '\U0000200B',
+-	"NegativeThinSpace;":        '\U0000200B',
+-	"NegativeVeryThinSpace;":    '\U0000200B',
+-	"NestedGreaterGreater;":     '\U0000226B',
+-	"NestedLessLess;":           '\U0000226A',
+-	"NewLine;":                  '\U0000000A',
+-	"Nfr;":                      '\U0001D511',
+-	"NoBreak;":                  '\U00002060',
+-	"NonBreakingSpace;":         '\U000000A0',
+-	"Nopf;":                     '\U00002115',
+-	"Not;":                      '\U00002AEC',
+-	"NotCongruent;":             '\U00002262',
+-	"NotCupCap;":                '\U0000226D',
+-	"NotDoubleVerticalBar;":     '\U00002226',
+-	"NotElement;":               '\U00002209',
+-	"NotEqual;":                 '\U00002260',
+-	"NotExists;":                '\U00002204',
+-	"NotGreater;":               '\U0000226F',
+-	"NotGreaterEqual;":          '\U00002271',
+-	"NotGreaterLess;":           '\U00002279',
+-	"NotGreaterTilde;":          '\U00002275',
+-	"NotLeftTriangle;":          '\U000022EA',
+-	"NotLeftTriangleEqual;":     '\U000022EC',
+-	"NotLess;":                  '\U0000226E',
+-	"NotLessEqual;":             '\U00002270',
+-	"NotLessGreater;":           '\U00002278',
+-	"NotLessTilde;":             '\U00002274',
+-	"NotPrecedes;":              '\U00002280',
+-	"NotPrecedesSlantEqual;":    '\U000022E0',
+-	"NotReverseElement;":        '\U0000220C',
+-	"NotRightTriangle;":         '\U000022EB',
+-	"NotRightTriangleEqual;":    '\U000022ED',
+-	"NotSquareSubsetEqual;":     '\U000022E2',
+-	"NotSquareSupersetEqual;":   '\U000022E3',
+-	"NotSubsetEqual;":           '\U00002288',
+-	"NotSucceeds;":              '\U00002281',
+-	"NotSucceedsSlantEqual;":    '\U000022E1',
+-	"NotSupersetEqual;":         '\U00002289',
+-	"NotTilde;":                 '\U00002241',
+-	"NotTildeEqual;":            '\U00002244',
+-	"NotTildeFullEqual;":        '\U00002247',
+-	"NotTildeTilde;":            '\U00002249',
+-	"NotVerticalBar;":           '\U00002224',
+-	"Nscr;":                     '\U0001D4A9',
+-	"Ntilde;":                   '\U000000D1',
+-	"Nu;":                       '\U0000039D',
+-	"OElig;":                    '\U00000152',
+-	"Oacute;":                   '\U000000D3',
+-	"Ocirc;":                    '\U000000D4',
+-	"Ocy;":                      '\U0000041E',
+-	"Odblac;":                   '\U00000150',
+-	"Ofr;":                      '\U0001D512',
+-	"Ograve;":                   '\U000000D2',
+-	"Omacr;":                    '\U0000014C',
+-	"Omega;":                    '\U000003A9',
+-	"Omicron;":                  '\U0000039F',
+-	"Oopf;":                     '\U0001D546',
+-	"OpenCurlyDoubleQuote;":     '\U0000201C',
+-	"OpenCurlyQuote;":           '\U00002018',
+-	"Or;":                       '\U00002A54',
+-	"Oscr;":                     '\U0001D4AA',
+-	"Oslash;":                   '\U000000D8',
+-	"Otilde;":                   '\U000000D5',
+-	"Otimes;":                   '\U00002A37',
+-	"Ouml;":                     '\U000000D6',
+-	"OverBar;":                  '\U0000203E',
+-	"OverBrace;":                '\U000023DE',
+-	"OverBracket;":              '\U000023B4',
+-	"OverParenthesis;":          '\U000023DC',
+-	"PartialD;":                 '\U00002202',
+-	"Pcy;":                      '\U0000041F',
+-	"Pfr;":                      '\U0001D513',
+-	"Phi;":                      '\U000003A6',
+-	"Pi;":                       '\U000003A0',
+-	"PlusMinus;":                '\U000000B1',
+-	"Poincareplane;":            '\U0000210C',
+-	"Popf;":                     '\U00002119',
+-	"Pr;":                       '\U00002ABB',
+-	"Precedes;":                 '\U0000227A',
+-	"PrecedesEqual;":            '\U00002AAF',
+-	"PrecedesSlantEqual;":       '\U0000227C',
+-	"PrecedesTilde;":            '\U0000227E',
+-	"Prime;":                    '\U00002033',
+-	"Product;":                  '\U0000220F',
+-	"Proportion;":               '\U00002237',
+-	"Proportional;":             '\U0000221D',
+-	"Pscr;":                     '\U0001D4AB',
+-	"Psi;":                      '\U000003A8',
+-	"QUOT;":                     '\U00000022',
+-	"Qfr;":                      '\U0001D514',
+-	"Qopf;":                     '\U0000211A',
+-	"Qscr;":                     '\U0001D4AC',
+-	"RBarr;":                    '\U00002910',
+-	"REG;":                      '\U000000AE',
+-	"Racute;":                   '\U00000154',
+-	"Rang;":                     '\U000027EB',
+-	"Rarr;":                     '\U000021A0',
+-	"Rarrtl;":                   '\U00002916',
+-	"Rcaron;":                   '\U00000158',
+-	"Rcedil;":                   '\U00000156',
+-	"Rcy;":                      '\U00000420',
+-	"Re;":                       '\U0000211C',
+-	"ReverseElement;":           '\U0000220B',
+-	"ReverseEquilibrium;":       '\U000021CB',
+-	"ReverseUpEquilibrium;":     '\U0000296F',
+-	"Rfr;":                      '\U0000211C',
+-	"Rho;":                      '\U000003A1',
+-	"RightAngleBracket;":        '\U000027E9',
+-	"RightArrow;":               '\U00002192',
+-	"RightArrowBar;":            '\U000021E5',
+-	"RightArrowLeftArrow;":      '\U000021C4',
+-	"RightCeiling;":             '\U00002309',
+-	"RightDoubleBracket;":       '\U000027E7',
+-	"RightDownTeeVector;":       '\U0000295D',
+-	"RightDownVector;":          '\U000021C2',
+-	"RightDownVectorBar;":       '\U00002955',
+-	"RightFloor;":               '\U0000230B',
+-	"RightTee;":                 '\U000022A2',
+-	"RightTeeArrow;":            '\U000021A6',
+-	"RightTeeVector;":           '\U0000295B',
+-	"RightTriangle;":            '\U000022B3',
+-	"RightTriangleBar;":         '\U000029D0',
+-	"RightTriangleEqual;":       '\U000022B5',
+-	"RightUpDownVector;":        '\U0000294F',
+-	"RightUpTeeVector;":         '\U0000295C',
+-	"RightUpVector;":            '\U000021BE',
+-	"RightUpVectorBar;":         '\U00002954',
+-	"RightVector;":              '\U000021C0',
+-	"RightVectorBar;":           '\U00002953',
+-	"Rightarrow;":               '\U000021D2',
+-	"Ropf;":                     '\U0000211D',
+-	"RoundImplies;":             '\U00002970',
+-	"Rrightarrow;":              '\U000021DB',
+-	"Rscr;":                     '\U0000211B',
+-	"Rsh;":                      '\U000021B1',
+-	"RuleDelayed;":              '\U000029F4',
+-	"SHCHcy;":                   '\U00000429',
+-	"SHcy;":                     '\U00000428',
+-	"SOFTcy;":                   '\U0000042C',
+-	"Sacute;":                   '\U0000015A',
+-	"Sc;":                       '\U00002ABC',
+-	"Scaron;":                   '\U00000160',
+-	"Scedil;":                   '\U0000015E',
+-	"Scirc;":                    '\U0000015C',
+-	"Scy;":                      '\U00000421',
+-	"Sfr;":                      '\U0001D516',
+-	"ShortDownArrow;":           '\U00002193',
+-	"ShortLeftArrow;":           '\U00002190',
+-	"ShortRightArrow;":          '\U00002192',
+-	"ShortUpArrow;":             '\U00002191',
+-	"Sigma;":                    '\U000003A3',
+-	"SmallCircle;":              '\U00002218',
+-	"Sopf;":                     '\U0001D54A',
+-	"Sqrt;":                     '\U0000221A',
+-	"Square;":                   '\U000025A1',
+-	"SquareIntersection;":       '\U00002293',
+-	"SquareSubset;":             '\U0000228F',
+-	"SquareSubsetEqual;":        '\U00002291',
+-	"SquareSuperset;":           '\U00002290',
+-	"SquareSupersetEqual;":      '\U00002292',
+-	"SquareUnion;":              '\U00002294',
+-	"Sscr;":                     '\U0001D4AE',
+-	"Star;":                     '\U000022C6',
+-	"Sub;":                      '\U000022D0',
+-	"Subset;":                   '\U000022D0',
+-	"SubsetEqual;":              '\U00002286',
+-	"Succeeds;":                 '\U0000227B',
+-	"SucceedsEqual;":            '\U00002AB0',
+-	"SucceedsSlantEqual;":       '\U0000227D',
+-	"SucceedsTilde;":            '\U0000227F',
+-	"SuchThat;":                 '\U0000220B',
+-	"Sum;":                      '\U00002211',
+-	"Sup;":                      '\U000022D1',
+-	"Superset;":                 '\U00002283',
+-	"SupersetEqual;":            '\U00002287',
+-	"Supset;":                   '\U000022D1',
+-	"THORN;":                    '\U000000DE',
+-	"TRADE;":                    '\U00002122',
+-	"TSHcy;":                    '\U0000040B',
+-	"TScy;":                     '\U00000426',
+-	"Tab;":                      '\U00000009',
+-	"Tau;":                      '\U000003A4',
+-	"Tcaron;":                   '\U00000164',
+-	"Tcedil;":                   '\U00000162',
+-	"Tcy;":                      '\U00000422',
+-	"Tfr;":                      '\U0001D517',
+-	"Therefore;":                '\U00002234',
+-	"Theta;":                    '\U00000398',
+-	"ThinSpace;":                '\U00002009',
+-	"Tilde;":                    '\U0000223C',
+-	"TildeEqual;":               '\U00002243',
+-	"TildeFullEqual;":           '\U00002245',
+-	"TildeTilde;":               '\U00002248',
+-	"Topf;":                     '\U0001D54B',
+-	"TripleDot;":                '\U000020DB',
+-	"Tscr;":                     '\U0001D4AF',
+-	"Tstrok;":                   '\U00000166',
+-	"Uacute;":                   '\U000000DA',
+-	"Uarr;":                     '\U0000219F',
+-	"Uarrocir;":                 '\U00002949',
+-	"Ubrcy;":                    '\U0000040E',
+-	"Ubreve;":                   '\U0000016C',
+-	"Ucirc;":                    '\U000000DB',
+-	"Ucy;":                      '\U00000423',
+-	"Udblac;":                   '\U00000170',
+-	"Ufr;":                      '\U0001D518',
+-	"Ugrave;":                   '\U000000D9',
+-	"Umacr;":                    '\U0000016A',
+-	"UnderBar;":                 '\U0000005F',
+-	"UnderBrace;":               '\U000023DF',
+-	"UnderBracket;":             '\U000023B5',
+-	"UnderParenthesis;":         '\U000023DD',
+-	"Union;":                    '\U000022C3',
+-	"UnionPlus;":                '\U0000228E',
+-	"Uogon;":                    '\U00000172',
+-	"Uopf;":                     '\U0001D54C',
+-	"UpArrow;":                  '\U00002191',
+-	"UpArrowBar;":               '\U00002912',
+-	"UpArrowDownArrow;":         '\U000021C5',
+-	"UpDownArrow;":              '\U00002195',
+-	"UpEquilibrium;":            '\U0000296E',
+-	"UpTee;":                    '\U000022A5',
+-	"UpTeeArrow;":               '\U000021A5',
+-	"Uparrow;":                  '\U000021D1',
+-	"Updownarrow;":              '\U000021D5',
+-	"UpperLeftArrow;":           '\U00002196',
+-	"UpperRightArrow;":          '\U00002197',
+-	"Upsi;":                     '\U000003D2',
+-	"Upsilon;":                  '\U000003A5',
+-	"Uring;":                    '\U0000016E',
+-	"Uscr;":                     '\U0001D4B0',
+-	"Utilde;":                   '\U00000168',
+-	"Uuml;":                     '\U000000DC',
+-	"VDash;":                    '\U000022AB',
+-	"Vbar;":                     '\U00002AEB',
+-	"Vcy;":                      '\U00000412',
+-	"Vdash;":                    '\U000022A9',
+-	"Vdashl;":                   '\U00002AE6',
+-	"Vee;":                      '\U000022C1',
+-	"Verbar;":                   '\U00002016',
+-	"Vert;":                     '\U00002016',
+-	"VerticalBar;":              '\U00002223',
+-	"VerticalLine;":             '\U0000007C',
+-	"VerticalSeparator;":        '\U00002758',
+-	"VerticalTilde;":            '\U00002240',
+-	"VeryThinSpace;":            '\U0000200A',
+-	"Vfr;":                      '\U0001D519',
+-	"Vopf;":                     '\U0001D54D',
+-	"Vscr;":                     '\U0001D4B1',
+-	"Vvdash;":                   '\U000022AA',
+-	"Wcirc;":                    '\U00000174',
+-	"Wedge;":                    '\U000022C0',
+-	"Wfr;":                      '\U0001D51A',
+-	"Wopf;":                     '\U0001D54E',
+-	"Wscr;":                     '\U0001D4B2',
+-	"Xfr;":                      '\U0001D51B',
+-	"Xi;":                       '\U0000039E',
+-	"Xopf;":                     '\U0001D54F',
+-	"Xscr;":                     '\U0001D4B3',
+-	"YAcy;":                     '\U0000042F',
+-	"YIcy;":                     '\U00000407',
+-	"YUcy;":                     '\U0000042E',
+-	"Yacute;":                   '\U000000DD',
+-	"Ycirc;":                    '\U00000176',
+-	"Ycy;":                      '\U0000042B',
+-	"Yfr;":                      '\U0001D51C',
+-	"Yopf;":                     '\U0001D550',
+-	"Yscr;":                     '\U0001D4B4',
+-	"Yuml;":                     '\U00000178',
+-	"ZHcy;":                     '\U00000416',
+-	"Zacute;":                   '\U00000179',
+-	"Zcaron;":                   '\U0000017D',
+-	"Zcy;":                      '\U00000417',
+-	"Zdot;":                     '\U0000017B',
+-	"ZeroWidthSpace;":           '\U0000200B',
+-	"Zeta;":                     '\U00000396',
+-	"Zfr;":                      '\U00002128',
+-	"Zopf;":                     '\U00002124',
+-	"Zscr;":                     '\U0001D4B5',
+-	"aacute;":                   '\U000000E1',
+-	"abreve;":                   '\U00000103',
+-	"ac;":                       '\U0000223E',
+-	"acd;":                      '\U0000223F',
+-	"acirc;":                    '\U000000E2',
+-	"acute;":                    '\U000000B4',
+-	"acy;":                      '\U00000430',
+-	"aelig;":                    '\U000000E6',
+-	"af;":                       '\U00002061',
+-	"afr;":                      '\U0001D51E',
+-	"agrave;":                   '\U000000E0',
+-	"alefsym;":                  '\U00002135',
+-	"aleph;":                    '\U00002135',
+-	"alpha;":                    '\U000003B1',
+-	"amacr;":                    '\U00000101',
+-	"amalg;":                    '\U00002A3F',
+-	"amp;":                      '\U00000026',
+-	"and;":                      '\U00002227',
+-	"andand;":                   '\U00002A55',
+-	"andd;":                     '\U00002A5C',
+-	"andslope;":                 '\U00002A58',
+-	"andv;":                     '\U00002A5A',
+-	"ang;":                      '\U00002220',
+-	"ange;":                     '\U000029A4',
+-	"angle;":                    '\U00002220',
+-	"angmsd;":                   '\U00002221',
+-	"angmsdaa;":                 '\U000029A8',
+-	"angmsdab;":                 '\U000029A9',
+-	"angmsdac;":                 '\U000029AA',
+-	"angmsdad;":                 '\U000029AB',
+-	"angmsdae;":                 '\U000029AC',
+-	"angmsdaf;":                 '\U000029AD',
+-	"angmsdag;":                 '\U000029AE',
+-	"angmsdah;":                 '\U000029AF',
+-	"angrt;":                    '\U0000221F',
+-	"angrtvb;":                  '\U000022BE',
+-	"angrtvbd;":                 '\U0000299D',
+-	"angsph;":                   '\U00002222',
+-	"angst;":                    '\U000000C5',
+-	"angzarr;":                  '\U0000237C',
+-	"aogon;":                    '\U00000105',
+-	"aopf;":                     '\U0001D552',
+-	"ap;":                       '\U00002248',
+-	"apE;":                      '\U00002A70',
+-	"apacir;":                   '\U00002A6F',
+-	"ape;":                      '\U0000224A',
+-	"apid;":                     '\U0000224B',
+-	"apos;":                     '\U00000027',
+-	"approx;":                   '\U00002248',
+-	"approxeq;":                 '\U0000224A',
+-	"aring;":                    '\U000000E5',
+-	"ascr;":                     '\U0001D4B6',
+-	"ast;":                      '\U0000002A',
+-	"asymp;":                    '\U00002248',
+-	"asympeq;":                  '\U0000224D',
+-	"atilde;":                   '\U000000E3',
+-	"auml;":                     '\U000000E4',
+-	"awconint;":                 '\U00002233',
+-	"awint;":                    '\U00002A11',
+-	"bNot;":                     '\U00002AED',
+-	"backcong;":                 '\U0000224C',
+-	"backepsilon;":              '\U000003F6',
+-	"backprime;":                '\U00002035',
+-	"backsim;":                  '\U0000223D',
+-	"backsimeq;":                '\U000022CD',
+-	"barvee;":                   '\U000022BD',
+-	"barwed;":                   '\U00002305',
+-	"barwedge;":                 '\U00002305',
+-	"bbrk;":                     '\U000023B5',
+-	"bbrktbrk;":                 '\U000023B6',
+-	"bcong;":                    '\U0000224C',
+-	"bcy;":                      '\U00000431',
+-	"bdquo;":                    '\U0000201E',
+-	"becaus;":                   '\U00002235',
+-	"because;":                  '\U00002235',
+-	"bemptyv;":                  '\U000029B0',
+-	"bepsi;":                    '\U000003F6',
+-	"bernou;":                   '\U0000212C',
+-	"beta;":                     '\U000003B2',
+-	"beth;":                     '\U00002136',
+-	"between;":                  '\U0000226C',
+-	"bfr;":                      '\U0001D51F',
+-	"bigcap;":                   '\U000022C2',
+-	"bigcirc;":                  '\U000025EF',
+-	"bigcup;":                   '\U000022C3',
+-	"bigodot;":                  '\U00002A00',
+-	"bigoplus;":                 '\U00002A01',
+-	"bigotimes;":                '\U00002A02',
+-	"bigsqcup;":                 '\U00002A06',
+-	"bigstar;":                  '\U00002605',
+-	"bigtriangledown;":          '\U000025BD',
+-	"bigtriangleup;":            '\U000025B3',
+-	"biguplus;":                 '\U00002A04',
+-	"bigvee;":                   '\U000022C1',
+-	"bigwedge;":                 '\U000022C0',
+-	"bkarow;":                   '\U0000290D',
+-	"blacklozenge;":             '\U000029EB',
+-	"blacksquare;":              '\U000025AA',
+-	"blacktriangle;":            '\U000025B4',
+-	"blacktriangledown;":        '\U000025BE',
+-	"blacktriangleleft;":        '\U000025C2',
+-	"blacktriangleright;":       '\U000025B8',
+-	"blank;":                    '\U00002423',
+-	"blk12;":                    '\U00002592',
+-	"blk14;":                    '\U00002591',
+-	"blk34;":                    '\U00002593',
+-	"block;":                    '\U00002588',
+-	"bnot;":                     '\U00002310',
+-	"bopf;":                     '\U0001D553',
+-	"bot;":                      '\U000022A5',
+-	"bottom;":                   '\U000022A5',
+-	"bowtie;":                   '\U000022C8',
+-	"boxDL;":                    '\U00002557',
+-	"boxDR;":                    '\U00002554',
+-	"boxDl;":                    '\U00002556',
+-	"boxDr;":                    '\U00002553',
+-	"boxH;":                     '\U00002550',
+-	"boxHD;":                    '\U00002566',
+-	"boxHU;":                    '\U00002569',
+-	"boxHd;":                    '\U00002564',
+-	"boxHu;":                    '\U00002567',
+-	"boxUL;":                    '\U0000255D',
+-	"boxUR;":                    '\U0000255A',
+-	"boxUl;":                    '\U0000255C',
+-	"boxUr;":                    '\U00002559',
+-	"boxV;":                     '\U00002551',
+-	"boxVH;":                    '\U0000256C',
+-	"boxVL;":                    '\U00002563',
+-	"boxVR;":                    '\U00002560',
+-	"boxVh;":                    '\U0000256B',
+-	"boxVl;":                    '\U00002562',
+-	"boxVr;":                    '\U0000255F',
+-	"boxbox;":                   '\U000029C9',
+-	"boxdL;":                    '\U00002555',
+-	"boxdR;":                    '\U00002552',
+-	"boxdl;":                    '\U00002510',
+-	"boxdr;":                    '\U0000250C',
+-	"boxh;":                     '\U00002500',
+-	"boxhD;":                    '\U00002565',
+-	"boxhU;":                    '\U00002568',
+-	"boxhd;":                    '\U0000252C',
+-	"boxhu;":                    '\U00002534',
+-	"boxminus;":                 '\U0000229F',
+-	"boxplus;":                  '\U0000229E',
+-	"boxtimes;":                 '\U000022A0',
+-	"boxuL;":                    '\U0000255B',
+-	"boxuR;":                    '\U00002558',
+-	"boxul;":                    '\U00002518',
+-	"boxur;":                    '\U00002514',
+-	"boxv;":                     '\U00002502',
+-	"boxvH;":                    '\U0000256A',
+-	"boxvL;":                    '\U00002561',
+-	"boxvR;":                    '\U0000255E',
+-	"boxvh;":                    '\U0000253C',
+-	"boxvl;":                    '\U00002524',
+-	"boxvr;":                    '\U0000251C',
+-	"bprime;":                   '\U00002035',
+-	"breve;":                    '\U000002D8',
+-	"brvbar;":                   '\U000000A6',
+-	"bscr;":                     '\U0001D4B7',
+-	"bsemi;":                    '\U0000204F',
+-	"bsim;":                     '\U0000223D',
+-	"bsime;":                    '\U000022CD',
+-	"bsol;":                     '\U0000005C',
+-	"bsolb;":                    '\U000029C5',
+-	"bsolhsub;":                 '\U000027C8',
+-	"bull;":                     '\U00002022',
+-	"bullet;":                   '\U00002022',
+-	"bump;":                     '\U0000224E',
+-	"bumpE;":                    '\U00002AAE',
+-	"bumpe;":                    '\U0000224F',
+-	"bumpeq;":                   '\U0000224F',
+-	"cacute;":                   '\U00000107',
+-	"cap;":                      '\U00002229',
+-	"capand;":                   '\U00002A44',
+-	"capbrcup;":                 '\U00002A49',
+-	"capcap;":                   '\U00002A4B',
+-	"capcup;":                   '\U00002A47',
+-	"capdot;":                   '\U00002A40',
+-	"caret;":                    '\U00002041',
+-	"caron;":                    '\U000002C7',
+-	"ccaps;":                    '\U00002A4D',
+-	"ccaron;":                   '\U0000010D',
+-	"ccedil;":                   '\U000000E7',
+-	"ccirc;":                    '\U00000109',
+-	"ccups;":                    '\U00002A4C',
+-	"ccupssm;":                  '\U00002A50',
+-	"cdot;":                     '\U0000010B',
+-	"cedil;":                    '\U000000B8',
+-	"cemptyv;":                  '\U000029B2',
+-	"cent;":                     '\U000000A2',
+-	"centerdot;":                '\U000000B7',
+-	"cfr;":                      '\U0001D520',
+-	"chcy;":                     '\U00000447',
+-	"check;":                    '\U00002713',
+-	"checkmark;":                '\U00002713',
+-	"chi;":                      '\U000003C7',
+-	"cir;":                      '\U000025CB',
+-	"cirE;":                     '\U000029C3',
+-	"circ;":                     '\U000002C6',
+-	"circeq;":                   '\U00002257',
+-	"circlearrowleft;":          '\U000021BA',
+-	"circlearrowright;":         '\U000021BB',
+-	"circledR;":                 '\U000000AE',
+-	"circledS;":                 '\U000024C8',
+-	"circledast;":               '\U0000229B',
+-	"circledcirc;":              '\U0000229A',
+-	"circleddash;":              '\U0000229D',
+-	"cire;":                     '\U00002257',
+-	"cirfnint;":                 '\U00002A10',
+-	"cirmid;":                   '\U00002AEF',
+-	"cirscir;":                  '\U000029C2',
+-	"clubs;":                    '\U00002663',
+-	"clubsuit;":                 '\U00002663',
+-	"colon;":                    '\U0000003A',
+-	"colone;":                   '\U00002254',
+-	"coloneq;":                  '\U00002254',
+-	"comma;":                    '\U0000002C',
+-	"commat;":                   '\U00000040',
+-	"comp;":                     '\U00002201',
+-	"compfn;":                   '\U00002218',
+-	"complement;":               '\U00002201',
+-	"complexes;":                '\U00002102',
+-	"cong;":                     '\U00002245',
+-	"congdot;":                  '\U00002A6D',
+-	"conint;":                   '\U0000222E',
+-	"copf;":                     '\U0001D554',
+-	"coprod;":                   '\U00002210',
+-	"copy;":                     '\U000000A9',
+-	"copysr;":                   '\U00002117',
+-	"crarr;":                    '\U000021B5',
+-	"cross;":                    '\U00002717',
+-	"cscr;":                     '\U0001D4B8',
+-	"csub;":                     '\U00002ACF',
+-	"csube;":                    '\U00002AD1',
+-	"csup;":                     '\U00002AD0',
+-	"csupe;":                    '\U00002AD2',
+-	"ctdot;":                    '\U000022EF',
+-	"cudarrl;":                  '\U00002938',
+-	"cudarrr;":                  '\U00002935',
+-	"cuepr;":                    '\U000022DE',
+-	"cuesc;":                    '\U000022DF',
+-	"cularr;":                   '\U000021B6',
+-	"cularrp;":                  '\U0000293D',
+-	"cup;":                      '\U0000222A',
+-	"cupbrcap;":                 '\U00002A48',
+-	"cupcap;":                   '\U00002A46',
+-	"cupcup;":                   '\U00002A4A',
+-	"cupdot;":                   '\U0000228D',
+-	"cupor;":                    '\U00002A45',
+-	"curarr;":                   '\U000021B7',
+-	"curarrm;":                  '\U0000293C',
+-	"curlyeqprec;":              '\U000022DE',
+-	"curlyeqsucc;":              '\U000022DF',
+-	"curlyvee;":                 '\U000022CE',
+-	"curlywedge;":               '\U000022CF',
+-	"curren;":                   '\U000000A4',
+-	"curvearrowleft;":           '\U000021B6',
+-	"curvearrowright;":          '\U000021B7',
+-	"cuvee;":                    '\U000022CE',
+-	"cuwed;":                    '\U000022CF',
+-	"cwconint;":                 '\U00002232',
+-	"cwint;":                    '\U00002231',
+-	"cylcty;":                   '\U0000232D',
+-	"dArr;":                     '\U000021D3',
+-	"dHar;":                     '\U00002965',
+-	"dagger;":                   '\U00002020',
+-	"daleth;":                   '\U00002138',
+-	"darr;":                     '\U00002193',
+-	"dash;":                     '\U00002010',
+-	"dashv;":                    '\U000022A3',
+-	"dbkarow;":                  '\U0000290F',
+-	"dblac;":                    '\U000002DD',
+-	"dcaron;":                   '\U0000010F',
+-	"dcy;":                      '\U00000434',
+-	"dd;":                       '\U00002146',
+-	"ddagger;":                  '\U00002021',
+-	"ddarr;":                    '\U000021CA',
+-	"ddotseq;":                  '\U00002A77',
+-	"deg;":                      '\U000000B0',
+-	"delta;":                    '\U000003B4',
+-	"demptyv;":                  '\U000029B1',
+-	"dfisht;":                   '\U0000297F',
+-	"dfr;":                      '\U0001D521',
+-	"dharl;":                    '\U000021C3',
+-	"dharr;":                    '\U000021C2',
+-	"diam;":                     '\U000022C4',
+-	"diamond;":                  '\U000022C4',
+-	"diamondsuit;":              '\U00002666',
+-	"diams;":                    '\U00002666',
+-	"die;":                      '\U000000A8',
+-	"digamma;":                  '\U000003DD',
+-	"disin;":                    '\U000022F2',
+-	"div;":                      '\U000000F7',
+-	"divide;":                   '\U000000F7',
+-	"divideontimes;":            '\U000022C7',
+-	"divonx;":                   '\U000022C7',
+-	"djcy;":                     '\U00000452',
+-	"dlcorn;":                   '\U0000231E',
+-	"dlcrop;":                   '\U0000230D',
+-	"dollar;":                   '\U00000024',
+-	"dopf;":                     '\U0001D555',
+-	"dot;":                      '\U000002D9',
+-	"doteq;":                    '\U00002250',
+-	"doteqdot;":                 '\U00002251',
+-	"dotminus;":                 '\U00002238',
+-	"dotplus;":                  '\U00002214',
+-	"dotsquare;":                '\U000022A1',
+-	"doublebarwedge;":           '\U00002306',
+-	"downarrow;":                '\U00002193',
+-	"downdownarrows;":           '\U000021CA',
+-	"downharpoonleft;":          '\U000021C3',
+-	"downharpoonright;":         '\U000021C2',
+-	"drbkarow;":                 '\U00002910',
+-	"drcorn;":                   '\U0000231F',
+-	"drcrop;":                   '\U0000230C',
+-	"dscr;":                     '\U0001D4B9',
+-	"dscy;":                     '\U00000455',
+-	"dsol;":                     '\U000029F6',
+-	"dstrok;":                   '\U00000111',
+-	"dtdot;":                    '\U000022F1',
+-	"dtri;":                     '\U000025BF',
+-	"dtrif;":                    '\U000025BE',
+-	"duarr;":                    '\U000021F5',
+-	"duhar;":                    '\U0000296F',
+-	"dwangle;":                  '\U000029A6',
+-	"dzcy;":                     '\U0000045F',
+-	"dzigrarr;":                 '\U000027FF',
+-	"eDDot;":                    '\U00002A77',
+-	"eDot;":                     '\U00002251',
+-	"eacute;":                   '\U000000E9',
+-	"easter;":                   '\U00002A6E',
+-	"ecaron;":                   '\U0000011B',
+-	"ecir;":                     '\U00002256',
+-	"ecirc;":                    '\U000000EA',
+-	"ecolon;":                   '\U00002255',
+-	"ecy;":                      '\U0000044D',
+-	"edot;":                     '\U00000117',
+-	"ee;":                       '\U00002147',
+-	"efDot;":                    '\U00002252',
+-	"efr;":                      '\U0001D522',
+-	"eg;":                       '\U00002A9A',
+-	"egrave;":                   '\U000000E8',
+-	"egs;":                      '\U00002A96',
+-	"egsdot;":                   '\U00002A98',
+-	"el;":                       '\U00002A99',
+-	"elinters;":                 '\U000023E7',
+-	"ell;":                      '\U00002113',
+-	"els;":                      '\U00002A95',
+-	"elsdot;":                   '\U00002A97',
+-	"emacr;":                    '\U00000113',
+-	"empty;":                    '\U00002205',
+-	"emptyset;":                 '\U00002205',
+-	"emptyv;":                   '\U00002205',
+-	"emsp;":                     '\U00002003',
+-	"emsp13;":                   '\U00002004',
+-	"emsp14;":                   '\U00002005',
+-	"eng;":                      '\U0000014B',
+-	"ensp;":                     '\U00002002',
+-	"eogon;":                    '\U00000119',
+-	"eopf;":                     '\U0001D556',
+-	"epar;":                     '\U000022D5',
+-	"eparsl;":                   '\U000029E3',
+-	"eplus;":                    '\U00002A71',
+-	"epsi;":                     '\U000003B5',
+-	"epsilon;":                  '\U000003B5',
+-	"epsiv;":                    '\U000003F5',
+-	"eqcirc;":                   '\U00002256',
+-	"eqcolon;":                  '\U00002255',
+-	"eqsim;":                    '\U00002242',
+-	"eqslantgtr;":               '\U00002A96',
+-	"eqslantless;":              '\U00002A95',
+-	"equals;":                   '\U0000003D',
+-	"equest;":                   '\U0000225F',
+-	"equiv;":                    '\U00002261',
+-	"equivDD;":                  '\U00002A78',
+-	"eqvparsl;":                 '\U000029E5',
+-	"erDot;":                    '\U00002253',
+-	"erarr;":                    '\U00002971',
+-	"escr;":                     '\U0000212F',
+-	"esdot;":                    '\U00002250',
+-	"esim;":                     '\U00002242',
+-	"eta;":                      '\U000003B7',
+-	"eth;":                      '\U000000F0',
+-	"euml;":                     '\U000000EB',
+-	"euro;":                     '\U000020AC',
+-	"excl;":                     '\U00000021',
+-	"exist;":                    '\U00002203',
+-	"expectation;":              '\U00002130',
+-	"exponentiale;":             '\U00002147',
+-	"fallingdotseq;":            '\U00002252',
+-	"fcy;":                      '\U00000444',
+-	"female;":                   '\U00002640',
+-	"ffilig;":                   '\U0000FB03',
+-	"fflig;":                    '\U0000FB00',
+-	"ffllig;":                   '\U0000FB04',
+-	"ffr;":                      '\U0001D523',
+-	"filig;":                    '\U0000FB01',
+-	"flat;":                     '\U0000266D',
+-	"fllig;":                    '\U0000FB02',
+-	"fltns;":                    '\U000025B1',
+-	"fnof;":                     '\U00000192',
+-	"fopf;":                     '\U0001D557',
+-	"forall;":                   '\U00002200',
+-	"fork;":                     '\U000022D4',
+-	"forkv;":                    '\U00002AD9',
+-	"fpartint;":                 '\U00002A0D',
+-	"frac12;":                   '\U000000BD',
+-	"frac13;":                   '\U00002153',
+-	"frac14;":                   '\U000000BC',
+-	"frac15;":                   '\U00002155',
+-	"frac16;":                   '\U00002159',
+-	"frac18;":                   '\U0000215B',
+-	"frac23;":                   '\U00002154',
+-	"frac25;":                   '\U00002156',
+-	"frac34;":                   '\U000000BE',
+-	"frac35;":                   '\U00002157',
+-	"frac38;":                   '\U0000215C',
+-	"frac45;":                   '\U00002158',
+-	"frac56;":                   '\U0000215A',
+-	"frac58;":                   '\U0000215D',
+-	"frac78;":                   '\U0000215E',
+-	"frasl;":                    '\U00002044',
+-	"frown;":                    '\U00002322',
+-	"fscr;":                     '\U0001D4BB',
+-	"gE;":                       '\U00002267',
+-	"gEl;":                      '\U00002A8C',
+-	"gacute;":                   '\U000001F5',
+-	"gamma;":                    '\U000003B3',
+-	"gammad;":                   '\U000003DD',
+-	"gap;":                      '\U00002A86',
+-	"gbreve;":                   '\U0000011F',
+-	"gcirc;":                    '\U0000011D',
+-	"gcy;":                      '\U00000433',
+-	"gdot;":                     '\U00000121',
+-	"ge;":                       '\U00002265',
+-	"gel;":                      '\U000022DB',
+-	"geq;":                      '\U00002265',
+-	"geqq;":                     '\U00002267',
+-	"geqslant;":                 '\U00002A7E',
+-	"ges;":                      '\U00002A7E',
+-	"gescc;":                    '\U00002AA9',
+-	"gesdot;":                   '\U00002A80',
+-	"gesdoto;":                  '\U00002A82',
+-	"gesdotol;":                 '\U00002A84',
+-	"gesles;":                   '\U00002A94',
+-	"gfr;":                      '\U0001D524',
+-	"gg;":                       '\U0000226B',
+-	"ggg;":                      '\U000022D9',
+-	"gimel;":                    '\U00002137',
+-	"gjcy;":                     '\U00000453',
+-	"gl;":                       '\U00002277',
+-	"glE;":                      '\U00002A92',
+-	"gla;":                      '\U00002AA5',
+-	"glj;":                      '\U00002AA4',
+-	"gnE;":                      '\U00002269',
+-	"gnap;":                     '\U00002A8A',
+-	"gnapprox;":                 '\U00002A8A',
+-	"gne;":                      '\U00002A88',
+-	"gneq;":                     '\U00002A88',
+-	"gneqq;":                    '\U00002269',
+-	"gnsim;":                    '\U000022E7',
+-	"gopf;":                     '\U0001D558',
+-	"grave;":                    '\U00000060',
+-	"gscr;":                     '\U0000210A',
+-	"gsim;":                     '\U00002273',
+-	"gsime;":                    '\U00002A8E',
+-	"gsiml;":                    '\U00002A90',
+-	"gt;":                       '\U0000003E',
+-	"gtcc;":                     '\U00002AA7',
+-	"gtcir;":                    '\U00002A7A',
+-	"gtdot;":                    '\U000022D7',
+-	"gtlPar;":                   '\U00002995',
+-	"gtquest;":                  '\U00002A7C',
+-	"gtrapprox;":                '\U00002A86',
+-	"gtrarr;":                   '\U00002978',
+-	"gtrdot;":                   '\U000022D7',
+-	"gtreqless;":                '\U000022DB',
+-	"gtreqqless;":               '\U00002A8C',
+-	"gtrless;":                  '\U00002277',
+-	"gtrsim;":                   '\U00002273',
+-	"hArr;":                     '\U000021D4',
+-	"hairsp;":                   '\U0000200A',
+-	"half;":                     '\U000000BD',
+-	"hamilt;":                   '\U0000210B',
+-	"hardcy;":                   '\U0000044A',
+-	"harr;":                     '\U00002194',
+-	"harrcir;":                  '\U00002948',
+-	"harrw;":                    '\U000021AD',
+-	"hbar;":                     '\U0000210F',
+-	"hcirc;":                    '\U00000125',
+-	"hearts;":                   '\U00002665',
+-	"heartsuit;":                '\U00002665',
+-	"hellip;":                   '\U00002026',
+-	"hercon;":                   '\U000022B9',
+-	"hfr;":                      '\U0001D525',
+-	"hksearow;":                 '\U00002925',
+-	"hkswarow;":                 '\U00002926',
+-	"hoarr;":                    '\U000021FF',
+-	"homtht;":                   '\U0000223B',
+-	"hookleftarrow;":            '\U000021A9',
+-	"hookrightarrow;":           '\U000021AA',
+-	"hopf;":                     '\U0001D559',
+-	"horbar;":                   '\U00002015',
+-	"hscr;":                     '\U0001D4BD',
+-	"hslash;":                   '\U0000210F',
+-	"hstrok;":                   '\U00000127',
+-	"hybull;":                   '\U00002043',
+-	"hyphen;":                   '\U00002010',
+-	"iacute;":                   '\U000000ED',
+-	"ic;":                       '\U00002063',
+-	"icirc;":                    '\U000000EE',
+-	"icy;":                      '\U00000438',
+-	"iecy;":                     '\U00000435',
+-	"iexcl;":                    '\U000000A1',
+-	"iff;":                      '\U000021D4',
+-	"ifr;":                      '\U0001D526',
+-	"igrave;":                   '\U000000EC',
+-	"ii;":                       '\U00002148',
+-	"iiiint;":                   '\U00002A0C',
+-	"iiint;":                    '\U0000222D',
+-	"iinfin;":                   '\U000029DC',
+-	"iiota;":                    '\U00002129',
+-	"ijlig;":                    '\U00000133',
+-	"imacr;":                    '\U0000012B',
+-	"image;":                    '\U00002111',
+-	"imagline;":                 '\U00002110',
+-	"imagpart;":                 '\U00002111',
+-	"imath;":                    '\U00000131',
+-	"imof;":                     '\U000022B7',
+-	"imped;":                    '\U000001B5',
+-	"in;":                       '\U00002208',
+-	"incare;":                   '\U00002105',
+-	"infin;":                    '\U0000221E',
+-	"infintie;":                 '\U000029DD',
+-	"inodot;":                   '\U00000131',
+-	"int;":                      '\U0000222B',
+-	"intcal;":                   '\U000022BA',
+-	"integers;":                 '\U00002124',
+-	"intercal;":                 '\U000022BA',
+-	"intlarhk;":                 '\U00002A17',
+-	"intprod;":                  '\U00002A3C',
+-	"iocy;":                     '\U00000451',
+-	"iogon;":                    '\U0000012F',
+-	"iopf;":                     '\U0001D55A',
+-	"iota;":                     '\U000003B9',
+-	"iprod;":                    '\U00002A3C',
+-	"iquest;":                   '\U000000BF',
+-	"iscr;":                     '\U0001D4BE',
+-	"isin;":                     '\U00002208',
+-	"isinE;":                    '\U000022F9',
+-	"isindot;":                  '\U000022F5',
+-	"isins;":                    '\U000022F4',
+-	"isinsv;":                   '\U000022F3',
+-	"isinv;":                    '\U00002208',
+-	"it;":                       '\U00002062',
+-	"itilde;":                   '\U00000129',
+-	"iukcy;":                    '\U00000456',
+-	"iuml;":                     '\U000000EF',
+-	"jcirc;":                    '\U00000135',
+-	"jcy;":                      '\U00000439',
+-	"jfr;":                      '\U0001D527',
+-	"jmath;":                    '\U00000237',
+-	"jopf;":                     '\U0001D55B',
+-	"jscr;":                     '\U0001D4BF',
+-	"jsercy;":                   '\U00000458',
+-	"jukcy;":                    '\U00000454',
+-	"kappa;":                    '\U000003BA',
+-	"kappav;":                   '\U000003F0',
+-	"kcedil;":                   '\U00000137',
+-	"kcy;":                      '\U0000043A',
+-	"kfr;":                      '\U0001D528',
+-	"kgreen;":                   '\U00000138',
+-	"khcy;":                     '\U00000445',
+-	"kjcy;":                     '\U0000045C',
+-	"kopf;":                     '\U0001D55C',
+-	"kscr;":                     '\U0001D4C0',
+-	"lAarr;":                    '\U000021DA',
+-	"lArr;":                     '\U000021D0',
+-	"lAtail;":                   '\U0000291B',
+-	"lBarr;":                    '\U0000290E',
+-	"lE;":                       '\U00002266',
+-	"lEg;":                      '\U00002A8B',
+-	"lHar;":                     '\U00002962',
+-	"lacute;":                   '\U0000013A',
+-	"laemptyv;":                 '\U000029B4',
+-	"lagran;":                   '\U00002112',
+-	"lambda;":                   '\U000003BB',
+-	"lang;":                     '\U000027E8',
+-	"langd;":                    '\U00002991',
+-	"langle;":                   '\U000027E8',
+-	"lap;":                      '\U00002A85',
+-	"laquo;":                    '\U000000AB',
+-	"larr;":                     '\U00002190',
+-	"larrb;":                    '\U000021E4',
+-	"larrbfs;":                  '\U0000291F',
+-	"larrfs;":                   '\U0000291D',
+-	"larrhk;":                   '\U000021A9',
+-	"larrlp;":                   '\U000021AB',
+-	"larrpl;":                   '\U00002939',
+-	"larrsim;":                  '\U00002973',
+-	"larrtl;":                   '\U000021A2',
+-	"lat;":                      '\U00002AAB',
+-	"latail;":                   '\U00002919',
+-	"late;":                     '\U00002AAD',
+-	"lbarr;":                    '\U0000290C',
+-	"lbbrk;":                    '\U00002772',
+-	"lbrace;":                   '\U0000007B',
+-	"lbrack;":                   '\U0000005B',
+-	"lbrke;":                    '\U0000298B',
+-	"lbrksld;":                  '\U0000298F',
+-	"lbrkslu;":                  '\U0000298D',
+-	"lcaron;":                   '\U0000013E',
+-	"lcedil;":                   '\U0000013C',
+-	"lceil;":                    '\U00002308',
+-	"lcub;":                     '\U0000007B',
+-	"lcy;":                      '\U0000043B',
+-	"ldca;":                     '\U00002936',
+-	"ldquo;":                    '\U0000201C',
+-	"ldquor;":                   '\U0000201E',
+-	"ldrdhar;":                  '\U00002967',
+-	"ldrushar;":                 '\U0000294B',
+-	"ldsh;":                     '\U000021B2',
+-	"le;":                       '\U00002264',
+-	"leftarrow;":                '\U00002190',
+-	"leftarrowtail;":            '\U000021A2',
+-	"leftharpoondown;":          '\U000021BD',
+-	"leftharpoonup;":            '\U000021BC',
+-	"leftleftarrows;":           '\U000021C7',
+-	"leftrightarrow;":           '\U00002194',
+-	"leftrightarrows;":          '\U000021C6',
+-	"leftrightharpoons;":        '\U000021CB',
+-	"leftrightsquigarrow;":      '\U000021AD',
+-	"leftthreetimes;":           '\U000022CB',
+-	"leg;":                      '\U000022DA',
+-	"leq;":                      '\U00002264',
+-	"leqq;":                     '\U00002266',
+-	"leqslant;":                 '\U00002A7D',
+-	"les;":                      '\U00002A7D',
+-	"lescc;":                    '\U00002AA8',
+-	"lesdot;":                   '\U00002A7F',
+-	"lesdoto;":                  '\U00002A81',
+-	"lesdotor;":                 '\U00002A83',
+-	"lesges;":                   '\U00002A93',
+-	"lessapprox;":               '\U00002A85',
+-	"lessdot;":                  '\U000022D6',
+-	"lesseqgtr;":                '\U000022DA',
+-	"lesseqqgtr;":               '\U00002A8B',
+-	"lessgtr;":                  '\U00002276',
+-	"lesssim;":                  '\U00002272',
+-	"lfisht;":                   '\U0000297C',
+-	"lfloor;":                   '\U0000230A',
+-	"lfr;":                      '\U0001D529',
+-	"lg;":                       '\U00002276',
+-	"lgE;":                      '\U00002A91',
+-	"lhard;":                    '\U000021BD',
+-	"lharu;":                    '\U000021BC',
+-	"lharul;":                   '\U0000296A',
+-	"lhblk;":                    '\U00002584',
+-	"ljcy;":                     '\U00000459',
+-	"ll;":                       '\U0000226A',
+-	"llarr;":                    '\U000021C7',
+-	"llcorner;":                 '\U0000231E',
+-	"llhard;":                   '\U0000296B',
+-	"lltri;":                    '\U000025FA',
+-	"lmidot;":                   '\U00000140',
+-	"lmoust;":                   '\U000023B0',
+-	"lmoustache;":               '\U000023B0',
+-	"lnE;":                      '\U00002268',
+-	"lnap;":                     '\U00002A89',
+-	"lnapprox;":                 '\U00002A89',
+-	"lne;":                      '\U00002A87',
+-	"lneq;":                     '\U00002A87',
+-	"lneqq;":                    '\U00002268',
+-	"lnsim;":                    '\U000022E6',
+-	"loang;":                    '\U000027EC',
+-	"loarr;":                    '\U000021FD',
+-	"lobrk;":                    '\U000027E6',
+-	"longleftarrow;":            '\U000027F5',
+-	"longleftrightarrow;":       '\U000027F7',
+-	"longmapsto;":               '\U000027FC',
+-	"longrightarrow;":           '\U000027F6',
+-	"looparrowleft;":            '\U000021AB',
+-	"looparrowright;":           '\U000021AC',
+-	"lopar;":                    '\U00002985',
+-	"lopf;":                     '\U0001D55D',
+-	"loplus;":                   '\U00002A2D',
+-	"lotimes;":                  '\U00002A34',
+-	"lowast;":                   '\U00002217',
+-	"lowbar;":                   '\U0000005F',
+-	"loz;":                      '\U000025CA',
+-	"lozenge;":                  '\U000025CA',
+-	"lozf;":                     '\U000029EB',
+-	"lpar;":                     '\U00000028',
+-	"lparlt;":                   '\U00002993',
+-	"lrarr;":                    '\U000021C6',
+-	"lrcorner;":                 '\U0000231F',
+-	"lrhar;":                    '\U000021CB',
+-	"lrhard;":                   '\U0000296D',
+-	"lrm;":                      '\U0000200E',
+-	"lrtri;":                    '\U000022BF',
+-	"lsaquo;":                   '\U00002039',
+-	"lscr;":                     '\U0001D4C1',
+-	"lsh;":                      '\U000021B0',
+-	"lsim;":                     '\U00002272',
+-	"lsime;":                    '\U00002A8D',
+-	"lsimg;":                    '\U00002A8F',
+-	"lsqb;":                     '\U0000005B',
+-	"lsquo;":                    '\U00002018',
+-	"lsquor;":                   '\U0000201A',
+-	"lstrok;":                   '\U00000142',
+-	"lt;":                       '\U0000003C',
+-	"ltcc;":                     '\U00002AA6',
+-	"ltcir;":                    '\U00002A79',
+-	"ltdot;":                    '\U000022D6',
+-	"lthree;":                   '\U000022CB',
+-	"ltimes;":                   '\U000022C9',
+-	"ltlarr;":                   '\U00002976',
+-	"ltquest;":                  '\U00002A7B',
+-	"ltrPar;":                   '\U00002996',
+-	"ltri;":                     '\U000025C3',
+-	"ltrie;":                    '\U000022B4',
+-	"ltrif;":                    '\U000025C2',
+-	"lurdshar;":                 '\U0000294A',
+-	"luruhar;":                  '\U00002966',
+-	"mDDot;":                    '\U0000223A',
+-	"macr;":                     '\U000000AF',
+-	"male;":                     '\U00002642',
+-	"malt;":                     '\U00002720',
+-	"maltese;":                  '\U00002720',
+-	"map;":                      '\U000021A6',
+-	"mapsto;":                   '\U000021A6',
+-	"mapstodown;":               '\U000021A7',
+-	"mapstoleft;":               '\U000021A4',
+-	"mapstoup;":                 '\U000021A5',
+-	"marker;":                   '\U000025AE',
+-	"mcomma;":                   '\U00002A29',
+-	"mcy;":                      '\U0000043C',
+-	"mdash;":                    '\U00002014',
+-	"measuredangle;":            '\U00002221',
+-	"mfr;":                      '\U0001D52A',
+-	"mho;":                      '\U00002127',
+-	"micro;":                    '\U000000B5',
+-	"mid;":                      '\U00002223',
+-	"midast;":                   '\U0000002A',
+-	"midcir;":                   '\U00002AF0',
+-	"middot;":                   '\U000000B7',
+-	"minus;":                    '\U00002212',
+-	"minusb;":                   '\U0000229F',
+-	"minusd;":                   '\U00002238',
+-	"minusdu;":                  '\U00002A2A',
+-	"mlcp;":                     '\U00002ADB',
+-	"mldr;":                     '\U00002026',
+-	"mnplus;":                   '\U00002213',
+-	"models;":                   '\U000022A7',
+-	"mopf;":                     '\U0001D55E',
+-	"mp;":                       '\U00002213',
+-	"mscr;":                     '\U0001D4C2',
+-	"mstpos;":                   '\U0000223E',
+-	"mu;":                       '\U000003BC',
+-	"multimap;":                 '\U000022B8',
+-	"mumap;":                    '\U000022B8',
+-	"nLeftarrow;":               '\U000021CD',
+-	"nLeftrightarrow;":          '\U000021CE',
+-	"nRightarrow;":              '\U000021CF',
+-	"nVDash;":                   '\U000022AF',
+-	"nVdash;":                   '\U000022AE',
+-	"nabla;":                    '\U00002207',
+-	"nacute;":                   '\U00000144',
+-	"nap;":                      '\U00002249',
+-	"napos;":                    '\U00000149',
+-	"napprox;":                  '\U00002249',
+-	"natur;":                    '\U0000266E',
+-	"natural;":                  '\U0000266E',
+-	"naturals;":                 '\U00002115',
+-	"nbsp;":                     '\U000000A0',
+-	"ncap;":                     '\U00002A43',
+-	"ncaron;":                   '\U00000148',
+-	"ncedil;":                   '\U00000146',
+-	"ncong;":                    '\U00002247',
+-	"ncup;":                     '\U00002A42',
+-	"ncy;":                      '\U0000043D',
+-	"ndash;":                    '\U00002013',
+-	"ne;":                       '\U00002260',
+-	"neArr;":                    '\U000021D7',
+-	"nearhk;":                   '\U00002924',
+-	"nearr;":                    '\U00002197',
+-	"nearrow;":                  '\U00002197',
+-	"nequiv;":                   '\U00002262',
+-	"nesear;":                   '\U00002928',
+-	"nexist;":                   '\U00002204',
+-	"nexists;":                  '\U00002204',
+-	"nfr;":                      '\U0001D52B',
+-	"nge;":                      '\U00002271',
+-	"ngeq;":                     '\U00002271',
+-	"ngsim;":                    '\U00002275',
+-	"ngt;":                      '\U0000226F',
+-	"ngtr;":                     '\U0000226F',
+-	"nhArr;":                    '\U000021CE',
+-	"nharr;":                    '\U000021AE',
+-	"nhpar;":                    '\U00002AF2',
+-	"ni;":                       '\U0000220B',
+-	"nis;":                      '\U000022FC',
+-	"nisd;":                     '\U000022FA',
+-	"niv;":                      '\U0000220B',
+-	"njcy;":                     '\U0000045A',
+-	"nlArr;":                    '\U000021CD',
+-	"nlarr;":                    '\U0000219A',
+-	"nldr;":                     '\U00002025',
+-	"nle;":                      '\U00002270',
+-	"nleftarrow;":               '\U0000219A',
+-	"nleftrightarrow;":          '\U000021AE',
+-	"nleq;":                     '\U00002270',
+-	"nless;":                    '\U0000226E',
+-	"nlsim;":                    '\U00002274',
+-	"nlt;":                      '\U0000226E',
+-	"nltri;":                    '\U000022EA',
+-	"nltrie;":                   '\U000022EC',
+-	"nmid;":                     '\U00002224',
+-	"nopf;":                     '\U0001D55F',
+-	"not;":                      '\U000000AC',
+-	"notin;":                    '\U00002209',
+-	"notinva;":                  '\U00002209',
+-	"notinvb;":                  '\U000022F7',
+-	"notinvc;":                  '\U000022F6',
+-	"notni;":                    '\U0000220C',
+-	"notniva;":                  '\U0000220C',
+-	"notnivb;":                  '\U000022FE',
+-	"notnivc;":                  '\U000022FD',
+-	"npar;":                     '\U00002226',
+-	"nparallel;":                '\U00002226',
+-	"npolint;":                  '\U00002A14',
+-	"npr;":                      '\U00002280',
+-	"nprcue;":                   '\U000022E0',
+-	"nprec;":                    '\U00002280',
+-	"nrArr;":                    '\U000021CF',
+-	"nrarr;":                    '\U0000219B',
+-	"nrightarrow;":              '\U0000219B',
+-	"nrtri;":                    '\U000022EB',
+-	"nrtrie;":                   '\U000022ED',
+-	"nsc;":                      '\U00002281',
+-	"nsccue;":                   '\U000022E1',
+-	"nscr;":                     '\U0001D4C3',
+-	"nshortmid;":                '\U00002224',
+-	"nshortparallel;":           '\U00002226',
+-	"nsim;":                     '\U00002241',
+-	"nsime;":                    '\U00002244',
+-	"nsimeq;":                   '\U00002244',
+-	"nsmid;":                    '\U00002224',
+-	"nspar;":                    '\U00002226',
+-	"nsqsube;":                  '\U000022E2',
+-	"nsqsupe;":                  '\U000022E3',
+-	"nsub;":                     '\U00002284',
+-	"nsube;":                    '\U00002288',
+-	"nsubseteq;":                '\U00002288',
+-	"nsucc;":                    '\U00002281',
+-	"nsup;":                     '\U00002285',
+-	"nsupe;":                    '\U00002289',
+-	"nsupseteq;":                '\U00002289',
+-	"ntgl;":                     '\U00002279',
+-	"ntilde;":                   '\U000000F1',
+-	"ntlg;":                     '\U00002278',
+-	"ntriangleleft;":            '\U000022EA',
+-	"ntrianglelefteq;":          '\U000022EC',
+-	"ntriangleright;":           '\U000022EB',
+-	"ntrianglerighteq;":         '\U000022ED',
+-	"nu;":                       '\U000003BD',
+-	"num;":                      '\U00000023',
+-	"numero;":                   '\U00002116',
+-	"numsp;":                    '\U00002007',
+-	"nvDash;":                   '\U000022AD',
+-	"nvHarr;":                   '\U00002904',
+-	"nvdash;":                   '\U000022AC',
+-	"nvinfin;":                  '\U000029DE',
+-	"nvlArr;":                   '\U00002902',
+-	"nvrArr;":                   '\U00002903',
+-	"nwArr;":                    '\U000021D6',
+-	"nwarhk;":                   '\U00002923',
+-	"nwarr;":                    '\U00002196',
+-	"nwarrow;":                  '\U00002196',
+-	"nwnear;":                   '\U00002927',
+-	"oS;":                       '\U000024C8',
+-	"oacute;":                   '\U000000F3',
+-	"oast;":                     '\U0000229B',
+-	"ocir;":                     '\U0000229A',
+-	"ocirc;":                    '\U000000F4',
+-	"ocy;":                      '\U0000043E',
+-	"odash;":                    '\U0000229D',
+-	"odblac;":                   '\U00000151',
+-	"odiv;":                     '\U00002A38',
+-	"odot;":                     '\U00002299',
+-	"odsold;":                   '\U000029BC',
+-	"oelig;":                    '\U00000153',
+-	"ofcir;":                    '\U000029BF',
+-	"ofr;":                      '\U0001D52C',
+-	"ogon;":                     '\U000002DB',
+-	"ograve;":                   '\U000000F2',
+-	"ogt;":                      '\U000029C1',
+-	"ohbar;":                    '\U000029B5',
+-	"ohm;":                      '\U000003A9',
+-	"oint;":                     '\U0000222E',
+-	"olarr;":                    '\U000021BA',
+-	"olcir;":                    '\U000029BE',
+-	"olcross;":                  '\U000029BB',
+-	"oline;":                    '\U0000203E',
+-	"olt;":                      '\U000029C0',
+-	"omacr;":                    '\U0000014D',
+-	"omega;":                    '\U000003C9',
+-	"omicron;":                  '\U000003BF',
+-	"omid;":                     '\U000029B6',
+-	"ominus;":                   '\U00002296',
+-	"oopf;":                     '\U0001D560',
+-	"opar;":                     '\U000029B7',
+-	"operp;":                    '\U000029B9',
+-	"oplus;":                    '\U00002295',
+-	"or;":                       '\U00002228',
+-	"orarr;":                    '\U000021BB',
+-	"ord;":                      '\U00002A5D',
+-	"order;":                    '\U00002134',
+-	"orderof;":                  '\U00002134',
+-	"ordf;":                     '\U000000AA',
+-	"ordm;":                     '\U000000BA',
+-	"origof;":                   '\U000022B6',
+-	"oror;":                     '\U00002A56',
+-	"orslope;":                  '\U00002A57',
+-	"orv;":                      '\U00002A5B',
+-	"oscr;":                     '\U00002134',
+-	"oslash;":                   '\U000000F8',
+-	"osol;":                     '\U00002298',
+-	"otilde;":                   '\U000000F5',
+-	"otimes;":                   '\U00002297',
+-	"otimesas;":                 '\U00002A36',
+-	"ouml;":                     '\U000000F6',
+-	"ovbar;":                    '\U0000233D',
+-	"par;":                      '\U00002225',
+-	"para;":                     '\U000000B6',
+-	"parallel;":                 '\U00002225',
+-	"parsim;":                   '\U00002AF3',
+-	"parsl;":                    '\U00002AFD',
+-	"part;":                     '\U00002202',
+-	"pcy;":                      '\U0000043F',
+-	"percnt;":                   '\U00000025',
+-	"period;":                   '\U0000002E',
+-	"permil;":                   '\U00002030',
+-	"perp;":                     '\U000022A5',
+-	"pertenk;":                  '\U00002031',
+-	"pfr;":                      '\U0001D52D',
+-	"phi;":                      '\U000003C6',
+-	"phiv;":                     '\U000003D5',
+-	"phmmat;":                   '\U00002133',
+-	"phone;":                    '\U0000260E',
+-	"pi;":                       '\U000003C0',
+-	"pitchfork;":                '\U000022D4',
+-	"piv;":                      '\U000003D6',
+-	"planck;":                   '\U0000210F',
+-	"planckh;":                  '\U0000210E',
+-	"plankv;":                   '\U0000210F',
+-	"plus;":                     '\U0000002B',
+-	"plusacir;":                 '\U00002A23',
+-	"plusb;":                    '\U0000229E',
+-	"pluscir;":                  '\U00002A22',
+-	"plusdo;":                   '\U00002214',
+-	"plusdu;":                   '\U00002A25',
+-	"pluse;":                    '\U00002A72',
+-	"plusmn;":                   '\U000000B1',
+-	"plussim;":                  '\U00002A26',
+-	"plustwo;":                  '\U00002A27',
+-	"pm;":                       '\U000000B1',
+-	"pointint;":                 '\U00002A15',
+-	"popf;":                     '\U0001D561',
+-	"pound;":                    '\U000000A3',
+-	"pr;":                       '\U0000227A',
+-	"prE;":                      '\U00002AB3',
+-	"prap;":                     '\U00002AB7',
+-	"prcue;":                    '\U0000227C',
+-	"pre;":                      '\U00002AAF',
+-	"prec;":                     '\U0000227A',
+-	"precapprox;":               '\U00002AB7',
+-	"preccurlyeq;":              '\U0000227C',
+-	"preceq;":                   '\U00002AAF',
+-	"precnapprox;":              '\U00002AB9',
+-	"precneqq;":                 '\U00002AB5',
+-	"precnsim;":                 '\U000022E8',
+-	"precsim;":                  '\U0000227E',
+-	"prime;":                    '\U00002032',
+-	"primes;":                   '\U00002119',
+-	"prnE;":                     '\U00002AB5',
+-	"prnap;":                    '\U00002AB9',
+-	"prnsim;":                   '\U000022E8',
+-	"prod;":                     '\U0000220F',
+-	"profalar;":                 '\U0000232E',
+-	"profline;":                 '\U00002312',
+-	"profsurf;":                 '\U00002313',
+-	"prop;":                     '\U0000221D',
+-	"propto;":                   '\U0000221D',
+-	"prsim;":                    '\U0000227E',
+-	"prurel;":                   '\U000022B0',
+-	"pscr;":                     '\U0001D4C5',
+-	"psi;":                      '\U000003C8',
+-	"puncsp;":                   '\U00002008',
+-	"qfr;":                      '\U0001D52E',
+-	"qint;":                     '\U00002A0C',
+-	"qopf;":                     '\U0001D562',
+-	"qprime;":                   '\U00002057',
+-	"qscr;":                     '\U0001D4C6',
+-	"quaternions;":              '\U0000210D',
+-	"quatint;":                  '\U00002A16',
+-	"quest;":                    '\U0000003F',
+-	"questeq;":                  '\U0000225F',
+-	"quot;":                     '\U00000022',
+-	"rAarr;":                    '\U000021DB',
+-	"rArr;":                     '\U000021D2',
+-	"rAtail;":                   '\U0000291C',
+-	"rBarr;":                    '\U0000290F',
+-	"rHar;":                     '\U00002964',
+-	"racute;":                   '\U00000155',
+-	"radic;":                    '\U0000221A',
+-	"raemptyv;":                 '\U000029B3',
+-	"rang;":                     '\U000027E9',
+-	"rangd;":                    '\U00002992',
+-	"range;":                    '\U000029A5',
+-	"rangle;":                   '\U000027E9',
+-	"raquo;":                    '\U000000BB',
+-	"rarr;":                     '\U00002192',
+-	"rarrap;":                   '\U00002975',
+-	"rarrb;":                    '\U000021E5',
+-	"rarrbfs;":                  '\U00002920',
+-	"rarrc;":                    '\U00002933',
+-	"rarrfs;":                   '\U0000291E',
+-	"rarrhk;":                   '\U000021AA',
+-	"rarrlp;":                   '\U000021AC',
+-	"rarrpl;":                   '\U00002945',
+-	"rarrsim;":                  '\U00002974',
+-	"rarrtl;":                   '\U000021A3',
+-	"rarrw;":                    '\U0000219D',
+-	"ratail;":                   '\U0000291A',
+-	"ratio;":                    '\U00002236',
+-	"rationals;":                '\U0000211A',
+-	"rbarr;":                    '\U0000290D',
+-	"rbbrk;":                    '\U00002773',
+-	"rbrace;":                   '\U0000007D',
+-	"rbrack;":                   '\U0000005D',
+-	"rbrke;":                    '\U0000298C',
+-	"rbrksld;":                  '\U0000298E',
+-	"rbrkslu;":                  '\U00002990',
+-	"rcaron;":                   '\U00000159',
+-	"rcedil;":                   '\U00000157',
+-	"rceil;":                    '\U00002309',
+-	"rcub;":                     '\U0000007D',
+-	"rcy;":                      '\U00000440',
+-	"rdca;":                     '\U00002937',
+-	"rdldhar;":                  '\U00002969',
+-	"rdquo;":                    '\U0000201D',
+-	"rdquor;":                   '\U0000201D',
+-	"rdsh;":                     '\U000021B3',
+-	"real;":                     '\U0000211C',
+-	"realine;":                  '\U0000211B',
+-	"realpart;":                 '\U0000211C',
+-	"reals;":                    '\U0000211D',
+-	"rect;":                     '\U000025AD',
+-	"reg;":                      '\U000000AE',
+-	"rfisht;":                   '\U0000297D',
+-	"rfloor;":                   '\U0000230B',
+-	"rfr;":                      '\U0001D52F',
+-	"rhard;":                    '\U000021C1',
+-	"rharu;":                    '\U000021C0',
+-	"rharul;":                   '\U0000296C',
+-	"rho;":                      '\U000003C1',
+-	"rhov;":                     '\U000003F1',
+-	"rightarrow;":               '\U00002192',
+-	"rightarrowtail;":           '\U000021A3',
+-	"rightharpoondown;":         '\U000021C1',
+-	"rightharpoonup;":           '\U000021C0',
+-	"rightleftarrows;":          '\U000021C4',
+-	"rightleftharpoons;":        '\U000021CC',
+-	"rightrightarrows;":         '\U000021C9',
+-	"rightsquigarrow;":          '\U0000219D',
+-	"rightthreetimes;":          '\U000022CC',
+-	"ring;":                     '\U000002DA',
+-	"risingdotseq;":             '\U00002253',
+-	"rlarr;":                    '\U000021C4',
+-	"rlhar;":                    '\U000021CC',
+-	"rlm;":                      '\U0000200F',
+-	"rmoust;":                   '\U000023B1',
+-	"rmoustache;":               '\U000023B1',
+-	"rnmid;":                    '\U00002AEE',
+-	"roang;":                    '\U000027ED',
+-	"roarr;":                    '\U000021FE',
+-	"robrk;":                    '\U000027E7',
+-	"ropar;":                    '\U00002986',
+-	"ropf;":                     '\U0001D563',
+-	"roplus;":                   '\U00002A2E',
+-	"rotimes;":                  '\U00002A35',
+-	"rpar;":                     '\U00000029',
+-	"rpargt;":                   '\U00002994',
+-	"rppolint;":                 '\U00002A12',
+-	"rrarr;":                    '\U000021C9',
+-	"rsaquo;":                   '\U0000203A',
+-	"rscr;":                     '\U0001D4C7',
+-	"rsh;":                      '\U000021B1',
+-	"rsqb;":                     '\U0000005D',
+-	"rsquo;":                    '\U00002019',
+-	"rsquor;":                   '\U00002019',
+-	"rthree;":                   '\U000022CC',
+-	"rtimes;":                   '\U000022CA',
+-	"rtri;":                     '\U000025B9',
+-	"rtrie;":                    '\U000022B5',
+-	"rtrif;":                    '\U000025B8',
+-	"rtriltri;":                 '\U000029CE',
+-	"ruluhar;":                  '\U00002968',
+-	"rx;":                       '\U0000211E',
+-	"sacute;":                   '\U0000015B',
+-	"sbquo;":                    '\U0000201A',
+-	"sc;":                       '\U0000227B',
+-	"scE;":                      '\U00002AB4',
+-	"scap;":                     '\U00002AB8',
+-	"scaron;":                   '\U00000161',
+-	"sccue;":                    '\U0000227D',
+-	"sce;":                      '\U00002AB0',
+-	"scedil;":                   '\U0000015F',
+-	"scirc;":                    '\U0000015D',
+-	"scnE;":                     '\U00002AB6',
+-	"scnap;":                    '\U00002ABA',
+-	"scnsim;":                   '\U000022E9',
+-	"scpolint;":                 '\U00002A13',
+-	"scsim;":                    '\U0000227F',
+-	"scy;":                      '\U00000441',
+-	"sdot;":                     '\U000022C5',
+-	"sdotb;":                    '\U000022A1',
+-	"sdote;":                    '\U00002A66',
+-	"seArr;":                    '\U000021D8',
+-	"searhk;":                   '\U00002925',
+-	"searr;":                    '\U00002198',
+-	"searrow;":                  '\U00002198',
+-	"sect;":                     '\U000000A7',
+-	"semi;":                     '\U0000003B',
+-	"seswar;":                   '\U00002929',
+-	"setminus;":                 '\U00002216',
+-	"setmn;":                    '\U00002216',
+-	"sext;":                     '\U00002736',
+-	"sfr;":                      '\U0001D530',
+-	"sfrown;":                   '\U00002322',
+-	"sharp;":                    '\U0000266F',
+-	"shchcy;":                   '\U00000449',
+-	"shcy;":                     '\U00000448',
+-	"shortmid;":                 '\U00002223',
+-	"shortparallel;":            '\U00002225',
+-	"shy;":                      '\U000000AD',
+-	"sigma;":                    '\U000003C3',
+-	"sigmaf;":                   '\U000003C2',
+-	"sigmav;":                   '\U000003C2',
+-	"sim;":                      '\U0000223C',
+-	"simdot;":                   '\U00002A6A',
+-	"sime;":                     '\U00002243',
+-	"simeq;":                    '\U00002243',
+-	"simg;":                     '\U00002A9E',
+-	"simgE;":                    '\U00002AA0',
+-	"siml;":                     '\U00002A9D',
+-	"simlE;":                    '\U00002A9F',
+-	"simne;":                    '\U00002246',
+-	"simplus;":                  '\U00002A24',
+-	"simrarr;":                  '\U00002972',
+-	"slarr;":                    '\U00002190',
+-	"smallsetminus;":            '\U00002216',
+-	"smashp;":                   '\U00002A33',
+-	"smeparsl;":                 '\U000029E4',
+-	"smid;":                     '\U00002223',
+-	"smile;":                    '\U00002323',
+-	"smt;":                      '\U00002AAA',
+-	"smte;":                     '\U00002AAC',
+-	"softcy;":                   '\U0000044C',
+-	"sol;":                      '\U0000002F',
+-	"solb;":                     '\U000029C4',
+-	"solbar;":                   '\U0000233F',
+-	"sopf;":                     '\U0001D564',
+-	"spades;":                   '\U00002660',
+-	"spadesuit;":                '\U00002660',
+-	"spar;":                     '\U00002225',
+-	"sqcap;":                    '\U00002293',
+-	"sqcup;":                    '\U00002294',
+-	"sqsub;":                    '\U0000228F',
+-	"sqsube;":                   '\U00002291',
+-	"sqsubset;":                 '\U0000228F',
+-	"sqsubseteq;":               '\U00002291',
+-	"sqsup;":                    '\U00002290',
+-	"sqsupe;":                   '\U00002292',
+-	"sqsupset;":                 '\U00002290',
+-	"sqsupseteq;":               '\U00002292',
+-	"squ;":                      '\U000025A1',
+-	"square;":                   '\U000025A1',
+-	"squarf;":                   '\U000025AA',
+-	"squf;":                     '\U000025AA',
+-	"srarr;":                    '\U00002192',
+-	"sscr;":                     '\U0001D4C8',
+-	"ssetmn;":                   '\U00002216',
+-	"ssmile;":                   '\U00002323',
+-	"sstarf;":                   '\U000022C6',
+-	"star;":                     '\U00002606',
+-	"starf;":                    '\U00002605',
+-	"straightepsilon;":          '\U000003F5',
+-	"straightphi;":              '\U000003D5',
+-	"strns;":                    '\U000000AF',
+-	"sub;":                      '\U00002282',
+-	"subE;":                     '\U00002AC5',
+-	"subdot;":                   '\U00002ABD',
+-	"sube;":                     '\U00002286',
+-	"subedot;":                  '\U00002AC3',
+-	"submult;":                  '\U00002AC1',
+-	"subnE;":                    '\U00002ACB',
+-	"subne;":                    '\U0000228A',
+-	"subplus;":                  '\U00002ABF',
+-	"subrarr;":                  '\U00002979',
+-	"subset;":                   '\U00002282',
+-	"subseteq;":                 '\U00002286',
+-	"subseteqq;":                '\U00002AC5',
+-	"subsetneq;":                '\U0000228A',
+-	"subsetneqq;":               '\U00002ACB',
+-	"subsim;":                   '\U00002AC7',
+-	"subsub;":                   '\U00002AD5',
+-	"subsup;":                   '\U00002AD3',
+-	"succ;":                     '\U0000227B',
+-	"succapprox;":               '\U00002AB8',
+-	"succcurlyeq;":              '\U0000227D',
+-	"succeq;":                   '\U00002AB0',
+-	"succnapprox;":              '\U00002ABA',
+-	"succneqq;":                 '\U00002AB6',
+-	"succnsim;":                 '\U000022E9',
+-	"succsim;":                  '\U0000227F',
+-	"sum;":                      '\U00002211',
+-	"sung;":                     '\U0000266A',
+-	"sup;":                      '\U00002283',
+-	"sup1;":                     '\U000000B9',
+-	"sup2;":                     '\U000000B2',
+-	"sup3;":                     '\U000000B3',
+-	"supE;":                     '\U00002AC6',
+-	"supdot;":                   '\U00002ABE',
+-	"supdsub;":                  '\U00002AD8',
+-	"supe;":                     '\U00002287',
+-	"supedot;":                  '\U00002AC4',
+-	"suphsol;":                  '\U000027C9',
+-	"suphsub;":                  '\U00002AD7',
+-	"suplarr;":                  '\U0000297B',
+-	"supmult;":                  '\U00002AC2',
+-	"supnE;":                    '\U00002ACC',
+-	"supne;":                    '\U0000228B',
+-	"supplus;":                  '\U00002AC0',
+-	"supset;":                   '\U00002283',
+-	"supseteq;":                 '\U00002287',
+-	"supseteqq;":                '\U00002AC6',
+-	"supsetneq;":                '\U0000228B',
+-	"supsetneqq;":               '\U00002ACC',
+-	"supsim;":                   '\U00002AC8',
+-	"supsub;":                   '\U00002AD4',
+-	"supsup;":                   '\U00002AD6',
+-	"swArr;":                    '\U000021D9',
+-	"swarhk;":                   '\U00002926',
+-	"swarr;":                    '\U00002199',
+-	"swarrow;":                  '\U00002199',
+-	"swnwar;":                   '\U0000292A',
+-	"szlig;":                    '\U000000DF',
+-	"target;":                   '\U00002316',
+-	"tau;":                      '\U000003C4',
+-	"tbrk;":                     '\U000023B4',
+-	"tcaron;":                   '\U00000165',
+-	"tcedil;":                   '\U00000163',
+-	"tcy;":                      '\U00000442',
+-	"tdot;":                     '\U000020DB',
+-	"telrec;":                   '\U00002315',
+-	"tfr;":                      '\U0001D531',
+-	"there4;":                   '\U00002234',
+-	"therefore;":                '\U00002234',
+-	"theta;":                    '\U000003B8',
+-	"thetasym;":                 '\U000003D1',
+-	"thetav;":                   '\U000003D1',
+-	"thickapprox;":              '\U00002248',
+-	"thicksim;":                 '\U0000223C',
+-	"thinsp;":                   '\U00002009',
+-	"thkap;":                    '\U00002248',
+-	"thksim;":                   '\U0000223C',
+-	"thorn;":                    '\U000000FE',
+-	"tilde;":                    '\U000002DC',
+-	"times;":                    '\U000000D7',
+-	"timesb;":                   '\U000022A0',
+-	"timesbar;":                 '\U00002A31',
+-	"timesd;":                   '\U00002A30',
+-	"tint;":                     '\U0000222D',
+-	"toea;":                     '\U00002928',
+-	"top;":                      '\U000022A4',
+-	"topbot;":                   '\U00002336',
+-	"topcir;":                   '\U00002AF1',
+-	"topf;":                     '\U0001D565',
+-	"topfork;":                  '\U00002ADA',
+-	"tosa;":                     '\U00002929',
+-	"tprime;":                   '\U00002034',
+-	"trade;":                    '\U00002122',
+-	"triangle;":                 '\U000025B5',
+-	"triangledown;":             '\U000025BF',
+-	"triangleleft;":             '\U000025C3',
+-	"trianglelefteq;":           '\U000022B4',
+-	"triangleq;":                '\U0000225C',
+-	"triangleright;":            '\U000025B9',
+-	"trianglerighteq;":          '\U000022B5',
+-	"tridot;":                   '\U000025EC',
+-	"trie;":                     '\U0000225C',
+-	"triminus;":                 '\U00002A3A',
+-	"triplus;":                  '\U00002A39',
+-	"trisb;":                    '\U000029CD',
+-	"tritime;":                  '\U00002A3B',
+-	"trpezium;":                 '\U000023E2',
+-	"tscr;":                     '\U0001D4C9',
+-	"tscy;":                     '\U00000446',
+-	"tshcy;":                    '\U0000045B',
+-	"tstrok;":                   '\U00000167',
+-	"twixt;":                    '\U0000226C',
+-	"twoheadleftarrow;":         '\U0000219E',
+-	"twoheadrightarrow;":        '\U000021A0',
+-	"uArr;":                     '\U000021D1',
+-	"uHar;":                     '\U00002963',
+-	"uacute;":                   '\U000000FA',
+-	"uarr;":                     '\U00002191',
+-	"ubrcy;":                    '\U0000045E',
+-	"ubreve;":                   '\U0000016D',
+-	"ucirc;":                    '\U000000FB',
+-	"ucy;":                      '\U00000443',
+-	"udarr;":                    '\U000021C5',
+-	"udblac;":                   '\U00000171',
+-	"udhar;":                    '\U0000296E',
+-	"ufisht;":                   '\U0000297E',
+-	"ufr;":                      '\U0001D532',
+-	"ugrave;":                   '\U000000F9',
+-	"uharl;":                    '\U000021BF',
+-	"uharr;":                    '\U000021BE',
+-	"uhblk;":                    '\U00002580',
+-	"ulcorn;":                   '\U0000231C',
+-	"ulcorner;":                 '\U0000231C',
+-	"ulcrop;":                   '\U0000230F',
+-	"ultri;":                    '\U000025F8',
+-	"umacr;":                    '\U0000016B',
+-	"uml;":                      '\U000000A8',
+-	"uogon;":                    '\U00000173',
+-	"uopf;":                     '\U0001D566',
+-	"uparrow;":                  '\U00002191',
+-	"updownarrow;":              '\U00002195',
+-	"upharpoonleft;":            '\U000021BF',
+-	"upharpoonright;":           '\U000021BE',
+-	"uplus;":                    '\U0000228E',
+-	"upsi;":                     '\U000003C5',
+-	"upsih;":                    '\U000003D2',
+-	"upsilon;":                  '\U000003C5',
+-	"upuparrows;":               '\U000021C8',
+-	"urcorn;":                   '\U0000231D',
+-	"urcorner;":                 '\U0000231D',
+-	"urcrop;":                   '\U0000230E',
+-	"uring;":                    '\U0000016F',
+-	"urtri;":                    '\U000025F9',
+-	"uscr;":                     '\U0001D4CA',
+-	"utdot;":                    '\U000022F0',
+-	"utilde;":                   '\U00000169',
+-	"utri;":                     '\U000025B5',
+-	"utrif;":                    '\U000025B4',
+-	"uuarr;":                    '\U000021C8',
+-	"uuml;":                     '\U000000FC',
+-	"uwangle;":                  '\U000029A7',
+-	"vArr;":                     '\U000021D5',
+-	"vBar;":                     '\U00002AE8',
+-	"vBarv;":                    '\U00002AE9',
+-	"vDash;":                    '\U000022A8',
+-	"vangrt;":                   '\U0000299C',
+-	"varepsilon;":               '\U000003F5',
+-	"varkappa;":                 '\U000003F0',
+-	"varnothing;":               '\U00002205',
+-	"varphi;":                   '\U000003D5',
+-	"varpi;":                    '\U000003D6',
+-	"varpropto;":                '\U0000221D',
+-	"varr;":                     '\U00002195',
+-	"varrho;":                   '\U000003F1',
+-	"varsigma;":                 '\U000003C2',
+-	"vartheta;":                 '\U000003D1',
+-	"vartriangleleft;":          '\U000022B2',
+-	"vartriangleright;":         '\U000022B3',
+-	"vcy;":                      '\U00000432',
+-	"vdash;":                    '\U000022A2',
+-	"vee;":                      '\U00002228',
+-	"veebar;":                   '\U000022BB',
+-	"veeeq;":                    '\U0000225A',
+-	"vellip;":                   '\U000022EE',
+-	"verbar;":                   '\U0000007C',
+-	"vert;":                     '\U0000007C',
+-	"vfr;":                      '\U0001D533',
+-	"vltri;":                    '\U000022B2',
+-	"vopf;":                     '\U0001D567',
+-	"vprop;":                    '\U0000221D',
+-	"vrtri;":                    '\U000022B3',
+-	"vscr;":                     '\U0001D4CB',
+-	"vzigzag;":                  '\U0000299A',
+-	"wcirc;":                    '\U00000175',
+-	"wedbar;":                   '\U00002A5F',
+-	"wedge;":                    '\U00002227',
+-	"wedgeq;":                   '\U00002259',
+-	"weierp;":                   '\U00002118',
+-	"wfr;":                      '\U0001D534',
+-	"wopf;":                     '\U0001D568',
+-	"wp;":                       '\U00002118',
+-	"wr;":                       '\U00002240',
+-	"wreath;":                   '\U00002240',
+-	"wscr;":                     '\U0001D4CC',
+-	"xcap;":                     '\U000022C2',
+-	"xcirc;":                    '\U000025EF',
+-	"xcup;":                     '\U000022C3',
+-	"xdtri;":                    '\U000025BD',
+-	"xfr;":                      '\U0001D535',
+-	"xhArr;":                    '\U000027FA',
+-	"xharr;":                    '\U000027F7',
+-	"xi;":                       '\U000003BE',
+-	"xlArr;":                    '\U000027F8',
+-	"xlarr;":                    '\U000027F5',
+-	"xmap;":                     '\U000027FC',
+-	"xnis;":                     '\U000022FB',
+-	"xodot;":                    '\U00002A00',
+-	"xopf;":                     '\U0001D569',
+-	"xoplus;":                   '\U00002A01',
+-	"xotime;":                   '\U00002A02',
+-	"xrArr;":                    '\U000027F9',
+-	"xrarr;":                    '\U000027F6',
+-	"xscr;":                     '\U0001D4CD',
+-	"xsqcup;":                   '\U00002A06',
+-	"xuplus;":                   '\U00002A04',
+-	"xutri;":                    '\U000025B3',
+-	"xvee;":                     '\U000022C1',
+-	"xwedge;":                   '\U000022C0',
+-	"yacute;":                   '\U000000FD',
+-	"yacy;":                     '\U0000044F',
+-	"ycirc;":                    '\U00000177',
+-	"ycy;":                      '\U0000044B',
+-	"yen;":                      '\U000000A5',
+-	"yfr;":                      '\U0001D536',
+-	"yicy;":                     '\U00000457',
+-	"yopf;":                     '\U0001D56A',
+-	"yscr;":                     '\U0001D4CE',
+-	"yucy;":                     '\U0000044E',
+-	"yuml;":                     '\U000000FF',
+-	"zacute;":                   '\U0000017A',
+-	"zcaron;":                   '\U0000017E',
+-	"zcy;":                      '\U00000437',
+-	"zdot;":                     '\U0000017C',
+-	"zeetrf;":                   '\U00002128',
+-	"zeta;":                     '\U000003B6',
+-	"zfr;":                      '\U0001D537',
+-	"zhcy;":                     '\U00000436',
+-	"zigrarr;":                  '\U000021DD',
+-	"zopf;":                     '\U0001D56B',
+-	"zscr;":                     '\U0001D4CF',
+-	"zwj;":                      '\U0000200D',
+-	"zwnj;":                     '\U0000200C',
+-	"AElig":                     '\U000000C6',
+-	"AMP":                       '\U00000026',
+-	"Aacute":                    '\U000000C1',
+-	"Acirc":                     '\U000000C2',
+-	"Agrave":                    '\U000000C0',
+-	"Aring":                     '\U000000C5',
+-	"Atilde":                    '\U000000C3',
+-	"Auml":                      '\U000000C4',
+-	"COPY":                      '\U000000A9',
+-	"Ccedil":                    '\U000000C7',
+-	"ETH":                       '\U000000D0',
+-	"Eacute":                    '\U000000C9',
+-	"Ecirc":                     '\U000000CA',
+-	"Egrave":                    '\U000000C8',
+-	"Euml":                      '\U000000CB',
+-	"GT":                        '\U0000003E',
+-	"Iacute":                    '\U000000CD',
+-	"Icirc":                     '\U000000CE',
+-	"Igrave":                    '\U000000CC',
+-	"Iuml":                      '\U000000CF',
+-	"LT":                        '\U0000003C',
+-	"Ntilde":                    '\U000000D1',
+-	"Oacute":                    '\U000000D3',
+-	"Ocirc":                     '\U000000D4',
+-	"Ograve":                    '\U000000D2',
+-	"Oslash":                    '\U000000D8',
+-	"Otilde":                    '\U000000D5',
+-	"Ouml":                      '\U000000D6',
+-	"QUOT":                      '\U00000022',
+-	"REG":                       '\U000000AE',
+-	"THORN":                     '\U000000DE',
+-	"Uacute":                    '\U000000DA',
+-	"Ucirc":                     '\U000000DB',
+-	"Ugrave":                    '\U000000D9',
+-	"Uuml":                      '\U000000DC',
+-	"Yacute":                    '\U000000DD',
+-	"aacute":                    '\U000000E1',
+-	"acirc":                     '\U000000E2',
+-	"acute":                     '\U000000B4',
+-	"aelig":                     '\U000000E6',
+-	"agrave":                    '\U000000E0',
+-	"amp":                       '\U00000026',
+-	"aring":                     '\U000000E5',
+-	"atilde":                    '\U000000E3',
+-	"auml":                      '\U000000E4',
+-	"brvbar":                    '\U000000A6',
+-	"ccedil":                    '\U000000E7',
+-	"cedil":                     '\U000000B8',
+-	"cent":                      '\U000000A2',
+-	"copy":                      '\U000000A9',
+-	"curren":                    '\U000000A4',
+-	"deg":                       '\U000000B0',
+-	"divide":                    '\U000000F7',
+-	"eacute":                    '\U000000E9',
+-	"ecirc":                     '\U000000EA',
+-	"egrave":                    '\U000000E8',
+-	"eth":                       '\U000000F0',
+-	"euml":                      '\U000000EB',
+-	"frac12":                    '\U000000BD',
+-	"frac14":                    '\U000000BC',
+-	"frac34":                    '\U000000BE',
+-	"gt":                        '\U0000003E',
+-	"iacute":                    '\U000000ED',
+-	"icirc":                     '\U000000EE',
+-	"iexcl":                     '\U000000A1',
+-	"igrave":                    '\U000000EC',
+-	"iquest":                    '\U000000BF',
+-	"iuml":                      '\U000000EF',
+-	"laquo":                     '\U000000AB',
+-	"lt":                        '\U0000003C',
+-	"macr":                      '\U000000AF',
+-	"micro":                     '\U000000B5',
+-	"middot":                    '\U000000B7',
+-	"nbsp":                      '\U000000A0',
+-	"not":                       '\U000000AC',
+-	"ntilde":                    '\U000000F1',
+-	"oacute":                    '\U000000F3',
+-	"ocirc":                     '\U000000F4',
+-	"ograve":                    '\U000000F2',
+-	"ordf":                      '\U000000AA',
+-	"ordm":                      '\U000000BA',
+-	"oslash":                    '\U000000F8',
+-	"otilde":                    '\U000000F5',
+-	"ouml":                      '\U000000F6',
+-	"para":                      '\U000000B6',
+-	"plusmn":                    '\U000000B1',
+-	"pound":                     '\U000000A3',
+-	"quot":                      '\U00000022',
+-	"raquo":                     '\U000000BB',
+-	"reg":                       '\U000000AE',
+-	"sect":                      '\U000000A7',
+-	"shy":                       '\U000000AD',
+-	"sup1":                      '\U000000B9',
+-	"sup2":                      '\U000000B2',
+-	"sup3":                      '\U000000B3',
+-	"szlig":                     '\U000000DF',
+-	"thorn":                     '\U000000FE',
+-	"times":                     '\U000000D7',
+-	"uacute":                    '\U000000FA',
+-	"ucirc":                     '\U000000FB',
+-	"ugrave":                    '\U000000F9',
+-	"uml":                       '\U000000A8',
+-	"uuml":                      '\U000000FC',
+-	"yacute":                    '\U000000FD',
+-	"yen":                       '\U000000A5',
+-	"yuml":                      '\U000000FF',
+-}
+-
+-// HTML entities that are two unicode codepoints.
+-var entity2 = map[string][2]rune{
+-	// TODO(nigeltao): Handle replacements that are wider than their names.
+-	// "nLt;":                     {'\u226A', '\u20D2'},
+-	// "nGt;":                     {'\u226B', '\u20D2'},
+-	"NotEqualTilde;":           {'\u2242', '\u0338'},
+-	"NotGreaterFullEqual;":     {'\u2267', '\u0338'},
+-	"NotGreaterGreater;":       {'\u226B', '\u0338'},
+-	"NotGreaterSlantEqual;":    {'\u2A7E', '\u0338'},
+-	"NotHumpDownHump;":         {'\u224E', '\u0338'},
+-	"NotHumpEqual;":            {'\u224F', '\u0338'},
+-	"NotLeftTriangleBar;":      {'\u29CF', '\u0338'},
+-	"NotLessLess;":             {'\u226A', '\u0338'},
+-	"NotLessSlantEqual;":       {'\u2A7D', '\u0338'},
+-	"NotNestedGreaterGreater;": {'\u2AA2', '\u0338'},
+-	"NotNestedLessLess;":       {'\u2AA1', '\u0338'},
+-	"NotPrecedesEqual;":        {'\u2AAF', '\u0338'},
+-	"NotRightTriangleBar;":     {'\u29D0', '\u0338'},
+-	"NotSquareSubset;":         {'\u228F', '\u0338'},
+-	"NotSquareSuperset;":       {'\u2290', '\u0338'},
+-	"NotSubset;":               {'\u2282', '\u20D2'},
+-	"NotSucceedsEqual;":        {'\u2AB0', '\u0338'},
+-	"NotSucceedsTilde;":        {'\u227F', '\u0338'},
+-	"NotSuperset;":             {'\u2283', '\u20D2'},
+-	"ThickSpace;":              {'\u205F', '\u200A'},
+-	"acE;":                     {'\u223E', '\u0333'},
+-	"bne;":                     {'\u003D', '\u20E5'},
+-	"bnequiv;":                 {'\u2261', '\u20E5'},
+-	"caps;":                    {'\u2229', '\uFE00'},
+-	"cups;":                    {'\u222A', '\uFE00'},
+-	"fjlig;":                   {'\u0066', '\u006A'},
+-	"gesl;":                    {'\u22DB', '\uFE00'},
+-	"gvertneqq;":               {'\u2269', '\uFE00'},
+-	"gvnE;":                    {'\u2269', '\uFE00'},
+-	"lates;":                   {'\u2AAD', '\uFE00'},
+-	"lesg;":                    {'\u22DA', '\uFE00'},
+-	"lvertneqq;":               {'\u2268', '\uFE00'},
+-	"lvnE;":                    {'\u2268', '\uFE00'},
+-	"nGg;":                     {'\u22D9', '\u0338'},
+-	"nGtv;":                    {'\u226B', '\u0338'},
+-	"nLl;":                     {'\u22D8', '\u0338'},
+-	"nLtv;":                    {'\u226A', '\u0338'},
+-	"nang;":                    {'\u2220', '\u20D2'},
+-	"napE;":                    {'\u2A70', '\u0338'},
+-	"napid;":                   {'\u224B', '\u0338'},
+-	"nbump;":                   {'\u224E', '\u0338'},
+-	"nbumpe;":                  {'\u224F', '\u0338'},
+-	"ncongdot;":                {'\u2A6D', '\u0338'},
+-	"nedot;":                   {'\u2250', '\u0338'},
+-	"nesim;":                   {'\u2242', '\u0338'},
+-	"ngE;":                     {'\u2267', '\u0338'},
+-	"ngeqq;":                   {'\u2267', '\u0338'},
+-	"ngeqslant;":               {'\u2A7E', '\u0338'},
+-	"nges;":                    {'\u2A7E', '\u0338'},
+-	"nlE;":                     {'\u2266', '\u0338'},
+-	"nleqq;":                   {'\u2266', '\u0338'},
+-	"nleqslant;":               {'\u2A7D', '\u0338'},
+-	"nles;":                    {'\u2A7D', '\u0338'},
+-	"notinE;":                  {'\u22F9', '\u0338'},
+-	"notindot;":                {'\u22F5', '\u0338'},
+-	"nparsl;":                  {'\u2AFD', '\u20E5'},
+-	"npart;":                   {'\u2202', '\u0338'},
+-	"npre;":                    {'\u2AAF', '\u0338'},
+-	"npreceq;":                 {'\u2AAF', '\u0338'},
+-	"nrarrc;":                  {'\u2933', '\u0338'},
+-	"nrarrw;":                  {'\u219D', '\u0338'},
+-	"nsce;":                    {'\u2AB0', '\u0338'},
+-	"nsubE;":                   {'\u2AC5', '\u0338'},
+-	"nsubset;":                 {'\u2282', '\u20D2'},
+-	"nsubseteqq;":              {'\u2AC5', '\u0338'},
+-	"nsucceq;":                 {'\u2AB0', '\u0338'},
+-	"nsupE;":                   {'\u2AC6', '\u0338'},
+-	"nsupset;":                 {'\u2283', '\u20D2'},
+-	"nsupseteqq;":              {'\u2AC6', '\u0338'},
+-	"nvap;":                    {'\u224D', '\u20D2'},
+-	"nvge;":                    {'\u2265', '\u20D2'},
+-	"nvgt;":                    {'\u003E', '\u20D2'},
+-	"nvle;":                    {'\u2264', '\u20D2'},
+-	"nvlt;":                    {'\u003C', '\u20D2'},
+-	"nvltrie;":                 {'\u22B4', '\u20D2'},
+-	"nvrtrie;":                 {'\u22B5', '\u20D2'},
+-	"nvsim;":                   {'\u223C', '\u20D2'},
+-	"race;":                    {'\u223D', '\u0331'},
+-	"smtes;":                   {'\u2AAC', '\uFE00'},
+-	"sqcaps;":                  {'\u2293', '\uFE00'},
+-	"sqcups;":                  {'\u2294', '\uFE00'},
+-	"varsubsetneq;":            {'\u228A', '\uFE00'},
+-	"varsubsetneqq;":           {'\u2ACB', '\uFE00'},
+-	"varsupsetneq;":            {'\u228B', '\uFE00'},
+-	"varsupsetneqq;":           {'\u2ACC', '\uFE00'},
+-	"vnsub;":                   {'\u2282', '\u20D2'},
+-	"vnsup;":                   {'\u2283', '\u20D2'},
+-	"vsubnE;":                  {'\u2ACB', '\uFE00'},
+-	"vsubne;":                  {'\u228A', '\uFE00'},
+-	"vsupnE;":                  {'\u2ACC', '\uFE00'},
+-	"vsupne;":                  {'\u228B', '\uFE00'},
+-}
+diff --git a/Godeps/_workspace/src/code.google.com/p/go.net/html/entity_test.go b/Godeps/_workspace/src/code.google.com/p/go.net/html/entity_test.go
+deleted file mode 100644
+index b53f866..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/go.net/html/entity_test.go
++++ /dev/null
+@@ -1,29 +0,0 @@
+-// Copyright 2010 The Go Authors. All rights reserved.
+-// Use of this source code is governed by a BSD-style
+-// license that can be found in the LICENSE file.
+-
+-package html
+-
+-import (
+-	"testing"
+-	"unicode/utf8"
+-)
+-
+-func TestEntityLength(t *testing.T) {
+-	// We verify that the length of UTF-8 encoding of each value is <= 1 + len(key).
+-	// The +1 comes from the leading "&". This property implies that the length of
+-	// unescaped text is <= the length of escaped text.
+-	for k, v := range entity {
+-		if 1+len(k) < utf8.RuneLen(v) {
+-			t.Error("escaped entity &" + k + " is shorter than its UTF-8 encoding " + string(v))
+-		}
+-		if len(k) > longestEntityWithoutSemicolon && k[len(k)-1] != ';' {
+-			t.Errorf("entity name %s is %d characters, but longestEntityWithoutSemicolon=%d", k, len(k), longestEntityWithoutSemicolon)
+-		}
+-	}
+-	for k, v := range entity2 {
+-		if 1+len(k) < utf8.RuneLen(v[0])+utf8.RuneLen(v[1]) {
+-			t.Error("escaped entity &" + k + " is shorter than its UTF-8 encoding " + string(v[0]) + string(v[1]))
+-		}
+-	}
+-}
+diff --git a/Godeps/_workspace/src/code.google.com/p/go.net/html/escape.go b/Godeps/_workspace/src/code.google.com/p/go.net/html/escape.go
+deleted file mode 100644
+index 75bddff..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/go.net/html/escape.go
++++ /dev/null
+@@ -1,258 +0,0 @@
+-// Copyright 2010 The Go Authors. All rights reserved.
+-// Use of this source code is governed by a BSD-style
+-// license that can be found in the LICENSE file.
+-
+-package html
+-
+-import (
+-	"bytes"
+-	"strings"
+-	"unicode/utf8"
+-)
+-
+-// These replacements permit compatibility with old numeric entities that
+-// assumed Windows-1252 encoding.
+-// http://www.whatwg.org/specs/web-apps/current-work/multipage/tokenization.html#consume-a-character-reference
+-var replacementTable = [...]rune{
+-	'\u20AC', // First entry is what 0x80 should be replaced with.
+-	'\u0081',
+-	'\u201A',
+-	'\u0192',
+-	'\u201E',
+-	'\u2026',
+-	'\u2020',
+-	'\u2021',
+-	'\u02C6',
+-	'\u2030',
+-	'\u0160',
+-	'\u2039',
+-	'\u0152',
+-	'\u008D',
+-	'\u017D',
+-	'\u008F',
+-	'\u0090',
+-	'\u2018',
+-	'\u2019',
+-	'\u201C',
+-	'\u201D',
+-	'\u2022',
+-	'\u2013',
+-	'\u2014',
+-	'\u02DC',
+-	'\u2122',
+-	'\u0161',
+-	'\u203A',
+-	'\u0153',
+-	'\u009D',
+-	'\u017E',
+-	'\u0178', // Last entry is 0x9F.
+-	// 0x00->'\uFFFD' is handled programmatically.
+-	// 0x0D->'\u000D' is a no-op.
+-}
+-
+-// unescapeEntity reads an entity like "&lt;" from b[src:] and writes the
+-// corresponding "<" to b[dst:], returning the incremented dst and src cursors.
+-// Precondition: b[src] == '&' && dst <= src.
+-// attribute should be true if parsing an attribute value.
+-func unescapeEntity(b []byte, dst, src int, attribute bool) (dst1, src1 int) {
+-	// http://www.whatwg.org/specs/web-apps/current-work/multipage/tokenization.html#consume-a-character-reference
+-
+-	// i starts at 1 because we already know that s[0] == '&'.
+-	i, s := 1, b[src:]
+-
+-	if len(s) <= 1 {
+-		b[dst] = b[src]
+-		return dst + 1, src + 1
+-	}
+-
+-	if s[i] == '#' {
+-		if len(s) <= 3 { // We need to have at least "&#.".
+-			b[dst] = b[src]
+-			return dst + 1, src + 1
+-		}
+-		i++
+-		c := s[i]
+-		hex := false
+-		if c == 'x' || c == 'X' {
+-			hex = true
+-			i++
+-		}
+-
+-		x := '\x00'
+-		for i < len(s) {
+-			c = s[i]
+-			i++
+-			if hex {
+-				if '0' <= c && c <= '9' {
+-					x = 16*x + rune(c) - '0'
+-					continue
+-				} else if 'a' <= c && c <= 'f' {
+-					x = 16*x + rune(c) - 'a' + 10
+-					continue
+-				} else if 'A' <= c && c <= 'F' {
+-					x = 16*x + rune(c) - 'A' + 10
+-					continue
+-				}
+-			} else if '0' <= c && c <= '9' {
+-				x = 10*x + rune(c) - '0'
+-				continue
+-			}
+-			if c != ';' {
+-				i--
+-			}
+-			break
+-		}
+-
+-		if i <= 3 { // No characters matched.
+-			b[dst] = b[src]
+-			return dst + 1, src + 1
+-		}
+-
+-		if 0x80 <= x && x <= 0x9F {
+-			// Replace characters from Windows-1252 with UTF-8 equivalents.
+-			x = replacementTable[x-0x80]
+-		} else if x == 0 || (0xD800 <= x && x <= 0xDFFF) || x > 0x10FFFF {
+-			// Replace invalid characters with the replacement character.
+-			x = '\uFFFD'
+-		}
+-
+-		return dst + utf8.EncodeRune(b[dst:], x), src + i
+-	}
+-
+-	// Consume the maximum number of characters possible, with the
+-	// consumed characters matching one of the named references.
+-
+-	for i < len(s) {
+-		c := s[i]
+-		i++
+-		// Lower-cased characters are more common in entities, so we check for them first.
+-		if 'a' <= c && c <= 'z' || 'A' <= c && c <= 'Z' || '0' <= c && c <= '9' {
+-			continue
+-		}
+-		if c != ';' {
+-			i--
+-		}
+-		break
+-	}
+-
+-	entityName := string(s[1:i])
+-	if entityName == "" {
+-		// No-op.
+-	} else if attribute && entityName[len(entityName)-1] != ';' && len(s) > i && s[i] == '=' {
+-		// No-op.
+-	} else if x := entity[entityName]; x != 0 {
+-		return dst + utf8.EncodeRune(b[dst:], x), src + i
+-	} else if x := entity2[entityName]; x[0] != 0 {
+-		dst1 := dst + utf8.EncodeRune(b[dst:], x[0])
+-		return dst1 + utf8.EncodeRune(b[dst1:], x[1]), src + i
+-	} else if !attribute {
+-		maxLen := len(entityName) - 1
+-		if maxLen > longestEntityWithoutSemicolon {
+-			maxLen = longestEntityWithoutSemicolon
+-		}
+-		for j := maxLen; j > 1; j-- {
+-			if x := entity[entityName[:j]]; x != 0 {
+-				return dst + utf8.EncodeRune(b[dst:], x), src + j + 1
+-			}
+-		}
+-	}
+-
+-	dst1, src1 = dst+i, src+i
+-	copy(b[dst:dst1], b[src:src1])
+-	return dst1, src1
+-}
+-
+-// unescape unescapes b's entities in-place, so that "a&lt;b" becomes "a<b".
+-// attribute should be true if parsing an attribute value.
+-func unescape(b []byte, attribute bool) []byte {
+-	for i, c := range b {
+-		if c == '&' {
+-			dst, src := unescapeEntity(b, i, i, attribute)
+-			for src < len(b) {
+-				c := b[src]
+-				if c == '&' {
+-					dst, src = unescapeEntity(b, dst, src, attribute)
+-				} else {
+-					b[dst] = c
+-					dst, src = dst+1, src+1
+-				}
+-			}
+-			return b[0:dst]
+-		}
+-	}
+-	return b
+-}
+-
+-// lower lower-cases the A-Z bytes in b in-place, so that "aBc" becomes "abc".
+-func lower(b []byte) []byte {
+-	for i, c := range b {
+-		if 'A' <= c && c <= 'Z' {
+-			b[i] = c + 'a' - 'A'
+-		}
+-	}
+-	return b
+-}
+-
+-const escapedChars = "&'<>\"\r"
+-
+-func escape(w writer, s string) error {
+-	i := strings.IndexAny(s, escapedChars)
+-	for i != -1 {
+-		if _, err := w.WriteString(s[:i]); err != nil {
+-			return err
+-		}
+-		var esc string
+-		switch s[i] {
+-		case '&':
+-			esc = "&amp;"
+-		case '\'':
+-			// "&#39;" is shorter than "&apos;" and apos was not in HTML until HTML5.
+-			esc = "&#39;"
+-		case '<':
+-			esc = "&lt;"
+-		case '>':
+-			esc = "&gt;"
+-		case '"':
+-			// "&#34;" is shorter than "&quot;".
+-			esc = "&#34;"
+-		case '\r':
+-			esc = "&#13;"
+-		default:
+-			panic("unrecognized escape character")
+-		}
+-		s = s[i+1:]
+-		if _, err := w.WriteString(esc); err != nil {
+-			return err
+-		}
+-		i = strings.IndexAny(s, escapedChars)
+-	}
+-	_, err := w.WriteString(s)
+-	return err
+-}
+-
+-// EscapeString escapes special characters like "<" to become "&lt;". It
+-// escapes only five such characters: <, >, &, ' and ".
+-// UnescapeString(EscapeString(s)) == s always holds, but the converse isn't
+-// always true.
+-func EscapeString(s string) string {
+-	if strings.IndexAny(s, escapedChars) == -1 {
+-		return s
+-	}
+-	var buf bytes.Buffer
+-	escape(&buf, s)
+-	return buf.String()
+-}
+-
+-// UnescapeString unescapes entities like "&lt;" to become "<". It unescapes a
+-// larger range of entities than EscapeString escapes. For example, "&aacute;"
+-// unescapes to "á", as does "&#225;" and "&xE1;".
+-// UnescapeString(EscapeString(s)) == s always holds, but the converse isn't
+-// always true.
+-func UnescapeString(s string) string {
+-	for _, c := range s {
+-		if c == '&' {
+-			return string(unescape([]byte(s), false))
+-		}
+-	}
+-	return s
+-}
+diff --git a/Godeps/_workspace/src/code.google.com/p/go.net/html/escape_test.go b/Godeps/_workspace/src/code.google.com/p/go.net/html/escape_test.go
+deleted file mode 100644
+index b405d4b..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/go.net/html/escape_test.go
++++ /dev/null
+@@ -1,97 +0,0 @@
+-// Copyright 2013 The Go Authors. All rights reserved.
+-// Use of this source code is governed by a BSD-style
+-// license that can be found in the LICENSE file.
+-
+-package html
+-
+-import "testing"
+-
+-type unescapeTest struct {
+-	// A short description of the test case.
+-	desc string
+-	// The HTML text.
+-	html string
+-	// The unescaped text.
+-	unescaped string
+-}
+-
+-var unescapeTests = []unescapeTest{
+-	// Handle no entities.
+-	{
+-		"copy",
+-		"A\ttext\nstring",
+-		"A\ttext\nstring",
+-	},
+-	// Handle simple named entities.
+-	{
+-		"simple",
+-		"&amp; &gt; &lt;",
+-		"& > <",
+-	},
+-	// Handle hitting the end of the string.
+-	{
+-		"stringEnd",
+-		"&amp &amp",
+-		"& &",
+-	},
+-	// Handle entities with two codepoints.
+-	{
+-		"multiCodepoint",
+-		"text &gesl; blah",
+-		"text \u22db\ufe00 blah",
+-	},
+-	// Handle decimal numeric entities.
+-	{
+-		"decimalEntity",
+-		"Delta = &#916; ",
+-		"Delta = Δ ",
+-	},
+-	// Handle hexadecimal numeric entities.
+-	{
+-		"hexadecimalEntity",
+-		"Lambda = &#x3bb; = &#X3Bb ",
+-		"Lambda = λ = λ ",
+-	},
+-	// Handle numeric early termination.
+-	{
+-		"numericEnds",
+-		"&# &#x &#128;43 &copy = &#169f = &#xa9",
+-		"&# &#x €43 © = ©f = ©",
+-	},
+-	// Handle numeric ISO-8859-1 entity replacements.
+-	{
+-		"numericReplacements",
+-		"Footnote&#x87;",
+-		"Footnote‡",
+-	},
+-}
+-
+-func TestUnescape(t *testing.T) {
+-	for _, tt := range unescapeTests {
+-		unescaped := UnescapeString(tt.html)
+-		if unescaped != tt.unescaped {
+-			t.Errorf("TestUnescape %s: want %q, got %q", tt.desc, tt.unescaped, unescaped)
+-		}
+-	}
+-}
+-
+-func TestUnescapeEscape(t *testing.T) {
+-	ss := []string{
+-		``,
+-		`abc def`,
+-		`a & b`,
+-		`a&amp;b`,
+-		`a &amp b`,
+-		`&quot;`,
+-		`"`,
+-		`"<&>"`,
+-		`&quot;&lt;&amp;&gt;&quot;`,
+-		`3&5==1 && 0<1, "0&lt;1", a+acute=&aacute;`,
+-		`The special characters are: <, >, &, ' and "`,
+-	}
+-	for _, s := range ss {
+-		if got := UnescapeString(EscapeString(s)); got != s {
+-			t.Errorf("got %q want %q", got, s)
+-		}
+-	}
+-}
+diff --git a/Godeps/_workspace/src/code.google.com/p/go.net/html/example_test.go b/Godeps/_workspace/src/code.google.com/p/go.net/html/example_test.go
+deleted file mode 100644
+index 47341f0..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/go.net/html/example_test.go
++++ /dev/null
+@@ -1,40 +0,0 @@
+-// Copyright 2012 The Go Authors. All rights reserved.
+-// Use of this source code is governed by a BSD-style
+-// license that can be found in the LICENSE file.
+-
+-// This example demonstrates parsing HTML data and walking the resulting tree.
+-package html_test
+-
+-import (
+-	"fmt"
+-	"log"
+-	"strings"
+-
+-	"code.google.com/p/go.net/html"
+-)
+-
+-func ExampleParse() {
+-	s := `<p>Links:</p><ul><li><a href="foo">Foo</a><li><a href="/bar/baz">BarBaz</a></ul>`
+-	doc, err := html.Parse(strings.NewReader(s))
+-	if err != nil {
+-		log.Fatal(err)
+-	}
+-	var f func(*html.Node)
+-	f = func(n *html.Node) {
+-		if n.Type == html.ElementNode && n.Data == "a" {
+-			for _, a := range n.Attr {
+-				if a.Key == "href" {
+-					fmt.Println(a.Val)
+-					break
+-				}
+-			}
+-		}
+-		for c := n.FirstChild; c != nil; c = c.NextSibling {
+-			f(c)
+-		}
+-	}
+-	f(doc)
+-	// Output:
+-	// foo
+-	// /bar/baz
+-}
+diff --git a/Godeps/_workspace/src/code.google.com/p/go.net/html/foreign.go b/Godeps/_workspace/src/code.google.com/p/go.net/html/foreign.go
+deleted file mode 100644
+index d3b3844..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/go.net/html/foreign.go
++++ /dev/null
+@@ -1,226 +0,0 @@
+-// Copyright 2011 The Go Authors. All rights reserved.
+-// Use of this source code is governed by a BSD-style
+-// license that can be found in the LICENSE file.
+-
+-package html
+-
+-import (
+-	"strings"
+-)
+-
+-func adjustAttributeNames(aa []Attribute, nameMap map[string]string) {
+-	for i := range aa {
+-		if newName, ok := nameMap[aa[i].Key]; ok {
+-			aa[i].Key = newName
+-		}
+-	}
+-}
+-
+-func adjustForeignAttributes(aa []Attribute) {
+-	for i, a := range aa {
+-		if a.Key == "" || a.Key[0] != 'x' {
+-			continue
+-		}
+-		switch a.Key {
+-		case "xlink:actuate", "xlink:arcrole", "xlink:href", "xlink:role", "xlink:show",
+-			"xlink:title", "xlink:type", "xml:base", "xml:lang", "xml:space", "xmlns:xlink":
+-			j := strings.Index(a.Key, ":")
+-			aa[i].Namespace = a.Key[:j]
+-			aa[i].Key = a.Key[j+1:]
+-		}
+-	}
+-}
+-
+-func htmlIntegrationPoint(n *Node) bool {
+-	if n.Type != ElementNode {
+-		return false
+-	}
+-	switch n.Namespace {
+-	case "math":
+-		if n.Data == "annotation-xml" {
+-			for _, a := range n.Attr {
+-				if a.Key == "encoding" {
+-					val := strings.ToLower(a.Val)
+-					if val == "text/html" || val == "application/xhtml+xml" {
+-						return true
+-					}
+-				}
+-			}
+-		}
+-	case "svg":
+-		switch n.Data {
+-		case "desc", "foreignObject", "title":
+-			return true
+-		}
+-	}
+-	return false
+-}
+-
+-func mathMLTextIntegrationPoint(n *Node) bool {
+-	if n.Namespace != "math" {
+-		return false
+-	}
+-	switch n.Data {
+-	case "mi", "mo", "mn", "ms", "mtext":
+-		return true
+-	}
+-	return false
+-}
+-
+-// Section 12.2.5.5.
+-var breakout = map[string]bool{
+-	"b":          true,
+-	"big":        true,
+-	"blockquote": true,
+-	"body":       true,
+-	"br":         true,
+-	"center":     true,
+-	"code":       true,
+-	"dd":         true,
+-	"div":        true,
+-	"dl":         true,
+-	"dt":         true,
+-	"em":         true,
+-	"embed":      true,
+-	"h1":         true,
+-	"h2":         true,
+-	"h3":         true,
+-	"h4":         true,
+-	"h5":         true,
+-	"h6":         true,
+-	"head":       true,
+-	"hr":         true,
+-	"i":          true,
+-	"img":        true,
+-	"li":         true,
+-	"listing":    true,
+-	"menu":       true,
+-	"meta":       true,
+-	"nobr":       true,
+-	"ol":         true,
+-	"p":          true,
+-	"pre":        true,
+-	"ruby":       true,
+-	"s":          true,
+-	"small":      true,
+-	"span":       true,
+-	"strong":     true,
+-	"strike":     true,
+-	"sub":        true,
+-	"sup":        true,
+-	"table":      true,
+-	"tt":         true,
+-	"u":          true,
+-	"ul":         true,
+-	"var":        true,
+-}
+-
+-// Section 12.2.5.5.
+-var svgTagNameAdjustments = map[string]string{
+-	"altglyph":            "altGlyph",
+-	"altglyphdef":         "altGlyphDef",
+-	"altglyphitem":        "altGlyphItem",
+-	"animatecolor":        "animateColor",
+-	"animatemotion":       "animateMotion",
+-	"animatetransform":    "animateTransform",
+-	"clippath":            "clipPath",
+-	"feblend":             "feBlend",
+-	"fecolormatrix":       "feColorMatrix",
+-	"fecomponenttransfer": "feComponentTransfer",
+-	"fecomposite":         "feComposite",
+-	"feconvolvematrix":    "feConvolveMatrix",
+-	"fediffuselighting":   "feDiffuseLighting",
+-	"fedisplacementmap":   "feDisplacementMap",
+-	"fedistantlight":      "feDistantLight",
+-	"feflood":             "feFlood",
+-	"fefunca":             "feFuncA",
+-	"fefuncb":             "feFuncB",
+-	"fefuncg":             "feFuncG",
+-	"fefuncr":             "feFuncR",
+-	"fegaussianblur":      "feGaussianBlur",
+-	"feimage":             "feImage",
+-	"femerge":             "feMerge",
+-	"femergenode":         "feMergeNode",
+-	"femorphology":        "feMorphology",
+-	"feoffset":            "feOffset",
+-	"fepointlight":        "fePointLight",
+-	"fespecularlighting":  "feSpecularLighting",
+-	"fespotlight":         "feSpotLight",
+-	"fetile":              "feTile",
+-	"feturbulence":        "feTurbulence",
+-	"foreignobject":       "foreignObject",
+-	"glyphref":            "glyphRef",
+-	"lineargradient":      "linearGradient",
+-	"radialgradient":      "radialGradient",
+-	"textpath":            "textPath",
+-}
+-
+-// Section 12.2.5.1
+-var mathMLAttributeAdjustments = map[string]string{
+-	"definitionurl": "definitionURL",
+-}
+-
+-var svgAttributeAdjustments = map[string]string{
+-	"attributename":             "attributeName",
+-	"attributetype":             "attributeType",
+-	"basefrequency":             "baseFrequency",
+-	"baseprofile":               "baseProfile",
+-	"calcmode":                  "calcMode",
+-	"clippathunits":             "clipPathUnits",
+-	"contentscripttype":         "contentScriptType",
+-	"contentstyletype":          "contentStyleType",
+-	"diffuseconstant":           "diffuseConstant",
+-	"edgemode":                  "edgeMode",
+-	"externalresourcesrequired": "externalResourcesRequired",
+-	"filterres":                 "filterRes",
+-	"filterunits":               "filterUnits",
+-	"glyphref":                  "glyphRef",
+-	"gradienttransform":         "gradientTransform",
+-	"gradientunits":             "gradientUnits",
+-	"kernelmatrix":              "kernelMatrix",
+-	"kernelunitlength":          "kernelUnitLength",
+-	"keypoints":                 "keyPoints",
+-	"keysplines":                "keySplines",
+-	"keytimes":                  "keyTimes",
+-	"lengthadjust":              "lengthAdjust",
+-	"limitingconeangle":         "limitingConeAngle",
+-	"markerheight":              "markerHeight",
+-	"markerunits":               "markerUnits",
+-	"markerwidth":               "markerWidth",
+-	"maskcontentunits":          "maskContentUnits",
+-	"maskunits":                 "maskUnits",
+-	"numoctaves":                "numOctaves",
+-	"pathlength":                "pathLength",
+-	"patterncontentunits":       "patternContentUnits",
+-	"patterntransform":          "patternTransform",
+-	"patternunits":              "patternUnits",
+-	"pointsatx":                 "pointsAtX",
+-	"pointsaty":                 "pointsAtY",
+-	"pointsatz":                 "pointsAtZ",
+-	"preservealpha":             "preserveAlpha",
+-	"preserveaspectratio":       "preserveAspectRatio",
+-	"primitiveunits":            "primitiveUnits",
+-	"refx":                      "refX",
+-	"refy":                      "refY",
+-	"repeatcount":               "repeatCount",
+-	"repeatdur":                 "repeatDur",
+-	"requiredextensions":        "requiredExtensions",
+-	"requiredfeatures":          "requiredFeatures",
+-	"specularconstant":          "specularConstant",
+-	"specularexponent":          "specularExponent",
+-	"spreadmethod":              "spreadMethod",
+-	"startoffset":               "startOffset",
+-	"stddeviation":              "stdDeviation",
+-	"stitchtiles":               "stitchTiles",
+-	"surfacescale":              "surfaceScale",
+-	"systemlanguage":            "systemLanguage",
+-	"tablevalues":               "tableValues",
+-	"targetx":                   "targetX",
+-	"targety":                   "targetY",
+-	"textlength":                "textLength",
+-	"viewbox":                   "viewBox",
+-	"viewtarget":                "viewTarget",
+-	"xchannelselector":          "xChannelSelector",
+-	"ychannelselector":          "yChannelSelector",
+-	"zoomandpan":                "zoomAndPan",
+-}
+diff --git a/Godeps/_workspace/src/code.google.com/p/go.net/html/node.go b/Godeps/_workspace/src/code.google.com/p/go.net/html/node.go
+deleted file mode 100644
+index e7b4e50..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/go.net/html/node.go
++++ /dev/null
+@@ -1,193 +0,0 @@
+-// Copyright 2011 The Go Authors. All rights reserved.
+-// Use of this source code is governed by a BSD-style
+-// license that can be found in the LICENSE file.
+-
+-package html
+-
+-import (
+-	"code.google.com/p/go.net/html/atom"
+-)
+-
+-// A NodeType is the type of a Node.
+-type NodeType uint32
+-
+-const (
+-	ErrorNode NodeType = iota
+-	TextNode
+-	DocumentNode
+-	ElementNode
+-	CommentNode
+-	DoctypeNode
+-	scopeMarkerNode
+-)
+-
+-// Section 12.2.3.3 says "scope markers are inserted when entering applet
+-// elements, buttons, object elements, marquees, table cells, and table
+-// captions, and are used to prevent formatting from 'leaking'".
+-var scopeMarker = Node{Type: scopeMarkerNode}
+-
+-// A Node consists of a NodeType and some Data (tag name for element nodes,
+-// content for text) and are part of a tree of Nodes. Element nodes may also
+-// have a Namespace and contain a slice of Attributes. Data is unescaped, so
+-// that it looks like "a<b" rather than "a&lt;b". For element nodes, DataAtom
+-// is the atom for Data, or zero if Data is not a known tag name.
+-//
+-// An empty Namespace implies a "http://www.w3.org/1999/xhtml" namespace.
+-// Similarly, "math" is short for "http://www.w3.org/1998/Math/MathML", and
+-// "svg" is short for "http://www.w3.org/2000/svg".
+-type Node struct {
+-	Parent, FirstChild, LastChild, PrevSibling, NextSibling *Node
+-
+-	Type      NodeType
+-	DataAtom  atom.Atom
+-	Data      string
+-	Namespace string
+-	Attr      []Attribute
+-}
+-
+-// InsertBefore inserts newChild as a child of n, immediately before oldChild
+-// in the sequence of n's children. oldChild may be nil, in which case newChild
+-// is appended to the end of n's children.
+-//
+-// It will panic if newChild already has a parent or siblings.
+-func (n *Node) InsertBefore(newChild, oldChild *Node) {
+-	if newChild.Parent != nil || newChild.PrevSibling != nil || newChild.NextSibling != nil {
+-		panic("html: InsertBefore called for an attached child Node")
+-	}
+-	var prev, next *Node
+-	if oldChild != nil {
+-		prev, next = oldChild.PrevSibling, oldChild
+-	} else {
+-		prev = n.LastChild
+-	}
+-	if prev != nil {
+-		prev.NextSibling = newChild
+-	} else {
+-		n.FirstChild = newChild
+-	}
+-	if next != nil {
+-		next.PrevSibling = newChild
+-	} else {
+-		n.LastChild = newChild
+-	}
+-	newChild.Parent = n
+-	newChild.PrevSibling = prev
+-	newChild.NextSibling = next
+-}
+-
+-// AppendChild adds a node c as a child of n.
+-//
+-// It will panic if c already has a parent or siblings.
+-func (n *Node) AppendChild(c *Node) {
+-	if c.Parent != nil || c.PrevSibling != nil || c.NextSibling != nil {
+-		panic("html: AppendChild called for an attached child Node")
+-	}
+-	last := n.LastChild
+-	if last != nil {
+-		last.NextSibling = c
+-	} else {
+-		n.FirstChild = c
+-	}
+-	n.LastChild = c
+-	c.Parent = n
+-	c.PrevSibling = last
+-}
+-
+-// RemoveChild removes a node c that is a child of n. Afterwards, c will have
+-// no parent and no siblings.
+-//
+-// It will panic if c's parent is not n.
+-func (n *Node) RemoveChild(c *Node) {
+-	if c.Parent != n {
+-		panic("html: RemoveChild called for a non-child Node")
+-	}
+-	if n.FirstChild == c {
+-		n.FirstChild = c.NextSibling
+-	}
+-	if c.NextSibling != nil {
+-		c.NextSibling.PrevSibling = c.PrevSibling
+-	}
+-	if n.LastChild == c {
+-		n.LastChild = c.PrevSibling
+-	}
+-	if c.PrevSibling != nil {
+-		c.PrevSibling.NextSibling = c.NextSibling
+-	}
+-	c.Parent = nil
+-	c.PrevSibling = nil
+-	c.NextSibling = nil
+-}
+-
+-// reparentChildren reparents all of src's child nodes to dst.
+-func reparentChildren(dst, src *Node) {
+-	for {
+-		child := src.FirstChild
+-		if child == nil {
+-			break
+-		}
+-		src.RemoveChild(child)
+-		dst.AppendChild(child)
+-	}
+-}
+-
+-// clone returns a new node with the same type, data and attributes.
+-// The clone has no parent, no siblings and no children.
+-func (n *Node) clone() *Node {
+-	m := &Node{
+-		Type:     n.Type,
+-		DataAtom: n.DataAtom,
+-		Data:     n.Data,
+-		Attr:     make([]Attribute, len(n.Attr)),
+-	}
+-	copy(m.Attr, n.Attr)
+-	return m
+-}
+-
+-// nodeStack is a stack of nodes.
+-type nodeStack []*Node
+-
+-// pop pops the stack. It will panic if s is empty.
+-func (s *nodeStack) pop() *Node {
+-	i := len(*s)
+-	n := (*s)[i-1]
+-	*s = (*s)[:i-1]
+-	return n
+-}
+-
+-// top returns the most recently pushed node, or nil if s is empty.
+-func (s *nodeStack) top() *Node {
+-	if i := len(*s); i > 0 {
+-		return (*s)[i-1]
+-	}
+-	return nil
+-}
+-
+-// index returns the index of the top-most occurrence of n in the stack, or -1
+-// if n is not present.
+-func (s *nodeStack) index(n *Node) int {
+-	for i := len(*s) - 1; i >= 0; i-- {
+-		if (*s)[i] == n {
+-			return i
+-		}
+-	}
+-	return -1
+-}
+-
+-// insert inserts a node at the given index.
+-func (s *nodeStack) insert(i int, n *Node) {
+-	(*s) = append(*s, nil)
+-	copy((*s)[i+1:], (*s)[i:])
+-	(*s)[i] = n
+-}
+-
+-// remove removes a node from the stack. It is a no-op if n is not present.
+-func (s *nodeStack) remove(n *Node) {
+-	i := s.index(n)
+-	if i == -1 {
+-		return
+-	}
+-	copy((*s)[i:], (*s)[i+1:])
+-	j := len(*s) - 1
+-	(*s)[j] = nil
+-	*s = (*s)[:j]
+-}
+diff --git a/Godeps/_workspace/src/code.google.com/p/go.net/html/node_test.go b/Godeps/_workspace/src/code.google.com/p/go.net/html/node_test.go
+deleted file mode 100644
+index 471102f..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/go.net/html/node_test.go
++++ /dev/null
+@@ -1,146 +0,0 @@
+-// Copyright 2010 The Go Authors. All rights reserved.
+-// Use of this source code is governed by a BSD-style
+-// license that can be found in the LICENSE file.
+-
+-package html
+-
+-import (
+-	"fmt"
+-)
+-
+-// checkTreeConsistency checks that a node and its descendants are all
+-// consistent in their parent/child/sibling relationships.
+-func checkTreeConsistency(n *Node) error {
+-	return checkTreeConsistency1(n, 0)
+-}
+-
+-func checkTreeConsistency1(n *Node, depth int) error {
+-	if depth == 1e4 {
+-		return fmt.Errorf("html: tree looks like it contains a cycle")
+-	}
+-	if err := checkNodeConsistency(n); err != nil {
+-		return err
+-	}
+-	for c := n.FirstChild; c != nil; c = c.NextSibling {
+-		if err := checkTreeConsistency1(c, depth+1); err != nil {
+-			return err
+-		}
+-	}
+-	return nil
+-}
+-
+-// checkNodeConsistency checks that a node's parent/child/sibling relationships
+-// are consistent.
+-func checkNodeConsistency(n *Node) error {
+-	if n == nil {
+-		return nil
+-	}
+-
+-	nParent := 0
+-	for p := n.Parent; p != nil; p = p.Parent {
+-		nParent++
+-		if nParent == 1e4 {
+-			return fmt.Errorf("html: parent list looks like an infinite loop")
+-		}
+-	}
+-
+-	nForward := 0
+-	for c := n.FirstChild; c != nil; c = c.NextSibling {
+-		nForward++
+-		if nForward == 1e6 {
+-			return fmt.Errorf("html: forward list of children looks like an infinite loop")
+-		}
+-		if c.Parent != n {
+-			return fmt.Errorf("html: inconsistent child/parent relationship")
+-		}
+-	}
+-
+-	nBackward := 0
+-	for c := n.LastChild; c != nil; c = c.PrevSibling {
+-		nBackward++
+-		if nBackward == 1e6 {
+-			return fmt.Errorf("html: backward list of children looks like an infinite loop")
+-		}
+-		if c.Parent != n {
+-			return fmt.Errorf("html: inconsistent child/parent relationship")
+-		}
+-	}
+-
+-	if n.Parent != nil {
+-		if n.Parent == n {
+-			return fmt.Errorf("html: inconsistent parent relationship")
+-		}
+-		if n.Parent == n.FirstChild {
+-			return fmt.Errorf("html: inconsistent parent/first relationship")
+-		}
+-		if n.Parent == n.LastChild {
+-			return fmt.Errorf("html: inconsistent parent/last relationship")
+-		}
+-		if n.Parent == n.PrevSibling {
+-			return fmt.Errorf("html: inconsistent parent/prev relationship")
+-		}
+-		if n.Parent == n.NextSibling {
+-			return fmt.Errorf("html: inconsistent parent/next relationship")
+-		}
+-
+-		parentHasNAsAChild := false
+-		for c := n.Parent.FirstChild; c != nil; c = c.NextSibling {
+-			if c == n {
+-				parentHasNAsAChild = true
+-				break
+-			}
+-		}
+-		if !parentHasNAsAChild {
+-			return fmt.Errorf("html: inconsistent parent/child relationship")
+-		}
+-	}
+-
+-	if n.PrevSibling != nil && n.PrevSibling.NextSibling != n {
+-		return fmt.Errorf("html: inconsistent prev/next relationship")
+-	}
+-	if n.NextSibling != nil && n.NextSibling.PrevSibling != n {
+-		return fmt.Errorf("html: inconsistent next/prev relationship")
+-	}
+-
+-	if (n.FirstChild == nil) != (n.LastChild == nil) {
+-		return fmt.Errorf("html: inconsistent first/last relationship")
+-	}
+-	if n.FirstChild != nil && n.FirstChild == n.LastChild {
+-		// We have a sole child.
+-		if n.FirstChild.PrevSibling != nil || n.FirstChild.NextSibling != nil {
+-			return fmt.Errorf("html: inconsistent sole child's sibling relationship")
+-		}
+-	}
+-
+-	seen := map[*Node]bool{}
+-
+-	var last *Node
+-	for c := n.FirstChild; c != nil; c = c.NextSibling {
+-		if seen[c] {
+-			return fmt.Errorf("html: inconsistent repeated child")
+-		}
+-		seen[c] = true
+-		last = c
+-	}
+-	if last != n.LastChild {
+-		return fmt.Errorf("html: inconsistent last relationship")
+-	}
+-
+-	var first *Node
+-	for c := n.LastChild; c != nil; c = c.PrevSibling {
+-		if !seen[c] {
+-			return fmt.Errorf("html: inconsistent missing child")
+-		}
+-		delete(seen, c)
+-		first = c
+-	}
+-	if first != n.FirstChild {
+-		return fmt.Errorf("html: inconsistent first relationship")
+-	}
+-
+-	if len(seen) != 0 {
+-		return fmt.Errorf("html: inconsistent forwards/backwards child list")
+-	}
+-
+-	return nil
+-}
+diff --git a/Godeps/_workspace/src/code.google.com/p/go.net/html/parse.go b/Godeps/_workspace/src/code.google.com/p/go.net/html/parse.go
+deleted file mode 100644
+index 501c64a..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/go.net/html/parse.go
++++ /dev/null
+@@ -1,2092 +0,0 @@
+-// Copyright 2010 The Go Authors. All rights reserved.
+-// Use of this source code is governed by a BSD-style
+-// license that can be found in the LICENSE file.
+-
+-package html
+-
+-import (
+-	"errors"
+-	"fmt"
+-	"io"
+-	"strings"
+-
+-	a "code.google.com/p/go.net/html/atom"
+-)
+-
+-// A parser implements the HTML5 parsing algorithm:
+-// http://www.whatwg.org/specs/web-apps/current-work/multipage/tokenization.html#tree-construction
+-type parser struct {
+-	// tokenizer provides the tokens for the parser.
+-	tokenizer *Tokenizer
+-	// tok is the most recently read token.
+-	tok Token
+-	// Self-closing tags like <hr/> are treated as start tags, except that
+-	// hasSelfClosingToken is set while they are being processed.
+-	hasSelfClosingToken bool
+-	// doc is the document root element.
+-	doc *Node
+-	// The stack of open elements (section 12.2.3.2) and active formatting
+-	// elements (section 12.2.3.3).
+-	oe, afe nodeStack
+-	// Element pointers (section 12.2.3.4).
+-	head, form *Node
+-	// Other parsing state flags (section 12.2.3.5).
+-	scripting, framesetOK bool
+-	// im is the current insertion mode.
+-	im insertionMode
+-	// originalIM is the insertion mode to go back to after completing a text
+-	// or inTableText insertion mode.
+-	originalIM insertionMode
+-	// fosterParenting is whether new elements should be inserted according to
+-	// the foster parenting rules (section 12.2.5.3).
+-	fosterParenting bool
+-	// quirks is whether the parser is operating in "quirks mode."
+-	quirks bool
+-	// fragment is whether the parser is parsing an HTML fragment.
+-	fragment bool
+-	// context is the context element when parsing an HTML fragment
+-	// (section 12.4).
+-	context *Node
+-}
+-
+-func (p *parser) top() *Node {
+-	if n := p.oe.top(); n != nil {
+-		return n
+-	}
+-	return p.doc
+-}
+-
+-// Stop tags for use in popUntil. These come from section 12.2.3.2.
+-var (
+-	defaultScopeStopTags = map[string][]a.Atom{
+-		"":     {a.Applet, a.Caption, a.Html, a.Table, a.Td, a.Th, a.Marquee, a.Object},
+-		"math": {a.AnnotationXml, a.Mi, a.Mn, a.Mo, a.Ms, a.Mtext},
+-		"svg":  {a.Desc, a.ForeignObject, a.Title},
+-	}
+-)
+-
+-type scope int
+-
+-const (
+-	defaultScope scope = iota
+-	listItemScope
+-	buttonScope
+-	tableScope
+-	tableRowScope
+-	tableBodyScope
+-	selectScope
+-)
+-
+-// popUntil pops the stack of open elements at the highest element whose tag
+-// is in matchTags, provided there is no higher element in the scope's stop
+-// tags (as defined in section 12.2.3.2). It returns whether or not there was
+-// such an element. If there was not, popUntil leaves the stack unchanged.
+-//
+-// For example, the set of stop tags for table scope is: "html", "table". If
+-// the stack was:
+-// ["html", "body", "font", "table", "b", "i", "u"]
+-// then popUntil(tableScope, "font") would return false, but
+-// popUntil(tableScope, "i") would return true and the stack would become:
+-// ["html", "body", "font", "table", "b"]
+-//
+-// If an element's tag is in both the stop tags and matchTags, then the stack
+-// will be popped and the function returns true (provided, of course, there was
+-// no higher element in the stack that was also in the stop tags). For example,
+-// popUntil(tableScope, "table") returns true and leaves:
+-// ["html", "body", "font"]
+-func (p *parser) popUntil(s scope, matchTags ...a.Atom) bool {
+-	if i := p.indexOfElementInScope(s, matchTags...); i != -1 {
+-		p.oe = p.oe[:i]
+-		return true
+-	}
+-	return false
+-}
+-
+-// indexOfElementInScope returns the index in p.oe of the highest element whose
+-// tag is in matchTags that is in scope. If no matching element is in scope, it
+-// returns -1.
+-func (p *parser) indexOfElementInScope(s scope, matchTags ...a.Atom) int {
+-	for i := len(p.oe) - 1; i >= 0; i-- {
+-		tagAtom := p.oe[i].DataAtom
+-		if p.oe[i].Namespace == "" {
+-			for _, t := range matchTags {
+-				if t == tagAtom {
+-					return i
+-				}
+-			}
+-			switch s {
+-			case defaultScope:
+-				// No-op.
+-			case listItemScope:
+-				if tagAtom == a.Ol || tagAtom == a.Ul {
+-					return -1
+-				}
+-			case buttonScope:
+-				if tagAtom == a.Button {
+-					return -1
+-				}
+-			case tableScope:
+-				if tagAtom == a.Html || tagAtom == a.Table {
+-					return -1
+-				}
+-			case selectScope:
+-				if tagAtom != a.Optgroup && tagAtom != a.Option {
+-					return -1
+-				}
+-			default:
+-				panic("unreachable")
+-			}
+-		}
+-		switch s {
+-		case defaultScope, listItemScope, buttonScope:
+-			for _, t := range defaultScopeStopTags[p.oe[i].Namespace] {
+-				if t == tagAtom {
+-					return -1
+-				}
+-			}
+-		}
+-	}
+-	return -1
+-}
+-
+-// elementInScope is like popUntil, except that it doesn't modify the stack of
+-// open elements.
+-func (p *parser) elementInScope(s scope, matchTags ...a.Atom) bool {
+-	return p.indexOfElementInScope(s, matchTags...) != -1
+-}
+-
+-// clearStackToContext pops elements off the stack of open elements until a
+-// scope-defined element is found.
+-func (p *parser) clearStackToContext(s scope) {
+-	for i := len(p.oe) - 1; i >= 0; i-- {
+-		tagAtom := p.oe[i].DataAtom
+-		switch s {
+-		case tableScope:
+-			if tagAtom == a.Html || tagAtom == a.Table {
+-				p.oe = p.oe[:i+1]
+-				return
+-			}
+-		case tableRowScope:
+-			if tagAtom == a.Html || tagAtom == a.Tr {
+-				p.oe = p.oe[:i+1]
+-				return
+-			}
+-		case tableBodyScope:
+-			if tagAtom == a.Html || tagAtom == a.Tbody || tagAtom == a.Tfoot || tagAtom == a.Thead {
+-				p.oe = p.oe[:i+1]
+-				return
+-			}
+-		default:
+-			panic("unreachable")
+-		}
+-	}
+-}
+-
+-// generateImpliedEndTags pops nodes off the stack of open elements as long as
+-// the top node has a tag name of dd, dt, li, option, optgroup, p, rp, or rt.
+-// If exceptions are specified, nodes with that name will not be popped off.
+-func (p *parser) generateImpliedEndTags(exceptions ...string) {
+-	var i int
+-loop:
+-	for i = len(p.oe) - 1; i >= 0; i-- {
+-		n := p.oe[i]
+-		if n.Type == ElementNode {
+-			switch n.DataAtom {
+-			case a.Dd, a.Dt, a.Li, a.Option, a.Optgroup, a.P, a.Rp, a.Rt:
+-				for _, except := range exceptions {
+-					if n.Data == except {
+-						break loop
+-					}
+-				}
+-				continue
+-			}
+-		}
+-		break
+-	}
+-
+-	p.oe = p.oe[:i+1]
+-}
+-
+-// addChild adds a child node n to the top element, and pushes n onto the stack
+-// of open elements if it is an element node.
+-func (p *parser) addChild(n *Node) {
+-	if p.shouldFosterParent() {
+-		p.fosterParent(n)
+-	} else {
+-		p.top().AppendChild(n)
+-	}
+-
+-	if n.Type == ElementNode {
+-		p.oe = append(p.oe, n)
+-	}
+-}
+-
+-// shouldFosterParent returns whether the next node to be added should be
+-// foster parented.
+-func (p *parser) shouldFosterParent() bool {
+-	if p.fosterParenting {
+-		switch p.top().DataAtom {
+-		case a.Table, a.Tbody, a.Tfoot, a.Thead, a.Tr:
+-			return true
+-		}
+-	}
+-	return false
+-}
+-
+-// fosterParent adds a child node according to the foster parenting rules.
+-// Section 12.2.5.3, "foster parenting".
+-func (p *parser) fosterParent(n *Node) {
+-	var table, parent, prev *Node
+-	var i int
+-	for i = len(p.oe) - 1; i >= 0; i-- {
+-		if p.oe[i].DataAtom == a.Table {
+-			table = p.oe[i]
+-			break
+-		}
+-	}
+-
+-	if table == nil {
+-		// The foster parent is the html element.
+-		parent = p.oe[0]
+-	} else {
+-		parent = table.Parent
+-	}
+-	if parent == nil {
+-		parent = p.oe[i-1]
+-	}
+-
+-	if table != nil {
+-		prev = table.PrevSibling
+-	} else {
+-		prev = parent.LastChild
+-	}
+-	if prev != nil && prev.Type == TextNode && n.Type == TextNode {
+-		prev.Data += n.Data
+-		return
+-	}
+-
+-	parent.InsertBefore(n, table)
+-}
+-
+-// addText adds text to the preceding node if it is a text node, or else it
+-// calls addChild with a new text node.
+-func (p *parser) addText(text string) {
+-	if text == "" {
+-		return
+-	}
+-
+-	if p.shouldFosterParent() {
+-		p.fosterParent(&Node{
+-			Type: TextNode,
+-			Data: text,
+-		})
+-		return
+-	}
+-
+-	t := p.top()
+-	if n := t.LastChild; n != nil && n.Type == TextNode {
+-		n.Data += text
+-		return
+-	}
+-	p.addChild(&Node{
+-		Type: TextNode,
+-		Data: text,
+-	})
+-}
+-
+-// addElement adds a child element based on the current token.
+-func (p *parser) addElement() {
+-	p.addChild(&Node{
+-		Type:     ElementNode,
+-		DataAtom: p.tok.DataAtom,
+-		Data:     p.tok.Data,
+-		Attr:     p.tok.Attr,
+-	})
+-}
+-
+-// Section 12.2.3.3.
+-func (p *parser) addFormattingElement() {
+-	tagAtom, attr := p.tok.DataAtom, p.tok.Attr
+-	p.addElement()
+-
+-	// Implement the Noah's Ark clause, but with three per family instead of two.
+-	identicalElements := 0
+-findIdenticalElements:
+-	for i := len(p.afe) - 1; i >= 0; i-- {
+-		n := p.afe[i]
+-		if n.Type == scopeMarkerNode {
+-			break
+-		}
+-		if n.Type != ElementNode {
+-			continue
+-		}
+-		if n.Namespace != "" {
+-			continue
+-		}
+-		if n.DataAtom != tagAtom {
+-			continue
+-		}
+-		if len(n.Attr) != len(attr) {
+-			continue
+-		}
+-	compareAttributes:
+-		for _, t0 := range n.Attr {
+-			for _, t1 := range attr {
+-				if t0.Key == t1.Key && t0.Namespace == t1.Namespace && t0.Val == t1.Val {
+-					// Found a match for this attribute, continue with the next attribute.
+-					continue compareAttributes
+-				}
+-			}
+-			// If we get here, there is no attribute that matches a.
+-			// Therefore the element is not identical to the new one.
+-			continue findIdenticalElements
+-		}
+-
+-		identicalElements++
+-		if identicalElements >= 3 {
+-			p.afe.remove(n)
+-		}
+-	}
+-
+-	p.afe = append(p.afe, p.top())
+-}
+-
+-// Section 12.2.3.3.
+-func (p *parser) clearActiveFormattingElements() {
+-	for {
+-		n := p.afe.pop()
+-		if len(p.afe) == 0 || n.Type == scopeMarkerNode {
+-			return
+-		}
+-	}
+-}
+-
+-// Section 12.2.3.3.
+-func (p *parser) reconstructActiveFormattingElements() {
+-	n := p.afe.top()
+-	if n == nil {
+-		return
+-	}
+-	if n.Type == scopeMarkerNode || p.oe.index(n) != -1 {
+-		return
+-	}
+-	i := len(p.afe) - 1
+-	for n.Type != scopeMarkerNode && p.oe.index(n) == -1 {
+-		if i == 0 {
+-			i = -1
+-			break
+-		}
+-		i--
+-		n = p.afe[i]
+-	}
+-	for {
+-		i++
+-		clone := p.afe[i].clone()
+-		p.addChild(clone)
+-		p.afe[i] = clone
+-		if i == len(p.afe)-1 {
+-			break
+-		}
+-	}
+-}
+-
+-// Section 12.2.4.
+-func (p *parser) acknowledgeSelfClosingTag() {
+-	p.hasSelfClosingToken = false
+-}
+-
+-// An insertion mode (section 12.2.3.1) is the state transition function from
+-// a particular state in the HTML5 parser's state machine. It updates the
+-// parser's fields depending on parser.tok (where ErrorToken means EOF).
+-// It returns whether the token was consumed.
+-type insertionMode func(*parser) bool
+-
+-// setOriginalIM sets the insertion mode to return to after completing a text or
+-// inTableText insertion mode.
+-// Section 12.2.3.1, "using the rules for".
+-func (p *parser) setOriginalIM() {
+-	if p.originalIM != nil {
+-		panic("html: bad parser state: originalIM was set twice")
+-	}
+-	p.originalIM = p.im
+-}
+-
+-// Section 12.2.3.1, "reset the insertion mode".
+-func (p *parser) resetInsertionMode() {
+-	for i := len(p.oe) - 1; i >= 0; i-- {
+-		n := p.oe[i]
+-		if i == 0 && p.context != nil {
+-			n = p.context
+-		}
+-
+-		switch n.DataAtom {
+-		case a.Select:
+-			p.im = inSelectIM
+-		case a.Td, a.Th:
+-			p.im = inCellIM
+-		case a.Tr:
+-			p.im = inRowIM
+-		case a.Tbody, a.Thead, a.Tfoot:
+-			p.im = inTableBodyIM
+-		case a.Caption:
+-			p.im = inCaptionIM
+-		case a.Colgroup:
+-			p.im = inColumnGroupIM
+-		case a.Table:
+-			p.im = inTableIM
+-		case a.Head:
+-			p.im = inBodyIM
+-		case a.Body:
+-			p.im = inBodyIM
+-		case a.Frameset:
+-			p.im = inFramesetIM
+-		case a.Html:
+-			p.im = beforeHeadIM
+-		default:
+-			continue
+-		}
+-		return
+-	}
+-	p.im = inBodyIM
+-}
+-
+-const whitespace = " \t\r\n\f"
+-
+-// Section 12.2.5.4.1.
+-func initialIM(p *parser) bool {
+-	switch p.tok.Type {
+-	case TextToken:
+-		p.tok.Data = strings.TrimLeft(p.tok.Data, whitespace)
+-		if len(p.tok.Data) == 0 {
+-			// It was all whitespace, so ignore it.
+-			return true
+-		}
+-	case CommentToken:
+-		p.doc.AppendChild(&Node{
+-			Type: CommentNode,
+-			Data: p.tok.Data,
+-		})
+-		return true
+-	case DoctypeToken:
+-		n, quirks := parseDoctype(p.tok.Data)
+-		p.doc.AppendChild(n)
+-		p.quirks = quirks
+-		p.im = beforeHTMLIM
+-		return true
+-	}
+-	p.quirks = true
+-	p.im = beforeHTMLIM
+-	return false
+-}
+-
+-// Section 12.2.5.4.2.
+-func beforeHTMLIM(p *parser) bool {
+-	switch p.tok.Type {
+-	case DoctypeToken:
+-		// Ignore the token.
+-		return true
+-	case TextToken:
+-		p.tok.Data = strings.TrimLeft(p.tok.Data, whitespace)
+-		if len(p.tok.Data) == 0 {
+-			// It was all whitespace, so ignore it.
+-			return true
+-		}
+-	case StartTagToken:
+-		if p.tok.DataAtom == a.Html {
+-			p.addElement()
+-			p.im = beforeHeadIM
+-			return true
+-		}
+-	case EndTagToken:
+-		switch p.tok.DataAtom {
+-		case a.Head, a.Body, a.Html, a.Br:
+-			p.parseImpliedToken(StartTagToken, a.Html, a.Html.String())
+-			return false
+-		default:
+-			// Ignore the token.
+-			return true
+-		}
+-	case CommentToken:
+-		p.doc.AppendChild(&Node{
+-			Type: CommentNode,
+-			Data: p.tok.Data,
+-		})
+-		return true
+-	}
+-	p.parseImpliedToken(StartTagToken, a.Html, a.Html.String())
+-	return false
+-}
+-
+-// Section 12.2.5.4.3.
+-func beforeHeadIM(p *parser) bool {
+-	switch p.tok.Type {
+-	case TextToken:
+-		p.tok.Data = strings.TrimLeft(p.tok.Data, whitespace)
+-		if len(p.tok.Data) == 0 {
+-			// It was all whitespace, so ignore it.
+-			return true
+-		}
+-	case StartTagToken:
+-		switch p.tok.DataAtom {
+-		case a.Head:
+-			p.addElement()
+-			p.head = p.top()
+-			p.im = inHeadIM
+-			return true
+-		case a.Html:
+-			return inBodyIM(p)
+-		}
+-	case EndTagToken:
+-		switch p.tok.DataAtom {
+-		case a.Head, a.Body, a.Html, a.Br:
+-			p.parseImpliedToken(StartTagToken, a.Head, a.Head.String())
+-			return false
+-		default:
+-			// Ignore the token.
+-			return true
+-		}
+-	case CommentToken:
+-		p.addChild(&Node{
+-			Type: CommentNode,
+-			Data: p.tok.Data,
+-		})
+-		return true
+-	case DoctypeToken:
+-		// Ignore the token.
+-		return true
+-	}
+-
+-	p.parseImpliedToken(StartTagToken, a.Head, a.Head.String())
+-	return false
+-}
+-
+-// Section 12.2.5.4.4.
+-func inHeadIM(p *parser) bool {
+-	switch p.tok.Type {
+-	case TextToken:
+-		s := strings.TrimLeft(p.tok.Data, whitespace)
+-		if len(s) < len(p.tok.Data) {
+-			// Add the initial whitespace to the current node.
+-			p.addText(p.tok.Data[:len(p.tok.Data)-len(s)])
+-			if s == "" {
+-				return true
+-			}
+-			p.tok.Data = s
+-		}
+-	case StartTagToken:
+-		switch p.tok.DataAtom {
+-		case a.Html:
+-			return inBodyIM(p)
+-		case a.Base, a.Basefont, a.Bgsound, a.Command, a.Link, a.Meta:
+-			p.addElement()
+-			p.oe.pop()
+-			p.acknowledgeSelfClosingTag()
+-			return true
+-		case a.Script, a.Title, a.Noscript, a.Noframes, a.Style:
+-			p.addElement()
+-			p.setOriginalIM()
+-			p.im = textIM
+-			return true
+-		case a.Head:
+-			// Ignore the token.
+-			return true
+-		}
+-	case EndTagToken:
+-		switch p.tok.DataAtom {
+-		case a.Head:
+-			n := p.oe.pop()
+-			if n.DataAtom != a.Head {
+-				panic("html: bad parser state: <head> element not found, in the in-head insertion mode")
+-			}
+-			p.im = afterHeadIM
+-			return true
+-		case a.Body, a.Html, a.Br:
+-			p.parseImpliedToken(EndTagToken, a.Head, a.Head.String())
+-			return false
+-		default:
+-			// Ignore the token.
+-			return true
+-		}
+-	case CommentToken:
+-		p.addChild(&Node{
+-			Type: CommentNode,
+-			Data: p.tok.Data,
+-		})
+-		return true
+-	case DoctypeToken:
+-		// Ignore the token.
+-		return true
+-	}
+-
+-	p.parseImpliedToken(EndTagToken, a.Head, a.Head.String())
+-	return false
+-}
+-
+-// Section 12.2.5.4.6.
+-func afterHeadIM(p *parser) bool {
+-	switch p.tok.Type {
+-	case TextToken:
+-		s := strings.TrimLeft(p.tok.Data, whitespace)
+-		if len(s) < len(p.tok.Data) {
+-			// Add the initial whitespace to the current node.
+-			p.addText(p.tok.Data[:len(p.tok.Data)-len(s)])
+-			if s == "" {
+-				return true
+-			}
+-			p.tok.Data = s
+-		}
+-	case StartTagToken:
+-		switch p.tok.DataAtom {
+-		case a.Html:
+-			return inBodyIM(p)
+-		case a.Body:
+-			p.addElement()
+-			p.framesetOK = false
+-			p.im = inBodyIM
+-			return true
+-		case a.Frameset:
+-			p.addElement()
+-			p.im = inFramesetIM
+-			return true
+-		case a.Base, a.Basefont, a.Bgsound, a.Link, a.Meta, a.Noframes, a.Script, a.Style, a.Title:
+-			p.oe = append(p.oe, p.head)
+-			defer p.oe.remove(p.head)
+-			return inHeadIM(p)
+-		case a.Head:
+-			// Ignore the token.
+-			return true
+-		}
+-	case EndTagToken:
+-		switch p.tok.DataAtom {
+-		case a.Body, a.Html, a.Br:
+-			// Drop down to creating an implied <body> tag.
+-		default:
+-			// Ignore the token.
+-			return true
+-		}
+-	case CommentToken:
+-		p.addChild(&Node{
+-			Type: CommentNode,
+-			Data: p.tok.Data,
+-		})
+-		return true
+-	case DoctypeToken:
+-		// Ignore the token.
+-		return true
+-	}
+-
+-	p.parseImpliedToken(StartTagToken, a.Body, a.Body.String())
+-	p.framesetOK = true
+-	return false
+-}
+-
+-// copyAttributes copies attributes of src not found on dst to dst.
+-func copyAttributes(dst *Node, src Token) {
+-	if len(src.Attr) == 0 {
+-		return
+-	}
+-	attr := map[string]string{}
+-	for _, t := range dst.Attr {
+-		attr[t.Key] = t.Val
+-	}
+-	for _, t := range src.Attr {
+-		if _, ok := attr[t.Key]; !ok {
+-			dst.Attr = append(dst.Attr, t)
+-			attr[t.Key] = t.Val
+-		}
+-	}
+-}
+-
+-// Section 12.2.5.4.7.
+-func inBodyIM(p *parser) bool {
+-	switch p.tok.Type {
+-	case TextToken:
+-		d := p.tok.Data
+-		switch n := p.oe.top(); n.DataAtom {
+-		case a.Pre, a.Listing:
+-			if n.FirstChild == nil {
+-				// Ignore a newline at the start of a <pre> block.
+-				if d != "" && d[0] == '\r' {
+-					d = d[1:]
+-				}
+-				if d != "" && d[0] == '\n' {
+-					d = d[1:]
+-				}
+-			}
+-		}
+-		d = strings.Replace(d, "\x00", "", -1)
+-		if d == "" {
+-			return true
+-		}
+-		p.reconstructActiveFormattingElements()
+-		p.addText(d)
+-		if p.framesetOK && strings.TrimLeft(d, whitespace) != "" {
+-			// There were non-whitespace characters inserted.
+-			p.framesetOK = false
+-		}
+-	case StartTagToken:
+-		switch p.tok.DataAtom {
+-		case a.Html:
+-			copyAttributes(p.oe[0], p.tok)
+-		case a.Base, a.Basefont, a.Bgsound, a.Command, a.Link, a.Meta, a.Noframes, a.Script, a.Style, a.Title:
+-			return inHeadIM(p)
+-		case a.Body:
+-			if len(p.oe) >= 2 {
+-				body := p.oe[1]
+-				if body.Type == ElementNode && body.DataAtom == a.Body {
+-					p.framesetOK = false
+-					copyAttributes(body, p.tok)
+-				}
+-			}
+-		case a.Frameset:
+-			if !p.framesetOK || len(p.oe) < 2 || p.oe[1].DataAtom != a.Body {
+-				// Ignore the token.
+-				return true
+-			}
+-			body := p.oe[1]
+-			if body.Parent != nil {
+-				body.Parent.RemoveChild(body)
+-			}
+-			p.oe = p.oe[:1]
+-			p.addElement()
+-			p.im = inFramesetIM
+-			return true
+-		case a.Address, a.Article, a.Aside, a.Blockquote, a.Center, a.Details, a.Dir, a.Div, a.Dl, a.Fieldset, a.Figcaption, a.Figure, a.Footer, a.Header, a.Hgroup, a.Menu, a.Nav, a.Ol, a.P, a.Section, a.Summary, a.Ul:
+-			p.popUntil(buttonScope, a.P)
+-			p.addElement()
+-		case a.H1, a.H2, a.H3, a.H4, a.H5, a.H6:
+-			p.popUntil(buttonScope, a.P)
+-			switch n := p.top(); n.DataAtom {
+-			case a.H1, a.H2, a.H3, a.H4, a.H5, a.H6:
+-				p.oe.pop()
+-			}
+-			p.addElement()
+-		case a.Pre, a.Listing:
+-			p.popUntil(buttonScope, a.P)
+-			p.addElement()
+-			// The newline, if any, will be dealt with by the TextToken case.
+-			p.framesetOK = false
+-		case a.Form:
+-			if p.form == nil {
+-				p.popUntil(buttonScope, a.P)
+-				p.addElement()
+-				p.form = p.top()
+-			}
+-		case a.Li:
+-			p.framesetOK = false
+-			for i := len(p.oe) - 1; i >= 0; i-- {
+-				node := p.oe[i]
+-				switch node.DataAtom {
+-				case a.Li:
+-					p.oe = p.oe[:i]
+-				case a.Address, a.Div, a.P:
+-					continue
+-				default:
+-					if !isSpecialElement(node) {
+-						continue
+-					}
+-				}
+-				break
+-			}
+-			p.popUntil(buttonScope, a.P)
+-			p.addElement()
+-		case a.Dd, a.Dt:
+-			p.framesetOK = false
+-			for i := len(p.oe) - 1; i >= 0; i-- {
+-				node := p.oe[i]
+-				switch node.DataAtom {
+-				case a.Dd, a.Dt:
+-					p.oe = p.oe[:i]
+-				case a.Address, a.Div, a.P:
+-					continue
+-				default:
+-					if !isSpecialElement(node) {
+-						continue
+-					}
+-				}
+-				break
+-			}
+-			p.popUntil(buttonScope, a.P)
+-			p.addElement()
+-		case a.Plaintext:
+-			p.popUntil(buttonScope, a.P)
+-			p.addElement()
+-		case a.Button:
+-			p.popUntil(defaultScope, a.Button)
+-			p.reconstructActiveFormattingElements()
+-			p.addElement()
+-			p.framesetOK = false
+-		case a.A:
+-			for i := len(p.afe) - 1; i >= 0 && p.afe[i].Type != scopeMarkerNode; i-- {
+-				if n := p.afe[i]; n.Type == ElementNode && n.DataAtom == a.A {
+-					p.inBodyEndTagFormatting(a.A)
+-					p.oe.remove(n)
+-					p.afe.remove(n)
+-					break
+-				}
+-			}
+-			p.reconstructActiveFormattingElements()
+-			p.addFormattingElement()
+-		case a.B, a.Big, a.Code, a.Em, a.Font, a.I, a.S, a.Small, a.Strike, a.Strong, a.Tt, a.U:
+-			p.reconstructActiveFormattingElements()
+-			p.addFormattingElement()
+-		case a.Nobr:
+-			p.reconstructActiveFormattingElements()
+-			if p.elementInScope(defaultScope, a.Nobr) {
+-				p.inBodyEndTagFormatting(a.Nobr)
+-				p.reconstructActiveFormattingElements()
+-			}
+-			p.addFormattingElement()
+-		case a.Applet, a.Marquee, a.Object:
+-			p.reconstructActiveFormattingElements()
+-			p.addElement()
+-			p.afe = append(p.afe, &scopeMarker)
+-			p.framesetOK = false
+-		case a.Table:
+-			if !p.quirks {
+-				p.popUntil(buttonScope, a.P)
+-			}
+-			p.addElement()
+-			p.framesetOK = false
+-			p.im = inTableIM
+-			return true
+-		case a.Area, a.Br, a.Embed, a.Img, a.Input, a.Keygen, a.Wbr:
+-			p.reconstructActiveFormattingElements()
+-			p.addElement()
+-			p.oe.pop()
+-			p.acknowledgeSelfClosingTag()
+-			if p.tok.DataAtom == a.Input {
+-				for _, t := range p.tok.Attr {
+-					if t.Key == "type" {
+-						if strings.ToLower(t.Val) == "hidden" {
+-							// Skip setting framesetOK = false
+-							return true
+-						}
+-					}
+-				}
+-			}
+-			p.framesetOK = false
+-		case a.Param, a.Source, a.Track:
+-			p.addElement()
+-			p.oe.pop()
+-			p.acknowledgeSelfClosingTag()
+-		case a.Hr:
+-			p.popUntil(buttonScope, a.P)
+-			p.addElement()
+-			p.oe.pop()
+-			p.acknowledgeSelfClosingTag()
+-			p.framesetOK = false
+-		case a.Image:
+-			p.tok.DataAtom = a.Img
+-			p.tok.Data = a.Img.String()
+-			return false
+-		case a.Isindex:
+-			if p.form != nil {
+-				// Ignore the token.
+-				return true
+-			}
+-			action := ""
+-			prompt := "This is a searchable index. Enter search keywords: "
+-			attr := []Attribute{{Key: "name", Val: "isindex"}}
+-			for _, t := range p.tok.Attr {
+-				switch t.Key {
+-				case "action":
+-					action = t.Val
+-				case "name":
+-					// Ignore the attribute.
+-				case "prompt":
+-					prompt = t.Val
+-				default:
+-					attr = append(attr, t)
+-				}
+-			}
+-			p.acknowledgeSelfClosingTag()
+-			p.popUntil(buttonScope, a.P)
+-			p.parseImpliedToken(StartTagToken, a.Form, a.Form.String())
+-			if action != "" {
+-				p.form.Attr = []Attribute{{Key: "action", Val: action}}
+-			}
+-			p.parseImpliedToken(StartTagToken, a.Hr, a.Hr.String())
+-			p.parseImpliedToken(StartTagToken, a.Label, a.Label.String())
+-			p.addText(prompt)
+-			p.addChild(&Node{
+-				Type:     ElementNode,
+-				DataAtom: a.Input,
+-				Data:     a.Input.String(),
+-				Attr:     attr,
+-			})
+-			p.oe.pop()
+-			p.parseImpliedToken(EndTagToken, a.Label, a.Label.String())
+-			p.parseImpliedToken(StartTagToken, a.Hr, a.Hr.String())
+-			p.parseImpliedToken(EndTagToken, a.Form, a.Form.String())
+-		case a.Textarea:
+-			p.addElement()
+-			p.setOriginalIM()
+-			p.framesetOK = false
+-			p.im = textIM
+-		case a.Xmp:
+-			p.popUntil(buttonScope, a.P)
+-			p.reconstructActiveFormattingElements()
+-			p.framesetOK = false
+-			p.addElement()
+-			p.setOriginalIM()
+-			p.im = textIM
+-		case a.Iframe:
+-			p.framesetOK = false
+-			p.addElement()
+-			p.setOriginalIM()
+-			p.im = textIM
+-		case a.Noembed, a.Noscript:
+-			p.addElement()
+-			p.setOriginalIM()
+-			p.im = textIM
+-		case a.Select:
+-			p.reconstructActiveFormattingElements()
+-			p.addElement()
+-			p.framesetOK = false
+-			p.im = inSelectIM
+-			return true
+-		case a.Optgroup, a.Option:
+-			if p.top().DataAtom == a.Option {
+-				p.oe.pop()
+-			}
+-			p.reconstructActiveFormattingElements()
+-			p.addElement()
+-		case a.Rp, a.Rt:
+-			if p.elementInScope(defaultScope, a.Ruby) {
+-				p.generateImpliedEndTags()
+-			}
+-			p.addElement()
+-		case a.Math, a.Svg:
+-			p.reconstructActiveFormattingElements()
+-			if p.tok.DataAtom == a.Math {
+-				adjustAttributeNames(p.tok.Attr, mathMLAttributeAdjustments)
+-			} else {
+-				adjustAttributeNames(p.tok.Attr, svgAttributeAdjustments)
+-			}
+-			adjustForeignAttributes(p.tok.Attr)
+-			p.addElement()
+-			p.top().Namespace = p.tok.Data
+-			if p.hasSelfClosingToken {
+-				p.oe.pop()
+-				p.acknowledgeSelfClosingTag()
+-			}
+-			return true
+-		case a.Caption, a.Col, a.Colgroup, a.Frame, a.Head, a.Tbody, a.Td, a.Tfoot, a.Th, a.Thead, a.Tr:
+-			// Ignore the token.
+-		default:
+-			p.reconstructActiveFormattingElements()
+-			p.addElement()
+-		}
+-	case EndTagToken:
+-		switch p.tok.DataAtom {
+-		case a.Body:
+-			if p.elementInScope(defaultScope, a.Body) {
+-				p.im = afterBodyIM
+-			}
+-		case a.Html:
+-			if p.elementInScope(defaultScope, a.Body) {
+-				p.parseImpliedToken(EndTagToken, a.Body, a.Body.String())
+-				return false
+-			}
+-			return true
+-		case a.Address, a.Article, a.Aside, a.Blockquote, a.Button, a.Center, a.Details, a.Dir, a.Div, a.Dl, a.Fieldset, a.Figcaption, a.Figure, a.Footer, a.Header, a.Hgroup, a.Listing, a.Menu, a.Nav, a.Ol, a.Pre, a.Section, a.Summary, a.Ul:
+-			p.popUntil(defaultScope, p.tok.DataAtom)
+-		case a.Form:
+-			node := p.form
+-			p.form = nil
+-			i := p.indexOfElementInScope(defaultScope, a.Form)
+-			if node == nil || i == -1 || p.oe[i] != node {
+-				// Ignore the token.
+-				return true
+-			}
+-			p.generateImpliedEndTags()
+-			p.oe.remove(node)
+-		case a.P:
+-			if !p.elementInScope(buttonScope, a.P) {
+-				p.parseImpliedToken(StartTagToken, a.P, a.P.String())
+-			}
+-			p.popUntil(buttonScope, a.P)
+-		case a.Li:
+-			p.popUntil(listItemScope, a.Li)
+-		case a.Dd, a.Dt:
+-			p.popUntil(defaultScope, p.tok.DataAtom)
+-		case a.H1, a.H2, a.H3, a.H4, a.H5, a.H6:
+-			p.popUntil(defaultScope, a.H1, a.H2, a.H3, a.H4, a.H5, a.H6)
+-		case a.A, a.B, a.Big, a.Code, a.Em, a.Font, a.I, a.Nobr, a.S, a.Small, a.Strike, a.Strong, a.Tt, a.U:
+-			p.inBodyEndTagFormatting(p.tok.DataAtom)
+-		case a.Applet, a.Marquee, a.Object:
+-			if p.popUntil(defaultScope, p.tok.DataAtom) {
+-				p.clearActiveFormattingElements()
+-			}
+-		case a.Br:
+-			p.tok.Type = StartTagToken
+-			return false
+-		default:
+-			p.inBodyEndTagOther(p.tok.DataAtom)
+-		}
+-	case CommentToken:
+-		p.addChild(&Node{
+-			Type: CommentNode,
+-			Data: p.tok.Data,
+-		})
+-	}
+-
+-	return true
+-}
+-
+-func (p *parser) inBodyEndTagFormatting(tagAtom a.Atom) {
+-	// This is the "adoption agency" algorithm, described at
+-	// http://www.whatwg.org/specs/web-apps/current-work/multipage/tokenization.html#adoptionAgency
+-
+-	// TODO: this is a fairly literal line-by-line translation of that algorithm.
+-	// Once the code successfully parses the comprehensive test suite, we should
+-	// refactor this code to be more idiomatic.
+-
+-	// Steps 1-3. The outer loop.
+-	for i := 0; i < 8; i++ {
+-		// Step 4. Find the formatting element.
+-		var formattingElement *Node
+-		for j := len(p.afe) - 1; j >= 0; j-- {
+-			if p.afe[j].Type == scopeMarkerNode {
+-				break
+-			}
+-			if p.afe[j].DataAtom == tagAtom {
+-				formattingElement = p.afe[j]
+-				break
+-			}
+-		}
+-		if formattingElement == nil {
+-			p.inBodyEndTagOther(tagAtom)
+-			return
+-		}
+-		feIndex := p.oe.index(formattingElement)
+-		if feIndex == -1 {
+-			p.afe.remove(formattingElement)
+-			return
+-		}
+-		if !p.elementInScope(defaultScope, tagAtom) {
+-			// Ignore the tag.
+-			return
+-		}
+-
+-		// Steps 5-6. Find the furthest block.
+-		var furthestBlock *Node
+-		for _, e := range p.oe[feIndex:] {
+-			if isSpecialElement(e) {
+-				furthestBlock = e
+-				break
+-			}
+-		}
+-		if furthestBlock == nil {
+-			e := p.oe.pop()
+-			for e != formattingElement {
+-				e = p.oe.pop()
+-			}
+-			p.afe.remove(e)
+-			return
+-		}
+-
+-		// Steps 7-8. Find the common ancestor and bookmark node.
+-		commonAncestor := p.oe[feIndex-1]
+-		bookmark := p.afe.index(formattingElement)
+-
+-		// Step 9. The inner loop. Find the lastNode to reparent.
+-		lastNode := furthestBlock
+-		node := furthestBlock
+-		x := p.oe.index(node)
+-		// Steps 9.1-9.3.
+-		for j := 0; j < 3; j++ {
+-			// Step 9.4.
+-			x--
+-			node = p.oe[x]
+-			// Step 9.5.
+-			if p.afe.index(node) == -1 {
+-				p.oe.remove(node)
+-				continue
+-			}
+-			// Step 9.6.
+-			if node == formattingElement {
+-				break
+-			}
+-			// Step 9.7.
+-			clone := node.clone()
+-			p.afe[p.afe.index(node)] = clone
+-			p.oe[p.oe.index(node)] = clone
+-			node = clone
+-			// Step 9.8.
+-			if lastNode == furthestBlock {
+-				bookmark = p.afe.index(node) + 1
+-			}
+-			// Step 9.9.
+-			if lastNode.Parent != nil {
+-				lastNode.Parent.RemoveChild(lastNode)
+-			}
+-			node.AppendChild(lastNode)
+-			// Step 9.10.
+-			lastNode = node
+-		}
+-
+-		// Step 10. Reparent lastNode to the common ancestor,
+-		// or for misnested table nodes, to the foster parent.
+-		if lastNode.Parent != nil {
+-			lastNode.Parent.RemoveChild(lastNode)
+-		}
+-		switch commonAncestor.DataAtom {
+-		case a.Table, a.Tbody, a.Tfoot, a.Thead, a.Tr:
+-			p.fosterParent(lastNode)
+-		default:
+-			commonAncestor.AppendChild(lastNode)
+-		}
+-
+-		// Steps 11-13. Reparent nodes from the furthest block's children
+-		// to a clone of the formatting element.
+-		clone := formattingElement.clone()
+-		reparentChildren(clone, furthestBlock)
+-		furthestBlock.AppendChild(clone)
+-
+-		// Step 14. Fix up the list of active formatting elements.
+-		if oldLoc := p.afe.index(formattingElement); oldLoc != -1 && oldLoc < bookmark {
+-			// Move the bookmark with the rest of the list.
+-			bookmark--
+-		}
+-		p.afe.remove(formattingElement)
+-		p.afe.insert(bookmark, clone)
+-
+-		// Step 15. Fix up the stack of open elements.
+-		p.oe.remove(formattingElement)
+-		p.oe.insert(p.oe.index(furthestBlock)+1, clone)
+-	}
+-}
+-
+-// inBodyEndTagOther performs the "any other end tag" algorithm for inBodyIM.
+-func (p *parser) inBodyEndTagOther(tagAtom a.Atom) {
+-	for i := len(p.oe) - 1; i >= 0; i-- {
+-		if p.oe[i].DataAtom == tagAtom {
+-			p.oe = p.oe[:i]
+-			break
+-		}
+-		if isSpecialElement(p.oe[i]) {
+-			break
+-		}
+-	}
+-}
+-
+-// Section 12.2.5.4.8.
+-func textIM(p *parser) bool {
+-	switch p.tok.Type {
+-	case ErrorToken:
+-		p.oe.pop()
+-	case TextToken:
+-		d := p.tok.Data
+-		if n := p.oe.top(); n.DataAtom == a.Textarea && n.FirstChild == nil {
+-			// Ignore a newline at the start of a <textarea> block.
+-			if d != "" && d[0] == '\r' {
+-				d = d[1:]
+-			}
+-			if d != "" && d[0] == '\n' {
+-				d = d[1:]
+-			}
+-		}
+-		if d == "" {
+-			return true
+-		}
+-		p.addText(d)
+-		return true
+-	case EndTagToken:
+-		p.oe.pop()
+-	}
+-	p.im = p.originalIM
+-	p.originalIM = nil
+-	return p.tok.Type == EndTagToken
+-}
+-
+-// Section 12.2.5.4.9.
+-func inTableIM(p *parser) bool {
+-	switch p.tok.Type {
+-	case ErrorToken:
+-		// Stop parsing.
+-		return true
+-	case TextToken:
+-		p.tok.Data = strings.Replace(p.tok.Data, "\x00", "", -1)
+-		switch p.oe.top().DataAtom {
+-		case a.Table, a.Tbody, a.Tfoot, a.Thead, a.Tr:
+-			if strings.Trim(p.tok.Data, whitespace) == "" {
+-				p.addText(p.tok.Data)
+-				return true
+-			}
+-		}
+-	case StartTagToken:
+-		switch p.tok.DataAtom {
+-		case a.Caption:
+-			p.clearStackToContext(tableScope)
+-			p.afe = append(p.afe, &scopeMarker)
+-			p.addElement()
+-			p.im = inCaptionIM
+-			return true
+-		case a.Colgroup:
+-			p.clearStackToContext(tableScope)
+-			p.addElement()
+-			p.im = inColumnGroupIM
+-			return true
+-		case a.Col:
+-			p.parseImpliedToken(StartTagToken, a.Colgroup, a.Colgroup.String())
+-			return false
+-		case a.Tbody, a.Tfoot, a.Thead:
+-			p.clearStackToContext(tableScope)
+-			p.addElement()
+-			p.im = inTableBodyIM
+-			return true
+-		case a.Td, a.Th, a.Tr:
+-			p.parseImpliedToken(StartTagToken, a.Tbody, a.Tbody.String())
+-			return false
+-		case a.Table:
+-			if p.popUntil(tableScope, a.Table) {
+-				p.resetInsertionMode()
+-				return false
+-			}
+-			// Ignore the token.
+-			return true
+-		case a.Style, a.Script:
+-			return inHeadIM(p)
+-		case a.Input:
+-			for _, t := range p.tok.Attr {
+-				if t.Key == "type" && strings.ToLower(t.Val) == "hidden" {
+-					p.addElement()
+-					p.oe.pop()
+-					return true
+-				}
+-			}
+-			// Otherwise drop down to the default action.
+-		case a.Form:
+-			if p.form != nil {
+-				// Ignore the token.
+-				return true
+-			}
+-			p.addElement()
+-			p.form = p.oe.pop()
+-		case a.Select:
+-			p.reconstructActiveFormattingElements()
+-			switch p.top().DataAtom {
+-			case a.Table, a.Tbody, a.Tfoot, a.Thead, a.Tr:
+-				p.fosterParenting = true
+-			}
+-			p.addElement()
+-			p.fosterParenting = false
+-			p.framesetOK = false
+-			p.im = inSelectInTableIM
+-			return true
+-		}
+-	case EndTagToken:
+-		switch p.tok.DataAtom {
+-		case a.Table:
+-			if p.popUntil(tableScope, a.Table) {
+-				p.resetInsertionMode()
+-				return true
+-			}
+-			// Ignore the token.
+-			return true
+-		case a.Body, a.Caption, a.Col, a.Colgroup, a.Html, a.Tbody, a.Td, a.Tfoot, a.Th, a.Thead, a.Tr:
+-			// Ignore the token.
+-			return true
+-		}
+-	case CommentToken:
+-		p.addChild(&Node{
+-			Type: CommentNode,
+-			Data: p.tok.Data,
+-		})
+-		return true
+-	case DoctypeToken:
+-		// Ignore the token.
+-		return true
+-	}
+-
+-	p.fosterParenting = true
+-	defer func() { p.fosterParenting = false }()
+-
+-	return inBodyIM(p)
+-}
+-
+-// Section 12.2.5.4.11.
+-func inCaptionIM(p *parser) bool {
+-	switch p.tok.Type {
+-	case StartTagToken:
+-		switch p.tok.DataAtom {
+-		case a.Caption, a.Col, a.Colgroup, a.Tbody, a.Td, a.Tfoot, a.Thead, a.Tr:
+-			if p.popUntil(tableScope, a.Caption) {
+-				p.clearActiveFormattingElements()
+-				p.im = inTableIM
+-				return false
+-			} else {
+-				// Ignore the token.
+-				return true
+-			}
+-		case a.Select:
+-			p.reconstructActiveFormattingElements()
+-			p.addElement()
+-			p.framesetOK = false
+-			p.im = inSelectInTableIM
+-			return true
+-		}
+-	case EndTagToken:
+-		switch p.tok.DataAtom {
+-		case a.Caption:
+-			if p.popUntil(tableScope, a.Caption) {
+-				p.clearActiveFormattingElements()
+-				p.im = inTableIM
+-			}
+-			return true
+-		case a.Table:
+-			if p.popUntil(tableScope, a.Caption) {
+-				p.clearActiveFormattingElements()
+-				p.im = inTableIM
+-				return false
+-			} else {
+-				// Ignore the token.
+-				return true
+-			}
+-		case a.Body, a.Col, a.Colgroup, a.Html, a.Tbody, a.Td, a.Tfoot, a.Th, a.Thead, a.Tr:
+-			// Ignore the token.
+-			return true
+-		}
+-	}
+-	return inBodyIM(p)
+-}
+-
+-// Section 12.2.5.4.12.
+-func inColumnGroupIM(p *parser) bool {
+-	switch p.tok.Type {
+-	case TextToken:
+-		s := strings.TrimLeft(p.tok.Data, whitespace)
+-		if len(s) < len(p.tok.Data) {
+-			// Add the initial whitespace to the current node.
+-			p.addText(p.tok.Data[:len(p.tok.Data)-len(s)])
+-			if s == "" {
+-				return true
+-			}
+-			p.tok.Data = s
+-		}
+-	case CommentToken:
+-		p.addChild(&Node{
+-			Type: CommentNode,
+-			Data: p.tok.Data,
+-		})
+-		return true
+-	case DoctypeToken:
+-		// Ignore the token.
+-		return true
+-	case StartTagToken:
+-		switch p.tok.DataAtom {
+-		case a.Html:
+-			return inBodyIM(p)
+-		case a.Col:
+-			p.addElement()
+-			p.oe.pop()
+-			p.acknowledgeSelfClosingTag()
+-			return true
+-		}
+-	case EndTagToken:
+-		switch p.tok.DataAtom {
+-		case a.Colgroup:
+-			if p.oe.top().DataAtom != a.Html {
+-				p.oe.pop()
+-				p.im = inTableIM
+-			}
+-			return true
+-		case a.Col:
+-			// Ignore the token.
+-			return true
+-		}
+-	}
+-	if p.oe.top().DataAtom != a.Html {
+-		p.oe.pop()
+-		p.im = inTableIM
+-		return false
+-	}
+-	return true
+-}
+-
+-// Section 12.2.5.4.13.
+-func inTableBodyIM(p *parser) bool {
+-	switch p.tok.Type {
+-	case StartTagToken:
+-		switch p.tok.DataAtom {
+-		case a.Tr:
+-			p.clearStackToContext(tableBodyScope)
+-			p.addElement()
+-			p.im = inRowIM
+-			return true
+-		case a.Td, a.Th:
+-			p.parseImpliedToken(StartTagToken, a.Tr, a.Tr.String())
+-			return false
+-		case a.Caption, a.Col, a.Colgroup, a.Tbody, a.Tfoot, a.Thead:
+-			if p.popUntil(tableScope, a.Tbody, a.Thead, a.Tfoot) {
+-				p.im = inTableIM
+-				return false
+-			}
+-			// Ignore the token.
+-			return true
+-		}
+-	case EndTagToken:
+-		switch p.tok.DataAtom {
+-		case a.Tbody, a.Tfoot, a.Thead:
+-			if p.elementInScope(tableScope, p.tok.DataAtom) {
+-				p.clearStackToContext(tableBodyScope)
+-				p.oe.pop()
+-				p.im = inTableIM
+-			}
+-			return true
+-		case a.Table:
+-			if p.popUntil(tableScope, a.Tbody, a.Thead, a.Tfoot) {
+-				p.im = inTableIM
+-				return false
+-			}
+-			// Ignore the token.
+-			return true
+-		case a.Body, a.Caption, a.Col, a.Colgroup, a.Html, a.Td, a.Th, a.Tr:
+-			// Ignore the token.
+-			return true
+-		}
+-	case CommentToken:
+-		p.addChild(&Node{
+-			Type: CommentNode,
+-			Data: p.tok.Data,
+-		})
+-		return true
+-	}
+-
+-	return inTableIM(p)
+-}
+-
+-// Section 12.2.5.4.14.
+-func inRowIM(p *parser) bool {
+-	switch p.tok.Type {
+-	case StartTagToken:
+-		switch p.tok.DataAtom {
+-		case a.Td, a.Th:
+-			p.clearStackToContext(tableRowScope)
+-			p.addElement()
+-			p.afe = append(p.afe, &scopeMarker)
+-			p.im = inCellIM
+-			return true
+-		case a.Caption, a.Col, a.Colgroup, a.Tbody, a.Tfoot, a.Thead, a.Tr:
+-			if p.popUntil(tableScope, a.Tr) {
+-				p.im = inTableBodyIM
+-				return false
+-			}
+-			// Ignore the token.
+-			return true
+-		}
+-	case EndTagToken:
+-		switch p.tok.DataAtom {
+-		case a.Tr:
+-			if p.popUntil(tableScope, a.Tr) {
+-				p.im = inTableBodyIM
+-				return true
+-			}
+-			// Ignore the token.
+-			return true
+-		case a.Table:
+-			if p.popUntil(tableScope, a.Tr) {
+-				p.im = inTableBodyIM
+-				return false
+-			}
+-			// Ignore the token.
+-			return true
+-		case a.Tbody, a.Tfoot, a.Thead:
+-			if p.elementInScope(tableScope, p.tok.DataAtom) {
+-				p.parseImpliedToken(EndTagToken, a.Tr, a.Tr.String())
+-				return false
+-			}
+-			// Ignore the token.
+-			return true
+-		case a.Body, a.Caption, a.Col, a.Colgroup, a.Html, a.Td, a.Th:
+-			// Ignore the token.
+-			return true
+-		}
+-	}
+-
+-	return inTableIM(p)
+-}
+-
+-// Section 12.2.5.4.15.
+-func inCellIM(p *parser) bool {
+-	switch p.tok.Type {
+-	case StartTagToken:
+-		switch p.tok.DataAtom {
+-		case a.Caption, a.Col, a.Colgroup, a.Tbody, a.Td, a.Tfoot, a.Th, a.Thead, a.Tr:
+-			if p.popUntil(tableScope, a.Td, a.Th) {
+-				// Close the cell and reprocess.
+-				p.clearActiveFormattingElements()
+-				p.im = inRowIM
+-				return false
+-			}
+-			// Ignore the token.
+-			return true
+-		case a.Select:
+-			p.reconstructActiveFormattingElements()
+-			p.addElement()
+-			p.framesetOK = false
+-			p.im = inSelectInTableIM
+-			return true
+-		}
+-	case EndTagToken:
+-		switch p.tok.DataAtom {
+-		case a.Td, a.Th:
+-			if !p.popUntil(tableScope, p.tok.DataAtom) {
+-				// Ignore the token.
+-				return true
+-			}
+-			p.clearActiveFormattingElements()
+-			p.im = inRowIM
+-			return true
+-		case a.Body, a.Caption, a.Col, a.Colgroup, a.Html:
+-			// Ignore the token.
+-			return true
+-		case a.Table, a.Tbody, a.Tfoot, a.Thead, a.Tr:
+-			if !p.elementInScope(tableScope, p.tok.DataAtom) {
+-				// Ignore the token.
+-				return true
+-			}
+-			// Close the cell and reprocess.
+-			p.popUntil(tableScope, a.Td, a.Th)
+-			p.clearActiveFormattingElements()
+-			p.im = inRowIM
+-			return false
+-		}
+-	}
+-	return inBodyIM(p)
+-}
+-
+-// Section 12.2.5.4.16.
+-func inSelectIM(p *parser) bool {
+-	switch p.tok.Type {
+-	case ErrorToken:
+-		// Stop parsing.
+-		return true
+-	case TextToken:
+-		p.addText(strings.Replace(p.tok.Data, "\x00", "", -1))
+-	case StartTagToken:
+-		switch p.tok.DataAtom {
+-		case a.Html:
+-			return inBodyIM(p)
+-		case a.Option:
+-			if p.top().DataAtom == a.Option {
+-				p.oe.pop()
+-			}
+-			p.addElement()
+-		case a.Optgroup:
+-			if p.top().DataAtom == a.Option {
+-				p.oe.pop()
+-			}
+-			if p.top().DataAtom == a.Optgroup {
+-				p.oe.pop()
+-			}
+-			p.addElement()
+-		case a.Select:
+-			p.tok.Type = EndTagToken
+-			return false
+-		case a.Input, a.Keygen, a.Textarea:
+-			if p.elementInScope(selectScope, a.Select) {
+-				p.parseImpliedToken(EndTagToken, a.Select, a.Select.String())
+-				return false
+-			}
+-			// In order to properly ignore <textarea>, we need to change the tokenizer mode.
+-			p.tokenizer.NextIsNotRawText()
+-			// Ignore the token.
+-			return true
+-		case a.Script:
+-			return inHeadIM(p)
+-		}
+-	case EndTagToken:
+-		switch p.tok.DataAtom {
+-		case a.Option:
+-			if p.top().DataAtom == a.Option {
+-				p.oe.pop()
+-			}
+-		case a.Optgroup:
+-			i := len(p.oe) - 1
+-			if p.oe[i].DataAtom == a.Option {
+-				i--
+-			}
+-			if p.oe[i].DataAtom == a.Optgroup {
+-				p.oe = p.oe[:i]
+-			}
+-		case a.Select:
+-			if p.popUntil(selectScope, a.Select) {
+-				p.resetInsertionMode()
+-			}
+-		}
+-	case CommentToken:
+-		p.addChild(&Node{
+-			Type: CommentNode,
+-			Data: p.tok.Data,
+-		})
+-	case DoctypeToken:
+-		// Ignore the token.
+-		return true
+-	}
+-
+-	return true
+-}
+-
+-// Section 12.2.5.4.17.
+-func inSelectInTableIM(p *parser) bool {
+-	switch p.tok.Type {
+-	case StartTagToken, EndTagToken:
+-		switch p.tok.DataAtom {
+-		case a.Caption, a.Table, a.Tbody, a.Tfoot, a.Thead, a.Tr, a.Td, a.Th:
+-			if p.tok.Type == StartTagToken || p.elementInScope(tableScope, p.tok.DataAtom) {
+-				p.parseImpliedToken(EndTagToken, a.Select, a.Select.String())
+-				return false
+-			} else {
+-				// Ignore the token.
+-				return true
+-			}
+-		}
+-	}
+-	return inSelectIM(p)
+-}
+-
+-// Section 12.2.5.4.18.
+-func afterBodyIM(p *parser) bool {
+-	switch p.tok.Type {
+-	case ErrorToken:
+-		// Stop parsing.
+-		return true
+-	case TextToken:
+-		s := strings.TrimLeft(p.tok.Data, whitespace)
+-		if len(s) == 0 {
+-			// It was all whitespace.
+-			return inBodyIM(p)
+-		}
+-	case StartTagToken:
+-		if p.tok.DataAtom == a.Html {
+-			return inBodyIM(p)
+-		}
+-	case EndTagToken:
+-		if p.tok.DataAtom == a.Html {
+-			if !p.fragment {
+-				p.im = afterAfterBodyIM
+-			}
+-			return true
+-		}
+-	case CommentToken:
+-		// The comment is attached to the <html> element.
+-		if len(p.oe) < 1 || p.oe[0].DataAtom != a.Html {
+-			panic("html: bad parser state: <html> element not found, in the after-body insertion mode")
+-		}
+-		p.oe[0].AppendChild(&Node{
+-			Type: CommentNode,
+-			Data: p.tok.Data,
+-		})
+-		return true
+-	}
+-	p.im = inBodyIM
+-	return false
+-}
+-
+-// Section 12.2.5.4.19.
+-func inFramesetIM(p *parser) bool {
+-	switch p.tok.Type {
+-	case CommentToken:
+-		p.addChild(&Node{
+-			Type: CommentNode,
+-			Data: p.tok.Data,
+-		})
+-	case TextToken:
+-		// Ignore all text but whitespace.
+-		s := strings.Map(func(c rune) rune {
+-			switch c {
+-			case ' ', '\t', '\n', '\f', '\r':
+-				return c
+-			}
+-			return -1
+-		}, p.tok.Data)
+-		if s != "" {
+-			p.addText(s)
+-		}
+-	case StartTagToken:
+-		switch p.tok.DataAtom {
+-		case a.Html:
+-			return inBodyIM(p)
+-		case a.Frameset:
+-			p.addElement()
+-		case a.Frame:
+-			p.addElement()
+-			p.oe.pop()
+-			p.acknowledgeSelfClosingTag()
+-		case a.Noframes:
+-			return inHeadIM(p)
+-		}
+-	case EndTagToken:
+-		switch p.tok.DataAtom {
+-		case a.Frameset:
+-			if p.oe.top().DataAtom != a.Html {
+-				p.oe.pop()
+-				if p.oe.top().DataAtom != a.Frameset {
+-					p.im = afterFramesetIM
+-					return true
+-				}
+-			}
+-		}
+-	default:
+-		// Ignore the token.
+-	}
+-	return true
+-}
+-
+-// Section 12.2.5.4.20.
+-func afterFramesetIM(p *parser) bool {
+-	switch p.tok.Type {
+-	case CommentToken:
+-		p.addChild(&Node{
+-			Type: CommentNode,
+-			Data: p.tok.Data,
+-		})
+-	case TextToken:
+-		// Ignore all text but whitespace.
+-		s := strings.Map(func(c rune) rune {
+-			switch c {
+-			case ' ', '\t', '\n', '\f', '\r':
+-				return c
+-			}
+-			return -1
+-		}, p.tok.Data)
+-		if s != "" {
+-			p.addText(s)
+-		}
+-	case StartTagToken:
+-		switch p.tok.DataAtom {
+-		case a.Html:
+-			return inBodyIM(p)
+-		case a.Noframes:
+-			return inHeadIM(p)
+-		}
+-	case EndTagToken:
+-		switch p.tok.DataAtom {
+-		case a.Html:
+-			p.im = afterAfterFramesetIM
+-			return true
+-		}
+-	default:
+-		// Ignore the token.
+-	}
+-	return true
+-}
+-
+-// Section 12.2.5.4.21.
+-func afterAfterBodyIM(p *parser) bool {
+-	switch p.tok.Type {
+-	case ErrorToken:
+-		// Stop parsing.
+-		return true
+-	case TextToken:
+-		s := strings.TrimLeft(p.tok.Data, whitespace)
+-		if len(s) == 0 {
+-			// It was all whitespace.
+-			return inBodyIM(p)
+-		}
+-	case StartTagToken:
+-		if p.tok.DataAtom == a.Html {
+-			return inBodyIM(p)
+-		}
+-	case CommentToken:
+-		p.doc.AppendChild(&Node{
+-			Type: CommentNode,
+-			Data: p.tok.Data,
+-		})
+-		return true
+-	case DoctypeToken:
+-		return inBodyIM(p)
+-	}
+-	p.im = inBodyIM
+-	return false
+-}
+-
+-// Section 12.2.5.4.22.
+-func afterAfterFramesetIM(p *parser) bool {
+-	switch p.tok.Type {
+-	case CommentToken:
+-		p.doc.AppendChild(&Node{
+-			Type: CommentNode,
+-			Data: p.tok.Data,
+-		})
+-	case TextToken:
+-		// Ignore all text but whitespace.
+-		s := strings.Map(func(c rune) rune {
+-			switch c {
+-			case ' ', '\t', '\n', '\f', '\r':
+-				return c
+-			}
+-			return -1
+-		}, p.tok.Data)
+-		if s != "" {
+-			p.tok.Data = s
+-			return inBodyIM(p)
+-		}
+-	case StartTagToken:
+-		switch p.tok.DataAtom {
+-		case a.Html:
+-			return inBodyIM(p)
+-		case a.Noframes:
+-			return inHeadIM(p)
+-		}
+-	case DoctypeToken:
+-		return inBodyIM(p)
+-	default:
+-		// Ignore the token.
+-	}
+-	return true
+-}
+-
+-const whitespaceOrNUL = whitespace + "\x00"
+-
+-// Section 12.2.5.5.
+-func parseForeignContent(p *parser) bool {
+-	switch p.tok.Type {
+-	case TextToken:
+-		if p.framesetOK {
+-			p.framesetOK = strings.TrimLeft(p.tok.Data, whitespaceOrNUL) == ""
+-		}
+-		p.tok.Data = strings.Replace(p.tok.Data, "\x00", "\ufffd", -1)
+-		p.addText(p.tok.Data)
+-	case CommentToken:
+-		p.addChild(&Node{
+-			Type: CommentNode,
+-			Data: p.tok.Data,
+-		})
+-	case StartTagToken:
+-		b := breakout[p.tok.Data]
+-		if p.tok.DataAtom == a.Font {
+-		loop:
+-			for _, attr := range p.tok.Attr {
+-				switch attr.Key {
+-				case "color", "face", "size":
+-					b = true
+-					break loop
+-				}
+-			}
+-		}
+-		if b {
+-			for i := len(p.oe) - 1; i >= 0; i-- {
+-				n := p.oe[i]
+-				if n.Namespace == "" || htmlIntegrationPoint(n) || mathMLTextIntegrationPoint(n) {
+-					p.oe = p.oe[:i+1]
+-					break
+-				}
+-			}
+-			return false
+-		}
+-		switch p.top().Namespace {
+-		case "math":
+-			adjustAttributeNames(p.tok.Attr, mathMLAttributeAdjustments)
+-		case "svg":
+-			// Adjust SVG tag names. The tokenizer lower-cases tag names, but
+-			// SVG wants e.g. "foreignObject" with a capital second "O".
+-			if x := svgTagNameAdjustments[p.tok.Data]; x != "" {
+-				p.tok.DataAtom = a.Lookup([]byte(x))
+-				p.tok.Data = x
+-			}
+-			adjustAttributeNames(p.tok.Attr, svgAttributeAdjustments)
+-		default:
+-			panic("html: bad parser state: unexpected namespace")
+-		}
+-		adjustForeignAttributes(p.tok.Attr)
+-		namespace := p.top().Namespace
+-		p.addElement()
+-		p.top().Namespace = namespace
+-		if namespace != "" {
+-			// Don't let the tokenizer go into raw text mode in foreign content
+-			// (e.g. in an SVG <title> tag).
+-			p.tokenizer.NextIsNotRawText()
+-		}
+-		if p.hasSelfClosingToken {
+-			p.oe.pop()
+-			p.acknowledgeSelfClosingTag()
+-		}
+-	case EndTagToken:
+-		for i := len(p.oe) - 1; i >= 0; i-- {
+-			if p.oe[i].Namespace == "" {
+-				return p.im(p)
+-			}
+-			if strings.EqualFold(p.oe[i].Data, p.tok.Data) {
+-				p.oe = p.oe[:i]
+-				break
+-			}
+-		}
+-		return true
+-	default:
+-		// Ignore the token.
+-	}
+-	return true
+-}
+-
+-// Section 12.2.5.
+-func (p *parser) inForeignContent() bool {
+-	if len(p.oe) == 0 {
+-		return false
+-	}
+-	n := p.oe[len(p.oe)-1]
+-	if n.Namespace == "" {
+-		return false
+-	}
+-	if mathMLTextIntegrationPoint(n) {
+-		if p.tok.Type == StartTagToken && p.tok.DataAtom != a.Mglyph && p.tok.DataAtom != a.Malignmark {
+-			return false
+-		}
+-		if p.tok.Type == TextToken {
+-			return false
+-		}
+-	}
+-	if n.Namespace == "math" && n.DataAtom == a.AnnotationXml && p.tok.Type == StartTagToken && p.tok.DataAtom == a.Svg {
+-		return false
+-	}
+-	if htmlIntegrationPoint(n) && (p.tok.Type == StartTagToken || p.tok.Type == TextToken) {
+-		return false
+-	}
+-	if p.tok.Type == ErrorToken {
+-		return false
+-	}
+-	return true
+-}
+-
+-// parseImpliedToken parses a token as though it had appeared in the parser's
+-// input.
+-func (p *parser) parseImpliedToken(t TokenType, dataAtom a.Atom, data string) {
+-	realToken, selfClosing := p.tok, p.hasSelfClosingToken
+-	p.tok = Token{
+-		Type:     t,
+-		DataAtom: dataAtom,
+-		Data:     data,
+-	}
+-	p.hasSelfClosingToken = false
+-	p.parseCurrentToken()
+-	p.tok, p.hasSelfClosingToken = realToken, selfClosing
+-}
+-
+-// parseCurrentToken runs the current token through the parsing routines
+-// until it is consumed.
+-func (p *parser) parseCurrentToken() {
+-	if p.tok.Type == SelfClosingTagToken {
+-		p.hasSelfClosingToken = true
+-		p.tok.Type = StartTagToken
+-	}
+-
+-	consumed := false
+-	for !consumed {
+-		if p.inForeignContent() {
+-			consumed = parseForeignContent(p)
+-		} else {
+-			consumed = p.im(p)
+-		}
+-	}
+-
+-	if p.hasSelfClosingToken {
+-		// This is a parse error, but ignore it.
+-		p.hasSelfClosingToken = false
+-	}
+-}
+-
+-func (p *parser) parse() error {
+-	// Iterate until EOF. Any other error will cause an early return.
+-	var err error
+-	for err != io.EOF {
+-		// CDATA sections are allowed only in foreign content.
+-		n := p.oe.top()
+-		p.tokenizer.AllowCDATA(n != nil && n.Namespace != "")
+-		// Read and parse the next token.
+-		p.tokenizer.Next()
+-		p.tok = p.tokenizer.Token()
+-		if p.tok.Type == ErrorToken {
+-			err = p.tokenizer.Err()
+-			if err != nil && err != io.EOF {
+-				return err
+-			}
+-		}
+-		p.parseCurrentToken()
+-	}
+-	return nil
+-}
+-
+-// Parse returns the parse tree for the HTML from the given Reader.
+-// The input is assumed to be UTF-8 encoded.
+-func Parse(r io.Reader) (*Node, error) {
+-	p := &parser{
+-		tokenizer: NewTokenizer(r),
+-		doc: &Node{
+-			Type: DocumentNode,
+-		},
+-		scripting:  true,
+-		framesetOK: true,
+-		im:         initialIM,
+-	}
+-	err := p.parse()
+-	if err != nil {
+-		return nil, err
+-	}
+-	return p.doc, nil
+-}
+-
+-// ParseFragment parses a fragment of HTML and returns the nodes that were
+-// found. If the fragment is the InnerHTML for an existing element, pass that
+-// element in context.
+-func ParseFragment(r io.Reader, context *Node) ([]*Node, error) {
+-	contextTag := ""
+-	if context != nil {
+-		if context.Type != ElementNode {
+-			return nil, errors.New("html: ParseFragment of non-element Node")
+-		}
+-		// The next check isn't just context.DataAtom.String() == context.Data because
+-		// it is valid to pass an element whose tag isn't a known atom. For example,
+-		// DataAtom == 0 and Data = "tagfromthefuture" is perfectly consistent.
+-		if context.DataAtom != a.Lookup([]byte(context.Data)) {
+-			return nil, fmt.Errorf("html: inconsistent Node: DataAtom=%q, Data=%q", context.DataAtom, context.Data)
+-		}
+-		contextTag = context.DataAtom.String()
+-	}
+-	p := &parser{
+-		tokenizer: NewTokenizerFragment(r, contextTag),
+-		doc: &Node{
+-			Type: DocumentNode,
+-		},
+-		scripting: true,
+-		fragment:  true,
+-		context:   context,
+-	}
+-
+-	root := &Node{
+-		Type:     ElementNode,
+-		DataAtom: a.Html,
+-		Data:     a.Html.String(),
+-	}
+-	p.doc.AppendChild(root)
+-	p.oe = nodeStack{root}
+-	p.resetInsertionMode()
+-
+-	for n := context; n != nil; n = n.Parent {
+-		if n.Type == ElementNode && n.DataAtom == a.Form {
+-			p.form = n
+-			break
+-		}
+-	}
+-
+-	err := p.parse()
+-	if err != nil {
+-		return nil, err
+-	}
+-
+-	parent := p.doc
+-	if context != nil {
+-		parent = root
+-	}
+-
+-	var result []*Node
+-	for c := parent.FirstChild; c != nil; {
+-		next := c.NextSibling
+-		parent.RemoveChild(c)
+-		result = append(result, c)
+-		c = next
+-	}
+-	return result, nil
+-}
+diff --git a/Godeps/_workspace/src/code.google.com/p/go.net/html/parse_test.go b/Godeps/_workspace/src/code.google.com/p/go.net/html/parse_test.go
+deleted file mode 100644
+index 253b04d..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/go.net/html/parse_test.go
++++ /dev/null
+@@ -1,388 +0,0 @@
+-// Copyright 2010 The Go Authors. All rights reserved.
+-// Use of this source code is governed by a BSD-style
+-// license that can be found in the LICENSE file.
+-
+-package html
+-
+-import (
+-	"bufio"
+-	"bytes"
+-	"errors"
+-	"fmt"
+-	"io"
+-	"io/ioutil"
+-	"os"
+-	"path/filepath"
+-	"runtime"
+-	"sort"
+-	"strings"
+-	"testing"
+-
+-	"code.google.com/p/go.net/html/atom"
+-)
+-
+-// readParseTest reads a single test case from r.
+-func readParseTest(r *bufio.Reader) (text, want, context string, err error) {
+-	line, err := r.ReadSlice('\n')
+-	if err != nil {
+-		return "", "", "", err
+-	}
+-	var b []byte
+-
+-	// Read the HTML.
+-	if string(line) != "#data\n" {
+-		return "", "", "", fmt.Errorf(`got %q want "#data\n"`, line)
+-	}
+-	for {
+-		line, err = r.ReadSlice('\n')
+-		if err != nil {
+-			return "", "", "", err
+-		}
+-		if line[0] == '#' {
+-			break
+-		}
+-		b = append(b, line...)
+-	}
+-	text = strings.TrimSuffix(string(b), "\n")
+-	b = b[:0]
+-
+-	// Skip the error list.
+-	if string(line) != "#errors\n" {
+-		return "", "", "", fmt.Errorf(`got %q want "#errors\n"`, line)
+-	}
+-	for {
+-		line, err = r.ReadSlice('\n')
+-		if err != nil {
+-			return "", "", "", err
+-		}
+-		if line[0] == '#' {
+-			break
+-		}
+-	}
+-
+-	if string(line) == "#document-fragment\n" {
+-		line, err = r.ReadSlice('\n')
+-		if err != nil {
+-			return "", "", "", err
+-		}
+-		context = strings.TrimSpace(string(line))
+-		line, err = r.ReadSlice('\n')
+-		if err != nil {
+-			return "", "", "", err
+-		}
+-	}
+-
+-	// Read the dump of what the parse tree should be.
+-	if string(line) != "#document\n" {
+-		return "", "", "", fmt.Errorf(`got %q want "#document\n"`, line)
+-	}
+-	inQuote := false
+-	for {
+-		line, err = r.ReadSlice('\n')
+-		if err != nil && err != io.EOF {
+-			return "", "", "", err
+-		}
+-		trimmed := bytes.Trim(line, "| \n")
+-		if len(trimmed) > 0 {
+-			if line[0] == '|' && trimmed[0] == '"' {
+-				inQuote = true
+-			}
+-			if trimmed[len(trimmed)-1] == '"' && !(line[0] == '|' && len(trimmed) == 1) {
+-				inQuote = false
+-			}
+-		}
+-		if len(line) == 0 || len(line) == 1 && line[0] == '\n' && !inQuote {
+-			break
+-		}
+-		b = append(b, line...)
+-	}
+-	return text, string(b), context, nil
+-}
+-
+-func dumpIndent(w io.Writer, level int) {
+-	io.WriteString(w, "| ")
+-	for i := 0; i < level; i++ {
+-		io.WriteString(w, "  ")
+-	}
+-}
+-
+-type sortedAttributes []Attribute
+-
+-func (a sortedAttributes) Len() int {
+-	return len(a)
+-}
+-
+-func (a sortedAttributes) Less(i, j int) bool {
+-	if a[i].Namespace != a[j].Namespace {
+-		return a[i].Namespace < a[j].Namespace
+-	}
+-	return a[i].Key < a[j].Key
+-}
+-
+-func (a sortedAttributes) Swap(i, j int) {
+-	a[i], a[j] = a[j], a[i]
+-}
+-
+-func dumpLevel(w io.Writer, n *Node, level int) error {
+-	dumpIndent(w, level)
+-	switch n.Type {
+-	case ErrorNode:
+-		return errors.New("unexpected ErrorNode")
+-	case DocumentNode:
+-		return errors.New("unexpected DocumentNode")
+-	case ElementNode:
+-		if n.Namespace != "" {
+-			fmt.Fprintf(w, "<%s %s>", n.Namespace, n.Data)
+-		} else {
+-			fmt.Fprintf(w, "<%s>", n.Data)
+-		}
+-		attr := sortedAttributes(n.Attr)
+-		sort.Sort(attr)
+-		for _, a := range attr {
+-			io.WriteString(w, "\n")
+-			dumpIndent(w, level+1)
+-			if a.Namespace != "" {
+-				fmt.Fprintf(w, `%s %s="%s"`, a.Namespace, a.Key, a.Val)
+-			} else {
+-				fmt.Fprintf(w, `%s="%s"`, a.Key, a.Val)
+-			}
+-		}
+-	case TextNode:
+-		fmt.Fprintf(w, `"%s"`, n.Data)
+-	case CommentNode:
+-		fmt.Fprintf(w, "<!-- %s -->", n.Data)
+-	case DoctypeNode:
+-		fmt.Fprintf(w, "<!DOCTYPE %s", n.Data)
+-		if n.Attr != nil {
+-			var p, s string
+-			for _, a := range n.Attr {
+-				switch a.Key {
+-				case "public":
+-					p = a.Val
+-				case "system":
+-					s = a.Val
+-				}
+-			}
+-			if p != "" || s != "" {
+-				fmt.Fprintf(w, ` "%s"`, p)
+-				fmt.Fprintf(w, ` "%s"`, s)
+-			}
+-		}
+-		io.WriteString(w, ">")
+-	case scopeMarkerNode:
+-		return errors.New("unexpected scopeMarkerNode")
+-	default:
+-		return errors.New("unknown node type")
+-	}
+-	io.WriteString(w, "\n")
+-	for c := n.FirstChild; c != nil; c = c.NextSibling {
+-		if err := dumpLevel(w, c, level+1); err != nil {
+-			return err
+-		}
+-	}
+-	return nil
+-}
+-
+-func dump(n *Node) (string, error) {
+-	if n == nil || n.FirstChild == nil {
+-		return "", nil
+-	}
+-	var b bytes.Buffer
+-	for c := n.FirstChild; c != nil; c = c.NextSibling {
+-		if err := dumpLevel(&b, c, 0); err != nil {
+-			return "", err
+-		}
+-	}
+-	return b.String(), nil
+-}
+-
+-const testDataDir = "testdata/webkit/"
+-
+-func TestParser(t *testing.T) {
+-	testFiles, err := filepath.Glob(testDataDir + "*.dat")
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-	for _, tf := range testFiles {
+-		f, err := os.Open(tf)
+-		if err != nil {
+-			t.Fatal(err)
+-		}
+-		defer f.Close()
+-		r := bufio.NewReader(f)
+-
+-		for i := 0; ; i++ {
+-			text, want, context, err := readParseTest(r)
+-			if err == io.EOF {
+-				break
+-			}
+-			if err != nil {
+-				t.Fatal(err)
+-			}
+-
+-			err = testParseCase(text, want, context)
+-
+-			if err != nil {
+-				t.Errorf("%s test #%d %q, %s", tf, i, text, err)
+-			}
+-		}
+-	}
+-}
+-
+-// testParseCase tests one test case from the test files. If the test does not
+-// pass, it returns an error that explains the failure.
+-// text is the HTML to be parsed, want is a dump of the correct parse tree,
+-// and context is the name of the context node, if any.
+-func testParseCase(text, want, context string) (err error) {
+-	defer func() {
+-		if x := recover(); x != nil {
+-			switch e := x.(type) {
+-			case error:
+-				err = e
+-			default:
+-				err = fmt.Errorf("%v", e)
+-			}
+-		}
+-	}()
+-
+-	var doc *Node
+-	if context == "" {
+-		doc, err = Parse(strings.NewReader(text))
+-		if err != nil {
+-			return err
+-		}
+-	} else {
+-		contextNode := &Node{
+-			Type:     ElementNode,
+-			DataAtom: atom.Lookup([]byte(context)),
+-			Data:     context,
+-		}
+-		nodes, err := ParseFragment(strings.NewReader(text), contextNode)
+-		if err != nil {
+-			return err
+-		}
+-		doc = &Node{
+-			Type: DocumentNode,
+-		}
+-		for _, n := range nodes {
+-			doc.AppendChild(n)
+-		}
+-	}
+-
+-	if err := checkTreeConsistency(doc); err != nil {
+-		return err
+-	}
+-
+-	got, err := dump(doc)
+-	if err != nil {
+-		return err
+-	}
+-	// Compare the parsed tree to the #document section.
+-	if got != want {
+-		return fmt.Errorf("got vs want:\n----\n%s----\n%s----", got, want)
+-	}
+-
+-	if renderTestBlacklist[text] || context != "" {
+-		return nil
+-	}
+-
+-	// Check that rendering and re-parsing results in an identical tree.
+-	pr, pw := io.Pipe()
+-	go func() {
+-		pw.CloseWithError(Render(pw, doc))
+-	}()
+-	doc1, err := Parse(pr)
+-	if err != nil {
+-		return err
+-	}
+-	got1, err := dump(doc1)
+-	if err != nil {
+-		return err
+-	}
+-	if got != got1 {
+-		return fmt.Errorf("got vs got1:\n----\n%s----\n%s----", got, got1)
+-	}
+-
+-	return nil
+-}
+-
+-// Some test input result in parse trees are not 'well-formed' despite
+-// following the HTML5 recovery algorithms. Rendering and re-parsing such a
+-// tree will not result in an exact clone of that tree. We blacklist such
+-// inputs from the render test.
+-var renderTestBlacklist = map[string]bool{
+-	// The second <a> will be reparented to the first <table>'s parent. This
+-	// results in an <a> whose parent is an <a>, which is not 'well-formed'.
+-	`<a><table><td><a><table></table><a></tr><a></table><b>X</b>C<a>Y`: true,
+-	// The same thing with a <p>:
+-	`<p><table></p>`: true,
+-	// More cases of <a> being reparented:
+-	`<a href="blah">aba<table><a href="foo">br<tr><td></td></tr>x</table>aoe`: true,
+-	`<a><table><a></table><p><a><div><a>`:                                     true,
+-	`<a><table><td><a><table></table><a></tr><a></table><a>`:                  true,
+-	// A similar reparenting situation involving <nobr>:
+-	`<!DOCTYPE html><body><b><nobr>1<table><nobr></b><i><nobr>2<nobr></i>3`: true,
+-	// A <plaintext> element is reparented, putting it before a table.
+-	// A <plaintext> element can't have anything after it in HTML.
+-	`<table><plaintext><td>`:                                   true,
+-	`<!doctype html><table><plaintext></plaintext>`:            true,
+-	`<!doctype html><table><tbody><plaintext></plaintext>`:     true,
+-	`<!doctype html><table><tbody><tr><plaintext></plaintext>`: true,
+-	// A form inside a table inside a form doesn't work either.
+-	`<!doctype html><form><table></form><form></table></form>`: true,
+-	// A script that ends at EOF may escape its own closing tag when rendered.
+-	`<!doctype html><script><!--<script `:          true,
+-	`<!doctype html><script><!--<script <`:         true,
+-	`<!doctype html><script><!--<script <a`:        true,
+-	`<!doctype html><script><!--<script </`:        true,
+-	`<!doctype html><script><!--<script </s`:       true,
+-	`<!doctype html><script><!--<script </script`:  true,
+-	`<!doctype html><script><!--<script </scripta`: true,
+-	`<!doctype html><script><!--<script -`:         true,
+-	`<!doctype html><script><!--<script -a`:        true,
+-	`<!doctype html><script><!--<script -<`:        true,
+-	`<!doctype html><script><!--<script --`:        true,
+-	`<!doctype html><script><!--<script --a`:       true,
+-	`<!doctype html><script><!--<script --<`:       true,
+-	`<script><!--<script `:                         true,
+-	`<script><!--<script <a`:                       true,
+-	`<script><!--<script </script`:                 true,
+-	`<script><!--<script </scripta`:                true,
+-	`<script><!--<script -`:                        true,
+-	`<script><!--<script -a`:                       true,
+-	`<script><!--<script --`:                       true,
+-	`<script><!--<script --a`:                      true,
+-	`<script><!--<script <`:                        true,
+-	`<script><!--<script </`:                       true,
+-	`<script><!--<script </s`:                      true,
+-	// Reconstructing the active formatting elements results in a <plaintext>
+-	// element that contains an <a> element.
+-	`<!doctype html><p><a><plaintext>b`: true,
+-}
+-
+-func TestNodeConsistency(t *testing.T) {
+-	// inconsistentNode is a Node whose DataAtom and Data do not agree.
+-	inconsistentNode := &Node{
+-		Type:     ElementNode,
+-		DataAtom: atom.Frameset,
+-		Data:     "table",
+-	}
+-	_, err := ParseFragment(strings.NewReader("<p>hello</p>"), inconsistentNode)
+-	if err == nil {
+-		t.Errorf("got nil error, want non-nil")
+-	}
+-}
+-
+-func BenchmarkParser(b *testing.B) {
+-	buf, err := ioutil.ReadFile("testdata/go1.html")
+-	if err != nil {
+-		b.Fatalf("could not read testdata/go1.html: %v", err)
+-	}
+-	b.SetBytes(int64(len(buf)))
+-	runtime.GC()
+-	b.ReportAllocs()
+-	b.ResetTimer()
+-	for i := 0; i < b.N; i++ {
+-		Parse(bytes.NewBuffer(buf))
+-	}
+-}
+diff --git a/Godeps/_workspace/src/code.google.com/p/go.net/html/render.go b/Godeps/_workspace/src/code.google.com/p/go.net/html/render.go
+deleted file mode 100644
+index 4a833b4..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/go.net/html/render.go
++++ /dev/null
+@@ -1,271 +0,0 @@
+-// Copyright 2011 The Go Authors. All rights reserved.
+-// Use of this source code is governed by a BSD-style
+-// license that can be found in the LICENSE file.
+-
+-package html
+-
+-import (
+-	"bufio"
+-	"errors"
+-	"fmt"
+-	"io"
+-	"strings"
+-)
+-
+-type writer interface {
+-	io.Writer
+-	WriteByte(c byte) error // in Go 1.1, use io.ByteWriter
+-	WriteString(string) (int, error)
+-}
+-
+-// Render renders the parse tree n to the given writer.
+-//
+-// Rendering is done on a 'best effort' basis: calling Parse on the output of
+-// Render will always result in something similar to the original tree, but it
+-// is not necessarily an exact clone unless the original tree was 'well-formed'.
+-// 'Well-formed' is not easily specified; the HTML5 specification is
+-// complicated.
+-//
+-// Calling Parse on arbitrary input typically results in a 'well-formed' parse
+-// tree. However, it is possible for Parse to yield a 'badly-formed' parse tree.
+-// For example, in a 'well-formed' parse tree, no <a> element is a child of
+-// another <a> element: parsing "<a><a>" results in two sibling elements.
+-// Similarly, in a 'well-formed' parse tree, no <a> element is a child of a
+-// <table> element: parsing "<p><table><a>" results in a <p> with two sibling
+-// children; the <a> is reparented to the <table>'s parent. However, calling
+-// Parse on "<a><table><a>" does not return an error, but the result has an <a>
+-// element with an <a> child, and is therefore not 'well-formed'.
+-//
+-// Programmatically constructed trees are typically also 'well-formed', but it
+-// is possible to construct a tree that looks innocuous but, when rendered and
+-// re-parsed, results in a different tree. A simple example is that a solitary
+-// text node would become a tree containing <html>, <head> and <body> elements.
+-// Another example is that the programmatic equivalent of "a<head>b</head>c"
+-// becomes "<html><head><head/><body>abc</body></html>".
+-func Render(w io.Writer, n *Node) error {
+-	if x, ok := w.(writer); ok {
+-		return render(x, n)
+-	}
+-	buf := bufio.NewWriter(w)
+-	if err := render(buf, n); err != nil {
+-		return err
+-	}
+-	return buf.Flush()
+-}
+-
+-// plaintextAbort is returned from render1 when a <plaintext> element
+-// has been rendered. No more end tags should be rendered after that.
+-var plaintextAbort = errors.New("html: internal error (plaintext abort)")
+-
+-func render(w writer, n *Node) error {
+-	err := render1(w, n)
+-	if err == plaintextAbort {
+-		err = nil
+-	}
+-	return err
+-}
+-
+-func render1(w writer, n *Node) error {
+-	// Render non-element nodes; these are the easy cases.
+-	switch n.Type {
+-	case ErrorNode:
+-		return errors.New("html: cannot render an ErrorNode node")
+-	case TextNode:
+-		return escape(w, n.Data)
+-	case DocumentNode:
+-		for c := n.FirstChild; c != nil; c = c.NextSibling {
+-			if err := render1(w, c); err != nil {
+-				return err
+-			}
+-		}
+-		return nil
+-	case ElementNode:
+-		// No-op.
+-	case CommentNode:
+-		if _, err := w.WriteString("<!--"); err != nil {
+-			return err
+-		}
+-		if _, err := w.WriteString(n.Data); err != nil {
+-			return err
+-		}
+-		if _, err := w.WriteString("-->"); err != nil {
+-			return err
+-		}
+-		return nil
+-	case DoctypeNode:
+-		if _, err := w.WriteString("<!DOCTYPE "); err != nil {
+-			return err
+-		}
+-		if _, err := w.WriteString(n.Data); err != nil {
+-			return err
+-		}
+-		if n.Attr != nil {
+-			var p, s string
+-			for _, a := range n.Attr {
+-				switch a.Key {
+-				case "public":
+-					p = a.Val
+-				case "system":
+-					s = a.Val
+-				}
+-			}
+-			if p != "" {
+-				if _, err := w.WriteString(" PUBLIC "); err != nil {
+-					return err
+-				}
+-				if err := writeQuoted(w, p); err != nil {
+-					return err
+-				}
+-				if s != "" {
+-					if err := w.WriteByte(' '); err != nil {
+-						return err
+-					}
+-					if err := writeQuoted(w, s); err != nil {
+-						return err
+-					}
+-				}
+-			} else if s != "" {
+-				if _, err := w.WriteString(" SYSTEM "); err != nil {
+-					return err
+-				}
+-				if err := writeQuoted(w, s); err != nil {
+-					return err
+-				}
+-			}
+-		}
+-		return w.WriteByte('>')
+-	default:
+-		return errors.New("html: unknown node type")
+-	}
+-
+-	// Render the <xxx> opening tag.
+-	if err := w.WriteByte('<'); err != nil {
+-		return err
+-	}
+-	if _, err := w.WriteString(n.Data); err != nil {
+-		return err
+-	}
+-	for _, a := range n.Attr {
+-		if err := w.WriteByte(' '); err != nil {
+-			return err
+-		}
+-		if a.Namespace != "" {
+-			if _, err := w.WriteString(a.Namespace); err != nil {
+-				return err
+-			}
+-			if err := w.WriteByte(':'); err != nil {
+-				return err
+-			}
+-		}
+-		if _, err := w.WriteString(a.Key); err != nil {
+-			return err
+-		}
+-		if _, err := w.WriteString(`="`); err != nil {
+-			return err
+-		}
+-		if err := escape(w, a.Val); err != nil {
+-			return err
+-		}
+-		if err := w.WriteByte('"'); err != nil {
+-			return err
+-		}
+-	}
+-	if voidElements[n.Data] {
+-		if n.FirstChild != nil {
+-			return fmt.Errorf("html: void element <%s> has child nodes", n.Data)
+-		}
+-		_, err := w.WriteString("/>")
+-		return err
+-	}
+-	if err := w.WriteByte('>'); err != nil {
+-		return err
+-	}
+-
+-	// Add initial newline where there is danger of a newline being ignored.
+-	if c := n.FirstChild; c != nil && c.Type == TextNode && strings.HasPrefix(c.Data, "\n") {
+-		switch n.Data {
+-		case "pre", "listing", "textarea":
+-			if err := w.WriteByte('\n'); err != nil {
+-				return err
+-			}
+-		}
+-	}
+-
+-	// Render any child nodes.
+-	switch n.Data {
+-	case "iframe", "noembed", "noframes", "noscript", "plaintext", "script", "style", "xmp":
+-		for c := n.FirstChild; c != nil; c = c.NextSibling {
+-			if c.Type == TextNode {
+-				if _, err := w.WriteString(c.Data); err != nil {
+-					return err
+-				}
+-			} else {
+-				if err := render1(w, c); err != nil {
+-					return err
+-				}
+-			}
+-		}
+-		if n.Data == "plaintext" {
+-			// Don't render anything else. <plaintext> must be the
+-			// last element in the file, with no closing tag.
+-			return plaintextAbort
+-		}
+-	default:
+-		for c := n.FirstChild; c != nil; c = c.NextSibling {
+-			if err := render1(w, c); err != nil {
+-				return err
+-			}
+-		}
+-	}
+-
+-	// Render the </xxx> closing tag.
+-	if _, err := w.WriteString("</"); err != nil {
+-		return err
+-	}
+-	if _, err := w.WriteString(n.Data); err != nil {
+-		return err
+-	}
+-	return w.WriteByte('>')
+-}
+-
+-// writeQuoted writes s to w surrounded by quotes. Normally it will use double
+-// quotes, but if s contains a double quote, it will use single quotes.
+-// It is used for writing the identifiers in a doctype declaration.
+-// In valid HTML, they can't contain both types of quotes.
+-func writeQuoted(w writer, s string) error {
+-	var q byte = '"'
+-	if strings.Contains(s, `"`) {
+-		q = '\''
+-	}
+-	if err := w.WriteByte(q); err != nil {
+-		return err
+-	}
+-	if _, err := w.WriteString(s); err != nil {
+-		return err
+-	}
+-	if err := w.WriteByte(q); err != nil {
+-		return err
+-	}
+-	return nil
+-}
+-
+-// Section 12.1.2, "Elements", gives this list of void elements. Void elements
+-// are those that can't have any contents.
+-var voidElements = map[string]bool{
+-	"area":    true,
+-	"base":    true,
+-	"br":      true,
+-	"col":     true,
+-	"command": true,
+-	"embed":   true,
+-	"hr":      true,
+-	"img":     true,
+-	"input":   true,
+-	"keygen":  true,
+-	"link":    true,
+-	"meta":    true,
+-	"param":   true,
+-	"source":  true,
+-	"track":   true,
+-	"wbr":     true,
+-}
+diff --git a/Godeps/_workspace/src/code.google.com/p/go.net/html/render_test.go b/Godeps/_workspace/src/code.google.com/p/go.net/html/render_test.go
+deleted file mode 100644
+index 11da54b..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/go.net/html/render_test.go
++++ /dev/null
+@@ -1,156 +0,0 @@
+-// Copyright 2010 The Go Authors. All rights reserved.
+-// Use of this source code is governed by a BSD-style
+-// license that can be found in the LICENSE file.
+-
+-package html
+-
+-import (
+-	"bytes"
+-	"testing"
+-)
+-
+-func TestRenderer(t *testing.T) {
+-	nodes := [...]*Node{
+-		0: {
+-			Type: ElementNode,
+-			Data: "html",
+-		},
+-		1: {
+-			Type: ElementNode,
+-			Data: "head",
+-		},
+-		2: {
+-			Type: ElementNode,
+-			Data: "body",
+-		},
+-		3: {
+-			Type: TextNode,
+-			Data: "0<1",
+-		},
+-		4: {
+-			Type: ElementNode,
+-			Data: "p",
+-			Attr: []Attribute{
+-				{
+-					Key: "id",
+-					Val: "A",
+-				},
+-				{
+-					Key: "foo",
+-					Val: `abc"def`,
+-				},
+-			},
+-		},
+-		5: {
+-			Type: TextNode,
+-			Data: "2",
+-		},
+-		6: {
+-			Type: ElementNode,
+-			Data: "b",
+-			Attr: []Attribute{
+-				{
+-					Key: "empty",
+-					Val: "",
+-				},
+-			},
+-		},
+-		7: {
+-			Type: TextNode,
+-			Data: "3",
+-		},
+-		8: {
+-			Type: ElementNode,
+-			Data: "i",
+-			Attr: []Attribute{
+-				{
+-					Key: "backslash",
+-					Val: `\`,
+-				},
+-			},
+-		},
+-		9: {
+-			Type: TextNode,
+-			Data: "&4",
+-		},
+-		10: {
+-			Type: TextNode,
+-			Data: "5",
+-		},
+-		11: {
+-			Type: ElementNode,
+-			Data: "blockquote",
+-		},
+-		12: {
+-			Type: ElementNode,
+-			Data: "br",
+-		},
+-		13: {
+-			Type: TextNode,
+-			Data: "6",
+-		},
+-	}
+-
+-	// Build a tree out of those nodes, based on a textual representation.
+-	// Only the ".\t"s are significant. The trailing HTML-like text is
+-	// just commentary. The "0:" prefixes are for easy cross-reference with
+-	// the nodes array.
+-	treeAsText := [...]string{
+-		0: `<html>`,
+-		1: `.	<head>`,
+-		2: `.	<body>`,
+-		3: `.	.	"0&lt;1"`,
+-		4: `.	.	<p id="A" foo="abc&#34;def">`,
+-		5: `.	.	.	"2"`,
+-		6: `.	.	.	<b empty="">`,
+-		7: `.	.	.	.	"3"`,
+-		8: `.	.	.	<i backslash="\">`,
+-		9: `.	.	.	.	"&amp;4"`,
+-		10: `.	.	"5"`,
+-		11: `.	.	<blockquote>`,
+-		12: `.	.	<br>`,
+-		13: `.	.	"6"`,
+-	}
+-	if len(nodes) != len(treeAsText) {
+-		t.Fatal("len(nodes) != len(treeAsText)")
+-	}
+-	var stack [8]*Node
+-	for i, line := range treeAsText {
+-		level := 0
+-		for line[0] == '.' {
+-			// Strip a leading ".\t".
+-			line = line[2:]
+-			level++
+-		}
+-		n := nodes[i]
+-		if level == 0 {
+-			if stack[0] != nil {
+-				t.Fatal("multiple root nodes")
+-			}
+-			stack[0] = n
+-		} else {
+-			stack[level-1].AppendChild(n)
+-			stack[level] = n
+-			for i := level + 1; i < len(stack); i++ {
+-				stack[i] = nil
+-			}
+-		}
+-		// At each stage of tree construction, we check all nodes for consistency.
+-		for j, m := range nodes {
+-			if err := checkNodeConsistency(m); err != nil {
+-				t.Fatalf("i=%d, j=%d: %v", i, j, err)
+-			}
+-		}
+-	}
+-
+-	want := `<html><head></head><body>0&lt;1<p id="A" foo="abc&#34;def">` +
+-		`2<b empty="">3</b><i backslash="\">&amp;4</i></p>` +
+-		`5<blockquote></blockquote><br/>6</body></html>`
+-	b := new(bytes.Buffer)
+-	if err := Render(b, nodes[0]); err != nil {
+-		t.Fatal(err)
+-	}
+-	if got := b.String(); got != want {
+-		t.Errorf("got vs want:\n%s\n%s\n", got, want)
+-	}
+-}
+diff --git a/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/go1.html b/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/go1.html
+deleted file mode 100644
+index a782cc7..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/go1.html
++++ /dev/null
+@@ -1,2237 +0,0 @@
+-<!DOCTYPE html>
+-<html>
+-<head>
+-<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
+-
+-  <title>Go 1 Release Notes - The Go Programming Language</title>
+-
+-<link type="text/css" rel="stylesheet" href="/doc/style.css">
+-<script type="text/javascript" src="/doc/godocs.js"></script>
+-
+-<link rel="search" type="application/opensearchdescription+xml" title="godoc" href="/opensearch.xml" />
+-
+-<script type="text/javascript">
+-var _gaq = _gaq || [];
+-_gaq.push(["_setAccount", "UA-11222381-2"]);
+-_gaq.push(["_trackPageview"]);
+-</script>
+-</head>
+-<body>
+-
+-<div id="topbar"><div class="container wide">
+-
+-<form method="GET" action="/search">
+-<div id="menu">
+-<a href="/doc/">Documents</a>
+-<a href="/ref/">References</a>
+-<a href="/pkg/">Packages</a>
+-<a href="/project/">The Project</a>
+-<a href="/help/">Help</a>
+-<input type="text" id="search" name="q" class="inactive" value="Search">
+-</div>
+-<div id="heading"><a href="/">The Go Programming Language</a></div>
+-</form>
+-
+-</div></div>
+-
+-<div id="page" class="wide">
+-
+-
+-  <div id="plusone"><g:plusone size="small" annotation="none"></g:plusone></div>
+-  <h1>Go 1 Release Notes</h1>
+-
+-
+-
+-
+-<div id="nav"></div>
+-
+-
+-
+-
+-<h2 id="introduction">Introduction to Go 1</h2>
+-
+-<p>
+-Go version 1, Go 1 for short, defines a language and a set of core libraries
+-that provide a stable foundation for creating reliable products, projects, and
+-publications.
+-</p>
+-
+-<p>
+-The driving motivation for Go 1 is stability for its users. People should be able to
+-write Go programs and expect that they will continue to compile and run without
+-change, on a time scale of years, including in production environments such as
+-Google App Engine. Similarly, people should be able to write books about Go, be
+-able to say which version of Go the book is describing, and have that version
+-number still be meaningful much later.
+-</p>
+-
+-<p>
+-Code that compiles in Go 1 should, with few exceptions, continue to compile and
+-run throughout the lifetime of that version, even as we issue updates and bug
+-fixes such as Go version 1.1, 1.2, and so on. Other than critical fixes, changes
+-made to the language and library for subsequent releases of Go 1 may
+-add functionality but will not break existing Go 1 programs.
+-<a href="go1compat.html">The Go 1 compatibility document</a>
+-explains the compatibility guidelines in more detail.
+-</p>
+-
+-<p>
+-Go 1 is a representation of Go as it is used today, not a wholesale rethinking of
+-the language. We avoided designing new features and instead focused on cleaning
+-up problems and inconsistencies and improving portability. There are a number of
+-changes to the Go language and packages that we had considered for some time and
+-prototyped but not released primarily because they are significant and
+-backwards-incompatible. Go 1 was an opportunity to get them out, which is
+-helpful for the long term, but also means that Go 1 introduces incompatibilities
+-for old programs. Fortunately, the <code>go</code> <code>fix</code> tool can
+-automate much of the work needed to bring programs up to the Go 1 standard.
+-</p>
+-
+-<p>
+-This document outlines the major changes in Go 1 that will affect programmers
+-updating existing code; its reference point is the prior release, r60 (tagged as
+-r60.3). It also explains how to update code from r60 to run under Go 1.
+-</p>
+-
+-<h2 id="language">Changes to the language</h2>
+-
+-<h3 id="append">Append</h3>
+-
+-<p>
+-The <code>append</code> predeclared variadic function makes it easy to grow a slice
+-by adding elements to the end.
+-A common use is to add bytes to the end of a byte slice when generating output.
+-However, <code>append</code> did not provide a way to append a string to a <code>[]byte</code>,
+-which is another common case.
+-</p>
+-
+-<pre><!--{{code "/doc/progs/go1.go" `/greeting := ..byte/` `/append.*hello/`}}
+--->    greeting := []byte{}
+-    greeting = append(greeting, []byte(&#34;hello &#34;)...)</pre>
+-
+-<p>
+-By analogy with the similar property of <code>copy</code>, Go 1
+-permits a string to be appended (byte-wise) directly to a byte
+-slice, reducing the friction between strings and byte slices.
+-The conversion is no longer necessary:
+-</p>
+-
+-<pre><!--{{code "/doc/progs/go1.go" `/append.*world/`}}
+--->    greeting = append(greeting, &#34;world&#34;...)</pre>
+-
+-<p>
+-<em>Updating</em>:
+-This is a new feature, so existing code needs no changes.
+-</p>
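The fragment above can be restated as a self-contained, runnable sketch (the function name `appendGreeting` is ours, not from the deleted file):

```go
package main

import "fmt"

// appendGreeting builds a byte slice using both forms from the text:
// appending a converted []byte, and (since Go 1) appending a string directly.
func appendGreeting() []byte {
	greeting := []byte{}
	greeting = append(greeting, []byte("hello ")...)
	greeting = append(greeting, "world"...) // no []byte conversion needed
	return greeting
}

func main() {
	fmt.Println(string(appendGreeting())) // hello world
}
```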
+-
+-<h3 id="close">Close</h3>
+-
+-<p>
+-The <code>close</code> predeclared function provides a mechanism
+-for a sender to signal that no more values will be sent.
+-It is important to the implementation of <code>for</code> <code>range</code>
+-loops over channels and is helpful in other situations.
+-Partly by design and partly because of race conditions that can occur otherwise,
+-it is intended for use only by the goroutine sending on the channel,
+-not by the goroutine receiving data.
+-However, before Go 1 there was no compile-time checking that <code>close</code>
+-was being used correctly.
+-</p>
+-
+-<p>
+-To close this gap, at least in part, Go 1 disallows <code>close</code> on receive-only channels.
+-Attempting to close such a channel is a compile-time error.
+-</p>
+-
+-<pre>
+-    var c chan int
+-    var csend chan&lt;- int = c
+-    var crecv &lt;-chan int = c
+-    close(c)     // legal
+-    close(csend) // legal
+-    close(crecv) // illegal
+-</pre>
+-
+-<p>
+-<em>Updating</em>:
+-Existing code that attempts to close a receive-only channel was
+-erroneous even before Go 1 and should be fixed.  The compiler will
+-now reject such code.
+-</p>
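A runnable sketch of the intended usage pattern, with `close` called only on the sending side (our own illustration; the receive-only case from the text is a compile error and so cannot be demonstrated at run time):

```go
package main

import "fmt"

// sum drains a channel; the for-range loop ends when the sender closes it.
func sum(c <-chan int) int {
	total := 0
	for v := range c {
		total += v
	}
	return total
}

func main() {
	c := make(chan int)
	go func() {
		for i := 1; i <= 3; i++ {
			c <- i
		}
		close(c) // legal: closing from the sending goroutine
	}()
	fmt.Println(sum(c)) // calling close on a <-chan int view would not compile
}
```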
+-
+-<h3 id="literals">Composite literals</h3>
+-
+-<p>
+-In Go 1, a composite literal of array, slice, or map type can elide the
+-type specification for the elements' initializers if they are of pointer type.
+-All four of the initializations in this example are legal; the last one was illegal before Go 1.
+-</p>
+-
+-<pre><!--{{code "/doc/progs/go1.go" `/type Date struct/` `/STOP/`}}
+--->    type Date struct {
+-        month string
+-        day   int
+-    }
+-    <span class="comment">// Struct values, fully qualified; always legal.</span>
+-    holiday1 := []Date{
+-        Date{&#34;Feb&#34;, 14},
+-        Date{&#34;Nov&#34;, 11},
+-        Date{&#34;Dec&#34;, 25},
+-    }
+-    <span class="comment">// Struct values, type name elided; always legal.</span>
+-    holiday2 := []Date{
+-        {&#34;Feb&#34;, 14},
+-        {&#34;Nov&#34;, 11},
+-        {&#34;Dec&#34;, 25},
+-    }
+-    <span class="comment">// Pointers, fully qualified, always legal.</span>
+-    holiday3 := []*Date{
+-        &amp;Date{&#34;Feb&#34;, 14},
+-        &amp;Date{&#34;Nov&#34;, 11},
+-        &amp;Date{&#34;Dec&#34;, 25},
+-    }
+-    <span class="comment">// Pointers, type name elided; legal in Go 1.</span>
+-    holiday4 := []*Date{
+-        {&#34;Feb&#34;, 14},
+-        {&#34;Nov&#34;, 11},
+-        {&#34;Dec&#34;, 25},
+-    }</pre>
+-
+-<p>
+-<em>Updating</em>:
+-This change has no effect on existing code, but the command
+-<code>gofmt</code> <code>-s</code> applied to existing source
+-will, among other things, elide explicit element types wherever permitted.
+-</p>
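The pointer form with elided element types, the case that became legal in Go 1, as a standalone program (the `holidays` helper is ours):

```go
package main

import "fmt"

type Date struct {
	month string
	day   int
}

// holidays uses the elided-type pointer initializers from the example above.
func holidays() []*Date {
	return []*Date{
		{"Feb", 14},
		{"Nov", 11},
		{"Dec", 25},
	}
}

func main() {
	for _, d := range holidays() {
		fmt.Println(d.month, d.day)
	}
}
```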
+-
+-
+-<h3 id="init">Goroutines during init</h3>
+-
+-<p>
+-The old language defined that <code>go</code> statements executed during initialization created goroutines but that they did not begin to run until initialization of the entire program was complete.
+-This introduced clumsiness in many places and, in effect, limited the utility
+-of the <code>init</code> construct:
+-if it was possible for another package to use the library during initialization, the library
+-was forced to avoid goroutines.
+-This design was done for reasons of simplicity and safety but,
+-as our confidence in the language grew, it seemed unnecessary.
+-Running goroutines during initialization is no more complex or unsafe than running them during normal execution.
+-</p>
+-
+-<p>
+-In Go 1, code that uses goroutines can be called from
+-<code>init</code> routines and global initialization expressions
+-without introducing a deadlock.
+-</p>
+-
+-<pre><!--{{code "/doc/progs/go1.go" `/PackageGlobal/` `/^}/`}}
+--->var PackageGlobal int
+-
+-func init() {
+-    c := make(chan int)
+-    go initializationFunction(c)
+-    PackageGlobal = &lt;-c
+-}</pre>
+-
+-<p>
+-<em>Updating</em>:
+-This is a new feature, so existing code needs no changes,
+-although it's possible that code that depends on goroutines not starting before <code>main</code> will break.
+-There was no such code in the standard repository.
+-</p>
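The `init` fragment above, completed into a runnable program (the value 42 and the body of `initializationFunction` are our stand-ins for real setup work):

```go
package main

import "fmt"

var PackageGlobal int

// initializationFunction stands in for some concurrent setup computation.
func initializationFunction(c chan int) {
	c <- 42 // arbitrary illustrative value
}

// Since Go 1, goroutines started during init run immediately,
// so this receive does not deadlock.
func init() {
	c := make(chan int)
	go initializationFunction(c)
	PackageGlobal = <-c
}

func main() {
	fmt.Println(PackageGlobal) // 42
}
```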
+-
+-<h3 id="rune">The rune type</h3>
+-
+-<p>
+-The language spec allows the <code>int</code> type to be 32 or 64 bits wide, but current implementations set <code>int</code> to 32 bits even on 64-bit platforms.
+-It would be preferable to have <code>int</code> be 64 bits on 64-bit platforms.
+-(There are important consequences for indexing large slices.)
+-However, this change would waste space when processing Unicode characters with
+-the old language because the <code>int</code> type was also used to hold Unicode code points: each code point would waste an extra 32 bits of storage if <code>int</code> grew from 32 bits to 64.
+-</p>
+-
+-<p>
+-To make changing to 64-bit <code>int</code> feasible,
+-Go 1 introduces a new basic type, <code>rune</code>, to represent
+-individual Unicode code points.
+-It is an alias for <code>int32</code>, analogous to <code>byte</code>
+-as an alias for <code>uint8</code>.
+-</p>
+-
+-<p>
+-Character literals such as <code>'a'</code>, <code>'語'</code>, and <code>'\u0345'</code>
+-now have default type <code>rune</code>,
+-analogous to <code>1.0</code> having default type <code>float64</code>.
+-A variable initialized to a character constant will therefore
+-have type <code>rune</code> unless otherwise specified.
+-</p>
+-
+-<p>
+-Libraries have been updated to use <code>rune</code> rather than <code>int</code>
+-when appropriate. For instance, the functions <code>unicode.ToLower</code> and
+-relatives now take and return a <code>rune</code>.
+-</p>
+-
+-<pre><!--{{code "/doc/progs/go1.go" `/STARTRUNE/` `/ENDRUNE/`}}
+--->    delta := &#39;δ&#39; <span class="comment">// delta has type rune.</span>
+-    var DELTA rune
+-    DELTA = unicode.ToUpper(delta)
+-    epsilon := unicode.ToLower(DELTA + 1)
+-    if epsilon != &#39;δ&#39;+1 {
+-        log.Fatal(&#34;inconsistent casing for Greek&#34;)
+-    }</pre>
+-
+-<p>
+-<em>Updating</em>:
+-Most source code will be unaffected by this because the type inference from
+-<code>:=</code> initializers introduces the new type silently, and it propagates
+-from there.
+-Some code may get type errors that a trivial conversion will resolve.
+-</p>
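The Greek-casing fragment, wrapped in a small function (our name `caseRoundTrip`) so it runs as-is:

```go
package main

import (
	"fmt"
	"unicode"
)

// caseRoundTrip mirrors the example above: character literals default to
// type rune, and the unicode functions take and return runes.
func caseRoundTrip() rune {
	delta := 'δ'                      // type rune
	DELTA := unicode.ToUpper(delta)   // 'Δ'
	return unicode.ToLower(DELTA + 1) // 'ε', which is 'δ'+1
}

func main() {
	fmt.Printf("%c\n", caseRoundTrip())
}
```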
+-
+-<h3 id="error">The error type</h3>
+-
+-<p>
+-Go 1 introduces a new built-in type, <code>error</code>, which has the following definition:
+-</p>
+-
+-<pre>
+-    type error interface {
+-        Error() string
+-    }
+-</pre>
+-
+-<p>
+-Since the consequences of this type are all in the package library,
+-it is discussed <a href="#errors">below</a>.
+-</p>
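A minimal type satisfying the built-in `error` interface, as a sketch (the `pathError` type and `fail` function are our own illustration, not the standard library's):

```go
package main

import "fmt"

// pathError implements the built-in error interface via its Error method.
type pathError struct {
	path string
}

func (e *pathError) Error() string {
	return "open " + e.path + ": no such file"
}

// fail returns a value whose dynamic type satisfies error.
func fail() error {
	return &pathError{path: "/no/such/file"}
}

func main() {
	fmt.Println(fail())
}
```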
+-
+-<h3 id="delete">Deleting from maps</h3>
+-
+-<p>
+-In the old language, to delete the entry with key <code>k</code> from map <code>m</code>, one wrote the statement,
+-</p>
+-
+-<pre>
+-    m[k] = value, false
+-</pre>
+-
+-<p>
+-This syntax was a peculiar special case, the only two-to-one assignment.
+-It required passing a value (usually ignored) that is evaluated but discarded,
+-plus a boolean that was nearly always the constant <code>false</code>.
+-It did the job but was odd and a point of contention.
+-</p>
+-
+-<p>
+-In Go 1, that syntax has gone; instead there is a new built-in
+-function, <code>delete</code>.  The call
+-</p>
+-
+-<pre><!--{{code "/doc/progs/go1.go" `/delete\(m, k\)/`}}
+--->    delete(m, k)</pre>
+-
+-<p>
+-will delete the map entry retrieved by the expression <code>m[k]</code>.
+-There is no return value. Deleting a non-existent entry is a no-op.
+-</p>
+-
+-<p>
+-<em>Updating</em>:
+-Running <code>go</code> <code>fix</code> will convert expressions of the form <code>m[k] = value,
+-false</code> into <code>delete(m, k)</code> when it is clear that
+-the ignored value can be safely discarded from the program and
+-<code>false</code> refers to the predefined boolean constant.
+-The fix tool
+-will flag other uses of the syntax for inspection by the programmer.
+-</p>
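The `delete` built-in in a runnable form, including the no-op case for a missing key (the `removeKey` wrapper is ours):

```go
package main

import "fmt"

// removeKey uses the Go 1 built-in: delete(m, k) replaces the old
// two-to-one form m[k] = value, false.
func removeKey(m map[string]int, k string) int {
	delete(m, k)
	delete(m, "no-such-key") // deleting a non-existent entry is a no-op
	return len(m)
}

func main() {
	m := map[string]int{"a": 1, "b": 2}
	fmt.Println(removeKey(m, "a")) // 1
}
```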
+-
+-<h3 id="iteration">Iterating in maps</h3>
+-
+-<p>
+-The old language specification did not define the order of iteration for maps,
+-and in practice it differed across hardware platforms.
+-This caused tests that iterated over maps to be fragile and non-portable, with the
+-unpleasant property that a test might always pass on one machine but break on another.
+-</p>
+-
+-<p>
+-In Go 1, the order in which elements are visited when iterating
+-over a map using a <code>for</code> <code>range</code> statement
+-is defined to be unpredictable, even if the same loop is run multiple
+-times with the same map.
+-Code should not assume that the elements are visited in any particular order.
+-</p>
+-
+-<p>
+-This change means that code that depends on iteration order is very likely to break early and be fixed long before it becomes a problem.
+-Just as important, it allows the map implementation to ensure better map balancing even when programs are using range loops to select an element from a map.
+-</p>
+-
+-<pre><!--{{code "/doc/progs/go1.go" `/Sunday/` `/^	}/`}}
+--->    m := map[string]int{&#34;Sunday&#34;: 0, &#34;Monday&#34;: 1}
+-    for name, value := range m {
+-        <span class="comment">// This loop should not assume Sunday will be visited first.</span>
+-        f(name, value)
+-    }</pre>
+-
+-<p>
+-<em>Updating</em>:
+-This is one change where tools cannot help.  Most existing code
+-will be unaffected, but some programs may break or misbehave; we
+-recommend manual checking of all range statements over maps to
+-verify they do not depend on iteration order. There were a few such
+-examples in the standard repository; they have been fixed.
+-Note that it was already incorrect to depend on the iteration order, which
+-was unspecified. This change codifies the unpredictability.
+-</p>
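The usual fix for code that needs a stable order, sketched here: collect the keys, sort them, then iterate deterministically (the `sortedKeys` helper is our own):

```go
package main

import (
	"fmt"
	"sort"
)

// sortedKeys makes range-over-map results deterministic by sorting
// the keys before iteration.
func sortedKeys(m map[string]int) []string {
	keys := make([]string, 0, len(m))
	for k := range m {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	return keys
}

func main() {
	m := map[string]int{"Sunday": 0, "Monday": 1}
	fmt.Println(sortedKeys(m)) // [Monday Sunday]
}
```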
+-
+-<h3 id="multiple_assignment">Multiple assignment</h3>
+-
+-<p>
+-The language specification has long guaranteed that in assignments
+-the right-hand-side expressions are all evaluated before any left-hand-side expressions are assigned.
+-To guarantee predictable behavior,
+-Go 1 refines the specification further.
+-</p>
+-
+-<p>
+-If the left-hand side of the assignment
+-statement contains expressions that require evaluation, such as
+-function calls or array indexing operations, these will all be done
+-using the usual left-to-right rule before any variables are assigned
+-their value.  Once everything is evaluated, the actual assignments
+-proceed in left-to-right order.
+-</p>
+-
+-<p>
+-These examples illustrate the behavior.
+-</p>
+-
+-<pre><!--{{code "/doc/progs/go1.go" `/sa :=/` `/then sc.0. = 2/`}}
+--->    sa := []int{1, 2, 3}
+-    i := 0
+-    i, sa[i] = 1, 2 <span class="comment">// sets i = 1, sa[0] = 2</span>
+-
+-    sb := []int{1, 2, 3}
+-    j := 0
+-    sb[j], j = 2, 1 <span class="comment">// sets sb[0] = 2, j = 1</span>
+-
+-    sc := []int{1, 2, 3}
+-    sc[0], sc[0] = 1, 2 <span class="comment">// sets sc[0] = 1, then sc[0] = 2 (so sc[0] = 2 at end)</span></pre>
+-
+-<p>
+-<em>Updating</em>:
+-This is one change where tools cannot help, but breakage is unlikely.
+-No code in the standard repository was broken by this change, and code
+-that depended on the previous unspecified behavior was already incorrect.
+-</p>
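The first example can be checked directly: the index in `sa[i]` is evaluated while `i` is still 0, so the statement sets `i = 1` and `sa[0] = 2` (the `evalOrder` wrapper is ours):

```go
package main

import "fmt"

// evalOrder reproduces the first example above: left-hand operands are
// evaluated before any assignment, then assignments proceed left to right.
func evalOrder() (int, []int) {
	sa := []int{1, 2, 3}
	i := 0
	i, sa[i] = 1, 2 // sets i = 1, sa[0] = 2
	return i, sa
}

func main() {
	i, sa := evalOrder()
	fmt.Println(i, sa) // 1 [2 2 3]
}
```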
+-
+-<h3 id="shadowing">Returns and shadowed variables</h3>
+-
+-<p>
+-A common mistake is to use <code>return</code> (without arguments) after an assignment to a variable that has the same name as a result variable but is not the same variable.
+-This situation is called <em>shadowing</em>: the result variable has been shadowed by another variable with the same name declared in an inner scope.
+-</p>
+-
+-<p>
+-In functions with named return values,
+-the Go 1 compilers disallow return statements without arguments if any of the named return values is shadowed at the point of the return statement.
+-(It isn't part of the specification, because this is one area we are still exploring;
+-the situation is analogous to the compilers rejecting functions that do not end with an explicit return statement.)
+-</p>
+-
+-<p>
+-This function implicitly returns a shadowed return value and will be rejected by the compiler:
+-</p>
+-
+-<pre>
+-    func Bug() (i, j, k int) {
+-        for i = 0; i &lt; 5; i++ {
+-            for j := 0; j &lt; 5; j++ { // Redeclares j.
+-                k += i*j
+-                if k > 100 {
+-                    return // Rejected: j is shadowed here.
+-                }
+-            }
+-        }
+-        return // OK: j is not shadowed here.
+-    }
+-</pre>
+-
+-<p>
+-<em>Updating</em>:
+-Code that shadows return values in this way will be rejected by the compiler and will need to be fixed by hand.
+-The few cases that arose in the standard repository were mostly bugs.
+-</p>
+-
+-<h3 id="unexported">Copying structs with unexported fields</h3>
+-
+-<p>
+-The old language did not allow a package to make a copy of a struct value containing unexported fields belonging to a different package.
+-There was, however, a required exception for a method receiver;
+-also, the implementations of <code>copy</code> and <code>append</code> have never honored the restriction.
+-</p>
+-
+-<p>
+-Go 1 will allow packages to copy struct values containing unexported fields from other packages.
+-Besides resolving the inconsistency,
+-this change admits a new kind of API: a package can return an opaque value without resorting to a pointer or interface.
+-The new implementations of <code>time.Time</code> and
+-<code>reflect.Value</code> are examples of types taking advantage of this new property.
+-</p>
+-
+-<p>
+-As an example, if package <code>p</code> includes the definitions,
+-</p>
+-
+-<pre>
+-    type Struct struct {
+-        Public int
+-        secret int
+-    }
+-    func NewStruct(a int) Struct {  // Note: not a pointer.
+-        return Struct{a, f(a)}
+-    }
+-    func (s Struct) String() string {
+-        return fmt.Sprintf("{%d (secret %d)}", s.Public, s.secret)
+-    }
+-</pre>
+-
+-<p>
+-a package that imports <code>p</code> can assign and copy values of type
+-<code>p.Struct</code> at will.
+-Behind the scenes the unexported fields will be assigned and copied just
+-as if they were exported,
+-but the client code will never be aware of them. The code
+-</p>
+-
+-<pre>
+-    import "p"
+-
+-    myStruct := p.NewStruct(23)
+-    copyOfMyStruct := myStruct
+-    fmt.Println(myStruct, copyOfMyStruct)
+-</pre>
+-
+-<p>
+-will show that the secret field of the struct has been copied to the new value.
+-</p>
+-
+-<p>
+-<em>Updating</em>:
+-This is a new feature, so existing code needs no changes.
+-</p>
+-
+-<h3 id="equality">Equality</h3>
+-
+-<p>
+-Before Go 1, the language did not define equality on struct and array values.
+-This meant,
+-among other things, that structs and arrays could not be used as map keys.
+-On the other hand, Go did define equality on function and map values.
+-Function equality was problematic in the presence of closures
+-(when are two closures equal?)
+-while map equality compared pointers, not the maps' content, which was usually
+-not what the user would want.
+-</p>
+-
+-<p>
+-Go 1 addressed these issues.
+-First, structs and arrays can be compared for equality and inequality
+-(<code>==</code> and <code>!=</code>),
+-and therefore be used as map keys,
+-provided they are composed from elements for which equality is also defined,
+-using element-wise comparison.
+-</p>
+-
+-<pre><!--{{code "/doc/progs/go1.go" `/type Day struct/` `/Printf/`}}
+--->    type Day struct {
+-        long  string
+-        short string
+-    }
+-    Christmas := Day{&#34;Christmas&#34;, &#34;XMas&#34;}
+-    Thanksgiving := Day{&#34;Thanksgiving&#34;, &#34;Turkey&#34;}
+-    holiday := map[Day]bool{
+-        Christmas:    true,
+-        Thanksgiving: true,
+-    }
+-    fmt.Printf(&#34;Christmas is a holiday: %t\n&#34;, holiday[Christmas])</pre>
+-
+-<p>
+-Second, Go 1 removes the definition of equality for function values,
+-except for comparison with <code>nil</code>.
+-Finally, map equality is gone too, also except for comparison with <code>nil</code>.
+-</p>
+-
+-<p>
+-Note that equality is still undefined for slices, for which the
+-calculation is in general infeasible.  Also note that the ordered
+-comparison operators (<code>&lt;</code> <code>&lt;=</code>
+-<code>&gt;</code> <code>&gt;=</code>) are still undefined for
+-structs and arrays.
+-</p>
+-
+-<p>
+-<em>Updating</em>:
+-Struct and array equality is a new feature, so existing code needs no changes.
+-Existing code that depends on function or map equality will be
+-rejected by the compiler and will need to be fixed by hand.
+-Few programs will be affected, but the fix may require some
+-redesign.
+-</p>
+-
+-<h2 id="packages">The package hierarchy</h2>
+-
+-<p>
+-Go 1 addresses many deficiencies in the old standard library and
+-cleans up a number of packages, making them more internally consistent
+-and portable.
+-</p>
+-
+-<p>
+-This section describes how the packages have been rearranged in Go 1.
+-Some have moved, some have been renamed, some have been deleted.
+-New packages are described in later sections.
+-</p>
+-
+-<h3 id="hierarchy">The package hierarchy</h3>
+-
+-<p>
+-Go 1 has a rearranged package hierarchy that groups related items
+-into subdirectories. For instance, <code>utf8</code> and
+-<code>utf16</code> now occupy subdirectories of <code>unicode</code>.
+-Also, <a href="#subrepo">some packages</a> have moved into
+-subrepositories of
+-<a href="http://code.google.com/p/go"><code>code.google.com/p/go</code></a>
+-while <a href="#deleted">others</a> have been deleted outright.
+-</p>
+-
+-<table class="codetable" frame="border" summary="Moved packages">
+-<colgroup align="left" width="60%"></colgroup>
+-<colgroup align="left" width="40%"></colgroup>
+-<tr>
+-<th align="left">Old path</th>
+-<th align="left">New path</th>
+-</tr>
+-<tr>
+-<td colspan="2"><hr></td>
+-</tr>
+-<tr><td>asn1</td> <td>encoding/asn1</td></tr>
+-<tr><td>csv</td> <td>encoding/csv</td></tr>
+-<tr><td>gob</td> <td>encoding/gob</td></tr>
+-<tr><td>json</td> <td>encoding/json</td></tr>
+-<tr><td>xml</td> <td>encoding/xml</td></tr>
+-<tr>
+-<td colspan="2"><hr></td>
+-</tr>
+-<tr><td>exp/template/html</td> <td>html/template</td></tr>
+-<tr>
+-<td colspan="2"><hr></td>
+-</tr>
+-<tr><td>big</td> <td>math/big</td></tr>
+-<tr><td>cmath</td> <td>math/cmplx</td></tr>
+-<tr><td>rand</td> <td>math/rand</td></tr>
+-<tr>
+-<td colspan="2"><hr></td>
+-</tr>
+-<tr><td>http</td> <td>net/http</td></tr>
+-<tr><td>http/cgi</td> <td>net/http/cgi</td></tr>
+-<tr><td>http/fcgi</td> <td>net/http/fcgi</td></tr>
+-<tr><td>http/httptest</td> <td>net/http/httptest</td></tr>
+-<tr><td>http/pprof</td> <td>net/http/pprof</td></tr>
+-<tr><td>mail</td> <td>net/mail</td></tr>
+-<tr><td>rpc</td> <td>net/rpc</td></tr>
+-<tr><td>rpc/jsonrpc</td> <td>net/rpc/jsonrpc</td></tr>
+-<tr><td>smtp</td> <td>net/smtp</td></tr>
+-<tr><td>url</td> <td>net/url</td></tr>
+-<tr>
+-<td colspan="2"><hr></td>
+-</tr>
+-<tr><td>exec</td> <td>os/exec</td></tr>
+-<tr>
+-<td colspan="2"><hr></td>
+-</tr>
+-<tr><td>scanner</td> <td>text/scanner</td></tr>
+-<tr><td>tabwriter</td> <td>text/tabwriter</td></tr>
+-<tr><td>template</td> <td>text/template</td></tr>
+-<tr><td>template/parse</td> <td>text/template/parse</td></tr>
+-<tr>
+-<td colspan="2"><hr></td>
+-</tr>
+-<tr><td>utf8</td> <td>unicode/utf8</td></tr>
+-<tr><td>utf16</td> <td>unicode/utf16</td></tr>
+-</table>
+-
+-<p>
+-Note that the package names for the old <code>cmath</code> and
+-<code>exp/template/html</code> packages have changed to <code>cmplx</code>
+-and <code>template</code>.
+-</p>
+-
+-<p>
+-<em>Updating</em>:
+-Running <code>go</code> <code>fix</code> will update all imports and package renames for packages that
+-remain inside the standard repository.  Programs that import packages
+-that are no longer in the standard repository will need to be edited
+-by hand.
+-</p>
+-
+-<h3 id="exp">The package tree exp</h3>
+-
+-<p>
+-Because they are not standardized, the packages under the <code>exp</code> directory will not be available in the
+-standard Go 1 release distributions, although they will be available in source code form
+-in <a href="http://code.google.com/p/go/">the repository</a> for
+-developers who wish to use them.
+-</p>
+-
+-<p>
+-Several packages have moved under <code>exp</code> at the time of Go 1's release:
+-</p>
+-
+-<ul>
+-<li><code>ebnf</code></li>
+-<li><code>html</code><sup>&#8224;</sup></li>
+-<li><code>go/types</code></li>
+-</ul>
+-
+-<p>
+-(<sup>&#8224;</sup>The <code>EscapeString</code> and <code>UnescapeString</code> functions remain
+-in package <code>html</code>.)
+-</p>
+-
+-<p>
+-All these packages are available under the same names, with the prefix <code>exp/</code>: <code>exp/ebnf</code> etc.
+-</p>
+-
+-<p>
+-Also, the <code>utf8.String</code> type has been moved to its own package, <code>exp/utf8string</code>.
+-</p>
+-
+-<p>
+-Finally, the <code>gotype</code> command now resides in <code>exp/gotype</code>, while
+-<code>ebnflint</code> is now in <code>exp/ebnflint</code>.
+-If they are installed, they now reside in <code>$GOROOT/bin/tool</code>.
+-</p>
+-
+-<p>
+-<em>Updating</em>:
+-Code that uses packages in <code>exp</code> will need to be updated by hand,
+-or else compiled from an installation that has <code>exp</code> available.
+-The <code>go</code> <code>fix</code> tool or the compiler will complain about such uses.
+-</p>
+-
+-<h3 id="old">The package tree old</h3>
+-
+-<p>
+-Because they are deprecated, the packages under the <code>old</code> directory will not be available in the
+-standard Go 1 release distributions, although they will be available in source code form for
+-developers who wish to use them.
+-</p>
+-
+-<p>
+-The packages in their new locations are:
+-</p>
+-
+-<ul>
+-<li><code>old/netchan</code></li>
+-<li><code>old/regexp</code></li>
+-<li><code>old/template</code></li>
+-</ul>
+-
+-<p>
+-<em>Updating</em>:
+-Code that uses packages now in <code>old</code> will need to be updated by hand,
+-or else compiled from an installation that has <code>old</code> available.
+-The <code>go</code> <code>fix</code> tool will warn about such uses.
+-</p>
+-
+-<h3 id="deleted">Deleted packages</h3>
+-
+-<p>
+-Go 1 deletes several packages outright:
+-</p>
+-
+-<ul>
+-<li><code>container/vector</code></li>
+-<li><code>exp/datafmt</code></li>
+-<li><code>go/typechecker</code></li>
+-<li><code>try</code></li>
+-</ul>
+-
+-<p>
+-and also the command <code>gotry</code>.
+-</p>
+-
+-<p>
+-<em>Updating</em>:
+-Code that uses <code>container/vector</code> should be updated to use
+-slices directly.  See
+-<a href="http://code.google.com/p/go-wiki/wiki/SliceTricks">the Go
+-Language Community Wiki</a> for some suggestions.
+-Code that uses the other packages (there should be almost zero) will need to be rethought.
+-</p>
+-
+-<h3 id="subrepo">Packages moving to subrepositories</h3>
+-
+-<p>
+-Go 1 has moved a number of packages into other repositories, usually sub-repositories of
+-<a href="http://code.google.com/p/go/">the main Go repository</a>.
+-This table lists the old and new import paths:
+-</p>
+-
+-<table class="codetable" frame="border" summary="Sub-repositories">
+-<colgroup align="left" width="40%"></colgroup>
+-<colgroup align="left" width="60%"></colgroup>
+-<tr>
+-<th align="left">Old</th>
+-<th align="left">New</th>
+-</tr>
+-<tr>
+-<td colspan="2"><hr></td>
+-</tr>
+-<tr><td>crypto/bcrypt</td> <td>code.google.com/p/go.crypto/bcrypt</td></tr>
+-<tr><td>crypto/blowfish</td> <td>code.google.com/p/go.crypto/blowfish</td></tr>
+-<tr><td>crypto/cast5</td> <td>code.google.com/p/go.crypto/cast5</td></tr>
+-<tr><td>crypto/md4</td> <td>code.google.com/p/go.crypto/md4</td></tr>
+-<tr><td>crypto/ocsp</td> <td>code.google.com/p/go.crypto/ocsp</td></tr>
+-<tr><td>crypto/openpgp</td> <td>code.google.com/p/go.crypto/openpgp</td></tr>
+-<tr><td>crypto/openpgp/armor</td> <td>code.google.com/p/go.crypto/openpgp/armor</td></tr>
+-<tr><td>crypto/openpgp/elgamal</td> <td>code.google.com/p/go.crypto/openpgp/elgamal</td></tr>
+-<tr><td>crypto/openpgp/errors</td> <td>code.google.com/p/go.crypto/openpgp/errors</td></tr>
+-<tr><td>crypto/openpgp/packet</td> <td>code.google.com/p/go.crypto/openpgp/packet</td></tr>
+-<tr><td>crypto/openpgp/s2k</td> <td>code.google.com/p/go.crypto/openpgp/s2k</td></tr>
+-<tr><td>crypto/ripemd160</td> <td>code.google.com/p/go.crypto/ripemd160</td></tr>
+-<tr><td>crypto/twofish</td> <td>code.google.com/p/go.crypto/twofish</td></tr>
+-<tr><td>crypto/xtea</td> <td>code.google.com/p/go.crypto/xtea</td></tr>
+-<tr><td>exp/ssh</td> <td>code.google.com/p/go.crypto/ssh</td></tr>
+-<tr>
+-<td colspan="2"><hr></td>
+-</tr>
+-<tr><td>image/bmp</td> <td>code.google.com/p/go.image/bmp</td></tr>
+-<tr><td>image/tiff</td> <td>code.google.com/p/go.image/tiff</td></tr>
+-<tr>
+-<td colspan="2"><hr></td>
+-</tr>
+-<tr><td>net/dict</td> <td>code.google.com/p/go.net/dict</td></tr>
+-<tr><td>net/websocket</td> <td>code.google.com/p/go.net/websocket</td></tr>
+-<tr><td>exp/spdy</td> <td>code.google.com/p/go.net/spdy</td></tr>
+-<tr>
+-<td colspan="2"><hr></td>
+-</tr>
+-<tr><td>encoding/git85</td> <td>code.google.com/p/go.codereview/git85</td></tr>
+-<tr><td>patch</td> <td>code.google.com/p/go.codereview/patch</td></tr>
+-<tr>
+-<td colspan="2"><hr></td>
+-</tr>
+-<tr><td>exp/wingui</td> <td>code.google.com/p/gowingui</td></tr>
+-</table>
+-
+-<p>
+-<em>Updating</em>:
+-Running <code>go</code> <code>fix</code> will update imports of these packages to use the new import paths.
+-Installations that depend on these packages will need to install them using
+-a <code>go get</code> command.
+-</p>
+-
+-<h2 id="major">Major changes to the library</h2>
+-
+-<p>
+-This section describes significant changes to the core libraries, the ones that
+-affect the most programs.
+-</p>
+-
+-<h3 id="errors">The error type and errors package</h3>
+-
+-<p>
+-The placement of <code>os.Error</code> in package <code>os</code> is mostly historical: errors first came up when implementing package <code>os</code>, and they seemed system-related at the time.
+-Since then it has become clear that errors are more fundamental than the operating system. For example, it would be nice to use errors in packages that <code>os</code> depends on, like <code>syscall</code>.
+-Also, having <code>Error</code> in <code>os</code> introduces many dependencies on <code>os</code> that would otherwise not exist.
+-</p>
+-
+-<p>
+-Go 1 solves these problems by introducing a built-in <code>error</code> interface type and a separate <code>errors</code> package (analogous to <code>bytes</code> and <code>strings</code>) that contains utility functions.
+-It replaces <code>os.NewError</code> with
+-<a href="/pkg/errors/#New"><code>errors.New</code></a>,
+-giving errors a more central place in the environment.
+-</p>
+-
+-<p>
+-So that the widely used <code>String</code> method does not cause accidental satisfaction
+-of the <code>error</code> interface, the <code>error</code> interface instead uses
+-the name <code>Error</code> for that method:
+-</p>
+-
+-<pre>
+-    type error interface {
+-        Error() string
+-    }
+-</pre>
+-
+-<p>
+-The <code>fmt</code> library automatically invokes <code>Error</code>, as it already
+-does for <code>String</code>, for easy printing of error values.
+-</p>
+-
+-<pre><!--{{code "/doc/progs/go1.go" `/START ERROR EXAMPLE/` `/END ERROR EXAMPLE/`}}
+--->type SyntaxError struct {
+-    File    string
+-    Line    int
+-    Message string
+-}
+-
+-func (se *SyntaxError) Error() string {
+-    return fmt.Sprintf(&#34;%s:%d: %s&#34;, se.File, se.Line, se.Message)
+-}</pre>
+-
+-<p>
+-All standard packages have been updated to use the new interface; the old <code>os.Error</code> is gone.
+-</p>
+-
+-<p>
+-A new package, <a href="/pkg/errors/"><code>errors</code></a>, contains the function
+-</p>
+-
+-<pre>
+-func New(text string) error
+-</pre>
+-
+-<p>
+-to turn a string into an error. It replaces the old <code>os.NewError</code>.
+-</p>
+-
+-<pre><!--{{code "/doc/progs/go1.go" `/ErrSyntax/`}}
+--->    var ErrSyntax = errors.New(&#34;syntax error&#34;)</pre>
+-
+-<p>
+-<em>Updating</em>:
+-Running <code>go</code> <code>fix</code> will update almost all code affected by the change.
+-Code that defines error types with a <code>String</code> method will need to be updated
+-by hand to rename the methods to <code>Error</code>.
+-</p>
+-
+-<h3 id="errno">System call errors</h3>
+-
+-<p>
+-The old <code>syscall</code> package, which predated <code>os.Error</code>
+-(and just about everything else),
+-returned errors as <code>int</code> values.
+-In turn, the <code>os</code> package forwarded many of these errors, such
+-as <code>EINVAL</code>, but using a different set of errors on each platform.
+-This behavior was unpleasant and unportable.
+-</p>
+-
+-<p>
+-In Go 1, the
+-<a href="/pkg/syscall/"><code>syscall</code></a>
+-package instead returns an <code>error</code> for system call errors.
+-On Unix, the implementation is done by a
+-<a href="/pkg/syscall/#Errno"><code>syscall.Errno</code></a> type
+-that satisfies <code>error</code> and replaces the old <code>os.Errno</code>.
+-</p>
+-
+-<p>
+-The changes affecting <code>os.EINVAL</code> and relatives are
+-described <a href="#os">elsewhere</a>.
+-</p>
+-
+-<p>
+-<em>Updating</em>:
+-Running <code>go</code> <code>fix</code> will update almost all code affected by the change.
+-Regardless, most code should use the <code>os</code> package
+-rather than <code>syscall</code> and so will be unaffected.
+-</p>
+-
+-<h3 id="time">Time</h3>
+-
+-<p>
+-Time is always a challenge to support well in a programming language.
+-The old Go <code>time</code> package had <code>int64</code> units, no
+-real type safety,
+-and no distinction between absolute times and durations.
+-</p>
+-
+-<p>
+-One of the most sweeping changes in the Go 1 library is therefore a
+-complete redesign of the
+-<a href="/pkg/time/"><code>time</code></a> package.
+-Instead of an integer number of nanoseconds as an <code>int64</code>,
+-and a separate <code>*time.Time</code> type to deal with human
+-units such as hours and years,
+-there are now two fundamental types:
+-<a href="/pkg/time/#Time"><code>time.Time</code></a>
+-(a value, so the <code>*</code> is gone), which represents a moment in time;
+-and <a href="/pkg/time/#Duration"><code>time.Duration</code></a>,
+-which represents an interval.
+-Both have nanosecond resolution.
+-A <code>Time</code> can represent any time into the ancient
+-past and remote future, while a <code>Duration</code> can
+-span plus or minus only about 290 years.
+-There are methods on these types, plus a number of helpful
+-predefined constant durations such as <code>time.Second</code>.
+-</p>
+-
+-<p>
+-Among the new methods are things like
+-<a href="/pkg/time/#Time.Add"><code>Time.Add</code></a>,
+-which adds a <code>Duration</code> to a <code>Time</code>, and
+-<a href="/pkg/time/#Time.Sub"><code>Time.Sub</code></a>,
+-which subtracts two <code>Times</code> to yield a <code>Duration</code>.
+-</p>
+-
+-<p>
+-The most important semantic change is that the Unix epoch (Jan 1, 1970) is now
+-relevant only for those functions and methods that mention Unix:
+-<a href="/pkg/time/#Unix"><code>time.Unix</code></a>
+-and the <a href="/pkg/time/#Time.Unix"><code>Unix</code></a>
+-and <a href="/pkg/time/#Time.UnixNano"><code>UnixNano</code></a> methods
+-of the <code>Time</code> type.
+-In particular,
+-<a href="/pkg/time/#Now"><code>time.Now</code></a>
+-returns a <code>time.Time</code> value rather than, in the old
+-API, an integer nanosecond count since the Unix epoch.
+-</p>
+-
+-<pre><!--{{code "/doc/progs/go1.go" `/sleepUntil/` `/^}/`}}
+---><span class="comment">// sleepUntil sleeps until the specified time. It returns immediately if it&#39;s too late.</span>
+-func sleepUntil(wakeup time.Time) {
+-    now := time.Now() <span class="comment">// A Time.</span>
+-    if !wakeup.After(now) {
+-        return
+-    }
+-    delta := wakeup.Sub(now) <span class="comment">// A Duration.</span>
+-    fmt.Printf(&#34;Sleeping for %.3fs\n&#34;, delta.Seconds())
+-    time.Sleep(delta)
+-}</pre>
+-
+-<p>
+-The new types, methods, and constants have been propagated through
+-all the standard packages that use time, such as <code>os</code> and
+-its representation of file time stamps.
+-</p>
+-
+-<p>
+-<em>Updating</em>:
+-The <code>go</code> <code>fix</code> tool will update many uses of the old <code>time</code> package to use the new
+-types and methods, although it does not replace values such as <code>1e9</code>
+-representing nanoseconds per second.
+-Also, because of type changes in some of the values that arise,
+-some of the expressions rewritten by the fix tool may require
+-further hand editing; in such cases the rewrite will include
+-the correct function or method for the old functionality, but
+-may have the wrong type or require further analysis.
+-</p>
+-
+-<h2 id="minor">Minor changes to the library</h2>
+-
+-<p>
+-This section describes smaller changes, such as those to less commonly
+-used packages or those that affect
+-few programs beyond the need to run <code>go</code> <code>fix</code>.
+-This category includes packages that are new in Go 1.
+-Collectively they improve portability, regularize behavior, and
+-make the interfaces more modern and Go-like.
+-</p>
+-
+-<h3 id="archive_zip">The archive/zip package</h3>
+-
+-<p>
+-In Go 1, <a href="/pkg/archive/zip/#Writer"><code>*zip.Writer</code></a> no
+-longer has a <code>Write</code> method. Its presence was a mistake.
+-</p>
+-
+-<p>
+-<em>Updating</em>:
+-What little code is affected will be caught by the compiler and must be updated by hand.
+-</p>
+-
+-<h3 id="bufio">The bufio package</h3>
+-
+-<p>
+-In Go 1, <a href="/pkg/bufio/#NewReaderSize"><code>bufio.NewReaderSize</code></a>
+-and
+-<a href="/pkg/bufio/#NewWriterSize"><code>bufio.NewWriterSize</code></a>
+-functions no longer return an error for invalid sizes.
+-If the argument size is too small or invalid, it is adjusted.
+-</p>
+-
+-<p>
+-<em>Updating</em>:
+-Running <code>go</code> <code>fix</code> will update calls that assign the error to _.
+-Calls that aren't fixed will be caught by the compiler and must be updated by hand.
+-</p>
+-
+-<h3 id="compress">The compress/flate, compress/gzip and compress/zlib packages</h3>
+-
+-<p>
+-In Go 1, the <code>NewWriterXxx</code> functions in
+-<a href="/pkg/compress/flate"><code>compress/flate</code></a>,
+-<a href="/pkg/compress/gzip"><code>compress/gzip</code></a> and
+-<a href="/pkg/compress/zlib"><code>compress/zlib</code></a>
+-all return <code>(*Writer, error)</code> if they take a compression level,
+-and <code>*Writer</code> otherwise. Package <code>gzip</code>'s
+-<code>Compressor</code> and <code>Decompressor</code> types have been renamed
+-to <code>Writer</code> and <code>Reader</code>. Package <code>flate</code>'s
+-<code>WrongValueError</code> type has been removed.
+-</p>
+-
+-<p>
+-<em>Updating</em>:
+-Running <code>go</code> <code>fix</code> will update old names and calls that assign the error to _.
+-Calls that aren't fixed will be caught by the compiler and must be updated by hand.
+-</p>
+-
+-<h3 id="crypto_aes_des">The crypto/aes and crypto/des packages</h3>
+-
+-<p>
+-In Go 1, the <code>Reset</code> method has been removed. Go does not guarantee
+-that memory is not copied and therefore this method was misleading.
+-</p>
+-
+-<p>
+-The cipher-specific types <code>*aes.Cipher</code>, <code>*des.Cipher</code>,
+-and <code>*des.TripleDESCipher</code> have been removed in favor of
+-<code>cipher.Block</code>.
+-</p>
+-
+-<p>
+-<em>Updating</em>:
+-Remove the calls to Reset. Replace uses of the specific cipher types with
+-cipher.Block.
+-</p>
+-
+-<h3 id="crypto_elliptic">The crypto/elliptic package</h3>
+-
+-<p>
+-In Go 1, <a href="/pkg/crypto/elliptic/#Curve"><code>elliptic.Curve</code></a>
+-has been made an interface to permit alternative implementations. The curve
+-parameters have been moved to the
+-<a href="/pkg/crypto/elliptic/#CurveParams"><code>elliptic.CurveParams</code></a>
+-structure.
+-</p>
+-
+-<p>
+-<em>Updating</em>:
+-Existing users of <code>*elliptic.Curve</code> will need to change to
+-simply <code>elliptic.Curve</code>. Calls to <code>Marshal</code>,
+-<code>Unmarshal</code> and <code>GenerateKey</code> are now functions
+-in <code>crypto/elliptic</code> that take an <code>elliptic.Curve</code>
+-as their first argument.
+-</p>
+-
+-<h3 id="crypto_hmac">The crypto/hmac package</h3>
+-
+-<p>
+-In Go 1, the hash-specific functions, such as <code>hmac.NewMD5</code>, have
+-been removed from <code>crypto/hmac</code>. Instead, <code>hmac.New</code> takes
+-a function that returns a <code>hash.Hash</code>, such as <code>md5.New</code>.
+-</p>
+-
+-<p>
+-<em>Updating</em>:
+-Running <code>go</code> <code>fix</code> will perform the needed changes.
+-</p>
+-
+-<h3 id="crypto_x509">The crypto/x509 package</h3>
+-
+-<p>
+-In Go 1, the
+-<a href="/pkg/crypto/x509/#CreateCertificate"><code>CreateCertificate</code></a>
+-and
+-<a href="/pkg/crypto/x509/#CreateCRL"><code>CreateCRL</code></a>
+-functions in <code>crypto/x509</code> have been altered to take an
+-<code>interface{}</code> where they previously took a <code>*rsa.PublicKey</code>
+-or <code>*rsa.PrivateKey</code>. This will allow other public key algorithms
+-to be implemented in the future.
+-</p>
+-
+-<p>
+-<em>Updating</em>:
+-No changes will be needed.
+-</p>
+-
+-<h3 id="encoding_binary">The encoding/binary package</h3>
+-
+-<p>
+-In Go 1, the <code>binary.TotalSize</code> function has been replaced by
+-<a href="/pkg/encoding/binary/#Size"><code>Size</code></a>,
+-which takes an <code>interface{}</code> argument rather than
+-a <code>reflect.Value</code>.
+-</p>
+-
+-<p>
+-<em>Updating</em>:
+-What little code is affected will be caught by the compiler and must be updated by hand.
+-</p>
+-
+-<h3 id="encoding_xml">The encoding/xml package</h3>
+-
+-<p>
+-In Go 1, the <a href="/pkg/encoding/xml/"><code>xml</code></a> package
+-has been brought closer in design to the other marshaling packages such
+-as <a href="/pkg/encoding/gob/"><code>encoding/gob</code></a>.
+-</p>
+-
+-<p>
+-The old <code>Parser</code> type is renamed
+-<a href="/pkg/encoding/xml/#Decoder"><code>Decoder</code></a> and has a new
+-<a href="/pkg/encoding/xml/#Decoder.Decode"><code>Decode</code></a> method. An
+-<a href="/pkg/encoding/xml/#Encoder"><code>Encoder</code></a> type was also introduced.
+-</p>
+-
+-<p>
+-The functions <a href="/pkg/encoding/xml/#Marshal"><code>Marshal</code></a>
+-and <a href="/pkg/encoding/xml/#Unmarshal"><code>Unmarshal</code></a>
+-work with <code>[]byte</code> values now. To work with streams,
+-use the new <a href="/pkg/encoding/xml/#Encoder"><code>Encoder</code></a>
+-and <a href="/pkg/encoding/xml/#Decoder"><code>Decoder</code></a> types.
+-</p>
+-
+-<p>
+-When marshaling or unmarshaling values, the format of supported flags in
+-field tags has changed to be closer to the
+-<a href="/pkg/encoding/json"><code>json</code></a> package
+-(<code>`xml:"name,flag"`</code>). The matching done between field tags, field
+-names, and the XML attribute and element names is now case-sensitive.
+-The <code>XMLName</code> field tag, if present, must also match the name
+-of the XML element being marshaled.
+-</p>
+-
+-<p>
+-<em>Updating</em>:
+-Running <code>go</code> <code>fix</code> will update most uses of the package except for some calls to
+-<code>Unmarshal</code>. Special care must be taken with field tags,
+-since the fix tool will not update them and if not fixed by hand they will
+-misbehave silently in some cases. For example, the old
+-<code>"attr"</code> is now written <code>",attr"</code> while plain
+-<code>"attr"</code> remains valid but with a different meaning.
+-</p>
+-
+-<h3 id="expvar">The expvar package</h3>
+-
+-<p>
+-In Go 1, the <code>RemoveAll</code> function has been removed.
+-The <code>Iter</code> function and the <code>Iter</code> method on <code>*Map</code> have
+-been replaced by
+-<a href="/pkg/expvar/#Do"><code>Do</code></a>
+-and
+-<a href="/pkg/expvar/#Map.Do"><code>(*Map).Do</code></a>.
+-</p>
+-
+-<p>
+-<em>Updating</em>:
+-Most code using <code>expvar</code> will not need changing. The rare code that used
+-<code>Iter</code> can be updated to pass a closure to <code>Do</code> to achieve the same effect.
+-</p>
+-
+-<h3 id="flag">The flag package</h3>
+-
+-<p>
+-In Go 1, the interface <a href="/pkg/flag/#Value"><code>flag.Value</code></a> has changed slightly.
+-The <code>Set</code> method now returns an <code>error</code> instead of
+-a <code>bool</code> to indicate success or failure.
+-</p>
+-
+-<p>
+-There is also a new kind of flag, <code>Duration</code>, to support argument
+-values specifying time intervals.
+-Values for such flags must be given units, just as <code>time.Duration</code>
+-formats them: <code>10s</code>, <code>1h30m</code>, etc.
+-</p>
+-
+-<pre><!--{{code "/doc/progs/go1.go" `/timeout/`}}
+--->var timeout = flag.Duration(&#34;timeout&#34;, 30*time.Second, &#34;how long to wait for completion&#34;)</pre>
+-
+-<p>
+-<em>Updating</em>:
+-Programs that implement their own flags will need minor manual fixes to update their
+-<code>Set</code> methods.
+-The <code>Duration</code> flag is new and affects no existing code.
+-</p>
+-
+-
+-<h3 id="go">The go/* packages</h3>
+-
+-<p>
+-Several packages under <code>go</code> have slightly revised APIs.
+-</p>
+-
+-<p>
+-A concrete <code>Mode</code> type was introduced for configuration mode flags
+-in the packages
+-<a href="/pkg/go/scanner/"><code>go/scanner</code></a>,
+-<a href="/pkg/go/parser/"><code>go/parser</code></a>,
+-<a href="/pkg/go/printer/"><code>go/printer</code></a>, and
+-<a href="/pkg/go/doc/"><code>go/doc</code></a>.
+-</p>
+-
+-<p>
+-The modes <code>AllowIllegalChars</code> and <code>InsertSemis</code> have been removed
+-from the <a href="/pkg/go/scanner/"><code>go/scanner</code></a> package. They were mostly
+-useful for scanning text other than Go source files. Instead, the
+-<a href="/pkg/text/scanner/"><code>text/scanner</code></a> package should be used
+-for that purpose.
+-</p>
+-
+-<p>
+-The <a href="/pkg/go/scanner/#ErrorHandler"><code>ErrorHandler</code></a> provided
+-to the scanner's <a href="/pkg/go/scanner/#Scanner.Init"><code>Init</code></a> method is
+-now simply a function rather than an interface. The <code>ErrorVector</code> type has
+-been removed in favor of the (existing) <a href="/pkg/go/scanner/#ErrorList"><code>ErrorList</code></a>
+-type, and the <code>ErrorVector</code> methods have been migrated. Instead of embedding
+-an <code>ErrorVector</code> in a client of the scanner, now a client should maintain
+-an <code>ErrorList</code>.
+-</p>
+-
+-<p>
+-The set of parse functions provided by the <a href="/pkg/go/parser/"><code>go/parser</code></a>
+-package has been reduced to the primary parse function
+-<a href="/pkg/go/parser/#ParseFile"><code>ParseFile</code></a>, and a couple of
+-convenience functions <a href="/pkg/go/parser/#ParseDir"><code>ParseDir</code></a>
+-and <a href="/pkg/go/parser/#ParseExpr"><code>ParseExpr</code></a>.
+-</p>
+-
+-<p>
+-The <a href="/pkg/go/printer/"><code>go/printer</code></a> package supports an additional
+-configuration mode <a href="/pkg/go/printer/#Mode"><code>SourcePos</code></a>;
+-if set, the printer will emit <code>//line</code> comments such that the generated
+-output contains the original source code position information. The new type
+-<a href="/pkg/go/printer/#CommentedNode"><code>CommentedNode</code></a> can be
+-used to provide comments associated with an arbitrary
+-<a href="/pkg/go/ast/#Node"><code>ast.Node</code></a> (until now only
+-<a href="/pkg/go/ast/#File"><code>ast.File</code></a> carried comment information).
+-</p>
+-
+-<p>
+-The type names of the <a href="/pkg/go/doc/"><code>go/doc</code></a> package have been
+-streamlined by removing the <code>Doc</code> suffix: <code>PackageDoc</code>
+-is now <code>Package</code>, <code>ValueDoc</code> is <code>Value</code>, etc.
+-Also, all types now consistently have a <code>Name</code> field (or <code>Names</code>,
+-in the case of type <code>Value</code>) and <code>Type.Factories</code> has become
+-<code>Type.Funcs</code>.
+-Instead of calling <code>doc.NewPackageDoc(pkg, importpath)</code>,
+-documentation for a package is created with:
+-</p>
+-
+-<pre>
+-    doc.New(pkg, importpath, mode)
+-</pre>
+-
+-<p>
+-where the new <code>mode</code> parameter specifies the operation mode:
+-if set to <a href="/pkg/go/doc/#AllDecls"><code>AllDecls</code></a>, all declarations
+-(not just exported ones) are considered.
+-The function <code>NewFileDoc</code> was removed, and the function
+-<code>CommentText</code> has become the method
+-<a href="/pkg/go/ast/#Text"><code>Text</code></a> of
+-<a href="/pkg/go/ast/#CommentGroup"><code>ast.CommentGroup</code></a>.
+-</p>
+-
+-<p>
+-In package <a href="/pkg/go/token/"><code>go/token</code></a>, the
+-<a href="/pkg/go/token/#FileSet"><code>token.FileSet</code></a> method <code>Files</code>
+-(which originally returned a channel of <code>*token.File</code>s) has been replaced
+-with the iterator <a href="/pkg/go/token/#FileSet.Iterate"><code>Iterate</code></a> that
+-accepts a function argument instead.
+-</p>
+-
+-<p>
+-In package <a href="/pkg/go/build/"><code>go/build</code></a>, the API
+-has been nearly completely replaced.
+-The package still computes Go package information
+-but it does not run the build: the <code>Cmd</code> and <code>Script</code>
+-types are gone.
+-(To build code, use the new
+-<a href="/cmd/go/"><code>go</code></a> command instead.)
+-The <code>DirInfo</code> type is now named
+-<a href="/pkg/go/build/#Package"><code>Package</code></a>.
+-<code>FindTree</code> and <code>ScanDir</code> are replaced by
+-<a href="/pkg/go/build/#Import"><code>Import</code></a>
+-and
+-<a href="/pkg/go/build/#ImportDir"><code>ImportDir</code></a>.
+-</p>
+-
+-<p>
+-<em>Updating</em>:
+-Code that uses packages in <code>go</code> will have to be updated by hand; the
+-compiler will reject incorrect uses. Templates used in conjunction with any of the
+-<code>go/doc</code> types may need manual fixes; the renamed fields will lead
+-to run-time errors.
+-</p>
+-
+-<h3 id="hash">The hash package</h3>
+-
+-<p>
+-In Go 1, the definition of <a href="/pkg/hash/#Hash"><code>hash.Hash</code></a> includes
+-a new method, <code>BlockSize</code>.  This new method is used primarily in the
+-cryptographic libraries.
+-</p>
+-
+-<p>
+-The <code>Sum</code> method of the
+-<a href="/pkg/hash/#Hash"><code>hash.Hash</code></a> interface now takes a
+-<code>[]byte</code> argument, to which the hash value will be appended.
+-The previous behavior can be recreated by adding a <code>nil</code> argument to the call.
+-</p>
+-
+-<p>
+-<em>Updating</em>:
+-Existing implementations of <code>hash.Hash</code> will need to add a
+-<code>BlockSize</code> method.  Hashes that process the input one byte at
+-a time can implement <code>BlockSize</code> to return 1.
+-Running <code>go</code> <code>fix</code> will update calls to the <code>Sum</code> methods of the various
+-implementations of <code>hash.Hash</code>.
+-</p>
+-
+-<h3 id="html">The html package</h3>
+-
+-<p>
+-The <a href="/pkg/html/"><code>html</code></a> package in Go 1 provides
+-a full parser for HTML5.
+-</p>
+-
+-<p>
+-<em>Updating</em>:
+-Since the package's functionality is new, no updating is necessary.
+-</p>
+-
+-<h3 id="http">The http package</h3>
+-
+-<p>
+-In Go 1 the <a href="/pkg/net/http/"><code>http</code></a> package is refactored,
+-putting some of the utilities into a
+-<a href="/pkg/net/http/httputil/"><code>httputil</code></a> subdirectory.
+-These pieces are only rarely needed by HTTP clients.
+-The affected items are:
+-</p>
+-
+-<ul>
+-<li>ClientConn</li>
+-<li>DumpRequest</li>
+-<li>DumpRequestOut</li>
+-<li>DumpResponse</li>
+-<li>NewChunkedReader</li>
+-<li>NewChunkedWriter</li>
+-<li>NewClientConn</li>
+-<li>NewProxyClientConn</li>
+-<li>NewServerConn</li>
+-<li>NewSingleHostReverseProxy</li>
+-<li>ReverseProxy</li>
+-<li>ServerConn</li>
+-</ul>
+-
+-<p>
+-The <code>Request.RawURL</code> field has been removed; it was a
+-historical artifact.
+-</p>
+-
+-<p>
+-The <code>Handle</code> and <code>HandleFunc</code>
+-functions, and the similarly-named methods of <code>ServeMux</code>,
+-now panic if an attempt is made to register the same pattern twice.
+-</p>
+-
+-<p>
+-<em>Updating</em>:
+-Running <code>go</code> <code>fix</code> will update the few programs that are affected except for
+-uses of <code>RawURL</code>, which must be fixed by hand.
+-</p>
+-
+-<h3 id="image">The image package</h3>
+-
+-<p>
+-The <a href="/pkg/image/"><code>image</code></a> package has had a number of
+-minor changes, rearrangements and renamings.
+-</p>
+-
+-<p>
+-Most of the color handling code has been moved into its own package,
+-<a href="/pkg/image/color/"><code>image/color</code></a>.
+-For the elements that moved, a symmetry arises; for instance,
+-each pixel of an
+-<a href="/pkg/image/#RGBA"><code>image.RGBA</code></a>
+-is a
+-<a href="/pkg/image/color/#RGBA"><code>color.RGBA</code></a>.
+-</p>
+-
+-<p>
+-The old <code>image/ycbcr</code> package has been folded, with some
+-renamings, into the
+-<a href="/pkg/image/"><code>image</code></a>
+-and
+-<a href="/pkg/image/color/"><code>image/color</code></a>
+-packages.
+-</p>
+-
+-<p>
+-The old <code>image.ColorImage</code> type is still in the <code>image</code>
+-package but has been renamed
+-<a href="/pkg/image/#Uniform"><code>image.Uniform</code></a>,
+-while <code>image.Tiled</code> has been removed.
+-</p>
+-
+-<p>
+-This table lists the renamings.
+-</p>
+-
+-<table class="codetable" frame="border" summary="image renames">
+-<colgroup align="left" width="50%"></colgroup>
+-<colgroup align="left" width="50%"></colgroup>
+-<tr>
+-<th align="left">Old</th>
+-<th align="left">New</th>
+-</tr>
+-<tr>
+-<td colspan="2"><hr></td>
+-</tr>
+-<tr><td>image.Color</td> <td>color.Color</td></tr>
+-<tr><td>image.ColorModel</td> <td>color.Model</td></tr>
+-<tr><td>image.ColorModelFunc</td> <td>color.ModelFunc</td></tr>
+-<tr><td>image.PalettedColorModel</td> <td>color.Palette</td></tr>
+-<tr>
+-<td colspan="2"><hr></td>
+-</tr>
+-<tr><td>image.RGBAColor</td> <td>color.RGBA</td></tr>
+-<tr><td>image.RGBA64Color</td> <td>color.RGBA64</td></tr>
+-<tr><td>image.NRGBAColor</td> <td>color.NRGBA</td></tr>
+-<tr><td>image.NRGBA64Color</td> <td>color.NRGBA64</td></tr>
+-<tr><td>image.AlphaColor</td> <td>color.Alpha</td></tr>
+-<tr><td>image.Alpha16Color</td> <td>color.Alpha16</td></tr>
+-<tr><td>image.GrayColor</td> <td>color.Gray</td></tr>
+-<tr><td>image.Gray16Color</td> <td>color.Gray16</td></tr>
+-<tr>
+-<td colspan="2"><hr></td>
+-</tr>
+-<tr><td>image.RGBAColorModel</td> <td>color.RGBAModel</td></tr>
+-<tr><td>image.RGBA64ColorModel</td> <td>color.RGBA64Model</td></tr>
+-<tr><td>image.NRGBAColorModel</td> <td>color.NRGBAModel</td></tr>
+-<tr><td>image.NRGBA64ColorModel</td> <td>color.NRGBA64Model</td></tr>
+-<tr><td>image.AlphaColorModel</td> <td>color.AlphaModel</td></tr>
+-<tr><td>image.Alpha16ColorModel</td> <td>color.Alpha16Model</td></tr>
+-<tr><td>image.GrayColorModel</td> <td>color.GrayModel</td></tr>
+-<tr><td>image.Gray16ColorModel</td> <td>color.Gray16Model</td></tr>
+-<tr>
+-<td colspan="2"><hr></td>
+-</tr>
+-<tr><td>ycbcr.RGBToYCbCr</td> <td>color.RGBToYCbCr</td></tr>
+-<tr><td>ycbcr.YCbCrToRGB</td> <td>color.YCbCrToRGB</td></tr>
+-<tr><td>ycbcr.YCbCrColorModel</td> <td>color.YCbCrModel</td></tr>
+-<tr><td>ycbcr.YCbCrColor</td> <td>color.YCbCr</td></tr>
+-<tr><td>ycbcr.YCbCr</td> <td>image.YCbCr</td></tr>
+-<tr>
+-<td colspan="2"><hr></td>
+-</tr>
+-<tr><td>ycbcr.SubsampleRatio444</td> <td>image.YCbCrSubsampleRatio444</td></tr>
+-<tr><td>ycbcr.SubsampleRatio422</td> <td>image.YCbCrSubsampleRatio422</td></tr>
+-<tr><td>ycbcr.SubsampleRatio420</td> <td>image.YCbCrSubsampleRatio420</td></tr>
+-<tr>
+-<td colspan="2"><hr></td>
+-</tr>
+-<tr><td>image.ColorImage</td> <td>image.Uniform</td></tr>
+-</table>
+-
+-<p>
+-The image package's <code>New</code> functions
+-(<a href="/pkg/image/#NewRGBA"><code>NewRGBA</code></a>,
+-<a href="/pkg/image/#NewRGBA64"><code>NewRGBA64</code></a>, etc.)
+-take an <a href="/pkg/image/#Rectangle"><code>image.Rectangle</code></a> as an argument
+-instead of four integers.
+-</p>
+-
+-<p>
+-Finally, there are new predefined <code>color.Color</code> variables
+-<a href="/pkg/image/color/#Black"><code>color.Black</code></a>,
+-<a href="/pkg/image/color/#White"><code>color.White</code></a>,
+-<a href="/pkg/image/color/#Opaque"><code>color.Opaque</code></a>
+-and
+-<a href="/pkg/image/color/#Transparent"><code>color.Transparent</code></a>.
+-</p>
+-
+-<p>
+-<em>Updating</em>:
+-Running <code>go</code> <code>fix</code> will update almost all code affected by the change.
+-</p>
+-
+-<h3 id="log_syslog">The log/syslog package</h3>
+-
+-<p>
+-In Go 1, the <a href="/pkg/log/syslog/#NewLogger"><code>syslog.NewLogger</code></a>
+-function returns an error as well as a <code>log.Logger</code>.
+-</p>
+-
+-<p>
+-<em>Updating</em>:
+-What little code is affected will be caught by the compiler and must be updated by hand.
+-</p>
+-
+-<h3 id="mime">The mime package</h3>
+-
+-<p>
+-In Go 1, the <a href="/pkg/mime/#FormatMediaType"><code>FormatMediaType</code></a> function
+-of the <code>mime</code> package has been simplified to make it
+-consistent with
+-<a href="/pkg/mime/#ParseMediaType"><code>ParseMediaType</code></a>.
+-It now takes <code>"text/html"</code> rather than <code>"text"</code> and <code>"html"</code>.
+-</p>
+-
+-<p>
+-<em>Updating</em>:
+-What little code is affected will be caught by the compiler and must be updated by hand.
+-</p>
+-
+-<h3 id="net">The net package</h3>
+-
+-<p>
+-In Go 1, the various <code>SetTimeout</code>,
+-<code>SetReadTimeout</code>, and <code>SetWriteTimeout</code> methods
+-have been replaced with
+-<a href="/pkg/net/#IPConn.SetDeadline"><code>SetDeadline</code></a>,
+-<a href="/pkg/net/#IPConn.SetReadDeadline"><code>SetReadDeadline</code></a>, and
+-<a href="/pkg/net/#IPConn.SetWriteDeadline"><code>SetWriteDeadline</code></a>,
+-respectively.  Rather than taking a timeout value in nanoseconds that
+-apply to any activity on the connection, the new methods set an
+-absolute deadline (as a <code>time.Time</code> value) after which
+-reads and writes will time out and no longer block.
+-</p>
+-
+-<p>
+-There are also new functions
+-<a href="/pkg/net/#DialTimeout"><code>net.DialTimeout</code></a>
+-to simplify timing out dialing a network address and
+-<a href="/pkg/net/#ListenMulticastUDP"><code>net.ListenMulticastUDP</code></a>
+-to allow multicast UDP to listen concurrently across multiple listeners.
+-The <code>net.ListenMulticastUDP</code> function replaces the old
+-<code>JoinGroup</code> and <code>LeaveGroup</code> methods.
+-</p>
+-
+-<p>
+-<em>Updating</em>:
+-Code that uses the old methods will fail to compile and must be updated by hand.
+-The semantic change makes it difficult for the fix tool to update automatically.
+-</p>
+-
+-<h3 id="os">The os package</h3>
+-
+-<p>
+-The <code>Time</code> function has been removed; callers should use
+-the <a href="/pkg/time/#Time"><code>Time</code></a> type from the
+-<code>time</code> package.
+-</p>
+-
+-<p>
+-The <code>Exec</code> function has been removed; callers should use
+-<code>Exec</code> from the <code>syscall</code> package, where available.
+-</p>
+-
+-<p>
+-The <code>ShellExpand</code> function has been renamed to <a
+-href="/pkg/os/#ExpandEnv"><code>ExpandEnv</code></a>.
+-</p>
+-
+-<p>
+-The <a href="/pkg/os/#NewFile"><code>NewFile</code></a> function
+-now takes a <code>uintptr</code> fd, instead of an <code>int</code>.
+-The <a href="/pkg/os/#File.Fd"><code>Fd</code></a> method on files now
+-also returns a <code>uintptr</code>.
+-</p>
+-
+-<p>
+-There are no longer error constants such as <code>EINVAL</code>
+-in the <code>os</code> package, since the set of values varied with
+-the underlying operating system. There are new portable functions like
+-<a href="/pkg/os/#IsPermission"><code>IsPermission</code></a>
+-to test common error properties, plus a few new error values
+-with more Go-like names, such as
+-<a href="/pkg/os/#ErrPermission"><code>ErrPermission</code></a>
+-and
+-<a href="/pkg/os/#ErrNoEnv"><code>ErrNoEnv</code></a>.
+-</p>
+-
+-<p>
+-The <code>Getenverror</code> function has been removed. To distinguish
+-between a non-existent environment variable and an empty string,
+-use <a href="/pkg/os/#Environ"><code>os.Environ</code></a> or
+-<a href="/pkg/syscall/#Getenv"><code>syscall.Getenv</code></a>.
+-</p>
+-
+-
+-<p>
+-The <a href="/pkg/os/#Process.Wait"><code>Process.Wait</code></a> method has
+-dropped its option argument and the associated constants are gone
+-from the package.
+-Also, the function <code>Wait</code> is gone; only the method of
+-the <code>Process</code> type persists.
+-</p>
+-
+-<p>
+-The <code>Waitmsg</code> type returned by
+-<a href="/pkg/os/#Process.Wait"><code>Process.Wait</code></a>
+-has been replaced with a more portable
+-<a href="/pkg/os/#ProcessState"><code>ProcessState</code></a>
+-type with accessor methods to recover information about the
+-process.
+-Because of changes to <code>Wait</code>, the <code>ProcessState</code>
+-value always describes an exited process.
+-Portability concerns simplified the interface in other ways, but the values returned by the
+-<a href="/pkg/os/#ProcessState.Sys"><code>ProcessState.Sys</code></a> and
+-<a href="/pkg/os/#ProcessState.SysUsage"><code>ProcessState.SysUsage</code></a>
+-methods can be type-asserted to underlying system-specific data structures such as
+-<a href="/pkg/syscall/#WaitStatus"><code>syscall.WaitStatus</code></a> and
+-<a href="/pkg/syscall/#Rusage"><code>syscall.Rusage</code></a> on Unix.
+-</p>
+-
+-<p>
+-<em>Updating</em>:
+-Running <code>go</code> <code>fix</code> will drop a zero argument to <code>Process.Wait</code>.
+-All other changes will be caught by the compiler and must be updated by hand.
+-</p>
+-
+-<h4 id="os_fileinfo">The os.FileInfo type</h4>
+-
+-<p>
+-Go 1 redefines the <a href="/pkg/os/#FileInfo"><code>os.FileInfo</code></a> type,
+-changing it from a struct to an interface:
+-</p>
+-
+-<pre>
+-    type FileInfo interface {
+-        Name() string       // base name of the file
+-        Size() int64        // length in bytes
+-        Mode() FileMode     // file mode bits
+-        ModTime() time.Time // modification time
+-        IsDir() bool        // abbreviation for Mode().IsDir()
+-        Sys() interface{}   // underlying data source (can return nil)
+-    }
+-</pre>
+-
+-<p>
+-The file mode information has been moved into a subtype called
+-<a href="/pkg/os/#FileMode"><code>os.FileMode</code></a>,
+-a simple integer type with <code>IsDir</code>, <code>Perm</code>, and <code>String</code>
+-methods.
+-</p>
+-
+-<p>
+-The system-specific details of file modes and properties such as (on Unix)
+-i-number have been removed from <code>FileInfo</code> altogether.
+-Instead, each operating system's <code>os</code> package provides an
+-implementation of the <code>FileInfo</code> interface, which
+-has a <code>Sys</code> method that returns the
+-system-specific representation of file metadata.
+-For instance, to discover the i-number of a file on a Unix system, unpack
+-the <code>FileInfo</code> like this:
+-</p>
+-
+-<pre>
+-    fi, err := os.Stat("hello.go")
+-    if err != nil {
+-        log.Fatal(err)
+-    }
+-    // Check that it's a Unix file.
+-    unixStat, ok := fi.Sys().(*syscall.Stat_t)
+-    if !ok {
+-        log.Fatal("hello.go: not a Unix file")
+-    }
+-    fmt.Printf("file i-number: %d\n", unixStat.Ino)
+-</pre>
+-
+-<p>
+-Assuming (which is unwise) that <code>"hello.go"</code> is a Unix file,
+-the i-number expression could be contracted to
+-</p>
+-
+-<pre>
+-    fi.Sys().(*syscall.Stat_t).Ino
+-</pre>
+-
+-<p>
+-The vast majority of uses of <code>FileInfo</code> need only the methods
+-of the standard interface.
+-</p>
+-
+-<p>
+-The <code>os</code> package no longer contains wrappers for the POSIX errors
+-such as <code>ENOENT</code>.
+-For the few programs that need to verify particular error conditions, there are
+-now the boolean functions
+-<a href="/pkg/os/#IsExist"><code>IsExist</code></a>,
+-<a href="/pkg/os/#IsNotExist"><code>IsNotExist</code></a>
+-and
+-<a href="/pkg/os/#IsPermission"><code>IsPermission</code></a>.
+-</p>
+-
+-<pre><!--{{code "/doc/progs/go1.go" `/os\.Open/` `/}/`}}
+--->    f, err := os.OpenFile(name, os.O_RDWR|os.O_CREATE|os.O_EXCL, 0600)
+-    if os.IsExist(err) {
+-        log.Printf(&#34;%s already exists&#34;, name)
+-    }</pre>
+-
+-<p>
+-<em>Updating</em>:
+-Running <code>go</code> <code>fix</code> will update code that uses the old equivalent of the current <code>os.FileInfo</code>
+-and <code>os.FileMode</code> API.
+-Code that needs system-specific file details will need to be updated by hand.
+-Code that uses the old POSIX error values from the <code>os</code> package
+-will fail to compile and will also need to be updated by hand.
+-</p>
+-
+-<h3 id="os_signal">The os/signal package</h3>
+-
+-<p>
+-The <code>os/signal</code> package in Go 1 replaces the
+-<code>Incoming</code> function, which returned a channel
+-that received all incoming signals,
+-with the selective <code>Notify</code> function, which asks
+-for delivery of specific signals on an existing channel.
+-</p>
+-
+-<p>
+-<em>Updating</em>:
+-Code must be updated by hand.
+-A literal translation of
+-</p>
+-<pre>
+-c := signal.Incoming()
+-</pre>
+-<p>
+-is
+-</p>
+-<pre>
+-c := make(chan os.Signal)
+-signal.Notify(c) // ask for all signals
+-</pre>
+-<p>
+-but most code should list the specific signals it wants to handle instead:
+-</p>
+-<pre>
+-c := make(chan os.Signal)
+-signal.Notify(c, syscall.SIGHUP, syscall.SIGQUIT)
+-</pre>
+-
+-<h3 id="path_filepath">The path/filepath package</h3>
+-
+-<p>
+-In Go 1, the <a href="/pkg/path/filepath/#Walk"><code>Walk</code></a> function of the
+-<code>path/filepath</code> package
+-has been changed to take a function value of type
+-<a href="/pkg/path/filepath/#WalkFunc"><code>WalkFunc</code></a>
+-instead of a <code>Visitor</code> interface value.
+-<code>WalkFunc</code> unifies the handling of both files and directories.
+-</p>
+-
+-<pre>
+-    type WalkFunc func(path string, info os.FileInfo, err error) error
+-</pre>
+-
+-<p>
+-The <code>WalkFunc</code> function will be called even for files or directories that could not be opened;
+-in such cases the error argument will describe the failure.
+-If a directory's contents are to be skipped,
+-the function should return the value <a href="/pkg/path/filepath/#variables"><code>filepath.SkipDir</code></a>.
+-</p>
+-
+-<pre><!--{{code "/doc/progs/go1.go" `/STARTWALK/` `/ENDWALK/`}}
+--->    markFn := func(path string, info os.FileInfo, err error) error {
+-        if path == &#34;pictures&#34; { <span class="comment">// Will skip walking of directory pictures and its contents.</span>
+-            return filepath.SkipDir
+-        }
+-        if err != nil {
+-            return err
+-        }
+-        log.Println(path)
+-        return nil
+-    }
+-    err := filepath.Walk(&#34;.&#34;, markFn)
+-    if err != nil {
+-        log.Fatal(err)
+-    }</pre>
+-
+-<p>
+-<em>Updating</em>:
+-The change simplifies most code but has subtle consequences, so affected programs
+-will need to be updated by hand.
+-The compiler will catch code using the old interface.
+-</p>
+-
+-<h3 id="regexp">The regexp package</h3>
+-
+-<p>
+-The <a href="/pkg/regexp/"><code>regexp</code></a> package has been rewritten.
+-It has the same interface but the specification of the regular expressions
+-it supports has changed from the old "egrep" form to that of
+-<a href="http://code.google.com/p/re2/">RE2</a>.
+-</p>
+-
+-<p>
+-<em>Updating</em>:
+-Code that uses the package should have its regular expressions checked by hand.
+-</p>
+-
+-<h3 id="runtime">The runtime package</h3>
+-
+-<p>
+-In Go 1, much of the API exported by package
+-<code>runtime</code> has been removed in favor of
+-functionality provided by other packages.
+-Code using the <code>runtime.Type</code> interface
+-or its specific concrete type implementations should
+-now use package <a href="/pkg/reflect/"><code>reflect</code></a>.
+-Code using <code>runtime.Semacquire</code> or <code>runtime.Semrelease</code>
+-should use channels or the abstractions in package <a href="/pkg/sync/"><code>sync</code></a>.
+-The <code>runtime.Alloc</code>, <code>runtime.Free</code>,
+-and <code>runtime.Lookup</code> functions, an unsafe API created for
+-debugging the memory allocator, have no replacement.
+-</p>
+-
+-<p>
+-Before, <code>runtime.MemStats</code> was a global variable holding
+-statistics about memory allocation, and calls to <code>runtime.UpdateMemStats</code>
+-ensured that it was up to date.
+-In Go 1, <code>runtime.MemStats</code> is a struct type, and code should use
+-<a href="/pkg/runtime/#ReadMemStats"><code>runtime.ReadMemStats</code></a>
+-to obtain the current statistics.
+-</p>
+-
+-<p>
+-The package adds a new function,
+-<a href="/pkg/runtime/#NumCPU"><code>runtime.NumCPU</code></a>, that returns the number of CPUs available
+-for parallel execution, as reported by the operating system kernel.
+-Its value can inform the setting of <code>GOMAXPROCS</code>.
+-The <code>runtime.Cgocalls</code> and <code>runtime.Goroutines</code> functions
+-have been renamed to <code>runtime.NumCgoCall</code> and <code>runtime.NumGoroutine</code>.
+-</p>
+-
+-<p>
+-<em>Updating</em>:
+-Running <code>go</code> <code>fix</code> will update code for the function renamings.
+-Other code will need to be updated by hand.
+-</p>
+-
+-<h3 id="strconv">The strconv package</h3>
+-
+-<p>
+-In Go 1, the
+-<a href="/pkg/strconv/"><code>strconv</code></a>
+-package has been significantly reworked to make it more Go-like and less C-like,
+-although <code>Atoi</code> lives on (it's similar to
+-<code>int(ParseInt(x, 10, 0))</code>), as does
+-<code>Itoa(x)</code> (similar to <code>FormatInt(int64(x), 10)</code>).
+-There are also new variants of some of the functions that append to byte slices rather than
+-return strings, to allow control over allocation.
+-</p>
+-
+-<p>
+-This table summarizes the renamings; see the
+-<a href="/pkg/strconv/">package documentation</a>
+-for full details.
+-</p>
+-
+-<table class="codetable" frame="border" summary="strconv renames">
+-<colgroup align="left" width="50%"></colgroup>
+-<colgroup align="left" width="50%"></colgroup>
+-<tr>
+-<th align="left">Old call</th>
+-<th align="left">New call</th>
+-</tr>
+-<tr>
+-<td colspan="2"><hr></td>
+-</tr>
+-<tr><td>Atob(x)</td> <td>ParseBool(x)</td></tr>
+-<tr>
+-<td colspan="2"><hr></td>
+-</tr>
+-<tr><td>Atof32(x)</td> <td>ParseFloat(x, 32)§</td></tr>
+-<tr><td>Atof64(x)</td> <td>ParseFloat(x, 64)</td></tr>
+-<tr><td>AtofN(x, n)</td> <td>ParseFloat(x, n)</td></tr>
+-<tr>
+-<td colspan="2"><hr></td>
+-</tr>
+-<tr><td>Atoi(x)</td> <td>Atoi(x)</td></tr>
+-<tr><td>Atoi(x)</td> <td>ParseInt(x, 10, 0)§</td></tr>
+-<tr><td>Atoi64(x)</td> <td>ParseInt(x, 10, 64)</td></tr>
+-<tr>
+-<td colspan="2"><hr></td>
+-</tr>
+-<tr><td>Atoui(x)</td> <td>ParseUint(x, 10, 0)§</td></tr>
+-<tr><td>Atoui64(x)</td> <td>ParseUint(x, 10, 64)</td></tr>
+-<tr>
+-<td colspan="2"><hr></td>
+-</tr>
+-<tr><td>Btoi64(x, b)</td> <td>ParseInt(x, b, 64)</td></tr>
+-<tr><td>Btoui64(x, b)</td> <td>ParseUint(x, b, 64)</td></tr>
+-<tr>
+-<td colspan="2"><hr></td>
+-</tr>
+-<tr><td>Btoa(x)</td> <td>FormatBool(x)</td></tr>
+-<tr>
+-<td colspan="2"><hr></td>
+-</tr>
+-<tr><td>Ftoa32(x, f, p)</td> <td>FormatFloat(float64(x), f, p, 32)</td></tr>
+-<tr><td>Ftoa64(x, f, p)</td> <td>FormatFloat(x, f, p, 64)</td></tr>
+-<tr><td>FtoaN(x, f, p, n)</td> <td>FormatFloat(x, f, p, n)</td></tr>
+-<tr>
+-<td colspan="2"><hr></td>
+-</tr>
+-<tr><td>Itoa(x)</td> <td>Itoa(x)</td></tr>
+-<tr><td>Itoa(x)</td> <td>FormatInt(int64(x), 10)</td></tr>
+-<tr><td>Itoa64(x)</td> <td>FormatInt(x, 10)</td></tr>
+-<tr>
+-<td colspan="2"><hr></td>
+-</tr>
+-<tr><td>Itob(x, b)</td> <td>FormatInt(int64(x), b)</td></tr>
+-<tr><td>Itob64(x, b)</td> <td>FormatInt(x, b)</td></tr>
+-<tr>
+-<td colspan="2"><hr></td>
+-</tr>
+-<tr><td>Uitoa(x)</td> <td>FormatUint(uint64(x), 10)</td></tr>
+-<tr><td>Uitoa64(x)</td> <td>FormatUint(x, 10)</td></tr>
+-<tr>
+-<td colspan="2"><hr></td>
+-</tr>
+-<tr><td>Uitob(x, b)</td> <td>FormatUint(uint64(x), b)</td></tr>
+-<tr><td>Uitob64(x, b)</td> <td>FormatUint(x, b)</td></tr>
+-</table>
+-
+-<p>
+-<em>Updating</em>:
+-Running <code>go</code> <code>fix</code> will update almost all code affected by the change.
+-<br>
+-§ <code>Atoi</code> persists but <code>Atoui</code> and <code>Atof32</code> do not, so
+-they may require
+-a cast that must be added by hand; the <code>go</code> <code>fix</code> tool will warn about it.
+-</p>
+-
+-
+-<h3 id="templates">The template packages</h3>
+-
+-<p>
+-The <code>template</code> and <code>exp/template/html</code> packages have moved to 
+-<a href="/pkg/text/template/"><code>text/template</code></a> and
+-<a href="/pkg/html/template/"><code>html/template</code></a>.
+-More significant, the interface to these packages has been simplified.
+-The template language is the same, but the concept of "template set" is gone
+-and the functions and methods of the packages have changed accordingly,
+-often by elimination.
+-</p>
+-
+-<p>
+-Instead of sets, a <code>Template</code> object
+-may contain multiple named template definitions,
+-in effect constructing
+-name spaces for template invocation.
+-A template can invoke any other template associated with it, but only those
+-templates associated with it.
+-The simplest way to associate templates is to parse them together, something
+-made easier with the new structure of the packages.
+-</p>
+-
+-<p>
+-<em>Updating</em>:
+-The imports will be updated by the fix tool.
+-Single-template uses will otherwise be largely unaffected.
+-Code that uses multiple templates in concert will need to be updated by hand.
+-The <a href="/pkg/text/template/#examples">examples</a> in
+-the documentation for <code>text/template</code> can provide guidance.
+-</p>
+-
+-<h3 id="testing">The testing package</h3>
+-
+-<p>
+-The testing package has a type, <code>B</code>, passed as an argument to benchmark functions.
+-In Go 1, <code>B</code> has new methods, analogous to those of <code>T</code>, enabling
+-logging and failure reporting.
+-</p>
+-
+-<pre><!--{{code "/doc/progs/go1.go" `/func.*Benchmark/` `/^}/`}}
+--->func BenchmarkSprintf(b *testing.B) {
+-    <span class="comment">// Verify correctness before running benchmark.</span>
+-    b.StopTimer()
+-    got := fmt.Sprintf(&#34;%x&#34;, 23)
+-    const expect = &#34;17&#34;
+-    if expect != got {
+-        b.Fatalf(&#34;expected %q; got %q&#34;, expect, got)
+-    }
+-    b.StartTimer()
+-    for i := 0; i &lt; b.N; i++ {
+-        fmt.Sprintf(&#34;%x&#34;, 23)
+-    }
+-}</pre>
+-
+-<p>
+-<em>Updating</em>:
+-Existing code is unaffected, although benchmarks that use <code>println</code>
+-or <code>panic</code> should be updated to use the new methods.
+-</p>
+-
+-<h3 id="testing_script">The testing/script package</h3>
+-
+-<p>
+-The testing/script package has been deleted. It was a dreg.
+-</p>
+-
+-<p>
+-<em>Updating</em>:
+-No code is likely to be affected.
+-</p>
+-
+-<h3 id="unsafe">The unsafe package</h3>
+-
+-<p>
+-In Go 1, the functions
+-<code>unsafe.Typeof</code>, <code>unsafe.Reflect</code>,
+-<code>unsafe.Unreflect</code>, <code>unsafe.New</code>, and
+-<code>unsafe.NewArray</code> have been removed;
+-they duplicated safer functionality provided by
+-package <a href="/pkg/reflect/"><code>reflect</code></a>.
+-</p>
+-
+-<p>
+-<em>Updating</em>:
+-Code using these functions must be rewritten to use
+-package <a href="/pkg/reflect/"><code>reflect</code></a>.
+-The changes to <a href="http://code.google.com/p/go/source/detail?r=2646dc956207">encoding/gob</a> and the <a href="http://code.google.com/p/goprotobuf/source/detail?r=5340ad310031">protocol buffer library</a>
+-may be helpful as examples.
+-</p>
+-
+-<h3 id="url">The url package</h3>
+-
+-<p>
+-In Go 1 several fields from the <a href="/pkg/net/url/#URL"><code>url.URL</code></a> type
+-were removed or replaced.
+-</p>
+-
+-<p>
+-The <a href="/pkg/net/url/#URL.String"><code>String</code></a> method now
+-predictably rebuilds an encoded URL string using all of <code>URL</code>'s
+-fields as necessary. The resulting string will also no longer have
+-passwords escaped.
+-</p>
+-
+-<p>
+-The <code>Raw</code> field has been removed. In most cases the <code>String</code>
+-method may be used in its place.
+-</p>
+-
+-<p>
+-The old <code>RawUserinfo</code> field is replaced by the <code>User</code>
+-field, of type <a href="/pkg/net/url/#Userinfo"><code>*url.Userinfo</code></a>.
+-Values of this type may be created using the new <a href="/pkg/net/url/#User"><code>url.User</code></a>
+-and <a href="/pkg/net/url/#UserPassword"><code>url.UserPassword</code></a>
+-functions. The <code>EscapeUserinfo</code> and <code>UnescapeUserinfo</code>
+-functions are also gone.
+-</p>
+-
+-<p>
+-The <code>RawAuthority</code> field has been removed. The same information is
+-available in the <code>Host</code> and <code>User</code> fields.
+-</p>
+-
+-<p>
+-The <code>RawPath</code> field and the <code>EncodedPath</code> method have
+-been removed. The path information in rooted URLs (with a slash following the
+-schema) is now available only in decoded form in the <code>Path</code> field.
+-Occasionally, the encoded data may be required to obtain information that
+-was lost in the decoding process. These cases must be handled by accessing
+-the data the URL was built from.
+-</p>
+-
+-<p>
+-URLs with non-rooted paths, such as <code>"mailto:dev@golang.org?subject=Hi"</code>,
+-are also handled differently. The <code>OpaquePath</code> boolean field has been
+-removed and a new <code>Opaque</code> string field introduced to hold the encoded
+-path for such URLs. In Go 1, the cited URL parses as:
+-</p>
+-
+-<pre>
+-    URL{
+-        Scheme: "mailto",
+-        Opaque: "dev@golang.org",
+-        RawQuery: "subject=Hi",
+-    }
+-</pre>
+-
+-<p>
+-A new <a href="/pkg/net/url/#URL.RequestURI"><code>RequestURI</code></a> method was
+-added to <code>URL</code>.
+-</p>
+-
+-<p>
+-The <code>ParseWithReference</code> function has been renamed to <code>ParseWithFragment</code>.
+-</p>
+-
+-<p>
+-<em>Updating</em>:
+-Code that uses the old fields will fail to compile and must be updated by hand.
+-The semantic changes make it difficult for the fix tool to update automatically.
+-</p>
+-
+-<h2 id="cmd_go">The go command</h2>
+-
+-<p>
+-Go 1 introduces the <a href="/cmd/go/">go command</a>, a tool for fetching,
+-building, and installing Go packages and commands. The <code>go</code> command
+-does away with makefiles, instead using Go source code to find dependencies and
+-determine build conditions. Most existing Go programs will no longer require
+-makefiles to be built.
+-</p>
+-
+-<p>
+-See <a href="/doc/code.html">How to Write Go Code</a> for a primer on the
+-<code>go</code> command and the <a href="/cmd/go/">go command documentation</a>
+-for the full details.
+-</p>
+-
+-<p>
+-<em>Updating</em>:
+-Projects that depend on the Go project's old makefile-based build
+-infrastructure (<code>Make.pkg</code>, <code>Make.cmd</code>, and so on) should
+-switch to using the <code>go</code> command for building Go code and, if
+-necessary, rewrite their makefiles to perform any auxiliary build tasks.
+-</p>
+-
+-<h2 id="cmd_cgo">The cgo command</h2>
+-
+-<p>
+-In Go 1, the <a href="/cmd/cgo">cgo command</a>
+-uses a different <code>_cgo_export.h</code>
+-file, which is generated for packages containing <code>//export</code> lines.
+-The <code>_cgo_export.h</code> file now begins with the C preamble comment,
+-so that exported function definitions can use types defined there.
+-This has the effect of compiling the preamble multiple times, so a
+-package using <code>//export</code> must not put function definitions
+-or variable initializations in the C preamble.
+-</p>
+-
+-<h2 id="releases">Packaged releases</h2>
+-
+-<p>
+-One of the most significant changes associated with Go 1 is the availability
+-of prepackaged, downloadable distributions.
+-They are available for many combinations of architecture and operating system
+-(including Windows) and the list will grow.
+-Installation details are described on the
+-<a href="/doc/install">Getting Started</a> page, while
+-the distributions themselves are listed on the
+-<a href="http://code.google.com/p/go/downloads/list">downloads page</a>.
+-</p>
+-
+-
+-</div>
+-
+-<div id="footer">
+-Build version go1.0.1.<br>
+-Except as <a href="http://code.google.com/policies.html#restrictions">noted</a>,
+-the content of this page is licensed under the
+-Creative Commons Attribution 3.0 License,
+-and code is licensed under a <a href="/LICENSE">BSD license</a>.<br>
+-<a href="/doc/tos.html">Terms of Service</a> | 
+-<a href="http://www.google.com/intl/en/privacy/privacy-policy.html">Privacy Policy</a>
+-</div>
+-
+-<script type="text/javascript">
+-(function() {
+-  var ga = document.createElement("script"); ga.type = "text/javascript"; ga.async = true;
+-  ga.src = ("https:" == document.location.protocol ? "https://ssl" : "http://www") + ".google-analytics.com/ga.js";
+-  var s = document.getElementsByTagName("script")[0]; s.parentNode.insertBefore(ga, s);
+-})();
+-</script>
+-</body>
+-<script type="text/javascript">
+-  (function() {
+-    var po = document.createElement('script'); po.type = 'text/javascript'; po.async = true;
+-    po.src = 'https://apis.google.com/js/plusone.js';
+-    var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(po, s);
+-  })();
+-</script>
+-</html>
+-
+diff --git a/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/README b/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/README
+deleted file mode 100644
+index 9b4c2d8..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/README
++++ /dev/null
+@@ -1,28 +0,0 @@
+-The *.dat files in this directory are copied from The WebKit Open Source
+-Project, specifically $WEBKITROOT/LayoutTests/html5lib/resources.
+-WebKit is licensed under a BSD style license.
+-http://webkit.org/coding/bsd-license.html says:
+-
+-Copyright (C) 2009 Apple Inc. All rights reserved.
+-
+-Redistribution and use in source and binary forms, with or without
+-modification, are permitted provided that the following conditions are met:
+-
+-1. Redistributions of source code must retain the above copyright notice,
+-this list of conditions and the following disclaimer.
+-
+-2. Redistributions in binary form must reproduce the above copyright notice,
+-this list of conditions and the following disclaimer in the documentation
+-and/or other materials provided with the distribution.
+-
+-THIS SOFTWARE IS PROVIDED BY APPLE INC. AND ITS CONTRIBUTORS "AS IS" AND ANY
+-EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+-WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+-DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR ITS CONTRIBUTORS BE LIABLE FOR ANY
+-DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+-(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+-LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON
+-ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+-(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+-SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+-
+diff --git a/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/adoption01.dat b/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/adoption01.dat
+deleted file mode 100644
+index 787e1b0..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/adoption01.dat
++++ /dev/null
+@@ -1,194 +0,0 @@
+-#data
+-<a><p></a></p>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <a>
+-|     <p>
+-|       <a>
+-
+-#data
+-<a>1<p>2</a>3</p>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <a>
+-|       "1"
+-|     <p>
+-|       <a>
+-|         "2"
+-|       "3"
+-
+-#data
+-<a>1<button>2</a>3</button>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <a>
+-|       "1"
+-|     <button>
+-|       <a>
+-|         "2"
+-|       "3"
+-
+-#data
+-<a>1<b>2</a>3</b>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <a>
+-|       "1"
+-|       <b>
+-|         "2"
+-|     <b>
+-|       "3"
+-
+-#data
+-<a>1<div>2<div>3</a>4</div>5</div>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <a>
+-|       "1"
+-|     <div>
+-|       <a>
+-|         "2"
+-|       <div>
+-|         <a>
+-|           "3"
+-|         "4"
+-|       "5"
+-
+-#data
+-<table><a>1<p>2</a>3</p>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <a>
+-|       "1"
+-|     <p>
+-|       <a>
+-|         "2"
+-|       "3"
+-|     <table>
+-
+-#data
+-<b><b><a><p></a>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <b>
+-|       <b>
+-|         <a>
+-|         <p>
+-|           <a>
+-
+-#data
+-<b><a><b><p></a>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <b>
+-|       <a>
+-|         <b>
+-|       <b>
+-|         <p>
+-|           <a>
+-
+-#data
+-<a><b><b><p></a>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <a>
+-|       <b>
+-|         <b>
+-|     <b>
+-|       <b>
+-|         <p>
+-|           <a>
+-
+-#data
+-<p>1<s id="A">2<b id="B">3</p>4</s>5</b>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <p>
+-|       "1"
+-|       <s>
+-|         id="A"
+-|         "2"
+-|         <b>
+-|           id="B"
+-|           "3"
+-|     <s>
+-|       id="A"
+-|       <b>
+-|         id="B"
+-|         "4"
+-|     <b>
+-|       id="B"
+-|       "5"
+-
+-#data
+-<table><a>1<td>2</td>3</table>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <a>
+-|       "1"
+-|     <a>
+-|       "3"
+-|     <table>
+-|       <tbody>
+-|         <tr>
+-|           <td>
+-|             "2"
+-
+-#data
+-<table>A<td>B</td>C</table>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "AC"
+-|     <table>
+-|       <tbody>
+-|         <tr>
+-|           <td>
+-|             "B"
+-
+-#data
+-<a><svg><tr><input></a>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <a>
+-|       <svg svg>
+-|         <svg tr>
+-|           <svg input>
+diff --git a/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/adoption02.dat b/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/adoption02.dat
+deleted file mode 100644
+index d18151b..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/adoption02.dat
++++ /dev/null
+@@ -1,31 +0,0 @@
+-#data
+-<b>1<i>2<p>3</b>4
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <b>
+-|       "1"
+-|       <i>
+-|         "2"
+-|     <i>
+-|       <p>
+-|         <b>
+-|           "3"
+-|         "4"
+-
+-#data
+-<a><div><style></style><address><a>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <a>
+-|     <div>
+-|       <a>
+-|         <style>
+-|       <address>
+-|         <a>
+-|         <a>
+diff --git a/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/comments01.dat b/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/comments01.dat
+deleted file mode 100644
+index 44f1876..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/comments01.dat
++++ /dev/null
+@@ -1,135 +0,0 @@
+-#data
+-FOO<!-- BAR -->BAZ
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "FOO"
+-|     <!--  BAR  -->
+-|     "BAZ"
+-
+-#data
+-FOO<!-- BAR --!>BAZ
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "FOO"
+-|     <!--  BAR  -->
+-|     "BAZ"
+-
+-#data
+-FOO<!-- BAR --   >BAZ
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "FOO"
+-|     <!--  BAR --   >BAZ -->
+-
+-#data
+-FOO<!-- BAR -- <QUX> -- MUX -->BAZ
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "FOO"
+-|     <!--  BAR -- <QUX> -- MUX  -->
+-|     "BAZ"
+-
+-#data
+-FOO<!-- BAR -- <QUX> -- MUX --!>BAZ
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "FOO"
+-|     <!--  BAR -- <QUX> -- MUX  -->
+-|     "BAZ"
+-
+-#data
+-FOO<!-- BAR -- <QUX> -- MUX -- >BAZ
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "FOO"
+-|     <!--  BAR -- <QUX> -- MUX -- >BAZ -->
+-
+-#data
+-FOO<!---->BAZ
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "FOO"
+-|     <!--  -->
+-|     "BAZ"
+-
+-#data
+-FOO<!--->BAZ
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "FOO"
+-|     <!--  -->
+-|     "BAZ"
+-
+-#data
+-FOO<!-->BAZ
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "FOO"
+-|     <!--  -->
+-|     "BAZ"
+-
+-#data
+-<?xml version="1.0">Hi
+-#errors
+-#document
+-| <!-- ?xml version="1.0" -->
+-| <html>
+-|   <head>
+-|   <body>
+-|     "Hi"
+-
+-#data
+-<?xml version="1.0">
+-#errors
+-#document
+-| <!-- ?xml version="1.0" -->
+-| <html>
+-|   <head>
+-|   <body>
+-
+-#data
+-<?xml version
+-#errors
+-#document
+-| <!-- ?xml version -->
+-| <html>
+-|   <head>
+-|   <body>
+-
+-#data
+-FOO<!----->BAZ
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "FOO"
+-|     <!-- - -->
+-|     "BAZ"
+diff --git a/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/doctype01.dat b/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/doctype01.dat
+deleted file mode 100644
+index ae45732..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/doctype01.dat
++++ /dev/null
+@@ -1,370 +0,0 @@
+-#data
+-<!DOCTYPE html>Hello
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     "Hello"
+-
+-#data
+-<!dOctYpE HtMl>Hello
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     "Hello"
+-
+-#data
+-<!DOCTYPEhtml>Hello
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     "Hello"
+-
+-#data
+-<!DOCTYPE>Hello
+-#errors
+-#document
+-| <!DOCTYPE >
+-| <html>
+-|   <head>
+-|   <body>
+-|     "Hello"
+-
+-#data
+-<!DOCTYPE >Hello
+-#errors
+-#document
+-| <!DOCTYPE >
+-| <html>
+-|   <head>
+-|   <body>
+-|     "Hello"
+-
+-#data
+-<!DOCTYPE potato>Hello
+-#errors
+-#document
+-| <!DOCTYPE potato>
+-| <html>
+-|   <head>
+-|   <body>
+-|     "Hello"
+-
+-#data
+-<!DOCTYPE potato >Hello
+-#errors
+-#document
+-| <!DOCTYPE potato>
+-| <html>
+-|   <head>
+-|   <body>
+-|     "Hello"
+-
+-#data
+-<!DOCTYPE potato taco>Hello
+-#errors
+-#document
+-| <!DOCTYPE potato>
+-| <html>
+-|   <head>
+-|   <body>
+-|     "Hello"
+-
+-#data
+-<!DOCTYPE potato taco "ddd>Hello
+-#errors
+-#document
+-| <!DOCTYPE potato>
+-| <html>
+-|   <head>
+-|   <body>
+-|     "Hello"
+-
+-#data
+-<!DOCTYPE potato sYstEM>Hello
+-#errors
+-#document
+-| <!DOCTYPE potato>
+-| <html>
+-|   <head>
+-|   <body>
+-|     "Hello"
+-
+-#data
+-<!DOCTYPE potato sYstEM    >Hello
+-#errors
+-#document
+-| <!DOCTYPE potato>
+-| <html>
+-|   <head>
+-|   <body>
+-|     "Hello"
+-
+-#data
+-<!DOCTYPE   potato       sYstEM  ggg>Hello
+-#errors
+-#document
+-| <!DOCTYPE potato>
+-| <html>
+-|   <head>
+-|   <body>
+-|     "Hello"
+-
+-#data
+-<!DOCTYPE potato SYSTEM taco  >Hello
+-#errors
+-#document
+-| <!DOCTYPE potato>
+-| <html>
+-|   <head>
+-|   <body>
+-|     "Hello"
+-
+-#data
+-<!DOCTYPE potato SYSTEM 'taco"'>Hello
+-#errors
+-#document
+-| <!DOCTYPE potato "" "taco"">
+-| <html>
+-|   <head>
+-|   <body>
+-|     "Hello"
+-
+-#data
+-<!DOCTYPE potato SYSTEM "taco">Hello
+-#errors
+-#document
+-| <!DOCTYPE potato "" "taco">
+-| <html>
+-|   <head>
+-|   <body>
+-|     "Hello"
+-
+-#data
+-<!DOCTYPE potato SYSTEM "tai'co">Hello
+-#errors
+-#document
+-| <!DOCTYPE potato "" "tai'co">
+-| <html>
+-|   <head>
+-|   <body>
+-|     "Hello"
+-
+-#data
+-<!DOCTYPE potato SYSTEMtaco "ddd">Hello
+-#errors
+-#document
+-| <!DOCTYPE potato>
+-| <html>
+-|   <head>
+-|   <body>
+-|     "Hello"
+-
+-#data
+-<!DOCTYPE potato grass SYSTEM taco>Hello
+-#errors
+-#document
+-| <!DOCTYPE potato>
+-| <html>
+-|   <head>
+-|   <body>
+-|     "Hello"
+-
+-#data
+-<!DOCTYPE potato pUbLIc>Hello
+-#errors
+-#document
+-| <!DOCTYPE potato>
+-| <html>
+-|   <head>
+-|   <body>
+-|     "Hello"
+-
+-#data
+-<!DOCTYPE potato pUbLIc >Hello
+-#errors
+-#document
+-| <!DOCTYPE potato>
+-| <html>
+-|   <head>
+-|   <body>
+-|     "Hello"
+-
+-#data
+-<!DOCTYPE potato pUbLIcgoof>Hello
+-#errors
+-#document
+-| <!DOCTYPE potato>
+-| <html>
+-|   <head>
+-|   <body>
+-|     "Hello"
+-
+-#data
+-<!DOCTYPE potato PUBLIC goof>Hello
+-#errors
+-#document
+-| <!DOCTYPE potato>
+-| <html>
+-|   <head>
+-|   <body>
+-|     "Hello"
+-
+-#data
+-<!DOCTYPE potato PUBLIC "go'of">Hello
+-#errors
+-#document
+-| <!DOCTYPE potato "go'of" "">
+-| <html>
+-|   <head>
+-|   <body>
+-|     "Hello"
+-
+-#data
+-<!DOCTYPE potato PUBLIC 'go'of'>Hello
+-#errors
+-#document
+-| <!DOCTYPE potato "go" "">
+-| <html>
+-|   <head>
+-|   <body>
+-|     "Hello"
+-
+-#data
+-<!DOCTYPE potato PUBLIC 'go:hh   of' >Hello
+-#errors
+-#document
+-| <!DOCTYPE potato "go:hh   of" "">
+-| <html>
+-|   <head>
+-|   <body>
+-|     "Hello"
+-
+-#data
+-<!DOCTYPE potato PUBLIC "W3C-//dfdf" SYSTEM ggg>Hello
+-#errors
+-#document
+-| <!DOCTYPE potato "W3C-//dfdf" "">
+-| <html>
+-|   <head>
+-|   <body>
+-|     "Hello"
+-
+-#data
+-<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN"
+-   "http://www.w3.org/TR/html4/strict.dtd">Hello
+-#errors
+-#document
+-| <!DOCTYPE html "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
+-| <html>
+-|   <head>
+-|   <body>
+-|     "Hello"
+-
+-#data
+-<!DOCTYPE ...>Hello
+-#errors
+-#document
+-| <!DOCTYPE ...>
+-| <html>
+-|   <head>
+-|   <body>
+-|     "Hello"
+-
+-#data
+-<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
+-"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
+-#errors
+-#document
+-| <!DOCTYPE html "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
+-| <html>
+-|   <head>
+-|   <body>
+-
+-#data
+-<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Frameset//EN"
+-"http://www.w3.org/TR/xhtml1/DTD/xhtml1-frameset.dtd">
+-#errors
+-#document
+-| <!DOCTYPE html "-//W3C//DTD XHTML 1.0 Frameset//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-frameset.dtd">
+-| <html>
+-|   <head>
+-|   <body>
+-
+-#data
+-<!DOCTYPE root-element [SYSTEM OR PUBLIC FPI] "uri" [ 
+-<!-- internal declarations -->
+-]>
+-#errors
+-#document
+-| <!DOCTYPE root-element>
+-| <html>
+-|   <head>
+-|   <body>
+-|     "]>"
+-
+-#data
+-<!DOCTYPE html PUBLIC
+-  "-//WAPFORUM//DTD XHTML Mobile 1.0//EN"
+-    "http://www.wapforum.org/DTD/xhtml-mobile10.dtd">
+-#errors
+-#document
+-| <!DOCTYPE html "-//WAPFORUM//DTD XHTML Mobile 1.0//EN" "http://www.wapforum.org/DTD/xhtml-mobile10.dtd">
+-| <html>
+-|   <head>
+-|   <body>
+-
+-#data
+-<!DOCTYPE HTML SYSTEM "http://www.w3.org/DTD/HTML4-strict.dtd"><body><b>Mine!</b></body>
+-#errors
+-#document
+-| <!DOCTYPE html "" "http://www.w3.org/DTD/HTML4-strict.dtd">
+-| <html>
+-|   <head>
+-|   <body>
+-|     <b>
+-|       "Mine!"
+-
+-#data
+-<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN""http://www.w3.org/TR/html4/strict.dtd">
+-#errors
+-#document
+-| <!DOCTYPE html "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
+-| <html>
+-|   <head>
+-|   <body>
+-
+-#data
+-<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN"'http://www.w3.org/TR/html4/strict.dtd'>
+-#errors
+-#document
+-| <!DOCTYPE html "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
+-| <html>
+-|   <head>
+-|   <body>
+-
+-#data
+-<!DOCTYPE HTML PUBLIC"-//W3C//DTD HTML 4.01//EN"'http://www.w3.org/TR/html4/strict.dtd'>
+-#errors
+-#document
+-| <!DOCTYPE html "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
+-| <html>
+-|   <head>
+-|   <body>
+-
+-#data
+-<!DOCTYPE HTML PUBLIC'-//W3C//DTD HTML 4.01//EN''http://www.w3.org/TR/html4/strict.dtd'>
+-#errors
+-#document
+-| <!DOCTYPE html "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
+-| <html>
+-|   <head>
+-|   <body>
+diff --git a/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/entities01.dat b/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/entities01.dat
+deleted file mode 100644
+index c8073b7..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/entities01.dat
++++ /dev/null
+@@ -1,603 +0,0 @@
+-#data
+-FOO&gt;BAR
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "FOO>BAR"
+-
+-#data
+-FOO&gtBAR
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "FOO>BAR"
+-
+-#data
+-FOO&gt BAR
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "FOO> BAR"
+-
+-#data
+-FOO&gt;;;BAR
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "FOO>;;BAR"
+-
+-#data
+-I'm &notit; I tell you
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "I'm ¬it; I tell you"
+-
+-#data
+-I'm &notin; I tell you
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "I'm ∉ I tell you"
+-
+-#data
+-FOO& BAR
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "FOO& BAR"
+-
+-#data
+-FOO&<BAR>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "FOO&"
+-|     <bar>
+-
+-#data
+-FOO&&&&gt;BAR
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "FOO&&&>BAR"
+-
+-#data
+-FOO&#41;BAR
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "FOO)BAR"
+-
+-#data
+-FOO&#x41;BAR
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "FOOABAR"
+-
+-#data
+-FOO&#X41;BAR
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "FOOABAR"
+-
+-#data
+-FOO&#BAR
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "FOO&#BAR"
+-
+-#data
+-FOO&#ZOO
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "FOO&#ZOO"
+-
+-#data
+-FOO&#xBAR
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "FOOºR"
+-
+-#data
+-FOO&#xZOO
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "FOO&#xZOO"
+-
+-#data
+-FOO&#XZOO
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "FOO&#XZOO"
+-
+-#data
+-FOO&#41BAR
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "FOO)BAR"
+-
+-#data
+-FOO&#x41BAR
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "FOO䆺R"
+-
+-#data
+-FOO&#x41ZOO
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "FOOAZOO"
+-
+-#data
+-FOO&#x0000;ZOO
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "FOO�ZOO"
+-
+-#data
+-FOO&#x0078;ZOO
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "FOOxZOO"
+-
+-#data
+-FOO&#x0079;ZOO
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "FOOyZOO"
+-
+-#data
+-FOO&#x0080;ZOO
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "FOO€ZOO"
+-
+-#data
+-FOO&#x0081;ZOO
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "FOOZOO"
+-
+-#data
+-FOO&#x0082;ZOO
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "FOO‚ZOO"
+-
+-#data
+-FOO&#x0083;ZOO
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "FOOƒZOO"
+-
+-#data
+-FOO&#x0084;ZOO
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "FOO„ZOO"
+-
+-#data
+-FOO&#x0085;ZOO
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "FOO…ZOO"
+-
+-#data
+-FOO&#x0086;ZOO
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "FOO†ZOO"
+-
+-#data
+-FOO&#x0087;ZOO
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "FOO‡ZOO"
+-
+-#data
+-FOO&#x0088;ZOO
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "FOOˆZOO"
+-
+-#data
+-FOO&#x0089;ZOO
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "FOO‰ZOO"
+-
+-#data
+-FOO&#x008A;ZOO
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "FOOŠZOO"
+-
+-#data
+-FOO&#x008B;ZOO
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "FOO‹ZOO"
+-
+-#data
+-FOO&#x008C;ZOO
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "FOOŒZOO"
+-
+-#data
+-FOO&#x008D;ZOO
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "FOOZOO"
+-
+-#data
+-FOO&#x008E;ZOO
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "FOOŽZOO"
+-
+-#data
+-FOO&#x008F;ZOO
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "FOOZOO"
+-
+-#data
+-FOO&#x0090;ZOO
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "FOOZOO"
+-
+-#data
+-FOO&#x0091;ZOO
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "FOO‘ZOO"
+-
+-#data
+-FOO&#x0092;ZOO
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "FOO’ZOO"
+-
+-#data
+-FOO&#x0093;ZOO
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "FOO“ZOO"
+-
+-#data
+-FOO&#x0094;ZOO
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "FOO”ZOO"
+-
+-#data
+-FOO&#x0095;ZOO
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "FOO•ZOO"
+-
+-#data
+-FOO&#x0096;ZOO
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "FOO–ZOO"
+-
+-#data
+-FOO&#x0097;ZOO
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "FOO—ZOO"
+-
+-#data
+-FOO&#x0098;ZOO
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "FOO˜ZOO"
+-
+-#data
+-FOO&#x0099;ZOO
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "FOO™ZOO"
+-
+-#data
+-FOO&#x009A;ZOO
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "FOOšZOO"
+-
+-#data
+-FOO&#x009B;ZOO
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "FOO›ZOO"
+-
+-#data
+-FOO&#x009C;ZOO
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "FOOœZOO"
+-
+-#data
+-FOO&#x009D;ZOO
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "FOOZOO"
+-
+-#data
+-FOO&#x009E;ZOO
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "FOOžZOO"
+-
+-#data
+-FOO&#x009F;ZOO
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "FOOŸZOO"
+-
+-#data
+-FOO&#x00A0;ZOO
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "FOO ZOO"
+-
+-#data
+-FOO&#xD7FF;ZOO
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "FOO퟿ZOO"
+-
+-#data
+-FOO&#xD800;ZOO
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "FOO�ZOO"
+-
+-#data
+-FOO&#xD801;ZOO
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "FOO�ZOO"
+-
+-#data
+-FOO&#xDFFE;ZOO
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "FOO�ZOO"
+-
+-#data
+-FOO&#xDFFF;ZOO
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "FOO�ZOO"
+-
+-#data
+-FOO&#xE000;ZOO
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "FOOZOO"
+-
+-#data
+-FOO&#x10FFFE;ZOO
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "FOO􏿾ZOO"
+-
+-#data
+-FOO&#x1087D4;ZOO
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "FOO􈟔ZOO"
+-
+-#data
+-FOO&#x10FFFF;ZOO
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "FOO􏿿ZOO"
+-
+-#data
+-FOO&#x110000;ZOO
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "FOO�ZOO"
+-
+-#data
+-FOO&#xFFFFFF;ZOO
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "FOO�ZOO"
+diff --git a/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/entities02.dat b/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/entities02.dat
+deleted file mode 100644
+index e2fb42a..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/entities02.dat
++++ /dev/null
+@@ -1,249 +0,0 @@
+-#data
+-<div bar="ZZ&gt;YY"></div>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <div>
+-|       bar="ZZ>YY"
+-
+-#data
+-<div bar="ZZ&"></div>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <div>
+-|       bar="ZZ&"
+-
+-#data
+-<div bar='ZZ&'></div>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <div>
+-|       bar="ZZ&"
+-
+-#data
+-<div bar=ZZ&></div>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <div>
+-|       bar="ZZ&"
+-
+-#data
+-<div bar="ZZ&gt=YY"></div>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <div>
+-|       bar="ZZ&gt=YY"
+-
+-#data
+-<div bar="ZZ&gt0YY"></div>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <div>
+-|       bar="ZZ&gt0YY"
+-
+-#data
+-<div bar="ZZ&gt9YY"></div>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <div>
+-|       bar="ZZ&gt9YY"
+-
+-#data
+-<div bar="ZZ&gtaYY"></div>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <div>
+-|       bar="ZZ&gtaYY"
+-
+-#data
+-<div bar="ZZ&gtZYY"></div>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <div>
+-|       bar="ZZ&gtZYY"
+-
+-#data
+-<div bar="ZZ&gt YY"></div>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <div>
+-|       bar="ZZ> YY"
+-
+-#data
+-<div bar="ZZ&gt"></div>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <div>
+-|       bar="ZZ>"
+-
+-#data
+-<div bar='ZZ&gt'></div>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <div>
+-|       bar="ZZ>"
+-
+-#data
+-<div bar=ZZ&gt></div>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <div>
+-|       bar="ZZ>"
+-
+-#data
+-<div bar="ZZ&pound_id=23"></div>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <div>
+-|       bar="ZZ£_id=23"
+-
+-#data
+-<div bar="ZZ&prod_id=23"></div>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <div>
+-|       bar="ZZ&prod_id=23"
+-
+-#data
+-<div bar="ZZ&pound;_id=23"></div>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <div>
+-|       bar="ZZ£_id=23"
+-
+-#data
+-<div bar="ZZ&prod;_id=23"></div>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <div>
+-|       bar="ZZ∏_id=23"
+-
+-#data
+-<div bar="ZZ&pound=23"></div>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <div>
+-|       bar="ZZ&pound=23"
+-
+-#data
+-<div bar="ZZ&prod=23"></div>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <div>
+-|       bar="ZZ&prod=23"
+-
+-#data
+-<div>ZZ&pound_id=23</div>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <div>
+-|       "ZZ£_id=23"
+-
+-#data
+-<div>ZZ&prod_id=23</div>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <div>
+-|       "ZZ&prod_id=23"
+-
+-#data
+-<div>ZZ&pound;_id=23</div>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <div>
+-|       "ZZ£_id=23"
+-
+-#data
+-<div>ZZ&prod;_id=23</div>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <div>
+-|       "ZZ∏_id=23"
+-
+-#data
+-<div>ZZ&pound=23</div>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <div>
+-|       "ZZ£=23"
+-
+-#data
+-<div>ZZ&prod=23</div>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <div>
+-|       "ZZ&prod=23"
+diff --git a/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/html5test-com.dat b/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/html5test-com.dat
+deleted file mode 100644
+index d7cb71d..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/html5test-com.dat
++++ /dev/null
+@@ -1,246 +0,0 @@
+-#data
+-<div<div>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <div<div>
+-
+-#data
+-<div foo<bar=''>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <div>
+-|       foo<bar=""
+-
+-#data
+-<div foo=`bar`>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <div>
+-|       foo="`bar`"
+-
+-#data
+-<div \"foo=''>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <div>
+-|       \"foo=""
+-
+-#data
+-<a href='\nbar'></a>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <a>
+-|       href="\nbar"
+-
+-#data
+-<!DOCTYPE html>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-
+-#data
+-&lang;&rang;
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "⟨⟩"
+-
+-#data
+-&apos;
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "'"
+-
+-#data
+-&ImaginaryI;
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "ⅈ"
+-
+-#data
+-&Kopf;
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "𝕂"
+-
+-#data
+-&notinva;
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "∉"
+-
+-#data
+-<?import namespace="foo" implementation="#bar">
+-#errors
+-#document
+-| <!-- ?import namespace="foo" implementation="#bar" -->
+-| <html>
+-|   <head>
+-|   <body>
+-
+-#data
+-<!--foo--bar-->
+-#errors
+-#document
+-| <!-- foo--bar -->
+-| <html>
+-|   <head>
+-|   <body>
+-
+-#data
+-<![CDATA[x]]>
+-#errors
+-#document
+-| <!-- [CDATA[x]] -->
+-| <html>
+-|   <head>
+-|   <body>
+-
+-#data
+-<textarea><!--</textarea>--></textarea>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <textarea>
+-|       "<!--"
+-|     "-->"
+-
+-#data
+-<textarea><!--</textarea>-->
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <textarea>
+-|       "<!--"
+-|     "-->"
+-
+-#data
+-<style><!--</style>--></style>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|     <style>
+-|       "<!--"
+-|   <body>
+-|     "-->"
+-
+-#data
+-<style><!--</style>-->
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|     <style>
+-|       "<!--"
+-|   <body>
+-|     "-->"
+-
+-#data
+-<ul><li>A </li> <li>B</li></ul>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <ul>
+-|       <li>
+-|         "A "
+-|       " "
+-|       <li>
+-|         "B"
+-
+-#data
+-<table><form><input type=hidden><input></form><div></div></table>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <input>
+-|     <div>
+-|     <table>
+-|       <form>
+-|       <input>
+-|         type="hidden"
+-
+-#data
+-<i>A<b>B<p></i>C</b>D
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <i>
+-|       "A"
+-|       <b>
+-|         "B"
+-|     <b>
+-|     <p>
+-|       <b>
+-|         <i>
+-|         "C"
+-|       "D"
+-
+-#data
+-<div></div>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <div>
+-
+-#data
+-<svg></svg>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <svg svg>
+-
+-#data
+-<math></math>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <math math>
+diff --git a/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/inbody01.dat b/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/inbody01.dat
+deleted file mode 100644
+index 3f2bd37..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/inbody01.dat
++++ /dev/null
+@@ -1,43 +0,0 @@
+-#data
+-<button>1</foo>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <button>
+-|       "1"
+-
+-#data
+-<foo>1<p>2</foo>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <foo>
+-|       "1"
+-|       <p>
+-|         "2"
+-
+-#data
+-<dd>1</foo>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <dd>
+-|       "1"
+-
+-#data
+-<foo>1<dd>2</foo>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <foo>
+-|       "1"
+-|       <dd>
+-|         "2"
+diff --git a/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/isindex.dat b/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/isindex.dat
+deleted file mode 100644
+index 88325ff..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/isindex.dat
++++ /dev/null
+@@ -1,40 +0,0 @@
+-#data
+-<isindex>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <form>
+-|       <hr>
+-|       <label>
+-|         "This is a searchable index. Enter search keywords: "
+-|         <input>
+-|           name="isindex"
+-|       <hr>
+-
+-#data
+-<isindex name="A" action="B" prompt="C" foo="D">
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <form>
+-|       action="B"
+-|       <hr>
+-|       <label>
+-|         "C"
+-|         <input>
+-|           foo="D"
+-|           name="isindex"
+-|       <hr>
+-
+-#data
+-<form><isindex>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <form>
+diff --git a/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/pending-spec-changes-plain-text-unsafe.dat b/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/pending-spec-changes-plain-text-unsafe.dat
+deleted file mode 100644
+index a5ebb1eb285116af391137bc94beac0c8a6834b4..0000000000000000000000000000000000000000
+GIT binary patch
+literal 0
+HcmV?d00001
+
+literal 115
+zcmXZUQ3`+{41i&ucZ#9c5brYEqF^T2f`Sg8m2W@)!xxy0Am++fibh!_xp`HU=1fj=
+l5Tv!*b_iUjqsV4(V_d9g>VZ9lc;ttC7t#O7YxuDS4-Zl&BR>ED
+
+diff --git a/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/pending-spec-changes.dat b/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/pending-spec-changes.dat
+deleted file mode 100644
+index 5a92084..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/pending-spec-changes.dat
++++ /dev/null
+@@ -1,52 +0,0 @@
+-#data
+-<input type="hidden"><frameset>
+-#errors
+-21: Start tag seen without seeing a doctype first. Expected “<!DOCTYPE html>”.
+-31: “frameset” start tag seen.
+-31: End of file seen and there were open elements.
+-#document
+-| <html>
+-|   <head>
+-|   <frameset>
+-
+-#data
+-<!DOCTYPE html><table><caption><svg>foo</table>bar
+-#errors
+-47: End tag “table” did not match the name of the current open element (“svg”).
+-47: “table” closed but “caption” was still open.
+-47: End tag “table” seen, but there were open elements.
+-36: Unclosed element “svg”.
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <table>
+-|       <caption>
+-|         <svg svg>
+-|           "foo"
+-|     "bar"
+-
+-#data
+-<table><tr><td><svg><desc><td></desc><circle>
+-#errors
+-7: Start tag seen without seeing a doctype first. Expected “<!DOCTYPE html>”.
+-30: A table cell was implicitly closed, but there were open elements.
+-26: Unclosed element “desc”.
+-20: Unclosed element “svg”.
+-37: Stray end tag “desc”.
+-45: End of file seen and there were open elements.
+-45: Unclosed element “circle”.
+-7: Unclosed element “table”.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <table>
+-|       <tbody>
+-|         <tr>
+-|           <td>
+-|             <svg svg>
+-|               <svg desc>
+-|           <td>
+-|             <circle>
+diff --git a/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/plain-text-unsafe.dat b/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/plain-text-unsafe.dat
+deleted file mode 100644
+index 04cc11fb9d458ea32dca02e2f3bf39221196ab8e..0000000000000000000000000000000000000000
+GIT binary patch
+literal 0
+HcmV?d00001
+
+literal 4166
+zcmds4&1%~~5Y}1XcNojiOL5|Zdr6iB6Q@@dnjYGa!^&DaD*8j(rYZE*N*}4O(Ai!6
+zt?Ym#%MQd~yz+X#nfd0M+40P0g4rKk_ucGyu~@9HzqzhG<5`wuxjplf&5wx3!u}29
+zQA8od1>ll1zgT*S|4T0c9E6$RdB?_+5>}tF$TnjU&$*!FvRd{rQXeva!GcpkGm9M!
+zZBWBlo0XH%V%aC7h2%Ws8$qo;*=zDp0+b3F08~mq!5(ow4OtKi{*2LVgD~WoB_9R=
+z%96mMsPR;h$nTtgfB$G~Tu5~MsAZ5p?I@Yv->g@6t9!$ThX*>G;HMo(<c>}#7Rl5a
+zZg4uE1I7jOIW4nFO4J6iM;kDRJYY@HxlJ-2>|)pZu4LM<KOUh3ErDsMA{%qAZOUw$
+zscvR?JZJVK)-qamv2krWRmhr-vcp#rkm+dl=W)$LH~WnyKCXT2=DO^$@Rb}6$4@S`
+zDy!WdH|zeTS5P`WCOgJYv%MecK8>qS(UCIoh@(L9F*ZXarNczOPxy50-rRltbPH<s
+zA!)|x#GcqI^c|On6=j}5m2?=K6mlgf$6nP%Y{F?5Ca>+lsqMcUzhGX-DG?dIeM%yw
+zq)6Z5tV+moc?F-@Px$g4N7@AhG2|lSEV{6lAFkjw_958<_GvD+SlP=VmQ!lVHXJsI
+znhY+?3E0d<$JA<%tK<^VtQR#nU@+CT{-PMJ<%52yKtV-o{8a81dy0d-O}vj9)n^7k
+zT4d@%G%wJ%%qhlePD%yYMMpP?=tpcJ%YZV=t3+x1nKCnh=v}&mgl&nSNPf^%ki)ze
+l`$yqfayHMBo}R^L^DOS^S$;Op@}8cl(m$8f+I>c;?LSw^)ujLc
+
+diff --git a/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/scriptdata01.dat b/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/scriptdata01.dat
+deleted file mode 100644
+index 76b67f4..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/scriptdata01.dat
++++ /dev/null
+@@ -1,308 +0,0 @@
+-#data
+-FOO<script>'Hello'</script>BAR
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "FOO"
+-|     <script>
+-|       "'Hello'"
+-|     "BAR"
+-
+-#data
+-FOO<script></script>BAR
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "FOO"
+-|     <script>
+-|     "BAR"
+-
+-#data
+-FOO<script></script >BAR
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "FOO"
+-|     <script>
+-|     "BAR"
+-
+-#data
+-FOO<script></script/>BAR
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "FOO"
+-|     <script>
+-|     "BAR"
+-
+-#data
+-FOO<script></script/ >BAR
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "FOO"
+-|     <script>
+-|     "BAR"
+-
+-#data
+-FOO<script type="text/plain"></scriptx>BAR
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "FOO"
+-|     <script>
+-|       type="text/plain"
+-|       "</scriptx>BAR"
+-
+-#data
+-FOO<script></script foo=">" dd>BAR
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "FOO"
+-|     <script>
+-|     "BAR"
+-
+-#data
+-FOO<script>'<'</script>BAR
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "FOO"
+-|     <script>
+-|       "'<'"
+-|     "BAR"
+-
+-#data
+-FOO<script>'<!'</script>BAR
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "FOO"
+-|     <script>
+-|       "'<!'"
+-|     "BAR"
+-
+-#data
+-FOO<script>'<!-'</script>BAR
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "FOO"
+-|     <script>
+-|       "'<!-'"
+-|     "BAR"
+-
+-#data
+-FOO<script>'<!--'</script>BAR
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "FOO"
+-|     <script>
+-|       "'<!--'"
+-|     "BAR"
+-
+-#data
+-FOO<script>'<!---'</script>BAR
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "FOO"
+-|     <script>
+-|       "'<!---'"
+-|     "BAR"
+-
+-#data
+-FOO<script>'<!-->'</script>BAR
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "FOO"
+-|     <script>
+-|       "'<!-->'"
+-|     "BAR"
+-
+-#data
+-FOO<script>'<!-->'</script>BAR
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "FOO"
+-|     <script>
+-|       "'<!-->'"
+-|     "BAR"
+-
+-#data
+-FOO<script>'<!-- potato'</script>BAR
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "FOO"
+-|     <script>
+-|       "'<!-- potato'"
+-|     "BAR"
+-
+-#data
+-FOO<script>'<!-- <sCrIpt'</script>BAR
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "FOO"
+-|     <script>
+-|       "'<!-- <sCrIpt'"
+-|     "BAR"
+-
+-#data
+-FOO<script type="text/plain">'<!-- <sCrIpt>'</script>BAR
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "FOO"
+-|     <script>
+-|       type="text/plain"
+-|       "'<!-- <sCrIpt>'</script>BAR"
+-
+-#data
+-FOO<script type="text/plain">'<!-- <sCrIpt> -'</script>BAR
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "FOO"
+-|     <script>
+-|       type="text/plain"
+-|       "'<!-- <sCrIpt> -'</script>BAR"
+-
+-#data
+-FOO<script type="text/plain">'<!-- <sCrIpt> --'</script>BAR
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "FOO"
+-|     <script>
+-|       type="text/plain"
+-|       "'<!-- <sCrIpt> --'</script>BAR"
+-
+-#data
+-FOO<script>'<!-- <sCrIpt> -->'</script>BAR
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "FOO"
+-|     <script>
+-|       "'<!-- <sCrIpt> -->'"
+-|     "BAR"
+-
+-#data
+-FOO<script type="text/plain">'<!-- <sCrIpt> --!>'</script>BAR
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "FOO"
+-|     <script>
+-|       type="text/plain"
+-|       "'<!-- <sCrIpt> --!>'</script>BAR"
+-
+-#data
+-FOO<script type="text/plain">'<!-- <sCrIpt> -- >'</script>BAR
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "FOO"
+-|     <script>
+-|       type="text/plain"
+-|       "'<!-- <sCrIpt> -- >'</script>BAR"
+-
+-#data
+-FOO<script type="text/plain">'<!-- <sCrIpt '</script>BAR
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "FOO"
+-|     <script>
+-|       type="text/plain"
+-|       "'<!-- <sCrIpt '</script>BAR"
+-
+-#data
+-FOO<script type="text/plain">'<!-- <sCrIpt/'</script>BAR
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "FOO"
+-|     <script>
+-|       type="text/plain"
+-|       "'<!-- <sCrIpt/'</script>BAR"
+-
+-#data
+-FOO<script type="text/plain">'<!-- <sCrIpt\'</script>BAR
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "FOO"
+-|     <script>
+-|       type="text/plain"
+-|       "'<!-- <sCrIpt\'"
+-|     "BAR"
+-
+-#data
+-FOO<script type="text/plain">'<!-- <sCrIpt/'</script>BAR</script>QUX
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "FOO"
+-|     <script>
+-|       type="text/plain"
+-|       "'<!-- <sCrIpt/'</script>BAR"
+-|     "QUX"
+diff --git a/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/scripted/adoption01.dat b/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/scripted/adoption01.dat
+deleted file mode 100644
+index 4e08d0e..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/scripted/adoption01.dat
++++ /dev/null
+@@ -1,15 +0,0 @@
+-#data
+-<p><b id="A"><script>document.getElementById("A").id = "B"</script></p>TEXT</b>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <p>
+-|       <b>
+-|         id="B"
+-|         <script>
+-|           "document.getElementById("A").id = "B""
+-|     <b>
+-|       id="A"
+-|       "TEXT"
+diff --git a/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/scripted/webkit01.dat b/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/scripted/webkit01.dat
+deleted file mode 100644
+index ef4a41c..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/scripted/webkit01.dat
++++ /dev/null
+@@ -1,28 +0,0 @@
+-#data
+-1<script>document.write("2")</script>3
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "1"
+-|     <script>
+-|       "document.write("2")"
+-|     "23"
+-
+-#data
+-1<script>document.write("<script>document.write('2')</scr"+ "ipt><script>document.write('3')</scr" + "ipt>")</script>4
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "1"
+-|     <script>
+-|       "document.write("<script>document.write('2')</scr"+ "ipt><script>document.write('3')</scr" + "ipt>")"
+-|     <script>
+-|       "document.write('2')"
+-|     "2"
+-|     <script>
+-|       "document.write('3')"
+-|     "34"
+diff --git a/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tables01.dat b/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tables01.dat
+deleted file mode 100644
+index c4b47e4..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tables01.dat
++++ /dev/null
+@@ -1,212 +0,0 @@
+-#data
+-<table><th>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <table>
+-|       <tbody>
+-|         <tr>
+-|           <th>
+-
+-#data
+-<table><td>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <table>
+-|       <tbody>
+-|         <tr>
+-|           <td>
+-
+-#data
+-<table><col foo='bar'>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <table>
+-|       <colgroup>
+-|         <col>
+-|           foo="bar"
+-
+-#data
+-<table><colgroup></html>foo
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "foo"
+-|     <table>
+-|       <colgroup>
+-
+-#data
+-<table></table><p>foo
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <table>
+-|     <p>
+-|       "foo"
+-
+-#data
+-<table></body></caption></col></colgroup></html></tbody></td></tfoot></th></thead></tr><td>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <table>
+-|       <tbody>
+-|         <tr>
+-|           <td>
+-
+-#data
+-<table><select><option>3</select></table>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <select>
+-|       <option>
+-|         "3"
+-|     <table>
+-
+-#data
+-<table><select><table></table></select></table>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <select>
+-|     <table>
+-|     <table>
+-
+-#data
+-<table><select></table>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <select>
+-|     <table>
+-
+-#data
+-<table><select><option>A<tr><td>B</td></tr></table>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <select>
+-|       <option>
+-|         "A"
+-|     <table>
+-|       <tbody>
+-|         <tr>
+-|           <td>
+-|             "B"
+-
+-#data
+-<table><td></body></caption></col></colgroup></html>foo
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <table>
+-|       <tbody>
+-|         <tr>
+-|           <td>
+-|             "foo"
+-
+-#data
+-<table><td>A</table>B
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <table>
+-|       <tbody>
+-|         <tr>
+-|           <td>
+-|             "A"
+-|     "B"
+-
+-#data
+-<table><tr><caption>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <table>
+-|       <tbody>
+-|         <tr>
+-|       <caption>
+-
+-#data
+-<table><tr></body></caption></col></colgroup></html></td></th><td>foo
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <table>
+-|       <tbody>
+-|         <tr>
+-|           <td>
+-|             "foo"
+-
+-#data
+-<table><td><tr>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <table>
+-|       <tbody>
+-|         <tr>
+-|           <td>
+-|         <tr>
+-
+-#data
+-<table><td><button><td>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <table>
+-|       <tbody>
+-|         <tr>
+-|           <td>
+-|             <button>
+-|           <td>
+-
+-#data
+-<table><tr><td><svg><desc><td>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <table>
+-|       <tbody>
+-|         <tr>
+-|           <td>
+-|             <svg svg>
+-|               <svg desc>
+-|           <td>
+diff --git a/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests1.dat b/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests1.dat
+deleted file mode 100644
+index cbf8bdd..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests1.dat
++++ /dev/null
+@@ -1,1952 +0,0 @@
+-#data
+-Test
+-#errors
+-Line: 1 Col: 4 Unexpected non-space characters. Expected DOCTYPE.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "Test"
+-
+-#data
+-<p>One<p>Two
+-#errors
+-Line: 1 Col: 3 Unexpected start tag (p). Expected DOCTYPE.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <p>
+-|       "One"
+-|     <p>
+-|       "Two"
+-
+-#data
+-Line1<br>Line2<br>Line3<br>Line4
+-#errors
+-Line: 1 Col: 5 Unexpected non-space characters. Expected DOCTYPE.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "Line1"
+-|     <br>
+-|     "Line2"
+-|     <br>
+-|     "Line3"
+-|     <br>
+-|     "Line4"
+-
+-#data
+-<html>
+-#errors
+-Line: 1 Col: 6 Unexpected start tag (html). Expected DOCTYPE.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-
+-#data
+-<head>
+-#errors
+-Line: 1 Col: 6 Unexpected start tag (head). Expected DOCTYPE.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-
+-#data
+-<body>
+-#errors
+-Line: 1 Col: 6 Unexpected start tag (body). Expected DOCTYPE.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-
+-#data
+-<html><head>
+-#errors
+-Line: 1 Col: 6 Unexpected start tag (html). Expected DOCTYPE.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-
+-#data
+-<html><head></head>
+-#errors
+-Line: 1 Col: 6 Unexpected start tag (html). Expected DOCTYPE.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-
+-#data
+-<html><head></head><body>
+-#errors
+-Line: 1 Col: 6 Unexpected start tag (html). Expected DOCTYPE.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-
+-#data
+-<html><head></head><body></body>
+-#errors
+-Line: 1 Col: 6 Unexpected start tag (html). Expected DOCTYPE.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-
+-#data
+-<html><head><body></body></html>
+-#errors
+-Line: 1 Col: 6 Unexpected start tag (html). Expected DOCTYPE.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-
+-#data
+-<html><head></body></html>
+-#errors
+-Line: 1 Col: 6 Unexpected start tag (html). Expected DOCTYPE.
+-Line: 1 Col: 19 Unexpected end tag (body).
+-Line: 1 Col: 26 Unexpected end tag (html).
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-
+-#data
+-<html><head><body></html>
+-#errors
+-Line: 1 Col: 6 Unexpected start tag (html). Expected DOCTYPE.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-
+-#data
+-<html><body></html>
+-#errors
+-Line: 1 Col: 6 Unexpected start tag (html). Expected DOCTYPE.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-
+-#data
+-<body></html>
+-#errors
+-Line: 1 Col: 6 Unexpected start tag (body). Expected DOCTYPE.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-
+-#data
+-<head></html>
+-#errors
+-Line: 1 Col: 6 Unexpected start tag (head). Expected DOCTYPE.
+-Line: 1 Col: 13 Unexpected end tag (html). Ignored.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-
+-#data
+-</head>
+-#errors
+-Line: 1 Col: 7 Unexpected end tag (head). Expected DOCTYPE.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-
+-#data
+-</body>
+-#errors
+-Line: 1 Col: 7 Unexpected end tag (body). Expected DOCTYPE.
+-Line: 1 Col: 7 Unexpected end tag (body) after the (implied) root element.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-
+-#data
+-</html>
+-#errors
+-Line: 1 Col: 7 Unexpected end tag (html). Expected DOCTYPE.
+-Line: 1 Col: 7 Unexpected end tag (html) after the (implied) root element.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-
+-#data
+-<b><table><td><i></table>
+-#errors
+-Line: 1 Col: 3 Unexpected start tag (b). Expected DOCTYPE.
+-Line: 1 Col: 14 Unexpected table cell start tag (td) in the table body phase.
+-Line: 1 Col: 25 Got table cell end tag (td) while required end tags are missing.
+-Line: 1 Col: 25 Expected closing tag. Unexpected end of file.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <b>
+-|       <table>
+-|         <tbody>
+-|           <tr>
+-|             <td>
+-|               <i>
+-
+-#data
+-<b><table><td></b><i></table>X
+-#errors
+-Line: 1 Col: 3 Unexpected start tag (b). Expected DOCTYPE.
+-Line: 1 Col: 14 Unexpected table cell start tag (td) in the table body phase.
+-Line: 1 Col: 18 End tag (b) violates step 1, paragraph 1 of the adoption agency algorithm.
+-Line: 1 Col: 29 Got table cell end tag (td) while required end tags are missing.
+-Line: 1 Col: 30 Expected closing tag. Unexpected end of file.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <b>
+-|       <table>
+-|         <tbody>
+-|           <tr>
+-|             <td>
+-|               <i>
+-|       "X"
+-
+-#data
+-<h1>Hello<h2>World
+-#errors
+-4: Start tag seen without seeing a doctype first. Expected “<!DOCTYPE html>”.
+-13: Heading cannot be a child of another heading.
+-18: End of file seen and there were open elements.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <h1>
+-|       "Hello"
+-|     <h2>
+-|       "World"
+-
+-#data
+-<a><p>X<a>Y</a>Z</p></a>
+-#errors
+-Line: 1 Col: 3 Unexpected start tag (a). Expected DOCTYPE.
+-Line: 1 Col: 10 Unexpected start tag (a) implies end tag (a).
+-Line: 1 Col: 10 End tag (a) violates step 1, paragraph 3 of the adoption agency algorithm.
+-Line: 1 Col: 24 End tag (a) violates step 1, paragraph 1 of the adoption agency algorithm.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <a>
+-|     <p>
+-|       <a>
+-|         "X"
+-|       <a>
+-|         "Y"
+-|       "Z"
+-
+-#data
+-<b><button>foo</b>bar
+-#errors
+-Line: 1 Col: 3 Unexpected start tag (b). Expected DOCTYPE.
+-Line: 1 Col: 15 End tag (b) violates step 1, paragraph 1 of the adoption agency algorithm.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <b>
+-|     <button>
+-|       <b>
+-|         "foo"
+-|       "bar"
+-
+-#data
+-<!DOCTYPE html><span><button>foo</span>bar
+-#errors
+-39: End tag “span” seen but there were unclosed elements.
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <span>
+-|       <button>
+-|         "foobar"
+-
+-#data
+-<p><b><div><marquee></p></b></div>X
+-#errors
+-Line: 1 Col: 3 Unexpected start tag (p). Expected DOCTYPE.
+-Line: 1 Col: 11 Unexpected end tag (p). Ignored.
+-Line: 1 Col: 24 Unexpected end tag (p). Ignored.
+-Line: 1 Col: 28 End tag (b) violates step 1, paragraph 1 of the adoption agency algorithm.
+-Line: 1 Col: 34 End tag (div) seen too early. Expected other end tag.
+-Line: 1 Col: 35 Expected closing tag. Unexpected end of file.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <p>
+-|       <b>
+-|     <div>
+-|       <b>
+-|         <marquee>
+-|           <p>
+-|           "X"
+-
+-#data
+-<script><div></script></div><title><p></title><p><p>
+-#errors
+-Line: 1 Col: 8 Unexpected start tag (script). Expected DOCTYPE.
+-Line: 1 Col: 28 Unexpected end tag (div). Ignored.
+-#document
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<div>"
+-|     <title>
+-|       "<p>"
+-|   <body>
+-|     <p>
+-|     <p>
+-
+-#data
+-<!--><div>--<!-->
+-#errors
+-Line: 1 Col: 5 Incorrect comment.
+-Line: 1 Col: 10 Unexpected start tag (div). Expected DOCTYPE.
+-Line: 1 Col: 17 Incorrect comment.
+-Line: 1 Col: 17 Expected closing tag. Unexpected end of file.
+-#document
+-| <!--  -->
+-| <html>
+-|   <head>
+-|   <body>
+-|     <div>
+-|       "--"
+-|       <!--  -->
+-
+-#data
+-<p><hr></p>
+-#errors
+-Line: 1 Col: 3 Unexpected start tag (p). Expected DOCTYPE.
+-Line: 1 Col: 11 Unexpected end tag (p). Ignored.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <p>
+-|     <hr>
+-|     <p>
+-
+-#data
+-<select><b><option><select><option></b></select>X
+-#errors
+-Line: 1 Col: 8 Unexpected start tag (select). Expected DOCTYPE.
+-Line: 1 Col: 11 Unexpected start tag token (b) in the select phase. Ignored.
+-Line: 1 Col: 27 Unexpected select start tag in the select phase treated as select end tag.
+-Line: 1 Col: 39 End tag (b) violates step 1, paragraph 1 of the adoption agency algorithm.
+-Line: 1 Col: 48 Unexpected end tag (select). Ignored.
+-Line: 1 Col: 49 Expected closing tag. Unexpected end of file.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <select>
+-|       <option>
+-|     <option>
+-|       "X"
+-
+-#data
+-<a><table><td><a><table></table><a></tr><a></table><b>X</b>C<a>Y
+-#errors
+-Line: 1 Col: 3 Unexpected start tag (a). Expected DOCTYPE.
+-Line: 1 Col: 14 Unexpected table cell start tag (td) in the table body phase.
+-Line: 1 Col: 35 Unexpected start tag (a) implies end tag (a).
+-Line: 1 Col: 40 Got table cell end tag (td) while required end tags are missing.
+-Line: 1 Col: 43 Unexpected start tag (a) in table context caused voodoo mode.
+-Line: 1 Col: 43 Unexpected start tag (a) implies end tag (a).
+-Line: 1 Col: 43 End tag (a) violates step 1, paragraph 1 of the adoption agency algorithm.
+-Line: 1 Col: 51 Unexpected implied end tag (a) in the table phase.
+-Line: 1 Col: 63 Unexpected start tag (a) implies end tag (a).
+-Line: 1 Col: 64 Expected closing tag. Unexpected end of file.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <a>
+-|       <a>
+-|       <table>
+-|         <tbody>
+-|           <tr>
+-|             <td>
+-|               <a>
+-|                 <table>
+-|               <a>
+-|     <a>
+-|       <b>
+-|         "X"
+-|       "C"
+-|     <a>
+-|       "Y"
+-
+-#data
+-<a X>0<b>1<a Y>2
+-#errors
+-Line: 1 Col: 5 Unexpected start tag (a). Expected DOCTYPE.
+-Line: 1 Col: 15 Unexpected start tag (a) implies end tag (a).
+-Line: 1 Col: 15 End tag (a) violates step 1, paragraph 3 of the adoption agency algorithm.
+-Line: 1 Col: 16 Expected closing tag. Unexpected end of file.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <a>
+-|       x=""
+-|       "0"
+-|       <b>
+-|         "1"
+-|     <b>
+-|       <a>
+-|         y=""
+-|         "2"
+-
+-#data
+-<!-----><font><div>hello<table>excite!<b>me!<th><i>please!</tr><!--X-->
+-#errors
+-Line: 1 Col: 7 Unexpected '-' after '--' found in comment.
+-Line: 1 Col: 14 Unexpected start tag (font). Expected DOCTYPE.
+-Line: 1 Col: 38 Unexpected non-space characters in table context caused voodoo mode.
+-Line: 1 Col: 41 Unexpected start tag (b) in table context caused voodoo mode.
+-Line: 1 Col: 48 Unexpected implied end tag (b) in the table phase.
+-Line: 1 Col: 48 Unexpected table cell start tag (th) in the table body phase.
+-Line: 1 Col: 63 Got table cell end tag (th) while required end tags are missing.
+-Line: 1 Col: 71 Unexpected end of file. Expected table content.
+-#document
+-| <!-- - -->
+-| <html>
+-|   <head>
+-|   <body>
+-|     <font>
+-|       <div>
+-|         "helloexcite!"
+-|         <b>
+-|           "me!"
+-|         <table>
+-|           <tbody>
+-|             <tr>
+-|               <th>
+-|                 <i>
+-|                   "please!"
+-|             <!-- X -->
+-
+-#data
+-<!DOCTYPE html><li>hello<li>world<ul>how<li>do</ul>you</body><!--do-->
+-#errors
+-Line: 1 Col: 61 Unexpected end tag (li). Missing end tag (body).
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <li>
+-|       "hello"
+-|     <li>
+-|       "world"
+-|       <ul>
+-|         "how"
+-|         <li>
+-|           "do"
+-|       "you"
+-|   <!-- do -->
+-
+-#data
+-<!DOCTYPE html>A<option>B<optgroup>C<select>D</option>E
+-#errors
+-Line: 1 Col: 54 Unexpected end tag (option) in the select phase. Ignored.
+-Line: 1 Col: 55 Expected closing tag. Unexpected end of file.
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     "A"
+-|     <option>
+-|       "B"
+-|     <optgroup>
+-|       "C"
+-|       <select>
+-|         "DE"
+-
+-#data
+-<
+-#errors
+-Line: 1 Col: 1 Expected tag name. Got something else instead
+-Line: 1 Col: 1 Unexpected non-space characters. Expected DOCTYPE.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "<"
+-
+-#data
+-<#
+-#errors
+-Line: 1 Col: 1 Expected tag name. Got something else instead
+-Line: 1 Col: 1 Unexpected non-space characters. Expected DOCTYPE.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "<#"
+-
+-#data
+-</
+-#errors
+-Line: 1 Col: 2 Expected closing tag. Unexpected end of file.
+-Line: 1 Col: 2 Unexpected non-space characters. Expected DOCTYPE.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "</"
+-
+-#data
+-</#
+-#errors
+-Line: 1 Col: 2 Expected closing tag. Unexpected character '#' found.
+-Line: 1 Col: 3 Unexpected End of file. Expected DOCTYPE.
+-#document
+-| <!-- # -->
+-| <html>
+-|   <head>
+-|   <body>
+-
+-#data
+-<?
+-#errors
+-Line: 1 Col: 1 Expected tag name. Got '?' instead. (HTML doesn't support processing instructions.)
+-Line: 1 Col: 2 Unexpected End of file. Expected DOCTYPE.
+-#document
+-| <!-- ? -->
+-| <html>
+-|   <head>
+-|   <body>
+-
+-#data
+-<?#
+-#errors
+-Line: 1 Col: 1 Expected tag name. Got '?' instead. (HTML doesn't support processing instructions.)
+-Line: 1 Col: 3 Unexpected End of file. Expected DOCTYPE.
+-#document
+-| <!-- ?# -->
+-| <html>
+-|   <head>
+-|   <body>
+-
+-#data
+-<!
+-#errors
+-Line: 1 Col: 2 Expected '--' or 'DOCTYPE'. Not found.
+-Line: 1 Col: 2 Unexpected End of file. Expected DOCTYPE.
+-#document
+-| <!--  -->
+-| <html>
+-|   <head>
+-|   <body>
+-
+-#data
+-<!#
+-#errors
+-Line: 1 Col: 3 Expected '--' or 'DOCTYPE'. Not found.
+-Line: 1 Col: 3 Unexpected End of file. Expected DOCTYPE.
+-#document
+-| <!-- # -->
+-| <html>
+-|   <head>
+-|   <body>
+-
+-#data
+-<?COMMENT?>
+-#errors
+-Line: 1 Col: 1 Expected tag name. Got '?' instead. (HTML doesn't support processing instructions.)
+-Line: 1 Col: 11 Unexpected End of file. Expected DOCTYPE.
+-#document
+-| <!-- ?COMMENT? -->
+-| <html>
+-|   <head>
+-|   <body>
+-
+-#data
+-<!COMMENT>
+-#errors
+-Line: 1 Col: 2 Expected '--' or 'DOCTYPE'. Not found.
+-Line: 1 Col: 10 Unexpected End of file. Expected DOCTYPE.
+-#document
+-| <!-- COMMENT -->
+-| <html>
+-|   <head>
+-|   <body>
+-
+-#data
+-</ COMMENT >
+-#errors
+-Line: 1 Col: 2 Expected closing tag. Unexpected character ' ' found.
+-Line: 1 Col: 12 Unexpected End of file. Expected DOCTYPE.
+-#document
+-| <!--  COMMENT  -->
+-| <html>
+-|   <head>
+-|   <body>
+-
+-#data
+-<?COM--MENT?>
+-#errors
+-Line: 1 Col: 1 Expected tag name. Got '?' instead. (HTML doesn't support processing instructions.)
+-Line: 1 Col: 13 Unexpected End of file. Expected DOCTYPE.
+-#document
+-| <!-- ?COM--MENT? -->
+-| <html>
+-|   <head>
+-|   <body>
+-
+-#data
+-<!COM--MENT>
+-#errors
+-Line: 1 Col: 2 Expected '--' or 'DOCTYPE'. Not found.
+-Line: 1 Col: 12 Unexpected End of file. Expected DOCTYPE.
+-#document
+-| <!-- COM--MENT -->
+-| <html>
+-|   <head>
+-|   <body>
+-
+-#data
+-</ COM--MENT >
+-#errors
+-Line: 1 Col: 2 Expected closing tag. Unexpected character ' ' found.
+-Line: 1 Col: 14 Unexpected End of file. Expected DOCTYPE.
+-#document
+-| <!--  COM--MENT  -->
+-| <html>
+-|   <head>
+-|   <body>
+-
+-#data
+-<!DOCTYPE html><style> EOF
+-#errors
+-Line: 1 Col: 26 Unexpected end of file. Expected end tag (style).
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|     <style>
+-|       " EOF"
+-|   <body>
+-
+-#data
+-<!DOCTYPE html><script> <!-- </script> --> </script> EOF
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|     <script>
+-|       " <!-- "
+-|     " "
+-|   <body>
+-|     "-->  EOF"
+-
+-#data
+-<b><p></b>TEST
+-#errors
+-Line: 1 Col: 3 Unexpected start tag (b). Expected DOCTYPE.
+-Line: 1 Col: 10 End tag (b) violates step 1, paragraph 3 of the adoption agency algorithm.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <b>
+-|     <p>
+-|       <b>
+-|       "TEST"
+-
+-#data
+-<p id=a><b><p id=b></b>TEST
+-#errors
+-Line: 1 Col: 8 Unexpected start tag (p). Expected DOCTYPE.
+-Line: 1 Col: 19 Unexpected end tag (p). Ignored.
+-Line: 1 Col: 23 End tag (b) violates step 1, paragraph 2 of the adoption agency algorithm.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <p>
+-|       id="a"
+-|       <b>
+-|     <p>
+-|       id="b"
+-|       "TEST"
+-
+-#data
+-<b id=a><p><b id=b></p></b>TEST
+-#errors
+-Line: 1 Col: 8 Unexpected start tag (b). Expected DOCTYPE.
+-Line: 1 Col: 23 Unexpected end tag (p). Ignored.
+-Line: 1 Col: 27 End tag (b) violates step 1, paragraph 2 of the adoption agency algorithm.
+-Line: 1 Col: 31 Expected closing tag. Unexpected end of file.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <b>
+-|       id="a"
+-|       <p>
+-|         <b>
+-|           id="b"
+-|       "TEST"
+-
+-#data
+-<!DOCTYPE html><title>U-test</title><body><div><p>Test<u></p></div></body>
+-#errors
+-Line: 1 Col: 61 Unexpected end tag (p). Ignored.
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|     <title>
+-|       "U-test"
+-|   <body>
+-|     <div>
+-|       <p>
+-|         "Test"
+-|         <u>
+-
+-#data
+-<!DOCTYPE html><font><table></font></table></font>
+-#errors
+-Line: 1 Col: 35 Unexpected end tag (font) in table context caused voodoo mode.
+-Line: 1 Col: 35 End tag (font) violates step 1, paragraph 1 of the adoption agency algorithm.
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <font>
+-|       <table>
+-
+-#data
+-<font><p>hello<b>cruel</font>world
+-#errors
+-Line: 1 Col: 6 Unexpected start tag (font). Expected DOCTYPE.
+-Line: 1 Col: 29 End tag (font) violates step 1, paragraph 3 of the adoption agency algorithm.
+-Line: 1 Col: 29 End tag (font) violates step 1, paragraph 3 of the adoption agency algorithm.
+-Line: 1 Col: 34 Expected closing tag. Unexpected end of file.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <font>
+-|     <p>
+-|       <font>
+-|         "hello"
+-|         <b>
+-|           "cruel"
+-|       <b>
+-|         "world"
+-
+-#data
+-<b>Test</i>Test
+-#errors
+-Line: 1 Col: 3 Unexpected start tag (b). Expected DOCTYPE.
+-Line: 1 Col: 11 End tag (i) violates step 1, paragraph 1 of the adoption agency algorithm.
+-Line: 1 Col: 15 Expected closing tag. Unexpected end of file.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <b>
+-|       "TestTest"
+-
+-#data
+-<b>A<cite>B<div>C
+-#errors
+-Line: 1 Col: 3 Unexpected start tag (b). Expected DOCTYPE.
+-Line: 1 Col: 17 Expected closing tag. Unexpected end of file.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <b>
+-|       "A"
+-|       <cite>
+-|         "B"
+-|         <div>
+-|           "C"
+-
+-#data
+-<b>A<cite>B<div>C</cite>D
+-#errors
+-Line: 1 Col: 3 Unexpected start tag (b). Expected DOCTYPE.
+-Line: 1 Col: 24 Unexpected end tag (cite). Ignored.
+-Line: 1 Col: 25 Expected closing tag. Unexpected end of file.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <b>
+-|       "A"
+-|       <cite>
+-|         "B"
+-|         <div>
+-|           "CD"
+-
+-#data
+-<b>A<cite>B<div>C</b>D
+-#errors
+-Line: 1 Col: 3 Unexpected start tag (b). Expected DOCTYPE.
+-Line: 1 Col: 21 End tag (b) violates step 1, paragraph 3 of the adoption agency algorithm.
+-Line: 1 Col: 22 Expected closing tag. Unexpected end of file.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <b>
+-|       "A"
+-|       <cite>
+-|         "B"
+-|     <div>
+-|       <b>
+-|         "C"
+-|       "D"
+-
+-#data
+-
+-#errors
+-Line: 1 Col: 0 Unexpected End of file. Expected DOCTYPE.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-
+-#data
+-<DIV>
+-#errors
+-Line: 1 Col: 5 Unexpected start tag (div). Expected DOCTYPE.
+-Line: 1 Col: 5 Expected closing tag. Unexpected end of file.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <div>
+-
+-#data
+-<DIV> abc
+-#errors
+-Line: 1 Col: 5 Unexpected start tag (div). Expected DOCTYPE.
+-Line: 1 Col: 9 Expected closing tag. Unexpected end of file.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <div>
+-|       " abc"
+-
+-#data
+-<DIV> abc <B>
+-#errors
+-Line: 1 Col: 5 Unexpected start tag (div). Expected DOCTYPE.
+-Line: 1 Col: 13 Expected closing tag. Unexpected end of file.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <div>
+-|       " abc "
+-|       <b>
+-
+-#data
+-<DIV> abc <B> def
+-#errors
+-Line: 1 Col: 5 Unexpected start tag (div). Expected DOCTYPE.
+-Line: 1 Col: 17 Expected closing tag. Unexpected end of file.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <div>
+-|       " abc "
+-|       <b>
+-|         " def"
+-
+-#data
+-<DIV> abc <B> def <I>
+-#errors
+-Line: 1 Col: 5 Unexpected start tag (div). Expected DOCTYPE.
+-Line: 1 Col: 21 Expected closing tag. Unexpected end of file.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <div>
+-|       " abc "
+-|       <b>
+-|         " def "
+-|         <i>
+-
+-#data
+-<DIV> abc <B> def <I> ghi
+-#errors
+-Line: 1 Col: 5 Unexpected start tag (div). Expected DOCTYPE.
+-Line: 1 Col: 25 Expected closing tag. Unexpected end of file.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <div>
+-|       " abc "
+-|       <b>
+-|         " def "
+-|         <i>
+-|           " ghi"
+-
+-#data
+-<DIV> abc <B> def <I> ghi <P>
+-#errors
+-Line: 1 Col: 5 Unexpected start tag (div). Expected DOCTYPE.
+-Line: 1 Col: 29 Expected closing tag. Unexpected end of file.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <div>
+-|       " abc "
+-|       <b>
+-|         " def "
+-|         <i>
+-|           " ghi "
+-|           <p>
+-
+-#data
+-<DIV> abc <B> def <I> ghi <P> jkl
+-#errors
+-Line: 1 Col: 5 Unexpected start tag (div). Expected DOCTYPE.
+-Line: 1 Col: 33 Expected closing tag. Unexpected end of file.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <div>
+-|       " abc "
+-|       <b>
+-|         " def "
+-|         <i>
+-|           " ghi "
+-|           <p>
+-|             " jkl"
+-
+-#data
+-<DIV> abc <B> def <I> ghi <P> jkl </B>
+-#errors
+-Line: 1 Col: 5 Unexpected start tag (div). Expected DOCTYPE.
+-Line: 1 Col: 38 End tag (b) violates step 1, paragraph 3 of the adoption agency algorithm.
+-Line: 1 Col: 38 Expected closing tag. Unexpected end of file.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <div>
+-|       " abc "
+-|       <b>
+-|         " def "
+-|         <i>
+-|           " ghi "
+-|       <i>
+-|         <p>
+-|           <b>
+-|             " jkl "
+-
+-#data
+-<DIV> abc <B> def <I> ghi <P> jkl </B> mno
+-#errors
+-Line: 1 Col: 5 Unexpected start tag (div). Expected DOCTYPE.
+-Line: 1 Col: 38 End tag (b) violates step 1, paragraph 3 of the adoption agency algorithm.
+-Line: 1 Col: 42 Expected closing tag. Unexpected end of file.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <div>
+-|       " abc "
+-|       <b>
+-|         " def "
+-|         <i>
+-|           " ghi "
+-|       <i>
+-|         <p>
+-|           <b>
+-|             " jkl "
+-|           " mno"
+-
+-#data
+-<DIV> abc <B> def <I> ghi <P> jkl </B> mno </I>
+-#errors
+-Line: 1 Col: 5 Unexpected start tag (div). Expected DOCTYPE.
+-Line: 1 Col: 38 End tag (b) violates step 1, paragraph 3 of the adoption agency algorithm.
+-Line: 1 Col: 47 End tag (i) violates step 1, paragraph 3 of the adoption agency algorithm.
+-Line: 1 Col: 47 Expected closing tag. Unexpected end of file.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <div>
+-|       " abc "
+-|       <b>
+-|         " def "
+-|         <i>
+-|           " ghi "
+-|       <i>
+-|       <p>
+-|         <i>
+-|           <b>
+-|             " jkl "
+-|           " mno "
+-
+-#data
+-<DIV> abc <B> def <I> ghi <P> jkl </B> mno </I> pqr
+-#errors
+-Line: 1 Col: 5 Unexpected start tag (div). Expected DOCTYPE.
+-Line: 1 Col: 38 End tag (b) violates step 1, paragraph 3 of the adoption agency algorithm.
+-Line: 1 Col: 47 End tag (i) violates step 1, paragraph 3 of the adoption agency algorithm.
+-Line: 1 Col: 51 Expected closing tag. Unexpected end of file.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <div>
+-|       " abc "
+-|       <b>
+-|         " def "
+-|         <i>
+-|           " ghi "
+-|       <i>
+-|       <p>
+-|         <i>
+-|           <b>
+-|             " jkl "
+-|           " mno "
+-|         " pqr"
+-
+-#data
+-<DIV> abc <B> def <I> ghi <P> jkl </B> mno </I> pqr </P>
+-#errors
+-Line: 1 Col: 5 Unexpected start tag (div). Expected DOCTYPE.
+-Line: 1 Col: 38 End tag (b) violates step 1, paragraph 3 of the adoption agency algorithm.
+-Line: 1 Col: 47 End tag (i) violates step 1, paragraph 3 of the adoption agency algorithm.
+-Line: 1 Col: 56 Expected closing tag. Unexpected end of file.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <div>
+-|       " abc "
+-|       <b>
+-|         " def "
+-|         <i>
+-|           " ghi "
+-|       <i>
+-|       <p>
+-|         <i>
+-|           <b>
+-|             " jkl "
+-|           " mno "
+-|         " pqr "
+-
+-#data
+-<DIV> abc <B> def <I> ghi <P> jkl </B> mno </I> pqr </P> stu
+-#errors
+-Line: 1 Col: 5 Unexpected start tag (div). Expected DOCTYPE.
+-Line: 1 Col: 38 End tag (b) violates step 1, paragraph 3 of the adoption agency algorithm.
+-Line: 1 Col: 47 End tag (i) violates step 1, paragraph 3 of the adoption agency algorithm.
+-Line: 1 Col: 60 Expected closing tag. Unexpected end of file.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <div>
+-|       " abc "
+-|       <b>
+-|         " def "
+-|         <i>
+-|           " ghi "
+-|       <i>
+-|       <p>
+-|         <i>
+-|           <b>
+-|             " jkl "
+-|           " mno "
+-|         " pqr "
+-|       " stu"
+-
+-#data
+-<test attribute---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------->
+-#errors
+-Line: 1 Col: 1040 Unexpected start tag (test). Expected DOCTYPE.
+-Line: 1 Col: 1040 Expected closing tag. Unexpected end of file.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <test>
+-|       attribute----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------=""
+-
+-#data
+-<a href="blah">aba<table><a href="foo">br<tr><td></td></tr>x</table>aoe
+-#errors
+-Line: 1 Col: 15 Unexpected start tag (a). Expected DOCTYPE.
+-Line: 1 Col: 39 Unexpected start tag (a) in table context caused voodoo mode.
+-Line: 1 Col: 39 Unexpected start tag (a) implies end tag (a).
+-Line: 1 Col: 39 End tag (a) violates step 1, paragraph 1 of the adoption agency algorithm.
+-Line: 1 Col: 45 Unexpected implied end tag (a) in the table phase.
+-Line: 1 Col: 68 Unexpected implied end tag (a) in the table phase.
+-Line: 1 Col: 71 Expected closing tag. Unexpected end of file.
+-
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <a>
+-|       href="blah"
+-|       "aba"
+-|       <a>
+-|         href="foo"
+-|         "br"
+-|       <a>
+-|         href="foo"
+-|         "x"
+-|       <table>
+-|         <tbody>
+-|           <tr>
+-|             <td>
+-|     <a>
+-|       href="foo"
+-|       "aoe"
+-
+-#data
+-<a href="blah">aba<table><tr><td><a href="foo">br</td></tr>x</table>aoe
+-#errors
+-Line: 1 Col: 15 Unexpected start tag (a). Expected DOCTYPE.
+-Line: 1 Col: 54 Got table cell end tag (td) while required end tags are missing.
+-Line: 1 Col: 60 Unexpected non-space characters in table context caused voodoo mode.
+-Line: 1 Col: 71 Expected closing tag. Unexpected end of file.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <a>
+-|       href="blah"
+-|       "abax"
+-|       <table>
+-|         <tbody>
+-|           <tr>
+-|             <td>
+-|               <a>
+-|                 href="foo"
+-|                 "br"
+-|       "aoe"
+-
+-#data
+-<table><a href="blah">aba<tr><td><a href="foo">br</td></tr>x</table>aoe
+-#errors
+-Line: 1 Col: 7 Unexpected start tag (table). Expected DOCTYPE.
+-Line: 1 Col: 22 Unexpected start tag (a) in table context caused voodoo mode.
+-Line: 1 Col: 29 Unexpected implied end tag (a) in the table phase.
+-Line: 1 Col: 54 Got table cell end tag (td) while required end tags are missing.
+-Line: 1 Col: 68 Unexpected implied end tag (a) in the table phase.
+-Line: 1 Col: 71 Expected closing tag. Unexpected end of file.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <a>
+-|       href="blah"
+-|       "aba"
+-|     <a>
+-|       href="blah"
+-|       "x"
+-|     <table>
+-|       <tbody>
+-|         <tr>
+-|           <td>
+-|             <a>
+-|               href="foo"
+-|               "br"
+-|     <a>
+-|       href="blah"
+-|       "aoe"
+-
+-#data
+-<a href=a>aa<marquee>aa<a href=b>bb</marquee>aa
+-#errors
+-Line: 1 Col: 10 Unexpected start tag (a). Expected DOCTYPE.
+-Line: 1 Col: 45 End tag (marquee) seen too early. Expected other end tag.
+-Line: 1 Col: 47 Expected closing tag. Unexpected end of file.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <a>
+-|       href="a"
+-|       "aa"
+-|       <marquee>
+-|         "aa"
+-|         <a>
+-|           href="b"
+-|           "bb"
+-|       "aa"
+-
+-#data
+-<wbr><strike><code></strike><code><strike></code>
+-#errors
+-Line: 1 Col: 5 Unexpected start tag (wbr). Expected DOCTYPE.
+-Line: 1 Col: 28 End tag (strike) violates step 1, paragraph 3 of the adoption agency algorithm.
+-Line: 1 Col: 49 Unexpected end tag (code). Ignored.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <wbr>
+-|     <strike>
+-|       <code>
+-|     <code>
+-|       <code>
+-|         <strike>
+-
+-#data
+-<!DOCTYPE html><spacer>foo
+-#errors
+-26: End of file seen and there were open elements.
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <spacer>
+-|       "foo"
+-
+-#data
+-<title><meta></title><link><title><meta></title>
+-#errors
+-Line: 1 Col: 7 Unexpected start tag (title). Expected DOCTYPE.
+-#document
+-| <html>
+-|   <head>
+-|     <title>
+-|       "<meta>"
+-|     <link>
+-|     <title>
+-|       "<meta>"
+-|   <body>
+-
+-#data
+-<style><!--</style><meta><script>--><link></script>
+-#errors
+-Line: 1 Col: 7 Unexpected start tag (style). Expected DOCTYPE.
+-Line: 1 Col: 51 Unexpected end of file. Expected end tag (style).
+-#document
+-| <html>
+-|   <head>
+-|     <style>
+-|       "<!--"
+-|     <meta>
+-|     <script>
+-|       "--><link>"
+-|   <body>
+-
+-#data
+-<head><meta></head><link>
+-#errors
+-Line: 1 Col: 6 Unexpected start tag (head). Expected DOCTYPE.
+-Line: 1 Col: 25 Unexpected start tag (link) that can be in head. Moved.
+-#document
+-| <html>
+-|   <head>
+-|     <meta>
+-|     <link>
+-|   <body>
+-
+-#data
+-<table><tr><tr><td><td><span><th><span>X</table>
+-#errors
+-Line: 1 Col: 7 Unexpected start tag (table). Expected DOCTYPE.
+-Line: 1 Col: 33 Got table cell end tag (td) while required end tags are missing.
+-Line: 1 Col: 48 Got table cell end tag (th) while required end tags are missing.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <table>
+-|       <tbody>
+-|         <tr>
+-|         <tr>
+-|           <td>
+-|           <td>
+-|             <span>
+-|           <th>
+-|             <span>
+-|               "X"
+-
+-#data
+-<body><body><base><link><meta><title><p></title><body><p></body>
+-#errors
+-Line: 1 Col: 6 Unexpected start tag (body). Expected DOCTYPE.
+-Line: 1 Col: 12 Unexpected start tag (body).
+-Line: 1 Col: 54 Unexpected start tag (body).
+-Line: 1 Col: 64 Unexpected end tag (p). Missing end tag (body).
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <base>
+-|     <link>
+-|     <meta>
+-|     <title>
+-|       "<p>"
+-|     <p>
+-
+-#data
+-<textarea><p></textarea>
+-#errors
+-Line: 1 Col: 10 Unexpected start tag (textarea). Expected DOCTYPE.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <textarea>
+-|       "<p>"
+-
+-#data
+-<p><image></p>
+-#errors
+-Line: 1 Col: 3 Unexpected start tag (p). Expected DOCTYPE.
+-Line: 1 Col: 10 Unexpected start tag (image). Treated as img.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <p>
+-|       <img>
+-
+-#data
+-<a><table><a></table><p><a><div><a>
+-#errors
+-Line: 1 Col: 3 Unexpected start tag (a). Expected DOCTYPE.
+-Line: 1 Col: 13 Unexpected start tag (a) in table context caused voodoo mode.
+-Line: 1 Col: 13 Unexpected start tag (a) implies end tag (a).
+-Line: 1 Col: 13 End tag (a) violates step 1, paragraph 1 of the adoption agency algorithm.
+-Line: 1 Col: 21 Unexpected end tag (table). Expected end tag (a).
+-Line: 1 Col: 27 Unexpected start tag (a) implies end tag (a).
+-Line: 1 Col: 27 End tag (a) violates step 1, paragraph 2 of the adoption agency algorithm.
+-Line: 1 Col: 32 Unexpected end tag (p). Ignored.
+-Line: 1 Col: 35 Unexpected start tag (a) implies end tag (a).
+-Line: 1 Col: 35 End tag (a) violates step 1, paragraph 2 of the adoption agency algorithm.
+-Line: 1 Col: 35 Expected closing tag. Unexpected end of file.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <a>
+-|       <a>
+-|       <table>
+-|     <p>
+-|       <a>
+-|     <div>
+-|       <a>
+-
+-#data
+-<head></p><meta><p>
+-#errors
+-Line: 1 Col: 6 Unexpected start tag (head). Expected DOCTYPE.
+-Line: 1 Col: 10 Unexpected end tag (p). Ignored.
+-#document
+-| <html>
+-|   <head>
+-|     <meta>
+-|   <body>
+-|     <p>
+-
+-#data
+-<head></html><meta><p>
+-#errors
+-Line: 1 Col: 6 Unexpected start tag (head). Expected DOCTYPE.
+-Line: 1 Col: 19 Unexpected start tag (meta).
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <meta>
+-|     <p>
+-
+-#data
+-<b><table><td><i></table>
+-#errors
+-Line: 1 Col: 3 Unexpected start tag (b). Expected DOCTYPE.
+-Line: 1 Col: 14 Unexpected table cell start tag (td) in the table body phase.
+-Line: 1 Col: 25 Got table cell end tag (td) while required end tags are missing.
+-Line: 1 Col: 25 Expected closing tag. Unexpected end of file.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <b>
+-|       <table>
+-|         <tbody>
+-|           <tr>
+-|             <td>
+-|               <i>
+-
+-#data
+-<b><table><td></b><i></table>
+-#errors
+-Line: 1 Col: 3 Unexpected start tag (b). Expected DOCTYPE.
+-Line: 1 Col: 14 Unexpected table cell start tag (td) in the table body phase.
+-Line: 1 Col: 18 End tag (b) violates step 1, paragraph 1 of the adoption agency algorithm.
+-Line: 1 Col: 29 Got table cell end tag (td) while required end tags are missing.
+-Line: 1 Col: 29 Expected closing tag. Unexpected end of file.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <b>
+-|       <table>
+-|         <tbody>
+-|           <tr>
+-|             <td>
+-|               <i>
+-
+-#data
+-<h1><h2>
+-#errors
+-4: Start tag seen without seeing a doctype first. Expected “<!DOCTYPE html>”.
+-8: Heading cannot be a child of another heading.
+-8: End of file seen and there were open elements.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <h1>
+-|     <h2>
+-
+-#data
+-<a><p><a></a></p></a>
+-#errors
+-Line: 1 Col: 3 Unexpected start tag (a). Expected DOCTYPE.
+-Line: 1 Col: 9 Unexpected start tag (a) implies end tag (a).
+-Line: 1 Col: 9 End tag (a) violates step 1, paragraph 3 of the adoption agency algorithm.
+-Line: 1 Col: 21 End tag (a) violates step 1, paragraph 1 of the adoption agency algorithm.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <a>
+-|     <p>
+-|       <a>
+-|       <a>
+-
+-#data
+-<b><button></b></button></b>
+-#errors
+-Line: 1 Col: 3 Unexpected start tag (b). Expected DOCTYPE.
+-Line: 1 Col: 15 End tag (b) violates step 1, paragraph 1 of the adoption agency algorithm.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <b>
+-|     <button>
+-|       <b>
+-
+-#data
+-<p><b><div><marquee></p></b></div>
+-#errors
+-Line: 1 Col: 3 Unexpected start tag (p). Expected DOCTYPE.
+-Line: 1 Col: 11 Unexpected end tag (p). Ignored.
+-Line: 1 Col: 24 Unexpected end tag (p). Ignored.
+-Line: 1 Col: 28 End tag (b) violates step 1, paragraph 1 of the adoption agency algorithm.
+-Line: 1 Col: 34 End tag (div) seen too early. Expected other end tag.
+-Line: 1 Col: 34 Expected closing tag. Unexpected end of file.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <p>
+-|       <b>
+-|     <div>
+-|       <b>
+-|         <marquee>
+-|           <p>
+-
+-#data
+-<script></script></div><title></title><p><p>
+-#errors
+-Line: 1 Col: 8 Unexpected start tag (script). Expected DOCTYPE.
+-Line: 1 Col: 23 Unexpected end tag (div). Ignored.
+-#document
+-| <html>
+-|   <head>
+-|     <script>
+-|     <title>
+-|   <body>
+-|     <p>
+-|     <p>
+-
+-#data
+-<p><hr></p>
+-#errors
+-Line: 1 Col: 3 Unexpected start tag (p). Expected DOCTYPE.
+-Line: 1 Col: 11 Unexpected end tag (p). Ignored.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <p>
+-|     <hr>
+-|     <p>
+-
+-#data
+-<select><b><option><select><option></b></select>
+-#errors
+-Line: 1 Col: 8 Unexpected start tag (select). Expected DOCTYPE.
+-Line: 1 Col: 11 Unexpected start tag token (b) in the select phase. Ignored.
+-Line: 1 Col: 27 Unexpected select start tag in the select phase treated as select end tag.
+-Line: 1 Col: 39 End tag (b) violates step 1, paragraph 1 of the adoption agency algorithm.
+-Line: 1 Col: 48 Unexpected end tag (select). Ignored.
+-Line: 1 Col: 48 Expected closing tag. Unexpected end of file.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <select>
+-|       <option>
+-|     <option>
+-
+-#data
+-<html><head><title></title><body></body></html>
+-#errors
+-Line: 1 Col: 6 Unexpected start tag (html). Expected DOCTYPE.
+-#document
+-| <html>
+-|   <head>
+-|     <title>
+-|   <body>
+-
+-#data
+-<a><table><td><a><table></table><a></tr><a></table><a>
+-#errors
+-Line: 1 Col: 3 Unexpected start tag (a). Expected DOCTYPE.
+-Line: 1 Col: 14 Unexpected table cell start tag (td) in the table body phase.
+-Line: 1 Col: 35 Unexpected start tag (a) implies end tag (a).
+-Line: 1 Col: 40 Got table cell end tag (td) while required end tags are missing.
+-Line: 1 Col: 43 Unexpected start tag (a) in table context caused voodoo mode.
+-Line: 1 Col: 43 Unexpected start tag (a) implies end tag (a).
+-Line: 1 Col: 43 End tag (a) violates step 1, paragraph 1 of the adoption agency algorithm.
+-Line: 1 Col: 51 Unexpected implied end tag (a) in the table phase.
+-Line: 1 Col: 54 Unexpected start tag (a) implies end tag (a).
+-Line: 1 Col: 54 End tag (a) violates step 1, paragraph 2 of the adoption agency algorithm.
+-Line: 1 Col: 54 Expected closing tag. Unexpected end of file.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <a>
+-|       <a>
+-|       <table>
+-|         <tbody>
+-|           <tr>
+-|             <td>
+-|               <a>
+-|                 <table>
+-|               <a>
+-|     <a>
+-
+-#data
+-<ul><li></li><div><li></div><li><li><div><li><address><li><b><em></b><li></ul>
+-#errors
+-Line: 1 Col: 4 Unexpected start tag (ul). Expected DOCTYPE.
+-Line: 1 Col: 45 Missing end tag (div, li).
+-Line: 1 Col: 58 Missing end tag (address, li).
+-Line: 1 Col: 69 End tag (b) violates step 1, paragraph 3 of the adoption agency algorithm.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <ul>
+-|       <li>
+-|       <div>
+-|         <li>
+-|       <li>
+-|       <li>
+-|         <div>
+-|       <li>
+-|         <address>
+-|       <li>
+-|         <b>
+-|           <em>
+-|       <li>
+-
+-#data
+-<ul><li><ul></li><li>a</li></ul></li></ul>
+-#errors
+-XXX: fix me
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <ul>
+-|       <li>
+-|         <ul>
+-|           <li>
+-|             "a"
+-
+-#data
+-<frameset><frame><frameset><frame></frameset><noframes></noframes></frameset>
+-#errors
+-Line: 1 Col: 10 Unexpected start tag (frameset). Expected DOCTYPE.
+-#document
+-| <html>
+-|   <head>
+-|   <frameset>
+-|     <frame>
+-|     <frameset>
+-|       <frame>
+-|     <noframes>
+-
+-#data
+-<h1><table><td><h3></table><h3></h1>
+-#errors
+-4: Start tag seen without seeing a doctype first. Expected “<!DOCTYPE html>”.
+-15: “td” start tag in table body.
+-27: Unclosed elements.
+-31: Heading cannot be a child of another heading.
+-36: End tag “h1” seen but there were unclosed elements.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <h1>
+-|       <table>
+-|         <tbody>
+-|           <tr>
+-|             <td>
+-|               <h3>
+-|     <h3>
+-
+-#data
+-<table><colgroup><col><colgroup><col><col><col><colgroup><col><col><thead><tr><td></table>
+-#errors
+-Line: 1 Col: 7 Unexpected start tag (table). Expected DOCTYPE.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <table>
+-|       <colgroup>
+-|         <col>
+-|       <colgroup>
+-|         <col>
+-|         <col>
+-|         <col>
+-|       <colgroup>
+-|         <col>
+-|         <col>
+-|       <thead>
+-|         <tr>
+-|           <td>
+-
+-#data
+-<table><col><tbody><col><tr><col><td><col></table><col>
+-#errors
+-Line: 1 Col: 7 Unexpected start tag (table). Expected DOCTYPE.
+-Line: 1 Col: 37 Unexpected table cell start tag (td) in the table body phase.
+-Line: 1 Col: 55 Unexpected start tag col. Ignored.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <table>
+-|       <colgroup>
+-|         <col>
+-|       <tbody>
+-|       <colgroup>
+-|         <col>
+-|       <tbody>
+-|         <tr>
+-|       <colgroup>
+-|         <col>
+-|       <tbody>
+-|         <tr>
+-|           <td>
+-|       <colgroup>
+-|         <col>
+-
+-#data
+-<table><colgroup><tbody><colgroup><tr><colgroup><td><colgroup></table><colgroup>
+-#errors
+-Line: 1 Col: 7 Unexpected start tag (table). Expected DOCTYPE.
+-Line: 1 Col: 52 Unexpected table cell start tag (td) in the table body phase.
+-Line: 1 Col: 80 Unexpected start tag colgroup. Ignored.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <table>
+-|       <colgroup>
+-|       <tbody>
+-|       <colgroup>
+-|       <tbody>
+-|         <tr>
+-|       <colgroup>
+-|       <tbody>
+-|         <tr>
+-|           <td>
+-|       <colgroup>
+-
+-#data
+-</strong></b></em></i></u></strike></s></blink></tt></pre></big></small></font></select></h1></h2></h3></h4></h5></h6></body></br></a></img></title></span></style></script></table></th></td></tr></frame></area></link></param></hr></input></col></base></meta></basefont></bgsound></embed></spacer></p></dd></dt></caption></colgroup></tbody></tfoot></thead></address></blockquote></center></dir></div></dl></fieldset></listing></menu></ol></ul></li></nobr></wbr></form></button></marquee></object></html></frameset></head></iframe></image></isindex></noembed></noframes></noscript></optgroup></option></plaintext></textarea>
+-#errors
+-Line: 1 Col: 9 Unexpected end tag (strong). Expected DOCTYPE.
+-Line: 1 Col: 9 Unexpected end tag (strong) after the (implied) root element.
+-Line: 1 Col: 13 Unexpected end tag (b) after the (implied) root element.
+-Line: 1 Col: 18 Unexpected end tag (em) after the (implied) root element.
+-Line: 1 Col: 22 Unexpected end tag (i) after the (implied) root element.
+-Line: 1 Col: 26 Unexpected end tag (u) after the (implied) root element.
+-Line: 1 Col: 35 Unexpected end tag (strike) after the (implied) root element.
+-Line: 1 Col: 39 Unexpected end tag (s) after the (implied) root element.
+-Line: 1 Col: 47 Unexpected end tag (blink) after the (implied) root element.
+-Line: 1 Col: 52 Unexpected end tag (tt) after the (implied) root element.
+-Line: 1 Col: 58 Unexpected end tag (pre) after the (implied) root element.
+-Line: 1 Col: 64 Unexpected end tag (big) after the (implied) root element.
+-Line: 1 Col: 72 Unexpected end tag (small) after the (implied) root element.
+-Line: 1 Col: 79 Unexpected end tag (font) after the (implied) root element.
+-Line: 1 Col: 88 Unexpected end tag (select) after the (implied) root element.
+-Line: 1 Col: 93 Unexpected end tag (h1) after the (implied) root element.
+-Line: 1 Col: 98 Unexpected end tag (h2) after the (implied) root element.
+-Line: 1 Col: 103 Unexpected end tag (h3) after the (implied) root element.
+-Line: 1 Col: 108 Unexpected end tag (h4) after the (implied) root element.
+-Line: 1 Col: 113 Unexpected end tag (h5) after the (implied) root element.
+-Line: 1 Col: 118 Unexpected end tag (h6) after the (implied) root element.
+-Line: 1 Col: 125 Unexpected end tag (body) after the (implied) root element.
+-Line: 1 Col: 130 Unexpected end tag (br). Treated as br element.
+-Line: 1 Col: 134 End tag (a) violates step 1, paragraph 1 of the adoption agency algorithm.
+-Line: 1 Col: 140 This element (img) has no end tag.
+-Line: 1 Col: 148 Unexpected end tag (title). Ignored.
+-Line: 1 Col: 155 Unexpected end tag (span). Ignored.
+-Line: 1 Col: 163 Unexpected end tag (style). Ignored.
+-Line: 1 Col: 172 Unexpected end tag (script). Ignored.
+-Line: 1 Col: 180 Unexpected end tag (table). Ignored.
+-Line: 1 Col: 185 Unexpected end tag (th). Ignored.
+-Line: 1 Col: 190 Unexpected end tag (td). Ignored.
+-Line: 1 Col: 195 Unexpected end tag (tr). Ignored.
+-Line: 1 Col: 203 This element (frame) has no end tag.
+-Line: 1 Col: 210 This element (area) has no end tag.
+-Line: 1 Col: 217 Unexpected end tag (link). Ignored.
+-Line: 1 Col: 225 This element (param) has no end tag.
+-Line: 1 Col: 230 This element (hr) has no end tag.
+-Line: 1 Col: 238 This element (input) has no end tag.
+-Line: 1 Col: 244 Unexpected end tag (col). Ignored.
+-Line: 1 Col: 251 Unexpected end tag (base). Ignored.
+-Line: 1 Col: 258 Unexpected end tag (meta). Ignored.
+-Line: 1 Col: 269 This element (basefont) has no end tag.
+-Line: 1 Col: 279 This element (bgsound) has no end tag.
+-Line: 1 Col: 287 This element (embed) has no end tag.
+-Line: 1 Col: 296 This element (spacer) has no end tag.
+-Line: 1 Col: 300 Unexpected end tag (p). Ignored.
+-Line: 1 Col: 305 End tag (dd) seen too early. Expected other end tag.
+-Line: 1 Col: 310 End tag (dt) seen too early. Expected other end tag.
+-Line: 1 Col: 320 Unexpected end tag (caption). Ignored.
+-Line: 1 Col: 331 Unexpected end tag (colgroup). Ignored.
+-Line: 1 Col: 339 Unexpected end tag (tbody). Ignored.
+-Line: 1 Col: 347 Unexpected end tag (tfoot). Ignored.
+-Line: 1 Col: 355 Unexpected end tag (thead). Ignored.
+-Line: 1 Col: 365 End tag (address) seen too early. Expected other end tag.
+-Line: 1 Col: 378 End tag (blockquote) seen too early. Expected other end tag.
+-Line: 1 Col: 387 End tag (center) seen too early. Expected other end tag.
+-Line: 1 Col: 393 Unexpected end tag (dir). Ignored.
+-Line: 1 Col: 399 End tag (div) seen too early. Expected other end tag.
+-Line: 1 Col: 404 End tag (dl) seen too early. Expected other end tag.
+-Line: 1 Col: 415 End tag (fieldset) seen too early. Expected other end tag.
+-Line: 1 Col: 425 End tag (listing) seen too early. Expected other end tag.
+-Line: 1 Col: 432 End tag (menu) seen too early. Expected other end tag.
+-Line: 1 Col: 437 End tag (ol) seen too early. Expected other end tag.
+-Line: 1 Col: 442 End tag (ul) seen too early. Expected other end tag.
+-Line: 1 Col: 447 End tag (li) seen too early. Expected other end tag.
+-Line: 1 Col: 454 End tag (nobr) violates step 1, paragraph 1 of the adoption agency algorithm.
+-Line: 1 Col: 460 This element (wbr) has no end tag.
+-Line: 1 Col: 476 End tag (button) seen too early. Expected other end tag.
+-Line: 1 Col: 486 End tag (marquee) seen too early. Expected other end tag.
+-Line: 1 Col: 495 End tag (object) seen too early. Expected other end tag.
+-Line: 1 Col: 513 Unexpected end tag (html). Ignored.
+-Line: 1 Col: 513 Unexpected end tag (frameset). Ignored.
+-Line: 1 Col: 520 Unexpected end tag (head). Ignored.
+-Line: 1 Col: 529 Unexpected end tag (iframe). Ignored.
+-Line: 1 Col: 537 This element (image) has no end tag.
+-Line: 1 Col: 547 This element (isindex) has no end tag.
+-Line: 1 Col: 557 Unexpected end tag (noembed). Ignored.
+-Line: 1 Col: 568 Unexpected end tag (noframes). Ignored.
+-Line: 1 Col: 579 Unexpected end tag (noscript). Ignored.
+-Line: 1 Col: 590 Unexpected end tag (optgroup). Ignored.
+-Line: 1 Col: 599 Unexpected end tag (option). Ignored.
+-Line: 1 Col: 611 Unexpected end tag (plaintext). Ignored.
+-Line: 1 Col: 622 Unexpected end tag (textarea). Ignored.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <br>
+-|     <p>
+-
+-#data
+-<table><tr></strong></b></em></i></u></strike></s></blink></tt></pre></big></small></font></select></h1></h2></h3></h4></h5></h6></body></br></a></img></title></span></style></script></table></th></td></tr></frame></area></link></param></hr></input></col></base></meta></basefont></bgsound></embed></spacer></p></dd></dt></caption></colgroup></tbody></tfoot></thead></address></blockquote></center></dir></div></dl></fieldset></listing></menu></ol></ul></li></nobr></wbr></form></button></marquee></object></html></frameset></head></iframe></image></isindex></noembed></noframes></noscript></optgroup></option></plaintext></textarea>
+-#errors
+-Line: 1 Col: 7 Unexpected start tag (table). Expected DOCTYPE.
+-Line: 1 Col: 20 Unexpected end tag (strong) in table context caused voodoo mode.
+-Line: 1 Col: 20 End tag (strong) violates step 1, paragraph 1 of the adoption agency algorithm.
+-Line: 1 Col: 24 Unexpected end tag (b) in table context caused voodoo mode.
+-Line: 1 Col: 24 End tag (b) violates step 1, paragraph 1 of the adoption agency algorithm.
+-Line: 1 Col: 29 Unexpected end tag (em) in table context caused voodoo mode.
+-Line: 1 Col: 29 End tag (em) violates step 1, paragraph 1 of the adoption agency algorithm.
+-Line: 1 Col: 33 Unexpected end tag (i) in table context caused voodoo mode.
+-Line: 1 Col: 33 End tag (i) violates step 1, paragraph 1 of the adoption agency algorithm.
+-Line: 1 Col: 37 Unexpected end tag (u) in table context caused voodoo mode.
+-Line: 1 Col: 37 End tag (u) violates step 1, paragraph 1 of the adoption agency algorithm.
+-Line: 1 Col: 46 Unexpected end tag (strike) in table context caused voodoo mode.
+-Line: 1 Col: 46 End tag (strike) violates step 1, paragraph 1 of the adoption agency algorithm.
+-Line: 1 Col: 50 Unexpected end tag (s) in table context caused voodoo mode.
+-Line: 1 Col: 50 End tag (s) violates step 1, paragraph 1 of the adoption agency algorithm.
+-Line: 1 Col: 58 Unexpected end tag (blink) in table context caused voodoo mode.
+-Line: 1 Col: 58 Unexpected end tag (blink). Ignored.
+-Line: 1 Col: 63 Unexpected end tag (tt) in table context caused voodoo mode.
+-Line: 1 Col: 63 End tag (tt) violates step 1, paragraph 1 of the adoption agency algorithm.
+-Line: 1 Col: 69 Unexpected end tag (pre) in table context caused voodoo mode.
+-Line: 1 Col: 69 End tag (pre) seen too early. Expected other end tag.
+-Line: 1 Col: 75 Unexpected end tag (big) in table context caused voodoo mode.
+-Line: 1 Col: 75 End tag (big) violates step 1, paragraph 1 of the adoption agency algorithm.
+-Line: 1 Col: 83 Unexpected end tag (small) in table context caused voodoo mode.
+-Line: 1 Col: 83 End tag (small) violates step 1, paragraph 1 of the adoption agency algorithm.
+-Line: 1 Col: 90 Unexpected end tag (font) in table context caused voodoo mode.
+-Line: 1 Col: 90 End tag (font) violates step 1, paragraph 1 of the adoption agency algorithm.
+-Line: 1 Col: 99 Unexpected end tag (select) in table context caused voodoo mode.
+-Line: 1 Col: 99 Unexpected end tag (select). Ignored.
+-Line: 1 Col: 104 Unexpected end tag (h1) in table context caused voodoo mode.
+-Line: 1 Col: 104 End tag (h1) seen too early. Expected other end tag.
+-Line: 1 Col: 109 Unexpected end tag (h2) in table context caused voodoo mode.
+-Line: 1 Col: 109 End tag (h2) seen too early. Expected other end tag.
+-Line: 1 Col: 114 Unexpected end tag (h3) in table context caused voodoo mode.
+-Line: 1 Col: 114 End tag (h3) seen too early. Expected other end tag.
+-Line: 1 Col: 119 Unexpected end tag (h4) in table context caused voodoo mode.
+-Line: 1 Col: 119 End tag (h4) seen too early. Expected other end tag.
+-Line: 1 Col: 124 Unexpected end tag (h5) in table context caused voodoo mode.
+-Line: 1 Col: 124 End tag (h5) seen too early. Expected other end tag.
+-Line: 1 Col: 129 Unexpected end tag (h6) in table context caused voodoo mode.
+-Line: 1 Col: 129 End tag (h6) seen too early. Expected other end tag.
+-Line: 1 Col: 136 Unexpected end tag (body) in the table row phase. Ignored.
+-Line: 1 Col: 141 Unexpected end tag (br) in table context caused voodoo mode.
+-Line: 1 Col: 141 Unexpected end tag (br). Treated as br element.
+-Line: 1 Col: 145 Unexpected end tag (a) in table context caused voodoo mode.
+-Line: 1 Col: 145 End tag (a) violates step 1, paragraph 1 of the adoption agency algorithm.
+-Line: 1 Col: 151 Unexpected end tag (img) in table context caused voodoo mode.
+-Line: 1 Col: 151 This element (img) has no end tag.
+-Line: 1 Col: 159 Unexpected end tag (title) in table context caused voodoo mode.
+-Line: 1 Col: 159 Unexpected end tag (title). Ignored.
+-Line: 1 Col: 166 Unexpected end tag (span) in table context caused voodoo mode.
+-Line: 1 Col: 166 Unexpected end tag (span). Ignored.
+-Line: 1 Col: 174 Unexpected end tag (style) in table context caused voodoo mode.
+-Line: 1 Col: 174 Unexpected end tag (style). Ignored.
+-Line: 1 Col: 183 Unexpected end tag (script) in table context caused voodoo mode.
+-Line: 1 Col: 183 Unexpected end tag (script). Ignored.
+-Line: 1 Col: 196 Unexpected end tag (th). Ignored.
+-Line: 1 Col: 201 Unexpected end tag (td). Ignored.
+-Line: 1 Col: 206 Unexpected end tag (tr). Ignored.
+-Line: 1 Col: 214 This element (frame) has no end tag.
+-Line: 1 Col: 221 This element (area) has no end tag.
+-Line: 1 Col: 228 Unexpected end tag (link). Ignored.
+-Line: 1 Col: 236 This element (param) has no end tag.
+-Line: 1 Col: 241 This element (hr) has no end tag.
+-Line: 1 Col: 249 This element (input) has no end tag.
+-Line: 1 Col: 255 Unexpected end tag (col). Ignored.
+-Line: 1 Col: 262 Unexpected end tag (base). Ignored.
+-Line: 1 Col: 269 Unexpected end tag (meta). Ignored.
+-Line: 1 Col: 280 This element (basefont) has no end tag.
+-Line: 1 Col: 290 This element (bgsound) has no end tag.
+-Line: 1 Col: 298 This element (embed) has no end tag.
+-Line: 1 Col: 307 This element (spacer) has no end tag.
+-Line: 1 Col: 311 Unexpected end tag (p). Ignored.
+-Line: 1 Col: 316 End tag (dd) seen too early. Expected other end tag.
+-Line: 1 Col: 321 End tag (dt) seen too early. Expected other end tag.
+-Line: 1 Col: 331 Unexpected end tag (caption). Ignored.
+-Line: 1 Col: 342 Unexpected end tag (colgroup). Ignored.
+-Line: 1 Col: 350 Unexpected end tag (tbody). Ignored.
+-Line: 1 Col: 358 Unexpected end tag (tfoot). Ignored.
+-Line: 1 Col: 366 Unexpected end tag (thead). Ignored.
+-Line: 1 Col: 376 End tag (address) seen too early. Expected other end tag.
+-Line: 1 Col: 389 End tag (blockquote) seen too early. Expected other end tag.
+-Line: 1 Col: 398 End tag (center) seen too early. Expected other end tag.
+-Line: 1 Col: 404 Unexpected end tag (dir). Ignored.
+-Line: 1 Col: 410 End tag (div) seen too early. Expected other end tag.
+-Line: 1 Col: 415 End tag (dl) seen too early. Expected other end tag.
+-Line: 1 Col: 426 End tag (fieldset) seen too early. Expected other end tag.
+-Line: 1 Col: 436 End tag (listing) seen too early. Expected other end tag.
+-Line: 1 Col: 443 End tag (menu) seen too early. Expected other end tag.
+-Line: 1 Col: 448 End tag (ol) seen too early. Expected other end tag.
+-Line: 1 Col: 453 End tag (ul) seen too early. Expected other end tag.
+-Line: 1 Col: 458 End tag (li) seen too early. Expected other end tag.
+-Line: 1 Col: 465 End tag (nobr) violates step 1, paragraph 1 of the adoption agency algorithm.
+-Line: 1 Col: 471 This element (wbr) has no end tag.
+-Line: 1 Col: 487 End tag (button) seen too early. Expected other end tag.
+-Line: 1 Col: 497 End tag (marquee) seen too early. Expected other end tag.
+-Line: 1 Col: 506 End tag (object) seen too early. Expected other end tag.
+-Line: 1 Col: 524 Unexpected end tag (html). Ignored.
+-Line: 1 Col: 524 Unexpected end tag (frameset). Ignored.
+-Line: 1 Col: 531 Unexpected end tag (head). Ignored.
+-Line: 1 Col: 540 Unexpected end tag (iframe). Ignored.
+-Line: 1 Col: 548 This element (image) has no end tag.
+-Line: 1 Col: 558 This element (isindex) has no end tag.
+-Line: 1 Col: 568 Unexpected end tag (noembed). Ignored.
+-Line: 1 Col: 579 Unexpected end tag (noframes). Ignored.
+-Line: 1 Col: 590 Unexpected end tag (noscript). Ignored.
+-Line: 1 Col: 601 Unexpected end tag (optgroup). Ignored.
+-Line: 1 Col: 610 Unexpected end tag (option). Ignored.
+-Line: 1 Col: 622 Unexpected end tag (plaintext). Ignored.
+-Line: 1 Col: 633 Unexpected end tag (textarea). Ignored.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <br>
+-|     <table>
+-|       <tbody>
+-|         <tr>
+-|     <p>
+-
+-#data
+-<frameset>
+-#errors
+-Line: 1 Col: 10 Unexpected start tag (frameset). Expected DOCTYPE.
+-Line: 1 Col: 10 Expected closing tag. Unexpected end of file.
+-#document
+-| <html>
+-|   <head>
+-|   <frameset>
+diff --git a/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests10.dat b/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests10.dat
+deleted file mode 100644
+index 4f8df86..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests10.dat
++++ /dev/null
+@@ -1,799 +0,0 @@
+-#data
+-<!DOCTYPE html><svg></svg>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <svg svg>
+-
+-#data
+-<!DOCTYPE html><svg></svg><![CDATA[a]]>
+-#errors
+-29: Bogus comment
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <svg svg>
+-|     <!-- [CDATA[a]] -->
+-
+-#data
+-<!DOCTYPE html><body><svg></svg>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <svg svg>
+-
+-#data
+-<!DOCTYPE html><body><select><svg></svg></select>
+-#errors
+-35: Stray “svg” start tag.
+-42: Stray end tag “svg”
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <select>
+-
+-#data
+-<!DOCTYPE html><body><select><option><svg></svg></option></select>
+-#errors
+-43: Stray “svg” start tag.
+-50: Stray end tag “svg”
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <select>
+-|       <option>
+-
+-#data
+-<!DOCTYPE html><body><table><svg></svg></table>
+-#errors
+-34: Start tag “svg” seen in “table”.
+-41: Stray end tag “svg”.
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <svg svg>
+-|     <table>
+-
+-#data
+-<!DOCTYPE html><body><table><svg><g>foo</g></svg></table>
+-#errors
+-34: Start tag “svg” seen in “table”.
+-46: Stray end tag “g”.
+-53: Stray end tag “svg”.
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <svg svg>
+-|       <svg g>
+-|         "foo"
+-|     <table>
+-
+-#data
+-<!DOCTYPE html><body><table><svg><g>foo</g><g>bar</g></svg></table>
+-#errors
+-34: Start tag “svg” seen in “table”.
+-46: Stray end tag “g”.
+-58: Stray end tag “g”.
+-65: Stray end tag “svg”.
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <svg svg>
+-|       <svg g>
+-|         "foo"
+-|       <svg g>
+-|         "bar"
+-|     <table>
+-
+-#data
+-<!DOCTYPE html><body><table><tbody><svg><g>foo</g><g>bar</g></svg></tbody></table>
+-#errors
+-41: Start tag “svg” seen in “table”.
+-53: Stray end tag “g”.
+-65: Stray end tag “g”.
+-72: Stray end tag “svg”.
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <svg svg>
+-|       <svg g>
+-|         "foo"
+-|       <svg g>
+-|         "bar"
+-|     <table>
+-|       <tbody>
+-
+-#data
+-<!DOCTYPE html><body><table><tbody><tr><svg><g>foo</g><g>bar</g></svg></tr></tbody></table>
+-#errors
+-45: Start tag “svg” seen in “table”.
+-57: Stray end tag “g”.
+-69: Stray end tag “g”.
+-76: Stray end tag “svg”.
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <svg svg>
+-|       <svg g>
+-|         "foo"
+-|       <svg g>
+-|         "bar"
+-|     <table>
+-|       <tbody>
+-|         <tr>
+-
+-#data
+-<!DOCTYPE html><body><table><tbody><tr><td><svg><g>foo</g><g>bar</g></svg></td></tr></tbody></table>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <table>
+-|       <tbody>
+-|         <tr>
+-|           <td>
+-|             <svg svg>
+-|               <svg g>
+-|                 "foo"
+-|               <svg g>
+-|                 "bar"
+-
+-#data
+-<!DOCTYPE html><body><table><tbody><tr><td><svg><g>foo</g><g>bar</g></svg><p>baz</td></tr></tbody></table>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <table>
+-|       <tbody>
+-|         <tr>
+-|           <td>
+-|             <svg svg>
+-|               <svg g>
+-|                 "foo"
+-|               <svg g>
+-|                 "bar"
+-|             <p>
+-|               "baz"
+-
+-#data
+-<!DOCTYPE html><body><table><caption><svg><g>foo</g><g>bar</g></svg><p>baz</caption></table>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <table>
+-|       <caption>
+-|         <svg svg>
+-|           <svg g>
+-|             "foo"
+-|           <svg g>
+-|             "bar"
+-|         <p>
+-|           "baz"
+-
+-#data
+-<!DOCTYPE html><body><table><caption><svg><g>foo</g><g>bar</g><p>baz</table><p>quux
+-#errors
+-70: HTML start tag “p” in a foreign namespace context.
+-81: “table” closed but “caption” was still open.
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <table>
+-|       <caption>
+-|         <svg svg>
+-|           <svg g>
+-|             "foo"
+-|           <svg g>
+-|             "bar"
+-|         <p>
+-|           "baz"
+-|     <p>
+-|       "quux"
+-
+-#data
+-<!DOCTYPE html><body><table><caption><svg><g>foo</g><g>bar</g>baz</table><p>quux
+-#errors
+-78: “table” closed but “caption” was still open.
+-78: Unclosed elements on stack.
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <table>
+-|       <caption>
+-|         <svg svg>
+-|           <svg g>
+-|             "foo"
+-|           <svg g>
+-|             "bar"
+-|           "baz"
+-|     <p>
+-|       "quux"
+-
+-#data
+-<!DOCTYPE html><body><table><colgroup><svg><g>foo</g><g>bar</g><p>baz</table><p>quux
+-#errors
+-44: Start tag “svg” seen in “table”.
+-56: Stray end tag “g”.
+-68: Stray end tag “g”.
+-71: HTML start tag “p” in a foreign namespace context.
+-71: Start tag “p” seen in “table”.
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <svg svg>
+-|       <svg g>
+-|         "foo"
+-|       <svg g>
+-|         "bar"
+-|     <p>
+-|       "baz"
+-|     <table>
+-|       <colgroup>
+-|     <p>
+-|       "quux"
+-
+-#data
+-<!DOCTYPE html><body><table><tr><td><select><svg><g>foo</g><g>bar</g><p>baz</table><p>quux
+-#errors
+-50: Stray “svg” start tag.
+-54: Stray “g” start tag.
+-62: Stray end tag “g”
+-66: Stray “g” start tag.
+-74: Stray end tag “g”
+-77: Stray “p” start tag.
+-88: “table” end tag with “select” open.
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <table>
+-|       <tbody>
+-|         <tr>
+-|           <td>
+-|             <select>
+-|               "foobarbaz"
+-|     <p>
+-|       "quux"
+-
+-#data
+-<!DOCTYPE html><body><table><select><svg><g>foo</g><g>bar</g><p>baz</table><p>quux
+-#errors
+-36: Start tag “select” seen in “table”.
+-42: Stray “svg” start tag.
+-46: Stray “g” start tag.
+-54: Stray end tag “g”
+-58: Stray “g” start tag.
+-66: Stray end tag “g”
+-69: Stray “p” start tag.
+-80: “table” end tag with “select” open.
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <select>
+-|       "foobarbaz"
+-|     <table>
+-|     <p>
+-|       "quux"
+-
+-#data
+-<!DOCTYPE html><body></body></html><svg><g>foo</g><g>bar</g><p>baz
+-#errors
+-41: Stray “svg” start tag.
+-68: HTML start tag “p” in a foreign namespace context.
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <svg svg>
+-|       <svg g>
+-|         "foo"
+-|       <svg g>
+-|         "bar"
+-|     <p>
+-|       "baz"
+-
+-#data
+-<!DOCTYPE html><body></body><svg><g>foo</g><g>bar</g><p>baz
+-#errors
+-34: Stray “svg” start tag.
+-61: HTML start tag “p” in a foreign namespace context.
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <svg svg>
+-|       <svg g>
+-|         "foo"
+-|       <svg g>
+-|         "bar"
+-|     <p>
+-|       "baz"
+-
+-#data
+-<!DOCTYPE html><frameset><svg><g></g><g></g><p><span>
+-#errors
+-31: Stray “svg” start tag.
+-35: Stray “g” start tag.
+-40: Stray end tag “g”
+-44: Stray “g” start tag.
+-49: Stray end tag “g”
+-52: Stray “p” start tag.
+-58: Stray “span” start tag.
+-58: End of file seen and there were open elements.
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <frameset>
+-
+-#data
+-<!DOCTYPE html><frameset></frameset><svg><g></g><g></g><p><span>
+-#errors
+-42: Stray “svg” start tag.
+-46: Stray “g” start tag.
+-51: Stray end tag “g”
+-55: Stray “g” start tag.
+-60: Stray end tag “g”
+-63: Stray “p” start tag.
+-69: Stray “span” start tag.
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <frameset>
+-
+-#data
+-<!DOCTYPE html><body xlink:href=foo><svg xlink:href=foo></svg>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     xlink:href="foo"
+-|     <svg svg>
+-|       xlink href="foo"
+-
+-#data
+-<!DOCTYPE html><body xlink:href=foo xml:lang=en><svg><g xml:lang=en xlink:href=foo></g></svg>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     xlink:href="foo"
+-|     xml:lang="en"
+-|     <svg svg>
+-|       <svg g>
+-|         xlink href="foo"
+-|         xml lang="en"
+-
+-#data
+-<!DOCTYPE html><body xlink:href=foo xml:lang=en><svg><g xml:lang=en xlink:href=foo /></svg>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     xlink:href="foo"
+-|     xml:lang="en"
+-|     <svg svg>
+-|       <svg g>
+-|         xlink href="foo"
+-|         xml lang="en"
+-
+-#data
+-<!DOCTYPE html><body xlink:href=foo xml:lang=en><svg><g xml:lang=en xlink:href=foo />bar</svg>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     xlink:href="foo"
+-|     xml:lang="en"
+-|     <svg svg>
+-|       <svg g>
+-|         xlink href="foo"
+-|         xml lang="en"
+-|       "bar"
+-
+-#data
+-<svg></path>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <svg svg>
+-
+-#data
+-<div><svg></div>a
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <div>
+-|       <svg svg>
+-|     "a"
+-
+-#data
+-<div><svg><path></div>a
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <div>
+-|       <svg svg>
+-|         <svg path>
+-|     "a"
+-
+-#data
+-<div><svg><path></svg><path>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <div>
+-|       <svg svg>
+-|         <svg path>
+-|       <path>
+-
+-#data
+-<div><svg><path><foreignObject><math></div>a
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <div>
+-|       <svg svg>
+-|         <svg path>
+-|           <svg foreignObject>
+-|             <math math>
+-|               "a"
+-
+-#data
+-<div><svg><path><foreignObject><p></div>a
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <div>
+-|       <svg svg>
+-|         <svg path>
+-|           <svg foreignObject>
+-|             <p>
+-|               "a"
+-
+-#data
+-<!DOCTYPE html><svg><desc><div><svg><ul>a
+-#errors
+-40: HTML start tag “ul” in a foreign namespace context.
+-41: End of file in a foreign namespace context.
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <svg svg>
+-|       <svg desc>
+-|         <div>
+-|           <svg svg>
+-|           <ul>
+-|             "a"
+-
+-#data
+-<!DOCTYPE html><svg><desc><svg><ul>a
+-#errors
+-35: HTML start tag “ul” in a foreign namespace context.
+-36: End of file in a foreign namespace context.
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <svg svg>
+-|       <svg desc>
+-|         <svg svg>
+-|         <ul>
+-|           "a"
+-
+-#data
+-<!DOCTYPE html><p><svg><desc><p>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <p>
+-|       <svg svg>
+-|         <svg desc>
+-|           <p>
+-
+-#data
+-<!DOCTYPE html><p><svg><title><p>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <p>
+-|       <svg svg>
+-|         <svg title>
+-|           <p>
+-
+-#data
+-<div><svg><path><foreignObject><p></foreignObject><p>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <div>
+-|       <svg svg>
+-|         <svg path>
+-|           <svg foreignObject>
+-|             <p>
+-|             <p>
+-
+-#data
+-<math><mi><div><object><div><span></span></div></object></div></mi><mi>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <math math>
+-|       <math mi>
+-|         <div>
+-|           <object>
+-|             <div>
+-|               <span>
+-|       <math mi>
+-
+-#data
+-<math><mi><svg><foreignObject><div><div></div></div></foreignObject></svg></mi><mi>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <math math>
+-|       <math mi>
+-|         <svg svg>
+-|           <svg foreignObject>
+-|             <div>
+-|               <div>
+-|       <math mi>
+-
+-#data
+-<svg><script></script><path>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <svg svg>
+-|       <svg script>
+-|       <svg path>
+-
+-#data
+-<table><svg></svg><tr>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <svg svg>
+-|     <table>
+-|       <tbody>
+-|         <tr>
+-
+-#data
+-<math><mi><mglyph>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <math math>
+-|       <math mi>
+-|         <math mglyph>
+-
+-#data
+-<math><mi><malignmark>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <math math>
+-|       <math mi>
+-|         <math malignmark>
+-
+-#data
+-<math><mo><mglyph>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <math math>
+-|       <math mo>
+-|         <math mglyph>
+-
+-#data
+-<math><mo><malignmark>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <math math>
+-|       <math mo>
+-|         <math malignmark>
+-
+-#data
+-<math><mn><mglyph>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <math math>
+-|       <math mn>
+-|         <math mglyph>
+-
+-#data
+-<math><mn><malignmark>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <math math>
+-|       <math mn>
+-|         <math malignmark>
+-
+-#data
+-<math><ms><mglyph>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <math math>
+-|       <math ms>
+-|         <math mglyph>
+-
+-#data
+-<math><ms><malignmark>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <math math>
+-|       <math ms>
+-|         <math malignmark>
+-
+-#data
+-<math><mtext><mglyph>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <math math>
+-|       <math mtext>
+-|         <math mglyph>
+-
+-#data
+-<math><mtext><malignmark>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <math math>
+-|       <math mtext>
+-|         <math malignmark>
+-
+-#data
+-<math><annotation-xml><svg></svg></annotation-xml><mi>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <math math>
+-|       <math annotation-xml>
+-|         <svg svg>
+-|       <math mi>
+-
+-#data
+-<math><annotation-xml><svg><foreignObject><div><math><mi></mi></math><span></span></div></foreignObject><path></path></svg></annotation-xml><mi>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <math math>
+-|       <math annotation-xml>
+-|         <svg svg>
+-|           <svg foreignObject>
+-|             <div>
+-|               <math math>
+-|                 <math mi>
+-|               <span>
+-|           <svg path>
+-|       <math mi>
+-
+-#data
+-<math><annotation-xml><svg><foreignObject><math><mi><svg></svg></mi><mo></mo></math><span></span></foreignObject><path></path></svg></annotation-xml><mi>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <math math>
+-|       <math annotation-xml>
+-|         <svg svg>
+-|           <svg foreignObject>
+-|             <math math>
+-|               <math mi>
+-|                 <svg svg>
+-|               <math mo>
+-|             <span>
+-|           <svg path>
+-|       <math mi>
+diff --git a/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests11.dat b/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests11.dat
+deleted file mode 100644
+index 638cde4..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests11.dat
++++ /dev/null
+@@ -1,482 +0,0 @@
+-#data
+-<!DOCTYPE html><body><svg attributeName='' attributeType='' baseFrequency='' baseProfile='' calcMode='' clipPathUnits='' contentScriptType='' contentStyleType='' diffuseConstant='' edgeMode='' externalResourcesRequired='' filterRes='' filterUnits='' glyphRef='' gradientTransform='' gradientUnits='' kernelMatrix='' kernelUnitLength='' keyPoints='' keySplines='' keyTimes='' lengthAdjust='' limitingConeAngle='' markerHeight='' markerUnits='' markerWidth='' maskContentUnits='' maskUnits='' numOctaves='' pathLength='' patternContentUnits='' patternTransform='' patternUnits='' pointsAtX='' pointsAtY='' pointsAtZ='' preserveAlpha='' preserveAspectRatio='' primitiveUnits='' refX='' refY='' repeatCount='' repeatDur='' requiredExtensions='' requiredFeatures='' specularConstant='' specularExponent='' spreadMethod='' startOffset='' stdDeviation='' stitchTiles='' surfaceScale='' systemLanguage='' tableValues='' targetX='' targetY='' textLength='' viewBox='' viewTarget='' xChannelSelector='' yChannelSelector='' zoomAndPan=''></svg>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <svg svg>
+-|       attributeName=""
+-|       attributeType=""
+-|       baseFrequency=""
+-|       baseProfile=""
+-|       calcMode=""
+-|       clipPathUnits=""
+-|       contentScriptType=""
+-|       contentStyleType=""
+-|       diffuseConstant=""
+-|       edgeMode=""
+-|       externalResourcesRequired=""
+-|       filterRes=""
+-|       filterUnits=""
+-|       glyphRef=""
+-|       gradientTransform=""
+-|       gradientUnits=""
+-|       kernelMatrix=""
+-|       kernelUnitLength=""
+-|       keyPoints=""
+-|       keySplines=""
+-|       keyTimes=""
+-|       lengthAdjust=""
+-|       limitingConeAngle=""
+-|       markerHeight=""
+-|       markerUnits=""
+-|       markerWidth=""
+-|       maskContentUnits=""
+-|       maskUnits=""
+-|       numOctaves=""
+-|       pathLength=""
+-|       patternContentUnits=""
+-|       patternTransform=""
+-|       patternUnits=""
+-|       pointsAtX=""
+-|       pointsAtY=""
+-|       pointsAtZ=""
+-|       preserveAlpha=""
+-|       preserveAspectRatio=""
+-|       primitiveUnits=""
+-|       refX=""
+-|       refY=""
+-|       repeatCount=""
+-|       repeatDur=""
+-|       requiredExtensions=""
+-|       requiredFeatures=""
+-|       specularConstant=""
+-|       specularExponent=""
+-|       spreadMethod=""
+-|       startOffset=""
+-|       stdDeviation=""
+-|       stitchTiles=""
+-|       surfaceScale=""
+-|       systemLanguage=""
+-|       tableValues=""
+-|       targetX=""
+-|       targetY=""
+-|       textLength=""
+-|       viewBox=""
+-|       viewTarget=""
+-|       xChannelSelector=""
+-|       yChannelSelector=""
+-|       zoomAndPan=""
+-
+-#data
+-<!DOCTYPE html><BODY><SVG ATTRIBUTENAME='' ATTRIBUTETYPE='' BASEFREQUENCY='' BASEPROFILE='' CALCMODE='' CLIPPATHUNITS='' CONTENTSCRIPTTYPE='' CONTENTSTYLETYPE='' DIFFUSECONSTANT='' EDGEMODE='' EXTERNALRESOURCESREQUIRED='' FILTERRES='' FILTERUNITS='' GLYPHREF='' GRADIENTTRANSFORM='' GRADIENTUNITS='' KERNELMATRIX='' KERNELUNITLENGTH='' KEYPOINTS='' KEYSPLINES='' KEYTIMES='' LENGTHADJUST='' LIMITINGCONEANGLE='' MARKERHEIGHT='' MARKERUNITS='' MARKERWIDTH='' MASKCONTENTUNITS='' MASKUNITS='' NUMOCTAVES='' PATHLENGTH='' PATTERNCONTENTUNITS='' PATTERNTRANSFORM='' PATTERNUNITS='' POINTSATX='' POINTSATY='' POINTSATZ='' PRESERVEALPHA='' PRESERVEASPECTRATIO='' PRIMITIVEUNITS='' REFX='' REFY='' REPEATCOUNT='' REPEATDUR='' REQUIREDEXTENSIONS='' REQUIREDFEATURES='' SPECULARCONSTANT='' SPECULAREXPONENT='' SPREADMETHOD='' STARTOFFSET='' STDDEVIATION='' STITCHTILES='' SURFACESCALE='' SYSTEMLANGUAGE='' TABLEVALUES='' TARGETX='' TARGETY='' TEXTLENGTH='' VIEWBOX='' VIEWTARGET='' XCHANNELSELECTOR='' YCHANNELSELECTOR='' ZOOMANDPAN=''></SVG>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <svg svg>
+-|       attributeName=""
+-|       attributeType=""
+-|       baseFrequency=""
+-|       baseProfile=""
+-|       calcMode=""
+-|       clipPathUnits=""
+-|       contentScriptType=""
+-|       contentStyleType=""
+-|       diffuseConstant=""
+-|       edgeMode=""
+-|       externalResourcesRequired=""
+-|       filterRes=""
+-|       filterUnits=""
+-|       glyphRef=""
+-|       gradientTransform=""
+-|       gradientUnits=""
+-|       kernelMatrix=""
+-|       kernelUnitLength=""
+-|       keyPoints=""
+-|       keySplines=""
+-|       keyTimes=""
+-|       lengthAdjust=""
+-|       limitingConeAngle=""
+-|       markerHeight=""
+-|       markerUnits=""
+-|       markerWidth=""
+-|       maskContentUnits=""
+-|       maskUnits=""
+-|       numOctaves=""
+-|       pathLength=""
+-|       patternContentUnits=""
+-|       patternTransform=""
+-|       patternUnits=""
+-|       pointsAtX=""
+-|       pointsAtY=""
+-|       pointsAtZ=""
+-|       preserveAlpha=""
+-|       preserveAspectRatio=""
+-|       primitiveUnits=""
+-|       refX=""
+-|       refY=""
+-|       repeatCount=""
+-|       repeatDur=""
+-|       requiredExtensions=""
+-|       requiredFeatures=""
+-|       specularConstant=""
+-|       specularExponent=""
+-|       spreadMethod=""
+-|       startOffset=""
+-|       stdDeviation=""
+-|       stitchTiles=""
+-|       surfaceScale=""
+-|       systemLanguage=""
+-|       tableValues=""
+-|       targetX=""
+-|       targetY=""
+-|       textLength=""
+-|       viewBox=""
+-|       viewTarget=""
+-|       xChannelSelector=""
+-|       yChannelSelector=""
+-|       zoomAndPan=""
+-
+-#data
+-<!DOCTYPE html><body><svg attributename='' attributetype='' basefrequency='' baseprofile='' calcmode='' clippathunits='' contentscripttype='' contentstyletype='' diffuseconstant='' edgemode='' externalresourcesrequired='' filterres='' filterunits='' glyphref='' gradienttransform='' gradientunits='' kernelmatrix='' kernelunitlength='' keypoints='' keysplines='' keytimes='' lengthadjust='' limitingconeangle='' markerheight='' markerunits='' markerwidth='' maskcontentunits='' maskunits='' numoctaves='' pathlength='' patterncontentunits='' patterntransform='' patternunits='' pointsatx='' pointsaty='' pointsatz='' preservealpha='' preserveaspectratio='' primitiveunits='' refx='' refy='' repeatcount='' repeatdur='' requiredextensions='' requiredfeatures='' specularconstant='' specularexponent='' spreadmethod='' startoffset='' stddeviation='' stitchtiles='' surfacescale='' systemlanguage='' tablevalues='' targetx='' targety='' textlength='' viewbox='' viewtarget='' xchannelselector='' ychannelselector='' zoomandpan=''></svg>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <svg svg>
+-|       attributeName=""
+-|       attributeType=""
+-|       baseFrequency=""
+-|       baseProfile=""
+-|       calcMode=""
+-|       clipPathUnits=""
+-|       contentScriptType=""
+-|       contentStyleType=""
+-|       diffuseConstant=""
+-|       edgeMode=""
+-|       externalResourcesRequired=""
+-|       filterRes=""
+-|       filterUnits=""
+-|       glyphRef=""
+-|       gradientTransform=""
+-|       gradientUnits=""
+-|       kernelMatrix=""
+-|       kernelUnitLength=""
+-|       keyPoints=""
+-|       keySplines=""
+-|       keyTimes=""
+-|       lengthAdjust=""
+-|       limitingConeAngle=""
+-|       markerHeight=""
+-|       markerUnits=""
+-|       markerWidth=""
+-|       maskContentUnits=""
+-|       maskUnits=""
+-|       numOctaves=""
+-|       pathLength=""
+-|       patternContentUnits=""
+-|       patternTransform=""
+-|       patternUnits=""
+-|       pointsAtX=""
+-|       pointsAtY=""
+-|       pointsAtZ=""
+-|       preserveAlpha=""
+-|       preserveAspectRatio=""
+-|       primitiveUnits=""
+-|       refX=""
+-|       refY=""
+-|       repeatCount=""
+-|       repeatDur=""
+-|       requiredExtensions=""
+-|       requiredFeatures=""
+-|       specularConstant=""
+-|       specularExponent=""
+-|       spreadMethod=""
+-|       startOffset=""
+-|       stdDeviation=""
+-|       stitchTiles=""
+-|       surfaceScale=""
+-|       systemLanguage=""
+-|       tableValues=""
+-|       targetX=""
+-|       targetY=""
+-|       textLength=""
+-|       viewBox=""
+-|       viewTarget=""
+-|       xChannelSelector=""
+-|       yChannelSelector=""
+-|       zoomAndPan=""
+-
+-#data
+-<!DOCTYPE html><body><math attributeName='' attributeType='' baseFrequency='' baseProfile='' calcMode='' clipPathUnits='' contentScriptType='' contentStyleType='' diffuseConstant='' edgeMode='' externalResourcesRequired='' filterRes='' filterUnits='' glyphRef='' gradientTransform='' gradientUnits='' kernelMatrix='' kernelUnitLength='' keyPoints='' keySplines='' keyTimes='' lengthAdjust='' limitingConeAngle='' markerHeight='' markerUnits='' markerWidth='' maskContentUnits='' maskUnits='' numOctaves='' pathLength='' patternContentUnits='' patternTransform='' patternUnits='' pointsAtX='' pointsAtY='' pointsAtZ='' preserveAlpha='' preserveAspectRatio='' primitiveUnits='' refX='' refY='' repeatCount='' repeatDur='' requiredExtensions='' requiredFeatures='' specularConstant='' specularExponent='' spreadMethod='' startOffset='' stdDeviation='' stitchTiles='' surfaceScale='' systemLanguage='' tableValues='' targetX='' targetY='' textLength='' viewBox='' viewTarget='' xChannelSelector='' yChannelSelector='' zoomAndPan=''></math>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <math math>
+-|       attributename=""
+-|       attributetype=""
+-|       basefrequency=""
+-|       baseprofile=""
+-|       calcmode=""
+-|       clippathunits=""
+-|       contentscripttype=""
+-|       contentstyletype=""
+-|       diffuseconstant=""
+-|       edgemode=""
+-|       externalresourcesrequired=""
+-|       filterres=""
+-|       filterunits=""
+-|       glyphref=""
+-|       gradienttransform=""
+-|       gradientunits=""
+-|       kernelmatrix=""
+-|       kernelunitlength=""
+-|       keypoints=""
+-|       keysplines=""
+-|       keytimes=""
+-|       lengthadjust=""
+-|       limitingconeangle=""
+-|       markerheight=""
+-|       markerunits=""
+-|       markerwidth=""
+-|       maskcontentunits=""
+-|       maskunits=""
+-|       numoctaves=""
+-|       pathlength=""
+-|       patterncontentunits=""
+-|       patterntransform=""
+-|       patternunits=""
+-|       pointsatx=""
+-|       pointsaty=""
+-|       pointsatz=""
+-|       preservealpha=""
+-|       preserveaspectratio=""
+-|       primitiveunits=""
+-|       refx=""
+-|       refy=""
+-|       repeatcount=""
+-|       repeatdur=""
+-|       requiredextensions=""
+-|       requiredfeatures=""
+-|       specularconstant=""
+-|       specularexponent=""
+-|       spreadmethod=""
+-|       startoffset=""
+-|       stddeviation=""
+-|       stitchtiles=""
+-|       surfacescale=""
+-|       systemlanguage=""
+-|       tablevalues=""
+-|       targetx=""
+-|       targety=""
+-|       textlength=""
+-|       viewbox=""
+-|       viewtarget=""
+-|       xchannelselector=""
+-|       ychannelselector=""
+-|       zoomandpan=""
+-
+-#data
+-<!DOCTYPE html><body><svg><altGlyph /><altGlyphDef /><altGlyphItem /><animateColor /><animateMotion /><animateTransform /><clipPath /><feBlend /><feColorMatrix /><feComponentTransfer /><feComposite /><feConvolveMatrix /><feDiffuseLighting /><feDisplacementMap /><feDistantLight /><feFlood /><feFuncA /><feFuncB /><feFuncG /><feFuncR /><feGaussianBlur /><feImage /><feMerge /><feMergeNode /><feMorphology /><feOffset /><fePointLight /><feSpecularLighting /><feSpotLight /><feTile /><feTurbulence /><foreignObject /><glyphRef /><linearGradient /><radialGradient /><textPath /></svg>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <svg svg>
+-|       <svg altGlyph>
+-|       <svg altGlyphDef>
+-|       <svg altGlyphItem>
+-|       <svg animateColor>
+-|       <svg animateMotion>
+-|       <svg animateTransform>
+-|       <svg clipPath>
+-|       <svg feBlend>
+-|       <svg feColorMatrix>
+-|       <svg feComponentTransfer>
+-|       <svg feComposite>
+-|       <svg feConvolveMatrix>
+-|       <svg feDiffuseLighting>
+-|       <svg feDisplacementMap>
+-|       <svg feDistantLight>
+-|       <svg feFlood>
+-|       <svg feFuncA>
+-|       <svg feFuncB>
+-|       <svg feFuncG>
+-|       <svg feFuncR>
+-|       <svg feGaussianBlur>
+-|       <svg feImage>
+-|       <svg feMerge>
+-|       <svg feMergeNode>
+-|       <svg feMorphology>
+-|       <svg feOffset>
+-|       <svg fePointLight>
+-|       <svg feSpecularLighting>
+-|       <svg feSpotLight>
+-|       <svg feTile>
+-|       <svg feTurbulence>
+-|       <svg foreignObject>
+-|       <svg glyphRef>
+-|       <svg linearGradient>
+-|       <svg radialGradient>
+-|       <svg textPath>
+-
+-#data
+-<!DOCTYPE html><body><svg><altglyph /><altglyphdef /><altglyphitem /><animatecolor /><animatemotion /><animatetransform /><clippath /><feblend /><fecolormatrix /><fecomponenttransfer /><fecomposite /><feconvolvematrix /><fediffuselighting /><fedisplacementmap /><fedistantlight /><feflood /><fefunca /><fefuncb /><fefuncg /><fefuncr /><fegaussianblur /><feimage /><femerge /><femergenode /><femorphology /><feoffset /><fepointlight /><fespecularlighting /><fespotlight /><fetile /><feturbulence /><foreignobject /><glyphref /><lineargradient /><radialgradient /><textpath /></svg>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <svg svg>
+-|       <svg altGlyph>
+-|       <svg altGlyphDef>
+-|       <svg altGlyphItem>
+-|       <svg animateColor>
+-|       <svg animateMotion>
+-|       <svg animateTransform>
+-|       <svg clipPath>
+-|       <svg feBlend>
+-|       <svg feColorMatrix>
+-|       <svg feComponentTransfer>
+-|       <svg feComposite>
+-|       <svg feConvolveMatrix>
+-|       <svg feDiffuseLighting>
+-|       <svg feDisplacementMap>
+-|       <svg feDistantLight>
+-|       <svg feFlood>
+-|       <svg feFuncA>
+-|       <svg feFuncB>
+-|       <svg feFuncG>
+-|       <svg feFuncR>
+-|       <svg feGaussianBlur>
+-|       <svg feImage>
+-|       <svg feMerge>
+-|       <svg feMergeNode>
+-|       <svg feMorphology>
+-|       <svg feOffset>
+-|       <svg fePointLight>
+-|       <svg feSpecularLighting>
+-|       <svg feSpotLight>
+-|       <svg feTile>
+-|       <svg feTurbulence>
+-|       <svg foreignObject>
+-|       <svg glyphRef>
+-|       <svg linearGradient>
+-|       <svg radialGradient>
+-|       <svg textPath>
+-
+-#data
+-<!DOCTYPE html><BODY><SVG><ALTGLYPH /><ALTGLYPHDEF /><ALTGLYPHITEM /><ANIMATECOLOR /><ANIMATEMOTION /><ANIMATETRANSFORM /><CLIPPATH /><FEBLEND /><FECOLORMATRIX /><FECOMPONENTTRANSFER /><FECOMPOSITE /><FECONVOLVEMATRIX /><FEDIFFUSELIGHTING /><FEDISPLACEMENTMAP /><FEDISTANTLIGHT /><FEFLOOD /><FEFUNCA /><FEFUNCB /><FEFUNCG /><FEFUNCR /><FEGAUSSIANBLUR /><FEIMAGE /><FEMERGE /><FEMERGENODE /><FEMORPHOLOGY /><FEOFFSET /><FEPOINTLIGHT /><FESPECULARLIGHTING /><FESPOTLIGHT /><FETILE /><FETURBULENCE /><FOREIGNOBJECT /><GLYPHREF /><LINEARGRADIENT /><RADIALGRADIENT /><TEXTPATH /></SVG>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <svg svg>
+-|       <svg altGlyph>
+-|       <svg altGlyphDef>
+-|       <svg altGlyphItem>
+-|       <svg animateColor>
+-|       <svg animateMotion>
+-|       <svg animateTransform>
+-|       <svg clipPath>
+-|       <svg feBlend>
+-|       <svg feColorMatrix>
+-|       <svg feComponentTransfer>
+-|       <svg feComposite>
+-|       <svg feConvolveMatrix>
+-|       <svg feDiffuseLighting>
+-|       <svg feDisplacementMap>
+-|       <svg feDistantLight>
+-|       <svg feFlood>
+-|       <svg feFuncA>
+-|       <svg feFuncB>
+-|       <svg feFuncG>
+-|       <svg feFuncR>
+-|       <svg feGaussianBlur>
+-|       <svg feImage>
+-|       <svg feMerge>
+-|       <svg feMergeNode>
+-|       <svg feMorphology>
+-|       <svg feOffset>
+-|       <svg fePointLight>
+-|       <svg feSpecularLighting>
+-|       <svg feSpotLight>
+-|       <svg feTile>
+-|       <svg feTurbulence>
+-|       <svg foreignObject>
+-|       <svg glyphRef>
+-|       <svg linearGradient>
+-|       <svg radialGradient>
+-|       <svg textPath>
+-
+-#data
+-<!DOCTYPE html><body><math><altGlyph /><altGlyphDef /><altGlyphItem /><animateColor /><animateMotion /><animateTransform /><clipPath /><feBlend /><feColorMatrix /><feComponentTransfer /><feComposite /><feConvolveMatrix /><feDiffuseLighting /><feDisplacementMap /><feDistantLight /><feFlood /><feFuncA /><feFuncB /><feFuncG /><feFuncR /><feGaussianBlur /><feImage /><feMerge /><feMergeNode /><feMorphology /><feOffset /><fePointLight /><feSpecularLighting /><feSpotLight /><feTile /><feTurbulence /><foreignObject /><glyphRef /><linearGradient /><radialGradient /><textPath /></math>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <math math>
+-|       <math altglyph>
+-|       <math altglyphdef>
+-|       <math altglyphitem>
+-|       <math animatecolor>
+-|       <math animatemotion>
+-|       <math animatetransform>
+-|       <math clippath>
+-|       <math feblend>
+-|       <math fecolormatrix>
+-|       <math fecomponenttransfer>
+-|       <math fecomposite>
+-|       <math feconvolvematrix>
+-|       <math fediffuselighting>
+-|       <math fedisplacementmap>
+-|       <math fedistantlight>
+-|       <math feflood>
+-|       <math fefunca>
+-|       <math fefuncb>
+-|       <math fefuncg>
+-|       <math fefuncr>
+-|       <math fegaussianblur>
+-|       <math feimage>
+-|       <math femerge>
+-|       <math femergenode>
+-|       <math femorphology>
+-|       <math feoffset>
+-|       <math fepointlight>
+-|       <math fespecularlighting>
+-|       <math fespotlight>
+-|       <math fetile>
+-|       <math feturbulence>
+-|       <math foreignobject>
+-|       <math glyphref>
+-|       <math lineargradient>
+-|       <math radialgradient>
+-|       <math textpath>
+-
+-#data
+-<!DOCTYPE html><body><svg><solidColor /></svg>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <svg svg>
+-|       <svg solidcolor>
+diff --git a/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests12.dat b/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests12.dat
+deleted file mode 100644
+index 63107d2..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests12.dat
++++ /dev/null
+@@ -1,62 +0,0 @@
+-#data
+-<!DOCTYPE html><body><p>foo<math><mtext><i>baz</i></mtext><annotation-xml><svg><desc><b>eggs</b></desc><g><foreignObject><P>spam<TABLE><tr><td><img></td></table></foreignObject></g><g>quux</g></svg></annotation-xml></math>bar
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <p>
+-|       "foo"
+-|       <math math>
+-|         <math mtext>
+-|           <i>
+-|             "baz"
+-|         <math annotation-xml>
+-|           <svg svg>
+-|             <svg desc>
+-|               <b>
+-|                 "eggs"
+-|             <svg g>
+-|               <svg foreignObject>
+-|                 <p>
+-|                   "spam"
+-|                 <table>
+-|                   <tbody>
+-|                     <tr>
+-|                       <td>
+-|                         <img>
+-|             <svg g>
+-|               "quux"
+-|       "bar"
+-
+-#data
+-<!DOCTYPE html><body>foo<math><mtext><i>baz</i></mtext><annotation-xml><svg><desc><b>eggs</b></desc><g><foreignObject><P>spam<TABLE><tr><td><img></td></table></foreignObject></g><g>quux</g></svg></annotation-xml></math>bar
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     "foo"
+-|     <math math>
+-|       <math mtext>
+-|         <i>
+-|           "baz"
+-|       <math annotation-xml>
+-|         <svg svg>
+-|           <svg desc>
+-|             <b>
+-|               "eggs"
+-|           <svg g>
+-|             <svg foreignObject>
+-|               <p>
+-|                 "spam"
+-|               <table>
+-|                 <tbody>
+-|                   <tr>
+-|                     <td>
+-|                       <img>
+-|           <svg g>
+-|             "quux"
+-|     "bar"
+diff --git a/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests14.dat b/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests14.dat
+deleted file mode 100644
+index b8713f8..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests14.dat
++++ /dev/null
+@@ -1,74 +0,0 @@
+-#data
+-<!DOCTYPE html><html><body><xyz:abc></xyz:abc>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <xyz:abc>
+-
+-#data
+-<!DOCTYPE html><html><body><xyz:abc></xyz:abc><span></span>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <xyz:abc>
+-|     <span>
+-
+-#data
+-<!DOCTYPE html><html><html abc:def=gh><xyz:abc></xyz:abc>
+-#errors
+-15: Unexpected start tag html
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   abc:def="gh"
+-|   <head>
+-|   <body>
+-|     <xyz:abc>
+-
+-#data
+-<!DOCTYPE html><html xml:lang=bar><html xml:lang=foo>
+-#errors
+-15: Unexpected start tag html
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   xml:lang="bar"
+-|   <head>
+-|   <body>
+-
+-#data
+-<!DOCTYPE html><html 123=456>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   123="456"
+-|   <head>
+-|   <body>
+-
+-#data
+-<!DOCTYPE html><html 123=456><html 789=012>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   123="456"
+-|   789="012"
+-|   <head>
+-|   <body>
+-
+-#data
+-<!DOCTYPE html><html><body 789=012>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     789="012"
+diff --git a/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests15.dat b/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests15.dat
+deleted file mode 100644
+index 6ce1c0d..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests15.dat
++++ /dev/null
+@@ -1,208 +0,0 @@
+-#data
+-<!DOCTYPE html><p><b><i><u></p> <p>X
+-#errors
+-Line: 1 Col: 31 Unexpected end tag (p). Ignored.
+-Line: 1 Col: 36 Expected closing tag. Unexpected end of file.
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <p>
+-|       <b>
+-|         <i>
+-|           <u>
+-|     <b>
+-|       <i>
+-|         <u>
+-|           " "
+-|           <p>
+-|             "X"
+-
+-#data
+-<p><b><i><u></p>
+-<p>X
+-#errors
+-Line: 1 Col: 3 Unexpected start tag (p). Expected DOCTYPE.
+-Line: 1 Col: 16 Unexpected end tag (p). Ignored.
+-Line: 2 Col: 4 Expected closing tag. Unexpected end of file.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <p>
+-|       <b>
+-|         <i>
+-|           <u>
+-|     <b>
+-|       <i>
+-|         <u>
+-|           "
+-"
+-|           <p>
+-|             "X"
+-
+-#data
+-<!doctype html></html> <head>
+-#errors
+-Line: 1 Col: 22 Unexpected end tag (html) after the (implied) root element.
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     " "
+-
+-#data
+-<!doctype html></body><meta>
+-#errors
+-Line: 1 Col: 22 Unexpected end tag (body) after the (implied) root element.
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <meta>
+-
+-#data
+-<html></html><!-- foo -->
+-#errors
+-Line: 1 Col: 6 Unexpected start tag (html). Expected DOCTYPE.
+-Line: 1 Col: 13 Unexpected end tag (html) after the (implied) root element.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-| <!--  foo  -->
+-
+-#data
+-<!doctype html></body><title>X</title>
+-#errors
+-Line: 1 Col: 22 Unexpected end tag (body) after the (implied) root element.
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <title>
+-|       "X"
+-
+-#data
+-<!doctype html><table> X<meta></table>
+-#errors
+-Line: 1 Col: 24 Unexpected non-space characters in table context caused voodoo mode.
+-Line: 1 Col: 30 Unexpected start tag (meta) in table context caused voodoo mode.
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     " X"
+-|     <meta>
+-|     <table>
+-
+-#data
+-<!doctype html><table> x</table>
+-#errors
+-Line: 1 Col: 24 Unexpected non-space characters in table context caused voodoo mode.
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     " x"
+-|     <table>
+-
+-#data
+-<!doctype html><table> x </table>
+-#errors
+-Line: 1 Col: 25 Unexpected non-space characters in table context caused voodoo mode.
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     " x "
+-|     <table>
+-
+-#data
+-<!doctype html><table><tr> x</table>
+-#errors
+-Line: 1 Col: 28 Unexpected non-space characters in table context caused voodoo mode.
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     " x"
+-|     <table>
+-|       <tbody>
+-|         <tr>
+-
+-#data
+-<!doctype html><table>X<style> <tr>x </style> </table>
+-#errors
+-Line: 1 Col: 23 Unexpected non-space characters in table context caused voodoo mode.
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     "X"
+-|     <table>
+-|       <style>
+-|         " <tr>x "
+-|       " "
+-
+-#data
+-<!doctype html><div><table><a>foo</a> <tr><td>bar</td> </tr></table></div>
+-#errors
+-Line: 1 Col: 30 Unexpected start tag (a) in table context caused voodoo mode.
+-Line: 1 Col: 37 Unexpected end tag (a) in table context caused voodoo mode.
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <div>
+-|       <a>
+-|         "foo"
+-|       <table>
+-|         " "
+-|         <tbody>
+-|           <tr>
+-|             <td>
+-|               "bar"
+-|             " "
+-
+-#data
+-<frame></frame></frame><frameset><frame><frameset><frame></frameset><noframes></frameset><noframes>
+-#errors
+-6: Start tag seen without seeing a doctype first. Expected “<!DOCTYPE html>”.
+-13: Stray start tag “frame”.
+-21: Stray end tag “frame”.
+-29: Stray end tag “frame”.
+-39: “frameset” start tag after “body” already open.
+-105: End of file seen inside an [R]CDATA element.
+-105: End of file seen and there were open elements.
+-XXX: These errors are wrong, please fix me!
+-#document
+-| <html>
+-|   <head>
+-|   <frameset>
+-|     <frame>
+-|     <frameset>
+-|       <frame>
+-|     <noframes>
+-|       "</frameset><noframes>"
+-
+-#data
+-<!DOCTYPE html><object></html>
+-#errors
+-1: Expected closing tag. Unexpected end of file
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <object>
+diff --git a/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests16.dat b/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests16.dat
+deleted file mode 100644
+index c8ef66f..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests16.dat
++++ /dev/null
+@@ -1,2299 +0,0 @@
+-#data
+-<!doctype html><script>
+-#errors
+-Line: 1 Col: 23 Unexpected end of file. Expected end tag (script).
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|     <script>
+-|   <body>
+-
+-#data
+-<!doctype html><script>a
+-#errors
+-Line: 1 Col: 24 Unexpected end of file. Expected end tag (script).
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|     <script>
+-|       "a"
+-|   <body>
+-
+-#data
+-<!doctype html><script><
+-#errors
+-Line: 1 Col: 24 Unexpected end of file. Expected end tag (script).
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<"
+-|   <body>
+-
+-#data
+-<!doctype html><script></
+-#errors
+-Line: 1 Col: 25 Unexpected end of file. Expected end tag (script).
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|     <script>
+-|       "</"
+-|   <body>
+-
+-#data
+-<!doctype html><script></S
+-#errors
+-Line: 1 Col: 26 Unexpected end of file. Expected end tag (script).
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|     <script>
+-|       "</S"
+-|   <body>
+-
+-#data
+-<!doctype html><script></SC
+-#errors
+-Line: 1 Col: 27 Unexpected end of file. Expected end tag (script).
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|     <script>
+-|       "</SC"
+-|   <body>
+-
+-#data
+-<!doctype html><script></SCR
+-#errors
+-Line: 1 Col: 28 Unexpected end of file. Expected end tag (script).
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|     <script>
+-|       "</SCR"
+-|   <body>
+-
+-#data
+-<!doctype html><script></SCRI
+-#errors
+-Line: 1 Col: 29 Unexpected end of file. Expected end tag (script).
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|     <script>
+-|       "</SCRI"
+-|   <body>
+-
+-#data
+-<!doctype html><script></SCRIP
+-#errors
+-Line: 1 Col: 30 Unexpected end of file. Expected end tag (script).
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|     <script>
+-|       "</SCRIP"
+-|   <body>
+-
+-#data
+-<!doctype html><script></SCRIPT
+-#errors
+-Line: 1 Col: 31 Unexpected end of file. Expected end tag (script).
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|     <script>
+-|       "</SCRIPT"
+-|   <body>
+-
+-#data
+-<!doctype html><script></SCRIPT 
+-#errors
+-Line: 1 Col: 32 Unexpected end of file. Expected end tag (script).
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|     <script>
+-|   <body>
+-
+-#data
+-<!doctype html><script></s
+-#errors
+-Line: 1 Col: 26 Unexpected end of file. Expected end tag (script).
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|     <script>
+-|       "</s"
+-|   <body>
+-
+-#data
+-<!doctype html><script></sc
+-#errors
+-Line: 1 Col: 27 Unexpected end of file. Expected end tag (script).
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|     <script>
+-|       "</sc"
+-|   <body>
+-
+-#data
+-<!doctype html><script></scr
+-#errors
+-Line: 1 Col: 28 Unexpected end of file. Expected end tag (script).
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|     <script>
+-|       "</scr"
+-|   <body>
+-
+-#data
+-<!doctype html><script></scri
+-#errors
+-Line: 1 Col: 29 Unexpected end of file. Expected end tag (script).
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|     <script>
+-|       "</scri"
+-|   <body>
+-
+-#data
+-<!doctype html><script></scrip
+-#errors
+-Line: 1 Col: 30 Unexpected end of file. Expected end tag (script).
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|     <script>
+-|       "</scrip"
+-|   <body>
+-
+-#data
+-<!doctype html><script></script
+-#errors
+-Line: 1 Col: 31 Unexpected end of file. Expected end tag (script).
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|     <script>
+-|       "</script"
+-|   <body>
+-
+-#data
+-<!doctype html><script></script 
+-#errors
+-Line: 1 Col: 32 Unexpected end of file. Expected end tag (script).
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|     <script>
+-|   <body>
+-
+-#data
+-<!doctype html><script><!
+-#errors
+-Line: 1 Col: 25 Unexpected end of file. Expected end tag (script).
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!"
+-|   <body>
+-
+-#data
+-<!doctype html><script><!a
+-#errors
+-Line: 1 Col: 26 Unexpected end of file. Expected end tag (script).
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!a"
+-|   <body>
+-
+-#data
+-<!doctype html><script><!-
+-#errors
+-Line: 1 Col: 26 Unexpected end of file. Expected end tag (script).
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!-"
+-|   <body>
+-
+-#data
+-<!doctype html><script><!-a
+-#errors
+-Line: 1 Col: 27 Unexpected end of file. Expected end tag (script).
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!-a"
+-|   <body>
+-
+-#data
+-<!doctype html><script><!--
+-#errors
+-Line: 1 Col: 27 Unexpected end of file. Expected end tag (script).
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!--"
+-|   <body>
+-
+-#data
+-<!doctype html><script><!--a
+-#errors
+-Line: 1 Col: 28 Unexpected end of file. Expected end tag (script).
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!--a"
+-|   <body>
+-
+-#data
+-<!doctype html><script><!--<
+-#errors
+-Line: 1 Col: 28 Unexpected end of file. Expected end tag (script).
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!--<"
+-|   <body>
+-
+-#data
+-<!doctype html><script><!--<a
+-#errors
+-Line: 1 Col: 29 Unexpected end of file. Expected end tag (script).
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!--<a"
+-|   <body>
+-
+-#data
+-<!doctype html><script><!--</
+-#errors
+-Line: 1 Col: 27 Unexpected end of file. Expected end tag (script).
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!--</"
+-|   <body>
+-
+-#data
+-<!doctype html><script><!--</script
+-#errors
+-Line: 1 Col: 35 Unexpected end of file. Expected end tag (script).
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!--</script"
+-|   <body>
+-
+-#data
+-<!doctype html><script><!--</script 
+-#errors
+-Line: 1 Col: 36 Unexpected end of file. Expected end tag (script).
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!--"
+-|   <body>
+-
+-#data
+-<!doctype html><script><!--<s
+-#errors
+-Line: 1 Col: 29 Unexpected end of file. Expected end tag (script).
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!--<s"
+-|   <body>
+-
+-#data
+-<!doctype html><script><!--<script
+-#errors
+-Line: 1 Col: 34 Unexpected end of file. Expected end tag (script).
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!--<script"
+-|   <body>
+-
+-#data
+-<!doctype html><script><!--<script 
+-#errors
+-Line: 1 Col: 35 Unexpected end of file. Expected end tag (script).
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!--<script "
+-|   <body>
+-
+-#data
+-<!doctype html><script><!--<script <
+-#errors
+-Line: 1 Col: 36 Unexpected end of file. Expected end tag (script).
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!--<script <"
+-|   <body>
+-
+-#data
+-<!doctype html><script><!--<script <a
+-#errors
+-Line: 1 Col: 37 Unexpected end of file. Expected end tag (script).
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!--<script <a"
+-|   <body>
+-
+-#data
+-<!doctype html><script><!--<script </
+-#errors
+-Line: 1 Col: 37 Unexpected end of file. Expected end tag (script).
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!--<script </"
+-|   <body>
+-
+-#data
+-<!doctype html><script><!--<script </s
+-#errors
+-Line: 1 Col: 38 Unexpected end of file. Expected end tag (script).
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!--<script </s"
+-|   <body>
+-
+-#data
+-<!doctype html><script><!--<script </script
+-#errors
+-Line: 1 Col: 43 Unexpected end of file. Expected end tag (script).
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!--<script </script"
+-|   <body>
+-
+-#data
+-<!doctype html><script><!--<script </scripta
+-#errors
+-Line: 1 Col: 44 Unexpected end of file. Expected end tag (script).
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!--<script </scripta"
+-|   <body>
+-
+-#data
+-<!doctype html><script><!--<script </script 
+-#errors
+-Line: 1 Col: 44 Unexpected end of file. Expected end tag (script).
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!--<script </script "
+-|   <body>
+-
+-#data
+-<!doctype html><script><!--<script </script>
+-#errors
+-Line: 1 Col: 44 Unexpected end of file. Expected end tag (script).
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!--<script </script>"
+-|   <body>
+-
+-#data
+-<!doctype html><script><!--<script </script/
+-#errors
+-Line: 1 Col: 44 Unexpected end of file. Expected end tag (script).
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!--<script </script/"
+-|   <body>
+-
+-#data
+-<!doctype html><script><!--<script </script <
+-#errors
+-Line: 1 Col: 45 Unexpected end of file. Expected end tag (script).
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!--<script </script <"
+-|   <body>
+-
+-#data
+-<!doctype html><script><!--<script </script <a
+-#errors
+-Line: 1 Col: 46 Unexpected end of file. Expected end tag (script).
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!--<script </script <a"
+-|   <body>
+-
+-#data
+-<!doctype html><script><!--<script </script </
+-#errors
+-Line: 1 Col: 46 Unexpected end of file. Expected end tag (script).
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!--<script </script </"
+-|   <body>
+-
+-#data
+-<!doctype html><script><!--<script </script </script
+-#errors
+-Line: 1 Col: 52 Unexpected end of file. Expected end tag (script).
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!--<script </script </script"
+-|   <body>
+-
+-#data
+-<!doctype html><script><!--<script </script </script 
+-#errors
+-Line: 1 Col: 53 Unexpected end of file. Expected end tag (script).
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!--<script </script "
+-|   <body>
+-
+-#data
+-<!doctype html><script><!--<script </script </script/
+-#errors
+-Line: 1 Col: 53 Unexpected end of file. Expected end tag (script).
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!--<script </script "
+-|   <body>
+-
+-#data
+-<!doctype html><script><!--<script </script </script>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!--<script </script "
+-|   <body>
+-
+-#data
+-<!doctype html><script><!--<script -
+-#errors
+-Line: 1 Col: 36 Unexpected end of file. Expected end tag (script).
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!--<script -"
+-|   <body>
+-
+-#data
+-<!doctype html><script><!--<script -a
+-#errors
+-Line: 1 Col: 37 Unexpected end of file. Expected end tag (script).
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!--<script -a"
+-|   <body>
+-
+-#data
+-<!doctype html><script><!--<script -<
+-#errors
+-Line: 1 Col: 37 Unexpected end of file. Expected end tag (script).
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!--<script -<"
+-|   <body>
+-
+-#data
+-<!doctype html><script><!--<script --
+-#errors
+-Line: 1 Col: 37 Unexpected end of file. Expected end tag (script).
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!--<script --"
+-|   <body>
+-
+-#data
+-<!doctype html><script><!--<script --a
+-#errors
+-Line: 1 Col: 38 Unexpected end of file. Expected end tag (script).
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!--<script --a"
+-|   <body>
+-
+-#data
+-<!doctype html><script><!--<script --<
+-#errors
+-Line: 1 Col: 38 Unexpected end of file. Expected end tag (script).
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!--<script --<"
+-|   <body>
+-
+-#data
+-<!doctype html><script><!--<script -->
+-#errors
+-Line: 1 Col: 38 Unexpected end of file. Expected end tag (script).
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!--<script -->"
+-|   <body>
+-
+-#data
+-<!doctype html><script><!--<script --><
+-#errors
+-Line: 1 Col: 39 Unexpected end of file. Expected end tag (script).
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!--<script --><"
+-|   <body>
+-
+-#data
+-<!doctype html><script><!--<script --></
+-#errors
+-Line: 1 Col: 40 Unexpected end of file. Expected end tag (script).
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!--<script --></"
+-|   <body>
+-
+-#data
+-<!doctype html><script><!--<script --></script
+-#errors
+-Line: 1 Col: 46 Unexpected end of file. Expected end tag (script).
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!--<script --></script"
+-|   <body>
+-
+-#data
+-<!doctype html><script><!--<script --></script 
+-#errors
+-Line: 1 Col: 47 Unexpected end of file. Expected end tag (script).
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!--<script -->"
+-|   <body>
+-
+-#data
+-<!doctype html><script><!--<script --></script/
+-#errors
+-Line: 1 Col: 47 Unexpected end of file. Expected end tag (script).
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!--<script -->"
+-|   <body>
+-
+-#data
+-<!doctype html><script><!--<script --></script>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!--<script -->"
+-|   <body>
+-
+-#data
+-<!doctype html><script><!--<script><\/script>--></script>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!--<script><\/script>-->"
+-|   <body>
+-
+-#data
+-<!doctype html><script><!--<script></scr'+'ipt>--></script>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!--<script></scr'+'ipt>-->"
+-|   <body>
+-
+-#data
+-<!doctype html><script><!--<script></script><script></script></script>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!--<script></script><script></script>"
+-|   <body>
+-
+-#data
+-<!doctype html><script><!--<script></script><script></script>--><!--</script>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!--<script></script><script></script>--><!--"
+-|   <body>
+-
+-#data
+-<!doctype html><script><!--<script></script><script></script>-- ></script>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!--<script></script><script></script>-- >"
+-|   <body>
+-
+-#data
+-<!doctype html><script><!--<script></script><script></script>- -></script>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!--<script></script><script></script>- ->"
+-|   <body>
+-
+-#data
+-<!doctype html><script><!--<script></script><script></script>- - ></script>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!--<script></script><script></script>- - >"
+-|   <body>
+-
+-#data
+-<!doctype html><script><!--<script></script><script></script>-></script>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!--<script></script><script></script>->"
+-|   <body>
+-
+-#data
+-<!doctype html><script><!--<script>--!></script>X
+-#errors
+-Line: 1 Col: 49 Unexpected end of file. Expected end tag (script).
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!--<script>--!></script>X"
+-|   <body>
+-
+-#data
+-<!doctype html><script><!--<scr'+'ipt></script>--></script>
+-#errors
+-Line: 1 Col: 59 Unexpected end tag (script).
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!--<scr'+'ipt>"
+-|   <body>
+-|     "-->"
+-
+-#data
+-<!doctype html><script><!--<script></scr'+'ipt></script>X
+-#errors
+-Line: 1 Col: 57 Unexpected end of file. Expected end tag (script).
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!--<script></scr'+'ipt></script>X"
+-|   <body>
+-
+-#data
+-<!doctype html><style><!--<style></style>--></style>
+-#errors
+-Line: 1 Col: 52 Unexpected end tag (style).
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|     <style>
+-|       "<!--<style>"
+-|   <body>
+-|     "-->"
+-
+-#data
+-<!doctype html><style><!--</style>X
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|     <style>
+-|       "<!--"
+-|   <body>
+-|     "X"
+-
+-#data
+-<!doctype html><style><!--...</style>...--></style>
+-#errors
+-Line: 1 Col: 51 Unexpected end tag (style).
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|     <style>
+-|       "<!--..."
+-|   <body>
+-|     "...-->"
+-
+-#data
+-<!doctype html><style><!--<br><html xmlns:v="urn:schemas-microsoft-com:vml"><!--[if !mso]><style></style>X
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|     <style>
+-|       "<!--<br><html xmlns:v="urn:schemas-microsoft-com:vml"><!--[if !mso]><style>"
+-|   <body>
+-|     "X"
+-
+-#data
+-<!doctype html><style><!--...<style><!--...--!></style>--></style>
+-#errors
+-Line: 1 Col: 66 Unexpected end tag (style).
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|     <style>
+-|       "<!--...<style><!--...--!>"
+-|   <body>
+-|     "-->"
+-
+-#data
+-<!doctype html><style><!--...</style><!-- --><style>@import ...</style>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|     <style>
+-|       "<!--..."
+-|     <!--   -->
+-|     <style>
+-|       "@import ..."
+-|   <body>
+-
+-#data
+-<!doctype html><style>...<style><!--...</style><!-- --></style>
+-#errors
+-Line: 1 Col: 63 Unexpected end tag (style).
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|     <style>
+-|       "...<style><!--..."
+-|     <!--   -->
+-|   <body>
+-
+-#data
+-<!doctype html><style>...<!--[if IE]><style>...</style>X
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|     <style>
+-|       "...<!--[if IE]><style>..."
+-|   <body>
+-|     "X"
+-
+-#data
+-<!doctype html><title><!--<title></title>--></title>
+-#errors
+-Line: 1 Col: 52 Unexpected end tag (title).
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|     <title>
+-|       "<!--<title>"
+-|   <body>
+-|     "-->"
+-
+-#data
+-<!doctype html><title>&lt;/title></title>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|     <title>
+-|       "</title>"
+-|   <body>
+-
+-#data
+-<!doctype html><title>foo/title><link></head><body>X
+-#errors
+-Line: 1 Col: 52 Unexpected end of file. Expected end tag (title).
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|     <title>
+-|       "foo/title><link></head><body>X"
+-|   <body>
+-
+-#data
+-<!doctype html><noscript><!--<noscript></noscript>--></noscript>
+-#errors
+-Line: 1 Col: 64 Unexpected end tag (noscript).
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|     <noscript>
+-|       "<!--<noscript>"
+-|   <body>
+-|     "-->"
+-
+-#data
+-<!doctype html><noscript><!--</noscript>X<noscript>--></noscript>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|     <noscript>
+-|       "<!--"
+-|   <body>
+-|     "X"
+-|     <noscript>
+-|       "-->"
+-
+-#data
+-<!doctype html><noscript><iframe></noscript>X
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|     <noscript>
+-|       "<iframe>"
+-|   <body>
+-|     "X"
+-
+-#data
+-<!doctype html><noframes><!--<noframes></noframes>--></noframes>
+-#errors
+-Line: 1 Col: 64 Unexpected end tag (noframes).
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|     <noframes>
+-|       "<!--<noframes>"
+-|   <body>
+-|     "-->"
+-
+-#data
+-<!doctype html><noframes><body><script><!--...</script></body></noframes></html>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|     <noframes>
+-|       "<body><script><!--...</script></body>"
+-|   <body>
+-
+-#data
+-<!doctype html><textarea><!--<textarea></textarea>--></textarea>
+-#errors
+-Line: 1 Col: 64 Unexpected end tag (textarea).
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <textarea>
+-|       "<!--<textarea>"
+-|     "-->"
+-
+-#data
+-<!doctype html><textarea>&lt;/textarea></textarea>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <textarea>
+-|       "</textarea>"
+-
+-#data
+-<!doctype html><textarea>&lt;</textarea>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <textarea>
+-|       "<"
+-
+-#data
+-<!doctype html><textarea>a&lt;b</textarea>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <textarea>
+-|       "a<b"
+-
+-#data
+-<!doctype html><iframe><!--<iframe></iframe>--></iframe>
+-#errors
+-Line: 1 Col: 56 Unexpected end tag (iframe).
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <iframe>
+-|       "<!--<iframe>"
+-|     "-->"
+-
+-#data
+-<!doctype html><iframe>...<!--X->...<!--/X->...</iframe>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <iframe>
+-|       "...<!--X->...<!--/X->..."
+-
+-#data
+-<!doctype html><xmp><!--<xmp></xmp>--></xmp>
+-#errors
+-Line: 1 Col: 44 Unexpected end tag (xmp).
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <xmp>
+-|       "<!--<xmp>"
+-|     "-->"
+-
+-#data
+-<!doctype html><noembed><!--<noembed></noembed>--></noembed>
+-#errors
+-Line: 1 Col: 60 Unexpected end tag (noembed).
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <noembed>
+-|       "<!--<noembed>"
+-|     "-->"
+-
+-#data
+-<script>
+-#errors
+-Line: 1 Col: 8 Unexpected start tag (script). Expected DOCTYPE.
+-Line: 1 Col: 8 Unexpected end of file. Expected end tag (script).
+-#document
+-| <html>
+-|   <head>
+-|     <script>
+-|   <body>
+-
+-#data
+-<script>a
+-#errors
+-Line: 1 Col: 8 Unexpected start tag (script). Expected DOCTYPE.
+-Line: 1 Col: 9 Unexpected end of file. Expected end tag (script).
+-#document
+-| <html>
+-|   <head>
+-|     <script>
+-|       "a"
+-|   <body>
+-
+-#data
+-<script><
+-#errors
+-Line: 1 Col: 8 Unexpected start tag (script). Expected DOCTYPE.
+-Line: 1 Col: 9 Unexpected end of file. Expected end tag (script).
+-#document
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<"
+-|   <body>
+-
+-#data
+-<script></
+-#errors
+-Line: 1 Col: 8 Unexpected start tag (script). Expected DOCTYPE.
+-Line: 1 Col: 10 Unexpected end of file. Expected end tag (script).
+-#document
+-| <html>
+-|   <head>
+-|     <script>
+-|       "</"
+-|   <body>
+-
+-#data
+-<script></S
+-#errors
+-Line: 1 Col: 8 Unexpected start tag (script). Expected DOCTYPE.
+-Line: 1 Col: 11 Unexpected end of file. Expected end tag (script).
+-#document
+-| <html>
+-|   <head>
+-|     <script>
+-|       "</S"
+-|   <body>
+-
+-#data
+-<script></SC
+-#errors
+-Line: 1 Col: 8 Unexpected start tag (script). Expected DOCTYPE.
+-Line: 1 Col: 12 Unexpected end of file. Expected end tag (script).
+-#document
+-| <html>
+-|   <head>
+-|     <script>
+-|       "</SC"
+-|   <body>
+-
+-#data
+-<script></SCR
+-#errors
+-Line: 1 Col: 8 Unexpected start tag (script). Expected DOCTYPE.
+-Line: 1 Col: 13 Unexpected end of file. Expected end tag (script).
+-#document
+-| <html>
+-|   <head>
+-|     <script>
+-|       "</SCR"
+-|   <body>
+-
+-#data
+-<script></SCRI
+-#errors
+-Line: 1 Col: 8 Unexpected start tag (script). Expected DOCTYPE.
+-Line: 1 Col: 14 Unexpected end of file. Expected end tag (script).
+-#document
+-| <html>
+-|   <head>
+-|     <script>
+-|       "</SCRI"
+-|   <body>
+-
+-#data
+-<script></SCRIP
+-#errors
+-Line: 1 Col: 8 Unexpected start tag (script). Expected DOCTYPE.
+-Line: 1 Col: 15 Unexpected end of file. Expected end tag (script).
+-#document
+-| <html>
+-|   <head>
+-|     <script>
+-|       "</SCRIP"
+-|   <body>
+-
+-#data
+-<script></SCRIPT
+-#errors
+-Line: 1 Col: 8 Unexpected start tag (script). Expected DOCTYPE.
+-Line: 1 Col: 16 Unexpected end of file. Expected end tag (script).
+-#document
+-| <html>
+-|   <head>
+-|     <script>
+-|       "</SCRIPT"
+-|   <body>
+-
+-#data
+-<script></SCRIPT 
+-#errors
+-Line: 1 Col: 8 Unexpected start tag (script). Expected DOCTYPE.
+-Line: 1 Col: 17 Unexpected end of file. Expected end tag (script).
+-#document
+-| <html>
+-|   <head>
+-|     <script>
+-|   <body>
+-
+-#data
+-<script></s
+-#errors
+-Line: 1 Col: 8 Unexpected start tag (script). Expected DOCTYPE.
+-Line: 1 Col: 11 Unexpected end of file. Expected end tag (script).
+-#document
+-| <html>
+-|   <head>
+-|     <script>
+-|       "</s"
+-|   <body>
+-
+-#data
+-<script></sc
+-#errors
+-Line: 1 Col: 8 Unexpected start tag (script). Expected DOCTYPE.
+-Line: 1 Col: 12 Unexpected end of file. Expected end tag (script).
+-#document
+-| <html>
+-|   <head>
+-|     <script>
+-|       "</sc"
+-|   <body>
+-
+-#data
+-<script></scr
+-#errors
+-Line: 1 Col: 8 Unexpected start tag (script). Expected DOCTYPE.
+-Line: 1 Col: 13 Unexpected end of file. Expected end tag (script).
+-#document
+-| <html>
+-|   <head>
+-|     <script>
+-|       "</scr"
+-|   <body>
+-
+-#data
+-<script></scri
+-#errors
+-Line: 1 Col: 8 Unexpected start tag (script). Expected DOCTYPE.
+-Line: 1 Col: 14 Unexpected end of file. Expected end tag (script).
+-#document
+-| <html>
+-|   <head>
+-|     <script>
+-|       "</scri"
+-|   <body>
+-
+-#data
+-<script></scrip
+-#errors
+-Line: 1 Col: 8 Unexpected start tag (script). Expected DOCTYPE.
+-Line: 1 Col: 15 Unexpected end of file. Expected end tag (script).
+-#document
+-| <html>
+-|   <head>
+-|     <script>
+-|       "</scrip"
+-|   <body>
+-
+-#data
+-<script></script
+-#errors
+-Line: 1 Col: 8 Unexpected start tag (script). Expected DOCTYPE.
+-Line: 1 Col: 16 Unexpected end of file. Expected end tag (script).
+-#document
+-| <html>
+-|   <head>
+-|     <script>
+-|       "</script"
+-|   <body>
+-
+-#data
+-<script></script 
+-#errors
+-Line: 1 Col: 8 Unexpected start tag (script). Expected DOCTYPE.
+-Line: 1 Col: 17 Unexpected end of file. Expected end tag (script).
+-#document
+-| <html>
+-|   <head>
+-|     <script>
+-|   <body>
+-
+-#data
+-<script><!
+-#errors
+-Line: 1 Col: 8 Unexpected start tag (script). Expected DOCTYPE.
+-Line: 1 Col: 10 Unexpected end of file. Expected end tag (script).
+-#document
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!"
+-|   <body>
+-
+-#data
+-<script><!a
+-#errors
+-Line: 1 Col: 8 Unexpected start tag (script). Expected DOCTYPE.
+-Line: 1 Col: 11 Unexpected end of file. Expected end tag (script).
+-#document
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!a"
+-|   <body>
+-
+-#data
+-<script><!-
+-#errors
+-Line: 1 Col: 8 Unexpected start tag (script). Expected DOCTYPE.
+-Line: 1 Col: 11 Unexpected end of file. Expected end tag (script).
+-#document
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!-"
+-|   <body>
+-
+-#data
+-<script><!-a
+-#errors
+-Line: 1 Col: 8 Unexpected start tag (script). Expected DOCTYPE.
+-Line: 1 Col: 12 Unexpected end of file. Expected end tag (script).
+-#document
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!-a"
+-|   <body>
+-
+-#data
+-<script><!--
+-#errors
+-Line: 1 Col: 8 Unexpected start tag (script). Expected DOCTYPE.
+-Line: 1 Col: 12 Unexpected end of file. Expected end tag (script).
+-#document
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!--"
+-|   <body>
+-
+-#data
+-<script><!--a
+-#errors
+-Line: 1 Col: 8 Unexpected start tag (script). Expected DOCTYPE.
+-Line: 1 Col: 13 Unexpected end of file. Expected end tag (script).
+-#document
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!--a"
+-|   <body>
+-
+-#data
+-<script><!--<
+-#errors
+-Line: 1 Col: 8 Unexpected start tag (script). Expected DOCTYPE.
+-Line: 1 Col: 13 Unexpected end of file. Expected end tag (script).
+-#document
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!--<"
+-|   <body>
+-
+-#data
+-<script><!--<a
+-#errors
+-Line: 1 Col: 8 Unexpected start tag (script). Expected DOCTYPE.
+-Line: 1 Col: 14 Unexpected end of file. Expected end tag (script).
+-#document
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!--<a"
+-|   <body>
+-
+-#data
+-<script><!--</
+-#errors
+-Line: 1 Col: 8 Unexpected start tag (script). Expected DOCTYPE.
+-Line: 1 Col: 14 Unexpected end of file. Expected end tag (script).
+-#document
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!--</"
+-|   <body>
+-
+-#data
+-<script><!--</script
+-#errors
+-Line: 1 Col: 8 Unexpected start tag (script). Expected DOCTYPE.
+-Line: 1 Col: 20 Unexpected end of file. Expected end tag (script).
+-#document
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!--</script"
+-|   <body>
+-
+-#data
+-<script><!--</script 
+-#errors
+-Line: 1 Col: 8 Unexpected start tag (script). Expected DOCTYPE.
+-Line: 1 Col: 21 Unexpected end of file. Expected end tag (script).
+-#document
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!--"
+-|   <body>
+-
+-#data
+-<script><!--<s
+-#errors
+-Line: 1 Col: 8 Unexpected start tag (script). Expected DOCTYPE.
+-Line: 1 Col: 14 Unexpected end of file. Expected end tag (script).
+-#document
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!--<s"
+-|   <body>
+-
+-#data
+-<script><!--<script
+-#errors
+-Line: 1 Col: 8 Unexpected start tag (script). Expected DOCTYPE.
+-Line: 1 Col: 19 Unexpected end of file. Expected end tag (script).
+-#document
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!--<script"
+-|   <body>
+-
+-#data
+-<script><!--<script 
+-#errors
+-Line: 1 Col: 8 Unexpected start tag (script). Expected DOCTYPE.
+-Line: 1 Col: 20 Unexpected end of file. Expected end tag (script).
+-#document
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!--<script "
+-|   <body>
+-
+-#data
+-<script><!--<script <
+-#errors
+-Line: 1 Col: 8 Unexpected start tag (script). Expected DOCTYPE.
+-Line: 1 Col: 21 Unexpected end of file. Expected end tag (script).
+-#document
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!--<script <"
+-|   <body>
+-
+-#data
+-<script><!--<script <a
+-#errors
+-Line: 1 Col: 8 Unexpected start tag (script). Expected DOCTYPE.
+-Line: 1 Col: 22 Unexpected end of file. Expected end tag (script).
+-#document
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!--<script <a"
+-|   <body>
+-
+-#data
+-<script><!--<script </
+-#errors
+-Line: 1 Col: 8 Unexpected start tag (script). Expected DOCTYPE.
+-Line: 1 Col: 22 Unexpected end of file. Expected end tag (script).
+-#document
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!--<script </"
+-|   <body>
+-
+-#data
+-<script><!--<script </s
+-#errors
+-Line: 1 Col: 8 Unexpected start tag (script). Expected DOCTYPE.
+-Line: 1 Col: 23 Unexpected end of file. Expected end tag (script).
+-#document
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!--<script </s"
+-|   <body>
+-
+-#data
+-<script><!--<script </script
+-#errors
+-Line: 1 Col: 8 Unexpected start tag (script). Expected DOCTYPE.
+-Line: 1 Col: 28 Unexpected end of file. Expected end tag (script).
+-#document
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!--<script </script"
+-|   <body>
+-
+-#data
+-<script><!--<script </scripta
+-#errors
+-Line: 1 Col: 8 Unexpected start tag (script). Expected DOCTYPE.
+-Line: 1 Col: 29 Unexpected end of file. Expected end tag (script).
+-#document
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!--<script </scripta"
+-|   <body>
+-
+-#data
+-<script><!--<script </script 
+-#errors
+-Line: 1 Col: 8 Unexpected start tag (script). Expected DOCTYPE.
+-Line: 1 Col: 29 Unexpected end of file. Expected end tag (script).
+-#document
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!--<script </script "
+-|   <body>
+-
+-#data
+-<script><!--<script </script>
+-#errors
+-Line: 1 Col: 8 Unexpected start tag (script). Expected DOCTYPE.
+-Line: 1 Col: 29 Unexpected end of file. Expected end tag (script).
+-#document
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!--<script </script>"
+-|   <body>
+-
+-#data
+-<script><!--<script </script/
+-#errors
+-Line: 1 Col: 8 Unexpected start tag (script). Expected DOCTYPE.
+-Line: 1 Col: 29 Unexpected end of file. Expected end tag (script).
+-#document
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!--<script </script/"
+-|   <body>
+-
+-#data
+-<script><!--<script </script <
+-#errors
+-Line: 1 Col: 8 Unexpected start tag (script). Expected DOCTYPE.
+-Line: 1 Col: 30 Unexpected end of file. Expected end tag (script).
+-#document
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!--<script </script <"
+-|   <body>
+-
+-#data
+-<script><!--<script </script <a
+-#errors
+-Line: 1 Col: 8 Unexpected start tag (script). Expected DOCTYPE.
+-Line: 1 Col: 31 Unexpected end of file. Expected end tag (script).
+-#document
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!--<script </script <a"
+-|   <body>
+-
+-#data
+-<script><!--<script </script </
+-#errors
+-Line: 1 Col: 8 Unexpected start tag (script). Expected DOCTYPE.
+-Line: 1 Col: 31 Unexpected end of file. Expected end tag (script).
+-#document
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!--<script </script </"
+-|   <body>
+-
+-#data
+-<script><!--<script </script </script
+-#errors
+-Line: 1 Col: 8 Unexpected start tag (script). Expected DOCTYPE.
+-Line: 1 Col: 38 Unexpected end of file. Expected end tag (script).
+-#document
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!--<script </script </script"
+-|   <body>
+-
+-#data
+-<script><!--<script </script </script 
+-#errors
+-Line: 1 Col: 8 Unexpected start tag (script). Expected DOCTYPE.
+-Line: 1 Col: 38 Unexpected end of file. Expected end tag (script).
+-#document
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!--<script </script "
+-|   <body>
+-
+-#data
+-<script><!--<script </script </script/
+-#errors
+-Line: 1 Col: 8 Unexpected start tag (script). Expected DOCTYPE.
+-Line: 1 Col: 38 Unexpected end of file. Expected end tag (script).
+-#document
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!--<script </script "
+-|   <body>
+-
+-#data
+-<script><!--<script </script </script>
+-#errors
+-Line: 1 Col: 8 Unexpected start tag (script). Expected DOCTYPE.
+-#document
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!--<script </script "
+-|   <body>
+-
+-#data
+-<script><!--<script -
+-#errors
+-Line: 1 Col: 8 Unexpected start tag (script). Expected DOCTYPE.
+-Line: 1 Col: 21 Unexpected end of file. Expected end tag (script).
+-#document
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!--<script -"
+-|   <body>
+-
+-#data
+-<script><!--<script -a
+-#errors
+-Line: 1 Col: 8 Unexpected start tag (script). Expected DOCTYPE.
+-Line: 1 Col: 22 Unexpected end of file. Expected end tag (script).
+-#document
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!--<script -a"
+-|   <body>
+-
+-#data
+-<script><!--<script --
+-#errors
+-Line: 1 Col: 8 Unexpected start tag (script). Expected DOCTYPE.
+-Line: 1 Col: 22 Unexpected end of file. Expected end tag (script).
+-#document
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!--<script --"
+-|   <body>
+-
+-#data
+-<script><!--<script --a
+-#errors
+-Line: 1 Col: 8 Unexpected start tag (script). Expected DOCTYPE.
+-Line: 1 Col: 23 Unexpected end of file. Expected end tag (script).
+-#document
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!--<script --a"
+-|   <body>
+-
+-#data
+-<script><!--<script -->
+-#errors
+-Line: 1 Col: 8 Unexpected start tag (script). Expected DOCTYPE.
+-Line: 1 Col: 23 Unexpected end of file. Expected end tag (script).
+-#document
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!--<script -->"
+-|   <body>
+-
+-#data
+-<script><!--<script --><
+-#errors
+-Line: 1 Col: 8 Unexpected start tag (script). Expected DOCTYPE.
+-Line: 1 Col: 24 Unexpected end of file. Expected end tag (script).
+-#document
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!--<script --><"
+-|   <body>
+-
+-#data
+-<script><!--<script --></
+-#errors
+-Line: 1 Col: 8 Unexpected start tag (script). Expected DOCTYPE.
+-Line: 1 Col: 25 Unexpected end of file. Expected end tag (script).
+-#document
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!--<script --></"
+-|   <body>
+-
+-#data
+-<script><!--<script --></script
+-#errors
+-Line: 1 Col: 8 Unexpected start tag (script). Expected DOCTYPE.
+-Line: 1 Col: 31 Unexpected end of file. Expected end tag (script).
+-#document
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!--<script --></script"
+-|   <body>
+-
+-#data
+-<script><!--<script --></script 
+-#errors
+-Line: 1 Col: 8 Unexpected start tag (script). Expected DOCTYPE.
+-Line: 1 Col: 32 Unexpected end of file. Expected end tag (script).
+-#document
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!--<script -->"
+-|   <body>
+-
+-#data
+-<script><!--<script --></script/
+-#errors
+-Line: 1 Col: 8 Unexpected start tag (script). Expected DOCTYPE.
+-Line: 1 Col: 32 Unexpected end of file. Expected end tag (script).
+-#document
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!--<script -->"
+-|   <body>
+-
+-#data
+-<script><!--<script --></script>
+-#errors
+-Line: 1 Col: 8 Unexpected start tag (script). Expected DOCTYPE.
+-#document
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!--<script -->"
+-|   <body>
+-
+-#data
+-<script><!--<script><\/script>--></script>
+-#errors
+-Line: 1 Col: 8 Unexpected start tag (script). Expected DOCTYPE.
+-#document
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!--<script><\/script>-->"
+-|   <body>
+-
+-#data
+-<script><!--<script></scr'+'ipt>--></script>
+-#errors
+-Line: 1 Col: 8 Unexpected start tag (script). Expected DOCTYPE.
+-#document
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!--<script></scr'+'ipt>-->"
+-|   <body>
+-
+-#data
+-<script><!--<script></script><script></script></script>
+-#errors
+-Line: 1 Col: 8 Unexpected start tag (script). Expected DOCTYPE.
+-#document
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!--<script></script><script></script>"
+-|   <body>
+-
+-#data
+-<script><!--<script></script><script></script>--><!--</script>
+-#errors
+-Line: 1 Col: 8 Unexpected start tag (script). Expected DOCTYPE.
+-#document
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!--<script></script><script></script>--><!--"
+-|   <body>
+-
+-#data
+-<script><!--<script></script><script></script>-- ></script>
+-#errors
+-Line: 1 Col: 8 Unexpected start tag (script). Expected DOCTYPE.
+-#document
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!--<script></script><script></script>-- >"
+-|   <body>
+-
+-#data
+-<script><!--<script></script><script></script>- -></script>
+-#errors
+-Line: 1 Col: 8 Unexpected start tag (script). Expected DOCTYPE.
+-#document
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!--<script></script><script></script>- ->"
+-|   <body>
+-
+-#data
+-<script><!--<script></script><script></script>- - ></script>
+-#errors
+-Line: 1 Col: 8 Unexpected start tag (script). Expected DOCTYPE.
+-#document
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!--<script></script><script></script>- - >"
+-|   <body>
+-
+-#data
+-<script><!--<script></script><script></script>-></script>
+-#errors
+-Line: 1 Col: 8 Unexpected start tag (script). Expected DOCTYPE.
+-#document
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!--<script></script><script></script>->"
+-|   <body>
+-
+-#data
+-<script><!--<script>--!></script>X
+-#errors
+-Line: 1 Col: 8 Unexpected start tag (script). Expected DOCTYPE.
+-Line: 1 Col: 34 Unexpected end of file. Expected end tag (script).
+-#document
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!--<script>--!></script>X"
+-|   <body>
+-
+-#data
+-<script><!--<scr'+'ipt></script>--></script>
+-#errors
+-Line: 1 Col: 8 Unexpected start tag (script). Expected DOCTYPE.
+-Line: 1 Col: 44 Unexpected end tag (script).
+-#document
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!--<scr'+'ipt>"
+-|   <body>
+-|     "-->"
+-
+-#data
+-<script><!--<script></scr'+'ipt></script>X
+-#errors
+-Line: 1 Col: 8 Unexpected start tag (script). Expected DOCTYPE.
+-Line: 1 Col: 42 Unexpected end of file. Expected end tag (script).
+-#document
+-| <html>
+-|   <head>
+-|     <script>
+-|       "<!--<script></scr'+'ipt></script>X"
+-|   <body>
+-
+-#data
+-<style><!--<style></style>--></style>
+-#errors
+-Line: 1 Col: 7 Unexpected start tag (style). Expected DOCTYPE.
+-Line: 1 Col: 37 Unexpected end tag (style).
+-#document
+-| <html>
+-|   <head>
+-|     <style>
+-|       "<!--<style>"
+-|   <body>
+-|     "-->"
+-
+-#data
+-<style><!--</style>X
+-#errors
+-Line: 1 Col: 7 Unexpected start tag (style). Expected DOCTYPE.
+-#document
+-| <html>
+-|   <head>
+-|     <style>
+-|       "<!--"
+-|   <body>
+-|     "X"
+-
+-#data
+-<style><!--...</style>...--></style>
+-#errors
+-Line: 1 Col: 7 Unexpected start tag (style). Expected DOCTYPE.
+-Line: 1 Col: 36 Unexpected end tag (style).
+-#document
+-| <html>
+-|   <head>
+-|     <style>
+-|       "<!--..."
+-|   <body>
+-|     "...-->"
+-
+-#data
+-<style><!--<br><html xmlns:v="urn:schemas-microsoft-com:vml"><!--[if !mso]><style></style>X
+-#errors
+-Line: 1 Col: 7 Unexpected start tag (style). Expected DOCTYPE.
+-#document
+-| <html>
+-|   <head>
+-|     <style>
+-|       "<!--<br><html xmlns:v="urn:schemas-microsoft-com:vml"><!--[if !mso]><style>"
+-|   <body>
+-|     "X"
+-
+-#data
+-<style><!--...<style><!--...--!></style>--></style>
+-#errors
+-Line: 1 Col: 7 Unexpected start tag (style). Expected DOCTYPE.
+-Line: 1 Col: 51 Unexpected end tag (style).
+-#document
+-| <html>
+-|   <head>
+-|     <style>
+-|       "<!--...<style><!--...--!>"
+-|   <body>
+-|     "-->"
+-
+-#data
+-<style><!--...</style><!-- --><style>@import ...</style>
+-#errors
+-Line: 1 Col: 7 Unexpected start tag (style). Expected DOCTYPE.
+-#document
+-| <html>
+-|   <head>
+-|     <style>
+-|       "<!--..."
+-|     <!--   -->
+-|     <style>
+-|       "@import ..."
+-|   <body>
+-
+-#data
+-<style>...<style><!--...</style><!-- --></style>
+-#errors
+-Line: 1 Col: 7 Unexpected start tag (style). Expected DOCTYPE.
+-Line: 1 Col: 48 Unexpected end tag (style).
+-#document
+-| <html>
+-|   <head>
+-|     <style>
+-|       "...<style><!--..."
+-|     <!--   -->
+-|   <body>
+-
+-#data
+-<style>...<!--[if IE]><style>...</style>X
+-#errors
+-Line: 1 Col: 7 Unexpected start tag (style). Expected DOCTYPE.
+-#document
+-| <html>
+-|   <head>
+-|     <style>
+-|       "...<!--[if IE]><style>..."
+-|   <body>
+-|     "X"
+-
+-#data
+-<title><!--<title></title>--></title>
+-#errors
+-Line: 1 Col: 7 Unexpected start tag (title). Expected DOCTYPE.
+-Line: 1 Col: 37 Unexpected end tag (title).
+-#document
+-| <html>
+-|   <head>
+-|     <title>
+-|       "<!--<title>"
+-|   <body>
+-|     "-->"
+-
+-#data
+-<title>&lt;/title></title>
+-#errors
+-Line: 1 Col: 7 Unexpected start tag (title). Expected DOCTYPE.
+-#document
+-| <html>
+-|   <head>
+-|     <title>
+-|       "</title>"
+-|   <body>
+-
+-#data
+-<title>foo/title><link></head><body>X
+-#errors
+-Line: 1 Col: 7 Unexpected start tag (title). Expected DOCTYPE.
+-Line: 1 Col: 37 Unexpected end of file. Expected end tag (title).
+-#document
+-| <html>
+-|   <head>
+-|     <title>
+-|       "foo/title><link></head><body>X"
+-|   <body>
+-
+-#data
+-<noscript><!--<noscript></noscript>--></noscript>
+-#errors
+-Line: 1 Col: 10 Unexpected start tag (noscript). Expected DOCTYPE.
+-Line: 1 Col: 49 Unexpected end tag (noscript).
+-#document
+-| <html>
+-|   <head>
+-|     <noscript>
+-|       "<!--<noscript>"
+-|   <body>
+-|     "-->"
+-
+-#data
+-<noscript><!--</noscript>X<noscript>--></noscript>
+-#errors
+-Line: 1 Col: 10 Unexpected start tag (noscript). Expected DOCTYPE.
+-#document
+-| <html>
+-|   <head>
+-|     <noscript>
+-|       "<!--"
+-|   <body>
+-|     "X"
+-|     <noscript>
+-|       "-->"
+-
+-#data
+-<noscript><iframe></noscript>X
+-#errors
+-Line: 1 Col: 10 Unexpected start tag (noscript). Expected DOCTYPE.
+-#document
+-| <html>
+-|   <head>
+-|     <noscript>
+-|       "<iframe>"
+-|   <body>
+-|     "X"
+-
+-#data
+-<noframes><!--<noframes></noframes>--></noframes>
+-#errors
+-Line: 1 Col: 10 Unexpected start tag (noframes). Expected DOCTYPE.
+-Line: 1 Col: 49 Unexpected end tag (noframes).
+-#document
+-| <html>
+-|   <head>
+-|     <noframes>
+-|       "<!--<noframes>"
+-|   <body>
+-|     "-->"
+-
+-#data
+-<noframes><body><script><!--...</script></body></noframes></html>
+-#errors
+-Line: 1 Col: 10 Unexpected start tag (noframes). Expected DOCTYPE.
+-#document
+-| <html>
+-|   <head>
+-|     <noframes>
+-|       "<body><script><!--...</script></body>"
+-|   <body>
+-
+-#data
+-<textarea><!--<textarea></textarea>--></textarea>
+-#errors
+-Line: 1 Col: 10 Unexpected start tag (textarea). Expected DOCTYPE.
+-Line: 1 Col: 49 Unexpected end tag (textarea).
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <textarea>
+-|       "<!--<textarea>"
+-|     "-->"
+-
+-#data
+-<textarea>&lt;/textarea></textarea>
+-#errors
+-Line: 1 Col: 10 Unexpected start tag (textarea). Expected DOCTYPE.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <textarea>
+-|       "</textarea>"
+-
+-#data
+-<iframe><!--<iframe></iframe>--></iframe>
+-#errors
+-Line: 1 Col: 8 Unexpected start tag (iframe). Expected DOCTYPE.
+-Line: 1 Col: 41 Unexpected end tag (iframe).
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <iframe>
+-|       "<!--<iframe>"
+-|     "-->"
+-
+-#data
+-<iframe>...<!--X->...<!--/X->...</iframe>
+-#errors
+-Line: 1 Col: 8 Unexpected start tag (iframe). Expected DOCTYPE.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <iframe>
+-|       "...<!--X->...<!--/X->..."
+-
+-#data
+-<xmp><!--<xmp></xmp>--></xmp>
+-#errors
+-Line: 1 Col: 5 Unexpected start tag (xmp). Expected DOCTYPE.
+-Line: 1 Col: 29 Unexpected end tag (xmp).
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <xmp>
+-|       "<!--<xmp>"
+-|     "-->"
+-
+-#data
+-<noembed><!--<noembed></noembed>--></noembed>
+-#errors
+-Line: 1 Col: 9 Unexpected start tag (noembed). Expected DOCTYPE.
+-Line: 1 Col: 45 Unexpected end tag (noembed).
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <noembed>
+-|       "<!--<noembed>"
+-|     "-->"
+-
+-#data
+-<!doctype html><table>
+-
+-#errors
+-Line 2 Col 0 Unexpected end of file. Expected table content.
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <table>
+-|       "
+-"
+-
+-#data
+-<!doctype html><table><td><span><font></span><span>
+-#errors
+-Line 1 Col 26 Unexpected table cell start tag (td) in the table body phase.
+-Line 1 Col 45 Unexpected end tag (span).
+-Line 1 Col 51 Expected closing tag. Unexpected end of file.
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <table>
+-|       <tbody>
+-|         <tr>
+-|           <td>
+-|             <span>
+-|               <font>
+-|             <font>
+-|               <span>
+-
+-#data
+-<!doctype html><form><table></form><form></table></form>
+-#errors
+-35: Stray end tag “form”.
+-41: Start tag “form” seen in “table”.
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <form>
+-|       <table>
+-|         <form>
+diff --git a/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests17.dat b/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests17.dat
+deleted file mode 100644
+index 7b555f8..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests17.dat
++++ /dev/null
+@@ -1,153 +0,0 @@
+-#data
+-<!doctype html><table><tbody><select><tr>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <select>
+-|     <table>
+-|       <tbody>
+-|         <tr>
+-
+-#data
+-<!doctype html><table><tr><select><td>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <select>
+-|     <table>
+-|       <tbody>
+-|         <tr>
+-|           <td>
+-
+-#data
+-<!doctype html><table><tr><td><select><td>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <table>
+-|       <tbody>
+-|         <tr>
+-|           <td>
+-|             <select>
+-|           <td>
+-
+-#data
+-<!doctype html><table><tr><th><select><td>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <table>
+-|       <tbody>
+-|         <tr>
+-|           <th>
+-|             <select>
+-|           <td>
+-
+-#data
+-<!doctype html><table><caption><select><tr>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <table>
+-|       <caption>
+-|         <select>
+-|       <tbody>
+-|         <tr>
+-
+-#data
+-<!doctype html><select><tr>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <select>
+-
+-#data
+-<!doctype html><select><td>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <select>
+-
+-#data
+-<!doctype html><select><th>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <select>
+-
+-#data
+-<!doctype html><select><tbody>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <select>
+-
+-#data
+-<!doctype html><select><thead>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <select>
+-
+-#data
+-<!doctype html><select><tfoot>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <select>
+-
+-#data
+-<!doctype html><select><caption>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <select>
+-
+-#data
+-<!doctype html><table><tr></table>a
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <table>
+-|       <tbody>
+-|         <tr>
+-|     "a"
+diff --git a/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests18.dat b/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests18.dat
+deleted file mode 100644
+index 680e1f0..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests18.dat
++++ /dev/null
+@@ -1,269 +0,0 @@
+-#data
+-<!doctype html><plaintext></plaintext>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <plaintext>
+-|       "</plaintext>"
+-
+-#data
+-<!doctype html><table><plaintext></plaintext>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <plaintext>
+-|       "</plaintext>"
+-|     <table>
+-
+-#data
+-<!doctype html><table><tbody><plaintext></plaintext>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <plaintext>
+-|       "</plaintext>"
+-|     <table>
+-|       <tbody>
+-
+-#data
+-<!doctype html><table><tbody><tr><plaintext></plaintext>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <plaintext>
+-|       "</plaintext>"
+-|     <table>
+-|       <tbody>
+-|         <tr>
+-
+-#data
+-<!doctype html><table><tbody><tr><plaintext></plaintext>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <plaintext>
+-|       "</plaintext>"
+-|     <table>
+-|       <tbody>
+-|         <tr>
+-
+-#data
+-<!doctype html><table><td><plaintext></plaintext>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <table>
+-|       <tbody>
+-|         <tr>
+-|           <td>
+-|             <plaintext>
+-|               "</plaintext>"
+-
+-#data
+-<!doctype html><table><caption><plaintext></plaintext>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <table>
+-|       <caption>
+-|         <plaintext>
+-|           "</plaintext>"
+-
+-#data
+-<!doctype html><table><tr><style></script></style>abc
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     "abc"
+-|     <table>
+-|       <tbody>
+-|         <tr>
+-|           <style>
+-|             "</script>"
+-
+-#data
+-<!doctype html><table><tr><script></style></script>abc
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     "abc"
+-|     <table>
+-|       <tbody>
+-|         <tr>
+-|           <script>
+-|             "</style>"
+-
+-#data
+-<!doctype html><table><caption><style></script></style>abc
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <table>
+-|       <caption>
+-|         <style>
+-|           "</script>"
+-|         "abc"
+-
+-#data
+-<!doctype html><table><td><style></script></style>abc
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <table>
+-|       <tbody>
+-|         <tr>
+-|           <td>
+-|             <style>
+-|               "</script>"
+-|             "abc"
+-
+-#data
+-<!doctype html><select><script></style></script>abc
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <select>
+-|       <script>
+-|         "</style>"
+-|       "abc"
+-
+-#data
+-<!doctype html><table><select><script></style></script>abc
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <select>
+-|       <script>
+-|         "</style>"
+-|       "abc"
+-|     <table>
+-
+-#data
+-<!doctype html><table><tr><select><script></style></script>abc
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <select>
+-|       <script>
+-|         "</style>"
+-|       "abc"
+-|     <table>
+-|       <tbody>
+-|         <tr>
+-
+-#data
+-<!doctype html><frameset></frameset><noframes>abc
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <frameset>
+-|   <noframes>
+-|     "abc"
+-
+-#data
+-<!doctype html><frameset></frameset><noframes>abc</noframes><!--abc-->
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <frameset>
+-|   <noframes>
+-|     "abc"
+-|   <!-- abc -->
+-
+-#data
+-<!doctype html><frameset></frameset></html><noframes>abc
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <frameset>
+-|   <noframes>
+-|     "abc"
+-
+-#data
+-<!doctype html><frameset></frameset></html><noframes>abc</noframes><!--abc-->
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <frameset>
+-|   <noframes>
+-|     "abc"
+-| <!-- abc -->
+-
+-#data
+-<!doctype html><table><tr></tbody><tfoot>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <table>
+-|       <tbody>
+-|         <tr>
+-|       <tfoot>
+-
+-#data
+-<!doctype html><table><td><svg></svg>abc<td>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <table>
+-|       <tbody>
+-|         <tr>
+-|           <td>
+-|             <svg svg>
+-|             "abc"
+-|           <td>
+diff --git a/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests19.dat b/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests19.dat
+deleted file mode 100644
+index 0d62f5a..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests19.dat
++++ /dev/null
+@@ -1,1237 +0,0 @@
+-#data
+-<!doctype html><math><mn DefinitionUrl="foo">
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <math math>
+-|       <math mn>
+-|         definitionURL="foo"
+-
+-#data
+-<!doctype html><html></p><!--foo-->
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <!-- foo -->
+-|   <head>
+-|   <body>
+-
+-#data
+-<!doctype html><head></head></p><!--foo-->
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <!-- foo -->
+-|   <body>
+-
+-#data
+-<!doctype html><body><p><pre>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <p>
+-|     <pre>
+-
+-#data
+-<!doctype html><body><p><listing>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <p>
+-|     <listing>
+-
+-#data
+-<!doctype html><p><plaintext>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <p>
+-|     <plaintext>
+-
+-#data
+-<!doctype html><p><h1>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <p>
+-|     <h1>
+-
+-#data
+-<!doctype html><form><isindex>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <form>
+-
+-#data
+-<!doctype html><isindex action="POST">
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <form>
+-|       action="POST"
+-|       <hr>
+-|       <label>
+-|         "This is a searchable index. Enter search keywords: "
+-|         <input>
+-|           name="isindex"
+-|       <hr>
+-
+-#data
+-<!doctype html><isindex prompt="this is isindex">
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <form>
+-|       <hr>
+-|       <label>
+-|         "this is isindex"
+-|         <input>
+-|           name="isindex"
+-|       <hr>
+-
+-#data
+-<!doctype html><isindex type="hidden">
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <form>
+-|       <hr>
+-|       <label>
+-|         "This is a searchable index. Enter search keywords: "
+-|         <input>
+-|           name="isindex"
+-|           type="hidden"
+-|       <hr>
+-
+-#data
+-<!doctype html><isindex name="foo">
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <form>
+-|       <hr>
+-|       <label>
+-|         "This is a searchable index. Enter search keywords: "
+-|         <input>
+-|           name="isindex"
+-|       <hr>
+-
+-#data
+-<!doctype html><ruby><p><rp>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <ruby>
+-|       <p>
+-|       <rp>
+-
+-#data
+-<!doctype html><ruby><div><span><rp>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <ruby>
+-|       <div>
+-|         <span>
+-|           <rp>
+-
+-#data
+-<!doctype html><ruby><div><p><rp>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <ruby>
+-|       <div>
+-|         <p>
+-|         <rp>
+-
+-#data
+-<!doctype html><ruby><p><rt>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <ruby>
+-|       <p>
+-|       <rt>
+-
+-#data
+-<!doctype html><ruby><div><span><rt>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <ruby>
+-|       <div>
+-|         <span>
+-|           <rt>
+-
+-#data
+-<!doctype html><ruby><div><p><rt>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <ruby>
+-|       <div>
+-|         <p>
+-|         <rt>
+-
+-#data
+-<!doctype html><math/><foo>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <math math>
+-|     <foo>
+-
+-#data
+-<!doctype html><svg/><foo>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <svg svg>
+-|     <foo>
+-
+-#data
+-<!doctype html><div></body><!--foo-->
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <div>
+-|   <!-- foo -->
+-
+-#data
+-<!doctype html><h1><div><h3><span></h1>foo
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <h1>
+-|       <div>
+-|         <h3>
+-|           <span>
+-|         "foo"
+-
+-#data
+-<!doctype html><p></h3>foo
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <p>
+-|       "foo"
+-
+-#data
+-<!doctype html><h3><li>abc</h2>foo
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <h3>
+-|       <li>
+-|         "abc"
+-|     "foo"
+-
+-#data
+-<!doctype html><table>abc<!--foo-->
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     "abc"
+-|     <table>
+-|       <!-- foo -->
+-
+-#data
+-<!doctype html><table>  <!--foo-->
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <table>
+-|       "  "
+-|       <!-- foo -->
+-
+-#data
+-<!doctype html><table> b <!--foo-->
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     " b "
+-|     <table>
+-|       <!-- foo -->
+-
+-#data
+-<!doctype html><select><option><option>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <select>
+-|       <option>
+-|       <option>
+-
+-#data
+-<!doctype html><select><option></optgroup>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <select>
+-|       <option>
+-
+-#data
+-<!doctype html><select><option></optgroup>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <select>
+-|       <option>
+-
+-#data
+-<!doctype html><p><math><mi><p><h1>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <p>
+-|       <math math>
+-|         <math mi>
+-|           <p>
+-|           <h1>
+-
+-#data
+-<!doctype html><p><math><mo><p><h1>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <p>
+-|       <math math>
+-|         <math mo>
+-|           <p>
+-|           <h1>
+-
+-#data
+-<!doctype html><p><math><mn><p><h1>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <p>
+-|       <math math>
+-|         <math mn>
+-|           <p>
+-|           <h1>
+-
+-#data
+-<!doctype html><p><math><ms><p><h1>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <p>
+-|       <math math>
+-|         <math ms>
+-|           <p>
+-|           <h1>
+-
+-#data
+-<!doctype html><p><math><mtext><p><h1>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <p>
+-|       <math math>
+-|         <math mtext>
+-|           <p>
+-|           <h1>
+-
+-#data
+-<!doctype html><frameset></noframes>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <frameset>
+-
+-#data
+-<!doctype html><html c=d><body></html><html a=b>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   a="b"
+-|   c="d"
+-|   <head>
+-|   <body>
+-
+-#data
+-<!doctype html><html c=d><frameset></frameset></html><html a=b>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   a="b"
+-|   c="d"
+-|   <head>
+-|   <frameset>
+-
+-#data
+-<!doctype html><html><frameset></frameset></html><!--foo-->
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <frameset>
+-| <!-- foo -->
+-
+-#data
+-<!doctype html><html><frameset></frameset></html>  
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <frameset>
+-|   "  "
+-
+-#data
+-<!doctype html><html><frameset></frameset></html>abc
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <frameset>
+-
+-#data
+-<!doctype html><html><frameset></frameset></html><p>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <frameset>
+-
+-#data
+-<!doctype html><html><frameset></frameset></html></p>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <frameset>
+-
+-#data
+-<html><frameset></frameset></html><!doctype html>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <frameset>
+-
+-#data
+-<!doctype html><body><frameset>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-
+-#data
+-<!doctype html><p><frameset><frame>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <frameset>
+-|     <frame>
+-
+-#data
+-<!doctype html><p>a<frameset>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <p>
+-|       "a"
+-
+-#data
+-<!doctype html><p> <frameset><frame>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <frameset>
+-|     <frame>
+-
+-#data
+-<!doctype html><pre><frameset>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <pre>
+-
+-#data
+-<!doctype html><listing><frameset>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <listing>
+-
+-#data
+-<!doctype html><li><frameset>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <li>
+-
+-#data
+-<!doctype html><dd><frameset>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <dd>
+-
+-#data
+-<!doctype html><dt><frameset>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <dt>
+-
+-#data
+-<!doctype html><button><frameset>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <button>
+-
+-#data
+-<!doctype html><applet><frameset>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <applet>
+-
+-#data
+-<!doctype html><marquee><frameset>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <marquee>
+-
+-#data
+-<!doctype html><object><frameset>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <object>
+-
+-#data
+-<!doctype html><table><frameset>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <table>
+-
+-#data
+-<!doctype html><area><frameset>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <area>
+-
+-#data
+-<!doctype html><basefont><frameset>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|     <basefont>
+-|   <frameset>
+-
+-#data
+-<!doctype html><bgsound><frameset>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|     <bgsound>
+-|   <frameset>
+-
+-#data
+-<!doctype html><br><frameset>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <br>
+-
+-#data
+-<!doctype html><embed><frameset>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <embed>
+-
+-#data
+-<!doctype html><img><frameset>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <img>
+-
+-#data
+-<!doctype html><input><frameset>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <input>
+-
+-#data
+-<!doctype html><keygen><frameset>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <keygen>
+-
+-#data
+-<!doctype html><wbr><frameset>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <wbr>
+-
+-#data
+-<!doctype html><hr><frameset>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <hr>
+-
+-#data
+-<!doctype html><textarea></textarea><frameset>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <textarea>
+-
+-#data
+-<!doctype html><xmp></xmp><frameset>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <xmp>
+-
+-#data
+-<!doctype html><iframe></iframe><frameset>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <iframe>
+-
+-#data
+-<!doctype html><select></select><frameset>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <select>
+-
+-#data
+-<!doctype html><svg></svg><frameset><frame>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <frameset>
+-|     <frame>
+-
+-#data
+-<!doctype html><math></math><frameset><frame>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <frameset>
+-|     <frame>
+-
+-#data
+-<!doctype html><svg><foreignObject><div> <frameset><frame>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <frameset>
+-|     <frame>
+-
+-#data
+-<!doctype html><svg>a</svg><frameset><frame>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <svg svg>
+-|       "a"
+-
+-#data
+-<!doctype html><svg> </svg><frameset><frame>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <frameset>
+-|     <frame>
+-
+-#data
+-<html>aaa<frameset></frameset>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "aaa"
+-
+-#data
+-<html> a <frameset></frameset>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "a "
+-
+-#data
+-<!doctype html><div><frameset>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <frameset>
+-
+-#data
+-<!doctype html><div><body><frameset>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <div>
+-
+-#data
+-<!doctype html><p><math></p>a
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <p>
+-|       <math math>
+-|     "a"
+-
+-#data
+-<!doctype html><p><math><mn><span></p>a
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <p>
+-|       <math math>
+-|         <math mn>
+-|           <span>
+-|             <p>
+-|             "a"
+-
+-#data
+-<!doctype html><math></html>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <math math>
+-
+-#data
+-<!doctype html><meta charset="ascii">
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|     <meta>
+-|       charset="ascii"
+-|   <body>
+-
+-#data
+-<!doctype html><meta http-equiv="content-type" content="text/html;charset=ascii">
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|     <meta>
+-|       content="text/html;charset=ascii"
+-|       http-equiv="content-type"
+-|   <body>
+-
+-#data
+-<!doctype html><head><!--aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa--><meta charset="utf8">
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|     <!-- aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa -->
+-|     <meta>
+-|       charset="utf8"
+-|   <body>
+-
+-#data
+-<!doctype html><html a=b><head></head><html c=d>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   a="b"
+-|   c="d"
+-|   <head>
+-|   <body>
+-
+-#data
+-<!doctype html><image/>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <img>
+-
+-#data
+-<!doctype html>a<i>b<table>c<b>d</i>e</b>f
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     "a"
+-|     <i>
+-|       "bc"
+-|       <b>
+-|         "de"
+-|       "f"
+-|       <table>
+-
+-#data
+-<!doctype html><table><i>a<b>b<div>c<a>d</i>e</b>f
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <i>
+-|       "a"
+-|       <b>
+-|         "b"
+-|     <b>
+-|     <div>
+-|       <b>
+-|         <i>
+-|           "c"
+-|           <a>
+-|             "d"
+-|         <a>
+-|           "e"
+-|       <a>
+-|         "f"
+-|     <table>
+-
+-#data
+-<!doctype html><i>a<b>b<div>c<a>d</i>e</b>f
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <i>
+-|       "a"
+-|       <b>
+-|         "b"
+-|     <b>
+-|     <div>
+-|       <b>
+-|         <i>
+-|           "c"
+-|           <a>
+-|             "d"
+-|         <a>
+-|           "e"
+-|       <a>
+-|         "f"
+-
+-#data
+-<!doctype html><table><i>a<b>b<div>c</i>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <i>
+-|       "a"
+-|       <b>
+-|         "b"
+-|     <b>
+-|       <div>
+-|         <i>
+-|           "c"
+-|     <table>
+-
+-#data
+-<!doctype html><table><i>a<b>b<div>c<a>d</i>e</b>f
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <i>
+-|       "a"
+-|       <b>
+-|         "b"
+-|     <b>
+-|     <div>
+-|       <b>
+-|         <i>
+-|           "c"
+-|           <a>
+-|             "d"
+-|         <a>
+-|           "e"
+-|       <a>
+-|         "f"
+-|     <table>
+-
+-#data
+-<!doctype html><table><i>a<div>b<tr>c<b>d</i>e
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <i>
+-|       "a"
+-|       <div>
+-|         "b"
+-|     <i>
+-|       "c"
+-|       <b>
+-|         "d"
+-|     <b>
+-|       "e"
+-|     <table>
+-|       <tbody>
+-|         <tr>
+-
+-#data
+-<!doctype html><table><td><table><i>a<div>b<b>c</i>d
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <table>
+-|       <tbody>
+-|         <tr>
+-|           <td>
+-|             <i>
+-|               "a"
+-|             <div>
+-|               <i>
+-|                 "b"
+-|                 <b>
+-|                   "c"
+-|               <b>
+-|                 "d"
+-|             <table>
+-
+-#data
+-<!doctype html><body><bgsound>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <bgsound>
+-
+-#data
+-<!doctype html><body><basefont>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <basefont>
+-
+-#data
+-<!doctype html><a><b></a><basefont>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <a>
+-|       <b>
+-|     <basefont>
+-
+-#data
+-<!doctype html><a><b></a><bgsound>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <a>
+-|       <b>
+-|     <bgsound>
+-
+-#data
+-<!doctype html><figcaption><article></figcaption>a
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <figcaption>
+-|       <article>
+-|     "a"
+-
+-#data
+-<!doctype html><summary><article></summary>a
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <summary>
+-|       <article>
+-|     "a"
+-
+-#data
+-<!doctype html><p><a><plaintext>b
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <p>
+-|       <a>
+-|     <plaintext>
+-|       <a>
+-|         "b"
+-
+-#data
+-<!DOCTYPE html><div>a<a></div>b<p>c</p>d
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <div>
+-|       "a"
+-|       <a>
+-|     <a>
+-|       "b"
+-|       <p>
+-|         "c"
+-|       "d"
+diff --git a/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests2.dat b/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests2.dat
+deleted file mode 100644
+index 60d8592..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests2.dat
++++ /dev/null
+@@ -1,763 +0,0 @@
+-#data
+-<!DOCTYPE html>Test
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     "Test"
+-
+-#data
+-<textarea>test</div>test
+-#errors
+-Line: 1 Col: 10 Unexpected start tag (textarea). Expected DOCTYPE.
+-Line: 1 Col: 24 Expected closing tag. Unexpected end of file.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <textarea>
+-|       "test</div>test"
+-
+-#data
+-<table><td>
+-#errors
+-Line: 1 Col: 7 Unexpected start tag (table). Expected DOCTYPE.
+-Line: 1 Col: 11 Unexpected table cell start tag (td) in the table body phase.
+-Line: 1 Col: 11 Expected closing tag. Unexpected end of file.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <table>
+-|       <tbody>
+-|         <tr>
+-|           <td>
+-
+-#data
+-<table><td>test</tbody></table>
+-#errors
+-Line: 1 Col: 7 Unexpected start tag (table). Expected DOCTYPE.
+-Line: 1 Col: 11 Unexpected table cell start tag (td) in the table body phase.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <table>
+-|       <tbody>
+-|         <tr>
+-|           <td>
+-|             "test"
+-
+-#data
+-<frame>test
+-#errors
+-Line: 1 Col: 7 Unexpected start tag (frame). Expected DOCTYPE.
+-Line: 1 Col: 7 Unexpected start tag frame. Ignored.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "test"
+-
+-#data
+-<!DOCTYPE html><frameset>test
+-#errors
+-Line: 1 Col: 29 Unepxected characters in the frameset phase. Characters ignored.
+-Line: 1 Col: 29 Expected closing tag. Unexpected end of file.
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <frameset>
+-
+-#data
+-<!DOCTYPE html><frameset><!DOCTYPE html>
+-#errors
+-Line: 1 Col: 40 Unexpected DOCTYPE. Ignored.
+-Line: 1 Col: 40 Expected closing tag. Unexpected end of file.
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <frameset>
+-
+-#data
+-<!DOCTYPE html><font><p><b>test</font>
+-#errors
+-Line: 1 Col: 38 End tag (font) violates step 1, paragraph 3 of the adoption agency algorithm.
+-Line: 1 Col: 38 End tag (font) violates step 1, paragraph 3 of the adoption agency algorithm.
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <font>
+-|     <p>
+-|       <font>
+-|         <b>
+-|           "test"
+-
+-#data
+-<!DOCTYPE html><dt><div><dd>
+-#errors
+-Line: 1 Col: 28 Missing end tag (div, dt).
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <dt>
+-|       <div>
+-|     <dd>
+-
+-#data
+-<script></x
+-#errors
+-Line: 1 Col: 8 Unexpected start tag (script). Expected DOCTYPE.
+-Line: 1 Col: 11 Unexpected end of file. Expected end tag (script).
+-#document
+-| <html>
+-|   <head>
+-|     <script>
+-|       "</x"
+-|   <body>
+-
+-#data
+-<table><plaintext><td>
+-#errors
+-Line: 1 Col: 7 Unexpected start tag (table). Expected DOCTYPE.
+-Line: 1 Col: 18 Unexpected start tag (plaintext) in table context caused voodoo mode.
+-Line: 1 Col: 22 Unexpected end of file. Expected table content.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <plaintext>
+-|       "<td>"
+-|     <table>
+-
+-#data
+-<plaintext></plaintext>
+-#errors
+-Line: 1 Col: 11 Unexpected start tag (plaintext). Expected DOCTYPE.
+-Line: 1 Col: 23 Expected closing tag. Unexpected end of file.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <plaintext>
+-|       "</plaintext>"
+-
+-#data
+-<!DOCTYPE html><table><tr>TEST
+-#errors
+-Line: 1 Col: 30 Unexpected non-space characters in table context caused voodoo mode.
+-Line: 1 Col: 30 Unexpected end of file. Expected table content.
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     "TEST"
+-|     <table>
+-|       <tbody>
+-|         <tr>
+-
+-#data
+-<!DOCTYPE html><body t1=1><body t2=2><body t3=3 t4=4>
+-#errors
+-Line: 1 Col: 37 Unexpected start tag (body).
+-Line: 1 Col: 53 Unexpected start tag (body).
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     t1="1"
+-|     t2="2"
+-|     t3="3"
+-|     t4="4"
+-
+-#data
+-</b test
+-#errors
+-Line: 1 Col: 8 Unexpected end of file in attribute name.
+-Line: 1 Col: 8 End tag contains unexpected attributes.
+-Line: 1 Col: 8 Unexpected end tag (b). Expected DOCTYPE.
+-Line: 1 Col: 8 Unexpected end tag (b) after the (implied) root element.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-
+-#data
+-<!DOCTYPE html></b test<b &=&amp>X
+-#errors
+-Line: 1 Col: 32 Named entity didn't end with ';'.
+-Line: 1 Col: 33 End tag contains unexpected attributes.
+-Line: 1 Col: 33 Unexpected end tag (b) after the (implied) root element.
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     "X"
+-
+-#data
+-<!doctypehtml><scrIPt type=text/x-foobar;baz>X</SCRipt
+-#errors
+-Line: 1 Col: 9 No space after literal string 'DOCTYPE'.
+-Line: 1 Col: 54 Unexpected end of file in the tag name.
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|     <script>
+-|       type="text/x-foobar;baz"
+-|       "X</SCRipt"
+-|   <body>
+-
+-#data
+-&
+-#errors
+-Line: 1 Col: 1 Unexpected non-space characters. Expected DOCTYPE.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "&"
+-
+-#data
+-&#
+-#errors
+-Line: 1 Col: 1 Numeric entity expected. Got end of file instead.
+-Line: 1 Col: 1 Unexpected non-space characters. Expected DOCTYPE.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "&#"
+-
+-#data
+-&#X
+-#errors
+-Line: 1 Col: 3 Numeric entity expected but none found.
+-Line: 1 Col: 3 Unexpected non-space characters. Expected DOCTYPE.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "&#X"
+-
+-#data
+-&#x
+-#errors
+-Line: 1 Col: 3 Numeric entity expected but none found.
+-Line: 1 Col: 3 Unexpected non-space characters. Expected DOCTYPE.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "&#x"
+-
+-#data
+-&#45
+-#errors
+-Line: 1 Col: 4 Numeric entity didn't end with ';'.
+-Line: 1 Col: 4 Unexpected non-space characters. Expected DOCTYPE.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "-"
+-
+-#data
+-&x-test
+-#errors
+-Line: 1 Col: 1 Named entity expected. Got none.
+-Line: 1 Col: 1 Unexpected non-space characters. Expected DOCTYPE.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "&x-test"
+-
+-#data
+-<!doctypehtml><p><li>
+-#errors
+-Line: 1 Col: 9 No space after literal string 'DOCTYPE'.
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <p>
+-|     <li>
+-
+-#data
+-<!doctypehtml><p><dt>
+-#errors
+-Line: 1 Col: 9 No space after literal string 'DOCTYPE'.
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <p>
+-|     <dt>
+-
+-#data
+-<!doctypehtml><p><dd>
+-#errors
+-Line: 1 Col: 9 No space after literal string 'DOCTYPE'.
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <p>
+-|     <dd>
+-
+-#data
+-<!doctypehtml><p><form>
+-#errors
+-Line: 1 Col: 9 No space after literal string 'DOCTYPE'.
+-Line: 1 Col: 23 Expected closing tag. Unexpected end of file.
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <p>
+-|     <form>
+-
+-#data
+-<!DOCTYPE html><p></P>X
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <p>
+-|     "X"
+-
+-#data
+-&AMP
+-#errors
+-Line: 1 Col: 4 Named entity didn't end with ';'.
+-Line: 1 Col: 4 Unexpected non-space characters. Expected DOCTYPE.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "&"
+-
+-#data
+-&AMp;
+-#errors
+-Line: 1 Col: 1 Named entity expected. Got none.
+-Line: 1 Col: 1 Unexpected non-space characters. Expected DOCTYPE.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "&AMp;"
+-
+-#data
+-<!DOCTYPE html><html><head></head><body><thisISasillyTESTelementNameToMakeSureCrazyTagNamesArePARSEDcorrectLY>
+-#errors
+-Line: 1 Col: 110 Expected closing tag. Unexpected end of file.
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <thisisasillytestelementnametomakesurecrazytagnamesareparsedcorrectly>
+-
+-#data
+-<!DOCTYPE html>X</body>X
+-#errors
+-Line: 1 Col: 24 Unexpected non-space characters in the after body phase.
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     "XX"
+-
+-#data
+-<!DOCTYPE html><!-- X
+-#errors
+-Line: 1 Col: 21 Unexpected end of file in comment.
+-#document
+-| <!DOCTYPE html>
+-| <!--  X -->
+-| <html>
+-|   <head>
+-|   <body>
+-
+-#data
+-<!DOCTYPE html><table><caption>test TEST</caption><td>test
+-#errors
+-Line: 1 Col: 54 Unexpected table cell start tag (td) in the table body phase.
+-Line: 1 Col: 58 Expected closing tag. Unexpected end of file.
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <table>
+-|       <caption>
+-|         "test TEST"
+-|       <tbody>
+-|         <tr>
+-|           <td>
+-|             "test"
+-
+-#data
+-<!DOCTYPE html><select><option><optgroup>
+-#errors
+-Line: 1 Col: 41 Expected closing tag. Unexpected end of file.
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <select>
+-|       <option>
+-|       <optgroup>
+-
+-#data
+-<!DOCTYPE html><select><optgroup><option></optgroup><option><select><option>
+-#errors
+-Line: 1 Col: 68 Unexpected select start tag in the select phase treated as select end tag.
+-Line: 1 Col: 76 Expected closing tag. Unexpected end of file.
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <select>
+-|       <optgroup>
+-|         <option>
+-|       <option>
+-|     <option>
+-
+-#data
+-<!DOCTYPE html><select><optgroup><option><optgroup>
+-#errors
+-Line: 1 Col: 51 Expected closing tag. Unexpected end of file.
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <select>
+-|       <optgroup>
+-|         <option>
+-|       <optgroup>
+-
+-#data
+-<!DOCTYPE html><datalist><option>foo</datalist>bar
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <datalist>
+-|       <option>
+-|         "foo"
+-|     "bar"
+-
+-#data
+-<!DOCTYPE html><font><input><input></font>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <font>
+-|       <input>
+-|       <input>
+-
+-#data
+-<!DOCTYPE html><!-- XXX - XXX -->
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <!--  XXX - XXX  -->
+-| <html>
+-|   <head>
+-|   <body>
+-
+-#data
+-<!DOCTYPE html><!-- XXX - XXX
+-#errors
+-Line: 1 Col: 29 Unexpected end of file in comment (-)
+-#document
+-| <!DOCTYPE html>
+-| <!--  XXX - XXX -->
+-| <html>
+-|   <head>
+-|   <body>
+-
+-#data
+-<!DOCTYPE html><!-- XXX - XXX - XXX -->
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <!--  XXX - XXX - XXX  -->
+-| <html>
+-|   <head>
+-|   <body>
+-
+-#data
+-<isindex test=x name=x>
+-#errors
+-Line: 1 Col: 23 Unexpected start tag (isindex). Expected DOCTYPE.
+-Line: 1 Col: 23 Unexpected start tag isindex. Don't use it!
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <form>
+-|       <hr>
+-|       <label>
+-|         "This is a searchable index. Enter search keywords: "
+-|         <input>
+-|           name="isindex"
+-|           test="x"
+-|       <hr>
+-
+-#data
+-test
+-test
+-#errors
+-Line: 2 Col: 4 Unexpected non-space characters. Expected DOCTYPE.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "test
+-test"
+-
+-#data
+-<!DOCTYPE html><body><title>test</body></title>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <title>
+-|       "test</body>"
+-
+-#data
+-<!DOCTYPE html><body><title>X</title><meta name=z><link rel=foo><style>
+-x { content:"</style" } </style>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <title>
+-|       "X"
+-|     <meta>
+-|       name="z"
+-|     <link>
+-|       rel="foo"
+-|     <style>
+-|       "
+-x { content:"</style" } "
+-
+-#data
+-<!DOCTYPE html><select><optgroup></optgroup></select>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <select>
+-|       <optgroup>
+-
+-#data
+- 
+- 
+-#errors
+-Line: 2 Col: 1 Unexpected End of file. Expected DOCTYPE.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-
+-#data
+-<!DOCTYPE html>  <html>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-
+-#data
+-<!DOCTYPE html><script>
+-</script>  <title>x</title>  </head>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|     <script>
+-|       "
+-"
+-|     "  "
+-|     <title>
+-|       "x"
+-|     "  "
+-|   <body>
+-
+-#data
+-<!DOCTYPE html><html><body><html id=x>
+-#errors
+-Line: 1 Col: 38 html needs to be the first start tag.
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   id="x"
+-|   <head>
+-|   <body>
+-
+-#data
+-<!DOCTYPE html>X</body><html id="x">
+-#errors
+-Line: 1 Col: 36 Unexpected start tag token (html) in the after body phase.
+-Line: 1 Col: 36 html needs to be the first start tag.
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   id="x"
+-|   <head>
+-|   <body>
+-|     "X"
+-
+-#data
+-<!DOCTYPE html><head><html id=x>
+-#errors
+-Line: 1 Col: 32 html needs to be the first start tag.
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   id="x"
+-|   <head>
+-|   <body>
+-
+-#data
+-<!DOCTYPE html>X</html>X
+-#errors
+-Line: 1 Col: 24 Unexpected non-space characters in the after body phase.
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     "XX"
+-
+-#data
+-<!DOCTYPE html>X</html> 
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     "X "
+-
+-#data
+-<!DOCTYPE html>X</html><p>X
+-#errors
+-Line: 1 Col: 26 Unexpected start tag (p).
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     "X"
+-|     <p>
+-|       "X"
+-
+-#data
+-<!DOCTYPE html>X<p/x/y/z>
+-#errors
+-Line: 1 Col: 19 Expected a > after the /.
+-Line: 1 Col: 21 Solidus (/) incorrectly placed in tag.
+-Line: 1 Col: 23 Solidus (/) incorrectly placed in tag.
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     "X"
+-|     <p>
+-|       x=""
+-|       y=""
+-|       z=""
+-
+-#data
+-<!DOCTYPE html><!--x--
+-#errors
+-Line: 1 Col: 22 Unexpected end of file in comment (--).
+-#document
+-| <!DOCTYPE html>
+-| <!-- x -->
+-| <html>
+-|   <head>
+-|   <body>
+-
+-#data
+-<!DOCTYPE html><table><tr><td></p></table>
+-#errors
+-Line: 1 Col: 34 Unexpected end tag (p). Ignored.
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <table>
+-|       <tbody>
+-|         <tr>
+-|           <td>
+-|             <p>
+-
+-#data
+-<!DOCTYPE <!DOCTYPE HTML>><!--<!--x-->-->
+-#errors
+-Line: 1 Col: 20 Expected space or '>'. Got ''
+-Line: 1 Col: 25 Erroneous DOCTYPE.
+-Line: 1 Col: 35 Unexpected character in comment found.
+-#document
+-| <!DOCTYPE <!doctype>
+-| <html>
+-|   <head>
+-|   <body>
+-|     ">"
+-|     <!-- <!--x -->
+-|     "-->"
+-
+-#data
+-<!doctype html><div><form></form><div></div></div>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <div>
+-|       <form>
+-|       <div>
+diff --git a/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests20.dat b/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests20.dat
+deleted file mode 100644
+index 6bd8256..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests20.dat
++++ /dev/null
+@@ -1,455 +0,0 @@
+-#data
+-<!doctype html><p><button><button>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <p>
+-|       <button>
+-|       <button>
+-
+-#data
+-<!doctype html><p><button><address>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <p>
+-|       <button>
+-|         <address>
+-
+-#data
+-<!doctype html><p><button><blockquote>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <p>
+-|       <button>
+-|         <blockquote>
+-
+-#data
+-<!doctype html><p><button><menu>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <p>
+-|       <button>
+-|         <menu>
+-
+-#data
+-<!doctype html><p><button><p>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <p>
+-|       <button>
+-|         <p>
+-
+-#data
+-<!doctype html><p><button><ul>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <p>
+-|       <button>
+-|         <ul>
+-
+-#data
+-<!doctype html><p><button><h1>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <p>
+-|       <button>
+-|         <h1>
+-
+-#data
+-<!doctype html><p><button><h6>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <p>
+-|       <button>
+-|         <h6>
+-
+-#data
+-<!doctype html><p><button><listing>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <p>
+-|       <button>
+-|         <listing>
+-
+-#data
+-<!doctype html><p><button><pre>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <p>
+-|       <button>
+-|         <pre>
+-
+-#data
+-<!doctype html><p><button><form>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <p>
+-|       <button>
+-|         <form>
+-
+-#data
+-<!doctype html><p><button><li>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <p>
+-|       <button>
+-|         <li>
+-
+-#data
+-<!doctype html><p><button><dd>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <p>
+-|       <button>
+-|         <dd>
+-
+-#data
+-<!doctype html><p><button><dt>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <p>
+-|       <button>
+-|         <dt>
+-
+-#data
+-<!doctype html><p><button><plaintext>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <p>
+-|       <button>
+-|         <plaintext>
+-
+-#data
+-<!doctype html><p><button><table>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <p>
+-|       <button>
+-|         <table>
+-
+-#data
+-<!doctype html><p><button><hr>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <p>
+-|       <button>
+-|         <hr>
+-
+-#data
+-<!doctype html><p><button><xmp>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <p>
+-|       <button>
+-|         <xmp>
+-
+-#data
+-<!doctype html><p><button></p>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <p>
+-|       <button>
+-|         <p>
+-
+-#data
+-<!doctype html><address><button></address>a
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <address>
+-|       <button>
+-|     "a"
+-
+-#data
+-<!doctype html><address><button></address>a
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <address>
+-|       <button>
+-|     "a"
+-
+-#data
+-<p><table></p>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <p>
+-|       <p>
+-|       <table>
+-
+-#data
+-<!doctype html><svg>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <svg svg>
+-
+-#data
+-<!doctype html><p><figcaption>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <p>
+-|     <figcaption>
+-
+-#data
+-<!doctype html><p><summary>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <p>
+-|     <summary>
+-
+-#data
+-<!doctype html><form><table><form>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <form>
+-|       <table>
+-
+-#data
+-<!doctype html><table><form><form>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <table>
+-|       <form>
+-
+-#data
+-<!doctype html><table><form></table><form>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <table>
+-|       <form>
+-
+-#data
+-<!doctype html><svg><foreignObject><p>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <svg svg>
+-|       <svg foreignObject>
+-|         <p>
+-
+-#data
+-<!doctype html><svg><title>abc
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <svg svg>
+-|       <svg title>
+-|         "abc"
+-
+-#data
+-<option><span><option>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <option>
+-|       <span>
+-|         <option>
+-
+-#data
+-<option><option>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <option>
+-|     <option>
+-
+-#data
+-<math><annotation-xml><div>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <math math>
+-|       <math annotation-xml>
+-|     <div>
+-
+-#data
+-<math><annotation-xml encoding="application/svg+xml"><div>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <math math>
+-|       <math annotation-xml>
+-|         encoding="application/svg+xml"
+-|     <div>
+-
+-#data
+-<math><annotation-xml encoding="application/xhtml+xml"><div>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <math math>
+-|       <math annotation-xml>
+-|         encoding="application/xhtml+xml"
+-|         <div>
+-
+-#data
+-<math><annotation-xml encoding="aPPlication/xhtmL+xMl"><div>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <math math>
+-|       <math annotation-xml>
+-|         encoding="aPPlication/xhtmL+xMl"
+-|         <div>
+-
+-#data
+-<math><annotation-xml encoding="text/html"><div>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <math math>
+-|       <math annotation-xml>
+-|         encoding="text/html"
+-|         <div>
+-
+-#data
+-<math><annotation-xml encoding="Text/htmL"><div>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <math math>
+-|       <math annotation-xml>
+-|         encoding="Text/htmL"
+-|         <div>
+-
+-#data
+-<math><annotation-xml encoding=" text/html "><div>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <math math>
+-|       <math annotation-xml>
+-|         encoding=" text/html "
+-|     <div>
+diff --git a/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests21.dat b/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests21.dat
+deleted file mode 100644
+index 1260ec0..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests21.dat
++++ /dev/null
+@@ -1,221 +0,0 @@
+-#data
+-<svg><![CDATA[foo]]>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <svg svg>
+-|       "foo"
+-
+-#data
+-<math><![CDATA[foo]]>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <math math>
+-|       "foo"
+-
+-#data
+-<div><![CDATA[foo]]>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <div>
+-|       <!-- [CDATA[foo]] -->
+-
+-#data
+-<svg><![CDATA[foo
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <svg svg>
+-|       "foo"
+-
+-#data
+-<svg><![CDATA[foo
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <svg svg>
+-|       "foo"
+-
+-#data
+-<svg><![CDATA[
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <svg svg>
+-
+-#data
+-<svg><![CDATA[]]>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <svg svg>
+-
+-#data
+-<svg><![CDATA[]] >]]>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <svg svg>
+-|       "]] >"
+-
+-#data
+-<svg><![CDATA[]] >]]>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <svg svg>
+-|       "]] >"
+-
+-#data
+-<svg><![CDATA[]]
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <svg svg>
+-|       "]]"
+-
+-#data
+-<svg><![CDATA[]
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <svg svg>
+-|       "]"
+-
+-#data
+-<svg><![CDATA[]>a
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <svg svg>
+-|       "]>a"
+-
+-#data
+-<svg><foreignObject><div><![CDATA[foo]]>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <svg svg>
+-|       <svg foreignObject>
+-|         <div>
+-|           <!-- [CDATA[foo]] -->
+-
+-#data
+-<svg><![CDATA[<svg>]]>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <svg svg>
+-|       "<svg>"
+-
+-#data
+-<svg><![CDATA[</svg>a]]>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <svg svg>
+-|       "</svg>a"
+-
+-#data
+-<svg><![CDATA[<svg>a
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <svg svg>
+-|       "<svg>a"
+-
+-#data
+-<svg><![CDATA[</svg>a
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <svg svg>
+-|       "</svg>a"
+-
+-#data
+-<svg><![CDATA[<svg>]]><path>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <svg svg>
+-|       "<svg>"
+-|       <svg path>
+-
+-#data
+-<svg><![CDATA[<svg>]]></path>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <svg svg>
+-|       "<svg>"
+-
+-#data
+-<svg><![CDATA[<svg>]]><!--path-->
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <svg svg>
+-|       "<svg>"
+-|       <!-- path -->
+-
+-#data
+-<svg><![CDATA[<svg>]]>path
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <svg svg>
+-|       "<svg>path"
+-
+-#data
+-<svg><![CDATA[<!--svg-->]]>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <svg svg>
+-|       "<!--svg-->"
+diff --git a/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests22.dat b/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests22.dat
+deleted file mode 100644
+index aab27b2..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests22.dat
++++ /dev/null
+@@ -1,157 +0,0 @@
+-#data
+-<a><b><big><em><strong><div>X</a>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <a>
+-|       <b>
+-|         <big>
+-|           <em>
+-|             <strong>
+-|     <big>
+-|       <em>
+-|         <strong>
+-|           <div>
+-|             <a>
+-|               "X"
+-
+-#data
+-<a><b><div id=1><div id=2><div id=3><div id=4><div id=5><div id=6><div id=7><div id=8>A</a>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <a>
+-|       <b>
+-|     <b>
+-|       <div>
+-|         id="1"
+-|         <a>
+-|         <div>
+-|           id="2"
+-|           <a>
+-|           <div>
+-|             id="3"
+-|             <a>
+-|             <div>
+-|               id="4"
+-|               <a>
+-|               <div>
+-|                 id="5"
+-|                 <a>
+-|                 <div>
+-|                   id="6"
+-|                   <a>
+-|                   <div>
+-|                     id="7"
+-|                     <a>
+-|                     <div>
+-|                       id="8"
+-|                       <a>
+-|                         "A"
+-
+-#data
+-<a><b><div id=1><div id=2><div id=3><div id=4><div id=5><div id=6><div id=7><div id=8><div id=9>A</a>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <a>
+-|       <b>
+-|     <b>
+-|       <div>
+-|         id="1"
+-|         <a>
+-|         <div>
+-|           id="2"
+-|           <a>
+-|           <div>
+-|             id="3"
+-|             <a>
+-|             <div>
+-|               id="4"
+-|               <a>
+-|               <div>
+-|                 id="5"
+-|                 <a>
+-|                 <div>
+-|                   id="6"
+-|                   <a>
+-|                   <div>
+-|                     id="7"
+-|                     <a>
+-|                     <div>
+-|                       id="8"
+-|                       <a>
+-|                         <div>
+-|                           id="9"
+-|                           "A"
+-
+-#data
+-<a><b><div id=1><div id=2><div id=3><div id=4><div id=5><div id=6><div id=7><div id=8><div id=9><div id=10>A</a>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <a>
+-|       <b>
+-|     <b>
+-|       <div>
+-|         id="1"
+-|         <a>
+-|         <div>
+-|           id="2"
+-|           <a>
+-|           <div>
+-|             id="3"
+-|             <a>
+-|             <div>
+-|               id="4"
+-|               <a>
+-|               <div>
+-|                 id="5"
+-|                 <a>
+-|                 <div>
+-|                   id="6"
+-|                   <a>
+-|                   <div>
+-|                     id="7"
+-|                     <a>
+-|                     <div>
+-|                       id="8"
+-|                       <a>
+-|                         <div>
+-|                           id="9"
+-|                           <div>
+-|                             id="10"
+-|                             "A"
+-
+-#data
+-<cite><b><cite><i><cite><i><cite><i><div>X</b>TEST
+-#errors
+-Line: 1 Col: 6 Unexpected start tag (cite). Expected DOCTYPE.
+-Line: 1 Col: 46 End tag (b) violates step 1, paragraph 3 of the adoption agency algorithm.
+-Line: 1 Col: 50 Expected closing tag. Unexpected end of file.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <cite>
+-|       <b>
+-|         <cite>
+-|           <i>
+-|             <cite>
+-|               <i>
+-|                 <cite>
+-|                   <i>
+-|       <i>
+-|         <i>
+-|           <div>
+-|             <b>
+-|               "X"
+-|             "TEST"
+diff --git a/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests23.dat b/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests23.dat
+deleted file mode 100644
+index 34d2a73..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests23.dat
++++ /dev/null
+@@ -1,155 +0,0 @@
+-#data
+-<p><font size=4><font color=red><font size=4><font size=4><font size=4><font size=4><font size=4><font color=red><p>X
+-#errors
+-3: Start tag seen without seeing a doctype first. Expected “<!DOCTYPE html>”.
+-116: Unclosed elements.
+-117: End of file seen and there were open elements.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <p>
+-|       <font>
+-|         size="4"
+-|         <font>
+-|           color="red"
+-|           <font>
+-|             size="4"
+-|             <font>
+-|               size="4"
+-|               <font>
+-|                 size="4"
+-|                 <font>
+-|                   size="4"
+-|                   <font>
+-|                     size="4"
+-|                     <font>
+-|                       color="red"
+-|     <p>
+-|       <font>
+-|         color="red"
+-|         <font>
+-|           size="4"
+-|           <font>
+-|             size="4"
+-|             <font>
+-|               size="4"
+-|               <font>
+-|                 color="red"
+-|                 "X"
+-
+-#data
+-<p><font size=4><font size=4><font size=4><font size=4><p>X
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <p>
+-|       <font>
+-|         size="4"
+-|         <font>
+-|           size="4"
+-|           <font>
+-|             size="4"
+-|             <font>
+-|               size="4"
+-|     <p>
+-|       <font>
+-|         size="4"
+-|         <font>
+-|           size="4"
+-|           <font>
+-|             size="4"
+-|             "X"
+-
+-#data
+-<p><font size=4><font size=4><font size=4><font size="5"><font size=4><p>X
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <p>
+-|       <font>
+-|         size="4"
+-|         <font>
+-|           size="4"
+-|           <font>
+-|             size="4"
+-|             <font>
+-|               size="5"
+-|               <font>
+-|                 size="4"
+-|     <p>
+-|       <font>
+-|         size="4"
+-|         <font>
+-|           size="4"
+-|           <font>
+-|             size="5"
+-|             <font>
+-|               size="4"
+-|               "X"
+-
+-#data
+-<p><font size=4 id=a><font size=4 id=b><font size=4><font size=4><p>X
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <p>
+-|       <font>
+-|         id="a"
+-|         size="4"
+-|         <font>
+-|           id="b"
+-|           size="4"
+-|           <font>
+-|             size="4"
+-|             <font>
+-|               size="4"
+-|     <p>
+-|       <font>
+-|         id="a"
+-|         size="4"
+-|         <font>
+-|           id="b"
+-|           size="4"
+-|           <font>
+-|             size="4"
+-|             <font>
+-|               size="4"
+-|               "X"
+-
+-#data
+-<p><b id=a><b id=a><b id=a><b><object><b id=a><b id=a>X</object><p>Y
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <p>
+-|       <b>
+-|         id="a"
+-|         <b>
+-|           id="a"
+-|           <b>
+-|             id="a"
+-|             <b>
+-|               <object>
+-|                 <b>
+-|                   id="a"
+-|                   <b>
+-|                     id="a"
+-|                     "X"
+-|     <p>
+-|       <b>
+-|         id="a"
+-|         <b>
+-|           id="a"
+-|           <b>
+-|             id="a"
+-|             <b>
+-|               "Y"
+diff --git a/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests24.dat b/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests24.dat
+deleted file mode 100644
+index f6dc7eb..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests24.dat
++++ /dev/null
+@@ -1,79 +0,0 @@
+-#data
+-<!DOCTYPE html>&NotEqualTilde;
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     "≂̸"
+-
+-#data
+-<!DOCTYPE html>&NotEqualTilde;A
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     "≂̸A"
+-
+-#data
+-<!DOCTYPE html>&ThickSpace;
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     "  "
+-
+-#data
+-<!DOCTYPE html>&ThickSpace;A
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     "  A"
+-
+-#data
+-<!DOCTYPE html>&NotSubset;
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     "⊂⃒"
+-
+-#data
+-<!DOCTYPE html>&NotSubset;A
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     "⊂⃒A"
+-
+-#data
+-<!DOCTYPE html>&Gopf;
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     "𝔾"
+-
+-#data
+-<!DOCTYPE html>&Gopf;A
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     "𝔾A"
+diff --git a/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests25.dat b/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests25.dat
+deleted file mode 100644
+index 00de729..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests25.dat
++++ /dev/null
+@@ -1,219 +0,0 @@
+-#data
+-<!DOCTYPE html><body><foo>A
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <foo>
+-|       "A"
+-
+-#data
+-<!DOCTYPE html><body><area>A
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <area>
+-|     "A"
+-
+-#data
+-<!DOCTYPE html><body><base>A
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <base>
+-|     "A"
+-
+-#data
+-<!DOCTYPE html><body><basefont>A
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <basefont>
+-|     "A"
+-
+-#data
+-<!DOCTYPE html><body><bgsound>A
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <bgsound>
+-|     "A"
+-
+-#data
+-<!DOCTYPE html><body><br>A
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <br>
+-|     "A"
+-
+-#data
+-<!DOCTYPE html><body><col>A
+-#errors
+-26: Stray start tag “col”.
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     "A"
+-
+-#data
+-<!DOCTYPE html><body><command>A
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <command>
+-|     "A"
+-
+-#data
+-<!DOCTYPE html><body><embed>A
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <embed>
+-|     "A"
+-
+-#data
+-<!DOCTYPE html><body><frame>A
+-#errors
+-26: Stray start tag “frame”.
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     "A"
+-
+-#data
+-<!DOCTYPE html><body><hr>A
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <hr>
+-|     "A"
+-
+-#data
+-<!DOCTYPE html><body><img>A
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <img>
+-|     "A"
+-
+-#data
+-<!DOCTYPE html><body><input>A
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <input>
+-|     "A"
+-
+-#data
+-<!DOCTYPE html><body><keygen>A
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <keygen>
+-|     "A"
+-
+-#data
+-<!DOCTYPE html><body><link>A
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <link>
+-|     "A"
+-
+-#data
+-<!DOCTYPE html><body><meta>A
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <meta>
+-|     "A"
+-
+-#data
+-<!DOCTYPE html><body><param>A
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <param>
+-|     "A"
+-
+-#data
+-<!DOCTYPE html><body><source>A
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <source>
+-|     "A"
+-
+-#data
+-<!DOCTYPE html><body><track>A
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <track>
+-|     "A"
+-
+-#data
+-<!DOCTYPE html><body><wbr>A
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <wbr>
+-|     "A"
+diff --git a/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests26.dat b/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests26.dat
+deleted file mode 100644
+index fae11ff..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests26.dat
++++ /dev/null
+@@ -1,313 +0,0 @@
+-#data
+-<!DOCTYPE html><body><a href='#1'><nobr>1<nobr></a><br><a href='#2'><nobr>2<nobr></a><br><a href='#3'><nobr>3<nobr></a>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <a>
+-|       href="#1"
+-|       <nobr>
+-|         "1"
+-|       <nobr>
+-|     <nobr>
+-|       <br>
+-|       <a>
+-|         href="#2"
+-|     <a>
+-|       href="#2"
+-|       <nobr>
+-|         "2"
+-|       <nobr>
+-|     <nobr>
+-|       <br>
+-|       <a>
+-|         href="#3"
+-|     <a>
+-|       href="#3"
+-|       <nobr>
+-|         "3"
+-|       <nobr>
+-
+-#data
+-<!DOCTYPE html><body><b><nobr>1<nobr></b><i><nobr>2<nobr></i>3
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <b>
+-|       <nobr>
+-|         "1"
+-|       <nobr>
+-|     <nobr>
+-|       <i>
+-|     <i>
+-|       <nobr>
+-|         "2"
+-|       <nobr>
+-|     <nobr>
+-|       "3"
+-
+-#data
+-<!DOCTYPE html><body><b><nobr>1<table><nobr></b><i><nobr>2<nobr></i>3
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <b>
+-|       <nobr>
+-|         "1"
+-|         <nobr>
+-|           <i>
+-|         <i>
+-|           <nobr>
+-|             "2"
+-|           <nobr>
+-|         <nobr>
+-|           "3"
+-|         <table>
+-
+-#data
+-<!DOCTYPE html><body><b><nobr>1<table><tr><td><nobr></b><i><nobr>2<nobr></i>3
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <b>
+-|       <nobr>
+-|         "1"
+-|         <table>
+-|           <tbody>
+-|             <tr>
+-|               <td>
+-|                 <nobr>
+-|                   <i>
+-|                 <i>
+-|                   <nobr>
+-|                     "2"
+-|                   <nobr>
+-|                 <nobr>
+-|                   "3"
+-
+-#data
+-<!DOCTYPE html><body><b><nobr>1<div><nobr></b><i><nobr>2<nobr></i>3
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <b>
+-|       <nobr>
+-|         "1"
+-|     <div>
+-|       <b>
+-|         <nobr>
+-|         <nobr>
+-|       <nobr>
+-|         <i>
+-|       <i>
+-|         <nobr>
+-|           "2"
+-|         <nobr>
+-|       <nobr>
+-|         "3"
+-
+-#data
+-<!DOCTYPE html><body><b><nobr>1<nobr></b><div><i><nobr>2<nobr></i>3
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <b>
+-|       <nobr>
+-|         "1"
+-|       <nobr>
+-|     <div>
+-|       <nobr>
+-|         <i>
+-|       <i>
+-|         <nobr>
+-|           "2"
+-|         <nobr>
+-|       <nobr>
+-|         "3"
+-
+-#data
+-<!DOCTYPE html><body><b><nobr>1<nobr><ins></b><i><nobr>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <b>
+-|       <nobr>
+-|         "1"
+-|       <nobr>
+-|         <ins>
+-|     <nobr>
+-|       <i>
+-|     <i>
+-|       <nobr>
+-
+-#data
+-<!DOCTYPE html><body><b><nobr>1<ins><nobr></b><i>2
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <b>
+-|       <nobr>
+-|         "1"
+-|         <ins>
+-|       <nobr>
+-|     <nobr>
+-|       <i>
+-|         "2"
+-
+-#data
+-<!DOCTYPE html><body><b>1<nobr></b><i><nobr>2</i>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <b>
+-|       "1"
+-|       <nobr>
+-|     <nobr>
+-|       <i>
+-|     <i>
+-|       <nobr>
+-|         "2"
+-
+-#data
+-<p><code x</code></p>
+-
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <p>
+-|       <code>
+-|         code=""
+-|         x<=""
+-|     <code>
+-|       code=""
+-|       x<=""
+-|       "
+-"
+-
+-#data
+-<!DOCTYPE html><svg><foreignObject><p><i></p>a
+-#errors
+-45: End tag “p” seen, but there were open elements.
+-41: Unclosed element “i”.
+-46: End of file seen and there were open elements.
+-35: Unclosed element “foreignObject”.
+-20: Unclosed element “svg”.
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <svg svg>
+-|       <svg foreignObject>
+-|         <p>
+-|           <i>
+-|         <i>
+-|           "a"
+-
+-#data
+-<!DOCTYPE html><table><tr><td><svg><foreignObject><p><i></p>a
+-#errors
+-56: End tag “p” seen, but there were open elements.
+-52: Unclosed element “i”.
+-57: End of file seen and there were open elements.
+-46: Unclosed element “foreignObject”.
+-31: Unclosed element “svg”.
+-22: Unclosed element “table”.
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <table>
+-|       <tbody>
+-|         <tr>
+-|           <td>
+-|             <svg svg>
+-|               <svg foreignObject>
+-|                 <p>
+-|                   <i>
+-|                 <i>
+-|                   "a"
+-
+-#data
+-<!DOCTYPE html><math><mtext><p><i></p>a
+-#errors
+-38: End tag “p” seen, but there were open elements.
+-34: Unclosed element “i”.
+-39: End of file in a foreign namespace context.
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <math math>
+-|       <math mtext>
+-|         <p>
+-|           <i>
+-|         <i>
+-|           "a"
+-
+-#data
+-<!DOCTYPE html><table><tr><td><math><mtext><p><i></p>a
+-#errors
+-53: End tag “p” seen, but there were open elements.
+-49: Unclosed element “i”.
+-54: End of file in a foreign namespace context.
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <table>
+-|       <tbody>
+-|         <tr>
+-|           <td>
+-|             <math math>
+-|               <math mtext>
+-|                 <p>
+-|                   <i>
+-|                 <i>
+-|                   "a"
+-
+-#data
+-<!DOCTYPE html><body><div><!/div>a
+-#errors
+-29: Bogus comment.
+-34: End of file seen and there were open elements.
+-26: Unclosed element “div”.
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <div>
+-|       <!-- /div -->
+-|       "a"
+diff --git a/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests3.dat b/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests3.dat
+deleted file mode 100644
+index 38dc501..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests3.dat
++++ /dev/null
+@@ -1,305 +0,0 @@
+-#data
+-<head></head><style></style>
+-#errors
+-Line: 1 Col: 6 Unexpected start tag (head). Expected DOCTYPE.
+-Line: 1 Col: 20 Unexpected start tag (style) that can be in head. Moved.
+-#document
+-| <html>
+-|   <head>
+-|     <style>
+-|   <body>
+-
+-#data
+-<head></head><script></script>
+-#errors
+-Line: 1 Col: 6 Unexpected start tag (head). Expected DOCTYPE.
+-Line: 1 Col: 21 Unexpected start tag (script) that can be in head. Moved.
+-#document
+-| <html>
+-|   <head>
+-|     <script>
+-|   <body>
+-
+-#data
+-<head></head><!-- --><style></style><!-- --><script></script>
+-#errors
+-Line: 1 Col: 6 Unexpected start tag (head). Expected DOCTYPE.
+-Line: 1 Col: 28 Unexpected start tag (style) that can be in head. Moved.
+-#document
+-| <html>
+-|   <head>
+-|     <style>
+-|     <script>
+-|   <!--   -->
+-|   <!--   -->
+-|   <body>
+-
+-#data
+-<head></head><!-- -->x<style></style><!-- --><script></script>
+-#errors
+-Line: 1 Col: 6 Unexpected start tag (head). Expected DOCTYPE.
+-#document
+-| <html>
+-|   <head>
+-|   <!--   -->
+-|   <body>
+-|     "x"
+-|     <style>
+-|     <!--   -->
+-|     <script>
+-
+-#data
+-<!DOCTYPE html><html><head></head><body><pre>
+-</pre></body></html>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <pre>
+-
+-#data
+-<!DOCTYPE html><html><head></head><body><pre>
+-foo</pre></body></html>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <pre>
+-|       "foo"
+-
+-#data
+-<!DOCTYPE html><html><head></head><body><pre>
+-
+-foo</pre></body></html>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <pre>
+-|       "
+-foo"
+-
+-#data
+-<!DOCTYPE html><html><head></head><body><pre>
+-foo
+-</pre></body></html>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <pre>
+-|       "foo
+-"
+-
+-#data
+-<!DOCTYPE html><html><head></head><body><pre>x</pre><span>
+-</span></body></html>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <pre>
+-|       "x"
+-|     <span>
+-|       "
+-"
+-
+-#data
+-<!DOCTYPE html><html><head></head><body><pre>x
+-y</pre></body></html>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <pre>
+-|       "x
+-y"
+-
+-#data
+-<!DOCTYPE html><html><head></head><body><pre>x<div>
+-y</pre></body></html>
+-#errors
+-Line: 2 Col: 7 End tag (pre) seen too early. Expected other end tag.
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <pre>
+-|       "x"
+-|       <div>
+-|         "
+-y"
+-
+-#data
+-<!DOCTYPE html><pre>&#x0a;&#x0a;A</pre>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <pre>
+-|       "
+-A"
+-
+-#data
+-<!DOCTYPE html><HTML><META><HEAD></HEAD></HTML>
+-#errors
+-Line: 1 Col: 33 Unexpected start tag head in existing head. Ignored.
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|     <meta>
+-|   <body>
+-
+-#data
+-<!DOCTYPE html><HTML><HEAD><head></HEAD></HTML>
+-#errors
+-Line: 1 Col: 33 Unexpected start tag head in existing head. Ignored.
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-
+-#data
+-<textarea>foo<span>bar</span><i>baz
+-#errors
+-Line: 1 Col: 10 Unexpected start tag (textarea). Expected DOCTYPE.
+-Line: 1 Col: 35 Expected closing tag. Unexpected end of file.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <textarea>
+-|       "foo<span>bar</span><i>baz"
+-
+-#data
+-<title>foo<span>bar</em><i>baz
+-#errors
+-Line: 1 Col: 7 Unexpected start tag (title). Expected DOCTYPE.
+-Line: 1 Col: 30 Unexpected end of file. Expected end tag (title).
+-#document
+-| <html>
+-|   <head>
+-|     <title>
+-|       "foo<span>bar</em><i>baz"
+-|   <body>
+-
+-#data
+-<!DOCTYPE html><textarea>
+-</textarea>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <textarea>
+-
+-#data
+-<!DOCTYPE html><textarea>
+-foo</textarea>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <textarea>
+-|       "foo"
+-
+-#data
+-<!DOCTYPE html><textarea>
+-
+-foo</textarea>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <textarea>
+-|       "
+-foo"
+-
+-#data
+-<!DOCTYPE html><html><head></head><body><ul><li><div><p><li></ul></body></html>
+-#errors
+-Line: 1 Col: 60 Missing end tag (div, li).
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <ul>
+-|       <li>
+-|         <div>
+-|           <p>
+-|       <li>
+-
+-#data
+-<!doctype html><nobr><nobr><nobr>
+-#errors
+-Line: 1 Col: 27 Unexpected start tag (nobr) implies end tag (nobr).
+-Line: 1 Col: 33 Unexpected start tag (nobr) implies end tag (nobr).
+-Line: 1 Col: 33 Expected closing tag. Unexpected end of file.
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <nobr>
+-|     <nobr>
+-|     <nobr>
+-
+-#data
+-<!doctype html><nobr><nobr></nobr><nobr>
+-#errors
+-Line: 1 Col: 27 Unexpected start tag (nobr) implies end tag (nobr).
+-Line: 1 Col: 40 Expected closing tag. Unexpected end of file.
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <nobr>
+-|     <nobr>
+-|     <nobr>
+-
+-#data
+-<!doctype html><html><body><p><table></table></body></html>
+-#errors
+-Not known
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <p>
+-|     <table>
+-
+-#data
+-<p><table></table>
+-#errors
+-Not known
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <p>
+-|       <table>
+diff --git a/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests4.dat b/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests4.dat
+deleted file mode 100644
+index 3c50632..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests4.dat
++++ /dev/null
+@@ -1,59 +0,0 @@
+-#data
+-direct div content
+-#errors
+-#document-fragment
+-div
+-#document
+-| "direct div content"
+-
+-#data
+-direct textarea content
+-#errors
+-#document-fragment
+-textarea
+-#document
+-| "direct textarea content"
+-
+-#data
+-textarea content with <em>pseudo</em> <foo>markup
+-#errors
+-#document-fragment
+-textarea
+-#document
+-| "textarea content with <em>pseudo</em> <foo>markup"
+-
+-#data
+-this is &#x0043;DATA inside a <style> element
+-#errors
+-#document-fragment
+-style
+-#document
+-| "this is &#x0043;DATA inside a <style> element"
+-
+-#data
+-</plaintext>
+-#errors
+-#document-fragment
+-plaintext
+-#document
+-| "</plaintext>"
+-
+-#data
+-setting html's innerHTML
+-#errors
+-Line: 1 Col: 24 Unexpected EOF in inner html mode.
+-#document-fragment
+-html
+-#document
+-| <head>
+-| <body>
+-|   "setting html's innerHTML"
+-
+-#data
+-<title>setting head's innerHTML</title>
+-#errors
+-#document-fragment
+-head
+-#document
+-| <title>
+-|   "setting head's innerHTML"
+diff --git a/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests5.dat b/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests5.dat
+deleted file mode 100644
+index d7b5128..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests5.dat
++++ /dev/null
+@@ -1,191 +0,0 @@
+-#data
+-<style> <!-- </style>x
+-#errors
+-Line: 1 Col: 7 Unexpected start tag (style). Expected DOCTYPE.
+-Line: 1 Col: 22 Unexpected end of file. Expected end tag (style).
+-#document
+-| <html>
+-|   <head>
+-|     <style>
+-|       " <!-- "
+-|   <body>
+-|     "x"
+-
+-#data
+-<style> <!-- </style> --> </style>x
+-#errors
+-Line: 1 Col: 7 Unexpected start tag (style). Expected DOCTYPE.
+-#document
+-| <html>
+-|   <head>
+-|     <style>
+-|       " <!-- "
+-|     " "
+-|   <body>
+-|     "--> x"
+-
+-#data
+-<style> <!--> </style>x
+-#errors
+-Line: 1 Col: 7 Unexpected start tag (style). Expected DOCTYPE.
+-#document
+-| <html>
+-|   <head>
+-|     <style>
+-|       " <!--> "
+-|   <body>
+-|     "x"
+-
+-#data
+-<style> <!---> </style>x
+-#errors
+-Line: 1 Col: 7 Unexpected start tag (style). Expected DOCTYPE.
+-#document
+-| <html>
+-|   <head>
+-|     <style>
+-|       " <!---> "
+-|   <body>
+-|     "x"
+-
+-#data
+-<iframe> <!---> </iframe>x
+-#errors
+-Line: 1 Col: 8 Unexpected start tag (iframe). Expected DOCTYPE.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <iframe>
+-|       " <!---> "
+-|     "x"
+-
+-#data
+-<iframe> <!--- </iframe>->x</iframe> --> </iframe>x
+-#errors
+-Line: 1 Col: 8 Unexpected start tag (iframe). Expected DOCTYPE.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <iframe>
+-|       " <!--- "
+-|     "->x --> x"
+-
+-#data
+-<script> <!-- </script> --> </script>x
+-#errors
+-Line: 1 Col: 8 Unexpected start tag (script). Expected DOCTYPE.
+-#document
+-| <html>
+-|   <head>
+-|     <script>
+-|       " <!-- "
+-|     " "
+-|   <body>
+-|     "--> x"
+-
+-#data
+-<title> <!-- </title> --> </title>x
+-#errors
+-Line: 1 Col: 7 Unexpected start tag (title). Expected DOCTYPE.
+-#document
+-| <html>
+-|   <head>
+-|     <title>
+-|       " <!-- "
+-|     " "
+-|   <body>
+-|     "--> x"
+-
+-#data
+-<textarea> <!--- </textarea>->x</textarea> --> </textarea>x
+-#errors
+-Line: 1 Col: 10 Unexpected start tag (textarea). Expected DOCTYPE.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <textarea>
+-|       " <!--- "
+-|     "->x --> x"
+-
+-#data
+-<style> <!</-- </style>x
+-#errors
+-Line: 1 Col: 7 Unexpected start tag (style). Expected DOCTYPE.
+-#document
+-| <html>
+-|   <head>
+-|     <style>
+-|       " <!</-- "
+-|   <body>
+-|     "x"
+-
+-#data
+-<p><xmp></xmp>
+-#errors
+-XXX: Unknown
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <p>
+-|     <xmp>
+-
+-#data
+-<xmp> <!-- > --> </xmp>
+-#errors
+-Line: 1 Col: 5 Unexpected start tag (xmp). Expected DOCTYPE.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <xmp>
+-|       " <!-- > --> "
+-
+-#data
+-<title>&amp;</title>
+-#errors
+-Line: 1 Col: 7 Unexpected start tag (title). Expected DOCTYPE.
+-#document
+-| <html>
+-|   <head>
+-|     <title>
+-|       "&"
+-|   <body>
+-
+-#data
+-<title><!--&amp;--></title>
+-#errors
+-Line: 1 Col: 7 Unexpected start tag (title). Expected DOCTYPE.
+-#document
+-| <html>
+-|   <head>
+-|     <title>
+-|       "<!--&-->"
+-|   <body>
+-
+-#data
+-<title><!--</title>
+-#errors
+-Line: 1 Col: 7 Unexpected start tag (title). Expected DOCTYPE.
+-Line: 1 Col: 19 Unexpected end of file. Expected end tag (title).
+-#document
+-| <html>
+-|   <head>
+-|     <title>
+-|       "<!--"
+-|   <body>
+-
+-#data
+-<noscript><!--</noscript>--></noscript>
+-#errors
+-Line: 1 Col: 10 Unexpected start tag (noscript). Expected DOCTYPE.
+-#document
+-| <html>
+-|   <head>
+-|     <noscript>
+-|       "<!--"
+-|   <body>
+-|     "-->"
+diff --git a/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests6.dat b/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests6.dat
+deleted file mode 100644
+index f28ece4..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests6.dat
++++ /dev/null
+@@ -1,663 +0,0 @@
+-#data
+-<!doctype html></head> <head>
+-#errors
+-Line: 1 Col: 29 Unexpected start tag head. Ignored.
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   " "
+-|   <body>
+-
+-#data
+-<!doctype html><form><div></form><div>
+-#errors
+-33: End tag "form" seen but there were unclosed elements.
+-38: End of file seen and there were open elements.
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <form>
+-|       <div>
+-|         <div>
+-
+-#data
+-<!doctype html><title>&amp;</title>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|     <title>
+-|       "&"
+-|   <body>
+-
+-#data
+-<!doctype html><title><!--&amp;--></title>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|     <title>
+-|       "<!--&-->"
+-|   <body>
+-
+-#data
+-<!doctype>
+-#errors
+-Line: 1 Col: 9 No space after literal string 'DOCTYPE'.
+-Line: 1 Col: 10 Unexpected > character. Expected DOCTYPE name.
+-Line: 1 Col: 10 Erroneous DOCTYPE.
+-#document
+-| <!DOCTYPE >
+-| <html>
+-|   <head>
+-|   <body>
+-
+-#data
+-<!---x
+-#errors
+-Line: 1 Col: 6 Unexpected end of file in comment.
+-Line: 1 Col: 6 Unexpected End of file. Expected DOCTYPE.
+-#document
+-| <!-- -x -->
+-| <html>
+-|   <head>
+-|   <body>
+-
+-#data
+-<body>
+-<div>
+-#errors
+-Line: 1 Col: 6 Unexpected start tag (body).
+-Line: 2 Col: 5 Expected closing tag. Unexpected end of file.
+-#document-fragment
+-div
+-#document
+-| "
+-"
+-| <div>
+-
+-#data
+-<frameset></frameset>
+-foo
+-#errors
+-Line: 1 Col: 10 Unexpected start tag (frameset). Expected DOCTYPE.
+-Line: 2 Col: 3 Unexpected non-space characters in the after frameset phase. Ignored.
+-#document
+-| <html>
+-|   <head>
+-|   <frameset>
+-|   "
+-"
+-
+-#data
+-<frameset></frameset>
+-<noframes>
+-#errors
+-Line: 1 Col: 10 Unexpected start tag (frameset). Expected DOCTYPE.
+-Line: 2 Col: 10 Expected closing tag. Unexpected end of file.
+-#document
+-| <html>
+-|   <head>
+-|   <frameset>
+-|   "
+-"
+-|   <noframes>
+-
+-#data
+-<frameset></frameset>
+-<div>
+-#errors
+-Line: 1 Col: 10 Unexpected start tag (frameset). Expected DOCTYPE.
+-Line: 2 Col: 5 Unexpected start tag (div) in the after frameset phase. Ignored.
+-#document
+-| <html>
+-|   <head>
+-|   <frameset>
+-|   "
+-"
+-
+-#data
+-<frameset></frameset>
+-</html>
+-#errors
+-Line: 1 Col: 10 Unexpected start tag (frameset). Expected DOCTYPE.
+-#document
+-| <html>
+-|   <head>
+-|   <frameset>
+-|   "
+-"
+-
+-#data
+-<frameset></frameset>
+-</div>
+-#errors
+-Line: 1 Col: 10 Unexpected start tag (frameset). Expected DOCTYPE.
+-Line: 2 Col: 6 Unexpected end tag (div) in the after frameset phase. Ignored.
+-#document
+-| <html>
+-|   <head>
+-|   <frameset>
+-|   "
+-"
+-
+-#data
+-<form><form>
+-#errors
+-Line: 1 Col: 6 Unexpected start tag (form). Expected DOCTYPE.
+-Line: 1 Col: 12 Unexpected start tag (form).
+-Line: 1 Col: 12 Expected closing tag. Unexpected end of file.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <form>
+-
+-#data
+-<button><button>
+-#errors
+-Line: 1 Col: 8 Unexpected start tag (button). Expected DOCTYPE.
+-Line: 1 Col: 16 Unexpected start tag (button) implies end tag (button).
+-Line: 1 Col: 16 Expected closing tag. Unexpected end of file.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <button>
+-|     <button>
+-
+-#data
+-<table><tr><td></th>
+-#errors
+-Line: 1 Col: 7 Unexpected start tag (table). Expected DOCTYPE.
+-Line: 1 Col: 20 Unexpected end tag (th). Ignored.
+-Line: 1 Col: 20 Expected closing tag. Unexpected end of file.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <table>
+-|       <tbody>
+-|         <tr>
+-|           <td>
+-
+-#data
+-<table><caption><td>
+-#errors
+-Line: 1 Col: 7 Unexpected start tag (table). Expected DOCTYPE.
+-Line: 1 Col: 20 Unexpected end tag (td). Ignored.
+-Line: 1 Col: 20 Unexpected table cell start tag (td) in the table body phase.
+-Line: 1 Col: 20 Expected closing tag. Unexpected end of file.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <table>
+-|       <caption>
+-|       <tbody>
+-|         <tr>
+-|           <td>
+-
+-#data
+-<table><caption><div>
+-#errors
+-Line: 1 Col: 7 Unexpected start tag (table). Expected DOCTYPE.
+-Line: 1 Col: 21 Expected closing tag. Unexpected end of file.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <table>
+-|       <caption>
+-|         <div>
+-
+-#data
+-</caption><div>
+-#errors
+-Line: 1 Col: 10 Unexpected end tag (caption). Ignored.
+-Line: 1 Col: 15 Expected closing tag. Unexpected end of file.
+-#document-fragment
+-caption
+-#document
+-| <div>
+-
+-#data
+-<table><caption><div></caption>
+-#errors
+-Line: 1 Col: 7 Unexpected start tag (table). Expected DOCTYPE.
+-Line: 1 Col: 31 Unexpected end tag (caption). Missing end tag (div).
+-Line: 1 Col: 31 Unexpected end of file. Expected table content.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <table>
+-|       <caption>
+-|         <div>
+-
+-#data
+-<table><caption></table>
+-#errors
+-Line: 1 Col: 7 Unexpected start tag (table). Expected DOCTYPE.
+-Line: 1 Col: 24 Unexpected end table tag in caption. Generates implied end caption.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <table>
+-|       <caption>
+-
+-#data
+-</table><div>
+-#errors
+-Line: 1 Col: 8 Unexpected end table tag in caption. Generates implied end caption.
+-Line: 1 Col: 8 Unexpected end tag (caption). Ignored.
+-Line: 1 Col: 13 Expected closing tag. Unexpected end of file.
+-#document-fragment
+-caption
+-#document
+-| <div>
+-
+-#data
+-<table><caption></body></col></colgroup></html></tbody></td></tfoot></th></thead></tr>
+-#errors
+-Line: 1 Col: 7 Unexpected start tag (table). Expected DOCTYPE.
+-Line: 1 Col: 23 Unexpected end tag (body). Ignored.
+-Line: 1 Col: 29 Unexpected end tag (col). Ignored.
+-Line: 1 Col: 40 Unexpected end tag (colgroup). Ignored.
+-Line: 1 Col: 47 Unexpected end tag (html). Ignored.
+-Line: 1 Col: 55 Unexpected end tag (tbody). Ignored.
+-Line: 1 Col: 60 Unexpected end tag (td). Ignored.
+-Line: 1 Col: 68 Unexpected end tag (tfoot). Ignored.
+-Line: 1 Col: 73 Unexpected end tag (th). Ignored.
+-Line: 1 Col: 81 Unexpected end tag (thead). Ignored.
+-Line: 1 Col: 86 Unexpected end tag (tr). Ignored.
+-Line: 1 Col: 86 Expected closing tag. Unexpected end of file.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <table>
+-|       <caption>
+-
+-#data
+-<table><caption><div></div>
+-#errors
+-Line: 1 Col: 7 Unexpected start tag (table). Expected DOCTYPE.
+-Line: 1 Col: 27 Expected closing tag. Unexpected end of file.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <table>
+-|       <caption>
+-|         <div>
+-
+-#data
+-<table><tr><td></body></caption></col></colgroup></html>
+-#errors
+-Line: 1 Col: 7 Unexpected start tag (table). Expected DOCTYPE.
+-Line: 1 Col: 22 Unexpected end tag (body). Ignored.
+-Line: 1 Col: 32 Unexpected end tag (caption). Ignored.
+-Line: 1 Col: 38 Unexpected end tag (col). Ignored.
+-Line: 1 Col: 49 Unexpected end tag (colgroup). Ignored.
+-Line: 1 Col: 56 Unexpected end tag (html). Ignored.
+-Line: 1 Col: 56 Expected closing tag. Unexpected end of file.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <table>
+-|       <tbody>
+-|         <tr>
+-|           <td>
+-
+-#data
+-</table></tbody></tfoot></thead></tr><div>
+-#errors
+-Line: 1 Col: 8 Unexpected end tag (table). Ignored.
+-Line: 1 Col: 16 Unexpected end tag (tbody). Ignored.
+-Line: 1 Col: 24 Unexpected end tag (tfoot). Ignored.
+-Line: 1 Col: 32 Unexpected end tag (thead). Ignored.
+-Line: 1 Col: 37 Unexpected end tag (tr). Ignored.
+-Line: 1 Col: 42 Expected closing tag. Unexpected end of file.
+-#document-fragment
+-td
+-#document
+-| <div>
+-
+-#data
+-<table><colgroup>foo
+-#errors
+-Line: 1 Col: 7 Unexpected start tag (table). Expected DOCTYPE.
+-Line: 1 Col: 20 Unexpected non-space characters in table context caused voodoo mode.
+-Line: 1 Col: 20 Unexpected end of file. Expected table content.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "foo"
+-|     <table>
+-|       <colgroup>
+-
+-#data
+-foo<col>
+-#errors
+-Line: 1 Col: 3 Unexpected end tag (colgroup). Ignored.
+-#document-fragment
+-colgroup
+-#document
+-| <col>
+-
+-#data
+-<table><colgroup></col>
+-#errors
+-Line: 1 Col: 7 Unexpected start tag (table). Expected DOCTYPE.
+-Line: 1 Col: 23 This element (col) has no end tag.
+-Line: 1 Col: 23 Expected closing tag. Unexpected end of file.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <table>
+-|       <colgroup>
+-
+-#data
+-<frameset><div>
+-#errors
+-Line: 1 Col: 10 Unexpected start tag (frameset). Expected DOCTYPE.
+-Line: 1 Col: 15 Unexpected start tag token (div) in the frameset phase. Ignored.
+-Line: 1 Col: 15 Expected closing tag. Unexpected end of file.
+-#document
+-| <html>
+-|   <head>
+-|   <frameset>
+-
+-#data
+-</frameset><frame>
+-#errors
+-Line: 1 Col: 11 Unexpected end tag token (frameset) in the frameset phase (innerHTML).
+-#document-fragment
+-frameset
+-#document
+-| <frame>
+-
+-#data
+-<frameset></div>
+-#errors
+-Line: 1 Col: 10 Unexpected start tag (frameset). Expected DOCTYPE.
+-Line: 1 Col: 16 Unexpected end tag token (div) in the frameset phase. Ignored.
+-Line: 1 Col: 16 Expected closing tag. Unexpected end of file.
+-#document
+-| <html>
+-|   <head>
+-|   <frameset>
+-
+-#data
+-</body><div>
+-#errors
+-Line: 1 Col: 7 Unexpected end tag (body). Ignored.
+-Line: 1 Col: 12 Expected closing tag. Unexpected end of file.
+-#document-fragment
+-body
+-#document
+-| <div>
+-
+-#data
+-<table><tr><div>
+-#errors
+-Line: 1 Col: 7 Unexpected start tag (table). Expected DOCTYPE.
+-Line: 1 Col: 16 Unexpected start tag (div) in table context caused voodoo mode.
+-Line: 1 Col: 16 Unexpected end of file. Expected table content.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <div>
+-|     <table>
+-|       <tbody>
+-|         <tr>
+-
+-#data
+-</tr><td>
+-#errors
+-Line: 1 Col: 5 Unexpected end tag (tr). Ignored.
+-#document-fragment
+-tr
+-#document
+-| <td>
+-
+-#data
+-</tbody></tfoot></thead><td>
+-#errors
+-Line: 1 Col: 8 Unexpected end tag (tbody). Ignored.
+-Line: 1 Col: 16 Unexpected end tag (tfoot). Ignored.
+-Line: 1 Col: 24 Unexpected end tag (thead). Ignored.
+-#document-fragment
+-tr
+-#document
+-| <td>
+-
+-#data
+-<table><tr><div><td>
+-#errors
+-Line: 1 Col: 7 Unexpected start tag (table). Expected DOCTYPE.
+-Line: 1 Col: 16 Unexpected start tag (div) in table context caused voodoo mode.
+-Line: 1 Col: 20 Unexpected implied end tag (div) in the table row phase.
+-Line: 1 Col: 20 Expected closing tag. Unexpected end of file.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <div>
+-|     <table>
+-|       <tbody>
+-|         <tr>
+-|           <td>
+-
+-#data
+-<caption><col><colgroup><tbody><tfoot><thead><tr>
+-#errors
+-Line: 1 Col: 9 Unexpected start tag (caption).
+-Line: 1 Col: 14 Unexpected start tag (col).
+-Line: 1 Col: 24 Unexpected start tag (colgroup).
+-Line: 1 Col: 31 Unexpected start tag (tbody).
+-Line: 1 Col: 38 Unexpected start tag (tfoot).
+-Line: 1 Col: 45 Unexpected start tag (thead).
+-Line: 1 Col: 49 Unexpected end of file. Expected table content.
+-#document-fragment
+-tbody
+-#document
+-| <tr>
+-
+-#data
+-<table><tbody></thead>
+-#errors
+-Line: 1 Col: 7 Unexpected start tag (table). Expected DOCTYPE.
+-Line: 1 Col: 22 Unexpected end tag (thead) in the table body phase. Ignored.
+-Line: 1 Col: 22 Unexpected end of file. Expected table content.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <table>
+-|       <tbody>
+-
+-#data
+-</table><tr>
+-#errors
+-Line: 1 Col: 8 Unexpected end tag (table). Ignored.
+-Line: 1 Col: 12 Unexpected end of file. Expected table content.
+-#document-fragment
+-tbody
+-#document
+-| <tr>
+-
+-#data
+-<table><tbody></body></caption></col></colgroup></html></td></th></tr>
+-#errors
+-Line: 1 Col: 7 Unexpected start tag (table). Expected DOCTYPE.
+-Line: 1 Col: 21 Unexpected end tag (body) in the table body phase. Ignored.
+-Line: 1 Col: 31 Unexpected end tag (caption) in the table body phase. Ignored.
+-Line: 1 Col: 37 Unexpected end tag (col) in the table body phase. Ignored.
+-Line: 1 Col: 48 Unexpected end tag (colgroup) in the table body phase. Ignored.
+-Line: 1 Col: 55 Unexpected end tag (html) in the table body phase. Ignored.
+-Line: 1 Col: 60 Unexpected end tag (td) in the table body phase. Ignored.
+-Line: 1 Col: 65 Unexpected end tag (th) in the table body phase. Ignored.
+-Line: 1 Col: 70 Unexpected end tag (tr) in the table body phase. Ignored.
+-Line: 1 Col: 70 Unexpected end of file. Expected table content.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <table>
+-|       <tbody>
+-
+-#data
+-<table><tbody></div>
+-#errors
+-Line: 1 Col: 7 Unexpected start tag (table). Expected DOCTYPE.
+-Line: 1 Col: 20 Unexpected end tag (div) in table context caused voodoo mode.
+-Line: 1 Col: 20 End tag (div) seen too early. Expected other end tag.
+-Line: 1 Col: 20 Unexpected end of file. Expected table content.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <table>
+-|       <tbody>
+-
+-#data
+-<table><table>
+-#errors
+-Line: 1 Col: 7 Unexpected start tag (table). Expected DOCTYPE.
+-Line: 1 Col: 14 Unexpected start tag (table) implies end tag (table).
+-Line: 1 Col: 14 Unexpected end of file. Expected table content.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <table>
+-|     <table>
+-
+-#data
+-<table></body></caption></col></colgroup></html></tbody></td></tfoot></th></thead></tr>
+-#errors
+-Line: 1 Col: 7 Unexpected start tag (table). Expected DOCTYPE.
+-Line: 1 Col: 14 Unexpected end tag (body). Ignored.
+-Line: 1 Col: 24 Unexpected end tag (caption). Ignored.
+-Line: 1 Col: 30 Unexpected end tag (col). Ignored.
+-Line: 1 Col: 41 Unexpected end tag (colgroup). Ignored.
+-Line: 1 Col: 48 Unexpected end tag (html). Ignored.
+-Line: 1 Col: 56 Unexpected end tag (tbody). Ignored.
+-Line: 1 Col: 61 Unexpected end tag (td). Ignored.
+-Line: 1 Col: 69 Unexpected end tag (tfoot). Ignored.
+-Line: 1 Col: 74 Unexpected end tag (th). Ignored.
+-Line: 1 Col: 82 Unexpected end tag (thead). Ignored.
+-Line: 1 Col: 87 Unexpected end tag (tr). Ignored.
+-Line: 1 Col: 87 Unexpected end of file. Expected table content.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <table>
+-
+-#data
+-</table><tr>
+-#errors
+-Line: 1 Col: 8 Unexpected end tag (table). Ignored.
+-Line: 1 Col: 12 Unexpected end of file. Expected table content.
+-#document-fragment
+-table
+-#document
+-| <tbody>
+-|   <tr>
+-
+-#data
+-<body></body></html>
+-#errors
+-Line: 1 Col: 20 Unexpected html end tag in inner html mode.
+-Line: 1 Col: 20 Unexpected EOF in inner html mode.
+-#document-fragment
+-html
+-#document
+-| <head>
+-| <body>
+-
+-#data
+-<html><frameset></frameset></html> 
+-#errors
+-Line: 1 Col: 6 Unexpected start tag (html). Expected DOCTYPE.
+-#document
+-| <html>
+-|   <head>
+-|   <frameset>
+-|   " "
+-
+-#data
+-<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01//EN"><html></html>
+-#errors
+-Line: 1 Col: 50 Erroneous DOCTYPE.
+-Line: 1 Col: 63 Unexpected end tag (html) after the (implied) root element.
+-#document
+-| <!DOCTYPE html "-//W3C//DTD HTML 4.01//EN" "">
+-| <html>
+-|   <head>
+-|   <body>
+-
+-#data
+-<param><frameset></frameset>
+-#errors
+-Line: 1 Col: 7 Unexpected start tag (param). Expected DOCTYPE.
+-Line: 1 Col: 17 Unexpected start tag (frameset).
+-#document
+-| <html>
+-|   <head>
+-|   <frameset>
+-
+-#data
+-<source><frameset></frameset>
+-#errors
+-Line: 1 Col: 7 Unexpected start tag (source). Expected DOCTYPE.
+-Line: 1 Col: 17 Unexpected start tag (frameset).
+-#document
+-| <html>
+-|   <head>
+-|   <frameset>
+-
+-#data
+-<track><frameset></frameset>
+-#errors
+-Line: 1 Col: 7 Unexpected start tag (track). Expected DOCTYPE.
+-Line: 1 Col: 17 Unexpected start tag (frameset).
+-#document
+-| <html>
+-|   <head>
+-|   <frameset>
+-
+-#data
+-</html><frameset></frameset>
+-#errors
+-7: End tag seen without seeing a doctype first. Expected “<!DOCTYPE html>”.
+-17: Stray “frameset” start tag.
+-17: “frameset” start tag seen.
+-#document
+-| <html>
+-|   <head>
+-|   <frameset>
+-
+-#data
+-</body><frameset></frameset>
+-#errors
+-7: End tag seen without seeing a doctype first. Expected “<!DOCTYPE html>”.
+-17: Stray “frameset” start tag.
+-17: “frameset” start tag seen.
+-#document
+-| <html>
+-|   <head>
+-|   <frameset>
+diff --git a/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests7.dat b/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests7.dat
+deleted file mode 100644
+index f5193c6..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests7.dat
++++ /dev/null
+@@ -1,390 +0,0 @@
+-#data
+-<!doctype html><body><title>X</title>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <title>
+-|       "X"
+-
+-#data
+-<!doctype html><table><title>X</title></table>
+-#errors
+-Line: 1 Col: 29 Unexpected start tag (title) in table context caused voodoo mode.
+-Line: 1 Col: 38 Unexpected end tag (title) in table context caused voodoo mode.
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <title>
+-|       "X"
+-|     <table>
+-
+-#data
+-<!doctype html><head></head><title>X</title>
+-#errors
+-Line: 1 Col: 35 Unexpected start tag (title) that can be in head. Moved.
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|     <title>
+-|       "X"
+-|   <body>
+-
+-#data
+-<!doctype html></head><title>X</title>
+-#errors
+-Line: 1 Col: 29 Unexpected start tag (title) that can be in head. Moved.
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|     <title>
+-|       "X"
+-|   <body>
+-
+-#data
+-<!doctype html><table><meta></table>
+-#errors
+-Line: 1 Col: 28 Unexpected start tag (meta) in table context caused voodoo mode.
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <meta>
+-|     <table>
+-
+-#data
+-<!doctype html><table>X<tr><td><table> <meta></table></table>
+-#errors
+-Line: 1 Col: 23 Unexpected non-space characters in table context caused voodoo mode.
+-Line: 1 Col: 45 Unexpected start tag (meta) in table context caused voodoo mode.
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     "X"
+-|     <table>
+-|       <tbody>
+-|         <tr>
+-|           <td>
+-|             <meta>
+-|             <table>
+-|               " "
+-
+-#data
+-<!doctype html><html> <head>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-
+-#data
+-<!doctype html> <head>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-
+-#data
+-<!doctype html><table><style> <tr>x </style> </table>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <table>
+-|       <style>
+-|         " <tr>x "
+-|       " "
+-
+-#data
+-<!doctype html><table><TBODY><script> <tr>x </script> </table>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <table>
+-|       <tbody>
+-|         <script>
+-|           " <tr>x "
+-|         " "
+-
+-#data
+-<!doctype html><p><applet><p>X</p></applet>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <p>
+-|       <applet>
+-|         <p>
+-|           "X"
+-
+-#data
+-<!doctype html><listing>
+-X</listing>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <listing>
+-|       "X"
+-
+-#data
+-<!doctype html><select><input>X
+-#errors
+-Line: 1 Col: 30 Unexpected input start tag in the select phase.
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <select>
+-|     <input>
+-|     "X"
+-
+-#data
+-<!doctype html><select><select>X
+-#errors
+-Line: 1 Col: 31 Unexpected select start tag in the select phase treated as select end tag.
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <select>
+-|     "X"
+-
+-#data
+-<!doctype html><table><input type=hidDEN></table>
+-#errors
+-Line: 1 Col: 41 Unexpected input with type hidden in table context.
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <table>
+-|       <input>
+-|         type="hidDEN"
+-
+-#data
+-<!doctype html><table>X<input type=hidDEN></table>
+-#errors
+-Line: 1 Col: 23 Unexpected non-space characters in table context caused voodoo mode.
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     "X"
+-|     <table>
+-|       <input>
+-|         type="hidDEN"
+-
+-#data
+-<!doctype html><table>  <input type=hidDEN></table>
+-#errors
+-Line: 1 Col: 43 Unexpected input with type hidden in table context.
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <table>
+-|       "  "
+-|       <input>
+-|         type="hidDEN"
+-
+-#data
+-<!doctype html><table>  <input type='hidDEN'></table>
+-#errors
+-Line: 1 Col: 45 Unexpected input with type hidden in table context.
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <table>
+-|       "  "
+-|       <input>
+-|         type="hidDEN"
+-
+-#data
+-<!doctype html><table><input type=" hidden"><input type=hidDEN></table>
+-#errors
+-Line: 1 Col: 44 Unexpected start tag (input) in table context caused voodoo mode.
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <input>
+-|       type=" hidden"
+-|     <table>
+-|       <input>
+-|         type="hidDEN"
+-
+-#data
+-<!doctype html><table><select>X<tr>
+-#errors
+-Line: 1 Col: 30 Unexpected start tag (select) in table context caused voodoo mode.
+-Line: 1 Col: 35 Unexpected table element start tag (trs) in the select in table phase.
+-Line: 1 Col: 35 Unexpected end of file. Expected table content.
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <select>
+-|       "X"
+-|     <table>
+-|       <tbody>
+-|         <tr>
+-
+-#data
+-<!doctype html><select>X</select>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <select>
+-|       "X"
+-
+-#data
+-<!DOCTYPE hTmL><html></html>
+-#errors
+-Line: 1 Col: 28 Unexpected end tag (html) after the (implied) root element.
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-
+-#data
+-<!DOCTYPE HTML><html></html>
+-#errors
+-Line: 1 Col: 28 Unexpected end tag (html) after the (implied) root element.
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-
+-#data
+-<body>X</body></body>
+-#errors
+-Line: 1 Col: 21 Unexpected end tag token (body) in the after body phase.
+-Line: 1 Col: 21 Unexpected EOF in inner html mode.
+-#document-fragment
+-html
+-#document
+-| <head>
+-| <body>
+-|   "X"
+-
+-#data
+-<div><p>a</x> b
+-#errors
+-Line: 1 Col: 5 Unexpected start tag (div). Expected DOCTYPE.
+-Line: 1 Col: 13 Unexpected end tag (x). Ignored.
+-Line: 1 Col: 15 Expected closing tag. Unexpected end of file.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <div>
+-|       <p>
+-|         "a b"
+-
+-#data
+-<table><tr><td><code></code> </table>
+-#errors
+-Line: 1 Col: 7 Unexpected start tag (table). Expected DOCTYPE.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <table>
+-|       <tbody>
+-|         <tr>
+-|           <td>
+-|             <code>
+-|             " "
+-
+-#data
+-<table><b><tr><td>aaa</td></tr>bbb</table>ccc
+-#errors
+-XXX: Fix me
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <b>
+-|     <b>
+-|       "bbb"
+-|     <table>
+-|       <tbody>
+-|         <tr>
+-|           <td>
+-|             "aaa"
+-|     <b>
+-|       "ccc"
+-
+-#data
+-A<table><tr> B</tr> B</table>
+-#errors
+-XXX: Fix me
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "A B B"
+-|     <table>
+-|       <tbody>
+-|         <tr>
+-
+-#data
+-A<table><tr> B</tr> </em>C</table>
+-#errors
+-XXX: Fix me
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "A BC"
+-|     <table>
+-|       <tbody>
+-|         <tr>
+-|         " "
+-
+-#data
+-<select><keygen>
+-#errors
+-Not known
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <select>
+-|     <keygen>
+diff --git a/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests8.dat b/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests8.dat
+deleted file mode 100644
+index 90e6c91..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests8.dat
++++ /dev/null
+@@ -1,148 +0,0 @@
+-#data
+-<div>
+-<div></div>
+-</span>x
+-#errors
+-Line: 1 Col: 5 Unexpected start tag (div). Expected DOCTYPE.
+-Line: 3 Col: 7 Unexpected end tag (span). Ignored.
+-Line: 3 Col: 8 Expected closing tag. Unexpected end of file.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <div>
+-|       "
+-"
+-|       <div>
+-|       "
+-x"
+-
+-#data
+-<div>x<div></div>
+-</span>x
+-#errors
+-Line: 1 Col: 5 Unexpected start tag (div). Expected DOCTYPE.
+-Line: 2 Col: 7 Unexpected end tag (span). Ignored.
+-Line: 2 Col: 8 Expected closing tag. Unexpected end of file.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <div>
+-|       "x"
+-|       <div>
+-|       "
+-x"
+-
+-#data
+-<div>x<div></div>x</span>x
+-#errors
+-Line: 1 Col: 5 Unexpected start tag (div). Expected DOCTYPE.
+-Line: 1 Col: 25 Unexpected end tag (span). Ignored.
+-Line: 1 Col: 26 Expected closing tag. Unexpected end of file.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <div>
+-|       "x"
+-|       <div>
+-|       "xx"
+-
+-#data
+-<div>x<div></div>y</span>z
+-#errors
+-Line: 1 Col: 5 Unexpected start tag (div). Expected DOCTYPE.
+-Line: 1 Col: 25 Unexpected end tag (span). Ignored.
+-Line: 1 Col: 26 Expected closing tag. Unexpected end of file.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <div>
+-|       "x"
+-|       <div>
+-|       "yz"
+-
+-#data
+-<table><div>x<div></div>x</span>x
+-#errors
+-Line: 1 Col: 7 Unexpected start tag (table). Expected DOCTYPE.
+-Line: 1 Col: 12 Unexpected start tag (div) in table context caused voodoo mode.
+-Line: 1 Col: 18 Unexpected start tag (div) in table context caused voodoo mode.
+-Line: 1 Col: 24 Unexpected end tag (div) in table context caused voodoo mode.
+-Line: 1 Col: 32 Unexpected end tag (span) in table context caused voodoo mode.
+-Line: 1 Col: 32 Unexpected end tag (span). Ignored.
+-Line: 1 Col: 33 Unexpected end of file. Expected table content.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <div>
+-|       "x"
+-|       <div>
+-|       "xx"
+-|     <table>
+-
+-#data
+-x<table>x
+-#errors
+-Line: 1 Col: 1 Unexpected non-space characters. Expected DOCTYPE.
+-Line: 1 Col: 9 Unexpected non-space characters in table context caused voodoo mode.
+-Line: 1 Col: 9 Unexpected end of file. Expected table content.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "xx"
+-|     <table>
+-
+-#data
+-x<table><table>x
+-#errors
+-Line: 1 Col: 1 Unexpected non-space characters. Expected DOCTYPE.
+-Line: 1 Col: 15 Unexpected start tag (table) implies end tag (table).
+-Line: 1 Col: 16 Unexpected non-space characters in table context caused voodoo mode.
+-Line: 1 Col: 16 Unexpected end of file. Expected table content.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "x"
+-|     <table>
+-|     "x"
+-|     <table>
+-
+-#data
+-<b>a<div></div><div></b>y
+-#errors
+-Line: 1 Col: 3 Unexpected start tag (b). Expected DOCTYPE.
+-Line: 1 Col: 24 End tag (b) violates step 1, paragraph 3 of the adoption agency algorithm.
+-Line: 1 Col: 25 Expected closing tag. Unexpected end of file.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <b>
+-|       "a"
+-|       <div>
+-|     <div>
+-|       <b>
+-|       "y"
+-
+-#data
+-<a><div><p></a>
+-#errors
+-Line: 1 Col: 3 Unexpected start tag (a). Expected DOCTYPE.
+-Line: 1 Col: 15 End tag (a) violates step 1, paragraph 3 of the adoption agency algorithm.
+-Line: 1 Col: 15 End tag (a) violates step 1, paragraph 3 of the adoption agency algorithm.
+-Line: 1 Col: 15 Expected closing tag. Unexpected end of file.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <a>
+-|     <div>
+-|       <a>
+-|       <p>
+-|         <a>
+diff --git a/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests9.dat b/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests9.dat
+deleted file mode 100644
+index 554e27a..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests9.dat
++++ /dev/null
+@@ -1,457 +0,0 @@
+-#data
+-<!DOCTYPE html><math></math>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <math math>
+-
+-#data
+-<!DOCTYPE html><body><math></math>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <math math>
+-
+-#data
+-<!DOCTYPE html><math><mi>
+-#errors
+-25: End of file in a foreign namespace context.
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <math math>
+-|       <math mi>
+-
+-#data
+-<!DOCTYPE html><math><annotation-xml><svg><u>
+-#errors
+-45: HTML start tag “u” in a foreign namespace context.
+-45: End of file seen and there were open elements.
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <math math>
+-|       <math annotation-xml>
+-|         <svg svg>
+-|     <u>
+-
+-#data
+-<!DOCTYPE html><body><select><math></math></select>
+-#errors
+-Line: 1 Col: 35 Unexpected start tag token (math) in the select phase. Ignored.
+-Line: 1 Col: 42 Unexpected end tag (math) in the select phase. Ignored.
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <select>
+-
+-#data
+-<!DOCTYPE html><body><select><option><math></math></option></select>
+-#errors
+-Line: 1 Col: 43 Unexpected start tag token (math) in the select phase. Ignored.
+-Line: 1 Col: 50 Unexpected end tag (math) in the select phase. Ignored.
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <select>
+-|       <option>
+-
+-#data
+-<!DOCTYPE html><body><table><math></math></table>
+-#errors
+-Line: 1 Col: 34 Unexpected start tag (math) in table context caused voodoo mode.
+-Line: 1 Col: 41 Unexpected end tag (math) in table context caused voodoo mode.
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <math math>
+-|     <table>
+-
+-#data
+-<!DOCTYPE html><body><table><math><mi>foo</mi></math></table>
+-#errors
+-Line: 1 Col: 34 Unexpected start tag (math) in table context caused voodoo mode.
+-Line: 1 Col: 46 Unexpected end tag (mi) in table context caused voodoo mode.
+-Line: 1 Col: 53 Unexpected end tag (math) in table context caused voodoo mode.
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <math math>
+-|       <math mi>
+-|         "foo"
+-|     <table>
+-
+-#data
+-<!DOCTYPE html><body><table><math><mi>foo</mi><mi>bar</mi></math></table>
+-#errors
+-Line: 1 Col: 34 Unexpected start tag (math) in table context caused voodoo mode.
+-Line: 1 Col: 46 Unexpected end tag (mi) in table context caused voodoo mode.
+-Line: 1 Col: 58 Unexpected end tag (mi) in table context caused voodoo mode.
+-Line: 1 Col: 65 Unexpected end tag (math) in table context caused voodoo mode.
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <math math>
+-|       <math mi>
+-|         "foo"
+-|       <math mi>
+-|         "bar"
+-|     <table>
+-
+-#data
+-<!DOCTYPE html><body><table><tbody><math><mi>foo</mi><mi>bar</mi></math></tbody></table>
+-#errors
+-Line: 1 Col: 41 Unexpected start tag (math) in table context caused voodoo mode.
+-Line: 1 Col: 53 Unexpected end tag (mi) in table context caused voodoo mode.
+-Line: 1 Col: 65 Unexpected end tag (mi) in table context caused voodoo mode.
+-Line: 1 Col: 72 Unexpected end tag (math) in table context caused voodoo mode.
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <math math>
+-|       <math mi>
+-|         "foo"
+-|       <math mi>
+-|         "bar"
+-|     <table>
+-|       <tbody>
+-
+-#data
+-<!DOCTYPE html><body><table><tbody><tr><math><mi>foo</mi><mi>bar</mi></math></tr></tbody></table>
+-#errors
+-Line: 1 Col: 45 Unexpected start tag (math) in table context caused voodoo mode.
+-Line: 1 Col: 57 Unexpected end tag (mi) in table context caused voodoo mode.
+-Line: 1 Col: 69 Unexpected end tag (mi) in table context caused voodoo mode.
+-Line: 1 Col: 76 Unexpected end tag (math) in table context caused voodoo mode.
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <math math>
+-|       <math mi>
+-|         "foo"
+-|       <math mi>
+-|         "bar"
+-|     <table>
+-|       <tbody>
+-|         <tr>
+-
+-#data
+-<!DOCTYPE html><body><table><tbody><tr><td><math><mi>foo</mi><mi>bar</mi></math></td></tr></tbody></table>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <table>
+-|       <tbody>
+-|         <tr>
+-|           <td>
+-|             <math math>
+-|               <math mi>
+-|                 "foo"
+-|               <math mi>
+-|                 "bar"
+-
+-#data
+-<!DOCTYPE html><body><table><tbody><tr><td><math><mi>foo</mi><mi>bar</mi></math><p>baz</td></tr></tbody></table>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <table>
+-|       <tbody>
+-|         <tr>
+-|           <td>
+-|             <math math>
+-|               <math mi>
+-|                 "foo"
+-|               <math mi>
+-|                 "bar"
+-|             <p>
+-|               "baz"
+-
+-#data
+-<!DOCTYPE html><body><table><caption><math><mi>foo</mi><mi>bar</mi></math><p>baz</caption></table>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <table>
+-|       <caption>
+-|         <math math>
+-|           <math mi>
+-|             "foo"
+-|           <math mi>
+-|             "bar"
+-|         <p>
+-|           "baz"
+-
+-#data
+-<!DOCTYPE html><body><table><caption><math><mi>foo</mi><mi>bar</mi><p>baz</table><p>quux
+-#errors
+-Line: 1 Col: 70 HTML start tag "p" in a foreign namespace context.
+-Line: 1 Col: 81 Unexpected end table tag in caption. Generates implied end caption.
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <table>
+-|       <caption>
+-|         <math math>
+-|           <math mi>
+-|             "foo"
+-|           <math mi>
+-|             "bar"
+-|         <p>
+-|           "baz"
+-|     <p>
+-|       "quux"
+-
+-#data
+-<!DOCTYPE html><body><table><caption><math><mi>foo</mi><mi>bar</mi>baz</table><p>quux
+-#errors
+-Line: 1 Col: 78 Unexpected end table tag in caption. Generates implied end caption.
+-Line: 1 Col: 78 Unexpected end tag (caption). Missing end tag (math).
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <table>
+-|       <caption>
+-|         <math math>
+-|           <math mi>
+-|             "foo"
+-|           <math mi>
+-|             "bar"
+-|           "baz"
+-|     <p>
+-|       "quux"
+-
+-#data
+-<!DOCTYPE html><body><table><colgroup><math><mi>foo</mi><mi>bar</mi><p>baz</table><p>quux
+-#errors
+-Line: 1 Col: 44 Unexpected start tag (math) in table context caused voodoo mode.
+-Line: 1 Col: 56 Unexpected end tag (mi) in table context caused voodoo mode.
+-Line: 1 Col: 68 Unexpected end tag (mi) in table context caused voodoo mode.
+-Line: 1 Col: 71 HTML start tag "p" in a foreign namespace context.
+-Line: 1 Col: 71 Unexpected start tag (p) in table context caused voodoo mode.
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <math math>
+-|       <math mi>
+-|         "foo"
+-|       <math mi>
+-|         "bar"
+-|     <p>
+-|       "baz"
+-|     <table>
+-|       <colgroup>
+-|     <p>
+-|       "quux"
+-
+-#data
+-<!DOCTYPE html><body><table><tr><td><select><math><mi>foo</mi><mi>bar</mi><p>baz</table><p>quux
+-#errors
+-Line: 1 Col: 50 Unexpected start tag token (math) in the select phase. Ignored.
+-Line: 1 Col: 54 Unexpected start tag token (mi) in the select phase. Ignored.
+-Line: 1 Col: 62 Unexpected end tag (mi) in the select phase. Ignored.
+-Line: 1 Col: 66 Unexpected start tag token (mi) in the select phase. Ignored.
+-Line: 1 Col: 74 Unexpected end tag (mi) in the select phase. Ignored.
+-Line: 1 Col: 77 Unexpected start tag token (p) in the select phase. Ignored.
+-Line: 1 Col: 88 Unexpected table element end tag (tables) in the select in table phase.
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <table>
+-|       <tbody>
+-|         <tr>
+-|           <td>
+-|             <select>
+-|               "foobarbaz"
+-|     <p>
+-|       "quux"
+-
+-#data
+-<!DOCTYPE html><body><table><select><math><mi>foo</mi><mi>bar</mi><p>baz</table><p>quux
+-#errors
+-Line: 1 Col: 36 Unexpected start tag (select) in table context caused voodoo mode.
+-Line: 1 Col: 42 Unexpected start tag token (math) in the select phase. Ignored.
+-Line: 1 Col: 46 Unexpected start tag token (mi) in the select phase. Ignored.
+-Line: 1 Col: 54 Unexpected end tag (mi) in the select phase. Ignored.
+-Line: 1 Col: 58 Unexpected start tag token (mi) in the select phase. Ignored.
+-Line: 1 Col: 66 Unexpected end tag (mi) in the select phase. Ignored.
+-Line: 1 Col: 69 Unexpected start tag token (p) in the select phase. Ignored.
+-Line: 1 Col: 80 Unexpected table element end tag (tables) in the select in table phase.
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <select>
+-|       "foobarbaz"
+-|     <table>
+-|     <p>
+-|       "quux"
+-
+-#data
+-<!DOCTYPE html><body></body></html><math><mi>foo</mi><mi>bar</mi><p>baz
+-#errors
+-Line: 1 Col: 41 Unexpected start tag (math).
+-Line: 1 Col: 68 HTML start tag "p" in a foreign namespace context.
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <math math>
+-|       <math mi>
+-|         "foo"
+-|       <math mi>
+-|         "bar"
+-|     <p>
+-|       "baz"
+-
+-#data
+-<!DOCTYPE html><body></body><math><mi>foo</mi><mi>bar</mi><p>baz
+-#errors
+-Line: 1 Col: 34 Unexpected start tag token (math) in the after body phase.
+-Line: 1 Col: 61 HTML start tag "p" in a foreign namespace context.
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <math math>
+-|       <math mi>
+-|         "foo"
+-|       <math mi>
+-|         "bar"
+-|     <p>
+-|       "baz"
+-
+-#data
+-<!DOCTYPE html><frameset><math><mi></mi><mi></mi><p><span>
+-#errors
+-Line: 1 Col: 31 Unexpected start tag token (math) in the frameset phase. Ignored.
+-Line: 1 Col: 35 Unexpected start tag token (mi) in the frameset phase. Ignored.
+-Line: 1 Col: 40 Unexpected end tag token (mi) in the frameset phase. Ignored.
+-Line: 1 Col: 44 Unexpected start tag token (mi) in the frameset phase. Ignored.
+-Line: 1 Col: 49 Unexpected end tag token (mi) in the frameset phase. Ignored.
+-Line: 1 Col: 52 Unexpected start tag token (p) in the frameset phase. Ignored.
+-Line: 1 Col: 58 Unexpected start tag token (span) in the frameset phase. Ignored.
+-Line: 1 Col: 58 Expected closing tag. Unexpected end of file.
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <frameset>
+-
+-#data
+-<!DOCTYPE html><frameset></frameset><math><mi></mi><mi></mi><p><span>
+-#errors
+-Line: 1 Col: 42 Unexpected start tag (math) in the after frameset phase. Ignored.
+-Line: 1 Col: 46 Unexpected start tag (mi) in the after frameset phase. Ignored.
+-Line: 1 Col: 51 Unexpected end tag (mi) in the after frameset phase. Ignored.
+-Line: 1 Col: 55 Unexpected start tag (mi) in the after frameset phase. Ignored.
+-Line: 1 Col: 60 Unexpected end tag (mi) in the after frameset phase. Ignored.
+-Line: 1 Col: 63 Unexpected start tag (p) in the after frameset phase. Ignored.
+-Line: 1 Col: 69 Unexpected start tag (span) in the after frameset phase. Ignored.
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <frameset>
+-
+-#data
+-<!DOCTYPE html><body xlink:href=foo><math xlink:href=foo></math>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     xlink:href="foo"
+-|     <math math>
+-|       xlink href="foo"
+-
+-#data
+-<!DOCTYPE html><body xlink:href=foo xml:lang=en><math><mi xml:lang=en xlink:href=foo></mi></math>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     xlink:href="foo"
+-|     xml:lang="en"
+-|     <math math>
+-|       <math mi>
+-|         xlink href="foo"
+-|         xml lang="en"
+-
+-#data
+-<!DOCTYPE html><body xlink:href=foo xml:lang=en><math><mi xml:lang=en xlink:href=foo /></math>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     xlink:href="foo"
+-|     xml:lang="en"
+-|     <math math>
+-|       <math mi>
+-|         xlink href="foo"
+-|         xml lang="en"
+-
+-#data
+-<!DOCTYPE html><body xlink:href=foo xml:lang=en><math><mi xml:lang=en xlink:href=foo />bar</math>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     xlink:href="foo"
+-|     xml:lang="en"
+-|     <math math>
+-|       <math mi>
+-|         xlink href="foo"
+-|         xml lang="en"
+-|       "bar"
+diff --git a/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests_innerHTML_1.dat b/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests_innerHTML_1.dat
+deleted file mode 100644
+index 6c78661..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tests_innerHTML_1.dat
++++ /dev/null
+@@ -1,741 +0,0 @@
+-#data
+-<body><span>
+-#errors
+-#document-fragment
+-body
+-#document
+-| <span>
+-
+-#data
+-<span><body>
+-#errors
+-#document-fragment
+-body
+-#document
+-| <span>
+-
+-#data
+-<span><body>
+-#errors
+-#document-fragment
+-div
+-#document
+-| <span>
+-
+-#data
+-<body><span>
+-#errors
+-#document-fragment
+-html
+-#document
+-| <head>
+-| <body>
+-|   <span>
+-
+-#data
+-<frameset><span>
+-#errors
+-#document-fragment
+-body
+-#document
+-| <span>
+-
+-#data
+-<span><frameset>
+-#errors
+-#document-fragment
+-body
+-#document
+-| <span>
+-
+-#data
+-<span><frameset>
+-#errors
+-#document-fragment
+-div
+-#document
+-| <span>
+-
+-#data
+-<frameset><span>
+-#errors
+-#document-fragment
+-html
+-#document
+-| <head>
+-| <frameset>
+-
+-#data
+-<table><tr>
+-#errors
+-#document-fragment
+-table
+-#document
+-| <tbody>
+-|   <tr>
+-
+-#data
+-</table><tr>
+-#errors
+-#document-fragment
+-table
+-#document
+-| <tbody>
+-|   <tr>
+-
+-#data
+-<a>
+-#errors
+-#document-fragment
+-table
+-#document
+-| <a>
+-
+-#data
+-<a>
+-#errors
+-#document-fragment
+-table
+-#document
+-| <a>
+-
+-#data
+-<a><caption>a
+-#errors
+-#document-fragment
+-table
+-#document
+-| <a>
+-| <caption>
+-|   "a"
+-
+-#data
+-<a><colgroup><col>
+-#errors
+-#document-fragment
+-table
+-#document
+-| <a>
+-| <colgroup>
+-|   <col>
+-
+-#data
+-<a><tbody><tr>
+-#errors
+-#document-fragment
+-table
+-#document
+-| <a>
+-| <tbody>
+-|   <tr>
+-
+-#data
+-<a><tfoot><tr>
+-#errors
+-#document-fragment
+-table
+-#document
+-| <a>
+-| <tfoot>
+-|   <tr>
+-
+-#data
+-<a><thead><tr>
+-#errors
+-#document-fragment
+-table
+-#document
+-| <a>
+-| <thead>
+-|   <tr>
+-
+-#data
+-<a><tr>
+-#errors
+-#document-fragment
+-table
+-#document
+-| <a>
+-| <tbody>
+-|   <tr>
+-
+-#data
+-<a><th>
+-#errors
+-#document-fragment
+-table
+-#document
+-| <a>
+-| <tbody>
+-|   <tr>
+-|     <th>
+-
+-#data
+-<a><td>
+-#errors
+-#document-fragment
+-table
+-#document
+-| <a>
+-| <tbody>
+-|   <tr>
+-|     <td>
+-
+-#data
+-<table></table><tbody>
+-#errors
+-#document-fragment
+-caption
+-#document
+-| <table>
+-
+-#data
+-</table><span>
+-#errors
+-#document-fragment
+-caption
+-#document
+-| <span>
+-
+-#data
+-<span></table>
+-#errors
+-#document-fragment
+-caption
+-#document
+-| <span>
+-
+-#data
+-</caption><span>
+-#errors
+-#document-fragment
+-caption
+-#document
+-| <span>
+-
+-#data
+-<span></caption><span>
+-#errors
+-#document-fragment
+-caption
+-#document
+-| <span>
+-|   <span>
+-
+-#data
+-<span><caption><span>
+-#errors
+-#document-fragment
+-caption
+-#document
+-| <span>
+-|   <span>
+-
+-#data
+-<span><col><span>
+-#errors
+-#document-fragment
+-caption
+-#document
+-| <span>
+-|   <span>
+-
+-#data
+-<span><colgroup><span>
+-#errors
+-#document-fragment
+-caption
+-#document
+-| <span>
+-|   <span>
+-
+-#data
+-<span><html><span>
+-#errors
+-#document-fragment
+-caption
+-#document
+-| <span>
+-|   <span>
+-
+-#data
+-<span><tbody><span>
+-#errors
+-#document-fragment
+-caption
+-#document
+-| <span>
+-|   <span>
+-
+-#data
+-<span><td><span>
+-#errors
+-#document-fragment
+-caption
+-#document
+-| <span>
+-|   <span>
+-
+-#data
+-<span><tfoot><span>
+-#errors
+-#document-fragment
+-caption
+-#document
+-| <span>
+-|   <span>
+-
+-#data
+-<span><thead><span>
+-#errors
+-#document-fragment
+-caption
+-#document
+-| <span>
+-|   <span>
+-
+-#data
+-<span><th><span>
+-#errors
+-#document-fragment
+-caption
+-#document
+-| <span>
+-|   <span>
+-
+-#data
+-<span><tr><span>
+-#errors
+-#document-fragment
+-caption
+-#document
+-| <span>
+-|   <span>
+-
+-#data
+-<span></table><span>
+-#errors
+-#document-fragment
+-caption
+-#document
+-| <span>
+-|   <span>
+-
+-#data
+-</colgroup><col>
+-#errors
+-#document-fragment
+-colgroup
+-#document
+-| <col>
+-
+-#data
+-<a><col>
+-#errors
+-#document-fragment
+-colgroup
+-#document
+-| <col>
+-
+-#data
+-<caption><a>
+-#errors
+-#document-fragment
+-tbody
+-#document
+-| <a>
+-
+-#data
+-<col><a>
+-#errors
+-#document-fragment
+-tbody
+-#document
+-| <a>
+-
+-#data
+-<colgroup><a>
+-#errors
+-#document-fragment
+-tbody
+-#document
+-| <a>
+-
+-#data
+-<tbody><a>
+-#errors
+-#document-fragment
+-tbody
+-#document
+-| <a>
+-
+-#data
+-<tfoot><a>
+-#errors
+-#document-fragment
+-tbody
+-#document
+-| <a>
+-
+-#data
+-<thead><a>
+-#errors
+-#document-fragment
+-tbody
+-#document
+-| <a>
+-
+-#data
+-</table><a>
+-#errors
+-#document-fragment
+-tbody
+-#document
+-| <a>
+-
+-#data
+-<a><tr>
+-#errors
+-#document-fragment
+-tbody
+-#document
+-| <a>
+-| <tr>
+-
+-#data
+-<a><td>
+-#errors
+-#document-fragment
+-tbody
+-#document
+-| <a>
+-| <tr>
+-|   <td>
+-
+-#data
+-<a><td>
+-#errors
+-#document-fragment
+-tbody
+-#document
+-| <a>
+-| <tr>
+-|   <td>
+-
+-#data
+-<a><td>
+-#errors
+-#document-fragment
+-tbody
+-#document
+-| <a>
+-| <tr>
+-|   <td>
+-
+-#data
+-<td><table><tbody><a><tr>
+-#errors
+-#document-fragment
+-tbody
+-#document
+-| <tr>
+-|   <td>
+-|     <a>
+-|     <table>
+-|       <tbody>
+-|         <tr>
+-
+-#data
+-</tr><td>
+-#errors
+-#document-fragment
+-tr
+-#document
+-| <td>
+-
+-#data
+-<td><table><a><tr></tr><tr>
+-#errors
+-#document-fragment
+-tr
+-#document
+-| <td>
+-|   <a>
+-|   <table>
+-|     <tbody>
+-|       <tr>
+-|       <tr>
+-
+-#data
+-<caption><td>
+-#errors
+-#document-fragment
+-tr
+-#document
+-| <td>
+-
+-#data
+-<col><td>
+-#errors
+-#document-fragment
+-tr
+-#document
+-| <td>
+-
+-#data
+-<colgroup><td>
+-#errors
+-#document-fragment
+-tr
+-#document
+-| <td>
+-
+-#data
+-<tbody><td>
+-#errors
+-#document-fragment
+-tr
+-#document
+-| <td>
+-
+-#data
+-<tfoot><td>
+-#errors
+-#document-fragment
+-tr
+-#document
+-| <td>
+-
+-#data
+-<thead><td>
+-#errors
+-#document-fragment
+-tr
+-#document
+-| <td>
+-
+-#data
+-<tr><td>
+-#errors
+-#document-fragment
+-tr
+-#document
+-| <td>
+-
+-#data
+-</table><td>
+-#errors
+-#document-fragment
+-tr
+-#document
+-| <td>
+-
+-#data
+-<td><table></table><td>
+-#errors
+-#document-fragment
+-tr
+-#document
+-| <td>
+-|   <table>
+-| <td>
+-
+-#data
+-<td><table></table><td>
+-#errors
+-#document-fragment
+-tr
+-#document
+-| <td>
+-|   <table>
+-| <td>
+-
+-#data
+-<caption><a>
+-#errors
+-#document-fragment
+-td
+-#document
+-| <a>
+-
+-#data
+-<col><a>
+-#errors
+-#document-fragment
+-td
+-#document
+-| <a>
+-
+-#data
+-<colgroup><a>
+-#errors
+-#document-fragment
+-td
+-#document
+-| <a>
+-
+-#data
+-<tbody><a>
+-#errors
+-#document-fragment
+-td
+-#document
+-| <a>
+-
+-#data
+-<tfoot><a>
+-#errors
+-#document-fragment
+-td
+-#document
+-| <a>
+-
+-#data
+-<th><a>
+-#errors
+-#document-fragment
+-td
+-#document
+-| <a>
+-
+-#data
+-<thead><a>
+-#errors
+-#document-fragment
+-td
+-#document
+-| <a>
+-
+-#data
+-<tr><a>
+-#errors
+-#document-fragment
+-td
+-#document
+-| <a>
+-
+-#data
+-</table><a>
+-#errors
+-#document-fragment
+-td
+-#document
+-| <a>
+-
+-#data
+-</tbody><a>
+-#errors
+-#document-fragment
+-td
+-#document
+-| <a>
+-
+-#data
+-</td><a>
+-#errors
+-#document-fragment
+-td
+-#document
+-| <a>
+-
+-#data
+-</tfoot><a>
+-#errors
+-#document-fragment
+-td
+-#document
+-| <a>
+-
+-#data
+-</thead><a>
+-#errors
+-#document-fragment
+-td
+-#document
+-| <a>
+-
+-#data
+-</th><a>
+-#errors
+-#document-fragment
+-td
+-#document
+-| <a>
+-
+-#data
+-</tr><a>
+-#errors
+-#document-fragment
+-td
+-#document
+-| <a>
+-
+-#data
+-<table><td><td>
+-#errors
+-#document-fragment
+-td
+-#document
+-| <table>
+-|   <tbody>
+-|     <tr>
+-|       <td>
+-|       <td>
+-
+-#data
+-</select><option>
+-#errors
+-#document-fragment
+-select
+-#document
+-| <option>
+-
+-#data
+-<input><option>
+-#errors
+-#document-fragment
+-select
+-#document
+-| <option>
+-
+-#data
+-<keygen><option>
+-#errors
+-#document-fragment
+-select
+-#document
+-| <option>
+-
+-#data
+-<textarea><option>
+-#errors
+-#document-fragment
+-select
+-#document
+-| <option>
+-
+-#data
+-</html><!--abc-->
+-#errors
+-#document-fragment
+-html
+-#document
+-| <head>
+-| <body>
+-| <!-- abc -->
+-
+-#data
+-</frameset><frame>
+-#errors
+-#document-fragment
+-frameset
+-#document
+-| <frame>
+-
+-#data
+-#errors
+-#document-fragment
+-html
+-#document
+-| <head>
+-| <body>
+diff --git a/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tricky01.dat b/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tricky01.dat
+deleted file mode 100644
+index 0841992..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/tricky01.dat
++++ /dev/null
+@@ -1,261 +0,0 @@
+-#data
+-<b><p>Bold </b> Not bold</p>
+-Also not bold.
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <b>
+-|     <p>
+-|       <b>
+-|         "Bold "
+-|       " Not bold"
+-|     "
+-Also not bold."
+-
+-#data
+-<html>
+-<font color=red><i>Italic and Red<p>Italic and Red </font> Just italic.</p> Italic only.</i> Plain
+-<p>I should not be red. <font color=red>Red. <i>Italic and red.</p>
+-<p>Italic and red. </i> Red.</font> I should not be red.</p>
+-<b>Bold <i>Bold and italic</b> Only Italic </i> Plain
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <font>
+-|       color="red"
+-|       <i>
+-|         "Italic and Red"
+-|     <i>
+-|       <p>
+-|         <font>
+-|           color="red"
+-|           "Italic and Red "
+-|         " Just italic."
+-|       " Italic only."
+-|     " Plain
+-"
+-|     <p>
+-|       "I should not be red. "
+-|       <font>
+-|         color="red"
+-|         "Red. "
+-|         <i>
+-|           "Italic and red."
+-|     <font>
+-|       color="red"
+-|       <i>
+-|         "
+-"
+-|     <p>
+-|       <font>
+-|         color="red"
+-|         <i>
+-|           "Italic and red. "
+-|         " Red."
+-|       " I should not be red."
+-|     "
+-"
+-|     <b>
+-|       "Bold "
+-|       <i>
+-|         "Bold and italic"
+-|     <i>
+-|       " Only Italic "
+-|     " Plain"
+-
+-#data
+-<html><body>
+-<p><font size="7">First paragraph.</p>
+-<p>Second paragraph.</p></font>
+-<b><p><i>Bold and Italic</b> Italic</p>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "
+-"
+-|     <p>
+-|       <font>
+-|         size="7"
+-|         "First paragraph."
+-|     <font>
+-|       size="7"
+-|       "
+-"
+-|       <p>
+-|         "Second paragraph."
+-|     "
+-"
+-|     <b>
+-|     <p>
+-|       <b>
+-|         <i>
+-|           "Bold and Italic"
+-|       <i>
+-|         " Italic"
+-
+-#data
+-<html>
+-<dl>
+-<dt><b>Boo
+-<dd>Goo?
+-</dl>
+-</html>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <dl>
+-|       "
+-"
+-|       <dt>
+-|         <b>
+-|           "Boo
+-"
+-|       <dd>
+-|         <b>
+-|           "Goo?
+-"
+-|     <b>
+-|       "
+-"
+-
+-#data
+-<html><body>
+-<label><a><div>Hello<div>World</div></a></label>  
+-</body></html>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "
+-"
+-|     <label>
+-|       <a>
+-|       <div>
+-|         <a>
+-|           "Hello"
+-|           <div>
+-|             "World"
+-|         "  
+-"
+-
+-#data
+-<table><center> <font>a</center> <img> <tr><td> </td> </tr> </table>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <center>
+-|       " "
+-|       <font>
+-|         "a"
+-|     <font>
+-|       <img>
+-|       " "
+-|     <table>
+-|       " "
+-|       <tbody>
+-|         <tr>
+-|           <td>
+-|             " "
+-|           " "
+-|         " "
+-
+-#data
+-<table><tr><p><a><p>You should see this text.
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <p>
+-|       <a>
+-|     <p>
+-|       <a>
+-|         "You should see this text."
+-|     <table>
+-|       <tbody>
+-|         <tr>
+-
+-#data
+-<TABLE>
+-<TR>
+-<CENTER><CENTER><TD></TD></TR><TR>
+-<FONT>
+-<TABLE><tr></tr></TABLE>
+-</P>
+-<a></font><font></a>
+-This page contains an insanely badly-nested tag sequence.
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <center>
+-|       <center>
+-|     <font>
+-|       "
+-"
+-|     <table>
+-|       "
+-"
+-|       <tbody>
+-|         <tr>
+-|           "
+-"
+-|           <td>
+-|         <tr>
+-|           "
+-"
+-|     <table>
+-|       <tbody>
+-|         <tr>
+-|     <font>
+-|       "
+-"
+-|       <p>
+-|       "
+-"
+-|       <a>
+-|     <a>
+-|       <font>
+-|     <font>
+-|       "
+-This page contains an insanely badly-nested tag sequence."
+-
+-#data
+-<html>
+-<body>
+-<b><nobr><div>This text is in a div inside a nobr</nobr>More text that should not be in the nobr, i.e., the
+-nobr should have closed the div inside it implicitly. </b><pre>A pre tag outside everything else.</pre>
+-</body>
+-</html>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "
+-"
+-|     <b>
+-|       <nobr>
+-|     <div>
+-|       <b>
+-|         <nobr>
+-|           "This text is in a div inside a nobr"
+-|         "More text that should not be in the nobr, i.e., the
+-nobr should have closed the div inside it implicitly. "
+-|       <pre>
+-|         "A pre tag outside everything else."
+-|       "
+-
+-"
+diff --git a/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/webkit01.dat b/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/webkit01.dat
+deleted file mode 100644
+index 9d425e9..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/webkit01.dat
++++ /dev/null
+@@ -1,610 +0,0 @@
+-#data
+-Test
+-#errors
+-Line: 1 Col: 4 Unexpected non-space characters. Expected DOCTYPE.
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "Test"
+-
+-#data
+-<div></div>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <div>
+-
+-#data
+-<div>Test</div>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <div>
+-|       "Test"
+-
+-#data
+-<di
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-
+-#data
+-<div>Hello</div>
+-<script>
+-console.log("PASS");
+-</script>
+-<div>Bye</div>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <div>
+-|       "Hello"
+-|     "
+-"
+-|     <script>
+-|       "
+-console.log("PASS");
+-"
+-|     "
+-"
+-|     <div>
+-|       "Bye"
+-
+-#data
+-<div foo="bar">Hello</div>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <div>
+-|       foo="bar"
+-|       "Hello"
+-
+-#data
+-<div>Hello</div>
+-<script>
+-console.log("FOO<span>BAR</span>BAZ");
+-</script>
+-<div>Bye</div>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <div>
+-|       "Hello"
+-|     "
+-"
+-|     <script>
+-|       "
+-console.log("FOO<span>BAR</span>BAZ");
+-"
+-|     "
+-"
+-|     <div>
+-|       "Bye"
+-
+-#data
+-<foo bar="baz"></foo><potato quack="duck"></potato>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <foo>
+-|       bar="baz"
+-|     <potato>
+-|       quack="duck"
+-
+-#data
+-<foo bar="baz"><potato quack="duck"></potato></foo>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <foo>
+-|       bar="baz"
+-|       <potato>
+-|         quack="duck"
+-
+-#data
+-<foo></foo bar="baz"><potato></potato quack="duck">
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <foo>
+-|     <potato>
+-
+-#data
+-</ tttt>
+-#errors
+-#document
+-| <!--  tttt -->
+-| <html>
+-|   <head>
+-|   <body>
+-
+-#data
+-<div FOO ><img><img></div>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <div>
+-|       foo=""
+-|       <img>
+-|       <img>
+-
+-#data
+-<p>Test</p<p>Test2</p>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <p>
+-|       "TestTest2"
+-
+-#data
+-<rdar://problem/6869687>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <rdar:>
+-|       6869687=""
+-|       problem=""
+-
+-#data
+-<A>test< /A>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <a>
+-|       "test< /A>"
+-
+-#data
+-&lt;
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "<"
+-
+-#data
+-<body foo='bar'><body foo='baz' yo='mama'>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     foo="bar"
+-|     yo="mama"
+-
+-#data
+-<body></br foo="bar"></body>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <br>
+-
+-#data
+-<bdy><br foo="bar"></body>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <bdy>
+-|       <br>
+-|         foo="bar"
+-
+-#data
+-<body></body></br foo="bar">
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <br>
+-
+-#data
+-<bdy></body><br foo="bar">
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <bdy>
+-|       <br>
+-|         foo="bar"
+-
+-#data
+-<html><body></body></html><!-- Hi there -->
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-| <!--  Hi there  -->
+-
+-#data
+-<html><body></body></html>x<!-- Hi there -->
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "x"
+-|     <!--  Hi there  -->
+-
+-#data
+-<html><body></body></html>x<!-- Hi there --></html><!-- Again -->
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "x"
+-|     <!--  Hi there  -->
+-| <!--  Again  -->
+-
+-#data
+-<html><body></body></html>x<!-- Hi there --></body></html><!-- Again -->
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "x"
+-|     <!--  Hi there  -->
+-| <!--  Again  -->
+-
+-#data
+-<html><body><ruby><div><rp>xx</rp></div></ruby></body></html>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <ruby>
+-|       <div>
+-|         <rp>
+-|           "xx"
+-
+-#data
+-<html><body><ruby><div><rt>xx</rt></div></ruby></body></html>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <ruby>
+-|       <div>
+-|         <rt>
+-|           "xx"
+-
+-#data
+-<html><frameset><!--1--><noframes>A</noframes><!--2--></frameset><!--3--><noframes>B</noframes><!--4--></html><!--5--><noframes>C</noframes><!--6-->
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <frameset>
+-|     <!-- 1 -->
+-|     <noframes>
+-|       "A"
+-|     <!-- 2 -->
+-|   <!-- 3 -->
+-|   <noframes>
+-|     "B"
+-|   <!-- 4 -->
+-|   <noframes>
+-|     "C"
+-| <!-- 5 -->
+-| <!-- 6 -->
+-
+-#data
+-<select><option>A<select><option>B<select><option>C<select><option>D<select><option>E<select><option>F<select><option>G<select>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <select>
+-|       <option>
+-|         "A"
+-|     <option>
+-|       "B"
+-|       <select>
+-|         <option>
+-|           "C"
+-|     <option>
+-|       "D"
+-|       <select>
+-|         <option>
+-|           "E"
+-|     <option>
+-|       "F"
+-|       <select>
+-|         <option>
+-|           "G"
+-
+-#data
+-<dd><dd><dt><dt><dd><li><li>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <dd>
+-|     <dd>
+-|     <dt>
+-|     <dt>
+-|     <dd>
+-|       <li>
+-|       <li>
+-
+-#data
+-<div><b></div><div><nobr>a<nobr>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <div>
+-|       <b>
+-|     <div>
+-|       <b>
+-|         <nobr>
+-|           "a"
+-|         <nobr>
+-
+-#data
+-<head></head>
+-<body></body>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   "
+-"
+-|   <body>
+-
+-#data
+-<head></head> <style></style>ddd
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|     <style>
+-|   " "
+-|   <body>
+-|     "ddd"
+-
+-#data
+-<kbd><table></kbd><col><select><tr>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <kbd>
+-|       <select>
+-|       <table>
+-|         <colgroup>
+-|           <col>
+-|         <tbody>
+-|           <tr>
+-
+-#data
+-<kbd><table></kbd><col><select><tr></table><div>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <kbd>
+-|       <select>
+-|       <table>
+-|         <colgroup>
+-|           <col>
+-|         <tbody>
+-|           <tr>
+-|       <div>
+-
+-#data
+-<a><li><style></style><title></title></a>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <a>
+-|     <li>
+-|       <a>
+-|         <style>
+-|         <title>
+-
+-#data
+-<font></p><p><meta><title></title></font>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <font>
+-|       <p>
+-|     <p>
+-|       <font>
+-|         <meta>
+-|         <title>
+-
+-#data
+-<a><center><title></title><a>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <a>
+-|     <center>
+-|       <a>
+-|         <title>
+-|       <a>
+-
+-#data
+-<svg><title><div>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <svg svg>
+-|       <svg title>
+-|         <div>
+-
+-#data
+-<svg><title><rect><div>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <svg svg>
+-|       <svg title>
+-|         <rect>
+-|           <div>
+-
+-#data
+-<svg><title><svg><div>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <svg svg>
+-|       <svg title>
+-|         <svg svg>
+-|         <div>
+-
+-#data
+-<img <="" FAIL>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <img>
+-|       <=""
+-|       fail=""
+-
+-#data
+-<ul><li><div id='foo'/>A</li><li>B<div>C</div></li></ul>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <ul>
+-|       <li>
+-|         <div>
+-|           id="foo"
+-|           "A"
+-|       <li>
+-|         "B"
+-|         <div>
+-|           "C"
+-
+-#data
+-<svg><em><desc></em>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <svg svg>
+-|     <em>
+-|       <desc>
+-
+-#data
+-<table><tr><td><svg><desc><td></desc><circle>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <table>
+-|       <tbody>
+-|         <tr>
+-|           <td>
+-|             <svg svg>
+-|               <svg desc>
+-|           <td>
+-|             <circle>
+-
+-#data
+-<svg><tfoot></mi><td>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <svg svg>
+-|       <svg tfoot>
+-|         <svg td>
+-
+-#data
+-<math><mrow><mrow><mn>1</mn></mrow><mi>a</mi></mrow></math>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <math math>
+-|       <math mrow>
+-|         <math mrow>
+-|           <math mn>
+-|             "1"
+-|         <math mi>
+-|           "a"
+-
+-#data
+-<!doctype html><input type="hidden"><frameset>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <frameset>
+-
+-#data
+-<!doctype html><input type="button"><frameset>
+-#errors
+-#document
+-| <!DOCTYPE html>
+-| <html>
+-|   <head>
+-|   <body>
+-|     <input>
+-|       type="button"
+diff --git a/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/webkit02.dat b/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/webkit02.dat
+deleted file mode 100644
+index 905783d..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/go.net/html/testdata/webkit/webkit02.dat
++++ /dev/null
+@@ -1,159 +0,0 @@
+-#data
+-<foo bar=qux/>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <foo>
+-|       bar="qux/"
+-
+-#data
+-<p id="status"><noscript><strong>A</strong></noscript><span>B</span></p>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <p>
+-|       id="status"
+-|       <noscript>
+-|         "<strong>A</strong>"
+-|       <span>
+-|         "B"
+-
+-#data
+-<div><sarcasm><div></div></sarcasm></div>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <div>
+-|       <sarcasm>
+-|         <div>
+-
+-#data
+-<html><body><img src="" border="0" alt="><div>A</div></body></html>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-
+-#data
+-<table><td></tbody>A
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     "A"
+-|     <table>
+-|       <tbody>
+-|         <tr>
+-|           <td>
+-
+-#data
+-<table><td></thead>A
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <table>
+-|       <tbody>
+-|         <tr>
+-|           <td>
+-|             "A"
+-
+-#data
+-<table><td></tfoot>A
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <table>
+-|       <tbody>
+-|         <tr>
+-|           <td>
+-|             "A"
+-
+-#data
+-<table><thead><td></tbody>A
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <table>
+-|       <thead>
+-|         <tr>
+-|           <td>
+-|             "A"
+-
+-#data
+-<legend>test</legend>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <legend>
+-|       "test"
+-
+-#data
+-<table><input>
+-#errors
+-#document
+-| <html>
+-|   <head>
+-|   <body>
+-|     <input>
+-|     <table>
+-
+-#data
+-<b><em><dcell><postfield><postfield><postfield><postfield><missing_glyph><missing_glyph><missing_glyph><missing_glyph><hkern><aside></b></em>
+-#errors
+-#document-fragment
+-div
+-#document
+-| <b>
+-|   <em>
+-|     <dcell>
+-|       <postfield>
+-|         <postfield>
+-|           <postfield>
+-|             <postfield>
+-|               <missing_glyph>
+-|                 <missing_glyph>
+-|                   <missing_glyph>
+-|                     <missing_glyph>
+-|                       <hkern>
+-| <aside>
+-|   <em>
+-|     <b>
+-
+-#data
+-<isindex action="x">
+-#errors
+-#document-fragment
+-table
+-#document
+-| <form>
+-|   action="x"
+-| <hr>
+-| <label>
+-|   "This is a searchable index. Enter search keywords: "
+-|   <input>
+-|     name="isindex"
+-| <hr>
+-
+-#data
+-<option><XH<optgroup></optgroup>
+-#errors
+-#document-fragment
+-select
+-#document
+-| <option>
+diff --git a/Godeps/_workspace/src/code.google.com/p/go.net/html/token.go b/Godeps/_workspace/src/code.google.com/p/go.net/html/token.go
+deleted file mode 100644
+index a226099..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/go.net/html/token.go
++++ /dev/null
+@@ -1,1219 +0,0 @@
+-// Copyright 2010 The Go Authors. All rights reserved.
+-// Use of this source code is governed by a BSD-style
+-// license that can be found in the LICENSE file.
+-
+-package html
+-
+-import (
+-	"bytes"
+-	"errors"
+-	"io"
+-	"strconv"
+-	"strings"
+-
+-	"code.google.com/p/go.net/html/atom"
+-)
+-
+-// A TokenType is the type of a Token.
+-type TokenType uint32
+-
+-const (
+-	// ErrorToken means that an error occurred during tokenization.
+-	ErrorToken TokenType = iota
+-	// TextToken means a text node.
+-	TextToken
+-	// A StartTagToken looks like <a>.
+-	StartTagToken
+-	// An EndTagToken looks like </a>.
+-	EndTagToken
+-	// A SelfClosingTagToken tag looks like <br/>.
+-	SelfClosingTagToken
+-	// A CommentToken looks like <!--x-->.
+-	CommentToken
+-	// A DoctypeToken looks like <!DOCTYPE x>
+-	DoctypeToken
+-)
+-
+-// ErrBufferExceeded means that the buffering limit was exceeded.
+-var ErrBufferExceeded = errors.New("max buffer exceeded")
+-
+-// String returns a string representation of the TokenType.
+-func (t TokenType) String() string {
+-	switch t {
+-	case ErrorToken:
+-		return "Error"
+-	case TextToken:
+-		return "Text"
+-	case StartTagToken:
+-		return "StartTag"
+-	case EndTagToken:
+-		return "EndTag"
+-	case SelfClosingTagToken:
+-		return "SelfClosingTag"
+-	case CommentToken:
+-		return "Comment"
+-	case DoctypeToken:
+-		return "Doctype"
+-	}
+-	return "Invalid(" + strconv.Itoa(int(t)) + ")"
+-}
+-
+-// An Attribute is an attribute namespace-key-value triple. Namespace is
+-// non-empty for foreign attributes like xlink, Key is alphabetic (and hence
+-// does not contain escapable characters like '&', '<' or '>'), and Val is
+-// unescaped (it looks like "a<b" rather than "a&lt;b").
+-//
+-// Namespace is only used by the parser, not the tokenizer.
+-type Attribute struct {
+-	Namespace, Key, Val string
+-}
+-
+-// A Token consists of a TokenType and some Data (tag name for start and end
+-// tags, content for text, comments and doctypes). A tag Token may also contain
+-// a slice of Attributes. Data is unescaped for all Tokens (it looks like "a<b"
+-// rather than "a&lt;b"). For tag Tokens, DataAtom is the atom for Data, or
+-// zero if Data is not a known tag name.
+-type Token struct {
+-	Type     TokenType
+-	DataAtom atom.Atom
+-	Data     string
+-	Attr     []Attribute
+-}
+-
+-// tagString returns a string representation of a tag Token's Data and Attr.
+-func (t Token) tagString() string {
+-	if len(t.Attr) == 0 {
+-		return t.Data
+-	}
+-	buf := bytes.NewBufferString(t.Data)
+-	for _, a := range t.Attr {
+-		buf.WriteByte(' ')
+-		buf.WriteString(a.Key)
+-		buf.WriteString(`="`)
+-		escape(buf, a.Val)
+-		buf.WriteByte('"')
+-	}
+-	return buf.String()
+-}
+-
+-// String returns a string representation of the Token.
+-func (t Token) String() string {
+-	switch t.Type {
+-	case ErrorToken:
+-		return ""
+-	case TextToken:
+-		return EscapeString(t.Data)
+-	case StartTagToken:
+-		return "<" + t.tagString() + ">"
+-	case EndTagToken:
+-		return "</" + t.tagString() + ">"
+-	case SelfClosingTagToken:
+-		return "<" + t.tagString() + "/>"
+-	case CommentToken:
+-		return "<!--" + t.Data + "-->"
+-	case DoctypeToken:
+-		return "<!DOCTYPE " + t.Data + ">"
+-	}
+-	return "Invalid(" + strconv.Itoa(int(t.Type)) + ")"
+-}
+-
+-// span is a range of bytes in a Tokenizer's buffer. The start is inclusive,
+-// the end is exclusive.
+-type span struct {
+-	start, end int
+-}
+-
+-// A Tokenizer returns a stream of HTML Tokens.
+-type Tokenizer struct {
+-	// r is the source of the HTML text.
+-	r io.Reader
+-	// tt is the TokenType of the current token.
+-	tt TokenType
+-	// err is the first error encountered during tokenization. It is possible
+-	// for tt != Error && err != nil to hold: this means that Next returned a
+-	// valid token but the subsequent Next call will return an error token.
+-	// For example, if the HTML text input was just "plain", then the first
+-	// Next call would set z.err to io.EOF but return a TextToken, and all
+-	// subsequent Next calls would return an ErrorToken.
+-	// err is never reset. Once it becomes non-nil, it stays non-nil.
+-	err error
+-	// readErr is the error returned by the io.Reader r. It is separate from
+-	// err because it is valid for an io.Reader to return (n int, err1 error)
+-	// such that n > 0 && err1 != nil, and callers should always process the
+-	// n > 0 bytes before considering the error err1.
+-	readErr error
+-	// buf[raw.start:raw.end] holds the raw bytes of the current token.
+-	// buf[raw.end:] is buffered input that will yield future tokens.
+-	raw span
+-	buf []byte
+-	// maxBuf limits the data buffered in buf. A value of 0 means unlimited.
+-	maxBuf int
+-	// buf[data.start:data.end] holds the raw bytes of the current token's data:
+-	// a text token's text, a tag token's tag name, etc.
+-	data span
+-	// pendingAttr is the attribute key and value currently being tokenized.
+-	// When complete, pendingAttr is pushed onto attr. nAttrReturned is
+-	// incremented on each call to TagAttr.
+-	pendingAttr   [2]span
+-	attr          [][2]span
+-	nAttrReturned int
+-	// rawTag is the "script" in "</script>" that closes the next token. If
+-	// non-empty, the subsequent call to Next will return a raw or RCDATA text
+-	// token: one that treats "<p>" as text instead of an element.
+-	// rawTag's contents are lower-cased.
+-	rawTag string
+-	// textIsRaw is whether the current text token's data is not escaped.
+-	textIsRaw bool
+-	// convertNUL is whether NUL bytes in the current token's data should
+-	// be converted into \ufffd replacement characters.
+-	convertNUL bool
+-	// allowCDATA is whether CDATA sections are allowed in the current context.
+-	allowCDATA bool
+-}
+-
+-// AllowCDATA sets whether or not the tokenizer recognizes <![CDATA[foo]]> as
+-// the text "foo". The default value is false, which means to recognize it as
+-// a bogus comment "<!-- [CDATA[foo]] -->" instead.
+-//
+-// Strictly speaking, an HTML5 compliant tokenizer should allow CDATA if and
+-// only if tokenizing foreign content, such as MathML and SVG. However,
+-// tracking foreign-contentness is difficult to do purely in the tokenizer,
+-// as opposed to the parser, due to HTML integration points: an <svg> element
+-// can contain a <foreignObject> that is foreign-to-SVG but not foreign-to-
+-// HTML. For strict compliance with the HTML5 tokenization algorithm, it is the
+-// responsibility of the user of a tokenizer to call AllowCDATA as appropriate.
+-// In practice, if using the tokenizer without caring whether MathML or SVG
+-// CDATA is text or comments, such as tokenizing HTML to find all the anchor
+-// text, it is acceptable to ignore this responsibility.
+-func (z *Tokenizer) AllowCDATA(allowCDATA bool) {
+-	z.allowCDATA = allowCDATA
+-}
+-
+-// NextIsNotRawText instructs the tokenizer that the next token should not be
+-// considered as 'raw text'. Some elements, such as script and title elements,
+-// normally require the next token after the opening tag to be 'raw text' that
+-// has no child elements. For example, tokenizing "<title>a<b>c</b>d</title>"
+-// yields a start tag token for "<title>", a text token for "a<b>c</b>d", and
+-// an end tag token for "</title>". There are no distinct start tag or end tag
+-// tokens for the "<b>" and "</b>".
+-//
+-// This tokenizer implementation will generally look for raw text at the right
+-// times. Strictly speaking, an HTML5 compliant tokenizer should not look for
+-// raw text if in foreign content: <title> generally needs raw text, but a
+-// <title> inside an <svg> does not. Another example is that a <textarea>
+-// generally needs raw text, but a <textarea> is not allowed as an immediate
+-// child of a <select>; in normal parsing, a <textarea> implies </select>, but
+-// one cannot close the implicit element when parsing a <select>'s InnerHTML.
+-// Similarly to AllowCDATA, tracking the correct moment to override raw-text-
+-// ness is difficult to do purely in the tokenizer, as opposed to the parser.
+-// For strict compliance with the HTML5 tokenization algorithm, it is the
+-// responsibility of the user of a tokenizer to call NextIsNotRawText as
+-// appropriate. In practice, like AllowCDATA, it is acceptable to ignore this
+-// responsibility for basic usage.
+-//
+-// Note that this 'raw text' concept is different from the one offered by the
+-// Tokenizer.Raw method.
+-func (z *Tokenizer) NextIsNotRawText() {
+-	z.rawTag = ""
+-}
+-
+-// Err returns the error associated with the most recent ErrorToken token.
+-// This is typically io.EOF, meaning the end of tokenization.
+-func (z *Tokenizer) Err() error {
+-	if z.tt != ErrorToken {
+-		return nil
+-	}
+-	return z.err
+-}
+-
+-// readByte returns the next byte from the input stream, doing a buffered read
+-// from z.r into z.buf if necessary. z.buf[z.raw.start:z.raw.end] remains a contiguous byte
+-// slice that holds all the bytes read so far for the current token.
+-// It sets z.err if the underlying reader returns an error.
+-// Pre-condition: z.err == nil.
+-func (z *Tokenizer) readByte() byte {
+-	if z.raw.end >= len(z.buf) {
+-		// Our buffer is exhausted and we have to read from z.r. Check if the
+-		// previous read resulted in an error.
+-		if z.readErr != nil {
+-			z.err = z.readErr
+-			return 0
+-		}
+-		// We copy z.buf[z.raw.start:z.raw.end] to the beginning of z.buf. If the length
+-		// z.raw.end - z.raw.start is more than half the capacity of z.buf, then we
+-		// allocate a new buffer before the copy.
+-		c := cap(z.buf)
+-		d := z.raw.end - z.raw.start
+-		var buf1 []byte
+-		if 2*d > c {
+-			buf1 = make([]byte, d, 2*c)
+-		} else {
+-			buf1 = z.buf[:d]
+-		}
+-		copy(buf1, z.buf[z.raw.start:z.raw.end])
+-		if x := z.raw.start; x != 0 {
+-			// Adjust the data/attr spans to refer to the same contents after the copy.
+-			z.data.start -= x
+-			z.data.end -= x
+-			z.pendingAttr[0].start -= x
+-			z.pendingAttr[0].end -= x
+-			z.pendingAttr[1].start -= x
+-			z.pendingAttr[1].end -= x
+-			for i := range z.attr {
+-				z.attr[i][0].start -= x
+-				z.attr[i][0].end -= x
+-				z.attr[i][1].start -= x
+-				z.attr[i][1].end -= x
+-			}
+-		}
+-		z.raw.start, z.raw.end, z.buf = 0, d, buf1[:d]
+-		// Now that we have copied the live bytes to the start of the buffer,
+-		// we read from z.r into the remainder.
+-		var n int
+-		n, z.readErr = readAtLeastOneByte(z.r, buf1[d:cap(buf1)])
+-		if n == 0 {
+-			z.err = z.readErr
+-			return 0
+-		}
+-		z.buf = buf1[:d+n]
+-	}
+-	x := z.buf[z.raw.end]
+-	z.raw.end++
+-	if z.maxBuf > 0 && z.raw.end-z.raw.start >= z.maxBuf {
+-		z.err = ErrBufferExceeded
+-		return 0
+-	}
+-	return x
+-}
+-
+-// Buffered returns a slice containing data buffered but not yet tokenized.
+-func (z *Tokenizer) Buffered() []byte {
+-	return z.buf[z.raw.end:]
+-}
+-
+-// readAtLeastOneByte wraps an io.Reader so that reading cannot return (0, nil).
+-// It returns io.ErrNoProgress if the underlying r.Read method returns (0, nil)
+-// too many times in succession.
+-func readAtLeastOneByte(r io.Reader, b []byte) (int, error) {
+-	for i := 0; i < 100; i++ {
+-		n, err := r.Read(b)
+-		if n != 0 || err != nil {
+-			return n, err
+-		}
+-	}
+-	return 0, io.ErrNoProgress
+-}
+-
+-// skipWhiteSpace skips past any white space.
+-func (z *Tokenizer) skipWhiteSpace() {
+-	if z.err != nil {
+-		return
+-	}
+-	for {
+-		c := z.readByte()
+-		if z.err != nil {
+-			return
+-		}
+-		switch c {
+-		case ' ', '\n', '\r', '\t', '\f':
+-			// No-op.
+-		default:
+-			z.raw.end--
+-			return
+-		}
+-	}
+-}
+-
+-// readRawOrRCDATA reads until the next "</foo>", where "foo" is z.rawTag and
+-// is typically something like "script" or "textarea".
+-func (z *Tokenizer) readRawOrRCDATA() {
+-	if z.rawTag == "script" {
+-		z.readScript()
+-		z.textIsRaw = true
+-		z.rawTag = ""
+-		return
+-	}
+-loop:
+-	for {
+-		c := z.readByte()
+-		if z.err != nil {
+-			break loop
+-		}
+-		if c != '<' {
+-			continue loop
+-		}
+-		c = z.readByte()
+-		if z.err != nil {
+-			break loop
+-		}
+-		if c != '/' {
+-			continue loop
+-		}
+-		if z.readRawEndTag() || z.err != nil {
+-			break loop
+-		}
+-	}
+-	z.data.end = z.raw.end
+-	// A textarea's or title's RCDATA can contain escaped entities.
+-	z.textIsRaw = z.rawTag != "textarea" && z.rawTag != "title"
+-	z.rawTag = ""
+-}
+-
+-// readRawEndTag attempts to read a tag like "</foo>", where "foo" is z.rawTag.
+-// If it succeeds, it backs up the input position to reconsume the tag and
+-// returns true. Otherwise it returns false. The opening "</" has already been
+-// consumed.
+-func (z *Tokenizer) readRawEndTag() bool {
+-	for i := 0; i < len(z.rawTag); i++ {
+-		c := z.readByte()
+-		if z.err != nil {
+-			return false
+-		}
+-		if c != z.rawTag[i] && c != z.rawTag[i]-('a'-'A') {
+-			z.raw.end--
+-			return false
+-		}
+-	}
+-	c := z.readByte()
+-	if z.err != nil {
+-		return false
+-	}
+-	switch c {
+-	case ' ', '\n', '\r', '\t', '\f', '/', '>':
+-		// The 3 is 2 for the leading "</" plus 1 for the trailing character c.
+-		z.raw.end -= 3 + len(z.rawTag)
+-		return true
+-	}
+-	z.raw.end--
+-	return false
+-}
+-
+-// readScript reads until the next </script> tag, following the byzantine
+-// rules for escaping/hiding the closing tag.
+-func (z *Tokenizer) readScript() {
+-	defer func() {
+-		z.data.end = z.raw.end
+-	}()
+-	var c byte
+-
+-scriptData:
+-	c = z.readByte()
+-	if z.err != nil {
+-		return
+-	}
+-	if c == '<' {
+-		goto scriptDataLessThanSign
+-	}
+-	goto scriptData
+-
+-scriptDataLessThanSign:
+-	c = z.readByte()
+-	if z.err != nil {
+-		return
+-	}
+-	switch c {
+-	case '/':
+-		goto scriptDataEndTagOpen
+-	case '!':
+-		goto scriptDataEscapeStart
+-	}
+-	z.raw.end--
+-	goto scriptData
+-
+-scriptDataEndTagOpen:
+-	if z.readRawEndTag() || z.err != nil {
+-		return
+-	}
+-	goto scriptData
+-
+-scriptDataEscapeStart:
+-	c = z.readByte()
+-	if z.err != nil {
+-		return
+-	}
+-	if c == '-' {
+-		goto scriptDataEscapeStartDash
+-	}
+-	z.raw.end--
+-	goto scriptData
+-
+-scriptDataEscapeStartDash:
+-	c = z.readByte()
+-	if z.err != nil {
+-		return
+-	}
+-	if c == '-' {
+-		goto scriptDataEscapedDashDash
+-	}
+-	z.raw.end--
+-	goto scriptData
+-
+-scriptDataEscaped:
+-	c = z.readByte()
+-	if z.err != nil {
+-		return
+-	}
+-	switch c {
+-	case '-':
+-		goto scriptDataEscapedDash
+-	case '<':
+-		goto scriptDataEscapedLessThanSign
+-	}
+-	goto scriptDataEscaped
+-
+-scriptDataEscapedDash:
+-	c = z.readByte()
+-	if z.err != nil {
+-		return
+-	}
+-	switch c {
+-	case '-':
+-		goto scriptDataEscapedDashDash
+-	case '<':
+-		goto scriptDataEscapedLessThanSign
+-	}
+-	goto scriptDataEscaped
+-
+-scriptDataEscapedDashDash:
+-	c = z.readByte()
+-	if z.err != nil {
+-		return
+-	}
+-	switch c {
+-	case '-':
+-		goto scriptDataEscapedDashDash
+-	case '<':
+-		goto scriptDataEscapedLessThanSign
+-	case '>':
+-		goto scriptData
+-	}
+-	goto scriptDataEscaped
+-
+-scriptDataEscapedLessThanSign:
+-	c = z.readByte()
+-	if z.err != nil {
+-		return
+-	}
+-	if c == '/' {
+-		goto scriptDataEscapedEndTagOpen
+-	}
+-	if 'a' <= c && c <= 'z' || 'A' <= c && c <= 'Z' {
+-		goto scriptDataDoubleEscapeStart
+-	}
+-	z.raw.end--
+-	goto scriptData
+-
+-scriptDataEscapedEndTagOpen:
+-	if z.readRawEndTag() || z.err != nil {
+-		return
+-	}
+-	goto scriptDataEscaped
+-
+-scriptDataDoubleEscapeStart:
+-	z.raw.end--
+-	for i := 0; i < len("script"); i++ {
+-		c = z.readByte()
+-		if z.err != nil {
+-			return
+-		}
+-		if c != "script"[i] && c != "SCRIPT"[i] {
+-			z.raw.end--
+-			goto scriptDataEscaped
+-		}
+-	}
+-	c = z.readByte()
+-	if z.err != nil {
+-		return
+-	}
+-	switch c {
+-	case ' ', '\n', '\r', '\t', '\f', '/', '>':
+-		goto scriptDataDoubleEscaped
+-	}
+-	z.raw.end--
+-	goto scriptDataEscaped
+-
+-scriptDataDoubleEscaped:
+-	c = z.readByte()
+-	if z.err != nil {
+-		return
+-	}
+-	switch c {
+-	case '-':
+-		goto scriptDataDoubleEscapedDash
+-	case '<':
+-		goto scriptDataDoubleEscapedLessThanSign
+-	}
+-	goto scriptDataDoubleEscaped
+-
+-scriptDataDoubleEscapedDash:
+-	c = z.readByte()
+-	if z.err != nil {
+-		return
+-	}
+-	switch c {
+-	case '-':
+-		goto scriptDataDoubleEscapedDashDash
+-	case '<':
+-		goto scriptDataDoubleEscapedLessThanSign
+-	}
+-	goto scriptDataDoubleEscaped
+-
+-scriptDataDoubleEscapedDashDash:
+-	c = z.readByte()
+-	if z.err != nil {
+-		return
+-	}
+-	switch c {
+-	case '-':
+-		goto scriptDataDoubleEscapedDashDash
+-	case '<':
+-		goto scriptDataDoubleEscapedLessThanSign
+-	case '>':
+-		goto scriptData
+-	}
+-	goto scriptDataDoubleEscaped
+-
+-scriptDataDoubleEscapedLessThanSign:
+-	c = z.readByte()
+-	if z.err != nil {
+-		return
+-	}
+-	if c == '/' {
+-		goto scriptDataDoubleEscapeEnd
+-	}
+-	z.raw.end--
+-	goto scriptDataDoubleEscaped
+-
+-scriptDataDoubleEscapeEnd:
+-	if z.readRawEndTag() {
+-		z.raw.end += len("</script>")
+-		goto scriptDataEscaped
+-	}
+-	if z.err != nil {
+-		return
+-	}
+-	goto scriptDataDoubleEscaped
+-}
+-
+-// readComment reads the next comment token starting with "<!--". The opening
+-// "<!--" has already been consumed.
+-func (z *Tokenizer) readComment() {
+-	z.data.start = z.raw.end
+-	defer func() {
+-		if z.data.end < z.data.start {
+-			// It's a comment with no data, like <!-->.
+-			z.data.end = z.data.start
+-		}
+-	}()
+-	for dashCount := 2; ; {
+-		c := z.readByte()
+-		if z.err != nil {
+-			// Ignore up to two dashes at EOF.
+-			if dashCount > 2 {
+-				dashCount = 2
+-			}
+-			z.data.end = z.raw.end - dashCount
+-			return
+-		}
+-		switch c {
+-		case '-':
+-			dashCount++
+-			continue
+-		case '>':
+-			if dashCount >= 2 {
+-				z.data.end = z.raw.end - len("-->")
+-				return
+-			}
+-		case '!':
+-			if dashCount >= 2 {
+-				c = z.readByte()
+-				if z.err != nil {
+-					z.data.end = z.raw.end
+-					return
+-				}
+-				if c == '>' {
+-					z.data.end = z.raw.end - len("--!>")
+-					return
+-				}
+-			}
+-		}
+-		dashCount = 0
+-	}
+-}
+-
+-// readUntilCloseAngle reads until the next ">".
+-func (z *Tokenizer) readUntilCloseAngle() {
+-	z.data.start = z.raw.end
+-	for {
+-		c := z.readByte()
+-		if z.err != nil {
+-			z.data.end = z.raw.end
+-			return
+-		}
+-		if c == '>' {
+-			z.data.end = z.raw.end - len(">")
+-			return
+-		}
+-	}
+-}
+-
+-// readMarkupDeclaration reads the next token starting with "<!". It might be
+-// a "<!--comment-->", a "<!DOCTYPE foo>", a "<![CDATA[section]]>" or
+-// "<!a bogus comment". The opening "<!" has already been consumed.
+-func (z *Tokenizer) readMarkupDeclaration() TokenType {
+-	z.data.start = z.raw.end
+-	var c [2]byte
+-	for i := 0; i < 2; i++ {
+-		c[i] = z.readByte()
+-		if z.err != nil {
+-			z.data.end = z.raw.end
+-			return CommentToken
+-		}
+-	}
+-	if c[0] == '-' && c[1] == '-' {
+-		z.readComment()
+-		return CommentToken
+-	}
+-	z.raw.end -= 2
+-	if z.readDoctype() {
+-		return DoctypeToken
+-	}
+-	if z.allowCDATA && z.readCDATA() {
+-		z.convertNUL = true
+-		return TextToken
+-	}
+-	// It's a bogus comment.
+-	z.readUntilCloseAngle()
+-	return CommentToken
+-}
+-
+-// readDoctype attempts to read a doctype declaration and returns true if
+-// successful. The opening "<!" has already been consumed.
+-func (z *Tokenizer) readDoctype() bool {
+-	const s = "DOCTYPE"
+-	for i := 0; i < len(s); i++ {
+-		c := z.readByte()
+-		if z.err != nil {
+-			z.data.end = z.raw.end
+-			return false
+-		}
+-		if c != s[i] && c != s[i]+('a'-'A') {
+-			// Back up to read the fragment of "DOCTYPE" again.
+-			z.raw.end = z.data.start
+-			return false
+-		}
+-	}
+-	if z.skipWhiteSpace(); z.err != nil {
+-		z.data.start = z.raw.end
+-		z.data.end = z.raw.end
+-		return true
+-	}
+-	z.readUntilCloseAngle()
+-	return true
+-}
+-
+-// readCDATA attempts to read a CDATA section and returns true if
+-// successful. The opening "<!" has already been consumed.
+-func (z *Tokenizer) readCDATA() bool {
+-	const s = "[CDATA["
+-	for i := 0; i < len(s); i++ {
+-		c := z.readByte()
+-		if z.err != nil {
+-			z.data.end = z.raw.end
+-			return false
+-		}
+-		if c != s[i] {
+-			// Back up to read the fragment of "[CDATA[" again.
+-			z.raw.end = z.data.start
+-			return false
+-		}
+-	}
+-	z.data.start = z.raw.end
+-	brackets := 0
+-	for {
+-		c := z.readByte()
+-		if z.err != nil {
+-			z.data.end = z.raw.end
+-			return true
+-		}
+-		switch c {
+-		case ']':
+-			brackets++
+-		case '>':
+-			if brackets >= 2 {
+-				z.data.end = z.raw.end - len("]]>")
+-				return true
+-			}
+-			brackets = 0
+-		default:
+-			brackets = 0
+-		}
+-	}
+-}
+-
+-// startTagIn returns whether the start tag in z.buf[z.data.start:z.data.end]
+-// case-insensitively matches any element of ss.
+-func (z *Tokenizer) startTagIn(ss ...string) bool {
+-loop:
+-	for _, s := range ss {
+-		if z.data.end-z.data.start != len(s) {
+-			continue loop
+-		}
+-		for i := 0; i < len(s); i++ {
+-			c := z.buf[z.data.start+i]
+-			if 'A' <= c && c <= 'Z' {
+-				c += 'a' - 'A'
+-			}
+-			if c != s[i] {
+-				continue loop
+-			}
+-		}
+-		return true
+-	}
+-	return false
+-}
+-
+-// readStartTag reads the next start tag token. The opening "<a" has already
+-// been consumed, where 'a' means anything in [A-Za-z].
+-func (z *Tokenizer) readStartTag() TokenType {
+-	z.readTag(true)
+-	if z.err != nil {
+-		return ErrorToken
+-	}
+-	// Several tags flag the tokenizer's next token as raw.
+-	c, raw := z.buf[z.data.start], false
+-	if 'A' <= c && c <= 'Z' {
+-		c += 'a' - 'A'
+-	}
+-	switch c {
+-	case 'i':
+-		raw = z.startTagIn("iframe")
+-	case 'n':
+-		raw = z.startTagIn("noembed", "noframes", "noscript")
+-	case 'p':
+-		raw = z.startTagIn("plaintext")
+-	case 's':
+-		raw = z.startTagIn("script", "style")
+-	case 't':
+-		raw = z.startTagIn("textarea", "title")
+-	case 'x':
+-		raw = z.startTagIn("xmp")
+-	}
+-	if raw {
+-		z.rawTag = strings.ToLower(string(z.buf[z.data.start:z.data.end]))
+-	}
+-	// Look for a self-closing token like "<br/>".
+-	if z.err == nil && z.buf[z.raw.end-2] == '/' {
+-		return SelfClosingTagToken
+-	}
+-	return StartTagToken
+-}
+-
+-// readTag reads the next tag token and its attributes. If saveAttr, those
+-// attributes are saved in z.attr, otherwise z.attr is set to an empty slice.
+-// The opening "<a" or "</a" has already been consumed, where 'a' means anything
+-// in [A-Za-z].
+-func (z *Tokenizer) readTag(saveAttr bool) {
+-	z.attr = z.attr[:0]
+-	z.nAttrReturned = 0
+-	// Read the tag name and attribute key/value pairs.
+-	z.readTagName()
+-	if z.skipWhiteSpace(); z.err != nil {
+-		return
+-	}
+-	for {
+-		c := z.readByte()
+-		if z.err != nil || c == '>' {
+-			break
+-		}
+-		z.raw.end--
+-		z.readTagAttrKey()
+-		z.readTagAttrVal()
+-		// Save pendingAttr if saveAttr and that attribute has a non-empty key.
+-		if saveAttr && z.pendingAttr[0].start != z.pendingAttr[0].end {
+-			z.attr = append(z.attr, z.pendingAttr)
+-		}
+-		if z.skipWhiteSpace(); z.err != nil {
+-			break
+-		}
+-	}
+-}
+-
+-// readTagName sets z.data to the "div" in "<div k=v>". The reader (z.raw.end)
+-// is positioned such that the first byte of the tag name (the "d" in "<div")
+-// has already been consumed.
+-func (z *Tokenizer) readTagName() {
+-	z.data.start = z.raw.end - 1
+-	for {
+-		c := z.readByte()
+-		if z.err != nil {
+-			z.data.end = z.raw.end
+-			return
+-		}
+-		switch c {
+-		case ' ', '\n', '\r', '\t', '\f':
+-			z.data.end = z.raw.end - 1
+-			return
+-		case '/', '>':
+-			z.raw.end--
+-			z.data.end = z.raw.end
+-			return
+-		}
+-	}
+-}
+-
+-// readTagAttrKey sets z.pendingAttr[0] to the "k" in "<div k=v>".
+-// Precondition: z.err == nil.
+-func (z *Tokenizer) readTagAttrKey() {
+-	z.pendingAttr[0].start = z.raw.end
+-	for {
+-		c := z.readByte()
+-		if z.err != nil {
+-			z.pendingAttr[0].end = z.raw.end
+-			return
+-		}
+-		switch c {
+-		case ' ', '\n', '\r', '\t', '\f', '/':
+-			z.pendingAttr[0].end = z.raw.end - 1
+-			return
+-		case '=', '>':
+-			z.raw.end--
+-			z.pendingAttr[0].end = z.raw.end
+-			return
+-		}
+-	}
+-}
+-
+-// readTagAttrVal sets z.pendingAttr[1] to the "v" in "<div k=v>".
+-func (z *Tokenizer) readTagAttrVal() {
+-	z.pendingAttr[1].start = z.raw.end
+-	z.pendingAttr[1].end = z.raw.end
+-	if z.skipWhiteSpace(); z.err != nil {
+-		return
+-	}
+-	c := z.readByte()
+-	if z.err != nil {
+-		return
+-	}
+-	if c != '=' {
+-		z.raw.end--
+-		return
+-	}
+-	if z.skipWhiteSpace(); z.err != nil {
+-		return
+-	}
+-	quote := z.readByte()
+-	if z.err != nil {
+-		return
+-	}
+-	switch quote {
+-	case '>':
+-		z.raw.end--
+-		return
+-
+-	case '\'', '"':
+-		z.pendingAttr[1].start = z.raw.end
+-		for {
+-			c := z.readByte()
+-			if z.err != nil {
+-				z.pendingAttr[1].end = z.raw.end
+-				return
+-			}
+-			if c == quote {
+-				z.pendingAttr[1].end = z.raw.end - 1
+-				return
+-			}
+-		}
+-
+-	default:
+-		z.pendingAttr[1].start = z.raw.end - 1
+-		for {
+-			c := z.readByte()
+-			if z.err != nil {
+-				z.pendingAttr[1].end = z.raw.end
+-				return
+-			}
+-			switch c {
+-			case ' ', '\n', '\r', '\t', '\f':
+-				z.pendingAttr[1].end = z.raw.end - 1
+-				return
+-			case '>':
+-				z.raw.end--
+-				z.pendingAttr[1].end = z.raw.end
+-				return
+-			}
+-		}
+-	}
+-}
+-
+-// Next scans the next token and returns its type.
+-func (z *Tokenizer) Next() TokenType {
+-	z.raw.start = z.raw.end
+-	z.data.start = z.raw.end
+-	z.data.end = z.raw.end
+-	if z.err != nil {
+-		z.tt = ErrorToken
+-		return z.tt
+-	}
+-	if z.rawTag != "" {
+-		if z.rawTag == "plaintext" {
+-			// Read everything up to EOF.
+-			for z.err == nil {
+-				z.readByte()
+-			}
+-			z.data.end = z.raw.end
+-			z.textIsRaw = true
+-		} else {
+-			z.readRawOrRCDATA()
+-		}
+-		if z.data.end > z.data.start {
+-			z.tt = TextToken
+-			z.convertNUL = true
+-			return z.tt
+-		}
+-	}
+-	z.textIsRaw = false
+-	z.convertNUL = false
+-
+-loop:
+-	for {
+-		c := z.readByte()
+-		if z.err != nil {
+-			break loop
+-		}
+-		if c != '<' {
+-			continue loop
+-		}
+-
+-		// Check if the '<' we have just read is part of a tag, comment
+-		// or doctype. If not, it's part of the accumulated text token.
+-		c = z.readByte()
+-		if z.err != nil {
+-			break loop
+-		}
+-		var tokenType TokenType
+-		switch {
+-		case 'a' <= c && c <= 'z' || 'A' <= c && c <= 'Z':
+-			tokenType = StartTagToken
+-		case c == '/':
+-			tokenType = EndTagToken
+-		case c == '!' || c == '?':
+-			// We use CommentToken to mean any of "<!--actual comments-->",
+-			// "<!DOCTYPE declarations>" and "<?xml processing instructions?>".
+-			tokenType = CommentToken
+-		default:
+-			// Reconsume the current character.
+-			z.raw.end--
+-			continue
+-		}
+-
+-		// We have a non-text token, but we might have accumulated some text
+-		// before that. If so, we return the text first, and return the non-
+-		// text token on the subsequent call to Next.
+-		if x := z.raw.end - len("<a"); z.raw.start < x {
+-			z.raw.end = x
+-			z.data.end = x
+-			z.tt = TextToken
+-			return z.tt
+-		}
+-		switch tokenType {
+-		case StartTagToken:
+-			z.tt = z.readStartTag()
+-			return z.tt
+-		case EndTagToken:
+-			c = z.readByte()
+-			if z.err != nil {
+-				break loop
+-			}
+-			if c == '>' {
+-				// "</>" does not generate a token at all. Generate an empty comment
+-				// to allow passthrough clients to pick up the data using Raw.
+-				// Reset the tokenizer state and start again.
+-				z.tt = CommentToken
+-				return z.tt
+-			}
+-			if 'a' <= c && c <= 'z' || 'A' <= c && c <= 'Z' {
+-				z.readTag(false)
+-				if z.err != nil {
+-					z.tt = ErrorToken
+-				} else {
+-					z.tt = EndTagToken
+-				}
+-				return z.tt
+-			}
+-			z.raw.end--
+-			z.readUntilCloseAngle()
+-			z.tt = CommentToken
+-			return z.tt
+-		case CommentToken:
+-			if c == '!' {
+-				z.tt = z.readMarkupDeclaration()
+-				return z.tt
+-			}
+-			z.raw.end--
+-			z.readUntilCloseAngle()
+-			z.tt = CommentToken
+-			return z.tt
+-		}
+-	}
+-	if z.raw.start < z.raw.end {
+-		z.data.end = z.raw.end
+-		z.tt = TextToken
+-		return z.tt
+-	}
+-	z.tt = ErrorToken
+-	return z.tt
+-}
+-
+-// Raw returns the unmodified text of the current token. Calling Next, Token,
+-// Text, TagName or TagAttr may change the contents of the returned slice.
+-func (z *Tokenizer) Raw() []byte {
+-	return z.buf[z.raw.start:z.raw.end]
+-}
+-
+-// convertNewlines converts "\r" and "\r\n" in s to "\n".
+-// The conversion happens in place, but the resulting slice may be shorter.
+-func convertNewlines(s []byte) []byte {
+-	for i, c := range s {
+-		if c != '\r' {
+-			continue
+-		}
+-
+-		src := i + 1
+-		if src >= len(s) || s[src] != '\n' {
+-			s[i] = '\n'
+-			continue
+-		}
+-
+-		dst := i
+-		for src < len(s) {
+-			if s[src] == '\r' {
+-				if src+1 < len(s) && s[src+1] == '\n' {
+-					src++
+-				}
+-				s[dst] = '\n'
+-			} else {
+-				s[dst] = s[src]
+-			}
+-			src++
+-			dst++
+-		}
+-		return s[:dst]
+-	}
+-	return s
+-}
+-
+-var (
+-	nul         = []byte("\x00")
+-	replacement = []byte("\ufffd")
+-)
+-
+-// Text returns the unescaped text of a text, comment or doctype token. The
+-// contents of the returned slice may change on the next call to Next.
+-func (z *Tokenizer) Text() []byte {
+-	switch z.tt {
+-	case TextToken, CommentToken, DoctypeToken:
+-		s := z.buf[z.data.start:z.data.end]
+-		z.data.start = z.raw.end
+-		z.data.end = z.raw.end
+-		s = convertNewlines(s)
+-		if (z.convertNUL || z.tt == CommentToken) && bytes.Contains(s, nul) {
+-			s = bytes.Replace(s, nul, replacement, -1)
+-		}
+-		if !z.textIsRaw {
+-			s = unescape(s, false)
+-		}
+-		return s
+-	}
+-	return nil
+-}
+-
+-// TagName returns the lower-cased name of a tag token (the `img` out of
+-// `<IMG SRC="foo">`) and whether the tag has attributes.
+-// The contents of the returned slice may change on the next call to Next.
+-func (z *Tokenizer) TagName() (name []byte, hasAttr bool) {
+-	if z.data.start < z.data.end {
+-		switch z.tt {
+-		case StartTagToken, EndTagToken, SelfClosingTagToken:
+-			s := z.buf[z.data.start:z.data.end]
+-			z.data.start = z.raw.end
+-			z.data.end = z.raw.end
+-			return lower(s), z.nAttrReturned < len(z.attr)
+-		}
+-	}
+-	return nil, false
+-}
+-
+-// TagAttr returns the lower-cased key and unescaped value of the next unparsed
+-// attribute for the current tag token and whether there are more attributes.
+-// The contents of the returned slices may change on the next call to Next.
+-func (z *Tokenizer) TagAttr() (key, val []byte, moreAttr bool) {
+-	if z.nAttrReturned < len(z.attr) {
+-		switch z.tt {
+-		case StartTagToken, SelfClosingTagToken:
+-			x := z.attr[z.nAttrReturned]
+-			z.nAttrReturned++
+-			key = z.buf[x[0].start:x[0].end]
+-			val = z.buf[x[1].start:x[1].end]
+-			return lower(key), unescape(convertNewlines(val), true), z.nAttrReturned < len(z.attr)
+-		}
+-	}
+-	return nil, nil, false
+-}
+-
+-// Token returns the next Token. The result's Data and Attr values remain valid
+-// after subsequent Next calls.
+-func (z *Tokenizer) Token() Token {
+-	t := Token{Type: z.tt}
+-	switch z.tt {
+-	case TextToken, CommentToken, DoctypeToken:
+-		t.Data = string(z.Text())
+-	case StartTagToken, SelfClosingTagToken, EndTagToken:
+-		name, moreAttr := z.TagName()
+-		for moreAttr {
+-			var key, val []byte
+-			key, val, moreAttr = z.TagAttr()
+-			t.Attr = append(t.Attr, Attribute{"", atom.String(key), string(val)})
+-		}
+-		if a := atom.Lookup(name); a != 0 {
+-			t.DataAtom, t.Data = a, a.String()
+-		} else {
+-			t.DataAtom, t.Data = 0, string(name)
+-		}
+-	}
+-	return t
+-}
+-
+-// SetMaxBuf sets a limit on the amount of data buffered during tokenization.
+-// A value of 0 means unlimited.
+-func (z *Tokenizer) SetMaxBuf(n int) {
+-	z.maxBuf = n
+-}
+-
+-// NewTokenizer returns a new HTML Tokenizer for the given Reader.
+-// The input is assumed to be UTF-8 encoded.
+-func NewTokenizer(r io.Reader) *Tokenizer {
+-	return NewTokenizerFragment(r, "")
+-}
+-
+-// NewTokenizerFragment returns a new HTML Tokenizer for the given Reader, for
+-// tokenizing an existing element's InnerHTML fragment. contextTag is that
+-// element's tag, such as "div" or "iframe".
+-//
+-// For example, how the InnerHTML "a<b" is tokenized depends on whether it is
+-// for a <p> tag or a <script> tag.
+-//
+-// The input is assumed to be UTF-8 encoded.
+-func NewTokenizerFragment(r io.Reader, contextTag string) *Tokenizer {
+-	z := &Tokenizer{
+-		r:   r,
+-		buf: make([]byte, 0, 4096),
+-	}
+-	if contextTag != "" {
+-		switch s := strings.ToLower(contextTag); s {
+-		case "iframe", "noembed", "noframes", "noscript", "plaintext", "script", "style", "title", "textarea", "xmp":
+-			z.rawTag = s
+-		}
+-	}
+-	return z
+-}
+diff --git a/Godeps/_workspace/src/code.google.com/p/go.net/html/token_test.go b/Godeps/_workspace/src/code.google.com/p/go.net/html/token_test.go
+deleted file mode 100644
+index f6988a8..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/go.net/html/token_test.go
++++ /dev/null
+@@ -1,748 +0,0 @@
+-// Copyright 2010 The Go Authors. All rights reserved.
+-// Use of this source code is governed by a BSD-style
+-// license that can be found in the LICENSE file.
+-
+-package html
+-
+-import (
+-	"bytes"
+-	"io"
+-	"io/ioutil"
+-	"reflect"
+-	"runtime"
+-	"strings"
+-	"testing"
+-)
+-
+-type tokenTest struct {
+-	// A short description of the test case.
+-	desc string
+-	// The HTML to parse.
+-	html string
+-	// The string representations of the expected tokens, joined by '$'.
+-	golden string
+-}
+-
+-var tokenTests = []tokenTest{
+-	{
+-		"empty",
+-		"",
+-		"",
+-	},
+-	// A single text node. The tokenizer should not break text nodes on whitespace,
+-	// nor should it normalize whitespace within a text node.
+-	{
+-		"text",
+-		"foo  bar",
+-		"foo  bar",
+-	},
+-	// An entity.
+-	{
+-		"entity",
+-		"one &lt; two",
+-		"one &lt; two",
+-	},
+-	// A start, self-closing and end tag. The tokenizer does not care if the start
+-	// and end tokens don't match; that is the job of the parser.
+-	{
+-		"tags",
+-		"<a>b<c/>d</e>",
+-		"<a>$b$<c/>$d$</e>",
+-	},
+-	// Angle brackets that aren't a tag.
+-	{
+-		"not a tag #0",
+-		"<",
+-		"&lt;",
+-	},
+-	{
+-		"not a tag #1",
+-		"</",
+-		"&lt;/",
+-	},
+-	{
+-		"not a tag #2",
+-		"</>",
+-		"<!---->",
+-	},
+-	{
+-		"not a tag #3",
+-		"a</>b",
+-		"a$<!---->$b",
+-	},
+-	{
+-		"not a tag #4",
+-		"</ >",
+-		"<!-- -->",
+-	},
+-	{
+-		"not a tag #5",
+-		"</.",
+-		"<!--.-->",
+-	},
+-	{
+-		"not a tag #6",
+-		"</.>",
+-		"<!--.-->",
+-	},
+-	{
+-		"not a tag #7",
+-		"a < b",
+-		"a &lt; b",
+-	},
+-	{
+-		"not a tag #8",
+-		"<.>",
+-		"&lt;.&gt;",
+-	},
+-	{
+-		"not a tag #9",
+-		"a<<<b>>>c",
+-		"a&lt;&lt;$<b>$&gt;&gt;c",
+-	},
+-	{
+-		"not a tag #10",
+-		"if x<0 and y < 0 then x*y>0",
+-		"if x&lt;0 and y &lt; 0 then x*y&gt;0",
+-	},
+-	{
+-		"not a tag #11",
+-		"<<p>",
+-		"&lt;$<p>",
+-	},
+-	// EOF in a tag name.
+-	{
+-		"tag name eof #0",
+-		"<a",
+-		"",
+-	},
+-	{
+-		"tag name eof #1",
+-		"<a ",
+-		"",
+-	},
+-	{
+-		"tag name eof #2",
+-		"a<b",
+-		"a",
+-	},
+-	{
+-		"tag name eof #3",
+-		"<a><b",
+-		"<a>",
+-	},
+-	{
+-		"tag name eof #4",
+-		`<a x`,
+-		``,
+-	},
+-	// Some malformed tags that are missing a '>'.
+-	{
+-		"malformed tag #0",
+-		`<p</p>`,
+-		`<p< p="">`,
+-	},
+-	{
+-		"malformed tag #1",
+-		`<p </p>`,
+-		`<p <="" p="">`,
+-	},
+-	{
+-		"malformed tag #2",
+-		`<p id`,
+-		``,
+-	},
+-	{
+-		"malformed tag #3",
+-		`<p id=`,
+-		``,
+-	},
+-	{
+-		"malformed tag #4",
+-		`<p id=>`,
+-		`<p id="">`,
+-	},
+-	{
+-		"malformed tag #5",
+-		`<p id=0`,
+-		``,
+-	},
+-	{
+-		"malformed tag #6",
+-		`<p id=0</p>`,
+-		`<p id="0&lt;/p">`,
+-	},
+-	{
+-		"malformed tag #7",
+-		`<p id="0</p>`,
+-		``,
+-	},
+-	{
+-		"malformed tag #8",
+-		`<p id="0"</p>`,
+-		`<p id="0" <="" p="">`,
+-	},
+-	{
+-		"malformed tag #9",
+-		`<p></p id`,
+-		`<p>`,
+-	},
+-	// Raw text and RCDATA.
+-	{
+-		"basic raw text",
+-		"<script><a></b></script>",
+-		"<script>$&lt;a&gt;&lt;/b&gt;$</script>",
+-	},
+-	{
+-		"unfinished script end tag",
+-		"<SCRIPT>a</SCR",
+-		"<script>$a&lt;/SCR",
+-	},
+-	{
+-		"broken script end tag",
+-		"<SCRIPT>a</SCR ipt>",
+-		"<script>$a&lt;/SCR ipt&gt;",
+-	},
+-	{
+-		"EOF in script end tag",
+-		"<SCRIPT>a</SCRipt",
+-		"<script>$a&lt;/SCRipt",
+-	},
+-	{
+-		"scriptx end tag",
+-		"<SCRIPT>a</SCRiptx",
+-		"<script>$a&lt;/SCRiptx",
+-	},
+-	{
+-		"' ' completes script end tag",
+-		"<SCRIPT>a</SCRipt ",
+-		"<script>$a",
+-	},
+-	{
+-		"'>' completes script end tag",
+-		"<SCRIPT>a</SCRipt>",
+-		"<script>$a$</script>",
+-	},
+-	{
+-		"self-closing script end tag",
+-		"<SCRIPT>a</SCRipt/>",
+-		"<script>$a$</script>",
+-	},
+-	{
+-		"nested script tag",
+-		"<SCRIPT>a</SCRipt<script>",
+-		"<script>$a&lt;/SCRipt&lt;script&gt;",
+-	},
+-	{
+-		"script end tag after unfinished",
+-		"<SCRIPT>a</SCRipt</script>",
+-		"<script>$a&lt;/SCRipt$</script>",
+-	},
+-	{
+-		"script/style mismatched tags",
+-		"<script>a</style>",
+-		"<script>$a&lt;/style&gt;",
+-	},
+-	{
+-		"style element with entity",
+-		"<style>&apos;",
+-		"<style>$&amp;apos;",
+-	},
+-	{
+-		"textarea with tag",
+-		"<textarea><div></textarea>",
+-		"<textarea>$&lt;div&gt;$</textarea>",
+-	},
+-	{
+-		"title with tag and entity",
+-		"<title><b>K&amp;R C</b></title>",
+-		"<title>$&lt;b&gt;K&amp;R C&lt;/b&gt;$</title>",
+-	},
+-	// DOCTYPE tests.
+-	{
+-		"Proper DOCTYPE",
+-		"<!DOCTYPE html>",
+-		"<!DOCTYPE html>",
+-	},
+-	{
+-		"DOCTYPE with no space",
+-		"<!doctypehtml>",
+-		"<!DOCTYPE html>",
+-	},
+-	{
+-		"DOCTYPE with two spaces",
+-		"<!doctype  html>",
+-		"<!DOCTYPE html>",
+-	},
+-	{
+-		"looks like DOCTYPE but isn't",
+-		"<!DOCUMENT html>",
+-		"<!--DOCUMENT html-->",
+-	},
+-	{
+-		"DOCTYPE at EOF",
+-		"<!DOCtype",
+-		"<!DOCTYPE >",
+-	},
+-	// XML processing instructions.
+-	{
+-		"XML processing instruction",
+-		"<?xml?>",
+-		"<!--?xml?-->",
+-	},
+-	// Comments.
+-	{
+-		"comment0",
+-		"abc<b><!-- skipme --></b>def",
+-		"abc$<b>$<!-- skipme -->$</b>$def",
+-	},
+-	{
+-		"comment1",
+-		"a<!-->z",
+-		"a$<!---->$z",
+-	},
+-	{
+-		"comment2",
+-		"a<!--->z",
+-		"a$<!---->$z",
+-	},
+-	{
+-		"comment3",
+-		"a<!--x>-->z",
+-		"a$<!--x>-->$z",
+-	},
+-	{
+-		"comment4",
+-		"a<!--x->-->z",
+-		"a$<!--x->-->$z",
+-	},
+-	{
+-		"comment5",
+-		"a<!>z",
+-		"a$<!---->$z",
+-	},
+-	{
+-		"comment6",
+-		"a<!->z",
+-		"a$<!----->$z",
+-	},
+-	{
+-		"comment7",
+-		"a<!---<>z",
+-		"a$<!---<>z-->",
+-	},
+-	{
+-		"comment8",
+-		"a<!--z",
+-		"a$<!--z-->",
+-	},
+-	{
+-		"comment9",
+-		"a<!--z-",
+-		"a$<!--z-->",
+-	},
+-	{
+-		"comment10",
+-		"a<!--z--",
+-		"a$<!--z-->",
+-	},
+-	{
+-		"comment11",
+-		"a<!--z---",
+-		"a$<!--z--->",
+-	},
+-	{
+-		"comment12",
+-		"a<!--z----",
+-		"a$<!--z---->",
+-	},
+-	{
+-		"comment13",
+-		"a<!--x--!>z",
+-		"a$<!--x-->$z",
+-	},
+-	// An attribute with a backslash.
+-	{
+-		"backslash",
+-		`<p id="a\"b">`,
+-		`<p id="a\" b"="">`,
+-	},
+-	// Entities, tag name and attribute key lower-casing, and whitespace
+-	// normalization within a tag.
+-	{
+-		"tricky",
+-		"<p \t\n iD=\"a&quot;B\"  foo=\"bar\"><EM>te&lt;&amp;;xt</em></p>",
+-		`<p id="a&#34;B" foo="bar">$<em>$te&lt;&amp;;xt$</em>$</p>`,
+-	},
+-	// A nonexistent entity. Tokenizing and converting back to a string should
+-	// escape the "&" to become "&amp;".
+-	{
+-		"noSuchEntity",
+-		`<a b="c&noSuchEntity;d">&lt;&alsoDoesntExist;&`,
+-		`<a b="c&amp;noSuchEntity;d">$&lt;&amp;alsoDoesntExist;&amp;`,
+-	},
+-	{
+-		"entity without semicolon",
+-		`&notit;&notin;<a b="q=z&amp=5&notice=hello&not;=world">`,
+-		`¬it;∉$<a b="q=z&amp;amp=5&amp;notice=hello¬=world">`,
+-	},
+-	{
+-		"entity with digits",
+-		"&frac12;",
+-		"½",
+-	},
+-	// Attribute tests:
+-	// http://dev.w3.org/html5/spec/Overview.html#attributes-0
+-	{
+-		"Empty attribute",
+-		`<input disabled FOO>`,
+-		`<input disabled="" foo="">`,
+-	},
+-	{
+-		"Empty attribute, whitespace",
+-		`<input disabled FOO >`,
+-		`<input disabled="" foo="">`,
+-	},
+-	{
+-		"Unquoted attribute value",
+-		`<input value=yes FOO=BAR>`,
+-		`<input value="yes" foo="BAR">`,
+-	},
+-	{
+-		"Unquoted attribute value, spaces",
+-		`<input value = yes FOO = BAR>`,
+-		`<input value="yes" foo="BAR">`,
+-	},
+-	{
+-		"Unquoted attribute value, trailing space",
+-		`<input value=yes FOO=BAR >`,
+-		`<input value="yes" foo="BAR">`,
+-	},
+-	{
+-		"Single-quoted attribute value",
+-		`<input value='yes' FOO='BAR'>`,
+-		`<input value="yes" foo="BAR">`,
+-	},
+-	{
+-		"Single-quoted attribute value, trailing space",
+-		`<input value='yes' FOO='BAR' >`,
+-		`<input value="yes" foo="BAR">`,
+-	},
+-	{
+-		"Double-quoted attribute value",
+-		`<input value="I'm an attribute" FOO="BAR">`,
+-		`<input value="I&#39;m an attribute" foo="BAR">`,
+-	},
+-	{
+-		"Attribute name characters",
+-		`<meta http-equiv="content-type">`,
+-		`<meta http-equiv="content-type">`,
+-	},
+-	{
+-		"Mixed attributes",
+-		`a<P V="0 1" w='2' X=3 y>z`,
+-		`a$<p v="0 1" w="2" x="3" y="">$z`,
+-	},
+-	{
+-		"Attributes with a solitary single quote",
+-		`<p id=can't><p id=won't>`,
+-		`<p id="can&#39;t">$<p id="won&#39;t">`,
+-	},
+-}
+-
+-func TestTokenizer(t *testing.T) {
+-loop:
+-	for _, tt := range tokenTests {
+-		z := NewTokenizer(strings.NewReader(tt.html))
+-		if tt.golden != "" {
+-			for i, s := range strings.Split(tt.golden, "$") {
+-				if z.Next() == ErrorToken {
+-					t.Errorf("%s token %d: want %q got error %v", tt.desc, i, s, z.Err())
+-					continue loop
+-				}
+-				actual := z.Token().String()
+-				if s != actual {
+-					t.Errorf("%s token %d: want %q got %q", tt.desc, i, s, actual)
+-					continue loop
+-				}
+-			}
+-		}
+-		z.Next()
+-		if z.Err() != io.EOF {
+-			t.Errorf("%s: want EOF got %q", tt.desc, z.Err())
+-		}
+-	}
+-}
+-
+-func TestMaxBuffer(t *testing.T) {
+-	// Exceeding the maximum buffer size generates ErrBufferExceeded.
+-	z := NewTokenizer(strings.NewReader("<" + strings.Repeat("t", 10)))
+-	z.SetMaxBuf(5)
+-	tt := z.Next()
+-	if got, want := tt, ErrorToken; got != want {
+-		t.Fatalf("token type: got: %v want: %v", got, want)
+-	}
+-	if got, want := z.Err(), ErrBufferExceeded; got != want {
+-		t.Errorf("error type: got: %v want: %v", got, want)
+-	}
+-	if got, want := string(z.Raw()), "<tttt"; got != want {
+-		t.Fatalf("buffered before overflow: got: %q want: %q", got, want)
+-	}
+-}
+-
+-func TestMaxBufferReconstruction(t *testing.T) {
+-	// Exceeding the maximum buffer size at any point while tokenizing permits
+-	// reconstructing the original input.
+-tests:
+-	for _, test := range tokenTests {
+-		for maxBuf := 1; ; maxBuf++ {
+-			r := strings.NewReader(test.html)
+-			z := NewTokenizer(r)
+-			z.SetMaxBuf(maxBuf)
+-			var tokenized bytes.Buffer
+-			for {
+-				tt := z.Next()
+-				tokenized.Write(z.Raw())
+-				if tt == ErrorToken {
+-					if err := z.Err(); err != io.EOF && err != ErrBufferExceeded {
+-						t.Errorf("%s: unexpected error: %v", test.desc, err)
+-					}
+-					break
+-				}
+-			}
+-			// Everything tokenized so far, plus untokenized input buffered in z and any data left in the reader.
+-			assembled, err := ioutil.ReadAll(io.MultiReader(&tokenized, bytes.NewReader(z.Buffered()), r))
+-			if err != nil {
+-				t.Errorf("%s: ReadAll: %v", test.desc, err)
+-				continue tests
+-			}
+-			if got, want := string(assembled), test.html; got != want {
+-				t.Errorf("%s: reassembled html:\n got: %q\nwant: %q", test.desc, got, want)
+-				continue tests
+-			}
+-			// EOF indicates that tokenization completed, i.e. this maxBuf is large
+-			// enough to avoid ErrBufferExceeded, so continue to the next test.
+-			if z.Err() == io.EOF {
+-				break
+-			}
+-		} // buffer sizes
+-	} // tests
+-}
+-
+-func TestPassthrough(t *testing.T) {
+-	// Accumulating the raw output for each parse event should reconstruct the
+-	// original input.
+-	for _, test := range tokenTests {
+-		z := NewTokenizer(strings.NewReader(test.html))
+-		var parsed bytes.Buffer
+-		for {
+-			tt := z.Next()
+-			parsed.Write(z.Raw())
+-			if tt == ErrorToken {
+-				break
+-			}
+-		}
+-		if got, want := parsed.String(), test.html; got != want {
+-			t.Errorf("%s: parsed output:\n got: %q\nwant: %q", test.desc, got, want)
+-		}
+-	}
+-}
+-
+-func TestBufAPI(t *testing.T) {
+-	s := "0<a>1</a>2<b>3<a>4<a>5</a>6</b>7</a>8<a/>9"
+-	z := NewTokenizer(bytes.NewBufferString(s))
+-	var result bytes.Buffer
+-	depth := 0
+-loop:
+-	for {
+-		tt := z.Next()
+-		switch tt {
+-		case ErrorToken:
+-			if z.Err() != io.EOF {
+-				t.Error(z.Err())
+-			}
+-			break loop
+-		case TextToken:
+-			if depth > 0 {
+-				result.Write(z.Text())
+-			}
+-		case StartTagToken, EndTagToken:
+-			tn, _ := z.TagName()
+-			if len(tn) == 1 && tn[0] == 'a' {
+-				if tt == StartTagToken {
+-					depth++
+-				} else {
+-					depth--
+-				}
+-			}
+-		}
+-	}
+-	u := "14567"
+-	v := string(result.Bytes())
+-	if u != v {
+-		t.Errorf("TestBufAPI: want %q got %q", u, v)
+-	}
+-}
+-
+-func TestConvertNewlines(t *testing.T) {
+-	testCases := map[string]string{
+-		"Mac\rDOS\r\nUnix\n":    "Mac\nDOS\nUnix\n",
+-		"Unix\nMac\rDOS\r\n":    "Unix\nMac\nDOS\n",
+-		"DOS\r\nDOS\r\nDOS\r\n": "DOS\nDOS\nDOS\n",
+-		"":         "",
+-		"\n":       "\n",
+-		"\n\r":     "\n\n",
+-		"\r":       "\n",
+-		"\r\n":     "\n",
+-		"\r\n\n":   "\n\n",
+-		"\r\n\r":   "\n\n",
+-		"\r\n\r\n": "\n\n",
+-		"\r\r":     "\n\n",
+-		"\r\r\n":   "\n\n",
+-		"\r\r\n\n": "\n\n\n",
+-		"\r\r\r\n": "\n\n\n",
+-		"\r \n":    "\n \n",
+-		"xyz":      "xyz",
+-	}
+-	for in, want := range testCases {
+-		if got := string(convertNewlines([]byte(in))); got != want {
+-			t.Errorf("input %q: got %q, want %q", in, got, want)
+-		}
+-	}
+-}
+-
+-func TestReaderEdgeCases(t *testing.T) {
+-	const s = "<p>An io.Reader can return (0, nil) or (n, io.EOF).</p>"
+-	testCases := []io.Reader{
+-		&zeroOneByteReader{s: s},
+-		&eofStringsReader{s: s},
+-		&stuckReader{},
+-	}
+-	for i, tc := range testCases {
+-		got := []TokenType{}
+-		z := NewTokenizer(tc)
+-		for {
+-			tt := z.Next()
+-			if tt == ErrorToken {
+-				break
+-			}
+-			got = append(got, tt)
+-		}
+-		if err := z.Err(); err != nil && err != io.EOF {
+-			if err != io.ErrNoProgress {
+-				t.Errorf("i=%d: %v", i, err)
+-			}
+-			continue
+-		}
+-		want := []TokenType{
+-			StartTagToken,
+-			TextToken,
+-			EndTagToken,
+-		}
+-		if !reflect.DeepEqual(got, want) {
+-			t.Errorf("i=%d: got %v, want %v", i, got, want)
+-			continue
+-		}
+-	}
+-}
+-
+-// zeroOneByteReader is like a strings.Reader that alternates between
+-// returning 0 bytes and 1 byte at a time.
+-type zeroOneByteReader struct {
+-	s string
+-	n int
+-}
+-
+-func (r *zeroOneByteReader) Read(p []byte) (int, error) {
+-	if len(p) == 0 {
+-		return 0, nil
+-	}
+-	if len(r.s) == 0 {
+-		return 0, io.EOF
+-	}
+-	r.n++
+-	if r.n%2 != 0 {
+-		return 0, nil
+-	}
+-	p[0], r.s = r.s[0], r.s[1:]
+-	return 1, nil
+-}
+-
+-// eofStringsReader is like a strings.Reader but can return an (n, err) where
+-// n > 0 && err != nil.
+-type eofStringsReader struct {
+-	s string
+-}
+-
+-func (r *eofStringsReader) Read(p []byte) (int, error) {
+-	n := copy(p, r.s)
+-	r.s = r.s[n:]
+-	if r.s != "" {
+-		return n, nil
+-	}
+-	return n, io.EOF
+-}
+-
+-// stuckReader is an io.Reader that always returns no data and no error.
+-type stuckReader struct{}
+-
+-func (*stuckReader) Read(p []byte) (int, error) {
+-	return 0, nil
+-}
+-
+-const (
+-	rawLevel = iota
+-	lowLevel
+-	highLevel
+-)
+-
+-func benchmarkTokenizer(b *testing.B, level int) {
+-	buf, err := ioutil.ReadFile("testdata/go1.html")
+-	if err != nil {
+-		b.Fatalf("could not read testdata/go1.html: %v", err)
+-	}
+-	b.SetBytes(int64(len(buf)))
+-	runtime.GC()
+-	b.ReportAllocs()
+-	b.ResetTimer()
+-	for i := 0; i < b.N; i++ {
+-		z := NewTokenizer(bytes.NewBuffer(buf))
+-		for {
+-			tt := z.Next()
+-			if tt == ErrorToken {
+-				if err := z.Err(); err != nil && err != io.EOF {
+-					b.Fatalf("tokenizer error: %v", err)
+-				}
+-				break
+-			}
+-			switch level {
+-			case rawLevel:
+-				// Calling z.Raw just returns the raw bytes of the token. It does
+-				// not unescape &lt; to <, or lower-case tag names and attribute keys.
+-				z.Raw()
+-			case lowLevel:
+-				// Calling z.Text, z.TagName and z.TagAttr returns []byte values
+-				// whose contents may change on the next call to z.Next.
+-				switch tt {
+-				case TextToken, CommentToken, DoctypeToken:
+-					z.Text()
+-				case StartTagToken, SelfClosingTagToken:
+-					_, more := z.TagName()
+-					for more {
+-						_, _, more = z.TagAttr()
+-					}
+-				case EndTagToken:
+-					z.TagName()
+-				}
+-			case highLevel:
+-				// Calling z.Token converts []byte values to strings whose validity
+-				// extends beyond the next call to z.Next.
+-				z.Token()
+-			}
+-		}
+-	}
+-}
+-
+-func BenchmarkRawLevelTokenizer(b *testing.B)  { benchmarkTokenizer(b, rawLevel) }
+-func BenchmarkLowLevelTokenizer(b *testing.B)  { benchmarkTokenizer(b, lowLevel) }
+-func BenchmarkHighLevelTokenizer(b *testing.B) { benchmarkTokenizer(b, highLevel) }
+diff --git a/Godeps/_workspace/src/code.google.com/p/go.net/websocket/client.go b/Godeps/_workspace/src/code.google.com/p/go.net/websocket/client.go
+deleted file mode 100644
+index a861bb9..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/go.net/websocket/client.go
++++ /dev/null
+@@ -1,98 +0,0 @@
+-// Copyright 2009 The Go Authors. All rights reserved.
+-// Use of this source code is governed by a BSD-style
+-// license that can be found in the LICENSE file.
+-
+-package websocket
+-
+-import (
+-	"bufio"
+-	"crypto/tls"
+-	"io"
+-	"net"
+-	"net/http"
+-	"net/url"
+-)
+-
+-// DialError is an error that occurs while dialling a websocket server.
+-type DialError struct {
+-	*Config
+-	Err error
+-}
+-
+-func (e *DialError) Error() string {
+-	return "websocket.Dial " + e.Config.Location.String() + ": " + e.Err.Error()
+-}
+-
+-// NewConfig creates a new WebSocket config for a client connection.
+-func NewConfig(server, origin string) (config *Config, err error) {
+-	config = new(Config)
+-	config.Version = ProtocolVersionHybi13
+-	config.Location, err = url.ParseRequestURI(server)
+-	if err != nil {
+-		return
+-	}
+-	config.Origin, err = url.ParseRequestURI(origin)
+-	if err != nil {
+-		return
+-	}
+-	config.Header = http.Header(make(map[string][]string))
+-	return
+-}
+-
+-// NewClient creates a new WebSocket client connection over rwc.
+-func NewClient(config *Config, rwc io.ReadWriteCloser) (ws *Conn, err error) {
+-	br := bufio.NewReader(rwc)
+-	bw := bufio.NewWriter(rwc)
+-	err = hybiClientHandshake(config, br, bw)
+-	if err != nil {
+-		return
+-	}
+-	buf := bufio.NewReadWriter(br, bw)
+-	ws = newHybiClientConn(config, buf, rwc)
+-	return
+-}
+-
+-// Dial opens a new client connection to a WebSocket.
+-func Dial(url_, protocol, origin string) (ws *Conn, err error) {
+-	config, err := NewConfig(url_, origin)
+-	if err != nil {
+-		return nil, err
+-	}
+-	if protocol != "" {
+-		config.Protocol = []string{protocol}
+-	}
+-	return DialConfig(config)
+-}
+-
+-// DialConfig opens a new client connection to a WebSocket with a config.
+-func DialConfig(config *Config) (ws *Conn, err error) {
+-	var client net.Conn
+-	if config.Location == nil {
+-		return nil, &DialError{config, ErrBadWebSocketLocation}
+-	}
+-	if config.Origin == nil {
+-		return nil, &DialError{config, ErrBadWebSocketOrigin}
+-	}
+-	switch config.Location.Scheme {
+-	case "ws":
+-		client, err = net.Dial("tcp", config.Location.Host)
+-
+-	case "wss":
+-		client, err = tls.Dial("tcp", config.Location.Host, config.TlsConfig)
+-
+-	default:
+-		err = ErrBadScheme
+-	}
+-	if err != nil {
+-		goto Error
+-	}
+-
+-	ws, err = NewClient(config, client)
+-	if err != nil {
+-		goto Error
+-	}
+-	return
+-
+-Error:
+-	return nil, &DialError{config, err}
+-}
+diff --git a/Godeps/_workspace/src/code.google.com/p/go.net/websocket/exampledial_test.go b/Godeps/_workspace/src/code.google.com/p/go.net/websocket/exampledial_test.go
+deleted file mode 100644
+index 777a668..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/go.net/websocket/exampledial_test.go
++++ /dev/null
+@@ -1,31 +0,0 @@
+-// Copyright 2012 The Go Authors. All rights reserved.
+-// Use of this source code is governed by a BSD-style
+-// license that can be found in the LICENSE file.
+-
+-package websocket_test
+-
+-import (
+-	"fmt"
+-	"log"
+-
+-	"code.google.com/p/go.net/websocket"
+-)
+-
+-// This example demonstrates a trivial client.
+-func ExampleDial() {
+-	origin := "http://localhost/"
+-	url := "ws://localhost:12345/ws"
+-	ws, err := websocket.Dial(url, "", origin)
+-	if err != nil {
+-		log.Fatal(err)
+-	}
+-	if _, err := ws.Write([]byte("hello, world!\n")); err != nil {
+-		log.Fatal(err)
+-	}
+-	var msg = make([]byte, 512)
+-	var n int
+-	if n, err = ws.Read(msg); err != nil {
+-		log.Fatal(err)
+-	}
+-	fmt.Printf("Received: %s.\n", msg[:n])
+-}
+diff --git a/Godeps/_workspace/src/code.google.com/p/go.net/websocket/examplehandler_test.go b/Godeps/_workspace/src/code.google.com/p/go.net/websocket/examplehandler_test.go
+deleted file mode 100644
+index 47b0bb9..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/go.net/websocket/examplehandler_test.go
++++ /dev/null
+@@ -1,26 +0,0 @@
+-// Copyright 2012 The Go Authors. All rights reserved.
+-// Use of this source code is governed by a BSD-style
+-// license that can be found in the LICENSE file.
+-
+-package websocket_test
+-
+-import (
+-	"io"
+-	"net/http"
+-
+-	"code.google.com/p/go.net/websocket"
+-)
+-
+-// Echo the data received on the WebSocket.
+-func EchoServer(ws *websocket.Conn) {
+-	io.Copy(ws, ws)
+-}
+-
+-// This example demonstrates a trivial echo server.
+-func ExampleHandler() {
+-	http.Handle("/echo", websocket.Handler(EchoServer))
+-	err := http.ListenAndServe(":12345", nil)
+-	if err != nil {
+-		panic("ListenAndServe: " + err.Error())
+-	}
+-}
+diff --git a/Godeps/_workspace/src/code.google.com/p/go.net/websocket/hybi.go b/Godeps/_workspace/src/code.google.com/p/go.net/websocket/hybi.go
+deleted file mode 100644
+index f8c0b2e..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/go.net/websocket/hybi.go
++++ /dev/null
+@@ -1,564 +0,0 @@
+-// Copyright 2011 The Go Authors. All rights reserved.
+-// Use of this source code is governed by a BSD-style
+-// license that can be found in the LICENSE file.
+-
+-package websocket
+-
+-// This file implements the protocol described in the hybi draft.
+-// http://tools.ietf.org/html/draft-ietf-hybi-thewebsocketprotocol-17
+-
+-import (
+-	"bufio"
+-	"bytes"
+-	"crypto/rand"
+-	"crypto/sha1"
+-	"encoding/base64"
+-	"encoding/binary"
+-	"fmt"
+-	"io"
+-	"io/ioutil"
+-	"net/http"
+-	"net/url"
+-	"strings"
+-)
+-
+-const (
+-	websocketGUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"
+-
+-	closeStatusNormal            = 1000
+-	closeStatusGoingAway         = 1001
+-	closeStatusProtocolError     = 1002
+-	closeStatusUnsupportedData   = 1003
+-	closeStatusFrameTooLarge     = 1004
+-	closeStatusNoStatusRcvd      = 1005
+-	closeStatusAbnormalClosure   = 1006
+-	closeStatusBadMessageData    = 1007
+-	closeStatusPolicyViolation   = 1008
+-	closeStatusTooBigData        = 1009
+-	closeStatusExtensionMismatch = 1010
+-
+-	maxControlFramePayloadLength = 125
+-)
+-
+-var (
+-	ErrBadMaskingKey         = &ProtocolError{"bad masking key"}
+-	ErrBadPongMessage        = &ProtocolError{"bad pong message"}
+-	ErrBadClosingStatus      = &ProtocolError{"bad closing status"}
+-	ErrUnsupportedExtensions = &ProtocolError{"unsupported extensions"}
+-	ErrNotImplemented        = &ProtocolError{"not implemented"}
+-
+-	handshakeHeader = map[string]bool{
+-		"Host":                   true,
+-		"Upgrade":                true,
+-		"Connection":             true,
+-		"Sec-Websocket-Key":      true,
+-		"Sec-Websocket-Origin":   true,
+-		"Sec-Websocket-Version":  true,
+-		"Sec-Websocket-Protocol": true,
+-		"Sec-Websocket-Accept":   true,
+-	}
+-)
+-
+-// A hybiFrameHeader is a frame header as defined in the hybi draft.
+-type hybiFrameHeader struct {
+-	Fin        bool
+-	Rsv        [3]bool
+-	OpCode     byte
+-	Length     int64
+-	MaskingKey []byte
+-
+-	data *bytes.Buffer
+-}
+-
+-// A hybiFrameReader is a reader for a hybi frame.
+-type hybiFrameReader struct {
+-	reader io.Reader
+-
+-	header hybiFrameHeader
+-	pos    int64
+-	length int
+-}
+-
+-func (frame *hybiFrameReader) Read(msg []byte) (n int, err error) {
+-	n, err = frame.reader.Read(msg)
+-	if err != nil {
+-		return 0, err
+-	}
+-	if frame.header.MaskingKey != nil {
+-		for i := 0; i < n; i++ {
+-			msg[i] = msg[i] ^ frame.header.MaskingKey[frame.pos%4]
+-			frame.pos++
+-		}
+-	}
+-	return n, err
+-}
+-
+-func (frame *hybiFrameReader) PayloadType() byte { return frame.header.OpCode }
+-
+-func (frame *hybiFrameReader) HeaderReader() io.Reader {
+-	if frame.header.data == nil {
+-		return nil
+-	}
+-	if frame.header.data.Len() == 0 {
+-		return nil
+-	}
+-	return frame.header.data
+-}
+-
+-func (frame *hybiFrameReader) TrailerReader() io.Reader { return nil }
+-
+-func (frame *hybiFrameReader) Len() (n int) { return frame.length }
+-
+-// A hybiFrameReaderFactory creates a new frame reader based on its frame type.
+-type hybiFrameReaderFactory struct {
+-	*bufio.Reader
+-}
+-
+-// NewFrameReader reads a frame header from the connection and creates a new reader for the frame.
+-// See Section 5.2, Base Framing Protocol, for details.
+-// http://tools.ietf.org/html/draft-ietf-hybi-thewebsocketprotocol-17#section-5.2
+-func (buf hybiFrameReaderFactory) NewFrameReader() (frame frameReader, err error) {
+-	hybiFrame := new(hybiFrameReader)
+-	frame = hybiFrame
+-	var header []byte
+-	var b byte
+-	// First byte. FIN/RSV1/RSV2/RSV3/OpCode(4bits)
+-	b, err = buf.ReadByte()
+-	if err != nil {
+-		return
+-	}
+-	header = append(header, b)
+-	hybiFrame.header.Fin = ((header[0] >> 7) & 1) != 0
+-	for i := 0; i < 3; i++ {
+-		j := uint(6 - i)
+-		hybiFrame.header.Rsv[i] = ((header[0] >> j) & 1) != 0
+-	}
+-	hybiFrame.header.OpCode = header[0] & 0x0f
+-
+-	// Second byte. Mask/Payload len(7bits)
+-	b, err = buf.ReadByte()
+-	if err != nil {
+-		return
+-	}
+-	header = append(header, b)
+-	mask := (b & 0x80) != 0
+-	b &= 0x7f
+-	lengthFields := 0
+-	switch {
+-	case b <= 125: // Payload length 7bits.
+-		hybiFrame.header.Length = int64(b)
+-	case b == 126: // Payload length 7+16bits
+-		lengthFields = 2
+-	case b == 127: // Payload length 7+64bits
+-		lengthFields = 8
+-	}
+-	for i := 0; i < lengthFields; i++ {
+-		b, err = buf.ReadByte()
+-		if err != nil {
+-			return
+-		}
+-		header = append(header, b)
+-		hybiFrame.header.Length = hybiFrame.header.Length*256 + int64(b)
+-	}
+-	if mask {
+-		// Masking key. 4 bytes.
+-		for i := 0; i < 4; i++ {
+-			b, err = buf.ReadByte()
+-			if err != nil {
+-				return
+-			}
+-			header = append(header, b)
+-			hybiFrame.header.MaskingKey = append(hybiFrame.header.MaskingKey, b)
+-		}
+-	}
+-	hybiFrame.reader = io.LimitReader(buf.Reader, hybiFrame.header.Length)
+-	hybiFrame.header.data = bytes.NewBuffer(header)
+-	hybiFrame.length = len(header) + int(hybiFrame.header.Length)
+-	return
+-}
+-
+-// A hybiFrameWriter is a writer for a hybi frame.
+-type hybiFrameWriter struct {
+-	writer *bufio.Writer
+-
+-	header *hybiFrameHeader
+-}
+-
+-func (frame *hybiFrameWriter) Write(msg []byte) (n int, err error) {
+-	var header []byte
+-	var b byte
+-	if frame.header.Fin {
+-		b |= 0x80
+-	}
+-	for i := 0; i < 3; i++ {
+-		if frame.header.Rsv[i] {
+-			j := uint(6 - i)
+-			b |= 1 << j
+-		}
+-	}
+-	b |= frame.header.OpCode
+-	header = append(header, b)
+-	if frame.header.MaskingKey != nil {
+-		b = 0x80
+-	} else {
+-		b = 0
+-	}
+-	lengthFields := 0
+-	length := len(msg)
+-	switch {
+-	case length <= 125:
+-		b |= byte(length)
+-	case length < 65536:
+-		b |= 126
+-		lengthFields = 2
+-	default:
+-		b |= 127
+-		lengthFields = 8
+-	}
+-	header = append(header, b)
+-	for i := 0; i < lengthFields; i++ {
+-		j := uint((lengthFields - i - 1) * 8)
+-		b = byte((length >> j) & 0xff)
+-		header = append(header, b)
+-	}
+-	if frame.header.MaskingKey != nil {
+-		if len(frame.header.MaskingKey) != 4 {
+-			return 0, ErrBadMaskingKey
+-		}
+-		header = append(header, frame.header.MaskingKey...)
+-		frame.writer.Write(header)
+-		data := make([]byte, length)
+-		for i := range data {
+-			data[i] = msg[i] ^ frame.header.MaskingKey[i%4]
+-		}
+-		frame.writer.Write(data)
+-		err = frame.writer.Flush()
+-		return length, err
+-	}
+-	frame.writer.Write(header)
+-	frame.writer.Write(msg)
+-	err = frame.writer.Flush()
+-	return length, err
+-}
+-
+-func (frame *hybiFrameWriter) Close() error { return nil }
+-
+-type hybiFrameWriterFactory struct {
+-	*bufio.Writer
+-	needMaskingKey bool
+-}
+-
+-func (buf hybiFrameWriterFactory) NewFrameWriter(payloadType byte) (frame frameWriter, err error) {
+-	frameHeader := &hybiFrameHeader{Fin: true, OpCode: payloadType}
+-	if buf.needMaskingKey {
+-		frameHeader.MaskingKey, err = generateMaskingKey()
+-		if err != nil {
+-			return nil, err
+-		}
+-	}
+-	return &hybiFrameWriter{writer: buf.Writer, header: frameHeader}, nil
+-}
+-
+-type hybiFrameHandler struct {
+-	conn        *Conn
+-	payloadType byte
+-}
+-
+-func (handler *hybiFrameHandler) HandleFrame(frame frameReader) (r frameReader, err error) {
+-	if handler.conn.IsServerConn() {
+-		// The client MUST mask all frames sent to the server.
+-		if frame.(*hybiFrameReader).header.MaskingKey == nil {
+-			handler.WriteClose(closeStatusProtocolError)
+-			return nil, io.EOF
+-		}
+-	} else {
+-		// The server MUST NOT mask any frames it sends to the client.
+-		if frame.(*hybiFrameReader).header.MaskingKey != nil {
+-			handler.WriteClose(closeStatusProtocolError)
+-			return nil, io.EOF
+-		}
+-	}
+-	if header := frame.HeaderReader(); header != nil {
+-		io.Copy(ioutil.Discard, header)
+-	}
+-	switch frame.PayloadType() {
+-	case ContinuationFrame:
+-		frame.(*hybiFrameReader).header.OpCode = handler.payloadType
+-	case TextFrame, BinaryFrame:
+-		handler.payloadType = frame.PayloadType()
+-	case CloseFrame:
+-		return nil, io.EOF
+-	case PingFrame:
+-		pingMsg := make([]byte, maxControlFramePayloadLength)
+-		n, err := io.ReadFull(frame, pingMsg)
+-		if err != nil && err != io.ErrUnexpectedEOF {
+-			return nil, err
+-		}
+-		io.Copy(ioutil.Discard, frame)
+-		n, err = handler.WritePong(pingMsg[:n])
+-		if err != nil {
+-			return nil, err
+-		}
+-		return nil, nil
+-	case PongFrame:
+-		return nil, ErrNotImplemented
+-	}
+-	return frame, nil
+-}
+-
+-func (handler *hybiFrameHandler) WriteClose(status int) (err error) {
+-	handler.conn.wio.Lock()
+-	defer handler.conn.wio.Unlock()
+-	w, err := handler.conn.frameWriterFactory.NewFrameWriter(CloseFrame)
+-	if err != nil {
+-		return err
+-	}
+-	msg := make([]byte, 2)
+-	binary.BigEndian.PutUint16(msg, uint16(status))
+-	_, err = w.Write(msg)
+-	w.Close()
+-	return err
+-}
+-
+-func (handler *hybiFrameHandler) WritePong(msg []byte) (n int, err error) {
+-	handler.conn.wio.Lock()
+-	defer handler.conn.wio.Unlock()
+-	w, err := handler.conn.frameWriterFactory.NewFrameWriter(PongFrame)
+-	if err != nil {
+-		return 0, err
+-	}
+-	n, err = w.Write(msg)
+-	w.Close()
+-	return n, err
+-}
+-
+-// newHybiConn creates a new WebSocket connection speaking the hybi draft protocol.
+-func newHybiConn(config *Config, buf *bufio.ReadWriter, rwc io.ReadWriteCloser, request *http.Request) *Conn {
+-	if buf == nil {
+-		br := bufio.NewReader(rwc)
+-		bw := bufio.NewWriter(rwc)
+-		buf = bufio.NewReadWriter(br, bw)
+-	}
+-	ws := &Conn{config: config, request: request, buf: buf, rwc: rwc,
+-		frameReaderFactory: hybiFrameReaderFactory{buf.Reader},
+-		frameWriterFactory: hybiFrameWriterFactory{
+-			buf.Writer, request == nil},
+-		PayloadType:        TextFrame,
+-		defaultCloseStatus: closeStatusNormal}
+-	ws.frameHandler = &hybiFrameHandler{conn: ws}
+-	return ws
+-}
+-
+-// generateMaskingKey generates a masking key for a frame.
+-func generateMaskingKey() (maskingKey []byte, err error) {
+-	maskingKey = make([]byte, 4)
+-	if _, err = io.ReadFull(rand.Reader, maskingKey); err != nil {
+-		return
+-	}
+-	return
+-}
+-
+-// generateNonce generates a nonce consisting of a randomly selected 16-byte
+-// value that has been base64-encoded.
+-func generateNonce() (nonce []byte) {
+-	key := make([]byte, 16)
+-	if _, err := io.ReadFull(rand.Reader, key); err != nil {
+-		panic(err)
+-	}
+-	nonce = make([]byte, 24)
+-	base64.StdEncoding.Encode(nonce, key)
+-	return
+-}
+-
+-// getNonceAccept computes the base64-encoded SHA-1 of the concatenation of
+-// the nonce ("Sec-WebSocket-Key" value) with the websocket GUID string.
+-func getNonceAccept(nonce []byte) (expected []byte, err error) {
+-	h := sha1.New()
+-	if _, err = h.Write(nonce); err != nil {
+-		return
+-	}
+-	if _, err = h.Write([]byte(websocketGUID)); err != nil {
+-		return
+-	}
+-	expected = make([]byte, 28)
+-	base64.StdEncoding.Encode(expected, h.Sum(nil))
+-	return
+-}
+-
+-// Client handshake described in draft-ietf-hybi-thewebsocketprotocol-17
+-func hybiClientHandshake(config *Config, br *bufio.Reader, bw *bufio.Writer) (err error) {
+-	bw.WriteString("GET " + config.Location.RequestURI() + " HTTP/1.1\r\n")
+-
+-	bw.WriteString("Host: " + config.Location.Host + "\r\n")
+-	bw.WriteString("Upgrade: websocket\r\n")
+-	bw.WriteString("Connection: Upgrade\r\n")
+-	nonce := generateNonce()
+-	if config.handshakeData != nil {
+-		nonce = []byte(config.handshakeData["key"])
+-	}
+-	bw.WriteString("Sec-WebSocket-Key: " + string(nonce) + "\r\n")
+-	bw.WriteString("Origin: " + strings.ToLower(config.Origin.String()) + "\r\n")
+-
+-	if config.Version != ProtocolVersionHybi13 {
+-		return ErrBadProtocolVersion
+-	}
+-
+-	bw.WriteString("Sec-WebSocket-Version: " + fmt.Sprintf("%d", config.Version) + "\r\n")
+-	if len(config.Protocol) > 0 {
+-		bw.WriteString("Sec-WebSocket-Protocol: " + strings.Join(config.Protocol, ", ") + "\r\n")
+-	}
+-	// TODO(ukai): send Sec-WebSocket-Extensions.
+-	err = config.Header.WriteSubset(bw, handshakeHeader)
+-	if err != nil {
+-		return err
+-	}
+-
+-	bw.WriteString("\r\n")
+-	if err = bw.Flush(); err != nil {
+-		return err
+-	}
+-
+-	resp, err := http.ReadResponse(br, &http.Request{Method: "GET"})
+-	if err != nil {
+-		return err
+-	}
+-	if resp.StatusCode != 101 {
+-		return ErrBadStatus
+-	}
+-	if strings.ToLower(resp.Header.Get("Upgrade")) != "websocket" ||
+-		strings.ToLower(resp.Header.Get("Connection")) != "upgrade" {
+-		return ErrBadUpgrade
+-	}
+-	expectedAccept, err := getNonceAccept(nonce)
+-	if err != nil {
+-		return err
+-	}
+-	if resp.Header.Get("Sec-WebSocket-Accept") != string(expectedAccept) {
+-		return ErrChallengeResponse
+-	}
+-	if resp.Header.Get("Sec-WebSocket-Extensions") != "" {
+-		return ErrUnsupportedExtensions
+-	}
+-	offeredProtocol := resp.Header.Get("Sec-WebSocket-Protocol")
+-	if offeredProtocol != "" {
+-		protocolMatched := false
+-		for i := 0; i < len(config.Protocol); i++ {
+-			if config.Protocol[i] == offeredProtocol {
+-				protocolMatched = true
+-				break
+-			}
+-		}
+-		if !protocolMatched {
+-			return ErrBadWebSocketProtocol
+-		}
+-		config.Protocol = []string{offeredProtocol}
+-	}
+-
+-	return nil
+-}
+-
+-// newHybiClientConn creates a client WebSocket connection after handshake.
+-func newHybiClientConn(config *Config, buf *bufio.ReadWriter, rwc io.ReadWriteCloser) *Conn {
+-	return newHybiConn(config, buf, rwc, nil)
+-}
+-
+-// A hybiServerHandshaker performs a server handshake using the hybi draft protocol.
+-type hybiServerHandshaker struct {
+-	*Config
+-	accept []byte
+-}
+-
+-func (c *hybiServerHandshaker) ReadHandshake(buf *bufio.Reader, req *http.Request) (code int, err error) {
+-	c.Version = ProtocolVersionHybi13
+-	if req.Method != "GET" {
+-		return http.StatusMethodNotAllowed, ErrBadRequestMethod
+-	}
+-	// HTTP version can be safely ignored.
+-
+-	if strings.ToLower(req.Header.Get("Upgrade")) != "websocket" ||
+-		!strings.Contains(strings.ToLower(req.Header.Get("Connection")), "upgrade") {
+-		return http.StatusBadRequest, ErrNotWebSocket
+-	}
+-
+-	key := req.Header.Get("Sec-Websocket-Key")
+-	if key == "" {
+-		return http.StatusBadRequest, ErrChallengeResponse
+-	}
+-	version := req.Header.Get("Sec-Websocket-Version")
+-	switch version {
+-	case "13":
+-		c.Version = ProtocolVersionHybi13
+-	default:
+-		return http.StatusBadRequest, ErrBadWebSocketVersion
+-	}
+-	var scheme string
+-	if req.TLS != nil {
+-		scheme = "wss"
+-	} else {
+-		scheme = "ws"
+-	}
+-	c.Location, err = url.ParseRequestURI(scheme + "://" + req.Host + req.URL.RequestURI())
+-	if err != nil {
+-		return http.StatusBadRequest, err
+-	}
+-	protocol := strings.TrimSpace(req.Header.Get("Sec-Websocket-Protocol"))
+-	if protocol != "" {
+-		protocols := strings.Split(protocol, ",")
+-		for i := 0; i < len(protocols); i++ {
+-			c.Protocol = append(c.Protocol, strings.TrimSpace(protocols[i]))
+-		}
+-	}
+-	c.accept, err = getNonceAccept([]byte(key))
+-	if err != nil {
+-		return http.StatusInternalServerError, err
+-	}
+-	return http.StatusSwitchingProtocols, nil
+-}
+-
+-// Origin parses the Origin header in "req".
+-// If the origin is "null", it returns (nil, nil).
+-func Origin(config *Config, req *http.Request) (*url.URL, error) {
+-	var origin string
+-	switch config.Version {
+-	case ProtocolVersionHybi13:
+-		origin = req.Header.Get("Origin")
+-	}
+-	if origin == "null" {
+-		return nil, nil
+-	}
+-	return url.ParseRequestURI(origin)
+-}
+-
+-func (c *hybiServerHandshaker) AcceptHandshake(buf *bufio.Writer) (err error) {
+-	if len(c.Protocol) > 0 {
+-		if len(c.Protocol) != 1 {
+-			// You need to choose a Protocol in the Handshake func in Server.
+-			return ErrBadWebSocketProtocol
+-		}
+-	}
+-	buf.WriteString("HTTP/1.1 101 Switching Protocols\r\n")
+-	buf.WriteString("Upgrade: websocket\r\n")
+-	buf.WriteString("Connection: Upgrade\r\n")
+-	buf.WriteString("Sec-WebSocket-Accept: " + string(c.accept) + "\r\n")
+-	if len(c.Protocol) > 0 {
+-		buf.WriteString("Sec-WebSocket-Protocol: " + c.Protocol[0] + "\r\n")
+-	}
+-	// TODO(ukai): send Sec-WebSocket-Extensions.
+-	if c.Header != nil {
+-		err := c.Header.WriteSubset(buf, handshakeHeader)
+-		if err != nil {
+-			return err
+-		}
+-	}
+-	buf.WriteString("\r\n")
+-	return buf.Flush()
+-}
+-
+-func (c *hybiServerHandshaker) NewServerConn(buf *bufio.ReadWriter, rwc io.ReadWriteCloser, request *http.Request) *Conn {
+-	return newHybiServerConn(c.Config, buf, rwc, request)
+-}
+-
+-// newHybiServerConn returns a new WebSocket connection speaking hybi draft protocol.
+-func newHybiServerConn(config *Config, buf *bufio.ReadWriter, rwc io.ReadWriteCloser, request *http.Request) *Conn {
+-	return newHybiConn(config, buf, rwc, request)
+-}
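The ReadHandshake method above derives the accept token from the client's Sec-WebSocket-Key via getNonceAccept (whose body is outside this hunk). The computation itself is fixed by RFC 6455, section 1.3: append a well-known GUID to the key, take the SHA-1 digest, and base64-encode it. A standalone sketch of that rule (the name nonceAccept is mine, not the package's):

```go
package main

import (
	"crypto/sha1"
	"encoding/base64"
	"fmt"
)

// websocketGUID is the fixed GUID from RFC 6455, section 1.3.
const websocketGUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"

// nonceAccept sketches what the deleted getNonceAccept computes:
// base64(SHA-1(client key + GUID)).
func nonceAccept(key string) string {
	h := sha1.New()
	h.Write([]byte(key + websocketGUID))
	return base64.StdEncoding.EncodeToString(h.Sum(nil))
}

func main() {
	// The sample key from RFC 6455, also used by TestSecWebSocketAccept below.
	fmt.Println(nonceAccept("dGhlIHNhbXBsZSBub25jZQ==")) // s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
}
```

The same pair ("dGhlIHNhbXBsZSBub25jZQ==" / "s3pPLMBiTxaQ9kYGzzhZRbK+xOo=") appears throughout the tests in hybi_test.go.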
+diff --git a/Godeps/_workspace/src/code.google.com/p/go.net/websocket/hybi_test.go b/Godeps/_workspace/src/code.google.com/p/go.net/websocket/hybi_test.go
+deleted file mode 100644
+index d6a1910..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/go.net/websocket/hybi_test.go
++++ /dev/null
+@@ -1,590 +0,0 @@
+-// Copyright 2011 The Go Authors. All rights reserved.
+-// Use of this source code is governed by a BSD-style
+-// license that can be found in the LICENSE file.
+-
+-package websocket
+-
+-import (
+-	"bufio"
+-	"bytes"
+-	"fmt"
+-	"io"
+-	"net/http"
+-	"net/url"
+-	"strings"
+-	"testing"
+-)
+-
+-// Test the getNonceAccept function with values in
+-// http://tools.ietf.org/html/draft-ietf-hybi-thewebsocketprotocol-17
+-func TestSecWebSocketAccept(t *testing.T) {
+-	nonce := []byte("dGhlIHNhbXBsZSBub25jZQ==")
+-	expected := []byte("s3pPLMBiTxaQ9kYGzzhZRbK+xOo=")
+-	accept, err := getNonceAccept(nonce)
+-	if err != nil {
+-		t.Errorf("getNonceAccept: returned error %v", err)
+-		return
+-	}
+-	if !bytes.Equal(expected, accept) {
+-		t.Errorf("getNonceAccept: expected %q got %q", expected, accept)
+-	}
+-}
+-
+-func TestHybiClientHandshake(t *testing.T) {
+-	b := bytes.NewBuffer([]byte{})
+-	bw := bufio.NewWriter(b)
+-	br := bufio.NewReader(strings.NewReader(`HTTP/1.1 101 Switching Protocols
+-Upgrade: websocket
+-Connection: Upgrade
+-Sec-WebSocket-Accept: s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
+-Sec-WebSocket-Protocol: chat
+-
+-`))
+-	var err error
+-	config := new(Config)
+-	config.Location, err = url.ParseRequestURI("ws://server.example.com/chat")
+-	if err != nil {
+-		t.Fatal("location url", err)
+-	}
+-	config.Origin, err = url.ParseRequestURI("http://example.com")
+-	if err != nil {
+-		t.Fatal("origin url", err)
+-	}
+-	config.Protocol = append(config.Protocol, "chat")
+-	config.Protocol = append(config.Protocol, "superchat")
+-	config.Version = ProtocolVersionHybi13
+-
+-	config.handshakeData = map[string]string{
+-		"key": "dGhlIHNhbXBsZSBub25jZQ==",
+-	}
+-	err = hybiClientHandshake(config, br, bw)
+-	if err != nil {
+-		t.Errorf("handshake failed: %v", err)
+-	}
+-	req, err := http.ReadRequest(bufio.NewReader(b))
+-	if err != nil {
+-		t.Fatalf("read request: %v", err)
+-	}
+-	if req.Method != "GET" {
+-		t.Errorf("request method expected GET, but got %q", req.Method)
+-	}
+-	if req.URL.Path != "/chat" {
+-		t.Errorf("request path expected /chat, but got %q", req.URL.Path)
+-	}
+-	if req.Proto != "HTTP/1.1" {
+-		t.Errorf("request proto expected HTTP/1.1, but got %q", req.Proto)
+-	}
+-	if req.Host != "server.example.com" {
+-		t.Errorf("request Host expected server.example.com, but got %v", req.Host)
+-	}
+-	var expectedHeader = map[string]string{
+-		"Connection":             "Upgrade",
+-		"Upgrade":                "websocket",
+-		"Sec-Websocket-Key":      config.handshakeData["key"],
+-		"Origin":                 config.Origin.String(),
+-		"Sec-Websocket-Protocol": "chat, superchat",
+-		"Sec-Websocket-Version":  fmt.Sprintf("%d", ProtocolVersionHybi13),
+-	}
+-	for k, v := range expectedHeader {
+-		if req.Header.Get(k) != v {
+-			t.Errorf(fmt.Sprintf("%s expected %q but got %q", k, v, req.Header.Get(k)))
+-		}
+-	}
+-}
+-
+-func TestHybiClientHandshakeWithHeader(t *testing.T) {
+-	b := bytes.NewBuffer([]byte{})
+-	bw := bufio.NewWriter(b)
+-	br := bufio.NewReader(strings.NewReader(`HTTP/1.1 101 Switching Protocols
+-Upgrade: websocket
+-Connection: Upgrade
+-Sec-WebSocket-Accept: s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
+-Sec-WebSocket-Protocol: chat
+-
+-`))
+-	var err error
+-	config := new(Config)
+-	config.Location, err = url.ParseRequestURI("ws://server.example.com/chat")
+-	if err != nil {
+-		t.Fatal("location url", err)
+-	}
+-	config.Origin, err = url.ParseRequestURI("http://example.com")
+-	if err != nil {
+-		t.Fatal("origin url", err)
+-	}
+-	config.Protocol = append(config.Protocol, "chat")
+-	config.Protocol = append(config.Protocol, "superchat")
+-	config.Version = ProtocolVersionHybi13
+-	config.Header = http.Header(make(map[string][]string))
+-	config.Header.Add("User-Agent", "test")
+-
+-	config.handshakeData = map[string]string{
+-		"key": "dGhlIHNhbXBsZSBub25jZQ==",
+-	}
+-	err = hybiClientHandshake(config, br, bw)
+-	if err != nil {
+-		t.Errorf("handshake failed: %v", err)
+-	}
+-	req, err := http.ReadRequest(bufio.NewReader(b))
+-	if err != nil {
+-		t.Fatalf("read request: %v", err)
+-	}
+-	if req.Method != "GET" {
+-		t.Errorf("request method expected GET, but got %q", req.Method)
+-	}
+-	if req.URL.Path != "/chat" {
+-		t.Errorf("request path expected /chat, but got %q", req.URL.Path)
+-	}
+-	if req.Proto != "HTTP/1.1" {
+-		t.Errorf("request proto expected HTTP/1.1, but got %q", req.Proto)
+-	}
+-	if req.Host != "server.example.com" {
+-		t.Errorf("request Host expected server.example.com, but got %v", req.Host)
+-	}
+-	var expectedHeader = map[string]string{
+-		"Connection":             "Upgrade",
+-		"Upgrade":                "websocket",
+-		"Sec-Websocket-Key":      config.handshakeData["key"],
+-		"Origin":                 config.Origin.String(),
+-		"Sec-Websocket-Protocol": "chat, superchat",
+-		"Sec-Websocket-Version":  fmt.Sprintf("%d", ProtocolVersionHybi13),
+-		"User-Agent":             "test",
+-	}
+-	for k, v := range expectedHeader {
+-		if req.Header.Get(k) != v {
+-			t.Errorf(fmt.Sprintf("%s expected %q but got %q", k, v, req.Header.Get(k)))
+-		}
+-	}
+-}
+-
+-func TestHybiServerHandshake(t *testing.T) {
+-	config := new(Config)
+-	handshaker := &hybiServerHandshaker{Config: config}
+-	br := bufio.NewReader(strings.NewReader(`GET /chat HTTP/1.1
+-Host: server.example.com
+-Upgrade: websocket
+-Connection: Upgrade
+-Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==
+-Origin: http://example.com
+-Sec-WebSocket-Protocol: chat, superchat
+-Sec-WebSocket-Version: 13
+-
+-`))
+-	req, err := http.ReadRequest(br)
+-	if err != nil {
+-		t.Fatal("request", err)
+-	}
+-	code, err := handshaker.ReadHandshake(br, req)
+-	if err != nil {
+-		t.Errorf("handshake failed: %v", err)
+-	}
+-	if code != http.StatusSwitchingProtocols {
+-		t.Errorf("status expected %q but got %q", http.StatusSwitchingProtocols, code)
+-	}
+-	expectedProtocols := []string{"chat", "superchat"}
+-	if fmt.Sprintf("%v", config.Protocol) != fmt.Sprintf("%v", expectedProtocols) {
+-		t.Errorf("protocol expected %q but got %q", expectedProtocols, config.Protocol)
+-	}
+-	b := bytes.NewBuffer([]byte{})
+-	bw := bufio.NewWriter(b)
+-
+-	config.Protocol = config.Protocol[:1]
+-
+-	err = handshaker.AcceptHandshake(bw)
+-	if err != nil {
+-		t.Errorf("handshake response failed: %v", err)
+-	}
+-	expectedResponse := strings.Join([]string{
+-		"HTTP/1.1 101 Switching Protocols",
+-		"Upgrade: websocket",
+-		"Connection: Upgrade",
+-		"Sec-WebSocket-Accept: s3pPLMBiTxaQ9kYGzzhZRbK+xOo=",
+-		"Sec-WebSocket-Protocol: chat",
+-		"", ""}, "\r\n")
+-
+-	if b.String() != expectedResponse {
+-		t.Errorf("handshake expected %q but got %q", expectedResponse, b.String())
+-	}
+-}
+-
+-func TestHybiServerHandshakeNoSubProtocol(t *testing.T) {
+-	config := new(Config)
+-	handshaker := &hybiServerHandshaker{Config: config}
+-	br := bufio.NewReader(strings.NewReader(`GET /chat HTTP/1.1
+-Host: server.example.com
+-Upgrade: websocket
+-Connection: Upgrade
+-Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==
+-Origin: http://example.com
+-Sec-WebSocket-Version: 13
+-
+-`))
+-	req, err := http.ReadRequest(br)
+-	if err != nil {
+-		t.Fatal("request", err)
+-	}
+-	code, err := handshaker.ReadHandshake(br, req)
+-	if err != nil {
+-		t.Errorf("handshake failed: %v", err)
+-	}
+-	if code != http.StatusSwitchingProtocols {
+-		t.Errorf("status expected %q but got %q", http.StatusSwitchingProtocols, code)
+-	}
+-	if len(config.Protocol) != 0 {
+-		t.Errorf("len(config.Protocol) expected 0, but got %q", len(config.Protocol))
+-	}
+-	b := bytes.NewBuffer([]byte{})
+-	bw := bufio.NewWriter(b)
+-
+-	err = handshaker.AcceptHandshake(bw)
+-	if err != nil {
+-		t.Errorf("handshake response failed: %v", err)
+-	}
+-	expectedResponse := strings.Join([]string{
+-		"HTTP/1.1 101 Switching Protocols",
+-		"Upgrade: websocket",
+-		"Connection: Upgrade",
+-		"Sec-WebSocket-Accept: s3pPLMBiTxaQ9kYGzzhZRbK+xOo=",
+-		"", ""}, "\r\n")
+-
+-	if b.String() != expectedResponse {
+-		t.Errorf("handshake expected %q but got %q", expectedResponse, b.String())
+-	}
+-}
+-
+-func TestHybiServerHandshakeHybiBadVersion(t *testing.T) {
+-	config := new(Config)
+-	handshaker := &hybiServerHandshaker{Config: config}
+-	br := bufio.NewReader(strings.NewReader(`GET /chat HTTP/1.1
+-Host: server.example.com
+-Upgrade: websocket
+-Connection: Upgrade
+-Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==
+-Sec-WebSocket-Origin: http://example.com
+-Sec-WebSocket-Protocol: chat, superchat
+-Sec-WebSocket-Version: 9
+-
+-`))
+-	req, err := http.ReadRequest(br)
+-	if err != nil {
+-		t.Fatal("request", err)
+-	}
+-	code, err := handshaker.ReadHandshake(br, req)
+-	if err != ErrBadWebSocketVersion {
+-		t.Errorf("handshake expected err %q but got %q", ErrBadWebSocketVersion, err)
+-	}
+-	if code != http.StatusBadRequest {
+-		t.Errorf("status expected %q but got %q", http.StatusBadRequest, code)
+-	}
+-}
+-
+-func testHybiFrame(t *testing.T, testHeader, testPayload, testMaskedPayload []byte, frameHeader *hybiFrameHeader) {
+-	b := bytes.NewBuffer([]byte{})
+-	frameWriterFactory := &hybiFrameWriterFactory{bufio.NewWriter(b), false}
+-	w, _ := frameWriterFactory.NewFrameWriter(TextFrame)
+-	w.(*hybiFrameWriter).header = frameHeader
+-	_, err := w.Write(testPayload)
+-	w.Close()
+-	if err != nil {
+-		t.Errorf("Write error %q", err)
+-	}
+-	var expectedFrame []byte
+-	expectedFrame = append(expectedFrame, testHeader...)
+-	expectedFrame = append(expectedFrame, testMaskedPayload...)
+-	if !bytes.Equal(expectedFrame, b.Bytes()) {
+-		t.Errorf("frame expected %q got %q", expectedFrame, b.Bytes())
+-	}
+-	frameReaderFactory := &hybiFrameReaderFactory{bufio.NewReader(b)}
+-	r, err := frameReaderFactory.NewFrameReader()
+-	if err != nil {
+-		t.Errorf("Read error %q", err)
+-	}
+-	if header := r.HeaderReader(); header == nil {
+-		t.Errorf("no header")
+-	} else {
+-		actualHeader := make([]byte, r.Len())
+-		n, err := header.Read(actualHeader)
+-		if err != nil {
+-			t.Errorf("Read header error %q", err)
+-		} else {
+-			if n < len(testHeader) {
+-				t.Errorf("header too short %q got %q", testHeader, actualHeader[:n])
+-			}
+-			if !bytes.Equal(testHeader, actualHeader[:n]) {
+-				t.Errorf("header expected %q got %q", testHeader, actualHeader[:n])
+-			}
+-		}
+-	}
+-	if trailer := r.TrailerReader(); trailer != nil {
+-		t.Errorf("unexpected trailer %q", trailer)
+-	}
+-	frame := r.(*hybiFrameReader)
+-	if frameHeader.Fin != frame.header.Fin ||
+-		frameHeader.OpCode != frame.header.OpCode ||
+-		len(testPayload) != int(frame.header.Length) {
+-		t.Errorf("mismatch %v (%d) vs %v", frameHeader, len(testPayload), frame)
+-	}
+-	payload := make([]byte, len(testPayload))
+-	_, err = r.Read(payload)
+-	if err != nil {
+-		t.Errorf("read %v", err)
+-	}
+-	if !bytes.Equal(testPayload, payload) {
+-		t.Errorf("payload %q vs %q", testPayload, payload)
+-	}
+-}
+-
+-func TestHybiShortTextFrame(t *testing.T) {
+-	frameHeader := &hybiFrameHeader{Fin: true, OpCode: TextFrame}
+-	payload := []byte("hello")
+-	testHybiFrame(t, []byte{0x81, 0x05}, payload, payload, frameHeader)
+-
+-	payload = make([]byte, 125)
+-	testHybiFrame(t, []byte{0x81, 125}, payload, payload, frameHeader)
+-}
+-
+-func TestHybiShortMaskedTextFrame(t *testing.T) {
+-	frameHeader := &hybiFrameHeader{Fin: true, OpCode: TextFrame,
+-		MaskingKey: []byte{0xcc, 0x55, 0x80, 0x20}}
+-	payload := []byte("hello")
+-	maskedPayload := []byte{0xa4, 0x30, 0xec, 0x4c, 0xa3}
+-	header := []byte{0x81, 0x85}
+-	header = append(header, frameHeader.MaskingKey...)
+-	testHybiFrame(t, header, payload, maskedPayload, frameHeader)
+-}
+-
+-func TestHybiShortBinaryFrame(t *testing.T) {
+-	frameHeader := &hybiFrameHeader{Fin: true, OpCode: BinaryFrame}
+-	payload := []byte("hello")
+-	testHybiFrame(t, []byte{0x82, 0x05}, payload, payload, frameHeader)
+-
+-	payload = make([]byte, 125)
+-	testHybiFrame(t, []byte{0x82, 125}, payload, payload, frameHeader)
+-}
+-
+-func TestHybiControlFrame(t *testing.T) {
+-	frameHeader := &hybiFrameHeader{Fin: true, OpCode: PingFrame}
+-	payload := []byte("hello")
+-	testHybiFrame(t, []byte{0x89, 0x05}, payload, payload, frameHeader)
+-
+-	frameHeader = &hybiFrameHeader{Fin: true, OpCode: PongFrame}
+-	testHybiFrame(t, []byte{0x8A, 0x05}, payload, payload, frameHeader)
+-
+-	frameHeader = &hybiFrameHeader{Fin: true, OpCode: CloseFrame}
+-	payload = []byte{0x03, 0xe8} // 1000
+-	testHybiFrame(t, []byte{0x88, 0x02}, payload, payload, frameHeader)
+-}
+-
+-func TestHybiLongFrame(t *testing.T) {
+-	frameHeader := &hybiFrameHeader{Fin: true, OpCode: TextFrame}
+-	payload := make([]byte, 126)
+-	testHybiFrame(t, []byte{0x81, 126, 0x00, 126}, payload, payload, frameHeader)
+-
+-	payload = make([]byte, 65535)
+-	testHybiFrame(t, []byte{0x81, 126, 0xff, 0xff}, payload, payload, frameHeader)
+-
+-	payload = make([]byte, 65536)
+-	testHybiFrame(t, []byte{0x81, 127, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00}, payload, payload, frameHeader)
+-}
+-
+-func TestHybiClientRead(t *testing.T) {
+-	wireData := []byte{0x81, 0x05, 'h', 'e', 'l', 'l', 'o',
+-		0x89, 0x05, 'h', 'e', 'l', 'l', 'o', // ping
+-		0x81, 0x05, 'w', 'o', 'r', 'l', 'd'}
+-	br := bufio.NewReader(bytes.NewBuffer(wireData))
+-	bw := bufio.NewWriter(bytes.NewBuffer([]byte{}))
+-	conn := newHybiConn(newConfig(t, "/"), bufio.NewReadWriter(br, bw), nil, nil)
+-
+-	msg := make([]byte, 512)
+-	n, err := conn.Read(msg)
+-	if err != nil {
+-		t.Errorf("read 1st frame, error %q", err)
+-	}
+-	if n != 5 {
+-		t.Errorf("read 1st frame, expect 5, got %d", n)
+-	}
+-	if !bytes.Equal(wireData[2:7], msg[:n]) {
+-		t.Errorf("read 1st frame %v, got %v", wireData[2:7], msg[:n])
+-	}
+-	n, err = conn.Read(msg)
+-	if err != nil {
+-		t.Errorf("read 2nd frame, error %q", err)
+-	}
+-	if n != 5 {
+-		t.Errorf("read 2nd frame, expect 5, got %d", n)
+-	}
+-	if !bytes.Equal(wireData[16:21], msg[:n]) {
+-		t.Errorf("read 2nd frame %v, got %v", wireData[16:21], msg[:n])
+-	}
+-	n, err = conn.Read(msg)
+-	if err == nil {
+-		t.Errorf("read not EOF")
+-	}
+-	if n != 0 {
+-		t.Errorf("expect read 0, got %d", n)
+-	}
+-}
+-
+-func TestHybiShortRead(t *testing.T) {
+-	wireData := []byte{0x81, 0x05, 'h', 'e', 'l', 'l', 'o',
+-		0x89, 0x05, 'h', 'e', 'l', 'l', 'o', // ping
+-		0x81, 0x05, 'w', 'o', 'r', 'l', 'd'}
+-	br := bufio.NewReader(bytes.NewBuffer(wireData))
+-	bw := bufio.NewWriter(bytes.NewBuffer([]byte{}))
+-	conn := newHybiConn(newConfig(t, "/"), bufio.NewReadWriter(br, bw), nil, nil)
+-
+-	step := 0
+-	pos := 0
+-	expectedPos := []int{2, 5, 16, 19}
+-	expectedLen := []int{3, 2, 3, 2}
+-	for {
+-		msg := make([]byte, 3)
+-		n, err := conn.Read(msg)
+-		if step >= len(expectedPos) {
+-			if err == nil {
+-				t.Errorf("read not EOF")
+-			}
+-			if n != 0 {
+-				t.Errorf("expect read 0, got %d", n)
+-			}
+-			return
+-		}
+-		pos = expectedPos[step]
+-		endPos := pos + expectedLen[step]
+-		if err != nil {
+-			t.Errorf("read from %d, got error %q", pos, err)
+-			return
+-		}
+-		if n != endPos-pos {
+-			t.Errorf("read from %d, expect %d, got %d", pos, endPos-pos, n)
+-		}
+-		if !bytes.Equal(wireData[pos:endPos], msg[:n]) {
+-			t.Errorf("read from %d, frame %v, got %v", pos, wireData[pos:endPos], msg[:n])
+-		}
+-		step++
+-	}
+-}
+-
+-func TestHybiServerRead(t *testing.T) {
+-	wireData := []byte{0x81, 0x85, 0xcc, 0x55, 0x80, 0x20,
+-		0xa4, 0x30, 0xec, 0x4c, 0xa3, // hello
+-		0x89, 0x85, 0xcc, 0x55, 0x80, 0x20,
+-		0xa4, 0x30, 0xec, 0x4c, 0xa3, // ping: hello
+-		0x81, 0x85, 0xed, 0x83, 0xb4, 0x24,
+-		0x9a, 0xec, 0xc6, 0x48, 0x89, // world
+-	}
+-	br := bufio.NewReader(bytes.NewBuffer(wireData))
+-	bw := bufio.NewWriter(bytes.NewBuffer([]byte{}))
+-	conn := newHybiConn(newConfig(t, "/"), bufio.NewReadWriter(br, bw), nil, new(http.Request))
+-
+-	expected := [][]byte{[]byte("hello"), []byte("world")}
+-
+-	msg := make([]byte, 512)
+-	n, err := conn.Read(msg)
+-	if err != nil {
+-		t.Errorf("read 1st frame, error %q", err)
+-	}
+-	if n != 5 {
+-		t.Errorf("read 1st frame, expect 5, got %d", n)
+-	}
+-	if !bytes.Equal(expected[0], msg[:n]) {
+-		t.Errorf("read 1st frame %q, got %q", expected[0], msg[:n])
+-	}
+-
+-	n, err = conn.Read(msg)
+-	if err != nil {
+-		t.Errorf("read 2nd frame, error %q", err)
+-	}
+-	if n != 5 {
+-		t.Errorf("read 2nd frame, expect 5, got %d", n)
+-	}
+-	if !bytes.Equal(expected[1], msg[:n]) {
+-		t.Errorf("read 2nd frame %q, got %q", expected[1], msg[:n])
+-	}
+-
+-	n, err = conn.Read(msg)
+-	if err == nil {
+-		t.Errorf("read not EOF")
+-	}
+-	if n != 0 {
+-		t.Errorf("expect read 0, got %d", n)
+-	}
+-}
+-
+-func TestHybiServerReadWithoutMasking(t *testing.T) {
+-	wireData := []byte{0x81, 0x05, 'h', 'e', 'l', 'l', 'o'}
+-	br := bufio.NewReader(bytes.NewBuffer(wireData))
+-	bw := bufio.NewWriter(bytes.NewBuffer([]byte{}))
+-	conn := newHybiConn(newConfig(t, "/"), bufio.NewReadWriter(br, bw), nil, new(http.Request))
+-	// server MUST close the connection upon receiving a non-masked frame.
+-	msg := make([]byte, 512)
+-	_, err := conn.Read(msg)
+-	if err != io.EOF {
+-		t.Errorf("read 1st frame, expect %q, but got %q", io.EOF, err)
+-	}
+-}
+-
+-func TestHybiClientReadWithMasking(t *testing.T) {
+-	wireData := []byte{0x81, 0x85, 0xcc, 0x55, 0x80, 0x20,
+-		0xa4, 0x30, 0xec, 0x4c, 0xa3, // hello
+-	}
+-	br := bufio.NewReader(bytes.NewBuffer(wireData))
+-	bw := bufio.NewWriter(bytes.NewBuffer([]byte{}))
+-	conn := newHybiConn(newConfig(t, "/"), bufio.NewReadWriter(br, bw), nil, nil)
+-
+-	// client MUST close the connection upon receiving a masked frame.
+-	msg := make([]byte, 512)
+-	_, err := conn.Read(msg)
+-	if err != io.EOF {
+-		t.Errorf("read 1st frame, expect %q, but got %q", io.EOF, err)
+-	}
+-}
+-
+-// Test the hybiServerHandshaker supports firefox implementation and
+-// checks that the Connection request header includes (but is not necessarily
+-// equal to) "upgrade"
+-func TestHybiServerFirefoxHandshake(t *testing.T) {
+-	config := new(Config)
+-	handshaker := &hybiServerHandshaker{Config: config}
+-	br := bufio.NewReader(strings.NewReader(`GET /chat HTTP/1.1
+-Host: server.example.com
+-Upgrade: websocket
+-Connection: keep-alive, upgrade
+-Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==
+-Origin: http://example.com
+-Sec-WebSocket-Protocol: chat, superchat
+-Sec-WebSocket-Version: 13
+-
+-`))
+-	req, err := http.ReadRequest(br)
+-	if err != nil {
+-		t.Fatal("request", err)
+-	}
+-	code, err := handshaker.ReadHandshake(br, req)
+-	if err != nil {
+-		t.Errorf("handshake failed: %v", err)
+-	}
+-	if code != http.StatusSwitchingProtocols {
+-		t.Errorf("status expected %q but got %q", http.StatusSwitchingProtocols, code)
+-	}
+-	b := bytes.NewBuffer([]byte{})
+-	bw := bufio.NewWriter(b)
+-
+-	config.Protocol = []string{"chat"}
+-
+-	err = handshaker.AcceptHandshake(bw)
+-	if err != nil {
+-		t.Errorf("handshake response failed: %v", err)
+-	}
+-	expectedResponse := strings.Join([]string{
+-		"HTTP/1.1 101 Switching Protocols",
+-		"Upgrade: websocket",
+-		"Connection: Upgrade",
+-		"Sec-WebSocket-Accept: s3pPLMBiTxaQ9kYGzzhZRbK+xOo=",
+-		"Sec-WebSocket-Protocol: chat",
+-		"", ""}, "\r\n")
+-
+-	if b.String() != expectedResponse {
+-		t.Errorf("handshake expected %q but got %q", expectedResponse, b.String())
+-	}
+-}
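The frame tests above hard-code header bytes such as {0x81, 0x05}, {0x81, 126, 0xff, 0xff}, and {0x81, 127, ...}. Those follow the hybi-13 length encoding: payloads up to 125 bytes use the 7-bit length field directly, 126 signals a 16-bit big-endian extended length, and 127 signals a 64-bit one. A sketch of that encoding (frameHeader is a hypothetical helper; the real package builds headers inside hybiFrameWriter):

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// frameHeader encodes a hybi-13 frame header for an unmasked frame:
// one FIN+opcode byte, then a 7-bit length extended to 16 or 64 bits
// for larger payloads, per RFC 6455 section 5.2.
func frameHeader(fin bool, opcode byte, payloadLen int) []byte {
	b0 := opcode & 0x0f
	if fin {
		b0 |= 0x80
	}
	header := []byte{b0}
	switch {
	case payloadLen <= 125:
		header = append(header, byte(payloadLen))
	case payloadLen <= 0xffff:
		var ext [2]byte
		binary.BigEndian.PutUint16(ext[:], uint16(payloadLen))
		header = append(append(header, 126), ext[:]...)
	default:
		var ext [8]byte
		binary.BigEndian.PutUint64(ext[:], uint64(payloadLen))
		header = append(append(header, 127), ext[:]...)
	}
	return header
}

func main() {
	// Reproduces the header bytes asserted by TestHybiLongFrame.
	fmt.Printf("% x\n", frameHeader(true, 1, 65536)) // 81 7f 00 00 00 00 00 01 00 00
}
```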
+diff --git a/Godeps/_workspace/src/code.google.com/p/go.net/websocket/server.go b/Godeps/_workspace/src/code.google.com/p/go.net/websocket/server.go
+deleted file mode 100644
+index 7032213..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/go.net/websocket/server.go
++++ /dev/null
+@@ -1,114 +0,0 @@
+-// Copyright 2009 The Go Authors. All rights reserved.
+-// Use of this source code is governed by a BSD-style
+-// license that can be found in the LICENSE file.
+-
+-package websocket
+-
+-import (
+-	"bufio"
+-	"fmt"
+-	"io"
+-	"net/http"
+-)
+-
+-func newServerConn(rwc io.ReadWriteCloser, buf *bufio.ReadWriter, req *http.Request, config *Config, handshake func(*Config, *http.Request) error) (conn *Conn, err error) {
+-	var hs serverHandshaker = &hybiServerHandshaker{Config: config}
+-	code, err := hs.ReadHandshake(buf.Reader, req)
+-	if err == ErrBadWebSocketVersion {
+-		fmt.Fprintf(buf, "HTTP/1.1 %03d %s\r\n", code, http.StatusText(code))
+-		fmt.Fprintf(buf, "Sec-WebSocket-Version: %s\r\n", SupportedProtocolVersion)
+-		buf.WriteString("\r\n")
+-		buf.WriteString(err.Error())
+-		buf.Flush()
+-		return
+-	}
+-	if err != nil {
+-		fmt.Fprintf(buf, "HTTP/1.1 %03d %s\r\n", code, http.StatusText(code))
+-		buf.WriteString("\r\n")
+-		buf.WriteString(err.Error())
+-		buf.Flush()
+-		return
+-	}
+-	if handshake != nil {
+-		err = handshake(config, req)
+-		if err != nil {
+-			code = http.StatusForbidden
+-			fmt.Fprintf(buf, "HTTP/1.1 %03d %s\r\n", code, http.StatusText(code))
+-			buf.WriteString("\r\n")
+-			buf.Flush()
+-			return
+-		}
+-	}
+-	err = hs.AcceptHandshake(buf.Writer)
+-	if err != nil {
+-		code = http.StatusBadRequest
+-		fmt.Fprintf(buf, "HTTP/1.1 %03d %s\r\n", code, http.StatusText(code))
+-		buf.WriteString("\r\n")
+-		buf.Flush()
+-		return
+-	}
+-	conn = hs.NewServerConn(buf, rwc, req)
+-	return
+-}
+-
+-// Server represents a server of a WebSocket.
+-type Server struct {
+-	// Config is a WebSocket configuration for new WebSocket connection.
+-	Config
+-
+-	// Handshake is an optional function in WebSocket handshake.
+-	// For example, you can check, or don't check Origin header.
+-	// Another example, you can select config.Protocol.
+-	Handshake func(*Config, *http.Request) error
+-
+-	// Handler handles a WebSocket connection.
+-	Handler
+-}
+-
+-// ServeHTTP implements the http.Handler interface for a WebSocket
+-func (s Server) ServeHTTP(w http.ResponseWriter, req *http.Request) {
+-	s.serveWebSocket(w, req)
+-}
+-
+-func (s Server) serveWebSocket(w http.ResponseWriter, req *http.Request) {
+-	rwc, buf, err := w.(http.Hijacker).Hijack()
+-	if err != nil {
+-		panic("Hijack failed: " + err.Error())
+-		return
+-	}
+-	// The server should abort the WebSocket connection if it finds
+-	// the client did not send a handshake that matches with protocol
+-	// specification.
+-	defer rwc.Close()
+-	conn, err := newServerConn(rwc, buf, req, &s.Config, s.Handshake)
+-	if err != nil {
+-		return
+-	}
+-	if conn == nil {
+-		panic("unexpected nil conn")
+-	}
+-	s.Handler(conn)
+-}
+-
+-// Handler is a simple interface to a WebSocket browser client.
+-// It checks if Origin header is valid URL by default.
+-// You might want to verify websocket.Conn.Config().Origin in the func.
+-// If you use Server instead of Handler, you could call websocket.Origin and
+-// check the origin in your Handshake func. So, if you want to accept
+-// non-browser clients, which don't send an Origin header, you could use a
+-// Server that doesn't check the origin in its Handshake.
+-type Handler func(*Conn)
+-
+-func checkOrigin(config *Config, req *http.Request) (err error) {
+-	config.Origin, err = Origin(config, req)
+-	if err == nil && config.Origin == nil {
+-		return fmt.Errorf("null origin")
+-	}
+-	return err
+-}
+-
+-// ServeHTTP implements the http.Handler interface for a WebSocket
+-func (h Handler) ServeHTTP(w http.ResponseWriter, req *http.Request) {
+-	s := Server{Handler: h, Handshake: checkOrigin}
+-	s.serveWebSocket(w, req)
+-}
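The masked test frames earlier in hybi_test.go (e.g. payload "hello" with key {0xcc, 0x55, 0x80, 0x20} becoming {0xa4, 0x30, 0xec, 0x4c, 0xa3}) use the client-to-server masking that RFC 6455 requires and that TestHybiServerReadWithoutMasking enforces: each payload byte is XORed with the 4-byte masking key, cycling through the key. Since XOR is its own inverse, the same operation unmasks. A minimal sketch (maskPayload is a hypothetical name, not the package's):

```go
package main

import "fmt"

// maskPayload applies (or removes) hybi-13 payload masking: byte i of the
// payload is XORed with byte i%4 of the masking key.
func maskPayload(key [4]byte, payload []byte) []byte {
	out := make([]byte, len(payload))
	for i, b := range payload {
		out[i] = b ^ key[i%4]
	}
	return out
}

func main() {
	// Reproduces the masked "hello" from TestHybiShortMaskedTextFrame.
	key := [4]byte{0xcc, 0x55, 0x80, 0x20}
	fmt.Printf("% x\n", maskPayload(key, []byte("hello"))) // a4 30 ec 4c a3
}
```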
+diff --git a/Godeps/_workspace/src/code.google.com/p/go.net/websocket/websocket.go b/Godeps/_workspace/src/code.google.com/p/go.net/websocket/websocket.go
+deleted file mode 100644
+index 0f4917b..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/go.net/websocket/websocket.go
++++ /dev/null
+@@ -1,411 +0,0 @@
+-// Copyright 2009 The Go Authors. All rights reserved.
+-// Use of this source code is governed by a BSD-style
+-// license that can be found in the LICENSE file.
+-
+-// Package websocket implements a client and server for the WebSocket protocol
+-// as specified in RFC 6455.
+-package websocket
+-
+-import (
+-	"bufio"
+-	"crypto/tls"
+-	"encoding/json"
+-	"errors"
+-	"io"
+-	"io/ioutil"
+-	"net"
+-	"net/http"
+-	"net/url"
+-	"sync"
+-	"time"
+-)
+-
+-const (
+-	ProtocolVersionHybi13    = 13
+-	ProtocolVersionHybi      = ProtocolVersionHybi13
+-	SupportedProtocolVersion = "13"
+-
+-	ContinuationFrame = 0
+-	TextFrame         = 1
+-	BinaryFrame       = 2
+-	CloseFrame        = 8
+-	PingFrame         = 9
+-	PongFrame         = 10
+-	UnknownFrame      = 255
+-)
+-
+-// ProtocolError represents WebSocket protocol errors.
+-type ProtocolError struct {
+-	ErrorString string
+-}
+-
+-func (err *ProtocolError) Error() string { return err.ErrorString }
+-
+-var (
+-	ErrBadProtocolVersion   = &ProtocolError{"bad protocol version"}
+-	ErrBadScheme            = &ProtocolError{"bad scheme"}
+-	ErrBadStatus            = &ProtocolError{"bad status"}
+-	ErrBadUpgrade           = &ProtocolError{"missing or bad upgrade"}
+-	ErrBadWebSocketOrigin   = &ProtocolError{"missing or bad WebSocket-Origin"}
+-	ErrBadWebSocketLocation = &ProtocolError{"missing or bad WebSocket-Location"}
+-	ErrBadWebSocketProtocol = &ProtocolError{"missing or bad WebSocket-Protocol"}
+-	ErrBadWebSocketVersion  = &ProtocolError{"missing or bad WebSocket Version"}
+-	ErrChallengeResponse    = &ProtocolError{"mismatch challenge/response"}
+-	ErrBadFrame             = &ProtocolError{"bad frame"}
+-	ErrBadFrameBoundary     = &ProtocolError{"not on frame boundary"}
+-	ErrNotWebSocket         = &ProtocolError{"not websocket protocol"}
+-	ErrBadRequestMethod     = &ProtocolError{"bad method"}
+-	ErrNotSupported         = &ProtocolError{"not supported"}
+-)
+-
+-// Addr is an implementation of net.Addr for WebSocket.
+-type Addr struct {
+-	*url.URL
+-}
+-
+-// Network returns the network type for a WebSocket, "websocket".
+-func (addr *Addr) Network() string { return "websocket" }
+-
+-// Config is a WebSocket configuration
+-type Config struct {
+-	// A WebSocket server address.
+-	Location *url.URL
+-
+-	// A Websocket client origin.
+-	Origin *url.URL
+-
+-	// WebSocket subprotocols.
+-	Protocol []string
+-
+-	// WebSocket protocol version.
+-	Version int
+-
+-	// TLS config for secure WebSocket (wss).
+-	TlsConfig *tls.Config
+-
+-	// Additional header fields to be sent in WebSocket opening handshake.
+-	Header http.Header
+-
+-	handshakeData map[string]string
+-}
+-
+-// serverHandshaker is an interface to handle WebSocket server side handshake.
+-type serverHandshaker interface {
+-	// ReadHandshake reads handshake request message from client.
+-	// Returns http response code and error if any.
+-	ReadHandshake(buf *bufio.Reader, req *http.Request) (code int, err error)
+-
+-	// AcceptHandshake accepts the client handshake request and sends
+-	// handshake response back to client.
+-	AcceptHandshake(buf *bufio.Writer) (err error)
+-
+-	// NewServerConn creates a new WebSocket connection.
+-	NewServerConn(buf *bufio.ReadWriter, rwc io.ReadWriteCloser, request *http.Request) (conn *Conn)
+-}
+-
+-// frameReader is an interface to read a WebSocket frame.
+-type frameReader interface {
+-	// Reader is to read payload of the frame.
+-	io.Reader
+-
+-	// PayloadType returns payload type.
+-	PayloadType() byte
+-
+-	// HeaderReader returns a reader to read header of the frame.
+-	HeaderReader() io.Reader
+-
+-	// TrailerReader returns a reader to read trailer of the frame.
+-	// If it returns nil, there is no trailer in the frame.
+-	TrailerReader() io.Reader
+-
+-	// Len returns total length of the frame, including header and trailer.
+-	Len() int
+-}
+-
+-// frameReaderFactory is an interface to create a new frame reader.
+-type frameReaderFactory interface {
+-	NewFrameReader() (r frameReader, err error)
+-}
+-
+-// frameWriter is an interface to write a WebSocket frame.
+-type frameWriter interface {
+-	// Writer is to write payload of the frame.
+-	io.WriteCloser
+-}
+-
+-// frameWriterFactory is an interface to create new frame writer.
+-type frameWriterFactory interface {
+-	NewFrameWriter(payloadType byte) (w frameWriter, err error)
+-}
+-
+-type frameHandler interface {
+-	HandleFrame(frame frameReader) (r frameReader, err error)
+-	WriteClose(status int) (err error)
+-}
+-
+-// Conn represents a WebSocket connection.
+-type Conn struct {
+-	config  *Config
+-	request *http.Request
+-
+-	buf *bufio.ReadWriter
+-	rwc io.ReadWriteCloser
+-
+-	rio sync.Mutex
+-	frameReaderFactory
+-	frameReader
+-
+-	wio sync.Mutex
+-	frameWriterFactory
+-
+-	frameHandler
+-	PayloadType        byte
+-	defaultCloseStatus int
+-}
+-
+-// Read implements the io.Reader interface:
+-// it reads data of a frame from the WebSocket connection.
+-// if msg is not large enough for the frame data, it fills the msg and next Read
+-// will read the rest of the frame data.
+-// it reads Text frame or Binary frame.
+-func (ws *Conn) Read(msg []byte) (n int, err error) {
+-	ws.rio.Lock()
+-	defer ws.rio.Unlock()
+-again:
+-	if ws.frameReader == nil {
+-		frame, err := ws.frameReaderFactory.NewFrameReader()
+-		if err != nil {
+-			return 0, err
+-		}
+-		ws.frameReader, err = ws.frameHandler.HandleFrame(frame)
+-		if err != nil {
+-			return 0, err
+-		}
+-		if ws.frameReader == nil {
+-			goto again
+-		}
+-	}
+-	n, err = ws.frameReader.Read(msg)
+-	if err == io.EOF {
+-		if trailer := ws.frameReader.TrailerReader(); trailer != nil {
+-			io.Copy(ioutil.Discard, trailer)
+-		}
+-		ws.frameReader = nil
+-		goto again
+-	}
+-	return n, err
+-}
+-
+-// Write implements the io.Writer interface:
+-// it writes data as a frame to the WebSocket connection.
+-func (ws *Conn) Write(msg []byte) (n int, err error) {
+-	ws.wio.Lock()
+-	defer ws.wio.Unlock()
+-	w, err := ws.frameWriterFactory.NewFrameWriter(ws.PayloadType)
+-	if err != nil {
+-		return 0, err
+-	}
+-	n, err = w.Write(msg)
+-	w.Close()
+-	if err != nil {
+-		return n, err
+-	}
+-	return n, err
+-}
+-
+-// Close implements the io.Closer interface.
+-func (ws *Conn) Close() error {
+-	err := ws.frameHandler.WriteClose(ws.defaultCloseStatus)
+-	if err != nil {
+-		return err
+-	}
+-	return ws.rwc.Close()
+-}
+-
+-func (ws *Conn) IsClientConn() bool { return ws.request == nil }
+-func (ws *Conn) IsServerConn() bool { return ws.request != nil }
+-
+-// LocalAddr returns the WebSocket Origin for the connection for client, or
+-// the WebSocket location for server.
+-func (ws *Conn) LocalAddr() net.Addr {
+-	if ws.IsClientConn() {
+-		return &Addr{ws.config.Origin}
+-	}
+-	return &Addr{ws.config.Location}
+-}
+-
+-// RemoteAddr returns the WebSocket location for client connections, or the
+-// WebSocket Origin for server connections.
+-func (ws *Conn) RemoteAddr() net.Addr {
+-	if ws.IsClientConn() {
+-		return &Addr{ws.config.Location}
+-	}
+-	return &Addr{ws.config.Origin}
+-}
+-
+-var errSetDeadline = errors.New("websocket: cannot set deadline: not using a net.Conn")
+-
+-// SetDeadline sets the connection's network read & write deadlines.
+-func (ws *Conn) SetDeadline(t time.Time) error {
+-	if conn, ok := ws.rwc.(net.Conn); ok {
+-		return conn.SetDeadline(t)
+-	}
+-	return errSetDeadline
+-}
+-
+-// SetReadDeadline sets the connection's network read deadline.
+-func (ws *Conn) SetReadDeadline(t time.Time) error {
+-	if conn, ok := ws.rwc.(net.Conn); ok {
+-		return conn.SetReadDeadline(t)
+-	}
+-	return errSetDeadline
+-}
+-
+-// SetWriteDeadline sets the connection's network write deadline.
+-func (ws *Conn) SetWriteDeadline(t time.Time) error {
+-	if conn, ok := ws.rwc.(net.Conn); ok {
+-		return conn.SetWriteDeadline(t)
+-	}
+-	return errSetDeadline
+-}
+-
+-// Config returns the WebSocket config.
+-func (ws *Conn) Config() *Config { return ws.config }
+-
+-// Request returns the http request upgraded to the WebSocket.
+-// It is nil for client side.
+-func (ws *Conn) Request() *http.Request { return ws.request }
+-
+-// Codec represents a symmetric pair of functions that implement a codec.
+-type Codec struct {
+-	Marshal   func(v interface{}) (data []byte, payloadType byte, err error)
+-	Unmarshal func(data []byte, payloadType byte, v interface{}) (err error)
+-}
+-
+-// Send sends v, marshaled by cd.Marshal, as a single frame to ws.
+-func (cd Codec) Send(ws *Conn, v interface{}) (err error) {
+-	data, payloadType, err := cd.Marshal(v)
+-	if err != nil {
+-		return err
+-	}
+-	ws.wio.Lock()
+-	defer ws.wio.Unlock()
+-	w, err := ws.frameWriterFactory.NewFrameWriter(payloadType)
+-	if err != nil {
+-		return err
+-	}
+-	_, err = w.Write(data)
+-	w.Close()
+-	return err
+-}
+-
+-// Receive receives a single frame from ws, unmarshals it with cd.Unmarshal, and stores the result in v.
+-func (cd Codec) Receive(ws *Conn, v interface{}) (err error) {
+-	ws.rio.Lock()
+-	defer ws.rio.Unlock()
+-	if ws.frameReader != nil {
+-		_, err = io.Copy(ioutil.Discard, ws.frameReader)
+-		if err != nil {
+-			return err
+-		}
+-		ws.frameReader = nil
+-	}
+-again:
+-	frame, err := ws.frameReaderFactory.NewFrameReader()
+-	if err != nil {
+-		return err
+-	}
+-	frame, err = ws.frameHandler.HandleFrame(frame)
+-	if err != nil {
+-		return err
+-	}
+-	if frame == nil {
+-		goto again
+-	}
+-	payloadType := frame.PayloadType()
+-	data, err := ioutil.ReadAll(frame)
+-	if err != nil {
+-		return err
+-	}
+-	return cd.Unmarshal(data, payloadType, v)
+-}
+-
+-func marshal(v interface{}) (msg []byte, payloadType byte, err error) {
+-	switch data := v.(type) {
+-	case string:
+-		return []byte(data), TextFrame, nil
+-	case []byte:
+-		return data, BinaryFrame, nil
+-	}
+-	return nil, UnknownFrame, ErrNotSupported
+-}
+-
+-func unmarshal(msg []byte, payloadType byte, v interface{}) (err error) {
+-	switch data := v.(type) {
+-	case *string:
+-		*data = string(msg)
+-		return nil
+-	case *[]byte:
+-		*data = msg
+-		return nil
+-	}
+-	return ErrNotSupported
+-}
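The marshal/unmarshal pair above dispatches on the dynamic Go type to pick the frame type. A minimal standalone sketch of that type-switch pattern (the frame constants and error value here are local stand-ins, not the package's own):

```go
package main

import (
	"errors"
	"fmt"
)

// Local stand-ins for the package's frame-type constants.
const (
	textFrame   = 0x1
	binaryFrame = 0x2
)

var errNotSupported = errors.New("unsupported payload type")

// marshal picks the frame type from the dynamic type of v:
// strings become text frames, []byte becomes binary frames,
// anything else is rejected.
func marshal(v interface{}) ([]byte, byte, error) {
	switch data := v.(type) {
	case string:
		return []byte(data), textFrame, nil
	case []byte:
		return data, binaryFrame, nil
	}
	return nil, 0, errNotSupported
}

func main() {
	_, pt, _ := marshal("hello")
	fmt.Println(pt == textFrame) // true
	_, _, err := marshal(42)
	fmt.Println(err)
}
```

The type switch binds `data` to the concrete type in each case, so no separate assertion is needed per branch.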
+-
+-/*
+-Message is a codec to send/receive text/binary data in a frame on a WebSocket connection.
+-To send/receive text frame, use string type.
+-To send/receive binary frame, use []byte type.
+-
+-Trivial usage:
+-
+-	import "websocket"
+-
+-	// receive text frame
+-	var message string
+-	websocket.Message.Receive(ws, &message)
+-
+-	// send text frame
+-	message = "hello"
+-	websocket.Message.Send(ws, message)
+-
+-	// receive binary frame
+-	var data []byte
+-	websocket.Message.Receive(ws, &data)
+-
+-	// send binary frame
+-	data = []byte{0, 1, 2}
+-	websocket.Message.Send(ws, data)
+-
+-*/
+-var Message = Codec{marshal, unmarshal}
+-
+-func jsonMarshal(v interface{}) (msg []byte, payloadType byte, err error) {
+-	msg, err = json.Marshal(v)
+-	return msg, TextFrame, err
+-}
+-
+-func jsonUnmarshal(msg []byte, payloadType byte, v interface{}) (err error) {
+-	return json.Unmarshal(msg, v)
+-}
+-
+-/*
+-JSON is a codec to send/receive JSON data in a frame from a WebSocket connection.
+-
+-Trivial usage:
+-
+-	import "websocket"
+-
+-	type T struct {
+-		Msg string
+-		Count int
+-	}
+-
+-	// receive JSON type T
+-	var data T
+-	websocket.JSON.Receive(ws, &data)
+-
+-	// send JSON type T
+-	websocket.JSON.Send(ws, data)
+-*/
+-var JSON = Codec{jsonMarshal, jsonUnmarshal}
+diff --git a/Godeps/_workspace/src/code.google.com/p/go.net/websocket/websocket_test.go b/Godeps/_workspace/src/code.google.com/p/go.net/websocket/websocket_test.go
+deleted file mode 100644
+index 48f14b6..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/go.net/websocket/websocket_test.go
++++ /dev/null
+@@ -1,341 +0,0 @@
+-// Copyright 2009 The Go Authors. All rights reserved.
+-// Use of this source code is governed by a BSD-style
+-// license that can be found in the LICENSE file.
+-
+-package websocket
+-
+-import (
+-	"bytes"
+-	"fmt"
+-	"io"
+-	"log"
+-	"net"
+-	"net/http"
+-	"net/http/httptest"
+-	"net/url"
+-	"strings"
+-	"sync"
+-	"testing"
+-)
+-
+-var serverAddr string
+-var once sync.Once
+-
+-func echoServer(ws *Conn) { io.Copy(ws, ws) }
+-
+-type Count struct {
+-	S string
+-	N int
+-}
+-
+-func countServer(ws *Conn) {
+-	for {
+-		var count Count
+-		err := JSON.Receive(ws, &count)
+-		if err != nil {
+-			return
+-		}
+-		count.N++
+-		count.S = strings.Repeat(count.S, count.N)
+-		err = JSON.Send(ws, count)
+-		if err != nil {
+-			return
+-		}
+-	}
+-}
+-
+-func subProtocolHandshake(config *Config, req *http.Request) error {
+-	for _, proto := range config.Protocol {
+-		if proto == "chat" {
+-			config.Protocol = []string{proto}
+-			return nil
+-		}
+-	}
+-	return ErrBadWebSocketProtocol
+-}
+-
+-func subProtoServer(ws *Conn) {
+-	for _, proto := range ws.Config().Protocol {
+-		io.WriteString(ws, proto)
+-	}
+-}
+-
+-func startServer() {
+-	http.Handle("/echo", Handler(echoServer))
+-	http.Handle("/count", Handler(countServer))
+-	subproto := Server{
+-		Handshake: subProtocolHandshake,
+-		Handler:   Handler(subProtoServer),
+-	}
+-	http.Handle("/subproto", subproto)
+-	server := httptest.NewServer(nil)
+-	serverAddr = server.Listener.Addr().String()
+-	log.Print("Test WebSocket server listening on ", serverAddr)
+-}
+-
+-func newConfig(t *testing.T, path string) *Config {
+-	config, _ := NewConfig(fmt.Sprintf("ws://%s%s", serverAddr, path), "http://localhost")
+-	return config
+-}
+-
+-func TestEcho(t *testing.T) {
+-	once.Do(startServer)
+-
+-	// websocket.Dial()
+-	client, err := net.Dial("tcp", serverAddr)
+-	if err != nil {
+-		t.Fatal("dialing", err)
+-	}
+-	conn, err := NewClient(newConfig(t, "/echo"), client)
+-	if err != nil {
+-		t.Errorf("WebSocket handshake error: %v", err)
+-		return
+-	}
+-
+-	msg := []byte("hello, world\n")
+-	if _, err := conn.Write(msg); err != nil {
+-		t.Errorf("Write: %v", err)
+-	}
+-	var actual_msg = make([]byte, 512)
+-	n, err := conn.Read(actual_msg)
+-	if err != nil {
+-		t.Errorf("Read: %v", err)
+-	}
+-	actual_msg = actual_msg[0:n]
+-	if !bytes.Equal(msg, actual_msg) {
+-		t.Errorf("Echo: expected %q got %q", msg, actual_msg)
+-	}
+-	conn.Close()
+-}
+-
+-func TestAddr(t *testing.T) {
+-	once.Do(startServer)
+-
+-	// websocket.Dial()
+-	client, err := net.Dial("tcp", serverAddr)
+-	if err != nil {
+-		t.Fatal("dialing", err)
+-	}
+-	conn, err := NewClient(newConfig(t, "/echo"), client)
+-	if err != nil {
+-		t.Errorf("WebSocket handshake error: %v", err)
+-		return
+-	}
+-
+-	ra := conn.RemoteAddr().String()
+-	if !strings.HasPrefix(ra, "ws://") || !strings.HasSuffix(ra, "/echo") {
+-		t.Errorf("Bad remote addr: %v", ra)
+-	}
+-	la := conn.LocalAddr().String()
+-	if !strings.HasPrefix(la, "http://") {
+-		t.Errorf("Bad local addr: %v", la)
+-	}
+-	conn.Close()
+-}
+-
+-func TestCount(t *testing.T) {
+-	once.Do(startServer)
+-
+-	// websocket.Dial()
+-	client, err := net.Dial("tcp", serverAddr)
+-	if err != nil {
+-		t.Fatal("dialing", err)
+-	}
+-	conn, err := NewClient(newConfig(t, "/count"), client)
+-	if err != nil {
+-		t.Errorf("WebSocket handshake error: %v", err)
+-		return
+-	}
+-
+-	var count Count
+-	count.S = "hello"
+-	if err := JSON.Send(conn, count); err != nil {
+-		t.Errorf("Write: %v", err)
+-	}
+-	if err := JSON.Receive(conn, &count); err != nil {
+-		t.Errorf("Read: %v", err)
+-	}
+-	if count.N != 1 {
+-		t.Errorf("count: expected %d got %d", 1, count.N)
+-	}
+-	if count.S != "hello" {
+-		t.Errorf("count: expected %q got %q", "hello", count.S)
+-	}
+-	if err := JSON.Send(conn, count); err != nil {
+-		t.Errorf("Write: %v", err)
+-	}
+-	if err := JSON.Receive(conn, &count); err != nil {
+-		t.Errorf("Read: %v", err)
+-	}
+-	if count.N != 2 {
+-		t.Errorf("count: expected %d got %d", 2, count.N)
+-	}
+-	if count.S != "hellohello" {
+-		t.Errorf("count: expected %q got %q", "hellohello", count.S)
+-	}
+-	conn.Close()
+-}
+-
+-func TestWithQuery(t *testing.T) {
+-	once.Do(startServer)
+-
+-	client, err := net.Dial("tcp", serverAddr)
+-	if err != nil {
+-		t.Fatal("dialing", err)
+-	}
+-
+-	config := newConfig(t, "/echo")
+-	config.Location, err = url.ParseRequestURI(fmt.Sprintf("ws://%s/echo?q=v", serverAddr))
+-	if err != nil {
+-		t.Fatal("location url", err)
+-	}
+-
+-	ws, err := NewClient(config, client)
+-	if err != nil {
+-		t.Errorf("WebSocket handshake: %v", err)
+-		return
+-	}
+-	ws.Close()
+-}
+-
+-func testWithProtocol(t *testing.T, subproto []string) (string, error) {
+-	once.Do(startServer)
+-
+-	client, err := net.Dial("tcp", serverAddr)
+-	if err != nil {
+-		t.Fatal("dialing", err)
+-	}
+-
+-	config := newConfig(t, "/subproto")
+-	config.Protocol = subproto
+-
+-	ws, err := NewClient(config, client)
+-	if err != nil {
+-		return "", err
+-	}
+-	msg := make([]byte, 16)
+-	n, err := ws.Read(msg)
+-	if err != nil {
+-		return "", err
+-	}
+-	ws.Close()
+-	return string(msg[:n]), nil
+-}
+-
+-func TestWithProtocol(t *testing.T) {
+-	proto, err := testWithProtocol(t, []string{"chat"})
+-	if err != nil {
+-		t.Errorf("SubProto: unexpected error: %v", err)
+-	}
+-	if proto != "chat" {
+-		t.Errorf("SubProto: expected %q, got %q", "chat", proto)
+-	}
+-}
+-
+-func TestWithTwoProtocol(t *testing.T) {
+-	proto, err := testWithProtocol(t, []string{"test", "chat"})
+-	if err != nil {
+-		t.Errorf("SubProto: unexpected error: %v", err)
+-	}
+-	if proto != "chat" {
+-		t.Errorf("SubProto: expected %q, got %q", "chat", proto)
+-	}
+-}
+-
+-func TestWithBadProtocol(t *testing.T) {
+-	_, err := testWithProtocol(t, []string{"test"})
+-	if err != ErrBadStatus {
+-		t.Errorf("SubProto: expected %v, got %v", ErrBadStatus, err)
+-	}
+-}
+-
+-func TestHTTP(t *testing.T) {
+-	once.Do(startServer)
+-
+-	// If the client did not send a handshake that matches the protocol
+-	// specification, the server MUST return an HTTP response with an
+-	// appropriate error code (such as 400 Bad Request)
+-	resp, err := http.Get(fmt.Sprintf("http://%s/echo", serverAddr))
+-	if err != nil {
+-		t.Errorf("Get: error %#v", err)
+-		return
+-	}
+-	if resp == nil {
+-		t.Error("Get: resp is nil")
+-		return
+-	}
+-	if resp.StatusCode != http.StatusBadRequest {
+-		t.Errorf("Get: expected %d got %d", http.StatusBadRequest, resp.StatusCode)
+-	}
+-}
+-
+-func TestTrailingSpaces(t *testing.T) {
+-	// http://code.google.com/p/go/issues/detail?id=955
+-	// The last runs of this create keys with trailing spaces that should not be
+-	// generated by the client.
+-	once.Do(startServer)
+-	config := newConfig(t, "/echo")
+-	for i := 0; i < 30; i++ {
+-		// body
+-		ws, err := DialConfig(config)
+-		if err != nil {
+-			t.Errorf("Dial #%d failed: %v", i, err)
+-			break
+-		}
+-		ws.Close()
+-	}
+-}
+-
+-func TestDialConfigBadVersion(t *testing.T) {
+-	once.Do(startServer)
+-	config := newConfig(t, "/echo")
+-	config.Version = 1234
+-
+-	_, err := DialConfig(config)
+-
+-	if dialerr, ok := err.(*DialError); ok {
+-		if dialerr.Err != ErrBadProtocolVersion {
+-			t.Errorf("dial expected err %q but got %q", ErrBadProtocolVersion, dialerr.Err)
+-		}
+-	}
+-}
+-
+-func TestSmallBuffer(t *testing.T) {
+-	// http://code.google.com/p/go/issues/detail?id=1145
+-	// Read should be able to handle reading a fragment of a frame.
+-	once.Do(startServer)
+-
+-	// websocket.Dial()
+-	client, err := net.Dial("tcp", serverAddr)
+-	if err != nil {
+-		t.Fatal("dialing", err)
+-	}
+-	conn, err := NewClient(newConfig(t, "/echo"), client)
+-	if err != nil {
+-		t.Errorf("WebSocket handshake error: %v", err)
+-		return
+-	}
+-
+-	msg := []byte("hello, world\n")
+-	if _, err := conn.Write(msg); err != nil {
+-		t.Errorf("Write: %v", err)
+-	}
+-	var small_msg = make([]byte, 8)
+-	n, err := conn.Read(small_msg)
+-	if err != nil {
+-		t.Errorf("Read: %v", err)
+-	}
+-	if !bytes.Equal(msg[:len(small_msg)], small_msg) {
+-		t.Errorf("Echo: expected %q got %q", msg[:len(small_msg)], small_msg)
+-	}
+-	var second_msg = make([]byte, len(msg))
+-	n, err = conn.Read(second_msg)
+-	if err != nil {
+-		t.Errorf("Read: %v", err)
+-	}
+-	second_msg = second_msg[0:n]
+-	if !bytes.Equal(msg[len(small_msg):], second_msg) {
+-		t.Errorf("Echo: expected %q got %q", msg[len(small_msg):], second_msg)
+-	}
+-	conn.Close()
+-}
+diff --git a/Godeps/_workspace/src/code.google.com/p/goauth2/compute/serviceaccount/serviceaccount.go b/Godeps/_workspace/src/code.google.com/p/goauth2/compute/serviceaccount/serviceaccount.go
+deleted file mode 100644
+index ed3e10c..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/goauth2/compute/serviceaccount/serviceaccount.go
++++ /dev/null
+@@ -1,172 +0,0 @@
+-// Copyright 2013 The goauth2 Authors. All rights reserved.
+-// Use of this source code is governed by a BSD-style
+-// license that can be found in the LICENSE file.
+-
+-// Package serviceaccount provides support for making OAuth2-authorized
+-// HTTP requests from Google Compute Engine instances using service accounts.
+-//
+-// See: https://developers.google.com/compute/docs/authentication
+-//
+-// Example usage:
+-//
+-//	client, err := serviceaccount.NewClient(&serviceaccount.Options{})
+-//	if err != nil {
+-//		c.Errorf("failed to create service account client: %q", err)
+-//		return err
+-//	}
+-//	client.Post("https://www.googleapis.com/compute/...", ...)
+-//	client.Post("https://www.googleapis.com/bigquery/...", ...)
+-//
+-package serviceaccount
+-
+-import (
+-	"encoding/json"
+-	"net/http"
+-	"net/url"
+-	"path"
+-	"sync"
+-	"time"
+-
+-	"code.google.com/p/goauth2/oauth"
+-)
+-
+-const (
+-	metadataServer     = "metadata"
+-	serviceAccountPath = "/computeMetadata/v1/instance/service-accounts"
+-)
+-
+-// Options configures a service account Client.
+-type Options struct {
+-	// Underlying transport of service account Client.
+-	// If nil, http.DefaultTransport is used.
+-	Transport http.RoundTripper
+-
+-	// Service account name.
+-	// If empty, "default" is used.
+-	Account string
+-}
+-
+-// NewClient returns an *http.Client authorized with the service account
+-// configured in the Google Compute Engine instance.
+-func NewClient(opt *Options) (*http.Client, error) {
+-	tr := http.DefaultTransport
+-	account := "default"
+-	if opt != nil {
+-		if opt.Transport != nil {
+-			tr = opt.Transport
+-		}
+-		if opt.Account != "" {
+-			account = opt.Account
+-		}
+-	}
+-	t := &transport{
+-		Transport: tr,
+-		Account:   account,
+-	}
+-	// Get the initial access token.
+-	if _, err := fetchToken(t); err != nil {
+-		return nil, err
+-	}
+-	return &http.Client{
+-		Transport: t,
+-	}, nil
+-}
+-
+-type tokenData struct {
+-	AccessToken string  `json:"access_token"`
+-	ExpiresIn   float64 `json:"expires_in"`
+-	TokenType   string  `json:"token_type"`
+-}
+-
+-// transport is an oauth.Transport with a custom Refresh and RoundTrip implementation.
+-type transport struct {
+-	Transport http.RoundTripper
+-	Account   string
+-
+-	mu sync.Mutex
+-	*oauth.Token
+-}
+-
+-// refresh renews the transport's AccessToken.
+-// t.mu should be held when this is called.
+-func (t *transport) refresh() error {
+-	// https://developers.google.com/compute/docs/metadata#transitioning
+-	// v1 requires "Metadata-Flavor: Google" header.
+-	tokenURL := &url.URL{
+-		Scheme: "http",
+-		Host:   metadataServer,
+-		Path:   path.Join(serviceAccountPath, t.Account, "token"),
+-	}
+-	req, err := http.NewRequest("GET", tokenURL.String(), nil)
+-	if err != nil {
+-		return err
+-	}
+-	req.Header.Add("Metadata-Flavor", "Google")
+-	resp, err := http.DefaultClient.Do(req)
+-	if err != nil {
+-		return err
+-	}
+-	defer resp.Body.Close()
+-	d := json.NewDecoder(resp.Body)
+-	var token tokenData
+-	err = d.Decode(&token)
+-	if err != nil {
+-		return err
+-	}
+-	t.Token = &oauth.Token{
+-		AccessToken: token.AccessToken,
+-		Expiry:      time.Now().Add(time.Duration(token.ExpiresIn) * time.Second),
+-	}
+-	return nil
+-}
+-
+-// Refresh renews the transport's AccessToken.
+-func (t *transport) Refresh() error {
+-	t.mu.Lock()
+-	defer t.mu.Unlock()
+-	return t.refresh()
+-}
+-
+-// Fetch token from cache or generate a new one if cache miss or expired.
+-func fetchToken(t *transport) (*oauth.Token, error) {
+-	// Get a new token using refresh on a cache miss or if the token has expired.
+-	t.mu.Lock()
+-	defer t.mu.Unlock()
+-	if t.Token == nil || t.Expired() {
+-		if err := t.refresh(); err != nil {
+-			return nil, err
+-		}
+-	}
+-	return t.Token, nil
+-}
+-
+-// cloneRequest returns a clone of the provided *http.Request.
+-// The clone is a shallow copy of the struct and its Header map.
+-func cloneRequest(r *http.Request) *http.Request {
+-	// shallow copy of the struct
+-	r2 := new(http.Request)
+-	*r2 = *r
+-	// deep copy of the Header
+-	r2.Header = make(http.Header)
+-	for k, s := range r.Header {
+-		r2.Header[k] = s
+-	}
+-	return r2
+-}
+-
+-// RoundTrip issues an authorized HTTP request and returns its response.
+-func (t *transport) RoundTrip(req *http.Request) (*http.Response, error) {
+-	token, err := fetchToken(t)
+-	if err != nil {
+-		return nil, err
+-	}
+-
+-	// To set the Authorization header, we must make a copy of the Request
+-	// so that we don't modify the Request we were given.
+-	// This is required by the specification of http.RoundTripper.
+-	newReq := cloneRequest(req)
+-	newReq.Header.Set("Authorization", "Bearer "+token.AccessToken)
+-
+-	// Make the HTTP request.
+-	return t.Transport.RoundTrip(newReq)
+-}
+diff --git a/Godeps/_workspace/src/code.google.com/p/goauth2/oauth/example/oauthreq.go b/Godeps/_workspace/src/code.google.com/p/goauth2/oauth/example/oauthreq.go
+deleted file mode 100644
+index f9651bd..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/goauth2/oauth/example/oauthreq.go
++++ /dev/null
+@@ -1,100 +0,0 @@
+-// Copyright 2011 The goauth2 Authors. All rights reserved.
+-// Use of this source code is governed by a BSD-style
+-// license that can be found in the LICENSE file.
+-
+-// This program makes a call to the specified API, authenticated with OAuth2.
+-// A list of example APIs can be found at https://code.google.com/oauthplayground/
+-package main
+-
+-import (
+-	"flag"
+-	"fmt"
+-	"io"
+-	"log"
+-	"os"
+-
+-	"code.google.com/p/goauth2/oauth"
+-)
+-
+-var (
+-	clientId     = flag.String("id", "", "Client ID")
+-	clientSecret = flag.String("secret", "", "Client Secret")
+-	scope        = flag.String("scope", "https://www.googleapis.com/auth/userinfo.profile", "OAuth scope")
+-	redirectURL  = flag.String("redirect_url", "oob", "Redirect URL")
+-	authURL      = flag.String("auth_url", "https://accounts.google.com/o/oauth2/auth", "Authentication URL")
+-	tokenURL     = flag.String("token_url", "https://accounts.google.com/o/oauth2/token", "Token URL")
+-	requestURL   = flag.String("request_url", "https://www.googleapis.com/oauth2/v1/userinfo", "API request")
+-	code         = flag.String("code", "", "Authorization Code")
+-	cachefile    = flag.String("cache", "cache.json", "Token cache file")
+-)
+-
+-const usageMsg = `
+-To obtain a request token you must specify both -id and -secret.
+-
+-To obtain Client ID and Secret, see the "OAuth 2 Credentials" section under
+-the "API Access" tab on this page: https://code.google.com/apis/console/
+-
+-Once you have completed the OAuth flow, the credentials should be stored inside
+-the file specified by -cache and you may run without the -id and -secret flags.
+-`
+-
+-func main() {
+-	flag.Parse()
+-
+-	// Set up a configuration.
+-	config := &oauth.Config{
+-		ClientId:     *clientId,
+-		ClientSecret: *clientSecret,
+-		RedirectURL:  *redirectURL,
+-		Scope:        *scope,
+-		AuthURL:      *authURL,
+-		TokenURL:     *tokenURL,
+-		TokenCache:   oauth.CacheFile(*cachefile),
+-	}
+-
+-	// Set up a Transport using the config.
+-	transport := &oauth.Transport{Config: config}
+-
+-	// Try to pull the token from the cache; if this fails, we need to get one.
+-	token, err := config.TokenCache.Token()
+-	if err != nil {
+-		if *clientId == "" || *clientSecret == "" {
+-			flag.Usage()
+-			fmt.Fprint(os.Stderr, usageMsg)
+-			os.Exit(2)
+-		}
+-		if *code == "" {
+-			// Get an authorization code from the data provider.
+-			// ("Please ask the user if I can access this resource.")
+-			url := config.AuthCodeURL("")
+-			fmt.Print("Visit this URL to get a code, then run again with -code=YOUR_CODE\n\n")
+-			fmt.Println(url)
+-			return
+-		}
+-		// Exchange the authorization code for an access token.
+-		// ("Here's the code you gave the user, now give me a token!")
+-		token, err = transport.Exchange(*code)
+-		if err != nil {
+-			log.Fatal("Exchange:", err)
+-		}
+-		// (The Exchange method will automatically cache the token.)
+-		fmt.Printf("Token is cached in %v\n", config.TokenCache)
+-	}
+-
+-	// Make the actual request using the cached token to authenticate.
+-	// ("Here's the token, let me in!")
+-	transport.Token = token
+-
+-	// Make the request.
+-	r, err := transport.Client().Get(*requestURL)
+-	if err != nil {
+-		log.Fatal("Get:", err)
+-	}
+-	defer r.Body.Close()
+-
+-	// Write the response to standard output.
+-	io.Copy(os.Stdout, r.Body)
+-
+-	// Send final carriage return, just to be neat.
+-	fmt.Println()
+-}
+diff --git a/Godeps/_workspace/src/code.google.com/p/goauth2/oauth/jwt/example/example.client_secrets.json b/Godeps/_workspace/src/code.google.com/p/goauth2/oauth/jwt/example/example.client_secrets.json
+deleted file mode 100644
+index 2ea86f2..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/goauth2/oauth/jwt/example/example.client_secrets.json
++++ /dev/null
+@@ -1 +0,0 @@
+-{"web":{"auth_uri":"https://accounts.google.com/o/oauth2/auth","token_uri":"https://accounts.google.com/o/oauth2/token","client_email":"XXXXXXXXXXXX@developer.gserviceaccount.com","client_x509_cert_url":"https://www.googleapis.com/robot/v1/metadata/x509/XXXXXXXXXXXX@developer.gserviceaccount.com","client_id":"XXXXXXXXXXXX.apps.googleusercontent.com","auth_provider_x509_cert_url":"https://www.googleapis.com/oauth2/v1/certs"}}
+diff --git a/Godeps/_workspace/src/code.google.com/p/goauth2/oauth/jwt/example/example.pem b/Godeps/_workspace/src/code.google.com/p/goauth2/oauth/jwt/example/example.pem
+deleted file mode 100644
+index 8f78b92..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/goauth2/oauth/jwt/example/example.pem
++++ /dev/null
+@@ -1,20 +0,0 @@
+-Bag Attributes
+-    friendlyName: privatekey
+-    localKeyID: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 
+-Key Attributes: <No Attributes>
+------BEGIN PRIVATE KEY-----
+-XXXXxyXXXXXXXxxyxxxX9y0XXYXXXXYXXxXyxxXxXxXXXyXXXXx4yx1xy1xyYxxY
+-1XxYy38YxXxxxyXxyyxx+xxxxyx1Y1xYx7yx2/Y1XyyXYYYxY5YXxX0xY/Y642yX
+-zYYxYXzXYxY0Y8y9YxyYXxxX40YyXxxXX4XXxx7XxXxxXyXxYYXxXyxX5XY0Yy2X
+-1YX0XXyy6YXyXx9XxXxyXX9XXYXxXxXXXXXXxYXYY3Y8Yy311XYYY81XyY14Xyyx
+-xXyx7xxXXXxxxxyyyX4YYYXyYyYXyxX4XYXYyxXYyx9xy23xXYyXyxYxXxx1XXXY
+-y98yX6yYxyyyX4Xyx1Xy/0yxxYxXxYYx2xx7yYXXXxYXXXxyXyyYYxx5XX2xxyxy
+-y6Yyyx0XX3YYYyx9YYXXXX7y0yxXXy+90XYz1y2xyx7yXxX+8X0xYxXXYxxyxYYy
+-YXx8Yy4yX0Xyxxx6yYX92yxy1YYYzyyyyxy55x/yyXXXYYXYXXzXXxYYxyXY8XXX
+-+y9+yXxX7XxxyYYxxXYxyY623xxXxYX59x5Y6yYyXYY4YxXXYXXXYxXYxXxXXx6x
+-YXX7XxXX2X0XY7YXyYy1XXxYXxXxYY1xXXxxxyy+07zXYxYxxXyyxxyxXx1XYy5X
+-5XYzyxYxXXYyX9XX7xX8xXxx+XXYyYXXXX5YY1x8Yxyx54Xy/1XXyyYXY5YxYyxY
+-XyyxXyX/YxxXXXxXXYXxyxx63xX/xxyYXXyYzx0XY+YxX5xyYyyxxxXXYX/94XXy
+-Xx63xYxXyXY3/XXxyyXX15XXXyz08XYY5YYXY/YXy/96x68XyyXXxYyXy4xYXx5x
+-7yxxyxxYxXxyx3y=
+------END PRIVATE KEY-----
+diff --git a/Godeps/_workspace/src/code.google.com/p/goauth2/oauth/jwt/example/main.go b/Godeps/_workspace/src/code.google.com/p/goauth2/oauth/jwt/example/main.go
+deleted file mode 100644
+index 2256e9c..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/goauth2/oauth/jwt/example/main.go
++++ /dev/null
+@@ -1,114 +0,0 @@
+-// Copyright 2011 The goauth2 Authors. All rights reserved.
+-// Use of this source code is governed by a BSD-style
+-// license that can be found in the LICENSE file.
+-
+-// This program makes a read only call to the Google Cloud Storage API,
+-// authenticated with OAuth2. A list of example APIs can be found at
+-// https://code.google.com/oauthplayground/
+-package main
+-
+-import (
+-	"encoding/json"
+-	"flag"
+-	"fmt"
+-	"io/ioutil"
+-	"log"
+-	"net/http"
+-	"strings"
+-
+-	"code.google.com/p/goauth2/oauth/jwt"
+-)
+-
+-const scope = "https://www.googleapis.com/auth/devstorage.read_only"
+-
+-var (
+-	secretsFile = flag.String("s", "", "JSON encoded secrets for the service account")
+-	pemFile     = flag.String("k", "", "private pem key file for the service account")
+-)
+-
+-const usageMsg = `
+-You must specify -k and -s.
+-
+-To obtain client secrets and pem, see the "OAuth 2 Credentials" section under
+-the "API Access" tab on this page: https://code.google.com/apis/console/
+-
+-Google Cloud Storage must also be turned on in the API console.
+-`
+-
+-func main() {
+-	flag.Parse()
+-
+-	if *secretsFile == "" || *pemFile == "" {
+-		flag.Usage()
+-		fmt.Println(usageMsg)
+-		return
+-	}
+-
+-	// Read the secret file bytes into the config.
+-	secretBytes, err := ioutil.ReadFile(*secretsFile)
+-	if err != nil {
+-		log.Fatal("error reading secrets file:", err)
+-	}
+-	var config struct {
+-		Web struct {
+-			ClientEmail string `json:"client_email"`
+-			ClientID    string `json:"client_id"`
+-			TokenURI    string `json:"token_uri"`
+-		}
+-	}
+-	err = json.Unmarshal(secretBytes, &config)
+-	if err != nil {
+-		log.Fatal("error unmarshalling secrets:", err)
+-	}
+-
+-	// Get the project ID from the client ID.
+-	projectID := strings.SplitN(config.Web.ClientID, "-", 2)[0]
+-
+-	// Read the pem file bytes for the private key.
+-	keyBytes, err := ioutil.ReadFile(*pemFile)
+-	if err != nil {
+-		log.Fatal("error reading private key file:", err)
+-	}
+-
+-	// Craft the ClaimSet and JWT token.
+-	t := jwt.NewToken(config.Web.ClientEmail, scope, keyBytes)
+-	t.ClaimSet.Aud = config.Web.TokenURI
+-
+-	// We need to provide a client.
+-	c := &http.Client{}
+-
+-	// Get the access token.
+-	o, err := t.Assert(c)
+-	if err != nil {
+-		log.Fatal("assertion error:", err)
+-	}
+-
+-	// Refresh token will be missing, but this access_token will be good
+-	// for one hour.
+-	fmt.Printf("access_token = %v\n", o.AccessToken)
+-	fmt.Printf("refresh_token = %v\n", o.RefreshToken)
+-	fmt.Printf("expires %v\n", o.Expiry)
+-
+-	// Form the request to list Google Cloud Storage buckets.
+-	req, err := http.NewRequest("GET", "https://storage.googleapis.com/", nil)
+-	if err != nil {
+-		log.Fatal("http.NewRequest:", err)
+-	}
+-	req.Header.Set("Authorization", "OAuth "+o.AccessToken)
+-	req.Header.Set("x-goog-api-version", "2")
+-	req.Header.Set("x-goog-project-id", projectID)
+-
+-	// Make the request.
+-	r, err := c.Do(req)
+-	if err != nil {
+-		log.Fatal("API request error:", err)
+-	}
+-	defer r.Body.Close()
+-
+-	// Write the response to standard output.
+-	res, err := ioutil.ReadAll(r.Body)
+-	if err != nil {
+-		log.Fatal("error reading API request results:", err)
+-	}
+-	fmt.Printf("\nRESULT:\n%s\n", res)
+-}
+diff --git a/Godeps/_workspace/src/code.google.com/p/goauth2/oauth/jwt/jwt.go b/Godeps/_workspace/src/code.google.com/p/goauth2/oauth/jwt/jwt.go
+deleted file mode 100644
+index 61bf5ce..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/goauth2/oauth/jwt/jwt.go
++++ /dev/null
+@@ -1,511 +0,0 @@
+-// Copyright 2012 The goauth2 Authors. All rights reserved.
+-// Use of this source code is governed by a BSD-style
+-// license that can be found in the LICENSE file.
+-
+-// The jwt package provides support for creating credentials for OAuth2 service
+-// account requests.
+-//
+-// For examples of the package usage please see jwt_test.go.
+-// Example usage (error handling omitted for brevity):
+-//
+-//	// Craft the ClaimSet and JWT token.
+-//	iss := "XXXXXXXXXXXX@developer.gserviceaccount.com"
+-//	scope := "https://www.googleapis.com/auth/devstorage.read_only"
+-//	t := jwt.NewToken(iss, scope, pemKeyBytes)
+-//
+-//	// We need to provide a client.
+-//	c := &http.Client{}
+-//
+-//	// Get the access token.
+-//	o, _ := t.Assert(c)
+-//
+-//	// Form the request to the service.
+-//	req, _ := http.NewRequest("GET", "https://storage.googleapis.com/", nil)
+-//	req.Header.Set("Authorization", "OAuth "+o.AccessToken)
+-//	req.Header.Set("x-goog-api-version", "2")
+-//	req.Header.Set("x-goog-project-id", "XXXXXXXXXXXX")
+-//
+-//	// Make the request.
+-//	result, _ := c.Do(req)
+-//
+-// For info on OAuth2 service accounts please see the online documentation.
+-// https://developers.google.com/accounts/docs/OAuth2ServiceAccount
+-//
+-package jwt
+-
+-import (
+-	"bytes"
+-	"crypto"
+-	"crypto/rand"
+-	"crypto/rsa"
+-	"crypto/sha256"
+-	"crypto/x509"
+-	"encoding/base64"
+-	"encoding/json"
+-	"encoding/pem"
+-	"errors"
+-	"fmt"
+-	"net/http"
+-	"net/url"
+-	"strings"
+-	"time"
+-
+-	"code.google.com/p/goauth2/oauth"
+-)
+-
+-// These are the default/standard values for this to work for Google service accounts.
+-const (
+-	stdAlgorithm     = "RS256"
+-	stdType          = "JWT"
+-	stdAssertionType = "http://oauth.net/grant_type/jwt/1.0/bearer"
+-	stdGrantType     = "urn:ietf:params:oauth:grant-type:jwt-bearer"
+-	stdAud           = "https://accounts.google.com/o/oauth2/token"
+-)
+-
+-var (
+-	ErrInvalidKey = errors.New("Invalid Key")
+-)
+-
+-// base64Encode returns a Base64url-encoded version of its input with any
+-// trailing "=" padding stripped.
+-func base64Encode(b []byte) string {
+-	return strings.TrimRight(base64.URLEncoding.EncodeToString(b), "=")
+-}
+-
+-// base64Decode decodes a Base64url-encoded string, restoring any stripped padding.
+-func base64Decode(s string) ([]byte, error) {
+-	// add back missing padding
+-	switch len(s) % 4 {
+-	case 2:
+-		s += "=="
+-	case 3:
+-		s += "="
+-	}
+-	return base64.URLEncoding.DecodeString(s)
+-}
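The helpers above follow the JWT convention that base64url segments carry no "=" padding, so decoding must first restore the padding implied by the input length. A self-contained sketch of the same round trip using only the standard library:

```go
package main

import (
	"encoding/base64"
	"fmt"
	"strings"
)

// b64Encode returns the base64url form of b with trailing "="
// padding stripped, as JWT segments require.
func b64Encode(b []byte) string {
	return strings.TrimRight(base64.URLEncoding.EncodeToString(b), "=")
}

// b64Decode restores the padding implied by the input length
// (len%4 of 2 needs "==", of 3 needs "="), then decodes.
func b64Decode(s string) ([]byte, error) {
	switch len(s) % 4 {
	case 2:
		s += "=="
	case 3:
		s += "="
	}
	return base64.URLEncoding.DecodeString(s)
}

func main() {
	enc := b64Encode([]byte(`{"alg":"RS256","typ":"JWT"}`))
	fmt.Println(enc) // no trailing "="
	dec, err := b64Decode(enc)
	fmt.Println(string(dec), err)
}
```

Later Go releases added base64.RawURLEncoding, which handles unpadded base64url directly and makes both helpers unnecessary.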
+-
+-// The JWT claim set contains information about the JWT including the
+-// permissions being requested (scopes), the target of the token, the issuer,
+-// the time the token was issued, and the lifetime of the token.
+-//
+-// Aud is usually https://accounts.google.com/o/oauth2/token
+-type ClaimSet struct {
+-	Iss   string `json:"iss"`             // email address of the client_id of the application making the access token request
+-	Scope string `json:"scope,omitempty"` // space-delimited list of the permissions the application requests
+-	Aud   string `json:"aud"`             // descriptor of the intended target of the assertion (Optional).
+-	Prn   string `json:"prn,omitempty"`   // email for which the application is requesting delegated access (Optional).
+-	Exp   int64  `json:"exp"`
+-	Iat   int64  `json:"iat"`
+-	Typ   string `json:"typ,omitempty"`
+-	Sub   string `json:"sub,omitempty"` // supports googleapi delegation
+-
+-	// See http://tools.ietf.org/html/draft-jones-json-web-token-10#section-4.3
+-	// This array is marshalled using custom code (see (c *ClaimSet) encode()).
+-	PrivateClaims map[string]interface{} `json:"-"`
+-
+-	exp time.Time
+-	iat time.Time
+-}
+-
+-// setTimes sets iat to t and exp to t.Add(time.Hour).
+-//
+-// Note that these times have nothing to do with the expiration time for the
+-// access_token returned by the server.  These have to do with the lifetime of
+-// the encoded JWT.
+-//
+-// A JWT can be re-used for up to one hour after it was encoded.  The access
+-// token that is granted will also be good for one hour so there is little point
+-// in trying to use the JWT a second time.
+-func (c *ClaimSet) setTimes(t time.Time) {
+-	c.iat = t
+-	c.exp = c.iat.Add(time.Hour)
+-}
+-
+-var (
+-	jsonStart = []byte{'{'}
+-	jsonEnd   = []byte{'}'}
+-)
+-
+-// encode returns the Base64url encoded form of the ClaimSet.
+-func (c *ClaimSet) encode() string {
+-	if c.exp.IsZero() || c.iat.IsZero() {
+-		c.setTimes(time.Now())
+-	}
+-	if c.Aud == "" {
+-		c.Aud = stdAud
+-	}
+-	c.Exp = c.exp.Unix()
+-	c.Iat = c.iat.Unix()
+-
+-	b, err := json.Marshal(c)
+-	if err != nil {
+-		panic(err)
+-	}
+-
+-	if len(c.PrivateClaims) == 0 {
+-		return base64Encode(b)
+-	}
+-
+-	// Marshal private claim set and then append it to b.
+-	prv, err := json.Marshal(c.PrivateClaims)
+-	if err != nil {
+-		panic(fmt.Errorf("Invalid map of private claims %v", c.PrivateClaims))
+-	}
+-
+-	// Concatenate public and private claim JSON objects.
+-	if !bytes.HasSuffix(b, jsonEnd) {
+-		panic(fmt.Errorf("Invalid JSON %s", b))
+-	}
+-	if !bytes.HasPrefix(prv, jsonStart) {
+-		panic(fmt.Errorf("Invalid JSON %s", prv))
+-	}
+-	b[len(b)-1] = ','         // Replace closing curly brace with a comma.
+-	b = append(b, prv[1:]...) // Append private claims.
+-
+-	return base64Encode(b)
+-}
+-
+-// Header describes the algorithm and type of token being generated,
+-// and optionally a KeyID describing additional parameters for the
+-// signature.
+-type Header struct {
+-	Algorithm string `json:"alg"`
+-	Type      string `json:"typ"`
+-	KeyId     string `json:"kid,omitempty"`
+-}
+-
+-func (h *Header) encode() string {
+-	b, err := json.Marshal(h)
+-	if err != nil {
+-		panic(err)
+-	}
+-	return base64Encode(b)
+-}
+-
+-// A JWT is composed of three parts: a header, a claim set, and a signature.
+-// The well formed and encoded JWT can then be exchanged for an access token.
+-//
+-// The Token is not a JWT, but it is encoded to produce a well formed JWT.
+-//
+-// When obtaining a key from the Google API console it will be downloaded in a
+-// PKCS12 encoding.  To use this key you will need to convert it to a PEM file.
+-// This can be achieved with openssl.
+-//
+-//   $ openssl pkcs12 -in <key.p12> -nocerts -passin pass:notasecret -nodes -out <key.pem>
+-//
+-// The contents of this file can then be used as the Key.
+-type Token struct {
+-	ClaimSet *ClaimSet // claim set used to construct the JWT
+-	Header   *Header   // header used to construct the JWT
+-	Key      []byte    // PEM printable encoding of the private key
+-	pKey     *rsa.PrivateKey
+-
+-	header string
+-	claim  string
+-	sig    string
+-
+-	useExternalSigner bool
+-	signer            Signer
+-}
+-
+-// NewToken returns a filled in *Token based on the standard header,
+-// and sets the Iat and Exp times based on when the call to Assert is
+-// made.
+-func NewToken(iss, scope string, key []byte) *Token {
+-	c := &ClaimSet{
+-		Iss:   iss,
+-		Scope: scope,
+-		Aud:   stdAud,
+-	}
+-	h := &Header{
+-		Algorithm: stdAlgorithm,
+-		Type:      stdType,
+-	}
+-	t := &Token{
+-		ClaimSet: c,
+-		Header:   h,
+-		Key:      key,
+-	}
+-	return t
+-}
+-
+-// Signer is an interface that given a JWT token, returns the header &
+-// claim (serialized and urlEncoded to a byte slice), along with the
+-// signature and an error (if any occurred).  It could modify any data
+-// to sign (typically the KeyID).
+-//
+-// Example usage where a SHA256 hash of the original url-encoded token
+-// with an added KeyID and secret data is used as a signature:
+-//
+-//	var privateData = "secret data added to hash, indexed by KeyID"
+-//
+-//	type SigningService struct{}
+-//
+-//	func (ss *SigningService) Sign(in *jwt.Token) (newTokenData, sig []byte, err error) {
+-//		in.Header.KeyID = "signing service"
+-//		newTokenData = in.EncodeWithoutSignature()
+-//		dataToSign := fmt.Sprintf("%s.%s", newTokenData, privateData)
+-//		h := sha256.New()
+-//		_, err := h.Write([]byte(dataToSign))
+-//		sig = h.Sum(nil)
+-//		return
+-//	}
+-type Signer interface {
+-	Sign(in *Token) (tokenData, signature []byte, err error)
+-}
+-
+-// NewSignerToken returns a *Token, using an external signer function
+-func NewSignerToken(iss, scope string, signer Signer) *Token {
+-	t := NewToken(iss, scope, nil)
+-	t.useExternalSigner = true
+-	t.signer = signer
+-	return t
+-}
+-
+-// Expired returns a boolean value letting us know if the token has expired.
+-func (t *Token) Expired() bool {
+-	return t.ClaimSet.exp.Before(time.Now())
+-}
+-
+-// Encode constructs and signs a Token returning a JWT ready to use for
+-// requesting an access token.
+-func (t *Token) Encode() (string, error) {
+-	var tok string
+-	t.header = t.Header.encode()
+-	t.claim = t.ClaimSet.encode()
+-	err := t.sign()
+-	if err != nil {
+-		return tok, err
+-	}
+-	tok = fmt.Sprintf("%s.%s.%s", t.header, t.claim, t.sig)
+-	return tok, nil
+-}
+-
+-// EncodeWithoutSignature returns the url-encoded value of the Token
+-// before signing has occurred (typically for use by external signers).
+-func (t *Token) EncodeWithoutSignature() string {
+-	t.header = t.Header.encode()
+-	t.claim = t.ClaimSet.encode()
+-	return fmt.Sprintf("%s.%s", t.header, t.claim)
+-}
+-
+-// sign computes the signature for a Token.  The details for this can be found
+-// in the OAuth2 Service Account documentation.
+-// https://developers.google.com/accounts/docs/OAuth2ServiceAccount#computingsignature
+-func (t *Token) sign() error {
+-	if t.useExternalSigner {
+-		fulldata, sig, err := t.signer.Sign(t)
+-		if err != nil {
+-			return err
+-		}
+-		split := strings.Split(string(fulldata), ".")
+-		if len(split) != 2 {
+-			return errors.New("no token returned")
+-		}
+-		t.header = split[0]
+-		t.claim = split[1]
+-		t.sig = base64Encode(sig)
+-		return err
+-	}
+-	ss := fmt.Sprintf("%s.%s", t.header, t.claim)
+-	if t.pKey == nil {
+-		err := t.parsePrivateKey()
+-		if err != nil {
+-			return err
+-		}
+-	}
+-	h := sha256.New()
+-	h.Write([]byte(ss))
+-	b, err := rsa.SignPKCS1v15(rand.Reader, t.pKey, crypto.SHA256, h.Sum(nil))
+-	t.sig = base64Encode(b)
+-	return err
+-}
+-
+-// parsePrivateKey converts the Token's Key ([]byte) into a parsed
+-// rsa.PrivateKey.  If the key is not well formed this method will return an
+-// ErrInvalidKey error.
+-func (t *Token) parsePrivateKey() error {
+-	block, _ := pem.Decode(t.Key)
+-	if block == nil {
+-		return ErrInvalidKey
+-	}
+-	parsedKey, err := x509.ParsePKCS8PrivateKey(block.Bytes)
+-	if err != nil {
+-		parsedKey, err = x509.ParsePKCS1PrivateKey(block.Bytes)
+-		if err != nil {
+-			return err
+-		}
+-	}
+-	var ok bool
+-	t.pKey, ok = parsedKey.(*rsa.PrivateKey)
+-	if !ok {
+-		return ErrInvalidKey
+-	}
+-	return nil
+-}
+-
+-// Assert obtains an *oauth.Token from the remote server by encoding and sending
+-// a JWT.  The access_token will expire in one hour (3600 seconds) and cannot be
+-// refreshed (no refresh_token is returned with the response).  Once this token
+-// expires call this method again to get a fresh one.
+-func (t *Token) Assert(c *http.Client) (*oauth.Token, error) {
+-	var o *oauth.Token
+-	t.ClaimSet.setTimes(time.Now())
+-	u, v, err := t.buildRequest()
+-	if err != nil {
+-		return o, err
+-	}
+-	resp, err := c.PostForm(u, v)
+-	if err != nil {
+-		return o, err
+-	}
+-	o, err = handleResponse(resp)
+-	return o, err
+-}
+-
+-// buildRequest sets up the URL values and the proper URL string for making our
+-// access_token request.
+-func (t *Token) buildRequest() (string, url.Values, error) {
+-	v := url.Values{}
+-	j, err := t.Encode()
+-	if err != nil {
+-		return t.ClaimSet.Aud, v, err
+-	}
+-	v.Set("grant_type", stdGrantType)
+-	v.Set("assertion", j)
+-	return t.ClaimSet.Aud, v, nil
+-}
+-
+-// Used for decoding the response body.
+-type respBody struct {
+-	IdToken   string        `json:"id_token"`
+-	Access    string        `json:"access_token"`
+-	Type      string        `json:"token_type"`
+-	ExpiresIn time.Duration `json:"expires_in"`
+-}
+-
+-// handleResponse returns a filled in *oauth.Token given the *http.Response from
+-// a *http.Request created by buildRequest.
+-func handleResponse(r *http.Response) (*oauth.Token, error) {
+-	o := &oauth.Token{}
+-	defer r.Body.Close()
+-	if r.StatusCode != 200 {
+-		return o, errors.New("invalid response: " + r.Status)
+-	}
+-	b := &respBody{}
+-	err := json.NewDecoder(r.Body).Decode(b)
+-	if err != nil {
+-		return o, err
+-	}
+-	o.AccessToken = b.Access
+-	if b.IdToken != "" {
+-		// decode returned id token to get expiry
+-		o.AccessToken = b.IdToken
+-		s := strings.Split(b.IdToken, ".")
+-		if len(s) < 2 {
+-			return nil, errors.New("invalid token received")
+-		}
+-		d, err := base64Decode(s[1])
+-		if err != nil {
+-			return o, err
+-		}
+-		c := &ClaimSet{}
+-		err = json.NewDecoder(bytes.NewBuffer(d)).Decode(c)
+-		if err != nil {
+-			return o, err
+-		}
+-		o.Expiry = time.Unix(c.Exp, 0)
+-		return o, nil
+-	}
+-	o.Expiry = time.Now().Add(b.ExpiresIn * time.Second)
+-	return o, nil
+-}
+-
+-// Transport implements http.RoundTripper. When configured with a valid
+-// JWT and OAuth tokens it can be used to make authenticated HTTP requests.
+-//
+-//	t := &jwt.Transport{jwtToken, oauthToken}
+-//	r, _, err := t.Client().Get("http://example.org/url/requiring/auth")
+-//
+-// It will automatically refresh the OAuth token if it can, updating in place.
+-type Transport struct {
+-	JWTToken   *Token
+-	OAuthToken *oauth.Token
+-
+-	// Transport is the HTTP transport to use when making requests.
+-	// It will default to http.DefaultTransport if nil.
+-	Transport http.RoundTripper
+-}
+-
+-// NewTransport creates a new authenticated transport.
+-func NewTransport(token *Token) (*Transport, error) {
+-	oa, err := token.Assert(new(http.Client))
+-	if err != nil {
+-		return nil, err
+-	}
+-	return &Transport{
+-		JWTToken:   token,
+-		OAuthToken: oa,
+-	}, nil
+-}
+-
+-// Client returns an *http.Client that makes OAuth-authenticated requests.
+-func (t *Transport) Client() *http.Client {
+-	return &http.Client{Transport: t}
+-}
+-
+-// Fetches the internal transport.
+-func (t *Transport) transport() http.RoundTripper {
+-	if t.Transport != nil {
+-		return t.Transport
+-	}
+-	return http.DefaultTransport
+-}
+-
+-// RoundTrip executes a single HTTP transaction using the Transport's
+-// OAuthToken as authorization headers.
+-//
+-// This method will attempt to renew the token if it has expired and may return
+-// an error related to that token renewal before attempting the client request.
+-// If the token cannot be renewed a non-nil error value will be returned.
+-// If the token is invalid callers should expect HTTP-level errors,
+-// as indicated by the Response's StatusCode.
+-func (t *Transport) RoundTrip(req *http.Request) (*http.Response, error) {
+-	// Sanity check the two tokens
+-	if t.JWTToken == nil {
+-		return nil, fmt.Errorf("no JWT token supplied")
+-	}
+-	if t.OAuthToken == nil {
+-		return nil, fmt.Errorf("no OAuth token supplied")
+-	}
+-	// Refresh the OAuth token if it has expired
+-	if t.OAuthToken.Expired() {
+-		if oa, err := t.JWTToken.Assert(new(http.Client)); err != nil {
+-			return nil, err
+-		} else {
+-			t.OAuthToken = oa
+-		}
+-	}
+-	// To set the Authorization header, we must make a copy of the Request
+-	// so that we don't modify the Request we were given.
+-	// This is required by the specification of http.RoundTripper.
+-	req = cloneRequest(req)
+-	req.Header.Set("Authorization", "Bearer "+t.OAuthToken.AccessToken)
+-
+-	// Make the HTTP request.
+-	return t.transport().RoundTrip(req)
+-}
+-
+-// cloneRequest returns a clone of the provided *http.Request.
+-// The clone is a shallow copy of the struct and its Header map.
+-func cloneRequest(r *http.Request) *http.Request {
+-	// shallow copy of the struct
+-	r2 := new(http.Request)
+-	*r2 = *r
+-	// deep copy of the Header
+-	r2.Header = make(http.Header)
+-	for k, s := range r.Header {
+-		r2.Header[k] = s
+-	}
+-	return r2
+-}
+diff --git a/Godeps/_workspace/src/code.google.com/p/goauth2/oauth/jwt/jwt_test.go b/Godeps/_workspace/src/code.google.com/p/goauth2/oauth/jwt/jwt_test.go
+deleted file mode 100644
+index 622843e..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/goauth2/oauth/jwt/jwt_test.go
++++ /dev/null
+@@ -1,486 +0,0 @@
+-// Copyright 2012 The goauth2 Authors. All rights reserved.
+-// Use of this source code is governed by a BSD-style
+-// license that can be found in the LICENSE file.
+-
+-// For package documentation please see jwt.go.
+-//
+-package jwt
+-
+-import (
+-	"bytes"
+-	"crypto"
+-	"crypto/rand"
+-	"crypto/rsa"
+-	"crypto/sha256"
+-	"crypto/x509"
+-	"encoding/json"
+-	"encoding/pem"
+-	"io/ioutil"
+-	"net/http"
+-	"testing"
+-	"time"
+-)
+-
+-const (
+-	stdHeaderStr = `{"alg":"RS256","typ":"JWT"}`
+-	iss          = "761326798069-r5mljlln1rd4lrbhg75efgigp36m78j5@developer.gserviceaccount.com"
+-	scope        = "https://www.googleapis.com/auth/prediction"
+-	exp          = 1328554385
+-	iat          = 1328550785 // exp - 1 hour
+-)
+-
+-// Base64url encoded Header
+-const headerEnc = "eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9"
+-
+-// Base64url encoded ClaimSet
+-const claimSetEnc = "eyJpc3MiOiI3NjEzMjY3OTgwNjktcjVtbGpsbG4xcmQ0bHJiaGc3NWVmZ2lncDM2bTc4ajVAZGV2ZWxvcGVyLmdzZXJ2aWNlYWNjb3VudC5jb20iLCJzY29wZSI6Imh0dHBzOi8vd3d3Lmdvb2dsZWFwaXMuY29tL2F1dGgvcHJlZGljdGlvbiIsImF1ZCI6Imh0dHBzOi8vYWNjb3VudHMuZ29vZ2xlLmNvbS9vL29hdXRoMi90b2tlbiIsImV4cCI6MTMyODU1NDM4NSwiaWF0IjoxMzI4NTUwNzg1fQ"
+-
+-// Base64url encoded Signature
+-const sigEnc = "olukbHreNiYrgiGCTEmY3eWGeTvYDSUHYoE84Jz3BRPBSaMdZMNOn_0CYK7UHPO7OdvUofjwft1dH59UxE9GWS02pjFti1uAQoImaqjLZoTXr8qiF6O_kDa9JNoykklWlRAIwGIZkDupCS-8cTAnM_ksSymiH1coKJrLDUX_BM0x2f4iMFQzhL5vT1ll-ZipJ0lNlxb5QsyXxDYcxtHYguF12-vpv3ItgT0STfcXoWzIGQoEbhwB9SBp9JYcQ8Ygz6pYDjm0rWX9LrchmTyDArCodpKLFtutNgcIFUP9fWxvwd1C2dNw5GjLcKr9a_SAERyoJ2WnCR1_j9N0wD2o0g"
+-
+-// Base64url encoded Token
+-const tokEnc = "eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiI3NjEzMjY3OTgwNjktcjVtbGpsbG4xcmQ0bHJiaGc3NWVmZ2lncDM2bTc4ajVAZGV2ZWxvcGVyLmdzZXJ2aWNlYWNjb3VudC5jb20iLCJzY29wZSI6Imh0dHBzOi8vd3d3Lmdvb2dsZWFwaXMuY29tL2F1dGgvcHJlZGljdGlvbiIsImF1ZCI6Imh0dHBzOi8vYWNjb3VudHMuZ29vZ2xlLmNvbS9vL29hdXRoMi90b2tlbiIsImV4cCI6MTMyODU1NDM4NSwiaWF0IjoxMzI4NTUwNzg1fQ.olukbHreNiYrgiGCTEmY3eWGeTvYDSUHYoE84Jz3BRPBSaMdZMNOn_0CYK7UHPO7OdvUofjwft1dH59UxE9GWS02pjFti1uAQoImaqjLZoTXr8qiF6O_kDa9JNoykklWlRAIwGIZkDupCS-8cTAnM_ksSymiH1coKJrLDUX_BM0x2f4iMFQzhL5vT1ll-ZipJ0lNlxb5QsyXxDYcxtHYguF12-vpv3ItgT0STfcXoWzIGQoEbhwB9SBp9JYcQ8Ygz6pYDjm0rWX9LrchmTyDArCodpKLFtutNgcIFUP9fWxvwd1C2dNw5GjLcKr9a_SAERyoJ2WnCR1_j9N0wD2o0g"
+-
+-// Private key for testing
+-const privateKeyPem = `-----BEGIN RSA PRIVATE KEY-----
+-MIIEpAIBAAKCAQEA4ej0p7bQ7L/r4rVGUz9RN4VQWoej1Bg1mYWIDYslvKrk1gpj
+-7wZgkdmM7oVK2OfgrSj/FCTkInKPqaCR0gD7K80q+mLBrN3PUkDrJQZpvRZIff3/
+-xmVU1WeruQLFJjnFb2dqu0s/FY/2kWiJtBCakXvXEOb7zfbINuayL+MSsCGSdVYs
+-SliS5qQpgyDap+8b5fpXZVJkq92hrcNtbkg7hCYUJczt8n9hcCTJCfUpApvaFQ18
+-pe+zpyl4+WzkP66I28hniMQyUlA1hBiskT7qiouq0m8IOodhv2fagSZKjOTTU2xk
+-SBc//fy3ZpsL7WqgsZS7Q+0VRK8gKfqkxg5OYQIDAQABAoIBAQDGGHzQxGKX+ANk
+-nQi53v/c6632dJKYXVJC+PDAz4+bzU800Y+n/bOYsWf/kCp94XcG4Lgsdd0Gx+Zq
+-HD9CI1IcqqBRR2AFscsmmX6YzPLTuEKBGMW8twaYy3utlFxElMwoUEsrSWRcCA1y
+-nHSDzTt871c7nxCXHxuZ6Nm/XCL7Bg8uidRTSC1sQrQyKgTPhtQdYrPQ4WZ1A4J9
+-IisyDYmZodSNZe5P+LTJ6M1SCgH8KH9ZGIxv3diMwzNNpk3kxJc9yCnja4mjiGE2
+-YCNusSycU5IhZwVeCTlhQGcNeV/skfg64xkiJE34c2y2ttFbdwBTPixStGaF09nU
+-Z422D40BAoGBAPvVyRRsC3BF+qZdaSMFwI1yiXY7vQw5+JZh01tD28NuYdRFzjcJ
+-vzT2n8LFpj5ZfZFvSMLMVEFVMgQvWnN0O6xdXvGov6qlRUSGaH9u+TCPNnIldjMP
+-B8+xTwFMqI7uQr54wBB+Poq7dVRP+0oHb0NYAwUBXoEuvYo3c/nDoRcZAoGBAOWl
+-aLHjMv4CJbArzT8sPfic/8waSiLV9Ixs3Re5YREUTtnLq7LoymqB57UXJB3BNz/2
+-eCueuW71avlWlRtE/wXASj5jx6y5mIrlV4nZbVuyYff0QlcG+fgb6pcJQuO9DxMI
+-aqFGrWP3zye+LK87a6iR76dS9vRU+bHZpSVvGMKJAoGAFGt3TIKeQtJJyqeUWNSk
+-klORNdcOMymYMIlqG+JatXQD1rR6ThgqOt8sgRyJqFCVT++YFMOAqXOBBLnaObZZ
+-CFbh1fJ66BlSjoXff0W+SuOx5HuJJAa5+WtFHrPajwxeuRcNa8jwxUsB7n41wADu
+-UqWWSRedVBg4Ijbw3nWwYDECgYB0pLew4z4bVuvdt+HgnJA9n0EuYowVdadpTEJg
+-soBjNHV4msLzdNqbjrAqgz6M/n8Ztg8D2PNHMNDNJPVHjJwcR7duSTA6w2p/4k28
+-bvvk/45Ta3XmzlxZcZSOct3O31Cw0i2XDVc018IY5be8qendDYM08icNo7vQYkRH
+-504kQQKBgQDjx60zpz8ozvm1XAj0wVhi7GwXe+5lTxiLi9Fxq721WDxPMiHDW2XL
+-YXfFVy/9/GIMvEiGYdmarK1NW+VhWl1DC5xhDg0kvMfxplt4tynoq1uTsQTY31Mx
+-BeF5CT/JuNYk3bEBF0H/Q3VGO1/ggVS+YezdFbLWIRoMnLj6XCFEGg==
+------END RSA PRIVATE KEY-----`
+-
+-// Public key to go with the private key for testing
+-const publicKeyPem = `-----BEGIN CERTIFICATE-----
+-MIIDIzCCAgugAwIBAgIJAMfISuBQ5m+5MA0GCSqGSIb3DQEBBQUAMBUxEzARBgNV
+-BAMTCnVuaXQtdGVzdHMwHhcNMTExMjA2MTYyNjAyWhcNMjExMjAzMTYyNjAyWjAV
+-MRMwEQYDVQQDEwp1bml0LXRlc3RzMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIB
+-CgKCAQEA4ej0p7bQ7L/r4rVGUz9RN4VQWoej1Bg1mYWIDYslvKrk1gpj7wZgkdmM
+-7oVK2OfgrSj/FCTkInKPqaCR0gD7K80q+mLBrN3PUkDrJQZpvRZIff3/xmVU1Wer
+-uQLFJjnFb2dqu0s/FY/2kWiJtBCakXvXEOb7zfbINuayL+MSsCGSdVYsSliS5qQp
+-gyDap+8b5fpXZVJkq92hrcNtbkg7hCYUJczt8n9hcCTJCfUpApvaFQ18pe+zpyl4
+-+WzkP66I28hniMQyUlA1hBiskT7qiouq0m8IOodhv2fagSZKjOTTU2xkSBc//fy3
+-ZpsL7WqgsZS7Q+0VRK8gKfqkxg5OYQIDAQABo3YwdDAdBgNVHQ4EFgQU2RQ8yO+O
+-gN8oVW2SW7RLrfYd9jEwRQYDVR0jBD4wPIAU2RQ8yO+OgN8oVW2SW7RLrfYd9jGh
+-GaQXMBUxEzARBgNVBAMTCnVuaXQtdGVzdHOCCQDHyErgUOZvuTAMBgNVHRMEBTAD
+-AQH/MA0GCSqGSIb3DQEBBQUAA4IBAQBRv+M/6+FiVu7KXNjFI5pSN17OcW5QUtPr
+-odJMlWrJBtynn/TA1oJlYu3yV5clc/71Vr/AxuX5xGP+IXL32YDF9lTUJXG/uUGk
+-+JETpKmQviPbRsvzYhz4pf6ZIOZMc3/GIcNq92ECbseGO+yAgyWUVKMmZM0HqXC9
+-ovNslqe0M8C1sLm1zAR5z/h/litE7/8O2ietija3Q/qtl2TOXJdCA6sgjJX2WUql
+-ybrC55ct18NKf3qhpcEkGQvFU40rVYApJpi98DiZPYFdx1oBDp/f4uZ3ojpxRVFT
+-cDwcJLfNRCPUhormsY7fDS9xSyThiHsW9mjJYdcaKQkwYZ0F11yB
+------END CERTIFICATE-----`
+-
+-var (
+-	privateKeyPemBytes = []byte(privateKeyPem)
+-	publicKeyPemBytes  = []byte(publicKeyPem)
+-	stdHeader          = &Header{Algorithm: stdAlgorithm, Type: stdType}
+-)
+-
+-// Testing the urlEncode function.
+-func TestUrlEncode(t *testing.T) {
+-	enc := base64Encode([]byte(stdHeaderStr))
+-	b := []byte(enc)
+-	if b[len(b)-1] == 61 {
+-		t.Error("TestUrlEncode: last char == \"=\"")
+-	}
+-	if enc != headerEnc {
+-		t.Error("TestUrlEncode: enc != headerEnc")
+-		t.Errorf("        enc = %s", enc)
+-		t.Errorf("  headerEnc = %s", headerEnc)
+-	}
+-}
+-
+-// Test that the times are set properly.
+-func TestClaimSetSetTimes(t *testing.T) {
+-	c := &ClaimSet{
+-		Iss:   iss,
+-		Scope: scope,
+-	}
+-	iat := time.Unix(iat, 0)
+-	c.setTimes(iat)
+-	if c.exp.Unix() != exp {
+-		t.Error("TestClaimSetSetTimes: c.exp != exp")
+-		t.Errorf("  c.Exp = %d", c.exp.Unix())
+-		t.Errorf("    exp = %d", exp)
+-	}
+-}
+-
+-// Given a well formed ClaimSet, test for proper encoding.
+-func TestClaimSetEncode(t *testing.T) {
+-	c := &ClaimSet{
+-		Iss:   iss,
+-		Scope: scope,
+-		exp:   time.Unix(exp, 0),
+-		iat:   time.Unix(iat, 0),
+-	}
+-	enc := c.encode()
+-	re, err := base64Decode(enc)
+-	if err != nil {
+-		t.Fatalf("error decoding encoded claim set: %v", err)
+-	}
+-
+-	wa, err := base64Decode(claimSetEnc)
+-	if err != nil {
+-		t.Fatalf("error decoding encoded expected claim set: %v", err)
+-	}
+-
+-	if enc != claimSetEnc {
+-		t.Error("TestClaimSetEncode: enc != claimSetEnc")
+-		t.Errorf("          enc = %s", string(re))
+-		t.Errorf("  claimSetEnc = %s", string(wa))
+-	}
+-}
+-
+-// Test that claim sets with private claim names are encoded correctly.
+-func TestClaimSetWithPrivateNameEncode(t *testing.T) {
+-	iatT := time.Unix(iat, 0)
+-	expT := time.Unix(exp, 0)
+-
+-	i, err := json.Marshal(iatT.Unix())
+-	if err != nil {
+-		t.Fatalf("error marshaling iatT value of %v: %v", iatT.Unix(), err)
+-	}
+-	iatStr := string(i)
+-	e, err := json.Marshal(expT.Unix())
+-	if err != nil {
+-		t.Fatalf("error marshaling expT value of %v: %v", expT.Unix(), err)
+-	}
+-
+-	expStr := string(e)
+-
+-	testCases := []struct {
+-		desc  string
+-		input map[string]interface{}
+-		want  string
+-	}{
+-		// Test a simple int field.
+-		{
+-			"single simple field",
+-			map[string]interface{}{"amount": 22},
+-			`{` +
+-				`"iss":"` + iss + `",` +
+-				`"scope":"` + scope + `",` +
+-				`"aud":"` + stdAud + `",` +
+-				`"exp":` + expStr + `,` +
+-				`"iat":` + iatStr + `,` +
+-				`"amount":22` +
+-				`}`,
+-		},
+-		{
+-			"multiple simple fields",
+-			map[string]interface{}{"tracking_code": "axZf", "amount": 22},
+-			`{` +
+-				`"iss":"` + iss + `",` +
+-				`"scope":"` + scope + `",` +
+-				`"aud":"` + stdAud + `",` +
+-				`"exp":` + expStr + `,` +
+-				`"iat":` + iatStr + `,` +
+-				`"amount":22,` +
+-				`"tracking_code":"axZf"` +
+-				`}`,
+-		},
+-		{
+-			"nested struct fields",
+-			map[string]interface{}{
+-				"tracking_code": "axZf",
+-				"purchase": struct {
+-					Description string `json:"desc"`
+-					Quantity    int32  `json:"q"`
+-					Time        int64  `json:"t"`
+-				}{
+-					"toaster",
+-					5,
+-					iat,
+-				},
+-			},
+-			`{` +
+-				`"iss":"` + iss + `",` +
+-				`"scope":"` + scope + `",` +
+-				`"aud":"` + stdAud + `",` +
+-				`"exp":` + expStr + `,` +
+-				`"iat":` + iatStr + `,` +
+-				`"purchase":{"desc":"toaster","q":5,"t":` + iatStr + `},` +
+-				`"tracking_code":"axZf"` +
+-				`}`,
+-		},
+-	}
+-
+-	for _, testCase := range testCases {
+-		c := &ClaimSet{
+-			Iss:           iss,
+-			Scope:         scope,
+-			Aud:           stdAud,
+-			iat:           iatT,
+-			exp:           expT,
+-			PrivateClaims: testCase.input,
+-		}
+-		cJSON, err := base64Decode(c.encode())
+-		if err != nil {
+-			t.Fatalf("error decoding claim set: %v", err)
+-		}
+-		if string(cJSON) != testCase.want {
+-			t.Errorf("TestClaimSetWithPrivateNameEncode: enc != want in case %s", testCase.desc)
+-			t.Errorf("    enc = %s", cJSON)
+-			t.Errorf("    want = %s", testCase.want)
+-		}
+-	}
+-}
+-
+-// Test the NewToken constructor.
+-func TestNewToken(t *testing.T) {
+-	tok := NewToken(iss, scope, privateKeyPemBytes)
+-	if tok.ClaimSet.Iss != iss {
+-		t.Error("TestNewToken: tok.ClaimSet.Iss != iss")
+-		t.Errorf("  tok.ClaimSet.Iss = %s", tok.ClaimSet.Iss)
+-		t.Errorf("               iss = %s", iss)
+-	}
+-	if tok.ClaimSet.Scope != scope {
+-		t.Error("TestNewToken: tok.ClaimSet.Scope != scope")
+-		t.Errorf("  tok.ClaimSet.Scope = %s", tok.ClaimSet.Scope)
+-		t.Errorf("               scope = %s", scope)
+-	}
+-	if tok.ClaimSet.Aud != stdAud {
+-		t.Error("TestNewToken: tok.ClaimSet.Aud != stdAud")
+-		t.Errorf("  tok.ClaimSet.Aud = %s", tok.ClaimSet.Aud)
+-		t.Errorf("            stdAud = %s", stdAud)
+-	}
+-	if !bytes.Equal(tok.Key, privateKeyPemBytes) {
+-		t.Error("TestNewToken: tok.Key != privateKeyPemBytes")
+-		t.Errorf("             tok.Key = %s", tok.Key)
+-		t.Errorf("  privateKeyPemBytes = %s", privateKeyPemBytes)
+-	}
+-}
+-
+-// Make sure the private key parsing functions work.
+-func TestParsePrivateKey(t *testing.T) {
+-	tok := &Token{
+-		Key: privateKeyPemBytes,
+-	}
+-	err := tok.parsePrivateKey()
+-	if err != nil {
+-		t.Errorf("TestParsePrivateKey:tok.parsePrivateKey: %v", err)
+-	}
+-}
+-
+-// Test that the token signature generated matches the golden standard.
+-func TestTokenSign(t *testing.T) {
+-	tok := &Token{
+-		Key:    privateKeyPemBytes,
+-		claim:  claimSetEnc,
+-		header: headerEnc,
+-	}
+-	err := tok.parsePrivateKey()
+-	if err != nil {
+-		t.Errorf("TestTokenSign:tok.parsePrivateKey: %v", err)
+-	}
+-	err = tok.sign()
+-	if err != nil {
+-		t.Errorf("TestTokenSign:tok.sign: %v", err)
+-	}
+-	if tok.sig != sigEnc {
+-		t.Error("TestTokenSign: tok.sig != sigEnc")
+-		t.Errorf("  tok.sig = %s", tok.sig)
+-		t.Errorf("   sigEnc = %s", sigEnc)
+-	}
+-}
+-
+-// Test that the token expiration function is working.
+-func TestTokenExpired(t *testing.T) {
+-	c := &ClaimSet{}
+-	tok := &Token{
+-		ClaimSet: c,
+-	}
+-	now := time.Now()
+-	c.setTimes(now)
+-	if tok.Expired() != false {
+-		t.Error("TestTokenExpired: tok.Expired != false")
+-	}
+-	// Set the times as if they were set 2 hours ago.
+-	c.setTimes(now.Add(-2 * time.Hour))
+-	if tok.Expired() != true {
+-		t.Error("TestTokenExpired: tok.Expired != true")
+-	}
+-}
+-
+-// Given a well formed Token, test for proper encoding.
+-func TestTokenEncode(t *testing.T) {
+-	c := &ClaimSet{
+-		Iss:   iss,
+-		Scope: scope,
+-		exp:   time.Unix(exp, 0),
+-		iat:   time.Unix(iat, 0),
+-	}
+-	tok := &Token{
+-		ClaimSet: c,
+-		Header:   stdHeader,
+-		Key:      privateKeyPemBytes,
+-	}
+-	enc, err := tok.Encode()
+-	if err != nil {
+-		t.Errorf("TestTokenEncode:tok.Assertion: %v", err)
+-	}
+-	if enc != tokEnc {
+-		t.Error("TestTokenEncode: enc != tokEnc")
+-		t.Errorf("     enc = %s", enc)
+-		t.Errorf("  tokEnc = %s", tokEnc)
+-	}
+-}
+-
+-// Given a well formed Token we should get back a well formed request.
+-func TestBuildRequest(t *testing.T) {
+-	c := &ClaimSet{
+-		Iss:   iss,
+-		Scope: scope,
+-		exp:   time.Unix(exp, 0),
+-		iat:   time.Unix(iat, 0),
+-	}
+-	tok := &Token{
+-		ClaimSet: c,
+-		Header:   stdHeader,
+-		Key:      privateKeyPemBytes,
+-	}
+-	u, v, err := tok.buildRequest()
+-	if err != nil {
+-		t.Errorf("TestBuildRequest:BuildRequest: %v", err)
+-	}
+-	if u != c.Aud {
+-		t.Error("TestBuildRequest: u != c.Aud")
+-		t.Errorf("      u = %s", u)
+-		t.Errorf("  c.Aud = %s", c.Aud)
+-	}
+-	if v.Get("grant_type") != stdGrantType {
+-		t.Error("TestBuildRequest: grant_type != stdGrantType")
+-		t.Errorf("    grant_type = %s", v.Get("grant_type"))
+-		t.Errorf("  stdGrantType = %s", stdGrantType)
+-	}
+-	if v.Get("assertion") != tokEnc {
+-		t.Error("TestBuildRequest: assertion != tokEnc")
+-		t.Errorf("  assertion = %s", v.Get("assertion"))
+-		t.Errorf("     tokEnc = %s", tokEnc)
+-	}
+-}
+-
+-// Given a well formed access request response we should get back an oauth.Token.
+-func TestHandleResponse(t *testing.T) {
+-	rb := &respBody{
+-		Access:    "1/8xbJqaOZXSUZbHLl5EOtu1pxz3fmmetKx9W8CV4t79M",
+-		Type:      "Bearer",
+-		ExpiresIn: 3600,
+-	}
+-	b, err := json.Marshal(rb)
+-	if err != nil {
+-		t.Errorf("TestHandleResponse:json.Marshal: %v", err)
+-	}
+-	r := &http.Response{
+-		Status:     "200 OK",
+-		StatusCode: 200,
+-		Body:       ioutil.NopCloser(bytes.NewReader(b)),
+-	}
+-	o, err := handleResponse(r)
+-	if err != nil {
+-		t.Errorf("TestHandleResponse:handleResponse: %v", err)
+-	}
+-	if o.AccessToken != rb.Access {
+-		t.Error("TestHandleResponse: o.AccessToken != rb.Access")
+-		t.Errorf("  o.AccessToken = %s", o.AccessToken)
+-		t.Errorf("       rb.Access = %s", rb.Access)
+-	}
+-	if o.Expired() {
+-		t.Error("TestHandleResponse: o.Expired == true")
+-	}
+-}
+-
+-// passthrough signature for test
+-type FakeSigner struct{}
+-
+-func (f FakeSigner) Sign(tok *Token) ([]byte, []byte, error) {
+-	block, _ := pem.Decode(privateKeyPemBytes)
+-	pKey, _ := x509.ParsePKCS1PrivateKey(block.Bytes)
+-	ss := headerEnc + "." + claimSetEnc
+-	h := sha256.New()
+-	h.Write([]byte(ss))
+-	b, _ := rsa.SignPKCS1v15(rand.Reader, pKey, crypto.SHA256, h.Sum(nil))
+-	return []byte(ss), b, nil
+-}
+-
+-// Given an external signer, get back a valid and signed JWT
+-func TestExternalSigner(t *testing.T) {
+-	tok := NewSignerToken(iss, scope, FakeSigner{})
+-	enc, _ := tok.Encode()
+-	if enc != tokEnc {
+-		t.Errorf("TestExternalSigner: enc != tokEnc")
+-		t.Errorf("     enc = %s", enc)
+-		t.Errorf("  tokEnc = %s", tokEnc)
+-	}
+-}
+-
+-func TestHandleResponseWithNewExpiry(t *testing.T) {
+-	rb := &respBody{
+-		IdToken: tokEnc,
+-	}
+-	b, err := json.Marshal(rb)
+-	if err != nil {
+-		t.Errorf("TestHandleResponse:json.Marshal: %v", err)
+-	}
+-	r := &http.Response{
+-		Status:     "200 OK",
+-		StatusCode: 200,
+-		Body:       ioutil.NopCloser(bytes.NewReader(b)),
+-	}
+-	o, err := handleResponse(r)
+-	if err != nil {
+-		t.Errorf("TestHandleResponse:handleResponse: %v", err)
+-	}
+-	if o.Expiry != time.Unix(exp, 0) {
+-		t.Error("TestHandleResponse: o.Expiry != exp")
+-		t.Errorf("  o.Expiry = %s", o.Expiry)
+-		t.Errorf("       exp = %s", time.Unix(exp, 0))
+-	}
+-}
+-
+-// Placeholder for future Assert tests.
+-func TestAssert(t *testing.T) {
+-	// Since this method makes a call to BuildRequest, an http.Client, and
+-	// finally HandleResponse there is not much more to test.  This is here
+-	// as a placeholder if that changes.
+-}
+-
+-// Benchmark for the end-to-end encoding of a well formed token.
+-func BenchmarkTokenEncode(b *testing.B) {
+-	b.StopTimer()
+-	c := &ClaimSet{
+-		Iss:   iss,
+-		Scope: scope,
+-		exp:   time.Unix(exp, 0),
+-		iat:   time.Unix(iat, 0),
+-	}
+-	tok := &Token{
+-		ClaimSet: c,
+-		Key:      privateKeyPemBytes,
+-	}
+-	b.StartTimer()
+-	for i := 0; i < b.N; i++ {
+-		tok.Encode()
+-	}
+-}
+diff --git a/Godeps/_workspace/src/code.google.com/p/goauth2/oauth/oauth.go b/Godeps/_workspace/src/code.google.com/p/goauth2/oauth/oauth.go
+deleted file mode 100644
+index 79d603d..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/goauth2/oauth/oauth.go
++++ /dev/null
+@@ -1,405 +0,0 @@
+-// Copyright 2011 The goauth2 Authors. All rights reserved.
+-// Use of this source code is governed by a BSD-style
+-// license that can be found in the LICENSE file.
+-
+-// The oauth package provides support for making
+-// OAuth2-authenticated HTTP requests.
+-//
+-// Example usage:
+-//
+-//	// Specify your configuration. (typically as a global variable)
+-//	var config = &oauth.Config{
+-//		ClientId:     YOUR_CLIENT_ID,
+-//		ClientSecret: YOUR_CLIENT_SECRET,
+-//		Scope:        "https://www.googleapis.com/auth/buzz",
+-//		AuthURL:      "https://accounts.google.com/o/oauth2/auth",
+-//		TokenURL:     "https://accounts.google.com/o/oauth2/token",
+-//		RedirectURL:  "http://you.example.org/handler",
+-//	}
+-//
+-//	// A landing page redirects to the OAuth provider to get the auth code.
+-//	func landing(w http.ResponseWriter, r *http.Request) {
+-//		http.Redirect(w, r, config.AuthCodeURL("foo"), http.StatusFound)
+-//	}
+-//
+-//	// The user will be redirected back to this handler, that takes the
+-//	// "code" query parameter and Exchanges it for an access token.
+-//	func handler(w http.ResponseWriter, r *http.Request) {
+-//		t := &oauth.Transport{Config: config}
+-//		t.Exchange(r.FormValue("code"))
+-//		// The Transport now has a valid Token. Create an *http.Client
+-//		// with which we can make authenticated API requests.
+-//		c := t.Client()
+-//		c.Post(...)
+-//		// ...
+-//		// btw, r.FormValue("state") == "foo"
+-//	}
+-//
+-package oauth
+-
+-import (
+-	"encoding/json"
+-	"errors"
+-	"fmt"
+-	"io"
+-	"io/ioutil"
+-	"mime"
+-	"net/http"
+-	"net/url"
+-	"os"
+-	"strings"
+-	"time"
+-)
+-
+-type OAuthError struct {
+-	prefix string
+-	msg    string
+-}
+-
+-func (oe OAuthError) Error() string {
+-	return "OAuthError: " + oe.prefix + ": " + oe.msg
+-}
+-
+-// Cache specifies the methods that implement a Token cache.
+-type Cache interface {
+-	Token() (*Token, error)
+-	PutToken(*Token) error
+-}
+-
+-// CacheFile implements Cache. Its value is the name of the file in which
+-// the Token is stored in JSON format.
+-type CacheFile string
+-
+-func (f CacheFile) Token() (*Token, error) {
+-	file, err := os.Open(string(f))
+-	if err != nil {
+-		return nil, OAuthError{"CacheFile.Token", err.Error()}
+-	}
+-	defer file.Close()
+-	tok := &Token{}
+-	if err := json.NewDecoder(file).Decode(tok); err != nil {
+-		return nil, OAuthError{"CacheFile.Token", err.Error()}
+-	}
+-	return tok, nil
+-}
+-
+-func (f CacheFile) PutToken(tok *Token) error {
+-	file, err := os.OpenFile(string(f), os.O_RDWR|os.O_CREATE|os.O_TRUNC, 0600)
+-	if err != nil {
+-		return OAuthError{"CacheFile.PutToken", err.Error()}
+-	}
+-	if err := json.NewEncoder(file).Encode(tok); err != nil {
+-		file.Close()
+-		return OAuthError{"CacheFile.PutToken", err.Error()}
+-	}
+-	if err := file.Close(); err != nil {
+-		return OAuthError{"CacheFile.PutToken", err.Error()}
+-	}
+-	return nil
+-}
+-
+-// Config is the configuration of an OAuth consumer.
+-type Config struct {
+-	// ClientId is the OAuth client identifier used when communicating with
+-	// the configured OAuth provider.
+-	ClientId string
+-
+-	// ClientSecret is the OAuth client secret used when communicating with
+-	// the configured OAuth provider.
+-	ClientSecret string
+-
+-	// Scope identifies the level of access being requested. Multiple scope
+-	// values should be provided as a space-delimited string.
+-	Scope string
+-
+-	// AuthURL is the URL the user will be directed to in order to grant
+-	// access.
+-	AuthURL string
+-
+-	// TokenURL is the URL used to retrieve OAuth tokens.
+-	TokenURL string
+-
+-	// RedirectURL is the URL to which the user will be returned after
+-	// granting (or denying) access.
+-	RedirectURL string
+-
+-	// TokenCache allows tokens to be cached for subsequent requests.
+-	TokenCache Cache
+-
+-	AccessType string // Optional, "online" (default) or "offline", no refresh token if "online"
+-
+-	// ApprovalPrompt indicates whether the user should be
+-	// re-prompted for consent. If set to "auto" (default) the
+-	// user will be prompted only if they haven't previously
+-	// granted consent and the code can only be exchanged for an
+-	// access token.
+-	// If set to "force" the user will always be prompted, and the
+-	// code can be exchanged for a refresh token.
+-	ApprovalPrompt string
+-}
+-
+-// Token contains an end-user's tokens.
+-// This is the data you must store to persist authentication.
+-type Token struct {
+-	AccessToken  string
+-	RefreshToken string
+-	Expiry       time.Time         // If zero the token has no (known) expiry time.
+-	Extra        map[string]string // May be nil.
+-}
+-
+-func (t *Token) Expired() bool {
+-	if t.Expiry.IsZero() {
+-		return false
+-	}
+-	return t.Expiry.Before(time.Now())
+-}
+-
+-// Transport implements http.RoundTripper. When configured with a valid
+-// Config and Token it can be used to make authenticated HTTP requests.
+-//
+-//	t := &oauth.Transport{config}
+-//      t.Exchange(code)
+-//      // t now contains a valid Token
+-//	r, _, err := t.Client().Get("http://example.org/url/requiring/auth")
+-//
+-// It will automatically refresh the Token if it can,
+-// updating the supplied Token in place.
+-type Transport struct {
+-	*Config
+-	*Token
+-
+-	// Transport is the HTTP transport to use when making requests.
+-	// It will default to http.DefaultTransport if nil.
+-	// (It should never be an oauth.Transport.)
+-	Transport http.RoundTripper
+-}
+-
+-// Client returns an *http.Client that makes OAuth-authenticated requests.
+-func (t *Transport) Client() *http.Client {
+-	return &http.Client{Transport: t}
+-}
+-
+-func (t *Transport) transport() http.RoundTripper {
+-	if t.Transport != nil {
+-		return t.Transport
+-	}
+-	return http.DefaultTransport
+-}
+-
+-// AuthCodeURL returns a URL that the end-user should be redirected to,
+-// so that they may obtain an authorization code.
+-func (c *Config) AuthCodeURL(state string) string {
+-	url_, err := url.Parse(c.AuthURL)
+-	if err != nil {
+-		panic("AuthURL malformed: " + err.Error())
+-	}
+-	q := url.Values{
+-		"response_type":   {"code"},
+-		"client_id":       {c.ClientId},
+-		"redirect_uri":    {c.RedirectURL},
+-		"scope":           {c.Scope},
+-		"state":           {state},
+-		"access_type":     {c.AccessType},
+-		"approval_prompt": {c.ApprovalPrompt},
+-	}.Encode()
+-	if url_.RawQuery == "" {
+-		url_.RawQuery = q
+-	} else {
+-		url_.RawQuery += "&" + q
+-	}
+-	return url_.String()
+-}
+-
+-// Exchange takes a code and gets access Token from the remote server.
+-func (t *Transport) Exchange(code string) (*Token, error) {
+-	if t.Config == nil {
+-		return nil, OAuthError{"Exchange", "no Config supplied"}
+-	}
+-
+-	// If the transport or the cache already has a token, it is
+-	// passed to `updateToken` to preserve existing refresh token.
+-	tok := t.Token
+-	if tok == nil && t.TokenCache != nil {
+-		tok, _ = t.TokenCache.Token()
+-	}
+-	if tok == nil {
+-		tok = new(Token)
+-	}
+-	err := t.updateToken(tok, url.Values{
+-		"grant_type":   {"authorization_code"},
+-		"redirect_uri": {t.RedirectURL},
+-		"scope":        {t.Scope},
+-		"code":         {code},
+-	})
+-	if err != nil {
+-		return nil, err
+-	}
+-	t.Token = tok
+-	if t.TokenCache != nil {
+-		return tok, t.TokenCache.PutToken(tok)
+-	}
+-	return tok, nil
+-}
+-
+-// RoundTrip executes a single HTTP transaction using the Transport's
+-// Token as authorization headers.
+-//
+-// This method will attempt to renew the Token if it has expired and may return
+-// an error related to that Token renewal before attempting the client request.
+-// If the Token cannot be renewed a non-nil os.Error value will be returned.
+-// If the Token is invalid callers should expect HTTP-level errors,
+-// as indicated by the Response's StatusCode.
+-func (t *Transport) RoundTrip(req *http.Request) (*http.Response, error) {
+-	if t.Token == nil {
+-		if t.Config == nil {
+-			return nil, OAuthError{"RoundTrip", "no Config supplied"}
+-		}
+-		if t.TokenCache == nil {
+-			return nil, OAuthError{"RoundTrip", "no Token supplied"}
+-		}
+-		var err error
+-		t.Token, err = t.TokenCache.Token()
+-		if err != nil {
+-			return nil, err
+-		}
+-	}
+-
+-	// Refresh the Token if it has expired.
+-	if t.Expired() {
+-		if err := t.Refresh(); err != nil {
+-			return nil, err
+-		}
+-	}
+-
+-	// To set the Authorization header, we must make a copy of the Request
+-	// so that we don't modify the Request we were given.
+-	// This is required by the specification of http.RoundTripper.
+-	req = cloneRequest(req)
+-	req.Header.Set("Authorization", "Bearer "+t.AccessToken)
+-
+-	// Make the HTTP request.
+-	return t.transport().RoundTrip(req)
+-}
+-
+-// cloneRequest returns a clone of the provided *http.Request.
+-// The clone is a shallow copy of the struct and its Header map.
+-func cloneRequest(r *http.Request) *http.Request {
+-	// shallow copy of the struct
+-	r2 := new(http.Request)
+-	*r2 = *r
+-	// deep copy of the Header
+-	r2.Header = make(http.Header)
+-	for k, s := range r.Header {
+-		r2.Header[k] = s
+-	}
+-	return r2
+-}
+-
+-// Refresh renews the Transport's AccessToken using its RefreshToken.
+-func (t *Transport) Refresh() error {
+-	if t.Token == nil {
+-		return OAuthError{"Refresh", "no existing Token"}
+-	}
+-	if t.RefreshToken == "" {
+-		return OAuthError{"Refresh", "Token expired; no Refresh Token"}
+-	}
+-	if t.Config == nil {
+-		return OAuthError{"Refresh", "no Config supplied"}
+-	}
+-
+-	err := t.updateToken(t.Token, url.Values{
+-		"grant_type":    {"refresh_token"},
+-		"refresh_token": {t.RefreshToken},
+-	})
+-	if err != nil {
+-		return err
+-	}
+-	if t.TokenCache != nil {
+-		return t.TokenCache.PutToken(t.Token)
+-	}
+-	return nil
+-}
+-
+-// AuthenticateClient gets an access Token using the client_credentials grant
+-// type.
+-func (t *Transport) AuthenticateClient() error {
+-	if t.Config == nil {
+-		return OAuthError{"Exchange", "no Config supplied"}
+-	}
+-	if t.Token == nil {
+-		t.Token = &Token{}
+-	}
+-	return t.updateToken(t.Token, url.Values{"grant_type": {"client_credentials"}})
+-}
+-
+-func (t *Transport) updateToken(tok *Token, v url.Values) error {
+-	v.Set("client_id", t.ClientId)
+-	v.Set("client_secret", t.ClientSecret)
+-	client := &http.Client{Transport: t.transport()}
+-	req, err := http.NewRequest("POST", t.TokenURL, strings.NewReader(v.Encode()))
+-	if err != nil {
+-		return err
+-	}
+-	req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
+-	req.SetBasicAuth(t.ClientId, t.ClientSecret)
+-	r, err := client.Do(req)
+-	if err != nil {
+-		return err
+-	}
+-	defer r.Body.Close()
+-	if r.StatusCode != 200 {
+-		return OAuthError{"updateToken", r.Status}
+-	}
+-	var b struct {
+-		Access    string        `json:"access_token"`
+-		Refresh   string        `json:"refresh_token"`
+-		ExpiresIn time.Duration `json:"expires_in"`
+-		Id        string        `json:"id_token"`
+-	}
+-
+-	body, err := ioutil.ReadAll(io.LimitReader(r.Body, 1<<20))
+-	if err != nil {
+-		return err
+-	}
+-
+-	content, _, _ := mime.ParseMediaType(r.Header.Get("Content-Type"))
+-	switch content {
+-	case "application/x-www-form-urlencoded", "text/plain":
+-		vals, err := url.ParseQuery(string(body))
+-		if err != nil {
+-			return err
+-		}
+-
+-		b.Access = vals.Get("access_token")
+-		b.Refresh = vals.Get("refresh_token")
+-		b.ExpiresIn, _ = time.ParseDuration(vals.Get("expires_in") + "s")
+-		b.Id = vals.Get("id_token")
+-	default:
+-		if err = json.Unmarshal(body, &b); err != nil {
+-			return fmt.Errorf("got bad response from server: %q", body)
+-		}
+-		// The JSON parser treats the unitless ExpiresIn like 'ns' instead of 's' as above,
+-		// so compensate here.
+-		b.ExpiresIn *= time.Second
+-	}
+-	if b.Access == "" {
+-		return errors.New("received empty access token from authorization server")
+-	}
+-	tok.AccessToken = b.Access
+-	// Don't overwrite `RefreshToken` with an empty value
+-	if len(b.Refresh) > 0 {
+-		tok.RefreshToken = b.Refresh
+-	}
+-	if b.ExpiresIn == 0 {
+-		tok.Expiry = time.Time{}
+-	} else {
+-		tok.Expiry = time.Now().Add(b.ExpiresIn)
+-	}
+-	if b.Id != "" {
+-		if tok.Extra == nil {
+-			tok.Extra = make(map[string]string)
+-		}
+-		tok.Extra["id_token"] = b.Id
+-	}
+-	return nil
+-}
+diff --git a/Godeps/_workspace/src/code.google.com/p/goauth2/oauth/oauth_test.go b/Godeps/_workspace/src/code.google.com/p/goauth2/oauth/oauth_test.go
+deleted file mode 100644
+index b903c16..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/goauth2/oauth/oauth_test.go
++++ /dev/null
+@@ -1,214 +0,0 @@
+-// Copyright 2011 The goauth2 Authors. All rights reserved.
+-// Use of this source code is governed by a BSD-style
+-// license that can be found in the LICENSE file.
+-
+-package oauth
+-
+-import (
+-	"io"
+-	"io/ioutil"
+-	"net/http"
+-	"net/http/httptest"
+-	"net/url"
+-	"os"
+-	"path/filepath"
+-	"runtime"
+-	"testing"
+-	"time"
+-)
+-
+-var requests = []struct {
+-	path, query, auth string // request
+-	contenttype, body string // response
+-}{
+-	{
+-		path:        "/token",
+-		query:       "grant_type=authorization_code&code=c0d3&client_id=cl13nt1d&client_secret=s3cr3t",
+-		contenttype: "application/json",
+-		auth:        "Basic Y2wxM250MWQ6czNjcjN0",
+-		body: `
+-			{
+-				"access_token":"token1",
+-				"refresh_token":"refreshtoken1",
+-				"id_token":"idtoken1",
+-				"expires_in":3600
+-			}
+-		`,
+-	},
+-	{path: "/secure", auth: "Bearer token1", body: "first payload"},
+-	{
+-		path:        "/token",
+-		query:       "grant_type=refresh_token&refresh_token=refreshtoken1&client_id=cl13nt1d&client_secret=s3cr3t",
+-		contenttype: "application/json",
+-		auth:        "Basic Y2wxM250MWQ6czNjcjN0",
+-		body: `
+-			{
+-				"access_token":"token2",
+-				"refresh_token":"refreshtoken2",
+-				"id_token":"idtoken2",
+-				"expires_in":3600
+-			}
+-		`,
+-	},
+-	{path: "/secure", auth: "Bearer token2", body: "second payload"},
+-	{
+-		path:        "/token",
+-		query:       "grant_type=refresh_token&refresh_token=refreshtoken2&client_id=cl13nt1d&client_secret=s3cr3t",
+-		contenttype: "application/x-www-form-urlencoded",
+-		body:        "access_token=token3&refresh_token=refreshtoken3&id_token=idtoken3&expires_in=3600",
+-		auth:        "Basic Y2wxM250MWQ6czNjcjN0",
+-	},
+-	{path: "/secure", auth: "Bearer token3", body: "third payload"},
+-	{
+-		path:        "/token",
+-		query:       "grant_type=client_credentials&client_id=cl13nt1d&client_secret=s3cr3t",
+-		contenttype: "application/json",
+-		auth:        "Basic Y2wxM250MWQ6czNjcjN0",
+-		body: `
+-			{
+-				"access_token":"token4",
+-				"expires_in":3600
+-			}
+-		`,
+-	},
+-	{path: "/secure", auth: "Bearer token4", body: "fourth payload"},
+-}
+-
+-func TestOAuth(t *testing.T) {
+-	// Set up test server.
+-	n := 0
+-	handler := func(w http.ResponseWriter, r *http.Request) {
+-		if n >= len(requests) {
+-			t.Errorf("too many requests: %d", n)
+-			return
+-		}
+-		req := requests[n]
+-		n++
+-
+-		// Check request.
+-		if g, w := r.URL.Path, req.path; g != w {
+-			t.Errorf("request[%d] got path %s, want %s", n, g, w)
+-		}
+-		want, _ := url.ParseQuery(req.query)
+-		for k := range want {
+-			if g, w := r.FormValue(k), want.Get(k); g != w {
+-				t.Errorf("query[%s] = %s, want %s", k, g, w)
+-			}
+-		}
+-		if g, w := r.Header.Get("Authorization"), req.auth; w != "" && g != w {
+-			t.Errorf("Authorization: %v, want %v", g, w)
+-		}
+-
+-		// Send response.
+-		w.Header().Set("Content-Type", req.contenttype)
+-		io.WriteString(w, req.body)
+-	}
+-	server := httptest.NewServer(http.HandlerFunc(handler))
+-	defer server.Close()
+-
+-	config := &Config{
+-		ClientId:     "cl13nt1d",
+-		ClientSecret: "s3cr3t",
+-		Scope:        "https://example.net/scope",
+-		AuthURL:      server.URL + "/auth",
+-		TokenURL:     server.URL + "/token",
+-	}
+-
+-	// TODO(adg): test AuthCodeURL
+-
+-	transport := &Transport{Config: config}
+-	_, err := transport.Exchange("c0d3")
+-	if err != nil {
+-		t.Fatalf("Exchange: %v", err)
+-	}
+-	checkToken(t, transport.Token, "token1", "refreshtoken1", "idtoken1")
+-
+-	c := transport.Client()
+-	resp, err := c.Get(server.URL + "/secure")
+-	if err != nil {
+-		t.Fatalf("Get: %v", err)
+-	}
+-	checkBody(t, resp, "first payload")
+-
+-	// test automatic refresh
+-	transport.Expiry = time.Now().Add(-time.Hour)
+-	resp, err = c.Get(server.URL + "/secure")
+-	if err != nil {
+-		t.Fatalf("Get: %v", err)
+-	}
+-	checkBody(t, resp, "second payload")
+-	checkToken(t, transport.Token, "token2", "refreshtoken2", "idtoken2")
+-
+-	// refresh one more time, but get URL-encoded token instead of JSON
+-	transport.Expiry = time.Now().Add(-time.Hour)
+-	resp, err = c.Get(server.URL + "/secure")
+-	if err != nil {
+-		t.Fatalf("Get: %v", err)
+-	}
+-	checkBody(t, resp, "third payload")
+-	checkToken(t, transport.Token, "token3", "refreshtoken3", "idtoken3")
+-
+-	transport.Token = &Token{}
+-	err = transport.AuthenticateClient()
+-	if err != nil {
+-		t.Fatalf("AuthenticateClient: %v", err)
+-	}
+-	checkToken(t, transport.Token, "token4", "", "")
+-	resp, err = c.Get(server.URL + "/secure")
+-	if err != nil {
+-		t.Fatalf("Get: %v", err)
+-	}
+-	checkBody(t, resp, "fourth payload")
+-}
+-
+-func checkToken(t *testing.T, tok *Token, access, refresh, id string) {
+-	if g, w := tok.AccessToken, access; g != w {
+-		t.Errorf("AccessToken = %q, want %q", g, w)
+-	}
+-	if g, w := tok.RefreshToken, refresh; g != w {
+-		t.Errorf("RefreshToken = %q, want %q", g, w)
+-	}
+-	if g, w := tok.Extra["id_token"], id; g != w {
+-		t.Errorf("Extra['id_token'] = %q, want %q", g, w)
+-	}
+-	exp := tok.Expiry.Sub(time.Now())
+-	if (time.Hour-time.Second) > exp || exp > time.Hour {
+-		t.Errorf("Expiry = %v, want ~1 hour", exp)
+-	}
+-}
+-
+-func checkBody(t *testing.T, r *http.Response, body string) {
+-	b, err := ioutil.ReadAll(r.Body)
+-	if err != nil {
+-		t.Errorf("reading reponse body: %v, want %q", err, body)
+-	}
+-	if g, w := string(b), body; g != w {
+-		t.Errorf("request body mismatch: got %q, want %q", g, w)
+-	}
+-}
+-
+-func TestCachePermissions(t *testing.T) {
+-	if runtime.GOOS == "windows" {
+-		// Windows doesn't support file mode bits.
+-		return
+-	}
+-
+-	td, err := ioutil.TempDir("", "oauth-test")
+-	if err != nil {
+-		t.Fatalf("ioutil.TempDir: %v", err)
+-	}
+-	defer os.RemoveAll(td)
+-	tempFile := filepath.Join(td, "cache-file")
+-
+-	cf := CacheFile(tempFile)
+-	if err := cf.PutToken(new(Token)); err != nil {
+-		t.Fatalf("PutToken: %v", err)
+-	}
+-	fi, err := os.Stat(tempFile)
+-	if err != nil {
+-		t.Fatalf("os.Stat: %v", err)
+-	}
+-	if fi.Mode()&0077 != 0 {
+-		t.Errorf("Created cache file has mode %#o, want non-accessible to group+other", fi.Mode())
+-	}
+-}
+diff --git a/Godeps/_workspace/src/code.google.com/p/google-api-go-client/compute/v1/compute-api.json b/Godeps/_workspace/src/code.google.com/p/google-api-go-client/compute/v1/compute-api.json
+deleted file mode 100644
+index ec71edb..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/google-api-go-client/compute/v1/compute-api.json
++++ /dev/null
+@@ -1,9229 +0,0 @@
+-{
+- "kind": "discovery#restDescription",
+- "etag": "\"FrPV2U6xXFUq8eRv_PO3IoAURkc/Qrs2SriggaSYO28pp7DFm8wFPbo\"",
+- "discoveryVersion": "v1",
+- "id": "compute:v1",
+- "name": "compute",
+- "version": "v1",
+- "revision": "20140625",
+- "title": "Compute Engine API",
+- "description": "API for the Google Compute Engine service.",
+- "ownerDomain": "google.com",
+- "ownerName": "Google",
+- "icons": {
+-  "x16": "http://www.google.com/images/icons/product/compute_engine-16.png",
+-  "x32": "http://www.google.com/images/icons/product/compute_engine-32.png"
+- },
+- "documentationLink": "https://developers.google.com/compute/docs/reference/latest/",
+- "protocol": "rest",
+- "baseUrl": "https://www.googleapis.com/compute/v1/projects/",
+- "basePath": "/compute/v1/projects/",
+- "rootUrl": "https://www.googleapis.com/",
+- "servicePath": "compute/v1/projects/",
+- "batchPath": "batch",
+- "parameters": {
+-  "alt": {
+-   "type": "string",
+-   "description": "Data format for the response.",
+-   "default": "json",
+-   "enum": [
+-    "json"
+-   ],
+-   "enumDescriptions": [
+-    "Responses with Content-Type of application/json"
+-   ],
+-   "location": "query"
+-  },
+-  "fields": {
+-   "type": "string",
+-   "description": "Selector specifying which fields to include in a partial response.",
+-   "location": "query"
+-  },
+-  "key": {
+-   "type": "string",
+-   "description": "API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.",
+-   "location": "query"
+-  },
+-  "oauth_token": {
+-   "type": "string",
+-   "description": "OAuth 2.0 token for the current user.",
+-   "location": "query"
+-  },
+-  "prettyPrint": {
+-   "type": "boolean",
+-   "description": "Returns response with indentations and line breaks.",
+-   "default": "true",
+-   "location": "query"
+-  },
+-  "quotaUser": {
+-   "type": "string",
+-   "description": "Available to use for quota purposes for server-side applications. Can be any arbitrary string assigned to a user, but should not exceed 40 characters. Overrides userIp if both are provided.",
+-   "location": "query"
+-  },
+-  "userIp": {
+-   "type": "string",
+-   "description": "IP address of the site where the request originates. Use this if you want to enforce per-user limits.",
+-   "location": "query"
+-  }
+- },
+- "auth": {
+-  "oauth2": {
+-   "scopes": {
+-    "https://www.googleapis.com/auth/compute": {
+-     "description": "View and manage your Google Compute Engine resources"
+-    },
+-    "https://www.googleapis.com/auth/compute.readonly": {
+-     "description": "View your Google Compute Engine resources"
+-    },
+-    "https://www.googleapis.com/auth/devstorage.full_control": {
+-     "description": "Manage your data and permissions in Google Cloud Storage"
+-    },
+-    "https://www.googleapis.com/auth/devstorage.read_only": {
+-     "description": "View your data in Google Cloud Storage"
+-    },
+-    "https://www.googleapis.com/auth/devstorage.read_write": {
+-     "description": "Manage your data in Google Cloud Storage"
+-    }
+-   }
+-  }
+- },
+- "schemas": {
+-  "AccessConfig": {
+-   "id": "AccessConfig",
+-   "type": "object",
+-   "description": "An access configuration attached to an instance's network interface.",
+-   "properties": {
+-    "kind": {
+-     "type": "string",
+-     "description": "Type of the resource.",
+-     "default": "compute#accessConfig"
+-    },
+-    "name": {
+-     "type": "string",
+-     "description": "Name of this access configuration."
+-    },
+-    "natIP": {
+-     "type": "string",
+-     "description": "An external IP address associated with this instance. Specify an unused static IP address available to the project. If not specified, the external IP will be drawn from a shared ephemeral pool."
+-    },
+-    "type": {
+-     "type": "string",
+-     "description": "Type of configuration. Must be set to \"ONE_TO_ONE_NAT\". This configures port-for-port NAT to the internet.",
+-     "default": "ONE_TO_ONE_NAT",
+-     "enum": [
+-      "ONE_TO_ONE_NAT"
+-     ],
+-     "enumDescriptions": [
+-      ""
+-     ]
+-    }
+-   }
+-  },
+-  "Address": {
+-   "id": "Address",
+-   "type": "object",
+-   "description": "A reserved address resource.",
+-   "properties": {
+-    "address": {
+-     "type": "string",
+-     "description": "The IP address represented by this resource."
+-    },
+-    "creationTimestamp": {
+-     "type": "string",
+-     "description": "Creation timestamp in RFC3339 text format (output only)."
+-    },
+-    "description": {
+-     "type": "string",
+-     "description": "An optional textual description of the resource; provided by the client when the resource is created."
+-    },
+-    "id": {
+-     "type": "string",
+-     "description": "Unique identifier for the resource; defined by the server (output only).",
+-     "format": "uint64"
+-    },
+-    "kind": {
+-     "type": "string",
+-     "description": "Type of the resource.",
+-     "default": "compute#address"
+-    },
+-    "name": {
+-     "type": "string",
+-     "description": "Name of the resource; provided by the client when the resource is created. The name must be 1-63 characters long, and comply with RFC1035.",
+-     "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-     "annotations": {
+-      "required": [
+-       "compute.addresses.insert"
+-      ]
+-     }
+-    },
+-    "region": {
+-     "type": "string",
+-     "description": "URL of the region where the regional address resides (output only). This field is not applicable to global addresses."
+-    },
+-    "selfLink": {
+-     "type": "string",
+-     "description": "Server defined URL for the resource (output only)."
+-    },
+-    "status": {
+-     "type": "string",
+-     "description": "The status of the address (output only).",
+-     "enum": [
+-      "IN_USE",
+-      "RESERVED"
+-     ],
+-     "enumDescriptions": [
+-      "",
+-      ""
+-     ]
+-    },
+-    "users": {
+-     "type": "array",
+-     "description": "The resources that are using this address resource.",
+-     "items": {
+-      "type": "string"
+-     }
+-    }
+-   }
+-  },
+-  "AddressAggregatedList": {
+-   "id": "AddressAggregatedList",
+-   "type": "object",
+-   "properties": {
+-    "id": {
+-     "type": "string",
+-     "description": "Unique identifier for the resource; defined by the server (output only)."
+-    },
+-    "items": {
+-     "type": "object",
+-     "description": "A map of scoped address lists.",
+-     "additionalProperties": {
+-      "$ref": "AddressesScopedList",
+-      "description": "Name of the scope containing this set of addresses."
+-     }
+-    },
+-    "kind": {
+-     "type": "string",
+-     "description": "Type of resource.",
+-     "default": "compute#addressAggregatedList"
+-    },
+-    "nextPageToken": {
+-     "type": "string",
+-     "description": "A token used to continue a truncated list request (output only)."
+-    },
+-    "selfLink": {
+-     "type": "string",
+-     "description": "Server defined URL for this resource (output only)."
+-    }
+-   }
+-  },
+-  "AddressList": {
+-   "id": "AddressList",
+-   "type": "object",
+-   "description": "Contains a list of address resources.",
+-   "properties": {
+-    "id": {
+-     "type": "string",
+-     "description": "Unique identifier for the resource; defined by the server (output only)."
+-    },
+-    "items": {
+-     "type": "array",
+-     "description": "The address resources.",
+-     "items": {
+-      "$ref": "Address"
+-     }
+-    },
+-    "kind": {
+-     "type": "string",
+-     "description": "Type of resource.",
+-     "default": "compute#addressList"
+-    },
+-    "nextPageToken": {
+-     "type": "string",
+-     "description": "A token used to continue a truncated list request (output only)."
+-    },
+-    "selfLink": {
+-     "type": "string",
+-     "description": "Server defined URL for the resource (output only)."
+-    }
+-   }
+-  },
+-  "AddressesScopedList": {
+-   "id": "AddressesScopedList",
+-   "type": "object",
+-   "properties": {
+-    "addresses": {
+-     "type": "array",
+-     "description": "List of addresses contained in this scope.",
+-     "items": {
+-      "$ref": "Address"
+-     }
+-    },
+-    "warning": {
+-     "type": "object",
+-     "description": "Informational warning which replaces the list of addresses when the list is empty.",
+-     "properties": {
+-      "code": {
+-       "type": "string",
+-       "description": "The warning type identifier for this warning.",
+-       "enum": [
+-        "DEPRECATED_RESOURCE_USED",
+-        "DISK_SIZE_LARGER_THAN_IMAGE_SIZE",
+-        "INJECTED_KERNELS_DEPRECATED",
+-        "NEXT_HOP_ADDRESS_NOT_ASSIGNED",
+-        "NEXT_HOP_CANNOT_IP_FORWARD",
+-        "NEXT_HOP_INSTANCE_NOT_FOUND",
+-        "NEXT_HOP_INSTANCE_NOT_ON_NETWORK",
+-        "NEXT_HOP_NOT_RUNNING",
+-        "NO_RESULTS_ON_PAGE",
+-        "REQUIRED_TOS_AGREEMENT",
+-        "RESOURCE_NOT_DELETED",
+-        "UNREACHABLE"
+-       ],
+-       "enumDescriptions": [
+-        "",
+-        "",
+-        "",
+-        "",
+-        "",
+-        "",
+-        "",
+-        "",
+-        "",
+-        "",
+-        "",
+-        ""
+-       ]
+-      },
+-      "data": {
+-       "type": "array",
+-       "description": "Metadata for this warning in 'key: value' format.",
+-       "items": {
+-        "type": "object",
+-        "properties": {
+-         "key": {
+-          "type": "string",
+-          "description": "A key for the warning data."
+-         },
+-         "value": {
+-          "type": "string",
+-          "description": "A warning data value corresponding to the key."
+-         }
+-        }
+-       }
+-      },
+-      "message": {
+-       "type": "string",
+-       "description": "Optional human-readable details for this warning."
+-      }
+-     }
+-    }
+-   }
+-  },
+-  "AttachedDisk": {
+-   "id": "AttachedDisk",
+-   "type": "object",
+-   "description": "An instance-attached disk resource.",
+-   "properties": {
+-    "autoDelete": {
+-     "type": "boolean",
+-     "description": "Whether the disk will be auto-deleted when the instance is deleted (but not when the disk is detached from the instance)."
+-    },
+-    "boot": {
+-     "type": "boolean",
+-     "description": "Indicates that this is a boot disk. VM will use the first partition of the disk for its root filesystem."
+-    },
+-    "deviceName": {
+-     "type": "string",
+-     "description": "Persistent disk only; must be unique within the instance when specified. This represents a unique device name that is reflected into the /dev/ tree of a Linux operating system running within the instance. If not specified, a default will be chosen by the system."
+-    },
+-    "index": {
+-     "type": "integer",
+-     "description": "A zero-based index to assign to this disk, where 0 is reserved for the boot disk. If not specified, the server will choose an appropriate value (output only).",
+-     "format": "int32"
+-    },
+-    "initializeParams": {
+-     "$ref": "AttachedDiskInitializeParams",
+-     "description": "Initialization parameters."
+-    },
+-    "kind": {
+-     "type": "string",
+-     "description": "Type of the resource.",
+-     "default": "compute#attachedDisk"
+-    },
+-    "licenses": {
+-     "type": "array",
+-     "description": "Public visible licenses.",
+-     "items": {
+-      "type": "string"
+-     }
+-    },
+-    "mode": {
+-     "type": "string",
+-     "description": "The mode in which to attach this disk, either \"READ_WRITE\" or \"READ_ONLY\".",
+-     "enum": [
+-      "READ_ONLY",
+-      "READ_WRITE"
+-     ],
+-     "enumDescriptions": [
+-      "",
+-      ""
+-     ]
+-    },
+-    "source": {
+-     "type": "string",
+-     "description": "Persistent disk only; the URL of the persistent disk resource."
+-    },
+-    "type": {
+-     "type": "string",
+-     "description": "Type of the disk, either \"SCRATCH\" or \"PERSISTENT\". Note that persistent disks must be created before you can specify them here.",
+-     "enum": [
+-      "PERSISTENT",
+-      "SCRATCH"
+-     ],
+-     "enumDescriptions": [
+-      "",
+-      ""
+-     ],
+-     "annotations": {
+-      "required": [
+-       "compute.instances.insert"
+-      ]
+-     }
+-    }
+-   }
+-  },
+-  "AttachedDiskInitializeParams": {
+-   "id": "AttachedDiskInitializeParams",
+-   "type": "object",
+-   "description": "Initialization parameters for the new disk (Mutually exclusive with 'source', can currently only be specified on the boot disk).",
+-   "properties": {
+-    "diskName": {
+-     "type": "string",
+-     "description": "Name of the disk (when not provided defaults to the name of the instance)."
+-    },
+-    "diskSizeGb": {
+-     "type": "string",
+-     "description": "Size of the disk in base-2 GB.",
+-     "format": "int64"
+-    },
+-    "diskType": {
+-     "type": "string",
+-     "description": "URL of the disk type resource describing which disk type to use to create the disk; provided by the client when the disk is created."
+-    },
+-    "sourceImage": {
+-     "type": "string",
+-     "description": "The source image used to create this disk."
+-    }
+-   }
+-  },
+-  "Backend": {
+-   "id": "Backend",
+-   "type": "object",
+-   "description": "Message containing information of one individual backend.",
+-   "properties": {
+-    "balancingMode": {
+-     "type": "string",
+-     "description": "The balancing mode of this backend, default is UTILIZATION.",
+-     "enum": [
+-      "RATE",
+-      "UTILIZATION"
+-     ],
+-     "enumDescriptions": [
+-      "",
+-      ""
+-     ]
+-    },
+-    "capacityScaler": {
+-     "type": "number",
+-     "description": "The multiplier (a value between 0 and 1e6) of the max capacity (CPU or RPS, depending on 'balancingMode') the group should serve up to. 0 means the group is totally drained. Default value is 1. Valid range is [0, 1e6].",
+-     "format": "float"
+-    },
+-    "description": {
+-     "type": "string",
+-     "description": "An optional textual description of the resource, which is provided by the client when the resource is created."
+-    },
+-    "group": {
+-     "type": "string",
+-     "description": "URL of a zonal Cloud Resource View resource. This resoure view defines the list of instances that serve traffic. Member virtual machine instances from each resource view must live in the same zone as the resource view itself."
+-    },
+-    "maxRate": {
+-     "type": "integer",
+-     "description": "The max RPS of the group. Can be used with either balancing mode, but required if RATE mode. For RATE mode, either maxRate or maxRatePerInstance must be set.",
+-     "format": "int32"
+-    },
+-    "maxRatePerInstance": {
+-     "type": "number",
+-     "description": "The max RPS that a single backed instance can handle. This is used to calculate the capacity of the group. Can be used in either balancing mode. For RATE mode, either maxRate or maxRatePerInstance must be set.",
+-     "format": "float"
+-    },
+-    "maxUtilization": {
+-     "type": "number",
+-     "description": "Used when 'balancingMode' is UTILIZATION. This ratio defines the CPU utilization target for the group. The default is 0.8. Valid range is [0, 1].",
+-     "format": "float"
+-    }
+-   }
+-  },
+-  "BackendService": {
+-   "id": "BackendService",
+-   "type": "object",
+-   "description": "A BackendService resource. This resource defines a group of backend VMs together with their serving capacity.",
+-   "properties": {
+-    "backends": {
+-     "type": "array",
+-     "description": "The list of backends that serve this BackendService.",
+-     "items": {
+-      "$ref": "Backend"
+-     }
+-    },
+-    "creationTimestamp": {
+-     "type": "string",
+-     "description": "Creation timestamp in RFC3339 text format (output only)."
+-    },
+-    "description": {
+-     "type": "string",
+-     "description": "An optional textual description of the resource; provided by the client when the resource is created."
+-    },
+-    "fingerprint": {
+-     "type": "string",
+-     "description": "Fingerprint of this resource. A hash of the contents stored in this object. This field is used in optimistic locking. This field will be ignored when inserting a BackendService. An up-to-date fingerprint must be provided in order to update the BackendService.",
+-     "format": "byte"
+-    },
+-    "healthChecks": {
+-     "type": "array",
+-     "description": "The list of URLs to the HttpHealthCheck resource for health checking this BackendService. Currently at most one health check can be specified, and a health check is required.",
+-     "items": {
+-      "type": "string"
+-     }
+-    },
+-    "id": {
+-     "type": "string",
+-     "description": "Unique identifier for the resource; defined by the server (output only).",
+-     "format": "uint64"
+-    },
+-    "kind": {
+-     "type": "string",
+-     "description": "Type of the resource.",
+-     "default": "compute#backendService"
+-    },
+-    "name": {
+-     "type": "string",
+-     "description": "Name of the resource; provided by the client when the resource is created. The name must be 1-63 characters long, and comply with RFC1035.",
+-     "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?"
+-    },
+-    "port": {
+-     "type": "integer",
+-     "description": "The TCP port to connect on the backend. The default value is 80.",
+-     "format": "int32"
+-    },
+-    "protocol": {
+-     "type": "string",
+-     "enum": [
+-      "HTTP"
+-     ],
+-     "enumDescriptions": [
+-      ""
+-     ]
+-    },
+-    "selfLink": {
+-     "type": "string",
+-     "description": "Server defined URL for the resource (output only)."
+-    },
+-    "timeoutSec": {
+-     "type": "integer",
+-     "description": "How many seconds to wait for the backend before considering it a failed request. Default is 30 seconds.",
+-     "format": "int32"
+-    }
+-   }
+-  },
+-  "BackendServiceGroupHealth": {
+-   "id": "BackendServiceGroupHealth",
+-   "type": "object",
+-   "properties": {
+-    "healthStatus": {
+-     "type": "array",
+-     "items": {
+-      "$ref": "HealthStatus"
+-     }
+-    },
+-    "kind": {
+-     "type": "string",
+-     "description": "Type of resource.",
+-     "default": "compute#backendServiceGroupHealth"
+-    }
+-   }
+-  },
+-  "BackendServiceList": {
+-   "id": "BackendServiceList",
+-   "type": "object",
+-   "description": "Contains a list of BackendService resources.",
+-   "properties": {
+-    "id": {
+-     "type": "string",
+-     "description": "Unique identifier for the resource; defined by the server (output only)."
+-    },
+-    "items": {
+-     "type": "array",
+-     "description": "The BackendService resources.",
+-     "items": {
+-      "$ref": "BackendService"
+-     }
+-    },
+-    "kind": {
+-     "type": "string",
+-     "description": "Type of resource.",
+-     "default": "compute#backendServiceList"
+-    },
+-    "nextPageToken": {
+-     "type": "string",
+-     "description": "A token used to continue a truncated list request (output only)."
+-    },
+-    "selfLink": {
+-     "type": "string",
+-     "description": "Server defined URL for this resource (output only)."
+-    }
+-   }
+-  },
+-  "DeprecationStatus": {
+-   "id": "DeprecationStatus",
+-   "type": "object",
+-   "description": "Deprecation status for a public resource.",
+-   "properties": {
+-    "deleted": {
+-     "type": "string",
+-     "description": "An optional RFC3339 timestamp on or after which the deprecation state of this resource will be changed to DELETED."
+-    },
+-    "deprecated": {
+-     "type": "string",
+-     "description": "An optional RFC3339 timestamp on or after which the deprecation state of this resource will be changed to DEPRECATED."
+-    },
+-    "obsolete": {
+-     "type": "string",
+-     "description": "An optional RFC3339 timestamp on or after which the deprecation state of this resource will be changed to OBSOLETE."
+-    },
+-    "replacement": {
+-     "type": "string",
+-     "description": "A URL of the suggested replacement for the deprecated resource. The deprecated resource and its replacement must be resources of the same kind."
+-    },
+-    "state": {
+-     "type": "string",
+-     "description": "The deprecation state. Can be \"DEPRECATED\", \"OBSOLETE\", or \"DELETED\". Operations which create a new resource using a \"DEPRECATED\" resource will return successfully, but with a warning indicating the deprecated resource and recommending its replacement. New uses of \"OBSOLETE\" or \"DELETED\" resources will result in an error.",
+-     "enum": [
+-      "DELETED",
+-      "DEPRECATED",
+-      "OBSOLETE"
+-     ],
+-     "enumDescriptions": [
+-      "",
+-      "",
+-      ""
+-     ]
+-    }
+-   }
+-  },
+-  "Disk": {
+-   "id": "Disk",
+-   "type": "object",
+-   "description": "A persistent disk resource.",
+-   "properties": {
+-    "creationTimestamp": {
+-     "type": "string",
+-     "description": "Creation timestamp in RFC3339 text format (output only)."
+-    },
+-    "description": {
+-     "type": "string",
+-     "description": "An optional textual description of the resource; provided by the client when the resource is created."
+-    },
+-    "id": {
+-     "type": "string",
+-     "description": "Unique identifier for the resource; defined by the server (output only).",
+-     "format": "uint64"
+-    },
+-    "kind": {
+-     "type": "string",
+-     "description": "Type of the resource.",
+-     "default": "compute#disk"
+-    },
+-    "licenses": {
+-     "type": "array",
+-     "description": "Public visible licenses.",
+-     "items": {
+-      "type": "string"
+-     }
+-    },
+-    "name": {
+-     "type": "string",
+-     "description": "Name of the resource; provided by the client when the resource is created. The name must be 1-63 characters long, and comply with RFC1035.",
+-     "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-     "annotations": {
+-      "required": [
+-       "compute.disks.insert"
+-      ]
+-     }
+-    },
+-    "options": {
+-     "type": "string",
+-     "description": "Internal use only."
+-    },
+-    "selfLink": {
+-     "type": "string",
+-     "description": "Server defined URL for the resource (output only)."
+-    },
+-    "sizeGb": {
+-     "type": "string",
+-     "description": "Size of the persistent disk, specified in GB. This parameter is optional when creating a disk from a disk image or a snapshot, otherwise it is required.",
+-     "format": "int64"
+-    },
+-    "sourceImage": {
+-     "type": "string",
+-     "description": "The source image used to create this disk. Once the source image has been deleted from the system, this field will not be set, even if an image with the same name has been re-created."
+-    },
+-    "sourceImageId": {
+-     "type": "string",
+-     "description": "The 'id' value of the image used to create this disk. This value may be used to determine whether the disk was created from the current or a previous instance of a given image."
+-    },
+-    "sourceSnapshot": {
+-     "type": "string",
+-     "description": "The source snapshot used to create this disk. Once the source snapshot has been deleted from the system, this field will be cleared, and will not be set even if a snapshot with the same name has been re-created."
+-    },
+-    "sourceSnapshotId": {
+-     "type": "string",
+-     "description": "The 'id' value of the snapshot used to create this disk. This value may be used to determine whether the disk was created from the current or a previous instance of a given disk snapshot."
+-    },
+-    "status": {
+-     "type": "string",
+-     "description": "The status of disk creation (output only).",
+-     "enum": [
+-      "CREATING",
+-      "FAILED",
+-      "READY",
+-      "RESTORING"
+-     ],
+-     "enumDescriptions": [
+-      "",
+-      "",
+-      "",
+-      ""
+-     ]
+-    },
+-    "type": {
+-     "type": "string",
+-     "description": "URL of the disk type resource describing which disk type to use to create the disk; provided by the client when the disk is created."
+-    },
+-    "zone": {
+-     "type": "string",
+-     "description": "URL of the zone where the disk resides (output only)."
+-    }
+-   }
+-  },
+-  "DiskAggregatedList": {
+-   "id": "DiskAggregatedList",
+-   "type": "object",
+-   "properties": {
+-    "id": {
+-     "type": "string",
+-     "description": "Unique identifier for the resource; defined by the server (output only)."
+-    },
+-    "items": {
+-     "type": "object",
+-     "description": "A map of scoped disk lists.",
+-     "additionalProperties": {
+-      "$ref": "DisksScopedList",
+-      "description": "Name of the scope containing this set of disks."
+-     }
+-    },
+-    "kind": {
+-     "type": "string",
+-     "description": "Type of resource.",
+-     "default": "compute#diskAggregatedList"
+-    },
+-    "nextPageToken": {
+-     "type": "string",
+-     "description": "A token used to continue a truncated list request (output only)."
+-    },
+-    "selfLink": {
+-     "type": "string",
+-     "description": "Server defined URL for this resource (output only)."
+-    }
+-   }
+-  },
+-  "DiskList": {
+-   "id": "DiskList",
+-   "type": "object",
+-   "description": "Contains a list of persistent disk resources.",
+-   "properties": {
+-    "id": {
+-     "type": "string",
+-     "description": "Unique identifier for the resource; defined by the server (output only)."
+-    },
+-    "items": {
+-     "type": "array",
+-     "description": "The persistent disk resources.",
+-     "items": {
+-      "$ref": "Disk"
+-     }
+-    },
+-    "kind": {
+-     "type": "string",
+-     "description": "Type of resource.",
+-     "default": "compute#diskList"
+-    },
+-    "nextPageToken": {
+-     "type": "string",
+-     "description": "A token used to continue a truncated list request (output only)."
+-    },
+-    "selfLink": {
+-     "type": "string",
+-     "description": "Server defined URL for this resource (output only)."
+-    }
+-   }
+-  },
+-  "DiskType": {
+-   "id": "DiskType",
+-   "type": "object",
+-   "description": "A disk type resource.",
+-   "properties": {
+-    "creationTimestamp": {
+-     "type": "string",
+-     "description": "Creation timestamp in RFC3339 text format (output only)."
+-    },
+-    "deprecated": {
+-     "$ref": "DeprecationStatus",
+-     "description": "The deprecation status associated with this disk type."
+-    },
+-    "description": {
+-     "type": "string",
+-     "description": "An optional textual description of the resource."
+-    },
+-    "id": {
+-     "type": "string",
+-     "description": "Unique identifier for the resource; defined by the server (output only).",
+-     "format": "uint64"
+-    },
+-    "kind": {
+-     "type": "string",
+-     "description": "Type of the resource.",
+-     "default": "compute#diskType"
+-    },
+-    "name": {
+-     "type": "string",
+-     "description": "Name of the resource.",
+-     "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?"
+-    },
+-    "selfLink": {
+-     "type": "string",
+-     "description": "Server defined URL for the resource (output only)."
+-    },
+-    "validDiskSize": {
+-     "type": "string",
+-     "description": "An optional textual description of the valid disk size, e.g., \"10GB-10TB\"."
+-    },
+-    "zone": {
+-     "type": "string",
+-     "description": "URL of the zone where the disk type resides (output only)."
+-    }
+-   }
+-  },
+-  "DiskTypeAggregatedList": {
+-   "id": "DiskTypeAggregatedList",
+-   "type": "object",
+-   "properties": {
+-    "id": {
+-     "type": "string",
+-     "description": "Unique identifier for the resource; defined by the server (output only)."
+-    },
+-    "items": {
+-     "type": "object",
+-     "description": "A map of scoped disk type lists.",
+-     "additionalProperties": {
+-      "$ref": "DiskTypesScopedList",
+-      "description": "Name of the scope containing this set of disk types."
+-     }
+-    },
+-    "kind": {
+-     "type": "string",
+-     "description": "Type of resource.",
+-     "default": "compute#diskTypeAggregatedList"
+-    },
+-    "nextPageToken": {
+-     "type": "string",
+-     "description": "A token used to continue a truncated list request (output only)."
+-    },
+-    "selfLink": {
+-     "type": "string",
+-     "description": "Server defined URL for this resource (output only)."
+-    }
+-   }
+-  },
+-  "DiskTypeList": {
+-   "id": "DiskTypeList",
+-   "type": "object",
+-   "description": "Contains a list of disk type resources.",
+-   "properties": {
+-    "id": {
+-     "type": "string",
+-     "description": "Unique identifier for the resource; defined by the server (output only)."
+-    },
+-    "items": {
+-     "type": "array",
+-     "description": "The disk type resources.",
+-     "items": {
+-      "$ref": "DiskType"
+-     }
+-    },
+-    "kind": {
+-     "type": "string",
+-     "description": "Type of resource.",
+-     "default": "compute#diskTypeList"
+-    },
+-    "nextPageToken": {
+-     "type": "string",
+-     "description": "A token used to continue a truncated list request (output only)."
+-    },
+-    "selfLink": {
+-     "type": "string",
+-     "description": "Server defined URL for this resource (output only)."
+-    }
+-   }
+-  },
+-  "DiskTypesScopedList": {
+-   "id": "DiskTypesScopedList",
+-   "type": "object",
+-   "properties": {
+-    "diskTypes": {
+-     "type": "array",
+-     "description": "List of disk types contained in this scope.",
+-     "items": {
+-      "$ref": "DiskType"
+-     }
+-    },
+-    "warning": {
+-     "type": "object",
+-     "description": "Informational warning which replaces the list of disk types when the list is empty.",
+-     "properties": {
+-      "code": {
+-       "type": "string",
+-       "description": "The warning type identifier for this warning.",
+-       "enum": [
+-        "DEPRECATED_RESOURCE_USED",
+-        "DISK_SIZE_LARGER_THAN_IMAGE_SIZE",
+-        "INJECTED_KERNELS_DEPRECATED",
+-        "NEXT_HOP_ADDRESS_NOT_ASSIGNED",
+-        "NEXT_HOP_CANNOT_IP_FORWARD",
+-        "NEXT_HOP_INSTANCE_NOT_FOUND",
+-        "NEXT_HOP_INSTANCE_NOT_ON_NETWORK",
+-        "NEXT_HOP_NOT_RUNNING",
+-        "NO_RESULTS_ON_PAGE",
+-        "REQUIRED_TOS_AGREEMENT",
+-        "RESOURCE_NOT_DELETED",
+-        "UNREACHABLE"
+-       ],
+-       "enumDescriptions": [
+-        "",
+-        "",
+-        "",
+-        "",
+-        "",
+-        "",
+-        "",
+-        "",
+-        "",
+-        "",
+-        "",
+-        ""
+-       ]
+-      },
+-      "data": {
+-       "type": "array",
+-       "description": "Metadata for this warning in 'key: value' format.",
+-       "items": {
+-        "type": "object",
+-        "properties": {
+-         "key": {
+-          "type": "string",
+-          "description": "A key for the warning data."
+-         },
+-         "value": {
+-          "type": "string",
+-          "description": "A warning data value corresponding to the key."
+-         }
+-        }
+-       }
+-      },
+-      "message": {
+-       "type": "string",
+-       "description": "Optional human-readable details for this warning."
+-      }
+-     }
+-    }
+-   }
+-  },
+-  "DisksScopedList": {
+-   "id": "DisksScopedList",
+-   "type": "object",
+-   "properties": {
+-    "disks": {
+-     "type": "array",
+-     "description": "List of disks contained in this scope.",
+-     "items": {
+-      "$ref": "Disk"
+-     }
+-    },
+-    "warning": {
+-     "type": "object",
+-     "description": "Informational warning which replaces the list of disks when the list is empty.",
+-     "properties": {
+-      "code": {
+-       "type": "string",
+-       "description": "The warning type identifier for this warning.",
+-       "enum": [
+-        "DEPRECATED_RESOURCE_USED",
+-        "DISK_SIZE_LARGER_THAN_IMAGE_SIZE",
+-        "INJECTED_KERNELS_DEPRECATED",
+-        "NEXT_HOP_ADDRESS_NOT_ASSIGNED",
+-        "NEXT_HOP_CANNOT_IP_FORWARD",
+-        "NEXT_HOP_INSTANCE_NOT_FOUND",
+-        "NEXT_HOP_INSTANCE_NOT_ON_NETWORK",
+-        "NEXT_HOP_NOT_RUNNING",
+-        "NO_RESULTS_ON_PAGE",
+-        "REQUIRED_TOS_AGREEMENT",
+-        "RESOURCE_NOT_DELETED",
+-        "UNREACHABLE"
+-       ],
+-       "enumDescriptions": [
+-        "",
+-        "",
+-        "",
+-        "",
+-        "",
+-        "",
+-        "",
+-        "",
+-        "",
+-        "",
+-        "",
+-        ""
+-       ]
+-      },
+-      "data": {
+-       "type": "array",
+-       "description": "Metadata for this warning in 'key: value' format.",
+-       "items": {
+-        "type": "object",
+-        "properties": {
+-         "key": {
+-          "type": "string",
+-          "description": "A key for the warning data."
+-         },
+-         "value": {
+-          "type": "string",
+-          "description": "A warning data value corresponding to the key."
+-         }
+-        }
+-       }
+-      },
+-      "message": {
+-       "type": "string",
+-       "description": "Optional human-readable details for this warning."
+-      }
+-     }
+-    }
+-   }
+-  },
+-  "Firewall": {
+-   "id": "Firewall",
+-   "type": "object",
+-   "description": "A firewall resource.",
+-   "properties": {
+-    "allowed": {
+-     "type": "array",
+-     "description": "The list of rules specified by this firewall. Each rule specifies a protocol and port-range tuple that describes a permitted connection.",
+-     "items": {
+-      "type": "object",
+-      "properties": {
+-       "IPProtocol": {
+-        "type": "string",
+-        "description": "Required; this is the IP protocol that is allowed for this rule. This can either be one of the following well known protocol strings [\"tcp\", \"udp\", \"icmp\", \"esp\", \"ah\", \"sctp\"], or the IP protocol number."
+-       },
+-       "ports": {
+-        "type": "array",
+-        "description": "An optional list of ports which are allowed. It is an error to specify this for any protocol that isn't UDP or TCP. Each entry must be either an integer or a range. If not specified, connections through any port are allowed.\n\nExample inputs include: [\"22\"], [\"80\",\"443\"] and [\"12345-12349\"].",
+-        "items": {
+-         "type": "string"
+-        }
+-       }
+-      }
+-     }
+-    },
+-    "creationTimestamp": {
+-     "type": "string",
+-     "description": "Creation timestamp in RFC3339 text format (output only)."
+-    },
+-    "description": {
+-     "type": "string",
+-     "description": "An optional textual description of the resource; provided by the client when the resource is created."
+-    },
+-    "id": {
+-     "type": "string",
+-     "description": "Unique identifier for the resource; defined by the server (output only).",
+-     "format": "uint64"
+-    },
+-    "kind": {
+-     "type": "string",
+-     "description": "Type of the resource.",
+-     "default": "compute#firewall"
+-    },
+-    "name": {
+-     "type": "string",
+-     "description": "Name of the resource; provided by the client when the resource is created. The name must be 1-63 characters long, and comply with RFC1035.",
+-     "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-     "annotations": {
+-      "required": [
+-       "compute.firewalls.insert",
+-       "compute.firewalls.patch"
+-      ]
+-     }
+-    },
+-    "network": {
+-     "type": "string",
+-     "description": "URL of the network to which this firewall is applied; provided by the client when the firewall is created.",
+-     "annotations": {
+-      "required": [
+-       "compute.firewalls.insert",
+-       "compute.firewalls.patch"
+-      ]
+-     }
+-    },
+-    "selfLink": {
+-     "type": "string",
+-     "description": "Server defined URL for the resource (output only)."
+-    },
+-    "sourceRanges": {
+-     "type": "array",
+-     "description": "A list of IP address blocks expressed in CIDR format which this rule applies to. One or both of sourceRanges and sourceTags may be set; an inbound connection is allowed if either the range or the tag of the source matches.",
+-     "items": {
+-      "type": "string"
+-     }
+-    },
+-    "sourceTags": {
+-     "type": "array",
+-     "description": "A list of instance tags which this rule applies to. One or both of sourceRanges and sourceTags may be set; an inbound connection is allowed if either the range or the tag of the source matches.",
+-     "items": {
+-      "type": "string"
+-     }
+-    },
+-    "targetTags": {
+-     "type": "array",
+-     "description": "A list of instance tags indicating sets of instances located on network which may make network connections as specified in allowed. If no targetTags are specified, the firewall rule applies to all instances on the specified network.",
+-     "items": {
+-      "type": "string"
+-     }
+-    }
+-   }
+-  },
+-  "FirewallList": {
+-   "id": "FirewallList",
+-   "type": "object",
+-   "description": "Contains a list of firewall resources.",
+-   "properties": {
+-    "id": {
+-     "type": "string",
+-     "description": "Unique identifier for the resource; defined by the server (output only)."
+-    },
+-    "items": {
+-     "type": "array",
+-     "description": "The firewall resources.",
+-     "items": {
+-      "$ref": "Firewall"
+-     }
+-    },
+-    "kind": {
+-     "type": "string",
+-     "description": "Type of resource.",
+-     "default": "compute#firewallList"
+-    },
+-    "nextPageToken": {
+-     "type": "string",
+-     "description": "A token used to continue a truncated list request (output only)."
+-    },
+-    "selfLink": {
+-     "type": "string",
+-     "description": "Server defined URL for this resource (output only)."
+-    }
+-   }
+-  },
+-  "ForwardingRule": {
+-   "id": "ForwardingRule",
+-   "type": "object",
+-   "description": "A ForwardingRule resource. A ForwardingRule resource specifies which pool of target VMs to forward a packet to if it matches the given [IPAddress, IPProtocol, portRange] tuple.",
+-   "properties": {
+-    "IPAddress": {
+-     "type": "string",
+-     "description": "Value of the reserved IP address that this forwarding rule is serving on behalf of. For global forwarding rules, the address must be a global IP; for regional forwarding rules, the address must live in the same region as the forwarding rule. If left empty (default value), an ephemeral IP from the same scope (global or regional) will be assigned."
+-    },
+-    "IPProtocol": {
+-     "type": "string",
+-     "description": "The IP protocol to which this rule applies, valid options are 'TCP', 'UDP', 'ESP', 'AH' or 'SCTP'",
+-     "enum": [
+-      "AH",
+-      "ESP",
+-      "SCTP",
+-      "TCP",
+-      "UDP"
+-     ],
+-     "enumDescriptions": [
+-      "",
+-      "",
+-      "",
+-      "",
+-      ""
+-     ]
+-    },
+-    "creationTimestamp": {
+-     "type": "string",
+-     "description": "Creation timestamp in RFC3339 text format (output only)."
+-    },
+-    "description": {
+-     "type": "string",
+-     "description": "An optional textual description of the resource; provided by the client when the resource is created."
+-    },
+-    "id": {
+-     "type": "string",
+-     "description": "Unique identifier for the resource; defined by the server (output only).",
+-     "format": "uint64"
+-    },
+-    "kind": {
+-     "type": "string",
+-     "description": "Type of the resource.",
+-     "default": "compute#forwardingRule"
+-    },
+-    "name": {
+-     "type": "string",
+-     "description": "Name of the resource; provided by the client when the resource is created. The name must be 1-63 characters long, and comply with RFC1035.",
+-     "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?"
+-    },
+-    "portRange": {
+-     "type": "string",
+-     "description": "Applicable only when 'IPProtocol' is 'TCP', 'UDP' or 'SCTP', only packets addressed to ports in the specified range will be forwarded to 'target'. If 'portRange' is left empty (default value), all ports are forwarded. Forwarding rules with the same [IPAddress, IPProtocol] pair must have disjoint port ranges. @pattern: \\d+(?:-\\d+)?"
+-    },
+-    "region": {
+-     "type": "string",
+-     "description": "URL of the region where the regional forwarding rule resides (output only). This field is not applicable to global forwarding rules."
+-    },
+-    "selfLink": {
+-     "type": "string",
+-     "description": "Server defined URL for the resource (output only)."
+-    },
+-    "target": {
+-     "type": "string",
+-     "description": "The URL of the target resource to receive the matched traffic. For regional forwarding rules, this target must live in the same region as the forwarding rule. For global forwarding rules, this target must be a global TargetHttpProxy resource."
+-    }
+-   }
+-  },
+-  "ForwardingRuleAggregatedList": {
+-   "id": "ForwardingRuleAggregatedList",
+-   "type": "object",
+-   "properties": {
+-    "id": {
+-     "type": "string",
+-     "description": "Unique identifier for the resource; defined by the server (output only)."
+-    },
+-    "items": {
+-     "type": "object",
+-     "description": "A map of scoped forwarding rule lists.",
+-     "additionalProperties": {
+-      "$ref": "ForwardingRulesScopedList",
+-      "description": "Name of the scope containing this set of addresses."
+-     }
+-    },
+-    "kind": {
+-     "type": "string",
+-     "description": "Type of resource.",
+-     "default": "compute#forwardingRuleAggregatedList"
+-    },
+-    "nextPageToken": {
+-     "type": "string",
+-     "description": "A token used to continue a truncated list request (output only)."
+-    },
+-    "selfLink": {
+-     "type": "string",
+-     "description": "Server defined URL for this resource (output only)."
+-    }
+-   }
+-  },
+-  "ForwardingRuleList": {
+-   "id": "ForwardingRuleList",
+-   "type": "object",
+-   "description": "Contains a list of ForwardingRule resources.",
+-   "properties": {
+-    "id": {
+-     "type": "string",
+-     "description": "Unique identifier for the resource; defined by the server (output only)."
+-    },
+-    "items": {
+-     "type": "array",
+-     "description": "The ForwardingRule resources.",
+-     "items": {
+-      "$ref": "ForwardingRule"
+-     }
+-    },
+-    "kind": {
+-     "type": "string",
+-     "description": "Type of resource.",
+-     "default": "compute#forwardingRuleList"
+-    },
+-    "nextPageToken": {
+-     "type": "string",
+-     "description": "A token used to continue a truncated list request (output only)."
+-    },
+-    "selfLink": {
+-     "type": "string",
+-     "description": "Server defined URL for this resource (output only)."
+-    }
+-   }
+-  },
+-  "ForwardingRulesScopedList": {
+-   "id": "ForwardingRulesScopedList",
+-   "type": "object",
+-   "properties": {
+-    "forwardingRules": {
+-     "type": "array",
+-     "description": "List of forwarding rules contained in this scope.",
+-     "items": {
+-      "$ref": "ForwardingRule"
+-     }
+-    },
+-    "warning": {
+-     "type": "object",
+-     "description": "Informational warning which replaces the list of forwarding rules when the list is empty.",
+-     "properties": {
+-      "code": {
+-       "type": "string",
+-       "description": "The warning type identifier for this warning.",
+-       "enum": [
+-        "DEPRECATED_RESOURCE_USED",
+-        "DISK_SIZE_LARGER_THAN_IMAGE_SIZE",
+-        "INJECTED_KERNELS_DEPRECATED",
+-        "NEXT_HOP_ADDRESS_NOT_ASSIGNED",
+-        "NEXT_HOP_CANNOT_IP_FORWARD",
+-        "NEXT_HOP_INSTANCE_NOT_FOUND",
+-        "NEXT_HOP_INSTANCE_NOT_ON_NETWORK",
+-        "NEXT_HOP_NOT_RUNNING",
+-        "NO_RESULTS_ON_PAGE",
+-        "REQUIRED_TOS_AGREEMENT",
+-        "RESOURCE_NOT_DELETED",
+-        "UNREACHABLE"
+-       ],
+-       "enumDescriptions": [
+-        "",
+-        "",
+-        "",
+-        "",
+-        "",
+-        "",
+-        "",
+-        "",
+-        "",
+-        "",
+-        "",
+-        ""
+-       ]
+-      },
+-      "data": {
+-       "type": "array",
+-       "description": "Metadata for this warning in 'key: value' format.",
+-       "items": {
+-        "type": "object",
+-        "properties": {
+-         "key": {
+-          "type": "string",
+-          "description": "A key for the warning data."
+-         },
+-         "value": {
+-          "type": "string",
+-          "description": "A warning data value corresponding to the key."
+-         }
+-        }
+-       }
+-      },
+-      "message": {
+-       "type": "string",
+-       "description": "Optional human-readable details for this warning."
+-      }
+-     }
+-    }
+-   }
+-  },
+-  "HealthCheckReference": {
+-   "id": "HealthCheckReference",
+-   "type": "object",
+-   "properties": {
+-    "healthCheck": {
+-     "type": "string"
+-    }
+-   }
+-  },
+-  "HealthStatus": {
+-   "id": "HealthStatus",
+-   "type": "object",
+-   "properties": {
+-    "healthState": {
+-     "type": "string",
+-     "description": "Health state of the instance.",
+-     "enum": [
+-      "HEALTHY",
+-      "UNHEALTHY"
+-     ],
+-     "enumDescriptions": [
+-      "",
+-      ""
+-     ]
+-    },
+-    "instance": {
+-     "type": "string",
+-     "description": "URL of the instance resource."
+-    },
+-    "ipAddress": {
+-     "type": "string",
+-     "description": "The IP address represented by this resource."
+-    }
+-   }
+-  },
+-  "HostRule": {
+-   "id": "HostRule",
+-   "type": "object",
+-   "description": "A host-matching rule for a URL. If matched, will use the named PathMatcher to select the BackendService.",
+-   "properties": {
+-    "description": {
+-     "type": "string"
+-    },
+-    "hosts": {
+-     "type": "array",
+-     "description": "The list of host patterns to match. They must be FQDN except that it may start with '*.' or '*-'. The '*' acts like a glob and will match any string of atoms (separated by '.'s and '-'s) to the left.",
+-     "items": {
+-      "type": "string"
+-     }
+-    },
+-    "pathMatcher": {
+-     "type": "string",
+-     "description": "The name of the PathMatcher to match the path portion of the URL, if this HostRule matches the URL's host portion."
+-    }
+-   }
+-  },
+-  "HttpHealthCheck": {
+-   "id": "HttpHealthCheck",
+-   "type": "object",
+-   "description": "An HttpHealthCheck resource. This resource defines a template for how individual VMs should be checked for health, via HTTP.",
+-   "properties": {
+-    "checkIntervalSec": {
+-     "type": "integer",
+-     "description": "How often (in seconds) to send a health check. The default value is 5 seconds.",
+-     "format": "int32"
+-    },
+-    "creationTimestamp": {
+-     "type": "string",
+-     "description": "Creation timestamp in RFC3339 text format (output only)."
+-    },
+-    "description": {
+-     "type": "string",
+-     "description": "An optional textual description of the resource; provided by the client when the resource is created."
+-    },
+-    "healthyThreshold": {
+-     "type": "integer",
+-     "description": "A so-far unhealthy VM will be marked healthy after this many consecutive successes. The default value is 2.",
+-     "format": "int32"
+-    },
+-    "host": {
+-     "type": "string",
+-     "description": "The value of the host header in the HTTP health check request. If left empty (default value), the public IP on behalf of which this health check is performed will be used."
+-    },
+-    "id": {
+-     "type": "string",
+-     "description": "Unique identifier for the resource; defined by the server (output only).",
+-     "format": "uint64"
+-    },
+-    "kind": {
+-     "type": "string",
+-     "description": "Type of the resource.",
+-     "default": "compute#httpHealthCheck"
+-    },
+-    "name": {
+-     "type": "string",
+-     "description": "Name of the resource; provided by the client when the resource is created. The name must be 1-63 characters long, and comply with RFC1035.",
+-     "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?"
+-    },
+-    "port": {
+-     "type": "integer",
+-     "description": "The TCP port number for the HTTP health check request. The default value is 80.",
+-     "format": "int32"
+-    },
+-    "requestPath": {
+-     "type": "string",
+-     "description": "The request path of the HTTP health check request. The default value is \"/\"."
+-    },
+-    "selfLink": {
+-     "type": "string",
+-     "description": "Server defined URL for the resource (output only)."
+-    },
+-    "timeoutSec": {
+-     "type": "integer",
+-     "description": "How long (in seconds) to wait before claiming failure. The default value is 5 seconds.",
+-     "format": "int32"
+-    },
+-    "unhealthyThreshold": {
+-     "type": "integer",
+-     "description": "A so-far healthy VM will be marked unhealthy after this many consecutive failures. The default value is 2.",
+-     "format": "int32"
+-    }
+-   }
+-  },
+-  "HttpHealthCheckList": {
+-   "id": "HttpHealthCheckList",
+-   "type": "object",
+-   "description": "Contains a list of HttpHealthCheck resources.",
+-   "properties": {
+-    "id": {
+-     "type": "string",
+-     "description": "Unique identifier for the resource; defined by the server (output only)."
+-    },
+-    "items": {
+-     "type": "array",
+-     "description": "The HttpHealthCheck resources.",
+-     "items": {
+-      "$ref": "HttpHealthCheck"
+-     }
+-    },
+-    "kind": {
+-     "type": "string",
+-     "description": "Type of resource.",
+-     "default": "compute#httpHealthCheckList"
+-    },
+-    "nextPageToken": {
+-     "type": "string",
+-     "description": "A token used to continue a truncated list request (output only)."
+-    },
+-    "selfLink": {
+-     "type": "string",
+-     "description": "Server defined URL for this resource (output only)."
+-    }
+-   }
+-  },
+-  "Image": {
+-   "id": "Image",
+-   "type": "object",
+-   "description": "A disk image resource.",
+-   "properties": {
+-    "archiveSizeBytes": {
+-     "type": "string",
+-     "description": "Size of the image tar.gz archive stored in Google Cloud Storage (in bytes).",
+-     "format": "int64"
+-    },
+-    "creationTimestamp": {
+-     "type": "string",
+-     "description": "Creation timestamp in RFC3339 text format (output only)."
+-    },
+-    "deprecated": {
+-     "$ref": "DeprecationStatus",
+-     "description": "The deprecation status associated with this image."
+-    },
+-    "description": {
+-     "type": "string",
+-     "description": "Textual description of the resource; provided by the client when the resource is created."
+-    },
+-    "diskSizeGb": {
+-     "type": "string",
+-     "description": "Size of the image when restored onto a disk (in GiB).",
+-     "format": "int64"
+-    },
+-    "id": {
+-     "type": "string",
+-     "description": "Unique identifier for the resource; defined by the server (output only).",
+-     "format": "uint64"
+-    },
+-    "kind": {
+-     "type": "string",
+-     "description": "Type of the resource.",
+-     "default": "compute#image"
+-    },
+-    "licenses": {
+-     "type": "array",
+-     "description": "Public visible licenses.",
+-     "items": {
+-      "type": "string"
+-     }
+-    },
+-    "name": {
+-     "type": "string",
+-     "description": "Name of the resource; provided by the client when the resource is created. The name must be 1-63 characters long, and comply with RFC1035.",
+-     "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-     "annotations": {
+-      "required": [
+-       "compute.images.insert"
+-      ]
+-     }
+-    },
+-    "rawDisk": {
+-     "type": "object",
+-     "description": "The raw disk image parameters.",
+-     "properties": {
+-      "containerType": {
+-       "type": "string",
+-       "description": "The format used to encode and transmit the block device. Should be TAR. This is just a container and transmission format and not a runtime format. Provided by the client when the disk image is created.",
+-       "enum": [
+-        "TAR"
+-       ],
+-       "enumDescriptions": [
+-        ""
+-       ]
+-      },
+-      "sha1Checksum": {
+-       "type": "string",
+-       "description": "An optional SHA1 checksum of the disk image before unpackaging; provided by the client when the disk image is created.",
+-       "pattern": "[a-f0-9]{40}"
+-      },
+-      "source": {
+-       "type": "string",
+-       "description": "The full Google Cloud Storage URL where the disk image is stored; provided by the client when the disk image is created.",
+-       "annotations": {
+-        "required": [
+-         "compute.images.insert"
+-        ]
+-       }
+-      }
+-     }
+-    },
+-    "selfLink": {
+-     "type": "string",
+-     "description": "Server defined URL for the resource (output only)."
+-    },
+-    "sourceDisk": {
+-     "type": "string",
+-     "description": "The source disk used to create this image. Once the source disk has been deleted from the system, this field will be cleared, and will not be set even if a disk with the same name has been re-created."
+-    },
+-    "sourceDiskId": {
+-     "type": "string",
+-     "description": "The 'id' value of the disk used to create this image. This value may be used to determine whether the image was taken from the current or a previous instance of a given disk name."
+-    },
+-    "sourceType": {
+-     "type": "string",
+-     "description": "Must be \"RAW\"; provided by the client when the disk image is created.",
+-     "default": "RAW",
+-     "enum": [
+-      "RAW"
+-     ],
+-     "enumDescriptions": [
+-      ""
+-     ]
+-    },
+-    "status": {
+-     "type": "string",
+-     "description": "Status of the image (output only). It will be one of the following READY - after image has been successfully created and is ready for use FAILED - if creating the image fails for some reason PENDING - the image creation is in progress An image can be used to create other resources suck as instances only after the image has been successfully created and the status is set to READY.",
+-     "enum": [
+-      "FAILED",
+-      "PENDING",
+-      "READY"
+-     ],
+-     "enumDescriptions": [
+-      "",
+-      "",
+-      ""
+-     ]
+-    }
+-   }
+-  },
+-  "ImageList": {
+-   "id": "ImageList",
+-   "type": "object",
+-   "description": "Contains a list of disk image resources.",
+-   "properties": {
+-    "id": {
+-     "type": "string",
+-     "description": "Unique identifier for the resource; defined by the server (output only)."
+-    },
+-    "items": {
+-     "type": "array",
+-     "description": "The disk image resources.",
+-     "items": {
+-      "$ref": "Image"
+-     }
+-    },
+-    "kind": {
+-     "type": "string",
+-     "description": "Type of resource.",
+-     "default": "compute#imageList"
+-    },
+-    "nextPageToken": {
+-     "type": "string",
+-     "description": "A token used to continue a truncated list request (output only)."
+-    },
+-    "selfLink": {
+-     "type": "string",
+-     "description": "Server defined URL for this resource (output only)."
+-    }
+-   }
+-  },
+-  "Instance": {
+-   "id": "Instance",
+-   "type": "object",
+-   "description": "An instance resource.",
+-   "properties": {
+-    "canIpForward": {
+-     "type": "boolean",
+-     "description": "Allows this instance to send packets with source IP addresses other than its own and receive packets with destination IP addresses other than its own. If this instance will be used as an IP gateway or it will be set as the next-hop in a Route resource, say true. If unsure, leave this set to false."
+-    },
+-    "creationTimestamp": {
+-     "type": "string",
+-     "description": "Creation timestamp in RFC3339 text format (output only)."
+-    },
+-    "description": {
+-     "type": "string",
+-     "description": "An optional textual description of the resource; provided by the client when the resource is created."
+-    },
+-    "disks": {
+-     "type": "array",
+-     "description": "Array of disks associated with this instance. Persistent disks must be created before you can assign them.",
+-     "items": {
+-      "$ref": "AttachedDisk"
+-     }
+-    },
+-    "id": {
+-     "type": "string",
+-     "description": "Unique identifier for the resource; defined by the server (output only).",
+-     "format": "uint64"
+-    },
+-    "kind": {
+-     "type": "string",
+-     "description": "Type of the resource.",
+-     "default": "compute#instance"
+-    },
+-    "machineType": {
+-     "type": "string",
+-     "description": "URL of the machine type resource describing which machine type to use to host the instance; provided by the client when the instance is created.",
+-     "annotations": {
+-      "required": [
+-       "compute.instances.insert"
+-      ]
+-     }
+-    },
+-    "metadata": {
+-     "$ref": "Metadata",
+-     "description": "Metadata key/value pairs assigned to this instance. Consists of custom metadata or predefined keys; see Instance documentation for more information."
+-    },
+-    "name": {
+-     "type": "string",
+-     "description": "Name of the resource; provided by the client when the resource is created. The name must be 1-63 characters long, and comply with RFC1035.",
+-     "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-     "annotations": {
+-      "required": [
+-       "compute.instances.insert"
+-      ]
+-     }
+-    },
+-    "networkInterfaces": {
+-     "type": "array",
+-     "description": "Array of configurations for this interface. This specifies how this interface is configured to interact with other network services, such as connecting to the internet. Currently, ONE_TO_ONE_NAT is the only access config supported. If there are no accessConfigs specified, then this instance will have no external internet access.",
+-     "items": {
+-      "$ref": "NetworkInterface"
+-     }
+-    },
+-    "scheduling": {
+-     "$ref": "Scheduling",
+-     "description": "Scheduling options for this instance."
+-    },
+-    "selfLink": {
+-     "type": "string",
+-     "description": "Server defined URL for this resource (output only)."
+-    },
+-    "serviceAccounts": {
+-     "type": "array",
+-     "description": "A list of service accounts each with specified scopes, for which access tokens are to be made available to the instance through metadata queries.",
+-     "items": {
+-      "$ref": "ServiceAccount"
+-     }
+-    },
+-    "status": {
+-     "type": "string",
+-     "description": "Instance status. One of the following values: \"PROVISIONING\", \"STAGING\", \"RUNNING\", \"STOPPING\", \"STOPPED\", \"TERMINATED\" (output only).",
+-     "enum": [
+-      "PROVISIONING",
+-      "RUNNING",
+-      "STAGING",
+-      "STOPPED",
+-      "STOPPING",
+-      "TERMINATED"
+-     ],
+-     "enumDescriptions": [
+-      "",
+-      "",
+-      "",
+-      "",
+-      "",
+-      ""
+-     ]
+-    },
+-    "statusMessage": {
+-     "type": "string",
+-     "description": "An optional, human-readable explanation of the status (output only)."
+-    },
+-    "tags": {
+-     "$ref": "Tags",
+-     "description": "A list of tags to be applied to this instance. Used to identify valid sources or targets for network firewalls. Provided by the client on instance creation. The tags can be later modified by the setTags method. Each tag within the list must comply with RFC1035."
+-    },
+-    "zone": {
+-     "type": "string",
+-     "description": "URL of the zone where the instance resides (output only)."
+-    }
+-   }
+-  },
+-  "InstanceAggregatedList": {
+-   "id": "InstanceAggregatedList",
+-   "type": "object",
+-   "properties": {
+-    "id": {
+-     "type": "string",
+-     "description": "Unique identifier for the resource; defined by the server (output only)."
+-    },
+-    "items": {
+-     "type": "object",
+-     "description": "A map of scoped instance lists.",
+-     "additionalProperties": {
+-      "$ref": "InstancesScopedList",
+-      "description": "Name of the scope containing this set of instances."
+-     }
+-    },
+-    "kind": {
+-     "type": "string",
+-     "description": "Type of resource.",
+-     "default": "compute#instanceAggregatedList"
+-    },
+-    "nextPageToken": {
+-     "type": "string",
+-     "description": "A token used to continue a truncated list request (output only)."
+-    },
+-    "selfLink": {
+-     "type": "string",
+-     "description": "Server defined URL for this resource (output only)."
+-    }
+-   }
+-  },
+-  "InstanceList": {
+-   "id": "InstanceList",
+-   "type": "object",
+-   "description": "Contains a list of instance resources.",
+-   "properties": {
+-    "id": {
+-     "type": "string",
+-     "description": "Unique identifier for the resource; defined by the server (output only)."
+-    },
+-    "items": {
+-     "type": "array",
+-     "description": "A list of instance resources.",
+-     "items": {
+-      "$ref": "Instance"
+-     }
+-    },
+-    "kind": {
+-     "type": "string",
+-     "description": "Type of resource.",
+-     "default": "compute#instanceList"
+-    },
+-    "nextPageToken": {
+-     "type": "string",
+-     "description": "A token used to continue a truncated list request (output only)."
+-    },
+-    "selfLink": {
+-     "type": "string",
+-     "description": "Server defined URL for this resource (output only)."
+-    }
+-   }
+-  },
+-  "InstanceReference": {
+-   "id": "InstanceReference",
+-   "type": "object",
+-   "properties": {
+-    "instance": {
+-     "type": "string"
+-    }
+-   }
+-  },
+-  "InstancesScopedList": {
+-   "id": "InstancesScopedList",
+-   "type": "object",
+-   "properties": {
+-    "instances": {
+-     "type": "array",
+-     "description": "List of instances contained in this scope.",
+-     "items": {
+-      "$ref": "Instance"
+-     }
+-    },
+-    "warning": {
+-     "type": "object",
+-     "description": "Informational warning which replaces the list of instances when the list is empty.",
+-     "properties": {
+-      "code": {
+-       "type": "string",
+-       "description": "The warning type identifier for this warning.",
+-       "enum": [
+-        "DEPRECATED_RESOURCE_USED",
+-        "DISK_SIZE_LARGER_THAN_IMAGE_SIZE",
+-        "INJECTED_KERNELS_DEPRECATED",
+-        "NEXT_HOP_ADDRESS_NOT_ASSIGNED",
+-        "NEXT_HOP_CANNOT_IP_FORWARD",
+-        "NEXT_HOP_INSTANCE_NOT_FOUND",
+-        "NEXT_HOP_INSTANCE_NOT_ON_NETWORK",
+-        "NEXT_HOP_NOT_RUNNING",
+-        "NO_RESULTS_ON_PAGE",
+-        "REQUIRED_TOS_AGREEMENT",
+-        "RESOURCE_NOT_DELETED",
+-        "UNREACHABLE"
+-       ],
+-       "enumDescriptions": [
+-        "",
+-        "",
+-        "",
+-        "",
+-        "",
+-        "",
+-        "",
+-        "",
+-        "",
+-        "",
+-        "",
+-        ""
+-       ]
+-      },
+-      "data": {
+-       "type": "array",
+-       "description": "Metadata for this warning in 'key: value' format.",
+-       "items": {
+-        "type": "object",
+-        "properties": {
+-         "key": {
+-          "type": "string",
+-          "description": "A key for the warning data."
+-         },
+-         "value": {
+-          "type": "string",
+-          "description": "A warning data value corresponding to the key."
+-         }
+-        }
+-       }
+-      },
+-      "message": {
+-       "type": "string",
+-       "description": "Optional human-readable details for this warning."
+-      }
+-     }
+-    }
+-   }
+-  },
+-  "License": {
+-   "id": "License",
+-   "type": "object",
+-   "description": "A license resource.",
+-   "properties": {
+-    "kind": {
+-     "type": "string",
+-     "description": "Identifies what kind of resource this is. Value: the fixed string \"compute#license\".",
+-     "default": "compute#license"
+-    },
+-    "name": {
+-     "type": "string",
+-     "description": "Name of the resource; provided by the client when the resource is created. The name must be 1-63 characters long, and comply with RFC1035.",
+-     "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-     "annotations": {
+-      "required": [
+-       "compute.images.insert"
+-      ]
+-     }
+-    },
+-    "selfLink": {
+-     "type": "string",
+-     "description": "Server defined URL for the resource (output only)."
+-    }
+-   }
+-  },
+-  "MachineType": {
+-   "id": "MachineType",
+-   "type": "object",
+-   "description": "A machine type resource.",
+-   "properties": {
+-    "creationTimestamp": {
+-     "type": "string",
+-     "description": "Creation timestamp in RFC3339 text format (output only)."
+-    },
+-    "deprecated": {
+-     "$ref": "DeprecationStatus",
+-     "description": "The deprecation status associated with this machine type."
+-    },
+-    "description": {
+-     "type": "string",
+-     "description": "An optional textual description of the resource."
+-    },
+-    "guestCpus": {
+-     "type": "integer",
+-     "description": "Count of CPUs exposed to the instance.",
+-     "format": "int32"
+-    },
+-    "id": {
+-     "type": "string",
+-     "description": "Unique identifier for the resource; defined by the server (output only).",
+-     "format": "uint64"
+-    },
+-    "imageSpaceGb": {
+-     "type": "integer",
+-     "description": "Space allotted for the image, defined in GB.",
+-     "format": "int32"
+-    },
+-    "kind": {
+-     "type": "string",
+-     "description": "Type of the resource.",
+-     "default": "compute#machineType"
+-    },
+-    "maximumPersistentDisks": {
+-     "type": "integer",
+-     "description": "Maximum persistent disks allowed.",
+-     "format": "int32"
+-    },
+-    "maximumPersistentDisksSizeGb": {
+-     "type": "string",
+-     "description": "Maximum total persistent disks size (GB) allowed.",
+-     "format": "int64"
+-    },
+-    "memoryMb": {
+-     "type": "integer",
+-     "description": "Physical memory assigned to the instance, defined in MB.",
+-     "format": "int32"
+-    },
+-    "name": {
+-     "type": "string",
+-     "description": "Name of the resource.",
+-     "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?"
+-    },
+-    "scratchDisks": {
+-     "type": "array",
+-     "description": "List of extended scratch disks assigned to the instance.",
+-     "items": {
+-      "type": "object",
+-      "properties": {
+-       "diskGb": {
+-        "type": "integer",
+-        "description": "Size of the scratch disk, defined in GB.",
+-        "format": "int32"
+-       }
+-      }
+-     }
+-    },
+-    "selfLink": {
+-     "type": "string",
+-     "description": "Server defined URL for the resource (output only)."
+-    },
+-    "zone": {
+-     "type": "string",
+-     "description": "Url of the zone where the machine type resides (output only)."
+-    }
+-   }
+-  },
+-  "MachineTypeAggregatedList": {
+-   "id": "MachineTypeAggregatedList",
+-   "type": "object",
+-   "properties": {
+-    "id": {
+-     "type": "string",
+-     "description": "Unique identifier for the resource; defined by the server (output only)."
+-    },
+-    "items": {
+-     "type": "object",
+-     "description": "A map of scoped machine type lists.",
+-     "additionalProperties": {
+-      "$ref": "MachineTypesScopedList",
+-      "description": "Name of the scope containing this set of machine types."
+-     }
+-    },
+-    "kind": {
+-     "type": "string",
+-     "description": "Type of resource.",
+-     "default": "compute#machineTypeAggregatedList"
+-    },
+-    "nextPageToken": {
+-     "type": "string",
+-     "description": "A token used to continue a truncated list request (output only)."
+-    },
+-    "selfLink": {
+-     "type": "string",
+-     "description": "Server defined URL for this resource (output only)."
+-    }
+-   }
+-  },
+-  "MachineTypeList": {
+-   "id": "MachineTypeList",
+-   "type": "object",
+-   "description": "Contains a list of machine type resources.",
+-   "properties": {
+-    "id": {
+-     "type": "string",
+-     "description": "Unique identifier for the resource; defined by the server (output only)."
+-    },
+-    "items": {
+-     "type": "array",
+-     "description": "The machine type resources.",
+-     "items": {
+-      "$ref": "MachineType"
+-     }
+-    },
+-    "kind": {
+-     "type": "string",
+-     "description": "Type of resource.",
+-     "default": "compute#machineTypeList"
+-    },
+-    "nextPageToken": {
+-     "type": "string",
+-     "description": "A token used to continue a truncated list request (output only)."
+-    },
+-    "selfLink": {
+-     "type": "string",
+-     "description": "Server defined URL for this resource (output only)."
+-    }
+-   }
+-  },
+-  "MachineTypesScopedList": {
+-   "id": "MachineTypesScopedList",
+-   "type": "object",
+-   "properties": {
+-    "machineTypes": {
+-     "type": "array",
+-     "description": "List of machine types contained in this scope.",
+-     "items": {
+-      "$ref": "MachineType"
+-     }
+-    },
+-    "warning": {
+-     "type": "object",
+-     "description": "Informational warning which replaces the list of machine types when the list is empty.",
+-     "properties": {
+-      "code": {
+-       "type": "string",
+-       "description": "The warning type identifier for this warning.",
+-       "enum": [
+-        "DEPRECATED_RESOURCE_USED",
+-        "DISK_SIZE_LARGER_THAN_IMAGE_SIZE",
+-        "INJECTED_KERNELS_DEPRECATED",
+-        "NEXT_HOP_ADDRESS_NOT_ASSIGNED",
+-        "NEXT_HOP_CANNOT_IP_FORWARD",
+-        "NEXT_HOP_INSTANCE_NOT_FOUND",
+-        "NEXT_HOP_INSTANCE_NOT_ON_NETWORK",
+-        "NEXT_HOP_NOT_RUNNING",
+-        "NO_RESULTS_ON_PAGE",
+-        "REQUIRED_TOS_AGREEMENT",
+-        "RESOURCE_NOT_DELETED",
+-        "UNREACHABLE"
+-       ],
+-       "enumDescriptions": [
+-        "",
+-        "",
+-        "",
+-        "",
+-        "",
+-        "",
+-        "",
+-        "",
+-        "",
+-        "",
+-        "",
+-        ""
+-       ]
+-      },
+-      "data": {
+-       "type": "array",
+-       "description": "Metadata for this warning in 'key: value' format.",
+-       "items": {
+-        "type": "object",
+-        "properties": {
+-         "key": {
+-          "type": "string",
+-          "description": "A key for the warning data."
+-         },
+-         "value": {
+-          "type": "string",
+-          "description": "A warning data value corresponding to the key."
+-         }
+-        }
+-       }
+-      },
+-      "message": {
+-       "type": "string",
+-       "description": "Optional human-readable details for this warning."
+-      }
+-     }
+-    }
+-   }
+-  },
+-  "Metadata": {
+-   "id": "Metadata",
+-   "type": "object",
+-   "description": "A metadata key/value entry.",
+-   "properties": {
+-    "fingerprint": {
+-     "type": "string",
+-     "description": "Fingerprint of this resource. A hash of the metadata's contents. This field is used for optimistic locking. An up-to-date metadata fingerprint must be provided in order to modify metadata.",
+-     "format": "byte"
+-    },
+-    "items": {
+-     "type": "array",
+-     "description": "Array of key/value pairs. The total size of all keys and values must be less than 512 KB.",
+-     "items": {
+-      "type": "object",
+-      "properties": {
+-       "key": {
+-        "type": "string",
+-        "description": "Key for the metadata entry. Keys must conform to the following regexp: [a-zA-Z0-9-_]+, and be less than 128 bytes in length. This is reflected as part of a URL in the metadata server. Additionally, to avoid ambiguity, keys must not conflict with any other metadata keys for the project.",
+-        "pattern": "[a-zA-Z0-9-_]{1,128}",
+-        "annotations": {
+-         "required": [
+-          "compute.instances.insert",
+-          "compute.projects.setCommonInstanceMetadata"
+-         ]
+-        }
+-       },
+-       "value": {
+-        "type": "string",
+-        "description": "Value for the metadata entry. These are free-form strings, and only have meaning as interpreted by the image running in the instance. The only restriction placed on values is that their size must be less than or equal to 32768 bytes.",
+-        "annotations": {
+-         "required": [
+-          "compute.instances.insert",
+-          "compute.projects.setCommonInstanceMetadata"
+-         ]
+-        }
+-       }
+-      }
+-     }
+-    },
+-    "kind": {
+-     "type": "string",
+-     "description": "Type of the resource.",
+-     "default": "compute#metadata"
+-    }
+-   }
+-  },
+-  "Network": {
+-   "id": "Network",
+-   "type": "object",
+-   "description": "A network resource.",
+-   "properties": {
+-    "IPv4Range": {
+-     "type": "string",
+-     "description": "Required; The range of internal addresses that are legal on this network. This range is a CIDR specification, for example: 192.168.0.0/16. Provided by the client when the network is created.",
+-     "pattern": "[0-9]{1,3}(?:\\.[0-9]{1,3}){3}/[0-9]{1,2}",
+-     "annotations": {
+-      "required": [
+-       "compute.networks.insert"
+-      ]
+-     }
+-    },
+-    "creationTimestamp": {
+-     "type": "string",
+-     "description": "Creation timestamp in RFC3339 text format (output only)."
+-    },
+-    "description": {
+-     "type": "string",
+-     "description": "An optional textual description of the resource; provided by the client when the resource is created."
+-    },
+-    "gatewayIPv4": {
+-     "type": "string",
+-     "description": "An optional address that is used for default routing to other networks. This must be within the range specified by IPv4Range, and is typically the first usable address in that range. If not specified, the default value is the first usable address in IPv4Range.",
+-     "pattern": "[0-9]{1,3}(?:\\.[0-9]{1,3}){3}"
+-    },
+-    "id": {
+-     "type": "string",
+-     "description": "Unique identifier for the resource; defined by the server (output only).",
+-     "format": "uint64"
+-    },
+-    "kind": {
+-     "type": "string",
+-     "description": "Type of the resource.",
+-     "default": "compute#network"
+-    },
+-    "name": {
+-     "type": "string",
+-     "description": "Name of the resource; provided by the client when the resource is created. The name must be 1-63 characters long, and comply with RFC1035.",
+-     "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-     "annotations": {
+-      "required": [
+-       "compute.networks.insert"
+-      ]
+-     }
+-    },
+-    "selfLink": {
+-     "type": "string",
+-     "description": "Server defined URL for the resource (output only)."
+-    }
+-   }
+-  },
+-  "NetworkInterface": {
+-   "id": "NetworkInterface",
+-   "type": "object",
+-   "description": "A network interface resource attached to an instance.",
+-   "properties": {
+-    "accessConfigs": {
+-     "type": "array",
+-     "description": "Array of configurations for this interface. This specifies how this interface is configured to interact with other network services, such as connecting to the internet. Currently, ONE_TO_ONE_NAT is the only access config supported. If there are no accessConfigs specified, then this instance will have no external internet access.",
+-     "items": {
+-      "$ref": "AccessConfig"
+-     }
+-    },
+-    "name": {
+-     "type": "string",
+-     "description": "Name of the network interface, determined by the server; for network devices, these are e.g. eth0, eth1, etc. (output only)."
+-    },
+-    "network": {
+-     "type": "string",
+-     "description": "URL of the network resource attached to this interface.",
+-     "annotations": {
+-      "required": [
+-       "compute.instances.insert"
+-      ]
+-     }
+-    },
+-    "networkIP": {
+-     "type": "string",
+-     "description": "An optional IPV4 internal network address assigned to the instance for this network interface (output only)."
+-    }
+-   }
+-  },
+-  "NetworkList": {
+-   "id": "NetworkList",
+-   "type": "object",
+-   "description": "Contains a list of network resources.",
+-   "properties": {
+-    "id": {
+-     "type": "string",
+-     "description": "Unique identifier for the resource; defined by the server (output only)."
+-    },
+-    "items": {
+-     "type": "array",
+-     "description": "The network resources.",
+-     "items": {
+-      "$ref": "Network"
+-     }
+-    },
+-    "kind": {
+-     "type": "string",
+-     "description": "Type of resource.",
+-     "default": "compute#networkList"
+-    },
+-    "nextPageToken": {
+-     "type": "string",
+-     "description": "A token used to continue a truncated list request (output only)."
+-    },
+-    "selfLink": {
+-     "type": "string",
+-     "description": "Server defined URL for this resource (output only)."
+-    }
+-   }
+-  },
+-  "Operation": {
+-   "id": "Operation",
+-   "type": "object",
+-   "description": "An operation resource, used to manage asynchronous API requests.",
+-   "properties": {
+-    "clientOperationId": {
+-     "type": "string",
+-     "description": "An optional identifier specified by the client when the mutation was initiated. Must be unique for all operation resources in the project (output only)."
+-    },
+-    "creationTimestamp": {
+-     "type": "string",
+-     "description": "Creation timestamp in RFC3339 text format (output only)."
+-    },
+-    "endTime": {
+-     "type": "string",
+-     "description": "The time that this operation was completed. This is in RFC 3339 format (output only)."
+-    },
+-    "error": {
+-     "type": "object",
+-     "description": "If errors occurred during processing of this operation, this field will be populated (output only).",
+-     "properties": {
+-      "errors": {
+-       "type": "array",
+-       "description": "The array of errors encountered while processing this operation.",
+-       "items": {
+-        "type": "object",
+-        "properties": {
+-         "code": {
+-          "type": "string",
+-          "description": "The error type identifier for this error."
+-         },
+-         "location": {
+-          "type": "string",
+-          "description": "Indicates the field in the request which caused the error. This property is optional."
+-         },
+-         "message": {
+-          "type": "string",
+-          "description": "An optional, human-readable error message."
+-         }
+-        }
+-       }
+-      }
+-     }
+-    },
+-    "httpErrorMessage": {
+-     "type": "string",
+-     "description": "If operation fails, the HTTP error message returned, e.g. NOT FOUND. (output only)."
+-    },
+-    "httpErrorStatusCode": {
+-     "type": "integer",
+-     "description": "If operation fails, the HTTP error status code returned, e.g. 404. (output only).",
+-     "format": "int32"
+-    },
+-    "id": {
+-     "type": "string",
+-     "description": "Unique identifier for the resource; defined by the server (output only).",
+-     "format": "uint64"
+-    },
+-    "insertTime": {
+-     "type": "string",
+-     "description": "The time that this operation was requested. This is in RFC 3339 format (output only)."
+-    },
+-    "kind": {
+-     "type": "string",
+-     "description": "Type of the resource.",
+-     "default": "compute#operation"
+-    },
+-    "name": {
+-     "type": "string",
+-     "description": "Name of the resource (output only)."
+-    },
+-    "operationType": {
+-     "type": "string",
+-     "description": "Type of the operation. Examples include \"insert\", \"update\", and \"delete\" (output only)."
+-    },
+-    "progress": {
+-     "type": "integer",
+-     "description": "An optional progress indicator that ranges from 0 to 100. There is no requirement that this be linear or support any granularity of operations. This should not be used to guess at when the operation will be complete. This number should be monotonically increasing as the operation progresses (output only).",
+-     "format": "int32"
+-    },
+-    "region": {
+-     "type": "string",
+-     "description": "URL of the region where the operation resides (output only)."
+-    },
+-    "selfLink": {
+-     "type": "string",
+-     "description": "Server defined URL for the resource (output only)."
+-    },
+-    "startTime": {
+-     "type": "string",
+-     "description": "The time that this operation was started by the server. This is in RFC 3339 format (output only)."
+-    },
+-    "status": {
+-     "type": "string",
+-     "description": "Status of the operation. Can be one of the following: \"PENDING\", \"RUNNING\", or \"DONE\" (output only).",
+-     "enum": [
+-      "DONE",
+-      "PENDING",
+-      "RUNNING"
+-     ],
+-     "enumDescriptions": [
+-      "",
+-      "",
+-      ""
+-     ]
+-    },
+-    "statusMessage": {
+-     "type": "string",
+-     "description": "An optional textual description of the current status of the operation (output only)."
+-    },
+-    "targetId": {
+-     "type": "string",
+-     "description": "Unique target id which identifies a particular incarnation of the target (output only).",
+-     "format": "uint64"
+-    },
+-    "targetLink": {
+-     "type": "string",
+-     "description": "URL of the resource the operation is mutating (output only)."
+-    },
+-    "user": {
+-     "type": "string",
+-     "description": "User who requested the operation, for example \"user@example.com\" (output only)."
+-    },
+-    "warnings": {
+-     "type": "array",
+-     "description": "If warning messages were generated during processing of this operation, this field will be populated (output only).",
+-     "items": {
+-      "type": "object",
+-      "properties": {
+-       "code": {
+-        "type": "string",
+-        "description": "The warning type identifier for this warning.",
+-        "enum": [
+-         "DEPRECATED_RESOURCE_USED",
+-         "DISK_SIZE_LARGER_THAN_IMAGE_SIZE",
+-         "INJECTED_KERNELS_DEPRECATED",
+-         "NEXT_HOP_ADDRESS_NOT_ASSIGNED",
+-         "NEXT_HOP_CANNOT_IP_FORWARD",
+-         "NEXT_HOP_INSTANCE_NOT_FOUND",
+-         "NEXT_HOP_INSTANCE_NOT_ON_NETWORK",
+-         "NEXT_HOP_NOT_RUNNING",
+-         "NO_RESULTS_ON_PAGE",
+-         "REQUIRED_TOS_AGREEMENT",
+-         "RESOURCE_NOT_DELETED",
+-         "UNREACHABLE"
+-        ],
+-        "enumDescriptions": [
+-         "",
+-         "",
+-         "",
+-         "",
+-         "",
+-         "",
+-         "",
+-         "",
+-         "",
+-         "",
+-         "",
+-         ""
+-        ]
+-       },
+-       "data": {
+-        "type": "array",
+-        "description": "Metadata for this warning in 'key: value' format.",
+-        "items": {
+-         "type": "object",
+-         "properties": {
+-          "key": {
+-           "type": "string",
+-           "description": "A key for the warning data."
+-          },
+-          "value": {
+-           "type": "string",
+-           "description": "A warning data value corresponding to the key."
+-          }
+-         }
+-        }
+-       },
+-       "message": {
+-        "type": "string",
+-        "description": "Optional human-readable details for this warning."
+-       }
+-      }
+-     }
+-    },
+-    "zone": {
+-     "type": "string",
+-     "description": "URL of the zone where the operation resides (output only)."
+-    }
+-   }
+-  },
+-  "OperationAggregatedList": {
+-   "id": "OperationAggregatedList",
+-   "type": "object",
+-   "properties": {
+-    "id": {
+-     "type": "string",
+-     "description": "Unique identifier for the resource; defined by the server (output only)."
+-    },
+-    "items": {
+-     "type": "object",
+-     "description": "A map of scoped operation lists.",
+-     "additionalProperties": {
+-      "$ref": "OperationsScopedList",
+-      "description": "Name of the scope containing this set of operations."
+-     }
+-    },
+-    "kind": {
+-     "type": "string",
+-     "description": "Type of resource.",
+-     "default": "compute#operationAggregatedList"
+-    },
+-    "nextPageToken": {
+-     "type": "string",
+-     "description": "A token used to continue a truncated list request (output only)."
+-    },
+-    "selfLink": {
+-     "type": "string",
+-     "description": "Server defined URL for this resource (output only)."
+-    }
+-   }
+-  },
+-  "OperationList": {
+-   "id": "OperationList",
+-   "type": "object",
+-   "description": "Contains a list of operation resources.",
+-   "properties": {
+-    "id": {
+-     "type": "string",
+-     "description": "Unique identifier for the resource; defined by the server (output only)."
+-    },
+-    "items": {
+-     "type": "array",
+-     "description": "The operation resources.",
+-     "items": {
+-      "$ref": "Operation"
+-     }
+-    },
+-    "kind": {
+-     "type": "string",
+-     "description": "Type of resource.",
+-     "default": "compute#operationList"
+-    },
+-    "nextPageToken": {
+-     "type": "string",
+-     "description": "A token used to continue a truncated list request (output only)."
+-    },
+-    "selfLink": {
+-     "type": "string",
+-     "description": "Server defined URL for this resource (output only)."
+-    }
+-   }
+-  },
+-  "OperationsScopedList": {
+-   "id": "OperationsScopedList",
+-   "type": "object",
+-   "properties": {
+-    "operations": {
+-     "type": "array",
+-     "description": "List of operations contained in this scope.",
+-     "items": {
+-      "$ref": "Operation"
+-     }
+-    },
+-    "warning": {
+-     "type": "object",
+-     "description": "Informational warning which replaces the list of operations when the list is empty.",
+-     "properties": {
+-      "code": {
+-       "type": "string",
+-       "description": "The warning type identifier for this warning.",
+-       "enum": [
+-        "DEPRECATED_RESOURCE_USED",
+-        "DISK_SIZE_LARGER_THAN_IMAGE_SIZE",
+-        "INJECTED_KERNELS_DEPRECATED",
+-        "NEXT_HOP_ADDRESS_NOT_ASSIGNED",
+-        "NEXT_HOP_CANNOT_IP_FORWARD",
+-        "NEXT_HOP_INSTANCE_NOT_FOUND",
+-        "NEXT_HOP_INSTANCE_NOT_ON_NETWORK",
+-        "NEXT_HOP_NOT_RUNNING",
+-        "NO_RESULTS_ON_PAGE",
+-        "REQUIRED_TOS_AGREEMENT",
+-        "RESOURCE_NOT_DELETED",
+-        "UNREACHABLE"
+-       ],
+-       "enumDescriptions": [
+-        "",
+-        "",
+-        "",
+-        "",
+-        "",
+-        "",
+-        "",
+-        "",
+-        "",
+-        "",
+-        "",
+-        ""
+-       ]
+-      },
+-      "data": {
+-       "type": "array",
+-       "description": "Metadata for this warning in 'key: value' format.",
+-       "items": {
+-        "type": "object",
+-        "properties": {
+-         "key": {
+-          "type": "string",
+-          "description": "A key for the warning data."
+-         },
+-         "value": {
+-          "type": "string",
+-          "description": "A warning data value corresponding to the key."
+-         }
+-        }
+-       }
+-      },
+-      "message": {
+-       "type": "string",
+-       "description": "Optional human-readable details for this warning."
+-      }
+-     }
+-    }
+-   }
+-  },
+-  "PathMatcher": {
+-   "id": "PathMatcher",
+-   "type": "object",
+-   "description": "A matcher for the path portion of the URL. The BackendService from the longest-matched rule will serve the URL. If no rule was matched, the default_service will be used.",
+-   "properties": {
+-    "defaultService": {
+-     "type": "string",
+-     "description": "The URL to the BackendService resource. This will be used if none of the 'pathRules' defined by this PathMatcher is met by the URL's path portion."
+-    },
+-    "description": {
+-     "type": "string"
+-    },
+-    "name": {
+-     "type": "string",
+-     "description": "The name to which this PathMatcher is referred by the HostRule."
+-    },
+-    "pathRules": {
+-     "type": "array",
+-     "description": "The list of path rules.",
+-     "items": {
+-      "$ref": "PathRule"
+-     }
+-    }
+-   }
+-  },
+-  "PathRule": {
+-   "id": "PathRule",
+-   "type": "object",
+-   "description": "A path-matching rule for a URL. If matched, will use the specified BackendService to handle the traffic arriving at this URL.",
+-   "properties": {
+-    "paths": {
+-     "type": "array",
+-     "description": "The list of path patterns to match. Each must start with \"/\" and the only place a \"*\" is allowed is at the end following a \"/\". The string fed to the path matcher does not include any text after the first \"?\" or \"#\", and those chars are not allowed here.",
+-     "items": {
+-      "type": "string"
+-     }
+-    },
+-    "service": {
+-     "type": "string",
+-     "description": "The URL of the BackendService resource if this rule is matched."
+-    }
+-   }
+-  },
+-  "Project": {
+-   "id": "Project",
+-   "type": "object",
+-   "description": "A project resource. Projects can be created only in the APIs Console. Unless marked otherwise, values can only be modified in the console.",
+-   "properties": {
+-    "commonInstanceMetadata": {
+-     "$ref": "Metadata",
+-     "description": "Metadata key/value pairs available to all instances contained in this project."
+-    },
+-    "creationTimestamp": {
+-     "type": "string",
+-     "description": "Creation timestamp in RFC3339 text format (output only)."
+-    },
+-    "description": {
+-     "type": "string",
+-     "description": "An optional textual description of the resource."
+-    },
+-    "id": {
+-     "type": "string",
+-     "description": "Unique identifier for the resource; defined by the server (output only).",
+-     "format": "uint64"
+-    },
+-    "kind": {
+-     "type": "string",
+-     "description": "Type of the resource.",
+-     "default": "compute#project"
+-    },
+-    "name": {
+-     "type": "string",
+-     "description": "Name of the resource."
+-    },
+-    "quotas": {
+-     "type": "array",
+-     "description": "Quotas assigned to this project.",
+-     "items": {
+-      "$ref": "Quota"
+-     }
+-    },
+-    "selfLink": {
+-     "type": "string",
+-     "description": "Server defined URL for the resource (output only)."
+-    },
+-    "usageExportLocation": {
+-     "$ref": "UsageExportLocation",
+-     "description": "The location in Cloud Storage and naming method of the daily usage report."
+-    }
+-   }
+-  },
+-  "Quota": {
+-   "id": "Quota",
+-   "type": "object",
+-   "description": "A quotas entry.",
+-   "properties": {
+-    "limit": {
+-     "type": "number",
+-     "description": "Quota limit for this metric.",
+-     "format": "double"
+-    },
+-    "metric": {
+-     "type": "string",
+-     "description": "Name of the quota metric.",
+-     "enum": [
+-      "BACKEND_SERVICES",
+-      "CPUS",
+-      "DISKS",
+-      "DISKS_TOTAL_GB",
+-      "EPHEMERAL_ADDRESSES",
+-      "FIREWALLS",
+-      "FORWARDING_RULES",
+-      "HEALTH_CHECKS",
+-      "IMAGES",
+-      "IMAGES_TOTAL_GB",
+-      "INSTANCES",
+-      "IN_USE_ADDRESSES",
+-      "KERNELS",
+-      "KERNELS_TOTAL_GB",
+-      "NETWORKS",
+-      "OPERATIONS",
+-      "ROUTES",
+-      "SNAPSHOTS",
+-      "SSD_TOTAL_GB",
+-      "STATIC_ADDRESSES",
+-      "TARGET_HTTP_PROXIES",
+-      "TARGET_INSTANCES",
+-      "TARGET_POOLS",
+-      "URL_MAPS"
+-     ],
+-     "enumDescriptions": [
+-      "",
+-      "",
+-      "",
+-      "",
+-      "",
+-      "",
+-      "",
+-      "",
+-      "",
+-      "",
+-      "",
+-      "",
+-      "",
+-      "",
+-      "",
+-      "",
+-      "",
+-      "",
+-      "",
+-      "",
+-      "",
+-      "",
+-      "",
+-      ""
+-     ]
+-    },
+-    "usage": {
+-     "type": "number",
+-     "description": "Current usage of this metric.",
+-     "format": "double"
+-    }
+-   }
+-  },
+-  "Region": {
+-   "id": "Region",
+-   "type": "object",
+-   "description": "Region resource.",
+-   "properties": {
+-    "creationTimestamp": {
+-     "type": "string",
+-     "description": "Creation timestamp in RFC3339 text format (output only)."
+-    },
+-    "deprecated": {
+-     "$ref": "DeprecationStatus",
+-     "description": "The deprecation status associated with this region."
+-    },
+-    "description": {
+-     "type": "string",
+-     "description": "Textual description of the resource."
+-    },
+-    "id": {
+-     "type": "string",
+-     "description": "Unique identifier for the resource; defined by the server (output only).",
+-     "format": "uint64"
+-    },
+-    "kind": {
+-     "type": "string",
+-     "description": "Type of the resource.",
+-     "default": "compute#region"
+-    },
+-    "name": {
+-     "type": "string",
+-     "description": "Name of the resource."
+-    },
+-    "quotas": {
+-     "type": "array",
+-     "description": "Quotas assigned to this region.",
+-     "items": {
+-      "$ref": "Quota"
+-     }
+-    },
+-    "selfLink": {
+-     "type": "string",
+-     "description": "Server defined URL for the resource (output only)."
+-    },
+-    "status": {
+-     "type": "string",
+-     "description": "Status of the region, \"UP\" or \"DOWN\".",
+-     "enum": [
+-      "DOWN",
+-      "UP"
+-     ],
+-     "enumDescriptions": [
+-      "",
+-      ""
+-     ]
+-    },
+-    "zones": {
+-     "type": "array",
+-     "description": "A list of zones homed in this region, in the form of resource URLs.",
+-     "items": {
+-      "type": "string"
+-     }
+-    }
+-   }
+-  },
+-  "RegionList": {
+-   "id": "RegionList",
+-   "type": "object",
+-   "description": "Contains a list of region resources.",
+-   "properties": {
+-    "id": {
+-     "type": "string",
+-     "description": "Unique identifier for the resource; defined by the server (output only)."
+-    },
+-    "items": {
+-     "type": "array",
+-     "description": "The region resources.",
+-     "items": {
+-      "$ref": "Region"
+-     }
+-    },
+-    "kind": {
+-     "type": "string",
+-     "description": "Type of resource.",
+-     "default": "compute#regionList"
+-    },
+-    "nextPageToken": {
+-     "type": "string",
+-     "description": "A token used to continue a truncated list request (output only)."
+-    },
+-    "selfLink": {
+-     "type": "string",
+-     "description": "Server defined URL for this resource (output only)."
+-    }
+-   }
+-  },
+-  "ResourceGroupReference": {
+-   "id": "ResourceGroupReference",
+-   "type": "object",
+-   "properties": {
+-    "group": {
+-     "type": "string"
+-    }
+-   }
+-  },
+-  "Route": {
+-   "id": "Route",
+-   "type": "object",
+-   "description": "The route resource. A Route is a rule that specifies how certain packets should be handled by the virtual network. Routes are associated with VMs by tag and the set of Routes for a particular VM is called its routing table. For each packet leaving a VM, the system searches that VM's routing table for a single best matching Route. Routes match packets by destination IP address, preferring smaller or more specific ranges over larger ones. If there is a tie, the system selects the Route with the smallest priority value. If there is still a tie, it uses the layer three and four packet headers to select just one of the remaining matching Routes. The packet is then forwarded as specified by the next_hop field of the winning Route -- either to another VM destination, a VM gateway or a GCE operated gateway. Packets that do not match any Route in the sending VM's routing table will be dropped.",
+-   "properties": {
+-    "creationTimestamp": {
+-     "type": "string",
+-     "description": "Creation timestamp in RFC3339 text format (output only)."
+-    },
+-    "description": {
+-     "type": "string",
+-     "description": "An optional textual description of the resource; provided by the client when the resource is created."
+-    },
+-    "destRange": {
+-     "type": "string",
+-     "description": "Which packets does this route apply to?",
+-     "annotations": {
+-      "required": [
+-       "compute.routes.insert"
+-      ]
+-     }
+-    },
+-    "id": {
+-     "type": "string",
+-     "description": "Unique identifier for the resource; defined by the server (output only).",
+-     "format": "uint64"
+-    },
+-    "kind": {
+-     "type": "string",
+-     "description": "Type of the resource.",
+-     "default": "compute#route"
+-    },
+-    "name": {
+-     "type": "string",
+-     "description": "Name of the resource; provided by the client when the resource is created. The name must be 1-63 characters long, and comply with RFC1035.",
+-     "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-     "annotations": {
+-      "required": [
+-       "compute.routes.insert"
+-      ]
+-     }
+-    },
+-    "network": {
+-     "type": "string",
+-     "description": "URL of the network to which this route is applied; provided by the client when the route is created.",
+-     "annotations": {
+-      "required": [
+-       "compute.routes.insert"
+-      ]
+-     }
+-    },
+-    "nextHopGateway": {
+-     "type": "string",
+-     "description": "The URL to a gateway that should handle matching packets."
+-    },
+-    "nextHopInstance": {
+-     "type": "string",
+-     "description": "The URL to an instance that should handle matching packets."
+-    },
+-    "nextHopIp": {
+-     "type": "string",
+-     "description": "The network IP address of an instance that should handle matching packets."
+-    },
+-    "nextHopNetwork": {
+-     "type": "string",
+-     "description": "The URL of the local network if it should handle matching packets."
+-    },
+-    "priority": {
+-     "type": "integer",
+-     "description": "Breaks ties between Routes of equal specificity. Routes with smaller values win when tied with routes with larger values.",
+-     "format": "uint32",
+-     "annotations": {
+-      "required": [
+-       "compute.routes.insert"
+-      ]
+-     }
+-    },
+-    "selfLink": {
+-     "type": "string",
+-     "description": "Server defined URL for the resource (output only)."
+-    },
+-    "tags": {
+-     "type": "array",
+-     "description": "A list of instance tags to which this route applies.",
+-     "items": {
+-      "type": "string"
+-     },
+-     "annotations": {
+-      "required": [
+-       "compute.routes.insert"
+-      ]
+-     }
+-    },
+-    "warnings": {
+-     "type": "array",
+-     "description": "If potential misconfigurations are detected for this route, this field will be populated with warning messages.",
+-     "items": {
+-      "type": "object",
+-      "properties": {
+-       "code": {
+-        "type": "string",
+-        "description": "The warning type identifier for this warning.",
+-        "enum": [
+-         "DEPRECATED_RESOURCE_USED",
+-         "DISK_SIZE_LARGER_THAN_IMAGE_SIZE",
+-         "INJECTED_KERNELS_DEPRECATED",
+-         "NEXT_HOP_ADDRESS_NOT_ASSIGNED",
+-         "NEXT_HOP_CANNOT_IP_FORWARD",
+-         "NEXT_HOP_INSTANCE_NOT_FOUND",
+-         "NEXT_HOP_INSTANCE_NOT_ON_NETWORK",
+-         "NEXT_HOP_NOT_RUNNING",
+-         "NO_RESULTS_ON_PAGE",
+-         "REQUIRED_TOS_AGREEMENT",
+-         "RESOURCE_NOT_DELETED",
+-         "UNREACHABLE"
+-        ],
+-        "enumDescriptions": [
+-         "",
+-         "",
+-         "",
+-         "",
+-         "",
+-         "",
+-         "",
+-         "",
+-         "",
+-         "",
+-         "",
+-         ""
+-        ]
+-       },
+-       "data": {
+-        "type": "array",
+-        "description": "Metadata for this warning in 'key: value' format.",
+-        "items": {
+-         "type": "object",
+-         "properties": {
+-          "key": {
+-           "type": "string",
+-           "description": "A key for the warning data."
+-          },
+-          "value": {
+-           "type": "string",
+-           "description": "A warning data value corresponding to the key."
+-          }
+-         }
+-        }
+-       },
+-       "message": {
+-        "type": "string",
+-        "description": "Optional human-readable details for this warning."
+-       }
+-      }
+-     }
+-    }
+-   }
+-  },
+-  "RouteList": {
+-   "id": "RouteList",
+-   "type": "object",
+-   "description": "Contains a list of route resources.",
+-   "properties": {
+-    "id": {
+-     "type": "string",
+-     "description": "Unique identifier for the resource; defined by the server (output only)."
+-    },
+-    "items": {
+-     "type": "array",
+-     "description": "The route resources.",
+-     "items": {
+-      "$ref": "Route"
+-     }
+-    },
+-    "kind": {
+-     "type": "string",
+-     "description": "Type of resource.",
+-     "default": "compute#routeList"
+-    },
+-    "nextPageToken": {
+-     "type": "string",
+-     "description": "A token used to continue a truncated list request (output only)."
+-    },
+-    "selfLink": {
+-     "type": "string",
+-     "description": "Server defined URL for this resource (output only)."
+-    }
+-   }
+-  },
+-  "Scheduling": {
+-   "id": "Scheduling",
+-   "type": "object",
+-   "description": "Scheduling options for an Instance.",
+-   "properties": {
+-    "automaticRestart": {
+-     "type": "boolean",
+-     "description": "Whether the Instance should be automatically restarted whenever it is terminated by Compute Engine (not terminated by user)."
+-    },
+-    "onHostMaintenance": {
+-     "type": "string",
+-     "description": "How the instance should behave when the host machine undergoes maintenance that may temporarily impact instance performance.",
+-     "enum": [
+-      "MIGRATE",
+-      "TERMINATE"
+-     ],
+-     "enumDescriptions": [
+-      "",
+-      ""
+-     ]
+-    }
+-   }
+-  },
+-  "SerialPortOutput": {
+-   "id": "SerialPortOutput",
+-   "type": "object",
+-   "description": "An instance serial console output.",
+-   "properties": {
+-    "contents": {
+-     "type": "string",
+-     "description": "The contents of the console output."
+-    },
+-    "kind": {
+-     "type": "string",
+-     "description": "Type of the resource.",
+-     "default": "compute#serialPortOutput"
+-    },
+-    "selfLink": {
+-     "type": "string",
+-     "description": "Server defined URL for the resource (output only)."
+-    }
+-   }
+-  },
+-  "ServiceAccount": {
+-   "id": "ServiceAccount",
+-   "type": "object",
+-   "description": "A service account.",
+-   "properties": {
+-    "email": {
+-     "type": "string",
+-     "description": "Email address of the service account."
+-    },
+-    "scopes": {
+-     "type": "array",
+-     "description": "The list of scopes to be made available for this service account.",
+-     "items": {
+-      "type": "string"
+-     }
+-    }
+-   }
+-  },
+-  "Snapshot": {
+-   "id": "Snapshot",
+-   "type": "object",
+-   "description": "A persistent disk snapshot resource.",
+-   "properties": {
+-    "creationTimestamp": {
+-     "type": "string",
+-     "description": "Creation timestamp in RFC3339 text format (output only)."
+-    },
+-    "description": {
+-     "type": "string",
+-     "description": "An optional textual description of the resource; provided by the client when the resource is created."
+-    },
+-    "diskSizeGb": {
+-     "type": "string",
+-     "description": "Size of the persistent disk snapshot, specified in GB (output only).",
+-     "format": "int64"
+-    },
+-    "id": {
+-     "type": "string",
+-     "description": "Unique identifier for the resource; defined by the server (output only).",
+-     "format": "uint64"
+-    },
+-    "kind": {
+-     "type": "string",
+-     "description": "Type of the resource.",
+-     "default": "compute#snapshot"
+-    },
+-    "licenses": {
+-     "type": "array",
+-     "description": "Public visible licenses.",
+-     "items": {
+-      "type": "string"
+-     }
+-    },
+-    "name": {
+-     "type": "string",
+-     "description": "Name of the resource; provided by the client when the resource is created. The name must be 1-63 characters long, and comply with RFC1035.",
+-     "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?"
+-    },
+-    "selfLink": {
+-     "type": "string",
+-     "description": "Server defined URL for the resource (output only)."
+-    },
+-    "sourceDisk": {
+-     "type": "string",
+-     "description": "The source disk used to create this snapshot. Once the source disk has been deleted from the system, this field will be cleared, and will not be set even if a disk with the same name has been re-created (output only)."
+-    },
+-    "sourceDiskId": {
+-     "type": "string",
+-     "description": "The 'id' value of the disk used to create this snapshot. This value may be used to determine whether the snapshot was taken from the current or a previous instance of a given disk name."
+-    },
+-    "status": {
+-     "type": "string",
+-     "description": "The status of the persistent disk snapshot (output only).",
+-     "enum": [
+-      "CREATING",
+-      "DELETING",
+-      "FAILED",
+-      "READY",
+-      "UPLOADING"
+-     ],
+-     "enumDescriptions": [
+-      "",
+-      "",
+-      "",
+-      "",
+-      ""
+-     ]
+-    },
+-    "storageBytes": {
+-     "type": "string",
+-     "description": "A size of the storage used by the snapshot. As snapshots share storage this number is expected to change with snapshot creation/deletion.",
+-     "format": "int64"
+-    },
+-    "storageBytesStatus": {
+-     "type": "string",
+-     "description": "An indicator whether storageBytes is in a stable state, or it is being adjusted as a result of shared storage reallocation.",
+-     "enum": [
+-      "UPDATING",
+-      "UP_TO_DATE"
+-     ],
+-     "enumDescriptions": [
+-      "",
+-      ""
+-     ]
+-    }
+-   }
+-  },
+-  "SnapshotList": {
+-   "id": "SnapshotList",
+-   "type": "object",
+-   "description": "Contains a list of persistent disk snapshot resources.",
+-   "properties": {
+-    "id": {
+-     "type": "string",
+-     "description": "Unique identifier for the resource; defined by the server (output only)."
+-    },
+-    "items": {
+-     "type": "array",
+-     "description": "The persistent snapshot resources.",
+-     "items": {
+-      "$ref": "Snapshot"
+-     }
+-    },
+-    "kind": {
+-     "type": "string",
+-     "description": "Type of resource.",
+-     "default": "compute#snapshotList"
+-    },
+-    "nextPageToken": {
+-     "type": "string",
+-     "description": "A token used to continue a truncated list request (output only)."
+-    },
+-    "selfLink": {
+-     "type": "string",
+-     "description": "Server defined URL for this resource (output only)."
+-    }
+-   }
+-  },
+-  "Tags": {
+-   "id": "Tags",
+-   "type": "object",
+-   "description": "A set of instance tags.",
+-   "properties": {
+-    "fingerprint": {
+-     "type": "string",
+-     "description": "Fingerprint of this resource. A hash of the tags stored in this object. This field is used for optimistic locking. An up-to-date tags fingerprint must be provided in order to modify tags.",
+-     "format": "byte"
+-    },
+-    "items": {
+-     "type": "array",
+-     "description": "An array of tags. Each tag must be 1-63 characters long, and comply with RFC1035.",
+-     "items": {
+-      "type": "string"
+-     }
+-    }
+-   }
+-  },
+-  "TargetHttpProxy": {
+-   "id": "TargetHttpProxy",
+-   "type": "object",
+-   "description": "A TargetHttpProxy resource. This resource defines an HTTP proxy.",
+-   "properties": {
+-    "creationTimestamp": {
+-     "type": "string",
+-     "description": "Creation timestamp in RFC3339 text format (output only)."
+-    },
+-    "description": {
+-     "type": "string",
+-     "description": "An optional textual description of the resource; provided by the client when the resource is created."
+-    },
+-    "id": {
+-     "type": "string",
+-     "description": "Unique identifier for the resource; defined by the server (output only).",
+-     "format": "uint64"
+-    },
+-    "kind": {
+-     "type": "string",
+-     "description": "Type of the resource.",
+-     "default": "compute#targetHttpProxy"
+-    },
+-    "name": {
+-     "type": "string",
+-     "description": "Name of the resource; provided by the client when the resource is created. The name must be 1-63 characters long, and comply with RFC1035.",
+-     "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?"
+-    },
+-    "selfLink": {
+-     "type": "string",
+-     "description": "Server defined URL for the resource (output only)."
+-    },
+-    "urlMap": {
+-     "type": "string",
+-     "description": "URL to the UrlMap resource that defines the mapping from URL to the BackendService."
+-    }
+-   }
+-  },
+-  "TargetHttpProxyList": {
+-   "id": "TargetHttpProxyList",
+-   "type": "object",
+-   "description": "Contains a list of TargetHttpProxy resources.",
+-   "properties": {
+-    "id": {
+-     "type": "string",
+-     "description": "Unique identifier for the resource; defined by the server (output only)."
+-    },
+-    "items": {
+-     "type": "array",
+-     "description": "The TargetHttpProxy resources.",
+-     "items": {
+-      "$ref": "TargetHttpProxy"
+-     }
+-    },
+-    "kind": {
+-     "type": "string",
+-     "description": "Type of resource.",
+-     "default": "compute#targetHttpProxyList"
+-    },
+-    "nextPageToken": {
+-     "type": "string",
+-     "description": "A token used to continue a truncated list request (output only)."
+-    },
+-    "selfLink": {
+-     "type": "string",
+-     "description": "Server defined URL for this resource (output only)."
+-    }
+-   }
+-  },
+-  "TargetInstance": {
+-   "id": "TargetInstance",
+-   "type": "object",
+-   "description": "A TargetInstance resource. This resource defines an endpoint VM that terminates traffic of certain protocols.",
+-   "properties": {
+-    "creationTimestamp": {
+-     "type": "string",
+-     "description": "Creation timestamp in RFC3339 text format (output only)."
+-    },
+-    "description": {
+-     "type": "string",
+-     "description": "An optional textual description of the resource; provided by the client when the resource is created."
+-    },
+-    "id": {
+-     "type": "string",
+-     "description": "Unique identifier for the resource; defined by the server (output only).",
+-     "format": "uint64"
+-    },
+-    "instance": {
+-     "type": "string",
+-     "description": "The URL to the instance that terminates the relevant traffic."
+-    },
+-    "kind": {
+-     "type": "string",
+-     "description": "Type of the resource.",
+-     "default": "compute#targetInstance"
+-    },
+-    "name": {
+-     "type": "string",
+-     "description": "Name of the resource; provided by the client when the resource is created. The name must be 1-63 characters long, and comply with RFC1035.",
+-     "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?"
+-    },
+-    "natPolicy": {
+-     "type": "string",
+-     "description": "NAT option controlling how IPs are NAT'ed to the VM. Currently only NO_NAT (default value) is supported.",
+-     "enum": [
+-      "NO_NAT"
+-     ],
+-     "enumDescriptions": [
+-      ""
+-     ]
+-    },
+-    "selfLink": {
+-     "type": "string",
+-     "description": "Server defined URL for the resource (output only)."
+-    },
+-    "zone": {
+-     "type": "string",
+-     "description": "URL of the zone where the target instance resides (output only)."
+-    }
+-   }
+-  },
+-  "TargetInstanceAggregatedList": {
+-   "id": "TargetInstanceAggregatedList",
+-   "type": "object",
+-   "properties": {
+-    "id": {
+-     "type": "string",
+-     "description": "Unique identifier for the resource; defined by the server (output only)."
+-    },
+-    "items": {
+-     "type": "object",
+-     "description": "A map of scoped target instance lists.",
+-     "additionalProperties": {
+-      "$ref": "TargetInstancesScopedList",
+-      "description": "Name of the scope containing this set of target instances."
+-     }
+-    },
+-    "kind": {
+-     "type": "string",
+-     "description": "Type of resource.",
+-     "default": "compute#targetInstanceAggregatedList"
+-    },
+-    "nextPageToken": {
+-     "type": "string",
+-     "description": "A token used to continue a truncated list request (output only)."
+-    },
+-    "selfLink": {
+-     "type": "string",
+-     "description": "Server defined URL for this resource (output only)."
+-    }
+-   }
+-  },
+-  "TargetInstanceList": {
+-   "id": "TargetInstanceList",
+-   "type": "object",
+-   "description": "Contains a list of TargetInstance resources.",
+-   "properties": {
+-    "id": {
+-     "type": "string",
+-     "description": "Unique identifier for the resource; defined by the server (output only)."
+-    },
+-    "items": {
+-     "type": "array",
+-     "description": "The TargetInstance resources.",
+-     "items": {
+-      "$ref": "TargetInstance"
+-     }
+-    },
+-    "kind": {
+-     "type": "string",
+-     "description": "Type of resource.",
+-     "default": "compute#targetInstanceList"
+-    },
+-    "nextPageToken": {
+-     "type": "string",
+-     "description": "A token used to continue a truncated list request (output only)."
+-    },
+-    "selfLink": {
+-     "type": "string",
+-     "description": "Server defined URL for this resource (output only)."
+-    }
+-   }
+-  },
+-  "TargetInstancesScopedList": {
+-   "id": "TargetInstancesScopedList",
+-   "type": "object",
+-   "properties": {
+-    "targetInstances": {
+-     "type": "array",
+-     "description": "List of target instances contained in this scope.",
+-     "items": {
+-      "$ref": "TargetInstance"
+-     }
+-    },
+-    "warning": {
+-     "type": "object",
+-     "description": "Informational warning which replaces the list of addresses when the list is empty.",
+-     "properties": {
+-      "code": {
+-       "type": "string",
+-       "description": "The warning type identifier for this warning.",
+-       "enum": [
+-        "DEPRECATED_RESOURCE_USED",
+-        "DISK_SIZE_LARGER_THAN_IMAGE_SIZE",
+-        "INJECTED_KERNELS_DEPRECATED",
+-        "NEXT_HOP_ADDRESS_NOT_ASSIGNED",
+-        "NEXT_HOP_CANNOT_IP_FORWARD",
+-        "NEXT_HOP_INSTANCE_NOT_FOUND",
+-        "NEXT_HOP_INSTANCE_NOT_ON_NETWORK",
+-        "NEXT_HOP_NOT_RUNNING",
+-        "NO_RESULTS_ON_PAGE",
+-        "REQUIRED_TOS_AGREEMENT",
+-        "RESOURCE_NOT_DELETED",
+-        "UNREACHABLE"
+-       ],
+-       "enumDescriptions": [
+-        "",
+-        "",
+-        "",
+-        "",
+-        "",
+-        "",
+-        "",
+-        "",
+-        "",
+-        "",
+-        "",
+-        ""
+-       ]
+-      },
+-      "data": {
+-       "type": "array",
+-       "description": "Metadata for this warning in 'key: value' format.",
+-       "items": {
+-        "type": "object",
+-        "properties": {
+-         "key": {
+-          "type": "string",
+-          "description": "A key for the warning data."
+-         },
+-         "value": {
+-          "type": "string",
+-          "description": "A warning data value corresponding to the key."
+-         }
+-        }
+-       }
+-      },
+-      "message": {
+-       "type": "string",
+-       "description": "Optional human-readable details for this warning."
+-      }
+-     }
+-    }
+-   }
+-  },
+-  "TargetPool": {
+-   "id": "TargetPool",
+-   "type": "object",
+-   "description": "A TargetPool resource. This resource defines a pool of VMs, associated HttpHealthCheck resources, and the fallback TargetPool.",
+-   "properties": {
+-    "backupPool": {
+-     "type": "string",
+-     "description": "This field is applicable only when the containing target pool is serving a forwarding rule as the primary pool, and its 'failoverRatio' field is properly set to a value between [0, 1].\n\n'backupPool' and 'failoverRatio' together define the fallback behavior of the primary target pool: if the ratio of the healthy VMs in the primary pool is at or below 'failoverRatio', traffic arriving at the load-balanced IP will be directed to the backup pool.\n\nIn case where 'failoverRatio' and 'backupPool' are not set, or all the VMs in the backup pool are unhealthy, the traffic will be directed back to the primary pool in the \"force\" mode, where traffic will be spread to the healthy VMs with the best effort, or to all VMs when no VM is healthy."
+-    },
+-    "creationTimestamp": {
+-     "type": "string",
+-     "description": "Creation timestamp in RFC3339 text format (output only)."
+-    },
+-    "description": {
+-     "type": "string",
+-     "description": "An optional textual description of the resource; provided by the client when the resource is created."
+-    },
+-    "failoverRatio": {
+-     "type": "number",
+-     "description": "This field is applicable only when the containing target pool is serving a forwarding rule as the primary pool (i.e., not as a backup pool to some other target pool). The value of the field must be in [0, 1].\n\nIf set, 'backupPool' must also be set. They together define the fallback behavior of the primary target pool: if the ratio of the healthy VMs in the primary pool is at or below this number, traffic arriving at the load-balanced IP will be directed to the backup pool.\n\nIn case where 'failoverRatio' is not set or all the VMs in the backup pool are unhealthy, the traffic will be directed back to the primary pool in the \"force\" mode, where traffic will be spread to the healthy VMs with the best effort, or to all VMs when no VM is healthy.",
+-     "format": "float"
+-    },
+-    "healthChecks": {
+-     "type": "array",
+-     "description": "A list of URLs to the HttpHealthCheck resource. A member VM in this pool is considered healthy if and only if all specified health checks pass. An empty list means all member VMs will be considered healthy at all times.",
+-     "items": {
+-      "type": "string"
+-     }
+-    },
+-    "id": {
+-     "type": "string",
+-     "description": "Unique identifier for the resource; defined by the server (output only).",
+-     "format": "uint64"
+-    },
+-    "instances": {
+-     "type": "array",
+-     "description": "A list of resource URLs to the member VMs serving this pool. They must live in zones contained in the same region as this pool.",
+-     "items": {
+-      "type": "string"
+-     }
+-    },
+-    "kind": {
+-     "type": "string",
+-     "description": "Type of the resource.",
+-     "default": "compute#targetPool"
+-    },
+-    "name": {
+-     "type": "string",
+-     "description": "Name of the resource; provided by the client when the resource is created. The name must be 1-63 characters long, and comply with RFC1035.",
+-     "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?"
+-    },
+-    "region": {
+-     "type": "string",
+-     "description": "URL of the region where the target pool resides (output only)."
+-    },
+-    "selfLink": {
+-     "type": "string",
+-     "description": "Server defined URL for the resource (output only)."
+-    },
+-    "sessionAffinity": {
+-     "type": "string",
+-     "description": "Sesssion affinity option, must be one of the following values: 'NONE': Connections from the same client IP may go to any VM in the pool; 'CLIENT_IP': Connections from the same client IP will go to the same VM in the pool while that VM remains healthy. 'CLIENT_IP_PROTO': Connections from the same client IP with the same IP protocol will go to the same VM in the pool while that VM remains healthy.",
+-     "enum": [
+-      "CLIENT_IP",
+-      "CLIENT_IP_PROTO",
+-      "NONE"
+-     ],
+-     "enumDescriptions": [
+-      "",
+-      "",
+-      ""
+-     ]
+-    }
+-   }
+-  },
+-  "TargetPoolAggregatedList": {
+-   "id": "TargetPoolAggregatedList",
+-   "type": "object",
+-   "properties": {
+-    "id": {
+-     "type": "string",
+-     "description": "Unique identifier for the resource; defined by the server (output only)."
+-    },
+-    "items": {
+-     "type": "object",
+-     "description": "A map of scoped target pool lists.",
+-     "additionalProperties": {
+-      "$ref": "TargetPoolsScopedList",
+-      "description": "Name of the scope containing this set of target pools."
+-     }
+-    },
+-    "kind": {
+-     "type": "string",
+-     "description": "Type of resource.",
+-     "default": "compute#targetPoolAggregatedList"
+-    },
+-    "nextPageToken": {
+-     "type": "string",
+-     "description": "A token used to continue a truncated list request (output only)."
+-    },
+-    "selfLink": {
+-     "type": "string",
+-     "description": "Server defined URL for this resource (output only)."
+-    }
+-   }
+-  },
+-  "TargetPoolInstanceHealth": {
+-   "id": "TargetPoolInstanceHealth",
+-   "type": "object",
+-   "properties": {
+-    "healthStatus": {
+-     "type": "array",
+-     "items": {
+-      "$ref": "HealthStatus"
+-     }
+-    },
+-    "kind": {
+-     "type": "string",
+-     "description": "Type of resource.",
+-     "default": "compute#targetPoolInstanceHealth"
+-    }
+-   }
+-  },
+-  "TargetPoolList": {
+-   "id": "TargetPoolList",
+-   "type": "object",
+-   "description": "Contains a list of TargetPool resources.",
+-   "properties": {
+-    "id": {
+-     "type": "string",
+-     "description": "Unique identifier for the resource; defined by the server (output only)."
+-    },
+-    "items": {
+-     "type": "array",
+-     "description": "The TargetPool resources.",
+-     "items": {
+-      "$ref": "TargetPool"
+-     }
+-    },
+-    "kind": {
+-     "type": "string",
+-     "description": "Type of resource.",
+-     "default": "compute#targetPoolList"
+-    },
+-    "nextPageToken": {
+-     "type": "string",
+-     "description": "A token used to continue a truncated list request (output only)."
+-    },
+-    "selfLink": {
+-     "type": "string",
+-     "description": "Server defined URL for this resource (output only)."
+-    }
+-   }
+-  },
+-  "TargetPoolsAddHealthCheckRequest": {
+-   "id": "TargetPoolsAddHealthCheckRequest",
+-   "type": "object",
+-   "properties": {
+-    "healthChecks": {
+-     "type": "array",
+-     "description": "Health check URLs to be added to targetPool.",
+-     "items": {
+-      "$ref": "HealthCheckReference"
+-     }
+-    }
+-   }
+-  },
+-  "TargetPoolsAddInstanceRequest": {
+-   "id": "TargetPoolsAddInstanceRequest",
+-   "type": "object",
+-   "properties": {
+-    "instances": {
+-     "type": "array",
+-     "description": "URLs of the instances to be added to targetPool.",
+-     "items": {
+-      "$ref": "InstanceReference"
+-     }
+-    }
+-   }
+-  },
+-  "TargetPoolsRemoveHealthCheckRequest": {
+-   "id": "TargetPoolsRemoveHealthCheckRequest",
+-   "type": "object",
+-   "properties": {
+-    "healthChecks": {
+-     "type": "array",
+-     "description": "Health check URLs to be removed from targetPool.",
+-     "items": {
+-      "$ref": "HealthCheckReference"
+-     }
+-    }
+-   }
+-  },
+-  "TargetPoolsRemoveInstanceRequest": {
+-   "id": "TargetPoolsRemoveInstanceRequest",
+-   "type": "object",
+-   "properties": {
+-    "instances": {
+-     "type": "array",
+-     "description": "URLs of the instances to be removed from targetPool.",
+-     "items": {
+-      "$ref": "InstanceReference"
+-     }
+-    }
+-   }
+-  },
+-  "TargetPoolsScopedList": {
+-   "id": "TargetPoolsScopedList",
+-   "type": "object",
+-   "properties": {
+-    "targetPools": {
+-     "type": "array",
+-     "description": "List of target pools contained in this scope.",
+-     "items": {
+-      "$ref": "TargetPool"
+-     }
+-    },
+-    "warning": {
+-     "type": "object",
+-     "description": "Informational warning which replaces the list of addresses when the list is empty.",
+-     "properties": {
+-      "code": {
+-       "type": "string",
+-       "description": "The warning type identifier for this warning.",
+-       "enum": [
+-        "DEPRECATED_RESOURCE_USED",
+-        "DISK_SIZE_LARGER_THAN_IMAGE_SIZE",
+-        "INJECTED_KERNELS_DEPRECATED",
+-        "NEXT_HOP_ADDRESS_NOT_ASSIGNED",
+-        "NEXT_HOP_CANNOT_IP_FORWARD",
+-        "NEXT_HOP_INSTANCE_NOT_FOUND",
+-        "NEXT_HOP_INSTANCE_NOT_ON_NETWORK",
+-        "NEXT_HOP_NOT_RUNNING",
+-        "NO_RESULTS_ON_PAGE",
+-        "REQUIRED_TOS_AGREEMENT",
+-        "RESOURCE_NOT_DELETED",
+-        "UNREACHABLE"
+-       ],
+-       "enumDescriptions": [
+-        "",
+-        "",
+-        "",
+-        "",
+-        "",
+-        "",
+-        "",
+-        "",
+-        "",
+-        "",
+-        "",
+-        ""
+-       ]
+-      },
+-      "data": {
+-       "type": "array",
+-       "description": "Metadata for this warning in 'key: value' format.",
+-       "items": {
+-        "type": "object",
+-        "properties": {
+-         "key": {
+-          "type": "string",
+-          "description": "A key for the warning data."
+-         },
+-         "value": {
+-          "type": "string",
+-          "description": "A warning data value corresponding to the key."
+-         }
+-        }
+-       }
+-      },
+-      "message": {
+-       "type": "string",
+-       "description": "Optional human-readable details for this warning."
+-      }
+-     }
+-    }
+-   }
+-  },
+-  "TargetReference": {
+-   "id": "TargetReference",
+-   "type": "object",
+-   "properties": {
+-    "target": {
+-     "type": "string"
+-    }
+-   }
+-  },
+-  "TestFailure": {
+-   "id": "TestFailure",
+-   "type": "object",
+-   "properties": {
+-    "actualService": {
+-     "type": "string"
+-    },
+-    "expectedService": {
+-     "type": "string"
+-    },
+-    "host": {
+-     "type": "string"
+-    },
+-    "path": {
+-     "type": "string"
+-    }
+-   }
+-  },
+-  "UrlMap": {
+-   "id": "UrlMap",
+-   "type": "object",
+-   "description": "A UrlMap resource. This resource defines the mapping from URL to the BackendService resource, based on the \"longest-match\" of the URL's host and path.",
+-   "properties": {
+-    "creationTimestamp": {
+-     "type": "string",
+-     "description": "Creation timestamp in RFC3339 text format (output only)."
+-    },
+-    "defaultService": {
+-     "type": "string",
+-     "description": "The URL of the BackendService resource if none of the hostRules match."
+-    },
+-    "description": {
+-     "type": "string",
+-     "description": "An optional textual description of the resource; provided by the client when the resource is created."
+-    },
+-    "fingerprint": {
+-     "type": "string",
+-     "description": "Fingerprint of this resource. A hash of the contents stored in this object. This field is used in optimistic locking. This field will be ignored when inserting a UrlMap. An up-to-date fingerprint must be provided in order to update the UrlMap.",
+-     "format": "byte"
+-    },
+-    "hostRules": {
+-     "type": "array",
+-     "description": "The list of HostRules to use against the URL.",
+-     "items": {
+-      "$ref": "HostRule"
+-     }
+-    },
+-    "id": {
+-     "type": "string",
+-     "description": "Unique identifier for the resource; defined by the server (output only).",
+-     "format": "uint64"
+-    },
+-    "kind": {
+-     "type": "string",
+-     "description": "Type of the resource.",
+-     "default": "compute#urlMap"
+-    },
+-    "name": {
+-     "type": "string",
+-     "description": "Name of the resource; provided by the client when the resource is created. The name must be 1-63 characters long, and comply with RFC1035.",
+-     "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?"
+-    },
+-    "pathMatchers": {
+-     "type": "array",
+-     "description": "The list of named PathMatchers to use against the URL.",
+-     "items": {
+-      "$ref": "PathMatcher"
+-     }
+-    },
+-    "selfLink": {
+-     "type": "string",
+-     "description": "Server defined URL for the resource (output only)."
+-    },
+-    "tests": {
+-     "type": "array",
+-     "description": "The list of expected URL mappings. Request to update this UrlMap will succeed only all of the test cases pass.",
+-     "items": {
+-      "$ref": "UrlMapTest"
+-     }
+-    }
+-   }
+-  },
+-  "UrlMapList": {
+-   "id": "UrlMapList",
+-   "type": "object",
+-   "description": "Contains a list of UrlMap resources.",
+-   "properties": {
+-    "id": {
+-     "type": "string",
+-     "description": "Unique identifier for the resource; defined by the server (output only)."
+-    },
+-    "items": {
+-     "type": "array",
+-     "description": "The UrlMap resources.",
+-     "items": {
+-      "$ref": "UrlMap"
+-     }
+-    },
+-    "kind": {
+-     "type": "string",
+-     "description": "Type of resource.",
+-     "default": "compute#urlMapList"
+-    },
+-    "nextPageToken": {
+-     "type": "string",
+-     "description": "A token used to continue a truncated list request (output only)."
+-    },
+-    "selfLink": {
+-     "type": "string",
+-     "description": "Server defined URL for this resource (output only)."
+-    }
+-   }
+-  },
+-  "UrlMapReference": {
+-   "id": "UrlMapReference",
+-   "type": "object",
+-   "properties": {
+-    "urlMap": {
+-     "type": "string"
+-    }
+-   }
+-  },
+-  "UrlMapTest": {
+-   "id": "UrlMapTest",
+-   "type": "object",
+-   "description": "Message for the expected URL mappings.",
+-   "properties": {
+-    "description": {
+-     "type": "string",
+-     "description": "Description of this test case."
+-    },
+-    "host": {
+-     "type": "string",
+-     "description": "Host portion of the URL."
+-    },
+-    "path": {
+-     "type": "string",
+-     "description": "Path portion of the URL."
+-    },
+-    "service": {
+-     "type": "string",
+-     "description": "Expected BackendService resource the given URL should be mapped to."
+-    }
+-   }
+-  },
+-  "UrlMapValidationResult": {
+-   "id": "UrlMapValidationResult",
+-   "type": "object",
+-   "description": "Message representing the validation result for a UrlMap.",
+-   "properties": {
+-    "loadErrors": {
+-     "type": "array",
+-     "items": {
+-      "type": "string"
+-     }
+-    },
+-    "loadSucceeded": {
+-     "type": "boolean",
+-     "description": "Whether the given UrlMap can be successfully loaded. If false, 'loadErrors' indicates the reasons."
+-    },
+-    "testFailures": {
+-     "type": "array",
+-     "items": {
+-      "$ref": "TestFailure"
+-     }
+-    },
+-    "testPassed": {
+-     "type": "boolean",
+-     "description": "If successfully loaded, this field indicates whether the test passed. If false, 'testFailures's indicate the reason of failure."
+-    }
+-   }
+-  },
+-  "UrlMapsValidateRequest": {
+-   "id": "UrlMapsValidateRequest",
+-   "type": "object",
+-   "properties": {
+-    "resource": {
+-     "$ref": "UrlMap",
+-     "description": "Content of the UrlMap to be validated."
+-    }
+-   }
+-  },
+-  "UrlMapsValidateResponse": {
+-   "id": "UrlMapsValidateResponse",
+-   "type": "object",
+-   "properties": {
+-    "result": {
+-     "$ref": "UrlMapValidationResult"
+-    }
+-   }
+-  },
+-  "UsageExportLocation": {
+-   "id": "UsageExportLocation",
+-   "type": "object",
+-   "description": "The location in Cloud Storage and naming method of the daily usage report. Contains bucket_name and report_name prefix.",
+-   "properties": {
+-    "bucketName": {
+-     "type": "string",
+-     "description": "The name of an existing bucket in Cloud Storage where the usage report object is stored. The Google Service Account is granted write access to this bucket. This is simply the bucket name, with no \"gs://\" or \"https://storage.googleapis.com/\" in front of it."
+-    },
+-    "reportNamePrefix": {
+-     "type": "string",
+-     "description": "An optional prefix for the name of the usage report object stored in bucket_name. If not supplied, defaults to \"usage_\". The report is stored as a CSV file named _gce_.csv. where  is the day of the usage according to Pacific Time. The prefix should conform to Cloud Storage object naming conventions."
+-    }
+-   }
+-  },
+-  "Zone": {
+-   "id": "Zone",
+-   "type": "object",
+-   "description": "A zone resource.",
+-   "properties": {
+-    "creationTimestamp": {
+-     "type": "string",
+-     "description": "Creation timestamp in RFC3339 text format (output only)."
+-    },
+-    "deprecated": {
+-     "$ref": "DeprecationStatus",
+-     "description": "The deprecation status associated with this zone."
+-    },
+-    "description": {
+-     "type": "string",
+-     "description": "Textual description of the resource."
+-    },
+-    "id": {
+-     "type": "string",
+-     "description": "Unique identifier for the resource; defined by the server (output only).",
+-     "format": "uint64"
+-    },
+-    "kind": {
+-     "type": "string",
+-     "description": "Type of the resource.",
+-     "default": "compute#zone"
+-    },
+-    "maintenanceWindows": {
+-     "type": "array",
+-     "description": "Scheduled maintenance windows for the zone. When the zone is in a maintenance window, all resources which reside in the zone will be unavailable.",
+-     "items": {
+-      "type": "object",
+-      "properties": {
+-       "beginTime": {
+-        "type": "string",
+-        "description": "Begin time of the maintenance window, in RFC 3339 format."
+-       },
+-       "description": {
+-        "type": "string",
+-        "description": "Textual description of the maintenance window."
+-       },
+-       "endTime": {
+-        "type": "string",
+-        "description": "End time of the maintenance window, in RFC 3339 format."
+-       },
+-       "name": {
+-        "type": "string",
+-        "description": "Name of the maintenance window."
+-       }
+-      }
+-     }
+-    },
+-    "name": {
+-     "type": "string",
+-     "description": "Name of the resource."
+-    },
+-    "region": {
+-     "type": "string",
+-     "description": "Full URL reference to the region which hosts the zone (output only)."
+-    },
+-    "selfLink": {
+-     "type": "string",
+-     "description": "Server defined URL for the resource (output only)."
+-    },
+-    "status": {
+-     "type": "string",
+-     "description": "Status of the zone. \"UP\" or \"DOWN\".",
+-     "enum": [
+-      "DOWN",
+-      "UP"
+-     ],
+-     "enumDescriptions": [
+-      "",
+-      ""
+-     ]
+-    }
+-   }
+-  },
+-  "ZoneList": {
+-   "id": "ZoneList",
+-   "type": "object",
+-   "description": "Contains a list of zone resources.",
+-   "properties": {
+-    "id": {
+-     "type": "string",
+-     "description": "Unique identifier for the resource; defined by the server (output only)."
+-    },
+-    "items": {
+-     "type": "array",
+-     "description": "The zone resources.",
+-     "items": {
+-      "$ref": "Zone"
+-     }
+-    },
+-    "kind": {
+-     "type": "string",
+-     "description": "Type of resource.",
+-     "default": "compute#zoneList"
+-    },
+-    "nextPageToken": {
+-     "type": "string",
+-     "description": "A token used to continue a truncated list request (output only)."
+-    },
+-    "selfLink": {
+-     "type": "string",
+-     "description": "Server defined URL for this resource (output only)."
+-    }
+-   }
+-  }
+- },
+- "resources": {
+-  "addresses": {
+-   "methods": {
+-    "aggregatedList": {
+-     "id": "compute.addresses.aggregatedList",
+-     "path": "{project}/aggregated/addresses",
+-     "httpMethod": "GET",
+-     "description": "Retrieves the list of addresses grouped by scope.",
+-     "parameters": {
+-      "filter": {
+-       "type": "string",
+-       "description": "Optional. Filter expression for filtering listed resources.",
+-       "location": "query"
+-      },
+-      "maxResults": {
+-       "type": "integer",
+-       "description": "Optional. Maximum count of results to be returned. Maximum value is 500 and default value is 500.",
+-       "default": "500",
+-       "format": "uint32",
+-       "minimum": "0",
+-       "maximum": "500",
+-       "location": "query"
+-      },
+-      "pageToken": {
+-       "type": "string",
+-       "description": "Optional. Tag returned by a previous list request truncated by maxResults. Used to continue a previous list request.",
+-       "location": "query"
+-      },
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project"
+-     ],
+-     "response": {
+-      "$ref": "AddressAggregatedList"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute",
+-      "https://www.googleapis.com/auth/compute.readonly"
+-     ]
+-    },
+-    "delete": {
+-     "id": "compute.addresses.delete",
+-     "path": "{project}/regions/{region}/addresses/{address}",
+-     "httpMethod": "DELETE",
+-     "description": "Deletes the specified address resource.",
+-     "parameters": {
+-      "address": {
+-       "type": "string",
+-       "description": "Name of the address resource to delete.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      },
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      },
+-      "region": {
+-       "type": "string",
+-       "description": "Name of the region scoping this request.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project",
+-      "region",
+-      "address"
+-     ],
+-     "response": {
+-      "$ref": "Operation"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute"
+-     ]
+-    },
+-    "get": {
+-     "id": "compute.addresses.get",
+-     "path": "{project}/regions/{region}/addresses/{address}",
+-     "httpMethod": "GET",
+-     "description": "Returns the specified address resource.",
+-     "parameters": {
+-      "address": {
+-       "type": "string",
+-       "description": "Name of the address resource to return.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      },
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      },
+-      "region": {
+-       "type": "string",
+-       "description": "Name of the region scoping this request.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project",
+-      "region",
+-      "address"
+-     ],
+-     "response": {
+-      "$ref": "Address"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute",
+-      "https://www.googleapis.com/auth/compute.readonly"
+-     ]
+-    },
+-    "insert": {
+-     "id": "compute.addresses.insert",
+-     "path": "{project}/regions/{region}/addresses",
+-     "httpMethod": "POST",
+-     "description": "Creates an address resource in the specified project using the data included in the request.",
+-     "parameters": {
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      },
+-      "region": {
+-       "type": "string",
+-       "description": "Name of the region scoping this request.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project",
+-      "region"
+-     ],
+-     "request": {
+-      "$ref": "Address"
+-     },
+-     "response": {
+-      "$ref": "Operation"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute"
+-     ]
+-    },
+-    "list": {
+-     "id": "compute.addresses.list",
+-     "path": "{project}/regions/{region}/addresses",
+-     "httpMethod": "GET",
+-     "description": "Retrieves the list of address resources contained within the specified region.",
+-     "parameters": {
+-      "filter": {
+-       "type": "string",
+-       "description": "Optional. Filter expression for filtering listed resources.",
+-       "location": "query"
+-      },
+-      "maxResults": {
+-       "type": "integer",
+-       "description": "Optional. Maximum count of results to be returned. Maximum value is 500 and default value is 500.",
+-       "default": "500",
+-       "format": "uint32",
+-       "minimum": "0",
+-       "maximum": "500",
+-       "location": "query"
+-      },
+-      "pageToken": {
+-       "type": "string",
+-       "description": "Optional. Tag returned by a previous list request truncated by maxResults. Used to continue a previous list request.",
+-       "location": "query"
+-      },
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      },
+-      "region": {
+-       "type": "string",
+-       "description": "Name of the region scoping this request.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project",
+-      "region"
+-     ],
+-     "response": {
+-      "$ref": "AddressList"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute",
+-      "https://www.googleapis.com/auth/compute.readonly"
+-     ]
+-    }
+-   }
+-  },
+-  "backendServices": {
+-   "methods": {
+-    "delete": {
+-     "id": "compute.backendServices.delete",
+-     "path": "{project}/global/backendServices/{backendService}",
+-     "httpMethod": "DELETE",
+-     "description": "Deletes the specified BackendService resource.",
+-     "parameters": {
+-      "backendService": {
+-       "type": "string",
+-       "description": "Name of the BackendService resource to delete.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      },
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project",
+-      "backendService"
+-     ],
+-     "response": {
+-      "$ref": "Operation"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute"
+-     ]
+-    },
+-    "get": {
+-     "id": "compute.backendServices.get",
+-     "path": "{project}/global/backendServices/{backendService}",
+-     "httpMethod": "GET",
+-     "description": "Returns the specified BackendService resource.",
+-     "parameters": {
+-      "backendService": {
+-       "type": "string",
+-       "description": "Name of the BackendService resource to return.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      },
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project",
+-      "backendService"
+-     ],
+-     "response": {
+-      "$ref": "BackendService"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute",
+-      "https://www.googleapis.com/auth/compute.readonly"
+-     ]
+-    },
+-    "getHealth": {
+-     "id": "compute.backendServices.getHealth",
+-     "path": "{project}/global/backendServices/{backendService}/getHealth",
+-     "httpMethod": "POST",
+-     "description": "Gets the most recent health check results for this BackendService.",
+-     "parameters": {
+-      "backendService": {
+-       "type": "string",
+-       "description": "Name of the BackendService resource to which the queried instance belongs.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      },
+-      "project": {
+-       "type": "string",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project",
+-      "backendService"
+-     ],
+-     "request": {
+-      "$ref": "ResourceGroupReference"
+-     },
+-     "response": {
+-      "$ref": "BackendServiceGroupHealth"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute",
+-      "https://www.googleapis.com/auth/compute.readonly"
+-     ]
+-    },
+-    "insert": {
+-     "id": "compute.backendServices.insert",
+-     "path": "{project}/global/backendServices",
+-     "httpMethod": "POST",
+-     "description": "Creates a BackendService resource in the specified project using the data included in the request.",
+-     "parameters": {
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project"
+-     ],
+-     "request": {
+-      "$ref": "BackendService"
+-     },
+-     "response": {
+-      "$ref": "Operation"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute"
+-     ]
+-    },
+-    "list": {
+-     "id": "compute.backendServices.list",
+-     "path": "{project}/global/backendServices",
+-     "httpMethod": "GET",
+-     "description": "Retrieves the list of BackendService resources available to the specified project.",
+-     "parameters": {
+-      "filter": {
+-       "type": "string",
+-       "description": "Optional. Filter expression for filtering listed resources.",
+-       "location": "query"
+-      },
+-      "maxResults": {
+-       "type": "integer",
+-       "description": "Optional. Maximum count of results to be returned. Maximum value is 500 and default value is 500.",
+-       "default": "500",
+-       "format": "uint32",
+-       "minimum": "0",
+-       "maximum": "500",
+-       "location": "query"
+-      },
+-      "pageToken": {
+-       "type": "string",
+-       "description": "Optional. Tag returned by a previous list request truncated by maxResults. Used to continue a previous list request.",
+-       "location": "query"
+-      },
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project"
+-     ],
+-     "response": {
+-      "$ref": "BackendServiceList"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute",
+-      "https://www.googleapis.com/auth/compute.readonly"
+-     ]
+-    },
+-    "patch": {
+-     "id": "compute.backendServices.patch",
+-     "path": "{project}/global/backendServices/{backendService}",
+-     "httpMethod": "PATCH",
+-     "description": "Update the entire content of the BackendService resource. This method supports patch semantics.",
+-     "parameters": {
+-      "backendService": {
+-       "type": "string",
+-       "description": "Name of the BackendService resource to update.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      },
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project",
+-      "backendService"
+-     ],
+-     "request": {
+-      "$ref": "BackendService"
+-     },
+-     "response": {
+-      "$ref": "Operation"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute"
+-     ]
+-    },
+-    "update": {
+-     "id": "compute.backendServices.update",
+-     "path": "{project}/global/backendServices/{backendService}",
+-     "httpMethod": "PUT",
+-     "description": "Update the entire content of the BackendService resource.",
+-     "parameters": {
+-      "backendService": {
+-       "type": "string",
+-       "description": "Name of the BackendService resource to update.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      },
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project",
+-      "backendService"
+-     ],
+-     "request": {
+-      "$ref": "BackendService"
+-     },
+-     "response": {
+-      "$ref": "Operation"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute"
+-     ]
+-    }
+-   }
+-  },
+-  "diskTypes": {
+-   "methods": {
+-    "aggregatedList": {
+-     "id": "compute.diskTypes.aggregatedList",
+-     "path": "{project}/aggregated/diskTypes",
+-     "httpMethod": "GET",
+-     "description": "Retrieves the list of disk type resources grouped by scope.",
+-     "parameters": {
+-      "filter": {
+-       "type": "string",
+-       "description": "Optional. Filter expression for filtering listed resources.",
+-       "location": "query"
+-      },
+-      "maxResults": {
+-       "type": "integer",
+-       "description": "Optional. Maximum count of results to be returned. Maximum value is 500 and default value is 500.",
+-       "default": "500",
+-       "format": "uint32",
+-       "minimum": "0",
+-       "maximum": "500",
+-       "location": "query"
+-      },
+-      "pageToken": {
+-       "type": "string",
+-       "description": "Optional. Tag returned by a previous list request truncated by maxResults. Used to continue a previous list request.",
+-       "location": "query"
+-      },
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project"
+-     ],
+-     "response": {
+-      "$ref": "DiskTypeAggregatedList"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute",
+-      "https://www.googleapis.com/auth/compute.readonly"
+-     ]
+-    },
+-    "get": {
+-     "id": "compute.diskTypes.get",
+-     "path": "{project}/zones/{zone}/diskTypes/{diskType}",
+-     "httpMethod": "GET",
+-     "description": "Returns the specified disk type resource.",
+-     "parameters": {
+-      "diskType": {
+-       "type": "string",
+-       "description": "Name of the disk type resource to return.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      },
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      },
+-      "zone": {
+-       "type": "string",
+-       "description": "Name of the zone scoping this request.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project",
+-      "zone",
+-      "diskType"
+-     ],
+-     "response": {
+-      "$ref": "DiskType"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute",
+-      "https://www.googleapis.com/auth/compute.readonly"
+-     ]
+-    },
+-    "list": {
+-     "id": "compute.diskTypes.list",
+-     "path": "{project}/zones/{zone}/diskTypes",
+-     "httpMethod": "GET",
+-     "description": "Retrieves the list of disk type resources available to the specified project.",
+-     "parameters": {
+-      "filter": {
+-       "type": "string",
+-       "description": "Optional. Filter expression for filtering listed resources.",
+-       "location": "query"
+-      },
+-      "maxResults": {
+-       "type": "integer",
+-       "description": "Optional. Maximum count of results to be returned. Maximum value is 500 and default value is 500.",
+-       "default": "500",
+-       "format": "uint32",
+-       "minimum": "0",
+-       "maximum": "500",
+-       "location": "query"
+-      },
+-      "pageToken": {
+-       "type": "string",
+-       "description": "Optional. Tag returned by a previous list request truncated by maxResults. Used to continue a previous list request.",
+-       "location": "query"
+-      },
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      },
+-      "zone": {
+-       "type": "string",
+-       "description": "Name of the zone scoping this request.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project",
+-      "zone"
+-     ],
+-     "response": {
+-      "$ref": "DiskTypeList"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute",
+-      "https://www.googleapis.com/auth/compute.readonly"
+-     ]
+-    }
+-   }
+-  },
+-  "disks": {
+-   "methods": {
+-    "aggregatedList": {
+-     "id": "compute.disks.aggregatedList",
+-     "path": "{project}/aggregated/disks",
+-     "httpMethod": "GET",
+-     "description": "Retrieves the list of disks grouped by scope.",
+-     "parameters": {
+-      "filter": {
+-       "type": "string",
+-       "description": "Optional. Filter expression for filtering listed resources.",
+-       "location": "query"
+-      },
+-      "maxResults": {
+-       "type": "integer",
+-       "description": "Optional. Maximum count of results to be returned. Maximum value is 500 and default value is 500.",
+-       "default": "500",
+-       "format": "uint32",
+-       "minimum": "0",
+-       "maximum": "500",
+-       "location": "query"
+-      },
+-      "pageToken": {
+-       "type": "string",
+-       "description": "Optional. Tag returned by a previous list request truncated by maxResults. Used to continue a previous list request.",
+-       "location": "query"
+-      },
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project"
+-     ],
+-     "response": {
+-      "$ref": "DiskAggregatedList"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute",
+-      "https://www.googleapis.com/auth/compute.readonly"
+-     ]
+-    },
+-    "createSnapshot": {
+-     "id": "compute.disks.createSnapshot",
+-     "path": "{project}/zones/{zone}/disks/{disk}/createSnapshot",
+-     "httpMethod": "POST",
+-     "parameters": {
+-      "disk": {
+-       "type": "string",
+-       "description": "Name of the persistent disk resource to snapshot.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      },
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      },
+-      "zone": {
+-       "type": "string",
+-       "description": "Name of the zone scoping this request.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project",
+-      "zone",
+-      "disk"
+-     ],
+-     "request": {
+-      "$ref": "Snapshot"
+-     },
+-     "response": {
+-      "$ref": "Operation"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute"
+-     ]
+-    },
+-    "delete": {
+-     "id": "compute.disks.delete",
+-     "path": "{project}/zones/{zone}/disks/{disk}",
+-     "httpMethod": "DELETE",
+-     "description": "Deletes the specified persistent disk resource.",
+-     "parameters": {
+-      "disk": {
+-       "type": "string",
+-       "description": "Name of the persistent disk resource to delete.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      },
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      },
+-      "zone": {
+-       "type": "string",
+-       "description": "Name of the zone scoping this request.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project",
+-      "zone",
+-      "disk"
+-     ],
+-     "response": {
+-      "$ref": "Operation"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute"
+-     ]
+-    },
+-    "get": {
+-     "id": "compute.disks.get",
+-     "path": "{project}/zones/{zone}/disks/{disk}",
+-     "httpMethod": "GET",
+-     "description": "Returns the specified persistent disk resource.",
+-     "parameters": {
+-      "disk": {
+-       "type": "string",
+-       "description": "Name of the persistent disk resource to return.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      },
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      },
+-      "zone": {
+-       "type": "string",
+-       "description": "Name of the zone scoping this request.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project",
+-      "zone",
+-      "disk"
+-     ],
+-     "response": {
+-      "$ref": "Disk"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute",
+-      "https://www.googleapis.com/auth/compute.readonly"
+-     ]
+-    },
+-    "insert": {
+-     "id": "compute.disks.insert",
+-     "path": "{project}/zones/{zone}/disks",
+-     "httpMethod": "POST",
+-     "description": "Creates a persistent disk resource in the specified project using the data included in the request.",
+-     "parameters": {
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      },
+-      "sourceImage": {
+-       "type": "string",
+-       "description": "Optional. Source image to restore onto a disk.",
+-       "location": "query"
+-      },
+-      "zone": {
+-       "type": "string",
+-       "description": "Name of the zone scoping this request.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project",
+-      "zone"
+-     ],
+-     "request": {
+-      "$ref": "Disk"
+-     },
+-     "response": {
+-      "$ref": "Operation"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute"
+-     ]
+-    },
+-    "list": {
+-     "id": "compute.disks.list",
+-     "path": "{project}/zones/{zone}/disks",
+-     "httpMethod": "GET",
+-     "description": "Retrieves the list of persistent disk resources contained within the specified zone.",
+-     "parameters": {
+-      "filter": {
+-       "type": "string",
+-       "description": "Optional. Filter expression for filtering listed resources.",
+-       "location": "query"
+-      },
+-      "maxResults": {
+-       "type": "integer",
+-       "description": "Optional. Maximum count of results to be returned. Maximum value is 500 and default value is 500.",
+-       "default": "500",
+-       "format": "uint32",
+-       "minimum": "0",
+-       "maximum": "500",
+-       "location": "query"
+-      },
+-      "pageToken": {
+-       "type": "string",
+-       "description": "Optional. Tag returned by a previous list request truncated by maxResults. Used to continue a previous list request.",
+-       "location": "query"
+-      },
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      },
+-      "zone": {
+-       "type": "string",
+-       "description": "Name of the zone scoping this request.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project",
+-      "zone"
+-     ],
+-     "response": {
+-      "$ref": "DiskList"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute",
+-      "https://www.googleapis.com/auth/compute.readonly"
+-     ]
+-    }
+-   }
+-  },
+-  "firewalls": {
+-   "methods": {
+-    "delete": {
+-     "id": "compute.firewalls.delete",
+-     "path": "{project}/global/firewalls/{firewall}",
+-     "httpMethod": "DELETE",
+-     "description": "Deletes the specified firewall resource.",
+-     "parameters": {
+-      "firewall": {
+-       "type": "string",
+-       "description": "Name of the firewall resource to delete.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      },
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project",
+-      "firewall"
+-     ],
+-     "response": {
+-      "$ref": "Operation"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute"
+-     ]
+-    },
+-    "get": {
+-     "id": "compute.firewalls.get",
+-     "path": "{project}/global/firewalls/{firewall}",
+-     "httpMethod": "GET",
+-     "description": "Returns the specified firewall resource.",
+-     "parameters": {
+-      "firewall": {
+-       "type": "string",
+-       "description": "Name of the firewall resource to return.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      },
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project",
+-      "firewall"
+-     ],
+-     "response": {
+-      "$ref": "Firewall"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute",
+-      "https://www.googleapis.com/auth/compute.readonly"
+-     ]
+-    },
+-    "insert": {
+-     "id": "compute.firewalls.insert",
+-     "path": "{project}/global/firewalls",
+-     "httpMethod": "POST",
+-     "description": "Creates a firewall resource in the specified project using the data included in the request.",
+-     "parameters": {
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project"
+-     ],
+-     "request": {
+-      "$ref": "Firewall"
+-     },
+-     "response": {
+-      "$ref": "Operation"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute"
+-     ]
+-    },
+-    "list": {
+-     "id": "compute.firewalls.list",
+-     "path": "{project}/global/firewalls",
+-     "httpMethod": "GET",
+-     "description": "Retrieves the list of firewall resources available to the specified project.",
+-     "parameters": {
+-      "filter": {
+-       "type": "string",
+-       "description": "Optional. Filter expression for filtering listed resources.",
+-       "location": "query"
+-      },
+-      "maxResults": {
+-       "type": "integer",
+-       "description": "Optional. Maximum count of results to be returned. Maximum value is 500 and default value is 500.",
+-       "default": "500",
+-       "format": "uint32",
+-       "minimum": "0",
+-       "maximum": "500",
+-       "location": "query"
+-      },
+-      "pageToken": {
+-       "type": "string",
+-       "description": "Optional. Tag returned by a previous list request truncated by maxResults. Used to continue a previous list request.",
+-       "location": "query"
+-      },
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project"
+-     ],
+-     "response": {
+-      "$ref": "FirewallList"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute",
+-      "https://www.googleapis.com/auth/compute.readonly"
+-     ]
+-    },
+-    "patch": {
+-     "id": "compute.firewalls.patch",
+-     "path": "{project}/global/firewalls/{firewall}",
+-     "httpMethod": "PATCH",
+-     "description": "Updates the specified firewall resource with the data included in the request. This method supports patch semantics.",
+-     "parameters": {
+-      "firewall": {
+-       "type": "string",
+-       "description": "Name of the firewall resource to update.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      },
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project",
+-      "firewall"
+-     ],
+-     "request": {
+-      "$ref": "Firewall"
+-     },
+-     "response": {
+-      "$ref": "Operation"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute"
+-     ]
+-    },
+-    "update": {
+-     "id": "compute.firewalls.update",
+-     "path": "{project}/global/firewalls/{firewall}",
+-     "httpMethod": "PUT",
+-     "description": "Updates the specified firewall resource with the data included in the request.",
+-     "parameters": {
+-      "firewall": {
+-       "type": "string",
+-       "description": "Name of the firewall resource to update.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      },
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project",
+-      "firewall"
+-     ],
+-     "request": {
+-      "$ref": "Firewall"
+-     },
+-     "response": {
+-      "$ref": "Operation"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute"
+-     ]
+-    }
+-   }
+-  },
+-  "forwardingRules": {
+-   "methods": {
+-    "aggregatedList": {
+-     "id": "compute.forwardingRules.aggregatedList",
+-     "path": "{project}/aggregated/forwardingRules",
+-     "httpMethod": "GET",
+-     "description": "Retrieves the list of forwarding rules grouped by scope.",
+-     "parameters": {
+-      "filter": {
+-       "type": "string",
+-       "description": "Optional. Filter expression for filtering listed resources.",
+-       "location": "query"
+-      },
+-      "maxResults": {
+-       "type": "integer",
+-       "description": "Optional. Maximum count of results to be returned. Maximum value is 500 and default value is 500.",
+-       "default": "500",
+-       "format": "uint32",
+-       "minimum": "0",
+-       "maximum": "500",
+-       "location": "query"
+-      },
+-      "pageToken": {
+-       "type": "string",
+-       "description": "Optional. Tag returned by a previous list request truncated by maxResults. Used to continue a previous list request.",
+-       "location": "query"
+-      },
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project"
+-     ],
+-     "response": {
+-      "$ref": "ForwardingRuleAggregatedList"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute",
+-      "https://www.googleapis.com/auth/compute.readonly"
+-     ]
+-    },
+-    "delete": {
+-     "id": "compute.forwardingRules.delete",
+-     "path": "{project}/regions/{region}/forwardingRules/{forwardingRule}",
+-     "httpMethod": "DELETE",
+-     "description": "Deletes the specified ForwardingRule resource.",
+-     "parameters": {
+-      "forwardingRule": {
+-       "type": "string",
+-       "description": "Name of the ForwardingRule resource to delete.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      },
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      },
+-      "region": {
+-       "type": "string",
+-       "description": "Name of the region scoping this request.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project",
+-      "region",
+-      "forwardingRule"
+-     ],
+-     "response": {
+-      "$ref": "Operation"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute"
+-     ]
+-    },
+-    "get": {
+-     "id": "compute.forwardingRules.get",
+-     "path": "{project}/regions/{region}/forwardingRules/{forwardingRule}",
+-     "httpMethod": "GET",
+-     "description": "Returns the specified ForwardingRule resource.",
+-     "parameters": {
+-      "forwardingRule": {
+-       "type": "string",
+-       "description": "Name of the ForwardingRule resource to return.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      },
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      },
+-      "region": {
+-       "type": "string",
+-       "description": "Name of the region scoping this request.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project",
+-      "region",
+-      "forwardingRule"
+-     ],
+-     "response": {
+-      "$ref": "ForwardingRule"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute",
+-      "https://www.googleapis.com/auth/compute.readonly"
+-     ]
+-    },
+-    "insert": {
+-     "id": "compute.forwardingRules.insert",
+-     "path": "{project}/regions/{region}/forwardingRules",
+-     "httpMethod": "POST",
+-     "description": "Creates a ForwardingRule resource in the specified project and region using the data included in the request.",
+-     "parameters": {
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      },
+-      "region": {
+-       "type": "string",
+-       "description": "Name of the region scoping this request.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project",
+-      "region"
+-     ],
+-     "request": {
+-      "$ref": "ForwardingRule"
+-     },
+-     "response": {
+-      "$ref": "Operation"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute"
+-     ]
+-    },
+-    "list": {
+-     "id": "compute.forwardingRules.list",
+-     "path": "{project}/regions/{region}/forwardingRules",
+-     "httpMethod": "GET",
+-     "description": "Retrieves the list of ForwardingRule resources available to the specified project and region.",
+-     "parameters": {
+-      "filter": {
+-       "type": "string",
+-       "description": "Optional. Filter expression for filtering listed resources.",
+-       "location": "query"
+-      },
+-      "maxResults": {
+-       "type": "integer",
+-       "description": "Optional. Maximum count of results to be returned. Maximum value is 500 and default value is 500.",
+-       "default": "500",
+-       "format": "uint32",
+-       "minimum": "0",
+-       "maximum": "500",
+-       "location": "query"
+-      },
+-      "pageToken": {
+-       "type": "string",
+-       "description": "Optional. Tag returned by a previous list request truncated by maxResults. Used to continue a previous list request.",
+-       "location": "query"
+-      },
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      },
+-      "region": {
+-       "type": "string",
+-       "description": "Name of the region scoping this request.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project",
+-      "region"
+-     ],
+-     "response": {
+-      "$ref": "ForwardingRuleList"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute",
+-      "https://www.googleapis.com/auth/compute.readonly"
+-     ]
+-    },
+-    "setTarget": {
+-     "id": "compute.forwardingRules.setTarget",
+-     "path": "{project}/regions/{region}/forwardingRules/{forwardingRule}/setTarget",
+-     "httpMethod": "POST",
+-     "description": "Changes target url for forwarding rule.",
+-     "parameters": {
+-      "forwardingRule": {
+-       "type": "string",
+-       "description": "Name of the ForwardingRule resource in which target is to be set.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      },
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      },
+-      "region": {
+-       "type": "string",
+-       "description": "Name of the region scoping this request.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project",
+-      "region",
+-      "forwardingRule"
+-     ],
+-     "request": {
+-      "$ref": "TargetReference"
+-     },
+-     "response": {
+-      "$ref": "Operation"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute"
+-     ]
+-    }
+-   }
+-  },
+-  "globalAddresses": {
+-   "methods": {
+-    "delete": {
+-     "id": "compute.globalAddresses.delete",
+-     "path": "{project}/global/addresses/{address}",
+-     "httpMethod": "DELETE",
+-     "description": "Deletes the specified address resource.",
+-     "parameters": {
+-      "address": {
+-       "type": "string",
+-       "description": "Name of the address resource to delete.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      },
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project",
+-      "address"
+-     ],
+-     "response": {
+-      "$ref": "Operation"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute"
+-     ]
+-    },
+-    "get": {
+-     "id": "compute.globalAddresses.get",
+-     "path": "{project}/global/addresses/{address}",
+-     "httpMethod": "GET",
+-     "description": "Returns the specified address resource.",
+-     "parameters": {
+-      "address": {
+-       "type": "string",
+-       "description": "Name of the address resource to return.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      },
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project",
+-      "address"
+-     ],
+-     "response": {
+-      "$ref": "Address"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute",
+-      "https://www.googleapis.com/auth/compute.readonly"
+-     ]
+-    },
+-    "insert": {
+-     "id": "compute.globalAddresses.insert",
+-     "path": "{project}/global/addresses",
+-     "httpMethod": "POST",
+-     "description": "Creates an address resource in the specified project using the data included in the request.",
+-     "parameters": {
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project"
+-     ],
+-     "request": {
+-      "$ref": "Address"
+-     },
+-     "response": {
+-      "$ref": "Operation"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute"
+-     ]
+-    },
+-    "list": {
+-     "id": "compute.globalAddresses.list",
+-     "path": "{project}/global/addresses",
+-     "httpMethod": "GET",
+-     "description": "Retrieves the list of global address resources.",
+-     "parameters": {
+-      "filter": {
+-       "type": "string",
+-       "description": "Optional. Filter expression for filtering listed resources.",
+-       "location": "query"
+-      },
+-      "maxResults": {
+-       "type": "integer",
+-       "description": "Optional. Maximum count of results to be returned. Maximum value is 500 and default value is 500.",
+-       "default": "500",
+-       "format": "uint32",
+-       "minimum": "0",
+-       "maximum": "500",
+-       "location": "query"
+-      },
+-      "pageToken": {
+-       "type": "string",
+-       "description": "Optional. Tag returned by a previous list request truncated by maxResults. Used to continue a previous list request.",
+-       "location": "query"
+-      },
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project"
+-     ],
+-     "response": {
+-      "$ref": "AddressList"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute",
+-      "https://www.googleapis.com/auth/compute.readonly"
+-     ]
+-    }
+-   }
+-  },
+-  "globalForwardingRules": {
+-   "methods": {
+-    "delete": {
+-     "id": "compute.globalForwardingRules.delete",
+-     "path": "{project}/global/forwardingRules/{forwardingRule}",
+-     "httpMethod": "DELETE",
+-     "description": "Deletes the specified ForwardingRule resource.",
+-     "parameters": {
+-      "forwardingRule": {
+-       "type": "string",
+-       "description": "Name of the ForwardingRule resource to delete.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      },
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project",
+-      "forwardingRule"
+-     ],
+-     "response": {
+-      "$ref": "Operation"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute"
+-     ]
+-    },
+-    "get": {
+-     "id": "compute.globalForwardingRules.get",
+-     "path": "{project}/global/forwardingRules/{forwardingRule}",
+-     "httpMethod": "GET",
+-     "description": "Returns the specified ForwardingRule resource.",
+-     "parameters": {
+-      "forwardingRule": {
+-       "type": "string",
+-       "description": "Name of the ForwardingRule resource to return.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      },
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project",
+-      "forwardingRule"
+-     ],
+-     "response": {
+-      "$ref": "ForwardingRule"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute",
+-      "https://www.googleapis.com/auth/compute.readonly"
+-     ]
+-    },
+-    "insert": {
+-     "id": "compute.globalForwardingRules.insert",
+-     "path": "{project}/global/forwardingRules",
+-     "httpMethod": "POST",
+-     "description": "Creates a ForwardingRule resource in the specified project and region using the data included in the request.",
+-     "parameters": {
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project"
+-     ],
+-     "request": {
+-      "$ref": "ForwardingRule"
+-     },
+-     "response": {
+-      "$ref": "Operation"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute"
+-     ]
+-    },
+-    "list": {
+-     "id": "compute.globalForwardingRules.list",
+-     "path": "{project}/global/forwardingRules",
+-     "httpMethod": "GET",
+-     "description": "Retrieves the list of ForwardingRule resources available to the specified project.",
+-     "parameters": {
+-      "filter": {
+-       "type": "string",
+-       "description": "Optional. Filter expression for filtering listed resources.",
+-       "location": "query"
+-      },
+-      "maxResults": {
+-       "type": "integer",
+-       "description": "Optional. Maximum count of results to be returned. Maximum value is 500 and default value is 500.",
+-       "default": "500",
+-       "format": "uint32",
+-       "minimum": "0",
+-       "maximum": "500",
+-       "location": "query"
+-      },
+-      "pageToken": {
+-       "type": "string",
+-       "description": "Optional. Tag returned by a previous list request truncated by maxResults. Used to continue a previous list request.",
+-       "location": "query"
+-      },
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project"
+-     ],
+-     "response": {
+-      "$ref": "ForwardingRuleList"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute",
+-      "https://www.googleapis.com/auth/compute.readonly"
+-     ]
+-    },
+-    "setTarget": {
+-     "id": "compute.globalForwardingRules.setTarget",
+-     "path": "{project}/global/forwardingRules/{forwardingRule}/setTarget",
+-     "httpMethod": "POST",
+-     "description": "Changes target url for forwarding rule.",
+-     "parameters": {
+-      "forwardingRule": {
+-       "type": "string",
+-       "description": "Name of the ForwardingRule resource in which target is to be set.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      },
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project",
+-      "forwardingRule"
+-     ],
+-     "request": {
+-      "$ref": "TargetReference"
+-     },
+-     "response": {
+-      "$ref": "Operation"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute"
+-     ]
+-    }
+-   }
+-  },
+-  "globalOperations": {
+-   "methods": {
+-    "aggregatedList": {
+-     "id": "compute.globalOperations.aggregatedList",
+-     "path": "{project}/aggregated/operations",
+-     "httpMethod": "GET",
+-     "description": "Retrieves the list of all operations grouped by scope.",
+-     "parameters": {
+-      "filter": {
+-       "type": "string",
+-       "description": "Optional. Filter expression for filtering listed resources.",
+-       "location": "query"
+-      },
+-      "maxResults": {
+-       "type": "integer",
+-       "description": "Optional. Maximum count of results to be returned. Maximum value is 500 and default value is 500.",
+-       "default": "500",
+-       "format": "uint32",
+-       "minimum": "0",
+-       "maximum": "500",
+-       "location": "query"
+-      },
+-      "pageToken": {
+-       "type": "string",
+-       "description": "Optional. Tag returned by a previous list request truncated by maxResults. Used to continue a previous list request.",
+-       "location": "query"
+-      },
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project"
+-     ],
+-     "response": {
+-      "$ref": "OperationAggregatedList"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute",
+-      "https://www.googleapis.com/auth/compute.readonly"
+-     ]
+-    },
+-    "delete": {
+-     "id": "compute.globalOperations.delete",
+-     "path": "{project}/global/operations/{operation}",
+-     "httpMethod": "DELETE",
+-     "description": "Deletes the specified operation resource.",
+-     "parameters": {
+-      "operation": {
+-       "type": "string",
+-       "description": "Name of the operation resource to delete.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      },
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project",
+-      "operation"
+-     ],
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute"
+-     ]
+-    },
+-    "get": {
+-     "id": "compute.globalOperations.get",
+-     "path": "{project}/global/operations/{operation}",
+-     "httpMethod": "GET",
+-     "description": "Retrieves the specified operation resource.",
+-     "parameters": {
+-      "operation": {
+-       "type": "string",
+-       "description": "Name of the operation resource to return.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      },
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project",
+-      "operation"
+-     ],
+-     "response": {
+-      "$ref": "Operation"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute",
+-      "https://www.googleapis.com/auth/compute.readonly"
+-     ]
+-    },
+-    "list": {
+-     "id": "compute.globalOperations.list",
+-     "path": "{project}/global/operations",
+-     "httpMethod": "GET",
+-     "description": "Retrieves the list of operation resources contained within the specified project.",
+-     "parameters": {
+-      "filter": {
+-       "type": "string",
+-       "description": "Optional. Filter expression for filtering listed resources.",
+-       "location": "query"
+-      },
+-      "maxResults": {
+-       "type": "integer",
+-       "description": "Optional. Maximum count of results to be returned. Maximum value is 500 and default value is 500.",
+-       "default": "500",
+-       "format": "uint32",
+-       "minimum": "0",
+-       "maximum": "500",
+-       "location": "query"
+-      },
+-      "pageToken": {
+-       "type": "string",
+-       "description": "Optional. Tag returned by a previous list request truncated by maxResults. Used to continue a previous list request.",
+-       "location": "query"
+-      },
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project"
+-     ],
+-     "response": {
+-      "$ref": "OperationList"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute",
+-      "https://www.googleapis.com/auth/compute.readonly"
+-     ]
+-    }
+-   }
+-  },
+-  "httpHealthChecks": {
+-   "methods": {
+-    "delete": {
+-     "id": "compute.httpHealthChecks.delete",
+-     "path": "{project}/global/httpHealthChecks/{httpHealthCheck}",
+-     "httpMethod": "DELETE",
+-     "description": "Deletes the specified HttpHealthCheck resource.",
+-     "parameters": {
+-      "httpHealthCheck": {
+-       "type": "string",
+-       "description": "Name of the HttpHealthCheck resource to delete.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      },
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project",
+-      "httpHealthCheck"
+-     ],
+-     "response": {
+-      "$ref": "Operation"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute"
+-     ]
+-    },
+-    "get": {
+-     "id": "compute.httpHealthChecks.get",
+-     "path": "{project}/global/httpHealthChecks/{httpHealthCheck}",
+-     "httpMethod": "GET",
+-     "description": "Returns the specified HttpHealthCheck resource.",
+-     "parameters": {
+-      "httpHealthCheck": {
+-       "type": "string",
+-       "description": "Name of the HttpHealthCheck resource to return.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      },
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project",
+-      "httpHealthCheck"
+-     ],
+-     "response": {
+-      "$ref": "HttpHealthCheck"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute",
+-      "https://www.googleapis.com/auth/compute.readonly"
+-     ]
+-    },
+-    "insert": {
+-     "id": "compute.httpHealthChecks.insert",
+-     "path": "{project}/global/httpHealthChecks",
+-     "httpMethod": "POST",
+-     "description": "Creates a HttpHealthCheck resource in the specified project using the data included in the request.",
+-     "parameters": {
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project"
+-     ],
+-     "request": {
+-      "$ref": "HttpHealthCheck"
+-     },
+-     "response": {
+-      "$ref": "Operation"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute"
+-     ]
+-    },
+-    "list": {
+-     "id": "compute.httpHealthChecks.list",
+-     "path": "{project}/global/httpHealthChecks",
+-     "httpMethod": "GET",
+-     "description": "Retrieves the list of HttpHealthCheck resources available to the specified project.",
+-     "parameters": {
+-      "filter": {
+-       "type": "string",
+-       "description": "Optional. Filter expression for filtering listed resources.",
+-       "location": "query"
+-      },
+-      "maxResults": {
+-       "type": "integer",
+-       "description": "Optional. Maximum count of results to be returned. Maximum value is 500 and default value is 500.",
+-       "default": "500",
+-       "format": "uint32",
+-       "minimum": "0",
+-       "maximum": "500",
+-       "location": "query"
+-      },
+-      "pageToken": {
+-       "type": "string",
+-       "description": "Optional. Tag returned by a previous list request truncated by maxResults. Used to continue a previous list request.",
+-       "location": "query"
+-      },
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project"
+-     ],
+-     "response": {
+-      "$ref": "HttpHealthCheckList"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute",
+-      "https://www.googleapis.com/auth/compute.readonly"
+-     ]
+-    },
+-    "patch": {
+-     "id": "compute.httpHealthChecks.patch",
+-     "path": "{project}/global/httpHealthChecks/{httpHealthCheck}",
+-     "httpMethod": "PATCH",
+-     "description": "Updates a HttpHealthCheck resource in the specified project using the data included in the request. This method supports patch semantics.",
+-     "parameters": {
+-      "httpHealthCheck": {
+-       "type": "string",
+-       "description": "Name of the HttpHealthCheck resource to update.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      },
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project",
+-      "httpHealthCheck"
+-     ],
+-     "request": {
+-      "$ref": "HttpHealthCheck"
+-     },
+-     "response": {
+-      "$ref": "Operation"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute"
+-     ]
+-    },
+-    "update": {
+-     "id": "compute.httpHealthChecks.update",
+-     "path": "{project}/global/httpHealthChecks/{httpHealthCheck}",
+-     "httpMethod": "PUT",
+-     "description": "Updates a HttpHealthCheck resource in the specified project using the data included in the request.",
+-     "parameters": {
+-      "httpHealthCheck": {
+-       "type": "string",
+-       "description": "Name of the HttpHealthCheck resource to update.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      },
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project",
+-      "httpHealthCheck"
+-     ],
+-     "request": {
+-      "$ref": "HttpHealthCheck"
+-     },
+-     "response": {
+-      "$ref": "Operation"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute"
+-     ]
+-    }
+-   }
+-  },
+-  "images": {
+-   "methods": {
+-    "delete": {
+-     "id": "compute.images.delete",
+-     "path": "{project}/global/images/{image}",
+-     "httpMethod": "DELETE",
+-     "description": "Deletes the specified image resource.",
+-     "parameters": {
+-      "image": {
+-       "type": "string",
+-       "description": "Name of the image resource to delete.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      },
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project",
+-      "image"
+-     ],
+-     "response": {
+-      "$ref": "Operation"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute"
+-     ]
+-    },
+-    "deprecate": {
+-     "id": "compute.images.deprecate",
+-     "path": "{project}/global/images/{image}/deprecate",
+-     "httpMethod": "POST",
+-     "description": "Sets the deprecation status of an image. If no message body is given, clears the deprecation status instead.",
+-     "parameters": {
+-      "image": {
+-       "type": "string",
+-       "description": "Image name.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      },
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project",
+-      "image"
+-     ],
+-     "request": {
+-      "$ref": "DeprecationStatus"
+-     },
+-     "response": {
+-      "$ref": "Operation"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute"
+-     ]
+-    },
+-    "get": {
+-     "id": "compute.images.get",
+-     "path": "{project}/global/images/{image}",
+-     "httpMethod": "GET",
+-     "description": "Returns the specified image resource.",
+-     "parameters": {
+-      "image": {
+-       "type": "string",
+-       "description": "Name of the image resource to return.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      },
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project",
+-      "image"
+-     ],
+-     "response": {
+-      "$ref": "Image"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute",
+-      "https://www.googleapis.com/auth/compute.readonly"
+-     ]
+-    },
+-    "insert": {
+-     "id": "compute.images.insert",
+-     "path": "{project}/global/images",
+-     "httpMethod": "POST",
+-     "description": "Creates an image resource in the specified project using the data included in the request.",
+-     "parameters": {
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project"
+-     ],
+-     "request": {
+-      "$ref": "Image"
+-     },
+-     "response": {
+-      "$ref": "Operation"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute",
+-      "https://www.googleapis.com/auth/devstorage.full_control",
+-      "https://www.googleapis.com/auth/devstorage.read_only",
+-      "https://www.googleapis.com/auth/devstorage.read_write"
+-     ]
+-    },
+-    "list": {
+-     "id": "compute.images.list",
+-     "path": "{project}/global/images",
+-     "httpMethod": "GET",
+-     "description": "Retrieves the list of image resources available to the specified project.",
+-     "parameters": {
+-      "filter": {
+-       "type": "string",
+-       "description": "Optional. Filter expression for filtering listed resources.",
+-       "location": "query"
+-      },
+-      "maxResults": {
+-       "type": "integer",
+-       "description": "Optional. Maximum count of results to be returned. Maximum value is 500 and default value is 500.",
+-       "default": "500",
+-       "format": "uint32",
+-       "minimum": "0",
+-       "maximum": "500",
+-       "location": "query"
+-      },
+-      "pageToken": {
+-       "type": "string",
+-       "description": "Optional. Tag returned by a previous list request truncated by maxResults. Used to continue a previous list request.",
+-       "location": "query"
+-      },
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project"
+-     ],
+-     "response": {
+-      "$ref": "ImageList"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute",
+-      "https://www.googleapis.com/auth/compute.readonly"
+-     ]
+-    }
+-   }
+-  },
+-  "instances": {
+-   "methods": {
+-    "addAccessConfig": {
+-     "id": "compute.instances.addAccessConfig",
+-     "path": "{project}/zones/{zone}/instances/{instance}/addAccessConfig",
+-     "httpMethod": "POST",
+-     "description": "Adds an access config to an instance's network interface.",
+-     "parameters": {
+-      "instance": {
+-       "type": "string",
+-       "description": "Instance name.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      },
+-      "networkInterface": {
+-       "type": "string",
+-       "description": "Network interface name.",
+-       "required": true,
+-       "location": "query"
+-      },
+-      "project": {
+-       "type": "string",
+-       "description": "Project name.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      },
+-      "zone": {
+-       "type": "string",
+-       "description": "Name of the zone scoping this request.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project",
+-      "zone",
+-      "instance",
+-      "networkInterface"
+-     ],
+-     "request": {
+-      "$ref": "AccessConfig"
+-     },
+-     "response": {
+-      "$ref": "Operation"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute"
+-     ]
+-    },
+-    "aggregatedList": {
+-     "id": "compute.instances.aggregatedList",
+-     "path": "{project}/aggregated/instances",
+-     "httpMethod": "GET",
+-     "parameters": {
+-      "filter": {
+-       "type": "string",
+-       "description": "Optional. Filter expression for filtering listed resources.",
+-       "location": "query"
+-      },
+-      "maxResults": {
+-       "type": "integer",
+-       "description": "Optional. Maximum count of results to be returned. Maximum value is 500 and default value is 500.",
+-       "default": "500",
+-       "format": "uint32",
+-       "minimum": "0",
+-       "maximum": "500",
+-       "location": "query"
+-      },
+-      "pageToken": {
+-       "type": "string",
+-       "description": "Optional. Tag returned by a previous list request truncated by maxResults. Used to continue a previous list request.",
+-       "location": "query"
+-      },
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project"
+-     ],
+-     "response": {
+-      "$ref": "InstanceAggregatedList"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute",
+-      "https://www.googleapis.com/auth/compute.readonly"
+-     ]
+-    },
+-    "attachDisk": {
+-     "id": "compute.instances.attachDisk",
+-     "path": "{project}/zones/{zone}/instances/{instance}/attachDisk",
+-     "httpMethod": "POST",
+-     "description": "Attaches a disk resource to an instance.",
+-     "parameters": {
+-      "instance": {
+-       "type": "string",
+-       "description": "Instance name.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      },
+-      "project": {
+-       "type": "string",
+-       "description": "Project name.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      },
+-      "zone": {
+-       "type": "string",
+-       "description": "Name of the zone scoping this request.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project",
+-      "zone",
+-      "instance"
+-     ],
+-     "request": {
+-      "$ref": "AttachedDisk"
+-     },
+-     "response": {
+-      "$ref": "Operation"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute"
+-     ]
+-    },
+-    "delete": {
+-     "id": "compute.instances.delete",
+-     "path": "{project}/zones/{zone}/instances/{instance}",
+-     "httpMethod": "DELETE",
+-     "description": "Deletes the specified instance resource.",
+-     "parameters": {
+-      "instance": {
+-       "type": "string",
+-       "description": "Name of the instance resource to delete.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      },
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      },
+-      "zone": {
+-       "type": "string",
+-       "description": "Name of the zone scoping this request.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project",
+-      "zone",
+-      "instance"
+-     ],
+-     "response": {
+-      "$ref": "Operation"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute"
+-     ]
+-    },
+-    "deleteAccessConfig": {
+-     "id": "compute.instances.deleteAccessConfig",
+-     "path": "{project}/zones/{zone}/instances/{instance}/deleteAccessConfig",
+-     "httpMethod": "POST",
+-     "description": "Deletes an access config from an instance's network interface.",
+-     "parameters": {
+-      "accessConfig": {
+-       "type": "string",
+-       "description": "Access config name.",
+-       "required": true,
+-       "location": "query"
+-      },
+-      "instance": {
+-       "type": "string",
+-       "description": "Instance name.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      },
+-      "networkInterface": {
+-       "type": "string",
+-       "description": "Network interface name.",
+-       "required": true,
+-       "location": "query"
+-      },
+-      "project": {
+-       "type": "string",
+-       "description": "Project name.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      },
+-      "zone": {
+-       "type": "string",
+-       "description": "Name of the zone scoping this request.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project",
+-      "zone",
+-      "instance",
+-      "accessConfig",
+-      "networkInterface"
+-     ],
+-     "response": {
+-      "$ref": "Operation"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute"
+-     ]
+-    },
+-    "detachDisk": {
+-     "id": "compute.instances.detachDisk",
+-     "path": "{project}/zones/{zone}/instances/{instance}/detachDisk",
+-     "httpMethod": "POST",
+-     "description": "Detaches a disk from an instance.",
+-     "parameters": {
+-      "deviceName": {
+-       "type": "string",
+-       "description": "Disk device name to detach.",
+-       "required": true,
+-       "pattern": "\\w[\\w.-]{0,254}",
+-       "location": "query"
+-      },
+-      "instance": {
+-       "type": "string",
+-       "description": "Instance name.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      },
+-      "project": {
+-       "type": "string",
+-       "description": "Project name.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      },
+-      "zone": {
+-       "type": "string",
+-       "description": "Name of the zone scoping this request.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project",
+-      "zone",
+-      "instance",
+-      "deviceName"
+-     ],
+-     "response": {
+-      "$ref": "Operation"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute"
+-     ]
+-    },
+-    "get": {
+-     "id": "compute.instances.get",
+-     "path": "{project}/zones/{zone}/instances/{instance}",
+-     "httpMethod": "GET",
+-     "description": "Returns the specified instance resource.",
+-     "parameters": {
+-      "instance": {
+-       "type": "string",
+-       "description": "Name of the instance resource to return.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      },
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      },
+-      "zone": {
+-       "type": "string",
+-       "description": "Name of the zone scoping this request.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project",
+-      "zone",
+-      "instance"
+-     ],
+-     "response": {
+-      "$ref": "Instance"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute",
+-      "https://www.googleapis.com/auth/compute.readonly"
+-     ]
+-    },
+-    "getSerialPortOutput": {
+-     "id": "compute.instances.getSerialPortOutput",
+-     "path": "{project}/zones/{zone}/instances/{instance}/serialPort",
+-     "httpMethod": "GET",
+-     "description": "Returns the specified instance's serial port output.",
+-     "parameters": {
+-      "instance": {
+-       "type": "string",
+-       "description": "Name of the instance scoping this request.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      },
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      },
+-      "zone": {
+-       "type": "string",
+-       "description": "Name of the zone scoping this request.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project",
+-      "zone",
+-      "instance"
+-     ],
+-     "response": {
+-      "$ref": "SerialPortOutput"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute",
+-      "https://www.googleapis.com/auth/compute.readonly"
+-     ]
+-    },
+-    "insert": {
+-     "id": "compute.instances.insert",
+-     "path": "{project}/zones/{zone}/instances",
+-     "httpMethod": "POST",
+-     "description": "Creates an instance resource in the specified project using the data included in the request.",
+-     "parameters": {
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      },
+-      "zone": {
+-       "type": "string",
+-       "description": "Name of the zone scoping this request.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project",
+-      "zone"
+-     ],
+-     "request": {
+-      "$ref": "Instance"
+-     },
+-     "response": {
+-      "$ref": "Operation"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute"
+-     ]
+-    },
+-    "list": {
+-     "id": "compute.instances.list",
+-     "path": "{project}/zones/{zone}/instances",
+-     "httpMethod": "GET",
+-     "description": "Retrieves the list of instance resources contained within the specified zone.",
+-     "parameters": {
+-      "filter": {
+-       "type": "string",
+-       "description": "Optional. Filter expression for filtering listed resources.",
+-       "location": "query"
+-      },
+-      "maxResults": {
+-       "type": "integer",
+-       "description": "Optional. Maximum count of results to be returned. Maximum value is 500 and default value is 500.",
+-       "default": "500",
+-       "format": "uint32",
+-       "minimum": "0",
+-       "maximum": "500",
+-       "location": "query"
+-      },
+-      "pageToken": {
+-       "type": "string",
+-       "description": "Optional. Tag returned by a previous list request truncated by maxResults. Used to continue a previous list request.",
+-       "location": "query"
+-      },
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      },
+-      "zone": {
+-       "type": "string",
+-       "description": "Name of the zone scoping this request.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project",
+-      "zone"
+-     ],
+-     "response": {
+-      "$ref": "InstanceList"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute",
+-      "https://www.googleapis.com/auth/compute.readonly"
+-     ]
+-    },
+-    "reset": {
+-     "id": "compute.instances.reset",
+-     "path": "{project}/zones/{zone}/instances/{instance}/reset",
+-     "httpMethod": "POST",
+-     "description": "Performs a hard reset on the instance.",
+-     "parameters": {
+-      "instance": {
+-       "type": "string",
+-       "description": "Name of the instance scoping this request.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      },
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      },
+-      "zone": {
+-       "type": "string",
+-       "description": "Name of the zone scoping this request.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project",
+-      "zone",
+-      "instance"
+-     ],
+-     "response": {
+-      "$ref": "Operation"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute"
+-     ]
+-    },
+-    "setDiskAutoDelete": {
+-     "id": "compute.instances.setDiskAutoDelete",
+-     "path": "{project}/zones/{zone}/instances/{instance}/setDiskAutoDelete",
+-     "httpMethod": "POST",
+-     "description": "Sets the auto-delete flag for a disk attached to an instance",
+-     "parameters": {
+-      "autoDelete": {
+-       "type": "boolean",
+-       "description": "Whether to auto-delete the disk when the instance is deleted.",
+-       "required": true,
+-       "location": "query"
+-      },
+-      "deviceName": {
+-       "type": "string",
+-       "description": "Disk device name to modify.",
+-       "required": true,
+-       "pattern": "\\w[\\w.-]{0,254}",
+-       "location": "query"
+-      },
+-      "instance": {
+-       "type": "string",
+-       "description": "Instance name.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      },
+-      "project": {
+-       "type": "string",
+-       "description": "Project name.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      },
+-      "zone": {
+-       "type": "string",
+-       "description": "Name of the zone scoping this request.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project",
+-      "zone",
+-      "instance",
+-      "autoDelete",
+-      "deviceName"
+-     ],
+-     "response": {
+-      "$ref": "Operation"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute"
+-     ]
+-    },
+-    "setMetadata": {
+-     "id": "compute.instances.setMetadata",
+-     "path": "{project}/zones/{zone}/instances/{instance}/setMetadata",
+-     "httpMethod": "POST",
+-     "description": "Sets metadata for the specified instance to the data included in the request.",
+-     "parameters": {
+-      "instance": {
+-       "type": "string",
+-       "description": "Name of the instance scoping this request.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      },
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      },
+-      "zone": {
+-       "type": "string",
+-       "description": "Name of the zone scoping this request.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project",
+-      "zone",
+-      "instance"
+-     ],
+-     "request": {
+-      "$ref": "Metadata"
+-     },
+-     "response": {
+-      "$ref": "Operation"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute"
+-     ]
+-    },
+-    "setScheduling": {
+-     "id": "compute.instances.setScheduling",
+-     "path": "{project}/zones/{zone}/instances/{instance}/setScheduling",
+-     "httpMethod": "POST",
+-     "description": "Sets an instance's scheduling options.",
+-     "parameters": {
+-      "instance": {
+-       "type": "string",
+-       "description": "Instance name.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      },
+-      "project": {
+-       "type": "string",
+-       "description": "Project name.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      },
+-      "zone": {
+-       "type": "string",
+-       "description": "Name of the zone scoping this request.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project",
+-      "zone",
+-      "instance"
+-     ],
+-     "request": {
+-      "$ref": "Scheduling"
+-     },
+-     "response": {
+-      "$ref": "Operation"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute"
+-     ]
+-    },
+-    "setTags": {
+-     "id": "compute.instances.setTags",
+-     "path": "{project}/zones/{zone}/instances/{instance}/setTags",
+-     "httpMethod": "POST",
+-     "description": "Sets tags for the specified instance to the data included in the request.",
+-     "parameters": {
+-      "instance": {
+-       "type": "string",
+-       "description": "Name of the instance scoping this request.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      },
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      },
+-      "zone": {
+-       "type": "string",
+-       "description": "Name of the zone scoping this request.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project",
+-      "zone",
+-      "instance"
+-     ],
+-     "request": {
+-      "$ref": "Tags"
+-     },
+-     "response": {
+-      "$ref": "Operation"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute"
+-     ]
+-    }
+-   }
+-  },
+-  "licenses": {
+-   "methods": {
+-    "get": {
+-     "id": "compute.licenses.get",
+-     "path": "{project}/global/licenses/{license}",
+-     "httpMethod": "GET",
+-     "description": "Returns the specified license resource.",
+-     "parameters": {
+-      "license": {
+-       "type": "string",
+-       "description": "Name of the license resource to return.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      },
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project",
+-      "license"
+-     ],
+-     "response": {
+-      "$ref": "License"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute",
+-      "https://www.googleapis.com/auth/compute.readonly"
+-     ]
+-    }
+-   }
+-  },
+-  "machineTypes": {
+-   "methods": {
+-    "aggregatedList": {
+-     "id": "compute.machineTypes.aggregatedList",
+-     "path": "{project}/aggregated/machineTypes",
+-     "httpMethod": "GET",
+-     "description": "Retrieves the list of machine type resources grouped by scope.",
+-     "parameters": {
+-      "filter": {
+-       "type": "string",
+-       "description": "Optional. Filter expression for filtering listed resources.",
+-       "location": "query"
+-      },
+-      "maxResults": {
+-       "type": "integer",
+-       "description": "Optional. Maximum count of results to be returned. Maximum value is 500 and default value is 500.",
+-       "default": "500",
+-       "format": "uint32",
+-       "minimum": "0",
+-       "maximum": "500",
+-       "location": "query"
+-      },
+-      "pageToken": {
+-       "type": "string",
+-       "description": "Optional. Tag returned by a previous list request truncated by maxResults. Used to continue a previous list request.",
+-       "location": "query"
+-      },
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project"
+-     ],
+-     "response": {
+-      "$ref": "MachineTypeAggregatedList"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute",
+-      "https://www.googleapis.com/auth/compute.readonly"
+-     ]
+-    },
+-    "get": {
+-     "id": "compute.machineTypes.get",
+-     "path": "{project}/zones/{zone}/machineTypes/{machineType}",
+-     "httpMethod": "GET",
+-     "description": "Returns the specified machine type resource.",
+-     "parameters": {
+-      "machineType": {
+-       "type": "string",
+-       "description": "Name of the machine type resource to return.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      },
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      },
+-      "zone": {
+-       "type": "string",
+-       "description": "Name of the zone scoping this request.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project",
+-      "zone",
+-      "machineType"
+-     ],
+-     "response": {
+-      "$ref": "MachineType"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute",
+-      "https://www.googleapis.com/auth/compute.readonly"
+-     ]
+-    },
+-    "list": {
+-     "id": "compute.machineTypes.list",
+-     "path": "{project}/zones/{zone}/machineTypes",
+-     "httpMethod": "GET",
+-     "description": "Retrieves the list of machine type resources available to the specified project.",
+-     "parameters": {
+-      "filter": {
+-       "type": "string",
+-       "description": "Optional. Filter expression for filtering listed resources.",
+-       "location": "query"
+-      },
+-      "maxResults": {
+-       "type": "integer",
+-       "description": "Optional. Maximum count of results to be returned. Maximum value is 500 and default value is 500.",
+-       "default": "500",
+-       "format": "uint32",
+-       "minimum": "0",
+-       "maximum": "500",
+-       "location": "query"
+-      },
+-      "pageToken": {
+-       "type": "string",
+-       "description": "Optional. Tag returned by a previous list request truncated by maxResults. Used to continue a previous list request.",
+-       "location": "query"
+-      },
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      },
+-      "zone": {
+-       "type": "string",
+-       "description": "Name of the zone scoping this request.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project",
+-      "zone"
+-     ],
+-     "response": {
+-      "$ref": "MachineTypeList"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute",
+-      "https://www.googleapis.com/auth/compute.readonly"
+-     ]
+-    }
+-   }
+-  },
+-  "networks": {
+-   "methods": {
+-    "delete": {
+-     "id": "compute.networks.delete",
+-     "path": "{project}/global/networks/{network}",
+-     "httpMethod": "DELETE",
+-     "description": "Deletes the specified network resource.",
+-     "parameters": {
+-      "network": {
+-       "type": "string",
+-       "description": "Name of the network resource to delete.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      },
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project",
+-      "network"
+-     ],
+-     "response": {
+-      "$ref": "Operation"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute"
+-     ]
+-    },
+-    "get": {
+-     "id": "compute.networks.get",
+-     "path": "{project}/global/networks/{network}",
+-     "httpMethod": "GET",
+-     "description": "Returns the specified network resource.",
+-     "parameters": {
+-      "network": {
+-       "type": "string",
+-       "description": "Name of the network resource to return.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      },
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project",
+-      "network"
+-     ],
+-     "response": {
+-      "$ref": "Network"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute",
+-      "https://www.googleapis.com/auth/compute.readonly"
+-     ]
+-    },
+-    "insert": {
+-     "id": "compute.networks.insert",
+-     "path": "{project}/global/networks",
+-     "httpMethod": "POST",
+-     "description": "Creates a network resource in the specified project using the data included in the request.",
+-     "parameters": {
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project"
+-     ],
+-     "request": {
+-      "$ref": "Network"
+-     },
+-     "response": {
+-      "$ref": "Operation"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute"
+-     ]
+-    },
+-    "list": {
+-     "id": "compute.networks.list",
+-     "path": "{project}/global/networks",
+-     "httpMethod": "GET",
+-     "description": "Retrieves the list of network resources available to the specified project.",
+-     "parameters": {
+-      "filter": {
+-       "type": "string",
+-       "description": "Optional. Filter expression for filtering listed resources.",
+-       "location": "query"
+-      },
+-      "maxResults": {
+-       "type": "integer",
+-       "description": "Optional. Maximum count of results to be returned. Maximum value is 500 and default value is 500.",
+-       "default": "500",
+-       "format": "uint32",
+-       "minimum": "0",
+-       "maximum": "500",
+-       "location": "query"
+-      },
+-      "pageToken": {
+-       "type": "string",
+-       "description": "Optional. Tag returned by a previous list request truncated by maxResults. Used to continue a previous list request.",
+-       "location": "query"
+-      },
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project"
+-     ],
+-     "response": {
+-      "$ref": "NetworkList"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute",
+-      "https://www.googleapis.com/auth/compute.readonly"
+-     ]
+-    }
+-   }
+-  },
+-  "projects": {
+-   "methods": {
+-    "get": {
+-     "id": "compute.projects.get",
+-     "path": "{project}",
+-     "httpMethod": "GET",
+-     "description": "Returns the specified project resource.",
+-     "parameters": {
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project resource to retrieve.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project"
+-     ],
+-     "response": {
+-      "$ref": "Project"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute",
+-      "https://www.googleapis.com/auth/compute.readonly"
+-     ]
+-    },
+-    "setCommonInstanceMetadata": {
+-     "id": "compute.projects.setCommonInstanceMetadata",
+-     "path": "{project}/setCommonInstanceMetadata",
+-     "httpMethod": "POST",
+-     "description": "Sets metadata common to all instances within the specified project using the data included in the request.",
+-     "parameters": {
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project"
+-     ],
+-     "request": {
+-      "$ref": "Metadata"
+-     },
+-     "response": {
+-      "$ref": "Operation"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute"
+-     ]
+-    },
+-    "setUsageExportBucket": {
+-     "id": "compute.projects.setUsageExportBucket",
+-     "path": "{project}/setUsageExportBucket",
+-     "httpMethod": "POST",
+-     "description": "Sets usage export location",
+-     "parameters": {
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project"
+-     ],
+-     "request": {
+-      "$ref": "UsageExportLocation"
+-     },
+-     "response": {
+-      "$ref": "Operation"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute",
+-      "https://www.googleapis.com/auth/devstorage.full_control",
+-      "https://www.googleapis.com/auth/devstorage.read_only",
+-      "https://www.googleapis.com/auth/devstorage.read_write"
+-     ]
+-    }
+-   }
+-  },
+-  "regionOperations": {
+-   "methods": {
+-    "delete": {
+-     "id": "compute.regionOperations.delete",
+-     "path": "{project}/regions/{region}/operations/{operation}",
+-     "httpMethod": "DELETE",
+-     "description": "Deletes the specified region-specific operation resource.",
+-     "parameters": {
+-      "operation": {
+-       "type": "string",
+-       "description": "Name of the operation resource to delete.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      },
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      },
+-      "region": {
+-       "type": "string",
+-       "description": "Name of the region scoping this request.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project",
+-      "region",
+-      "operation"
+-     ],
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute"
+-     ]
+-    },
+-    "get": {
+-     "id": "compute.regionOperations.get",
+-     "path": "{project}/regions/{region}/operations/{operation}",
+-     "httpMethod": "GET",
+-     "description": "Retrieves the specified region-specific operation resource.",
+-     "parameters": {
+-      "operation": {
+-       "type": "string",
+-       "description": "Name of the operation resource to return.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      },
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      },
+-      "region": {
+-       "type": "string",
+-       "description": "Name of the zone scoping this request.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project",
+-      "region",
+-      "operation"
+-     ],
+-     "response": {
+-      "$ref": "Operation"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute",
+-      "https://www.googleapis.com/auth/compute.readonly"
+-     ]
+-    },
+-    "list": {
+-     "id": "compute.regionOperations.list",
+-     "path": "{project}/regions/{region}/operations",
+-     "httpMethod": "GET",
+-     "description": "Retrieves the list of operation resources contained within the specified region.",
+-     "parameters": {
+-      "filter": {
+-       "type": "string",
+-       "description": "Optional. Filter expression for filtering listed resources.",
+-       "location": "query"
+-      },
+-      "maxResults": {
+-       "type": "integer",
+-       "description": "Optional. Maximum count of results to be returned. Maximum value is 500 and default value is 500.",
+-       "default": "500",
+-       "format": "uint32",
+-       "minimum": "0",
+-       "maximum": "500",
+-       "location": "query"
+-      },
+-      "pageToken": {
+-       "type": "string",
+-       "description": "Optional. Tag returned by a previous list request truncated by maxResults. Used to continue a previous list request.",
+-       "location": "query"
+-      },
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      },
+-      "region": {
+-       "type": "string",
+-       "description": "Name of the region scoping this request.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project",
+-      "region"
+-     ],
+-     "response": {
+-      "$ref": "OperationList"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute",
+-      "https://www.googleapis.com/auth/compute.readonly"
+-     ]
+-    }
+-   }
+-  },
+-  "regions": {
+-   "methods": {
+-    "get": {
+-     "id": "compute.regions.get",
+-     "path": "{project}/regions/{region}",
+-     "httpMethod": "GET",
+-     "description": "Returns the specified region resource.",
+-     "parameters": {
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      },
+-      "region": {
+-       "type": "string",
+-       "description": "Name of the region resource to return.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project",
+-      "region"
+-     ],
+-     "response": {
+-      "$ref": "Region"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute",
+-      "https://www.googleapis.com/auth/compute.readonly"
+-     ]
+-    },
+-    "list": {
+-     "id": "compute.regions.list",
+-     "path": "{project}/regions",
+-     "httpMethod": "GET",
+-     "description": "Retrieves the list of region resources available to the specified project.",
+-     "parameters": {
+-      "filter": {
+-       "type": "string",
+-       "description": "Optional. Filter expression for filtering listed resources.",
+-       "location": "query"
+-      },
+-      "maxResults": {
+-       "type": "integer",
+-       "description": "Optional. Maximum count of results to be returned. Maximum value is 500 and default value is 500.",
+-       "default": "500",
+-       "format": "uint32",
+-       "minimum": "0",
+-       "maximum": "500",
+-       "location": "query"
+-      },
+-      "pageToken": {
+-       "type": "string",
+-       "description": "Optional. Tag returned by a previous list request truncated by maxResults. Used to continue a previous list request.",
+-       "location": "query"
+-      },
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project"
+-     ],
+-     "response": {
+-      "$ref": "RegionList"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute",
+-      "https://www.googleapis.com/auth/compute.readonly"
+-     ]
+-    }
+-   }
+-  },
+-  "routes": {
+-   "methods": {
+-    "delete": {
+-     "id": "compute.routes.delete",
+-     "path": "{project}/global/routes/{route}",
+-     "httpMethod": "DELETE",
+-     "description": "Deletes the specified route resource.",
+-     "parameters": {
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      },
+-      "route": {
+-       "type": "string",
+-       "description": "Name of the route resource to delete.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project",
+-      "route"
+-     ],
+-     "response": {
+-      "$ref": "Operation"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute"
+-     ]
+-    },
+-    "get": {
+-     "id": "compute.routes.get",
+-     "path": "{project}/global/routes/{route}",
+-     "httpMethod": "GET",
+-     "description": "Returns the specified route resource.",
+-     "parameters": {
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      },
+-      "route": {
+-       "type": "string",
+-       "description": "Name of the route resource to return.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project",
+-      "route"
+-     ],
+-     "response": {
+-      "$ref": "Route"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute",
+-      "https://www.googleapis.com/auth/compute.readonly"
+-     ]
+-    },
+-    "insert": {
+-     "id": "compute.routes.insert",
+-     "path": "{project}/global/routes",
+-     "httpMethod": "POST",
+-     "description": "Creates a route resource in the specified project using the data included in the request.",
+-     "parameters": {
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project"
+-     ],
+-     "request": {
+-      "$ref": "Route"
+-     },
+-     "response": {
+-      "$ref": "Operation"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute"
+-     ]
+-    },
+-    "list": {
+-     "id": "compute.routes.list",
+-     "path": "{project}/global/routes",
+-     "httpMethod": "GET",
+-     "description": "Retrieves the list of route resources available to the specified project.",
+-     "parameters": {
+-      "filter": {
+-       "type": "string",
+-       "description": "Optional. Filter expression for filtering listed resources.",
+-       "location": "query"
+-      },
+-      "maxResults": {
+-       "type": "integer",
+-       "description": "Optional. Maximum count of results to be returned. Maximum value is 500 and default value is 500.",
+-       "default": "500",
+-       "format": "uint32",
+-       "minimum": "0",
+-       "maximum": "500",
+-       "location": "query"
+-      },
+-      "pageToken": {
+-       "type": "string",
+-       "description": "Optional. Tag returned by a previous list request truncated by maxResults. Used to continue a previous list request.",
+-       "location": "query"
+-      },
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project"
+-     ],
+-     "response": {
+-      "$ref": "RouteList"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute",
+-      "https://www.googleapis.com/auth/compute.readonly"
+-     ]
+-    }
+-   }
+-  },
+-  "snapshots": {
+-   "methods": {
+-    "delete": {
+-     "id": "compute.snapshots.delete",
+-     "path": "{project}/global/snapshots/{snapshot}",
+-     "httpMethod": "DELETE",
+-     "description": "Deletes the specified persistent disk snapshot resource.",
+-     "parameters": {
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      },
+-      "snapshot": {
+-       "type": "string",
+-       "description": "Name of the persistent disk snapshot resource to delete.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project",
+-      "snapshot"
+-     ],
+-     "response": {
+-      "$ref": "Operation"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute"
+-     ]
+-    },
+-    "get": {
+-     "id": "compute.snapshots.get",
+-     "path": "{project}/global/snapshots/{snapshot}",
+-     "httpMethod": "GET",
+-     "description": "Returns the specified persistent disk snapshot resource.",
+-     "parameters": {
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      },
+-      "snapshot": {
+-       "type": "string",
+-       "description": "Name of the persistent disk snapshot resource to return.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project",
+-      "snapshot"
+-     ],
+-     "response": {
+-      "$ref": "Snapshot"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute",
+-      "https://www.googleapis.com/auth/compute.readonly"
+-     ]
+-    },
+-    "list": {
+-     "id": "compute.snapshots.list",
+-     "path": "{project}/global/snapshots",
+-     "httpMethod": "GET",
+-     "description": "Retrieves the list of persistent disk snapshot resources contained within the specified project.",
+-     "parameters": {
+-      "filter": {
+-       "type": "string",
+-       "description": "Optional. Filter expression for filtering listed resources.",
+-       "location": "query"
+-      },
+-      "maxResults": {
+-       "type": "integer",
+-       "description": "Optional. Maximum count of results to be returned. Maximum value is 500 and default value is 500.",
+-       "default": "500",
+-       "format": "uint32",
+-       "minimum": "0",
+-       "maximum": "500",
+-       "location": "query"
+-      },
+-      "pageToken": {
+-       "type": "string",
+-       "description": "Optional. Tag returned by a previous list request truncated by maxResults. Used to continue a previous list request.",
+-       "location": "query"
+-      },
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project"
+-     ],
+-     "response": {
+-      "$ref": "SnapshotList"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute",
+-      "https://www.googleapis.com/auth/compute.readonly"
+-     ]
+-    }
+-   }
+-  },
+-  "targetHttpProxies": {
+-   "methods": {
+-    "delete": {
+-     "id": "compute.targetHttpProxies.delete",
+-     "path": "{project}/global/targetHttpProxies/{targetHttpProxy}",
+-     "httpMethod": "DELETE",
+-     "description": "Deletes the specified TargetHttpProxy resource.",
+-     "parameters": {
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      },
+-      "targetHttpProxy": {
+-       "type": "string",
+-       "description": "Name of the TargetHttpProxy resource to delete.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project",
+-      "targetHttpProxy"
+-     ],
+-     "response": {
+-      "$ref": "Operation"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute"
+-     ]
+-    },
+-    "get": {
+-     "id": "compute.targetHttpProxies.get",
+-     "path": "{project}/global/targetHttpProxies/{targetHttpProxy}",
+-     "httpMethod": "GET",
+-     "description": "Returns the specified TargetHttpProxy resource.",
+-     "parameters": {
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      },
+-      "targetHttpProxy": {
+-       "type": "string",
+-       "description": "Name of the TargetHttpProxy resource to return.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project",
+-      "targetHttpProxy"
+-     ],
+-     "response": {
+-      "$ref": "TargetHttpProxy"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute",
+-      "https://www.googleapis.com/auth/compute.readonly"
+-     ]
+-    },
+-    "insert": {
+-     "id": "compute.targetHttpProxies.insert",
+-     "path": "{project}/global/targetHttpProxies",
+-     "httpMethod": "POST",
+-     "description": "Creates a TargetHttpProxy resource in the specified project using the data included in the request.",
+-     "parameters": {
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project"
+-     ],
+-     "request": {
+-      "$ref": "TargetHttpProxy"
+-     },
+-     "response": {
+-      "$ref": "Operation"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute"
+-     ]
+-    },
+-    "list": {
+-     "id": "compute.targetHttpProxies.list",
+-     "path": "{project}/global/targetHttpProxies",
+-     "httpMethod": "GET",
+-     "description": "Retrieves the list of TargetHttpProxy resources available to the specified project.",
+-     "parameters": {
+-      "filter": {
+-       "type": "string",
+-       "description": "Optional. Filter expression for filtering listed resources.",
+-       "location": "query"
+-      },
+-      "maxResults": {
+-       "type": "integer",
+-       "description": "Optional. Maximum count of results to be returned. Maximum value is 500 and default value is 500.",
+-       "default": "500",
+-       "format": "uint32",
+-       "minimum": "0",
+-       "maximum": "500",
+-       "location": "query"
+-      },
+-      "pageToken": {
+-       "type": "string",
+-       "description": "Optional. Tag returned by a previous list request truncated by maxResults. Used to continue a previous list request.",
+-       "location": "query"
+-      },
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project"
+-     ],
+-     "response": {
+-      "$ref": "TargetHttpProxyList"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute",
+-      "https://www.googleapis.com/auth/compute.readonly"
+-     ]
+-    },
+-    "setUrlMap": {
+-     "id": "compute.targetHttpProxies.setUrlMap",
+-     "path": "{project}/targetHttpProxies/{targetHttpProxy}/setUrlMap",
+-     "httpMethod": "POST",
+-     "description": "Changes the URL map for TargetHttpProxy.",
+-     "parameters": {
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      },
+-      "targetHttpProxy": {
+-       "type": "string",
+-       "description": "Name of the TargetHttpProxy resource whose URL map is to be set.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project",
+-      "targetHttpProxy"
+-     ],
+-     "request": {
+-      "$ref": "UrlMapReference"
+-     },
+-     "response": {
+-      "$ref": "Operation"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute"
+-     ]
+-    }
+-   }
+-  },
+-  "targetInstances": {
+-   "methods": {
+-    "aggregatedList": {
+-     "id": "compute.targetInstances.aggregatedList",
+-     "path": "{project}/aggregated/targetInstances",
+-     "httpMethod": "GET",
+-     "description": "Retrieves the list of target instances grouped by scope.",
+-     "parameters": {
+-      "filter": {
+-       "type": "string",
+-       "description": "Optional. Filter expression for filtering listed resources.",
+-       "location": "query"
+-      },
+-      "maxResults": {
+-       "type": "integer",
+-       "description": "Optional. Maximum count of results to be returned. Maximum value is 500 and default value is 500.",
+-       "default": "500",
+-       "format": "uint32",
+-       "minimum": "0",
+-       "maximum": "500",
+-       "location": "query"
+-      },
+-      "pageToken": {
+-       "type": "string",
+-       "description": "Optional. Tag returned by a previous list request truncated by maxResults. Used to continue a previous list request.",
+-       "location": "query"
+-      },
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project"
+-     ],
+-     "response": {
+-      "$ref": "TargetInstanceAggregatedList"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute",
+-      "https://www.googleapis.com/auth/compute.readonly"
+-     ]
+-    },
+-    "delete": {
+-     "id": "compute.targetInstances.delete",
+-     "path": "{project}/zones/{zone}/targetInstances/{targetInstance}",
+-     "httpMethod": "DELETE",
+-     "description": "Deletes the specified TargetInstance resource.",
+-     "parameters": {
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      },
+-      "targetInstance": {
+-       "type": "string",
+-       "description": "Name of the TargetInstance resource to delete.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      },
+-      "zone": {
+-       "type": "string",
+-       "description": "Name of the zone scoping this request.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project",
+-      "zone",
+-      "targetInstance"
+-     ],
+-     "response": {
+-      "$ref": "Operation"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute"
+-     ]
+-    },
+-    "get": {
+-     "id": "compute.targetInstances.get",
+-     "path": "{project}/zones/{zone}/targetInstances/{targetInstance}",
+-     "httpMethod": "GET",
+-     "description": "Returns the specified TargetInstance resource.",
+-     "parameters": {
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      },
+-      "targetInstance": {
+-       "type": "string",
+-       "description": "Name of the TargetInstance resource to return.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      },
+-      "zone": {
+-       "type": "string",
+-       "description": "Name of the zone scoping this request.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project",
+-      "zone",
+-      "targetInstance"
+-     ],
+-     "response": {
+-      "$ref": "TargetInstance"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute",
+-      "https://www.googleapis.com/auth/compute.readonly"
+-     ]
+-    },
+-    "insert": {
+-     "id": "compute.targetInstances.insert",
+-     "path": "{project}/zones/{zone}/targetInstances",
+-     "httpMethod": "POST",
+-     "description": "Creates a TargetInstance resource in the specified project and zone using the data included in the request.",
+-     "parameters": {
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      },
+-      "zone": {
+-       "type": "string",
+-       "description": "Name of the zone scoping this request.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project",
+-      "zone"
+-     ],
+-     "request": {
+-      "$ref": "TargetInstance"
+-     },
+-     "response": {
+-      "$ref": "Operation"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute"
+-     ]
+-    },
+-    "list": {
+-     "id": "compute.targetInstances.list",
+-     "path": "{project}/zones/{zone}/targetInstances",
+-     "httpMethod": "GET",
+-     "description": "Retrieves the list of TargetInstance resources available to the specified project and zone.",
+-     "parameters": {
+-      "filter": {
+-       "type": "string",
+-       "description": "Optional. Filter expression for filtering listed resources.",
+-       "location": "query"
+-      },
+-      "maxResults": {
+-       "type": "integer",
+-       "description": "Optional. Maximum count of results to be returned. Maximum value is 500 and default value is 500.",
+-       "default": "500",
+-       "format": "uint32",
+-       "minimum": "0",
+-       "maximum": "500",
+-       "location": "query"
+-      },
+-      "pageToken": {
+-       "type": "string",
+-       "description": "Optional. Tag returned by a previous list request truncated by maxResults. Used to continue a previous list request.",
+-       "location": "query"
+-      },
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      },
+-      "zone": {
+-       "type": "string",
+-       "description": "Name of the zone scoping this request.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project",
+-      "zone"
+-     ],
+-     "response": {
+-      "$ref": "TargetInstanceList"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute",
+-      "https://www.googleapis.com/auth/compute.readonly"
+-     ]
+-    }
+-   }
+-  },
+-  "targetPools": {
+-   "methods": {
+-    "addHealthCheck": {
+-     "id": "compute.targetPools.addHealthCheck",
+-     "path": "{project}/regions/{region}/targetPools/{targetPool}/addHealthCheck",
+-     "httpMethod": "POST",
+-     "description": "Adds health check URL to targetPool.",
+-     "parameters": {
+-      "project": {
+-       "type": "string",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      },
+-      "region": {
+-       "type": "string",
+-       "description": "Name of the region scoping this request.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      },
+-      "targetPool": {
+-       "type": "string",
+-       "description": "Name of the TargetPool resource to which health_check_url is to be added.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project",
+-      "region",
+-      "targetPool"
+-     ],
+-     "request": {
+-      "$ref": "TargetPoolsAddHealthCheckRequest"
+-     },
+-     "response": {
+-      "$ref": "Operation"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute"
+-     ]
+-    },
+-    "addInstance": {
+-     "id": "compute.targetPools.addInstance",
+-     "path": "{project}/regions/{region}/targetPools/{targetPool}/addInstance",
+-     "httpMethod": "POST",
+-     "description": "Adds instance url to targetPool.",
+-     "parameters": {
+-      "project": {
+-       "type": "string",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      },
+-      "region": {
+-       "type": "string",
+-       "description": "Name of the region scoping this request.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      },
+-      "targetPool": {
+-       "type": "string",
+-       "description": "Name of the TargetPool resource to which instance_url is to be added.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project",
+-      "region",
+-      "targetPool"
+-     ],
+-     "request": {
+-      "$ref": "TargetPoolsAddInstanceRequest"
+-     },
+-     "response": {
+-      "$ref": "Operation"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute"
+-     ]
+-    },
+-    "aggregatedList": {
+-     "id": "compute.targetPools.aggregatedList",
+-     "path": "{project}/aggregated/targetPools",
+-     "httpMethod": "GET",
+-     "description": "Retrieves the list of target pools grouped by scope.",
+-     "parameters": {
+-      "filter": {
+-       "type": "string",
+-       "description": "Optional. Filter expression for filtering listed resources.",
+-       "location": "query"
+-      },
+-      "maxResults": {
+-       "type": "integer",
+-       "description": "Optional. Maximum count of results to be returned. Maximum value is 500 and default value is 500.",
+-       "default": "500",
+-       "format": "uint32",
+-       "minimum": "0",
+-       "maximum": "500",
+-       "location": "query"
+-      },
+-      "pageToken": {
+-       "type": "string",
+-       "description": "Optional. Tag returned by a previous list request truncated by maxResults. Used to continue a previous list request.",
+-       "location": "query"
+-      },
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project"
+-     ],
+-     "response": {
+-      "$ref": "TargetPoolAggregatedList"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute",
+-      "https://www.googleapis.com/auth/compute.readonly"
+-     ]
+-    },
+-    "delete": {
+-     "id": "compute.targetPools.delete",
+-     "path": "{project}/regions/{region}/targetPools/{targetPool}",
+-     "httpMethod": "DELETE",
+-     "description": "Deletes the specified TargetPool resource.",
+-     "parameters": {
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      },
+-      "region": {
+-       "type": "string",
+-       "description": "Name of the region scoping this request.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      },
+-      "targetPool": {
+-       "type": "string",
+-       "description": "Name of the TargetPool resource to delete.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project",
+-      "region",
+-      "targetPool"
+-     ],
+-     "response": {
+-      "$ref": "Operation"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute"
+-     ]
+-    },
+-    "get": {
+-     "id": "compute.targetPools.get",
+-     "path": "{project}/regions/{region}/targetPools/{targetPool}",
+-     "httpMethod": "GET",
+-     "description": "Returns the specified TargetPool resource.",
+-     "parameters": {
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      },
+-      "region": {
+-       "type": "string",
+-       "description": "Name of the region scoping this request.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      },
+-      "targetPool": {
+-       "type": "string",
+-       "description": "Name of the TargetPool resource to return.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project",
+-      "region",
+-      "targetPool"
+-     ],
+-     "response": {
+-      "$ref": "TargetPool"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute",
+-      "https://www.googleapis.com/auth/compute.readonly"
+-     ]
+-    },
+-    "getHealth": {
+-     "id": "compute.targetPools.getHealth",
+-     "path": "{project}/regions/{region}/targetPools/{targetPool}/getHealth",
+-     "httpMethod": "POST",
+-     "description": "Gets the most recent health check results for each IP for the given instance that is referenced by given TargetPool.",
+-     "parameters": {
+-      "project": {
+-       "type": "string",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      },
+-      "region": {
+-       "type": "string",
+-       "description": "Name of the region scoping this request.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      },
+-      "targetPool": {
+-       "type": "string",
+-       "description": "Name of the TargetPool resource to which the queried instance belongs.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project",
+-      "region",
+-      "targetPool"
+-     ],
+-     "request": {
+-      "$ref": "InstanceReference"
+-     },
+-     "response": {
+-      "$ref": "TargetPoolInstanceHealth"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute",
+-      "https://www.googleapis.com/auth/compute.readonly"
+-     ]
+-    },
+-    "insert": {
+-     "id": "compute.targetPools.insert",
+-     "path": "{project}/regions/{region}/targetPools",
+-     "httpMethod": "POST",
+-     "description": "Creates a TargetPool resource in the specified project and region using the data included in the request.",
+-     "parameters": {
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      },
+-      "region": {
+-       "type": "string",
+-       "description": "Name of the region scoping this request.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project",
+-      "region"
+-     ],
+-     "request": {
+-      "$ref": "TargetPool"
+-     },
+-     "response": {
+-      "$ref": "Operation"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute"
+-     ]
+-    },
+-    "list": {
+-     "id": "compute.targetPools.list",
+-     "path": "{project}/regions/{region}/targetPools",
+-     "httpMethod": "GET",
+-     "description": "Retrieves the list of TargetPool resources available to the specified project and region.",
+-     "parameters": {
+-      "filter": {
+-       "type": "string",
+-       "description": "Optional. Filter expression for filtering listed resources.",
+-       "location": "query"
+-      },
+-      "maxResults": {
+-       "type": "integer",
+-       "description": "Optional. Maximum count of results to be returned. Maximum value is 500 and default value is 500.",
+-       "default": "500",
+-       "format": "uint32",
+-       "minimum": "0",
+-       "maximum": "500",
+-       "location": "query"
+-      },
+-      "pageToken": {
+-       "type": "string",
+-       "description": "Optional. Tag returned by a previous list request truncated by maxResults. Used to continue a previous list request.",
+-       "location": "query"
+-      },
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      },
+-      "region": {
+-       "type": "string",
+-       "description": "Name of the region scoping this request.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project",
+-      "region"
+-     ],
+-     "response": {
+-      "$ref": "TargetPoolList"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute",
+-      "https://www.googleapis.com/auth/compute.readonly"
+-     ]
+-    },
+-    "removeHealthCheck": {
+-     "id": "compute.targetPools.removeHealthCheck",
+-     "path": "{project}/regions/{region}/targetPools/{targetPool}/removeHealthCheck",
+-     "httpMethod": "POST",
+-     "description": "Removes health check URL from targetPool.",
+-     "parameters": {
+-      "project": {
+-       "type": "string",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      },
+-      "region": {
+-       "type": "string",
+-       "description": "Name of the region scoping this request.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      },
+-      "targetPool": {
+-       "type": "string",
+-       "description": "Name of the TargetPool resource to which health_check_url is to be removed.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project",
+-      "region",
+-      "targetPool"
+-     ],
+-     "request": {
+-      "$ref": "TargetPoolsRemoveHealthCheckRequest"
+-     },
+-     "response": {
+-      "$ref": "Operation"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute"
+-     ]
+-    },
+-    "removeInstance": {
+-     "id": "compute.targetPools.removeInstance",
+-     "path": "{project}/regions/{region}/targetPools/{targetPool}/removeInstance",
+-     "httpMethod": "POST",
+-     "description": "Removes instance URL from targetPool.",
+-     "parameters": {
+-      "project": {
+-       "type": "string",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      },
+-      "region": {
+-       "type": "string",
+-       "description": "Name of the region scoping this request.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      },
+-      "targetPool": {
+-       "type": "string",
+-       "description": "Name of the TargetPool resource to which instance_url is to be removed.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project",
+-      "region",
+-      "targetPool"
+-     ],
+-     "request": {
+-      "$ref": "TargetPoolsRemoveInstanceRequest"
+-     },
+-     "response": {
+-      "$ref": "Operation"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute"
+-     ]
+-    },
+-    "setBackup": {
+-     "id": "compute.targetPools.setBackup",
+-     "path": "{project}/regions/{region}/targetPools/{targetPool}/setBackup",
+-     "httpMethod": "POST",
+-     "description": "Changes backup pool configurations.",
+-     "parameters": {
+-      "failoverRatio": {
+-       "type": "number",
+-       "description": "New failoverRatio value for the containing target pool.",
+-       "format": "float",
+-       "location": "query"
+-      },
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      },
+-      "region": {
+-       "type": "string",
+-       "description": "Name of the region scoping this request.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      },
+-      "targetPool": {
+-       "type": "string",
+-       "description": "Name of the TargetPool resource for which the backup is to be set.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project",
+-      "region",
+-      "targetPool"
+-     ],
+-     "request": {
+-      "$ref": "TargetReference"
+-     },
+-     "response": {
+-      "$ref": "Operation"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute"
+-     ]
+-    }
+-   }
+-  },
+-  "urlMaps": {
+-   "methods": {
+-    "delete": {
+-     "id": "compute.urlMaps.delete",
+-     "path": "{project}/global/urlMaps/{urlMap}",
+-     "httpMethod": "DELETE",
+-     "description": "Deletes the specified UrlMap resource.",
+-     "parameters": {
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      },
+-      "urlMap": {
+-       "type": "string",
+-       "description": "Name of the UrlMap resource to delete.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project",
+-      "urlMap"
+-     ],
+-     "response": {
+-      "$ref": "Operation"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute"
+-     ]
+-    },
+-    "get": {
+-     "id": "compute.urlMaps.get",
+-     "path": "{project}/global/urlMaps/{urlMap}",
+-     "httpMethod": "GET",
+-     "description": "Returns the specified UrlMap resource.",
+-     "parameters": {
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      },
+-      "urlMap": {
+-       "type": "string",
+-       "description": "Name of the UrlMap resource to return.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project",
+-      "urlMap"
+-     ],
+-     "response": {
+-      "$ref": "UrlMap"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute",
+-      "https://www.googleapis.com/auth/compute.readonly"
+-     ]
+-    },
+-    "insert": {
+-     "id": "compute.urlMaps.insert",
+-     "path": "{project}/global/urlMaps",
+-     "httpMethod": "POST",
+-     "description": "Creates a UrlMap resource in the specified project using the data included in the request.",
+-     "parameters": {
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project"
+-     ],
+-     "request": {
+-      "$ref": "UrlMap"
+-     },
+-     "response": {
+-      "$ref": "Operation"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute"
+-     ]
+-    },
+-    "list": {
+-     "id": "compute.urlMaps.list",
+-     "path": "{project}/global/urlMaps",
+-     "httpMethod": "GET",
+-     "description": "Retrieves the list of UrlMap resources available to the specified project.",
+-     "parameters": {
+-      "filter": {
+-       "type": "string",
+-       "description": "Optional. Filter expression for filtering listed resources.",
+-       "location": "query"
+-      },
+-      "maxResults": {
+-       "type": "integer",
+-       "description": "Optional. Maximum count of results to be returned. Maximum value is 500 and default value is 500.",
+-       "default": "500",
+-       "format": "uint32",
+-       "minimum": "0",
+-       "maximum": "500",
+-       "location": "query"
+-      },
+-      "pageToken": {
+-       "type": "string",
+-       "description": "Optional. Tag returned by a previous list request truncated by maxResults. Used to continue a previous list request.",
+-       "location": "query"
+-      },
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project"
+-     ],
+-     "response": {
+-      "$ref": "UrlMapList"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute",
+-      "https://www.googleapis.com/auth/compute.readonly"
+-     ]
+-    },
+-    "patch": {
+-     "id": "compute.urlMaps.patch",
+-     "path": "{project}/global/urlMaps/{urlMap}",
+-     "httpMethod": "PATCH",
+-     "description": "Update the entire content of the UrlMap resource. This method supports patch semantics.",
+-     "parameters": {
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      },
+-      "urlMap": {
+-       "type": "string",
+-       "description": "Name of the UrlMap resource to update.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project",
+-      "urlMap"
+-     ],
+-     "request": {
+-      "$ref": "UrlMap"
+-     },
+-     "response": {
+-      "$ref": "Operation"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute"
+-     ]
+-    },
+-    "update": {
+-     "id": "compute.urlMaps.update",
+-     "path": "{project}/global/urlMaps/{urlMap}",
+-     "httpMethod": "PUT",
+-     "description": "Update the entire content of the UrlMap resource.",
+-     "parameters": {
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      },
+-      "urlMap": {
+-       "type": "string",
+-       "description": "Name of the UrlMap resource to update.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project",
+-      "urlMap"
+-     ],
+-     "request": {
+-      "$ref": "UrlMap"
+-     },
+-     "response": {
+-      "$ref": "Operation"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute"
+-     ]
+-    },
+-    "validate": {
+-     "id": "compute.urlMaps.validate",
+-     "path": "{project}/global/urlMaps/{urlMap}/validate",
+-     "httpMethod": "POST",
+-     "description": "Run static validation for the UrlMap. In particular, the tests of the provided UrlMap will be run. Calling this method does NOT create the UrlMap.",
+-     "parameters": {
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      },
+-      "urlMap": {
+-       "type": "string",
+-       "description": "Name of the UrlMap resource to be validated as.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project",
+-      "urlMap"
+-     ],
+-     "request": {
+-      "$ref": "UrlMapsValidateRequest"
+-     },
+-     "response": {
+-      "$ref": "UrlMapsValidateResponse"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute"
+-     ]
+-    }
+-   }
+-  },
+-  "zoneOperations": {
+-   "methods": {
+-    "delete": {
+-     "id": "compute.zoneOperations.delete",
+-     "path": "{project}/zones/{zone}/operations/{operation}",
+-     "httpMethod": "DELETE",
+-     "description": "Deletes the specified zone-specific operation resource.",
+-     "parameters": {
+-      "operation": {
+-       "type": "string",
+-       "description": "Name of the operation resource to delete.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      },
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      },
+-      "zone": {
+-       "type": "string",
+-       "description": "Name of the zone scoping this request.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project",
+-      "zone",
+-      "operation"
+-     ],
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute"
+-     ]
+-    },
+-    "get": {
+-     "id": "compute.zoneOperations.get",
+-     "path": "{project}/zones/{zone}/operations/{operation}",
+-     "httpMethod": "GET",
+-     "description": "Retrieves the specified zone-specific operation resource.",
+-     "parameters": {
+-      "operation": {
+-       "type": "string",
+-       "description": "Name of the operation resource to return.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      },
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      },
+-      "zone": {
+-       "type": "string",
+-       "description": "Name of the zone scoping this request.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project",
+-      "zone",
+-      "operation"
+-     ],
+-     "response": {
+-      "$ref": "Operation"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute",
+-      "https://www.googleapis.com/auth/compute.readonly"
+-     ]
+-    },
+-    "list": {
+-     "id": "compute.zoneOperations.list",
+-     "path": "{project}/zones/{zone}/operations",
+-     "httpMethod": "GET",
+-     "description": "Retrieves the list of operation resources contained within the specified zone.",
+-     "parameters": {
+-      "filter": {
+-       "type": "string",
+-       "description": "Optional. Filter expression for filtering listed resources.",
+-       "location": "query"
+-      },
+-      "maxResults": {
+-       "type": "integer",
+-       "description": "Optional. Maximum count of results to be returned. Maximum value is 500 and default value is 500.",
+-       "default": "500",
+-       "format": "uint32",
+-       "minimum": "0",
+-       "maximum": "500",
+-       "location": "query"
+-      },
+-      "pageToken": {
+-       "type": "string",
+-       "description": "Optional. Tag returned by a previous list request truncated by maxResults. Used to continue a previous list request.",
+-       "location": "query"
+-      },
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      },
+-      "zone": {
+-       "type": "string",
+-       "description": "Name of the zone scoping this request.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project",
+-      "zone"
+-     ],
+-     "response": {
+-      "$ref": "OperationList"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute",
+-      "https://www.googleapis.com/auth/compute.readonly"
+-     ]
+-    }
+-   }
+-  },
+-  "zones": {
+-   "methods": {
+-    "get": {
+-     "id": "compute.zones.get",
+-     "path": "{project}/zones/{zone}",
+-     "httpMethod": "GET",
+-     "description": "Returns the specified zone resource.",
+-     "parameters": {
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      },
+-      "zone": {
+-       "type": "string",
+-       "description": "Name of the zone resource to return.",
+-       "required": true,
+-       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project",
+-      "zone"
+-     ],
+-     "response": {
+-      "$ref": "Zone"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute",
+-      "https://www.googleapis.com/auth/compute.readonly"
+-     ]
+-    },
+-    "list": {
+-     "id": "compute.zones.list",
+-     "path": "{project}/zones",
+-     "httpMethod": "GET",
+-     "description": "Retrieves the list of zone resources available to the specified project.",
+-     "parameters": {
+-      "filter": {
+-       "type": "string",
+-       "description": "Optional. Filter expression for filtering listed resources.",
+-       "location": "query"
+-      },
+-      "maxResults": {
+-       "type": "integer",
+-       "description": "Optional. Maximum count of results to be returned. Maximum value is 500 and default value is 500.",
+-       "default": "500",
+-       "format": "uint32",
+-       "minimum": "0",
+-       "maximum": "500",
+-       "location": "query"
+-      },
+-      "pageToken": {
+-       "type": "string",
+-       "description": "Optional. Tag returned by a previous list request truncated by maxResults. Used to continue a previous list request.",
+-       "location": "query"
+-      },
+-      "project": {
+-       "type": "string",
+-       "description": "Name of the project scoping this request.",
+-       "required": true,
+-       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-       "location": "path"
+-      }
+-     },
+-     "parameterOrder": [
+-      "project"
+-     ],
+-     "response": {
+-      "$ref": "ZoneList"
+-     },
+-     "scopes": [
+-      "https://www.googleapis.com/auth/compute",
+-      "https://www.googleapis.com/auth/compute.readonly"
+-     ]
+-    }
+-   }
+-  }
+- }
+-}
+diff --git a/Godeps/_workspace/src/code.google.com/p/google-api-go-client/compute/v1/compute-gen.go b/Godeps/_workspace/src/code.google.com/p/google-api-go-client/compute/v1/compute-gen.go
+deleted file mode 100644
+index 294df37..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/google-api-go-client/compute/v1/compute-gen.go
++++ /dev/null
+@@ -1,15027 +0,0 @@
+-// Package compute provides access to the Compute Engine API.
+-//
+-// See https://developers.google.com/compute/docs/reference/latest/
+-//
+-// Usage example:
+-//
+-//   import "code.google.com/p/google-api-go-client/compute/v1"
+-//   ...
+-//   computeService, err := compute.New(oauthHttpClient)
+-package compute
+-
+-import (
+-	"bytes"
+-	"code.google.com/p/google-api-go-client/googleapi"
+-	"encoding/json"
+-	"errors"
+-	"fmt"
+-	"io"
+-	"net/http"
+-	"net/url"
+-	"strconv"
+-	"strings"
+-)
+-
+-// Always reference these packages, just in case the auto-generated code
+-// below doesn't.
+-var _ = bytes.NewBuffer
+-var _ = strconv.Itoa
+-var _ = fmt.Sprintf
+-var _ = json.NewDecoder
+-var _ = io.Copy
+-var _ = url.Parse
+-var _ = googleapi.Version
+-var _ = errors.New
+-var _ = strings.Replace
+-
+-const apiId = "compute:v1"
+-const apiName = "compute"
+-const apiVersion = "v1"
+-const basePath = "https://www.googleapis.com/compute/v1/projects/"
+-
+-// OAuth2 scopes used by this API.
+-const (
+-	// View and manage your Google Compute Engine resources
+-	ComputeScope = "https://www.googleapis.com/auth/compute"
+-
+-	// View your Google Compute Engine resources
+-	ComputeReadonlyScope = "https://www.googleapis.com/auth/compute.readonly"
+-
+-	// Manage your data and permissions in Google Cloud Storage
+-	DevstorageFull_controlScope = "https://www.googleapis.com/auth/devstorage.full_control"
+-
+-	// View your data in Google Cloud Storage
+-	DevstorageRead_onlyScope = "https://www.googleapis.com/auth/devstorage.read_only"
+-
+-	// Manage your data in Google Cloud Storage
+-	DevstorageRead_writeScope = "https://www.googleapis.com/auth/devstorage.read_write"
+-)
+-
+-func New(client *http.Client) (*Service, error) {
+-	if client == nil {
+-		return nil, errors.New("client is nil")
+-	}
+-	s := &Service{client: client, BasePath: basePath}
+-	s.Addresses = NewAddressesService(s)
+-	s.BackendServices = NewBackendServicesService(s)
+-	s.DiskTypes = NewDiskTypesService(s)
+-	s.Disks = NewDisksService(s)
+-	s.Firewalls = NewFirewallsService(s)
+-	s.ForwardingRules = NewForwardingRulesService(s)
+-	s.GlobalAddresses = NewGlobalAddressesService(s)
+-	s.GlobalForwardingRules = NewGlobalForwardingRulesService(s)
+-	s.GlobalOperations = NewGlobalOperationsService(s)
+-	s.HttpHealthChecks = NewHttpHealthChecksService(s)
+-	s.Images = NewImagesService(s)
+-	s.Instances = NewInstancesService(s)
+-	s.Licenses = NewLicensesService(s)
+-	s.MachineTypes = NewMachineTypesService(s)
+-	s.Networks = NewNetworksService(s)
+-	s.Projects = NewProjectsService(s)
+-	s.RegionOperations = NewRegionOperationsService(s)
+-	s.Regions = NewRegionsService(s)
+-	s.Routes = NewRoutesService(s)
+-	s.Snapshots = NewSnapshotsService(s)
+-	s.TargetHttpProxies = NewTargetHttpProxiesService(s)
+-	s.TargetInstances = NewTargetInstancesService(s)
+-	s.TargetPools = NewTargetPoolsService(s)
+-	s.UrlMaps = NewUrlMapsService(s)
+-	s.ZoneOperations = NewZoneOperationsService(s)
+-	s.Zones = NewZonesService(s)
+-	return s, nil
+-}
+-
+-type Service struct {
+-	client   *http.Client
+-	BasePath string // API endpoint base URL
+-
+-	Addresses *AddressesService
+-
+-	BackendServices *BackendServicesService
+-
+-	DiskTypes *DiskTypesService
+-
+-	Disks *DisksService
+-
+-	Firewalls *FirewallsService
+-
+-	ForwardingRules *ForwardingRulesService
+-
+-	GlobalAddresses *GlobalAddressesService
+-
+-	GlobalForwardingRules *GlobalForwardingRulesService
+-
+-	GlobalOperations *GlobalOperationsService
+-
+-	HttpHealthChecks *HttpHealthChecksService
+-
+-	Images *ImagesService
+-
+-	Instances *InstancesService
+-
+-	Licenses *LicensesService
+-
+-	MachineTypes *MachineTypesService
+-
+-	Networks *NetworksService
+-
+-	Projects *ProjectsService
+-
+-	RegionOperations *RegionOperationsService
+-
+-	Regions *RegionsService
+-
+-	Routes *RoutesService
+-
+-	Snapshots *SnapshotsService
+-
+-	TargetHttpProxies *TargetHttpProxiesService
+-
+-	TargetInstances *TargetInstancesService
+-
+-	TargetPools *TargetPoolsService
+-
+-	UrlMaps *UrlMapsService
+-
+-	ZoneOperations *ZoneOperationsService
+-
+-	Zones *ZonesService
+-}
+-
+-func NewAddressesService(s *Service) *AddressesService {
+-	rs := &AddressesService{s: s}
+-	return rs
+-}
+-
+-type AddressesService struct {
+-	s *Service
+-}
+-
+-func NewBackendServicesService(s *Service) *BackendServicesService {
+-	rs := &BackendServicesService{s: s}
+-	return rs
+-}
+-
+-type BackendServicesService struct {
+-	s *Service
+-}
+-
+-func NewDiskTypesService(s *Service) *DiskTypesService {
+-	rs := &DiskTypesService{s: s}
+-	return rs
+-}
+-
+-type DiskTypesService struct {
+-	s *Service
+-}
+-
+-func NewDisksService(s *Service) *DisksService {
+-	rs := &DisksService{s: s}
+-	return rs
+-}
+-
+-type DisksService struct {
+-	s *Service
+-}
+-
+-func NewFirewallsService(s *Service) *FirewallsService {
+-	rs := &FirewallsService{s: s}
+-	return rs
+-}
+-
+-type FirewallsService struct {
+-	s *Service
+-}
+-
+-func NewForwardingRulesService(s *Service) *ForwardingRulesService {
+-	rs := &ForwardingRulesService{s: s}
+-	return rs
+-}
+-
+-type ForwardingRulesService struct {
+-	s *Service
+-}
+-
+-func NewGlobalAddressesService(s *Service) *GlobalAddressesService {
+-	rs := &GlobalAddressesService{s: s}
+-	return rs
+-}
+-
+-type GlobalAddressesService struct {
+-	s *Service
+-}
+-
+-func NewGlobalForwardingRulesService(s *Service) *GlobalForwardingRulesService {
+-	rs := &GlobalForwardingRulesService{s: s}
+-	return rs
+-}
+-
+-type GlobalForwardingRulesService struct {
+-	s *Service
+-}
+-
+-func NewGlobalOperationsService(s *Service) *GlobalOperationsService {
+-	rs := &GlobalOperationsService{s: s}
+-	return rs
+-}
+-
+-type GlobalOperationsService struct {
+-	s *Service
+-}
+-
+-func NewHttpHealthChecksService(s *Service) *HttpHealthChecksService {
+-	rs := &HttpHealthChecksService{s: s}
+-	return rs
+-}
+-
+-type HttpHealthChecksService struct {
+-	s *Service
+-}
+-
+-func NewImagesService(s *Service) *ImagesService {
+-	rs := &ImagesService{s: s}
+-	return rs
+-}
+-
+-type ImagesService struct {
+-	s *Service
+-}
+-
+-func NewInstancesService(s *Service) *InstancesService {
+-	rs := &InstancesService{s: s}
+-	return rs
+-}
+-
+-type InstancesService struct {
+-	s *Service
+-}
+-
+-func NewLicensesService(s *Service) *LicensesService {
+-	rs := &LicensesService{s: s}
+-	return rs
+-}
+-
+-type LicensesService struct {
+-	s *Service
+-}
+-
+-func NewMachineTypesService(s *Service) *MachineTypesService {
+-	rs := &MachineTypesService{s: s}
+-	return rs
+-}
+-
+-type MachineTypesService struct {
+-	s *Service
+-}
+-
+-func NewNetworksService(s *Service) *NetworksService {
+-	rs := &NetworksService{s: s}
+-	return rs
+-}
+-
+-type NetworksService struct {
+-	s *Service
+-}
+-
+-func NewProjectsService(s *Service) *ProjectsService {
+-	rs := &ProjectsService{s: s}
+-	return rs
+-}
+-
+-type ProjectsService struct {
+-	s *Service
+-}
+-
+-func NewRegionOperationsService(s *Service) *RegionOperationsService {
+-	rs := &RegionOperationsService{s: s}
+-	return rs
+-}
+-
+-type RegionOperationsService struct {
+-	s *Service
+-}
+-
+-func NewRegionsService(s *Service) *RegionsService {
+-	rs := &RegionsService{s: s}
+-	return rs
+-}
+-
+-type RegionsService struct {
+-	s *Service
+-}
+-
+-func NewRoutesService(s *Service) *RoutesService {
+-	rs := &RoutesService{s: s}
+-	return rs
+-}
+-
+-type RoutesService struct {
+-	s *Service
+-}
+-
+-func NewSnapshotsService(s *Service) *SnapshotsService {
+-	rs := &SnapshotsService{s: s}
+-	return rs
+-}
+-
+-type SnapshotsService struct {
+-	s *Service
+-}
+-
+-func NewTargetHttpProxiesService(s *Service) *TargetHttpProxiesService {
+-	rs := &TargetHttpProxiesService{s: s}
+-	return rs
+-}
+-
+-type TargetHttpProxiesService struct {
+-	s *Service
+-}
+-
+-func NewTargetInstancesService(s *Service) *TargetInstancesService {
+-	rs := &TargetInstancesService{s: s}
+-	return rs
+-}
+-
+-type TargetInstancesService struct {
+-	s *Service
+-}
+-
+-func NewTargetPoolsService(s *Service) *TargetPoolsService {
+-	rs := &TargetPoolsService{s: s}
+-	return rs
+-}
+-
+-type TargetPoolsService struct {
+-	s *Service
+-}
+-
+-func NewUrlMapsService(s *Service) *UrlMapsService {
+-	rs := &UrlMapsService{s: s}
+-	return rs
+-}
+-
+-type UrlMapsService struct {
+-	s *Service
+-}
+-
+-func NewZoneOperationsService(s *Service) *ZoneOperationsService {
+-	rs := &ZoneOperationsService{s: s}
+-	return rs
+-}
+-
+-type ZoneOperationsService struct {
+-	s *Service
+-}
+-
+-func NewZonesService(s *Service) *ZonesService {
+-	rs := &ZonesService{s: s}
+-	return rs
+-}
+-
+-type ZonesService struct {
+-	s *Service
+-}
+-
+-type AccessConfig struct {
+-	// Kind: Type of the resource.
+-	Kind string `json:"kind,omitempty"`
+-
+-	// Name: Name of this access configuration.
+-	Name string `json:"name,omitempty"`
+-
+-	// NatIP: An external IP address associated with this instance. Specify
+-	// an unused static IP address available to the project. If not
+-	// specified, the external IP will be drawn from a shared ephemeral
+-	// pool.
+-	NatIP string `json:"natIP,omitempty"`
+-
+-	// Type: Type of configuration. Must be set to "ONE_TO_ONE_NAT". This
+-	// configures port-for-port NAT to the internet.
+-	Type string `json:"type,omitempty"`
+-}
+-
+-type Address struct {
+-	// Address: The IP address represented by this resource.
+-	Address string `json:"address,omitempty"`
+-
+-	// CreationTimestamp: Creation timestamp in RFC3339 text format (output
+-	// only).
+-	CreationTimestamp string `json:"creationTimestamp,omitempty"`
+-
+-	// Description: An optional textual description of the resource;
+-	// provided by the client when the resource is created.
+-	Description string `json:"description,omitempty"`
+-
+-	// Id: Unique identifier for the resource; defined by the server (output
+-	// only).
+-	Id uint64 `json:"id,omitempty,string"`
+-
+-	// Kind: Type of the resource.
+-	Kind string `json:"kind,omitempty"`
+-
+-	// Name: Name of the resource; provided by the client when the resource
+-	// is created. The name must be 1-63 characters long, and comply with
+-	// RFC1035.
+-	Name string `json:"name,omitempty"`
+-
+-	// Region: URL of the region where the regional address resides (output
+-	// only). This field is not applicable to global addresses.
+-	Region string `json:"region,omitempty"`
+-
+-	// SelfLink: Server defined URL for the resource (output only).
+-	SelfLink string `json:"selfLink,omitempty"`
+-
+-	// Status: The status of the address (output only).
+-	Status string `json:"status,omitempty"`
+-
+-	// Users: The resources that are using this address resource.
+-	Users []string `json:"users,omitempty"`
+-}
+-
+-type AddressAggregatedList struct {
+-	// Id: Unique identifier for the resource; defined by the server (output
+-	// only).
+-	Id string `json:"id,omitempty"`
+-
+-	// Items: A map of scoped address lists.
+-	Items map[string]AddressesScopedList `json:"items,omitempty"`
+-
+-	// Kind: Type of resource.
+-	Kind string `json:"kind,omitempty"`
+-
+-	// NextPageToken: A token used to continue a truncated list request
+-	// (output only).
+-	NextPageToken string `json:"nextPageToken,omitempty"`
+-
+-	// SelfLink: Server defined URL for this resource (output only).
+-	SelfLink string `json:"selfLink,omitempty"`
+-}
+-
+-type AddressList struct {
+-	// Id: Unique identifier for the resource; defined by the server (output
+-	// only).
+-	Id string `json:"id,omitempty"`
+-
+-	// Items: The address resources.
+-	Items []*Address `json:"items,omitempty"`
+-
+-	// Kind: Type of resource.
+-	Kind string `json:"kind,omitempty"`
+-
+-	// NextPageToken: A token used to continue a truncated list request
+-	// (output only).
+-	NextPageToken string `json:"nextPageToken,omitempty"`
+-
+-	// SelfLink: Server defined URL for the resource (output only).
+-	SelfLink string `json:"selfLink,omitempty"`
+-}
+-
+-type AddressesScopedList struct {
+-	// Addresses: List of addresses contained in this scope.
+-	Addresses []*Address `json:"addresses,omitempty"`
+-
+-	// Warning: Informational warning which replaces the list of addresses
+-	// when the list is empty.
+-	Warning *AddressesScopedListWarning `json:"warning,omitempty"`
+-}
+-
+-type AddressesScopedListWarning struct {
+-	// Code: The warning type identifier for this warning.
+-	Code string `json:"code,omitempty"`
+-
+-	// Data: Metadata for this warning in 'key: value' format.
+-	Data []*AddressesScopedListWarningData `json:"data,omitempty"`
+-
+-	// Message: Optional human-readable details for this warning.
+-	Message string `json:"message,omitempty"`
+-}
+-
+-type AddressesScopedListWarningData struct {
+-	// Key: A key for the warning data.
+-	Key string `json:"key,omitempty"`
+-
+-	// Value: A warning data value corresponding to the key.
+-	Value string `json:"value,omitempty"`
+-}
+-
+-type AttachedDisk struct {
+-	// AutoDelete: Whether the disk will be auto-deleted when the instance
+-	// is deleted (but not when the disk is detached from the instance).
+-	AutoDelete bool `json:"autoDelete,omitempty"`
+-
+-	// Boot: Indicates that this is a boot disk. VM will use the first
+-	// partition of the disk for its root filesystem.
+-	Boot bool `json:"boot,omitempty"`
+-
+-	// DeviceName: Persistent disk only; must be unique within the instance
+-	// when specified. This represents a unique device name that is
+-	// reflected into the /dev/ tree of a Linux operating system running
+-	// within the instance. If not specified, a default will be chosen by
+-	// the system.
+-	DeviceName string `json:"deviceName,omitempty"`
+-
+-	// Index: A zero-based index to assign to this disk, where 0 is reserved
+-	// for the boot disk. If not specified, the server will choose an
+-	// appropriate value (output only).
+-	Index int64 `json:"index,omitempty"`
+-
+-	// InitializeParams: Initialization parameters.
+-	InitializeParams *AttachedDiskInitializeParams `json:"initializeParams,omitempty"`
+-
+-	// Kind: Type of the resource.
+-	Kind string `json:"kind,omitempty"`
+-
+-	// Licenses: Publicly visible licenses.
+-	Licenses []string `json:"licenses,omitempty"`
+-
+-	// Mode: The mode in which to attach this disk, either "READ_WRITE" or
+-	// "READ_ONLY".
+-	Mode string `json:"mode,omitempty"`
+-
+-	// Source: Persistent disk only; the URL of the persistent disk
+-	// resource.
+-	Source string `json:"source,omitempty"`
+-
+-	// Type: Type of the disk, either "SCRATCH" or "PERSISTENT". Note that
+-	// persistent disks must be created before you can specify them here.
+-	Type string `json:"type,omitempty"`
+-}
+-
+-type AttachedDiskInitializeParams struct {
+-	// DiskName: Name of the disk (when not provided defaults to the name of
+-	// the instance).
+-	DiskName string `json:"diskName,omitempty"`
+-
+-	// DiskSizeGb: Size of the disk in base-2 GB.
+-	DiskSizeGb int64 `json:"diskSizeGb,omitempty,string"`
+-
+-	// DiskType: URL of the disk type resource describing which disk type to
+-	// use to create the disk; provided by the client when the disk is
+-	// created.
+-	DiskType string `json:"diskType,omitempty"`
+-
+-	// SourceImage: The source image used to create this disk.
+-	SourceImage string `json:"sourceImage,omitempty"`
+-}
+-
+-type Backend struct {
+-	// BalancingMode: The balancing mode of this backend, default is
+-	// UTILIZATION.
+-	BalancingMode string `json:"balancingMode,omitempty"`
+-
+-	// CapacityScaler: The multiplier (a value between 0 and 1e6) of the max
+-	// capacity (CPU or RPS, depending on 'balancingMode') the group should
+-	// serve up to. 0 means the group is totally drained. Default value is
+-	// 1. Valid range is [0, 1e6].
+-	CapacityScaler float64 `json:"capacityScaler,omitempty"`
+-
+-	// Description: An optional textual description of the resource, which
+-	// is provided by the client when the resource is created.
+-	Description string `json:"description,omitempty"`
+-
+-	// Group: URL of a zonal Cloud Resource View resource. This resource view
+-	// defines the list of instances that serve traffic. Member virtual
+-	// machine instances from each resource view must live in the same zone
+-	// as the resource view itself.
+-	Group string `json:"group,omitempty"`
+-
+-	// MaxRate: The max RPS of the group. Can be used with either balancing
+-	// mode, but required if RATE mode. For RATE mode, either maxRate or
+-	// maxRatePerInstance must be set.
+-	MaxRate int64 `json:"maxRate,omitempty"`
+-
+-	// MaxRatePerInstance: The max RPS that a single backend instance can
+-	// handle. This is used to calculate the capacity of the group. Can be
+-	// used in either balancing mode. For RATE mode, either maxRate or
+-	// maxRatePerInstance must be set.
+-	MaxRatePerInstance float64 `json:"maxRatePerInstance,omitempty"`
+-
+-	// MaxUtilization: Used when 'balancingMode' is UTILIZATION. This ratio
+-	// defines the CPU utilization target for the group. The default is 0.8.
+-	// Valid range is [0, 1].
+-	MaxUtilization float64 `json:"maxUtilization,omitempty"`
+-}
+-
+-type BackendService struct {
+-	// Backends: The list of backends that serve this BackendService.
+-	Backends []*Backend `json:"backends,omitempty"`
+-
+-	// CreationTimestamp: Creation timestamp in RFC3339 text format (output
+-	// only).
+-	CreationTimestamp string `json:"creationTimestamp,omitempty"`
+-
+-	// Description: An optional textual description of the resource;
+-	// provided by the client when the resource is created.
+-	Description string `json:"description,omitempty"`
+-
+-	// Fingerprint: Fingerprint of this resource. A hash of the contents
+-	// stored in this object. This field is used in optimistic locking. This
+-	// field will be ignored when inserting a BackendService. An up-to-date
+-	// fingerprint must be provided in order to update the BackendService.
+-	Fingerprint string `json:"fingerprint,omitempty"`
+-
+-	// HealthChecks: The list of URLs to the HttpHealthCheck resource for
+-	// health checking this BackendService. Currently at most one health
+-	// check can be specified, and a health check is required.
+-	HealthChecks []string `json:"healthChecks,omitempty"`
+-
+-	// Id: Unique identifier for the resource; defined by the server (output
+-	// only).
+-	Id uint64 `json:"id,omitempty,string"`
+-
+-	// Kind: Type of the resource.
+-	Kind string `json:"kind,omitempty"`
+-
+-	// Name: Name of the resource; provided by the client when the resource
+-	// is created. The name must be 1-63 characters long, and comply with
+-	// RFC1035.
+-	Name string `json:"name,omitempty"`
+-
+-	// Port: The TCP port to connect on the backend. The default value is
+-	// 80.
+-	Port int64 `json:"port,omitempty"`
+-
+-	Protocol string `json:"protocol,omitempty"`
+-
+-	// SelfLink: Server defined URL for the resource (output only).
+-	SelfLink string `json:"selfLink,omitempty"`
+-
+-	// TimeoutSec: How many seconds to wait for the backend before
+-	// considering it a failed request. Default is 30 seconds.
+-	TimeoutSec int64 `json:"timeoutSec,omitempty"`
+-}
+-
+-type BackendServiceGroupHealth struct {
+-	HealthStatus []*HealthStatus `json:"healthStatus,omitempty"`
+-
+-	// Kind: Type of resource.
+-	Kind string `json:"kind,omitempty"`
+-}
+-
+-type BackendServiceList struct {
+-	// Id: Unique identifier for the resource; defined by the server (output
+-	// only).
+-	Id string `json:"id,omitempty"`
+-
+-	// Items: The BackendService resources.
+-	Items []*BackendService `json:"items,omitempty"`
+-
+-	// Kind: Type of resource.
+-	Kind string `json:"kind,omitempty"`
+-
+-	// NextPageToken: A token used to continue a truncated list request
+-	// (output only).
+-	NextPageToken string `json:"nextPageToken,omitempty"`
+-
+-	// SelfLink: Server defined URL for this resource (output only).
+-	SelfLink string `json:"selfLink,omitempty"`
+-}
+-
+-type DeprecationStatus struct {
+-	// Deleted: An optional RFC3339 timestamp on or after which the
+-	// deprecation state of this resource will be changed to DELETED.
+-	Deleted string `json:"deleted,omitempty"`
+-
+-	// Deprecated: An optional RFC3339 timestamp on or after which the
+-	// deprecation state of this resource will be changed to DEPRECATED.
+-	Deprecated string `json:"deprecated,omitempty"`
+-
+-	// Obsolete: An optional RFC3339 timestamp on or after which the
+-	// deprecation state of this resource will be changed to OBSOLETE.
+-	Obsolete string `json:"obsolete,omitempty"`
+-
+-	// Replacement: A URL of the suggested replacement for the deprecated
+-	// resource. The deprecated resource and its replacement must be
+-	// resources of the same kind.
+-	Replacement string `json:"replacement,omitempty"`
+-
+-	// State: The deprecation state. Can be "DEPRECATED", "OBSOLETE", or
+-	// "DELETED". Operations which create a new resource using a
+-	// "DEPRECATED" resource will return successfully, but with a warning
+-	// indicating the deprecated resource and recommending its replacement.
+-	// New uses of "OBSOLETE" or "DELETED" resources will result in an
+-	// error.
+-	State string `json:"state,omitempty"`
+-}
+-
+-type Disk struct {
+-	// CreationTimestamp: Creation timestamp in RFC3339 text format (output
+-	// only).
+-	CreationTimestamp string `json:"creationTimestamp,omitempty"`
+-
+-	// Description: An optional textual description of the resource;
+-	// provided by the client when the resource is created.
+-	Description string `json:"description,omitempty"`
+-
+-	// Id: Unique identifier for the resource; defined by the server (output
+-	// only).
+-	Id uint64 `json:"id,omitempty,string"`
+-
+-	// Kind: Type of the resource.
+-	Kind string `json:"kind,omitempty"`
+-
+-	// Licenses: Publicly visible licenses.
+-	Licenses []string `json:"licenses,omitempty"`
+-
+-	// Name: Name of the resource; provided by the client when the resource
+-	// is created. The name must be 1-63 characters long, and comply with
+-	// RFC1035.
+-	Name string `json:"name,omitempty"`
+-
+-	// Options: Internal use only.
+-	Options string `json:"options,omitempty"`
+-
+-	// SelfLink: Server defined URL for the resource (output only).
+-	SelfLink string `json:"selfLink,omitempty"`
+-
+-	// SizeGb: Size of the persistent disk, specified in GB. This parameter
+-	// is optional when creating a disk from a disk image or a snapshot,
+-	// otherwise it is required.
+-	SizeGb int64 `json:"sizeGb,omitempty,string"`
+-
+-	// SourceImage: The source image used to create this disk. Once the
+-	// source image has been deleted from the system, this field will not be
+-	// set, even if an image with the same name has been re-created.
+-	SourceImage string `json:"sourceImage,omitempty"`
+-
+-	// SourceImageId: The 'id' value of the image used to create this disk.
+-	// This value may be used to determine whether the disk was created from
+-	// the current or a previous instance of a given image.
+-	SourceImageId string `json:"sourceImageId,omitempty"`
+-
+-	// SourceSnapshot: The source snapshot used to create this disk. Once
+-	// the source snapshot has been deleted from the system, this field will
+-	// be cleared, and will not be set even if a snapshot with the same name
+-	// has been re-created.
+-	SourceSnapshot string `json:"sourceSnapshot,omitempty"`
+-
+-	// SourceSnapshotId: The 'id' value of the snapshot used to create this
+-	// disk. This value may be used to determine whether the disk was
+-	// created from the current or a previous instance of a given disk
+-	// snapshot.
+-	SourceSnapshotId string `json:"sourceSnapshotId,omitempty"`
+-
+-	// Status: The status of disk creation (output only).
+-	Status string `json:"status,omitempty"`
+-
+-	// Type: URL of the disk type resource describing which disk type to use
+-	// to create the disk; provided by the client when the disk is created.
+-	Type string `json:"type,omitempty"`
+-
+-	// Zone: URL of the zone where the disk resides (output only).
+-	Zone string `json:"zone,omitempty"`
+-}
+-
+-type DiskAggregatedList struct {
+-	// Id: Unique identifier for the resource; defined by the server (output
+-	// only).
+-	Id string `json:"id,omitempty"`
+-
+-	// Items: A map of scoped disk lists.
+-	Items map[string]DisksScopedList `json:"items,omitempty"`
+-
+-	// Kind: Type of resource.
+-	Kind string `json:"kind,omitempty"`
+-
+-	// NextPageToken: A token used to continue a truncated list request
+-	// (output only).
+-	NextPageToken string `json:"nextPageToken,omitempty"`
+-
+-	// SelfLink: Server defined URL for this resource (output only).
+-	SelfLink string `json:"selfLink,omitempty"`
+-}
+-
+-type DiskList struct {
+-	// Id: Unique identifier for the resource; defined by the server (output
+-	// only).
+-	Id string `json:"id,omitempty"`
+-
+-	// Items: The persistent disk resources.
+-	Items []*Disk `json:"items,omitempty"`
+-
+-	// Kind: Type of resource.
+-	Kind string `json:"kind,omitempty"`
+-
+-	// NextPageToken: A token used to continue a truncated list request
+-	// (output only).
+-	NextPageToken string `json:"nextPageToken,omitempty"`
+-
+-	// SelfLink: Server defined URL for this resource (output only).
+-	SelfLink string `json:"selfLink,omitempty"`
+-}
+-
+-type DiskType struct {
+-	// CreationTimestamp: Creation timestamp in RFC3339 text format (output
+-	// only).
+-	CreationTimestamp string `json:"creationTimestamp,omitempty"`
+-
+-	// Deprecated: The deprecation status associated with this disk type.
+-	Deprecated *DeprecationStatus `json:"deprecated,omitempty"`
+-
+-	// Description: An optional textual description of the resource.
+-	Description string `json:"description,omitempty"`
+-
+-	// Id: Unique identifier for the resource; defined by the server (output
+-	// only).
+-	Id uint64 `json:"id,omitempty,string"`
+-
+-	// Kind: Type of the resource.
+-	Kind string `json:"kind,omitempty"`
+-
+-	// Name: Name of the resource.
+-	Name string `json:"name,omitempty"`
+-
+-	// SelfLink: Server defined URL for the resource (output only).
+-	SelfLink string `json:"selfLink,omitempty"`
+-
+-	// ValidDiskSize: An optional textual description of the valid disk
+-	// size, e.g., "10GB-10TB".
+-	ValidDiskSize string `json:"validDiskSize,omitempty"`
+-
+-	// Zone: URL of the zone where the disk type resides (output only).
+-	Zone string `json:"zone,omitempty"`
+-}
+-
+-type DiskTypeAggregatedList struct {
+-	// Id: Unique identifier for the resource; defined by the server (output
+-	// only).
+-	Id string `json:"id,omitempty"`
+-
+-	// Items: A map of scoped disk type lists.
+-	Items map[string]DiskTypesScopedList `json:"items,omitempty"`
+-
+-	// Kind: Type of resource.
+-	Kind string `json:"kind,omitempty"`
+-
+-	// NextPageToken: A token used to continue a truncated list request
+-	// (output only).
+-	NextPageToken string `json:"nextPageToken,omitempty"`
+-
+-	// SelfLink: Server defined URL for this resource (output only).
+-	SelfLink string `json:"selfLink,omitempty"`
+-}
+-
+-type DiskTypeList struct {
+-	// Id: Unique identifier for the resource; defined by the server (output
+-	// only).
+-	Id string `json:"id,omitempty"`
+-
+-	// Items: The disk type resources.
+-	Items []*DiskType `json:"items,omitempty"`
+-
+-	// Kind: Type of resource.
+-	Kind string `json:"kind,omitempty"`
+-
+-	// NextPageToken: A token used to continue a truncated list request
+-	// (output only).
+-	NextPageToken string `json:"nextPageToken,omitempty"`
+-
+-	// SelfLink: Server defined URL for this resource (output only).
+-	SelfLink string `json:"selfLink,omitempty"`
+-}
+-
+-type DiskTypesScopedList struct {
+-	// DiskTypes: List of disk types contained in this scope.
+-	DiskTypes []*DiskType `json:"diskTypes,omitempty"`
+-
+-	// Warning: Informational warning which replaces the list of disk types
+-	// when the list is empty.
+-	Warning *DiskTypesScopedListWarning `json:"warning,omitempty"`
+-}
+-
+-type DiskTypesScopedListWarning struct {
+-	// Code: The warning type identifier for this warning.
+-	Code string `json:"code,omitempty"`
+-
+-	// Data: Metadata for this warning in 'key: value' format.
+-	Data []*DiskTypesScopedListWarningData `json:"data,omitempty"`
+-
+-	// Message: Optional human-readable details for this warning.
+-	Message string `json:"message,omitempty"`
+-}
+-
+-type DiskTypesScopedListWarningData struct {
+-	// Key: A key for the warning data.
+-	Key string `json:"key,omitempty"`
+-
+-	// Value: A warning data value corresponding to the key.
+-	Value string `json:"value,omitempty"`
+-}
+-
+-type DisksScopedList struct {
+-	// Disks: List of disks contained in this scope.
+-	Disks []*Disk `json:"disks,omitempty"`
+-
+-	// Warning: Informational warning which replaces the list of disks when
+-	// the list is empty.
+-	Warning *DisksScopedListWarning `json:"warning,omitempty"`
+-}
+-
+-type DisksScopedListWarning struct {
+-	// Code: The warning type identifier for this warning.
+-	Code string `json:"code,omitempty"`
+-
+-	// Data: Metadata for this warning in 'key: value' format.
+-	Data []*DisksScopedListWarningData `json:"data,omitempty"`
+-
+-	// Message: Optional human-readable details for this warning.
+-	Message string `json:"message,omitempty"`
+-}
+-
+-type DisksScopedListWarningData struct {
+-	// Key: A key for the warning data.
+-	Key string `json:"key,omitempty"`
+-
+-	// Value: A warning data value corresponding to the key.
+-	Value string `json:"value,omitempty"`
+-}
+-
+-type Firewall struct {
+-	// Allowed: The list of rules specified by this firewall. Each rule
+-	// specifies a protocol and port-range tuple that describes a permitted
+-	// connection.
+-	Allowed []*FirewallAllowed `json:"allowed,omitempty"`
+-
+-	// CreationTimestamp: Creation timestamp in RFC3339 text format (output
+-	// only).
+-	CreationTimestamp string `json:"creationTimestamp,omitempty"`
+-
+-	// Description: An optional textual description of the resource;
+-	// provided by the client when the resource is created.
+-	Description string `json:"description,omitempty"`
+-
+-	// Id: Unique identifier for the resource; defined by the server (output
+-	// only).
+-	Id uint64 `json:"id,omitempty,string"`
+-
+-	// Kind: Type of the resource.
+-	Kind string `json:"kind,omitempty"`
+-
+-	// Name: Name of the resource; provided by the client when the resource
+-	// is created. The name must be 1-63 characters long, and comply with
+-	// RFC1035.
+-	Name string `json:"name,omitempty"`
+-
+-	// Network: URL of the network to which this firewall is applied;
+-	// provided by the client when the firewall is created.
+-	Network string `json:"network,omitempty"`
+-
+-	// SelfLink: Server defined URL for the resource (output only).
+-	SelfLink string `json:"selfLink,omitempty"`
+-
+-	// SourceRanges: A list of IP address blocks expressed in CIDR format
+-	// which this rule applies to. One or both of sourceRanges and
+-	// sourceTags may be set; an inbound connection is allowed if either the
+-	// range or the tag of the source matches.
+-	SourceRanges []string `json:"sourceRanges,omitempty"`
+-
+-	// SourceTags: A list of instance tags which this rule applies to. One
+-	// or both of sourceRanges and sourceTags may be set; an inbound
+-	// connection is allowed if either the range or the tag of the source
+-	// matches.
+-	SourceTags []string `json:"sourceTags,omitempty"`
+-
+-	// TargetTags: A list of instance tags indicating sets of instances
+-	// located on network which may make network connections as specified in
+-	// allowed. If no targetTags are specified, the firewall rule applies to
+-	// all instances on the specified network.
+-	TargetTags []string `json:"targetTags,omitempty"`
+-}
+-
+-type FirewallAllowed struct {
+-	// IPProtocol: Required; this is the IP protocol that is allowed for
+-	// this rule. This can either be one of the following well known
+-	// protocol strings ["tcp", "udp", "icmp", "esp", "ah", "sctp"], or the
+-	// IP protocol number.
+-	IPProtocol string `json:"IPProtocol,omitempty"`
+-
+-	// Ports: An optional list of ports which are allowed. It is an error to
+-	// specify this for any protocol that isn't UDP or TCP. Each entry must
+-	// be either an integer or a range. If not specified, connections
+-	// through any port are allowed.
+-	//
+-	// Example inputs include: ["22"],
+-	// ["80","443"] and ["12345-12349"].
+-	Ports []string `json:"ports,omitempty"`
+-}
+-
+-type FirewallList struct {
+-	// Id: Unique identifier for the resource; defined by the server (output
+-	// only).
+-	Id string `json:"id,omitempty"`
+-
+-	// Items: The firewall resources.
+-	Items []*Firewall `json:"items,omitempty"`
+-
+-	// Kind: Type of resource.
+-	Kind string `json:"kind,omitempty"`
+-
+-	// NextPageToken: A token used to continue a truncated list request
+-	// (output only).
+-	NextPageToken string `json:"nextPageToken,omitempty"`
+-
+-	// SelfLink: Server defined URL for this resource (output only).
+-	SelfLink string `json:"selfLink,omitempty"`
+-}
+-
+-type ForwardingRule struct {
+-	// IPAddress: Value of the reserved IP address that this forwarding rule
+-	// is serving on behalf of. For global forwarding rules, the address
+-	// must be a global IP; for regional forwarding rules, the address must
+-	// live in the same region as the forwarding rule. If left empty
+-	// (default value), an ephemeral IP from the same scope (global or
+-	// regional) will be assigned.
+-	IPAddress string `json:"IPAddress,omitempty"`
+-
+-	// IPProtocol: The IP protocol to which this rule applies, valid options
+-	// are 'TCP', 'UDP', 'ESP', 'AH' or 'SCTP'
+-	IPProtocol string `json:"IPProtocol,omitempty"`
+-
+-	// CreationTimestamp: Creation timestamp in RFC3339 text format (output
+-	// only).
+-	CreationTimestamp string `json:"creationTimestamp,omitempty"`
+-
+-	// Description: An optional textual description of the resource;
+-	// provided by the client when the resource is created.
+-	Description string `json:"description,omitempty"`
+-
+-	// Id: Unique identifier for the resource; defined by the server (output
+-	// only).
+-	Id uint64 `json:"id,omitempty,string"`
+-
+-	// Kind: Type of the resource.
+-	Kind string `json:"kind,omitempty"`
+-
+-	// Name: Name of the resource; provided by the client when the resource
+-	// is created. The name must be 1-63 characters long, and comply with
+-	// RFC1035.
+-	Name string `json:"name,omitempty"`
+-
+-	// PortRange: Applicable only when 'IPProtocol' is 'TCP', 'UDP' or
+-	// 'SCTP', only packets addressed to ports in the specified range will
+-	// be forwarded to 'target'. If 'portRange' is left empty (default
+-	// value), all ports are forwarded. Forwarding rules with the same
+-	// [IPAddress, IPProtocol] pair must have disjoint port ranges.
+-	// @pattern: \d+(?:-\d+)?
+-	PortRange string `json:"portRange,omitempty"`
+-
+-	// Region: URL of the region where the regional forwarding rule resides
+-	// (output only). This field is not applicable to global forwarding
+-	// rules.
+-	Region string `json:"region,omitempty"`
+-
+-	// SelfLink: Server defined URL for the resource (output only).
+-	SelfLink string `json:"selfLink,omitempty"`
+-
+-	// Target: The URL of the target resource to receive the matched
+-	// traffic. For regional forwarding rules, this target must live in the
+-	// same region as the forwarding rule. For global forwarding rules, this
+-	// target must be a global TargetHttpProxy resource.
+-	Target string `json:"target,omitempty"`
+-}
+-
+-type ForwardingRuleAggregatedList struct {
+-	// Id: Unique identifier for the resource; defined by the server (output
+-	// only).
+-	Id string `json:"id,omitempty"`
+-
+-	// Items: A map of scoped forwarding rule lists.
+-	Items map[string]ForwardingRulesScopedList `json:"items,omitempty"`
+-
+-	// Kind: Type of resource.
+-	Kind string `json:"kind,omitempty"`
+-
+-	// NextPageToken: A token used to continue a truncated list request
+-	// (output only).
+-	NextPageToken string `json:"nextPageToken,omitempty"`
+-
+-	// SelfLink: Server defined URL for this resource (output only).
+-	SelfLink string `json:"selfLink,omitempty"`
+-}
+-
+-type ForwardingRuleList struct {
+-	// Id: Unique identifier for the resource; defined by the server (output
+-	// only).
+-	Id string `json:"id,omitempty"`
+-
+-	// Items: The ForwardingRule resources.
+-	Items []*ForwardingRule `json:"items,omitempty"`
+-
+-	// Kind: Type of resource.
+-	Kind string `json:"kind,omitempty"`
+-
+-	// NextPageToken: A token used to continue a truncated list request
+-	// (output only).
+-	NextPageToken string `json:"nextPageToken,omitempty"`
+-
+-	// SelfLink: Server defined URL for this resource (output only).
+-	SelfLink string `json:"selfLink,omitempty"`
+-}
+-
+-type ForwardingRulesScopedList struct {
+-	// ForwardingRules: List of forwarding rules contained in this scope.
+-	ForwardingRules []*ForwardingRule `json:"forwardingRules,omitempty"`
+-
+-	// Warning: Informational warning which replaces the list of forwarding
+-	// rules when the list is empty.
+-	Warning *ForwardingRulesScopedListWarning `json:"warning,omitempty"`
+-}
+-
+-type ForwardingRulesScopedListWarning struct {
+-	// Code: The warning type identifier for this warning.
+-	Code string `json:"code,omitempty"`
+-
+-	// Data: Metadata for this warning in 'key: value' format.
+-	Data []*ForwardingRulesScopedListWarningData `json:"data,omitempty"`
+-
+-	// Message: Optional human-readable details for this warning.
+-	Message string `json:"message,omitempty"`
+-}
+-
+-type ForwardingRulesScopedListWarningData struct {
+-	// Key: A key for the warning data.
+-	Key string `json:"key,omitempty"`
+-
+-	// Value: A warning data value corresponding to the key.
+-	Value string `json:"value,omitempty"`
+-}
+-
+-type HealthCheckReference struct {
+-	HealthCheck string `json:"healthCheck,omitempty"`
+-}
+-
+-type HealthStatus struct {
+-	// HealthState: Health state of the instance.
+-	HealthState string `json:"healthState,omitempty"`
+-
+-	// Instance: URL of the instance resource.
+-	Instance string `json:"instance,omitempty"`
+-
+-	// IpAddress: The IP address represented by this resource.
+-	IpAddress string `json:"ipAddress,omitempty"`
+-}
+-
+-type HostRule struct {
+-	Description string `json:"description,omitempty"`
+-
+-	// Hosts: The list of host patterns to match. They must be FQDN except
+-	// that it may start with '*.' or '*-'. The '*' acts like a glob and
+-	// will match any string of atoms (separated by '.'s and '-'s) to the
+-	// left.
+-	Hosts []string `json:"hosts,omitempty"`
+-
+-	// PathMatcher: The name of the PathMatcher to match the path portion of
+-	// the URL, if this HostRule matches the URL's host portion.
+-	PathMatcher string `json:"pathMatcher,omitempty"`
+-}
+-
+-type HttpHealthCheck struct {
+-	// CheckIntervalSec: How often (in seconds) to send a health check. The
+-	// default value is 5 seconds.
+-	CheckIntervalSec int64 `json:"checkIntervalSec,omitempty"`
+-
+-	// CreationTimestamp: Creation timestamp in RFC3339 text format (output
+-	// only).
+-	CreationTimestamp string `json:"creationTimestamp,omitempty"`
+-
+-	// Description: An optional textual description of the resource;
+-	// provided by the client when the resource is created.
+-	Description string `json:"description,omitempty"`
+-
+-	// HealthyThreshold: A so-far unhealthy VM will be marked healthy after
+-	// this many consecutive successes. The default value is 2.
+-	HealthyThreshold int64 `json:"healthyThreshold,omitempty"`
+-
+-	// Host: The value of the host header in the HTTP health check request.
+-	// If left empty (default value), the public IP on behalf of which this
+-	// health check is performed will be used.
+-	Host string `json:"host,omitempty"`
+-
+-	// Id: Unique identifier for the resource; defined by the server (output
+-	// only).
+-	Id uint64 `json:"id,omitempty,string"`
+-
+-	// Kind: Type of the resource.
+-	Kind string `json:"kind,omitempty"`
+-
+-	// Name: Name of the resource; provided by the client when the resource
+-	// is created. The name must be 1-63 characters long, and comply with
+-	// RFC1035.
+-	Name string `json:"name,omitempty"`
+-
+-	// Port: The TCP port number for the HTTP health check request. The
+-	// default value is 80.
+-	Port int64 `json:"port,omitempty"`
+-
+-	// RequestPath: The request path of the HTTP health check request. The
+-	// default value is "/".
+-	RequestPath string `json:"requestPath,omitempty"`
+-
+-	// SelfLink: Server defined URL for the resource (output only).
+-	SelfLink string `json:"selfLink,omitempty"`
+-
+-	// TimeoutSec: How long (in seconds) to wait before claiming failure.
+-	// The default value is 5 seconds.
+-	TimeoutSec int64 `json:"timeoutSec,omitempty"`
+-
+-	// UnhealthyThreshold: A so-far healthy VM will be marked unhealthy
+-	// after this many consecutive failures. The default value is 2.
+-	UnhealthyThreshold int64 `json:"unhealthyThreshold,omitempty"`
+-}
+-
+-type HttpHealthCheckList struct {
+-	// Id: Unique identifier for the resource; defined by the server (output
+-	// only).
+-	Id string `json:"id,omitempty"`
+-
+-	// Items: The HttpHealthCheck resources.
+-	Items []*HttpHealthCheck `json:"items,omitempty"`
+-
+-	// Kind: Type of resource.
+-	Kind string `json:"kind,omitempty"`
+-
+-	// NextPageToken: A token used to continue a truncated list request
+-	// (output only).
+-	NextPageToken string `json:"nextPageToken,omitempty"`
+-
+-	// SelfLink: Server defined URL for this resource (output only).
+-	SelfLink string `json:"selfLink,omitempty"`
+-}
+-
+-type Image struct {
+-	// ArchiveSizeBytes: Size of the image tar.gz archive stored in Google
+-	// Cloud Storage (in bytes).
+-	ArchiveSizeBytes int64 `json:"archiveSizeBytes,omitempty,string"`
+-
+-	// CreationTimestamp: Creation timestamp in RFC3339 text format (output
+-	// only).
+-	CreationTimestamp string `json:"creationTimestamp,omitempty"`
+-
+-	// Deprecated: The deprecation status associated with this image.
+-	Deprecated *DeprecationStatus `json:"deprecated,omitempty"`
+-
+-	// Description: Textual description of the resource; provided by the
+-	// client when the resource is created.
+-	Description string `json:"description,omitempty"`
+-
+-	// DiskSizeGb: Size of the image when restored onto a disk (in GiB).
+-	DiskSizeGb int64 `json:"diskSizeGb,omitempty,string"`
+-
+-	// Id: Unique identifier for the resource; defined by the server (output
+-	// only).
+-	Id uint64 `json:"id,omitempty,string"`
+-
+-	// Kind: Type of the resource.
+-	Kind string `json:"kind,omitempty"`
+-
+-	// Licenses: Public visible licenses.
+-	Licenses []string `json:"licenses,omitempty"`
+-
+-	// Name: Name of the resource; provided by the client when the resource
+-	// is created. The name must be 1-63 characters long, and comply with
+-	// RFC1035.
+-	Name string `json:"name,omitempty"`
+-
+-	// RawDisk: The raw disk image parameters.
+-	RawDisk *ImageRawDisk `json:"rawDisk,omitempty"`
+-
+-	// SelfLink: Server defined URL for the resource (output only).
+-	SelfLink string `json:"selfLink,omitempty"`
+-
+-	// SourceDisk: The source disk used to create this image. Once the
+-	// source disk has been deleted from the system, this field will be
+-	// cleared, and will not be set even if a disk with the same name has
+-	// been re-created.
+-	SourceDisk string `json:"sourceDisk,omitempty"`
+-
+-	// SourceDiskId: The 'id' value of the disk used to create this image.
+-	// This value may be used to determine whether the image was taken from
+-	// the current or a previous instance of a given disk name.
+-	SourceDiskId string `json:"sourceDiskId,omitempty"`
+-
+-	// SourceType: Must be "RAW"; provided by the client when the disk image
+-	// is created.
+-	SourceType string `json:"sourceType,omitempty"`
+-
+-	// Status: Status of the image (output only). It will be one of the
+-	// following: READY - after the image has been successfully created and
+-	// is ready for use; FAILED - if creating the image fails for some
+-	// reason; PENDING - the image creation is in progress. An image can be
+-	// used to create other resources such as instances only after the
+-	// image has been successfully created and the status is set to READY.
+-	Status string `json:"status,omitempty"`
+-}
+-
+-type ImageRawDisk struct {
+-	// ContainerType: The format used to encode and transmit the block
+-	// device. Should be TAR. This is just a container and transmission
+-	// format and not a runtime format. Provided by the client when the disk
+-	// image is created.
+-	ContainerType string `json:"containerType,omitempty"`
+-
+-	// Sha1Checksum: An optional SHA1 checksum of the disk image before
+-	// unpackaging; provided by the client when the disk image is created.
+-	Sha1Checksum string `json:"sha1Checksum,omitempty"`
+-
+-	// Source: The full Google Cloud Storage URL where the disk image is
+-	// stored; provided by the client when the disk image is created.
+-	Source string `json:"source,omitempty"`
+-}
+-
+-type ImageList struct {
+-	// Id: Unique identifier for the resource; defined by the server (output
+-	// only).
+-	Id string `json:"id,omitempty"`
+-
+-	// Items: The disk image resources.
+-	Items []*Image `json:"items,omitempty"`
+-
+-	// Kind: Type of resource.
+-	Kind string `json:"kind,omitempty"`
+-
+-	// NextPageToken: A token used to continue a truncated list request
+-	// (output only).
+-	NextPageToken string `json:"nextPageToken,omitempty"`
+-
+-	// SelfLink: Server defined URL for this resource (output only).
+-	SelfLink string `json:"selfLink,omitempty"`
+-}
+-
+-type Instance struct {
+-	// CanIpForward: Allows this instance to send packets with source IP
+-	// addresses other than its own and receive packets with destination IP
+-	// addresses other than its own. If this instance will be used as an IP
+-	// gateway or it will be set as the next-hop in a Route resource, say
+-	// true. If unsure, leave this set to false.
+-	CanIpForward bool `json:"canIpForward,omitempty"`
+-
+-	// CreationTimestamp: Creation timestamp in RFC3339 text format (output
+-	// only).
+-	CreationTimestamp string `json:"creationTimestamp,omitempty"`
+-
+-	// Description: An optional textual description of the resource;
+-	// provided by the client when the resource is created.
+-	Description string `json:"description,omitempty"`
+-
+-	// Disks: Array of disks associated with this instance. Persistent disks
+-	// must be created before you can assign them.
+-	Disks []*AttachedDisk `json:"disks,omitempty"`
+-
+-	// Id: Unique identifier for the resource; defined by the server (output
+-	// only).
+-	Id uint64 `json:"id,omitempty,string"`
+-
+-	// Kind: Type of the resource.
+-	Kind string `json:"kind,omitempty"`
+-
+-	// MachineType: URL of the machine type resource describing which
+-	// machine type to use to host the instance; provided by the client when
+-	// the instance is created.
+-	MachineType string `json:"machineType,omitempty"`
+-
+-	// Metadata: Metadata key/value pairs assigned to this instance.
+-	// Consists of custom metadata or predefined keys; see Instance
+-	// documentation for more information.
+-	Metadata *Metadata `json:"metadata,omitempty"`
+-
+-	// Name: Name of the resource; provided by the client when the resource
+-	// is created. The name must be 1-63 characters long, and comply with
+-	// RFC1035.
+-	Name string `json:"name,omitempty"`
+-
+-	// NetworkInterfaces: Array of configurations for this interface. This
+-	// specifies how this interface is configured to interact with other
+-	// network services, such as connecting to the internet. Currently,
+-	// ONE_TO_ONE_NAT is the only access config supported. If there are no
+-	// accessConfigs specified, then this instance will have no external
+-	// internet access.
+-	NetworkInterfaces []*NetworkInterface `json:"networkInterfaces,omitempty"`
+-
+-	// Scheduling: Scheduling options for this instance.
+-	Scheduling *Scheduling `json:"scheduling,omitempty"`
+-
+-	// SelfLink: Server defined URL for this resource (output only).
+-	SelfLink string `json:"selfLink,omitempty"`
+-
+-	// ServiceAccounts: A list of service accounts each with specified
+-	// scopes, for which access tokens are to be made available to the
+-	// instance through metadata queries.
+-	ServiceAccounts []*ServiceAccount `json:"serviceAccounts,omitempty"`
+-
+-	// Status: Instance status. One of the following values: "PROVISIONING",
+-	// "STAGING", "RUNNING", "STOPPING", "STOPPED", "TERMINATED" (output
+-	// only).
+-	Status string `json:"status,omitempty"`
+-
+-	// StatusMessage: An optional, human-readable explanation of the status
+-	// (output only).
+-	StatusMessage string `json:"statusMessage,omitempty"`
+-
+-	// Tags: A list of tags to be applied to this instance. Used to identify
+-	// valid sources or targets for network firewalls. Provided by the
+-	// client on instance creation. The tags can be later modified by the
+-	// setTags method. Each tag within the list must comply with RFC1035.
+-	Tags *Tags `json:"tags,omitempty"`
+-
+-	// Zone: URL of the zone where the instance resides (output only).
+-	Zone string `json:"zone,omitempty"`
+-}
+-
+-type InstanceAggregatedList struct {
+-	// Id: Unique identifier for the resource; defined by the server (output
+-	// only).
+-	Id string `json:"id,omitempty"`
+-
+-	// Items: A map of scoped instance lists.
+-	Items map[string]InstancesScopedList `json:"items,omitempty"`
+-
+-	// Kind: Type of resource.
+-	Kind string `json:"kind,omitempty"`
+-
+-	// NextPageToken: A token used to continue a truncated list request
+-	// (output only).
+-	NextPageToken string `json:"nextPageToken,omitempty"`
+-
+-	// SelfLink: Server defined URL for this resource (output only).
+-	SelfLink string `json:"selfLink,omitempty"`
+-}
+-
+-type InstanceList struct {
+-	// Id: Unique identifier for the resource; defined by the server (output
+-	// only).
+-	Id string `json:"id,omitempty"`
+-
+-	// Items: A list of instance resources.
+-	Items []*Instance `json:"items,omitempty"`
+-
+-	// Kind: Type of resource.
+-	Kind string `json:"kind,omitempty"`
+-
+-	// NextPageToken: A token used to continue a truncated list request
+-	// (output only).
+-	NextPageToken string `json:"nextPageToken,omitempty"`
+-
+-	// SelfLink: Server defined URL for this resource (output only).
+-	SelfLink string `json:"selfLink,omitempty"`
+-}
+-
+-type InstanceReference struct {
+-	Instance string `json:"instance,omitempty"`
+-}
+-
+-type InstancesScopedList struct {
+-	// Instances: List of instances contained in this scope.
+-	Instances []*Instance `json:"instances,omitempty"`
+-
+-	// Warning: Informational warning which replaces the list of instances
+-	// when the list is empty.
+-	Warning *InstancesScopedListWarning `json:"warning,omitempty"`
+-}
+-
+-type InstancesScopedListWarning struct {
+-	// Code: The warning type identifier for this warning.
+-	Code string `json:"code,omitempty"`
+-
+-	// Data: Metadata for this warning in 'key: value' format.
+-	Data []*InstancesScopedListWarningData `json:"data,omitempty"`
+-
+-	// Message: Optional human-readable details for this warning.
+-	Message string `json:"message,omitempty"`
+-}
+-
+-type InstancesScopedListWarningData struct {
+-	// Key: A key for the warning data.
+-	Key string `json:"key,omitempty"`
+-
+-	// Value: A warning data value corresponding to the key.
+-	Value string `json:"value,omitempty"`
+-}
+-
+-type License struct {
+-	// Kind: Identifies what kind of resource this is. Value: the fixed
+-	// string "compute#license".
+-	Kind string `json:"kind,omitempty"`
+-
+-	// Name: Name of the resource; provided by the client when the resource
+-	// is created. The name must be 1-63 characters long, and comply with
+-	// RFC1035.
+-	Name string `json:"name,omitempty"`
+-
+-	// SelfLink: Server defined URL for the resource (output only).
+-	SelfLink string `json:"selfLink,omitempty"`
+-}
+-
+-type MachineType struct {
+-	// CreationTimestamp: Creation timestamp in RFC3339 text format (output
+-	// only).
+-	CreationTimestamp string `json:"creationTimestamp,omitempty"`
+-
+-	// Deprecated: The deprecation status associated with this machine type.
+-	Deprecated *DeprecationStatus `json:"deprecated,omitempty"`
+-
+-	// Description: An optional textual description of the resource.
+-	Description string `json:"description,omitempty"`
+-
+-	// GuestCpus: Count of CPUs exposed to the instance.
+-	GuestCpus int64 `json:"guestCpus,omitempty"`
+-
+-	// Id: Unique identifier for the resource; defined by the server (output
+-	// only).
+-	Id uint64 `json:"id,omitempty,string"`
+-
+-	// ImageSpaceGb: Space allotted for the image, defined in GB.
+-	ImageSpaceGb int64 `json:"imageSpaceGb,omitempty"`
+-
+-	// Kind: Type of the resource.
+-	Kind string `json:"kind,omitempty"`
+-
+-	// MaximumPersistentDisks: Maximum persistent disks allowed.
+-	MaximumPersistentDisks int64 `json:"maximumPersistentDisks,omitempty"`
+-
+-	// MaximumPersistentDisksSizeGb: Maximum total persistent disks size
+-	// (GB) allowed.
+-	MaximumPersistentDisksSizeGb int64 `json:"maximumPersistentDisksSizeGb,omitempty,string"`
+-
+-	// MemoryMb: Physical memory assigned to the instance, defined in MB.
+-	MemoryMb int64 `json:"memoryMb,omitempty"`
+-
+-	// Name: Name of the resource.
+-	Name string `json:"name,omitempty"`
+-
+-	// ScratchDisks: List of extended scratch disks assigned to the
+-	// instance.
+-	ScratchDisks []*MachineTypeScratchDisks `json:"scratchDisks,omitempty"`
+-
+-	// SelfLink: Server defined URL for the resource (output only).
+-	SelfLink string `json:"selfLink,omitempty"`
+-
+-	// Zone: URL of the zone where the machine type resides (output only).
+-	Zone string `json:"zone,omitempty"`
+-}
+-
+-type MachineTypeScratchDisks struct {
+-	// DiskGb: Size of the scratch disk, defined in GB.
+-	DiskGb int64 `json:"diskGb,omitempty"`
+-}
+-
+-type MachineTypeAggregatedList struct {
+-	// Id: Unique identifier for the resource; defined by the server (output
+-	// only).
+-	Id string `json:"id,omitempty"`
+-
+-	// Items: A map of scoped machine type lists.
+-	Items map[string]MachineTypesScopedList `json:"items,omitempty"`
+-
+-	// Kind: Type of resource.
+-	Kind string `json:"kind,omitempty"`
+-
+-	// NextPageToken: A token used to continue a truncated list request
+-	// (output only).
+-	NextPageToken string `json:"nextPageToken,omitempty"`
+-
+-	// SelfLink: Server defined URL for this resource (output only).
+-	SelfLink string `json:"selfLink,omitempty"`
+-}
+-
+-type MachineTypeList struct {
+-	// Id: Unique identifier for the resource; defined by the server (output
+-	// only).
+-	Id string `json:"id,omitempty"`
+-
+-	// Items: The machine type resources.
+-	Items []*MachineType `json:"items,omitempty"`
+-
+-	// Kind: Type of resource.
+-	Kind string `json:"kind,omitempty"`
+-
+-	// NextPageToken: A token used to continue a truncated list request
+-	// (output only).
+-	NextPageToken string `json:"nextPageToken,omitempty"`
+-
+-	// SelfLink: Server defined URL for this resource (output only).
+-	SelfLink string `json:"selfLink,omitempty"`
+-}
+-
+-type MachineTypesScopedList struct {
+-	// MachineTypes: List of machine types contained in this scope.
+-	MachineTypes []*MachineType `json:"machineTypes,omitempty"`
+-
+-	// Warning: Informational warning which replaces the list of machine
+-	// types when the list is empty.
+-	Warning *MachineTypesScopedListWarning `json:"warning,omitempty"`
+-}
+-
+-type MachineTypesScopedListWarning struct {
+-	// Code: The warning type identifier for this warning.
+-	Code string `json:"code,omitempty"`
+-
+-	// Data: Metadata for this warning in 'key: value' format.
+-	Data []*MachineTypesScopedListWarningData `json:"data,omitempty"`
+-
+-	// Message: Optional human-readable details for this warning.
+-	Message string `json:"message,omitempty"`
+-}
+-
+-type MachineTypesScopedListWarningData struct {
+-	// Key: A key for the warning data.
+-	Key string `json:"key,omitempty"`
+-
+-	// Value: A warning data value corresponding to the key.
+-	Value string `json:"value,omitempty"`
+-}
+-
+-type Metadata struct {
+-	// Fingerprint: Fingerprint of this resource. A hash of the metadata's
+-	// contents. This field is used for optimistic locking. An up-to-date
+-	// metadata fingerprint must be provided in order to modify metadata.
+-	Fingerprint string `json:"fingerprint,omitempty"`
+-
+-	// Items: Array of key/value pairs. The total size of all keys and
+-	// values must be less than 512 KB.
+-	Items []*MetadataItems `json:"items,omitempty"`
+-
+-	// Kind: Type of the resource.
+-	Kind string `json:"kind,omitempty"`
+-}
+-
+-type MetadataItems struct {
+-	// Key: Key for the metadata entry. Keys must conform to the following
+-	// regexp: [a-zA-Z0-9-_]+, and be less than 128 bytes in length. This is
+-	// reflected as part of a URL in the metadata server. Additionally, to
+-	// avoid ambiguity, keys must not conflict with any other metadata keys
+-	// for the project.
+-	Key string `json:"key,omitempty"`
+-
+-	// Value: Value for the metadata entry. These are free-form strings, and
+-	// only have meaning as interpreted by the image running in the
+-	// instance. The only restriction placed on values is that their size
+-	// must be less than or equal to 32768 bytes.
+-	Value string `json:"value,omitempty"`
+-}
+-
+-type Network struct {
+-	// IPv4Range: Required; The range of internal addresses that are legal
+-	// on this network. This range is a CIDR specification, for example:
+-	// 192.168.0.0/16. Provided by the client when the network is created.
+-	IPv4Range string `json:"IPv4Range,omitempty"`
+-
+-	// CreationTimestamp: Creation timestamp in RFC3339 text format (output
+-	// only).
+-	CreationTimestamp string `json:"creationTimestamp,omitempty"`
+-
+-	// Description: An optional textual description of the resource;
+-	// provided by the client when the resource is created.
+-	Description string `json:"description,omitempty"`
+-
+-	// GatewayIPv4: An optional address that is used for default routing to
+-	// other networks. This must be within the range specified by IPv4Range,
+-	// and is typically the first usable address in that range. If not
+-	// specified, the default value is the first usable address in
+-	// IPv4Range.
+-	GatewayIPv4 string `json:"gatewayIPv4,omitempty"`
+-
+-	// Id: Unique identifier for the resource; defined by the server (output
+-	// only).
+-	Id uint64 `json:"id,omitempty,string"`
+-
+-	// Kind: Type of the resource.
+-	Kind string `json:"kind,omitempty"`
+-
+-	// Name: Name of the resource; provided by the client when the resource
+-	// is created. The name must be 1-63 characters long, and comply with
+-	// RFC1035.
+-	Name string `json:"name,omitempty"`
+-
+-	// SelfLink: Server defined URL for the resource (output only).
+-	SelfLink string `json:"selfLink,omitempty"`
+-}
+-
+-type NetworkInterface struct {
+-	// AccessConfigs: Array of configurations for this interface. This
+-	// specifies how this interface is configured to interact with other
+-	// network services, such as connecting to the internet. Currently,
+-	// ONE_TO_ONE_NAT is the only access config supported. If there are no
+-	// accessConfigs specified, then this instance will have no external
+-	// internet access.
+-	AccessConfigs []*AccessConfig `json:"accessConfigs,omitempty"`
+-
+-	// Name: Name of the network interface, determined by the server; for
+-	// network devices, these are e.g. eth0, eth1, etc. (output only).
+-	Name string `json:"name,omitempty"`
+-
+-	// Network: URL of the network resource attached to this interface.
+-	Network string `json:"network,omitempty"`
+-
+-	// NetworkIP: An optional IPV4 internal network address assigned to the
+-	// instance for this network interface (output only).
+-	NetworkIP string `json:"networkIP,omitempty"`
+-}
+-
+-type NetworkList struct {
+-	// Id: Unique identifier for the resource; defined by the server (output
+-	// only).
+-	Id string `json:"id,omitempty"`
+-
+-	// Items: The network resources.
+-	Items []*Network `json:"items,omitempty"`
+-
+-	// Kind: Type of resource.
+-	Kind string `json:"kind,omitempty"`
+-
+-	// NextPageToken: A token used to continue a truncated list request
+-	// (output only).
+-	NextPageToken string `json:"nextPageToken,omitempty"`
+-
+-	// SelfLink: Server defined URL for this resource (output only).
+-	SelfLink string `json:"selfLink,omitempty"`
+-}
+-
+-type Operation struct {
+-	// ClientOperationId: An optional identifier specified by the client
+-	// when the mutation was initiated. Must be unique for all operation
+-	// resources in the project (output only).
+-	ClientOperationId string `json:"clientOperationId,omitempty"`
+-
+-	// CreationTimestamp: Creation timestamp in RFC3339 text format (output
+-	// only).
+-	CreationTimestamp string `json:"creationTimestamp,omitempty"`
+-
+-	// EndTime: The time that this operation was completed. This is in RFC
+-	// 3339 format (output only).
+-	EndTime string `json:"endTime,omitempty"`
+-
+-	// Error: If errors occurred during processing of this operation, this
+-	// field will be populated (output only).
+-	Error *OperationError `json:"error,omitempty"`
+-
+-	// HttpErrorMessage: If operation fails, the HTTP error message
+-	// returned, e.g. NOT FOUND. (output only).
+-	HttpErrorMessage string `json:"httpErrorMessage,omitempty"`
+-
+-	// HttpErrorStatusCode: If operation fails, the HTTP error status code
+-	// returned, e.g. 404. (output only).
+-	HttpErrorStatusCode int64 `json:"httpErrorStatusCode,omitempty"`
+-
+-	// Id: Unique identifier for the resource; defined by the server (output
+-	// only).
+-	Id uint64 `json:"id,omitempty,string"`
+-
+-	// InsertTime: The time that this operation was requested. This is in
+-	// RFC 3339 format (output only).
+-	InsertTime string `json:"insertTime,omitempty"`
+-
+-	// Kind: Type of the resource.
+-	Kind string `json:"kind,omitempty"`
+-
+-	// Name: Name of the resource (output only).
+-	Name string `json:"name,omitempty"`
+-
+-	// OperationType: Type of the operation. Examples include "insert",
+-	// "update", and "delete" (output only).
+-	OperationType string `json:"operationType,omitempty"`
+-
+-	// Progress: An optional progress indicator that ranges from 0 to 100.
+-	// There is no requirement that this be linear or support any
+-	// granularity of operations. This should not be used to guess at when
+-	// the operation will be complete. This number should be monotonically
+-	// increasing as the operation progresses (output only).
+-	Progress int64 `json:"progress,omitempty"`
+-
+-	// Region: URL of the region where the operation resides (output only).
+-	Region string `json:"region,omitempty"`
+-
+-	// SelfLink: Server defined URL for the resource (output only).
+-	SelfLink string `json:"selfLink,omitempty"`
+-
+-	// StartTime: The time that this operation was started by the server.
+-	// This is in RFC 3339 format (output only).
+-	StartTime string `json:"startTime,omitempty"`
+-
+-	// Status: Status of the operation. Can be one of the following:
+-	// "PENDING", "RUNNING", or "DONE" (output only).
+-	Status string `json:"status,omitempty"`
+-
+-	// StatusMessage: An optional textual description of the current status
+-	// of the operation (output only).
+-	StatusMessage string `json:"statusMessage,omitempty"`
+-
+-	// TargetId: Unique target id which identifies a particular incarnation
+-	// of the target (output only).
+-	TargetId uint64 `json:"targetId,omitempty,string"`
+-
+-	// TargetLink: URL of the resource the operation is mutating (output
+-	// only).
+-	TargetLink string `json:"targetLink,omitempty"`
+-
+-	// User: User who requested the operation, for example
+-	// "user@example.com" (output only).
+-	User string `json:"user,omitempty"`
+-
+-	// Warnings: If warning messages were generated during processing of
+-	// this operation, this field will be populated (output only).
+-	Warnings []*OperationWarnings `json:"warnings,omitempty"`
+-
+-	// Zone: URL of the zone where the operation resides (output only).
+-	Zone string `json:"zone,omitempty"`
+-}
+-
+-type OperationError struct {
+-	// Errors: The array of errors encountered while processing this
+-	// operation.
+-	Errors []*OperationErrorErrors `json:"errors,omitempty"`
+-}
+-
+-type OperationErrorErrors struct {
+-	// Code: The error type identifier for this error.
+-	Code string `json:"code,omitempty"`
+-
+-	// Location: Indicates the field in the request which caused the error.
+-	// This property is optional.
+-	Location string `json:"location,omitempty"`
+-
+-	// Message: An optional, human-readable error message.
+-	Message string `json:"message,omitempty"`
+-}
+-
+-type OperationWarnings struct {
+-	// Code: The warning type identifier for this warning.
+-	Code string `json:"code,omitempty"`
+-
+-	// Data: Metadata for this warning in 'key: value' format.
+-	Data []*OperationWarningsData `json:"data,omitempty"`
+-
+-	// Message: Optional human-readable details for this warning.
+-	Message string `json:"message,omitempty"`
+-}
+-
+-type OperationWarningsData struct {
+-	// Key: A key for the warning data.
+-	Key string `json:"key,omitempty"`
+-
+-	// Value: A warning data value corresponding to the key.
+-	Value string `json:"value,omitempty"`
+-}
+-
+-type OperationAggregatedList struct {
+-	// Id: Unique identifier for the resource; defined by the server (output
+-	// only).
+-	Id string `json:"id,omitempty"`
+-
+-	// Items: A map of scoped operation lists.
+-	Items map[string]OperationsScopedList `json:"items,omitempty"`
+-
+-	// Kind: Type of resource.
+-	Kind string `json:"kind,omitempty"`
+-
+-	// NextPageToken: A token used to continue a truncated list request
+-	// (output only).
+-	NextPageToken string `json:"nextPageToken,omitempty"`
+-
+-	// SelfLink: Server defined URL for this resource (output only).
+-	SelfLink string `json:"selfLink,omitempty"`
+-}
+-
+-type OperationList struct {
+-	// Id: Unique identifier for the resource; defined by the server (output
+-	// only).
+-	Id string `json:"id,omitempty"`
+-
+-	// Items: The operation resources.
+-	Items []*Operation `json:"items,omitempty"`
+-
+-	// Kind: Type of resource.
+-	Kind string `json:"kind,omitempty"`
+-
+-	// NextPageToken: A token used to continue a truncated list request
+-	// (output only).
+-	NextPageToken string `json:"nextPageToken,omitempty"`
+-
+-	// SelfLink: Server defined URL for this resource (output only).
+-	SelfLink string `json:"selfLink,omitempty"`
+-}
+-
+-type OperationsScopedList struct {
+-	// Operations: List of operations contained in this scope.
+-	Operations []*Operation `json:"operations,omitempty"`
+-
+-	// Warning: Informational warning which replaces the list of operations
+-	// when the list is empty.
+-	Warning *OperationsScopedListWarning `json:"warning,omitempty"`
+-}
+-
+-type OperationsScopedListWarning struct {
+-	// Code: The warning type identifier for this warning.
+-	Code string `json:"code,omitempty"`
+-
+-	// Data: Metadata for this warning in 'key: value' format.
+-	Data []*OperationsScopedListWarningData `json:"data,omitempty"`
+-
+-	// Message: Optional human-readable details for this warning.
+-	Message string `json:"message,omitempty"`
+-}
+-
+-type OperationsScopedListWarningData struct {
+-	// Key: A key for the warning data.
+-	Key string `json:"key,omitempty"`
+-
+-	// Value: A warning data value corresponding to the key.
+-	Value string `json:"value,omitempty"`
+-}
+-
+-type PathMatcher struct {
+-	// DefaultService: The URL to the BackendService resource. This will be
+-	// used if none of the 'pathRules' defined by this PathMatcher is met by
+-	// the URL's path portion.
+-	DefaultService string `json:"defaultService,omitempty"`
+-
+-	Description string `json:"description,omitempty"`
+-
+-	// Name: The name to which this PathMatcher is referred by the HostRule.
+-	Name string `json:"name,omitempty"`
+-
+-	// PathRules: The list of path rules.
+-	PathRules []*PathRule `json:"pathRules,omitempty"`
+-}
+-
+-type PathRule struct {
+-	// Paths: The list of path patterns to match. Each must start with "/"
+-	// and the only place a "*" is allowed is at the end following a "/".
+-	// The string fed to the path matcher does not include any text after
+-	// the first "?" or "#", and those chars are not allowed here.
+-	Paths []string `json:"paths,omitempty"`
+-
+-	// Service: The URL of the BackendService resource if this rule is
+-	// matched.
+-	Service string `json:"service,omitempty"`
+-}
+-
+-type Project struct {
+-	// CommonInstanceMetadata: Metadata key/value pairs available to all
+-	// instances contained in this project.
+-	CommonInstanceMetadata *Metadata `json:"commonInstanceMetadata,omitempty"`
+-
+-	// CreationTimestamp: Creation timestamp in RFC3339 text format (output
+-	// only).
+-	CreationTimestamp string `json:"creationTimestamp,omitempty"`
+-
+-	// Description: An optional textual description of the resource.
+-	Description string `json:"description,omitempty"`
+-
+-	// Id: Unique identifier for the resource; defined by the server (output
+-	// only).
+-	Id uint64 `json:"id,omitempty,string"`
+-
+-	// Kind: Type of the resource.
+-	Kind string `json:"kind,omitempty"`
+-
+-	// Name: Name of the resource.
+-	Name string `json:"name,omitempty"`
+-
+-	// Quotas: Quotas assigned to this project.
+-	Quotas []*Quota `json:"quotas,omitempty"`
+-
+-	// SelfLink: Server defined URL for the resource (output only).
+-	SelfLink string `json:"selfLink,omitempty"`
+-
+-	// UsageExportLocation: The location in Cloud Storage and naming method
+-	// of the daily usage report.
+-	UsageExportLocation *UsageExportLocation `json:"usageExportLocation,omitempty"`
+-}
+-
+-type Quota struct {
+-	// Limit: Quota limit for this metric.
+-	Limit float64 `json:"limit,omitempty"`
+-
+-	// Metric: Name of the quota metric.
+-	Metric string `json:"metric,omitempty"`
+-
+-	// Usage: Current usage of this metric.
+-	Usage float64 `json:"usage,omitempty"`
+-}
+-
+-type Region struct {
+-	// CreationTimestamp: Creation timestamp in RFC3339 text format (output
+-	// only).
+-	CreationTimestamp string `json:"creationTimestamp,omitempty"`
+-
+-	// Deprecated: The deprecation status associated with this region.
+-	Deprecated *DeprecationStatus `json:"deprecated,omitempty"`
+-
+-	// Description: Textual description of the resource.
+-	Description string `json:"description,omitempty"`
+-
+-	// Id: Unique identifier for the resource; defined by the server (output
+-	// only).
+-	Id uint64 `json:"id,omitempty,string"`
+-
+-	// Kind: Type of the resource.
+-	Kind string `json:"kind,omitempty"`
+-
+-	// Name: Name of the resource.
+-	Name string `json:"name,omitempty"`
+-
+-	// Quotas: Quotas assigned to this region.
+-	Quotas []*Quota `json:"quotas,omitempty"`
+-
+-	// SelfLink: Server defined URL for the resource (output only).
+-	SelfLink string `json:"selfLink,omitempty"`
+-
+-	// Status: Status of the region, "UP" or "DOWN".
+-	Status string `json:"status,omitempty"`
+-
+-	// Zones: A list of zones homed in this region, in the form of resource
+-	// URLs.
+-	Zones []string `json:"zones,omitempty"`
+-}
+-
+-type RegionList struct {
+-	// Id: Unique identifier for the resource; defined by the server (output
+-	// only).
+-	Id string `json:"id,omitempty"`
+-
+-	// Items: The region resources.
+-	Items []*Region `json:"items,omitempty"`
+-
+-	// Kind: Type of resource.
+-	Kind string `json:"kind,omitempty"`
+-
+-	// NextPageToken: A token used to continue a truncated list request
+-	// (output only).
+-	NextPageToken string `json:"nextPageToken,omitempty"`
+-
+-	// SelfLink: Server defined URL for this resource (output only).
+-	SelfLink string `json:"selfLink,omitempty"`
+-}
+-
+-type ResourceGroupReference struct {
+-	Group string `json:"group,omitempty"`
+-}
+-
+-type Route struct {
+-	// CreationTimestamp: Creation timestamp in RFC3339 text format (output
+-	// only).
+-	CreationTimestamp string `json:"creationTimestamp,omitempty"`
+-
+-	// Description: An optional textual description of the resource;
+-	// provided by the client when the resource is created.
+-	Description string `json:"description,omitempty"`
+-
+-	// DestRange: Which packets does this route apply to?
+-	DestRange string `json:"destRange,omitempty"`
+-
+-	// Id: Unique identifier for the resource; defined by the server (output
+-	// only).
+-	Id uint64 `json:"id,omitempty,string"`
+-
+-	// Kind: Type of the resource.
+-	Kind string `json:"kind,omitempty"`
+-
+-	// Name: Name of the resource; provided by the client when the resource
+-	// is created. The name must be 1-63 characters long, and comply with
+-	// RFC1035.
+-	Name string `json:"name,omitempty"`
+-
+-	// Network: URL of the network to which this route is applied; provided
+-	// by the client when the route is created.
+-	Network string `json:"network,omitempty"`
+-
+-	// NextHopGateway: The URL to a gateway that should handle matching
+-	// packets.
+-	NextHopGateway string `json:"nextHopGateway,omitempty"`
+-
+-	// NextHopInstance: The URL to an instance that should handle matching
+-	// packets.
+-	NextHopInstance string `json:"nextHopInstance,omitempty"`
+-
+-	// NextHopIp: The network IP address of an instance that should handle
+-	// matching packets.
+-	NextHopIp string `json:"nextHopIp,omitempty"`
+-
+-	// NextHopNetwork: The URL of the local network if it should handle
+-	// matching packets.
+-	NextHopNetwork string `json:"nextHopNetwork,omitempty"`
+-
+-	// Priority: Breaks ties between Routes of equal specificity. Routes
+-	// with smaller values win when tied with routes with larger values.
+-	Priority int64 `json:"priority,omitempty"`
+-
+-	// SelfLink: Server defined URL for the resource (output only).
+-	SelfLink string `json:"selfLink,omitempty"`
+-
+-	// Tags: A list of instance tags to which this route applies.
+-	Tags []string `json:"tags,omitempty"`
+-
+-	// Warnings: If potential misconfigurations are detected for this route,
+-	// this field will be populated with warning messages.
+-	Warnings []*RouteWarnings `json:"warnings,omitempty"`
+-}
+-
+-type RouteWarnings struct {
+-	// Code: The warning type identifier for this warning.
+-	Code string `json:"code,omitempty"`
+-
+-	// Data: Metadata for this warning in 'key: value' format.
+-	Data []*RouteWarningsData `json:"data,omitempty"`
+-
+-	// Message: Optional human-readable details for this warning.
+-	Message string `json:"message,omitempty"`
+-}
+-
+-type RouteWarningsData struct {
+-	// Key: A key for the warning data.
+-	Key string `json:"key,omitempty"`
+-
+-	// Value: A warning data value corresponding to the key.
+-	Value string `json:"value,omitempty"`
+-}
+-
+-type RouteList struct {
+-	// Id: Unique identifier for the resource; defined by the server (output
+-	// only).
+-	Id string `json:"id,omitempty"`
+-
+-	// Items: The route resources.
+-	Items []*Route `json:"items,omitempty"`
+-
+-	// Kind: Type of resource.
+-	Kind string `json:"kind,omitempty"`
+-
+-	// NextPageToken: A token used to continue a truncated list request
+-	// (output only).
+-	NextPageToken string `json:"nextPageToken,omitempty"`
+-
+-	// SelfLink: Server defined URL for this resource (output only).
+-	SelfLink string `json:"selfLink,omitempty"`
+-}
+-
+-type Scheduling struct {
+-	// AutomaticRestart: Whether the Instance should be automatically
+-	// restarted whenever it is terminated by Compute Engine (not terminated
+-	// by user).
+-	AutomaticRestart bool `json:"automaticRestart,omitempty"`
+-
+-	// OnHostMaintenance: How the instance should behave when the host
+-	// machine undergoes maintenance that may temporarily impact instance
+-	// performance.
+-	OnHostMaintenance string `json:"onHostMaintenance,omitempty"`
+-}
+-
+-type SerialPortOutput struct {
+-	// Contents: The contents of the console output.
+-	Contents string `json:"contents,omitempty"`
+-
+-	// Kind: Type of the resource.
+-	Kind string `json:"kind,omitempty"`
+-
+-	// SelfLink: Server defined URL for the resource (output only).
+-	SelfLink string `json:"selfLink,omitempty"`
+-}
+-
+-type ServiceAccount struct {
+-	// Email: Email address of the service account.
+-	Email string `json:"email,omitempty"`
+-
+-	// Scopes: The list of scopes to be made available for this service
+-	// account.
+-	Scopes []string `json:"scopes,omitempty"`
+-}
+-
+-type Snapshot struct {
+-	// CreationTimestamp: Creation timestamp in RFC3339 text format (output
+-	// only).
+-	CreationTimestamp string `json:"creationTimestamp,omitempty"`
+-
+-	// Description: An optional textual description of the resource;
+-	// provided by the client when the resource is created.
+-	Description string `json:"description,omitempty"`
+-
+-	// DiskSizeGb: Size of the persistent disk snapshot, specified in GB
+-	// (output only).
+-	DiskSizeGb int64 `json:"diskSizeGb,omitempty,string"`
+-
+-	// Id: Unique identifier for the resource; defined by the server (output
+-	// only).
+-	Id uint64 `json:"id,omitempty,string"`
+-
+-	// Kind: Type of the resource.
+-	Kind string `json:"kind,omitempty"`
+-
+-	// Licenses: Public visible licenses.
+-	Licenses []string `json:"licenses,omitempty"`
+-
+-	// Name: Name of the resource; provided by the client when the resource
+-	// is created. The name must be 1-63 characters long, and comply with
+-	// RFC1035.
+-	Name string `json:"name,omitempty"`
+-
+-	// SelfLink: Server defined URL for the resource (output only).
+-	SelfLink string `json:"selfLink,omitempty"`
+-
+-	// SourceDisk: The source disk used to create this snapshot. Once the
+-	// source disk has been deleted from the system, this field will be
+-	// cleared, and will not be set even if a disk with the same name has
+-	// been re-created (output only).
+-	SourceDisk string `json:"sourceDisk,omitempty"`
+-
+-	// SourceDiskId: The 'id' value of the disk used to create this
+-	// snapshot. This value may be used to determine whether the snapshot
+-	// was taken from the current or a previous instance of a given disk
+-	// name.
+-	SourceDiskId string `json:"sourceDiskId,omitempty"`
+-
+-	// Status: The status of the persistent disk snapshot (output only).
+-	Status string `json:"status,omitempty"`
+-
+-	// StorageBytes: The size of the storage used by the snapshot. As
+-	// snapshots share storage, this number is expected to change with
+-	// snapshot creation/deletion.
+-	StorageBytes int64 `json:"storageBytes,omitempty,string"`
+-
+-	// StorageBytesStatus: An indicator whether storageBytes is in a stable
+-	// state, or it is being adjusted as a result of shared storage
+-	// reallocation.
+-	StorageBytesStatus string `json:"storageBytesStatus,omitempty"`
+-}
+-
+-type SnapshotList struct {
+-	// Id: Unique identifier for the resource; defined by the server (output
+-	// only).
+-	Id string `json:"id,omitempty"`
+-
+-	// Items: The persistent snapshot resources.
+-	Items []*Snapshot `json:"items,omitempty"`
+-
+-	// Kind: Type of resource.
+-	Kind string `json:"kind,omitempty"`
+-
+-	// NextPageToken: A token used to continue a truncated list request
+-	// (output only).
+-	NextPageToken string `json:"nextPageToken,omitempty"`
+-
+-	// SelfLink: Server defined URL for this resource (output only).
+-	SelfLink string `json:"selfLink,omitempty"`
+-}
+-
+-type Tags struct {
+-	// Fingerprint: Fingerprint of this resource. A hash of the tags stored
+-	// in this object. This field is used in optimistic locking. An
+-	// up-to-date tags fingerprint must be provided in order to modify tags.
+-	Fingerprint string `json:"fingerprint,omitempty"`
+-
+-	// Items: An array of tags. Each tag must be 1-63 characters long, and
+-	// comply with RFC1035.
+-	Items []string `json:"items,omitempty"`
+-}
+-
+-type TargetHttpProxy struct {
+-	// CreationTimestamp: Creation timestamp in RFC3339 text format (output
+-	// only).
+-	CreationTimestamp string `json:"creationTimestamp,omitempty"`
+-
+-	// Description: An optional textual description of the resource;
+-	// provided by the client when the resource is created.
+-	Description string `json:"description,omitempty"`
+-
+-	// Id: Unique identifier for the resource; defined by the server (output
+-	// only).
+-	Id uint64 `json:"id,omitempty,string"`
+-
+-	// Kind: Type of the resource.
+-	Kind string `json:"kind,omitempty"`
+-
+-	// Name: Name of the resource; provided by the client when the resource
+-	// is created. The name must be 1-63 characters long, and comply with
+-	// RFC1035.
+-	Name string `json:"name,omitempty"`
+-
+-	// SelfLink: Server defined URL for the resource (output only).
+-	SelfLink string `json:"selfLink,omitempty"`
+-
+-	// UrlMap: URL to the UrlMap resource that defines the mapping from URL
+-	// to the BackendService.
+-	UrlMap string `json:"urlMap,omitempty"`
+-}
+-
+-type TargetHttpProxyList struct {
+-	// Id: Unique identifier for the resource; defined by the server (output
+-	// only).
+-	Id string `json:"id,omitempty"`
+-
+-	// Items: The TargetHttpProxy resources.
+-	Items []*TargetHttpProxy `json:"items,omitempty"`
+-
+-	// Kind: Type of resource.
+-	Kind string `json:"kind,omitempty"`
+-
+-	// NextPageToken: A token used to continue a truncated list request
+-	// (output only).
+-	NextPageToken string `json:"nextPageToken,omitempty"`
+-
+-	// SelfLink: Server defined URL for this resource (output only).
+-	SelfLink string `json:"selfLink,omitempty"`
+-}
+-
+-type TargetInstance struct {
+-	// CreationTimestamp: Creation timestamp in RFC3339 text format (output
+-	// only).
+-	CreationTimestamp string `json:"creationTimestamp,omitempty"`
+-
+-	// Description: An optional textual description of the resource;
+-	// provided by the client when the resource is created.
+-	Description string `json:"description,omitempty"`
+-
+-	// Id: Unique identifier for the resource; defined by the server (output
+-	// only).
+-	Id uint64 `json:"id,omitempty,string"`
+-
+-	// Instance: The URL to the instance that terminates the relevant
+-	// traffic.
+-	Instance string `json:"instance,omitempty"`
+-
+-	// Kind: Type of the resource.
+-	Kind string `json:"kind,omitempty"`
+-
+-	// Name: Name of the resource; provided by the client when the resource
+-	// is created. The name must be 1-63 characters long, and comply with
+-	// RFC1035.
+-	Name string `json:"name,omitempty"`
+-
+-	// NatPolicy: NAT option controlling how IPs are NAT'ed to the VM.
+-	// Currently only NO_NAT (default value) is supported.
+-	NatPolicy string `json:"natPolicy,omitempty"`
+-
+-	// SelfLink: Server defined URL for the resource (output only).
+-	SelfLink string `json:"selfLink,omitempty"`
+-
+-	// Zone: URL of the zone where the target instance resides (output
+-	// only).
+-	Zone string `json:"zone,omitempty"`
+-}
+-
+-type TargetInstanceAggregatedList struct {
+-	// Id: Unique identifier for the resource; defined by the server (output
+-	// only).
+-	Id string `json:"id,omitempty"`
+-
+-	// Items: A map of scoped target instance lists.
+-	Items map[string]TargetInstancesScopedList `json:"items,omitempty"`
+-
+-	// Kind: Type of resource.
+-	Kind string `json:"kind,omitempty"`
+-
+-	// NextPageToken: A token used to continue a truncated list request
+-	// (output only).
+-	NextPageToken string `json:"nextPageToken,omitempty"`
+-
+-	// SelfLink: Server defined URL for this resource (output only).
+-	SelfLink string `json:"selfLink,omitempty"`
+-}
+-
+-type TargetInstanceList struct {
+-	// Id: Unique identifier for the resource; defined by the server (output
+-	// only).
+-	Id string `json:"id,omitempty"`
+-
+-	// Items: The TargetInstance resources.
+-	Items []*TargetInstance `json:"items,omitempty"`
+-
+-	// Kind: Type of resource.
+-	Kind string `json:"kind,omitempty"`
+-
+-	// NextPageToken: A token used to continue a truncated list request
+-	// (output only).
+-	NextPageToken string `json:"nextPageToken,omitempty"`
+-
+-	// SelfLink: Server defined URL for this resource (output only).
+-	SelfLink string `json:"selfLink,omitempty"`
+-}
+-
+-type TargetInstancesScopedList struct {
+-	// TargetInstances: List of target instances contained in this scope.
+-	TargetInstances []*TargetInstance `json:"targetInstances,omitempty"`
+-
+-	// Warning: Informational warning which replaces the list of addresses
+-	// when the list is empty.
+-	Warning *TargetInstancesScopedListWarning `json:"warning,omitempty"`
+-}
+-
+-type TargetInstancesScopedListWarning struct {
+-	// Code: The warning type identifier for this warning.
+-	Code string `json:"code,omitempty"`
+-
+-	// Data: Metadata for this warning in 'key: value' format.
+-	Data []*TargetInstancesScopedListWarningData `json:"data,omitempty"`
+-
+-	// Message: Optional human-readable details for this warning.
+-	Message string `json:"message,omitempty"`
+-}
+-
+-type TargetInstancesScopedListWarningData struct {
+-	// Key: A key for the warning data.
+-	Key string `json:"key,omitempty"`
+-
+-	// Value: A warning data value corresponding to the key.
+-	Value string `json:"value,omitempty"`
+-}
+-
+-type TargetPool struct {
+-	// BackupPool: This field is applicable only when the containing target
+-	// pool is serving a forwarding rule as the primary pool, and its
+-	// 'failoverRatio' field is properly set to a value between [0,
+-	// 1].
+-	//
+-	// 'backupPool' and 'failoverRatio' together define the fallback
+-	// behavior of the primary target pool: if the ratio of the healthy VMs
+-	// in the primary pool is at or below 'failoverRatio', traffic arriving
+-	// at the load-balanced IP will be directed to the backup pool.
+-	//
+-	// In the case
+-	// where 'failoverRatio' and 'backupPool' are not set, or all the VMs in
+-	// the backup pool are unhealthy, the traffic will be directed back to
+-	// the primary pool in the "force" mode, where traffic will be spread to
+-	// the healthy VMs with the best effort, or to all VMs when no VM is
+-	// healthy.
+-	BackupPool string `json:"backupPool,omitempty"`
+-
+-	// CreationTimestamp: Creation timestamp in RFC3339 text format (output
+-	// only).
+-	CreationTimestamp string `json:"creationTimestamp,omitempty"`
+-
+-	// Description: An optional textual description of the resource;
+-	// provided by the client when the resource is created.
+-	Description string `json:"description,omitempty"`
+-
+-	// FailoverRatio: This field is applicable only when the containing
+-	// target pool is serving a forwarding rule as the primary pool (i.e.,
+-	// not as a backup pool to some other target pool). The value of the
+-	// field must be in [0, 1].
+-	//
+-	// If set, 'backupPool' must also be set. They
+-	// together define the fallback behavior of the primary target pool: if
+-	// the ratio of the healthy VMs in the primary pool is at or below this
+-	// number, traffic arriving at the load-balanced IP will be directed to
+-	// the backup pool.
+-	//
+-	// In the case where 'failoverRatio' is not set or all the
+-	// VMs in the backup pool are unhealthy, the traffic will be directed
+-	// back to the primary pool in the "force" mode, where traffic will be
+-	// spread to the healthy VMs with the best effort, or to all VMs when no
+-	// VM is healthy.
+-	FailoverRatio float64 `json:"failoverRatio,omitempty"`
+-
+-	// HealthChecks: A list of URLs to the HttpHealthCheck resource. A
+-	// member VM in this pool is considered healthy if and only if all
+-	// specified health checks pass. An empty list means all member VMs will
+-	// be considered healthy at all times.
+-	HealthChecks []string `json:"healthChecks,omitempty"`
+-
+-	// Id: Unique identifier for the resource; defined by the server (output
+-	// only).
+-	Id uint64 `json:"id,omitempty,string"`
+-
+-	// Instances: A list of resource URLs to the member VMs serving this
+-	// pool. They must live in zones contained in the same region as this
+-	// pool.
+-	Instances []string `json:"instances,omitempty"`
+-
+-	// Kind: Type of the resource.
+-	Kind string `json:"kind,omitempty"`
+-
+-	// Name: Name of the resource; provided by the client when the resource
+-	// is created. The name must be 1-63 characters long, and comply with
+-	// RFC1035.
+-	Name string `json:"name,omitempty"`
+-
+-	// Region: URL of the region where the target pool resides (output
+-	// only).
+-	Region string `json:"region,omitempty"`
+-
+-	// SelfLink: Server defined URL for the resource (output only).
+-	SelfLink string `json:"selfLink,omitempty"`
+-
+-	// SessionAffinity: Session affinity option, must be one of the
+-	// following values: 'NONE': Connections from the same client IP may go
+-	// to any VM in the pool; 'CLIENT_IP': Connections from the same client
+-	// IP will go to the same VM in the pool while that VM remains healthy.
+-	// 'CLIENT_IP_PROTO': Connections from the same client IP with the same
+-	// IP protocol will go to the same VM in the pool while that VM remains
+-	// healthy.
+-	SessionAffinity string `json:"sessionAffinity,omitempty"`
+-}
+-
+-type TargetPoolAggregatedList struct {
+-	// Id: Unique identifier for the resource; defined by the server (output
+-	// only).
+-	Id string `json:"id,omitempty"`
+-
+-	// Items: A map of scoped target pool lists.
+-	Items map[string]TargetPoolsScopedList `json:"items,omitempty"`
+-
+-	// Kind: Type of resource.
+-	Kind string `json:"kind,omitempty"`
+-
+-	// NextPageToken: A token used to continue a truncated list request
+-	// (output only).
+-	NextPageToken string `json:"nextPageToken,omitempty"`
+-
+-	// SelfLink: Server defined URL for this resource (output only).
+-	SelfLink string `json:"selfLink,omitempty"`
+-}
+-
+-type TargetPoolInstanceHealth struct {
+-	HealthStatus []*HealthStatus `json:"healthStatus,omitempty"`
+-
+-	// Kind: Type of resource.
+-	Kind string `json:"kind,omitempty"`
+-}
+-
+-type TargetPoolList struct {
+-	// Id: Unique identifier for the resource; defined by the server (output
+-	// only).
+-	Id string `json:"id,omitempty"`
+-
+-	// Items: The TargetPool resources.
+-	Items []*TargetPool `json:"items,omitempty"`
+-
+-	// Kind: Type of resource.
+-	Kind string `json:"kind,omitempty"`
+-
+-	// NextPageToken: A token used to continue a truncated list request
+-	// (output only).
+-	NextPageToken string `json:"nextPageToken,omitempty"`
+-
+-	// SelfLink: Server defined URL for this resource (output only).
+-	SelfLink string `json:"selfLink,omitempty"`
+-}
+-
+-type TargetPoolsAddHealthCheckRequest struct {
+-	// HealthChecks: Health check URLs to be added to targetPool.
+-	HealthChecks []*HealthCheckReference `json:"healthChecks,omitempty"`
+-}
+-
+-type TargetPoolsAddInstanceRequest struct {
+-	// Instances: URLs of the instances to be added to targetPool.
+-	Instances []*InstanceReference `json:"instances,omitempty"`
+-}
+-
+-type TargetPoolsRemoveHealthCheckRequest struct {
+-	// HealthChecks: Health check URLs to be removed from targetPool.
+-	HealthChecks []*HealthCheckReference `json:"healthChecks,omitempty"`
+-}
+-
+-type TargetPoolsRemoveInstanceRequest struct {
+-	// Instances: URLs of the instances to be removed from targetPool.
+-	Instances []*InstanceReference `json:"instances,omitempty"`
+-}
+-
+-type TargetPoolsScopedList struct {
+-	// TargetPools: List of target pools contained in this scope.
+-	TargetPools []*TargetPool `json:"targetPools,omitempty"`
+-
+-	// Warning: Informational warning which replaces the list of addresses
+-	// when the list is empty.
+-	Warning *TargetPoolsScopedListWarning `json:"warning,omitempty"`
+-}
+-
+-type TargetPoolsScopedListWarning struct {
+-	// Code: The warning type identifier for this warning.
+-	Code string `json:"code,omitempty"`
+-
+-	// Data: Metadata for this warning in 'key: value' format.
+-	Data []*TargetPoolsScopedListWarningData `json:"data,omitempty"`
+-
+-	// Message: Optional human-readable details for this warning.
+-	Message string `json:"message,omitempty"`
+-}
+-
+-type TargetPoolsScopedListWarningData struct {
+-	// Key: A key for the warning data.
+-	Key string `json:"key,omitempty"`
+-
+-	// Value: A warning data value corresponding to the key.
+-	Value string `json:"value,omitempty"`
+-}
+-
+-type TargetReference struct {
+-	Target string `json:"target,omitempty"`
+-}
+-
+-type TestFailure struct {
+-	ActualService string `json:"actualService,omitempty"`
+-
+-	ExpectedService string `json:"expectedService,omitempty"`
+-
+-	Host string `json:"host,omitempty"`
+-
+-	Path string `json:"path,omitempty"`
+-}
+-
+-type UrlMap struct {
+-	// CreationTimestamp: Creation timestamp in RFC3339 text format (output
+-	// only).
+-	CreationTimestamp string `json:"creationTimestamp,omitempty"`
+-
+-	// DefaultService: The URL of the BackendService resource if none of the
+-	// hostRules match.
+-	DefaultService string `json:"defaultService,omitempty"`
+-
+-	// Description: An optional textual description of the resource;
+-	// provided by the client when the resource is created.
+-	Description string `json:"description,omitempty"`
+-
+-	// Fingerprint: Fingerprint of this resource. A hash of the contents
+-	// stored in this object. This field is used in optimistic locking. This
+-	// field will be ignored when inserting a UrlMap. An up-to-date
+-	// fingerprint must be provided in order to update the UrlMap.
+-	Fingerprint string `json:"fingerprint,omitempty"`
+-
+-	// HostRules: The list of HostRules to use against the URL.
+-	HostRules []*HostRule `json:"hostRules,omitempty"`
+-
+-	// Id: Unique identifier for the resource; defined by the server (output
+-	// only).
+-	Id uint64 `json:"id,omitempty,string"`
+-
+-	// Kind: Type of the resource.
+-	Kind string `json:"kind,omitempty"`
+-
+-	// Name: Name of the resource; provided by the client when the resource
+-	// is created. The name must be 1-63 characters long, and comply with
+-	// RFC1035.
+-	Name string `json:"name,omitempty"`
+-
+-	// PathMatchers: The list of named PathMatchers to use against the URL.
+-	PathMatchers []*PathMatcher `json:"pathMatchers,omitempty"`
+-
+-	// SelfLink: Server defined URL for the resource (output only).
+-	SelfLink string `json:"selfLink,omitempty"`
+-
+-	// Tests: The list of expected URL mappings. A request to update this
+-	// UrlMap will succeed only if all of the test cases pass.
+-	Tests []*UrlMapTest `json:"tests,omitempty"`
+-}
+-
+-type UrlMapList struct {
+-	// Id: Unique identifier for the resource; defined by the server (output
+-	// only).
+-	Id string `json:"id,omitempty"`
+-
+-	// Items: The UrlMap resources.
+-	Items []*UrlMap `json:"items,omitempty"`
+-
+-	// Kind: Type of resource.
+-	Kind string `json:"kind,omitempty"`
+-
+-	// NextPageToken: A token used to continue a truncated list request
+-	// (output only).
+-	NextPageToken string `json:"nextPageToken,omitempty"`
+-
+-	// SelfLink: Server defined URL for this resource (output only).
+-	SelfLink string `json:"selfLink,omitempty"`
+-}
+-
+-type UrlMapReference struct {
+-	UrlMap string `json:"urlMap,omitempty"`
+-}
+-
+-type UrlMapTest struct {
+-	// Description: Description of this test case.
+-	Description string `json:"description,omitempty"`
+-
+-	// Host: Host portion of the URL.
+-	Host string `json:"host,omitempty"`
+-
+-	// Path: Path portion of the URL.
+-	Path string `json:"path,omitempty"`
+-
+-	// Service: Expected BackendService resource the given URL should be
+-	// mapped to.
+-	Service string `json:"service,omitempty"`
+-}
+-
+-type UrlMapValidationResult struct {
+-	LoadErrors []string `json:"loadErrors,omitempty"`
+-
+-	// LoadSucceeded: Whether the given UrlMap can be successfully loaded.
+-	// If false, 'loadErrors' indicates the reasons.
+-	LoadSucceeded bool `json:"loadSucceeded,omitempty"`
+-
+-	TestFailures []*TestFailure `json:"testFailures,omitempty"`
+-
+-	// TestPassed: If successfully loaded, this field indicates whether the
+-	// test passed. If false, 'testFailures' indicates the reasons for
+-	// failure.
+-	TestPassed bool `json:"testPassed,omitempty"`
+-}
+-
+-type UrlMapsValidateRequest struct {
+-	// Resource: Content of the UrlMap to be validated.
+-	Resource *UrlMap `json:"resource,omitempty"`
+-}
+-
+-type UrlMapsValidateResponse struct {
+-	Result *UrlMapValidationResult `json:"result,omitempty"`
+-}
+-
+-type UsageExportLocation struct {
+-	// BucketName: The name of an existing bucket in Cloud Storage where the
+-	// usage report object is stored. The Google Service Account is granted
+-	// write access to this bucket. This is simply the bucket name, with no
+-	// "gs://" or "https://storage.googleapis.com/" in front of it.
+-	BucketName string `json:"bucketName,omitempty"`
+-
+-	// ReportNamePrefix: An optional prefix for the name of the usage report
+-	// object stored in bucket_name. If not supplied, defaults to "usage_".
+-	// The report is stored as a CSV file named _gce_.csv. where  is the day
+-	// of the usage according to Pacific Time. The prefix should conform to
+-	// Cloud Storage object naming conventions.
+-	ReportNamePrefix string `json:"reportNamePrefix,omitempty"`
+-}
+-
+-type Zone struct {
+-	// CreationTimestamp: Creation timestamp in RFC3339 text format (output
+-	// only).
+-	CreationTimestamp string `json:"creationTimestamp,omitempty"`
+-
+-	// Deprecated: The deprecation status associated with this zone.
+-	Deprecated *DeprecationStatus `json:"deprecated,omitempty"`
+-
+-	// Description: Textual description of the resource.
+-	Description string `json:"description,omitempty"`
+-
+-	// Id: Unique identifier for the resource; defined by the server (output
+-	// only).
+-	Id uint64 `json:"id,omitempty,string"`
+-
+-	// Kind: Type of the resource.
+-	Kind string `json:"kind,omitempty"`
+-
+-	// MaintenanceWindows: Scheduled maintenance windows for the zone. When
+-	// the zone is in a maintenance window, all resources which reside in
+-	// the zone will be unavailable.
+-	MaintenanceWindows []*ZoneMaintenanceWindows `json:"maintenanceWindows,omitempty"`
+-
+-	// Name: Name of the resource.
+-	Name string `json:"name,omitempty"`
+-
+-	// Region: Full URL reference to the region which hosts the zone (output
+-	// only).
+-	Region string `json:"region,omitempty"`
+-
+-	// SelfLink: Server defined URL for the resource (output only).
+-	SelfLink string `json:"selfLink,omitempty"`
+-
+-	// Status: Status of the zone. "UP" or "DOWN".
+-	Status string `json:"status,omitempty"`
+-}
+-
+-type ZoneMaintenanceWindows struct {
+-	// BeginTime: Begin time of the maintenance window, in RFC 3339 format.
+-	BeginTime string `json:"beginTime,omitempty"`
+-
+-	// Description: Textual description of the maintenance window.
+-	Description string `json:"description,omitempty"`
+-
+-	// EndTime: End time of the maintenance window, in RFC 3339 format.
+-	EndTime string `json:"endTime,omitempty"`
+-
+-	// Name: Name of the maintenance window.
+-	Name string `json:"name,omitempty"`
+-}
+-
+-type ZoneList struct {
+-	// Id: Unique identifier for the resource; defined by the server (output
+-	// only).
+-	Id string `json:"id,omitempty"`
+-
+-	// Items: The zone resources.
+-	Items []*Zone `json:"items,omitempty"`
+-
+-	// Kind: Type of resource.
+-	Kind string `json:"kind,omitempty"`
+-
+-	// NextPageToken: A token used to continue a truncated list request
+-	// (output only).
+-	NextPageToken string `json:"nextPageToken,omitempty"`
+-
+-	// SelfLink: Server defined URL for this resource (output only).
+-	SelfLink string `json:"selfLink,omitempty"`
+-}
+-
+-// method id "compute.addresses.aggregatedList":
+-
+-type AddressesAggregatedListCall struct {
+-	s       *Service
+-	project string
+-	opt_    map[string]interface{}
+-}
+-
+-// AggregatedList: Retrieves the list of addresses grouped by scope.
+-func (r *AddressesService) AggregatedList(project string) *AddressesAggregatedListCall {
+-	c := &AddressesAggregatedListCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	return c
+-}
+-
+-// Filter sets the optional parameter "filter": Filter expression for
+-// filtering listed resources.
+-func (c *AddressesAggregatedListCall) Filter(filter string) *AddressesAggregatedListCall {
+-	c.opt_["filter"] = filter
+-	return c
+-}
+-
+-// MaxResults sets the optional parameter "maxResults": Maximum count of
+-// results to be returned. Maximum value is 500 and default value is
+-// 500.
+-func (c *AddressesAggregatedListCall) MaxResults(maxResults int64) *AddressesAggregatedListCall {
+-	c.opt_["maxResults"] = maxResults
+-	return c
+-}
+-
+-// PageToken sets the optional parameter "pageToken": Tag returned by a
+-// previous list request truncated by maxResults. Used to continue a
+-// previous list request.
+-func (c *AddressesAggregatedListCall) PageToken(pageToken string) *AddressesAggregatedListCall {
+-	c.opt_["pageToken"] = pageToken
+-	return c
+-}
+-
+-func (c *AddressesAggregatedListCall) Do() (*AddressAggregatedList, error) {
+-	var body io.Reader = nil
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	if v, ok := c.opt_["filter"]; ok {
+-		params.Set("filter", fmt.Sprintf("%v", v))
+-	}
+-	if v, ok := c.opt_["maxResults"]; ok {
+-		params.Set("maxResults", fmt.Sprintf("%v", v))
+-	}
+-	if v, ok := c.opt_["pageToken"]; ok {
+-		params.Set("pageToken", fmt.Sprintf("%v", v))
+-	}
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/aggregated/addresses")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("GET", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project": c.project,
+-	})
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *AddressAggregatedList
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Retrieves the list of addresses grouped by scope.",
+-	//   "httpMethod": "GET",
+-	//   "id": "compute.addresses.aggregatedList",
+-	//   "parameterOrder": [
+-	//     "project"
+-	//   ],
+-	//   "parameters": {
+-	//     "filter": {
+-	//       "description": "Optional. Filter expression for filtering listed resources.",
+-	//       "location": "query",
+-	//       "type": "string"
+-	//     },
+-	//     "maxResults": {
+-	//       "default": "500",
+-	//       "description": "Optional. Maximum count of results to be returned. Maximum value is 500 and default value is 500.",
+-	//       "format": "uint32",
+-	//       "location": "query",
+-	//       "maximum": "500",
+-	//       "minimum": "0",
+-	//       "type": "integer"
+-	//     },
+-	//     "pageToken": {
+-	//       "description": "Optional. Tag returned by a previous list request truncated by maxResults. Used to continue a previous list request.",
+-	//       "location": "query",
+-	//       "type": "string"
+-	//     },
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/aggregated/addresses",
+-	//   "response": {
+-	//     "$ref": "AddressAggregatedList"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute",
+-	//     "https://www.googleapis.com/auth/compute.readonly"
+-	//   ]
+-	// }
+-
+-}
+-
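The Do() method above shows the option-map builder pattern this generated client uses throughout: optional parameters accumulate in the untyped opt_ map and are copied into url.Values with fmt.Sprintf just before the request is built. A minimal, self-contained sketch of that pattern (listCall, newListCall, and buildQuery are illustrative names, not part of the client):

```go
package main

import (
	"fmt"
	"net/url"
)

// listCall mimics the generated call builders: optional query
// parameters accumulate in an untyped map until the call is executed.
type listCall struct {
	opt_ map[string]interface{}
}

func newListCall() *listCall {
	return &listCall{opt_: make(map[string]interface{})}
}

func (c *listCall) Filter(f string) *listCall    { c.opt_["filter"] = f; return c }
func (c *listCall) MaxResults(n int64) *listCall { c.opt_["maxResults"] = n; return c }

// buildQuery copies the option map into url.Values, mirroring how the
// generated Do() methods format each value with fmt.Sprintf("%v", v).
func (c *listCall) buildQuery() url.Values {
	params := make(url.Values)
	params.Set("alt", "json")
	for _, k := range []string{"filter", "maxResults", "pageToken"} {
		if v, ok := c.opt_[k]; ok {
			params.Set(k, fmt.Sprintf("%v", v))
		}
	}
	return params
}

func main() {
	q := newListCall().Filter("name eq my-addr").MaxResults(100).buildQuery()
	fmt.Println(q.Encode()) // alt=json&filter=name+eq+my-addr&maxResults=100
}
```

Encode() sorts keys alphabetically, which is why alt comes first regardless of the order the options were set.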
+-// method id "compute.addresses.delete":
+-
+-type AddressesDeleteCall struct {
+-	s       *Service
+-	project string
+-	region  string
+-	address string
+-	opt_    map[string]interface{}
+-}
+-
+-// Delete: Deletes the specified address resource.
+-func (r *AddressesService) Delete(project string, region string, address string) *AddressesDeleteCall {
+-	c := &AddressesDeleteCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.region = region
+-	c.address = address
+-	return c
+-}
+-
+-func (c *AddressesDeleteCall) Do() (*Operation, error) {
+-	var body io.Reader = nil
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/regions/{region}/addresses/{address}")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("DELETE", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project": c.project,
+-		"region":  c.region,
+-		"address": c.address,
+-	})
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *Operation
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Deletes the specified address resource.",
+-	//   "httpMethod": "DELETE",
+-	//   "id": "compute.addresses.delete",
+-	//   "parameterOrder": [
+-	//     "project",
+-	//     "region",
+-	//     "address"
+-	//   ],
+-	//   "parameters": {
+-	//     "address": {
+-	//       "description": "Name of the address resource to delete.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "region": {
+-	//       "description": "Name of the region scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/regions/{region}/addresses/{address}",
+-	//   "response": {
+-	//     "$ref": "Operation"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute"
+-	//   ]
+-	// }
+-
+-}
+-
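googleapi.Expand above fills the {project}, {region}, and {address} placeholders in the URL template. A hedged stand-in for that step (expandPath is a hypothetical helper, not the real googleapi implementation, which also percent-escapes the substituted values):

```go
package main

import (
	"fmt"
	"strings"
)

// expandPath substitutes {name} placeholders in a URL template with
// values from vars. Purely illustrative: the real googleapi.Expand also
// escapes the values before substitution.
func expandPath(template string, vars map[string]string) string {
	for k, v := range vars {
		template = strings.ReplaceAll(template, "{"+k+"}", v)
	}
	return template
}

func main() {
	p := expandPath("{project}/regions/{region}/addresses/{address}",
		map[string]string{
			"project": "my-proj",
			"region":  "us-central1",
			"address": "my-addr",
		})
	fmt.Println(p) // my-proj/regions/us-central1/addresses/my-addr
}
```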
+-// method id "compute.addresses.get":
+-
+-type AddressesGetCall struct {
+-	s       *Service
+-	project string
+-	region  string
+-	address string
+-	opt_    map[string]interface{}
+-}
+-
+-// Get: Returns the specified address resource.
+-func (r *AddressesService) Get(project string, region string, address string) *AddressesGetCall {
+-	c := &AddressesGetCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.region = region
+-	c.address = address
+-	return c
+-}
+-
+-func (c *AddressesGetCall) Do() (*Address, error) {
+-	var body io.Reader = nil
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/regions/{region}/addresses/{address}")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("GET", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project": c.project,
+-		"region":  c.region,
+-		"address": c.address,
+-	})
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *Address
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Returns the specified address resource.",
+-	//   "httpMethod": "GET",
+-	//   "id": "compute.addresses.get",
+-	//   "parameterOrder": [
+-	//     "project",
+-	//     "region",
+-	//     "address"
+-	//   ],
+-	//   "parameters": {
+-	//     "address": {
+-	//       "description": "Name of the address resource to return.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "region": {
+-	//       "description": "Name of the region scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/regions/{region}/addresses/{address}",
+-	//   "response": {
+-	//     "$ref": "Address"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute",
+-	//     "https://www.googleapis.com/auth/compute.readonly"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.addresses.insert":
+-
+-type AddressesInsertCall struct {
+-	s       *Service
+-	project string
+-	region  string
+-	address *Address
+-	opt_    map[string]interface{}
+-}
+-
+-// Insert: Creates an address resource in the specified project using
+-// the data included in the request.
+-func (r *AddressesService) Insert(project string, region string, address *Address) *AddressesInsertCall {
+-	c := &AddressesInsertCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.region = region
+-	c.address = address
+-	return c
+-}
+-
+-func (c *AddressesInsertCall) Do() (*Operation, error) {
+-	var body io.Reader = nil
+-	body, err := googleapi.WithoutDataWrapper.JSONReader(c.address)
+-	if err != nil {
+-		return nil, err
+-	}
+-	ctype := "application/json"
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/regions/{region}/addresses")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("POST", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project": c.project,
+-		"region":  c.region,
+-	})
+-	req.Header.Set("Content-Type", ctype)
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *Operation
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Creates an address resource in the specified project using the data included in the request.",
+-	//   "httpMethod": "POST",
+-	//   "id": "compute.addresses.insert",
+-	//   "parameterOrder": [
+-	//     "project",
+-	//     "region"
+-	//   ],
+-	//   "parameters": {
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "region": {
+-	//       "description": "Name of the region scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/regions/{region}/addresses",
+-	//   "request": {
+-	//     "$ref": "Address"
+-	//   },
+-	//   "response": {
+-	//     "$ref": "Operation"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.addresses.list":
+-
+-type AddressesListCall struct {
+-	s       *Service
+-	project string
+-	region  string
+-	opt_    map[string]interface{}
+-}
+-
+-// List: Retrieves the list of address resources contained within the
+-// specified region.
+-func (r *AddressesService) List(project string, region string) *AddressesListCall {
+-	c := &AddressesListCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.region = region
+-	return c
+-}
+-
+-// Filter sets the optional parameter "filter": Filter expression for
+-// filtering listed resources.
+-func (c *AddressesListCall) Filter(filter string) *AddressesListCall {
+-	c.opt_["filter"] = filter
+-	return c
+-}
+-
+-// MaxResults sets the optional parameter "maxResults": Maximum count of
+-// results to be returned. Maximum value is 500 and default value is
+-// 500.
+-func (c *AddressesListCall) MaxResults(maxResults int64) *AddressesListCall {
+-	c.opt_["maxResults"] = maxResults
+-	return c
+-}
+-
+-// PageToken sets the optional parameter "pageToken": Tag returned by a
+-// previous list request truncated by maxResults. Used to continue a
+-// previous list request.
+-func (c *AddressesListCall) PageToken(pageToken string) *AddressesListCall {
+-	c.opt_["pageToken"] = pageToken
+-	return c
+-}
+-
+-func (c *AddressesListCall) Do() (*AddressList, error) {
+-	var body io.Reader = nil
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	if v, ok := c.opt_["filter"]; ok {
+-		params.Set("filter", fmt.Sprintf("%v", v))
+-	}
+-	if v, ok := c.opt_["maxResults"]; ok {
+-		params.Set("maxResults", fmt.Sprintf("%v", v))
+-	}
+-	if v, ok := c.opt_["pageToken"]; ok {
+-		params.Set("pageToken", fmt.Sprintf("%v", v))
+-	}
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/regions/{region}/addresses")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("GET", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project": c.project,
+-		"region":  c.region,
+-	})
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *AddressList
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Retrieves the list of address resources contained within the specified region.",
+-	//   "httpMethod": "GET",
+-	//   "id": "compute.addresses.list",
+-	//   "parameterOrder": [
+-	//     "project",
+-	//     "region"
+-	//   ],
+-	//   "parameters": {
+-	//     "filter": {
+-	//       "description": "Optional. Filter expression for filtering listed resources.",
+-	//       "location": "query",
+-	//       "type": "string"
+-	//     },
+-	//     "maxResults": {
+-	//       "default": "500",
+-	//       "description": "Optional. Maximum count of results to be returned. Maximum value is 500 and default value is 500.",
+-	//       "format": "uint32",
+-	//       "location": "query",
+-	//       "maximum": "500",
+-	//       "minimum": "0",
+-	//       "type": "integer"
+-	//     },
+-	//     "pageToken": {
+-	//       "description": "Optional. Tag returned by a previous list request truncated by maxResults. Used to continue a previous list request.",
+-	//       "location": "query",
+-	//       "type": "string"
+-	//     },
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "region": {
+-	//       "description": "Name of the region scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/regions/{region}/addresses",
+-	//   "response": {
+-	//     "$ref": "AddressList"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute",
+-	//     "https://www.googleapis.com/auth/compute.readonly"
+-	//   ]
+-	// }
+-
+-}
+-
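The PageToken setter and the NextPageToken field on the list results form the standard pagination handshake: pass the token from one response into the next request until it comes back empty. A self-contained sketch of the loop a caller would write (fakeList and its page contents are invented stand-ins for AddressesListCall.Do()):

```go
package main

import "fmt"

// page mirrors the shape of AddressList: items plus a continuation token.
type page struct {
	Items         []string
	NextPageToken string
}

// fakeList stands in for AddressesListCall.Do(): it serves one page per
// token and signals the end with an empty NextPageToken.
func fakeList(pageToken string) page {
	switch pageToken {
	case "":
		return page{Items: []string{"addr-1", "addr-2"}, NextPageToken: "t1"}
	case "t1":
		return page{Items: []string{"addr-3"}}
	}
	return page{}
}

func main() {
	var all []string
	token := ""
	for {
		p := fakeList(token) // real code: call.PageToken(token).Do()
		all = append(all, p.Items...)
		if p.NextPageToken == "" {
			break
		}
		token = p.NextPageToken
	}
	fmt.Println(all) // [addr-1 addr-2 addr-3]
}
```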
+-// method id "compute.backendServices.delete":
+-
+-type BackendServicesDeleteCall struct {
+-	s              *Service
+-	project        string
+-	backendService string
+-	opt_           map[string]interface{}
+-}
+-
+-// Delete: Deletes the specified BackendService resource.
+-func (r *BackendServicesService) Delete(project string, backendService string) *BackendServicesDeleteCall {
+-	c := &BackendServicesDeleteCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.backendService = backendService
+-	return c
+-}
+-
+-func (c *BackendServicesDeleteCall) Do() (*Operation, error) {
+-	var body io.Reader = nil
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/global/backendServices/{backendService}")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("DELETE", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project":        c.project,
+-		"backendService": c.backendService,
+-	})
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *Operation
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Deletes the specified BackendService resource.",
+-	//   "httpMethod": "DELETE",
+-	//   "id": "compute.backendServices.delete",
+-	//   "parameterOrder": [
+-	//     "project",
+-	//     "backendService"
+-	//   ],
+-	//   "parameters": {
+-	//     "backendService": {
+-	//       "description": "Name of the BackendService resource to delete.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/global/backendServices/{backendService}",
+-	//   "response": {
+-	//     "$ref": "Operation"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.backendServices.get":
+-
+-type BackendServicesGetCall struct {
+-	s              *Service
+-	project        string
+-	backendService string
+-	opt_           map[string]interface{}
+-}
+-
+-// Get: Returns the specified BackendService resource.
+-func (r *BackendServicesService) Get(project string, backendService string) *BackendServicesGetCall {
+-	c := &BackendServicesGetCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.backendService = backendService
+-	return c
+-}
+-
+-func (c *BackendServicesGetCall) Do() (*BackendService, error) {
+-	var body io.Reader = nil
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/global/backendServices/{backendService}")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("GET", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project":        c.project,
+-		"backendService": c.backendService,
+-	})
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *BackendService
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Returns the specified BackendService resource.",
+-	//   "httpMethod": "GET",
+-	//   "id": "compute.backendServices.get",
+-	//   "parameterOrder": [
+-	//     "project",
+-	//     "backendService"
+-	//   ],
+-	//   "parameters": {
+-	//     "backendService": {
+-	//       "description": "Name of the BackendService resource to return.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/global/backendServices/{backendService}",
+-	//   "response": {
+-	//     "$ref": "BackendService"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute",
+-	//     "https://www.googleapis.com/auth/compute.readonly"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.backendServices.getHealth":
+-
+-type BackendServicesGetHealthCall struct {
+-	s                      *Service
+-	project                string
+-	backendService         string
+-	resourcegroupreference *ResourceGroupReference
+-	opt_                   map[string]interface{}
+-}
+-
+-// GetHealth: Gets the most recent health check results for this
+-// BackendService.
+-func (r *BackendServicesService) GetHealth(project string, backendService string, resourcegroupreference *ResourceGroupReference) *BackendServicesGetHealthCall {
+-	c := &BackendServicesGetHealthCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.backendService = backendService
+-	c.resourcegroupreference = resourcegroupreference
+-	return c
+-}
+-
+-func (c *BackendServicesGetHealthCall) Do() (*BackendServiceGroupHealth, error) {
+-	var body io.Reader = nil
+-	body, err := googleapi.WithoutDataWrapper.JSONReader(c.resourcegroupreference)
+-	if err != nil {
+-		return nil, err
+-	}
+-	ctype := "application/json"
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/global/backendServices/{backendService}/getHealth")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("POST", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project":        c.project,
+-		"backendService": c.backendService,
+-	})
+-	req.Header.Set("Content-Type", ctype)
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *BackendServiceGroupHealth
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Gets the most recent health check results for this BackendService.",
+-	//   "httpMethod": "POST",
+-	//   "id": "compute.backendServices.getHealth",
+-	//   "parameterOrder": [
+-	//     "project",
+-	//     "backendService"
+-	//   ],
+-	//   "parameters": {
+-	//     "backendService": {
+-	//       "description": "Name of the BackendService resource to which the queried instance belongs.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "project": {
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/global/backendServices/{backendService}/getHealth",
+-	//   "request": {
+-	//     "$ref": "ResourceGroupReference"
+-	//   },
+-	//   "response": {
+-	//     "$ref": "BackendServiceGroupHealth"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute",
+-	//     "https://www.googleapis.com/auth/compute.readonly"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.backendServices.insert":
+-
+-type BackendServicesInsertCall struct {
+-	s              *Service
+-	project        string
+-	backendservice *BackendService
+-	opt_           map[string]interface{}
+-}
+-
+-// Insert: Creates a BackendService resource in the specified project
+-// using the data included in the request.
+-func (r *BackendServicesService) Insert(project string, backendservice *BackendService) *BackendServicesInsertCall {
+-	c := &BackendServicesInsertCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.backendservice = backendservice
+-	return c
+-}
+-
+-func (c *BackendServicesInsertCall) Do() (*Operation, error) {
+-	var body io.Reader = nil
+-	body, err := googleapi.WithoutDataWrapper.JSONReader(c.backendservice)
+-	if err != nil {
+-		return nil, err
+-	}
+-	ctype := "application/json"
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/global/backendServices")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("POST", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project": c.project,
+-	})
+-	req.Header.Set("Content-Type", ctype)
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *Operation
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Creates a BackendService resource in the specified project using the data included in the request.",
+-	//   "httpMethod": "POST",
+-	//   "id": "compute.backendServices.insert",
+-	//   "parameterOrder": [
+-	//     "project"
+-	//   ],
+-	//   "parameters": {
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/global/backendServices",
+-	//   "request": {
+-	//     "$ref": "BackendService"
+-	//   },
+-	//   "response": {
+-	//     "$ref": "Operation"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.backendServices.list":
+-
+-type BackendServicesListCall struct {
+-	s       *Service
+-	project string
+-	opt_    map[string]interface{}
+-}
+-
+-// List: Retrieves the list of BackendService resources available to the
+-// specified project.
+-func (r *BackendServicesService) List(project string) *BackendServicesListCall {
+-	c := &BackendServicesListCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	return c
+-}
+-
+-// Filter sets the optional parameter "filter": Filter expression for
+-// filtering listed resources.
+-func (c *BackendServicesListCall) Filter(filter string) *BackendServicesListCall {
+-	c.opt_["filter"] = filter
+-	return c
+-}
+-
+-// MaxResults sets the optional parameter "maxResults": Maximum count of
+-// results to be returned. Maximum value is 500 and default value is
+-// 500.
+-func (c *BackendServicesListCall) MaxResults(maxResults int64) *BackendServicesListCall {
+-	c.opt_["maxResults"] = maxResults
+-	return c
+-}
+-
+-// PageToken sets the optional parameter "pageToken": Tag returned by a
+-// previous list request truncated by maxResults. Used to continue a
+-// previous list request.
+-func (c *BackendServicesListCall) PageToken(pageToken string) *BackendServicesListCall {
+-	c.opt_["pageToken"] = pageToken
+-	return c
+-}
+-
+-func (c *BackendServicesListCall) Do() (*BackendServiceList, error) {
+-	var body io.Reader = nil
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	if v, ok := c.opt_["filter"]; ok {
+-		params.Set("filter", fmt.Sprintf("%v", v))
+-	}
+-	if v, ok := c.opt_["maxResults"]; ok {
+-		params.Set("maxResults", fmt.Sprintf("%v", v))
+-	}
+-	if v, ok := c.opt_["pageToken"]; ok {
+-		params.Set("pageToken", fmt.Sprintf("%v", v))
+-	}
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/global/backendServices")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("GET", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project": c.project,
+-	})
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *BackendServiceList
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Retrieves the list of BackendService resources available to the specified project.",
+-	//   "httpMethod": "GET",
+-	//   "id": "compute.backendServices.list",
+-	//   "parameterOrder": [
+-	//     "project"
+-	//   ],
+-	//   "parameters": {
+-	//     "filter": {
+-	//       "description": "Optional. Filter expression for filtering listed resources.",
+-	//       "location": "query",
+-	//       "type": "string"
+-	//     },
+-	//     "maxResults": {
+-	//       "default": "500",
+-	//       "description": "Optional. Maximum count of results to be returned. Maximum value is 500 and default value is 500.",
+-	//       "format": "uint32",
+-	//       "location": "query",
+-	//       "maximum": "500",
+-	//       "minimum": "0",
+-	//       "type": "integer"
+-	//     },
+-	//     "pageToken": {
+-	//       "description": "Optional. Tag returned by a previous list request truncated by maxResults. Used to continue a previous list request.",
+-	//       "location": "query",
+-	//       "type": "string"
+-	//     },
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/global/backendServices",
+-	//   "response": {
+-	//     "$ref": "BackendServiceList"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute",
+-	//     "https://www.googleapis.com/auth/compute.readonly"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.backendServices.patch":
+-
+-type BackendServicesPatchCall struct {
+-	s              *Service
+-	project        string
+-	backendService string
+-	backendservice *BackendService
+-	opt_           map[string]interface{}
+-}
+-
+-// Patch: Update the entire content of the BackendService resource. This
+-// method supports patch semantics.
+-func (r *BackendServicesService) Patch(project string, backendService string, backendservice *BackendService) *BackendServicesPatchCall {
+-	c := &BackendServicesPatchCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.backendService = backendService
+-	c.backendservice = backendservice
+-	return c
+-}
+-
+-func (c *BackendServicesPatchCall) Do() (*Operation, error) {
+-	var body io.Reader = nil
+-	body, err := googleapi.WithoutDataWrapper.JSONReader(c.backendservice)
+-	if err != nil {
+-		return nil, err
+-	}
+-	ctype := "application/json"
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/global/backendServices/{backendService}")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("PATCH", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project":        c.project,
+-		"backendService": c.backendService,
+-	})
+-	req.Header.Set("Content-Type", ctype)
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *Operation
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Update the entire content of the BackendService resource. This method supports patch semantics.",
+-	//   "httpMethod": "PATCH",
+-	//   "id": "compute.backendServices.patch",
+-	//   "parameterOrder": [
+-	//     "project",
+-	//     "backendService"
+-	//   ],
+-	//   "parameters": {
+-	//     "backendService": {
+-	//       "description": "Name of the BackendService resource to update.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/global/backendServices/{backendService}",
+-	//   "request": {
+-	//     "$ref": "BackendService"
+-	//   },
+-	//   "response": {
+-	//     "$ref": "Operation"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.backendServices.update":
+-
+-type BackendServicesUpdateCall struct {
+-	s              *Service
+-	project        string
+-	backendService string
+-	backendservice *BackendService
+-	opt_           map[string]interface{}
+-}
+-
+-// Update: Update the entire content of the BackendService resource.
+-func (r *BackendServicesService) Update(project string, backendService string, backendservice *BackendService) *BackendServicesUpdateCall {
+-	c := &BackendServicesUpdateCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.backendService = backendService
+-	c.backendservice = backendservice
+-	return c
+-}
+-
+-func (c *BackendServicesUpdateCall) Do() (*Operation, error) {
+-	var body io.Reader = nil
+-	body, err := googleapi.WithoutDataWrapper.JSONReader(c.backendservice)
+-	if err != nil {
+-		return nil, err
+-	}
+-	ctype := "application/json"
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/global/backendServices/{backendService}")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("PUT", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project":        c.project,
+-		"backendService": c.backendService,
+-	})
+-	req.Header.Set("Content-Type", ctype)
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *Operation
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Update the entire content of the BackendService resource.",
+-	//   "httpMethod": "PUT",
+-	//   "id": "compute.backendServices.update",
+-	//   "parameterOrder": [
+-	//     "project",
+-	//     "backendService"
+-	//   ],
+-	//   "parameters": {
+-	//     "backendService": {
+-	//       "description": "Name of the BackendService resource to update.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/global/backendServices/{backendService}",
+-	//   "request": {
+-	//     "$ref": "BackendService"
+-	//   },
+-	//   "response": {
+-	//     "$ref": "Operation"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.diskTypes.aggregatedList":
+-
+-type DiskTypesAggregatedListCall struct {
+-	s       *Service
+-	project string
+-	opt_    map[string]interface{}
+-}
+-
+-// AggregatedList: Retrieves the list of disk type resources grouped by
+-// scope.
+-func (r *DiskTypesService) AggregatedList(project string) *DiskTypesAggregatedListCall {
+-	c := &DiskTypesAggregatedListCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	return c
+-}
+-
+-// Filter sets the optional parameter "filter": Filter expression for
+-// filtering listed resources.
+-func (c *DiskTypesAggregatedListCall) Filter(filter string) *DiskTypesAggregatedListCall {
+-	c.opt_["filter"] = filter
+-	return c
+-}
+-
+-// MaxResults sets the optional parameter "maxResults": Maximum count of
+-// results to be returned. Maximum value is 500 and default value is
+-// 500.
+-func (c *DiskTypesAggregatedListCall) MaxResults(maxResults int64) *DiskTypesAggregatedListCall {
+-	c.opt_["maxResults"] = maxResults
+-	return c
+-}
+-
+-// PageToken sets the optional parameter "pageToken": Tag returned by a
+-// previous list request truncated by maxResults. Used to continue a
+-// previous list request.
+-func (c *DiskTypesAggregatedListCall) PageToken(pageToken string) *DiskTypesAggregatedListCall {
+-	c.opt_["pageToken"] = pageToken
+-	return c
+-}
+-
+-func (c *DiskTypesAggregatedListCall) Do() (*DiskTypeAggregatedList, error) {
+-	var body io.Reader = nil
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	if v, ok := c.opt_["filter"]; ok {
+-		params.Set("filter", fmt.Sprintf("%v", v))
+-	}
+-	if v, ok := c.opt_["maxResults"]; ok {
+-		params.Set("maxResults", fmt.Sprintf("%v", v))
+-	}
+-	if v, ok := c.opt_["pageToken"]; ok {
+-		params.Set("pageToken", fmt.Sprintf("%v", v))
+-	}
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/aggregated/diskTypes")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("GET", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project": c.project,
+-	})
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *DiskTypeAggregatedList
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Retrieves the list of disk type resources grouped by scope.",
+-	//   "httpMethod": "GET",
+-	//   "id": "compute.diskTypes.aggregatedList",
+-	//   "parameterOrder": [
+-	//     "project"
+-	//   ],
+-	//   "parameters": {
+-	//     "filter": {
+-	//       "description": "Optional. Filter expression for filtering listed resources.",
+-	//       "location": "query",
+-	//       "type": "string"
+-	//     },
+-	//     "maxResults": {
+-	//       "default": "500",
+-	//       "description": "Optional. Maximum count of results to be returned. Maximum value is 500 and default value is 500.",
+-	//       "format": "uint32",
+-	//       "location": "query",
+-	//       "maximum": "500",
+-	//       "minimum": "0",
+-	//       "type": "integer"
+-	//     },
+-	//     "pageToken": {
+-	//       "description": "Optional. Tag returned by a previous list request truncated by maxResults. Used to continue a previous list request.",
+-	//       "location": "query",
+-	//       "type": "string"
+-	//     },
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/aggregated/diskTypes",
+-	//   "response": {
+-	//     "$ref": "DiskTypeAggregatedList"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute",
+-	//     "https://www.googleapis.com/auth/compute.readonly"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.diskTypes.get":
+-
+-type DiskTypesGetCall struct {
+-	s        *Service
+-	project  string
+-	zone     string
+-	diskType string
+-	opt_     map[string]interface{}
+-}
+-
+-// Get: Returns the specified disk type resource.
+-func (r *DiskTypesService) Get(project string, zone string, diskType string) *DiskTypesGetCall {
+-	c := &DiskTypesGetCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.zone = zone
+-	c.diskType = diskType
+-	return c
+-}
+-
+-func (c *DiskTypesGetCall) Do() (*DiskType, error) {
+-	var body io.Reader = nil
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/zones/{zone}/diskTypes/{diskType}")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("GET", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project":  c.project,
+-		"zone":     c.zone,
+-		"diskType": c.diskType,
+-	})
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *DiskType
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Returns the specified disk type resource.",
+-	//   "httpMethod": "GET",
+-	//   "id": "compute.diskTypes.get",
+-	//   "parameterOrder": [
+-	//     "project",
+-	//     "zone",
+-	//     "diskType"
+-	//   ],
+-	//   "parameters": {
+-	//     "diskType": {
+-	//       "description": "Name of the disk type resource to return.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "zone": {
+-	//       "description": "Name of the zone scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/zones/{zone}/diskTypes/{diskType}",
+-	//   "response": {
+-	//     "$ref": "DiskType"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute",
+-	//     "https://www.googleapis.com/auth/compute.readonly"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.diskTypes.list":
+-
+-type DiskTypesListCall struct {
+-	s       *Service
+-	project string
+-	zone    string
+-	opt_    map[string]interface{}
+-}
+-
+-// List: Retrieves the list of disk type resources available to the
+-// specified project.
+-func (r *DiskTypesService) List(project string, zone string) *DiskTypesListCall {
+-	c := &DiskTypesListCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.zone = zone
+-	return c
+-}
+-
+-// Filter sets the optional parameter "filter": Filter expression for
+-// filtering listed resources.
+-func (c *DiskTypesListCall) Filter(filter string) *DiskTypesListCall {
+-	c.opt_["filter"] = filter
+-	return c
+-}
+-
+-// MaxResults sets the optional parameter "maxResults": Maximum count of
+-// results to be returned. Maximum value is 500 and default value is
+-// 500.
+-func (c *DiskTypesListCall) MaxResults(maxResults int64) *DiskTypesListCall {
+-	c.opt_["maxResults"] = maxResults
+-	return c
+-}
+-
+-// PageToken sets the optional parameter "pageToken": Tag returned by a
+-// previous list request truncated by maxResults. Used to continue a
+-// previous list request.
+-func (c *DiskTypesListCall) PageToken(pageToken string) *DiskTypesListCall {
+-	c.opt_["pageToken"] = pageToken
+-	return c
+-}
+-
+-func (c *DiskTypesListCall) Do() (*DiskTypeList, error) {
+-	var body io.Reader = nil
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	if v, ok := c.opt_["filter"]; ok {
+-		params.Set("filter", fmt.Sprintf("%v", v))
+-	}
+-	if v, ok := c.opt_["maxResults"]; ok {
+-		params.Set("maxResults", fmt.Sprintf("%v", v))
+-	}
+-	if v, ok := c.opt_["pageToken"]; ok {
+-		params.Set("pageToken", fmt.Sprintf("%v", v))
+-	}
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/zones/{zone}/diskTypes")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("GET", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project": c.project,
+-		"zone":    c.zone,
+-	})
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *DiskTypeList
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Retrieves the list of disk type resources available to the specified project.",
+-	//   "httpMethod": "GET",
+-	//   "id": "compute.diskTypes.list",
+-	//   "parameterOrder": [
+-	//     "project",
+-	//     "zone"
+-	//   ],
+-	//   "parameters": {
+-	//     "filter": {
+-	//       "description": "Optional. Filter expression for filtering listed resources.",
+-	//       "location": "query",
+-	//       "type": "string"
+-	//     },
+-	//     "maxResults": {
+-	//       "default": "500",
+-	//       "description": "Optional. Maximum count of results to be returned. Maximum value is 500 and default value is 500.",
+-	//       "format": "uint32",
+-	//       "location": "query",
+-	//       "maximum": "500",
+-	//       "minimum": "0",
+-	//       "type": "integer"
+-	//     },
+-	//     "pageToken": {
+-	//       "description": "Optional. Tag returned by a previous list request truncated by maxResults. Used to continue a previous list request.",
+-	//       "location": "query",
+-	//       "type": "string"
+-	//     },
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "zone": {
+-	//       "description": "Name of the zone scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/zones/{zone}/diskTypes",
+-	//   "response": {
+-	//     "$ref": "DiskTypeList"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute",
+-	//     "https://www.googleapis.com/auth/compute.readonly"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.disks.aggregatedList":
+-
+-type DisksAggregatedListCall struct {
+-	s       *Service
+-	project string
+-	opt_    map[string]interface{}
+-}
+-
+-// AggregatedList: Retrieves the list of disks grouped by scope.
+-func (r *DisksService) AggregatedList(project string) *DisksAggregatedListCall {
+-	c := &DisksAggregatedListCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	return c
+-}
+-
+-// Filter sets the optional parameter "filter": Filter expression for
+-// filtering listed resources.
+-func (c *DisksAggregatedListCall) Filter(filter string) *DisksAggregatedListCall {
+-	c.opt_["filter"] = filter
+-	return c
+-}
+-
+-// MaxResults sets the optional parameter "maxResults": Maximum count of
+-// results to be returned. Maximum value is 500 and default value is
+-// 500.
+-func (c *DisksAggregatedListCall) MaxResults(maxResults int64) *DisksAggregatedListCall {
+-	c.opt_["maxResults"] = maxResults
+-	return c
+-}
+-
+-// PageToken sets the optional parameter "pageToken": Tag returned by a
+-// previous list request truncated by maxResults. Used to continue a
+-// previous list request.
+-func (c *DisksAggregatedListCall) PageToken(pageToken string) *DisksAggregatedListCall {
+-	c.opt_["pageToken"] = pageToken
+-	return c
+-}
+-
+-func (c *DisksAggregatedListCall) Do() (*DiskAggregatedList, error) {
+-	var body io.Reader = nil
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	if v, ok := c.opt_["filter"]; ok {
+-		params.Set("filter", fmt.Sprintf("%v", v))
+-	}
+-	if v, ok := c.opt_["maxResults"]; ok {
+-		params.Set("maxResults", fmt.Sprintf("%v", v))
+-	}
+-	if v, ok := c.opt_["pageToken"]; ok {
+-		params.Set("pageToken", fmt.Sprintf("%v", v))
+-	}
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/aggregated/disks")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("GET", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project": c.project,
+-	})
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *DiskAggregatedList
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Retrieves the list of disks grouped by scope.",
+-	//   "httpMethod": "GET",
+-	//   "id": "compute.disks.aggregatedList",
+-	//   "parameterOrder": [
+-	//     "project"
+-	//   ],
+-	//   "parameters": {
+-	//     "filter": {
+-	//       "description": "Optional. Filter expression for filtering listed resources.",
+-	//       "location": "query",
+-	//       "type": "string"
+-	//     },
+-	//     "maxResults": {
+-	//       "default": "500",
+-	//       "description": "Optional. Maximum count of results to be returned. Maximum value is 500 and default value is 500.",
+-	//       "format": "uint32",
+-	//       "location": "query",
+-	//       "maximum": "500",
+-	//       "minimum": "0",
+-	//       "type": "integer"
+-	//     },
+-	//     "pageToken": {
+-	//       "description": "Optional. Tag returned by a previous list request truncated by maxResults. Used to continue a previous list request.",
+-	//       "location": "query",
+-	//       "type": "string"
+-	//     },
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/aggregated/disks",
+-	//   "response": {
+-	//     "$ref": "DiskAggregatedList"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute",
+-	//     "https://www.googleapis.com/auth/compute.readonly"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.disks.createSnapshot":
+-
+-type DisksCreateSnapshotCall struct {
+-	s        *Service
+-	project  string
+-	zone     string
+-	disk     string
+-	snapshot *Snapshot
+-	opt_     map[string]interface{}
+-}
+-
+-// CreateSnapshot:
+-func (r *DisksService) CreateSnapshot(project string, zone string, disk string, snapshot *Snapshot) *DisksCreateSnapshotCall {
+-	c := &DisksCreateSnapshotCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.zone = zone
+-	c.disk = disk
+-	c.snapshot = snapshot
+-	return c
+-}
+-
+-func (c *DisksCreateSnapshotCall) Do() (*Operation, error) {
+-	var body io.Reader = nil
+-	body, err := googleapi.WithoutDataWrapper.JSONReader(c.snapshot)
+-	if err != nil {
+-		return nil, err
+-	}
+-	ctype := "application/json"
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/zones/{zone}/disks/{disk}/createSnapshot")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("POST", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project": c.project,
+-		"zone":    c.zone,
+-		"disk":    c.disk,
+-	})
+-	req.Header.Set("Content-Type", ctype)
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *Operation
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "httpMethod": "POST",
+-	//   "id": "compute.disks.createSnapshot",
+-	//   "parameterOrder": [
+-	//     "project",
+-	//     "zone",
+-	//     "disk"
+-	//   ],
+-	//   "parameters": {
+-	//     "disk": {
+-	//       "description": "Name of the persistent disk resource to snapshot.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "zone": {
+-	//       "description": "Name of the zone scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/zones/{zone}/disks/{disk}/createSnapshot",
+-	//   "request": {
+-	//     "$ref": "Snapshot"
+-	//   },
+-	//   "response": {
+-	//     "$ref": "Operation"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.disks.delete":
+-
+-type DisksDeleteCall struct {
+-	s       *Service
+-	project string
+-	zone    string
+-	disk    string
+-	opt_    map[string]interface{}
+-}
+-
+-// Delete: Deletes the specified persistent disk resource.
+-func (r *DisksService) Delete(project string, zone string, disk string) *DisksDeleteCall {
+-	c := &DisksDeleteCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.zone = zone
+-	c.disk = disk
+-	return c
+-}
+-
+-func (c *DisksDeleteCall) Do() (*Operation, error) {
+-	var body io.Reader = nil
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/zones/{zone}/disks/{disk}")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("DELETE", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project": c.project,
+-		"zone":    c.zone,
+-		"disk":    c.disk,
+-	})
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *Operation
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Deletes the specified persistent disk resource.",
+-	//   "httpMethod": "DELETE",
+-	//   "id": "compute.disks.delete",
+-	//   "parameterOrder": [
+-	//     "project",
+-	//     "zone",
+-	//     "disk"
+-	//   ],
+-	//   "parameters": {
+-	//     "disk": {
+-	//       "description": "Name of the persistent disk resource to delete.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "zone": {
+-	//       "description": "Name of the zone scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/zones/{zone}/disks/{disk}",
+-	//   "response": {
+-	//     "$ref": "Operation"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.disks.get":
+-
+-type DisksGetCall struct {
+-	s       *Service
+-	project string
+-	zone    string
+-	disk    string
+-	opt_    map[string]interface{}
+-}
+-
+-// Get: Returns the specified persistent disk resource.
+-func (r *DisksService) Get(project string, zone string, disk string) *DisksGetCall {
+-	c := &DisksGetCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.zone = zone
+-	c.disk = disk
+-	return c
+-}
+-
+-func (c *DisksGetCall) Do() (*Disk, error) {
+-	var body io.Reader = nil
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/zones/{zone}/disks/{disk}")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("GET", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project": c.project,
+-		"zone":    c.zone,
+-		"disk":    c.disk,
+-	})
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *Disk
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Returns the specified persistent disk resource.",
+-	//   "httpMethod": "GET",
+-	//   "id": "compute.disks.get",
+-	//   "parameterOrder": [
+-	//     "project",
+-	//     "zone",
+-	//     "disk"
+-	//   ],
+-	//   "parameters": {
+-	//     "disk": {
+-	//       "description": "Name of the persistent disk resource to return.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "zone": {
+-	//       "description": "Name of the zone scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/zones/{zone}/disks/{disk}",
+-	//   "response": {
+-	//     "$ref": "Disk"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute",
+-	//     "https://www.googleapis.com/auth/compute.readonly"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.disks.insert":
+-
+-type DisksInsertCall struct {
+-	s       *Service
+-	project string
+-	zone    string
+-	disk    *Disk
+-	opt_    map[string]interface{}
+-}
+-
+-// Insert: Creates a persistent disk resource in the specified project
+-// using the data included in the request.
+-func (r *DisksService) Insert(project string, zone string, disk *Disk) *DisksInsertCall {
+-	c := &DisksInsertCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.zone = zone
+-	c.disk = disk
+-	return c
+-}
+-
+-// SourceImage sets the optional parameter "sourceImage": Source image
+-// to restore onto a disk.
+-func (c *DisksInsertCall) SourceImage(sourceImage string) *DisksInsertCall {
+-	c.opt_["sourceImage"] = sourceImage
+-	return c
+-}
+-
+-func (c *DisksInsertCall) Do() (*Operation, error) {
+-	var body io.Reader = nil
+-	body, err := googleapi.WithoutDataWrapper.JSONReader(c.disk)
+-	if err != nil {
+-		return nil, err
+-	}
+-	ctype := "application/json"
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	if v, ok := c.opt_["sourceImage"]; ok {
+-		params.Set("sourceImage", fmt.Sprintf("%v", v))
+-	}
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/zones/{zone}/disks")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("POST", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project": c.project,
+-		"zone":    c.zone,
+-	})
+-	req.Header.Set("Content-Type", ctype)
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *Operation
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Creates a persistent disk resource in the specified project using the data included in the request.",
+-	//   "httpMethod": "POST",
+-	//   "id": "compute.disks.insert",
+-	//   "parameterOrder": [
+-	//     "project",
+-	//     "zone"
+-	//   ],
+-	//   "parameters": {
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "sourceImage": {
+-	//       "description": "Optional. Source image to restore onto a disk.",
+-	//       "location": "query",
+-	//       "type": "string"
+-	//     },
+-	//     "zone": {
+-	//       "description": "Name of the zone scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/zones/{zone}/disks",
+-	//   "request": {
+-	//     "$ref": "Disk"
+-	//   },
+-	//   "response": {
+-	//     "$ref": "Operation"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.disks.list":
+-
+-type DisksListCall struct {
+-	s       *Service
+-	project string
+-	zone    string
+-	opt_    map[string]interface{}
+-}
+-
+-// List: Retrieves the list of persistent disk resources contained
+-// within the specified zone.
+-func (r *DisksService) List(project string, zone string) *DisksListCall {
+-	c := &DisksListCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.zone = zone
+-	return c
+-}
+-
+-// Filter sets the optional parameter "filter": Filter expression for
+-// filtering listed resources.
+-func (c *DisksListCall) Filter(filter string) *DisksListCall {
+-	c.opt_["filter"] = filter
+-	return c
+-}
+-
+-// MaxResults sets the optional parameter "maxResults": Maximum count of
+-// results to be returned. Maximum value is 500 and default value is
+-// 500.
+-func (c *DisksListCall) MaxResults(maxResults int64) *DisksListCall {
+-	c.opt_["maxResults"] = maxResults
+-	return c
+-}
+-
+-// PageToken sets the optional parameter "pageToken": Tag returned by a
+-// previous list request truncated by maxResults. Used to continue a
+-// previous list request.
+-func (c *DisksListCall) PageToken(pageToken string) *DisksListCall {
+-	c.opt_["pageToken"] = pageToken
+-	return c
+-}
+-
+-func (c *DisksListCall) Do() (*DiskList, error) {
+-	var body io.Reader = nil
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	if v, ok := c.opt_["filter"]; ok {
+-		params.Set("filter", fmt.Sprintf("%v", v))
+-	}
+-	if v, ok := c.opt_["maxResults"]; ok {
+-		params.Set("maxResults", fmt.Sprintf("%v", v))
+-	}
+-	if v, ok := c.opt_["pageToken"]; ok {
+-		params.Set("pageToken", fmt.Sprintf("%v", v))
+-	}
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/zones/{zone}/disks")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("GET", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project": c.project,
+-		"zone":    c.zone,
+-	})
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *DiskList
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Retrieves the list of persistent disk resources contained within the specified zone.",
+-	//   "httpMethod": "GET",
+-	//   "id": "compute.disks.list",
+-	//   "parameterOrder": [
+-	//     "project",
+-	//     "zone"
+-	//   ],
+-	//   "parameters": {
+-	//     "filter": {
+-	//       "description": "Optional. Filter expression for filtering listed resources.",
+-	//       "location": "query",
+-	//       "type": "string"
+-	//     },
+-	//     "maxResults": {
+-	//       "default": "500",
+-	//       "description": "Optional. Maximum count of results to be returned. Maximum value is 500 and default value is 500.",
+-	//       "format": "uint32",
+-	//       "location": "query",
+-	//       "maximum": "500",
+-	//       "minimum": "0",
+-	//       "type": "integer"
+-	//     },
+-	//     "pageToken": {
+-	//       "description": "Optional. Tag returned by a previous list request truncated by maxResults. Used to continue a previous list request.",
+-	//       "location": "query",
+-	//       "type": "string"
+-	//     },
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "zone": {
+-	//       "description": "Name of the zone scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/zones/{zone}/disks",
+-	//   "response": {
+-	//     "$ref": "DiskList"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute",
+-	//     "https://www.googleapis.com/auth/compute.readonly"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.firewalls.delete":
+-
+-type FirewallsDeleteCall struct {
+-	s        *Service
+-	project  string
+-	firewall string
+-	opt_     map[string]interface{}
+-}
+-
+-// Delete: Deletes the specified firewall resource.
+-func (r *FirewallsService) Delete(project string, firewall string) *FirewallsDeleteCall {
+-	c := &FirewallsDeleteCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.firewall = firewall
+-	return c
+-}
+-
+-func (c *FirewallsDeleteCall) Do() (*Operation, error) {
+-	var body io.Reader = nil
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/global/firewalls/{firewall}")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("DELETE", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project":  c.project,
+-		"firewall": c.firewall,
+-	})
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *Operation
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Deletes the specified firewall resource.",
+-	//   "httpMethod": "DELETE",
+-	//   "id": "compute.firewalls.delete",
+-	//   "parameterOrder": [
+-	//     "project",
+-	//     "firewall"
+-	//   ],
+-	//   "parameters": {
+-	//     "firewall": {
+-	//       "description": "Name of the firewall resource to delete.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/global/firewalls/{firewall}",
+-	//   "response": {
+-	//     "$ref": "Operation"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.firewalls.get":
+-
+-type FirewallsGetCall struct {
+-	s        *Service
+-	project  string
+-	firewall string
+-	opt_     map[string]interface{}
+-}
+-
+-// Get: Returns the specified firewall resource.
+-func (r *FirewallsService) Get(project string, firewall string) *FirewallsGetCall {
+-	c := &FirewallsGetCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.firewall = firewall
+-	return c
+-}
+-
+-func (c *FirewallsGetCall) Do() (*Firewall, error) {
+-	var body io.Reader = nil
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/global/firewalls/{firewall}")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("GET", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project":  c.project,
+-		"firewall": c.firewall,
+-	})
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *Firewall
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Returns the specified firewall resource.",
+-	//   "httpMethod": "GET",
+-	//   "id": "compute.firewalls.get",
+-	//   "parameterOrder": [
+-	//     "project",
+-	//     "firewall"
+-	//   ],
+-	//   "parameters": {
+-	//     "firewall": {
+-	//       "description": "Name of the firewall resource to return.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/global/firewalls/{firewall}",
+-	//   "response": {
+-	//     "$ref": "Firewall"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute",
+-	//     "https://www.googleapis.com/auth/compute.readonly"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.firewalls.insert":
+-
+-type FirewallsInsertCall struct {
+-	s        *Service
+-	project  string
+-	firewall *Firewall
+-	opt_     map[string]interface{}
+-}
+-
+-// Insert: Creates a firewall resource in the specified project using
+-// the data included in the request.
+-func (r *FirewallsService) Insert(project string, firewall *Firewall) *FirewallsInsertCall {
+-	c := &FirewallsInsertCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.firewall = firewall
+-	return c
+-}
+-
+-func (c *FirewallsInsertCall) Do() (*Operation, error) {
+-	var body io.Reader = nil
+-	body, err := googleapi.WithoutDataWrapper.JSONReader(c.firewall)
+-	if err != nil {
+-		return nil, err
+-	}
+-	ctype := "application/json"
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/global/firewalls")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("POST", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project": c.project,
+-	})
+-	req.Header.Set("Content-Type", ctype)
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *Operation
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Creates a firewall resource in the specified project using the data included in the request.",
+-	//   "httpMethod": "POST",
+-	//   "id": "compute.firewalls.insert",
+-	//   "parameterOrder": [
+-	//     "project"
+-	//   ],
+-	//   "parameters": {
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/global/firewalls",
+-	//   "request": {
+-	//     "$ref": "Firewall"
+-	//   },
+-	//   "response": {
+-	//     "$ref": "Operation"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.firewalls.list":
+-
+-type FirewallsListCall struct {
+-	s       *Service
+-	project string
+-	opt_    map[string]interface{}
+-}
+-
+-// List: Retrieves the list of firewall resources available to the
+-// specified project.
+-func (r *FirewallsService) List(project string) *FirewallsListCall {
+-	c := &FirewallsListCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	return c
+-}
+-
+-// Filter sets the optional parameter "filter": Filter expression for
+-// filtering listed resources.
+-func (c *FirewallsListCall) Filter(filter string) *FirewallsListCall {
+-	c.opt_["filter"] = filter
+-	return c
+-}
+-
+-// MaxResults sets the optional parameter "maxResults": Maximum count of
+-// results to be returned. Maximum value is 500 and default value is
+-// 500.
+-func (c *FirewallsListCall) MaxResults(maxResults int64) *FirewallsListCall {
+-	c.opt_["maxResults"] = maxResults
+-	return c
+-}
+-
+-// PageToken sets the optional parameter "pageToken": Tag returned by a
+-// previous list request truncated by maxResults. Used to continue a
+-// previous list request.
+-func (c *FirewallsListCall) PageToken(pageToken string) *FirewallsListCall {
+-	c.opt_["pageToken"] = pageToken
+-	return c
+-}
+-
+-func (c *FirewallsListCall) Do() (*FirewallList, error) {
+-	var body io.Reader = nil
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	if v, ok := c.opt_["filter"]; ok {
+-		params.Set("filter", fmt.Sprintf("%v", v))
+-	}
+-	if v, ok := c.opt_["maxResults"]; ok {
+-		params.Set("maxResults", fmt.Sprintf("%v", v))
+-	}
+-	if v, ok := c.opt_["pageToken"]; ok {
+-		params.Set("pageToken", fmt.Sprintf("%v", v))
+-	}
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/global/firewalls")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("GET", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project": c.project,
+-	})
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *FirewallList
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Retrieves the list of firewall resources available to the specified project.",
+-	//   "httpMethod": "GET",
+-	//   "id": "compute.firewalls.list",
+-	//   "parameterOrder": [
+-	//     "project"
+-	//   ],
+-	//   "parameters": {
+-	//     "filter": {
+-	//       "description": "Optional. Filter expression for filtering listed resources.",
+-	//       "location": "query",
+-	//       "type": "string"
+-	//     },
+-	//     "maxResults": {
+-	//       "default": "500",
+-	//       "description": "Optional. Maximum count of results to be returned. Maximum value is 500 and default value is 500.",
+-	//       "format": "uint32",
+-	//       "location": "query",
+-	//       "maximum": "500",
+-	//       "minimum": "0",
+-	//       "type": "integer"
+-	//     },
+-	//     "pageToken": {
+-	//       "description": "Optional. Tag returned by a previous list request truncated by maxResults. Used to continue a previous list request.",
+-	//       "location": "query",
+-	//       "type": "string"
+-	//     },
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/global/firewalls",
+-	//   "response": {
+-	//     "$ref": "FirewallList"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute",
+-	//     "https://www.googleapis.com/auth/compute.readonly"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.firewalls.patch":
+-
+-type FirewallsPatchCall struct {
+-	s         *Service
+-	project   string
+-	firewall  string
+-	firewall2 *Firewall
+-	opt_      map[string]interface{}
+-}
+-
+-// Patch: Updates the specified firewall resource with the data included
+-// in the request. This method supports patch semantics.
+-func (r *FirewallsService) Patch(project string, firewall string, firewall2 *Firewall) *FirewallsPatchCall {
+-	c := &FirewallsPatchCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.firewall = firewall
+-	c.firewall2 = firewall2
+-	return c
+-}
+-
+-func (c *FirewallsPatchCall) Do() (*Operation, error) {
+-	var body io.Reader = nil
+-	body, err := googleapi.WithoutDataWrapper.JSONReader(c.firewall2)
+-	if err != nil {
+-		return nil, err
+-	}
+-	ctype := "application/json"
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/global/firewalls/{firewall}")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("PATCH", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project":  c.project,
+-		"firewall": c.firewall,
+-	})
+-	req.Header.Set("Content-Type", ctype)
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *Operation
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Updates the specified firewall resource with the data included in the request. This method supports patch semantics.",
+-	//   "httpMethod": "PATCH",
+-	//   "id": "compute.firewalls.patch",
+-	//   "parameterOrder": [
+-	//     "project",
+-	//     "firewall"
+-	//   ],
+-	//   "parameters": {
+-	//     "firewall": {
+-	//       "description": "Name of the firewall resource to update.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/global/firewalls/{firewall}",
+-	//   "request": {
+-	//     "$ref": "Firewall"
+-	//   },
+-	//   "response": {
+-	//     "$ref": "Operation"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.firewalls.update":
+-
+-type FirewallsUpdateCall struct {
+-	s         *Service
+-	project   string
+-	firewall  string
+-	firewall2 *Firewall
+-	opt_      map[string]interface{}
+-}
+-
+-// Update: Updates the specified firewall resource with the data
+-// included in the request.
+-func (r *FirewallsService) Update(project string, firewall string, firewall2 *Firewall) *FirewallsUpdateCall {
+-	c := &FirewallsUpdateCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.firewall = firewall
+-	c.firewall2 = firewall2
+-	return c
+-}
+-
+-func (c *FirewallsUpdateCall) Do() (*Operation, error) {
+-	var body io.Reader = nil
+-	body, err := googleapi.WithoutDataWrapper.JSONReader(c.firewall2)
+-	if err != nil {
+-		return nil, err
+-	}
+-	ctype := "application/json"
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/global/firewalls/{firewall}")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("PUT", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project":  c.project,
+-		"firewall": c.firewall,
+-	})
+-	req.Header.Set("Content-Type", ctype)
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *Operation
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Updates the specified firewall resource with the data included in the request.",
+-	//   "httpMethod": "PUT",
+-	//   "id": "compute.firewalls.update",
+-	//   "parameterOrder": [
+-	//     "project",
+-	//     "firewall"
+-	//   ],
+-	//   "parameters": {
+-	//     "firewall": {
+-	//       "description": "Name of the firewall resource to update.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/global/firewalls/{firewall}",
+-	//   "request": {
+-	//     "$ref": "Firewall"
+-	//   },
+-	//   "response": {
+-	//     "$ref": "Operation"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.forwardingRules.aggregatedList":
+-
+-type ForwardingRulesAggregatedListCall struct {
+-	s       *Service
+-	project string
+-	opt_    map[string]interface{}
+-}
+-
+-// AggregatedList: Retrieves the list of forwarding rules grouped by
+-// scope.
+-func (r *ForwardingRulesService) AggregatedList(project string) *ForwardingRulesAggregatedListCall {
+-	c := &ForwardingRulesAggregatedListCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	return c
+-}
+-
+-// Filter sets the optional parameter "filter": Filter expression for
+-// filtering listed resources.
+-func (c *ForwardingRulesAggregatedListCall) Filter(filter string) *ForwardingRulesAggregatedListCall {
+-	c.opt_["filter"] = filter
+-	return c
+-}
+-
+-// MaxResults sets the optional parameter "maxResults": Maximum count of
+-// results to be returned. Maximum value is 500 and default value is
+-// 500.
+-func (c *ForwardingRulesAggregatedListCall) MaxResults(maxResults int64) *ForwardingRulesAggregatedListCall {
+-	c.opt_["maxResults"] = maxResults
+-	return c
+-}
+-
+-// PageToken sets the optional parameter "pageToken": Tag returned by a
+-// previous list request truncated by maxResults. Used to continue a
+-// previous list request.
+-func (c *ForwardingRulesAggregatedListCall) PageToken(pageToken string) *ForwardingRulesAggregatedListCall {
+-	c.opt_["pageToken"] = pageToken
+-	return c
+-}
+-
+-func (c *ForwardingRulesAggregatedListCall) Do() (*ForwardingRuleAggregatedList, error) {
+-	var body io.Reader = nil
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	if v, ok := c.opt_["filter"]; ok {
+-		params.Set("filter", fmt.Sprintf("%v", v))
+-	}
+-	if v, ok := c.opt_["maxResults"]; ok {
+-		params.Set("maxResults", fmt.Sprintf("%v", v))
+-	}
+-	if v, ok := c.opt_["pageToken"]; ok {
+-		params.Set("pageToken", fmt.Sprintf("%v", v))
+-	}
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/aggregated/forwardingRules")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("GET", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project": c.project,
+-	})
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *ForwardingRuleAggregatedList
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Retrieves the list of forwarding rules grouped by scope.",
+-	//   "httpMethod": "GET",
+-	//   "id": "compute.forwardingRules.aggregatedList",
+-	//   "parameterOrder": [
+-	//     "project"
+-	//   ],
+-	//   "parameters": {
+-	//     "filter": {
+-	//       "description": "Optional. Filter expression for filtering listed resources.",
+-	//       "location": "query",
+-	//       "type": "string"
+-	//     },
+-	//     "maxResults": {
+-	//       "default": "500",
+-	//       "description": "Optional. Maximum count of results to be returned. Maximum value is 500 and default value is 500.",
+-	//       "format": "uint32",
+-	//       "location": "query",
+-	//       "maximum": "500",
+-	//       "minimum": "0",
+-	//       "type": "integer"
+-	//     },
+-	//     "pageToken": {
+-	//       "description": "Optional. Tag returned by a previous list request truncated by maxResults. Used to continue a previous list request.",
+-	//       "location": "query",
+-	//       "type": "string"
+-	//     },
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/aggregated/forwardingRules",
+-	//   "response": {
+-	//     "$ref": "ForwardingRuleAggregatedList"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute",
+-	//     "https://www.googleapis.com/auth/compute.readonly"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.forwardingRules.delete":
+-
+-type ForwardingRulesDeleteCall struct {
+-	s              *Service
+-	project        string
+-	region         string
+-	forwardingRule string
+-	opt_           map[string]interface{}
+-}
+-
+-// Delete: Deletes the specified ForwardingRule resource.
+-func (r *ForwardingRulesService) Delete(project string, region string, forwardingRule string) *ForwardingRulesDeleteCall {
+-	c := &ForwardingRulesDeleteCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.region = region
+-	c.forwardingRule = forwardingRule
+-	return c
+-}
+-
+-func (c *ForwardingRulesDeleteCall) Do() (*Operation, error) {
+-	var body io.Reader = nil
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/regions/{region}/forwardingRules/{forwardingRule}")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("DELETE", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project":        c.project,
+-		"region":         c.region,
+-		"forwardingRule": c.forwardingRule,
+-	})
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *Operation
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Deletes the specified ForwardingRule resource.",
+-	//   "httpMethod": "DELETE",
+-	//   "id": "compute.forwardingRules.delete",
+-	//   "parameterOrder": [
+-	//     "project",
+-	//     "region",
+-	//     "forwardingRule"
+-	//   ],
+-	//   "parameters": {
+-	//     "forwardingRule": {
+-	//       "description": "Name of the ForwardingRule resource to delete.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "region": {
+-	//       "description": "Name of the region scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/regions/{region}/forwardingRules/{forwardingRule}",
+-	//   "response": {
+-	//     "$ref": "Operation"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.forwardingRules.get":
+-
+-type ForwardingRulesGetCall struct {
+-	s              *Service
+-	project        string
+-	region         string
+-	forwardingRule string
+-	opt_           map[string]interface{}
+-}
+-
+-// Get: Returns the specified ForwardingRule resource.
+-func (r *ForwardingRulesService) Get(project string, region string, forwardingRule string) *ForwardingRulesGetCall {
+-	c := &ForwardingRulesGetCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.region = region
+-	c.forwardingRule = forwardingRule
+-	return c
+-}
+-
+-func (c *ForwardingRulesGetCall) Do() (*ForwardingRule, error) {
+-	var body io.Reader = nil
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/regions/{region}/forwardingRules/{forwardingRule}")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("GET", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project":        c.project,
+-		"region":         c.region,
+-		"forwardingRule": c.forwardingRule,
+-	})
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *ForwardingRule
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Returns the specified ForwardingRule resource.",
+-	//   "httpMethod": "GET",
+-	//   "id": "compute.forwardingRules.get",
+-	//   "parameterOrder": [
+-	//     "project",
+-	//     "region",
+-	//     "forwardingRule"
+-	//   ],
+-	//   "parameters": {
+-	//     "forwardingRule": {
+-	//       "description": "Name of the ForwardingRule resource to return.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "region": {
+-	//       "description": "Name of the region scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/regions/{region}/forwardingRules/{forwardingRule}",
+-	//   "response": {
+-	//     "$ref": "ForwardingRule"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute",
+-	//     "https://www.googleapis.com/auth/compute.readonly"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.forwardingRules.insert":
+-
+-type ForwardingRulesInsertCall struct {
+-	s              *Service
+-	project        string
+-	region         string
+-	forwardingrule *ForwardingRule
+-	opt_           map[string]interface{}
+-}
+-
+-// Insert: Creates a ForwardingRule resource in the specified project
+-// and region using the data included in the request.
+-func (r *ForwardingRulesService) Insert(project string, region string, forwardingrule *ForwardingRule) *ForwardingRulesInsertCall {
+-	c := &ForwardingRulesInsertCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.region = region
+-	c.forwardingrule = forwardingrule
+-	return c
+-}
+-
+-func (c *ForwardingRulesInsertCall) Do() (*Operation, error) {
+-	var body io.Reader = nil
+-	body, err := googleapi.WithoutDataWrapper.JSONReader(c.forwardingrule)
+-	if err != nil {
+-		return nil, err
+-	}
+-	ctype := "application/json"
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/regions/{region}/forwardingRules")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("POST", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project": c.project,
+-		"region":  c.region,
+-	})
+-	req.Header.Set("Content-Type", ctype)
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *Operation
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Creates a ForwardingRule resource in the specified project and region using the data included in the request.",
+-	//   "httpMethod": "POST",
+-	//   "id": "compute.forwardingRules.insert",
+-	//   "parameterOrder": [
+-	//     "project",
+-	//     "region"
+-	//   ],
+-	//   "parameters": {
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "region": {
+-	//       "description": "Name of the region scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/regions/{region}/forwardingRules",
+-	//   "request": {
+-	//     "$ref": "ForwardingRule"
+-	//   },
+-	//   "response": {
+-	//     "$ref": "Operation"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.forwardingRules.list":
+-
+-type ForwardingRulesListCall struct {
+-	s       *Service
+-	project string
+-	region  string
+-	opt_    map[string]interface{}
+-}
+-
+-// List: Retrieves the list of ForwardingRule resources available to the
+-// specified project and region.
+-func (r *ForwardingRulesService) List(project string, region string) *ForwardingRulesListCall {
+-	c := &ForwardingRulesListCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.region = region
+-	return c
+-}
+-
+-// Filter sets the optional parameter "filter": Filter expression for
+-// filtering listed resources.
+-func (c *ForwardingRulesListCall) Filter(filter string) *ForwardingRulesListCall {
+-	c.opt_["filter"] = filter
+-	return c
+-}
+-
+-// MaxResults sets the optional parameter "maxResults": Maximum count of
+-// results to be returned. Maximum value is 500 and default value is
+-// 500.
+-func (c *ForwardingRulesListCall) MaxResults(maxResults int64) *ForwardingRulesListCall {
+-	c.opt_["maxResults"] = maxResults
+-	return c
+-}
+-
+-// PageToken sets the optional parameter "pageToken": Tag returned by a
+-// previous list request truncated by maxResults. Used to continue a
+-// previous list request.
+-func (c *ForwardingRulesListCall) PageToken(pageToken string) *ForwardingRulesListCall {
+-	c.opt_["pageToken"] = pageToken
+-	return c
+-}
+-
+-func (c *ForwardingRulesListCall) Do() (*ForwardingRuleList, error) {
+-	var body io.Reader = nil
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	if v, ok := c.opt_["filter"]; ok {
+-		params.Set("filter", fmt.Sprintf("%v", v))
+-	}
+-	if v, ok := c.opt_["maxResults"]; ok {
+-		params.Set("maxResults", fmt.Sprintf("%v", v))
+-	}
+-	if v, ok := c.opt_["pageToken"]; ok {
+-		params.Set("pageToken", fmt.Sprintf("%v", v))
+-	}
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/regions/{region}/forwardingRules")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("GET", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project": c.project,
+-		"region":  c.region,
+-	})
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *ForwardingRuleList
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Retrieves the list of ForwardingRule resources available to the specified project and region.",
+-	//   "httpMethod": "GET",
+-	//   "id": "compute.forwardingRules.list",
+-	//   "parameterOrder": [
+-	//     "project",
+-	//     "region"
+-	//   ],
+-	//   "parameters": {
+-	//     "filter": {
+-	//       "description": "Optional. Filter expression for filtering listed resources.",
+-	//       "location": "query",
+-	//       "type": "string"
+-	//     },
+-	//     "maxResults": {
+-	//       "default": "500",
+-	//       "description": "Optional. Maximum count of results to be returned. Maximum value is 500 and default value is 500.",
+-	//       "format": "uint32",
+-	//       "location": "query",
+-	//       "maximum": "500",
+-	//       "minimum": "0",
+-	//       "type": "integer"
+-	//     },
+-	//     "pageToken": {
+-	//       "description": "Optional. Tag returned by a previous list request truncated by maxResults. Used to continue a previous list request.",
+-	//       "location": "query",
+-	//       "type": "string"
+-	//     },
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "region": {
+-	//       "description": "Name of the region scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/regions/{region}/forwardingRules",
+-	//   "response": {
+-	//     "$ref": "ForwardingRuleList"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute",
+-	//     "https://www.googleapis.com/auth/compute.readonly"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.forwardingRules.setTarget":
+-
+-type ForwardingRulesSetTargetCall struct {
+-	s               *Service
+-	project         string
+-	region          string
+-	forwardingRule  string
+-	targetreference *TargetReference
+-	opt_            map[string]interface{}
+-}
+-
+-// SetTarget: Changes target url for forwarding rule.
+-func (r *ForwardingRulesService) SetTarget(project string, region string, forwardingRule string, targetreference *TargetReference) *ForwardingRulesSetTargetCall {
+-	c := &ForwardingRulesSetTargetCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.region = region
+-	c.forwardingRule = forwardingRule
+-	c.targetreference = targetreference
+-	return c
+-}
+-
+-func (c *ForwardingRulesSetTargetCall) Do() (*Operation, error) {
+-	var body io.Reader = nil
+-	body, err := googleapi.WithoutDataWrapper.JSONReader(c.targetreference)
+-	if err != nil {
+-		return nil, err
+-	}
+-	ctype := "application/json"
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/regions/{region}/forwardingRules/{forwardingRule}/setTarget")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("POST", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project":        c.project,
+-		"region":         c.region,
+-		"forwardingRule": c.forwardingRule,
+-	})
+-	req.Header.Set("Content-Type", ctype)
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *Operation
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Changes target url for forwarding rule.",
+-	//   "httpMethod": "POST",
+-	//   "id": "compute.forwardingRules.setTarget",
+-	//   "parameterOrder": [
+-	//     "project",
+-	//     "region",
+-	//     "forwardingRule"
+-	//   ],
+-	//   "parameters": {
+-	//     "forwardingRule": {
+-	//       "description": "Name of the ForwardingRule resource in which target is to be set.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "region": {
+-	//       "description": "Name of the region scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/regions/{region}/forwardingRules/{forwardingRule}/setTarget",
+-	//   "request": {
+-	//     "$ref": "TargetReference"
+-	//   },
+-	//   "response": {
+-	//     "$ref": "Operation"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.globalAddresses.delete":
+-
+-type GlobalAddressesDeleteCall struct {
+-	s       *Service
+-	project string
+-	address string
+-	opt_    map[string]interface{}
+-}
+-
+-// Delete: Deletes the specified address resource.
+-func (r *GlobalAddressesService) Delete(project string, address string) *GlobalAddressesDeleteCall {
+-	c := &GlobalAddressesDeleteCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.address = address
+-	return c
+-}
+-
+-func (c *GlobalAddressesDeleteCall) Do() (*Operation, error) {
+-	var body io.Reader = nil
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/global/addresses/{address}")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("DELETE", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project": c.project,
+-		"address": c.address,
+-	})
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *Operation
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Deletes the specified address resource.",
+-	//   "httpMethod": "DELETE",
+-	//   "id": "compute.globalAddresses.delete",
+-	//   "parameterOrder": [
+-	//     "project",
+-	//     "address"
+-	//   ],
+-	//   "parameters": {
+-	//     "address": {
+-	//       "description": "Name of the address resource to delete.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/global/addresses/{address}",
+-	//   "response": {
+-	//     "$ref": "Operation"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.globalAddresses.get":
+-
+-type GlobalAddressesGetCall struct {
+-	s       *Service
+-	project string
+-	address string
+-	opt_    map[string]interface{}
+-}
+-
+-// Get: Returns the specified address resource.
+-func (r *GlobalAddressesService) Get(project string, address string) *GlobalAddressesGetCall {
+-	c := &GlobalAddressesGetCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.address = address
+-	return c
+-}
+-
+-func (c *GlobalAddressesGetCall) Do() (*Address, error) {
+-	var body io.Reader = nil
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/global/addresses/{address}")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("GET", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project": c.project,
+-		"address": c.address,
+-	})
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *Address
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Returns the specified address resource.",
+-	//   "httpMethod": "GET",
+-	//   "id": "compute.globalAddresses.get",
+-	//   "parameterOrder": [
+-	//     "project",
+-	//     "address"
+-	//   ],
+-	//   "parameters": {
+-	//     "address": {
+-	//       "description": "Name of the address resource to return.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/global/addresses/{address}",
+-	//   "response": {
+-	//     "$ref": "Address"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute",
+-	//     "https://www.googleapis.com/auth/compute.readonly"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.globalAddresses.insert":
+-
+-type GlobalAddressesInsertCall struct {
+-	s       *Service
+-	project string
+-	address *Address
+-	opt_    map[string]interface{}
+-}
+-
+-// Insert: Creates an address resource in the specified project using
+-// the data included in the request.
+-func (r *GlobalAddressesService) Insert(project string, address *Address) *GlobalAddressesInsertCall {
+-	c := &GlobalAddressesInsertCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.address = address
+-	return c
+-}
+-
+-func (c *GlobalAddressesInsertCall) Do() (*Operation, error) {
+-	var body io.Reader = nil
+-	body, err := googleapi.WithoutDataWrapper.JSONReader(c.address)
+-	if err != nil {
+-		return nil, err
+-	}
+-	ctype := "application/json"
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/global/addresses")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("POST", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project": c.project,
+-	})
+-	req.Header.Set("Content-Type", ctype)
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *Operation
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Creates an address resource in the specified project using the data included in the request.",
+-	//   "httpMethod": "POST",
+-	//   "id": "compute.globalAddresses.insert",
+-	//   "parameterOrder": [
+-	//     "project"
+-	//   ],
+-	//   "parameters": {
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/global/addresses",
+-	//   "request": {
+-	//     "$ref": "Address"
+-	//   },
+-	//   "response": {
+-	//     "$ref": "Operation"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.globalAddresses.list":
+-
+-type GlobalAddressesListCall struct {
+-	s       *Service
+-	project string
+-	opt_    map[string]interface{}
+-}
+-
+-// List: Retrieves the list of global address resources.
+-func (r *GlobalAddressesService) List(project string) *GlobalAddressesListCall {
+-	c := &GlobalAddressesListCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	return c
+-}
+-
+-// Filter sets the optional parameter "filter": Filter expression for
+-// filtering listed resources.
+-func (c *GlobalAddressesListCall) Filter(filter string) *GlobalAddressesListCall {
+-	c.opt_["filter"] = filter
+-	return c
+-}
+-
+-// MaxResults sets the optional parameter "maxResults": Maximum count of
+-// results to be returned. Maximum value is 500 and default value is
+-// 500.
+-func (c *GlobalAddressesListCall) MaxResults(maxResults int64) *GlobalAddressesListCall {
+-	c.opt_["maxResults"] = maxResults
+-	return c
+-}
+-
+-// PageToken sets the optional parameter "pageToken": Tag returned by a
+-// previous list request truncated by maxResults. Used to continue a
+-// previous list request.
+-func (c *GlobalAddressesListCall) PageToken(pageToken string) *GlobalAddressesListCall {
+-	c.opt_["pageToken"] = pageToken
+-	return c
+-}
+-
+-func (c *GlobalAddressesListCall) Do() (*AddressList, error) {
+-	var body io.Reader = nil
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	if v, ok := c.opt_["filter"]; ok {
+-		params.Set("filter", fmt.Sprintf("%v", v))
+-	}
+-	if v, ok := c.opt_["maxResults"]; ok {
+-		params.Set("maxResults", fmt.Sprintf("%v", v))
+-	}
+-	if v, ok := c.opt_["pageToken"]; ok {
+-		params.Set("pageToken", fmt.Sprintf("%v", v))
+-	}
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/global/addresses")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("GET", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project": c.project,
+-	})
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *AddressList
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Retrieves the list of global address resources.",
+-	//   "httpMethod": "GET",
+-	//   "id": "compute.globalAddresses.list",
+-	//   "parameterOrder": [
+-	//     "project"
+-	//   ],
+-	//   "parameters": {
+-	//     "filter": {
+-	//       "description": "Optional. Filter expression for filtering listed resources.",
+-	//       "location": "query",
+-	//       "type": "string"
+-	//     },
+-	//     "maxResults": {
+-	//       "default": "500",
+-	//       "description": "Optional. Maximum count of results to be returned. Maximum value is 500 and default value is 500.",
+-	//       "format": "uint32",
+-	//       "location": "query",
+-	//       "maximum": "500",
+-	//       "minimum": "0",
+-	//       "type": "integer"
+-	//     },
+-	//     "pageToken": {
+-	//       "description": "Optional. Tag returned by a previous list request truncated by maxResults. Used to continue a previous list request.",
+-	//       "location": "query",
+-	//       "type": "string"
+-	//     },
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/global/addresses",
+-	//   "response": {
+-	//     "$ref": "AddressList"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute",
+-	//     "https://www.googleapis.com/auth/compute.readonly"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.globalForwardingRules.delete":
+-
+-type GlobalForwardingRulesDeleteCall struct {
+-	s              *Service
+-	project        string
+-	forwardingRule string
+-	opt_           map[string]interface{}
+-}
+-
+-// Delete: Deletes the specified ForwardingRule resource.
+-func (r *GlobalForwardingRulesService) Delete(project string, forwardingRule string) *GlobalForwardingRulesDeleteCall {
+-	c := &GlobalForwardingRulesDeleteCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.forwardingRule = forwardingRule
+-	return c
+-}
+-
+-func (c *GlobalForwardingRulesDeleteCall) Do() (*Operation, error) {
+-	var body io.Reader = nil
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/global/forwardingRules/{forwardingRule}")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("DELETE", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project":        c.project,
+-		"forwardingRule": c.forwardingRule,
+-	})
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *Operation
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Deletes the specified ForwardingRule resource.",
+-	//   "httpMethod": "DELETE",
+-	//   "id": "compute.globalForwardingRules.delete",
+-	//   "parameterOrder": [
+-	//     "project",
+-	//     "forwardingRule"
+-	//   ],
+-	//   "parameters": {
+-	//     "forwardingRule": {
+-	//       "description": "Name of the ForwardingRule resource to delete.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/global/forwardingRules/{forwardingRule}",
+-	//   "response": {
+-	//     "$ref": "Operation"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.globalForwardingRules.get":
+-
+-type GlobalForwardingRulesGetCall struct {
+-	s              *Service
+-	project        string
+-	forwardingRule string
+-	opt_           map[string]interface{}
+-}
+-
+-// Get: Returns the specified ForwardingRule resource.
+-func (r *GlobalForwardingRulesService) Get(project string, forwardingRule string) *GlobalForwardingRulesGetCall {
+-	c := &GlobalForwardingRulesGetCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.forwardingRule = forwardingRule
+-	return c
+-}
+-
+-func (c *GlobalForwardingRulesGetCall) Do() (*ForwardingRule, error) {
+-	var body io.Reader = nil
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/global/forwardingRules/{forwardingRule}")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("GET", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project":        c.project,
+-		"forwardingRule": c.forwardingRule,
+-	})
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *ForwardingRule
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Returns the specified ForwardingRule resource.",
+-	//   "httpMethod": "GET",
+-	//   "id": "compute.globalForwardingRules.get",
+-	//   "parameterOrder": [
+-	//     "project",
+-	//     "forwardingRule"
+-	//   ],
+-	//   "parameters": {
+-	//     "forwardingRule": {
+-	//       "description": "Name of the ForwardingRule resource to return.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/global/forwardingRules/{forwardingRule}",
+-	//   "response": {
+-	//     "$ref": "ForwardingRule"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute",
+-	//     "https://www.googleapis.com/auth/compute.readonly"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.globalForwardingRules.insert":
+-
+-type GlobalForwardingRulesInsertCall struct {
+-	s              *Service
+-	project        string
+-	forwardingrule *ForwardingRule
+-	opt_           map[string]interface{}
+-}
+-
+-// Insert: Creates a ForwardingRule resource in the specified project
+-// and region using the data included in the request.
+-func (r *GlobalForwardingRulesService) Insert(project string, forwardingrule *ForwardingRule) *GlobalForwardingRulesInsertCall {
+-	c := &GlobalForwardingRulesInsertCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.forwardingrule = forwardingrule
+-	return c
+-}
+-
+-func (c *GlobalForwardingRulesInsertCall) Do() (*Operation, error) {
+-	var body io.Reader = nil
+-	body, err := googleapi.WithoutDataWrapper.JSONReader(c.forwardingrule)
+-	if err != nil {
+-		return nil, err
+-	}
+-	ctype := "application/json"
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/global/forwardingRules")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("POST", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project": c.project,
+-	})
+-	req.Header.Set("Content-Type", ctype)
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *Operation
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Creates a ForwardingRule resource in the specified project and region using the data included in the request.",
+-	//   "httpMethod": "POST",
+-	//   "id": "compute.globalForwardingRules.insert",
+-	//   "parameterOrder": [
+-	//     "project"
+-	//   ],
+-	//   "parameters": {
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/global/forwardingRules",
+-	//   "request": {
+-	//     "$ref": "ForwardingRule"
+-	//   },
+-	//   "response": {
+-	//     "$ref": "Operation"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.globalForwardingRules.list":
+-
+-type GlobalForwardingRulesListCall struct {
+-	s       *Service
+-	project string
+-	opt_    map[string]interface{}
+-}
+-
+-// List: Retrieves the list of ForwardingRule resources available to the
+-// specified project.
+-func (r *GlobalForwardingRulesService) List(project string) *GlobalForwardingRulesListCall {
+-	c := &GlobalForwardingRulesListCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	return c
+-}
+-
+-// Filter sets the optional parameter "filter": Filter expression for
+-// filtering listed resources.
+-func (c *GlobalForwardingRulesListCall) Filter(filter string) *GlobalForwardingRulesListCall {
+-	c.opt_["filter"] = filter
+-	return c
+-}
+-
+-// MaxResults sets the optional parameter "maxResults": Maximum count of
+-// results to be returned. Maximum value is 500 and default value is
+-// 500.
+-func (c *GlobalForwardingRulesListCall) MaxResults(maxResults int64) *GlobalForwardingRulesListCall {
+-	c.opt_["maxResults"] = maxResults
+-	return c
+-}
+-
+-// PageToken sets the optional parameter "pageToken": Tag returned by a
+-// previous list request truncated by maxResults. Used to continue a
+-// previous list request.
+-func (c *GlobalForwardingRulesListCall) PageToken(pageToken string) *GlobalForwardingRulesListCall {
+-	c.opt_["pageToken"] = pageToken
+-	return c
+-}
+-
+-func (c *GlobalForwardingRulesListCall) Do() (*ForwardingRuleList, error) {
+-	var body io.Reader = nil
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	if v, ok := c.opt_["filter"]; ok {
+-		params.Set("filter", fmt.Sprintf("%v", v))
+-	}
+-	if v, ok := c.opt_["maxResults"]; ok {
+-		params.Set("maxResults", fmt.Sprintf("%v", v))
+-	}
+-	if v, ok := c.opt_["pageToken"]; ok {
+-		params.Set("pageToken", fmt.Sprintf("%v", v))
+-	}
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/global/forwardingRules")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("GET", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project": c.project,
+-	})
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *ForwardingRuleList
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Retrieves the list of ForwardingRule resources available to the specified project.",
+-	//   "httpMethod": "GET",
+-	//   "id": "compute.globalForwardingRules.list",
+-	//   "parameterOrder": [
+-	//     "project"
+-	//   ],
+-	//   "parameters": {
+-	//     "filter": {
+-	//       "description": "Optional. Filter expression for filtering listed resources.",
+-	//       "location": "query",
+-	//       "type": "string"
+-	//     },
+-	//     "maxResults": {
+-	//       "default": "500",
+-	//       "description": "Optional. Maximum count of results to be returned. Maximum value is 500 and default value is 500.",
+-	//       "format": "uint32",
+-	//       "location": "query",
+-	//       "maximum": "500",
+-	//       "minimum": "0",
+-	//       "type": "integer"
+-	//     },
+-	//     "pageToken": {
+-	//       "description": "Optional. Tag returned by a previous list request truncated by maxResults. Used to continue a previous list request.",
+-	//       "location": "query",
+-	//       "type": "string"
+-	//     },
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/global/forwardingRules",
+-	//   "response": {
+-	//     "$ref": "ForwardingRuleList"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute",
+-	//     "https://www.googleapis.com/auth/compute.readonly"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.globalForwardingRules.setTarget":
+-
+-type GlobalForwardingRulesSetTargetCall struct {
+-	s               *Service
+-	project         string
+-	forwardingRule  string
+-	targetreference *TargetReference
+-	opt_            map[string]interface{}
+-}
+-
+-// SetTarget: Changes target url for forwarding rule.
+-func (r *GlobalForwardingRulesService) SetTarget(project string, forwardingRule string, targetreference *TargetReference) *GlobalForwardingRulesSetTargetCall {
+-	c := &GlobalForwardingRulesSetTargetCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.forwardingRule = forwardingRule
+-	c.targetreference = targetreference
+-	return c
+-}
+-
+-func (c *GlobalForwardingRulesSetTargetCall) Do() (*Operation, error) {
+-	var body io.Reader = nil
+-	body, err := googleapi.WithoutDataWrapper.JSONReader(c.targetreference)
+-	if err != nil {
+-		return nil, err
+-	}
+-	ctype := "application/json"
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/global/forwardingRules/{forwardingRule}/setTarget")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("POST", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project":        c.project,
+-		"forwardingRule": c.forwardingRule,
+-	})
+-	req.Header.Set("Content-Type", ctype)
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *Operation
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Changes target url for forwarding rule.",
+-	//   "httpMethod": "POST",
+-	//   "id": "compute.globalForwardingRules.setTarget",
+-	//   "parameterOrder": [
+-	//     "project",
+-	//     "forwardingRule"
+-	//   ],
+-	//   "parameters": {
+-	//     "forwardingRule": {
+-	//       "description": "Name of the ForwardingRule resource in which target is to be set.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/global/forwardingRules/{forwardingRule}/setTarget",
+-	//   "request": {
+-	//     "$ref": "TargetReference"
+-	//   },
+-	//   "response": {
+-	//     "$ref": "Operation"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.globalOperations.aggregatedList":
+-
+-type GlobalOperationsAggregatedListCall struct {
+-	s       *Service
+-	project string
+-	opt_    map[string]interface{}
+-}
+-
+-// AggregatedList: Retrieves the list of all operations grouped by
+-// scope.
+-func (r *GlobalOperationsService) AggregatedList(project string) *GlobalOperationsAggregatedListCall {
+-	c := &GlobalOperationsAggregatedListCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	return c
+-}
+-
+-// Filter sets the optional parameter "filter": Filter expression for
+-// filtering listed resources.
+-func (c *GlobalOperationsAggregatedListCall) Filter(filter string) *GlobalOperationsAggregatedListCall {
+-	c.opt_["filter"] = filter
+-	return c
+-}
+-
+-// MaxResults sets the optional parameter "maxResults": Maximum count of
+-// results to be returned. Maximum value is 500 and default value is
+-// 500.
+-func (c *GlobalOperationsAggregatedListCall) MaxResults(maxResults int64) *GlobalOperationsAggregatedListCall {
+-	c.opt_["maxResults"] = maxResults
+-	return c
+-}
+-
+-// PageToken sets the optional parameter "pageToken": Tag returned by a
+-// previous list request truncated by maxResults. Used to continue a
+-// previous list request.
+-func (c *GlobalOperationsAggregatedListCall) PageToken(pageToken string) *GlobalOperationsAggregatedListCall {
+-	c.opt_["pageToken"] = pageToken
+-	return c
+-}
+-
+-func (c *GlobalOperationsAggregatedListCall) Do() (*OperationAggregatedList, error) {
+-	var body io.Reader = nil
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	if v, ok := c.opt_["filter"]; ok {
+-		params.Set("filter", fmt.Sprintf("%v", v))
+-	}
+-	if v, ok := c.opt_["maxResults"]; ok {
+-		params.Set("maxResults", fmt.Sprintf("%v", v))
+-	}
+-	if v, ok := c.opt_["pageToken"]; ok {
+-		params.Set("pageToken", fmt.Sprintf("%v", v))
+-	}
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/aggregated/operations")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("GET", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project": c.project,
+-	})
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *OperationAggregatedList
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Retrieves the list of all operations grouped by scope.",
+-	//   "httpMethod": "GET",
+-	//   "id": "compute.globalOperations.aggregatedList",
+-	//   "parameterOrder": [
+-	//     "project"
+-	//   ],
+-	//   "parameters": {
+-	//     "filter": {
+-	//       "description": "Optional. Filter expression for filtering listed resources.",
+-	//       "location": "query",
+-	//       "type": "string"
+-	//     },
+-	//     "maxResults": {
+-	//       "default": "500",
+-	//       "description": "Optional. Maximum count of results to be returned. Maximum value is 500 and default value is 500.",
+-	//       "format": "uint32",
+-	//       "location": "query",
+-	//       "maximum": "500",
+-	//       "minimum": "0",
+-	//       "type": "integer"
+-	//     },
+-	//     "pageToken": {
+-	//       "description": "Optional. Tag returned by a previous list request truncated by maxResults. Used to continue a previous list request.",
+-	//       "location": "query",
+-	//       "type": "string"
+-	//     },
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/aggregated/operations",
+-	//   "response": {
+-	//     "$ref": "OperationAggregatedList"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute",
+-	//     "https://www.googleapis.com/auth/compute.readonly"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.globalOperations.delete":
+-
+-type GlobalOperationsDeleteCall struct {
+-	s         *Service
+-	project   string
+-	operation string
+-	opt_      map[string]interface{}
+-}
+-
+-// Delete: Deletes the specified operation resource.
+-func (r *GlobalOperationsService) Delete(project string, operation string) *GlobalOperationsDeleteCall {
+-	c := &GlobalOperationsDeleteCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.operation = operation
+-	return c
+-}
+-
+-func (c *GlobalOperationsDeleteCall) Do() error {
+-	var body io.Reader = nil
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/global/operations/{operation}")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("DELETE", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project":   c.project,
+-		"operation": c.operation,
+-	})
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return err
+-	}
+-	return nil
+-	// {
+-	//   "description": "Deletes the specified operation resource.",
+-	//   "httpMethod": "DELETE",
+-	//   "id": "compute.globalOperations.delete",
+-	//   "parameterOrder": [
+-	//     "project",
+-	//     "operation"
+-	//   ],
+-	//   "parameters": {
+-	//     "operation": {
+-	//       "description": "Name of the operation resource to delete.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/global/operations/{operation}",
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.globalOperations.get":
+-
+-type GlobalOperationsGetCall struct {
+-	s         *Service
+-	project   string
+-	operation string
+-	opt_      map[string]interface{}
+-}
+-
+-// Get: Retrieves the specified operation resource.
+-func (r *GlobalOperationsService) Get(project string, operation string) *GlobalOperationsGetCall {
+-	c := &GlobalOperationsGetCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.operation = operation
+-	return c
+-}
+-
+-func (c *GlobalOperationsGetCall) Do() (*Operation, error) {
+-	var body io.Reader = nil
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/global/operations/{operation}")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("GET", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project":   c.project,
+-		"operation": c.operation,
+-	})
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *Operation
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Retrieves the specified operation resource.",
+-	//   "httpMethod": "GET",
+-	//   "id": "compute.globalOperations.get",
+-	//   "parameterOrder": [
+-	//     "project",
+-	//     "operation"
+-	//   ],
+-	//   "parameters": {
+-	//     "operation": {
+-	//       "description": "Name of the operation resource to return.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/global/operations/{operation}",
+-	//   "response": {
+-	//     "$ref": "Operation"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute",
+-	//     "https://www.googleapis.com/auth/compute.readonly"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.globalOperations.list":
+-
+-type GlobalOperationsListCall struct {
+-	s       *Service
+-	project string
+-	opt_    map[string]interface{}
+-}
+-
+-// List: Retrieves the list of operation resources contained within the
+-// specified project.
+-func (r *GlobalOperationsService) List(project string) *GlobalOperationsListCall {
+-	c := &GlobalOperationsListCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	return c
+-}
+-
+-// Filter sets the optional parameter "filter": Filter expression for
+-// filtering listed resources.
+-func (c *GlobalOperationsListCall) Filter(filter string) *GlobalOperationsListCall {
+-	c.opt_["filter"] = filter
+-	return c
+-}
+-
+-// MaxResults sets the optional parameter "maxResults": Maximum count of
+-// results to be returned. Maximum value is 500 and default value is
+-// 500.
+-func (c *GlobalOperationsListCall) MaxResults(maxResults int64) *GlobalOperationsListCall {
+-	c.opt_["maxResults"] = maxResults
+-	return c
+-}
+-
+-// PageToken sets the optional parameter "pageToken": Tag returned by a
+-// previous list request truncated by maxResults. Used to continue a
+-// previous list request.
+-func (c *GlobalOperationsListCall) PageToken(pageToken string) *GlobalOperationsListCall {
+-	c.opt_["pageToken"] = pageToken
+-	return c
+-}
+-
+-func (c *GlobalOperationsListCall) Do() (*OperationList, error) {
+-	var body io.Reader = nil
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	if v, ok := c.opt_["filter"]; ok {
+-		params.Set("filter", fmt.Sprintf("%v", v))
+-	}
+-	if v, ok := c.opt_["maxResults"]; ok {
+-		params.Set("maxResults", fmt.Sprintf("%v", v))
+-	}
+-	if v, ok := c.opt_["pageToken"]; ok {
+-		params.Set("pageToken", fmt.Sprintf("%v", v))
+-	}
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/global/operations")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("GET", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project": c.project,
+-	})
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *OperationList
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Retrieves the list of operation resources contained within the specified project.",
+-	//   "httpMethod": "GET",
+-	//   "id": "compute.globalOperations.list",
+-	//   "parameterOrder": [
+-	//     "project"
+-	//   ],
+-	//   "parameters": {
+-	//     "filter": {
+-	//       "description": "Optional. Filter expression for filtering listed resources.",
+-	//       "location": "query",
+-	//       "type": "string"
+-	//     },
+-	//     "maxResults": {
+-	//       "default": "500",
+-	//       "description": "Optional. Maximum count of results to be returned. Maximum value is 500 and default value is 500.",
+-	//       "format": "uint32",
+-	//       "location": "query",
+-	//       "maximum": "500",
+-	//       "minimum": "0",
+-	//       "type": "integer"
+-	//     },
+-	//     "pageToken": {
+-	//       "description": "Optional. Tag returned by a previous list request truncated by maxResults. Used to continue a previous list request.",
+-	//       "location": "query",
+-	//       "type": "string"
+-	//     },
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/global/operations",
+-	//   "response": {
+-	//     "$ref": "OperationList"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute",
+-	//     "https://www.googleapis.com/auth/compute.readonly"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.httpHealthChecks.delete":
+-
+-type HttpHealthChecksDeleteCall struct {
+-	s               *Service
+-	project         string
+-	httpHealthCheck string
+-	opt_            map[string]interface{}
+-}
+-
+-// Delete: Deletes the specified HttpHealthCheck resource.
+-func (r *HttpHealthChecksService) Delete(project string, httpHealthCheck string) *HttpHealthChecksDeleteCall {
+-	c := &HttpHealthChecksDeleteCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.httpHealthCheck = httpHealthCheck
+-	return c
+-}
+-
+-func (c *HttpHealthChecksDeleteCall) Do() (*Operation, error) {
+-	var body io.Reader = nil
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/global/httpHealthChecks/{httpHealthCheck}")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("DELETE", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project":         c.project,
+-		"httpHealthCheck": c.httpHealthCheck,
+-	})
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *Operation
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Deletes the specified HttpHealthCheck resource.",
+-	//   "httpMethod": "DELETE",
+-	//   "id": "compute.httpHealthChecks.delete",
+-	//   "parameterOrder": [
+-	//     "project",
+-	//     "httpHealthCheck"
+-	//   ],
+-	//   "parameters": {
+-	//     "httpHealthCheck": {
+-	//       "description": "Name of the HttpHealthCheck resource to delete.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/global/httpHealthChecks/{httpHealthCheck}",
+-	//   "response": {
+-	//     "$ref": "Operation"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.httpHealthChecks.get":
+-
+-type HttpHealthChecksGetCall struct {
+-	s               *Service
+-	project         string
+-	httpHealthCheck string
+-	opt_            map[string]interface{}
+-}
+-
+-// Get: Returns the specified HttpHealthCheck resource.
+-func (r *HttpHealthChecksService) Get(project string, httpHealthCheck string) *HttpHealthChecksGetCall {
+-	c := &HttpHealthChecksGetCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.httpHealthCheck = httpHealthCheck
+-	return c
+-}
+-
+-func (c *HttpHealthChecksGetCall) Do() (*HttpHealthCheck, error) {
+-	var body io.Reader = nil
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/global/httpHealthChecks/{httpHealthCheck}")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("GET", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project":         c.project,
+-		"httpHealthCheck": c.httpHealthCheck,
+-	})
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *HttpHealthCheck
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Returns the specified HttpHealthCheck resource.",
+-	//   "httpMethod": "GET",
+-	//   "id": "compute.httpHealthChecks.get",
+-	//   "parameterOrder": [
+-	//     "project",
+-	//     "httpHealthCheck"
+-	//   ],
+-	//   "parameters": {
+-	//     "httpHealthCheck": {
+-	//       "description": "Name of the HttpHealthCheck resource to return.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/global/httpHealthChecks/{httpHealthCheck}",
+-	//   "response": {
+-	//     "$ref": "HttpHealthCheck"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute",
+-	//     "https://www.googleapis.com/auth/compute.readonly"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.httpHealthChecks.insert":
+-
+-type HttpHealthChecksInsertCall struct {
+-	s               *Service
+-	project         string
+-	httphealthcheck *HttpHealthCheck
+-	opt_            map[string]interface{}
+-}
+-
+-// Insert: Creates a HttpHealthCheck resource in the specified project
+-// using the data included in the request.
+-func (r *HttpHealthChecksService) Insert(project string, httphealthcheck *HttpHealthCheck) *HttpHealthChecksInsertCall {
+-	c := &HttpHealthChecksInsertCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.httphealthcheck = httphealthcheck
+-	return c
+-}
+-
+-func (c *HttpHealthChecksInsertCall) Do() (*Operation, error) {
+-	var body io.Reader = nil
+-	body, err := googleapi.WithoutDataWrapper.JSONReader(c.httphealthcheck)
+-	if err != nil {
+-		return nil, err
+-	}
+-	ctype := "application/json"
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/global/httpHealthChecks")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("POST", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project": c.project,
+-	})
+-	req.Header.Set("Content-Type", ctype)
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *Operation
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Creates a HttpHealthCheck resource in the specified project using the data included in the request.",
+-	//   "httpMethod": "POST",
+-	//   "id": "compute.httpHealthChecks.insert",
+-	//   "parameterOrder": [
+-	//     "project"
+-	//   ],
+-	//   "parameters": {
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/global/httpHealthChecks",
+-	//   "request": {
+-	//     "$ref": "HttpHealthCheck"
+-	//   },
+-	//   "response": {
+-	//     "$ref": "Operation"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.httpHealthChecks.list":
+-
+-type HttpHealthChecksListCall struct {
+-	s       *Service
+-	project string
+-	opt_    map[string]interface{}
+-}
+-
+-// List: Retrieves the list of HttpHealthCheck resources available to
+-// the specified project.
+-func (r *HttpHealthChecksService) List(project string) *HttpHealthChecksListCall {
+-	c := &HttpHealthChecksListCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	return c
+-}
+-
+-// Filter sets the optional parameter "filter": Filter expression for
+-// filtering listed resources.
+-func (c *HttpHealthChecksListCall) Filter(filter string) *HttpHealthChecksListCall {
+-	c.opt_["filter"] = filter
+-	return c
+-}
+-
+-// MaxResults sets the optional parameter "maxResults": Maximum count of
+-// results to be returned. Maximum value is 500 and default value is
+-// 500.
+-func (c *HttpHealthChecksListCall) MaxResults(maxResults int64) *HttpHealthChecksListCall {
+-	c.opt_["maxResults"] = maxResults
+-	return c
+-}
+-
+-// PageToken sets the optional parameter "pageToken": Tag returned by a
+-// previous list request truncated by maxResults. Used to continue a
+-// previous list request.
+-func (c *HttpHealthChecksListCall) PageToken(pageToken string) *HttpHealthChecksListCall {
+-	c.opt_["pageToken"] = pageToken
+-	return c
+-}
+-
+-func (c *HttpHealthChecksListCall) Do() (*HttpHealthCheckList, error) {
+-	var body io.Reader = nil
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	if v, ok := c.opt_["filter"]; ok {
+-		params.Set("filter", fmt.Sprintf("%v", v))
+-	}
+-	if v, ok := c.opt_["maxResults"]; ok {
+-		params.Set("maxResults", fmt.Sprintf("%v", v))
+-	}
+-	if v, ok := c.opt_["pageToken"]; ok {
+-		params.Set("pageToken", fmt.Sprintf("%v", v))
+-	}
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/global/httpHealthChecks")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("GET", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project": c.project,
+-	})
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *HttpHealthCheckList
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Retrieves the list of HttpHealthCheck resources available to the specified project.",
+-	//   "httpMethod": "GET",
+-	//   "id": "compute.httpHealthChecks.list",
+-	//   "parameterOrder": [
+-	//     "project"
+-	//   ],
+-	//   "parameters": {
+-	//     "filter": {
+-	//       "description": "Optional. Filter expression for filtering listed resources.",
+-	//       "location": "query",
+-	//       "type": "string"
+-	//     },
+-	//     "maxResults": {
+-	//       "default": "500",
+-	//       "description": "Optional. Maximum count of results to be returned. Maximum value is 500 and default value is 500.",
+-	//       "format": "uint32",
+-	//       "location": "query",
+-	//       "maximum": "500",
+-	//       "minimum": "0",
+-	//       "type": "integer"
+-	//     },
+-	//     "pageToken": {
+-	//       "description": "Optional. Tag returned by a previous list request truncated by maxResults. Used to continue a previous list request.",
+-	//       "location": "query",
+-	//       "type": "string"
+-	//     },
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/global/httpHealthChecks",
+-	//   "response": {
+-	//     "$ref": "HttpHealthCheckList"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute",
+-	//     "https://www.googleapis.com/auth/compute.readonly"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.httpHealthChecks.patch":
+-
+-type HttpHealthChecksPatchCall struct {
+-	s               *Service
+-	project         string
+-	httpHealthCheck string
+-	httphealthcheck *HttpHealthCheck
+-	opt_            map[string]interface{}
+-}
+-
+-// Patch: Updates a HttpHealthCheck resource in the specified project
+-// using the data included in the request. This method supports patch
+-// semantics.
+-func (r *HttpHealthChecksService) Patch(project string, httpHealthCheck string, httphealthcheck *HttpHealthCheck) *HttpHealthChecksPatchCall {
+-	c := &HttpHealthChecksPatchCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.httpHealthCheck = httpHealthCheck
+-	c.httphealthcheck = httphealthcheck
+-	return c
+-}
+-
+-func (c *HttpHealthChecksPatchCall) Do() (*Operation, error) {
+-	var body io.Reader = nil
+-	body, err := googleapi.WithoutDataWrapper.JSONReader(c.httphealthcheck)
+-	if err != nil {
+-		return nil, err
+-	}
+-	ctype := "application/json"
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/global/httpHealthChecks/{httpHealthCheck}")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("PATCH", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project":         c.project,
+-		"httpHealthCheck": c.httpHealthCheck,
+-	})
+-	req.Header.Set("Content-Type", ctype)
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *Operation
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Updates a HttpHealthCheck resource in the specified project using the data included in the request. This method supports patch semantics.",
+-	//   "httpMethod": "PATCH",
+-	//   "id": "compute.httpHealthChecks.patch",
+-	//   "parameterOrder": [
+-	//     "project",
+-	//     "httpHealthCheck"
+-	//   ],
+-	//   "parameters": {
+-	//     "httpHealthCheck": {
+-	//       "description": "Name of the HttpHealthCheck resource to update.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/global/httpHealthChecks/{httpHealthCheck}",
+-	//   "request": {
+-	//     "$ref": "HttpHealthCheck"
+-	//   },
+-	//   "response": {
+-	//     "$ref": "Operation"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.httpHealthChecks.update":
+-
+-type HttpHealthChecksUpdateCall struct {
+-	s               *Service
+-	project         string
+-	httpHealthCheck string
+-	httphealthcheck *HttpHealthCheck
+-	opt_            map[string]interface{}
+-}
+-
+-// Update: Updates a HttpHealthCheck resource in the specified project
+-// using the data included in the request.
+-func (r *HttpHealthChecksService) Update(project string, httpHealthCheck string, httphealthcheck *HttpHealthCheck) *HttpHealthChecksUpdateCall {
+-	c := &HttpHealthChecksUpdateCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.httpHealthCheck = httpHealthCheck
+-	c.httphealthcheck = httphealthcheck
+-	return c
+-}
+-
+-func (c *HttpHealthChecksUpdateCall) Do() (*Operation, error) {
+-	var body io.Reader = nil
+-	body, err := googleapi.WithoutDataWrapper.JSONReader(c.httphealthcheck)
+-	if err != nil {
+-		return nil, err
+-	}
+-	ctype := "application/json"
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/global/httpHealthChecks/{httpHealthCheck}")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("PUT", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project":         c.project,
+-		"httpHealthCheck": c.httpHealthCheck,
+-	})
+-	req.Header.Set("Content-Type", ctype)
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *Operation
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Updates a HttpHealthCheck resource in the specified project using the data included in the request.",
+-	//   "httpMethod": "PUT",
+-	//   "id": "compute.httpHealthChecks.update",
+-	//   "parameterOrder": [
+-	//     "project",
+-	//     "httpHealthCheck"
+-	//   ],
+-	//   "parameters": {
+-	//     "httpHealthCheck": {
+-	//       "description": "Name of the HttpHealthCheck resource to update.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/global/httpHealthChecks/{httpHealthCheck}",
+-	//   "request": {
+-	//     "$ref": "HttpHealthCheck"
+-	//   },
+-	//   "response": {
+-	//     "$ref": "Operation"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.images.delete":
+-
+-type ImagesDeleteCall struct {
+-	s       *Service
+-	project string
+-	image   string
+-	opt_    map[string]interface{}
+-}
+-
+-// Delete: Deletes the specified image resource.
+-func (r *ImagesService) Delete(project string, image string) *ImagesDeleteCall {
+-	c := &ImagesDeleteCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.image = image
+-	return c
+-}
+-
+-func (c *ImagesDeleteCall) Do() (*Operation, error) {
+-	var body io.Reader = nil
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/global/images/{image}")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("DELETE", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project": c.project,
+-		"image":   c.image,
+-	})
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *Operation
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Deletes the specified image resource.",
+-	//   "httpMethod": "DELETE",
+-	//   "id": "compute.images.delete",
+-	//   "parameterOrder": [
+-	//     "project",
+-	//     "image"
+-	//   ],
+-	//   "parameters": {
+-	//     "image": {
+-	//       "description": "Name of the image resource to delete.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/global/images/{image}",
+-	//   "response": {
+-	//     "$ref": "Operation"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.images.deprecate":
+-
+-type ImagesDeprecateCall struct {
+-	s                 *Service
+-	project           string
+-	image             string
+-	deprecationstatus *DeprecationStatus
+-	opt_              map[string]interface{}
+-}
+-
+-// Deprecate: Sets the deprecation status of an image. If no message
+-// body is given, clears the deprecation status instead.
+-func (r *ImagesService) Deprecate(project string, image string, deprecationstatus *DeprecationStatus) *ImagesDeprecateCall {
+-	c := &ImagesDeprecateCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.image = image
+-	c.deprecationstatus = deprecationstatus
+-	return c
+-}
+-
+-func (c *ImagesDeprecateCall) Do() (*Operation, error) {
+-	var body io.Reader = nil
+-	body, err := googleapi.WithoutDataWrapper.JSONReader(c.deprecationstatus)
+-	if err != nil {
+-		return nil, err
+-	}
+-	ctype := "application/json"
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/global/images/{image}/deprecate")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("POST", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project": c.project,
+-		"image":   c.image,
+-	})
+-	req.Header.Set("Content-Type", ctype)
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *Operation
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Sets the deprecation status of an image. If no message body is given, clears the deprecation status instead.",
+-	//   "httpMethod": "POST",
+-	//   "id": "compute.images.deprecate",
+-	//   "parameterOrder": [
+-	//     "project",
+-	//     "image"
+-	//   ],
+-	//   "parameters": {
+-	//     "image": {
+-	//       "description": "Image name.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/global/images/{image}/deprecate",
+-	//   "request": {
+-	//     "$ref": "DeprecationStatus"
+-	//   },
+-	//   "response": {
+-	//     "$ref": "Operation"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.images.get":
+-
+-type ImagesGetCall struct {
+-	s       *Service
+-	project string
+-	image   string
+-	opt_    map[string]interface{}
+-}
+-
+-// Get: Returns the specified image resource.
+-func (r *ImagesService) Get(project string, image string) *ImagesGetCall {
+-	c := &ImagesGetCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.image = image
+-	return c
+-}
+-
+-func (c *ImagesGetCall) Do() (*Image, error) {
+-	var body io.Reader = nil
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/global/images/{image}")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("GET", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project": c.project,
+-		"image":   c.image,
+-	})
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *Image
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Returns the specified image resource.",
+-	//   "httpMethod": "GET",
+-	//   "id": "compute.images.get",
+-	//   "parameterOrder": [
+-	//     "project",
+-	//     "image"
+-	//   ],
+-	//   "parameters": {
+-	//     "image": {
+-	//       "description": "Name of the image resource to return.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/global/images/{image}",
+-	//   "response": {
+-	//     "$ref": "Image"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute",
+-	//     "https://www.googleapis.com/auth/compute.readonly"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.images.insert":
+-
+-type ImagesInsertCall struct {
+-	s       *Service
+-	project string
+-	image   *Image
+-	opt_    map[string]interface{}
+-}
+-
+-// Insert: Creates an image resource in the specified project using the
+-// data included in the request.
+-func (r *ImagesService) Insert(project string, image *Image) *ImagesInsertCall {
+-	c := &ImagesInsertCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.image = image
+-	return c
+-}
+-
+-func (c *ImagesInsertCall) Do() (*Operation, error) {
+-	var body io.Reader = nil
+-	body, err := googleapi.WithoutDataWrapper.JSONReader(c.image)
+-	if err != nil {
+-		return nil, err
+-	}
+-	ctype := "application/json"
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/global/images")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("POST", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project": c.project,
+-	})
+-	req.Header.Set("Content-Type", ctype)
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *Operation
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Creates an image resource in the specified project using the data included in the request.",
+-	//   "httpMethod": "POST",
+-	//   "id": "compute.images.insert",
+-	//   "parameterOrder": [
+-	//     "project"
+-	//   ],
+-	//   "parameters": {
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/global/images",
+-	//   "request": {
+-	//     "$ref": "Image"
+-	//   },
+-	//   "response": {
+-	//     "$ref": "Operation"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute",
+-	//     "https://www.googleapis.com/auth/devstorage.full_control",
+-	//     "https://www.googleapis.com/auth/devstorage.read_only",
+-	//     "https://www.googleapis.com/auth/devstorage.read_write"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.images.list":
+-
+-type ImagesListCall struct {
+-	s       *Service
+-	project string
+-	opt_    map[string]interface{}
+-}
+-
+-// List: Retrieves the list of image resources available to the
+-// specified project.
+-func (r *ImagesService) List(project string) *ImagesListCall {
+-	c := &ImagesListCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	return c
+-}
+-
+-// Filter sets the optional parameter "filter": Filter expression for
+-// filtering listed resources.
+-func (c *ImagesListCall) Filter(filter string) *ImagesListCall {
+-	c.opt_["filter"] = filter
+-	return c
+-}
+-
+-// MaxResults sets the optional parameter "maxResults": Maximum count of
+-// results to be returned. Maximum value is 500 and default value is
+-// 500.
+-func (c *ImagesListCall) MaxResults(maxResults int64) *ImagesListCall {
+-	c.opt_["maxResults"] = maxResults
+-	return c
+-}
+-
+-// PageToken sets the optional parameter "pageToken": Tag returned by a
+-// previous list request truncated by maxResults. Used to continue a
+-// previous list request.
+-func (c *ImagesListCall) PageToken(pageToken string) *ImagesListCall {
+-	c.opt_["pageToken"] = pageToken
+-	return c
+-}
+-
+-func (c *ImagesListCall) Do() (*ImageList, error) {
+-	var body io.Reader = nil
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	if v, ok := c.opt_["filter"]; ok {
+-		params.Set("filter", fmt.Sprintf("%v", v))
+-	}
+-	if v, ok := c.opt_["maxResults"]; ok {
+-		params.Set("maxResults", fmt.Sprintf("%v", v))
+-	}
+-	if v, ok := c.opt_["pageToken"]; ok {
+-		params.Set("pageToken", fmt.Sprintf("%v", v))
+-	}
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/global/images")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("GET", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project": c.project,
+-	})
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *ImageList
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Retrieves the list of image resources available to the specified project.",
+-	//   "httpMethod": "GET",
+-	//   "id": "compute.images.list",
+-	//   "parameterOrder": [
+-	//     "project"
+-	//   ],
+-	//   "parameters": {
+-	//     "filter": {
+-	//       "description": "Optional. Filter expression for filtering listed resources.",
+-	//       "location": "query",
+-	//       "type": "string"
+-	//     },
+-	//     "maxResults": {
+-	//       "default": "500",
+-	//       "description": "Optional. Maximum count of results to be returned. Maximum value is 500 and default value is 500.",
+-	//       "format": "uint32",
+-	//       "location": "query",
+-	//       "maximum": "500",
+-	//       "minimum": "0",
+-	//       "type": "integer"
+-	//     },
+-	//     "pageToken": {
+-	//       "description": "Optional. Tag returned by a previous list request truncated by maxResults. Used to continue a previous list request.",
+-	//       "location": "query",
+-	//       "type": "string"
+-	//     },
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/global/images",
+-	//   "response": {
+-	//     "$ref": "ImageList"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute",
+-	//     "https://www.googleapis.com/auth/compute.readonly"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.instances.addAccessConfig":
+-
+-type InstancesAddAccessConfigCall struct {
+-	s                *Service
+-	project          string
+-	zone             string
+-	instance         string
+-	networkInterface string
+-	accessconfig     *AccessConfig
+-	opt_             map[string]interface{}
+-}
+-
+-// AddAccessConfig: Adds an access config to an instance's network
+-// interface.
+-func (r *InstancesService) AddAccessConfig(project string, zone string, instance string, networkInterface string, accessconfig *AccessConfig) *InstancesAddAccessConfigCall {
+-	c := &InstancesAddAccessConfigCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.zone = zone
+-	c.instance = instance
+-	c.networkInterface = networkInterface
+-	c.accessconfig = accessconfig
+-	return c
+-}
+-
+-func (c *InstancesAddAccessConfigCall) Do() (*Operation, error) {
+-	var body io.Reader = nil
+-	body, err := googleapi.WithoutDataWrapper.JSONReader(c.accessconfig)
+-	if err != nil {
+-		return nil, err
+-	}
+-	ctype := "application/json"
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	params.Set("networkInterface", fmt.Sprintf("%v", c.networkInterface))
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/zones/{zone}/instances/{instance}/addAccessConfig")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("POST", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project":  c.project,
+-		"zone":     c.zone,
+-		"instance": c.instance,
+-	})
+-	req.Header.Set("Content-Type", ctype)
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *Operation
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Adds an access config to an instance's network interface.",
+-	//   "httpMethod": "POST",
+-	//   "id": "compute.instances.addAccessConfig",
+-	//   "parameterOrder": [
+-	//     "project",
+-	//     "zone",
+-	//     "instance",
+-	//     "networkInterface"
+-	//   ],
+-	//   "parameters": {
+-	//     "instance": {
+-	//       "description": "Instance name.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "networkInterface": {
+-	//       "description": "Network interface name.",
+-	//       "location": "query",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "project": {
+-	//       "description": "Project name.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "zone": {
+-	//       "description": "Name of the zone scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/zones/{zone}/instances/{instance}/addAccessConfig",
+-	//   "request": {
+-	//     "$ref": "AccessConfig"
+-	//   },
+-	//   "response": {
+-	//     "$ref": "Operation"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.instances.aggregatedList":
+-
+-type InstancesAggregatedListCall struct {
+-	s       *Service
+-	project string
+-	opt_    map[string]interface{}
+-}
+-
+-// AggregatedList:
+-func (r *InstancesService) AggregatedList(project string) *InstancesAggregatedListCall {
+-	c := &InstancesAggregatedListCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	return c
+-}
+-
+-// Filter sets the optional parameter "filter": Filter expression for
+-// filtering listed resources.
+-func (c *InstancesAggregatedListCall) Filter(filter string) *InstancesAggregatedListCall {
+-	c.opt_["filter"] = filter
+-	return c
+-}
+-
+-// MaxResults sets the optional parameter "maxResults": Maximum count of
+-// results to be returned. Maximum value is 500 and default value is
+-// 500.
+-func (c *InstancesAggregatedListCall) MaxResults(maxResults int64) *InstancesAggregatedListCall {
+-	c.opt_["maxResults"] = maxResults
+-	return c
+-}
+-
+-// PageToken sets the optional parameter "pageToken": Tag returned by a
+-// previous list request truncated by maxResults. Used to continue a
+-// previous list request.
+-func (c *InstancesAggregatedListCall) PageToken(pageToken string) *InstancesAggregatedListCall {
+-	c.opt_["pageToken"] = pageToken
+-	return c
+-}
+-
+-func (c *InstancesAggregatedListCall) Do() (*InstanceAggregatedList, error) {
+-	var body io.Reader = nil
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	if v, ok := c.opt_["filter"]; ok {
+-		params.Set("filter", fmt.Sprintf("%v", v))
+-	}
+-	if v, ok := c.opt_["maxResults"]; ok {
+-		params.Set("maxResults", fmt.Sprintf("%v", v))
+-	}
+-	if v, ok := c.opt_["pageToken"]; ok {
+-		params.Set("pageToken", fmt.Sprintf("%v", v))
+-	}
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/aggregated/instances")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("GET", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project": c.project,
+-	})
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *InstanceAggregatedList
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "httpMethod": "GET",
+-	//   "id": "compute.instances.aggregatedList",
+-	//   "parameterOrder": [
+-	//     "project"
+-	//   ],
+-	//   "parameters": {
+-	//     "filter": {
+-	//       "description": "Optional. Filter expression for filtering listed resources.",
+-	//       "location": "query",
+-	//       "type": "string"
+-	//     },
+-	//     "maxResults": {
+-	//       "default": "500",
+-	//       "description": "Optional. Maximum count of results to be returned. Maximum value is 500 and default value is 500.",
+-	//       "format": "uint32",
+-	//       "location": "query",
+-	//       "maximum": "500",
+-	//       "minimum": "0",
+-	//       "type": "integer"
+-	//     },
+-	//     "pageToken": {
+-	//       "description": "Optional. Tag returned by a previous list request truncated by maxResults. Used to continue a previous list request.",
+-	//       "location": "query",
+-	//       "type": "string"
+-	//     },
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/aggregated/instances",
+-	//   "response": {
+-	//     "$ref": "InstanceAggregatedList"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute",
+-	//     "https://www.googleapis.com/auth/compute.readonly"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.instances.attachDisk":
+-
+-type InstancesAttachDiskCall struct {
+-	s            *Service
+-	project      string
+-	zone         string
+-	instance     string
+-	attacheddisk *AttachedDisk
+-	opt_         map[string]interface{}
+-}
+-
+-// AttachDisk: Attaches a disk resource to an instance.
+-func (r *InstancesService) AttachDisk(project string, zone string, instance string, attacheddisk *AttachedDisk) *InstancesAttachDiskCall {
+-	c := &InstancesAttachDiskCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.zone = zone
+-	c.instance = instance
+-	c.attacheddisk = attacheddisk
+-	return c
+-}
+-
+-func (c *InstancesAttachDiskCall) Do() (*Operation, error) {
+-	var body io.Reader = nil
+-	body, err := googleapi.WithoutDataWrapper.JSONReader(c.attacheddisk)
+-	if err != nil {
+-		return nil, err
+-	}
+-	ctype := "application/json"
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/zones/{zone}/instances/{instance}/attachDisk")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("POST", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project":  c.project,
+-		"zone":     c.zone,
+-		"instance": c.instance,
+-	})
+-	req.Header.Set("Content-Type", ctype)
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *Operation
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Attaches a disk resource to an instance.",
+-	//   "httpMethod": "POST",
+-	//   "id": "compute.instances.attachDisk",
+-	//   "parameterOrder": [
+-	//     "project",
+-	//     "zone",
+-	//     "instance"
+-	//   ],
+-	//   "parameters": {
+-	//     "instance": {
+-	//       "description": "Instance name.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "project": {
+-	//       "description": "Project name.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "zone": {
+-	//       "description": "Name of the zone scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/zones/{zone}/instances/{instance}/attachDisk",
+-	//   "request": {
+-	//     "$ref": "AttachedDisk"
+-	//   },
+-	//   "response": {
+-	//     "$ref": "Operation"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.instances.delete":
+-
+-type InstancesDeleteCall struct {
+-	s        *Service
+-	project  string
+-	zone     string
+-	instance string
+-	opt_     map[string]interface{}
+-}
+-
+-// Delete: Deletes the specified instance resource.
+-func (r *InstancesService) Delete(project string, zone string, instance string) *InstancesDeleteCall {
+-	c := &InstancesDeleteCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.zone = zone
+-	c.instance = instance
+-	return c
+-}
+-
+-func (c *InstancesDeleteCall) Do() (*Operation, error) {
+-	var body io.Reader = nil
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/zones/{zone}/instances/{instance}")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("DELETE", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project":  c.project,
+-		"zone":     c.zone,
+-		"instance": c.instance,
+-	})
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *Operation
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Deletes the specified instance resource.",
+-	//   "httpMethod": "DELETE",
+-	//   "id": "compute.instances.delete",
+-	//   "parameterOrder": [
+-	//     "project",
+-	//     "zone",
+-	//     "instance"
+-	//   ],
+-	//   "parameters": {
+-	//     "instance": {
+-	//       "description": "Name of the instance resource to delete.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "zone": {
+-	//       "description": "Name of the zone scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/zones/{zone}/instances/{instance}",
+-	//   "response": {
+-	//     "$ref": "Operation"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.instances.deleteAccessConfig":
+-
+-type InstancesDeleteAccessConfigCall struct {
+-	s                *Service
+-	project          string
+-	zone             string
+-	instance         string
+-	accessConfig     string
+-	networkInterface string
+-	opt_             map[string]interface{}
+-}
+-
+-// DeleteAccessConfig: Deletes an access config from an instance's
+-// network interface.
+-func (r *InstancesService) DeleteAccessConfig(project string, zone string, instance string, accessConfig string, networkInterface string) *InstancesDeleteAccessConfigCall {
+-	c := &InstancesDeleteAccessConfigCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.zone = zone
+-	c.instance = instance
+-	c.accessConfig = accessConfig
+-	c.networkInterface = networkInterface
+-	return c
+-}
+-
+-func (c *InstancesDeleteAccessConfigCall) Do() (*Operation, error) {
+-	var body io.Reader = nil
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	params.Set("accessConfig", fmt.Sprintf("%v", c.accessConfig))
+-	params.Set("networkInterface", fmt.Sprintf("%v", c.networkInterface))
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/zones/{zone}/instances/{instance}/deleteAccessConfig")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("POST", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project":  c.project,
+-		"zone":     c.zone,
+-		"instance": c.instance,
+-	})
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *Operation
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Deletes an access config from an instance's network interface.",
+-	//   "httpMethod": "POST",
+-	//   "id": "compute.instances.deleteAccessConfig",
+-	//   "parameterOrder": [
+-	//     "project",
+-	//     "zone",
+-	//     "instance",
+-	//     "accessConfig",
+-	//     "networkInterface"
+-	//   ],
+-	//   "parameters": {
+-	//     "accessConfig": {
+-	//       "description": "Access config name.",
+-	//       "location": "query",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "instance": {
+-	//       "description": "Instance name.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "networkInterface": {
+-	//       "description": "Network interface name.",
+-	//       "location": "query",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "project": {
+-	//       "description": "Project name.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "zone": {
+-	//       "description": "Name of the zone scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/zones/{zone}/instances/{instance}/deleteAccessConfig",
+-	//   "response": {
+-	//     "$ref": "Operation"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.instances.detachDisk":
+-
+-type InstancesDetachDiskCall struct {
+-	s          *Service
+-	project    string
+-	zone       string
+-	instance   string
+-	deviceName string
+-	opt_       map[string]interface{}
+-}
+-
+-// DetachDisk: Detaches a disk from an instance.
+-func (r *InstancesService) DetachDisk(project string, zone string, instance string, deviceName string) *InstancesDetachDiskCall {
+-	c := &InstancesDetachDiskCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.zone = zone
+-	c.instance = instance
+-	c.deviceName = deviceName
+-	return c
+-}
+-
+-func (c *InstancesDetachDiskCall) Do() (*Operation, error) {
+-	var body io.Reader = nil
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	params.Set("deviceName", fmt.Sprintf("%v", c.deviceName))
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/zones/{zone}/instances/{instance}/detachDisk")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("POST", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project":  c.project,
+-		"zone":     c.zone,
+-		"instance": c.instance,
+-	})
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *Operation
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Detaches a disk from an instance.",
+-	//   "httpMethod": "POST",
+-	//   "id": "compute.instances.detachDisk",
+-	//   "parameterOrder": [
+-	//     "project",
+-	//     "zone",
+-	//     "instance",
+-	//     "deviceName"
+-	//   ],
+-	//   "parameters": {
+-	//     "deviceName": {
+-	//       "description": "Disk device name to detach.",
+-	//       "location": "query",
+-	//       "pattern": "\\w[\\w.-]{0,254}",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "instance": {
+-	//       "description": "Instance name.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "project": {
+-	//       "description": "Project name.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "zone": {
+-	//       "description": "Name of the zone scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/zones/{zone}/instances/{instance}/detachDisk",
+-	//   "response": {
+-	//     "$ref": "Operation"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.instances.get":
+-
+-type InstancesGetCall struct {
+-	s        *Service
+-	project  string
+-	zone     string
+-	instance string
+-	opt_     map[string]interface{}
+-}
+-
+-// Get: Returns the specified instance resource.
+-func (r *InstancesService) Get(project string, zone string, instance string) *InstancesGetCall {
+-	c := &InstancesGetCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.zone = zone
+-	c.instance = instance
+-	return c
+-}
+-
+-func (c *InstancesGetCall) Do() (*Instance, error) {
+-	var body io.Reader = nil
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/zones/{zone}/instances/{instance}")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("GET", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project":  c.project,
+-		"zone":     c.zone,
+-		"instance": c.instance,
+-	})
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *Instance
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Returns the specified instance resource.",
+-	//   "httpMethod": "GET",
+-	//   "id": "compute.instances.get",
+-	//   "parameterOrder": [
+-	//     "project",
+-	//     "zone",
+-	//     "instance"
+-	//   ],
+-	//   "parameters": {
+-	//     "instance": {
+-	//       "description": "Name of the instance resource to return.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "zone": {
+-	//       "description": "Name of the zone scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/zones/{zone}/instances/{instance}",
+-	//   "response": {
+-	//     "$ref": "Instance"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute",
+-	//     "https://www.googleapis.com/auth/compute.readonly"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.instances.getSerialPortOutput":
+-
+-type InstancesGetSerialPortOutputCall struct {
+-	s        *Service
+-	project  string
+-	zone     string
+-	instance string
+-	opt_     map[string]interface{}
+-}
+-
+-// GetSerialPortOutput: Returns the specified instance's serial port
+-// output.
+-func (r *InstancesService) GetSerialPortOutput(project string, zone string, instance string) *InstancesGetSerialPortOutputCall {
+-	c := &InstancesGetSerialPortOutputCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.zone = zone
+-	c.instance = instance
+-	return c
+-}
+-
+-func (c *InstancesGetSerialPortOutputCall) Do() (*SerialPortOutput, error) {
+-	var body io.Reader = nil
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/zones/{zone}/instances/{instance}/serialPort")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("GET", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project":  c.project,
+-		"zone":     c.zone,
+-		"instance": c.instance,
+-	})
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *SerialPortOutput
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Returns the specified instance's serial port output.",
+-	//   "httpMethod": "GET",
+-	//   "id": "compute.instances.getSerialPortOutput",
+-	//   "parameterOrder": [
+-	//     "project",
+-	//     "zone",
+-	//     "instance"
+-	//   ],
+-	//   "parameters": {
+-	//     "instance": {
+-	//       "description": "Name of the instance scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "zone": {
+-	//       "description": "Name of the zone scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/zones/{zone}/instances/{instance}/serialPort",
+-	//   "response": {
+-	//     "$ref": "SerialPortOutput"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute",
+-	//     "https://www.googleapis.com/auth/compute.readonly"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.instances.insert":
+-
+-type InstancesInsertCall struct {
+-	s        *Service
+-	project  string
+-	zone     string
+-	instance *Instance
+-	opt_     map[string]interface{}
+-}
+-
+-// Insert: Creates an instance resource in the specified project using
+-// the data included in the request.
+-func (r *InstancesService) Insert(project string, zone string, instance *Instance) *InstancesInsertCall {
+-	c := &InstancesInsertCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.zone = zone
+-	c.instance = instance
+-	return c
+-}
+-
+-func (c *InstancesInsertCall) Do() (*Operation, error) {
+-	var body io.Reader = nil
+-	body, err := googleapi.WithoutDataWrapper.JSONReader(c.instance)
+-	if err != nil {
+-		return nil, err
+-	}
+-	ctype := "application/json"
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/zones/{zone}/instances")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("POST", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project": c.project,
+-		"zone":    c.zone,
+-	})
+-	req.Header.Set("Content-Type", ctype)
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *Operation
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Creates an instance resource in the specified project using the data included in the request.",
+-	//   "httpMethod": "POST",
+-	//   "id": "compute.instances.insert",
+-	//   "parameterOrder": [
+-	//     "project",
+-	//     "zone"
+-	//   ],
+-	//   "parameters": {
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "zone": {
+-	//       "description": "Name of the zone scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/zones/{zone}/instances",
+-	//   "request": {
+-	//     "$ref": "Instance"
+-	//   },
+-	//   "response": {
+-	//     "$ref": "Operation"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.instances.list":
+-
+-type InstancesListCall struct {
+-	s       *Service
+-	project string
+-	zone    string
+-	opt_    map[string]interface{}
+-}
+-
+-// List: Retrieves the list of instance resources contained within the
+-// specified zone.
+-func (r *InstancesService) List(project string, zone string) *InstancesListCall {
+-	c := &InstancesListCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.zone = zone
+-	return c
+-}
+-
+-// Filter sets the optional parameter "filter": Filter expression for
+-// filtering listed resources.
+-func (c *InstancesListCall) Filter(filter string) *InstancesListCall {
+-	c.opt_["filter"] = filter
+-	return c
+-}
+-
+-// MaxResults sets the optional parameter "maxResults": Maximum count of
+-// results to be returned. Maximum value is 500 and default value is
+-// 500.
+-func (c *InstancesListCall) MaxResults(maxResults int64) *InstancesListCall {
+-	c.opt_["maxResults"] = maxResults
+-	return c
+-}
+-
+-// PageToken sets the optional parameter "pageToken": Tag returned by a
+-// previous list request truncated by maxResults. Used to continue a
+-// previous list request.
+-func (c *InstancesListCall) PageToken(pageToken string) *InstancesListCall {
+-	c.opt_["pageToken"] = pageToken
+-	return c
+-}
+-
+-func (c *InstancesListCall) Do() (*InstanceList, error) {
+-	var body io.Reader = nil
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	if v, ok := c.opt_["filter"]; ok {
+-		params.Set("filter", fmt.Sprintf("%v", v))
+-	}
+-	if v, ok := c.opt_["maxResults"]; ok {
+-		params.Set("maxResults", fmt.Sprintf("%v", v))
+-	}
+-	if v, ok := c.opt_["pageToken"]; ok {
+-		params.Set("pageToken", fmt.Sprintf("%v", v))
+-	}
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/zones/{zone}/instances")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("GET", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project": c.project,
+-		"zone":    c.zone,
+-	})
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *InstanceList
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Retrieves the list of instance resources contained within the specified zone.",
+-	//   "httpMethod": "GET",
+-	//   "id": "compute.instances.list",
+-	//   "parameterOrder": [
+-	//     "project",
+-	//     "zone"
+-	//   ],
+-	//   "parameters": {
+-	//     "filter": {
+-	//       "description": "Optional. Filter expression for filtering listed resources.",
+-	//       "location": "query",
+-	//       "type": "string"
+-	//     },
+-	//     "maxResults": {
+-	//       "default": "500",
+-	//       "description": "Optional. Maximum count of results to be returned. Maximum value is 500 and default value is 500.",
+-	//       "format": "uint32",
+-	//       "location": "query",
+-	//       "maximum": "500",
+-	//       "minimum": "0",
+-	//       "type": "integer"
+-	//     },
+-	//     "pageToken": {
+-	//       "description": "Optional. Tag returned by a previous list request truncated by maxResults. Used to continue a previous list request.",
+-	//       "location": "query",
+-	//       "type": "string"
+-	//     },
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "zone": {
+-	//       "description": "Name of the zone scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/zones/{zone}/instances",
+-	//   "response": {
+-	//     "$ref": "InstanceList"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute",
+-	//     "https://www.googleapis.com/auth/compute.readonly"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.instances.reset":
+-
+-type InstancesResetCall struct {
+-	s        *Service
+-	project  string
+-	zone     string
+-	instance string
+-	opt_     map[string]interface{}
+-}
+-
+-// Reset: Performs a hard reset on the instance.
+-func (r *InstancesService) Reset(project string, zone string, instance string) *InstancesResetCall {
+-	c := &InstancesResetCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.zone = zone
+-	c.instance = instance
+-	return c
+-}
+-
+-func (c *InstancesResetCall) Do() (*Operation, error) {
+-	var body io.Reader = nil
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/zones/{zone}/instances/{instance}/reset")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("POST", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project":  c.project,
+-		"zone":     c.zone,
+-		"instance": c.instance,
+-	})
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *Operation
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Performs a hard reset on the instance.",
+-	//   "httpMethod": "POST",
+-	//   "id": "compute.instances.reset",
+-	//   "parameterOrder": [
+-	//     "project",
+-	//     "zone",
+-	//     "instance"
+-	//   ],
+-	//   "parameters": {
+-	//     "instance": {
+-	//       "description": "Name of the instance scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "zone": {
+-	//       "description": "Name of the zone scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/zones/{zone}/instances/{instance}/reset",
+-	//   "response": {
+-	//     "$ref": "Operation"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.instances.setDiskAutoDelete":
+-
+-type InstancesSetDiskAutoDeleteCall struct {
+-	s          *Service
+-	project    string
+-	zone       string
+-	instance   string
+-	autoDelete bool
+-	deviceName string
+-	opt_       map[string]interface{}
+-}
+-
+-// SetDiskAutoDelete: Sets the auto-delete flag for a disk attached to
+-// an instance
+-func (r *InstancesService) SetDiskAutoDelete(project string, zone string, instance string, autoDelete bool, deviceName string) *InstancesSetDiskAutoDeleteCall {
+-	c := &InstancesSetDiskAutoDeleteCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.zone = zone
+-	c.instance = instance
+-	c.autoDelete = autoDelete
+-	c.deviceName = deviceName
+-	return c
+-}
+-
+-func (c *InstancesSetDiskAutoDeleteCall) Do() (*Operation, error) {
+-	var body io.Reader = nil
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	params.Set("autoDelete", fmt.Sprintf("%v", c.autoDelete))
+-	params.Set("deviceName", fmt.Sprintf("%v", c.deviceName))
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/zones/{zone}/instances/{instance}/setDiskAutoDelete")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("POST", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project":  c.project,
+-		"zone":     c.zone,
+-		"instance": c.instance,
+-	})
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *Operation
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Sets the auto-delete flag for a disk attached to an instance",
+-	//   "httpMethod": "POST",
+-	//   "id": "compute.instances.setDiskAutoDelete",
+-	//   "parameterOrder": [
+-	//     "project",
+-	//     "zone",
+-	//     "instance",
+-	//     "autoDelete",
+-	//     "deviceName"
+-	//   ],
+-	//   "parameters": {
+-	//     "autoDelete": {
+-	//       "description": "Whether to auto-delete the disk when the instance is deleted.",
+-	//       "location": "query",
+-	//       "required": true,
+-	//       "type": "boolean"
+-	//     },
+-	//     "deviceName": {
+-	//       "description": "Disk device name to modify.",
+-	//       "location": "query",
+-	//       "pattern": "\\w[\\w.-]{0,254}",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "instance": {
+-	//       "description": "Instance name.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "project": {
+-	//       "description": "Project name.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "zone": {
+-	//       "description": "Name of the zone scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/zones/{zone}/instances/{instance}/setDiskAutoDelete",
+-	//   "response": {
+-	//     "$ref": "Operation"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.instances.setMetadata":
+-
+-type InstancesSetMetadataCall struct {
+-	s        *Service
+-	project  string
+-	zone     string
+-	instance string
+-	metadata *Metadata
+-	opt_     map[string]interface{}
+-}
+-
+-// SetMetadata: Sets metadata for the specified instance to the data
+-// included in the request.
+-func (r *InstancesService) SetMetadata(project string, zone string, instance string, metadata *Metadata) *InstancesSetMetadataCall {
+-	c := &InstancesSetMetadataCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.zone = zone
+-	c.instance = instance
+-	c.metadata = metadata
+-	return c
+-}
+-
+-func (c *InstancesSetMetadataCall) Do() (*Operation, error) {
+-	var body io.Reader = nil
+-	body, err := googleapi.WithoutDataWrapper.JSONReader(c.metadata)
+-	if err != nil {
+-		return nil, err
+-	}
+-	ctype := "application/json"
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/zones/{zone}/instances/{instance}/setMetadata")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("POST", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project":  c.project,
+-		"zone":     c.zone,
+-		"instance": c.instance,
+-	})
+-	req.Header.Set("Content-Type", ctype)
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *Operation
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Sets metadata for the specified instance to the data included in the request.",
+-	//   "httpMethod": "POST",
+-	//   "id": "compute.instances.setMetadata",
+-	//   "parameterOrder": [
+-	//     "project",
+-	//     "zone",
+-	//     "instance"
+-	//   ],
+-	//   "parameters": {
+-	//     "instance": {
+-	//       "description": "Name of the instance scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "zone": {
+-	//       "description": "Name of the zone scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/zones/{zone}/instances/{instance}/setMetadata",
+-	//   "request": {
+-	//     "$ref": "Metadata"
+-	//   },
+-	//   "response": {
+-	//     "$ref": "Operation"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.instances.setScheduling":
+-
+-type InstancesSetSchedulingCall struct {
+-	s          *Service
+-	project    string
+-	zone       string
+-	instance   string
+-	scheduling *Scheduling
+-	opt_       map[string]interface{}
+-}
+-
+-// SetScheduling: Sets an instance's scheduling options.
+-func (r *InstancesService) SetScheduling(project string, zone string, instance string, scheduling *Scheduling) *InstancesSetSchedulingCall {
+-	c := &InstancesSetSchedulingCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.zone = zone
+-	c.instance = instance
+-	c.scheduling = scheduling
+-	return c
+-}
+-
+-func (c *InstancesSetSchedulingCall) Do() (*Operation, error) {
+-	var body io.Reader = nil
+-	body, err := googleapi.WithoutDataWrapper.JSONReader(c.scheduling)
+-	if err != nil {
+-		return nil, err
+-	}
+-	ctype := "application/json"
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/zones/{zone}/instances/{instance}/setScheduling")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("POST", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project":  c.project,
+-		"zone":     c.zone,
+-		"instance": c.instance,
+-	})
+-	req.Header.Set("Content-Type", ctype)
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *Operation
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Sets an instance's scheduling options.",
+-	//   "httpMethod": "POST",
+-	//   "id": "compute.instances.setScheduling",
+-	//   "parameterOrder": [
+-	//     "project",
+-	//     "zone",
+-	//     "instance"
+-	//   ],
+-	//   "parameters": {
+-	//     "instance": {
+-	//       "description": "Instance name.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "project": {
+-	//       "description": "Project name.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "zone": {
+-	//       "description": "Name of the zone scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/zones/{zone}/instances/{instance}/setScheduling",
+-	//   "request": {
+-	//     "$ref": "Scheduling"
+-	//   },
+-	//   "response": {
+-	//     "$ref": "Operation"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.instances.setTags":
+-
+-type InstancesSetTagsCall struct {
+-	s        *Service
+-	project  string
+-	zone     string
+-	instance string
+-	tags     *Tags
+-	opt_     map[string]interface{}
+-}
+-
+-// SetTags: Sets tags for the specified instance to the data included in
+-// the request.
+-func (r *InstancesService) SetTags(project string, zone string, instance string, tags *Tags) *InstancesSetTagsCall {
+-	c := &InstancesSetTagsCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.zone = zone
+-	c.instance = instance
+-	c.tags = tags
+-	return c
+-}
+-
+-func (c *InstancesSetTagsCall) Do() (*Operation, error) {
+-	var body io.Reader = nil
+-	body, err := googleapi.WithoutDataWrapper.JSONReader(c.tags)
+-	if err != nil {
+-		return nil, err
+-	}
+-	ctype := "application/json"
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/zones/{zone}/instances/{instance}/setTags")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("POST", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project":  c.project,
+-		"zone":     c.zone,
+-		"instance": c.instance,
+-	})
+-	req.Header.Set("Content-Type", ctype)
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *Operation
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Sets tags for the specified instance to the data included in the request.",
+-	//   "httpMethod": "POST",
+-	//   "id": "compute.instances.setTags",
+-	//   "parameterOrder": [
+-	//     "project",
+-	//     "zone",
+-	//     "instance"
+-	//   ],
+-	//   "parameters": {
+-	//     "instance": {
+-	//       "description": "Name of the instance scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "zone": {
+-	//       "description": "Name of the zone scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/zones/{zone}/instances/{instance}/setTags",
+-	//   "request": {
+-	//     "$ref": "Tags"
+-	//   },
+-	//   "response": {
+-	//     "$ref": "Operation"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.licenses.get":
+-
+-type LicensesGetCall struct {
+-	s       *Service
+-	project string
+-	license string
+-	opt_    map[string]interface{}
+-}
+-
+-// Get: Returns the specified license resource.
+-func (r *LicensesService) Get(project string, license string) *LicensesGetCall {
+-	c := &LicensesGetCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.license = license
+-	return c
+-}
+-
+-func (c *LicensesGetCall) Do() (*License, error) {
+-	var body io.Reader = nil
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/global/licenses/{license}")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("GET", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project": c.project,
+-		"license": c.license,
+-	})
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *License
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Returns the specified license resource.",
+-	//   "httpMethod": "GET",
+-	//   "id": "compute.licenses.get",
+-	//   "parameterOrder": [
+-	//     "project",
+-	//     "license"
+-	//   ],
+-	//   "parameters": {
+-	//     "license": {
+-	//       "description": "Name of the license resource to return.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/global/licenses/{license}",
+-	//   "response": {
+-	//     "$ref": "License"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute",
+-	//     "https://www.googleapis.com/auth/compute.readonly"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.machineTypes.aggregatedList":
+-
+-type MachineTypesAggregatedListCall struct {
+-	s       *Service
+-	project string
+-	opt_    map[string]interface{}
+-}
+-
+-// AggregatedList: Retrieves the list of machine type resources grouped
+-// by scope.
+-func (r *MachineTypesService) AggregatedList(project string) *MachineTypesAggregatedListCall {
+-	c := &MachineTypesAggregatedListCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	return c
+-}
+-
+-// Filter sets the optional parameter "filter": Filter expression for
+-// filtering listed resources.
+-func (c *MachineTypesAggregatedListCall) Filter(filter string) *MachineTypesAggregatedListCall {
+-	c.opt_["filter"] = filter
+-	return c
+-}
+-
+-// MaxResults sets the optional parameter "maxResults": Maximum count of
+-// results to be returned. Maximum value is 500 and default value is
+-// 500.
+-func (c *MachineTypesAggregatedListCall) MaxResults(maxResults int64) *MachineTypesAggregatedListCall {
+-	c.opt_["maxResults"] = maxResults
+-	return c
+-}
+-
+-// PageToken sets the optional parameter "pageToken": Tag returned by a
+-// previous list request truncated by maxResults. Used to continue a
+-// previous list request.
+-func (c *MachineTypesAggregatedListCall) PageToken(pageToken string) *MachineTypesAggregatedListCall {
+-	c.opt_["pageToken"] = pageToken
+-	return c
+-}
+-
+-func (c *MachineTypesAggregatedListCall) Do() (*MachineTypeAggregatedList, error) {
+-	var body io.Reader = nil
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	if v, ok := c.opt_["filter"]; ok {
+-		params.Set("filter", fmt.Sprintf("%v", v))
+-	}
+-	if v, ok := c.opt_["maxResults"]; ok {
+-		params.Set("maxResults", fmt.Sprintf("%v", v))
+-	}
+-	if v, ok := c.opt_["pageToken"]; ok {
+-		params.Set("pageToken", fmt.Sprintf("%v", v))
+-	}
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/aggregated/machineTypes")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("GET", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project": c.project,
+-	})
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *MachineTypeAggregatedList
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Retrieves the list of machine type resources grouped by scope.",
+-	//   "httpMethod": "GET",
+-	//   "id": "compute.machineTypes.aggregatedList",
+-	//   "parameterOrder": [
+-	//     "project"
+-	//   ],
+-	//   "parameters": {
+-	//     "filter": {
+-	//       "description": "Optional. Filter expression for filtering listed resources.",
+-	//       "location": "query",
+-	//       "type": "string"
+-	//     },
+-	//     "maxResults": {
+-	//       "default": "500",
+-	//       "description": "Optional. Maximum count of results to be returned. Maximum value is 500 and default value is 500.",
+-	//       "format": "uint32",
+-	//       "location": "query",
+-	//       "maximum": "500",
+-	//       "minimum": "0",
+-	//       "type": "integer"
+-	//     },
+-	//     "pageToken": {
+-	//       "description": "Optional. Tag returned by a previous list request truncated by maxResults. Used to continue a previous list request.",
+-	//       "location": "query",
+-	//       "type": "string"
+-	//     },
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/aggregated/machineTypes",
+-	//   "response": {
+-	//     "$ref": "MachineTypeAggregatedList"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute",
+-	//     "https://www.googleapis.com/auth/compute.readonly"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.machineTypes.get":
+-
+-type MachineTypesGetCall struct {
+-	s           *Service
+-	project     string
+-	zone        string
+-	machineType string
+-	opt_        map[string]interface{}
+-}
+-
+-// Get: Returns the specified machine type resource.
+-func (r *MachineTypesService) Get(project string, zone string, machineType string) *MachineTypesGetCall {
+-	c := &MachineTypesGetCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.zone = zone
+-	c.machineType = machineType
+-	return c
+-}
+-
+-func (c *MachineTypesGetCall) Do() (*MachineType, error) {
+-	var body io.Reader = nil
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/zones/{zone}/machineTypes/{machineType}")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("GET", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project":     c.project,
+-		"zone":        c.zone,
+-		"machineType": c.machineType,
+-	})
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *MachineType
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Returns the specified machine type resource.",
+-	//   "httpMethod": "GET",
+-	//   "id": "compute.machineTypes.get",
+-	//   "parameterOrder": [
+-	//     "project",
+-	//     "zone",
+-	//     "machineType"
+-	//   ],
+-	//   "parameters": {
+-	//     "machineType": {
+-	//       "description": "Name of the machine type resource to return.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "zone": {
+-	//       "description": "Name of the zone scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/zones/{zone}/machineTypes/{machineType}",
+-	//   "response": {
+-	//     "$ref": "MachineType"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute",
+-	//     "https://www.googleapis.com/auth/compute.readonly"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.machineTypes.list":
+-
+-type MachineTypesListCall struct {
+-	s       *Service
+-	project string
+-	zone    string
+-	opt_    map[string]interface{}
+-}
+-
+-// List: Retrieves the list of machine type resources available to the
+-// specified project.
+-func (r *MachineTypesService) List(project string, zone string) *MachineTypesListCall {
+-	c := &MachineTypesListCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.zone = zone
+-	return c
+-}
+-
+-// Filter sets the optional parameter "filter": Filter expression for
+-// filtering listed resources.
+-func (c *MachineTypesListCall) Filter(filter string) *MachineTypesListCall {
+-	c.opt_["filter"] = filter
+-	return c
+-}
+-
+-// MaxResults sets the optional parameter "maxResults": Maximum count of
+-// results to be returned. Maximum value is 500 and default value is
+-// 500.
+-func (c *MachineTypesListCall) MaxResults(maxResults int64) *MachineTypesListCall {
+-	c.opt_["maxResults"] = maxResults
+-	return c
+-}
+-
+-// PageToken sets the optional parameter "pageToken": Tag returned by a
+-// previous list request truncated by maxResults. Used to continue a
+-// previous list request.
+-func (c *MachineTypesListCall) PageToken(pageToken string) *MachineTypesListCall {
+-	c.opt_["pageToken"] = pageToken
+-	return c
+-}
+-
+-func (c *MachineTypesListCall) Do() (*MachineTypeList, error) {
+-	var body io.Reader = nil
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	if v, ok := c.opt_["filter"]; ok {
+-		params.Set("filter", fmt.Sprintf("%v", v))
+-	}
+-	if v, ok := c.opt_["maxResults"]; ok {
+-		params.Set("maxResults", fmt.Sprintf("%v", v))
+-	}
+-	if v, ok := c.opt_["pageToken"]; ok {
+-		params.Set("pageToken", fmt.Sprintf("%v", v))
+-	}
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/zones/{zone}/machineTypes")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("GET", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project": c.project,
+-		"zone":    c.zone,
+-	})
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *MachineTypeList
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Retrieves the list of machine type resources available to the specified project.",
+-	//   "httpMethod": "GET",
+-	//   "id": "compute.machineTypes.list",
+-	//   "parameterOrder": [
+-	//     "project",
+-	//     "zone"
+-	//   ],
+-	//   "parameters": {
+-	//     "filter": {
+-	//       "description": "Optional. Filter expression for filtering listed resources.",
+-	//       "location": "query",
+-	//       "type": "string"
+-	//     },
+-	//     "maxResults": {
+-	//       "default": "500",
+-	//       "description": "Optional. Maximum count of results to be returned. Maximum value is 500 and default value is 500.",
+-	//       "format": "uint32",
+-	//       "location": "query",
+-	//       "maximum": "500",
+-	//       "minimum": "0",
+-	//       "type": "integer"
+-	//     },
+-	//     "pageToken": {
+-	//       "description": "Optional. Tag returned by a previous list request truncated by maxResults. Used to continue a previous list request.",
+-	//       "location": "query",
+-	//       "type": "string"
+-	//     },
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "zone": {
+-	//       "description": "Name of the zone scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/zones/{zone}/machineTypes",
+-	//   "response": {
+-	//     "$ref": "MachineTypeList"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute",
+-	//     "https://www.googleapis.com/auth/compute.readonly"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.networks.delete":
+-
+-type NetworksDeleteCall struct {
+-	s       *Service
+-	project string
+-	network string
+-	opt_    map[string]interface{}
+-}
+-
+-// Delete: Deletes the specified network resource.
+-func (r *NetworksService) Delete(project string, network string) *NetworksDeleteCall {
+-	c := &NetworksDeleteCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.network = network
+-	return c
+-}
+-
+-func (c *NetworksDeleteCall) Do() (*Operation, error) {
+-	var body io.Reader = nil
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/global/networks/{network}")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("DELETE", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project": c.project,
+-		"network": c.network,
+-	})
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *Operation
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Deletes the specified network resource.",
+-	//   "httpMethod": "DELETE",
+-	//   "id": "compute.networks.delete",
+-	//   "parameterOrder": [
+-	//     "project",
+-	//     "network"
+-	//   ],
+-	//   "parameters": {
+-	//     "network": {
+-	//       "description": "Name of the network resource to delete.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/global/networks/{network}",
+-	//   "response": {
+-	//     "$ref": "Operation"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.networks.get":
+-
+-type NetworksGetCall struct {
+-	s       *Service
+-	project string
+-	network string
+-	opt_    map[string]interface{}
+-}
+-
+-// Get: Returns the specified network resource.
+-func (r *NetworksService) Get(project string, network string) *NetworksGetCall {
+-	c := &NetworksGetCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.network = network
+-	return c
+-}
+-
+-func (c *NetworksGetCall) Do() (*Network, error) {
+-	var body io.Reader = nil
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/global/networks/{network}")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("GET", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project": c.project,
+-		"network": c.network,
+-	})
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *Network
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Returns the specified network resource.",
+-	//   "httpMethod": "GET",
+-	//   "id": "compute.networks.get",
+-	//   "parameterOrder": [
+-	//     "project",
+-	//     "network"
+-	//   ],
+-	//   "parameters": {
+-	//     "network": {
+-	//       "description": "Name of the network resource to return.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/global/networks/{network}",
+-	//   "response": {
+-	//     "$ref": "Network"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute",
+-	//     "https://www.googleapis.com/auth/compute.readonly"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.networks.insert":
+-
+-type NetworksInsertCall struct {
+-	s       *Service
+-	project string
+-	network *Network
+-	opt_    map[string]interface{}
+-}
+-
+-// Insert: Creates a network resource in the specified project using the
+-// data included in the request.
+-func (r *NetworksService) Insert(project string, network *Network) *NetworksInsertCall {
+-	c := &NetworksInsertCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.network = network
+-	return c
+-}
+-
+-func (c *NetworksInsertCall) Do() (*Operation, error) {
+-	var body io.Reader = nil
+-	body, err := googleapi.WithoutDataWrapper.JSONReader(c.network)
+-	if err != nil {
+-		return nil, err
+-	}
+-	ctype := "application/json"
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/global/networks")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("POST", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project": c.project,
+-	})
+-	req.Header.Set("Content-Type", ctype)
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *Operation
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Creates a network resource in the specified project using the data included in the request.",
+-	//   "httpMethod": "POST",
+-	//   "id": "compute.networks.insert",
+-	//   "parameterOrder": [
+-	//     "project"
+-	//   ],
+-	//   "parameters": {
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/global/networks",
+-	//   "request": {
+-	//     "$ref": "Network"
+-	//   },
+-	//   "response": {
+-	//     "$ref": "Operation"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.networks.list":
+-
+-type NetworksListCall struct {
+-	s       *Service
+-	project string
+-	opt_    map[string]interface{}
+-}
+-
+-// List: Retrieves the list of network resources available to the
+-// specified project.
+-func (r *NetworksService) List(project string) *NetworksListCall {
+-	c := &NetworksListCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	return c
+-}
+-
+-// Filter sets the optional parameter "filter": Filter expression for
+-// filtering listed resources.
+-func (c *NetworksListCall) Filter(filter string) *NetworksListCall {
+-	c.opt_["filter"] = filter
+-	return c
+-}
+-
+-// MaxResults sets the optional parameter "maxResults": Maximum count of
+-// results to be returned. Maximum value is 500 and default value is
+-// 500.
+-func (c *NetworksListCall) MaxResults(maxResults int64) *NetworksListCall {
+-	c.opt_["maxResults"] = maxResults
+-	return c
+-}
+-
+-// PageToken sets the optional parameter "pageToken": Tag returned by a
+-// previous list request truncated by maxResults. Used to continue a
+-// previous list request.
+-func (c *NetworksListCall) PageToken(pageToken string) *NetworksListCall {
+-	c.opt_["pageToken"] = pageToken
+-	return c
+-}
+-
+-func (c *NetworksListCall) Do() (*NetworkList, error) {
+-	var body io.Reader = nil
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	if v, ok := c.opt_["filter"]; ok {
+-		params.Set("filter", fmt.Sprintf("%v", v))
+-	}
+-	if v, ok := c.opt_["maxResults"]; ok {
+-		params.Set("maxResults", fmt.Sprintf("%v", v))
+-	}
+-	if v, ok := c.opt_["pageToken"]; ok {
+-		params.Set("pageToken", fmt.Sprintf("%v", v))
+-	}
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/global/networks")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("GET", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project": c.project,
+-	})
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *NetworkList
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Retrieves the list of network resources available to the specified project.",
+-	//   "httpMethod": "GET",
+-	//   "id": "compute.networks.list",
+-	//   "parameterOrder": [
+-	//     "project"
+-	//   ],
+-	//   "parameters": {
+-	//     "filter": {
+-	//       "description": "Optional. Filter expression for filtering listed resources.",
+-	//       "location": "query",
+-	//       "type": "string"
+-	//     },
+-	//     "maxResults": {
+-	//       "default": "500",
+-	//       "description": "Optional. Maximum count of results to be returned. Maximum value is 500 and default value is 500.",
+-	//       "format": "uint32",
+-	//       "location": "query",
+-	//       "maximum": "500",
+-	//       "minimum": "0",
+-	//       "type": "integer"
+-	//     },
+-	//     "pageToken": {
+-	//       "description": "Optional. Tag returned by a previous list request truncated by maxResults. Used to continue a previous list request.",
+-	//       "location": "query",
+-	//       "type": "string"
+-	//     },
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/global/networks",
+-	//   "response": {
+-	//     "$ref": "NetworkList"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute",
+-	//     "https://www.googleapis.com/auth/compute.readonly"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.projects.get":
+-
+-type ProjectsGetCall struct {
+-	s       *Service
+-	project string
+-	opt_    map[string]interface{}
+-}
+-
+-// Get: Returns the specified project resource.
+-func (r *ProjectsService) Get(project string) *ProjectsGetCall {
+-	c := &ProjectsGetCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	return c
+-}
+-
+-func (c *ProjectsGetCall) Do() (*Project, error) {
+-	var body io.Reader = nil
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("GET", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project": c.project,
+-	})
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *Project
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Returns the specified project resource.",
+-	//   "httpMethod": "GET",
+-	//   "id": "compute.projects.get",
+-	//   "parameterOrder": [
+-	//     "project"
+-	//   ],
+-	//   "parameters": {
+-	//     "project": {
+-	//       "description": "Name of the project resource to retrieve.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}",
+-	//   "response": {
+-	//     "$ref": "Project"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute",
+-	//     "https://www.googleapis.com/auth/compute.readonly"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.projects.setCommonInstanceMetadata":
+-
+-type ProjectsSetCommonInstanceMetadataCall struct {
+-	s        *Service
+-	project  string
+-	metadata *Metadata
+-	opt_     map[string]interface{}
+-}
+-
+-// SetCommonInstanceMetadata: Sets metadata common to all instances
+-// within the specified project using the data included in the request.
+-func (r *ProjectsService) SetCommonInstanceMetadata(project string, metadata *Metadata) *ProjectsSetCommonInstanceMetadataCall {
+-	c := &ProjectsSetCommonInstanceMetadataCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.metadata = metadata
+-	return c
+-}
+-
+-func (c *ProjectsSetCommonInstanceMetadataCall) Do() (*Operation, error) {
+-	var body io.Reader = nil
+-	body, err := googleapi.WithoutDataWrapper.JSONReader(c.metadata)
+-	if err != nil {
+-		return nil, err
+-	}
+-	ctype := "application/json"
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/setCommonInstanceMetadata")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("POST", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project": c.project,
+-	})
+-	req.Header.Set("Content-Type", ctype)
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *Operation
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Sets metadata common to all instances within the specified project using the data included in the request.",
+-	//   "httpMethod": "POST",
+-	//   "id": "compute.projects.setCommonInstanceMetadata",
+-	//   "parameterOrder": [
+-	//     "project"
+-	//   ],
+-	//   "parameters": {
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/setCommonInstanceMetadata",
+-	//   "request": {
+-	//     "$ref": "Metadata"
+-	//   },
+-	//   "response": {
+-	//     "$ref": "Operation"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.projects.setUsageExportBucket":
+-
+-type ProjectsSetUsageExportBucketCall struct {
+-	s                   *Service
+-	project             string
+-	usageexportlocation *UsageExportLocation
+-	opt_                map[string]interface{}
+-}
+-
+-// SetUsageExportBucket: Sets usage export location
+-func (r *ProjectsService) SetUsageExportBucket(project string, usageexportlocation *UsageExportLocation) *ProjectsSetUsageExportBucketCall {
+-	c := &ProjectsSetUsageExportBucketCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.usageexportlocation = usageexportlocation
+-	return c
+-}
+-
+-func (c *ProjectsSetUsageExportBucketCall) Do() (*Operation, error) {
+-	var body io.Reader = nil
+-	body, err := googleapi.WithoutDataWrapper.JSONReader(c.usageexportlocation)
+-	if err != nil {
+-		return nil, err
+-	}
+-	ctype := "application/json"
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/setUsageExportBucket")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("POST", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project": c.project,
+-	})
+-	req.Header.Set("Content-Type", ctype)
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *Operation
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Sets usage export location",
+-	//   "httpMethod": "POST",
+-	//   "id": "compute.projects.setUsageExportBucket",
+-	//   "parameterOrder": [
+-	//     "project"
+-	//   ],
+-	//   "parameters": {
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/setUsageExportBucket",
+-	//   "request": {
+-	//     "$ref": "UsageExportLocation"
+-	//   },
+-	//   "response": {
+-	//     "$ref": "Operation"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute",
+-	//     "https://www.googleapis.com/auth/devstorage.full_control",
+-	//     "https://www.googleapis.com/auth/devstorage.read_only",
+-	//     "https://www.googleapis.com/auth/devstorage.read_write"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.regionOperations.delete":
+-
+-type RegionOperationsDeleteCall struct {
+-	s         *Service
+-	project   string
+-	region    string
+-	operation string
+-	opt_      map[string]interface{}
+-}
+-
+-// Delete: Deletes the specified region-specific operation resource.
+-func (r *RegionOperationsService) Delete(project string, region string, operation string) *RegionOperationsDeleteCall {
+-	c := &RegionOperationsDeleteCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.region = region
+-	c.operation = operation
+-	return c
+-}
+-
+-func (c *RegionOperationsDeleteCall) Do() error {
+-	var body io.Reader = nil
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/regions/{region}/operations/{operation}")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("DELETE", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project":   c.project,
+-		"region":    c.region,
+-		"operation": c.operation,
+-	})
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return err
+-	}
+-	return nil
+-	// {
+-	//   "description": "Deletes the specified region-specific operation resource.",
+-	//   "httpMethod": "DELETE",
+-	//   "id": "compute.regionOperations.delete",
+-	//   "parameterOrder": [
+-	//     "project",
+-	//     "region",
+-	//     "operation"
+-	//   ],
+-	//   "parameters": {
+-	//     "operation": {
+-	//       "description": "Name of the operation resource to delete.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "region": {
+-	//       "description": "Name of the region scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/regions/{region}/operations/{operation}",
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.regionOperations.get":
+-
+-type RegionOperationsGetCall struct {
+-	s         *Service
+-	project   string
+-	region    string
+-	operation string
+-	opt_      map[string]interface{}
+-}
+-
+-// Get: Retrieves the specified region-specific operation resource.
+-func (r *RegionOperationsService) Get(project string, region string, operation string) *RegionOperationsGetCall {
+-	c := &RegionOperationsGetCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.region = region
+-	c.operation = operation
+-	return c
+-}
+-
+-func (c *RegionOperationsGetCall) Do() (*Operation, error) {
+-	var body io.Reader = nil
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/regions/{region}/operations/{operation}")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("GET", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project":   c.project,
+-		"region":    c.region,
+-		"operation": c.operation,
+-	})
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *Operation
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Retrieves the specified region-specific operation resource.",
+-	//   "httpMethod": "GET",
+-	//   "id": "compute.regionOperations.get",
+-	//   "parameterOrder": [
+-	//     "project",
+-	//     "region",
+-	//     "operation"
+-	//   ],
+-	//   "parameters": {
+-	//     "operation": {
+-	//       "description": "Name of the operation resource to return.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "region": {
+-	//       "description": "Name of the zone scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/regions/{region}/operations/{operation}",
+-	//   "response": {
+-	//     "$ref": "Operation"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute",
+-	//     "https://www.googleapis.com/auth/compute.readonly"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.regionOperations.list":
+-
+-type RegionOperationsListCall struct {
+-	s       *Service
+-	project string
+-	region  string
+-	opt_    map[string]interface{}
+-}
+-
+-// List: Retrieves the list of operation resources contained within the
+-// specified region.
+-func (r *RegionOperationsService) List(project string, region string) *RegionOperationsListCall {
+-	c := &RegionOperationsListCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.region = region
+-	return c
+-}
+-
+-// Filter sets the optional parameter "filter": Filter expression for
+-// filtering listed resources.
+-func (c *RegionOperationsListCall) Filter(filter string) *RegionOperationsListCall {
+-	c.opt_["filter"] = filter
+-	return c
+-}
+-
+-// MaxResults sets the optional parameter "maxResults": Maximum count of
+-// results to be returned. Maximum value is 500 and default value is
+-// 500.
+-func (c *RegionOperationsListCall) MaxResults(maxResults int64) *RegionOperationsListCall {
+-	c.opt_["maxResults"] = maxResults
+-	return c
+-}
+-
+-// PageToken sets the optional parameter "pageToken": Tag returned by a
+-// previous list request truncated by maxResults. Used to continue a
+-// previous list request.
+-func (c *RegionOperationsListCall) PageToken(pageToken string) *RegionOperationsListCall {
+-	c.opt_["pageToken"] = pageToken
+-	return c
+-}
+-
+-func (c *RegionOperationsListCall) Do() (*OperationList, error) {
+-	var body io.Reader = nil
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	if v, ok := c.opt_["filter"]; ok {
+-		params.Set("filter", fmt.Sprintf("%v", v))
+-	}
+-	if v, ok := c.opt_["maxResults"]; ok {
+-		params.Set("maxResults", fmt.Sprintf("%v", v))
+-	}
+-	if v, ok := c.opt_["pageToken"]; ok {
+-		params.Set("pageToken", fmt.Sprintf("%v", v))
+-	}
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/regions/{region}/operations")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("GET", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project": c.project,
+-		"region":  c.region,
+-	})
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *OperationList
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Retrieves the list of operation resources contained within the specified region.",
+-	//   "httpMethod": "GET",
+-	//   "id": "compute.regionOperations.list",
+-	//   "parameterOrder": [
+-	//     "project",
+-	//     "region"
+-	//   ],
+-	//   "parameters": {
+-	//     "filter": {
+-	//       "description": "Optional. Filter expression for filtering listed resources.",
+-	//       "location": "query",
+-	//       "type": "string"
+-	//     },
+-	//     "maxResults": {
+-	//       "default": "500",
+-	//       "description": "Optional. Maximum count of results to be returned. Maximum value is 500 and default value is 500.",
+-	//       "format": "uint32",
+-	//       "location": "query",
+-	//       "maximum": "500",
+-	//       "minimum": "0",
+-	//       "type": "integer"
+-	//     },
+-	//     "pageToken": {
+-	//       "description": "Optional. Tag returned by a previous list request truncated by maxResults. Used to continue a previous list request.",
+-	//       "location": "query",
+-	//       "type": "string"
+-	//     },
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "region": {
+-	//       "description": "Name of the region scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/regions/{region}/operations",
+-	//   "response": {
+-	//     "$ref": "OperationList"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute",
+-	//     "https://www.googleapis.com/auth/compute.readonly"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.regions.get":
+-
+-type RegionsGetCall struct {
+-	s       *Service
+-	project string
+-	region  string
+-	opt_    map[string]interface{}
+-}
+-
+-// Get: Returns the specified region resource.
+-func (r *RegionsService) Get(project string, region string) *RegionsGetCall {
+-	c := &RegionsGetCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.region = region
+-	return c
+-}
+-
+-func (c *RegionsGetCall) Do() (*Region, error) {
+-	var body io.Reader = nil
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/regions/{region}")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("GET", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project": c.project,
+-		"region":  c.region,
+-	})
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *Region
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Returns the specified region resource.",
+-	//   "httpMethod": "GET",
+-	//   "id": "compute.regions.get",
+-	//   "parameterOrder": [
+-	//     "project",
+-	//     "region"
+-	//   ],
+-	//   "parameters": {
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "region": {
+-	//       "description": "Name of the region resource to return.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/regions/{region}",
+-	//   "response": {
+-	//     "$ref": "Region"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute",
+-	//     "https://www.googleapis.com/auth/compute.readonly"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.regions.list":
+-
+-type RegionsListCall struct {
+-	s       *Service
+-	project string
+-	opt_    map[string]interface{}
+-}
+-
+-// List: Retrieves the list of region resources available to the
+-// specified project.
+-func (r *RegionsService) List(project string) *RegionsListCall {
+-	c := &RegionsListCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	return c
+-}
+-
+-// Filter sets the optional parameter "filter": Filter expression for
+-// filtering listed resources.
+-func (c *RegionsListCall) Filter(filter string) *RegionsListCall {
+-	c.opt_["filter"] = filter
+-	return c
+-}
+-
+-// MaxResults sets the optional parameter "maxResults": Maximum count of
+-// results to be returned. Maximum value is 500 and default value is
+-// 500.
+-func (c *RegionsListCall) MaxResults(maxResults int64) *RegionsListCall {
+-	c.opt_["maxResults"] = maxResults
+-	return c
+-}
+-
+-// PageToken sets the optional parameter "pageToken": Tag returned by a
+-// previous list request truncated by maxResults. Used to continue a
+-// previous list request.
+-func (c *RegionsListCall) PageToken(pageToken string) *RegionsListCall {
+-	c.opt_["pageToken"] = pageToken
+-	return c
+-}
+-
+-func (c *RegionsListCall) Do() (*RegionList, error) {
+-	var body io.Reader = nil
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	if v, ok := c.opt_["filter"]; ok {
+-		params.Set("filter", fmt.Sprintf("%v", v))
+-	}
+-	if v, ok := c.opt_["maxResults"]; ok {
+-		params.Set("maxResults", fmt.Sprintf("%v", v))
+-	}
+-	if v, ok := c.opt_["pageToken"]; ok {
+-		params.Set("pageToken", fmt.Sprintf("%v", v))
+-	}
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/regions")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("GET", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project": c.project,
+-	})
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *RegionList
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Retrieves the list of region resources available to the specified project.",
+-	//   "httpMethod": "GET",
+-	//   "id": "compute.regions.list",
+-	//   "parameterOrder": [
+-	//     "project"
+-	//   ],
+-	//   "parameters": {
+-	//     "filter": {
+-	//       "description": "Optional. Filter expression for filtering listed resources.",
+-	//       "location": "query",
+-	//       "type": "string"
+-	//     },
+-	//     "maxResults": {
+-	//       "default": "500",
+-	//       "description": "Optional. Maximum count of results to be returned. Maximum value is 500 and default value is 500.",
+-	//       "format": "uint32",
+-	//       "location": "query",
+-	//       "maximum": "500",
+-	//       "minimum": "0",
+-	//       "type": "integer"
+-	//     },
+-	//     "pageToken": {
+-	//       "description": "Optional. Tag returned by a previous list request truncated by maxResults. Used to continue a previous list request.",
+-	//       "location": "query",
+-	//       "type": "string"
+-	//     },
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/regions",
+-	//   "response": {
+-	//     "$ref": "RegionList"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute",
+-	//     "https://www.googleapis.com/auth/compute.readonly"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.routes.delete":
+-
+-type RoutesDeleteCall struct {
+-	s       *Service
+-	project string
+-	route   string
+-	opt_    map[string]interface{}
+-}
+-
+-// Delete: Deletes the specified route resource.
+-func (r *RoutesService) Delete(project string, route string) *RoutesDeleteCall {
+-	c := &RoutesDeleteCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.route = route
+-	return c
+-}
+-
+-func (c *RoutesDeleteCall) Do() (*Operation, error) {
+-	var body io.Reader = nil
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/global/routes/{route}")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("DELETE", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project": c.project,
+-		"route":   c.route,
+-	})
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *Operation
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Deletes the specified route resource.",
+-	//   "httpMethod": "DELETE",
+-	//   "id": "compute.routes.delete",
+-	//   "parameterOrder": [
+-	//     "project",
+-	//     "route"
+-	//   ],
+-	//   "parameters": {
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "route": {
+-	//       "description": "Name of the route resource to delete.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/global/routes/{route}",
+-	//   "response": {
+-	//     "$ref": "Operation"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.routes.get":
+-
+-type RoutesGetCall struct {
+-	s       *Service
+-	project string
+-	route   string
+-	opt_    map[string]interface{}
+-}
+-
+-// Get: Returns the specified route resource.
+-func (r *RoutesService) Get(project string, route string) *RoutesGetCall {
+-	c := &RoutesGetCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.route = route
+-	return c
+-}
+-
+-func (c *RoutesGetCall) Do() (*Route, error) {
+-	var body io.Reader = nil
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/global/routes/{route}")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("GET", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project": c.project,
+-		"route":   c.route,
+-	})
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *Route
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Returns the specified route resource.",
+-	//   "httpMethod": "GET",
+-	//   "id": "compute.routes.get",
+-	//   "parameterOrder": [
+-	//     "project",
+-	//     "route"
+-	//   ],
+-	//   "parameters": {
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "route": {
+-	//       "description": "Name of the route resource to return.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/global/routes/{route}",
+-	//   "response": {
+-	//     "$ref": "Route"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute",
+-	//     "https://www.googleapis.com/auth/compute.readonly"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.routes.insert":
+-
+-type RoutesInsertCall struct {
+-	s       *Service
+-	project string
+-	route   *Route
+-	opt_    map[string]interface{}
+-}
+-
+-// Insert: Creates a route resource in the specified project using the
+-// data included in the request.
+-func (r *RoutesService) Insert(project string, route *Route) *RoutesInsertCall {
+-	c := &RoutesInsertCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.route = route
+-	return c
+-}
+-
+-func (c *RoutesInsertCall) Do() (*Operation, error) {
+-	var body io.Reader = nil
+-	body, err := googleapi.WithoutDataWrapper.JSONReader(c.route)
+-	if err != nil {
+-		return nil, err
+-	}
+-	ctype := "application/json"
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/global/routes")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("POST", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project": c.project,
+-	})
+-	req.Header.Set("Content-Type", ctype)
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *Operation
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Creates a route resource in the specified project using the data included in the request.",
+-	//   "httpMethod": "POST",
+-	//   "id": "compute.routes.insert",
+-	//   "parameterOrder": [
+-	//     "project"
+-	//   ],
+-	//   "parameters": {
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/global/routes",
+-	//   "request": {
+-	//     "$ref": "Route"
+-	//   },
+-	//   "response": {
+-	//     "$ref": "Operation"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.routes.list":
+-
+-type RoutesListCall struct {
+-	s       *Service
+-	project string
+-	opt_    map[string]interface{}
+-}
+-
+-// List: Retrieves the list of route resources available to the
+-// specified project.
+-func (r *RoutesService) List(project string) *RoutesListCall {
+-	c := &RoutesListCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	return c
+-}
+-
+-// Filter sets the optional parameter "filter": Filter expression for
+-// filtering listed resources.
+-func (c *RoutesListCall) Filter(filter string) *RoutesListCall {
+-	c.opt_["filter"] = filter
+-	return c
+-}
+-
+-// MaxResults sets the optional parameter "maxResults": Maximum count of
+-// results to be returned. Maximum value is 500 and default value is
+-// 500.
+-func (c *RoutesListCall) MaxResults(maxResults int64) *RoutesListCall {
+-	c.opt_["maxResults"] = maxResults
+-	return c
+-}
+-
+-// PageToken sets the optional parameter "pageToken": Tag returned by a
+-// previous list request truncated by maxResults. Used to continue a
+-// previous list request.
+-func (c *RoutesListCall) PageToken(pageToken string) *RoutesListCall {
+-	c.opt_["pageToken"] = pageToken
+-	return c
+-}
+-
+-func (c *RoutesListCall) Do() (*RouteList, error) {
+-	var body io.Reader = nil
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	if v, ok := c.opt_["filter"]; ok {
+-		params.Set("filter", fmt.Sprintf("%v", v))
+-	}
+-	if v, ok := c.opt_["maxResults"]; ok {
+-		params.Set("maxResults", fmt.Sprintf("%v", v))
+-	}
+-	if v, ok := c.opt_["pageToken"]; ok {
+-		params.Set("pageToken", fmt.Sprintf("%v", v))
+-	}
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/global/routes")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("GET", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project": c.project,
+-	})
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *RouteList
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Retrieves the list of route resources available to the specified project.",
+-	//   "httpMethod": "GET",
+-	//   "id": "compute.routes.list",
+-	//   "parameterOrder": [
+-	//     "project"
+-	//   ],
+-	//   "parameters": {
+-	//     "filter": {
+-	//       "description": "Optional. Filter expression for filtering listed resources.",
+-	//       "location": "query",
+-	//       "type": "string"
+-	//     },
+-	//     "maxResults": {
+-	//       "default": "500",
+-	//       "description": "Optional. Maximum count of results to be returned. Maximum value is 500 and default value is 500.",
+-	//       "format": "uint32",
+-	//       "location": "query",
+-	//       "maximum": "500",
+-	//       "minimum": "0",
+-	//       "type": "integer"
+-	//     },
+-	//     "pageToken": {
+-	//       "description": "Optional. Tag returned by a previous list request truncated by maxResults. Used to continue a previous list request.",
+-	//       "location": "query",
+-	//       "type": "string"
+-	//     },
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/global/routes",
+-	//   "response": {
+-	//     "$ref": "RouteList"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute",
+-	//     "https://www.googleapis.com/auth/compute.readonly"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.snapshots.delete":
+-
+-type SnapshotsDeleteCall struct {
+-	s        *Service
+-	project  string
+-	snapshot string
+-	opt_     map[string]interface{}
+-}
+-
+-// Delete: Deletes the specified persistent disk snapshot resource.
+-func (r *SnapshotsService) Delete(project string, snapshot string) *SnapshotsDeleteCall {
+-	c := &SnapshotsDeleteCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.snapshot = snapshot
+-	return c
+-}
+-
+-func (c *SnapshotsDeleteCall) Do() (*Operation, error) {
+-	var body io.Reader = nil
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/global/snapshots/{snapshot}")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("DELETE", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project":  c.project,
+-		"snapshot": c.snapshot,
+-	})
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *Operation
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Deletes the specified persistent disk snapshot resource.",
+-	//   "httpMethod": "DELETE",
+-	//   "id": "compute.snapshots.delete",
+-	//   "parameterOrder": [
+-	//     "project",
+-	//     "snapshot"
+-	//   ],
+-	//   "parameters": {
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "snapshot": {
+-	//       "description": "Name of the persistent disk snapshot resource to delete.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/global/snapshots/{snapshot}",
+-	//   "response": {
+-	//     "$ref": "Operation"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.snapshots.get":
+-
+-type SnapshotsGetCall struct {
+-	s        *Service
+-	project  string
+-	snapshot string
+-	opt_     map[string]interface{}
+-}
+-
+-// Get: Returns the specified persistent disk snapshot resource.
+-func (r *SnapshotsService) Get(project string, snapshot string) *SnapshotsGetCall {
+-	c := &SnapshotsGetCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.snapshot = snapshot
+-	return c
+-}
+-
+-func (c *SnapshotsGetCall) Do() (*Snapshot, error) {
+-	var body io.Reader = nil
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/global/snapshots/{snapshot}")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("GET", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project":  c.project,
+-		"snapshot": c.snapshot,
+-	})
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *Snapshot
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Returns the specified persistent disk snapshot resource.",
+-	//   "httpMethod": "GET",
+-	//   "id": "compute.snapshots.get",
+-	//   "parameterOrder": [
+-	//     "project",
+-	//     "snapshot"
+-	//   ],
+-	//   "parameters": {
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "snapshot": {
+-	//       "description": "Name of the persistent disk snapshot resource to return.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/global/snapshots/{snapshot}",
+-	//   "response": {
+-	//     "$ref": "Snapshot"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute",
+-	//     "https://www.googleapis.com/auth/compute.readonly"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.snapshots.list":
+-
+-type SnapshotsListCall struct {
+-	s       *Service
+-	project string
+-	opt_    map[string]interface{}
+-}
+-
+-// List: Retrieves the list of persistent disk snapshot resources
+-// contained within the specified project.
+-func (r *SnapshotsService) List(project string) *SnapshotsListCall {
+-	c := &SnapshotsListCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	return c
+-}
+-
+-// Filter sets the optional parameter "filter": Filter expression for
+-// filtering listed resources.
+-func (c *SnapshotsListCall) Filter(filter string) *SnapshotsListCall {
+-	c.opt_["filter"] = filter
+-	return c
+-}
+-
+-// MaxResults sets the optional parameter "maxResults": Maximum count of
+-// results to be returned. Maximum value is 500 and default value is
+-// 500.
+-func (c *SnapshotsListCall) MaxResults(maxResults int64) *SnapshotsListCall {
+-	c.opt_["maxResults"] = maxResults
+-	return c
+-}
+-
+-// PageToken sets the optional parameter "pageToken": Tag returned by a
+-// previous list request truncated by maxResults. Used to continue a
+-// previous list request.
+-func (c *SnapshotsListCall) PageToken(pageToken string) *SnapshotsListCall {
+-	c.opt_["pageToken"] = pageToken
+-	return c
+-}
+-
+-func (c *SnapshotsListCall) Do() (*SnapshotList, error) {
+-	var body io.Reader = nil
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	if v, ok := c.opt_["filter"]; ok {
+-		params.Set("filter", fmt.Sprintf("%v", v))
+-	}
+-	if v, ok := c.opt_["maxResults"]; ok {
+-		params.Set("maxResults", fmt.Sprintf("%v", v))
+-	}
+-	if v, ok := c.opt_["pageToken"]; ok {
+-		params.Set("pageToken", fmt.Sprintf("%v", v))
+-	}
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/global/snapshots")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("GET", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project": c.project,
+-	})
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *SnapshotList
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Retrieves the list of persistent disk snapshot resources contained within the specified project.",
+-	//   "httpMethod": "GET",
+-	//   "id": "compute.snapshots.list",
+-	//   "parameterOrder": [
+-	//     "project"
+-	//   ],
+-	//   "parameters": {
+-	//     "filter": {
+-	//       "description": "Optional. Filter expression for filtering listed resources.",
+-	//       "location": "query",
+-	//       "type": "string"
+-	//     },
+-	//     "maxResults": {
+-	//       "default": "500",
+-	//       "description": "Optional. Maximum count of results to be returned. Maximum value is 500 and default value is 500.",
+-	//       "format": "uint32",
+-	//       "location": "query",
+-	//       "maximum": "500",
+-	//       "minimum": "0",
+-	//       "type": "integer"
+-	//     },
+-	//     "pageToken": {
+-	//       "description": "Optional. Tag returned by a previous list request truncated by maxResults. Used to continue a previous list request.",
+-	//       "location": "query",
+-	//       "type": "string"
+-	//     },
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/global/snapshots",
+-	//   "response": {
+-	//     "$ref": "SnapshotList"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute",
+-	//     "https://www.googleapis.com/auth/compute.readonly"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.targetHttpProxies.delete":
+-
+-type TargetHttpProxiesDeleteCall struct {
+-	s               *Service
+-	project         string
+-	targetHttpProxy string
+-	opt_            map[string]interface{}
+-}
+-
+-// Delete: Deletes the specified TargetHttpProxy resource.
+-func (r *TargetHttpProxiesService) Delete(project string, targetHttpProxy string) *TargetHttpProxiesDeleteCall {
+-	c := &TargetHttpProxiesDeleteCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.targetHttpProxy = targetHttpProxy
+-	return c
+-}
+-
+-func (c *TargetHttpProxiesDeleteCall) Do() (*Operation, error) {
+-	var body io.Reader = nil
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/global/targetHttpProxies/{targetHttpProxy}")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("DELETE", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project":         c.project,
+-		"targetHttpProxy": c.targetHttpProxy,
+-	})
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *Operation
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Deletes the specified TargetHttpProxy resource.",
+-	//   "httpMethod": "DELETE",
+-	//   "id": "compute.targetHttpProxies.delete",
+-	//   "parameterOrder": [
+-	//     "project",
+-	//     "targetHttpProxy"
+-	//   ],
+-	//   "parameters": {
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "targetHttpProxy": {
+-	//       "description": "Name of the TargetHttpProxy resource to delete.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/global/targetHttpProxies/{targetHttpProxy}",
+-	//   "response": {
+-	//     "$ref": "Operation"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.targetHttpProxies.get":
+-
+-type TargetHttpProxiesGetCall struct {
+-	s               *Service
+-	project         string
+-	targetHttpProxy string
+-	opt_            map[string]interface{}
+-}
+-
+-// Get: Returns the specified TargetHttpProxy resource.
+-func (r *TargetHttpProxiesService) Get(project string, targetHttpProxy string) *TargetHttpProxiesGetCall {
+-	c := &TargetHttpProxiesGetCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.targetHttpProxy = targetHttpProxy
+-	return c
+-}
+-
+-func (c *TargetHttpProxiesGetCall) Do() (*TargetHttpProxy, error) {
+-	var body io.Reader = nil
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/global/targetHttpProxies/{targetHttpProxy}")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("GET", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project":         c.project,
+-		"targetHttpProxy": c.targetHttpProxy,
+-	})
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *TargetHttpProxy
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Returns the specified TargetHttpProxy resource.",
+-	//   "httpMethod": "GET",
+-	//   "id": "compute.targetHttpProxies.get",
+-	//   "parameterOrder": [
+-	//     "project",
+-	//     "targetHttpProxy"
+-	//   ],
+-	//   "parameters": {
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "targetHttpProxy": {
+-	//       "description": "Name of the TargetHttpProxy resource to return.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/global/targetHttpProxies/{targetHttpProxy}",
+-	//   "response": {
+-	//     "$ref": "TargetHttpProxy"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute",
+-	//     "https://www.googleapis.com/auth/compute.readonly"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.targetHttpProxies.insert":
+-
+-type TargetHttpProxiesInsertCall struct {
+-	s               *Service
+-	project         string
+-	targethttpproxy *TargetHttpProxy
+-	opt_            map[string]interface{}
+-}
+-
+-// Insert: Creates a TargetHttpProxy resource in the specified project
+-// using the data included in the request.
+-func (r *TargetHttpProxiesService) Insert(project string, targethttpproxy *TargetHttpProxy) *TargetHttpProxiesInsertCall {
+-	c := &TargetHttpProxiesInsertCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.targethttpproxy = targethttpproxy
+-	return c
+-}
+-
+-func (c *TargetHttpProxiesInsertCall) Do() (*Operation, error) {
+-	var body io.Reader = nil
+-	body, err := googleapi.WithoutDataWrapper.JSONReader(c.targethttpproxy)
+-	if err != nil {
+-		return nil, err
+-	}
+-	ctype := "application/json"
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/global/targetHttpProxies")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("POST", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project": c.project,
+-	})
+-	req.Header.Set("Content-Type", ctype)
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *Operation
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Creates a TargetHttpProxy resource in the specified project using the data included in the request.",
+-	//   "httpMethod": "POST",
+-	//   "id": "compute.targetHttpProxies.insert",
+-	//   "parameterOrder": [
+-	//     "project"
+-	//   ],
+-	//   "parameters": {
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/global/targetHttpProxies",
+-	//   "request": {
+-	//     "$ref": "TargetHttpProxy"
+-	//   },
+-	//   "response": {
+-	//     "$ref": "Operation"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.targetHttpProxies.list":
+-
+-type TargetHttpProxiesListCall struct {
+-	s       *Service
+-	project string
+-	opt_    map[string]interface{}
+-}
+-
+-// List: Retrieves the list of TargetHttpProxy resources available to
+-// the specified project.
+-func (r *TargetHttpProxiesService) List(project string) *TargetHttpProxiesListCall {
+-	c := &TargetHttpProxiesListCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	return c
+-}
+-
+-// Filter sets the optional parameter "filter": Filter expression for
+-// filtering listed resources.
+-func (c *TargetHttpProxiesListCall) Filter(filter string) *TargetHttpProxiesListCall {
+-	c.opt_["filter"] = filter
+-	return c
+-}
+-
+-// MaxResults sets the optional parameter "maxResults": Maximum count of
+-// results to be returned. Maximum value is 500 and default value is
+-// 500.
+-func (c *TargetHttpProxiesListCall) MaxResults(maxResults int64) *TargetHttpProxiesListCall {
+-	c.opt_["maxResults"] = maxResults
+-	return c
+-}
+-
+-// PageToken sets the optional parameter "pageToken": Tag returned by a
+-// previous list request truncated by maxResults. Used to continue a
+-// previous list request.
+-func (c *TargetHttpProxiesListCall) PageToken(pageToken string) *TargetHttpProxiesListCall {
+-	c.opt_["pageToken"] = pageToken
+-	return c
+-}
+-
+-func (c *TargetHttpProxiesListCall) Do() (*TargetHttpProxyList, error) {
+-	var body io.Reader = nil
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	if v, ok := c.opt_["filter"]; ok {
+-		params.Set("filter", fmt.Sprintf("%v", v))
+-	}
+-	if v, ok := c.opt_["maxResults"]; ok {
+-		params.Set("maxResults", fmt.Sprintf("%v", v))
+-	}
+-	if v, ok := c.opt_["pageToken"]; ok {
+-		params.Set("pageToken", fmt.Sprintf("%v", v))
+-	}
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/global/targetHttpProxies")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("GET", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project": c.project,
+-	})
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *TargetHttpProxyList
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Retrieves the list of TargetHttpProxy resources available to the specified project.",
+-	//   "httpMethod": "GET",
+-	//   "id": "compute.targetHttpProxies.list",
+-	//   "parameterOrder": [
+-	//     "project"
+-	//   ],
+-	//   "parameters": {
+-	//     "filter": {
+-	//       "description": "Optional. Filter expression for filtering listed resources.",
+-	//       "location": "query",
+-	//       "type": "string"
+-	//     },
+-	//     "maxResults": {
+-	//       "default": "500",
+-	//       "description": "Optional. Maximum count of results to be returned. Maximum value is 500 and default value is 500.",
+-	//       "format": "uint32",
+-	//       "location": "query",
+-	//       "maximum": "500",
+-	//       "minimum": "0",
+-	//       "type": "integer"
+-	//     },
+-	//     "pageToken": {
+-	//       "description": "Optional. Tag returned by a previous list request truncated by maxResults. Used to continue a previous list request.",
+-	//       "location": "query",
+-	//       "type": "string"
+-	//     },
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/global/targetHttpProxies",
+-	//   "response": {
+-	//     "$ref": "TargetHttpProxyList"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute",
+-	//     "https://www.googleapis.com/auth/compute.readonly"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.targetHttpProxies.setUrlMap":
+-
+-type TargetHttpProxiesSetUrlMapCall struct {
+-	s               *Service
+-	project         string
+-	targetHttpProxy string
+-	urlmapreference *UrlMapReference
+-	opt_            map[string]interface{}
+-}
+-
+-// SetUrlMap: Changes the URL map for TargetHttpProxy.
+-func (r *TargetHttpProxiesService) SetUrlMap(project string, targetHttpProxy string, urlmapreference *UrlMapReference) *TargetHttpProxiesSetUrlMapCall {
+-	c := &TargetHttpProxiesSetUrlMapCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.targetHttpProxy = targetHttpProxy
+-	c.urlmapreference = urlmapreference
+-	return c
+-}
+-
+-func (c *TargetHttpProxiesSetUrlMapCall) Do() (*Operation, error) {
+-	var body io.Reader = nil
+-	body, err := googleapi.WithoutDataWrapper.JSONReader(c.urlmapreference)
+-	if err != nil {
+-		return nil, err
+-	}
+-	ctype := "application/json"
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/targetHttpProxies/{targetHttpProxy}/setUrlMap")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("POST", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project":         c.project,
+-		"targetHttpProxy": c.targetHttpProxy,
+-	})
+-	req.Header.Set("Content-Type", ctype)
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *Operation
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Changes the URL map for TargetHttpProxy.",
+-	//   "httpMethod": "POST",
+-	//   "id": "compute.targetHttpProxies.setUrlMap",
+-	//   "parameterOrder": [
+-	//     "project",
+-	//     "targetHttpProxy"
+-	//   ],
+-	//   "parameters": {
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "targetHttpProxy": {
+-	//       "description": "Name of the TargetHttpProxy resource whose URL map is to be set.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/targetHttpProxies/{targetHttpProxy}/setUrlMap",
+-	//   "request": {
+-	//     "$ref": "UrlMapReference"
+-	//   },
+-	//   "response": {
+-	//     "$ref": "Operation"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.targetInstances.aggregatedList":
+-
+-type TargetInstancesAggregatedListCall struct {
+-	s       *Service
+-	project string
+-	opt_    map[string]interface{}
+-}
+-
+-// AggregatedList: Retrieves the list of target instances grouped by
+-// scope.
+-func (r *TargetInstancesService) AggregatedList(project string) *TargetInstancesAggregatedListCall {
+-	c := &TargetInstancesAggregatedListCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	return c
+-}
+-
+-// Filter sets the optional parameter "filter": Filter expression for
+-// filtering listed resources.
+-func (c *TargetInstancesAggregatedListCall) Filter(filter string) *TargetInstancesAggregatedListCall {
+-	c.opt_["filter"] = filter
+-	return c
+-}
+-
+-// MaxResults sets the optional parameter "maxResults": Maximum count of
+-// results to be returned. Maximum value is 500 and default value is
+-// 500.
+-func (c *TargetInstancesAggregatedListCall) MaxResults(maxResults int64) *TargetInstancesAggregatedListCall {
+-	c.opt_["maxResults"] = maxResults
+-	return c
+-}
+-
+-// PageToken sets the optional parameter "pageToken": Tag returned by a
+-// previous list request truncated by maxResults. Used to continue a
+-// previous list request.
+-func (c *TargetInstancesAggregatedListCall) PageToken(pageToken string) *TargetInstancesAggregatedListCall {
+-	c.opt_["pageToken"] = pageToken
+-	return c
+-}
+-
+-func (c *TargetInstancesAggregatedListCall) Do() (*TargetInstanceAggregatedList, error) {
+-	var body io.Reader = nil
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	if v, ok := c.opt_["filter"]; ok {
+-		params.Set("filter", fmt.Sprintf("%v", v))
+-	}
+-	if v, ok := c.opt_["maxResults"]; ok {
+-		params.Set("maxResults", fmt.Sprintf("%v", v))
+-	}
+-	if v, ok := c.opt_["pageToken"]; ok {
+-		params.Set("pageToken", fmt.Sprintf("%v", v))
+-	}
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/aggregated/targetInstances")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("GET", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project": c.project,
+-	})
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *TargetInstanceAggregatedList
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Retrieves the list of target instances grouped by scope.",
+-	//   "httpMethod": "GET",
+-	//   "id": "compute.targetInstances.aggregatedList",
+-	//   "parameterOrder": [
+-	//     "project"
+-	//   ],
+-	//   "parameters": {
+-	//     "filter": {
+-	//       "description": "Optional. Filter expression for filtering listed resources.",
+-	//       "location": "query",
+-	//       "type": "string"
+-	//     },
+-	//     "maxResults": {
+-	//       "default": "500",
+-	//       "description": "Optional. Maximum count of results to be returned. Maximum value is 500 and default value is 500.",
+-	//       "format": "uint32",
+-	//       "location": "query",
+-	//       "maximum": "500",
+-	//       "minimum": "0",
+-	//       "type": "integer"
+-	//     },
+-	//     "pageToken": {
+-	//       "description": "Optional. Tag returned by a previous list request truncated by maxResults. Used to continue a previous list request.",
+-	//       "location": "query",
+-	//       "type": "string"
+-	//     },
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/aggregated/targetInstances",
+-	//   "response": {
+-	//     "$ref": "TargetInstanceAggregatedList"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute",
+-	//     "https://www.googleapis.com/auth/compute.readonly"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.targetInstances.delete":
+-
+-type TargetInstancesDeleteCall struct {
+-	s              *Service
+-	project        string
+-	zone           string
+-	targetInstance string
+-	opt_           map[string]interface{}
+-}
+-
+-// Delete: Deletes the specified TargetInstance resource.
+-func (r *TargetInstancesService) Delete(project string, zone string, targetInstance string) *TargetInstancesDeleteCall {
+-	c := &TargetInstancesDeleteCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.zone = zone
+-	c.targetInstance = targetInstance
+-	return c
+-}
+-
+-func (c *TargetInstancesDeleteCall) Do() (*Operation, error) {
+-	var body io.Reader = nil
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/zones/{zone}/targetInstances/{targetInstance}")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("DELETE", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project":        c.project,
+-		"zone":           c.zone,
+-		"targetInstance": c.targetInstance,
+-	})
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *Operation
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Deletes the specified TargetInstance resource.",
+-	//   "httpMethod": "DELETE",
+-	//   "id": "compute.targetInstances.delete",
+-	//   "parameterOrder": [
+-	//     "project",
+-	//     "zone",
+-	//     "targetInstance"
+-	//   ],
+-	//   "parameters": {
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "targetInstance": {
+-	//       "description": "Name of the TargetInstance resource to delete.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "zone": {
+-	//       "description": "Name of the zone scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/zones/{zone}/targetInstances/{targetInstance}",
+-	//   "response": {
+-	//     "$ref": "Operation"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.targetInstances.get":
+-
+-type TargetInstancesGetCall struct {
+-	s              *Service
+-	project        string
+-	zone           string
+-	targetInstance string
+-	opt_           map[string]interface{}
+-}
+-
+-// Get: Returns the specified TargetInstance resource.
+-func (r *TargetInstancesService) Get(project string, zone string, targetInstance string) *TargetInstancesGetCall {
+-	c := &TargetInstancesGetCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.zone = zone
+-	c.targetInstance = targetInstance
+-	return c
+-}
+-
+-func (c *TargetInstancesGetCall) Do() (*TargetInstance, error) {
+-	var body io.Reader = nil
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/zones/{zone}/targetInstances/{targetInstance}")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("GET", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project":        c.project,
+-		"zone":           c.zone,
+-		"targetInstance": c.targetInstance,
+-	})
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *TargetInstance
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Returns the specified TargetInstance resource.",
+-	//   "httpMethod": "GET",
+-	//   "id": "compute.targetInstances.get",
+-	//   "parameterOrder": [
+-	//     "project",
+-	//     "zone",
+-	//     "targetInstance"
+-	//   ],
+-	//   "parameters": {
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "targetInstance": {
+-	//       "description": "Name of the TargetInstance resource to return.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "zone": {
+-	//       "description": "Name of the zone scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/zones/{zone}/targetInstances/{targetInstance}",
+-	//   "response": {
+-	//     "$ref": "TargetInstance"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute",
+-	//     "https://www.googleapis.com/auth/compute.readonly"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.targetInstances.insert":
+-
+-type TargetInstancesInsertCall struct {
+-	s              *Service
+-	project        string
+-	zone           string
+-	targetinstance *TargetInstance
+-	opt_           map[string]interface{}
+-}
+-
+-// Insert: Creates a TargetInstance resource in the specified project
+-// and zone using the data included in the request.
+-func (r *TargetInstancesService) Insert(project string, zone string, targetinstance *TargetInstance) *TargetInstancesInsertCall {
+-	c := &TargetInstancesInsertCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.zone = zone
+-	c.targetinstance = targetinstance
+-	return c
+-}
+-
+-func (c *TargetInstancesInsertCall) Do() (*Operation, error) {
+-	var body io.Reader = nil
+-	body, err := googleapi.WithoutDataWrapper.JSONReader(c.targetinstance)
+-	if err != nil {
+-		return nil, err
+-	}
+-	ctype := "application/json"
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/zones/{zone}/targetInstances")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("POST", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project": c.project,
+-		"zone":    c.zone,
+-	})
+-	req.Header.Set("Content-Type", ctype)
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *Operation
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Creates a TargetInstance resource in the specified project and zone using the data included in the request.",
+-	//   "httpMethod": "POST",
+-	//   "id": "compute.targetInstances.insert",
+-	//   "parameterOrder": [
+-	//     "project",
+-	//     "zone"
+-	//   ],
+-	//   "parameters": {
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "zone": {
+-	//       "description": "Name of the zone scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/zones/{zone}/targetInstances",
+-	//   "request": {
+-	//     "$ref": "TargetInstance"
+-	//   },
+-	//   "response": {
+-	//     "$ref": "Operation"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.targetInstances.list":
+-
+-type TargetInstancesListCall struct {
+-	s       *Service
+-	project string
+-	zone    string
+-	opt_    map[string]interface{}
+-}
+-
+-// List: Retrieves the list of TargetInstance resources available to the
+-// specified project and zone.
+-func (r *TargetInstancesService) List(project string, zone string) *TargetInstancesListCall {
+-	c := &TargetInstancesListCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.zone = zone
+-	return c
+-}
+-
+-// Filter sets the optional parameter "filter": Filter expression for
+-// filtering listed resources.
+-func (c *TargetInstancesListCall) Filter(filter string) *TargetInstancesListCall {
+-	c.opt_["filter"] = filter
+-	return c
+-}
+-
+-// MaxResults sets the optional parameter "maxResults": Maximum count of
+-// results to be returned. Maximum value is 500 and default value is
+-// 500.
+-func (c *TargetInstancesListCall) MaxResults(maxResults int64) *TargetInstancesListCall {
+-	c.opt_["maxResults"] = maxResults
+-	return c
+-}
+-
+-// PageToken sets the optional parameter "pageToken": Tag returned by a
+-// previous list request truncated by maxResults. Used to continue a
+-// previous list request.
+-func (c *TargetInstancesListCall) PageToken(pageToken string) *TargetInstancesListCall {
+-	c.opt_["pageToken"] = pageToken
+-	return c
+-}
+-
+-func (c *TargetInstancesListCall) Do() (*TargetInstanceList, error) {
+-	var body io.Reader = nil
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	if v, ok := c.opt_["filter"]; ok {
+-		params.Set("filter", fmt.Sprintf("%v", v))
+-	}
+-	if v, ok := c.opt_["maxResults"]; ok {
+-		params.Set("maxResults", fmt.Sprintf("%v", v))
+-	}
+-	if v, ok := c.opt_["pageToken"]; ok {
+-		params.Set("pageToken", fmt.Sprintf("%v", v))
+-	}
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/zones/{zone}/targetInstances")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("GET", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project": c.project,
+-		"zone":    c.zone,
+-	})
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *TargetInstanceList
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Retrieves the list of TargetInstance resources available to the specified project and zone.",
+-	//   "httpMethod": "GET",
+-	//   "id": "compute.targetInstances.list",
+-	//   "parameterOrder": [
+-	//     "project",
+-	//     "zone"
+-	//   ],
+-	//   "parameters": {
+-	//     "filter": {
+-	//       "description": "Optional. Filter expression for filtering listed resources.",
+-	//       "location": "query",
+-	//       "type": "string"
+-	//     },
+-	//     "maxResults": {
+-	//       "default": "500",
+-	//       "description": "Optional. Maximum count of results to be returned. Maximum value is 500 and default value is 500.",
+-	//       "format": "uint32",
+-	//       "location": "query",
+-	//       "maximum": "500",
+-	//       "minimum": "0",
+-	//       "type": "integer"
+-	//     },
+-	//     "pageToken": {
+-	//       "description": "Optional. Tag returned by a previous list request truncated by maxResults. Used to continue a previous list request.",
+-	//       "location": "query",
+-	//       "type": "string"
+-	//     },
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "zone": {
+-	//       "description": "Name of the zone scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/zones/{zone}/targetInstances",
+-	//   "response": {
+-	//     "$ref": "TargetInstanceList"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute",
+-	//     "https://www.googleapis.com/auth/compute.readonly"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.targetPools.addHealthCheck":
+-
+-type TargetPoolsAddHealthCheckCall struct {
+-	s                                *Service
+-	project                          string
+-	region                           string
+-	targetPool                       string
+-	targetpoolsaddhealthcheckrequest *TargetPoolsAddHealthCheckRequest
+-	opt_                             map[string]interface{}
+-}
+-
+-// AddHealthCheck: Adds health check URL to targetPool.
+-func (r *TargetPoolsService) AddHealthCheck(project string, region string, targetPool string, targetpoolsaddhealthcheckrequest *TargetPoolsAddHealthCheckRequest) *TargetPoolsAddHealthCheckCall {
+-	c := &TargetPoolsAddHealthCheckCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.region = region
+-	c.targetPool = targetPool
+-	c.targetpoolsaddhealthcheckrequest = targetpoolsaddhealthcheckrequest
+-	return c
+-}
+-
+-func (c *TargetPoolsAddHealthCheckCall) Do() (*Operation, error) {
+-	var body io.Reader = nil
+-	body, err := googleapi.WithoutDataWrapper.JSONReader(c.targetpoolsaddhealthcheckrequest)
+-	if err != nil {
+-		return nil, err
+-	}
+-	ctype := "application/json"
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/regions/{region}/targetPools/{targetPool}/addHealthCheck")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("POST", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project":    c.project,
+-		"region":     c.region,
+-		"targetPool": c.targetPool,
+-	})
+-	req.Header.Set("Content-Type", ctype)
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *Operation
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Adds health check URL to targetPool.",
+-	//   "httpMethod": "POST",
+-	//   "id": "compute.targetPools.addHealthCheck",
+-	//   "parameterOrder": [
+-	//     "project",
+-	//     "region",
+-	//     "targetPool"
+-	//   ],
+-	//   "parameters": {
+-	//     "project": {
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "region": {
+-	//       "description": "Name of the region scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "targetPool": {
+-	//       "description": "Name of the TargetPool resource to which health_check_url is to be added.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/regions/{region}/targetPools/{targetPool}/addHealthCheck",
+-	//   "request": {
+-	//     "$ref": "TargetPoolsAddHealthCheckRequest"
+-	//   },
+-	//   "response": {
+-	//     "$ref": "Operation"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.targetPools.addInstance":
+-
+-type TargetPoolsAddInstanceCall struct {
+-	s                             *Service
+-	project                       string
+-	region                        string
+-	targetPool                    string
+-	targetpoolsaddinstancerequest *TargetPoolsAddInstanceRequest
+-	opt_                          map[string]interface{}
+-}
+-
+-// AddInstance: Adds instance url to targetPool.
+-func (r *TargetPoolsService) AddInstance(project string, region string, targetPool string, targetpoolsaddinstancerequest *TargetPoolsAddInstanceRequest) *TargetPoolsAddInstanceCall {
+-	c := &TargetPoolsAddInstanceCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.region = region
+-	c.targetPool = targetPool
+-	c.targetpoolsaddinstancerequest = targetpoolsaddinstancerequest
+-	return c
+-}
+-
+-func (c *TargetPoolsAddInstanceCall) Do() (*Operation, error) {
+-	var body io.Reader = nil
+-	body, err := googleapi.WithoutDataWrapper.JSONReader(c.targetpoolsaddinstancerequest)
+-	if err != nil {
+-		return nil, err
+-	}
+-	ctype := "application/json"
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/regions/{region}/targetPools/{targetPool}/addInstance")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("POST", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project":    c.project,
+-		"region":     c.region,
+-		"targetPool": c.targetPool,
+-	})
+-	req.Header.Set("Content-Type", ctype)
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *Operation
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Adds instance url to targetPool.",
+-	//   "httpMethod": "POST",
+-	//   "id": "compute.targetPools.addInstance",
+-	//   "parameterOrder": [
+-	//     "project",
+-	//     "region",
+-	//     "targetPool"
+-	//   ],
+-	//   "parameters": {
+-	//     "project": {
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "region": {
+-	//       "description": "Name of the region scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "targetPool": {
+-	//       "description": "Name of the TargetPool resource to which instance_url is to be added.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/regions/{region}/targetPools/{targetPool}/addInstance",
+-	//   "request": {
+-	//     "$ref": "TargetPoolsAddInstanceRequest"
+-	//   },
+-	//   "response": {
+-	//     "$ref": "Operation"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.targetPools.aggregatedList":
+-
+-type TargetPoolsAggregatedListCall struct {
+-	s       *Service
+-	project string
+-	opt_    map[string]interface{}
+-}
+-
+-// AggregatedList: Retrieves the list of target pools grouped by scope.
+-func (r *TargetPoolsService) AggregatedList(project string) *TargetPoolsAggregatedListCall {
+-	c := &TargetPoolsAggregatedListCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	return c
+-}
+-
+-// Filter sets the optional parameter "filter": Filter expression for
+-// filtering listed resources.
+-func (c *TargetPoolsAggregatedListCall) Filter(filter string) *TargetPoolsAggregatedListCall {
+-	c.opt_["filter"] = filter
+-	return c
+-}
+-
+-// MaxResults sets the optional parameter "maxResults": Maximum count of
+-// results to be returned. Maximum value is 500 and default value is
+-// 500.
+-func (c *TargetPoolsAggregatedListCall) MaxResults(maxResults int64) *TargetPoolsAggregatedListCall {
+-	c.opt_["maxResults"] = maxResults
+-	return c
+-}
+-
+-// PageToken sets the optional parameter "pageToken": Tag returned by a
+-// previous list request truncated by maxResults. Used to continue a
+-// previous list request.
+-func (c *TargetPoolsAggregatedListCall) PageToken(pageToken string) *TargetPoolsAggregatedListCall {
+-	c.opt_["pageToken"] = pageToken
+-	return c
+-}
+-
+-func (c *TargetPoolsAggregatedListCall) Do() (*TargetPoolAggregatedList, error) {
+-	var body io.Reader = nil
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	if v, ok := c.opt_["filter"]; ok {
+-		params.Set("filter", fmt.Sprintf("%v", v))
+-	}
+-	if v, ok := c.opt_["maxResults"]; ok {
+-		params.Set("maxResults", fmt.Sprintf("%v", v))
+-	}
+-	if v, ok := c.opt_["pageToken"]; ok {
+-		params.Set("pageToken", fmt.Sprintf("%v", v))
+-	}
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/aggregated/targetPools")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("GET", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project": c.project,
+-	})
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *TargetPoolAggregatedList
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Retrieves the list of target pools grouped by scope.",
+-	//   "httpMethod": "GET",
+-	//   "id": "compute.targetPools.aggregatedList",
+-	//   "parameterOrder": [
+-	//     "project"
+-	//   ],
+-	//   "parameters": {
+-	//     "filter": {
+-	//       "description": "Optional. Filter expression for filtering listed resources.",
+-	//       "location": "query",
+-	//       "type": "string"
+-	//     },
+-	//     "maxResults": {
+-	//       "default": "500",
+-	//       "description": "Optional. Maximum count of results to be returned. Maximum value is 500 and default value is 500.",
+-	//       "format": "uint32",
+-	//       "location": "query",
+-	//       "maximum": "500",
+-	//       "minimum": "0",
+-	//       "type": "integer"
+-	//     },
+-	//     "pageToken": {
+-	//       "description": "Optional. Tag returned by a previous list request truncated by maxResults. Used to continue a previous list request.",
+-	//       "location": "query",
+-	//       "type": "string"
+-	//     },
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/aggregated/targetPools",
+-	//   "response": {
+-	//     "$ref": "TargetPoolAggregatedList"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute",
+-	//     "https://www.googleapis.com/auth/compute.readonly"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.targetPools.delete":
+-
+-type TargetPoolsDeleteCall struct {
+-	s          *Service
+-	project    string
+-	region     string
+-	targetPool string
+-	opt_       map[string]interface{}
+-}
+-
+-// Delete: Deletes the specified TargetPool resource.
+-func (r *TargetPoolsService) Delete(project string, region string, targetPool string) *TargetPoolsDeleteCall {
+-	c := &TargetPoolsDeleteCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.region = region
+-	c.targetPool = targetPool
+-	return c
+-}
+-
+-func (c *TargetPoolsDeleteCall) Do() (*Operation, error) {
+-	var body io.Reader = nil
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/regions/{region}/targetPools/{targetPool}")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("DELETE", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project":    c.project,
+-		"region":     c.region,
+-		"targetPool": c.targetPool,
+-	})
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *Operation
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Deletes the specified TargetPool resource.",
+-	//   "httpMethod": "DELETE",
+-	//   "id": "compute.targetPools.delete",
+-	//   "parameterOrder": [
+-	//     "project",
+-	//     "region",
+-	//     "targetPool"
+-	//   ],
+-	//   "parameters": {
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "region": {
+-	//       "description": "Name of the region scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "targetPool": {
+-	//       "description": "Name of the TargetPool resource to delete.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/regions/{region}/targetPools/{targetPool}",
+-	//   "response": {
+-	//     "$ref": "Operation"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.targetPools.get":
+-
+-type TargetPoolsGetCall struct {
+-	s          *Service
+-	project    string
+-	region     string
+-	targetPool string
+-	opt_       map[string]interface{}
+-}
+-
+-// Get: Returns the specified TargetPool resource.
+-func (r *TargetPoolsService) Get(project string, region string, targetPool string) *TargetPoolsGetCall {
+-	c := &TargetPoolsGetCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.region = region
+-	c.targetPool = targetPool
+-	return c
+-}
+-
+-func (c *TargetPoolsGetCall) Do() (*TargetPool, error) {
+-	var body io.Reader = nil
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/regions/{region}/targetPools/{targetPool}")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("GET", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project":    c.project,
+-		"region":     c.region,
+-		"targetPool": c.targetPool,
+-	})
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *TargetPool
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Returns the specified TargetPool resource.",
+-	//   "httpMethod": "GET",
+-	//   "id": "compute.targetPools.get",
+-	//   "parameterOrder": [
+-	//     "project",
+-	//     "region",
+-	//     "targetPool"
+-	//   ],
+-	//   "parameters": {
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "region": {
+-	//       "description": "Name of the region scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "targetPool": {
+-	//       "description": "Name of the TargetPool resource to return.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/regions/{region}/targetPools/{targetPool}",
+-	//   "response": {
+-	//     "$ref": "TargetPool"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute",
+-	//     "https://www.googleapis.com/auth/compute.readonly"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.targetPools.getHealth":
+-
+-type TargetPoolsGetHealthCall struct {
+-	s                 *Service
+-	project           string
+-	region            string
+-	targetPool        string
+-	instancereference *InstanceReference
+-	opt_              map[string]interface{}
+-}
+-
+-// GetHealth: Gets the most recent health check results for each IP for
+-// the given instance that is referenced by given TargetPool.
+-func (r *TargetPoolsService) GetHealth(project string, region string, targetPool string, instancereference *InstanceReference) *TargetPoolsGetHealthCall {
+-	c := &TargetPoolsGetHealthCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.region = region
+-	c.targetPool = targetPool
+-	c.instancereference = instancereference
+-	return c
+-}
+-
+-func (c *TargetPoolsGetHealthCall) Do() (*TargetPoolInstanceHealth, error) {
+-	var body io.Reader = nil
+-	body, err := googleapi.WithoutDataWrapper.JSONReader(c.instancereference)
+-	if err != nil {
+-		return nil, err
+-	}
+-	ctype := "application/json"
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/regions/{region}/targetPools/{targetPool}/getHealth")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("POST", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project":    c.project,
+-		"region":     c.region,
+-		"targetPool": c.targetPool,
+-	})
+-	req.Header.Set("Content-Type", ctype)
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *TargetPoolInstanceHealth
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Gets the most recent health check results for each IP for the given instance that is referenced by given TargetPool.",
+-	//   "httpMethod": "POST",
+-	//   "id": "compute.targetPools.getHealth",
+-	//   "parameterOrder": [
+-	//     "project",
+-	//     "region",
+-	//     "targetPool"
+-	//   ],
+-	//   "parameters": {
+-	//     "project": {
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "region": {
+-	//       "description": "Name of the region scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "targetPool": {
+-	//       "description": "Name of the TargetPool resource to which the queried instance belongs.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/regions/{region}/targetPools/{targetPool}/getHealth",
+-	//   "request": {
+-	//     "$ref": "InstanceReference"
+-	//   },
+-	//   "response": {
+-	//     "$ref": "TargetPoolInstanceHealth"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute",
+-	//     "https://www.googleapis.com/auth/compute.readonly"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.targetPools.insert":
+-
+-type TargetPoolsInsertCall struct {
+-	s          *Service
+-	project    string
+-	region     string
+-	targetpool *TargetPool
+-	opt_       map[string]interface{}
+-}
+-
+-// Insert: Creates a TargetPool resource in the specified project and
+-// region using the data included in the request.
+-func (r *TargetPoolsService) Insert(project string, region string, targetpool *TargetPool) *TargetPoolsInsertCall {
+-	c := &TargetPoolsInsertCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.region = region
+-	c.targetpool = targetpool
+-	return c
+-}
+-
+-func (c *TargetPoolsInsertCall) Do() (*Operation, error) {
+-	var body io.Reader = nil
+-	body, err := googleapi.WithoutDataWrapper.JSONReader(c.targetpool)
+-	if err != nil {
+-		return nil, err
+-	}
+-	ctype := "application/json"
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/regions/{region}/targetPools")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("POST", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project": c.project,
+-		"region":  c.region,
+-	})
+-	req.Header.Set("Content-Type", ctype)
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *Operation
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Creates a TargetPool resource in the specified project and region using the data included in the request.",
+-	//   "httpMethod": "POST",
+-	//   "id": "compute.targetPools.insert",
+-	//   "parameterOrder": [
+-	//     "project",
+-	//     "region"
+-	//   ],
+-	//   "parameters": {
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "region": {
+-	//       "description": "Name of the region scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/regions/{region}/targetPools",
+-	//   "request": {
+-	//     "$ref": "TargetPool"
+-	//   },
+-	//   "response": {
+-	//     "$ref": "Operation"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.targetPools.list":
+-
+-type TargetPoolsListCall struct {
+-	s       *Service
+-	project string
+-	region  string
+-	opt_    map[string]interface{}
+-}
+-
+-// List: Retrieves the list of TargetPool resources available to the
+-// specified project and region.
+-func (r *TargetPoolsService) List(project string, region string) *TargetPoolsListCall {
+-	c := &TargetPoolsListCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.region = region
+-	return c
+-}
+-
+-// Filter sets the optional parameter "filter": Filter expression for
+-// filtering listed resources.
+-func (c *TargetPoolsListCall) Filter(filter string) *TargetPoolsListCall {
+-	c.opt_["filter"] = filter
+-	return c
+-}
+-
+-// MaxResults sets the optional parameter "maxResults": Maximum count of
+-// results to be returned. Maximum value is 500 and default value is
+-// 500.
+-func (c *TargetPoolsListCall) MaxResults(maxResults int64) *TargetPoolsListCall {
+-	c.opt_["maxResults"] = maxResults
+-	return c
+-}
+-
+-// PageToken sets the optional parameter "pageToken": Tag returned by a
+-// previous list request truncated by maxResults. Used to continue a
+-// previous list request.
+-func (c *TargetPoolsListCall) PageToken(pageToken string) *TargetPoolsListCall {
+-	c.opt_["pageToken"] = pageToken
+-	return c
+-}
+-
+-func (c *TargetPoolsListCall) Do() (*TargetPoolList, error) {
+-	var body io.Reader = nil
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	if v, ok := c.opt_["filter"]; ok {
+-		params.Set("filter", fmt.Sprintf("%v", v))
+-	}
+-	if v, ok := c.opt_["maxResults"]; ok {
+-		params.Set("maxResults", fmt.Sprintf("%v", v))
+-	}
+-	if v, ok := c.opt_["pageToken"]; ok {
+-		params.Set("pageToken", fmt.Sprintf("%v", v))
+-	}
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/regions/{region}/targetPools")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("GET", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project": c.project,
+-		"region":  c.region,
+-	})
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *TargetPoolList
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Retrieves the list of TargetPool resources available to the specified project and region.",
+-	//   "httpMethod": "GET",
+-	//   "id": "compute.targetPools.list",
+-	//   "parameterOrder": [
+-	//     "project",
+-	//     "region"
+-	//   ],
+-	//   "parameters": {
+-	//     "filter": {
+-	//       "description": "Optional. Filter expression for filtering listed resources.",
+-	//       "location": "query",
+-	//       "type": "string"
+-	//     },
+-	//     "maxResults": {
+-	//       "default": "500",
+-	//       "description": "Optional. Maximum count of results to be returned. Maximum value is 500 and default value is 500.",
+-	//       "format": "uint32",
+-	//       "location": "query",
+-	//       "maximum": "500",
+-	//       "minimum": "0",
+-	//       "type": "integer"
+-	//     },
+-	//     "pageToken": {
+-	//       "description": "Optional. Tag returned by a previous list request truncated by maxResults. Used to continue a previous list request.",
+-	//       "location": "query",
+-	//       "type": "string"
+-	//     },
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "region": {
+-	//       "description": "Name of the region scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/regions/{region}/targetPools",
+-	//   "response": {
+-	//     "$ref": "TargetPoolList"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute",
+-	//     "https://www.googleapis.com/auth/compute.readonly"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.targetPools.removeHealthCheck":
+-
+-type TargetPoolsRemoveHealthCheckCall struct {
+-	s                                   *Service
+-	project                             string
+-	region                              string
+-	targetPool                          string
+-	targetpoolsremovehealthcheckrequest *TargetPoolsRemoveHealthCheckRequest
+-	opt_                                map[string]interface{}
+-}
+-
+-// RemoveHealthCheck: Removes health check URL from targetPool.
+-func (r *TargetPoolsService) RemoveHealthCheck(project string, region string, targetPool string, targetpoolsremovehealthcheckrequest *TargetPoolsRemoveHealthCheckRequest) *TargetPoolsRemoveHealthCheckCall {
+-	c := &TargetPoolsRemoveHealthCheckCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.region = region
+-	c.targetPool = targetPool
+-	c.targetpoolsremovehealthcheckrequest = targetpoolsremovehealthcheckrequest
+-	return c
+-}
+-
+-func (c *TargetPoolsRemoveHealthCheckCall) Do() (*Operation, error) {
+-	var body io.Reader = nil
+-	body, err := googleapi.WithoutDataWrapper.JSONReader(c.targetpoolsremovehealthcheckrequest)
+-	if err != nil {
+-		return nil, err
+-	}
+-	ctype := "application/json"
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/regions/{region}/targetPools/{targetPool}/removeHealthCheck")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("POST", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project":    c.project,
+-		"region":     c.region,
+-		"targetPool": c.targetPool,
+-	})
+-	req.Header.Set("Content-Type", ctype)
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *Operation
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Removes health check URL from targetPool.",
+-	//   "httpMethod": "POST",
+-	//   "id": "compute.targetPools.removeHealthCheck",
+-	//   "parameterOrder": [
+-	//     "project",
+-	//     "region",
+-	//     "targetPool"
+-	//   ],
+-	//   "parameters": {
+-	//     "project": {
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "region": {
+-	//       "description": "Name of the region scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "targetPool": {
+-	//       "description": "Name of the TargetPool resource to which health_check_url is to be removed.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/regions/{region}/targetPools/{targetPool}/removeHealthCheck",
+-	//   "request": {
+-	//     "$ref": "TargetPoolsRemoveHealthCheckRequest"
+-	//   },
+-	//   "response": {
+-	//     "$ref": "Operation"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.targetPools.removeInstance":
+-
+-type TargetPoolsRemoveInstanceCall struct {
+-	s                                *Service
+-	project                          string
+-	region                           string
+-	targetPool                       string
+-	targetpoolsremoveinstancerequest *TargetPoolsRemoveInstanceRequest
+-	opt_                             map[string]interface{}
+-}
+-
+-// RemoveInstance: Removes instance URL from targetPool.
+-func (r *TargetPoolsService) RemoveInstance(project string, region string, targetPool string, targetpoolsremoveinstancerequest *TargetPoolsRemoveInstanceRequest) *TargetPoolsRemoveInstanceCall {
+-	c := &TargetPoolsRemoveInstanceCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.region = region
+-	c.targetPool = targetPool
+-	c.targetpoolsremoveinstancerequest = targetpoolsremoveinstancerequest
+-	return c
+-}
+-
+-func (c *TargetPoolsRemoveInstanceCall) Do() (*Operation, error) {
+-	var body io.Reader = nil
+-	body, err := googleapi.WithoutDataWrapper.JSONReader(c.targetpoolsremoveinstancerequest)
+-	if err != nil {
+-		return nil, err
+-	}
+-	ctype := "application/json"
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/regions/{region}/targetPools/{targetPool}/removeInstance")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("POST", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project":    c.project,
+-		"region":     c.region,
+-		"targetPool": c.targetPool,
+-	})
+-	req.Header.Set("Content-Type", ctype)
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *Operation
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Removes instance URL from targetPool.",
+-	//   "httpMethod": "POST",
+-	//   "id": "compute.targetPools.removeInstance",
+-	//   "parameterOrder": [
+-	//     "project",
+-	//     "region",
+-	//     "targetPool"
+-	//   ],
+-	//   "parameters": {
+-	//     "project": {
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "region": {
+-	//       "description": "Name of the region scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "targetPool": {
+-	//       "description": "Name of the TargetPool resource to which instance_url is to be removed.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/regions/{region}/targetPools/{targetPool}/removeInstance",
+-	//   "request": {
+-	//     "$ref": "TargetPoolsRemoveInstanceRequest"
+-	//   },
+-	//   "response": {
+-	//     "$ref": "Operation"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.targetPools.setBackup":
+-
+-type TargetPoolsSetBackupCall struct {
+-	s               *Service
+-	project         string
+-	region          string
+-	targetPool      string
+-	targetreference *TargetReference
+-	opt_            map[string]interface{}
+-}
+-
+-// SetBackup: Changes backup pool configurations.
+-func (r *TargetPoolsService) SetBackup(project string, region string, targetPool string, targetreference *TargetReference) *TargetPoolsSetBackupCall {
+-	c := &TargetPoolsSetBackupCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.region = region
+-	c.targetPool = targetPool
+-	c.targetreference = targetreference
+-	return c
+-}
+-
+-// FailoverRatio sets the optional parameter "failoverRatio": New
+-// failoverRatio value for the containing target pool.
+-func (c *TargetPoolsSetBackupCall) FailoverRatio(failoverRatio float64) *TargetPoolsSetBackupCall {
+-	c.opt_["failoverRatio"] = failoverRatio
+-	return c
+-}
+-
+-func (c *TargetPoolsSetBackupCall) Do() (*Operation, error) {
+-	var body io.Reader = nil
+-	body, err := googleapi.WithoutDataWrapper.JSONReader(c.targetreference)
+-	if err != nil {
+-		return nil, err
+-	}
+-	ctype := "application/json"
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	if v, ok := c.opt_["failoverRatio"]; ok {
+-		params.Set("failoverRatio", fmt.Sprintf("%v", v))
+-	}
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/regions/{region}/targetPools/{targetPool}/setBackup")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("POST", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project":    c.project,
+-		"region":     c.region,
+-		"targetPool": c.targetPool,
+-	})
+-	req.Header.Set("Content-Type", ctype)
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *Operation
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Changes backup pool configurations.",
+-	//   "httpMethod": "POST",
+-	//   "id": "compute.targetPools.setBackup",
+-	//   "parameterOrder": [
+-	//     "project",
+-	//     "region",
+-	//     "targetPool"
+-	//   ],
+-	//   "parameters": {
+-	//     "failoverRatio": {
+-	//       "description": "New failoverRatio value for the containing target pool.",
+-	//       "format": "float",
+-	//       "location": "query",
+-	//       "type": "number"
+-	//     },
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "region": {
+-	//       "description": "Name of the region scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "targetPool": {
+-	//       "description": "Name of the TargetPool resource for which the backup is to be set.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/regions/{region}/targetPools/{targetPool}/setBackup",
+-	//   "request": {
+-	//     "$ref": "TargetReference"
+-	//   },
+-	//   "response": {
+-	//     "$ref": "Operation"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.urlMaps.delete":
+-
+-type UrlMapsDeleteCall struct {
+-	s       *Service
+-	project string
+-	urlMap  string
+-	opt_    map[string]interface{}
+-}
+-
+-// Delete: Deletes the specified UrlMap resource.
+-func (r *UrlMapsService) Delete(project string, urlMap string) *UrlMapsDeleteCall {
+-	c := &UrlMapsDeleteCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.urlMap = urlMap
+-	return c
+-}
+-
+-func (c *UrlMapsDeleteCall) Do() (*Operation, error) {
+-	var body io.Reader = nil
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/global/urlMaps/{urlMap}")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("DELETE", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project": c.project,
+-		"urlMap":  c.urlMap,
+-	})
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *Operation
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Deletes the specified UrlMap resource.",
+-	//   "httpMethod": "DELETE",
+-	//   "id": "compute.urlMaps.delete",
+-	//   "parameterOrder": [
+-	//     "project",
+-	//     "urlMap"
+-	//   ],
+-	//   "parameters": {
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "urlMap": {
+-	//       "description": "Name of the UrlMap resource to delete.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/global/urlMaps/{urlMap}",
+-	//   "response": {
+-	//     "$ref": "Operation"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.urlMaps.get":
+-
+-type UrlMapsGetCall struct {
+-	s       *Service
+-	project string
+-	urlMap  string
+-	opt_    map[string]interface{}
+-}
+-
+-// Get: Returns the specified UrlMap resource.
+-func (r *UrlMapsService) Get(project string, urlMap string) *UrlMapsGetCall {
+-	c := &UrlMapsGetCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.urlMap = urlMap
+-	return c
+-}
+-
+-func (c *UrlMapsGetCall) Do() (*UrlMap, error) {
+-	var body io.Reader = nil
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/global/urlMaps/{urlMap}")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("GET", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project": c.project,
+-		"urlMap":  c.urlMap,
+-	})
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *UrlMap
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Returns the specified UrlMap resource.",
+-	//   "httpMethod": "GET",
+-	//   "id": "compute.urlMaps.get",
+-	//   "parameterOrder": [
+-	//     "project",
+-	//     "urlMap"
+-	//   ],
+-	//   "parameters": {
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "urlMap": {
+-	//       "description": "Name of the UrlMap resource to return.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/global/urlMaps/{urlMap}",
+-	//   "response": {
+-	//     "$ref": "UrlMap"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute",
+-	//     "https://www.googleapis.com/auth/compute.readonly"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.urlMaps.insert":
+-
+-type UrlMapsInsertCall struct {
+-	s       *Service
+-	project string
+-	urlmap  *UrlMap
+-	opt_    map[string]interface{}
+-}
+-
+-// Insert: Creates a UrlMap resource in the specified project using the
+-// data included in the request.
+-func (r *UrlMapsService) Insert(project string, urlmap *UrlMap) *UrlMapsInsertCall {
+-	c := &UrlMapsInsertCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.urlmap = urlmap
+-	return c
+-}
+-
+-func (c *UrlMapsInsertCall) Do() (*Operation, error) {
+-	var body io.Reader = nil
+-	body, err := googleapi.WithoutDataWrapper.JSONReader(c.urlmap)
+-	if err != nil {
+-		return nil, err
+-	}
+-	ctype := "application/json"
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/global/urlMaps")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("POST", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project": c.project,
+-	})
+-	req.Header.Set("Content-Type", ctype)
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *Operation
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Creates a UrlMap resource in the specified project using the data included in the request.",
+-	//   "httpMethod": "POST",
+-	//   "id": "compute.urlMaps.insert",
+-	//   "parameterOrder": [
+-	//     "project"
+-	//   ],
+-	//   "parameters": {
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/global/urlMaps",
+-	//   "request": {
+-	//     "$ref": "UrlMap"
+-	//   },
+-	//   "response": {
+-	//     "$ref": "Operation"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.urlMaps.list":
+-
+-type UrlMapsListCall struct {
+-	s       *Service
+-	project string
+-	opt_    map[string]interface{}
+-}
+-
+-// List: Retrieves the list of UrlMap resources available to the
+-// specified project.
+-func (r *UrlMapsService) List(project string) *UrlMapsListCall {
+-	c := &UrlMapsListCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	return c
+-}
+-
+-// Filter sets the optional parameter "filter": Filter expression for
+-// filtering listed resources.
+-func (c *UrlMapsListCall) Filter(filter string) *UrlMapsListCall {
+-	c.opt_["filter"] = filter
+-	return c
+-}
+-
+-// MaxResults sets the optional parameter "maxResults": Maximum count of
+-// results to be returned. Maximum value is 500 and default value is
+-// 500.
+-func (c *UrlMapsListCall) MaxResults(maxResults int64) *UrlMapsListCall {
+-	c.opt_["maxResults"] = maxResults
+-	return c
+-}
+-
+-// PageToken sets the optional parameter "pageToken": Tag returned by a
+-// previous list request truncated by maxResults. Used to continue a
+-// previous list request.
+-func (c *UrlMapsListCall) PageToken(pageToken string) *UrlMapsListCall {
+-	c.opt_["pageToken"] = pageToken
+-	return c
+-}
+-
+-func (c *UrlMapsListCall) Do() (*UrlMapList, error) {
+-	var body io.Reader = nil
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	if v, ok := c.opt_["filter"]; ok {
+-		params.Set("filter", fmt.Sprintf("%v", v))
+-	}
+-	if v, ok := c.opt_["maxResults"]; ok {
+-		params.Set("maxResults", fmt.Sprintf("%v", v))
+-	}
+-	if v, ok := c.opt_["pageToken"]; ok {
+-		params.Set("pageToken", fmt.Sprintf("%v", v))
+-	}
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/global/urlMaps")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("GET", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project": c.project,
+-	})
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *UrlMapList
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Retrieves the list of UrlMap resources available to the specified project.",
+-	//   "httpMethod": "GET",
+-	//   "id": "compute.urlMaps.list",
+-	//   "parameterOrder": [
+-	//     "project"
+-	//   ],
+-	//   "parameters": {
+-	//     "filter": {
+-	//       "description": "Optional. Filter expression for filtering listed resources.",
+-	//       "location": "query",
+-	//       "type": "string"
+-	//     },
+-	//     "maxResults": {
+-	//       "default": "500",
+-	//       "description": "Optional. Maximum count of results to be returned. Maximum value is 500 and default value is 500.",
+-	//       "format": "uint32",
+-	//       "location": "query",
+-	//       "maximum": "500",
+-	//       "minimum": "0",
+-	//       "type": "integer"
+-	//     },
+-	//     "pageToken": {
+-	//       "description": "Optional. Tag returned by a previous list request truncated by maxResults. Used to continue a previous list request.",
+-	//       "location": "query",
+-	//       "type": "string"
+-	//     },
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/global/urlMaps",
+-	//   "response": {
+-	//     "$ref": "UrlMapList"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute",
+-	//     "https://www.googleapis.com/auth/compute.readonly"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.urlMaps.patch":
+-
+-type UrlMapsPatchCall struct {
+-	s       *Service
+-	project string
+-	urlMap  string
+-	urlmap  *UrlMap
+-	opt_    map[string]interface{}
+-}
+-
+-// Patch: Update the entire content of the UrlMap resource. This method
+-// supports patch semantics.
+-func (r *UrlMapsService) Patch(project string, urlMap string, urlmap *UrlMap) *UrlMapsPatchCall {
+-	c := &UrlMapsPatchCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.urlMap = urlMap
+-	c.urlmap = urlmap
+-	return c
+-}
+-
+-func (c *UrlMapsPatchCall) Do() (*Operation, error) {
+-	var body io.Reader = nil
+-	body, err := googleapi.WithoutDataWrapper.JSONReader(c.urlmap)
+-	if err != nil {
+-		return nil, err
+-	}
+-	ctype := "application/json"
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/global/urlMaps/{urlMap}")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("PATCH", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project": c.project,
+-		"urlMap":  c.urlMap,
+-	})
+-	req.Header.Set("Content-Type", ctype)
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *Operation
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Update the entire content of the UrlMap resource. This method supports patch semantics.",
+-	//   "httpMethod": "PATCH",
+-	//   "id": "compute.urlMaps.patch",
+-	//   "parameterOrder": [
+-	//     "project",
+-	//     "urlMap"
+-	//   ],
+-	//   "parameters": {
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "urlMap": {
+-	//       "description": "Name of the UrlMap resource to update.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/global/urlMaps/{urlMap}",
+-	//   "request": {
+-	//     "$ref": "UrlMap"
+-	//   },
+-	//   "response": {
+-	//     "$ref": "Operation"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.urlMaps.update":
+-
+-type UrlMapsUpdateCall struct {
+-	s       *Service
+-	project string
+-	urlMap  string
+-	urlmap  *UrlMap
+-	opt_    map[string]interface{}
+-}
+-
+-// Update: Update the entire content of the UrlMap resource.
+-func (r *UrlMapsService) Update(project string, urlMap string, urlmap *UrlMap) *UrlMapsUpdateCall {
+-	c := &UrlMapsUpdateCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.urlMap = urlMap
+-	c.urlmap = urlmap
+-	return c
+-}
+-
+-func (c *UrlMapsUpdateCall) Do() (*Operation, error) {
+-	var body io.Reader = nil
+-	body, err := googleapi.WithoutDataWrapper.JSONReader(c.urlmap)
+-	if err != nil {
+-		return nil, err
+-	}
+-	ctype := "application/json"
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/global/urlMaps/{urlMap}")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("PUT", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project": c.project,
+-		"urlMap":  c.urlMap,
+-	})
+-	req.Header.Set("Content-Type", ctype)
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *Operation
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Update the entire content of the UrlMap resource.",
+-	//   "httpMethod": "PUT",
+-	//   "id": "compute.urlMaps.update",
+-	//   "parameterOrder": [
+-	//     "project",
+-	//     "urlMap"
+-	//   ],
+-	//   "parameters": {
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "urlMap": {
+-	//       "description": "Name of the UrlMap resource to update.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/global/urlMaps/{urlMap}",
+-	//   "request": {
+-	//     "$ref": "UrlMap"
+-	//   },
+-	//   "response": {
+-	//     "$ref": "Operation"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.urlMaps.validate":
+-
+-type UrlMapsValidateCall struct {
+-	s                      *Service
+-	project                string
+-	urlMap                 string
+-	urlmapsvalidaterequest *UrlMapsValidateRequest
+-	opt_                   map[string]interface{}
+-}
+-
+-// Validate: Run static validation for the UrlMap. In particular, the
+-// tests of the provided UrlMap will be run. Calling this method does
+-// NOT create the UrlMap.
+-func (r *UrlMapsService) Validate(project string, urlMap string, urlmapsvalidaterequest *UrlMapsValidateRequest) *UrlMapsValidateCall {
+-	c := &UrlMapsValidateCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.urlMap = urlMap
+-	c.urlmapsvalidaterequest = urlmapsvalidaterequest
+-	return c
+-}
+-
+-func (c *UrlMapsValidateCall) Do() (*UrlMapsValidateResponse, error) {
+-	var body io.Reader = nil
+-	body, err := googleapi.WithoutDataWrapper.JSONReader(c.urlmapsvalidaterequest)
+-	if err != nil {
+-		return nil, err
+-	}
+-	ctype := "application/json"
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/global/urlMaps/{urlMap}/validate")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("POST", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project": c.project,
+-		"urlMap":  c.urlMap,
+-	})
+-	req.Header.Set("Content-Type", ctype)
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *UrlMapsValidateResponse
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Run static validation for the UrlMap. In particular, the tests of the provided UrlMap will be run. Calling this method does NOT create the UrlMap.",
+-	//   "httpMethod": "POST",
+-	//   "id": "compute.urlMaps.validate",
+-	//   "parameterOrder": [
+-	//     "project",
+-	//     "urlMap"
+-	//   ],
+-	//   "parameters": {
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "urlMap": {
+-	//       "description": "Name of the UrlMap resource to be validated as.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/global/urlMaps/{urlMap}/validate",
+-	//   "request": {
+-	//     "$ref": "UrlMapsValidateRequest"
+-	//   },
+-	//   "response": {
+-	//     "$ref": "UrlMapsValidateResponse"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.zoneOperations.delete":
+-
+-type ZoneOperationsDeleteCall struct {
+-	s         *Service
+-	project   string
+-	zone      string
+-	operation string
+-	opt_      map[string]interface{}
+-}
+-
+-// Delete: Deletes the specified zone-specific operation resource.
+-func (r *ZoneOperationsService) Delete(project string, zone string, operation string) *ZoneOperationsDeleteCall {
+-	c := &ZoneOperationsDeleteCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.zone = zone
+-	c.operation = operation
+-	return c
+-}
+-
+-func (c *ZoneOperationsDeleteCall) Do() error {
+-	var body io.Reader = nil
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/zones/{zone}/operations/{operation}")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("DELETE", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project":   c.project,
+-		"zone":      c.zone,
+-		"operation": c.operation,
+-	})
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return err
+-	}
+-	return nil
+-	// {
+-	//   "description": "Deletes the specified zone-specific operation resource.",
+-	//   "httpMethod": "DELETE",
+-	//   "id": "compute.zoneOperations.delete",
+-	//   "parameterOrder": [
+-	//     "project",
+-	//     "zone",
+-	//     "operation"
+-	//   ],
+-	//   "parameters": {
+-	//     "operation": {
+-	//       "description": "Name of the operation resource to delete.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "zone": {
+-	//       "description": "Name of the zone scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/zones/{zone}/operations/{operation}",
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.zoneOperations.get":
+-
+-type ZoneOperationsGetCall struct {
+-	s         *Service
+-	project   string
+-	zone      string
+-	operation string
+-	opt_      map[string]interface{}
+-}
+-
+-// Get: Retrieves the specified zone-specific operation resource.
+-func (r *ZoneOperationsService) Get(project string, zone string, operation string) *ZoneOperationsGetCall {
+-	c := &ZoneOperationsGetCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.zone = zone
+-	c.operation = operation
+-	return c
+-}
+-
+-func (c *ZoneOperationsGetCall) Do() (*Operation, error) {
+-	var body io.Reader = nil
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/zones/{zone}/operations/{operation}")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("GET", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project":   c.project,
+-		"zone":      c.zone,
+-		"operation": c.operation,
+-	})
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *Operation
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Retrieves the specified zone-specific operation resource.",
+-	//   "httpMethod": "GET",
+-	//   "id": "compute.zoneOperations.get",
+-	//   "parameterOrder": [
+-	//     "project",
+-	//     "zone",
+-	//     "operation"
+-	//   ],
+-	//   "parameters": {
+-	//     "operation": {
+-	//       "description": "Name of the operation resource to return.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "zone": {
+-	//       "description": "Name of the zone scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/zones/{zone}/operations/{operation}",
+-	//   "response": {
+-	//     "$ref": "Operation"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute",
+-	//     "https://www.googleapis.com/auth/compute.readonly"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.zoneOperations.list":
+-
+-type ZoneOperationsListCall struct {
+-	s       *Service
+-	project string
+-	zone    string
+-	opt_    map[string]interface{}
+-}
+-
+-// List: Retrieves the list of operation resources contained within the
+-// specified zone.
+-func (r *ZoneOperationsService) List(project string, zone string) *ZoneOperationsListCall {
+-	c := &ZoneOperationsListCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.zone = zone
+-	return c
+-}
+-
+-// Filter sets the optional parameter "filter": Filter expression for
+-// filtering listed resources.
+-func (c *ZoneOperationsListCall) Filter(filter string) *ZoneOperationsListCall {
+-	c.opt_["filter"] = filter
+-	return c
+-}
+-
+-// MaxResults sets the optional parameter "maxResults": Maximum count of
+-// results to be returned. Maximum value is 500 and default value is
+-// 500.
+-func (c *ZoneOperationsListCall) MaxResults(maxResults int64) *ZoneOperationsListCall {
+-	c.opt_["maxResults"] = maxResults
+-	return c
+-}
+-
+-// PageToken sets the optional parameter "pageToken": Tag returned by a
+-// previous list request truncated by maxResults. Used to continue a
+-// previous list request.
+-func (c *ZoneOperationsListCall) PageToken(pageToken string) *ZoneOperationsListCall {
+-	c.opt_["pageToken"] = pageToken
+-	return c
+-}
+-
+-func (c *ZoneOperationsListCall) Do() (*OperationList, error) {
+-	var body io.Reader = nil
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	if v, ok := c.opt_["filter"]; ok {
+-		params.Set("filter", fmt.Sprintf("%v", v))
+-	}
+-	if v, ok := c.opt_["maxResults"]; ok {
+-		params.Set("maxResults", fmt.Sprintf("%v", v))
+-	}
+-	if v, ok := c.opt_["pageToken"]; ok {
+-		params.Set("pageToken", fmt.Sprintf("%v", v))
+-	}
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/zones/{zone}/operations")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("GET", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project": c.project,
+-		"zone":    c.zone,
+-	})
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *OperationList
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Retrieves the list of operation resources contained within the specified zone.",
+-	//   "httpMethod": "GET",
+-	//   "id": "compute.zoneOperations.list",
+-	//   "parameterOrder": [
+-	//     "project",
+-	//     "zone"
+-	//   ],
+-	//   "parameters": {
+-	//     "filter": {
+-	//       "description": "Optional. Filter expression for filtering listed resources.",
+-	//       "location": "query",
+-	//       "type": "string"
+-	//     },
+-	//     "maxResults": {
+-	//       "default": "500",
+-	//       "description": "Optional. Maximum count of results to be returned. Maximum value is 500 and default value is 500.",
+-	//       "format": "uint32",
+-	//       "location": "query",
+-	//       "maximum": "500",
+-	//       "minimum": "0",
+-	//       "type": "integer"
+-	//     },
+-	//     "pageToken": {
+-	//       "description": "Optional. Tag returned by a previous list request truncated by maxResults. Used to continue a previous list request.",
+-	//       "location": "query",
+-	//       "type": "string"
+-	//     },
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "zone": {
+-	//       "description": "Name of the zone scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/zones/{zone}/operations",
+-	//   "response": {
+-	//     "$ref": "OperationList"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute",
+-	//     "https://www.googleapis.com/auth/compute.readonly"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.zones.get":
+-
+-type ZonesGetCall struct {
+-	s       *Service
+-	project string
+-	zone    string
+-	opt_    map[string]interface{}
+-}
+-
+-// Get: Returns the specified zone resource.
+-func (r *ZonesService) Get(project string, zone string) *ZonesGetCall {
+-	c := &ZonesGetCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	c.zone = zone
+-	return c
+-}
+-
+-func (c *ZonesGetCall) Do() (*Zone, error) {
+-	var body io.Reader = nil
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/zones/{zone}")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("GET", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project": c.project,
+-		"zone":    c.zone,
+-	})
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *Zone
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Returns the specified zone resource.",
+-	//   "httpMethod": "GET",
+-	//   "id": "compute.zones.get",
+-	//   "parameterOrder": [
+-	//     "project",
+-	//     "zone"
+-	//   ],
+-	//   "parameters": {
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     },
+-	//     "zone": {
+-	//       "description": "Name of the zone resource to return.",
+-	//       "location": "path",
+-	//       "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/zones/{zone}",
+-	//   "response": {
+-	//     "$ref": "Zone"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute",
+-	//     "https://www.googleapis.com/auth/compute.readonly"
+-	//   ]
+-	// }
+-
+-}
+-
+-// method id "compute.zones.list":
+-
+-type ZonesListCall struct {
+-	s       *Service
+-	project string
+-	opt_    map[string]interface{}
+-}
+-
+-// List: Retrieves the list of zone resources available to the specified
+-// project.
+-func (r *ZonesService) List(project string) *ZonesListCall {
+-	c := &ZonesListCall{s: r.s, opt_: make(map[string]interface{})}
+-	c.project = project
+-	return c
+-}
+-
+-// Filter sets the optional parameter "filter": Filter expression for
+-// filtering listed resources.
+-func (c *ZonesListCall) Filter(filter string) *ZonesListCall {
+-	c.opt_["filter"] = filter
+-	return c
+-}
+-
+-// MaxResults sets the optional parameter "maxResults": Maximum count of
+-// results to be returned. Maximum value is 500 and default value is
+-// 500.
+-func (c *ZonesListCall) MaxResults(maxResults int64) *ZonesListCall {
+-	c.opt_["maxResults"] = maxResults
+-	return c
+-}
+-
+-// PageToken sets the optional parameter "pageToken": Tag returned by a
+-// previous list request truncated by maxResults. Used to continue a
+-// previous list request.
+-func (c *ZonesListCall) PageToken(pageToken string) *ZonesListCall {
+-	c.opt_["pageToken"] = pageToken
+-	return c
+-}
+-
+-func (c *ZonesListCall) Do() (*ZoneList, error) {
+-	var body io.Reader = nil
+-	params := make(url.Values)
+-	params.Set("alt", "json")
+-	if v, ok := c.opt_["filter"]; ok {
+-		params.Set("filter", fmt.Sprintf("%v", v))
+-	}
+-	if v, ok := c.opt_["maxResults"]; ok {
+-		params.Set("maxResults", fmt.Sprintf("%v", v))
+-	}
+-	if v, ok := c.opt_["pageToken"]; ok {
+-		params.Set("pageToken", fmt.Sprintf("%v", v))
+-	}
+-	urls := googleapi.ResolveRelative(c.s.BasePath, "{project}/zones")
+-	urls += "?" + params.Encode()
+-	req, _ := http.NewRequest("GET", urls, body)
+-	googleapi.Expand(req.URL, map[string]string{
+-		"project": c.project,
+-	})
+-	req.Header.Set("User-Agent", "google-api-go-client/0.5")
+-	res, err := c.s.client.Do(req)
+-	if err != nil {
+-		return nil, err
+-	}
+-	defer googleapi.CloseBody(res)
+-	if err := googleapi.CheckResponse(res); err != nil {
+-		return nil, err
+-	}
+-	var ret *ZoneList
+-	if err := json.NewDecoder(res.Body).Decode(&ret); err != nil {
+-		return nil, err
+-	}
+-	return ret, nil
+-	// {
+-	//   "description": "Retrieves the list of zone resources available to the specified project.",
+-	//   "httpMethod": "GET",
+-	//   "id": "compute.zones.list",
+-	//   "parameterOrder": [
+-	//     "project"
+-	//   ],
+-	//   "parameters": {
+-	//     "filter": {
+-	//       "description": "Optional. Filter expression for filtering listed resources.",
+-	//       "location": "query",
+-	//       "type": "string"
+-	//     },
+-	//     "maxResults": {
+-	//       "default": "500",
+-	//       "description": "Optional. Maximum count of results to be returned. Maximum value is 500 and default value is 500.",
+-	//       "format": "uint32",
+-	//       "location": "query",
+-	//       "maximum": "500",
+-	//       "minimum": "0",
+-	//       "type": "integer"
+-	//     },
+-	//     "pageToken": {
+-	//       "description": "Optional. Tag returned by a previous list request truncated by maxResults. Used to continue a previous list request.",
+-	//       "location": "query",
+-	//       "type": "string"
+-	//     },
+-	//     "project": {
+-	//       "description": "Name of the project scoping this request.",
+-	//       "location": "path",
+-	//       "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
+-	//       "required": true,
+-	//       "type": "string"
+-	//     }
+-	//   },
+-	//   "path": "{project}/zones",
+-	//   "response": {
+-	//     "$ref": "ZoneList"
+-	//   },
+-	//   "scopes": [
+-	//     "https://www.googleapis.com/auth/compute",
+-	//     "https://www.googleapis.com/auth/compute.readonly"
+-	//   ]
+-	// }
+-
+-}
+diff --git a/Godeps/_workspace/src/code.google.com/p/google-api-go-client/googleapi/googleapi.go b/Godeps/_workspace/src/code.google.com/p/google-api-go-client/googleapi/googleapi.go
+deleted file mode 100644
+index c3a1e80..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/google-api-go-client/googleapi/googleapi.go
++++ /dev/null
+@@ -1,377 +0,0 @@
+-// Copyright 2011 Google Inc. All rights reserved.
+-// Use of this source code is governed by a BSD-style
+-// license that can be found in the LICENSE file.
+-
+-// Package googleapi contains the common code shared by all Google API
+-// libraries.
+-package googleapi
+-
+-import (
+-	"bytes"
+-	"encoding/json"
+-	"fmt"
+-	"io"
+-	"io/ioutil"
+-	"mime/multipart"
+-	"net/http"
+-	"net/textproto"
+-	"net/url"
+-	"os"
+-	"regexp"
+-	"strings"
+-)
+-
+-// ContentTyper is an interface for Readers which know (or would like
+-// to override) their Content-Type. If a media body doesn't implement
+-// ContentTyper, the type is sniffed from the content using
+-// http.DetectContentType.
+-type ContentTyper interface {
+-	ContentType() string
+-}
+-
+-const Version = "0.5"
+-
+-// Error contains an error response from the server.
+-type Error struct {
+-	// Code is the HTTP response status code and will always be populated.
+-	Code int `json:"code"`
+-	// Message is the server response message and is only populated when
+-	// explicitly referenced by the JSON server response.
+-	Message string `json:"message"`
+-	// Body is the raw response returned by the server.
+-	// It is often but not always JSON, depending on how the request fails.
+-	Body string
+-
+-	Errors []ErrorItem
+-}
+-
+-// ErrorItem is a detailed error code & message from the Google API frontend.
+-type ErrorItem struct {
+-	// Reason is the typed error code. For example: "some_example".
+-	Reason string `json:"reason"`
+-	// Message is the human-readable description of the error.
+-	Message string `json:"message"`
+-}
+-
+-func (e *Error) Error() string {
+-	if len(e.Errors) == 0 && e.Message == "" {
+-		return fmt.Sprintf("googleapi: got HTTP response code %d with body: %v", e.Code, e.Body)
+-	}
+-	var buf bytes.Buffer
+-	fmt.Fprintf(&buf, "googleapi: Error %d: ", e.Code)
+-	if e.Message != "" {
+-		fmt.Fprintf(&buf, "%s", e.Message)
+-	}
+-	if len(e.Errors) == 0 {
+-		return strings.TrimSpace(buf.String())
+-	}
+-	if len(e.Errors) == 1 && e.Errors[0].Message == e.Message {
+-		fmt.Fprintf(&buf, ", %s", e.Errors[0].Reason)
+-		return buf.String()
+-	}
+-	fmt.Fprintln(&buf, "\nMore details:")
+-	for _, v := range e.Errors {
+-		fmt.Fprintf(&buf, "Reason: %s, Message: %s\n", v.Reason, v.Message)
+-	}
+-	return buf.String()
+-}
+-
+-type errorReply struct {
+-	Error *Error `json:"error"`
+-}
+-
+-// CheckResponse returns an error (of type *Error) if the response
+-// status code is not 2xx.
+-func CheckResponse(res *http.Response) error {
+-	if res.StatusCode >= 200 && res.StatusCode <= 299 {
+-		return nil
+-	}
+-	slurp, err := ioutil.ReadAll(res.Body)
+-	if err == nil {
+-		jerr := new(errorReply)
+-		err = json.Unmarshal(slurp, jerr)
+-		if err == nil && jerr.Error != nil {
+-			if jerr.Error.Code == 0 {
+-				jerr.Error.Code = res.StatusCode
+-			}
+-			jerr.Error.Body = string(slurp)
+-			return jerr.Error
+-		}
+-	}
+-	return &Error{
+-		Code: res.StatusCode,
+-		Body: string(slurp),
+-	}
+-}
+-
+-type MarshalStyle bool
+-
+-var WithDataWrapper = MarshalStyle(true)
+-var WithoutDataWrapper = MarshalStyle(false)
+-
+-func (wrap MarshalStyle) JSONReader(v interface{}) (io.Reader, error) {
+-	buf := new(bytes.Buffer)
+-	if wrap {
+-		buf.Write([]byte(`{"data": `))
+-	}
+-	err := json.NewEncoder(buf).Encode(v)
+-	if err != nil {
+-		return nil, err
+-	}
+-	if wrap {
+-		buf.Write([]byte(`}`))
+-	}
+-	return buf, nil
+-}
+-
+-func getMediaType(media io.Reader) (io.Reader, string) {
+-	if typer, ok := media.(ContentTyper); ok {
+-		return media, typer.ContentType()
+-	}
+-
+-	typ := "application/octet-stream"
+-	buf := make([]byte, 1024)
+-	n, err := media.Read(buf)
+-	buf = buf[:n]
+-	if err == nil {
+-		typ = http.DetectContentType(buf)
+-	}
+-	return io.MultiReader(bytes.NewBuffer(buf), media), typ
+-}
+-
+-type Lengther interface {
+-	Len() int
+-}
+-
+-// endingWithErrorReader from r until it returns an error.  If the
+-// final error from r is os.EOF and e is non-nil, e is used instead.
+-type endingWithErrorReader struct {
+-	r io.Reader
+-	e error
+-}
+-
+-func (er endingWithErrorReader) Read(p []byte) (n int, err error) {
+-	n, err = er.r.Read(p)
+-	if err == io.EOF && er.e != nil {
+-		err = er.e
+-	}
+-	return
+-}
+-
+-func getReaderSize(r io.Reader) (io.Reader, int64) {
+-	// Ideal case, the reader knows its own size.
+-	if lr, ok := r.(Lengther); ok {
+-		return r, int64(lr.Len())
+-	}
+-
+-	// But maybe it's a seeker and we can seek to the end to find its size.
+-	if s, ok := r.(io.Seeker); ok {
+-		pos0, err := s.Seek(0, os.SEEK_CUR)
+-		if err == nil {
+-			posend, err := s.Seek(0, os.SEEK_END)
+-			if err == nil {
+-				_, err = s.Seek(pos0, os.SEEK_SET)
+-				if err == nil {
+-					return r, posend - pos0
+-				} else {
+-					// We moved it forward but can't restore it.
+-					// Seems unlikely, but can't really restore now.
+-					return endingWithErrorReader{strings.NewReader(""), err}, posend - pos0
+-				}
+-			}
+-		}
+-	}
+-
+-	// Otherwise we have to make a copy to calculate how big the reader is.
+-	buf := new(bytes.Buffer)
+-	// TODO(bradfitz): put a cap on this copy? spill to disk after
+-	// a certain point?
+-	_, err := io.Copy(buf, r)
+-	return endingWithErrorReader{buf, err}, int64(buf.Len())
+-}
+-
+-func typeHeader(contentType string) textproto.MIMEHeader {
+-	h := make(textproto.MIMEHeader)
+-	h.Set("Content-Type", contentType)
+-	return h
+-}
+-
+-// countingWriter counts the number of bytes it receives to write, but
+-// discards them.
+-type countingWriter struct {
+-	n *int64
+-}
+-
+-func (w countingWriter) Write(p []byte) (int, error) {
+-	*w.n += int64(len(p))
+-	return len(p), nil
+-}
+-
+-// ConditionallyIncludeMedia does nothing if media is nil.
+-//
+-// bodyp is an in/out parameter.  It should initially point to the
+-// reader of the application/json (or whatever) payload to send in the
+-// API request.  It's updated to point to the multipart body reader.
+-//
+-// ctypep is an in/out parameter.  It should initially point to the
+-// content type of the bodyp, usually "application/json".  It's updated
+-// to the "multipart/related" content type, with random boundary.
+-//
+-// The return value is the content-length of the entire multpart body.
+-func ConditionallyIncludeMedia(media io.Reader, bodyp *io.Reader, ctypep *string) (totalContentLength int64, ok bool) {
+-	if media == nil {
+-		return
+-	}
+-	// Get the media type and size. The type check might return a
+-	// different reader instance, so do the size check first,
+-	// which looks at the specific type of the io.Reader.
+-	var mediaType string
+-	if typer, ok := media.(ContentTyper); ok {
+-		mediaType = typer.ContentType()
+-	}
+-	media, mediaSize := getReaderSize(media)
+-	if mediaType == "" {
+-		media, mediaType = getMediaType(media)
+-	}
+-	body, bodyType := *bodyp, *ctypep
+-	body, bodySize := getReaderSize(body)
+-
+-	// Calculate how big the the multipart will be.
+-	{
+-		totalContentLength = bodySize + mediaSize
+-		mpw := multipart.NewWriter(countingWriter{&totalContentLength})
+-		mpw.CreatePart(typeHeader(bodyType))
+-		mpw.CreatePart(typeHeader(mediaType))
+-		mpw.Close()
+-	}
+-
+-	pr, pw := io.Pipe()
+-	mpw := multipart.NewWriter(pw)
+-	*bodyp = pr
+-	*ctypep = "multipart/related; boundary=" + mpw.Boundary()
+-	go func() {
+-		defer pw.Close()
+-		defer mpw.Close()
+-
+-		w, err := mpw.CreatePart(typeHeader(bodyType))
+-		if err != nil {
+-			return
+-		}
+-		_, err = io.Copy(w, body)
+-		if err != nil {
+-			return
+-		}
+-
+-		w, err = mpw.CreatePart(typeHeader(mediaType))
+-		if err != nil {
+-			return
+-		}
+-		_, err = io.Copy(w, media)
+-		if err != nil {
+-			return
+-		}
+-	}()
+-	return totalContentLength, true
+-}
+-
+-func ResolveRelative(basestr, relstr string) string {
+-	u, _ := url.Parse(basestr)
+-	rel, _ := url.Parse(relstr)
+-	u = u.ResolveReference(rel)
+-	us := u.String()
+-	us = strings.Replace(us, "%7B", "{", -1)
+-	us = strings.Replace(us, "%7D", "}", -1)
+-	return us
+-}
+-
+-// has4860Fix is whether this Go environment contains the fix for
+-// http://golang.org/issue/4860
+-var has4860Fix bool
+-
+-// init initializes has4860Fix by checking the behavior of the net/http package.
+-func init() {
+-	r := http.Request{
+-		URL: &url.URL{
+-			Scheme: "http",
+-			Opaque: "//opaque",
+-		},
+-	}
+-	b := &bytes.Buffer{}
+-	r.Write(b)
+-	has4860Fix = bytes.HasPrefix(b.Bytes(), []byte("GET http"))
+-}
+-
+-// SetOpaque sets u.Opaque from u.Path such that HTTP requests to it
+-// don't alter any hex-escaped characters in u.Path.
+-func SetOpaque(u *url.URL) {
+-	u.Opaque = "//" + u.Host + u.Path
+-	if !has4860Fix {
+-		u.Opaque = u.Scheme + ":" + u.Opaque
+-	}
+-}
+-
+-// Find {encoded} strings
+-var findEncodedStrings = regexp.MustCompile(`(\{[A-Za-z_]+\})`)
+-
+-// Expand subsitutes any {encoded} strings in the URL passed in using
+-// the map supplied.
+-//
+-// This calls SetOpaque to avoid encoding of the parameters in the URL path.
+-func Expand(u *url.URL, expansions map[string]string) {
+-	u.Path = findEncodedStrings.ReplaceAllStringFunc(u.Path, func(replace string) string {
+-		argument := replace[1 : len(replace)-1]
+-		value, ok := expansions[argument]
+-		if !ok {
+-			// Expansion not found - leave unchanged
+-			return replace
+-		}
+-		// Would like to call url.escape(value, encodePath) here
+-		encodedValue := url.QueryEscape(value)
+-		encodedValue = strings.Replace(encodedValue, "+", "%20", -1)
+-		return encodedValue
+-	})
+-	SetOpaque(u)
+-}
+-
+-// CloseBody is used to close res.Body.
+-// Prior to calling Close, it also tries to Read a small amount to see an EOF.
+-// Not seeing an EOF can prevent HTTP Transports from reusing connections.
+-func CloseBody(res *http.Response) {
+-	if res == nil || res.Body == nil {
+-		return
+-	}
+-	// Justification for 3 byte reads: two for up to "\r\n" after
+-	// a JSON/XML document, and then 1 to see EOF if we haven't yet.
+-	// TODO(bradfitz): detect Go 1.3+ and skip these reads.
+-	// See https://codereview.appspot.com/58240043
+-	// and https://codereview.appspot.com/49570044
+-	buf := make([]byte, 1)
+-	for i := 0; i < 3; i++ {
+-		_, err := res.Body.Read(buf)
+-		if err != nil {
+-			break
+-		}
+-	}
+-	res.Body.Close()
+-
+-}
+-
+-// VariantType returns the type name of the given variant.
+-// If the map doesn't contain the named key or the value is not a []interface{}, "" is returned.
+-// This is used to support "variant" APIs that can return one of a number of different types.
+-func VariantType(t map[string]interface{}) string {
+-	s, _ := t["type"].(string)
+-	return s
+-}
+-
+-// ConvertVariant uses the JSON encoder/decoder to fill in the struct 'dst' with the fields found in variant 'v'.
+-// This is used to support "variant" APIs that can return one of a number of different types.
+-// It reports whether the conversion was successful.
+-func ConvertVariant(v map[string]interface{}, dst interface{}) bool {
+-	var buf bytes.Buffer
+-	err := json.NewEncoder(&buf).Encode(v)
+-	if err != nil {
+-		return false
+-	}
+-	return json.Unmarshal(buf.Bytes(), dst) == nil
+-}
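The Expand helper deleted above is easier to study outside the diff. The following is a minimal, self-contained sketch of the same `{name}`-expansion technique (the function name `expandPath` and the map values are hypothetical, not part of the patch): unknown placeholders are left untouched, and values are percent-escaped with `+` rewritten to `%20` so spaces survive in a path.

```go
package main

import (
	"fmt"
	"net/url"
	"regexp"
	"strings"
)

// placeholder matches {name} markers, mirroring findEncodedStrings above.
var placeholder = regexp.MustCompile(`\{[A-Za-z_]+\}`)

// expandPath substitutes {key} markers in path using the given map.
func expandPath(path string, vals map[string]string) string {
	return placeholder.ReplaceAllStringFunc(path, func(m string) string {
		v, ok := vals[m[1:len(m)-1]]
		if !ok {
			return m // unknown placeholder: leave unchanged
		}
		// QueryEscape encodes spaces as "+", which is wrong in a path,
		// so rewrite them to "%20" as the original code does.
		return strings.Replace(url.QueryEscape(v), "+", "%20", -1)
	})
}

func main() {
	fmt.Println(expandPath("/b/{bucket}/o/{object}", map[string]string{
		"bucket": "my files",
		"object": "a/b",
	}))
}
```

Note that, as in the original, a `{bucket` with no closing brace simply does not match the pattern and passes through unchanged.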
+diff --git a/Godeps/_workspace/src/code.google.com/p/google-api-go-client/googleapi/googleapi_test.go b/Godeps/_workspace/src/code.google.com/p/google-api-go-client/googleapi/googleapi_test.go
+deleted file mode 100644
+index d94e6c0..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/google-api-go-client/googleapi/googleapi_test.go
++++ /dev/null
+@@ -1,351 +0,0 @@
+-// Copyright 2011 Google Inc. All rights reserved.
+-// Use of this source code is governed by a BSD-style
+-// license that can be found in the LICENSE file.
+-
+-package googleapi
+-
+-import (
+-	"bytes"
+-	"encoding/json"
+-	"fmt"
+-	"io/ioutil"
+-	"net/http"
+-	"net/url"
+-	"reflect"
+-	"strings"
+-	"testing"
+-)
+-
+-type SetOpaqueTest struct {
+-	in             *url.URL
+-	wantRequestURI string
+-}
+-
+-var setOpaqueTests = []SetOpaqueTest{
+-	// no path
+-	{
+-		&url.URL{
+-			Scheme: "http",
+-			Host:   "www.golang.org",
+-		},
+-		"http://www.golang.org",
+-	},
+-	// path
+-	{
+-		&url.URL{
+-			Scheme: "http",
+-			Host:   "www.golang.org",
+-			Path:   "/",
+-		},
+-		"http://www.golang.org/",
+-	},
+-	// file with hex escaping
+-	{
+-		&url.URL{
+-			Scheme: "https",
+-			Host:   "www.golang.org",
+-			Path:   "/file%20one&two",
+-		},
+-		"https://www.golang.org/file%20one&two",
+-	},
+-	// query
+-	{
+-		&url.URL{
+-			Scheme:   "http",
+-			Host:     "www.golang.org",
+-			Path:     "/",
+-			RawQuery: "q=go+language",
+-		},
+-		"http://www.golang.org/?q=go+language",
+-	},
+-	// file with hex escaping in path plus query
+-	{
+-		&url.URL{
+-			Scheme:   "https",
+-			Host:     "www.golang.org",
+-			Path:     "/file%20one&two",
+-			RawQuery: "q=go+language",
+-		},
+-		"https://www.golang.org/file%20one&two?q=go+language",
+-	},
+-	// query with hex escaping
+-	{
+-		&url.URL{
+-			Scheme:   "http",
+-			Host:     "www.golang.org",
+-			Path:     "/",
+-			RawQuery: "q=go%20language",
+-		},
+-		"http://www.golang.org/?q=go%20language",
+-	},
+-}
+-
+-// prefixTmpl is a template for the expected prefix of the output of writing
+-// an HTTP request.
+-const prefixTmpl = "GET %v HTTP/1.1\r\nHost: %v\r\n"
+-
+-func TestSetOpaque(t *testing.T) {
+-	for _, test := range setOpaqueTests {
+-		u := *test.in
+-		SetOpaque(&u)
+-
+-		w := &bytes.Buffer{}
+-		r := &http.Request{URL: &u}
+-		if err := r.Write(w); err != nil {
+-			t.Errorf("write request: %v", err)
+-			continue
+-		}
+-
+-		prefix := fmt.Sprintf(prefixTmpl, test.wantRequestURI, test.in.Host)
+-		if got := string(w.Bytes()); !strings.HasPrefix(got, prefix) {
+-			t.Errorf("got %q expected prefix %q", got, prefix)
+-		}
+-	}
+-}
+-
+-type ExpandTest struct {
+-	in         string
+-	expansions map[string]string
+-	want       string
+-}
+-
+-var expandTests = []ExpandTest{
+-	// no expansions
+-	{
+-		"http://www.golang.org/",
+-		map[string]string{},
+-		"http://www.golang.org/",
+-	},
+-	// one expansion, no escaping
+-	{
+-		"http://www.golang.org/{bucket}/delete",
+-		map[string]string{
+-			"bucket": "red",
+-		},
+-		"http://www.golang.org/red/delete",
+-	},
+-	// one expansion, with hex escapes
+-	{
+-		"http://www.golang.org/{bucket}/delete",
+-		map[string]string{
+-			"bucket": "red/blue",
+-		},
+-		"http://www.golang.org/red%2Fblue/delete",
+-	},
+-	// one expansion, with space
+-	{
+-		"http://www.golang.org/{bucket}/delete",
+-		map[string]string{
+-			"bucket": "red or blue",
+-		},
+-		"http://www.golang.org/red%20or%20blue/delete",
+-	},
+-	// expansion not found
+-	{
+-		"http://www.golang.org/{object}/delete",
+-		map[string]string{
+-			"bucket": "red or blue",
+-		},
+-		"http://www.golang.org/{object}/delete",
+-	},
+-	// multiple expansions
+-	{
+-		"http://www.golang.org/{one}/{two}/{three}/get",
+-		map[string]string{
+-			"one":   "ONE",
+-			"two":   "TWO",
+-			"three": "THREE",
+-		},
+-		"http://www.golang.org/ONE/TWO/THREE/get",
+-	},
+-	// utf-8 characters
+-	{
+-		"http://www.golang.org/{bucket}/get",
+-		map[string]string{
+-			"bucket": "£100",
+-		},
+-		"http://www.golang.org/%C2%A3100/get",
+-	},
+-	// punctuations
+-	{
+-		"http://www.golang.org/{bucket}/get",
+-		map[string]string{
+-			"bucket": `/\@:,.`,
+-		},
+-		"http://www.golang.org/%2F%5C%40%3A%2C./get",
+-	},
+-	// mis-matched brackets
+-	{
+-		"http://www.golang.org/{bucket/get",
+-		map[string]string{
+-			"bucket": "red",
+-		},
+-		"http://www.golang.org/{bucket/get",
+-	},
+-}
+-
+-func TestExpand(t *testing.T) {
+-	for i, test := range expandTests {
+-		u := url.URL{
+-			Path: test.in,
+-		}
+-		Expand(&u, test.expansions)
+-		got := u.Path
+-		if got != test.want {
+-			t.Errorf("got %q expected %q in test %d", got, test.want, i+1)
+-		}
+-	}
+-}
+-
+-type CheckResponseTest struct {
+-	in       *http.Response
+-	bodyText string
+-	want     error
+-	errText  string
+-}
+-
+-var checkResponseTests = []CheckResponseTest{
+-	{
+-		&http.Response{
+-			StatusCode: http.StatusOK,
+-		},
+-		"",
+-		nil,
+-		"",
+-	},
+-	{
+-		&http.Response{
+-			StatusCode: http.StatusInternalServerError,
+-		},
+-		`{"error":{}}`,
+-		&Error{
+-			Code: http.StatusInternalServerError,
+-			Body: `{"error":{}}`,
+-		},
+-		`googleapi: got HTTP response code 500 with body: {"error":{}}`,
+-	},
+-	{
+-		&http.Response{
+-			StatusCode: http.StatusNotFound,
+-		},
+-		`{"error":{"message":"Error message for StatusNotFound."}}`,
+-		&Error{
+-			Code:    http.StatusNotFound,
+-			Message: "Error message for StatusNotFound.",
+-			Body:    `{"error":{"message":"Error message for StatusNotFound."}}`,
+-		},
+-		"googleapi: Error 404: Error message for StatusNotFound.",
+-	},
+-	{
+-		&http.Response{
+-			StatusCode: http.StatusBadRequest,
+-		},
+-		`{"error":"invalid_token","error_description":"Invalid Value"}`,
+-		&Error{
+-			Code: http.StatusBadRequest,
+-			Body: `{"error":"invalid_token","error_description":"Invalid Value"}`,
+-		},
+-		`googleapi: got HTTP response code 400 with body: {"error":"invalid_token","error_description":"Invalid Value"}`,
+-	},
+-	{
+-		&http.Response{
+-			StatusCode: http.StatusBadRequest,
+-		},
+-		`{"error":{"errors":[{"domain":"usageLimits","reason":"keyInvalid","message":"Bad Request"}],"code":400,"message":"Bad Request"}}`,
+-		&Error{
+-			Code: http.StatusBadRequest,
+-			Errors: []ErrorItem{
+-				{
+-					Reason:  "keyInvalid",
+-					Message: "Bad Request",
+-				},
+-			},
+-			Body:    `{"error":{"errors":[{"domain":"usageLimits","reason":"keyInvalid","message":"Bad Request"}],"code":400,"message":"Bad Request"}}`,
+-			Message: "Bad Request",
+-		},
+-		"googleapi: Error 400: Bad Request, keyInvalid",
+-	},
+-}
+-
+-func TestCheckResponse(t *testing.T) {
+-	for _, test := range checkResponseTests {
+-		res := test.in
+-		if test.bodyText != "" {
+-			res.Body = ioutil.NopCloser(strings.NewReader(test.bodyText))
+-		}
+-		g := CheckResponse(res)
+-		if !reflect.DeepEqual(g, test.want) {
+-			t.Errorf("CheckResponse: got %v, want %v", g, test.want)
+-			gotJson, err := json.Marshal(g)
+-			if err != nil {
+-				t.Error(err)
+-			}
+-			wantJson, err := json.Marshal(test.want)
+-			if err != nil {
+-				t.Error(err)
+-			}
+-			t.Errorf("json(got):  %q\njson(want): %q", string(gotJson), string(wantJson))
+-		}
+-		if g != nil && g.Error() != test.errText {
+-			t.Errorf("CheckResponse: unexpected error message.\nGot:  %q\nwant: %q", g, test.errText)
+-		}
+-	}
+-}
+-
+-type VariantPoint struct {
+-	Type        string
+-	Coordinates []float64
+-}
+-
+-type VariantTest struct {
+-	in     map[string]interface{}
+-	result bool
+-	want   VariantPoint
+-}
+-
+-var coords = []interface{}{1.0, 2.0}
+-
+-var variantTests = []VariantTest{
+-	{
+-		in: map[string]interface{}{
+-			"type":        "Point",
+-			"coordinates": coords,
+-		},
+-		result: true,
+-		want: VariantPoint{
+-			Type:        "Point",
+-			Coordinates: []float64{1.0, 2.0},
+-		},
+-	},
+-	{
+-		in: map[string]interface{}{
+-			"type":  "Point",
+-			"bogus": coords,
+-		},
+-		result: true,
+-		want: VariantPoint{
+-			Type: "Point",
+-		},
+-	},
+-}
+-
+-func TestVariantType(t *testing.T) {
+-	for _, test := range variantTests {
+-		if g := VariantType(test.in); g != test.want.Type {
+-			t.Errorf("VariantType(%v): got %v, want %v", test.in, g, test.want.Type)
+-		}
+-	}
+-}
+-
+-func TestConvertVariant(t *testing.T) {
+-	for _, test := range variantTests {
+-		g := VariantPoint{}
+-		r := ConvertVariant(test.in, &g)
+-		if r != test.result {
+-			t.Errorf("ConvertVariant(%v): got %v, want %v", test.in, r, test.result)
+-		}
+-		if !reflect.DeepEqual(g, test.want) {
+-			t.Errorf("ConvertVariant(%v): got %v, want %v", test.in, g, test.want)
+-		}
+-	}
+-}
+diff --git a/Godeps/_workspace/src/code.google.com/p/google-api-go-client/googleapi/transport/apikey.go b/Godeps/_workspace/src/code.google.com/p/google-api-go-client/googleapi/transport/apikey.go
+deleted file mode 100644
+index eca1ea2..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/google-api-go-client/googleapi/transport/apikey.go
++++ /dev/null
+@@ -1,38 +0,0 @@
+-// Copyright 2012 Google Inc. All rights reserved.
+-// Use of this source code is governed by a BSD-style
+-// license that can be found in the LICENSE file.
+-
+-// Package transport contains HTTP transports used to make
+-// authenticated API requests.
+-package transport
+-
+-import (
+-	"errors"
+-	"net/http"
+-)
+-
+-// APIKey is an HTTP Transport which wraps an underlying transport and
+-// appends an API Key "key" parameter to the URL of outgoing requests.
+-type APIKey struct {
+-	// Key is the API Key to set on requests.
+-	Key string
+-
+-	// Transport is the underlying HTTP transport.
+-	// If nil, http.DefaultTransport is used.
+-	Transport http.RoundTripper
+-}
+-
+-func (t *APIKey) RoundTrip(req *http.Request) (*http.Response, error) {
+-	rt := t.Transport
+-	if rt == nil {
+-		rt = http.DefaultTransport
+-		if rt == nil {
+-			return nil, errors.New("googleapi/transport: no Transport specified or available")
+-		}
+-	}
+-	newReq := *req
+-	args := newReq.URL.Query()
+-	args.Set("key", t.Key)
+-	newReq.URL.RawQuery = args.Encode()
+-	return rt.RoundTrip(&newReq)
+-}
+diff --git a/Godeps/_workspace/src/code.google.com/p/google-api-go-client/googleapi/types.go b/Godeps/_workspace/src/code.google.com/p/google-api-go-client/googleapi/types.go
+deleted file mode 100644
+index 7ed7dd9..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/google-api-go-client/googleapi/types.go
++++ /dev/null
+@@ -1,150 +0,0 @@
+-// Copyright 2013 Google Inc. All rights reserved.
+-// Use of this source code is governed by a BSD-style
+-// license that can be found in the LICENSE file.
+-
+-package googleapi
+-
+-import (
+-	"encoding/json"
+-	"strconv"
+-)
+-
+-// Int64s is a slice of int64s that marshal as quoted strings in JSON.
+-type Int64s []int64
+-
+-func (q *Int64s) UnmarshalJSON(raw []byte) error {
+-	*q = (*q)[:0]
+-	var ss []string
+-	if err := json.Unmarshal(raw, &ss); err != nil {
+-		return err
+-	}
+-	for _, s := range ss {
+-		v, err := strconv.ParseInt(s, 10, 64)
+-		if err != nil {
+-			return err
+-		}
+-		*q = append(*q, int64(v))
+-	}
+-	return nil
+-}
+-
+-// Int32s is a slice of int32s that marshal as quoted strings in JSON.
+-type Int32s []int32
+-
+-func (q *Int32s) UnmarshalJSON(raw []byte) error {
+-	*q = (*q)[:0]
+-	var ss []string
+-	if err := json.Unmarshal(raw, &ss); err != nil {
+-		return err
+-	}
+-	for _, s := range ss {
+-		v, err := strconv.ParseInt(s, 10, 32)
+-		if err != nil {
+-			return err
+-		}
+-		*q = append(*q, int32(v))
+-	}
+-	return nil
+-}
+-
+-// Uint64s is a slice of uint64s that marshal as quoted strings in JSON.
+-type Uint64s []uint64
+-
+-func (q *Uint64s) UnmarshalJSON(raw []byte) error {
+-	*q = (*q)[:0]
+-	var ss []string
+-	if err := json.Unmarshal(raw, &ss); err != nil {
+-		return err
+-	}
+-	for _, s := range ss {
+-		v, err := strconv.ParseUint(s, 10, 64)
+-		if err != nil {
+-			return err
+-		}
+-		*q = append(*q, uint64(v))
+-	}
+-	return nil
+-}
+-
+-// Uint32s is a slice of uint32s that marshal as quoted strings in JSON.
+-type Uint32s []uint32
+-
+-func (q *Uint32s) UnmarshalJSON(raw []byte) error {
+-	*q = (*q)[:0]
+-	var ss []string
+-	if err := json.Unmarshal(raw, &ss); err != nil {
+-		return err
+-	}
+-	for _, s := range ss {
+-		v, err := strconv.ParseUint(s, 10, 32)
+-		if err != nil {
+-			return err
+-		}
+-		*q = append(*q, uint32(v))
+-	}
+-	return nil
+-}
+-
+-// Float64s is a slice of float64s that marshal as quoted strings in JSON.
+-type Float64s []float64
+-
+-func (q *Float64s) UnmarshalJSON(raw []byte) error {
+-	*q = (*q)[:0]
+-	var ss []string
+-	if err := json.Unmarshal(raw, &ss); err != nil {
+-		return err
+-	}
+-	for _, s := range ss {
+-		v, err := strconv.ParseFloat(s, 64)
+-		if err != nil {
+-			return err
+-		}
+-		*q = append(*q, float64(v))
+-	}
+-	return nil
+-}
+-
+-func quotedList(n int, fn func(dst []byte, i int) []byte) ([]byte, error) {
+-	dst := make([]byte, 0, 2+n*10) // somewhat arbitrary
+-	dst = append(dst, '[')
+-	for i := 0; i < n; i++ {
+-		if i > 0 {
+-			dst = append(dst, ',')
+-		}
+-		dst = append(dst, '"')
+-		dst = fn(dst, i)
+-		dst = append(dst, '"')
+-	}
+-	dst = append(dst, ']')
+-	return dst, nil
+-}
+-
+-func (s Int64s) MarshalJSON() ([]byte, error) {
+-	return quotedList(len(s), func(dst []byte, i int) []byte {
+-		return strconv.AppendInt(dst, s[i], 10)
+-	})
+-}
+-
+-func (s Int32s) MarshalJSON() ([]byte, error) {
+-	return quotedList(len(s), func(dst []byte, i int) []byte {
+-		return strconv.AppendInt(dst, int64(s[i]), 10)
+-	})
+-}
+-
+-func (s Uint64s) MarshalJSON() ([]byte, error) {
+-	return quotedList(len(s), func(dst []byte, i int) []byte {
+-		return strconv.AppendUint(dst, s[i], 10)
+-	})
+-}
+-
+-func (s Uint32s) MarshalJSON() ([]byte, error) {
+-	return quotedList(len(s), func(dst []byte, i int) []byte {
+-		return strconv.AppendUint(dst, uint64(s[i]), 10)
+-	})
+-}
+-
+-func (s Float64s) MarshalJSON() ([]byte, error) {
+-	return quotedList(len(s), func(dst []byte, i int) []byte {
+-		return strconv.AppendFloat(dst, s[i], 'g', -1, 64)
+-	})
+-}
+diff --git a/Godeps/_workspace/src/code.google.com/p/google-api-go-client/googleapi/types_test.go b/Godeps/_workspace/src/code.google.com/p/google-api-go-client/googleapi/types_test.go
+deleted file mode 100644
+index a6b2045..0000000
+--- a/Godeps/_workspace/src/code.google.com/p/google-api-go-client/googleapi/types_test.go
++++ /dev/null
+@@ -1,44 +0,0 @@
+-// Copyright 2013 Google Inc. All rights reserved.
+-// Use of this source code is governed by a BSD-style
+-// license that can be found in the LICENSE file.
+-
+-package googleapi
+-
+-import (
+-	"encoding/json"
+-	"reflect"
+-	"testing"
+-)
+-
+-func TestTypes(t *testing.T) {
+-	type T struct {
+-		I32 Int32s
+-		I64 Int64s
+-		U32 Uint32s
+-		U64 Uint64s
+-		F64 Float64s
+-	}
+-	v := &T{
+-		I32: Int32s{-1, 2, 3},
+-		I64: Int64s{-1, 2, 1 << 33},
+-		U32: Uint32s{1, 2},
+-		U64: Uint64s{1, 2, 1 << 33},
+-		F64: Float64s{1.5, 3.33},
+-	}
+-	got, err := json.Marshal(v)
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-	want := `{"I32":["-1","2","3"],"I64":["-1","2","8589934592"],"U32":["1","2"],"U64":["1","2","8589934592"],"F64":["1.5","3.33"]}`
+-	if string(got) != want {
+-		t.Fatalf("Marshal mismatch.\n got: %s\nwant: %s\n", got, want)
+-	}
+-
+-	v2 := new(T)
+-	if err := json.Unmarshal(got, v2); err != nil {
+-		t.Fatalf("Unmarshal: %v", err)
+-	}
+-	if !reflect.DeepEqual(v, v2) {
+-		t.Fatalf("Unmarshal didn't produce same results.\n got: %#v\nwant: %#v\n", v, v2)
+-	}
+-}
+diff --git a/Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/add_child.go b/Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/add_child.go
+deleted file mode 100644
+index 7122be0..0000000
+--- a/Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/add_child.go
++++ /dev/null
+@@ -1,23 +0,0 @@
+-package etcd
+-
+-// Add a new directory with a random etcd-generated key under the given path.
+-func (c *Client) AddChildDir(key string, ttl uint64) (*Response, error) {
+-	raw, err := c.post(key, "", ttl)
+-
+-	if err != nil {
+-		return nil, err
+-	}
+-
+-	return raw.Unmarshal()
+-}
+-
+-// Add a new file with a random etcd-generated key under the given path.
+-func (c *Client) AddChild(key string, value string, ttl uint64) (*Response, error) {
+-	raw, err := c.post(key, value, ttl)
+-
+-	if err != nil {
+-		return nil, err
+-	}
+-
+-	return raw.Unmarshal()
+-}
+diff --git a/Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/add_child_test.go b/Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/add_child_test.go
+deleted file mode 100644
+index 26223ff..0000000
+--- a/Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/add_child_test.go
++++ /dev/null
+@@ -1,73 +0,0 @@
+-package etcd
+-
+-import "testing"
+-
+-func TestAddChild(t *testing.T) {
+-	c := NewClient(nil)
+-	defer func() {
+-		c.Delete("fooDir", true)
+-		c.Delete("nonexistentDir", true)
+-	}()
+-
+-	c.CreateDir("fooDir", 5)
+-
+-	_, err := c.AddChild("fooDir", "v0", 5)
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-
+-	_, err = c.AddChild("fooDir", "v1", 5)
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-
+-	resp, err := c.Get("fooDir", true, false)
+-	// The child with v0 should precede the child with v1 because it's added
+-	// earlier, so it should have a lower key.
+-	if !(len(resp.Node.Nodes) == 2 && (resp.Node.Nodes[0].Value == "v0" && resp.Node.Nodes[1].Value == "v1")) {
+-		t.Fatalf("AddChild 1 failed.  There should be two children whose values are v0 and v1, respectively."+
+-			"  The response was: %#v", resp)
+-	}
+-
+-	// Creating a child under a nonexistent directory should succeed.
+-	// The directory should be created.
+-	resp, err = c.AddChild("nonexistentDir", "foo", 5)
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-}
+-
+-func TestAddChildDir(t *testing.T) {
+-	c := NewClient(nil)
+-	defer func() {
+-		c.Delete("fooDir", true)
+-		c.Delete("nonexistentDir", true)
+-	}()
+-
+-	c.CreateDir("fooDir", 5)
+-
+-	_, err := c.AddChildDir("fooDir", 5)
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-
+-	_, err = c.AddChildDir("fooDir", 5)
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-
+-	resp, err := c.Get("fooDir", true, false)
+-	// Both children were added with AddChildDir, so each should be an
+-	// empty directory with no child nodes of its own.
+-	if !(len(resp.Node.Nodes) == 2 && (len(resp.Node.Nodes[0].Nodes) == 0 && len(resp.Node.Nodes[1].Nodes) == 0)) {
+-		t.Fatalf("AddChildDir 1 failed.  There should be two children, each an empty directory."+
+-			"  The response was: %#v", resp)
+-	}
+-
+-	// Creating a child under a nonexistent directory should succeed.
+-	// The directory should be created.
+-	resp, err = c.AddChildDir("nonexistentDir", 5)
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-}
+diff --git a/Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/client.go b/Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/client.go
+deleted file mode 100644
+index f6ae548..0000000
+--- a/Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/client.go
++++ /dev/null
+@@ -1,435 +0,0 @@
+-package etcd
+-
+-import (
+-	"crypto/tls"
+-	"crypto/x509"
+-	"encoding/json"
+-	"errors"
+-	"io"
+-	"io/ioutil"
+-	"net"
+-	"net/http"
+-	"net/url"
+-	"os"
+-	"path"
+-	"time"
+-)
+-
+-// See SetConsistency for how to use these constants.
+-const (
+-	// Using strings rather than iota because the consistency level
+-	// could be persisted to disk, so it'd be better to use
+-	// human-readable values.
+-	STRONG_CONSISTENCY = "STRONG"
+-	WEAK_CONSISTENCY   = "WEAK"
+-)
+-
+-const (
+-	defaultBufferSize = 10
+-)
+-
+-type Config struct {
+-	CertFile    string        `json:"certFile"`
+-	KeyFile     string        `json:"keyFile"`
+-	CaCertFile  []string      `json:"caCertFiles"`
+-	DialTimeout time.Duration `json:"timeout"`
+-	Consistency string        `json:"consistency"`
+-}
+-
+-type Client struct {
+-	config      Config   `json:"config"`
+-	cluster     *Cluster `json:"cluster"`
+-	httpClient  *http.Client
+-	persistence io.Writer
+-	cURLch      chan string
+-	// CheckRetry can be used to control the policy for failed requests
+-	// and modify the cluster if needed.
+-	// The client calls it before sending requests again, and
+-	// stops retrying if CheckRetry returns some error. The cases that
+-	// this function needs to handle include no response and unexpected
+-	// http status code of response.
+-	// If CheckRetry is nil, client will call the default one
+-	// `DefaultCheckRetry`.
+-	// Argument cluster is the etcd.Cluster object that these requests have been made on.
+-	// Argument numReqs is the number of http.Requests that have been made so far.
+-	// Argument lastResp is the http.Responses from the last request.
+-	// Argument err is the reason of the failure.
+-	CheckRetry func(cluster *Cluster, numReqs int,
+-		lastResp http.Response, err error) error
+-}
+-
+-// NewClient creates a basic client that is configured to be used
+-// with the given machine list.
+-func NewClient(machines []string) *Client {
+-	config := Config{
+-		// default timeout is one second
+-		DialTimeout: time.Second,
+-		// default consistency level is STRONG
+-		Consistency: STRONG_CONSISTENCY,
+-	}
+-
+-	client := &Client{
+-		cluster: NewCluster(machines),
+-		config:  config,
+-	}
+-
+-	client.initHTTPClient()
+-	client.saveConfig()
+-
+-	return client
+-}
+-
+-// NewTLSClient creates a basic client with TLS configuration
+-func NewTLSClient(machines []string, cert, key, caCert string) (*Client, error) {
+-	// overwrite the default machine to use https
+-	if len(machines) == 0 {
+-		machines = []string{"https://127.0.0.1:4001"}
+-	}
+-
+-	config := Config{
+-		// default timeout is one second
+-		DialTimeout: time.Second,
+-		// default consistency level is STRONG
+-		Consistency: STRONG_CONSISTENCY,
+-		CertFile:    cert,
+-		KeyFile:     key,
+-		CaCertFile:  make([]string, 0),
+-	}
+-
+-	client := &Client{
+-		cluster: NewCluster(machines),
+-		config:  config,
+-	}
+-
+-	err := client.initHTTPSClient(cert, key)
+-	if err != nil {
+-		return nil, err
+-	}
+-
+-	err = client.AddRootCA(caCert)
+-
+-	client.saveConfig()
+-
+-	return client, nil
+-}
+-
+-// NewClientFromFile creates a client from a given file path.
+-// The given file is expected to use the JSON format.
+-func NewClientFromFile(fpath string) (*Client, error) {
+-	fi, err := os.Open(fpath)
+-	if err != nil {
+-		return nil, err
+-	}
+-
+-	defer func() {
+-		if err := fi.Close(); err != nil {
+-			panic(err)
+-		}
+-	}()
+-
+-	return NewClientFromReader(fi)
+-}
+-
+-// NewClientFromReader creates a Client configured from a given reader.
+-// The configuration is expected to use the JSON format.
+-func NewClientFromReader(reader io.Reader) (*Client, error) {
+-	c := new(Client)
+-
+-	b, err := ioutil.ReadAll(reader)
+-	if err != nil {
+-		return nil, err
+-	}
+-
+-	err = json.Unmarshal(b, c)
+-	if err != nil {
+-		return nil, err
+-	}
+-	if c.config.CertFile == "" {
+-		c.initHTTPClient()
+-	} else {
+-		err = c.initHTTPSClient(c.config.CertFile, c.config.KeyFile)
+-	}
+-
+-	if err != nil {
+-		return nil, err
+-	}
+-
+-	for _, caCert := range c.config.CaCertFile {
+-		if err := c.AddRootCA(caCert); err != nil {
+-			return nil, err
+-		}
+-	}
+-
+-	return c, nil
+-}
+-
+-// SetTransport overrides the Client's HTTP Transport object
+-func (c *Client) SetTransport(tr *http.Transport) {
+-	c.httpClient.Transport = tr
+-}
+-
+-// initHTTPClient initializes an HTTP client for the etcd client
+-func (c *Client) initHTTPClient() {
+-	tr := &http.Transport{
+-		Dial: c.dial,
+-		TLSClientConfig: &tls.Config{
+-			InsecureSkipVerify: true,
+-		},
+-	}
+-	c.httpClient = &http.Client{Transport: tr}
+-}
+-
+-// initHTTPSClient initializes an HTTPS client for the etcd client
+-func (c *Client) initHTTPSClient(cert, key string) error {
+-	if cert == "" || key == "" {
+-		return errors.New("Require both cert and key path")
+-	}
+-
+-	tlsCert, err := tls.LoadX509KeyPair(cert, key)
+-	if err != nil {
+-		return err
+-	}
+-
+-	tlsConfig := &tls.Config{
+-		Certificates:       []tls.Certificate{tlsCert},
+-		InsecureSkipVerify: true,
+-	}
+-
+-	tr := &http.Transport{
+-		TLSClientConfig: tlsConfig,
+-		Dial:            c.dial,
+-	}
+-
+-	c.httpClient = &http.Client{Transport: tr}
+-	return nil
+-}
+-
+-// SetPersistence sets a writer to which the config will be
+-// written every time it's changed.
+-func (c *Client) SetPersistence(writer io.Writer) {
+-	c.persistence = writer
+-}
+-
+-// SetConsistency changes the consistency level of the client.
+-//
+-// When consistency is set to STRONG_CONSISTENCY, all requests,
+-// including GET, are sent to the leader.  This means that, assuming
+-// the absence of leader failures, GET requests are guaranteed to see
+-// the changes made by previous requests.
+-//
+-// When consistency is set to WEAK_CONSISTENCY, other requests
+-// are still sent to the leader, but GET requests are sent to a
+-// random server from the server pool.  This reduces the read
+-// load on the leader, but it's not guaranteed that the GET requests
+-// will see changes made by previous requests (they might have not
+-// yet been committed on non-leader servers).
+-func (c *Client) SetConsistency(consistency string) error {
+-	if !(consistency == STRONG_CONSISTENCY || consistency == WEAK_CONSISTENCY) {
+-		return errors.New("The argument must be either STRONG_CONSISTENCY or WEAK_CONSISTENCY.")
+-	}
+-	c.config.Consistency = consistency
+-	return nil
+-}
+-
+-// SetDialTimeout sets the DialTimeout value
+-func (c *Client) SetDialTimeout(d time.Duration) {
+-	c.config.DialTimeout = d
+-}
+-
+-// AddRootCA adds a root CA cert for the etcd client
+-func (c *Client) AddRootCA(caCert string) error {
+-	if c.httpClient == nil {
+-		return errors.New("Client has not been initialized yet!")
+-	}
+-
+-	certBytes, err := ioutil.ReadFile(caCert)
+-	if err != nil {
+-		return err
+-	}
+-
+-	tr, ok := c.httpClient.Transport.(*http.Transport)
+-
+-	if !ok {
+-		panic("AddRootCA(): Transport type assert should not fail")
+-	}
+-
+-	if tr.TLSClientConfig.RootCAs == nil {
+-		caCertPool := x509.NewCertPool()
+-		ok = caCertPool.AppendCertsFromPEM(certBytes)
+-		if ok {
+-			tr.TLSClientConfig.RootCAs = caCertPool
+-		}
+-		tr.TLSClientConfig.InsecureSkipVerify = false
+-	} else {
+-		ok = tr.TLSClientConfig.RootCAs.AppendCertsFromPEM(certBytes)
+-	}
+-
+-	if !ok {
+-		err = errors.New("Unable to load caCert")
+-	}
+-
+-	c.config.CaCertFile = append(c.config.CaCertFile, caCert)
+-	c.saveConfig()
+-
+-	return err
+-}
+-
+-// SetCluster updates cluster information using the given machine list.
+-func (c *Client) SetCluster(machines []string) bool {
+-	success := c.internalSyncCluster(machines)
+-	return success
+-}
+-
+-func (c *Client) GetCluster() []string {
+-	return c.cluster.Machines
+-}
+-
+-// SyncCluster updates the cluster information using the internal machine list.
+-func (c *Client) SyncCluster() bool {
+-	return c.internalSyncCluster(c.cluster.Machines)
+-}
+-
+-// internalSyncCluster syncs cluster information using the given machine list.
+-func (c *Client) internalSyncCluster(machines []string) bool {
+-	for _, machine := range machines {
+-		httpPath := c.createHttpPath(machine, path.Join(version, "machines"))
+-		resp, err := c.httpClient.Get(httpPath)
+-		if err != nil {
+-			// try another machine in the cluster
+-			continue
+-		} else {
+-			b, err := ioutil.ReadAll(resp.Body)
+-			resp.Body.Close()
+-			if err != nil {
+-				// try another machine in the cluster
+-				continue
+-			}
+-
+-			// update Machines List
+-			c.cluster.updateFromStr(string(b))
+-
+-			// update leader
+-			// the first one in the machine list is the leader
+-			c.cluster.switchLeader(0)
+-
+-			logger.Debug("sync.machines ", c.cluster.Machines)
+-			c.saveConfig()
+-			return true
+-		}
+-	}
+-	return false
+-}
+-
+-// createHttpPath creates a complete HTTP URL.
+-// serverName should contain both the host name and a port number, if any.
+-func (c *Client) createHttpPath(serverName string, _path string) string {
+-	u, err := url.Parse(serverName)
+-	if err != nil {
+-		panic(err)
+-	}
+-
+-	u.Path = path.Join(u.Path, _path)
+-
+-	if u.Scheme == "" {
+-		u.Scheme = "http"
+-	}
+-	return u.String()
+-}
+-
+-// dial attempts to open a TCP connection to the provided address, explicitly
+-// enabling keep-alives with a one-second interval.
+-func (c *Client) dial(network, addr string) (net.Conn, error) {
+-	conn, err := net.DialTimeout(network, addr, c.config.DialTimeout)
+-	if err != nil {
+-		return nil, err
+-	}
+-
+-	tcpConn, ok := conn.(*net.TCPConn)
+-	if !ok {
+-		return nil, errors.New("Failed type-assertion of net.Conn as *net.TCPConn")
+-	}
+-
+-	// Keep TCP alive to check whether or not the remote machine is down
+-	if err = tcpConn.SetKeepAlive(true); err != nil {
+-		return nil, err
+-	}
+-
+-	if err = tcpConn.SetKeepAlivePeriod(time.Second); err != nil {
+-		return nil, err
+-	}
+-
+-	return tcpConn, nil
+-}
+-
+-func (c *Client) OpenCURL() {
+-	c.cURLch = make(chan string, defaultBufferSize)
+-}
+-
+-func (c *Client) CloseCURL() {
+-	c.cURLch = nil
+-}
+-
+-func (c *Client) sendCURL(command string) {
+-	go func() {
+-		select {
+-		case c.cURLch <- command:
+-		default:
+-		}
+-	}()
+-}
+-
+-func (c *Client) RecvCURL() string {
+-	return <-c.cURLch
+-}
+-
+-// saveConfig saves the current config using c.persistence.
+-func (c *Client) saveConfig() error {
+-	if c.persistence != nil {
+-		b, err := json.Marshal(c)
+-		if err != nil {
+-			return err
+-		}
+-
+-		_, err = c.persistence.Write(b)
+-		if err != nil {
+-			return err
+-		}
+-	}
+-
+-	return nil
+-}
+-
+-// MarshalJSON implements the json.Marshaler interface
+-// as defined by the standard JSON package.
+-func (c *Client) MarshalJSON() ([]byte, error) {
+-	b, err := json.Marshal(struct {
+-		Config  Config   `json:"config"`
+-		Cluster *Cluster `json:"cluster"`
+-	}{
+-		Config:  c.config,
+-		Cluster: c.cluster,
+-	})
+-
+-	if err != nil {
+-		return nil, err
+-	}
+-
+-	return b, nil
+-}
+-
+-// UnmarshalJSON implements the json.Unmarshaler interface
+-// as defined by the standard JSON package.
+-func (c *Client) UnmarshalJSON(b []byte) error {
+-	temp := struct {
+-		Config  Config   `json:"config"`
+-		Cluster *Cluster `json:"cluster"`
+-	}{}
+-	err := json.Unmarshal(b, &temp)
+-	if err != nil {
+-		return err
+-	}
+-
+-	c.cluster = temp.Cluster
+-	c.config = temp.Config
+-	return nil
+-}
+diff --git a/Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/client_test.go b/Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/client_test.go
+deleted file mode 100644
+index c245e47..0000000
+--- a/Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/client_test.go
++++ /dev/null
+@@ -1,96 +0,0 @@
+-package etcd
+-
+-import (
+-	"encoding/json"
+-	"fmt"
+-	"net"
+-	"net/url"
+-	"os"
+-	"testing"
+-)
+-
+-// To pass this test, we need to create a cluster of 3 machines
+-// The server should be listening on 127.0.0.1:4001, 4002, 4003
+-func TestSync(t *testing.T) {
+-	fmt.Println("Make sure there are three nodes at 0.0.0.0:4001-4003")
+-
+-	// Explicit trailing slash to ensure this doesn't reproduce:
+-	// https://github.com/coreos/go-etcd/issues/82
+-	c := NewClient([]string{"http://127.0.0.1:4001/"})
+-
+-	success := c.SyncCluster()
+-	if !success {
+-		t.Fatal("cannot sync machines")
+-	}
+-
+-	for _, m := range c.GetCluster() {
+-		u, err := url.Parse(m)
+-		if err != nil {
+-			t.Fatal(err)
+-		}
+-		if u.Scheme != "http" {
+-			t.Fatal("scheme must be http")
+-		}
+-
+-		host, _, err := net.SplitHostPort(u.Host)
+-		if err != nil {
+-			t.Fatal(err)
+-		}
+-		if host != "127.0.0.1" {
+-			t.Fatal("Host must be 127.0.0.1")
+-		}
+-	}
+-
+-	badMachines := []string{"abc", "edef"}
+-
+-	success = c.SetCluster(badMachines)
+-
+-	if success {
+-		t.Fatal("should not sync on bad machines")
+-	}
+-
+-	goodMachines := []string{"127.0.0.1:4002"}
+-
+-	success = c.SetCluster(goodMachines)
+-
+-	if !success {
+-		t.Fatal("cannot sync machines")
+-	} else {
+-		fmt.Println(c.cluster.Machines)
+-	}
+-
+-}
+-
+-func TestPersistence(t *testing.T) {
+-	c := NewClient(nil)
+-	c.SyncCluster()
+-
+-	fo, err := os.Create("config.json")
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-	defer func() {
+-		if err := fo.Close(); err != nil {
+-			panic(err)
+-		}
+-	}()
+-
+-	c.SetPersistence(fo)
+-	err = c.saveConfig()
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-
+-	c2, err := NewClientFromFile("config.json")
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-
+-	// Verify that the two clients have the same config
+-	b1, _ := json.Marshal(c)
+-	b2, _ := json.Marshal(c2)
+-
+-	if string(b1) != string(b2) {
+-		t.Fatalf("The two configs should be equal!")
+-	}
+-}
+diff --git a/Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/cluster.go b/Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/cluster.go
+deleted file mode 100644
+index aaa2054..0000000
+--- a/Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/cluster.go
++++ /dev/null
+@@ -1,51 +0,0 @@
+-package etcd
+-
+-import (
+-	"net/url"
+-	"strings"
+-)
+-
+-type Cluster struct {
+-	Leader   string   `json:"leader"`
+-	Machines []string `json:"machines"`
+-}
+-
+-func NewCluster(machines []string) *Cluster {
+-	// if an empty slice was sent in then just assume HTTP 4001 on localhost
+-	if len(machines) == 0 {
+-		machines = []string{"http://127.0.0.1:4001"}
+-	}
+-
+-	// default leader and machines
+-	return &Cluster{
+-		Leader:   machines[0],
+-		Machines: machines,
+-	}
+-}
+-
+-// switchLeader switch the current leader to machines[num]
+-func (cl *Cluster) switchLeader(num int) {
+-	logger.Debugf("switch.leader[from %v to %v]",
+-		cl.Leader, cl.Machines[num])
+-
+-	cl.Leader = cl.Machines[num]
+-}
+-
+-func (cl *Cluster) updateFromStr(machines string) {
+-	cl.Machines = strings.Split(machines, ", ")
+-}
+-
+-func (cl *Cluster) updateLeader(leader string) {
+-	logger.Debugf("update.leader[%s,%s]", cl.Leader, leader)
+-	cl.Leader = leader
+-}
+-
+-func (cl *Cluster) updateLeaderFromURL(u *url.URL) {
+-	var leader string
+-	if u.Scheme == "" {
+-		leader = "http://" + u.Host
+-	} else {
+-		leader = u.Scheme + "://" + u.Host
+-	}
+-	cl.updateLeader(leader)
+-}
+diff --git a/Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/compare_and_delete.go b/Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/compare_and_delete.go
+deleted file mode 100644
+index 11131bb..0000000
+--- a/Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/compare_and_delete.go
++++ /dev/null
+@@ -1,34 +0,0 @@
+-package etcd
+-
+-import "fmt"
+-
+-func (c *Client) CompareAndDelete(key string, prevValue string, prevIndex uint64) (*Response, error) {
+-	raw, err := c.RawCompareAndDelete(key, prevValue, prevIndex)
+-	if err != nil {
+-		return nil, err
+-	}
+-
+-	return raw.Unmarshal()
+-}
+-
+-func (c *Client) RawCompareAndDelete(key string, prevValue string, prevIndex uint64) (*RawResponse, error) {
+-	if prevValue == "" && prevIndex == 0 {
+-		return nil, fmt.Errorf("You must give either prevValue or prevIndex.")
+-	}
+-
+-	options := Options{}
+-	if prevValue != "" {
+-		options["prevValue"] = prevValue
+-	}
+-	if prevIndex != 0 {
+-		options["prevIndex"] = prevIndex
+-	}
+-
+-	raw, err := c.delete(key, options)
+-
+-	if err != nil {
+-		return nil, err
+-	}
+-
+-	return raw, err
+-}
+diff --git a/Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/compare_and_delete_test.go b/Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/compare_and_delete_test.go
+deleted file mode 100644
+index 223e50f..0000000
+--- a/Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/compare_and_delete_test.go
++++ /dev/null
+@@ -1,46 +0,0 @@
+-package etcd
+-
+-import (
+-	"testing"
+-)
+-
+-func TestCompareAndDelete(t *testing.T) {
+-	c := NewClient(nil)
+-	defer func() {
+-		c.Delete("foo", true)
+-	}()
+-
+-	c.Set("foo", "bar", 5)
+-
+-	// This should succeed an correct prevValue
+-	resp, err := c.CompareAndDelete("foo", "bar", 0)
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-	if !(resp.PrevNode.Value == "bar" && resp.PrevNode.Key == "/foo" && resp.PrevNode.TTL == 5) {
+-		t.Fatalf("CompareAndDelete 1 prevNode failed: %#v", resp)
+-	}
+-
+-	resp, _ = c.Set("foo", "bar", 5)
+-	// This should fail because it gives an incorrect prevValue
+-	_, err = c.CompareAndDelete("foo", "xxx", 0)
+-	if err == nil {
+-		t.Fatalf("CompareAndDelete 2 should have failed.  The response is: %#v", resp)
+-	}
+-
+-	// This should succeed because it gives an correct prevIndex
+-	resp, err = c.CompareAndDelete("foo", "", resp.Node.ModifiedIndex)
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-	if !(resp.PrevNode.Value == "bar" && resp.PrevNode.Key == "/foo" && resp.PrevNode.TTL == 5) {
+-		t.Fatalf("CompareAndSwap 3 prevNode failed: %#v", resp)
+-	}
+-
+-	c.Set("foo", "bar", 5)
+-	// This should fail because it gives an incorrect prevIndex
+-	resp, err = c.CompareAndDelete("foo", "", 29817514)
+-	if err == nil {
+-		t.Fatalf("CompareAndDelete 4 should have failed.  The response is: %#v", resp)
+-	}
+-}
+diff --git a/Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/compare_and_swap.go b/Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/compare_and_swap.go
+deleted file mode 100644
+index bb4f906..0000000
+--- a/Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/compare_and_swap.go
++++ /dev/null
+@@ -1,36 +0,0 @@
+-package etcd
+-
+-import "fmt"
+-
+-func (c *Client) CompareAndSwap(key string, value string, ttl uint64,
+-	prevValue string, prevIndex uint64) (*Response, error) {
+-	raw, err := c.RawCompareAndSwap(key, value, ttl, prevValue, prevIndex)
+-	if err != nil {
+-		return nil, err
+-	}
+-
+-	return raw.Unmarshal()
+-}
+-
+-func (c *Client) RawCompareAndSwap(key string, value string, ttl uint64,
+-	prevValue string, prevIndex uint64) (*RawResponse, error) {
+-	if prevValue == "" && prevIndex == 0 {
+-		return nil, fmt.Errorf("You must give either prevValue or prevIndex.")
+-	}
+-
+-	options := Options{}
+-	if prevValue != "" {
+-		options["prevValue"] = prevValue
+-	}
+-	if prevIndex != 0 {
+-		options["prevIndex"] = prevIndex
+-	}
+-
+-	raw, err := c.put(key, value, ttl, options)
+-
+-	if err != nil {
+-		return nil, err
+-	}
+-
+-	return raw, err
+-}
+diff --git a/Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/compare_and_swap_test.go b/Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/compare_and_swap_test.go
+deleted file mode 100644
+index 14a1b00..0000000
+--- a/Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/compare_and_swap_test.go
++++ /dev/null
+@@ -1,57 +0,0 @@
+-package etcd
+-
+-import (
+-	"testing"
+-)
+-
+-func TestCompareAndSwap(t *testing.T) {
+-	c := NewClient(nil)
+-	defer func() {
+-		c.Delete("foo", true)
+-	}()
+-
+-	c.Set("foo", "bar", 5)
+-
+-	// This should succeed
+-	resp, err := c.CompareAndSwap("foo", "bar2", 5, "bar", 0)
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-	if !(resp.Node.Value == "bar2" && resp.Node.Key == "/foo" && resp.Node.TTL == 5) {
+-		t.Fatalf("CompareAndSwap 1 failed: %#v", resp)
+-	}
+-
+-	if !(resp.PrevNode.Value == "bar" && resp.PrevNode.Key == "/foo" && resp.PrevNode.TTL == 5) {
+-		t.Fatalf("CompareAndSwap 1 prevNode failed: %#v", resp)
+-	}
+-
+-	// This should fail because it gives an incorrect prevValue
+-	resp, err = c.CompareAndSwap("foo", "bar3", 5, "xxx", 0)
+-	if err == nil {
+-		t.Fatalf("CompareAndSwap 2 should have failed.  The response is: %#v", resp)
+-	}
+-
+-	resp, err = c.Set("foo", "bar", 5)
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-
+-	// This should succeed
+-	resp, err = c.CompareAndSwap("foo", "bar2", 5, "", resp.Node.ModifiedIndex)
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-	if !(resp.Node.Value == "bar2" && resp.Node.Key == "/foo" && resp.Node.TTL == 5) {
+-		t.Fatalf("CompareAndSwap 3 failed: %#v", resp)
+-	}
+-
+-	if !(resp.PrevNode.Value == "bar" && resp.PrevNode.Key == "/foo" && resp.PrevNode.TTL == 5) {
+-		t.Fatalf("CompareAndSwap 3 prevNode failed: %#v", resp)
+-	}
+-
+-	// This should fail because it gives an incorrect prevIndex
+-	resp, err = c.CompareAndSwap("foo", "bar3", 5, "", 29817514)
+-	if err == nil {
+-		t.Fatalf("CompareAndSwap 4 should have failed.  The response is: %#v", resp)
+-	}
+-}
+diff --git a/Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/debug.go b/Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/debug.go
+deleted file mode 100644
+index 0f77788..0000000
+--- a/Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/debug.go
++++ /dev/null
+@@ -1,55 +0,0 @@
+-package etcd
+-
+-import (
+-	"fmt"
+-	"io/ioutil"
+-	"log"
+-	"strings"
+-)
+-
+-var logger *etcdLogger
+-
+-func SetLogger(l *log.Logger) {
+-	logger = &etcdLogger{l}
+-}
+-
+-func GetLogger() *log.Logger {
+-	return logger.log
+-}
+-
+-type etcdLogger struct {
+-	log *log.Logger
+-}
+-
+-func (p *etcdLogger) Debug(args ...interface{}) {
+-	msg := "DEBUG: " + fmt.Sprint(args...)
+-	p.log.Println(msg)
+-}
+-
+-func (p *etcdLogger) Debugf(f string, args ...interface{}) {
+-	msg := "DEBUG: " + fmt.Sprintf(f, args...)
+-	// Append newline if necessary
+-	if !strings.HasSuffix(msg, "\n") {
+-		msg = msg + "\n"
+-	}
+-	p.log.Print(msg)
+-}
+-
+-func (p *etcdLogger) Warning(args ...interface{}) {
+-	msg := "WARNING: " + fmt.Sprint(args...)
+-	p.log.Println(msg)
+-}
+-
+-func (p *etcdLogger) Warningf(f string, args ...interface{}) {
+-	msg := "WARNING: " + fmt.Sprintf(f, args...)
+-	// Append newline if necessary
+-	if !strings.HasSuffix(msg, "\n") {
+-		msg = msg + "\n"
+-	}
+-	p.log.Print(msg)
+-}
+-
+-func init() {
+-	// Default logger uses the go default log.
+-	SetLogger(log.New(ioutil.Discard, "go-etcd", log.LstdFlags))
+-}
+diff --git a/Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/debug_test.go b/Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/debug_test.go
+deleted file mode 100644
+index 97f6d11..0000000
+--- a/Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/debug_test.go
++++ /dev/null
+@@ -1,28 +0,0 @@
+-package etcd
+-
+-import (
+-	"testing"
+-)
+-
+-type Foo struct{}
+-type Bar struct {
+-	one string
+-	two int
+-}
+-
+-// Tests that logs don't panic with arbitrary interfaces
+-func TestDebug(t *testing.T) {
+-	f := &Foo{}
+-	b := &Bar{"asfd", 3}
+-	for _, test := range []interface{}{
+-		1234,
+-		"asdf",
+-		f,
+-		b,
+-	} {
+-		logger.Debug(test)
+-		logger.Debugf("something, %s", test)
+-		logger.Warning(test)
+-		logger.Warningf("something, %s", test)
+-	}
+-}
+diff --git a/Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/delete.go b/Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/delete.go
+deleted file mode 100644
+index b37accd..0000000
+--- a/Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/delete.go
++++ /dev/null
+@@ -1,40 +0,0 @@
+-package etcd
+-
+-// Delete deletes the given key.
+-//
+-// When recursive set to false, if the key points to a
+-// directory the method will fail.
+-//
+-// When recursive set to true, if the key points to a file,
+-// the file will be deleted; if the key points to a directory,
+-// then everything under the directory (including all child directories)
+-// will be deleted.
+-func (c *Client) Delete(key string, recursive bool) (*Response, error) {
+-	raw, err := c.RawDelete(key, recursive, false)
+-
+-	if err != nil {
+-		return nil, err
+-	}
+-
+-	return raw.Unmarshal()
+-}
+-
+-// DeleteDir deletes an empty directory or a key value pair
+-func (c *Client) DeleteDir(key string) (*Response, error) {
+-	raw, err := c.RawDelete(key, false, true)
+-
+-	if err != nil {
+-		return nil, err
+-	}
+-
+-	return raw.Unmarshal()
+-}
+-
+-func (c *Client) RawDelete(key string, recursive bool, dir bool) (*RawResponse, error) {
+-	ops := Options{
+-		"recursive": recursive,
+-		"dir":       dir,
+-	}
+-
+-	return c.delete(key, ops)
+-}
+diff --git a/Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/delete_test.go b/Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/delete_test.go
+deleted file mode 100644
+index 5904971..0000000
+--- a/Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/delete_test.go
++++ /dev/null
+@@ -1,81 +0,0 @@
+-package etcd
+-
+-import (
+-	"testing"
+-)
+-
+-func TestDelete(t *testing.T) {
+-	c := NewClient(nil)
+-	defer func() {
+-		c.Delete("foo", true)
+-	}()
+-
+-	c.Set("foo", "bar", 5)
+-	resp, err := c.Delete("foo", false)
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-
+-	if !(resp.Node.Value == "") {
+-		t.Fatalf("Delete failed with %s", resp.Node.Value)
+-	}
+-
+-	if !(resp.PrevNode.Value == "bar") {
+-		t.Fatalf("Delete PrevNode failed with %s", resp.Node.Value)
+-	}
+-
+-	resp, err = c.Delete("foo", false)
+-	if err == nil {
+-		t.Fatalf("Delete should have failed because the key foo did not exist.  "+
+-			"The response was: %v", resp)
+-	}
+-}
+-
+-func TestDeleteAll(t *testing.T) {
+-	c := NewClient(nil)
+-	defer func() {
+-		c.Delete("foo", true)
+-		c.Delete("fooDir", true)
+-	}()
+-
+-	c.SetDir("foo", 5)
+-	// test delete an empty dir
+-	resp, err := c.DeleteDir("foo")
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-
+-	if !(resp.Node.Value == "") {
+-		t.Fatalf("DeleteAll 1 failed: %#v", resp)
+-	}
+-
+-	if !(resp.PrevNode.Dir == true && resp.PrevNode.Value == "") {
+-		t.Fatalf("DeleteAll 1 PrevNode failed: %#v", resp)
+-	}
+-
+-	c.CreateDir("fooDir", 5)
+-	c.Set("fooDir/foo", "bar", 5)
+-	_, err = c.DeleteDir("fooDir")
+-	if err == nil {
+-		t.Fatal("should not able to delete a non-empty dir with deletedir")
+-	}
+-
+-	resp, err = c.Delete("fooDir", true)
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-
+-	if !(resp.Node.Value == "") {
+-		t.Fatalf("DeleteAll 2 failed: %#v", resp)
+-	}
+-
+-	if !(resp.PrevNode.Dir == true && resp.PrevNode.Value == "") {
+-		t.Fatalf("DeleteAll 2 PrevNode failed: %#v", resp)
+-	}
+-
+-	resp, err = c.Delete("foo", true)
+-	if err == nil {
+-		t.Fatalf("DeleteAll should have failed because the key foo did not exist.  "+
+-			"The response was: %v", resp)
+-	}
+-}
+diff --git a/Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/error.go b/Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/error.go
+deleted file mode 100644
+index 7e69287..0000000
+--- a/Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/error.go
++++ /dev/null
+@@ -1,48 +0,0 @@
+-package etcd
+-
+-import (
+-	"encoding/json"
+-	"fmt"
+-)
+-
+-const (
+-	ErrCodeEtcdNotReachable = 501
+-)
+-
+-var (
+-	errorMap = map[int]string{
+-		ErrCodeEtcdNotReachable: "All the given peers are not reachable",
+-	}
+-)
+-
+-type EtcdError struct {
+-	ErrorCode int    `json:"errorCode"`
+-	Message   string `json:"message"`
+-	Cause     string `json:"cause,omitempty"`
+-	Index     uint64 `json:"index"`
+-}
+-
+-func (e EtcdError) Error() string {
+-	return fmt.Sprintf("%v: %v (%v) [%v]", e.ErrorCode, e.Message, e.Cause, e.Index)
+-}
+-
+-func newError(errorCode int, cause string, index uint64) *EtcdError {
+-	return &EtcdError{
+-		ErrorCode: errorCode,
+-		Message:   errorMap[errorCode],
+-		Cause:     cause,
+-		Index:     index,
+-	}
+-}
+-
+-func handleError(b []byte) error {
+-	etcdErr := new(EtcdError)
+-
+-	err := json.Unmarshal(b, etcdErr)
+-	if err != nil {
+-		logger.Warningf("cannot unmarshal etcd error: %v", err)
+-		return err
+-	}
+-
+-	return etcdErr
+-}
+diff --git a/Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/get.go b/Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/get.go
+deleted file mode 100644
+index 976bf07..0000000
+--- a/Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/get.go
++++ /dev/null
+@@ -1,27 +0,0 @@
+-package etcd
+-
+-// Get gets the file or directory associated with the given key.
+-// If the key points to a directory, files and directories under
+-// it will be returned in sorted or unsorted order, depending on
+-// the sort flag.
+-// If recursive is set to false, contents under child directories
+-// will not be returned.
+-// If recursive is set to true, all the contents will be returned.
+-func (c *Client) Get(key string, sort, recursive bool) (*Response, error) {
+-	raw, err := c.RawGet(key, sort, recursive)
+-
+-	if err != nil {
+-		return nil, err
+-	}
+-
+-	return raw.Unmarshal()
+-}
+-
+-func (c *Client) RawGet(key string, sort, recursive bool) (*RawResponse, error) {
+-	ops := Options{
+-		"recursive": recursive,
+-		"sorted":    sort,
+-	}
+-
+-	return c.get(key, ops)
+-}
+diff --git a/Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/get_test.go b/Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/get_test.go
+deleted file mode 100644
+index 279c4e2..0000000
+--- a/Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/get_test.go
++++ /dev/null
+@@ -1,131 +0,0 @@
+-package etcd
+-
+-import (
+-	"reflect"
+-	"testing"
+-)
+-
+-// cleanNode scrubs Expiration, ModifiedIndex and CreatedIndex of a node.
+-func cleanNode(n *Node) {
+-	n.Expiration = nil
+-	n.ModifiedIndex = 0
+-	n.CreatedIndex = 0
+-}
+-
+-// cleanResult scrubs a result object two levels deep of Expiration,
+-// ModifiedIndex and CreatedIndex.
+-func cleanResult(result *Response) {
+-	//  TODO(philips): make this recursive.
+-	cleanNode(result.Node)
+-	for i, _ := range result.Node.Nodes {
+-		cleanNode(result.Node.Nodes[i])
+-		for j, _ := range result.Node.Nodes[i].Nodes {
+-			cleanNode(result.Node.Nodes[i].Nodes[j])
+-		}
+-	}
+-}
+-
+-func TestGet(t *testing.T) {
+-	c := NewClient(nil)
+-	defer func() {
+-		c.Delete("foo", true)
+-	}()
+-
+-	c.Set("foo", "bar", 5)
+-
+-	result, err := c.Get("foo", false, false)
+-
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-
+-	if result.Node.Key != "/foo" || result.Node.Value != "bar" {
+-		t.Fatalf("Get failed with %s %s %v", result.Node.Key, result.Node.Value, result.Node.TTL)
+-	}
+-
+-	result, err = c.Get("goo", false, false)
+-	if err == nil {
+-		t.Fatalf("should not be able to get non-exist key")
+-	}
+-}
+-
+-func TestGetAll(t *testing.T) {
+-	c := NewClient(nil)
+-	defer func() {
+-		c.Delete("fooDir", true)
+-	}()
+-
+-	c.CreateDir("fooDir", 5)
+-	c.Set("fooDir/k0", "v0", 5)
+-	c.Set("fooDir/k1", "v1", 5)
+-
+-	// Return kv-pairs in sorted order
+-	result, err := c.Get("fooDir", true, false)
+-
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-
+-	expected := Nodes{
+-		&Node{
+-			Key:   "/fooDir/k0",
+-			Value: "v0",
+-			TTL:   5,
+-		},
+-		&Node{
+-			Key:   "/fooDir/k1",
+-			Value: "v1",
+-			TTL:   5,
+-		},
+-	}
+-
+-	cleanResult(result)
+-
+-	if !reflect.DeepEqual(result.Node.Nodes, expected) {
+-		t.Fatalf("(actual) %v != (expected) %v", result.Node.Nodes, expected)
+-	}
+-
+-	// Test the `recursive` option
+-	c.CreateDir("fooDir/childDir", 5)
+-	c.Set("fooDir/childDir/k2", "v2", 5)
+-
+-	// Return kv-pairs in sorted order
+-	result, err = c.Get("fooDir", true, true)
+-
+-	cleanResult(result)
+-
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-
+-	expected = Nodes{
+-		&Node{
+-			Key: "/fooDir/childDir",
+-			Dir: true,
+-			Nodes: Nodes{
+-				&Node{
+-					Key:   "/fooDir/childDir/k2",
+-					Value: "v2",
+-					TTL:   5,
+-				},
+-			},
+-			TTL: 5,
+-		},
+-		&Node{
+-			Key:   "/fooDir/k0",
+-			Value: "v0",
+-			TTL:   5,
+-		},
+-		&Node{
+-			Key:   "/fooDir/k1",
+-			Value: "v1",
+-			TTL:   5,
+-		},
+-	}
+-
+-	cleanResult(result)
+-
+-	if !reflect.DeepEqual(result.Node.Nodes, expected) {
+-		t.Fatalf("(actual) %v != (expected) %v", result.Node.Nodes, expected)
+-	}
+-}
+diff --git a/Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/options.go b/Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/options.go
+deleted file mode 100644
+index 701c9b3..0000000
+--- a/Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/options.go
++++ /dev/null
+@@ -1,72 +0,0 @@
+-package etcd
+-
+-import (
+-	"fmt"
+-	"net/url"
+-	"reflect"
+-)
+-
+-type Options map[string]interface{}
+-
+-// An internally-used data structure that represents a mapping
+-// between valid options and their kinds
+-type validOptions map[string]reflect.Kind
+-
+-// Valid options for GET, PUT, POST, DELETE
+-// Using CAPITALIZED_UNDERSCORE to emphasize that these
+-// values are meant to be used as constants.
+-var (
+-	VALID_GET_OPTIONS = validOptions{
+-		"recursive":  reflect.Bool,
+-		"consistent": reflect.Bool,
+-		"sorted":     reflect.Bool,
+-		"wait":       reflect.Bool,
+-		"waitIndex":  reflect.Uint64,
+-	}
+-
+-	VALID_PUT_OPTIONS = validOptions{
+-		"prevValue": reflect.String,
+-		"prevIndex": reflect.Uint64,
+-		"prevExist": reflect.Bool,
+-		"dir":       reflect.Bool,
+-	}
+-
+-	VALID_POST_OPTIONS = validOptions{}
+-
+-	VALID_DELETE_OPTIONS = validOptions{
+-		"recursive": reflect.Bool,
+-		"dir":       reflect.Bool,
+-		"prevValue": reflect.String,
+-		"prevIndex": reflect.Uint64,
+-	}
+-)
+-
+-// Convert options to a string of HTML parameters
+-func (ops Options) toParameters(validOps validOptions) (string, error) {
+-	p := "?"
+-	values := url.Values{}
+-
+-	if ops == nil {
+-		return "", nil
+-	}
+-
+-	for k, v := range ops {
+-		// Check if the given option is valid (that it exists)
+-		kind := validOps[k]
+-		if kind == reflect.Invalid {
+-			return "", fmt.Errorf("Invalid option: %v", k)
+-		}
+-
+-		// Check if the given option is of the valid type
+-		t := reflect.TypeOf(v)
+-		if kind != t.Kind() {
+-			return "", fmt.Errorf("Option %s should be of %v kind, not of %v kind.",
+-				k, kind, t.Kind())
+-		}
+-
+-		values.Set(k, fmt.Sprintf("%v", v))
+-	}
+-
+-	p += values.Encode()
+-	return p, nil
+-}
+diff --git a/Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/requests.go b/Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/requests.go
+deleted file mode 100644
+index 5d8b45a..0000000
+--- a/Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/requests.go
++++ /dev/null
+@@ -1,377 +0,0 @@
+-package etcd
+-
+-import (
+-	"errors"
+-	"fmt"
+-	"io/ioutil"
+-	"math/rand"
+-	"net/http"
+-	"net/url"
+-	"path"
+-	"strings"
+-	"sync"
+-	"time"
+-)
+-
+-// Errors introduced by handling requests
+-var (
+-	ErrRequestCancelled = errors.New("sending request is cancelled")
+-)
+-
+-type RawRequest struct {
+-	Method       string
+-	RelativePath string
+-	Values       url.Values
+-	Cancel       <-chan bool
+-}
+-
+-// NewRawRequest returns a new RawRequest
+-func NewRawRequest(method, relativePath string, values url.Values, cancel <-chan bool) *RawRequest {
+-	return &RawRequest{
+-		Method:       method,
+-		RelativePath: relativePath,
+-		Values:       values,
+-		Cancel:       cancel,
+-	}
+-}
+-
+-// getCancelable issues a cancelable GET request
+-func (c *Client) getCancelable(key string, options Options,
+-	cancel <-chan bool) (*RawResponse, error) {
+-	logger.Debugf("get %s [%s]", key, c.cluster.Leader)
+-	p := keyToPath(key)
+-
+-	// If consistency level is set to STRONG, append
+-	// the `consistent` query string.
+-	if c.config.Consistency == STRONG_CONSISTENCY {
+-		options["consistent"] = true
+-	}
+-
+-	str, err := options.toParameters(VALID_GET_OPTIONS)
+-	if err != nil {
+-		return nil, err
+-	}
+-	p += str
+-
+-	req := NewRawRequest("GET", p, nil, cancel)
+-	resp, err := c.SendRequest(req)
+-
+-	if err != nil {
+-		return nil, err
+-	}
+-
+-	return resp, nil
+-}
+-
+-// get issues a GET request
+-func (c *Client) get(key string, options Options) (*RawResponse, error) {
+-	return c.getCancelable(key, options, nil)
+-}
+-
+-// put issues a PUT request
+-func (c *Client) put(key string, value string, ttl uint64,
+-	options Options) (*RawResponse, error) {
+-
+-	logger.Debugf("put %s, %s, ttl: %d, [%s]", key, value, ttl, c.cluster.Leader)
+-	p := keyToPath(key)
+-
+-	str, err := options.toParameters(VALID_PUT_OPTIONS)
+-	if err != nil {
+-		return nil, err
+-	}
+-	p += str
+-
+-	req := NewRawRequest("PUT", p, buildValues(value, ttl), nil)
+-	resp, err := c.SendRequest(req)
+-
+-	if err != nil {
+-		return nil, err
+-	}
+-
+-	return resp, nil
+-}
+-
+-// post issues a POST request
+-func (c *Client) post(key string, value string, ttl uint64) (*RawResponse, error) {
+-	logger.Debugf("post %s, %s, ttl: %d, [%s]", key, value, ttl, c.cluster.Leader)
+-	p := keyToPath(key)
+-
+-	req := NewRawRequest("POST", p, buildValues(value, ttl), nil)
+-	resp, err := c.SendRequest(req)
+-
+-	if err != nil {
+-		return nil, err
+-	}
+-
+-	return resp, nil
+-}
+-
+-// delete issues a DELETE request
+-func (c *Client) delete(key string, options Options) (*RawResponse, error) {
+-	logger.Debugf("delete %s [%s]", key, c.cluster.Leader)
+-	p := keyToPath(key)
+-
+-	str, err := options.toParameters(VALID_DELETE_OPTIONS)
+-	if err != nil {
+-		return nil, err
+-	}
+-	p += str
+-
+-	req := NewRawRequest("DELETE", p, nil, nil)
+-	resp, err := c.SendRequest(req)
+-
+-	if err != nil {
+-		return nil, err
+-	}
+-
+-	return resp, nil
+-}
+-
+-// SendRequest sends a HTTP request and returns a Response as defined by etcd
+-func (c *Client) SendRequest(rr *RawRequest) (*RawResponse, error) {
+-
+-	var req *http.Request
+-	var resp *http.Response
+-	var httpPath string
+-	var err error
+-	var respBody []byte
+-
+-	var numReqs = 1
+-
+-	checkRetry := c.CheckRetry
+-	if checkRetry == nil {
+-		checkRetry = DefaultCheckRetry
+-	}
+-
+-	cancelled := make(chan bool, 1)
+-	reqLock := new(sync.Mutex)
+-
+-	if rr.Cancel != nil {
+-		cancelRoutine := make(chan bool)
+-		defer close(cancelRoutine)
+-
+-		go func() {
+-			select {
+-			case <-rr.Cancel:
+-				cancelled <- true
+-				logger.Debug("send.request is cancelled")
+-			case <-cancelRoutine:
+-				return
+-			}
+-
+-			// Repeat canceling request until this thread is stopped
+-			// because we have no idea about whether it succeeds.
+-			for {
+-				reqLock.Lock()
+-				c.httpClient.Transport.(*http.Transport).CancelRequest(req)
+-				reqLock.Unlock()
+-
+-				select {
+-				case <-time.After(100 * time.Millisecond):
+-				case <-cancelRoutine:
+-					return
+-				}
+-			}
+-		}()
+-	}
+-
+-	// If we connect to a follower and consistency is required, retry until
+-	// we connect to a leader
+-	sleep := 25 * time.Millisecond
+-	maxSleep := time.Second
+-	for attempt := 0; ; attempt++ {
+-		if attempt > 0 {
+-			select {
+-			case <-cancelled:
+-				return nil, ErrRequestCancelled
+-			case <-time.After(sleep):
+-				sleep = sleep * 2
+-				if sleep > maxSleep {
+-					sleep = maxSleep
+-				}
+-			}
+-		}
+-
+-		logger.Debug("Connecting to etcd: attempt", attempt+1, "for", rr.RelativePath)
+-
+-		if rr.Method == "GET" && c.config.Consistency == WEAK_CONSISTENCY {
+-			// If it's a GET and consistency level is set to WEAK,
+-			// then use a random machine.
+-			httpPath = c.getHttpPath(true, rr.RelativePath)
+-		} else {
+-			// Else use the leader.
+-			httpPath = c.getHttpPath(false, rr.RelativePath)
+-		}
+-
+-		// Return a cURL command if curlChan is set
+-		if c.cURLch != nil {
+-			command := fmt.Sprintf("curl -X %s %s", rr.Method, httpPath)
+-			for key, value := range rr.Values {
+-				command += fmt.Sprintf(" -d %s=%s", key, value[0])
+-			}
+-			c.sendCURL(command)
+-		}
+-
+-		logger.Debug("send.request.to ", httpPath, " | method ", rr.Method)
+-
+-		reqLock.Lock()
+-		if rr.Values == nil {
+-			if req, err = http.NewRequest(rr.Method, httpPath, nil); err != nil {
+-				return nil, err
+-			}
+-		} else {
+-			body := strings.NewReader(rr.Values.Encode())
+-			if req, err = http.NewRequest(rr.Method, httpPath, body); err != nil {
+-				return nil, err
+-			}
+-
+-			req.Header.Set("Content-Type",
+-				"application/x-www-form-urlencoded; param=value")
+-		}
+-		reqLock.Unlock()
+-
+-		resp, err = c.httpClient.Do(req)
+-		defer func() {
+-			if resp != nil {
+-				resp.Body.Close()
+-			}
+-		}()
+-
+-		// If the request was cancelled, return ErrRequestCancelled directly
+-		select {
+-		case <-cancelled:
+-			return nil, ErrRequestCancelled
+-		default:
+-		}
+-
+-		numReqs++
+-
+-		// network error, change a machine!
+-		if err != nil {
+-			logger.Debug("network error:", err.Error())
+-			lastResp := http.Response{}
+-			if checkErr := checkRetry(c.cluster, numReqs, lastResp, err); checkErr != nil {
+-				return nil, checkErr
+-			}
+-
+-			c.cluster.switchLeader(attempt % len(c.cluster.Machines))
+-			continue
+-		}
+-
+-		// if there is no error, it should receive response
+-		logger.Debug("recv.response.from", httpPath)
+-
+-		if validHttpStatusCode[resp.StatusCode] {
+-			// try to read byte code and break the loop
+-			respBody, err = ioutil.ReadAll(resp.Body)
+-			if err == nil {
+-				logger.Debug("recv.success.", httpPath)
+-				break
+-			}
+-			// ReadAll error may be caused due to cancel request
+-			select {
+-			case <-cancelled:
+-				return nil, ErrRequestCancelled
+-			default:
+-			}
+-		}
+-
+-		// if resp is TemporaryRedirect, set the new leader and retry
+-		if resp.StatusCode == http.StatusTemporaryRedirect {
+-			u, err := resp.Location()
+-
+-			if err != nil {
+-				logger.Warning(err)
+-			} else {
+-				// Update cluster leader based on redirect location
+-				// because it should point to the leader address
+-				c.cluster.updateLeaderFromURL(u)
+-				logger.Debug("recv.response.relocate", u.String())
+-			}
+-			resp.Body.Close()
+-			continue
+-		}
+-
+-		if checkErr := checkRetry(c.cluster, numReqs, *resp,
+-			errors.New("Unexpected HTTP status code")); checkErr != nil {
+-			return nil, checkErr
+-		}
+-		resp.Body.Close()
+-	}
+-
+-	r := &RawResponse{
+-		StatusCode: resp.StatusCode,
+-		Body:       respBody,
+-		Header:     resp.Header,
+-	}
+-
+-	return r, nil
+-}
+-
+-// DefaultCheckRetry defines the retrying behaviour for bad HTTP requests
+-// If we have retried 2 * machine number, stop retrying.
+-// If status code is InternalServerError, sleep for 200ms.
+-func DefaultCheckRetry(cluster *Cluster, numReqs int, lastResp http.Response,
+-	err error) error {
+-
+-	if numReqs >= 2*len(cluster.Machines) {
+-		return newError(ErrCodeEtcdNotReachable,
+-			"Tried to connect to each peer twice and failed", 0)
+-	}
+-
+-	code := lastResp.StatusCode
+-	if code == http.StatusInternalServerError {
+-		time.Sleep(time.Millisecond * 200)
+-
+-	}
+-
+-	logger.Warning("bad response status code", code)
+-	return nil
+-}
+-
+-func (c *Client) getHttpPath(random bool, s ...string) string {
+-	var machine string
+-	if random {
+-		machine = c.cluster.Machines[rand.Intn(len(c.cluster.Machines))]
+-	} else {
+-		machine = c.cluster.Leader
+-	}
+-
+-	fullPath := machine + "/" + version
+-	for _, seg := range s {
+-		fullPath = fullPath + "/" + seg
+-	}
+-
+-	return fullPath
+-}
+-
+-// buildValues builds a url.Values map according to the given value and ttl
+-func buildValues(value string, ttl uint64) url.Values {
+-	v := url.Values{}
+-
+-	if value != "" {
+-		v.Set("value", value)
+-	}
+-
+-	if ttl > 0 {
+-		v.Set("ttl", fmt.Sprintf("%v", ttl))
+-	}
+-
+-	return v
+-}
+-
+-// convert key string to http path exclude version
+-// for example: key[foo] -> path[keys/foo]
+-// key[/] -> path[keys/]
+-func keyToPath(key string) string {
+-	p := path.Join("keys", key)
+-
+-	// corner case: if key is "/" or "//" ect
+-	// path join will clear the tailing "/"
+-	// we need to add it back
+-	if p == "keys" {
+-		p = "keys/"
+-	}
+-
+-	return p
+-}
+diff --git a/Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/response.go b/Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/response.go
+deleted file mode 100644
+index 1fe9b4e..0000000
+--- a/Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/response.go
++++ /dev/null
+@@ -1,89 +0,0 @@
+-package etcd
+-
+-import (
+-	"encoding/json"
+-	"net/http"
+-	"strconv"
+-	"time"
+-)
+-
+-const (
+-	rawResponse = iota
+-	normalResponse
+-)
+-
+-type responseType int
+-
+-type RawResponse struct {
+-	StatusCode int
+-	Body       []byte
+-	Header     http.Header
+-}
+-
+-var (
+-	validHttpStatusCode = map[int]bool{
+-		http.StatusCreated:            true,
+-		http.StatusOK:                 true,
+-		http.StatusBadRequest:         true,
+-		http.StatusNotFound:           true,
+-		http.StatusPreconditionFailed: true,
+-		http.StatusForbidden:          true,
+-	}
+-)
+-
+-// Unmarshal parses RawResponse and stores the result in Response
+-func (rr *RawResponse) Unmarshal() (*Response, error) {
+-	if rr.StatusCode != http.StatusOK && rr.StatusCode != http.StatusCreated {
+-		return nil, handleError(rr.Body)
+-	}
+-
+-	resp := new(Response)
+-
+-	err := json.Unmarshal(rr.Body, resp)
+-
+-	if err != nil {
+-		return nil, err
+-	}
+-
+-	// attach index and term to response
+-	resp.EtcdIndex, _ = strconv.ParseUint(rr.Header.Get("X-Etcd-Index"), 10, 64)
+-	resp.RaftIndex, _ = strconv.ParseUint(rr.Header.Get("X-Raft-Index"), 10, 64)
+-	resp.RaftTerm, _ = strconv.ParseUint(rr.Header.Get("X-Raft-Term"), 10, 64)
+-
+-	return resp, nil
+-}
+-
+-type Response struct {
+-	Action    string `json:"action"`
+-	Node      *Node  `json:"node"`
+-	PrevNode  *Node  `json:"prevNode,omitempty"`
+-	EtcdIndex uint64 `json:"etcdIndex"`
+-	RaftIndex uint64 `json:"raftIndex"`
+-	RaftTerm  uint64 `json:"raftTerm"`
+-}
+-
+-type Node struct {
+-	Key           string     `json:"key, omitempty"`
+-	Value         string     `json:"value,omitempty"`
+-	Dir           bool       `json:"dir,omitempty"`
+-	Expiration    *time.Time `json:"expiration,omitempty"`
+-	TTL           int64      `json:"ttl,omitempty"`
+-	Nodes         Nodes      `json:"nodes,omitempty"`
+-	ModifiedIndex uint64     `json:"modifiedIndex,omitempty"`
+-	CreatedIndex  uint64     `json:"createdIndex,omitempty"`
+-}
+-
+-type Nodes []*Node
+-
+-// interfaces for sorting
+-func (ns Nodes) Len() int {
+-	return len(ns)
+-}
+-
+-func (ns Nodes) Less(i, j int) bool {
+-	return ns[i].Key < ns[j].Key
+-}
+-
+-func (ns Nodes) Swap(i, j int) {
+-	ns[i], ns[j] = ns[j], ns[i]
+-}
+diff --git a/Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/set_curl_chan_test.go b/Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/set_curl_chan_test.go
+deleted file mode 100644
+index 756e317..0000000
+--- a/Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/set_curl_chan_test.go
++++ /dev/null
+@@ -1,42 +0,0 @@
+-package etcd
+-
+-import (
+-	"fmt"
+-	"testing"
+-)
+-
+-func TestSetCurlChan(t *testing.T) {
+-	c := NewClient(nil)
+-	c.OpenCURL()
+-
+-	defer func() {
+-		c.Delete("foo", true)
+-	}()
+-
+-	_, err := c.Set("foo", "bar", 5)
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-
+-	expected := fmt.Sprintf("curl -X PUT %s/v2/keys/foo -d value=bar -d ttl=5",
+-		c.cluster.Leader)
+-	actual := c.RecvCURL()
+-	if expected != actual {
+-		t.Fatalf(`Command "%s" is not equal to expected value "%s"`,
+-			actual, expected)
+-	}
+-
+-	c.SetConsistency(STRONG_CONSISTENCY)
+-	_, err = c.Get("foo", false, false)
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-
+-	expected = fmt.Sprintf("curl -X GET %s/v2/keys/foo?consistent=true&recursive=false&sorted=false",
+-		c.cluster.Leader)
+-	actual = c.RecvCURL()
+-	if expected != actual {
+-		t.Fatalf(`Command "%s" is not equal to expected value "%s"`,
+-			actual, expected)
+-	}
+-}
+diff --git a/Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/set_update_create.go b/Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/set_update_create.go
+deleted file mode 100644
+index cb0d567..0000000
+--- a/Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/set_update_create.go
++++ /dev/null
+@@ -1,137 +0,0 @@
+-package etcd
+-
+-// Set sets the given key to the given value.
+-// It will create a new key value pair or replace the old one.
+-// It will not replace a existing directory.
+-func (c *Client) Set(key string, value string, ttl uint64) (*Response, error) {
+-	raw, err := c.RawSet(key, value, ttl)
+-
+-	if err != nil {
+-		return nil, err
+-	}
+-
+-	return raw.Unmarshal()
+-}
+-
+-// Set sets the given key to a directory.
+-// It will create a new directory or replace the old key value pair by a directory.
+-// It will not replace a existing directory.
+-func (c *Client) SetDir(key string, ttl uint64) (*Response, error) {
+-	raw, err := c.RawSetDir(key, ttl)
+-
+-	if err != nil {
+-		return nil, err
+-	}
+-
+-	return raw.Unmarshal()
+-}
+-
+-// CreateDir creates a directory. It succeeds only if
+-// the given key does not yet exist.
+-func (c *Client) CreateDir(key string, ttl uint64) (*Response, error) {
+-	raw, err := c.RawCreateDir(key, ttl)
+-
+-	if err != nil {
+-		return nil, err
+-	}
+-
+-	return raw.Unmarshal()
+-}
+-
+-// UpdateDir updates the given directory. It succeeds only if the
+-// given key already exists.
+-func (c *Client) UpdateDir(key string, ttl uint64) (*Response, error) {
+-	raw, err := c.RawUpdateDir(key, ttl)
+-
+-	if err != nil {
+-		return nil, err
+-	}
+-
+-	return raw.Unmarshal()
+-}
+-
+-// Create creates a file with the given value under the given key.  It succeeds
+-// only if the given key does not yet exist.
+-func (c *Client) Create(key string, value string, ttl uint64) (*Response, error) {
+-	raw, err := c.RawCreate(key, value, ttl)
+-
+-	if err != nil {
+-		return nil, err
+-	}
+-
+-	return raw.Unmarshal()
+-}
+-
+-// CreateInOrder creates a file with a key that's guaranteed to be higher than other
+-// keys in the given directory. It is useful for creating queues.
+-func (c *Client) CreateInOrder(dir string, value string, ttl uint64) (*Response, error) {
+-	raw, err := c.RawCreateInOrder(dir, value, ttl)
+-
+-	if err != nil {
+-		return nil, err
+-	}
+-
+-	return raw.Unmarshal()
+-}
+-
+-// Update updates the given key to the given value.  It succeeds only if the
+-// given key already exists.
+-func (c *Client) Update(key string, value string, ttl uint64) (*Response, error) {
+-	raw, err := c.RawUpdate(key, value, ttl)
+-
+-	if err != nil {
+-		return nil, err
+-	}
+-
+-	return raw.Unmarshal()
+-}
+-
+-func (c *Client) RawUpdateDir(key string, ttl uint64) (*RawResponse, error) {
+-	ops := Options{
+-		"prevExist": true,
+-		"dir":       true,
+-	}
+-
+-	return c.put(key, "", ttl, ops)
+-}
+-
+-func (c *Client) RawCreateDir(key string, ttl uint64) (*RawResponse, error) {
+-	ops := Options{
+-		"prevExist": false,
+-		"dir":       true,
+-	}
+-
+-	return c.put(key, "", ttl, ops)
+-}
+-
+-func (c *Client) RawSet(key string, value string, ttl uint64) (*RawResponse, error) {
+-	return c.put(key, value, ttl, nil)
+-}
+-
+-func (c *Client) RawSetDir(key string, ttl uint64) (*RawResponse, error) {
+-	ops := Options{
+-		"dir": true,
+-	}
+-
+-	return c.put(key, "", ttl, ops)
+-}
+-
+-func (c *Client) RawUpdate(key string, value string, ttl uint64) (*RawResponse, error) {
+-	ops := Options{
+-		"prevExist": true,
+-	}
+-
+-	return c.put(key, value, ttl, ops)
+-}
+-
+-func (c *Client) RawCreate(key string, value string, ttl uint64) (*RawResponse, error) {
+-	ops := Options{
+-		"prevExist": false,
+-	}
+-
+-	return c.put(key, value, ttl, ops)
+-}
+-
+-func (c *Client) RawCreateInOrder(dir string, value string, ttl uint64) (*RawResponse, error) {
+-	return c.post(dir, value, ttl)
+-}
+diff --git a/Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/set_update_create_test.go b/Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/set_update_create_test.go
+deleted file mode 100644
+index ced0f06..0000000
+--- a/Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/set_update_create_test.go
++++ /dev/null
+@@ -1,241 +0,0 @@
+-package etcd
+-
+-import (
+-	"testing"
+-)
+-
+-func TestSet(t *testing.T) {
+-	c := NewClient(nil)
+-	defer func() {
+-		c.Delete("foo", true)
+-	}()
+-
+-	resp, err := c.Set("foo", "bar", 5)
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-	if resp.Node.Key != "/foo" || resp.Node.Value != "bar" || resp.Node.TTL != 5 {
+-		t.Fatalf("Set 1 failed: %#v", resp)
+-	}
+-	if resp.PrevNode != nil {
+-		t.Fatalf("Set 1 PrevNode failed: %#v", resp)
+-	}
+-
+-	resp, err = c.Set("foo", "bar2", 5)
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-	if !(resp.Node.Key == "/foo" && resp.Node.Value == "bar2" && resp.Node.TTL == 5) {
+-		t.Fatalf("Set 2 failed: %#v", resp)
+-	}
+-	if resp.PrevNode.Key != "/foo" || resp.PrevNode.Value != "bar" || resp.Node.TTL != 5 {
+-		t.Fatalf("Set 2 PrevNode failed: %#v", resp)
+-	}
+-}
+-
+-func TestUpdate(t *testing.T) {
+-	c := NewClient(nil)
+-	defer func() {
+-		c.Delete("foo", true)
+-		c.Delete("nonexistent", true)
+-	}()
+-
+-	resp, err := c.Set("foo", "bar", 5)
+-
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-
+-	// This should succeed.
+-	resp, err = c.Update("foo", "wakawaka", 5)
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-
+-	if !(resp.Action == "update" && resp.Node.Key == "/foo" && resp.Node.TTL == 5) {
+-		t.Fatalf("Update 1 failed: %#v", resp)
+-	}
+-	if !(resp.PrevNode.Key == "/foo" && resp.PrevNode.Value == "bar" && resp.Node.TTL == 5) {
+-		t.Fatalf("Update 1 prevValue failed: %#v", resp)
+-	}
+-
+-	// This should fail because the key does not exist.
+-	resp, err = c.Update("nonexistent", "whatever", 5)
+-	if err == nil {
+-		t.Fatalf("The key %v did not exist, so the update should have failed."+
+-			"The response was: %#v", resp.Node.Key, resp)
+-	}
+-}
+-
+-func TestCreate(t *testing.T) {
+-	c := NewClient(nil)
+-	defer func() {
+-		c.Delete("newKey", true)
+-	}()
+-
+-	newKey := "/newKey"
+-	newValue := "/newValue"
+-
+-	// This should succeed
+-	resp, err := c.Create(newKey, newValue, 5)
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-
+-	if !(resp.Action == "create" && resp.Node.Key == newKey &&
+-		resp.Node.Value == newValue && resp.Node.TTL == 5) {
+-		t.Fatalf("Create 1 failed: %#v", resp)
+-	}
+-	if resp.PrevNode != nil {
+-		t.Fatalf("Create 1 PrevNode failed: %#v", resp)
+-	}
+-
+-	// This should fail, because the key is already there
+-	resp, err = c.Create(newKey, newValue, 5)
+-	if err == nil {
+-		t.Fatalf("The key %v did exist, so the creation should have failed."+
+-			"The response was: %#v", resp.Node.Key, resp)
+-	}
+-}
+-
+-func TestCreateInOrder(t *testing.T) {
+-	c := NewClient(nil)
+-	dir := "/queue"
+-	defer func() {
+-		c.DeleteDir(dir)
+-	}()
+-
+-	var firstKey, secondKey string
+-
+-	resp, err := c.CreateInOrder(dir, "1", 5)
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-
+-	if !(resp.Action == "create" && resp.Node.Value == "1" && resp.Node.TTL == 5) {
+-		t.Fatalf("Create 1 failed: %#v", resp)
+-	}
+-
+-	firstKey = resp.Node.Key
+-
+-	resp, err = c.CreateInOrder(dir, "2", 5)
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-
+-	if !(resp.Action == "create" && resp.Node.Value == "2" && resp.Node.TTL == 5) {
+-		t.Fatalf("Create 2 failed: %#v", resp)
+-	}
+-
+-	secondKey = resp.Node.Key
+-
+-	if firstKey >= secondKey {
+-		t.Fatalf("Expected first key to be greater than second key, but %s is not greater than %s",
+-			firstKey, secondKey)
+-	}
+-}
+-
+-func TestSetDir(t *testing.T) {
+-	c := NewClient(nil)
+-	defer func() {
+-		c.Delete("foo", true)
+-		c.Delete("fooDir", true)
+-	}()
+-
+-	resp, err := c.CreateDir("fooDir", 5)
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-	if !(resp.Node.Key == "/fooDir" && resp.Node.Value == "" && resp.Node.TTL == 5) {
+-		t.Fatalf("SetDir 1 failed: %#v", resp)
+-	}
+-	if resp.PrevNode != nil {
+-		t.Fatalf("SetDir 1 PrevNode failed: %#v", resp)
+-	}
+-
+-	// This should fail because /fooDir already points to a directory
+-	resp, err = c.CreateDir("/fooDir", 5)
+-	if err == nil {
+-		t.Fatalf("fooDir already points to a directory, so SetDir should have failed."+
+-			"The response was: %#v", resp)
+-	}
+-
+-	_, err = c.Set("foo", "bar", 5)
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-
+-	// This should succeed
+-	// It should replace the key
+-	resp, err = c.SetDir("foo", 5)
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-	if !(resp.Node.Key == "/foo" && resp.Node.Value == "" && resp.Node.TTL == 5) {
+-		t.Fatalf("SetDir 2 failed: %#v", resp)
+-	}
+-	if !(resp.PrevNode.Key == "/foo" && resp.PrevNode.Value == "bar" && resp.PrevNode.TTL == 5) {
+-		t.Fatalf("SetDir 2 failed: %#v", resp)
+-	}
+-}
+-
+-func TestUpdateDir(t *testing.T) {
+-	c := NewClient(nil)
+-	defer func() {
+-		c.Delete("fooDir", true)
+-	}()
+-
+-	resp, err := c.CreateDir("fooDir", 5)
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-
+-	// This should succeed.
+-	resp, err = c.UpdateDir("fooDir", 5)
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-
+-	if !(resp.Action == "update" && resp.Node.Key == "/fooDir" &&
+-		resp.Node.Value == "" && resp.Node.TTL == 5) {
+-		t.Fatalf("UpdateDir 1 failed: %#v", resp)
+-	}
+-	if !(resp.PrevNode.Key == "/fooDir" && resp.PrevNode.Dir == true && resp.PrevNode.TTL == 5) {
+-		t.Fatalf("UpdateDir 1 PrevNode failed: %#v", resp)
+-	}
+-
+-	// This should fail because the key does not exist.
+-	resp, err = c.UpdateDir("nonexistentDir", 5)
+-	if err == nil {
+-		t.Fatalf("The key %v did not exist, so the update should have failed."+
+-			"The response was: %#v", resp.Node.Key, resp)
+-	}
+-}
+-
+-func TestCreateDir(t *testing.T) {
+-	c := NewClient(nil)
+-	defer func() {
+-		c.Delete("fooDir", true)
+-	}()
+-
+-	// This should succeed
+-	resp, err := c.CreateDir("fooDir", 5)
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-
+-	if !(resp.Action == "create" && resp.Node.Key == "/fooDir" &&
+-		resp.Node.Value == "" && resp.Node.TTL == 5) {
+-		t.Fatalf("CreateDir 1 failed: %#v", resp)
+-	}
+-	if resp.PrevNode != nil {
+-		t.Fatalf("CreateDir 1 PrevNode failed: %#v", resp)
+-	}
+-
+-	// This should fail, because the key is already there
+-	resp, err = c.CreateDir("fooDir", 5)
+-	if err == nil {
+-		t.Fatalf("The key %v did exist, so the creation should have failed."+
+-			"The response was: %#v", resp.Node.Key, resp)
+-	}
+-}
+diff --git a/Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/version.go b/Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/version.go
+deleted file mode 100644
+index b3d05df..0000000
+--- a/Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/version.go
++++ /dev/null
+@@ -1,3 +0,0 @@
+-package etcd
+-
+-const version = "v2"
+diff --git a/Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/watch.go b/Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/watch.go
+deleted file mode 100644
+index aa8d3df..0000000
+--- a/Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/watch.go
++++ /dev/null
+@@ -1,103 +0,0 @@
+-package etcd
+-
+-import (
+-	"errors"
+-)
+-
+-// Errors introduced by the Watch command.
+-var (
+-	ErrWatchStoppedByUser = errors.New("Watch stopped by the user via stop channel")
+-)
+-
+-// If recursive is set to true the watch returns the first change under the given
+-// prefix since the given index.
+-//
+-// If recursive is set to false the watch returns the first change to the given key
+-// since the given index.
+-//
+-// To watch for the latest change, set waitIndex = 0.
+-//
+-// If a receiver channel is given, it will be a long-term watch. Watch will block at the
+-//channel. After someone receives the channel, it will go on to watch that
+-// prefix.  If a stop channel is given, the client can close long-term watch using
+-// the stop channel.
+-func (c *Client) Watch(prefix string, waitIndex uint64, recursive bool,
+-	receiver chan *Response, stop chan bool) (*Response, error) {
+-	logger.Debugf("watch %s [%s]", prefix, c.cluster.Leader)
+-	if receiver == nil {
+-		raw, err := c.watchOnce(prefix, waitIndex, recursive, stop)
+-
+-		if err != nil {
+-			return nil, err
+-		}
+-
+-		return raw.Unmarshal()
+-	}
+-	defer close(receiver)
+-
+-	for {
+-		raw, err := c.watchOnce(prefix, waitIndex, recursive, stop)
+-
+-		if err != nil {
+-			return nil, err
+-		}
+-
+-		resp, err := raw.Unmarshal()
+-
+-		if err != nil {
+-			return nil, err
+-		}
+-
+-		waitIndex = resp.Node.ModifiedIndex + 1
+-		receiver <- resp
+-	}
+-}
+-
+-func (c *Client) RawWatch(prefix string, waitIndex uint64, recursive bool,
+-	receiver chan *RawResponse, stop chan bool) (*RawResponse, error) {
+-
+-	logger.Debugf("rawWatch %s [%s]", prefix, c.cluster.Leader)
+-	if receiver == nil {
+-		return c.watchOnce(prefix, waitIndex, recursive, stop)
+-	}
+-
+-	for {
+-		raw, err := c.watchOnce(prefix, waitIndex, recursive, stop)
+-
+-		if err != nil {
+-			return nil, err
+-		}
+-
+-		resp, err := raw.Unmarshal()
+-
+-		if err != nil {
+-			return nil, err
+-		}
+-
+-		waitIndex = resp.Node.ModifiedIndex + 1
+-		receiver <- raw
+-	}
+-}
+-
+-// helper func
+-// return when there is change under the given prefix
+-func (c *Client) watchOnce(key string, waitIndex uint64, recursive bool, stop chan bool) (*RawResponse, error) {
+-
+-	options := Options{
+-		"wait": true,
+-	}
+-	if waitIndex > 0 {
+-		options["waitIndex"] = waitIndex
+-	}
+-	if recursive {
+-		options["recursive"] = true
+-	}
+-
+-	resp, err := c.getCancelable(key, options, stop)
+-
+-	if err == ErrRequestCancelled {
+-		return nil, ErrWatchStoppedByUser
+-	}
+-
+-	return resp, err
+-}
+diff --git a/Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/watch_test.go b/Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/watch_test.go
+deleted file mode 100644
+index 43e1dfe..0000000
+--- a/Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/watch_test.go
++++ /dev/null
+@@ -1,119 +0,0 @@
+-package etcd
+-
+-import (
+-	"fmt"
+-	"runtime"
+-	"testing"
+-	"time"
+-)
+-
+-func TestWatch(t *testing.T) {
+-	c := NewClient(nil)
+-	defer func() {
+-		c.Delete("watch_foo", true)
+-	}()
+-
+-	go setHelper("watch_foo", "bar", c)
+-
+-	resp, err := c.Watch("watch_foo", 0, false, nil, nil)
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-	if !(resp.Node.Key == "/watch_foo" && resp.Node.Value == "bar") {
+-		t.Fatalf("Watch 1 failed: %#v", resp)
+-	}
+-
+-	go setHelper("watch_foo", "bar", c)
+-
+-	resp, err = c.Watch("watch_foo", resp.Node.ModifiedIndex+1, false, nil, nil)
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-	if !(resp.Node.Key == "/watch_foo" && resp.Node.Value == "bar") {
+-		t.Fatalf("Watch 2 failed: %#v", resp)
+-	}
+-
+-	routineNum := runtime.NumGoroutine()
+-
+-	ch := make(chan *Response, 10)
+-	stop := make(chan bool, 1)
+-
+-	go setLoop("watch_foo", "bar", c)
+-
+-	go receiver(ch, stop)
+-
+-	_, err = c.Watch("watch_foo", 0, false, ch, stop)
+-	if err != ErrWatchStoppedByUser {
+-		t.Fatalf("Watch returned a non-user stop error")
+-	}
+-
+-	if newRoutineNum := runtime.NumGoroutine(); newRoutineNum != routineNum {
+-		t.Fatalf("Routine numbers differ after watch stop: %v, %v", routineNum, newRoutineNum)
+-	}
+-}
+-
+-func TestWatchAll(t *testing.T) {
+-	c := NewClient(nil)
+-	defer func() {
+-		c.Delete("watch_foo", true)
+-	}()
+-
+-	go setHelper("watch_foo/foo", "bar", c)
+-
+-	resp, err := c.Watch("watch_foo", 0, true, nil, nil)
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-	if !(resp.Node.Key == "/watch_foo/foo" && resp.Node.Value == "bar") {
+-		t.Fatalf("WatchAll 1 failed: %#v", resp)
+-	}
+-
+-	go setHelper("watch_foo/foo", "bar", c)
+-
+-	resp, err = c.Watch("watch_foo", resp.Node.ModifiedIndex+1, true, nil, nil)
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-	if !(resp.Node.Key == "/watch_foo/foo" && resp.Node.Value == "bar") {
+-		t.Fatalf("WatchAll 2 failed: %#v", resp)
+-	}
+-
+-	ch := make(chan *Response, 10)
+-	stop := make(chan bool, 1)
+-
+-	routineNum := runtime.NumGoroutine()
+-
+-	go setLoop("watch_foo/foo", "bar", c)
+-
+-	go receiver(ch, stop)
+-
+-	_, err = c.Watch("watch_foo", 0, true, ch, stop)
+-	if err != ErrWatchStoppedByUser {
+-		t.Fatalf("Watch returned a non-user stop error")
+-	}
+-
+-	if newRoutineNum := runtime.NumGoroutine(); newRoutineNum != routineNum {
+-		t.Fatalf("Routine numbers differ after watch stop: %v, %v", routineNum, newRoutineNum)
+-	}
+-}
+-
+-func setHelper(key, value string, c *Client) {
+-	time.Sleep(time.Second)
+-	c.Set(key, value, 100)
+-}
+-
+-func setLoop(key, value string, c *Client) {
+-	time.Sleep(time.Second)
+-	for i := 0; i < 10; i++ {
+-		newValue := fmt.Sprintf("%s_%v", value, i)
+-		c.Set(key, newValue, 100)
+-		time.Sleep(time.Second / 10)
+-	}
+-}
+-
+-func receiver(c chan *Response, stop chan bool) {
+-	for i := 0; i < 10; i++ {
+-		<-c
+-	}
+-	stop <- true
+-}
+diff --git a/Godeps/_workspace/src/github.com/fsouza/go-dockerclient/.travis.yml b/Godeps/_workspace/src/github.com/fsouza/go-dockerclient/.travis.yml
+deleted file mode 100644
+index 24bbadb..0000000
+--- a/Godeps/_workspace/src/github.com/fsouza/go-dockerclient/.travis.yml
++++ /dev/null
+@@ -1,13 +0,0 @@
+-language: go
+-go:
+-  - 1.1.2
+-  - 1.2
+-  - 1.3.1
+-  - tip
+-env:
+-  - GOARCH=amd64
+-  - GOARCH=386
+-install:
+-  - go get -d ./...
+-script:
+-  - go test ./...
+diff --git a/Godeps/_workspace/src/github.com/fsouza/go-dockerclient/AUTHORS b/Godeps/_workspace/src/github.com/fsouza/go-dockerclient/AUTHORS
+deleted file mode 100644
+index 0f08b9c..0000000
+--- a/Godeps/_workspace/src/github.com/fsouza/go-dockerclient/AUTHORS
++++ /dev/null
+@@ -1,41 +0,0 @@
+-# This is the official list of go-dockerclient authors for copyright purposes.
+-
+-Aldrin Leal <aldrin at leal.eng.br>
+-Andreas Jaekle <andreas at jaekle.net>
+-Andrews Medina <andrewsmedina at gmail.com>
+-Andy Goldstein <andy.goldstein at redhat.com>
+-Ben McCann <benmccann.com>
+-Cezar Sa Espinola <cezar.sa at corp.globo.com>
+-Cheah Chu Yeow <chuyeow at gmail.com>
+-cheneydeng <cheneydeng at qq.com>
+-Daniel, Dao Quang Minh <dqminh89 at gmail.com>
+-David Huie <dahuie at gmail.com>
+-Ed <edrocksit at gmail.com>
+-Eric Anderson <anderson at copperegg.com>
+-Fabio Rehm <fgrehm at gmail.com>
+-Flavia Missi <flaviamissi at gmail.com>
+-Francisco Souza <f at souza.cc>
+-Jari Kolehmainen <jari.kolehmainen at digia.com>
+-Jason Wilder <jwilder at litl.com>
+-Jean-Baptiste Dalido <jeanbaptiste at appgratis.com>
+-Jeff Mitchell <jeffrey.mitchell at gmail.com>
+-Jeffrey Hulten <jhulten at gmail.com>
+-Johan Euphrosine <proppy at google.com>
+-Karan Misra <kidoman at gmail.com>
+-Kim, Hirokuni <hirokuni.kim at kvh.co.jp>
+-Lucas Clemente <lucas at clemente.io>
+-Omeid Matten <public at omeid.me>
+-Paul Morie <pmorie at gmail.com>
+-Peter Jihoon Kim <raingrove at gmail.com>
+-Philippe Lafoucrière <philippe.lafoucriere at tech-angels.com>
+-Rafe Colton <r.colton at modcloth.com>
+-Salvador Gironès <salvadorgirones at gmail.com>
+-Simon Eskildsen <sirup at sirupsen.com>
+-Simon Menke <simon.menke at gmail.com>
+-Skolos <skolos at gopherlab.com>
+-Soulou <leo at unbekandt.eu>
+-Sridhar Ratnakumar <sridharr at activestate.com>
+-Summer Mousa <smousa at zenoss.com>
+-Tarsis Azevedo <tarsis at corp.globo.com>
+-Tim Schindler <tim at catalyst-zero.com>
+-Wiliam Souza <wiliamsouza83 at gmail.com>
+diff --git a/Godeps/_workspace/src/github.com/fsouza/go-dockerclient/DOCKER-LICENSE b/Godeps/_workspace/src/github.com/fsouza/go-dockerclient/DOCKER-LICENSE
+deleted file mode 100644
+index f4130a5..0000000
+--- a/Godeps/_workspace/src/github.com/fsouza/go-dockerclient/DOCKER-LICENSE
++++ /dev/null
+@@ -1,6 +0,0 @@
+-                                 Apache License
+-                           Version 2.0, January 2004
+-                        http://www.apache.org/licenses/
+-
+-You can find the Docker license int the following link:
+-https://raw2.github.com/dotcloud/docker/master/LICENSE
+diff --git a/Godeps/_workspace/src/github.com/fsouza/go-dockerclient/LICENSE b/Godeps/_workspace/src/github.com/fsouza/go-dockerclient/LICENSE
+deleted file mode 100644
+index 7a6d8bb..0000000
+--- a/Godeps/_workspace/src/github.com/fsouza/go-dockerclient/LICENSE
++++ /dev/null
+@@ -1,22 +0,0 @@
+-Copyright (c) 2014, go-dockerclient authors
+-All rights reserved.
+-
+-Redistribution and use in source and binary forms, with or without
+-modification, are permitted provided that the following conditions are met:
+-
+-  * Redistributions of source code must retain the above copyright notice,
+-this list of conditions and the following disclaimer.
+-  * Redistributions in binary form must reproduce the above copyright notice,
+-this list of conditions and the following disclaimer in the documentation
+-and/or other materials provided with the distribution.
+-
+-THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
+-ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+-WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+-DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
+-FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+-DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+-SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+-CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
+-OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+-OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+diff --git a/Godeps/_workspace/src/github.com/fsouza/go-dockerclient/README.markdown b/Godeps/_workspace/src/github.com/fsouza/go-dockerclient/README.markdown
+deleted file mode 100644
+index 0b7e83c..0000000
+--- a/Godeps/_workspace/src/github.com/fsouza/go-dockerclient/README.markdown
++++ /dev/null
+@@ -1,43 +0,0 @@
+-#go-dockerclient
+-
+-[![Build Status](https://drone.io/github.com/fsouza/go-dockerclient/status.png)](https://drone.io/github.com/fsouza/go-dockerclient/latest)
+-[![Build Status](https://travis-ci.org/fsouza/go-dockerclient.png)](https://travis-ci.org/fsouza/go-dockerclient)
+-
+-[![GoDoc](http://godoc.org/github.com/fsouza/go-dockerclient?status.png)](http://godoc.org/github.com/fsouza/go-dockerclient)
+-
+-This package presents a client for the Docker remote API.
+-
+-For more details, check the [remote API documentation](http://docs.docker.io/en/latest/reference/api/docker_remote_api/).
+-
+-## Example
+-
+-```go
+-package main
+-
+-import (
+-        "fmt"
+-        "github.com/fsouza/go-dockerclient"
+-)
+-
+-func main() {
+-        endpoint := "unix:///var/run/docker.sock"
+-        client, _ := docker.NewClient(endpoint)
+-        imgs, _ := client.ListImages(true)
+-        for _, img := range imgs {
+-                fmt.Println("ID: ", img.ID)
+-                fmt.Println("RepoTags: ", img.RepoTags)
+-                fmt.Println("Created: ", img.Created)
+-                fmt.Println("Size: ", img.Size)
+-                fmt.Println("VirtualSize: ", img.VirtualSize)
+-                fmt.Println("ParentId: ", img.ParentId)
+-                fmt.Println("Repository: ", img.Repository)
+-        }
+-}
+-```
+-
+-## Developing
+-
+-You can run the tests with:
+-
+-    go get -d ./...
+-    go test ./...
+diff --git a/Godeps/_workspace/src/github.com/fsouza/go-dockerclient/change.go b/Godeps/_workspace/src/github.com/fsouza/go-dockerclient/change.go
+deleted file mode 100644
+index 7926073..0000000
+--- a/Godeps/_workspace/src/github.com/fsouza/go-dockerclient/change.go
++++ /dev/null
+@@ -1,36 +0,0 @@
+-// Copyright 2014 go-dockerclient authors. All rights reserved.
+-// Use of this source code is governed by a BSD-style
+-// license that can be found in the LICENSE file.
+-
+-package docker
+-
+-import "fmt"
+-
+-type ChangeType int
+-
+-const (
+-	ChangeModify ChangeType = iota
+-	ChangeAdd
+-	ChangeDelete
+-)
+-
+-// Change represents a change in a container.
+-//
+-// See http://goo.gl/DpGyzK for more details.
+-type Change struct {
+-	Path string
+-	Kind ChangeType
+-}
+-
+-func (change *Change) String() string {
+-	var kind string
+-	switch change.Kind {
+-	case ChangeModify:
+-		kind = "C"
+-	case ChangeAdd:
+-		kind = "A"
+-	case ChangeDelete:
+-		kind = "D"
+-	}
+-	return fmt.Sprintf("%s %s", kind, change.Path)
+-}
+diff --git a/Godeps/_workspace/src/github.com/fsouza/go-dockerclient/change_test.go b/Godeps/_workspace/src/github.com/fsouza/go-dockerclient/change_test.go
+deleted file mode 100644
+index 7c2ec30..0000000
+--- a/Godeps/_workspace/src/github.com/fsouza/go-dockerclient/change_test.go
++++ /dev/null
+@@ -1,26 +0,0 @@
+-// Copyright 2014 go-dockerclient authors. All rights reserved.
+-// Use of this source code is governed by a BSD-style
+-// license that can be found in the LICENSE file.
+-
+-package docker
+-
+-import (
+-	"testing"
+-)
+-
+-func TestChangeString(t *testing.T) {
+-	var tests = []struct {
+-		change   Change
+-		expected string
+-	}{
+-		{Change{"/etc/passwd", ChangeModify}, "C /etc/passwd"},
+-		{Change{"/etc/passwd", ChangeAdd}, "A /etc/passwd"},
+-		{Change{"/etc/passwd", ChangeDelete}, "D /etc/passwd"},
+-		{Change{"/etc/passwd", 33}, " /etc/passwd"},
+-	}
+-	for _, tt := range tests {
+-		if got := tt.change.String(); got != tt.expected {
+-			t.Errorf("Change.String(): want %q. Got %q.", tt.expected, got)
+-		}
+-	}
+-}
+diff --git a/Godeps/_workspace/src/github.com/fsouza/go-dockerclient/client.go b/Godeps/_workspace/src/github.com/fsouza/go-dockerclient/client.go
+deleted file mode 100644
+index 436695d..0000000
+--- a/Godeps/_workspace/src/github.com/fsouza/go-dockerclient/client.go
++++ /dev/null
+@@ -1,536 +0,0 @@
+-// Copyright 2014 go-dockerclient authors. All rights reserved.
+-// Use of this source code is governed by a BSD-style
+-// license that can be found in the LICENSE file.
+-
+-// Package docker provides a client for the Docker remote API.
+-//
+-// See http://goo.gl/mxyql for more details on the remote API.
+-package docker
+-
+-import (
+-	"bytes"
+-	"encoding/json"
+-	"errors"
+-	"fmt"
+-	"io"
+-	"io/ioutil"
+-	"net"
+-	"net/http"
+-	"net/http/httputil"
+-	"net/url"
+-	"reflect"
+-	"strconv"
+-	"strings"
+-)
+-
+-const userAgent = "go-dockerclient"
+-
+-var (
+-	// ErrInvalidEndpoint is returned when the endpoint is not a valid HTTP URL.
+-	ErrInvalidEndpoint = errors.New("invalid endpoint")
+-
+-	// ErrConnectionRefused is returned when the client cannot connect to the given endpoint.
+-	ErrConnectionRefused = errors.New("cannot connect to Docker endpoint")
+-
+-	apiVersion_1_12, _ = NewApiVersion("1.12")
+-)
+-
+-// ApiVersion is an internal representation of a version of the Remote API.
+-type ApiVersion []int
+-
+-// NewApiVersion returns an instance of ApiVersion for the given string.
+-//
+-// The given string must be in the form <major>.<minor>.<patch>, where <major>,
+-// <minor> and <patch> are integer numbers.
+-func NewApiVersion(input string) (ApiVersion, error) {
+-	if !strings.Contains(input, ".") {
+-		return nil, fmt.Errorf("Unable to parse version %q", input)
+-	}
+-	arr := strings.Split(input, ".")
+-	ret := make(ApiVersion, len(arr))
+-	var err error
+-	for i, val := range arr {
+-		ret[i], err = strconv.Atoi(val)
+-		if err != nil {
+-			return nil, fmt.Errorf("Unable to parse version %q: %q is not an integer", input, val)
+-		}
+-	}
+-	return ret, nil
+-}
+-
+-func (version ApiVersion) String() string {
+-	var str string
+-	for i, val := range version {
+-		str += strconv.Itoa(val)
+-		if i < len(version)-1 {
+-			str += "."
+-		}
+-	}
+-	return str
+-}
+-
+-func (version ApiVersion) LessThan(other ApiVersion) bool {
+-	return version.compare(other) < 0
+-}
+-
+-func (version ApiVersion) LessThanOrEqualTo(other ApiVersion) bool {
+-	return version.compare(other) <= 0
+-}
+-
+-func (version ApiVersion) GreaterThan(other ApiVersion) bool {
+-	return version.compare(other) > 0
+-}
+-
+-func (version ApiVersion) GreaterThanOrEqualTo(other ApiVersion) bool {
+-	return version.compare(other) >= 0
+-}
+-
+-func (version ApiVersion) compare(other ApiVersion) int {
+-	for i, v := range version {
+-		if i <= len(other)-1 {
+-			otherVersion := other[i]
+-
+-			if v < otherVersion {
+-				return -1
+-			} else if v > otherVersion {
+-				return 1
+-			}
+-		}
+-	}
+-	if len(version) > len(other) {
+-		return 1
+-	}
+-	if len(version) < len(other) {
+-		return -1
+-	}
+-	return 0
+-}
+-
+-// Client is the basic type of this package. It provides methods for
+-// interaction with the API.
+-type Client struct {
+-	SkipServerVersionCheck bool
+-	HTTPClient             *http.Client
+-
+-	endpoint            string
+-	endpointURL         *url.URL
+-	eventMonitor        *eventMonitoringState
+-	requestedApiVersion ApiVersion
+-	serverApiVersion    ApiVersion
+-	expectedApiVersion  ApiVersion
+-}
+-
+-// NewClient returns a Client instance ready for communication with the given
+-// server endpoint. It will use the latest remote API version available in the
+-// server.
+-func NewClient(endpoint string) (*Client, error) {
+-	client, err := NewVersionedClient(endpoint, "")
+-	if err != nil {
+-		return nil, err
+-	}
+-	client.SkipServerVersionCheck = true
+-	return client, nil
+-}
+-
+-// NewVersionedClient returns a Client instance ready for communication with
+-// the given server endpoint, using a specific remote API version.
+-func NewVersionedClient(endpoint string, apiVersionString string) (*Client, error) {
+-	u, err := parseEndpoint(endpoint)
+-	if err != nil {
+-		return nil, err
+-	}
+-	var requestedApiVersion ApiVersion
+-	if strings.Contains(apiVersionString, ".") {
+-		requestedApiVersion, err = NewApiVersion(apiVersionString)
+-		if err != nil {
+-			return nil, err
+-		}
+-	}
+-	return &Client{
+-		HTTPClient:          http.DefaultClient,
+-		endpoint:            endpoint,
+-		endpointURL:         u,
+-		eventMonitor:        new(eventMonitoringState),
+-		requestedApiVersion: requestedApiVersion,
+-	}, nil
+-}
+-
+-func (c *Client) checkApiVersion() error {
+-	serverApiVersionString, err := c.getServerApiVersionString()
+-	if err != nil {
+-		return err
+-	}
+-	c.serverApiVersion, err = NewApiVersion(serverApiVersionString)
+-	if err != nil {
+-		return err
+-	}
+-	if c.requestedApiVersion == nil {
+-		c.expectedApiVersion = c.serverApiVersion
+-	} else {
+-		c.expectedApiVersion = c.requestedApiVersion
+-	}
+-	return nil
+-}
+-
+-// Ping pings the docker server
+-//
+-// See http://goo.gl/stJENm for more details.
+-func (c *Client) Ping() error {
+-	path := "/_ping"
+-	body, status, err := c.do("GET", path, nil)
+-	if err != nil {
+-		return err
+-	}
+-	if status != http.StatusOK {
+-		return newError(status, body)
+-	}
+-	return nil
+-}
+-
+-func (c *Client) getServerApiVersionString() (version string, err error) {
+-	body, status, err := c.do("GET", "/version", nil)
+-	if err != nil {
+-		return "", err
+-	}
+-	if status != http.StatusOK {
+-		return "", fmt.Errorf("Received unexpected status %d while trying to retrieve the server version", status)
+-	}
+-	var versionResponse map[string]string
+-	err = json.Unmarshal(body, &versionResponse)
+-	if err != nil {
+-		return "", err
+-	}
+-	version = versionResponse["ApiVersion"]
+-	return version, nil
+-}
+-
+-func (c *Client) do(method, path string, data interface{}) ([]byte, int, error) {
+-	var params io.Reader
+-	if data != nil {
+-		buf, err := json.Marshal(data)
+-		if err != nil {
+-			return nil, -1, err
+-		}
+-		params = bytes.NewBuffer(buf)
+-	}
+-	if path != "/version" && !c.SkipServerVersionCheck && c.expectedApiVersion == nil {
+-		err := c.checkApiVersion()
+-		if err != nil {
+-			return nil, -1, err
+-		}
+-	}
+-	req, err := http.NewRequest(method, c.getURL(path), params)
+-	if err != nil {
+-		return nil, -1, err
+-	}
+-	req.Header.Set("User-Agent", userAgent)
+-	if data != nil {
+-		req.Header.Set("Content-Type", "application/json")
+-	} else if method == "POST" {
+-		req.Header.Set("Content-Type", "plain/text")
+-	}
+-	var resp *http.Response
+-	protocol := c.endpointURL.Scheme
+-	address := c.endpointURL.Path
+-	if protocol == "unix" {
+-		dial, err := net.Dial(protocol, address)
+-		if err != nil {
+-			return nil, -1, err
+-		}
+-		defer dial.Close()
+-		clientconn := httputil.NewClientConn(dial, nil)
+-		resp, err = clientconn.Do(req)
+-		if err != nil {
+-			return nil, -1, err
+-		}
+-		defer clientconn.Close()
+-	} else {
+-		resp, err = c.HTTPClient.Do(req)
+-	}
+-	if err != nil {
+-		if strings.Contains(err.Error(), "connection refused") {
+-			return nil, -1, ErrConnectionRefused
+-		}
+-		return nil, -1, err
+-	}
+-	defer resp.Body.Close()
+-	body, err := ioutil.ReadAll(resp.Body)
+-	if err != nil {
+-		return nil, -1, err
+-	}
+-	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
+-		return nil, resp.StatusCode, newError(resp.StatusCode, body)
+-	}
+-	return body, resp.StatusCode, nil
+-}
+-
+-func (c *Client) stream(method, path string, setRawTerminal, rawJSONStream bool, headers map[string]string, in io.Reader, stdout, stderr io.Writer) error {
+-	if (method == "POST" || method == "PUT") && in == nil {
+-		in = bytes.NewReader(nil)
+-	}
+-	if path != "/version" && !c.SkipServerVersionCheck && c.expectedApiVersion == nil {
+-		err := c.checkApiVersion()
+-		if err != nil {
+-			return err
+-		}
+-	}
+-	req, err := http.NewRequest(method, c.getURL(path), in)
+-	if err != nil {
+-		return err
+-	}
+-	req.Header.Set("User-Agent", userAgent)
+-	if method == "POST" {
+-		req.Header.Set("Content-Type", "plain/text")
+-	}
+-	for key, val := range headers {
+-		req.Header.Set(key, val)
+-	}
+-	var resp *http.Response
+-	protocol := c.endpointURL.Scheme
+-	address := c.endpointURL.Path
+-	if stdout == nil {
+-		stdout = ioutil.Discard
+-	}
+-	if stderr == nil {
+-		stderr = ioutil.Discard
+-	}
+-	if protocol == "unix" {
+-		dial, err := net.Dial(protocol, address)
+-		if err != nil {
+-			return err
+-		}
+-		clientconn := httputil.NewClientConn(dial, nil)
+-		resp, err = clientconn.Do(req)
+-		defer clientconn.Close()
+-	} else {
+-		resp, err = c.HTTPClient.Do(req)
+-	}
+-	if err != nil {
+-		if strings.Contains(err.Error(), "connection refused") {
+-			return ErrConnectionRefused
+-		}
+-		return err
+-	}
+-	defer resp.Body.Close()
+-	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
+-		body, err := ioutil.ReadAll(resp.Body)
+-		if err != nil {
+-			return err
+-		}
+-		return newError(resp.StatusCode, body)
+-	}
+-	if resp.Header.Get("Content-Type") == "application/json" {
+-		// if we want to get raw json stream, just copy it back to output
+-		// without decoding it
+-		if rawJSONStream {
+-			_, err = io.Copy(stdout, resp.Body)
+-			return err
+-		}
+-		dec := json.NewDecoder(resp.Body)
+-		for {
+-			var m jsonMessage
+-			if err := dec.Decode(&m); err == io.EOF {
+-				break
+-			} else if err != nil {
+-				return err
+-			}
+-			if m.Stream != "" {
+-				fmt.Fprint(stdout, m.Stream)
+-			} else if m.Progress != "" {
+-				fmt.Fprintf(stdout, "%s %s\r", m.Status, m.Progress)
+-			} else if m.Error != "" {
+-				return errors.New(m.Error)
+-			}
+-			if m.Status != "" {
+-				fmt.Fprintln(stdout, m.Status)
+-			}
+-		}
+-	} else {
+-		if setRawTerminal {
+-			_, err = io.Copy(stdout, resp.Body)
+-		} else {
+-			_, err = stdCopy(stdout, stderr, resp.Body)
+-		}
+-		return err
+-	}
+-	return nil
+-}
+-
+-func (c *Client) hijack(method, path string, success chan struct{}, setRawTerminal bool, in io.Reader, stderr, stdout io.Writer) error {
+-	if path != "/version" && !c.SkipServerVersionCheck && c.expectedApiVersion == nil {
+-		err := c.checkApiVersion()
+-		if err != nil {
+-			return err
+-		}
+-	}
+-	if stdout == nil {
+-		stdout = ioutil.Discard
+-	}
+-	if stderr == nil {
+-		stderr = ioutil.Discard
+-	}
+-	req, err := http.NewRequest(method, c.getURL(path), nil)
+-	if err != nil {
+-		return err
+-	}
+-	req.Header.Set("Content-Type", "plain/text")
+-	protocol := c.endpointURL.Scheme
+-	address := c.endpointURL.Path
+-	if protocol != "unix" {
+-		protocol = "tcp"
+-		address = c.endpointURL.Host
+-	}
+-	dial, err := net.Dial(protocol, address)
+-	if err != nil {
+-		return err
+-	}
+-	defer dial.Close()
+-	clientconn := httputil.NewClientConn(dial, nil)
+-	clientconn.Do(req)
+-	if success != nil {
+-		success <- struct{}{}
+-		<-success
+-	}
+-	rwc, br := clientconn.Hijack()
+-	errs := make(chan error, 2)
+-	exit := make(chan bool)
+-	go func() {
+-		defer close(exit)
+-		var err error
+-		if setRawTerminal {
+-			_, err = io.Copy(stdout, br)
+-		} else {
+-			_, err = stdCopy(stdout, stderr, br)
+-		}
+-		errs <- err
+-	}()
+-	go func() {
+-		var err error
+-		if in != nil {
+-			_, err = io.Copy(rwc, in)
+-		}
+-		rwc.(interface {
+-			CloseWrite() error
+-		}).CloseWrite()
+-		errs <- err
+-	}()
+-	<-exit
+-	return <-errs
+-}
+-
+-func (c *Client) getURL(path string) string {
+-	urlStr := strings.TrimRight(c.endpointURL.String(), "/")
+-	if c.endpointURL.Scheme == "unix" {
+-		urlStr = ""
+-	}
+-
+-	if c.requestedApiVersion != nil {
+-		return fmt.Sprintf("%s/v%s%s", urlStr, c.requestedApiVersion, path)
+-	} else {
+-		return fmt.Sprintf("%s%s", urlStr, path)
+-	}
+-}
+-
+-type jsonMessage struct {
+-	Status   string `json:"status,omitempty"`
+-	Progress string `json:"progress,omitempty"`
+-	Error    string `json:"error,omitempty"`
+-	Stream   string `json:"stream,omitempty"`
+-}
+-
+-func queryString(opts interface{}) string {
+-	if opts == nil {
+-		return ""
+-	}
+-	value := reflect.ValueOf(opts)
+-	if value.Kind() == reflect.Ptr {
+-		value = value.Elem()
+-	}
+-	if value.Kind() != reflect.Struct {
+-		return ""
+-	}
+-	items := url.Values(map[string][]string{})
+-	for i := 0; i < value.NumField(); i++ {
+-		field := value.Type().Field(i)
+-		if field.PkgPath != "" {
+-			continue
+-		}
+-		key := field.Tag.Get("qs")
+-		if key == "" {
+-			key = strings.ToLower(field.Name)
+-		} else if key == "-" {
+-			continue
+-		}
+-		v := value.Field(i)
+-		switch v.Kind() {
+-		case reflect.Bool:
+-			if v.Bool() {
+-				items.Add(key, "1")
+-			}
+-		case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
+-			if v.Int() > 0 {
+-				items.Add(key, strconv.FormatInt(v.Int(), 10))
+-			}
+-		case reflect.Float32, reflect.Float64:
+-			if v.Float() > 0 {
+-				items.Add(key, strconv.FormatFloat(v.Float(), 'f', -1, 64))
+-			}
+-		case reflect.String:
+-			if v.String() != "" {
+-				items.Add(key, v.String())
+-			}
+-		case reflect.Ptr:
+-			if !v.IsNil() {
+-				if b, err := json.Marshal(v.Interface()); err == nil {
+-					items.Add(key, string(b))
+-				}
+-			}
+-		}
+-	}
+-	return items.Encode()
+-}
+-
+-// Error represents a failure returned by the API.
+-type Error struct {
+-	Status  int
+-	Message string
+-}
+-
+-func newError(status int, body []byte) *Error {
+-	return &Error{Status: status, Message: string(body)}
+-}
+-
+-func (e *Error) Error() string {
+-	return fmt.Sprintf("API error (%d): %s", e.Status, e.Message)
+-}
+-
+-func parseEndpoint(endpoint string) (*url.URL, error) {
+-	u, err := url.Parse(endpoint)
+-	if err != nil {
+-		return nil, ErrInvalidEndpoint
+-	}
+-	if u.Scheme == "tcp" {
+-		u.Scheme = "http"
+-	}
+-	if u.Scheme != "http" && u.Scheme != "https" && u.Scheme != "unix" {
+-		return nil, ErrInvalidEndpoint
+-	}
+-	if u.Scheme != "unix" {
+-		_, port, err := net.SplitHostPort(u.Host)
+-		if err != nil {
+-			if e, ok := err.(*net.AddrError); ok {
+-				if e.Err == "missing port in address" {
+-					return u, nil
+-				}
+-			}
+-			return nil, ErrInvalidEndpoint
+-		}
+-		number, err := strconv.ParseInt(port, 10, 64)
+-		if err == nil && number > 0 && number < 65536 {
+-			return u, nil
+-		}
+-	} else {
+-		return u, nil // we don't need port when using a unix socket
+-	}
+-	return nil, ErrInvalidEndpoint
+-}
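`parseEndpoint` above accepts `tcp`/`http`/`https`/`unix` schemes, rewrites `tcp` to `http`, skips the port check for unix sockets, and tolerates a missing port. A condensed boolean sketch of those rules (`validEndpoint` is a hypothetical helper, not the removed package's API):

```go
package main

import (
	"fmt"
	"net"
	"net/url"
	"strconv"
)

// validEndpoint reports whether an endpoint would pass the checks in
// parseEndpoint: scheme whitelist, tcp->http rewrite, port in 1..65535.
func validEndpoint(endpoint string) bool {
	u, err := url.Parse(endpoint)
	if err != nil {
		return false
	}
	if u.Scheme == "tcp" {
		u.Scheme = "http"
	}
	switch u.Scheme {
	case "unix":
		return true // no port needed for a unix socket
	case "http", "https":
	default:
		return false
	}
	_, port, err := net.SplitHostPort(u.Host)
	if err != nil {
		// A bare host with no port is accepted.
		if e, ok := err.(*net.AddrError); ok && e.Err == "missing port in address" {
			return true
		}
		return false
	}
	n, err := strconv.Atoi(port)
	return err == nil && n > 0 && n < 65536
}

func main() {
	fmt.Println(validEndpoint("tcp://localhost:4243")) // true
	fmt.Println(validEndpoint("localhost:8080"))       // false: "localhost" parses as the scheme
}
```

The `localhost:8080` case is the subtle one: without `//`, `url.Parse` treats `localhost` as the scheme, which is why the test table below rejects it.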
+diff --git a/Godeps/_workspace/src/github.com/fsouza/go-dockerclient/client_test.go b/Godeps/_workspace/src/github.com/fsouza/go-dockerclient/client_test.go
+deleted file mode 100644
+index 9def171..0000000
+--- a/Godeps/_workspace/src/github.com/fsouza/go-dockerclient/client_test.go
++++ /dev/null
+@@ -1,290 +0,0 @@
+-// Copyright 2014 go-dockerclient authors. All rights reserved.
+-// Use of this source code is governed by a BSD-style
+-// license that can be found in the LICENSE file.
+-
+-package docker
+-
+-import (
+-	"fmt"
+-	"io/ioutil"
+-	"net/http"
+-	"net/url"
+-	"reflect"
+-	"strconv"
+-	"strings"
+-	"testing"
+-)
+-
+-func TestNewAPIClient(t *testing.T) {
+-	endpoint := "http://localhost:4243"
+-	client, err := NewClient(endpoint)
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-	if client.endpoint != endpoint {
+-		t.Errorf("Expected endpoint %s. Got %s.", endpoint, client.endpoint)
+-	}
+-	if client.HTTPClient != http.DefaultClient {
+-		t.Errorf("Expected http.Client %#v. Got %#v.", http.DefaultClient, client.HTTPClient)
+-	}
+-	// test unix socket endpoints
+-	endpoint = "unix:///var/run/docker.sock"
+-	client, err = NewClient(endpoint)
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-	if client.endpoint != endpoint {
+-		t.Errorf("Expected endpoint %s. Got %s.", endpoint, client.endpoint)
+-	}
+-	if !client.SkipServerVersionCheck {
+-		t.Error("Expected SkipServerVersionCheck to be true, got false")
+-	}
+-	if client.requestedApiVersion != nil {
+-		t.Errorf("Expected requestedApiVersion to be nil, got %#v.", client.requestedApiVersion)
+-	}
+-}
+-
+-func TestNewVersionedClient(t *testing.T) {
+-	endpoint := "http://localhost:4243"
+-	client, err := NewVersionedClient(endpoint, "1.12")
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-	if client.endpoint != endpoint {
+-		t.Errorf("Expected endpoint %s. Got %s.", endpoint, client.endpoint)
+-	}
+-	if client.HTTPClient != http.DefaultClient {
+-		t.Errorf("Expected http.Client %#v. Got %#v.", http.DefaultClient, client.HTTPClient)
+-	}
+-	if reqVersion := client.requestedApiVersion.String(); reqVersion != "1.12" {
+-		t.Errorf("Wrong requestApiVersion. Want %q. Got %q.", "1.12", reqVersion)
+-	}
+-	if client.SkipServerVersionCheck {
+-		t.Error("Expected SkipServerVersionCheck to be false, got true")
+-	}
+-}
+-
+-func TestNewClientInvalidEndpoint(t *testing.T) {
+-	cases := []string{
+-		"htp://localhost:3243", "http://localhost:a", "localhost:8080",
+-		"", "localhost", "http://localhost:8080:8383", "http://localhost:65536",
+-		"https://localhost:-20",
+-	}
+-	for _, c := range cases {
+-		client, err := NewClient(c)
+-		if client != nil {
+-			t.Errorf("Want <nil> client for invalid endpoint, got %#v.", client)
+-		}
+-		if !reflect.DeepEqual(err, ErrInvalidEndpoint) {
+-			t.Errorf("NewClient(%q): Got invalid error for invalid endpoint. Want %#v. Got %#v.", c, ErrInvalidEndpoint, err)
+-		}
+-	}
+-}
+-
+-func TestGetURL(t *testing.T) {
+-	var tests = []struct {
+-		endpoint string
+-		path     string
+-		expected string
+-	}{
+-		{"http://localhost:4243/", "/", "http://localhost:4243/"},
+-		{"http://localhost:4243", "/", "http://localhost:4243/"},
+-		{"http://localhost:4243", "/containers/ps", "http://localhost:4243/containers/ps"},
+-		{"tcp://localhost:4243", "/containers/ps", "http://localhost:4243/containers/ps"},
+-		{"http://localhost:4243/////", "/", "http://localhost:4243/"},
+-		{"unix:///var/run/docker.socket", "/containers", "/containers"},
+-	}
+-	for _, tt := range tests {
+-		client, _ := NewClient(tt.endpoint)
+-		client.endpoint = tt.endpoint
+-		client.SkipServerVersionCheck = true
+-		got := client.getURL(tt.path)
+-		if got != tt.expected {
+-			t.Errorf("getURL(%q): Got %s. Want %s.", tt.path, got, tt.expected)
+-		}
+-	}
+-}
+-
+-func TestError(t *testing.T) {
+-	err := newError(400, []byte("bad parameter"))
+-	expected := Error{Status: 400, Message: "bad parameter"}
+-	if !reflect.DeepEqual(expected, *err) {
+-		t.Errorf("Wrong error type. Want %#v. Got %#v.", expected, *err)
+-	}
+-	message := "API error (400): bad parameter"
+-	if err.Error() != message {
+-		t.Errorf("Wrong error message. Want %q. Got %q.", message, err.Error())
+-	}
+-}
+-
+-func TestQueryString(t *testing.T) {
+-	v := float32(2.4)
+-	f32QueryString := fmt.Sprintf("w=%s&x=10&y=10.35", strconv.FormatFloat(float64(v), 'f', -1, 64))
+-	jsonPerson := url.QueryEscape(`{"Name":"gopher","age":4}`)
+-	var tests = []struct {
+-		input interface{}
+-		want  string
+-	}{
+-		{&ListContainersOptions{All: true}, "all=1"},
+-		{ListContainersOptions{All: true}, "all=1"},
+-		{ListContainersOptions{Before: "something"}, "before=something"},
+-		{ListContainersOptions{Before: "something", Since: "other"}, "before=something&since=other"},
+-		{dumb{X: 10, Y: 10.35000}, "x=10&y=10.35"},
+-		{dumb{W: v, X: 10, Y: 10.35000}, f32QueryString},
+-		{dumb{X: 10, Y: 10.35000, Z: 10}, "x=10&y=10.35&zee=10"},
+-		{dumb{v: 4, X: 10, Y: 10.35000}, "x=10&y=10.35"},
+-		{dumb{T: 10, Y: 10.35000}, "y=10.35"},
+-		{dumb{Person: &person{Name: "gopher", Age: 4}}, "p=" + jsonPerson},
+-		{nil, ""},
+-		{10, ""},
+-		{"not_a_struct", ""},
+-	}
+-	for _, tt := range tests {
+-		got := queryString(tt.input)
+-		if got != tt.want {
+-			t.Errorf("queryString(%v). Want %q. Got %q.", tt.input, tt.want, got)
+-		}
+-	}
+-}
+-
+-func TestNewApiVersionFailures(t *testing.T) {
+-	var tests = []struct {
+-		input         string
+-		expectedError string
+-	}{
+-		{"1-0", `Unable to parse version "1-0"`},
+-		{"1.0-beta", `Unable to parse version "1.0-beta": "0-beta" is not an integer`},
+-	}
+-	for _, tt := range tests {
+-		v, err := NewApiVersion(tt.input)
+-		if v != nil {
+-			t.Errorf("Expected <nil> version, got %v.", v)
+-		}
+-		if err.Error() != tt.expectedError {
+-			t.Errorf("NewApiVersion(%q): wrong error. Want %q. Got %q", tt.input, tt.expectedError, err.Error())
+-		}
+-	}
+-}
+-
+-func TestApiVersions(t *testing.T) {
+-	var tests = []struct {
+-		a                              string
+-		b                              string
+-		expectedALessThanB             bool
+-		expectedALessThanOrEqualToB    bool
+-		expectedAGreaterThanB          bool
+-		expectedAGreaterThanOrEqualToB bool
+-	}{
+-		{"1.11", "1.11", false, true, false, true},
+-		{"1.10", "1.11", true, true, false, false},
+-		{"1.11", "1.10", false, false, true, true},
+-
+-		{"1.9", "1.11", true, true, false, false},
+-		{"1.11", "1.9", false, false, true, true},
+-
+-		{"1.1.1", "1.1", false, false, true, true},
+-		{"1.1", "1.1.1", true, true, false, false},
+-
+-		{"2.1", "1.1.1", false, false, true, true},
+-		{"2.1", "1.3.1", false, false, true, true},
+-		{"1.1.1", "2.1", true, true, false, false},
+-		{"1.3.1", "2.1", true, true, false, false},
+-	}
+-
+-	for _, tt := range tests {
+-		a, _ := NewApiVersion(tt.a)
+-		b, _ := NewApiVersion(tt.b)
+-
+-		if tt.expectedALessThanB && !a.LessThan(b) {
+-			t.Errorf("Expected %#v < %#v", a, b)
+-		}
+-		if tt.expectedALessThanOrEqualToB && !a.LessThanOrEqualTo(b) {
+-			t.Errorf("Expected %#v <= %#v", a, b)
+-		}
+-		if tt.expectedAGreaterThanB && !a.GreaterThan(b) {
+-			t.Errorf("Expected %#v > %#v", a, b)
+-		}
+-		if tt.expectedAGreaterThanOrEqualToB && !a.GreaterThanOrEqualTo(b) {
+-			t.Errorf("Expected %#v >= %#v", a, b)
+-		}
+-	}
+-}
+-
+-func TestPing(t *testing.T) {
+-	fakeRT := &FakeRoundTripper{message: "", status: http.StatusOK}
+-	client := newTestClient(fakeRT)
+-	err := client.Ping()
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-}
+-
+-func TestPingFailing(t *testing.T) {
+-	fakeRT := &FakeRoundTripper{message: "", status: http.StatusInternalServerError}
+-	client := newTestClient(fakeRT)
+-	err := client.Ping()
+-	if err == nil {
+-		t.Fatal("Expected non nil error, got nil")
+-	}
+-	expectedErrMsg := "API error (500): "
+-	if err.Error() != expectedErrMsg {
+-		t.Fatalf("Expected error to be %q, got: %q", expectedErrMsg, err.Error())
+-	}
+-}
+-
+-func TestPingFailingWrongStatus(t *testing.T) {
+-	fakeRT := &FakeRoundTripper{message: "", status: http.StatusAccepted}
+-	client := newTestClient(fakeRT)
+-	err := client.Ping()
+-	if err == nil {
+-		t.Fatal("Expected non nil error, got nil")
+-	}
+-	expectedErrMsg := "API error (202): "
+-	if err.Error() != expectedErrMsg {
+-		t.Fatalf("Expected error to be %q, got: %q", expectedErrMsg, err.Error())
+-	}
+-}
+-
+-type FakeRoundTripper struct {
+-	message  string
+-	status   int
+-	header   map[string]string
+-	requests []*http.Request
+-}
+-
+-func (rt *FakeRoundTripper) RoundTrip(r *http.Request) (*http.Response, error) {
+-	body := strings.NewReader(rt.message)
+-	rt.requests = append(rt.requests, r)
+-	res := &http.Response{
+-		StatusCode: rt.status,
+-		Body:       ioutil.NopCloser(body),
+-		Header:     make(http.Header),
+-	}
+-	for k, v := range rt.header {
+-		res.Header.Set(k, v)
+-	}
+-	return res, nil
+-}
+-
+-func (rt *FakeRoundTripper) Reset() {
+-	rt.requests = nil
+-}
+-
+-type person struct {
+-	Name string
+-	Age  int `json:"age"`
+-}
+-
+-type dumb struct {
+-	T      int `qs:"-"`
+-	v      int
+-	W      float32
+-	X      int
+-	Y      float64
+-	Z      int     `qs:"zee"`
+-	Person *person `qs:"p"`
+-}
+-
+-type fakeEndpointURL struct {
+-	Scheme string
+-}
+diff --git a/Godeps/_workspace/src/github.com/fsouza/go-dockerclient/container.go b/Godeps/_workspace/src/github.com/fsouza/go-dockerclient/container.go
+deleted file mode 100644
+index 3e3556b..0000000
+--- a/Godeps/_workspace/src/github.com/fsouza/go-dockerclient/container.go
++++ /dev/null
+@@ -1,693 +0,0 @@
+-// Copyright 2014 go-dockerclient authors. All rights reserved.
+-// Use of this source code is governed by a BSD-style
+-// license that can be found in the LICENSE file.
+-
+-package docker
+-
+-import (
+-	"bytes"
+-	"encoding/json"
+-	"fmt"
+-	"io"
+-	"net/http"
+-	"net/url"
+-	"strconv"
+-	"strings"
+-	"time"
+-)
+-
+-// ListContainersOptions specify parameters to the ListContainers function.
+-//
+-// See http://goo.gl/QpCnDN for more details.
+-type ListContainersOptions struct {
+-	All    bool
+-	Size   bool
+-	Limit  int
+-	Since  string
+-	Before string
+-}
+-
+-type APIPort struct {
+-	PrivatePort int64  `json:"PrivatePort,omitempty" yaml:"PrivatePort,omitempty"`
+-	PublicPort  int64  `json:"PublicPort,omitempty" yaml:"PublicPort,omitempty"`
+-	Type        string `json:"Type,omitempty" yaml:"Type,omitempty"`
+-	IP          string `json:"IP,omitempty" yaml:"IP,omitempty"`
+-}
+-
+-// APIContainers represents a container.
+-//
+-// See http://goo.gl/QeFH7U for more details.
+-type APIContainers struct {
+-	ID         string    `json:"Id" yaml:"Id"`
+-	Image      string    `json:"Image,omitempty" yaml:"Image,omitempty"`
+-	Command    string    `json:"Command,omitempty" yaml:"Command,omitempty"`
+-	Created    int64     `json:"Created,omitempty" yaml:"Created,omitempty"`
+-	Status     string    `json:"Status,omitempty" yaml:"Status,omitempty"`
+-	Ports      []APIPort `json:"Ports,omitempty" yaml:"Ports,omitempty"`
+-	SizeRw     int64     `json:"SizeRw,omitempty" yaml:"SizeRw,omitempty"`
+-	SizeRootFs int64     `json:"SizeRootFs,omitempty" yaml:"SizeRootFs,omitempty"`
+-	Names      []string  `json:"Names,omitempty" yaml:"Names,omitempty"`
+-}
+-
+-// ListContainers returns a slice of containers matching the given criteria.
+-//
+-// See http://goo.gl/QpCnDN for more details.
+-func (c *Client) ListContainers(opts ListContainersOptions) ([]APIContainers, error) {
+-	path := "/containers/json?" + queryString(opts)
+-	body, _, err := c.do("GET", path, nil)
+-	if err != nil {
+-		return nil, err
+-	}
+-	var containers []APIContainers
+-	err = json.Unmarshal(body, &containers)
+-	if err != nil {
+-		return nil, err
+-	}
+-	return containers, nil
+-}
+-
+-// Port represents the port number and the protocol, in the form
+-// <number>/<protocol>. For example: 80/tcp.
+-type Port string
+-
+-// Port returns the number of the port.
+-func (p Port) Port() string {
+-	return strings.Split(string(p), "/")[0]
+-}
+-
+-// Proto returns the name of the protocol.
+-func (p Port) Proto() string {
+-	parts := strings.Split(string(p), "/")
+-	if len(parts) == 1 {
+-		return "tcp"
+-	}
+-	return parts[1]
+-}
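The `Port` helpers above split `"<number>/<protocol>"` and default the protocol to `"tcp"`. The same behavior as one standalone function (`splitPort` is an illustrative name):

```go
package main

import (
	"fmt"
	"strings"
)

// splitPort splits "80/tcp" into ("80", "tcp"); a bare "80"
// defaults to the tcp protocol, matching Port.Proto above.
func splitPort(p string) (number, proto string) {
	parts := strings.SplitN(p, "/", 2)
	if len(parts) == 1 {
		return parts[0], "tcp"
	}
	return parts[0], parts[1]
}

func main() {
	n, pr := splitPort("53/udp")
	fmt.Println(n, pr) // 53 udp
}
```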
+-
+-// State represents the state of a container.
+-type State struct {
+-	Running    bool      `json:"Running,omitempty" yaml:"Running,omitempty"`
+-	Paused     bool      `json:"Paused,omitempty" yaml:"Paused,omitempty"`
+-	Pid        int       `json:"Pid,omitempty" yaml:"Pid,omitempty"`
+-	ExitCode   int       `json:"ExitCode,omitempty" yaml:"ExitCode,omitempty"`
+-	StartedAt  time.Time `json:"StartedAt,omitempty" yaml:"StartedAt,omitempty"`
+-	FinishedAt time.Time `json:"FinishedAt,omitempty" yaml:"FinishedAt,omitempty"`
+-}
+-
+-// String returns the string representation of a state.
+-func (s *State) String() string {
+-	if s.Running {
+-		if s.Paused {
+-			return "paused"
+-		}
+-		return fmt.Sprintf("Up %s", time.Now().UTC().Sub(s.StartedAt))
+-	}
+-	return fmt.Sprintf("Exit %d", s.ExitCode)
+-}
+-
+-type PortBinding struct {
+-	HostIp   string `json:"HostIP,omitempty" yaml:"HostIP,omitempty"`
+-	HostPort string `json:"HostPort,omitempty" yaml:"HostPort,omitempty"`
+-}
+-
+-type PortMapping map[string]string
+-
+-type NetworkSettings struct {
+-	IPAddress   string                 `json:"IPAddress,omitempty" yaml:"IPAddress,omitempty"`
+-	IPPrefixLen int                    `json:"IPPrefixLen,omitempty" yaml:"IPPrefixLen,omitempty"`
+-	Gateway     string                 `json:"Gateway,omitempty" yaml:"Gateway,omitempty"`
+-	Bridge      string                 `json:"Bridge,omitempty" yaml:"Bridge,omitempty"`
+-	PortMapping map[string]PortMapping `json:"PortMapping,omitempty" yaml:"PortMapping,omitempty"`
+-	Ports       map[Port][]PortBinding `json:"Ports,omitempty" yaml:"Ports,omitempty"`
+-}
+-
+-func (settings *NetworkSettings) PortMappingAPI() []APIPort {
+-	var mapping []APIPort
+-	for port, bindings := range settings.Ports {
+-		p, _ := parsePort(port.Port())
+-		if len(bindings) == 0 {
+-			mapping = append(mapping, APIPort{
+-				PublicPort: int64(p),
+-				Type:       port.Proto(),
+-			})
+-			continue
+-		}
+-		for _, binding := range bindings {
+-			p, _ := parsePort(port.Port())
+-			h, _ := parsePort(binding.HostPort)
+-			mapping = append(mapping, APIPort{
+-				PrivatePort: int64(p),
+-				PublicPort:  int64(h),
+-				Type:        port.Proto(),
+-				IP:          binding.HostIp,
+-			})
+-		}
+-	}
+-	return mapping
+-}
+-
+-func parsePort(rawPort string) (int, error) {
+-	port, err := strconv.ParseUint(rawPort, 10, 16)
+-	if err != nil {
+-		return 0, err
+-	}
+-	return int(port), nil
+-}
+-
+-type Config struct {
+-	Hostname        string              `json:"Hostname,omitempty" yaml:"Hostname,omitempty"`
+-	Domainname      string              `json:"Domainname,omitempty" yaml:"Domainname,omitempty"`
+-	User            string              `json:"User,omitempty" yaml:"User,omitempty"`
+-	Memory          int64               `json:"Memory,omitempty" yaml:"Memory,omitempty"`
+-	MemorySwap      int64               `json:"MemorySwap,omitempty" yaml:"MemorySwap,omitempty"`
+-	CpuShares       int64               `json:"CpuShares,omitempty" yaml:"CpuShares,omitempty"`
+-	AttachStdin     bool                `json:"AttachStdin,omitempty" yaml:"AttachStdin,omitempty"`
+-	AttachStdout    bool                `json:"AttachStdout,omitempty" yaml:"AttachStdout,omitempty"`
+-	AttachStderr    bool                `json:"AttachStderr,omitempty" yaml:"AttachStderr,omitempty"`
+-	PortSpecs       []string            `json:"PortSpecs,omitempty" yaml:"PortSpecs,omitempty"`
+-	ExposedPorts    map[Port]struct{}   `json:"ExposedPorts,omitempty" yaml:"ExposedPorts,omitempty"`
+-	Tty             bool                `json:"Tty,omitempty" yaml:"Tty,omitempty"`
+-	OpenStdin       bool                `json:"OpenStdin,omitempty" yaml:"OpenStdin,omitempty"`
+-	StdinOnce       bool                `json:"StdinOnce,omitempty" yaml:"StdinOnce,omitempty"`
+-	Env             []string            `json:"Env,omitempty" yaml:"Env,omitempty"`
+-	Cmd             []string            `json:"Cmd,omitempty" yaml:"Cmd,omitempty"`
+-	Dns             []string            `json:"Dns,omitempty" yaml:"Dns,omitempty"` // For Docker API v1.9 and below only
+-	Image           string              `json:"Image,omitempty" yaml:"Image,omitempty"`
+-	Volumes         map[string]struct{} `json:"Volumes,omitempty" yaml:"Volumes,omitempty"`
+-	VolumesFrom     string              `json:"VolumesFrom,omitempty" yaml:"VolumesFrom,omitempty"`
+-	WorkingDir      string              `json:"WorkingDir,omitempty" yaml:"WorkingDir,omitempty"`
+-	Entrypoint      []string            `json:"Entrypoint,omitempty" yaml:"Entrypoint,omitempty"`
+-	NetworkDisabled bool                `json:"NetworkDisabled,omitempty" yaml:"NetworkDisabled,omitempty"`
+-}
+-
+-type Container struct {
+-	ID string `json:"Id" yaml:"Id"`
+-
+-	Created time.Time `json:"Created,omitempty" yaml:"Created,omitempty"`
+-
+-	Path string   `json:"Path,omitempty" yaml:"Path,omitempty"`
+-	Args []string `json:"Args,omitempty" yaml:"Args,omitempty"`
+-
+-	Config *Config `json:"Config,omitempty" yaml:"Config,omitempty"`
+-	State  State   `json:"State,omitempty" yaml:"State,omitempty"`
+-	Image  string  `json:"Image,omitempty" yaml:"Image,omitempty"`
+-
+-	NetworkSettings *NetworkSettings `json:"NetworkSettings,omitempty" yaml:"NetworkSettings,omitempty"`
+-
+-	SysInitPath    string `json:"SysInitPath,omitempty" yaml:"SysInitPath,omitempty"`
+-	ResolvConfPath string `json:"ResolvConfPath,omitempty" yaml:"ResolvConfPath,omitempty"`
+-	HostnamePath   string `json:"HostnamePath,omitempty" yaml:"HostnamePath,omitempty"`
+-	HostsPath      string `json:"HostsPath,omitempty" yaml:"HostsPath,omitempty"`
+-	Name           string `json:"Name,omitempty" yaml:"Name,omitempty"`
+-	Driver         string `json:"Driver,omitempty" yaml:"Driver,omitempty"`
+-
+-	Volumes    map[string]string `json:"Volumes,omitempty" yaml:"Volumes,omitempty"`
+-	VolumesRW  map[string]bool   `json:"VolumesRW,omitempty" yaml:"VolumesRW,omitempty"`
+-	HostConfig *HostConfig       `json:"HostConfig,omitempty" yaml:"HostConfig,omitempty"`
+-}
+-
+-// InspectContainer returns information about a container by its ID.
+-//
+-// See http://goo.gl/2o52Sx for more details.
+-func (c *Client) InspectContainer(id string) (*Container, error) {
+-	path := "/containers/" + id + "/json"
+-	body, status, err := c.do("GET", path, nil)
+-	if status == http.StatusNotFound {
+-		return nil, &NoSuchContainer{ID: id}
+-	}
+-	if err != nil {
+-		return nil, err
+-	}
+-	var container Container
+-	err = json.Unmarshal(body, &container)
+-	if err != nil {
+-		return nil, err
+-	}
+-	return &container, nil
+-}
+-
+-// ContainerChanges returns changes in the filesystem of the given container.
+-//
+-// See http://goo.gl/DpGyzK for more details.
+-func (c *Client) ContainerChanges(id string) ([]Change, error) {
+-	path := "/containers/" + id + "/changes"
+-	body, status, err := c.do("GET", path, nil)
+-	if status == http.StatusNotFound {
+-		return nil, &NoSuchContainer{ID: id}
+-	}
+-	if err != nil {
+-		return nil, err
+-	}
+-	var changes []Change
+-	err = json.Unmarshal(body, &changes)
+-	if err != nil {
+-		return nil, err
+-	}
+-	return changes, nil
+-}
+-
+-// CreateContainerOptions specify parameters to the CreateContainer function.
+-//
+-// See http://goo.gl/WPPYtB for more details.
+-type CreateContainerOptions struct {
+-	Name   string
+-	Config *Config `qs:"-"`
+-}
+-
+-// CreateContainer creates a new container, returning the container instance,
+-// or an error in case of failure.
+-//
+-// See http://goo.gl/tjihUc for more details.
+-func (c *Client) CreateContainer(opts CreateContainerOptions) (*Container, error) {
+-	path := "/containers/create?" + queryString(opts)
+-	body, status, err := c.do("POST", path, opts.Config)
+-	if status == http.StatusNotFound {
+-		return nil, ErrNoSuchImage
+-	}
+-	if err != nil {
+-		return nil, err
+-	}
+-	var container Container
+-	err = json.Unmarshal(body, &container)
+-	if err != nil {
+-		return nil, err
+-	}
+-
+-	container.Name = opts.Name
+-
+-	return &container, nil
+-}
+-
+-type KeyValuePair struct {
+-	Key   string `json:"Key,omitempty" yaml:"Key,omitempty"`
+-	Value string `json:"Value,omitempty" yaml:"Value,omitempty"`
+-}
+-
+-// RestartPolicy represents the policy for automatically restarting a container.
+-//
+-// Possible values are:
+-//
+-//   - always: the docker daemon will always restart the container
+-//   - on-failure: the docker daemon will restart the container on failures, at
+-//                 most MaximumRetryCount times
+-//   - no: the docker daemon will not restart the container automatically
+-type RestartPolicy struct {
+-	Name              string `json:"Name,omitempty" yaml:"Name,omitempty"`
+-	MaximumRetryCount int    `json:"MaximumRetryCount,omitempty" yaml:"MaximumRetryCount,omitempty"`
+-}
+-
+-// AlwaysRestart returns a restart policy that tells the Docker daemon to
+-// always restart the container.
+-func AlwaysRestart() RestartPolicy {
+-	return RestartPolicy{Name: "always"}
+-}
+-
+-// RestartOnFailure returns a restart policy that tells the Docker daemon to
+-// restart the container on failures, trying at most maxRetry times.
+-func RestartOnFailure(maxRetry int) RestartPolicy {
+-	return RestartPolicy{Name: "on-failure", MaximumRetryCount: maxRetry}
+-}
+-
+-// NeverRestart returns a restart policy that tells the Docker daemon to never
+-// restart the container on failures.
+-func NeverRestart() RestartPolicy {
+-	return RestartPolicy{Name: "no"}
+-}
+-
+-type HostConfig struct {
+-	Binds           []string               `json:"Binds,omitempty" yaml:"Binds,omitempty"`
+-	CapAdd          []string               `json:"CapAdd,omitempty" yaml:"CapAdd,omitempty"`
+-	CapDrop         []string               `json:"CapDrop,omitempty" yaml:"CapDrop,omitempty"`
+-	ContainerIDFile string                 `json:"ContainerIDFile,omitempty" yaml:"ContainerIDFile,omitempty"`
+-	LxcConf         []KeyValuePair         `json:"LxcConf,omitempty" yaml:"LxcConf,omitempty"`
+-	Privileged      bool                   `json:"Privileged,omitempty" yaml:"Privileged,omitempty"`
+-	PortBindings    map[Port][]PortBinding `json:"PortBindings,omitempty" yaml:"PortBindings,omitempty"`
+-	Links           []string               `json:"Links,omitempty" yaml:"Links,omitempty"`
+-	PublishAllPorts bool                   `json:"PublishAllPorts,omitempty" yaml:"PublishAllPorts,omitempty"`
+-	Dns             []string               `json:"Dns,omitempty" yaml:"Dns,omitempty"` // For Docker API v1.10 and above only
+-	DnsSearch       []string               `json:"DnsSearch,omitempty" yaml:"DnsSearch,omitempty"`
+-	VolumesFrom     []string               `json:"VolumesFrom,omitempty" yaml:"VolumesFrom,omitempty"`
+-	NetworkMode     string                 `json:"NetworkMode,omitempty" yaml:"NetworkMode,omitempty"`
+-	RestartPolicy   RestartPolicy          `json:"RestartPolicy,omitempty" yaml:"RestartPolicy,omitempty"`
+-}
+-
+-// StartContainer starts a container, returning an error in case of failure.
+-//
+-// See http://goo.gl/y5GZlE for more details.
+-func (c *Client) StartContainer(id string, hostConfig *HostConfig) error {
+-	if hostConfig == nil {
+-		hostConfig = &HostConfig{}
+-	}
+-	path := "/containers/" + id + "/start"
+-	_, status, err := c.do("POST", path, hostConfig)
+-	if status == http.StatusNotFound {
+-		return &NoSuchContainer{ID: id}
+-	}
+-	if status == http.StatusNotModified {
+-		return &ContainerAlreadyRunning{ID: id}
+-	}
+-	if err != nil {
+-		return err
+-	}
+-	return nil
+-}
+-
+-// StopContainer stops a container, killing it after the given timeout (in
+-// seconds).
+-//
+-// See http://goo.gl/X2mj8t for more details.
+-func (c *Client) StopContainer(id string, timeout uint) error {
+-	path := fmt.Sprintf("/containers/%s/stop?t=%d", id, timeout)
+-	_, status, err := c.do("POST", path, nil)
+-	if status == http.StatusNotFound {
+-		return &NoSuchContainer{ID: id}
+-	}
+-	if status == http.StatusNotModified {
+-		return &ContainerNotRunning{ID: id}
+-	}
+-	if err != nil {
+-		return err
+-	}
+-	return nil
+-}
+-
+-// RestartContainer stops and then starts a container again, killing it after
+-// the given timeout (in seconds) during the stop process.
+-//
+-// See http://goo.gl/zms73Z for more details.
+-func (c *Client) RestartContainer(id string, timeout uint) error {
+-	path := fmt.Sprintf("/containers/%s/restart?t=%d", id, timeout)
+-	_, status, err := c.do("POST", path, nil)
+-	if status == http.StatusNotFound {
+-		return &NoSuchContainer{ID: id}
+-	}
+-	if err != nil {
+-		return err
+-	}
+-	return nil
+-}
+-
+-// PauseContainer pauses the given container.
+-//
+-// See http://goo.gl/AM5t42 for more details.
+-func (c *Client) PauseContainer(id string) error {
+-	path := fmt.Sprintf("/containers/%s/pause", id)
+-	_, status, err := c.do("POST", path, nil)
+-	if status == http.StatusNotFound {
+-		return &NoSuchContainer{ID: id}
+-	}
+-	if err != nil {
+-		return err
+-	}
+-	return nil
+-}
+-
+-// UnpauseContainer unpauses the given container.
+-//
+-// See http://goo.gl/eBrNSL for more details.
+-func (c *Client) UnpauseContainer(id string) error {
+-	path := fmt.Sprintf("/containers/%s/unpause", id)
+-	_, status, err := c.do("POST", path, nil)
+-	if status == http.StatusNotFound {
+-		return &NoSuchContainer{ID: id}
+-	}
+-	if err != nil {
+-		return err
+-	}
+-	return nil
+-}
+-
+-// KillContainerOptions represents the set of options that can be used in a
+-// call to KillContainer.
+-type KillContainerOptions struct {
+-	// The ID of the container.
+-	ID string `qs:"-"`
+-
+-	// The signal to send to the container. When omitted, the Docker server
+-	// assumes SIGKILL.
+-	Signal Signal
+-}
+-
+-// KillContainer kills a container, returning an error in case of failure.
+-//
+-// See http://goo.gl/DPbbBy for more details.
+-func (c *Client) KillContainer(opts KillContainerOptions) error {
+-	path := "/containers/" + opts.ID + "/kill" + "?" + queryString(opts)
+-	_, status, err := c.do("POST", path, nil)
+-	if status == http.StatusNotFound {
+-		return &NoSuchContainer{ID: opts.ID}
+-	}
+-	if err != nil {
+-		return err
+-	}
+-	return nil
+-}
+-
+-// RemoveContainerOptions encapsulates options to remove a container.
+-type RemoveContainerOptions struct {
+-	// The ID of the container.
+-	ID string `qs:"-"`
+-
+-	// A flag that indicates whether Docker should remove the volumes
+-	// associated with the container.
+-	RemoveVolumes bool `qs:"v"`
+-
+-	// A flag that indicates whether Docker should remove the container
+-	// even if it is currently running.
+-	Force bool
+-}
+-
+-// RemoveContainer removes a container, returning an error in case of failure.
+-//
+-// See http://goo.gl/PBvGdU for more details.
+-func (c *Client) RemoveContainer(opts RemoveContainerOptions) error {
+-	path := "/containers/" + opts.ID + "?" + queryString(opts)
+-	_, status, err := c.do("DELETE", path, nil)
+-	if status == http.StatusNotFound {
+-		return &NoSuchContainer{ID: opts.ID}
+-	}
+-	if err != nil {
+-		return err
+-	}
+-	return nil
+-}
+-
+-// CopyFromContainerOptions is the set of options that can be used when copying
+-// files or folders from a container.
+-//
+-// See http://goo.gl/mnxRMl for more details.
+-type CopyFromContainerOptions struct {
+-	OutputStream io.Writer `json:"-"`
+-	Container    string    `json:"-"`
+-	Resource     string
+-}
+-
+-// CopyFromContainer copies files or folders from a container, using a given
+-// resource.
+-//
+-// See http://goo.gl/mnxRMl for more details.
+-func (c *Client) CopyFromContainer(opts CopyFromContainerOptions) error {
+-	if opts.Container == "" {
+-		return &NoSuchContainer{ID: opts.Container}
+-	}
+-	url := fmt.Sprintf("/containers/%s/copy", opts.Container)
+-	body, status, err := c.do("POST", url, opts)
+-	if status == http.StatusNotFound {
+-		return &NoSuchContainer{ID: opts.Container}
+-	}
+-	if err != nil {
+-		return err
+-	}
+-	io.Copy(opts.OutputStream, bytes.NewBuffer(body))
+-	return nil
+-}
+-
+-// WaitContainer blocks until the given container stops, returning the
+-// container's exit code.
+-//
+-// See http://goo.gl/gnHJL2 for more details.
+-func (c *Client) WaitContainer(id string) (int, error) {
+-	body, status, err := c.do("POST", "/containers/"+id+"/wait", nil)
+-	if status == http.StatusNotFound {
+-		return 0, &NoSuchContainer{ID: id}
+-	}
+-	if err != nil {
+-		return 0, err
+-	}
+-	var r struct{ StatusCode int }
+-	err = json.Unmarshal(body, &r)
+-	if err != nil {
+-		return 0, err
+-	}
+-	return r.StatusCode, nil
+-}
+-
+-// CommitContainerOptions aggregates parameters to the CommitContainer method.
+-//
+-// See http://goo.gl/628gxm for more details.
+-type CommitContainerOptions struct {
+-	Container  string
+-	Repository string `qs:"repo"`
+-	Tag        string
+-	Message    string `qs:"m"`
+-	Author     string
+-	Run        *Config `qs:"-"`
+-}
+-
+-// CommitContainer creates a new image from a container's changes.
+-//
+-// See http://goo.gl/628gxm for more details.
+-func (c *Client) CommitContainer(opts CommitContainerOptions) (*Image, error) {
+-	path := "/commit?" + queryString(opts)
+-	body, status, err := c.do("POST", path, opts.Run)
+-	if status == http.StatusNotFound {
+-		return nil, &NoSuchContainer{ID: opts.Container}
+-	}
+-	if err != nil {
+-		return nil, err
+-	}
+-	var image Image
+-	err = json.Unmarshal(body, &image)
+-	if err != nil {
+-		return nil, err
+-	}
+-	return &image, nil
+-}
+-
+-// AttachToContainerOptions is the set of options that can be used when
+-// attaching to a container.
+-//
+-// See http://goo.gl/oPzcqH for more details.
+-type AttachToContainerOptions struct {
+-	Container    string    `qs:"-"`
+-	InputStream  io.Reader `qs:"-"`
+-	OutputStream io.Writer `qs:"-"`
+-	ErrorStream  io.Writer `qs:"-"`
+-
+-	// Get container logs, sending them to OutputStream.
+-	Logs bool
+-
+-	// Stream the response?
+-	Stream bool
+-
+-	// Attach to stdin, and use InputStream.
+-	Stdin bool
+-
+-	// Attach to stdout, and use OutputStream.
+-	Stdout bool
+-
+-	// Attach to stderr, and use ErrorStream.
+-	Stderr bool
+-
+-	// If set, after a successful connect, a sentinel will be sent and then the
+-	// client will block on receive before continuing.
+-	//
+-	// It must be an unbuffered channel. Using a buffered channel can lead
+-	// to unexpected behavior.
+-	Success chan struct{}
+-
+-	// Use raw terminal? Usually true when the container contains a TTY.
+-	RawTerminal bool `qs:"-"`
+-}
+-
+-// AttachToContainer attaches to a container, using the given options.
+-//
+-// See http://goo.gl/oPzcqH for more details.
+-func (c *Client) AttachToContainer(opts AttachToContainerOptions) error {
+-	if opts.Container == "" {
+-		return &NoSuchContainer{ID: opts.Container}
+-	}
+-	path := "/containers/" + opts.Container + "/attach?" + queryString(opts)
+-	return c.hijack("POST", path, opts.Success, opts.RawTerminal, opts.InputStream, opts.ErrorStream, opts.OutputStream)
+-}
+-
+-// LogsOptions represents the set of options used when getting logs from a
+-// container.
+-//
+-// See http://goo.gl/rLhKSU for more details.
+-type LogsOptions struct {
+-	Container    string    `qs:"-"`
+-	OutputStream io.Writer `qs:"-"`
+-	ErrorStream  io.Writer `qs:"-"`
+-	Follow       bool
+-	Stdout       bool
+-	Stderr       bool
+-	Timestamps   bool
+-	Tail         string
+-
+-	// Use raw terminal? Usually true when the container contains a TTY.
+-	RawTerminal bool `qs:"-"`
+-}
+-
+-// Logs gets stdout and stderr logs from the specified container.
+-//
+-// See http://goo.gl/rLhKSU for more details.
+-func (c *Client) Logs(opts LogsOptions) error {
+-	if opts.Container == "" {
+-		return &NoSuchContainer{ID: opts.Container}
+-	}
+-	if opts.Tail == "" {
+-		opts.Tail = "all"
+-	}
+-	path := "/containers/" + opts.Container + "/logs?" + queryString(opts)
+-	return c.stream("GET", path, opts.RawTerminal, false, nil, nil, opts.OutputStream, opts.ErrorStream)
+-}
+-
+-// ResizeContainerTTY resizes the terminal to the given height and width.
+-func (c *Client) ResizeContainerTTY(id string, height, width int) error {
+-	params := make(url.Values)
+-	params.Set("h", strconv.Itoa(height))
+-	params.Set("w", strconv.Itoa(width))
+-	_, _, err := c.do("POST", "/containers/"+id+"/resize?"+params.Encode(), nil)
+-	return err
+-}
+-
+-// ExportContainerOptions is the set of parameters to the ExportContainer
+-// method.
+-//
+-// See http://goo.gl/Lqk0FZ for more details.
+-type ExportContainerOptions struct {
+-	ID           string
+-	OutputStream io.Writer
+-}
+-
+-// ExportContainer exports the contents of the container as a tar archive,
+-// writing the archive to opts.OutputStream.
+-//
+-// See http://goo.gl/Lqk0FZ for more details.
+-func (c *Client) ExportContainer(opts ExportContainerOptions) error {
+-	if opts.ID == "" {
+-		return &NoSuchContainer{ID: opts.ID}
+-	}
+-	url := fmt.Sprintf("/containers/%s/export", opts.ID)
+-	return c.stream("GET", url, true, false, nil, nil, opts.OutputStream, nil)
+-}
+-
+-// NoSuchContainer is the error returned when a given container does not exist.
+-type NoSuchContainer struct {
+-	ID string
+-}
+-
+-func (err *NoSuchContainer) Error() string {
+-	return "No such container: " + err.ID
+-}
+-
+-// ContainerAlreadyRunning is the error returned when a given container is
+-// already running.
+-type ContainerAlreadyRunning struct {
+-	ID string
+-}
+-
+-func (err *ContainerAlreadyRunning) Error() string {
+-	return "Container already running: " + err.ID
+-}
+-
+-// ContainerNotRunning is the error returned when a given container is not
+-// running.
+-type ContainerNotRunning struct {
+-	ID string
+-}
+-
+-func (err *ContainerNotRunning) Error() string {
+-	return "Container not running: " + err.ID
+-}
+diff --git a/Godeps/_workspace/src/github.com/fsouza/go-dockerclient/container_test.go b/Godeps/_workspace/src/github.com/fsouza/go-dockerclient/container_test.go
+deleted file mode 100644
+index f3e1954..0000000
+--- a/Godeps/_workspace/src/github.com/fsouza/go-dockerclient/container_test.go
++++ /dev/null
+@@ -1,1416 +0,0 @@
+-// Copyright 2014 go-dockerclient authors. All rights reserved.
+-// Use of this source code is governed by a BSD-style
+-// license that can be found in the LICENSE file.
+-
+-package docker
+-
+-import (
+-	"bytes"
+-	"encoding/json"
+-	"io/ioutil"
+-	"net"
+-	"net/http"
+-	"net/http/httptest"
+-	"net/url"
+-	"os"
+-	"reflect"
+-	"regexp"
+-	"runtime"
+-	"strconv"
+-	"strings"
+-	"testing"
+-	"time"
+-)
+-
+-func TestStateString(t *testing.T) {
+-	started := time.Now().Add(-3 * time.Hour)
+-	var tests = []struct {
+-		input    State
+-		expected string
+-	}{
+-		{State{Running: true, Paused: true}, "^paused$"},
+-		{State{Running: true, StartedAt: started}, "^Up 3h.*$"},
+-		{State{Running: false, ExitCode: 7}, "^Exit 7$"},
+-	}
+-	for _, tt := range tests {
+-		re := regexp.MustCompile(tt.expected)
+-		if got := tt.input.String(); !re.MatchString(got) {
+-			t.Errorf("State.String(): wrong result. Want %q. Got %q.", tt.expected, got)
+-		}
+-	}
+-}
+-
+-func TestListContainers(t *testing.T) {
+-	jsonContainers := `[
+-     {
+-             "Id": "8dfafdbc3a40",
+-             "Image": "base:latest",
+-             "Command": "echo 1",
+-             "Created": 1367854155,
+-             "Ports":[{"PrivatePort": 2222, "PublicPort": 3333, "Type": "tcp"}],
+-             "Status": "Exit 0"
+-     },
+-     {
+-             "Id": "9cd87474be90",
+-             "Image": "base:latest",
+-             "Command": "echo 222222",
+-             "Created": 1367854155,
+-             "Ports":[{"PrivatePort": 2222, "PublicPort": 3333, "Type": "tcp"}],
+-             "Status": "Exit 0"
+-     },
+-     {
+-             "Id": "3176a2479c92",
+-             "Image": "base:latest",
+-             "Command": "echo 3333333333333333",
+-             "Created": 1367854154,
+-             "Ports":[{"PrivatePort": 2221, "PublicPort": 3331, "Type": "tcp"}],
+-             "Status": "Exit 0"
+-     },
+-     {
+-             "Id": "4cb07b47f9fb",
+-             "Image": "base:latest",
+-             "Command": "echo 444444444444444444444444444444444",
+-             "Ports":[{"PrivatePort": 2223, "PublicPort": 3332, "Type": "tcp"}],
+-             "Created": 1367854152,
+-             "Status": "Exit 0"
+-     }
+-]`
+-	var expected []APIContainers
+-	err := json.Unmarshal([]byte(jsonContainers), &expected)
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-	client := newTestClient(&FakeRoundTripper{message: jsonContainers, status: http.StatusOK})
+-	containers, err := client.ListContainers(ListContainersOptions{})
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-	if !reflect.DeepEqual(containers, expected) {
+-		t.Errorf("ListContainers: Expected %#v. Got %#v.", expected, containers)
+-	}
+-}
+-
+-func TestListContainersParams(t *testing.T) {
+-	var tests = []struct {
+-		input  ListContainersOptions
+-		params map[string][]string
+-	}{
+-		{ListContainersOptions{}, map[string][]string{}},
+-		{ListContainersOptions{All: true}, map[string][]string{"all": {"1"}}},
+-		{ListContainersOptions{All: true, Limit: 10}, map[string][]string{"all": {"1"}, "limit": {"10"}}},
+-		{
+-			ListContainersOptions{All: true, Limit: 10, Since: "adf9983", Before: "abdeef"},
+-			map[string][]string{"all": {"1"}, "limit": {"10"}, "since": {"adf9983"}, "before": {"abdeef"}},
+-		},
+-	}
+-	fakeRT := &FakeRoundTripper{message: "[]", status: http.StatusOK}
+-	client := newTestClient(fakeRT)
+-	u, _ := url.Parse(client.getURL("/containers/json"))
+-	for _, tt := range tests {
+-		client.ListContainers(tt.input)
+-		got := map[string][]string(fakeRT.requests[0].URL.Query())
+-		if !reflect.DeepEqual(got, tt.params) {
+-			t.Errorf("Expected %#v, got %#v.", tt.params, got)
+-		}
+-		if path := fakeRT.requests[0].URL.Path; path != u.Path {
+-			t.Errorf("Wrong path on request. Want %q. Got %q.", u.Path, path)
+-		}
+-		if meth := fakeRT.requests[0].Method; meth != "GET" {
+-			t.Errorf("Wrong HTTP method. Want GET. Got %s.", meth)
+-		}
+-		fakeRT.Reset()
+-	}
+-}
+-
+-func TestListContainersFailure(t *testing.T) {
+-	var tests = []struct {
+-		status  int
+-		message string
+-	}{
+-		{400, "bad parameter"},
+-		{500, "internal server error"},
+-	}
+-	for _, tt := range tests {
+-		client := newTestClient(&FakeRoundTripper{message: tt.message, status: tt.status})
+-		expected := Error{Status: tt.status, Message: tt.message}
+-		containers, err := client.ListContainers(ListContainersOptions{})
+-		if !reflect.DeepEqual(expected, *err.(*Error)) {
+-			t.Errorf("Wrong error in ListContainers. Want %#v. Got %#v.", expected, err)
+-		}
+-		if len(containers) > 0 {
+-			t.Errorf("ListContainers failure. Expected empty list. Got %#v.", containers)
+-		}
+-	}
+-}
+-
+-func TestInspectContainer(t *testing.T) {
+-	jsonContainer := `{
+-             "Id": "4fa6e0f0c6786287e131c3852c58a2e01cc697a68231826813597e4994f1d6e2",
+-             "Created": "2013-05-07T14:51:42.087658+02:00",
+-             "Path": "date",
+-             "Args": [],
+-             "Config": {
+-                     "Hostname": "4fa6e0f0c678",
+-                     "User": "",
+-                     "Memory": 17179869184,
+-                     "MemorySwap": 34359738368,
+-                     "AttachStdin": false,
+-                     "AttachStdout": true,
+-                     "AttachStderr": true,
+-                     "PortSpecs": null,
+-                     "Tty": false,
+-                     "OpenStdin": false,
+-                     "StdinOnce": false,
+-                     "Env": null,
+-                     "Cmd": [
+-                             "date"
+-                     ],
+-                     "Image": "base",
+-                     "Volumes": {},
+-                     "VolumesFrom": ""
+-             },
+-             "State": {
+-                     "Running": false,
+-                     "Pid": 0,
+-                     "ExitCode": 0,
+-                     "StartedAt": "2013-05-07T14:51:42.087658+02:00",
+-                     "Ghost": false
+-             },
+-             "Image": "b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc",
+-             "NetworkSettings": {
+-                     "IpAddress": "",
+-                     "IpPrefixLen": 0,
+-                     "Gateway": "",
+-                     "Bridge": "",
+-                     "PortMapping": null
+-             },
+-             "SysInitPath": "/home/kitty/go/src/github.com/dotcloud/docker/bin/docker",
+-             "ResolvConfPath": "/etc/resolv.conf",
+-             "Volumes": {},
+-             "HostConfig": {
+-               "Binds": null,
+-               "ContainerIDFile": "",
+-               "LxcConf": [],
+-               "Privileged": false,
+-               "PortBindings": {
+-                 "80/tcp": [
+-                   {
+-                     "HostIp": "0.0.0.0",
+-                     "HostPort": "49153"
+-                   }
+-                 ]
+-               },
+-               "Links": null,
+-               "PublishAllPorts": false
+-             }
+-}`
+-	var expected Container
+-	err := json.Unmarshal([]byte(jsonContainer), &expected)
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-	fakeRT := &FakeRoundTripper{message: jsonContainer, status: http.StatusOK}
+-	client := newTestClient(fakeRT)
+-	id := "4fa6e0f0c678"
+-	container, err := client.InspectContainer(id)
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-	if !reflect.DeepEqual(*container, expected) {
+-		t.Errorf("InspectContainer(%q): Expected %#v. Got %#v.", id, expected, container)
+-	}
+-	expectedURL, _ := url.Parse(client.getURL("/containers/4fa6e0f0c678/json"))
+-	if gotPath := fakeRT.requests[0].URL.Path; gotPath != expectedURL.Path {
+-		t.Errorf("InspectContainer(%q): Wrong path in request. Want %q. Got %q.", id, expectedURL.Path, gotPath)
+-	}
+-}
+-
+-func TestInspectContainerNegativeSwap(t *testing.T) {
+-	jsonContainer := `{
+-             "Id": "4fa6e0f0c6786287e131c3852c58a2e01cc697a68231826813597e4994f1d6e2",
+-             "Created": "2013-05-07T14:51:42.087658+02:00",
+-             "Path": "date",
+-             "Args": [],
+-             "Config": {
+-                     "Hostname": "4fa6e0f0c678",
+-                     "User": "",
+-                     "Memory": 17179869184,
+-                     "MemorySwap": -1,
+-                     "AttachStdin": false,
+-                     "AttachStdout": true,
+-                     "AttachStderr": true,
+-                     "PortSpecs": null,
+-                     "Tty": false,
+-                     "OpenStdin": false,
+-                     "StdinOnce": false,
+-                     "Env": null,
+-                     "Cmd": [
+-                             "date"
+-                     ],
+-                     "Image": "base",
+-                     "Volumes": {},
+-                     "VolumesFrom": ""
+-             },
+-             "State": {
+-                     "Running": false,
+-                     "Pid": 0,
+-                     "ExitCode": 0,
+-                     "StartedAt": "2013-05-07T14:51:42.087658+02:00",
+-                     "Ghost": false
+-             },
+-             "Image": "b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc",
+-             "NetworkSettings": {
+-                     "IpAddress": "",
+-                     "IpPrefixLen": 0,
+-                     "Gateway": "",
+-                     "Bridge": "",
+-                     "PortMapping": null
+-             },
+-             "SysInitPath": "/home/kitty/go/src/github.com/dotcloud/docker/bin/docker",
+-             "ResolvConfPath": "/etc/resolv.conf",
+-             "Volumes": {},
+-             "HostConfig": {
+-               "Binds": null,
+-               "ContainerIDFile": "",
+-               "LxcConf": [],
+-               "Privileged": false,
+-               "PortBindings": {
+-                 "80/tcp": [
+-                   {
+-                     "HostIp": "0.0.0.0",
+-                     "HostPort": "49153"
+-                   }
+-                 ]
+-               },
+-               "Links": null,
+-               "PublishAllPorts": false
+-             }
+-}`
+-	var expected Container
+-	err := json.Unmarshal([]byte(jsonContainer), &expected)
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-	fakeRT := &FakeRoundTripper{message: jsonContainer, status: http.StatusOK}
+-	client := newTestClient(fakeRT)
+-	id := "4fa6e0f0c678"
+-	container, err := client.InspectContainer(id)
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-	if !reflect.DeepEqual(*container, expected) {
+-		t.Errorf("InspectContainer(%q): Expected %#v. Got %#v.", id, expected, container)
+-	}
+-	expectedURL, _ := url.Parse(client.getURL("/containers/4fa6e0f0c678/json"))
+-	if gotPath := fakeRT.requests[0].URL.Path; gotPath != expectedURL.Path {
+-		t.Errorf("InspectContainer(%q): Wrong path in request. Want %q. Got %q.", id, expectedURL.Path, gotPath)
+-	}
+-}
+-
+-func TestInspectContainerFailure(t *testing.T) {
+-	client := newTestClient(&FakeRoundTripper{message: "server error", status: 500})
+-	expected := Error{Status: 500, Message: "server error"}
+-	container, err := client.InspectContainer("abe033")
+-	if container != nil {
+-		t.Errorf("InspectContainer: Expected <nil> container, got %#v", container)
+-	}
+-	if !reflect.DeepEqual(expected, *err.(*Error)) {
+-		t.Errorf("InspectContainer: Wrong error information. Want %#v. Got %#v.", expected, err)
+-	}
+-}
+-
+-func TestInspectContainerNotFound(t *testing.T) {
+-	client := newTestClient(&FakeRoundTripper{message: "no such container", status: 404})
+-	container, err := client.InspectContainer("abe033")
+-	if container != nil {
+-		t.Errorf("InspectContainer: Expected <nil> container, got %#v", container)
+-	}
+-	expected := &NoSuchContainer{ID: "abe033"}
+-	if !reflect.DeepEqual(err, expected) {
+-		t.Errorf("InspectContainer: Wrong error information. Want %#v. Got %#v.", expected, err)
+-	}
+-}
+-
+-func TestContainerChanges(t *testing.T) {
+-	jsonChanges := `[
+-     {
+-             "Path":"/dev",
+-             "Kind":0
+-     },
+-     {
+-             "Path":"/dev/kmsg",
+-             "Kind":1
+-     },
+-     {
+-             "Path":"/test",
+-             "Kind":1
+-     }
+-]`
+-	var expected []Change
+-	err := json.Unmarshal([]byte(jsonChanges), &expected)
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-	fakeRT := &FakeRoundTripper{message: jsonChanges, status: http.StatusOK}
+-	client := newTestClient(fakeRT)
+-	id := "4fa6e0f0c678"
+-	changes, err := client.ContainerChanges(id)
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-	if !reflect.DeepEqual(changes, expected) {
+-		t.Errorf("ContainerChanges(%q): Expected %#v. Got %#v.", id, expected, changes)
+-	}
+-	expectedURL, _ := url.Parse(client.getURL("/containers/4fa6e0f0c678/changes"))
+-	if gotPath := fakeRT.requests[0].URL.Path; gotPath != expectedURL.Path {
+-		t.Errorf("ContainerChanges(%q): Wrong path in request. Want %q. Got %q.", id, expectedURL.Path, gotPath)
+-	}
+-}
+-
+-func TestContainerChangesFailure(t *testing.T) {
+-	client := newTestClient(&FakeRoundTripper{message: "server error", status: 500})
+-	expected := Error{Status: 500, Message: "server error"}
+-	changes, err := client.ContainerChanges("abe033")
+-	if changes != nil {
+-		t.Errorf("ContainerChanges: Expected <nil> changes, got %#v", changes)
+-	}
+-	if !reflect.DeepEqual(expected, *err.(*Error)) {
+-		t.Errorf("ContainerChanges: Wrong error information. Want %#v. Got %#v.", expected, err)
+-	}
+-}
+-
+-func TestContainerChangesNotFound(t *testing.T) {
+-	client := newTestClient(&FakeRoundTripper{message: "no such container", status: 404})
+-	changes, err := client.ContainerChanges("abe033")
+-	if changes != nil {
+-		t.Errorf("ContainerChanges: Expected <nil> changes, got %#v", changes)
+-	}
+-	expected := &NoSuchContainer{ID: "abe033"}
+-	if !reflect.DeepEqual(err, expected) {
+-		t.Errorf("ContainerChanges: Wrong error information. Want %#v. Got %#v.", expected, err)
+-	}
+-}
+-
+-func TestCreateContainer(t *testing.T) {
+-	jsonContainer := `{
+-             "Id": "4fa6e0f0c6786287e131c3852c58a2e01cc697a68231826813597e4994f1d6e2",
+-	     "Warnings": []
+-}`
+-	var expected Container
+-	err := json.Unmarshal([]byte(jsonContainer), &expected)
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-	fakeRT := &FakeRoundTripper{message: jsonContainer, status: http.StatusOK}
+-	client := newTestClient(fakeRT)
+-	config := Config{AttachStdout: true, AttachStdin: true}
+-	opts := CreateContainerOptions{Name: "TestCreateContainer", Config: &config}
+-	container, err := client.CreateContainer(opts)
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-	id := "4fa6e0f0c6786287e131c3852c58a2e01cc697a68231826813597e4994f1d6e2"
+-	if container.ID != id {
+-		t.Errorf("CreateContainer: wrong ID. Want %q. Got %q.", id, container.ID)
+-	}
+-	req := fakeRT.requests[0]
+-	if req.Method != "POST" {
+-		t.Errorf("CreateContainer: wrong HTTP method. Want %q. Got %q.", "POST", req.Method)
+-	}
+-	expectedURL, _ := url.Parse(client.getURL("/containers/create"))
+-	if gotPath := req.URL.Path; gotPath != expectedURL.Path {
+-		t.Errorf("CreateContainer: Wrong path in request. Want %q. Got %q.", expectedURL.Path, gotPath)
+-	}
+-	var gotBody Config
+-	err = json.NewDecoder(req.Body).Decode(&gotBody)
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-}
+-
+-func TestCreateContainerImageNotFound(t *testing.T) {
+-	client := newTestClient(&FakeRoundTripper{message: "No such image", status: http.StatusNotFound})
+-	config := Config{AttachStdout: true, AttachStdin: true}
+-	container, err := client.CreateContainer(CreateContainerOptions{Config: &config})
+-	if container != nil {
+-		t.Errorf("CreateContainer: expected <nil> container, got %#v.", container)
+-	}
+-	if !reflect.DeepEqual(err, ErrNoSuchImage) {
+-		t.Errorf("CreateContainer: Wrong error type. Want %#v. Got %#v.", ErrNoSuchImage, err)
+-	}
+-}
+-
+-func TestStartContainer(t *testing.T) {
+-	fakeRT := &FakeRoundTripper{message: "", status: http.StatusOK}
+-	client := newTestClient(fakeRT)
+-	id := "4fa6e0f0c6786287e131c3852c58a2e01cc697a68231826813597e4994f1d6e2"
+-	err := client.StartContainer(id, &HostConfig{})
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-	req := fakeRT.requests[0]
+-	if req.Method != "POST" {
+-		t.Errorf("StartContainer(%q): wrong HTTP method. Want %q. Got %q.", id, "POST", req.Method)
+-	}
+-	expectedURL, _ := url.Parse(client.getURL("/containers/" + id + "/start"))
+-	if gotPath := req.URL.Path; gotPath != expectedURL.Path {
+-		t.Errorf("StartContainer(%q): Wrong path in request. Want %q. Got %q.", id, expectedURL.Path, gotPath)
+-	}
+-	expectedContentType := "application/json"
+-	if contentType := req.Header.Get("Content-Type"); contentType != expectedContentType {
+-		t.Errorf("StartContainer(%q): Wrong content-type in request. Want %q. Got %q.", id, expectedContentType, contentType)
+-	}
+-}
+-
+-func TestStartContainerNilHostConfig(t *testing.T) {
+-	fakeRT := &FakeRoundTripper{message: "", status: http.StatusOK}
+-	client := newTestClient(fakeRT)
+-	id := "4fa6e0f0c6786287e131c3852c58a2e01cc697a68231826813597e4994f1d6e2"
+-	err := client.StartContainer(id, nil)
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-	req := fakeRT.requests[0]
+-	if req.Method != "POST" {
+-		t.Errorf("StartContainer(%q): wrong HTTP method. Want %q. Got %q.", id, "POST", req.Method)
+-	}
+-	expectedURL, _ := url.Parse(client.getURL("/containers/" + id + "/start"))
+-	if gotPath := req.URL.Path; gotPath != expectedURL.Path {
+-		t.Errorf("StartContainer(%q): Wrong path in request. Want %q. Got %q.", id, expectedURL.Path, gotPath)
+-	}
+-	expectedContentType := "application/json"
+-	if contentType := req.Header.Get("Content-Type"); contentType != expectedContentType {
+-		t.Errorf("StartContainer(%q): Wrong content-type in request. Want %q. Got %q.", id, expectedContentType, contentType)
+-	}
+-}
+-
+-func TestStartContainerNotFound(t *testing.T) {
+-	client := newTestClient(&FakeRoundTripper{message: "no such container", status: http.StatusNotFound})
+-	err := client.StartContainer("a2344", &HostConfig{})
+-	expected := &NoSuchContainer{ID: "a2344"}
+-	if !reflect.DeepEqual(err, expected) {
+-		t.Errorf("StartContainer: Wrong error returned. Want %#v. Got %#v.", expected, err)
+-	}
+-}
+-
+-func TestStartContainerAlreadyRunning(t *testing.T) {
+-	client := newTestClient(&FakeRoundTripper{message: "container already running", status: http.StatusNotModified})
+-	err := client.StartContainer("a2334", &HostConfig{})
+-	expected := &ContainerAlreadyRunning{ID: "a2334"}
+-	if !reflect.DeepEqual(err, expected) {
+-		t.Errorf("StartContainer: Wrong error returned. Want %#v. Got %#v.", expected, err)
+-	}
+-}
+-
+-func TestStopContainer(t *testing.T) {
+-	fakeRT := &FakeRoundTripper{message: "", status: http.StatusNoContent}
+-	client := newTestClient(fakeRT)
+-	id := "4fa6e0f0c6786287e131c3852c58a2e01cc697a68231826813597e4994f1d6e2"
+-	err := client.StopContainer(id, 10)
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-	req := fakeRT.requests[0]
+-	if req.Method != "POST" {
+-		t.Errorf("StopContainer(%q, 10): wrong HTTP method. Want %q. Got %q.", id, "POST", req.Method)
+-	}
+-	expectedURL, _ := url.Parse(client.getURL("/containers/" + id + "/stop"))
+-	if gotPath := req.URL.Path; gotPath != expectedURL.Path {
+-		t.Errorf("StopContainer(%q, 10): Wrong path in request. Want %q. Got %q.", id, expectedURL.Path, gotPath)
+-	}
+-}
+-
+-func TestStopContainerNotFound(t *testing.T) {
+-	client := newTestClient(&FakeRoundTripper{message: "no such container", status: http.StatusNotFound})
+-	err := client.StopContainer("a2334", 10)
+-	expected := &NoSuchContainer{ID: "a2334"}
+-	if !reflect.DeepEqual(err, expected) {
+-		t.Errorf("StopContainer: Wrong error returned. Want %#v. Got %#v.", expected, err)
+-	}
+-}
+-
+-func TestStopContainerNotRunning(t *testing.T) {
+-	client := newTestClient(&FakeRoundTripper{message: "container not running", status: http.StatusNotModified})
+-	err := client.StopContainer("a2334", 10)
+-	expected := &ContainerNotRunning{ID: "a2334"}
+-	if !reflect.DeepEqual(err, expected) {
+-		t.Errorf("StopContainer: Wrong error returned. Want %#v. Got %#v.", expected, err)
+-	}
+-}
+-
+-func TestRestartContainer(t *testing.T) {
+-	fakeRT := &FakeRoundTripper{message: "", status: http.StatusNoContent}
+-	client := newTestClient(fakeRT)
+-	id := "4fa6e0f0c6786287e131c3852c58a2e01cc697a68231826813597e4994f1d6e2"
+-	err := client.RestartContainer(id, 10)
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-	req := fakeRT.requests[0]
+-	if req.Method != "POST" {
+-		t.Errorf("RestartContainer(%q, 10): wrong HTTP method. Want %q. Got %q.", id, "POST", req.Method)
+-	}
+-	expectedURL, _ := url.Parse(client.getURL("/containers/" + id + "/restart"))
+-	if gotPath := req.URL.Path; gotPath != expectedURL.Path {
+-		t.Errorf("RestartContainer(%q, 10): Wrong path in request. Want %q. Got %q.", id, expectedURL.Path, gotPath)
+-	}
+-}
+-
+-func TestRestartContainerNotFound(t *testing.T) {
+-	client := newTestClient(&FakeRoundTripper{message: "no such container", status: http.StatusNotFound})
+-	err := client.RestartContainer("a2334", 10)
+-	expected := &NoSuchContainer{ID: "a2334"}
+-	if !reflect.DeepEqual(err, expected) {
+-		t.Errorf("RestartContainer: Wrong error returned. Want %#v. Got %#v.", expected, err)
+-	}
+-}
+-
+-func TestPauseContainer(t *testing.T) {
+-	fakeRT := &FakeRoundTripper{message: "", status: http.StatusNoContent}
+-	client := newTestClient(fakeRT)
+-	id := "4fa6e0f0c6786287e131c3852c58a2e01cc697a68231826813597e4994f1d6e2"
+-	err := client.PauseContainer(id)
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-	req := fakeRT.requests[0]
+-	if req.Method != "POST" {
+-		t.Errorf("PauseContainer(%q): wrong HTTP method. Want %q. Got %q.", id, "POST", req.Method)
+-	}
+-	expectedURL, _ := url.Parse(client.getURL("/containers/" + id + "/pause"))
+-	if gotPath := req.URL.Path; gotPath != expectedURL.Path {
+-		t.Errorf("PauseContainer(%q): Wrong path in request. Want %q. Got %q.", id, expectedURL.Path, gotPath)
+-	}
+-}
+-
+-func TestPauseContainerNotFound(t *testing.T) {
+-	client := newTestClient(&FakeRoundTripper{message: "no such container", status: http.StatusNotFound})
+-	err := client.PauseContainer("a2334")
+-	expected := &NoSuchContainer{ID: "a2334"}
+-	if !reflect.DeepEqual(err, expected) {
+-		t.Errorf("PauseContainer: Wrong error returned. Want %#v. Got %#v.", expected, err)
+-	}
+-}
+-
+-func TestUnpauseContainer(t *testing.T) {
+-	fakeRT := &FakeRoundTripper{message: "", status: http.StatusNoContent}
+-	client := newTestClient(fakeRT)
+-	id := "4fa6e0f0c6786287e131c3852c58a2e01cc697a68231826813597e4994f1d6e2"
+-	err := client.UnpauseContainer(id)
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-	req := fakeRT.requests[0]
+-	if req.Method != "POST" {
+-		t.Errorf("UnpauseContainer(%q): wrong HTTP method. Want %q. Got %q.", id, "POST", req.Method)
+-	}
+-	expectedURL, _ := url.Parse(client.getURL("/containers/" + id + "/unpause"))
+-	if gotPath := req.URL.Path; gotPath != expectedURL.Path {
+-		t.Errorf("UnpauseContainer(%q): Wrong path in request. Want %q. Got %q.", id, expectedURL.Path, gotPath)
+-	}
+-}
+-
+-func TestUnpauseContainerNotFound(t *testing.T) {
+-	client := newTestClient(&FakeRoundTripper{message: "no such container", status: http.StatusNotFound})
+-	err := client.UnpauseContainer("a2334")
+-	expected := &NoSuchContainer{ID: "a2334"}
+-	if !reflect.DeepEqual(err, expected) {
+-		t.Errorf("UnpauseContainer: Wrong error returned. Want %#v. Got %#v.", expected, err)
+-	}
+-}
+-
+-func TestKillContainer(t *testing.T) {
+-	fakeRT := &FakeRoundTripper{message: "", status: http.StatusNoContent}
+-	client := newTestClient(fakeRT)
+-	id := "4fa6e0f0c6786287e131c3852c58a2e01cc697a68231826813597e4994f1d6e2"
+-	err := client.KillContainer(KillContainerOptions{ID: id})
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-	req := fakeRT.requests[0]
+-	if req.Method != "POST" {
+-		t.Errorf("KillContainer(%q): wrong HTTP method. Want %q. Got %q.", id, "POST", req.Method)
+-	}
+-	expectedURL, _ := url.Parse(client.getURL("/containers/" + id + "/kill"))
+-	if gotPath := req.URL.Path; gotPath != expectedURL.Path {
+-		t.Errorf("KillContainer(%q): Wrong path in request. Want %q. Got %q.", id, expectedURL.Path, gotPath)
+-	}
+-}
+-
+-func TestKillContainerSignal(t *testing.T) {
+-	fakeRT := &FakeRoundTripper{message: "", status: http.StatusNoContent}
+-	client := newTestClient(fakeRT)
+-	id := "4fa6e0f0c6786287e131c3852c58a2e01cc697a68231826813597e4994f1d6e2"
+-	err := client.KillContainer(KillContainerOptions{ID: id, Signal: SIGTERM})
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-	req := fakeRT.requests[0]
+-	if req.Method != "POST" {
+-		t.Errorf("KillContainer(%q): wrong HTTP method. Want %q. Got %q.", id, "POST", req.Method)
+-	}
+-	if signal := req.URL.Query().Get("signal"); signal != "15" {
+-		t.Errorf("KillContainer(%q): Wrong query string in request. Want %q. Got %q.", id, "15", signal)
+-	}
+-}
+-
+-func TestKillContainerNotFound(t *testing.T) {
+-	client := newTestClient(&FakeRoundTripper{message: "no such container", status: http.StatusNotFound})
+-	err := client.KillContainer(KillContainerOptions{ID: "a2334"})
+-	expected := &NoSuchContainer{ID: "a2334"}
+-	if !reflect.DeepEqual(err, expected) {
+-		t.Errorf("KillContainer: Wrong error returned. Want %#v. Got %#v.", expected, err)
+-	}
+-}
+-
+-func TestRemoveContainer(t *testing.T) {
+-	fakeRT := &FakeRoundTripper{message: "", status: http.StatusOK}
+-	client := newTestClient(fakeRT)
+-	id := "4fa6e0f0c6786287e131c3852c58a2e01cc697a68231826813597e4994f1d6e2"
+-	opts := RemoveContainerOptions{ID: id}
+-	err := client.RemoveContainer(opts)
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-	req := fakeRT.requests[0]
+-	if req.Method != "DELETE" {
+-		t.Errorf("RemoveContainer(%q): wrong HTTP method. Want %q. Got %q.", id, "DELETE", req.Method)
+-	}
+-	expectedURL, _ := url.Parse(client.getURL("/containers/" + id))
+-	if gotPath := req.URL.Path; gotPath != expectedURL.Path {
+-		t.Errorf("RemoveContainer(%q): Wrong path in request. Want %q. Got %q.", id, expectedURL.Path, gotPath)
+-	}
+-}
+-
+-func TestRemoveContainerRemoveVolumes(t *testing.T) {
+-	fakeRT := &FakeRoundTripper{message: "", status: http.StatusOK}
+-	client := newTestClient(fakeRT)
+-	id := "4fa6e0f0c6786287e131c3852c58a2e01cc697a68231826813597e4994f1d6e2"
+-	opts := RemoveContainerOptions{ID: id, RemoveVolumes: true}
+-	err := client.RemoveContainer(opts)
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-	req := fakeRT.requests[0]
+-	params := map[string][]string(req.URL.Query())
+-	expected := map[string][]string{"v": {"1"}}
+-	if !reflect.DeepEqual(params, expected) {
+-		t.Errorf("RemoveContainer(%q): wrong parameters. Want %#v. Got %#v.", id, expected, params)
+-	}
+-}
+-
+-func TestRemoveContainerNotFound(t *testing.T) {
+-	client := newTestClient(&FakeRoundTripper{message: "no such container", status: http.StatusNotFound})
+-	err := client.RemoveContainer(RemoveContainerOptions{ID: "a2334"})
+-	expected := &NoSuchContainer{ID: "a2334"}
+-	if !reflect.DeepEqual(err, expected) {
+-		t.Errorf("RemoveContainer: Wrong error returned. Want %#v. Got %#v.", expected, err)
+-	}
+-}
+-
+-func TestResizeContainerTTY(t *testing.T) {
+-	fakeRT := &FakeRoundTripper{message: "", status: http.StatusOK}
+-	client := newTestClient(fakeRT)
+-	id := "4fa6e0f0c6786287e131c3852c58a2e01cc697a68231826813597e4994f1d6e2"
+-	err := client.ResizeContainerTTY(id, 40, 80)
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-	req := fakeRT.requests[0]
+-	if req.Method != "POST" {
+-		t.Errorf("ResizeContainerTTY(%q): wrong HTTP method. Want %q. Got %q.", id, "POST", req.Method)
+-	}
+-	expectedURL, _ := url.Parse(client.getURL("/containers/" + id + "/resize"))
+-	if gotPath := req.URL.Path; gotPath != expectedURL.Path {
+-		t.Errorf("ResizeContainerTTY(%q): Wrong path in request. Want %q. Got %q.", id, expectedURL.Path, gotPath)
+-	}
+-	got := map[string][]string(req.URL.Query())
+-	expectedParams := map[string][]string{
+-		"w": {"80"},
+-		"h": {"40"},
+-	}
+-	if !reflect.DeepEqual(got, expectedParams) {
+-		t.Errorf("Expected %#v, got %#v.", expectedParams, got)
+-	}
+-}
+-
+-func TestWaitContainer(t *testing.T) {
+-	fakeRT := &FakeRoundTripper{message: `{"StatusCode": 56}`, status: http.StatusOK}
+-	client := newTestClient(fakeRT)
+-	id := "4fa6e0f0c6786287e131c3852c58a2e01cc697a68231826813597e4994f1d6e2"
+-	status, err := client.WaitContainer(id)
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-	if status != 56 {
+-		t.Errorf("WaitContainer(%q): wrong return. Want 56. Got %d.", id, status)
+-	}
+-	req := fakeRT.requests[0]
+-	if req.Method != "POST" {
+-		t.Errorf("WaitContainer(%q): wrong HTTP method. Want %q. Got %q.", id, "POST", req.Method)
+-	}
+-	expectedURL, _ := url.Parse(client.getURL("/containers/" + id + "/wait"))
+-	if gotPath := req.URL.Path; gotPath != expectedURL.Path {
+-		t.Errorf("WaitContainer(%q): Wrong path in request. Want %q. Got %q.", id, expectedURL.Path, gotPath)
+-	}
+-}
+-
+-func TestWaitContainerNotFound(t *testing.T) {
+-	client := newTestClient(&FakeRoundTripper{message: "no such container", status: http.StatusNotFound})
+-	_, err := client.WaitContainer("a2334")
+-	expected := &NoSuchContainer{ID: "a2334"}
+-	if !reflect.DeepEqual(err, expected) {
+-		t.Errorf("WaitContainer: Wrong error returned. Want %#v. Got %#v.", expected, err)
+-	}
+-}
+-
+-func TestCommitContainer(t *testing.T) {
+-	response := `{"Id":"596069db4bf5"}`
+-	client := newTestClient(&FakeRoundTripper{message: response, status: http.StatusOK})
+-	id := "596069db4bf5"
+-	image, err := client.CommitContainer(CommitContainerOptions{})
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-	if image.ID != id {
+-		t.Errorf("CommitContainer: Wrong image id. Want %q. Got %q.", id, image.ID)
+-	}
+-}
+-
+-func TestCommitContainerParams(t *testing.T) {
+-	cfg := Config{Memory: 67108864}
+-	json, _ := json.Marshal(&cfg)
+-	var tests = []struct {
+-		input  CommitContainerOptions
+-		params map[string][]string
+-		body   []byte
+-	}{
+-		{CommitContainerOptions{}, map[string][]string{}, nil},
+-		{CommitContainerOptions{Container: "44c004db4b17"}, map[string][]string{"container": {"44c004db4b17"}}, nil},
+-		{
+-			CommitContainerOptions{Container: "44c004db4b17", Repository: "tsuru/python", Message: "something"},
+-			map[string][]string{"container": {"44c004db4b17"}, "repo": {"tsuru/python"}, "m": {"something"}},
+-			nil,
+-		},
+-		{
+-			CommitContainerOptions{Container: "44c004db4b17", Run: &cfg},
+-			map[string][]string{"container": {"44c004db4b17"}},
+-			json,
+-		},
+-	}
+-	fakeRT := &FakeRoundTripper{message: "[]", status: http.StatusOK}
+-	client := newTestClient(fakeRT)
+-	u, _ := url.Parse(client.getURL("/commit"))
+-	for _, tt := range tests {
+-		client.CommitContainer(tt.input)
+-		got := map[string][]string(fakeRT.requests[0].URL.Query())
+-		if !reflect.DeepEqual(got, tt.params) {
+-			t.Errorf("Expected %#v, got %#v.", tt.params, got)
+-		}
+-		if path := fakeRT.requests[0].URL.Path; path != u.Path {
+-			t.Errorf("Wrong path on request. Want %q. Got %q.", u.Path, path)
+-		}
+-		if meth := fakeRT.requests[0].Method; meth != "POST" {
+-			t.Errorf("Wrong HTTP method. Want POST. Got %s.", meth)
+-		}
+-		if tt.body != nil {
+-			if requestBody, err := ioutil.ReadAll(fakeRT.requests[0].Body); err == nil {
+-				if bytes.Compare(requestBody, tt.body) != 0 {
+-					t.Errorf("Expected body %#v, got %#v", tt.body, requestBody)
+-				}
+-			} else {
+-				t.Errorf("Error reading request body: %#v", err)
+-			}
+-		}
+-		fakeRT.Reset()
+-	}
+-}
+-
+-func TestCommitContainerFailure(t *testing.T) {
+-	client := newTestClient(&FakeRoundTripper{message: "no such container", status: http.StatusInternalServerError})
+-	_, err := client.CommitContainer(CommitContainerOptions{})
+-	if err == nil {
+-		t.Error("Expected non-nil error, got <nil>.")
+-	}
+-}
+-
+-func TestCommitContainerNotFound(t *testing.T) {
+-	client := newTestClient(&FakeRoundTripper{message: "no such container", status: http.StatusNotFound})
+-	_, err := client.CommitContainer(CommitContainerOptions{})
+-	expected := &NoSuchContainer{ID: ""}
+-	if !reflect.DeepEqual(err, expected) {
+-		t.Errorf("CommitContainer: Wrong error returned. Want %#v. Got %#v.", expected, err)
+-	}
+-}
+-
+-func TestAttachToContainerLogs(t *testing.T) {
+-	var req http.Request
+-	server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
+-		w.Write([]byte{1, 0, 0, 0, 0, 0, 0, 19})
+-		w.Write([]byte("something happened!"))
+-		req = *r
+-	}))
+-	defer server.Close()
+-	client, _ := NewClient(server.URL)
+-	client.SkipServerVersionCheck = true
+-	var buf bytes.Buffer
+-	opts := AttachToContainerOptions{
+-		Container:    "a123456",
+-		OutputStream: &buf,
+-		Stdout:       true,
+-		Stderr:       true,
+-		Logs:         true,
+-	}
+-	err := client.AttachToContainer(opts)
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-	expected := "something happened!"
+-	if buf.String() != expected {
+-		t.Errorf("AttachToContainer for logs: wrong output. Want %q. Got %q.", expected, buf.String())
+-	}
+-	if req.Method != "POST" {
+-		t.Errorf("AttachToContainer: wrong HTTP method. Want POST. Got %s.", req.Method)
+-	}
+-	u, _ := url.Parse(client.getURL("/containers/a123456/attach"))
+-	if req.URL.Path != u.Path {
+-		t.Errorf("AttachToContainer for logs: wrong HTTP path. Want %q. Got %q.", u.Path, req.URL.Path)
+-	}
+-	expectedQs := map[string][]string{
+-		"logs":   {"1"},
+-		"stdout": {"1"},
+-		"stderr": {"1"},
+-	}
+-	got := map[string][]string(req.URL.Query())
+-	if !reflect.DeepEqual(got, expectedQs) {
+-		t.Errorf("AttachToContainer: wrong query string. Want %#v. Got %#v.", expectedQs, got)
+-	}
+-}
+-
+-func TestAttachToContainer(t *testing.T) {
+-	var reader = strings.NewReader("send value")
+-	var req http.Request
+-	server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
+-		w.Write([]byte{1, 0, 0, 0, 0, 0, 0, 5})
+-		w.Write([]byte("hello"))
+-		req = *r
+-	}))
+-	defer server.Close()
+-	client, _ := NewClient(server.URL)
+-	client.SkipServerVersionCheck = true
+-	var stdout, stderr bytes.Buffer
+-	opts := AttachToContainerOptions{
+-		Container:    "a123456",
+-		OutputStream: &stdout,
+-		ErrorStream:  &stderr,
+-		InputStream:  reader,
+-		Stdin:        true,
+-		Stdout:       true,
+-		Stderr:       true,
+-		Stream:       true,
+-		RawTerminal:  true,
+-	}
+-	err := client.AttachToContainer(opts)
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-	expected := map[string][]string{
+-		"stdin":  {"1"},
+-		"stdout": {"1"},
+-		"stderr": {"1"},
+-		"stream": {"1"},
+-	}
+-	got := map[string][]string(req.URL.Query())
+-	if !reflect.DeepEqual(got, expected) {
+-		t.Errorf("AttachToContainer: wrong query string. Want %#v. Got %#v.", expected, got)
+-	}
+-}
+-
+-func TestAttachToContainerSentinel(t *testing.T) {
+-	var reader = strings.NewReader("send value")
+-	var req http.Request
+-	server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
+-		w.Write([]byte{1, 0, 0, 0, 0, 0, 0, 5})
+-		w.Write([]byte("hello"))
+-		req = *r
+-	}))
+-	defer server.Close()
+-	client, _ := NewClient(server.URL)
+-	client.SkipServerVersionCheck = true
+-	var stdout, stderr bytes.Buffer
+-	success := make(chan struct{})
+-	opts := AttachToContainerOptions{
+-		Container:    "a123456",
+-		OutputStream: &stdout,
+-		ErrorStream:  &stderr,
+-		InputStream:  reader,
+-		Stdin:        true,
+-		Stdout:       true,
+-		Stderr:       true,
+-		Stream:       true,
+-		RawTerminal:  true,
+-		Success:      success,
+-	}
+-	go client.AttachToContainer(opts)
+-	success <- <-success
+-}
+-
+-func TestAttachToContainerNilStdout(t *testing.T) {
+-	var reader = strings.NewReader("send value")
+-	var req http.Request
+-	server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
+-		w.Write([]byte{1, 0, 0, 0, 0, 0, 0, 5})
+-		w.Write([]byte("hello"))
+-		req = *r
+-	}))
+-	defer server.Close()
+-	client, _ := NewClient(server.URL)
+-	client.SkipServerVersionCheck = true
+-	var stderr bytes.Buffer
+-	opts := AttachToContainerOptions{
+-		Container:    "a123456",
+-		OutputStream: nil,
+-		ErrorStream:  &stderr,
+-		InputStream:  reader,
+-		Stdin:        true,
+-		Stdout:       true,
+-		Stderr:       true,
+-		Stream:       true,
+-		RawTerminal:  true,
+-	}
+-	err := client.AttachToContainer(opts)
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-}
+-
+-func TestAttachToContainerNilStderr(t *testing.T) {
+-	var reader = strings.NewReader("send value")
+-	var req http.Request
+-	server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
+-		w.Write([]byte{1, 0, 0, 0, 0, 0, 0, 5})
+-		w.Write([]byte("hello"))
+-		req = *r
+-	}))
+-	defer server.Close()
+-	client, _ := NewClient(server.URL)
+-	client.SkipServerVersionCheck = true
+-	var stdout bytes.Buffer
+-	opts := AttachToContainerOptions{
+-		Container:    "a123456",
+-		OutputStream: &stdout,
+-		InputStream:  reader,
+-		Stdin:        true,
+-		Stdout:       true,
+-		Stderr:       true,
+-		Stream:       true,
+-		RawTerminal:  true,
+-	}
+-	err := client.AttachToContainer(opts)
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-}
+-
+-func TestAttachToContainerRawTerminalFalse(t *testing.T) {
+-	input := strings.NewReader("send value")
+-	var req http.Request
+-	server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
+-		prefix := []byte{1, 0, 0, 0, 0, 0, 0, 5}
+-		w.Write(prefix)
+-		w.Write([]byte("hello"))
+-		req = *r
+-	}))
+-	defer server.Close()
+-	client, _ := NewClient(server.URL)
+-	client.SkipServerVersionCheck = true
+-	var stdout, stderr bytes.Buffer
+-	opts := AttachToContainerOptions{
+-		Container:    "a123456",
+-		OutputStream: &stdout,
+-		ErrorStream:  &stderr,
+-		InputStream:  input,
+-		Stdin:        true,
+-		Stdout:       true,
+-		Stderr:       true,
+-		Stream:       true,
+-		RawTerminal:  false,
+-	}
+-	err := client.AttachToContainer(opts)
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-	expected := map[string][]string{
+-		"stdin":  {"1"},
+-		"stdout": {"1"},
+-		"stderr": {"1"},
+-		"stream": {"1"},
+-	}
+-	got := map[string][]string(req.URL.Query())
+-	if !reflect.DeepEqual(got, expected) {
+-		t.Errorf("AttachToContainer: wrong query string. Want %#v. Got %#v.", expected, got)
+-	}
+-	t.Log(stderr.String())
+-	t.Log(stdout.String())
+-	if stdout.String() != "hello" {
+-		t.Errorf("AttachToContainer: wrong content written to stdout. Want %q. Got %q.", "hello", stdout.String())
+-	}
+-}
+-
+-func TestAttachToContainerWithoutContainer(t *testing.T) {
+-	var client Client
+-	err := client.AttachToContainer(AttachToContainerOptions{})
+-	expected := &NoSuchContainer{ID: ""}
+-	if !reflect.DeepEqual(err, expected) {
+-		t.Errorf("AttachToContainer: wrong error. Want %#v. Got %#v.", expected, err)
+-	}
+-}
+-
+-func TestLogs(t *testing.T) {
+-	var req http.Request
+-	server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
+-		prefix := []byte{1, 0, 0, 0, 0, 0, 0, 19}
+-		w.Write(prefix)
+-		w.Write([]byte("something happened!"))
+-		req = *r
+-	}))
+-	defer server.Close()
+-	client, _ := NewClient(server.URL)
+-	client.SkipServerVersionCheck = true
+-	var buf bytes.Buffer
+-	opts := LogsOptions{
+-		Container:    "a123456",
+-		OutputStream: &buf,
+-		Follow:       true,
+-		Stdout:       true,
+-		Stderr:       true,
+-		Timestamps:   true,
+-	}
+-	err := client.Logs(opts)
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-	expected := "something happened!"
+-	if buf.String() != expected {
+-		t.Errorf("Logs: wrong output. Want %q. Got %q.", expected, buf.String())
+-	}
+-	if req.Method != "GET" {
+-		t.Errorf("Logs: wrong HTTP method. Want GET. Got %s.", req.Method)
+-	}
+-	u, _ := url.Parse(client.getURL("/containers/a123456/logs"))
+-	if req.URL.Path != u.Path {
+-		t.Errorf("Logs: wrong HTTP path. Want %q. Got %q.", u.Path, req.URL.Path)
+-	}
+-	expectedQs := map[string][]string{
+-		"follow":     {"1"},
+-		"stdout":     {"1"},
+-		"stderr":     {"1"},
+-		"timestamps": {"1"},
+-		"tail":       {"all"},
+-	}
+-	got := map[string][]string(req.URL.Query())
+-	if !reflect.DeepEqual(got, expectedQs) {
+-		t.Errorf("Logs: wrong query string. Want %#v. Got %#v.", expectedQs, got)
+-	}
+-}
+-
+-func TestLogsNilStdoutDoesntFail(t *testing.T) {
+-	var req http.Request
+-	server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
+-		prefix := []byte{1, 0, 0, 0, 0, 0, 0, 19}
+-		w.Write(prefix)
+-		w.Write([]byte("something happened!"))
+-		req = *r
+-	}))
+-	defer server.Close()
+-	client, _ := NewClient(server.URL)
+-	client.SkipServerVersionCheck = true
+-	opts := LogsOptions{
+-		Container:  "a123456",
+-		Follow:     true,
+-		Stdout:     true,
+-		Stderr:     true,
+-		Timestamps: true,
+-	}
+-	err := client.Logs(opts)
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-}
+-
+-func TestLogsNilStderrDoesntFail(t *testing.T) {
+-	var req http.Request
+-	server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
+-		prefix := []byte{2, 0, 0, 0, 0, 0, 0, 19}
+-		w.Write(prefix)
+-		w.Write([]byte("something happened!"))
+-		req = *r
+-	}))
+-	defer server.Close()
+-	client, _ := NewClient(server.URL)
+-	client.SkipServerVersionCheck = true
+-	opts := LogsOptions{
+-		Container:  "a123456",
+-		Follow:     true,
+-		Stdout:     true,
+-		Stderr:     true,
+-		Timestamps: true,
+-	}
+-	err := client.Logs(opts)
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-}
+-
+-func TestLogsSpecifyingTail(t *testing.T) {
+-	var req http.Request
+-	server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
+-		prefix := []byte{1, 0, 0, 0, 0, 0, 0, 19}
+-		w.Write(prefix)
+-		w.Write([]byte("something happened!"))
+-		req = *r
+-	}))
+-	defer server.Close()
+-	client, _ := NewClient(server.URL)
+-	client.SkipServerVersionCheck = true
+-	var buf bytes.Buffer
+-	opts := LogsOptions{
+-		Container:    "a123456",
+-		OutputStream: &buf,
+-		Follow:       true,
+-		Stdout:       true,
+-		Stderr:       true,
+-		Timestamps:   true,
+-		Tail:         "100",
+-	}
+-	err := client.Logs(opts)
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-	expected := "something happened!"
+-	if buf.String() != expected {
+-		t.Errorf("Logs: wrong output. Want %q. Got %q.", expected, buf.String())
+-	}
+-	if req.Method != "GET" {
+-		t.Errorf("Logs: wrong HTTP method. Want GET. Got %s.", req.Method)
+-	}
+-	u, _ := url.Parse(client.getURL("/containers/a123456/logs"))
+-	if req.URL.Path != u.Path {
+-		t.Errorf("Logs: wrong HTTP path. Want %q. Got %q.", u.Path, req.URL.Path)
+-	}
+-	expectedQs := map[string][]string{
+-		"follow":     {"1"},
+-		"stdout":     {"1"},
+-		"stderr":     {"1"},
+-		"timestamps": {"1"},
+-		"tail":       {"100"},
+-	}
+-	got := map[string][]string(req.URL.Query())
+-	if !reflect.DeepEqual(got, expectedQs) {
+-		t.Errorf("Logs: wrong query string. Want %#v. Got %#v.", expectedQs, got)
+-	}
+-}
+-
+-func TestLogsRawTerminal(t *testing.T) {
+-	var req http.Request
+-	server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
+-		w.Write([]byte("something happened!"))
+-		req = *r
+-	}))
+-	defer server.Close()
+-	client, _ := NewClient(server.URL)
+-	client.SkipServerVersionCheck = true
+-	var buf bytes.Buffer
+-	opts := LogsOptions{
+-		Container:    "a123456",
+-		OutputStream: &buf,
+-		Follow:       true,
+-		RawTerminal:  true,
+-		Stdout:       true,
+-		Stderr:       true,
+-		Timestamps:   true,
+-		Tail:         "100",
+-	}
+-	err := client.Logs(opts)
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-	expected := "something happened!"
+-	if buf.String() != expected {
+-		t.Errorf("Logs: wrong output. Want %q. Got %q.", expected, buf.String())
+-	}
+-}
+-
+-func TestLogsNoContainer(t *testing.T) {
+-	var client Client
+-	err := client.Logs(LogsOptions{})
+-	expected := &NoSuchContainer{ID: ""}
+-	if !reflect.DeepEqual(err, expected) {
+-		t.Errorf("Logs: wrong error. Want %#v. Got %#v.", expected, err)
+-	}
+-}
+-
+-func TestNoSuchContainerError(t *testing.T) {
+-	var err error = &NoSuchContainer{ID: "i345"}
+-	expected := "No such container: i345"
+-	if got := err.Error(); got != expected {
+-		t.Errorf("NoSuchContainer: wrong message. Want %q. Got %q.", expected, got)
+-	}
+-}
+-
+-func TestExportContainer(t *testing.T) {
+-	content := "exported container tar content"
+-	out := stdoutMock{bytes.NewBufferString(content)}
+-	client := newTestClient(&FakeRoundTripper{status: http.StatusOK})
+-	opts := ExportContainerOptions{ID: "4fa6e0f0c678", OutputStream: out}
+-	err := client.ExportContainer(opts)
+-	if err != nil {
+-		t.Errorf("ExportContainer: caught error %#v while exporting container, expected nil", err.Error())
+-	}
+-	if out.String() != content {
+-		t.Errorf("ExportContainer: wrong stdout. Want %#v. Got %#v.", content, out.String())
+-	}
+-}
+-
+-func TestExportContainerViaUnixSocket(t *testing.T) {
+-	if runtime.GOOS != "darwin" {
+-		t.Skipf("skipping test on %q", runtime.GOOS)
+-	}
+-	content := "exported container tar content"
+-	var buf []byte
+-	out := bytes.NewBuffer(buf)
+-	tempSocket := tempfile("export_socket")
+-	defer os.Remove(tempSocket)
+-	endpoint := "unix://" + tempSocket
+-	u, _ := parseEndpoint(endpoint)
+-	client := Client{
+-		HTTPClient:             http.DefaultClient,
+-		endpoint:               endpoint,
+-		endpointURL:            u,
+-		SkipServerVersionCheck: true,
+-	}
+-	listening := make(chan string)
+-	done := make(chan int)
+-	go runStreamConnServer(t, "unix", tempSocket, listening, done)
+-	<-listening // wait for server to start
+-	opts := ExportContainerOptions{ID: "4fa6e0f0c678", OutputStream: out}
+-	err := client.ExportContainer(opts)
+-	<-done // make sure server stopped
+-	if err != nil {
+-		t.Errorf("ExportContainer: caught error %#v while exporting container, expected nil", err.Error())
+-	}
+-	if out.String() != content {
+-		t.Errorf("ExportContainer: wrong stdout. Want %#v. Got %#v.", content, out.String())
+-	}
+-}
+-
+-func runStreamConnServer(t *testing.T, network, laddr string, listening chan<- string, done chan<- int) {
+-	defer close(done)
+-	l, err := net.Listen(network, laddr)
+-	if err != nil {
+-		t.Errorf("Listen(%q, %q) failed: %v", network, laddr, err)
+-		listening <- "<nil>"
+-		return
+-	}
+-	defer l.Close()
+-	listening <- l.Addr().String()
+-	c, err := l.Accept()
+-	if err != nil {
+-		t.Logf("Accept failed: %v", err)
+-		return
+-	}
+-	c.Write([]byte("HTTP/1.1 200 OK\n\nexported container tar content"))
+-	c.Close()
+-}
+-
+-func tempfile(filename string) string {
+-	return os.TempDir() + "/" + filename + "." + strconv.Itoa(os.Getpid())
+-}
+-
+-func TestExportContainerNoId(t *testing.T) {
+-	client := Client{}
+-	out := stdoutMock{bytes.NewBufferString("")}
+-	err := client.ExportContainer(ExportContainerOptions{OutputStream: out})
+-	e, ok := err.(*NoSuchContainer)
+-	if !ok {
+-		t.Errorf("ExportContainer: wrong error. Want NoSuchContainer. Got %#v.", e)
+-	}
+-	if e.ID != "" {
+-		t.Errorf("ExportContainer: wrong ID. Want %q. Got %q", "", e.ID)
+-	}
+-}
+-
+-func TestCopyFromContainer(t *testing.T) {
+-	content := "File content"
+-	out := stdoutMock{bytes.NewBufferString(content)}
+-	client := newTestClient(&FakeRoundTripper{status: http.StatusOK})
+-	opts := CopyFromContainerOptions{
+-		Container:    "a123456",
+-		OutputStream: out,
+-	}
+-	err := client.CopyFromContainer(opts)
+-	if err != nil {
+-		t.Errorf("CopyFromContainer: caught error %#v while copying from container, expected nil", err.Error())
+-	}
+-	if out.String() != content {
+-		t.Errorf("CopyFromContainer: wrong stdout. Want %#v. Got %#v.", content, out.String())
+-	}
+-}
+-
+-func TestCopyFromContainerEmptyContainer(t *testing.T) {
+-	client := newTestClient(&FakeRoundTripper{status: http.StatusOK})
+-	err := client.CopyFromContainer(CopyFromContainerOptions{})
+-	_, ok := err.(*NoSuchContainer)
+-	if !ok {
+-		t.Errorf("CopyFromContainer: invalid error returned. Want NoSuchContainer, got %#v.", err)
+-	}
+-}
+-
+-func TestPassingNameOptToCreateContainerReturnsItInContainer(t *testing.T) {
+-	jsonContainer := `{
+-             "Id": "4fa6e0f0c6786287e131c3852c58a2e01cc697a68231826813597e4994f1d6e2",
+-	     "Warnings": []
+-}`
+-	fakeRT := &FakeRoundTripper{message: jsonContainer, status: http.StatusOK}
+-	client := newTestClient(fakeRT)
+-	config := Config{AttachStdout: true, AttachStdin: true}
+-	opts := CreateContainerOptions{Name: "TestCreateContainer", Config: &config}
+-	container, err := client.CreateContainer(opts)
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-	if container.Name != "TestCreateContainer" {
+-		t.Errorf("Container name expected to be TestCreateContainer, was %s", container.Name)
+-	}
+-}
+-
+-func TestAlwaysRestart(t *testing.T) {
+-	policy := AlwaysRestart()
+-	if policy.Name != "always" {
+-		t.Errorf("AlwaysRestart(): wrong policy name. Want %q. Got %q", "always", policy.Name)
+-	}
+-	if policy.MaximumRetryCount != 0 {
+-		t.Errorf("AlwaysRestart(): wrong MaximumRetryCount. Want 0. Got %d", policy.MaximumRetryCount)
+-	}
+-}
+-
+-func TestRestartOnFailure(t *testing.T) {
+-	const retry = 5
+-	policy := RestartOnFailure(retry)
+-	if policy.Name != "on-failure" {
+-		t.Errorf("RestartOnFailure(%d): wrong policy name. Want %q. Got %q", retry, "on-failure", policy.Name)
+-	}
+-	if policy.MaximumRetryCount != retry {
+-		t.Errorf("RestartOnFailure(%d): wrong MaximumRetryCount. Want %d. Got %d", retry, retry, policy.MaximumRetryCount)
+-	}
+-}
+-
+-func TestNeverRestart(t *testing.T) {
+-	policy := NeverRestart()
+-	if policy.Name != "no" {
+-		t.Errorf("NeverRestart(): wrong policy name. Want %q. Got %q", "no", policy.Name)
+-	}
+-	if policy.MaximumRetryCount != 0 {
+-		t.Errorf("NeverRestart(): wrong MaximumRetryCount. Want 0. Got %d", policy.MaximumRetryCount)
+-	}
+-}
+diff --git a/Godeps/_workspace/src/github.com/fsouza/go-dockerclient/env.go b/Godeps/_workspace/src/github.com/fsouza/go-dockerclient/env.go
+deleted file mode 100644
+index c54b0b0..0000000
+--- a/Godeps/_workspace/src/github.com/fsouza/go-dockerclient/env.go
++++ /dev/null
+@@ -1,168 +0,0 @@
+-// Copyright 2014 Docker authors. All rights reserved.
+-// Use of this source code is governed by a BSD-style
+-// license that can be found in the DOCKER-LICENSE file.
+-
+-package docker
+-
+-import (
+-	"encoding/json"
+-	"fmt"
+-	"io"
+-	"strconv"
+-	"strings"
+-)
+-
+-// Env represents a list of key-value pairs in the form KEY=VALUE.
+-type Env []string
+-
+-// Get returns the string value of the given key.
+-func (env *Env) Get(key string) (value string) {
+-	return env.Map()[key]
+-}
+-
+-// Exists checks whether the given key is defined in the internal Env
+-// representation.
+-func (env *Env) Exists(key string) bool {
+-	_, exists := env.Map()[key]
+-	return exists
+-}
+-
+-// GetBool returns a boolean representation of the given key. The key is false
+-// whenever its value is 0, no, false, none or an empty string. Any other value
+-// will be interpreted as true.
+-func (env *Env) GetBool(key string) (value bool) {
+-	s := strings.ToLower(strings.Trim(env.Get(key), " \t"))
+-	if s == "" || s == "0" || s == "no" || s == "false" || s == "none" {
+-		return false
+-	}
+-	return true
+-}
+-
+-// SetBool defines a boolean value to the given key.
+-func (env *Env) SetBool(key string, value bool) {
+-	if value {
+-		env.Set(key, "1")
+-	} else {
+-		env.Set(key, "0")
+-	}
+-}
+-
+-// GetInt returns the value of the provided key, converted to int.
+-//
+-// If the value cannot be represented as an integer, it returns -1.
+-func (env *Env) GetInt(key string) int {
+-	return int(env.GetInt64(key))
+-}
+-
+-// SetInt defines an integer value to the given key.
+-func (env *Env) SetInt(key string, value int) {
+-	env.Set(key, strconv.Itoa(value))
+-}
+-
+-// GetInt64 returns the value of the provided key, converted to int64.
+-//
+-// If the value cannot be represented as an integer, it returns -1.
+-func (env *Env) GetInt64(key string) int64 {
+-	s := strings.Trim(env.Get(key), " \t")
+-	val, err := strconv.ParseInt(s, 10, 64)
+-	if err != nil {
+-		return -1
+-	}
+-	return val
+-}
+-
+-// SetInt64 defines an integer (64-bit wide) value to the given key.
+-func (env *Env) SetInt64(key string, value int64) {
+-	env.Set(key, strconv.FormatInt(value, 10))
+-}
+-
+-// GetJSON unmarshals the value of the provided key in the provided iface.
+-//
+-// iface is a value that can be provided to the json.Unmarshal function.
+-func (env *Env) GetJSON(key string, iface interface{}) error {
+-	sval := env.Get(key)
+-	if sval == "" {
+-		return nil
+-	}
+-	return json.Unmarshal([]byte(sval), iface)
+-}
+-
+-// SetJSON marshals the given value to JSON format and stores it using the
+-// provided key.
+-func (env *Env) SetJSON(key string, value interface{}) error {
+-	sval, err := json.Marshal(value)
+-	if err != nil {
+-		return err
+-	}
+-	env.Set(key, string(sval))
+-	return nil
+-}
+-
+-// GetList returns a list of strings matching the provided key. It handles the
+-// list as a JSON representation of a list of strings.
+-//
+-// If the given key matches a single string, it will return a list
+-// containing only the value that matches the key.
+-func (env *Env) GetList(key string) []string {
+-	sval := env.Get(key)
+-	if sval == "" {
+-		return nil
+-	}
+-	var l []string
+-	if err := json.Unmarshal([]byte(sval), &l); err != nil {
+-		l = append(l, sval)
+-	}
+-	return l
+-}
+-
+-// SetList stores the given list in the provided key, after serializing it to
+-// JSON format.
+-func (env *Env) SetList(key string, value []string) error {
+-	return env.SetJSON(key, value)
+-}
+-
+-// Set defines the value of a key to the given string.
+-func (env *Env) Set(key, value string) {
+-	*env = append(*env, key+"="+value)
+-}
+-
+-// Decode decodes `src` as a json dictionary, and adds each decoded key-value
+-// pair to the environment.
+-//
+-// If `src` cannot be decoded as a json dictionary, an error is returned.
+-func (env *Env) Decode(src io.Reader) error {
+-	m := make(map[string]interface{})
+-	if err := json.NewDecoder(src).Decode(&m); err != nil {
+-		return err
+-	}
+-	for k, v := range m {
+-		env.SetAuto(k, v)
+-	}
+-	return nil
+-}
+-
+-// SetAuto chooses the appropriate Set* method based on the type of the given value.
+-func (env *Env) SetAuto(key string, value interface{}) {
+-	if fval, ok := value.(float64); ok {
+-		env.SetInt64(key, int64(fval))
+-	} else if sval, ok := value.(string); ok {
+-		env.Set(key, sval)
+-	} else if val, err := json.Marshal(value); err == nil {
+-		env.Set(key, string(val))
+-	} else {
+-		env.Set(key, fmt.Sprintf("%v", value))
+-	}
+-}
+-
+-// Map returns the map representation of the env.
+-func (env *Env) Map() map[string]string {
+-	if len(*env) == 0 {
+-		return nil
+-	}
+-	m := make(map[string]string)
+-	for _, kv := range *env {
+-		parts := strings.SplitN(kv, "=", 2)
+-		m[parts[0]] = parts[1]
+-	}
+-	return m
+-}
+diff --git a/Godeps/_workspace/src/github.com/fsouza/go-dockerclient/env_test.go b/Godeps/_workspace/src/github.com/fsouza/go-dockerclient/env_test.go
+deleted file mode 100644
+index 6d03d7b..0000000
+--- a/Godeps/_workspace/src/github.com/fsouza/go-dockerclient/env_test.go
++++ /dev/null
+@@ -1,349 +0,0 @@
+-// Copyright 2014 go-dockerclient authors. All rights reserved.
+-// Use of this source code is governed by a BSD-style
+-// license that can be found in the DOCKER-LICENSE file.
+-
+-package docker
+-
+-import (
+-	"bytes"
+-	"errors"
+-	"reflect"
+-	"sort"
+-	"testing"
+-)
+-
+-func TestGet(t *testing.T) {
+-	var tests = []struct {
+-		input    []string
+-		query    string
+-		expected string
+-	}{
+-		{[]string{"PATH=/usr/bin:/bin", "PYTHONPATH=/usr/local"}, "PATH", "/usr/bin:/bin"},
+-		{[]string{"PATH=/usr/bin:/bin", "PYTHONPATH=/usr/local"}, "PYTHONPATH", "/usr/local"},
+-		{[]string{"PATH=/usr/bin:/bin", "PYTHONPATH=/usr/local"}, "PYTHONPATHI", ""},
+-		{[]string{"WAT="}, "WAT", ""},
+-	}
+-	for _, tt := range tests {
+-		env := Env(tt.input)
+-		got := env.Get(tt.query)
+-		if got != tt.expected {
+-			t.Errorf("Env.Get(%q): wrong result. Want %q. Got %q", tt.query, tt.expected, got)
+-		}
+-	}
+-}
+-
+-func TestExists(t *testing.T) {
+-	var tests = []struct {
+-		input    []string
+-		query    string
+-		expected bool
+-	}{
+-		{[]string{"WAT=", "PYTHONPATH=/usr/local"}, "WAT", true},
+-		{[]string{"PATH=/usr/bin:/bin", "PYTHONPATH=/usr/local"}, "PYTHONPATH", true},
+-		{[]string{"PATH=/usr/bin:/bin", "PYTHONPATH=/usr/local"}, "PYTHONPATHI", false},
+-	}
+-	for _, tt := range tests {
+-		env := Env(tt.input)
+-		got := env.Exists(tt.query)
+-		if got != tt.expected {
+-			t.Errorf("Env.Exists(%q): wrong result. Want %v. Got %v", tt.query, tt.expected, got)
+-		}
+-	}
+-}
+-
+-func TestGetBool(t *testing.T) {
+-	var tests = []struct {
+-		input    string
+-		expected bool
+-	}{
+-		{"EMPTY_VAR", false}, {"ZERO_VAR", false}, {"NO_VAR", false},
+-		{"FALSE_VAR", false}, {"NONE_VAR", false}, {"TRUE_VAR", true},
+-		{"WAT", true}, {"PATH", true}, {"ONE_VAR", true}, {"NO_VAR_TAB", false},
+-	}
+-	env := Env([]string{
+-		"EMPTY_VAR=", "ZERO_VAR=0", "NO_VAR=no", "FALSE_VAR=false",
+-		"NONE_VAR=none", "TRUE_VAR=true", "WAT=wat", "PATH=/usr/bin:/bin",
+-		"ONE_VAR=1", "NO_VAR_TAB=0 \t\t\t",
+-	})
+-	for _, tt := range tests {
+-		got := env.GetBool(tt.input)
+-		if got != tt.expected {
+-			t.Errorf("Env.GetBool(%q): wrong result. Want %v. Got %v.", tt.input, tt.expected, got)
+-		}
+-	}
+-}
+-
+-func TestSetBool(t *testing.T) {
+-	var tests = []struct {
+-		input    bool
+-		expected string
+-	}{
+-		{true, "1"}, {false, "0"},
+-	}
+-	for _, tt := range tests {
+-		var env Env
+-		env.SetBool("SOME", tt.input)
+-		if got := env.Get("SOME"); got != tt.expected {
+-			t.Errorf("Env.SetBool(%v): wrong result. Want %q. Got %q", tt.input, tt.expected, got)
+-		}
+-	}
+-}
+-
+-func TestGetInt(t *testing.T) {
+-	var tests = []struct {
+-		input    string
+-		expected int
+-	}{
+-		{"NEGATIVE_INTEGER", -10}, {"NON_INTEGER", -1}, {"ONE", 1}, {"TWO", 2},
+-	}
+-	env := Env([]string{"NEGATIVE_INTEGER=-10", "NON_INTEGER=wat", "ONE=1", "TWO=2"})
+-	for _, tt := range tests {
+-		got := env.GetInt(tt.input)
+-		if got != tt.expected {
+-			t.Errorf("Env.GetInt(%q): wrong result. Want %d. Got %d", tt.input, tt.expected, got)
+-		}
+-	}
+-}
+-
+-func TestSetInt(t *testing.T) {
+-	var tests = []struct {
+-		input    int
+-		expected string
+-	}{
+-		{10, "10"}, {13, "13"}, {7, "7"}, {33, "33"},
+-		{0, "0"}, {-34, "-34"},
+-	}
+-	for _, tt := range tests {
+-		var env Env
+-		env.SetInt("SOME", tt.input)
+-		if got := env.Get("SOME"); got != tt.expected {
+-			t.Errorf("Env.SetInt(%d): wrong result. Want %q. Got %q", tt.input, tt.expected, got)
+-		}
+-	}
+-}
+-
+-func TestGetInt64(t *testing.T) {
+-	var tests = []struct {
+-		input    string
+-		expected int64
+-	}{
+-		{"NEGATIVE_INTEGER", -10}, {"NON_INTEGER", -1}, {"ONE", 1}, {"TWO", 2},
+-	}
+-	env := Env([]string{"NEGATIVE_INTEGER=-10", "NON_INTEGER=wat", "ONE=1", "TWO=2"})
+-	for _, tt := range tests {
+-		got := env.GetInt64(tt.input)
+-		if got != tt.expected {
+-			t.Errorf("Env.GetInt64(%q): wrong result. Want %d. Got %d", tt.input, tt.expected, got)
+-		}
+-	}
+-}
+-
+-func TestSetInt64(t *testing.T) {
+-	var tests = []struct {
+-		input    int64
+-		expected string
+-	}{
+-		{10, "10"}, {13, "13"}, {7, "7"}, {33, "33"},
+-		{0, "0"}, {-34, "-34"},
+-	}
+-	for _, tt := range tests {
+-		var env Env
+-		env.SetInt64("SOME", tt.input)
+-		if got := env.Get("SOME"); got != tt.expected {
+-			t.Errorf("Env.SetInt64(%d): wrong result. Want %q. Got %q", tt.input, tt.expected, got)
+-		}
+-	}
+-}
+-
+-func TestGetJSON(t *testing.T) {
+-	var p struct {
+-		Name string `json:"name"`
+-		Age  int    `json:"age"`
+-	}
+-	var env Env
+-	env.Set("person", `{"name":"Gopher","age":5}`)
+-	err := env.GetJSON("person", &p)
+-	if err != nil {
+-		t.Error(err)
+-	}
+-	if p.Name != "Gopher" {
+-		t.Errorf("Env.GetJSON(%q): wrong name. Want %q. Got %q", "person", "Gopher", p.Name)
+-	}
+-	if p.Age != 5 {
+-		t.Errorf("Env.GetJSON(%q): wrong age. Want %d. Got %d", "person", 5, p.Age)
+-	}
+-}
+-
+-func TestGetJSONAbsent(t *testing.T) {
+-	var l []string
+-	var env Env
+-	err := env.GetJSON("person", &l)
+-	if err != nil {
+-		t.Error(err)
+-	}
+-	if l != nil {
+-		t.Errorf("Env.GetJSON(): get unexpected list %v", l)
+-	}
+-}
+-
+-func TestGetJSONFailure(t *testing.T) {
+-	var p []string
+-	var env Env
+-	env.Set("list-person", `{"name":"Gopher","age":5}`)
+-	err := env.GetJSON("list-person", &p)
+-	if err == nil {
+-		t.Errorf("Env.GetJSON(%q): got unexpected <nil> error.", "list-person")
+-	}
+-}
+-
+-func TestSetJSON(t *testing.T) {
+-	var p1 = struct {
+-		Name string `json:"name"`
+-		Age  int    `json:"age"`
+-	}{Name: "Gopher", Age: 5}
+-	var env Env
+-	err := env.SetJSON("person", p1)
+-	if err != nil {
+-		t.Error(err)
+-	}
+-	var p2 struct {
+-		Name string `json:"name"`
+-		Age  int    `json:"age"`
+-	}
+-	err = env.GetJSON("person", &p2)
+-	if err != nil {
+-		t.Error(err)
+-	}
+-	if !reflect.DeepEqual(p1, p2) {
+-		t.Errorf("Env.SetJSON(%q): wrong result. Want %v. Got %v", "person", p1, p2)
+-	}
+-}
+-
+-func TestSetJSONFailure(t *testing.T) {
+-	var env Env
+-	err := env.SetJSON("person", unmarshable{})
+-	if err == nil {
+-		t.Error("Env.SetJSON(): got unexpected <nil> error")
+-	}
+-	if env.Exists("person") {
+-		t.Errorf("Env.SetJSON(): should not define the key %q, but did", "person")
+-	}
+-}
+-
+-func TestGetList(t *testing.T) {
+-	var tests = []struct {
+-		input    string
+-		expected []string
+-	}{
+-		{"WAT=wat", []string{"wat"}},
+-		{`WAT=["wat","wet","wit","wot","wut"]`, []string{"wat", "wet", "wit", "wot", "wut"}},
+-		{"WAT=", nil},
+-	}
+-	for _, tt := range tests {
+-		env := Env([]string{tt.input})
+-		got := env.GetList("WAT")
+-		if !reflect.DeepEqual(got, tt.expected) {
+-			t.Errorf("Env.GetList(%q): wrong result. Want %v. Got %v", "WAT", tt.expected, got)
+-		}
+-	}
+-}
+-
+-func TestSetList(t *testing.T) {
+-	list := []string{"a", "b", "c"}
+-	var env Env
+-	env.SetList("SOME", list)
+-	if got := env.GetList("SOME"); !reflect.DeepEqual(got, list) {
+-		t.Errorf("Env.SetList(%v): wrong result. Got %v", list, got)
+-	}
+-}
+-
+-func TestSet(t *testing.T) {
+-	var env Env
+-	env.Set("PATH", "/home/bin:/bin")
+-	env.Set("SOMETHING", "/usr/bin")
+-	env.Set("PATH", "/bin")
+-	if expected, got := "/usr/bin", env.Get("SOMETHING"); got != expected {
+-		t.Errorf("Env.Set(%q): wrong result. Want %q. Got %q", "SOMETHING", expected, got)
+-	}
+-	if expected, got := "/bin", env.Get("PATH"); got != expected {
+-		t.Errorf("Env.Set(%q): wrong result. Want %q. Got %q", "PATH", expected, got)
+-	}
+-}
+-
+-func TestDecode(t *testing.T) {
+-	var tests = []struct {
+-		input       string
+-		expectedOut []string
+-		expectedErr string
+-	}{
+-		{
+-			`{"PATH":"/usr/bin:/bin","containers":54,"wat":["123","345"]}`,
+-			[]string{"PATH=/usr/bin:/bin", "containers=54", `wat=["123","345"]`},
+-			"",
+-		},
+-		{"}}", nil, "invalid character '}' looking for beginning of value"},
+-		{`{}`, nil, ""},
+-	}
+-	for _, tt := range tests {
+-		var env Env
+-		err := env.Decode(bytes.NewBufferString(tt.input))
+-		if tt.expectedErr == "" {
+-			if err != nil {
+-				t.Error(err)
+-			}
+-		} else if tt.expectedErr != err.Error() {
+-			t.Errorf("Env.Decode(): invalid error. Want %q. Got %q.", tt.expectedErr, err)
+-		}
+-		got := []string(env)
+-		sort.Strings(got)
+-		sort.Strings(tt.expectedOut)
+-		if !reflect.DeepEqual(got, tt.expectedOut) {
+-			t.Errorf("Env.Decode(): wrong result. Want %v. Got %v.", tt.expectedOut, got)
+-		}
+-	}
+-}
+-
+-func TestSetAuto(t *testing.T) {
+-	buf := bytes.NewBufferString("oi")
+-	var tests = []struct {
+-		input    interface{}
+-		expected string
+-	}{
+-		{10, "10"},
+-		{10.3, "10"},
+-		{"oi", "oi"},
+-		{buf, "{}"},
+-		{unmarshable{}, "{}"},
+-	}
+-	for _, tt := range tests {
+-		var env Env
+-		env.SetAuto("SOME", tt.input)
+-		if got := env.Get("SOME"); got != tt.expected {
+-			t.Errorf("Env.SetAuto(%v): wrong result. Want %q. Got %q", tt.input, tt.expected, got)
+-		}
+-	}
+-}
+-
+-func TestMap(t *testing.T) {
+-	var tests = []struct {
+-		input    []string
+-		expected map[string]string
+-	}{
+-		{[]string{"PATH=/usr/bin:/bin", "PYTHONPATH=/usr/local"}, map[string]string{"PATH": "/usr/bin:/bin", "PYTHONPATH": "/usr/local"}},
+-		{nil, nil},
+-	}
+-	for _, tt := range tests {
+-		env := Env(tt.input)
+-		got := env.Map()
+-		if !reflect.DeepEqual(got, tt.expected) {
+-			t.Errorf("Env.Map(): wrong result. Want %v. Got %v", tt.expected, got)
+-		}
+-	}
+-}
+-
+-type unmarshable struct {
+-}
+-
+-func (unmarshable) MarshalJSON() ([]byte, error) {
+-	return nil, errors.New("cannot marshal")
+-}
+diff --git a/Godeps/_workspace/src/github.com/fsouza/go-dockerclient/event.go b/Godeps/_workspace/src/github.com/fsouza/go-dockerclient/event.go
+deleted file mode 100644
+index 262d4ee..0000000
+--- a/Godeps/_workspace/src/github.com/fsouza/go-dockerclient/event.go
++++ /dev/null
+@@ -1,278 +0,0 @@
+-// Copyright 2014 go-dockerclient authors. All rights reserved.
+-// Use of this source code is governed by a BSD-style
+-// license that can be found in the LICENSE file.
+-
+-package docker
+-
+-import (
+-	"encoding/json"
+-	"errors"
+-	"fmt"
+-	"io"
+-	"math"
+-	"net"
+-	"net/http"
+-	"net/http/httputil"
+-	"sync"
+-	"sync/atomic"
+-	"time"
+-)
+-
+-// APIEvents represents an event returned by the API.
+-type APIEvents struct {
+-	Status string `json:"Status,omitempty" yaml:"Status,omitempty"`
+-	ID     string `json:"ID,omitempty" yaml:"ID,omitempty"`
+-	From   string `json:"From,omitempty" yaml:"From,omitempty"`
+-	Time   int64  `json:"Time,omitempty" yaml:"Time,omitempty"`
+-}
+-
+-type eventMonitoringState struct {
+-	sync.RWMutex
+-	sync.WaitGroup
+-	enabled   bool
+-	lastSeen  *int64
+-	C         chan *APIEvents
+-	errC      chan error
+-	listeners []chan<- *APIEvents
+-}
+-
+-const (
+-	maxMonitorConnRetries = 5
+-	retryInitialWaitTime  = 10.
+-)
+-
+-var (
+-	// ErrNoListeners is the error returned when no listeners are available
+-	// to receive an event.
+-	ErrNoListeners = errors.New("no listeners present to receive event")
+-
+-	// ErrListenerAlreadyExists is the error returned when the listener already
+-	// exists.
+-	ErrListenerAlreadyExists = errors.New("listener already exists for docker events")
+-)
+-
+-// AddEventListener adds a new listener to container events in the Docker API.
+-//
+-// The parameter is a channel through which events will be sent.
+-func (c *Client) AddEventListener(listener chan<- *APIEvents) error {
+-	var err error
+-	if !c.eventMonitor.isEnabled() {
+-		err = c.eventMonitor.enableEventMonitoring(c)
+-		if err != nil {
+-			return err
+-		}
+-	}
+-	err = c.eventMonitor.addListener(listener)
+-	if err != nil {
+-		return err
+-	}
+-	return nil
+-}
+-
+-// RemoveEventListener removes a listener from the monitor.
+-func (c *Client) RemoveEventListener(listener chan *APIEvents) error {
+-	err := c.eventMonitor.removeListener(listener)
+-	if err != nil {
+-		return err
+-	}
+-	if len(c.eventMonitor.listeners) == 0 {
+-		err = c.eventMonitor.disableEventMonitoring()
+-		if err != nil {
+-			return err
+-		}
+-	}
+-	return nil
+-}
+-
+-func (eventState *eventMonitoringState) addListener(listener chan<- *APIEvents) error {
+-	eventState.Lock()
+-	defer eventState.Unlock()
+-	if listenerExists(listener, &eventState.listeners) {
+-		return ErrListenerAlreadyExists
+-	}
+-	eventState.Add(1)
+-	eventState.listeners = append(eventState.listeners, listener)
+-	return nil
+-}
+-
+-func (eventState *eventMonitoringState) removeListener(listener chan<- *APIEvents) error {
+-	eventState.Lock()
+-	defer eventState.Unlock()
+-	if listenerExists(listener, &eventState.listeners) {
+-		var newListeners []chan<- *APIEvents
+-		for _, l := range eventState.listeners {
+-			if l != listener {
+-				newListeners = append(newListeners, l)
+-			}
+-		}
+-		eventState.listeners = newListeners
+-		eventState.Add(-1)
+-	}
+-	return nil
+-}
+-
+-func listenerExists(a chan<- *APIEvents, list *[]chan<- *APIEvents) bool {
+-	for _, b := range *list {
+-		if b == a {
+-			return true
+-		}
+-	}
+-	return false
+-}
+-
+-func (eventState *eventMonitoringState) enableEventMonitoring(c *Client) error {
+-	eventState.Lock()
+-	defer eventState.Unlock()
+-	if !eventState.enabled {
+-		eventState.enabled = true
+-		var lastSeenDefault = int64(0)
+-		eventState.lastSeen = &lastSeenDefault
+-		eventState.C = make(chan *APIEvents, 100)
+-		eventState.errC = make(chan error, 1)
+-		go eventState.monitorEvents(c)
+-	}
+-	return nil
+-}
+-
+-func (eventState *eventMonitoringState) disableEventMonitoring() error {
+-	eventState.Wait()
+-	eventState.Lock()
+-	defer eventState.Unlock()
+-	if eventState.enabled {
+-		eventState.enabled = false
+-		close(eventState.C)
+-		close(eventState.errC)
+-	}
+-	return nil
+-}
+-
+-func (eventState *eventMonitoringState) monitorEvents(c *Client) {
+-	var err error
+-	for eventState.noListeners() {
+-		time.Sleep(10 * time.Millisecond)
+-	}
+-	if err = eventState.connectWithRetry(c); err != nil {
+-		eventState.terminate(err)
+-	}
+-	for eventState.isEnabled() {
+-		timeout := time.After(100 * time.Millisecond)
+-		select {
+-		case ev, ok := <-eventState.C:
+-			if !ok {
+-				return
+-			}
+-			go eventState.sendEvent(ev)
+-			go eventState.updateLastSeen(ev)
+-		case err = <-eventState.errC:
+-			if err == ErrNoListeners {
+-				eventState.terminate(nil)
+-				return
+-			} else if err != nil {
+-				defer func() { go eventState.monitorEvents(c) }()
+-				return
+-			}
+-		case <-timeout:
+-			continue
+-		}
+-	}
+-}
+-
+-func (eventState *eventMonitoringState) connectWithRetry(c *Client) error {
+-	var retries int
+-	var err error
+-	for err = c.eventHijack(atomic.LoadInt64(eventState.lastSeen), eventState.C, eventState.errC); err != nil && retries < maxMonitorConnRetries; retries++ {
+-		waitTime := int64(retryInitialWaitTime * math.Pow(2, float64(retries)))
+-		time.Sleep(time.Duration(waitTime) * time.Millisecond)
+-		err = c.eventHijack(atomic.LoadInt64(eventState.lastSeen), eventState.C, eventState.errC)
+-	}
+-	return err
+-}
+-
+-func (eventState *eventMonitoringState) noListeners() bool {
+-	eventState.RLock()
+-	defer eventState.RUnlock()
+-	return len(eventState.listeners) == 0
+-}
+-
+-func (eventState *eventMonitoringState) isEnabled() bool {
+-	eventState.RLock()
+-	defer eventState.RUnlock()
+-	return eventState.enabled
+-}
+-
+-func (eventState *eventMonitoringState) sendEvent(event *APIEvents) {
+-	eventState.RLock()
+-	defer eventState.RUnlock()
+-	eventState.Add(1)
+-	defer eventState.Done()
+-	if eventState.isEnabled() {
+-		if eventState.noListeners() {
+-			eventState.errC <- ErrNoListeners
+-			return
+-		}
+-
+-		for _, listener := range eventState.listeners {
+-			listener <- event
+-		}
+-	}
+-}
+-
+-func (eventState *eventMonitoringState) updateLastSeen(e *APIEvents) {
+-	eventState.Lock()
+-	defer eventState.Unlock()
+-	if atomic.LoadInt64(eventState.lastSeen) < e.Time {
+-		atomic.StoreInt64(eventState.lastSeen, e.Time)
+-	}
+-}
+-
+-func (eventState *eventMonitoringState) terminate(err error) {
+-	eventState.disableEventMonitoring()
+-}
+-
+-func (c *Client) eventHijack(startTime int64, eventChan chan *APIEvents, errChan chan error) error {
+-	uri := "/events"
+-	if startTime != 0 {
+-		uri += fmt.Sprintf("?since=%d", startTime)
+-	}
+-	protocol := c.endpointURL.Scheme
+-	address := c.endpointURL.Path
+-	if protocol != "unix" {
+-		protocol = "tcp"
+-		address = c.endpointURL.Host
+-	}
+-	dial, err := net.Dial(protocol, address)
+-	if err != nil {
+-		return err
+-	}
+-	conn := httputil.NewClientConn(dial, nil)
+-	req, err := http.NewRequest("GET", uri, nil)
+-	if err != nil {
+-		return err
+-	}
+-	res, err := conn.Do(req)
+-	if err != nil {
+-		return err
+-	}
+-	go func(res *http.Response, conn *httputil.ClientConn) {
+-		defer conn.Close()
+-		defer res.Body.Close()
+-		decoder := json.NewDecoder(res.Body)
+-		for {
+-			var event APIEvents
+-			if err = decoder.Decode(&event); err != nil {
+-				if err == io.EOF || err == io.ErrUnexpectedEOF {
+-					break
+-				}
+-				errChan <- err
+-			}
+-			if event.Time == 0 {
+-				continue
+-			}
+-			if !c.eventMonitor.isEnabled() {
+-				return
+-			}
+-			c.eventMonitor.C <- &event
+-		}
+-	}(res, conn)
+-	return nil
+-}
+diff --git a/Godeps/_workspace/src/github.com/fsouza/go-dockerclient/event_test.go b/Godeps/_workspace/src/github.com/fsouza/go-dockerclient/event_test.go
+deleted file mode 100644
+index cb54f4a..0000000
+--- a/Godeps/_workspace/src/github.com/fsouza/go-dockerclient/event_test.go
++++ /dev/null
+@@ -1,93 +0,0 @@
+-// Copyright 2014 go-dockerclient authors. All rights reserved.
+-// Use of this source code is governed by a BSD-style
+-// license that can be found in the LICENSE file.
+-
+-package docker
+-
+-import (
+-	"bufio"
+-	"fmt"
+-	"net/http"
+-	"net/http/httptest"
+-	"strings"
+-	"testing"
+-	"time"
+-)
+-
+-func TestEventListeners(t *testing.T) {
+-	response := `{"status":"create","id":"dfdf82bd3881","from":"base:latest","time":1374067924}
+-{"status":"start","id":"dfdf82bd3881","from":"base:latest","time":1374067924}
+-{"status":"stop","id":"dfdf82bd3881","from":"base:latest","time":1374067966}
+-{"status":"destroy","id":"dfdf82bd3881","from":"base:latest","time":1374067970}
+-`
+-
+-	var req http.Request
+-	server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
+-		rsc := bufio.NewScanner(strings.NewReader(response))
+-		for rsc.Scan() {
+-			w.Write([]byte(rsc.Text()))
+-			w.(http.Flusher).Flush()
+-			time.Sleep(10 * time.Millisecond)
+-		}
+-		req = *r
+-	}))
+-	defer server.Close()
+-
+-	client, err := NewClient(server.URL)
+-	if err != nil {
+-		t.Errorf("Failed to create client: %s", err)
+-	}
+-	client.SkipServerVersionCheck = true
+-
+-	listener := make(chan *APIEvents, 10)
+-	defer func() { time.Sleep(10 * time.Millisecond); client.RemoveEventListener(listener) }()
+-
+-	err = client.AddEventListener(listener)
+-	if err != nil {
+-		t.Errorf("Failed to add event listener: %s", err)
+-	}
+-
+-	timeout := time.After(1 * time.Second)
+-	var count int
+-
+-	for {
+-		select {
+-		case msg := <-listener:
+-			t.Logf("Received: %v", *msg)
+-			count++
+-			err = checkEvent(count, msg)
+-			if err != nil {
+-				t.Fatalf("Check event failed: %s", err)
+-			}
+-			if count == 4 {
+-				return
+-			}
+-		case <-timeout:
+-			t.Fatal("TestAddEventListener timed out waiting on events")
+-		}
+-	}
+-}
+-
+-func checkEvent(index int, event *APIEvents) error {
+-	if event.ID != "dfdf82bd3881" {
+-		return fmt.Errorf("event ID did not match. Expected dfdf82bd3881 got %s", event.ID)
+-	}
+-	if event.From != "base:latest" {
+-		return fmt.Errorf("event from did not match. Expected base:latest got %s", event.From)
+-	}
+-	var status string
+-	switch index {
+-	case 1:
+-		status = "create"
+-	case 2:
+-		status = "start"
+-	case 3:
+-		status = "stop"
+-	case 4:
+-		status = "destroy"
+-	}
+-	if event.Status != status {
+-		return fmt.Errorf("event status did not match. Expected %s got %s", status, event.Status)
+-	}
+-	return nil
+-}
+diff --git a/Godeps/_workspace/src/github.com/fsouza/go-dockerclient/example_test.go b/Godeps/_workspace/src/github.com/fsouza/go-dockerclient/example_test.go
+deleted file mode 100644
+index 8c2c719..0000000
+--- a/Godeps/_workspace/src/github.com/fsouza/go-dockerclient/example_test.go
++++ /dev/null
+@@ -1,168 +0,0 @@
+-// Copyright 2014 go-dockerclient authors. All rights reserved.
+-// Use of this source code is governed by a BSD-style
+-// license that can be found in the LICENSE file.
+-
+-package docker_test
+-
+-import (
+-	"archive/tar"
+-	"bytes"
+-	"fmt"
+-	"io"
+-	"log"
+-	"time"
+-
+-	"github.com/fsouza/go-dockerclient"
+-)
+-
+-func ExampleClient_AttachToContainer() {
+-	client, err := docker.NewClient("http://localhost:4243")
+-	if err != nil {
+-		log.Fatal(err)
+-	}
+-	client.SkipServerVersionCheck = true
+-	// Reading logs from container a84849 and sending them to buf.
+-	var buf bytes.Buffer
+-	err = client.AttachToContainer(docker.AttachToContainerOptions{
+-		Container:    "a84849",
+-		OutputStream: &buf,
+-		Logs:         true,
+-		Stdout:       true,
+-		Stderr:       true,
+-	})
+-	if err != nil {
+-		log.Fatal(err)
+-	}
+-	log.Println(buf.String())
+-	buf.Reset()
+-	err = client.AttachToContainer(docker.AttachToContainerOptions{
+-		Container:    "a84849",
+-		OutputStream: &buf,
+-		Stdout:       true,
+-		Stream:       true,
+-	})
+-	if err != nil {
+-		log.Fatal(err)
+-	}
+-	log.Println(buf.String())
+-}
+-
+-func ExampleClient_CopyFromContainer() {
+-	client, err := docker.NewClient("http://localhost:4243")
+-	if err != nil {
+-		log.Fatal(err)
+-	}
+-	cid := "a84849"
+-	var buf bytes.Buffer
+-	filename := "/tmp/output.txt"
+-	err = client.CopyFromContainer(docker.CopyFromContainerOptions{
+-		Container:    cid,
+-		Resource:     filename,
+-		OutputStream: &buf,
+-	})
+-	if err != nil {
+-		log.Fatalf("Error while copying from %s: %s\n", cid, err)
+-	}
+-	content := new(bytes.Buffer)
+-	r := bytes.NewReader(buf.Bytes())
+-	tr := tar.NewReader(r)
+-	_, err = tr.Next()
+-	if err != nil && err != io.EOF {
+-		log.Fatal(err)
+-	}
+-	if _, err := io.Copy(content, tr); err != nil {
+-		log.Fatal(err)
+-	}
+-	log.Println(buf.String())
+-}
+-
+-func ExampleClient_BuildImage() {
+-	client, err := docker.NewClient("http://localhost:4243")
+-	if err != nil {
+-		log.Fatal(err)
+-	}
+-
+-	t := time.Now()
+-	inputbuf, outputbuf := bytes.NewBuffer(nil), bytes.NewBuffer(nil)
+-	tr := tar.NewWriter(inputbuf)
+-	tr.WriteHeader(&tar.Header{Name: "Dockerfile", Size: 10, ModTime: t, AccessTime: t, ChangeTime: t})
+-	tr.Write([]byte("FROM base\n"))
+-	tr.Close()
+-	opts := docker.BuildImageOptions{
+-		Name:         "test",
+-		InputStream:  inputbuf,
+-		OutputStream: outputbuf,
+-	}
+-	if err := client.BuildImage(opts); err != nil {
+-		log.Fatal(err)
+-	}
+-}
+-
+-func ExampleClient_ListenEvents() {
+-	client, err := docker.NewClient("http://localhost:4243")
+-	if err != nil {
+-		log.Fatal(err)
+-	}
+-
+-	listener := make(chan *docker.APIEvents)
+-	err = client.AddEventListener(listener)
+-	if err != nil {
+-		log.Fatal(err)
+-	}
+-
+-	defer func() {
+-
+-		err = client.RemoveEventListener(listener)
+-		if err != nil {
+-			log.Fatal(err)
+-		}
+-
+-	}()
+-
+-	timeout := time.After(1 * time.Second)
+-
+-	for {
+-		select {
+-		case msg := <-listener:
+-			log.Println(msg)
+-		case <-timeout:
+-			return
+-		}
+-	}
+-
+-}
+-
+-func ExampleEnv_Map() {
+-	e := docker.Env([]string{"A=1", "B=2", "C=3"})
+-	envs := e.Map()
+-	for k, v := range envs {
+-		fmt.Printf("%s=%q\n", k, v)
+-	}
+-}
+-
+-func ExampleEnv_SetJSON() {
+-	type Person struct {
+-		Name string
+-		Age  int
+-	}
+-	p := Person{Name: "Gopher", Age: 4}
+-	var e docker.Env
+-	err := e.SetJSON("person", p)
+-	if err != nil {
+-		log.Fatal(err)
+-	}
+-}
+-
+-func ExampleEnv_GetJSON() {
+-	type Person struct {
+-		Name string
+-		Age  int
+-	}
+-	p := Person{Name: "Gopher", Age: 4}
+-	var e docker.Env
+-	e.Set("person", `{"name":"Gopher","age":4}`)
+-	err := e.GetJSON("person", &p)
+-	if err != nil {
+-		log.Fatal(err)
+-	}
+-}
+diff --git a/Godeps/_workspace/src/github.com/fsouza/go-dockerclient/image.go b/Godeps/_workspace/src/github.com/fsouza/go-dockerclient/image.go
+deleted file mode 100644
+index 4ce94c8..0000000
+--- a/Godeps/_workspace/src/github.com/fsouza/go-dockerclient/image.go
++++ /dev/null
+@@ -1,351 +0,0 @@
+-// Copyright 2014 go-dockerclient authors. All rights reserved.
+-// Use of this source code is governed by a BSD-style
+-// license that can be found in the LICENSE file.
+-
+-package docker
+-
+-import (
+-	"bytes"
+-	"encoding/base64"
+-	"encoding/json"
+-	"errors"
+-	"fmt"
+-	"io"
+-	"io/ioutil"
+-	"net/http"
+-	"net/url"
+-	"os"
+-	"time"
+-)
+-
+-// APIImages represent an image returned in the ListImages call.
+-type APIImages struct {
+-	ID          string   `json:"Id" yaml:"Id"`
+-	RepoTags    []string `json:"RepoTags,omitempty" yaml:"RepoTags,omitempty"`
+-	Created     int64    `json:"Created,omitempty" yaml:"Created,omitempty"`
+-	Size        int64    `json:"Size,omitempty" yaml:"Size,omitempty"`
+-	VirtualSize int64    `json:"VirtualSize,omitempty" yaml:"VirtualSize,omitempty"`
+-	ParentId    string   `json:"ParentId,omitempty" yaml:"ParentId,omitempty"`
+-}
+-
+-type Image struct {
+-	ID              string    `json:"Id" yaml:"Id"`
+-	Parent          string    `json:"Parent,omitempty" yaml:"Parent,omitempty"`
+-	Comment         string    `json:"Comment,omitempty" yaml:"Comment,omitempty"`
+-	Created         time.Time `json:"Created,omitempty" yaml:"Created,omitempty"`
+-	Container       string    `json:"Container,omitempty" yaml:"Container,omitempty"`
+-	ContainerConfig Config    `json:"ContainerConfig,omitempty" yaml:"ContainerConfig,omitempty"`
+-	DockerVersion   string    `json:"DockerVersion,omitempty" yaml:"DockerVersion,omitempty"`
+-	Author          string    `json:"Author,omitempty" yaml:"Author,omitempty"`
+-	Config          *Config   `json:"Config,omitempty" yaml:"Config,omitempty"`
+-	Architecture    string    `json:"Architecture,omitempty" yaml:"Architecture,omitempty"`
+-	Size            int64     `json:"Size,omitempty" yaml:"Size,omitempty"`
+-}
+-
+-type ImagePre012 struct {
+-	ID              string    `json:"id"`
+-	Parent          string    `json:"parent,omitempty"`
+-	Comment         string    `json:"comment,omitempty"`
+-	Created         time.Time `json:"created"`
+-	Container       string    `json:"container,omitempty"`
+-	ContainerConfig Config    `json:"container_config,omitempty"`
+-	DockerVersion   string    `json:"docker_version,omitempty"`
+-	Author          string    `json:"author,omitempty"`
+-	Config          *Config   `json:"config,omitempty"`
+-	Architecture    string    `json:"architecture,omitempty"`
+-	Size            int64     `json:"size,omitempty"`
+-}
+-
+-var (
+-	// ErrNoSuchImage is the error returned when the image does not exist.
+-	ErrNoSuchImage = errors.New("no such image")
+-
+-	// ErrMissingRepo is the error returned when the remote repository is
+-	// missing.
+-	ErrMissingRepo = errors.New("missing remote repository e.g. 'github.com/user/repo'")
+-
+-	// ErrMissingOutputStream is the error returned when no output stream
+-	// is provided to some calls, like BuildImage.
+-	ErrMissingOutputStream = errors.New("missing output stream")
+-)
+-
+-// ListImages returns the list of available images in the server.
+-//
+-// See http://goo.gl/dkMrwP for more details.
+-func (c *Client) ListImages(all bool) ([]APIImages, error) {
+-	path := "/images/json?all="
+-	if all {
+-		path += "1"
+-	} else {
+-		path += "0"
+-	}
+-	body, _, err := c.do("GET", path, nil)
+-	if err != nil {
+-		return nil, err
+-	}
+-	var images []APIImages
+-	err = json.Unmarshal(body, &images)
+-	if err != nil {
+-		return nil, err
+-	}
+-	return images, nil
+-}
+-
+-// RemoveImage removes an image by its name or ID.
+-//
+-// See http://goo.gl/7hjHHy for more details.
+-func (c *Client) RemoveImage(name string) error {
+-	_, status, err := c.do("DELETE", "/images/"+name, nil)
+-	if status == http.StatusNotFound {
+-		return ErrNoSuchImage
+-	}
+-	return err
+-}
+-
+-// InspectImage returns an image by its name or ID.
+-//
+-// See http://goo.gl/pHEbma for more details.
+-func (c *Client) InspectImage(name string) (*Image, error) {
+-	body, status, err := c.do("GET", "/images/"+name+"/json", nil)
+-	if status == http.StatusNotFound {
+-		return nil, ErrNoSuchImage
+-	}
+-	if err != nil {
+-		return nil, err
+-	}
+-
+-	var image Image
+-
+-	// if the caller elected to skip checking the server's version, assume it's the latest
+-	if c.SkipServerVersionCheck || c.expectedApiVersion.GreaterThanOrEqualTo(apiVersion_1_12) {
+-		err = json.Unmarshal(body, &image)
+-		if err != nil {
+-			return nil, err
+-		}
+-	} else {
+-		var imagePre012 ImagePre012
+-		err = json.Unmarshal(body, &imagePre012)
+-		if err != nil {
+-			return nil, err
+-		}
+-
+-		image.ID = imagePre012.ID
+-		image.Parent = imagePre012.Parent
+-		image.Comment = imagePre012.Comment
+-		image.Created = imagePre012.Created
+-		image.Container = imagePre012.Container
+-		image.ContainerConfig = imagePre012.ContainerConfig
+-		image.DockerVersion = imagePre012.DockerVersion
+-		image.Author = imagePre012.Author
+-		image.Config = imagePre012.Config
+-		image.Architecture = imagePre012.Architecture
+-		image.Size = imagePre012.Size
+-	}
+-
+-	return &image, nil
+-}
+-
+-// PushImageOptions represents options to use in the PushImage method.
+-//
+-// See http://goo.gl/GBmyhc for more details.
+-type PushImageOptions struct {
+-	// Name of the image
+-	Name string
+-
+-	// Tag of the image
+-	Tag string
+-
+-	// Registry server to push the image
+-	Registry string
+-
+-	OutputStream io.Writer `qs:"-"`
+-}
+-
+-// AuthConfiguration represents authentication options to use in the PushImage
+-// method. It represents the authentication in the Docker index server.
+-type AuthConfiguration struct {
+-	Username string `json:"username,omitempty"`
+-	Password string `json:"password,omitempty"`
+-	Email    string `json:"email,omitempty"`
+-}
+-
+-// PushImage pushes an image to a remote registry, logging progress to opts.OutputStream.
+-//
+-// An empty instance of AuthConfiguration may be used for unauthenticated
+-// pushes.
+-//
+-// See http://goo.gl/GBmyhc for more details.
+-func (c *Client) PushImage(opts PushImageOptions, auth AuthConfiguration) error {
+-	if opts.Name == "" {
+-		return ErrNoSuchImage
+-	}
+-	name := opts.Name
+-	opts.Name = ""
+-	path := "/images/" + name + "/push?" + queryString(&opts)
+-	var headers = make(map[string]string)
+-	var buf bytes.Buffer
+-	json.NewEncoder(&buf).Encode(auth)
+-
+-	headers["X-Registry-Auth"] = base64.URLEncoding.EncodeToString(buf.Bytes())
+-
+-	return c.stream("POST", path, true, false, headers, nil, opts.OutputStream, nil)
+-}
+-
+-// PullImageOptions presents the set of options available for pulling an image
+-// from a registry.
+-//
+-// See http://goo.gl/PhBKnS for more details.
+-type PullImageOptions struct {
+-	Repository    string `qs:"fromImage"`
+-	Registry      string
+-	Tag           string
+-	OutputStream  io.Writer `qs:"-"`
+-	RawJSONStream bool      `qs:"-"`
+-}
+-
+-// PullImage pulls an image from a remote registry, logging progress to opts.OutputStream.
+-//
+-// See http://goo.gl/PhBKnS for more details.
+-func (c *Client) PullImage(opts PullImageOptions, auth AuthConfiguration) error {
+-	if opts.Repository == "" {
+-		return ErrNoSuchImage
+-	}
+-
+-	var headers = make(map[string]string)
+-	var buf bytes.Buffer
+-	json.NewEncoder(&buf).Encode(auth)
+-	headers["X-Registry-Auth"] = base64.URLEncoding.EncodeToString(buf.Bytes())
+-
+-	return c.createImage(queryString(&opts), headers, nil, opts.OutputStream, opts.RawJSONStream)
+-}
+-
+-func (c *Client) createImage(qs string, headers map[string]string, in io.Reader, w io.Writer, rawJSONStream bool) error {
+-	path := "/images/create?" + qs
+-	return c.stream("POST", path, true, rawJSONStream, headers, in, w, nil)
+-}
+-
+-// LoadImageOptions represents the options for the LoadImage Docker API call.
+-//
+-// See http://goo.gl/Y8NNCq for more details.
+-type LoadImageOptions struct {
+-	InputStream io.Reader
+-}
+-
+-// LoadImage imports a tarball Docker image
+-//
+-// See http://goo.gl/Y8NNCq for more details.
+-func (c *Client) LoadImage(opts LoadImageOptions) error {
+-	return c.stream("POST", "/images/load", true, false, nil, opts.InputStream, nil, nil)
+-}
+-
+-// ExportImageOptions represents the options for the ExportImage Docker API call.
+-//
+-// See http://goo.gl/mi6kvk for more details.
+-type ExportImageOptions struct {
+-	Name         string
+-	OutputStream io.Writer
+-}
+-
+-// ExportImage exports an image (as a tar file) into the stream
+-//
+-// See http://goo.gl/mi6kvk for more details.
+-func (c *Client) ExportImage(opts ExportImageOptions) error {
+-	return c.stream("GET", fmt.Sprintf("/images/%s/get", opts.Name), true, false, nil, nil, opts.OutputStream, nil)
+-}
+-
+-// ImportImageOptions presents the set of information available for importing
+-// an image from a source file or from stdin.
+-//
+-// See http://goo.gl/PhBKnS for more details.
+-type ImportImageOptions struct {
+-	Repository string `qs:"repo"`
+-	Source     string `qs:"fromSrc"`
+-	Tag        string `qs:"tag"`
+-
+-	InputStream  io.Reader `qs:"-"`
+-	OutputStream io.Writer `qs:"-"`
+-}
+-
+-// ImportImage imports an image from a URL, a file, or stdin
+-//
+-// See http://goo.gl/PhBKnS for more details.
+-func (c *Client) ImportImage(opts ImportImageOptions) error {
+-	if opts.Repository == "" {
+-		return ErrNoSuchImage
+-	}
+-	if opts.Source != "-" {
+-		opts.InputStream = nil
+-	}
+-	if opts.Source != "-" && !isURL(opts.Source) {
+-		f, err := os.Open(opts.Source)
+-		if err != nil {
+-			return err
+-		}
+-		b, err := ioutil.ReadAll(f)
+-		opts.InputStream = bytes.NewBuffer(b)
+-		opts.Source = "-"
+-	}
+-	return c.createImage(queryString(&opts), nil, opts.InputStream, opts.OutputStream, false)
+-}
+-
+-// BuildImageOptions presents the set of information available for building
+-// an image from a tarfile containing a Dockerfile; for details about the
+-// Dockerfile format, see http://docs.docker.io/en/latest/reference/builder/
+-type BuildImageOptions struct {
+-	Name                string    `qs:"t"`
+-	NoCache             bool      `qs:"nocache"`
+-	SuppressOutput      bool      `qs:"q"`
+-	RmTmpContainer      bool      `qs:"rm"`
+-	ForceRmTmpContainer bool      `qs:"forcerm"`
+-	InputStream         io.Reader `qs:"-"`
+-	OutputStream        io.Writer `qs:"-"`
+-	Remote              string    `qs:"remote"`
+-}
+-
+-// BuildImage builds an image from a tarball's url or a Dockerfile in the input
+-// stream.
+-func (c *Client) BuildImage(opts BuildImageOptions) error {
+-	if opts.OutputStream == nil {
+-		return ErrMissingOutputStream
+-	}
+-	var headers map[string]string
+-	if opts.Remote != "" && opts.Name == "" {
+-		opts.Name = opts.Remote
+-	}
+-	if opts.InputStream != nil {
+-		headers = map[string]string{"Content-Type": "application/tar"}
+-	} else if opts.Remote == "" {
+-		return ErrMissingRepo
+-	}
+-	return c.stream("POST", fmt.Sprintf("/build?%s",
+-		queryString(&opts)), true, false, headers, opts.InputStream, opts.OutputStream, nil)
+-}
+-
+-// TagImageOptions presents the set of options available for tagging an image
+-type TagImageOptions struct {
+-	Repo  string
+-	Tag   string
+-	Force bool
+-}
+-
+-// TagImage adds a tag to the image 'name'
+-func (c *Client) TagImage(name string, opts TagImageOptions) error {
+-	if name == "" {
+-		return ErrNoSuchImage
+-	}
+-	_, status, err := c.do("POST", fmt.Sprintf("/images/"+name+"/tag?%s",
+-		queryString(&opts)), nil)
+-	if status == http.StatusNotFound {
+-		return ErrNoSuchImage
+-	}
+-
+-	return err
+-}
+-
+-func isURL(u string) bool {
+-	p, err := url.Parse(u)
+-	if err != nil {
+-		return false
+-	}
+-	return p.Scheme == "http" || p.Scheme == "https"
+-}
+diff --git a/Godeps/_workspace/src/github.com/fsouza/go-dockerclient/image_test.go b/Godeps/_workspace/src/github.com/fsouza/go-dockerclient/image_test.go
+deleted file mode 100644
+index 97612f2..0000000
+--- a/Godeps/_workspace/src/github.com/fsouza/go-dockerclient/image_test.go
++++ /dev/null
+@@ -1,712 +0,0 @@
+-// Copyright 2014 go-dockerclient authors. All rights reserved.
+-// Use of this source code is governed by a BSD-style
+-// license that can be found in the LICENSE file.
+-
+-package docker
+-
+-import (
+-	"bytes"
+-	"encoding/base64"
+-	"encoding/json"
+-	"io/ioutil"
+-	"net/http"
+-	"net/url"
+-	"os"
+-	"reflect"
+-	"strings"
+-	"testing"
+-)
+-
+-func newTestClient(rt *FakeRoundTripper) Client {
+-	endpoint := "http://localhost:4243"
+-	u, _ := parseEndpoint("http://localhost:4243")
+-	client := Client{
+-		HTTPClient:             &http.Client{Transport: rt},
+-		endpoint:               endpoint,
+-		endpointURL:            u,
+-		SkipServerVersionCheck: true,
+-	}
+-	return client
+-}
+-
+-type stdoutMock struct {
+-	*bytes.Buffer
+-}
+-
+-func (m stdoutMock) Close() error {
+-	return nil
+-}
+-
+-type stdinMock struct {
+-	*bytes.Buffer
+-}
+-
+-func (m stdinMock) Close() error {
+-	return nil
+-}
+-
+-func TestListImages(t *testing.T) {
+-	body := `[
+-     {
+-             "Repository":"base",
+-             "Tag":"ubuntu-12.10",
+-             "Id":"b750fe79269d",
+-             "Created":1364102658
+-     },
+-     {
+-             "Repository":"base",
+-             "Tag":"ubuntu-quantal",
+-             "Id":"b750fe79269d",
+-             "Created":1364102658
+-     },
+-     {
+-             "RepoTag": [
+-             "ubuntu:12.04",
+-             "ubuntu:precise",
+-             "ubuntu:latest"
+-             ],
+-             "Id": "8dbd9e392a964c",
+-             "Created": 1365714795,
+-             "Size": 131506275,
+-             "VirtualSize": 131506275
+-      },
+-      {
+-             "RepoTag": [
+-             "ubuntu:12.10",
+-             "ubuntu:quantal"
+-             ],
+-             "ParentId": "27cf784147099545",
+-             "Id": "b750fe79269d2e",
+-             "Created": 1364102658,
+-             "Size": 24653,
+-             "VirtualSize": 180116135
+-      }
+-]`
+-	var expected []APIImages
+-	err := json.Unmarshal([]byte(body), &expected)
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-	client := newTestClient(&FakeRoundTripper{message: body, status: http.StatusOK})
+-	images, err := client.ListImages(false)
+-	if err != nil {
+-		t.Error(err)
+-	}
+-	if !reflect.DeepEqual(images, expected) {
+-		t.Errorf("ListImages: Wrong return value. Want %#v. Got %#v.", expected, images)
+-	}
+-}
+-
+-func TestListImagesParameters(t *testing.T) {
+-	fakeRT := &FakeRoundTripper{message: "null", status: http.StatusOK}
+-	client := newTestClient(fakeRT)
+-	_, err := client.ListImages(false)
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-	req := fakeRT.requests[0]
+-	if req.Method != "GET" {
+-		t.Errorf("ListImages(false): Wrong HTTP method. Want GET. Got %s.", req.Method)
+-	}
+-	if all := req.URL.Query().Get("all"); all != "0" {
+-		t.Errorf("ListImages(false): Wrong parameter. Want all=0. Got all=%s", all)
+-	}
+-	fakeRT.Reset()
+-	_, err = client.ListImages(true)
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-	req = fakeRT.requests[0]
+-	if all := req.URL.Query().Get("all"); all != "1" {
+-		t.Errorf("ListImages(true): Wrong parameter. Want all=1. Got all=%s", all)
+-	}
+-}
+-
+-func TestRemoveImage(t *testing.T) {
+-	name := "test"
+-	fakeRT := &FakeRoundTripper{message: "", status: http.StatusNoContent}
+-	client := newTestClient(fakeRT)
+-	err := client.RemoveImage(name)
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-	req := fakeRT.requests[0]
+-	expectedMethod := "DELETE"
+-	if req.Method != expectedMethod {
+-		t.Errorf("RemoveImage(%q): Wrong HTTP method. Want %s. Got %s.", name, expectedMethod, req.Method)
+-	}
+-	u, _ := url.Parse(client.getURL("/images/" + name))
+-	if req.URL.Path != u.Path {
+-		t.Errorf("RemoveImage(%q): Wrong request path. Want %q. Got %q.", name, u.Path, req.URL.Path)
+-	}
+-}
+-
+-func TestRemoveImageNotFound(t *testing.T) {
+-	client := newTestClient(&FakeRoundTripper{message: "no such image", status: http.StatusNotFound})
+-	err := client.RemoveImage("test:")
+-	if err != ErrNoSuchImage {
+-		t.Errorf("RemoveImage: wrong error. Want %#v. Got %#v.", ErrNoSuchImage, err)
+-	}
+-}
+-
+-func TestInspectImage(t *testing.T) {
+-	body := `{
+-     "id":"b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc",
+-     "parent":"27cf784147099545",
+-     "created":"2013-03-23T22:24:18.818426-07:00",
+-     "container":"3d67245a8d72ecf13f33dffac9f79dcdf70f75acb84d308770391510e0c23ad0",
+-     "container_config":{"Memory":0}
+-}`
+-	var expected Image
+-	json.Unmarshal([]byte(body), &expected)
+-	fakeRT := &FakeRoundTripper{message: body, status: http.StatusOK}
+-	client := newTestClient(fakeRT)
+-	image, err := client.InspectImage(expected.ID)
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-	if !reflect.DeepEqual(*image, expected) {
+-		t.Errorf("InspectImage(%q): Wrong image returned. Want %#v. Got %#v.", expected.ID, expected, *image)
+-	}
+-	req := fakeRT.requests[0]
+-	if req.Method != "GET" {
+-		t.Errorf("InspectImage(%q): Wrong HTTP method. Want GET. Got %s.", expected.ID, req.Method)
+-	}
+-	u, _ := url.Parse(client.getURL("/images/" + expected.ID + "/json"))
+-	if req.URL.Path != u.Path {
+-		t.Errorf("InspectImage(%q): Wrong request URL. Want %q. Got %q.", expected.ID, u.Path, req.URL.Path)
+-	}
+-}
+-
+-func TestInspectImageNotFound(t *testing.T) {
+-	client := newTestClient(&FakeRoundTripper{message: "no such image", status: http.StatusNotFound})
+-	name := "test"
+-	image, err := client.InspectImage(name)
+-	if image != nil {
+-		t.Errorf("InspectImage(%q): expected <nil> image, got %#v.", name, image)
+-	}
+-	if err != ErrNoSuchImage {
+-		t.Errorf("InspectImage(%q): wrong error. Want %#v. Got %#v.", name, ErrNoSuchImage, err)
+-	}
+-}
+-
+-func TestPushImage(t *testing.T) {
+-	fakeRT := &FakeRoundTripper{message: "Pushing 1/100", status: http.StatusOK}
+-	client := newTestClient(fakeRT)
+-	var buf bytes.Buffer
+-	err := client.PushImage(PushImageOptions{Name: "test", OutputStream: &buf}, AuthConfiguration{})
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-	expected := "Pushing 1/100"
+-	if buf.String() != expected {
+-		t.Errorf("PushImage: Wrong output. Want %q. Got %q.", expected, buf.String())
+-	}
+-	req := fakeRT.requests[0]
+-	if req.Method != "POST" {
+-		t.Errorf("PushImage: Wrong HTTP method. Want POST. Got %s.", req.Method)
+-	}
+-	u, _ := url.Parse(client.getURL("/images/test/push"))
+-	if req.URL.Path != u.Path {
+-		t.Errorf("PushImage: Wrong request path. Want %q. Got %q.", u.Path, req.URL.Path)
+-	}
+-	if query := req.URL.Query().Encode(); query != "" {
+-		t.Errorf("PushImage: Wrong query string. Want no parameters, got %q.", query)
+-	}
+-
+-	auth, err := base64.URLEncoding.DecodeString(req.Header.Get("X-Registry-Auth"))
+-	if err != nil {
+-		t.Errorf("PushImage: caught error decoding auth. %#v", err.Error())
+-	}
+-	if strings.TrimSpace(string(auth)) != "{}" {
+-		t.Errorf("PushImage: wrong body. Want %q. Got %q.",
+-			base64.URLEncoding.EncodeToString([]byte("{}")), req.Header.Get("X-Registry-Auth"))
+-	}
+-}
+-
+-func TestPushImageWithAuthentication(t *testing.T) {
+-	fakeRT := &FakeRoundTripper{message: "Pushing 1/100", status: http.StatusOK}
+-	client := newTestClient(fakeRT)
+-	var buf bytes.Buffer
+-	inputAuth := AuthConfiguration{
+-		Username: "gopher",
+-		Password: "gopher123",
+-		Email:    "gopher@tsuru.io",
+-	}
+-	err := client.PushImage(PushImageOptions{Name: "test", OutputStream: &buf}, inputAuth)
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-	req := fakeRT.requests[0]
+-	var gotAuth AuthConfiguration
+-
+-	auth, err := base64.URLEncoding.DecodeString(req.Header.Get("X-Registry-Auth"))
+-	if err != nil {
+-		t.Errorf("PushImage: caught error decoding auth. %#v", err.Error())
+-	}
+-
+-	err = json.Unmarshal(auth, &gotAuth)
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-	if !reflect.DeepEqual(gotAuth, inputAuth) {
+-		t.Errorf("PushImage: wrong auth configuration. Want %#v. Got %#v.", inputAuth, gotAuth)
+-	}
+-}
+-
+-func TestPushImageCustomRegistry(t *testing.T) {
+-	fakeRT := &FakeRoundTripper{message: "Pushing 1/100", status: http.StatusOK}
+-	client := newTestClient(fakeRT)
+-	var authConfig AuthConfiguration
+-	var buf bytes.Buffer
+-	opts := PushImageOptions{
+-		Name: "test", Registry: "docker.tsuru.io",
+-		OutputStream: &buf,
+-	}
+-	err := client.PushImage(opts, authConfig)
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-	req := fakeRT.requests[0]
+-	expectedQuery := "registry=docker.tsuru.io"
+-	if query := req.URL.Query().Encode(); query != expectedQuery {
+-		t.Errorf("PushImage: Wrong query string. Want %q. Got %q.", expectedQuery, query)
+-	}
+-}
+-
+-func TestPushImageNoName(t *testing.T) {
+-	client := Client{}
+-	err := client.PushImage(PushImageOptions{}, AuthConfiguration{})
+-	if err != ErrNoSuchImage {
+-		t.Errorf("PushImage: got wrong error. Want %#v. Got %#v.", ErrNoSuchImage, err)
+-	}
+-}
+-
+-func TestPullImage(t *testing.T) {
+-	fakeRT := &FakeRoundTripper{message: "Pulling 1/100", status: http.StatusOK}
+-	client := newTestClient(fakeRT)
+-	var buf bytes.Buffer
+-	err := client.PullImage(PullImageOptions{Repository: "base", OutputStream: &buf},
+-		AuthConfiguration{})
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-	expected := "Pulling 1/100"
+-	if buf.String() != expected {
+-		t.Errorf("PullImage: Wrong output. Want %q. Got %q.", expected, buf.String())
+-	}
+-	req := fakeRT.requests[0]
+-	if req.Method != "POST" {
+-		t.Errorf("PullImage: Wrong HTTP method. Want POST. Got %s.", req.Method)
+-	}
+-	u, _ := url.Parse(client.getURL("/images/create"))
+-	if req.URL.Path != u.Path {
+-		t.Errorf("PullImage: Wrong request path. Want %q. Got %q.", u.Path, req.URL.Path)
+-	}
+-	expectedQuery := "fromImage=base"
+-	if query := req.URL.Query().Encode(); query != expectedQuery {
+-		t.Errorf("PullImage: Wrong query string. Want %q. Got %q.", expectedQuery, query)
+-	}
+-}
+-
+-func TestPullImageWithRawJSON(t *testing.T) {
+-	body := `
+-	{"status":"Pulling..."}
+-	{"status":"Pulling", "progress":"1 B/ 100 B", "progressDetail":{"current":1, "total":100}}
+-	`
+-	fakeRT := &FakeRoundTripper{
+-		message: body,
+-		status:  http.StatusOK,
+-		header: map[string]string{
+-			"Content-Type": "application/json",
+-		},
+-	}
+-	client := newTestClient(fakeRT)
+-	var buf bytes.Buffer
+-	err := client.PullImage(PullImageOptions{
+-		Repository:    "base",
+-		OutputStream:  &buf,
+-		RawJSONStream: true,
+-	}, AuthConfiguration{})
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-	if buf.String() != body {
+-		t.Errorf("PullImage: Wrong raw output. Want %q. Got %q", body, buf.String())
+-	}
+-}
+-
+-func TestPullImageWithoutOutputStream(t *testing.T) {
+-	fakeRT := &FakeRoundTripper{message: "Pulling 1/100", status: http.StatusOK}
+-	client := newTestClient(fakeRT)
+-	opts := PullImageOptions{
+-		Repository: "base",
+-		Registry:   "docker.tsuru.io",
+-	}
+-	err := client.PullImage(opts, AuthConfiguration{})
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-	req := fakeRT.requests[0]
+-	expected := map[string][]string{"fromImage": {"base"}, "registry": {"docker.tsuru.io"}}
+-	got := map[string][]string(req.URL.Query())
+-	if !reflect.DeepEqual(got, expected) {
+-		t.Errorf("PullImage: wrong query string. Want %#v. Got %#v.", expected, got)
+-	}
+-}
+-
+-func TestPullImageCustomRegistry(t *testing.T) {
+-	fakeRT := &FakeRoundTripper{message: "Pulling 1/100", status: http.StatusOK}
+-	client := newTestClient(fakeRT)
+-	var buf bytes.Buffer
+-	opts := PullImageOptions{
+-		Repository:   "base",
+-		Registry:     "docker.tsuru.io",
+-		OutputStream: &buf,
+-	}
+-	err := client.PullImage(opts, AuthConfiguration{})
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-	req := fakeRT.requests[0]
+-	expected := map[string][]string{"fromImage": {"base"}, "registry": {"docker.tsuru.io"}}
+-	got := map[string][]string(req.URL.Query())
+-	if !reflect.DeepEqual(got, expected) {
+-		t.Errorf("PullImage: wrong query string. Want %#v. Got %#v.", expected, got)
+-	}
+-}
+-
+-func TestPullImageTag(t *testing.T) {
+-	fakeRT := &FakeRoundTripper{message: "Pulling 1/100", status: http.StatusOK}
+-	client := newTestClient(fakeRT)
+-	var buf bytes.Buffer
+-	opts := PullImageOptions{
+-		Repository:   "base",
+-		Registry:     "docker.tsuru.io",
+-		Tag:          "latest",
+-		OutputStream: &buf,
+-	}
+-	err := client.PullImage(opts, AuthConfiguration{})
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-	req := fakeRT.requests[0]
+-	expected := map[string][]string{"fromImage": {"base"}, "registry": {"docker.tsuru.io"}, "tag": {"latest"}}
+-	got := map[string][]string(req.URL.Query())
+-	if !reflect.DeepEqual(got, expected) {
+-		t.Errorf("PullImage: wrong query string. Want %#v. Got %#v.", expected, got)
+-	}
+-}
+-
+-func TestPullImageNoRepository(t *testing.T) {
+-	var opts PullImageOptions
+-	client := Client{}
+-	err := client.PullImage(opts, AuthConfiguration{})
+-	if err != ErrNoSuchImage {
+-		t.Errorf("PullImage: got wrong error. Want %#v. Got %#v.", ErrNoSuchImage, err)
+-	}
+-}
+-
+-func TestImportImageFromUrl(t *testing.T) {
+-	fakeRT := &FakeRoundTripper{message: "", status: http.StatusOK}
+-	client := newTestClient(fakeRT)
+-	var buf bytes.Buffer
+-	opts := ImportImageOptions{
+-		Source:       "http://mycompany.com/file.tar",
+-		Repository:   "testimage",
+-		Tag:          "tag",
+-		OutputStream: &buf,
+-	}
+-	err := client.ImportImage(opts)
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-	req := fakeRT.requests[0]
+-	expected := map[string][]string{"fromSrc": {opts.Source}, "repo": {opts.Repository}, "tag": {opts.Tag}}
+-	got := map[string][]string(req.URL.Query())
+-	if !reflect.DeepEqual(got, expected) {
+-		t.Errorf("ImportImage: wrong query string. Want %#v. Got %#v.", expected, got)
+-	}
+-}
+-
+-func TestImportImageFromInput(t *testing.T) {
+-	fakeRT := &FakeRoundTripper{message: "", status: http.StatusOK}
+-	client := newTestClient(fakeRT)
+-	in := bytes.NewBufferString("tar content")
+-	var buf bytes.Buffer
+-	opts := ImportImageOptions{
+-		Source: "-", Repository: "testimage",
+-		InputStream: in, OutputStream: &buf,
+-		Tag: "tag",
+-	}
+-	err := client.ImportImage(opts)
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-	req := fakeRT.requests[0]
+-	expected := map[string][]string{"fromSrc": {opts.Source}, "repo": {opts.Repository}, "tag": {opts.Tag}}
+-	got := map[string][]string(req.URL.Query())
+-	if !reflect.DeepEqual(got, expected) {
+-		t.Errorf("ImportImage: wrong query string. Want %#v. Got %#v.", expected, got)
+-	}
+-	body, err := ioutil.ReadAll(req.Body)
+-	if err != nil {
+-		t.Errorf("ImportImage: caught error while reading body %#v", err.Error())
+-	}
+-	e := "tar content"
+-	if string(body) != e {
+-		t.Errorf("ImportImage: wrong body. Want %#v. Got %#v.", e, string(body))
+-	}
+-}
+-
+-func TestImportImageDoesNotPassesInputIfSourceIsNotDash(t *testing.T) {
+-	fakeRT := &FakeRoundTripper{message: "", status: http.StatusOK}
+-	client := newTestClient(fakeRT)
+-	var buf bytes.Buffer
+-	in := bytes.NewBufferString("foo")
+-	opts := ImportImageOptions{
+-		Source: "http://test.com/container.tar", Repository: "testimage",
+-		InputStream: in, OutputStream: &buf,
+-	}
+-	err := client.ImportImage(opts)
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-	req := fakeRT.requests[0]
+-	expected := map[string][]string{"fromSrc": {opts.Source}, "repo": {opts.Repository}}
+-	got := map[string][]string(req.URL.Query())
+-	if !reflect.DeepEqual(got, expected) {
+-		t.Errorf("ImportImage: wrong query string. Want %#v. Got %#v.", expected, got)
+-	}
+-	body, err := ioutil.ReadAll(req.Body)
+-	if err != nil {
+-		t.Errorf("ImportImage: caught error while reading body %#v", err.Error())
+-	}
+-	if string(body) != "" {
+-		t.Errorf("ImportImage: wrong body. Want nothing. Got %#v.", string(body))
+-	}
+-}
+-
+-func TestImportImageShouldPassTarContentToBodyWhenSourceIsFilePath(t *testing.T) {
+-	fakeRT := &FakeRoundTripper{message: "", status: http.StatusOK}
+-	client := newTestClient(fakeRT)
+-	var buf bytes.Buffer
+-	tarPath := "testing/data/container.tar"
+-	opts := ImportImageOptions{
+-		Source: tarPath, Repository: "testimage",
+-		OutputStream: &buf,
+-	}
+-	err := client.ImportImage(opts)
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-	tar, err := os.Open(tarPath)
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-	req := fakeRT.requests[0]
+-	tarContent, err := ioutil.ReadAll(tar)
+-	body, err := ioutil.ReadAll(req.Body)
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-	if !reflect.DeepEqual(tarContent, body) {
+-		t.Errorf("ImportImage: wrong body. Want %#v content. Got %#v.", tarPath, body)
+-	}
+-}
+-
+-func TestImportImageShouldChangeSourceToDashWhenItsAFilePath(t *testing.T) {
+-	fakeRT := &FakeRoundTripper{message: "", status: http.StatusOK}
+-	client := newTestClient(fakeRT)
+-	var buf bytes.Buffer
+-	tarPath := "testing/data/container.tar"
+-	opts := ImportImageOptions{
+-		Source: tarPath, Repository: "testimage",
+-		OutputStream: &buf,
+-	}
+-	err := client.ImportImage(opts)
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-	req := fakeRT.requests[0]
+-	expected := map[string][]string{"fromSrc": {"-"}, "repo": {opts.Repository}}
+-	got := map[string][]string(req.URL.Query())
+-	if !reflect.DeepEqual(got, expected) {
+-		t.Errorf("ImportImage: wrong query string. Want %#v. Got %#v.", expected, got)
+-	}
+-}
+-
+-func TestBuildImageParameters(t *testing.T) {
+-	fakeRT := &FakeRoundTripper{message: "", status: http.StatusOK}
+-	client := newTestClient(fakeRT)
+-	var buf bytes.Buffer
+-	opts := BuildImageOptions{
+-		Name:                "testImage",
+-		NoCache:             true,
+-		SuppressOutput:      true,
+-		RmTmpContainer:      true,
+-		ForceRmTmpContainer: true,
+-		InputStream:         &buf,
+-		OutputStream:        &buf,
+-	}
+-	err := client.BuildImage(opts)
+-	if err != nil && strings.Index(err.Error(), "build image fail") == -1 {
+-		t.Fatal(err)
+-	}
+-	req := fakeRT.requests[0]
+-	expected := map[string][]string{"t": {opts.Name}, "nocache": {"1"}, "q": {"1"}, "rm": {"1"}, "forcerm": {"1"}}
+-	got := map[string][]string(req.URL.Query())
+-	if !reflect.DeepEqual(got, expected) {
+-		t.Errorf("BuildImage: wrong query string. Want %#v. Got %#v.", expected, got)
+-	}
+-}
+-
+-func TestBuildImageParametersForRemoteBuild(t *testing.T) {
+-	fakeRT := &FakeRoundTripper{message: "", status: http.StatusOK}
+-	client := newTestClient(fakeRT)
+-	var buf bytes.Buffer
+-	opts := BuildImageOptions{
+-		Name:           "testImage",
+-		Remote:         "testing/data/container.tar",
+-		SuppressOutput: true,
+-		OutputStream:   &buf,
+-	}
+-	err := client.BuildImage(opts)
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-	req := fakeRT.requests[0]
+-	expected := map[string][]string{"t": {opts.Name}, "remote": {opts.Remote}, "q": {"1"}}
+-	got := map[string][]string(req.URL.Query())
+-	if !reflect.DeepEqual(got, expected) {
+-		t.Errorf("ImportImage: wrong query string. Want %#v. Got %#v.", expected, got)
+-	}
+-}
+-
+-func TestBuildImageMissingRepoAndNilInput(t *testing.T) {
+-	fakeRT := &FakeRoundTripper{message: "", status: http.StatusOK}
+-	client := newTestClient(fakeRT)
+-	var buf bytes.Buffer
+-	opts := BuildImageOptions{
+-		Name:           "testImage",
+-		SuppressOutput: true,
+-		OutputStream:   &buf,
+-	}
+-	err := client.BuildImage(opts)
+-	if err != ErrMissingRepo {
+-		t.Errorf("BuildImage: wrong error returned. Want %#v. Got %#v.", ErrMissingRepo, err)
+-	}
+-}
+-
+-func TestBuildImageMissingOutputStream(t *testing.T) {
+-	fakeRT := &FakeRoundTripper{message: "", status: http.StatusOK}
+-	client := newTestClient(fakeRT)
+-	opts := BuildImageOptions{Name: "testImage"}
+-	err := client.BuildImage(opts)
+-	if err != ErrMissingOutputStream {
+-		t.Errorf("BuildImage: wrong error returned. Want %#v. Got %#v.", ErrMissingOutputStream, err)
+-	}
+-}
+-
+-func TestBuildImageRemoteWithoutName(t *testing.T) {
+-	fakeRT := &FakeRoundTripper{message: "", status: http.StatusOK}
+-	client := newTestClient(fakeRT)
+-	var buf bytes.Buffer
+-	opts := BuildImageOptions{
+-		Remote:         "testing/data/container.tar",
+-		SuppressOutput: true,
+-		OutputStream:   &buf,
+-	}
+-	err := client.BuildImage(opts)
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-	req := fakeRT.requests[0]
+-	expected := map[string][]string{"t": {opts.Remote}, "remote": {opts.Remote}, "q": {"1"}}
+-	got := map[string][]string(req.URL.Query())
+-	if !reflect.DeepEqual(got, expected) {
+-		t.Errorf("BuildImage: wrong query string. Want %#v. Got %#v.", expected, got)
+-	}
+-}
+-
+-func TestTagImageParameters(t *testing.T) {
+-	fakeRT := &FakeRoundTripper{message: "", status: http.StatusOK}
+-	client := newTestClient(fakeRT)
+-	opts := TagImageOptions{Repo: "testImage"}
+-	err := client.TagImage("base", opts)
+-	if err != nil && strings.Index(err.Error(), "tag image fail") == -1 {
+-		t.Fatal(err)
+-	}
+-	req := fakeRT.requests[0]
+-	expected := "http://localhost:4243/images/base/tag?repo=testImage"
+-	got := req.URL.String()
+-	if !reflect.DeepEqual(got, expected) {
+-		t.Errorf("TagImage: wrong query string. Want %#v. Got %#v.", expected, got)
+-	}
+-}
+-
+-func TestTagImageMissingRepo(t *testing.T) {
+-	fakeRT := &FakeRoundTripper{message: "", status: http.StatusOK}
+-	client := newTestClient(fakeRT)
+-	opts := TagImageOptions{Repo: "testImage"}
+-	err := client.TagImage("", opts)
+-	if err != ErrNoSuchImage {
+-		t.Errorf("TestTag: wrong error returned. Want %#v. Got %#v.",
+-			ErrNoSuchImage, err)
+-	}
+-}
+-
+-func TestIsUrl(t *testing.T) {
+-	url := "http://foo.bar/"
+-	result := isURL(url)
+-	if !result {
+-		t.Errorf("isURL: wrong match. Expected %#v to be a url. Got %#v.", url, result)
+-	}
+-	url = "/foo/bar.tar"
+-	result = isURL(url)
+-	if result {
+-		t.Errorf("isURL: wrong match. Expected %#v to not be a url. Got %#v", url, result)
+-	}
+-}
+-
+-func TestLoadImage(t *testing.T) {
+-	fakeRT := &FakeRoundTripper{message: "", status: http.StatusOK}
+-	client := newTestClient(fakeRT)
+-	tar, err := os.Open("testing/data/container.tar")
+-	if err != nil {
+-		t.Fatal(err)
+-	} else {
+-		defer tar.Close()
+-	}
+-	opts := LoadImageOptions{InputStream: tar}
+-	err = client.LoadImage(opts)
+-	if nil != err {
+-		t.Error(err)
+-	}
+-	req := fakeRT.requests[0]
+-	if req.Method != "POST" {
+-		t.Errorf("LoadImage: wrong method. Expected %q. Got %q.", "POST", req.Method)
+-	}
+-	if req.URL.Path != "/images/load" {
+-		t.Errorf("LoadImage: wrong URL. Expected %q. Got %q.", "/images/load", req.URL.Path)
+-	}
+-}
+-
+-func TestExportImage(t *testing.T) {
+-	var buf bytes.Buffer
+-	fakeRT := &FakeRoundTripper{message: "", status: http.StatusOK}
+-	client := newTestClient(fakeRT)
+-	opts := ExportImageOptions{Name: "testimage", OutputStream: &buf}
+-	err := client.ExportImage(opts)
+-	if nil != err {
+-		t.Error(err)
+-	}
+-	req := fakeRT.requests[0]
+-	if req.Method != "GET" {
+-		t.Errorf("ExportImage: wrong method. Expected %q. Got %q.", "GET", req.Method)
+-	}
+-	expectedPath := "/images/testimage/get"
+-	if req.URL.Path != expectedPath {
+-		t.Errorf("ExportImage: wrong path. Expected %q. Got %q.", expectedPath, req.URL.Path)
+-	}
+-}
+diff --git a/Godeps/_workspace/src/github.com/fsouza/go-dockerclient/misc.go b/Godeps/_workspace/src/github.com/fsouza/go-dockerclient/misc.go
+deleted file mode 100644
+index 3f32bf2..0000000
+--- a/Godeps/_workspace/src/github.com/fsouza/go-dockerclient/misc.go
++++ /dev/null
+@@ -1,59 +0,0 @@
+-// Copyright 2014 go-dockerclient authors. All rights reserved.
+-// Use of this source code is governed by a BSD-style
+-// license that can be found in the LICENSE file.
+-
+-package docker
+-
+-import (
+-	"bytes"
+-	"strings"
+-)
+-
+-// Version returns version information about the docker server.
+-//
+-// See http://goo.gl/IqKNRE for more details.
+-func (c *Client) Version() (*Env, error) {
+-	body, _, err := c.do("GET", "/version", nil)
+-	if err != nil {
+-		return nil, err
+-	}
+-	var env Env
+-	if err := env.Decode(bytes.NewReader(body)); err != nil {
+-		return nil, err
+-	}
+-	return &env, nil
+-}
+-
+-// Info returns system-wide information, like the number of running containers.
+-//
+-// See http://goo.gl/LOmySw for more details.
+-func (c *Client) Info() (*Env, error) {
+-	body, _, err := c.do("GET", "/info", nil)
+-	if err != nil {
+-		return nil, err
+-	}
+-	var info Env
+-	err = info.Decode(bytes.NewReader(body))
+-	if err != nil {
+-		return nil, err
+-	}
+-	return &info, nil
+-}
+-
+-// ParseRepositoryTag gets the name of the repository and returns it splitted
+-// in two parts: the repository and the tag.
+-//
+-// Some examples:
+-//
+-//     localhost.localdomain:5000/samalba/hipache:latest -> localhost.localdomain:5000/samalba/hipache, latest
+-//     localhost.localdomain:5000/samalba/hipache -> localhost.localdomain:5000/samalba/hipache, ""
+-func ParseRepositoryTag(repoTag string) (repository string, tag string) {
+-	n := strings.LastIndex(repoTag, ":")
+-	if n < 0 {
+-		return repoTag, ""
+-	}
+-	if tag := repoTag[n+1:]; !strings.Contains(tag, "/") {
+-		return repoTag[:n], tag
+-	}
+-	return repoTag, ""
+-}
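For context on the helper deleted above: `ParseRepositoryTag` splits a repository reference on its *last* colon, but only treats the suffix as a tag when it contains no `/` (otherwise the colon belongs to a `host:port` registry prefix, as in `localhost.localdomain:5000/...`). A minimal standalone sketch of that logic (function name lowercased here since it is illustrative, not the vendored API):

```go
package main

import (
	"fmt"
	"strings"
)

// parseRepositoryTag splits "repo:tag" on the last colon. If the text
// after the colon contains a "/", the colon is part of a registry
// host:port prefix, so the whole string is the repository and the tag
// is empty.
func parseRepositoryTag(repoTag string) (repository, tag string) {
	n := strings.LastIndex(repoTag, ":")
	if n < 0 {
		return repoTag, ""
	}
	if t := repoTag[n+1:]; !strings.Contains(t, "/") {
		return repoTag[:n], t
	}
	return repoTag, ""
}

func main() {
	repo, tag := parseRepositoryTag("localhost.localdomain:5000/samalba/hipache:latest")
	fmt.Println(repo, tag) // localhost.localdomain:5000/samalba/hipache latest
}
```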
+diff --git a/Godeps/_workspace/src/github.com/fsouza/go-dockerclient/misc_test.go b/Godeps/_workspace/src/github.com/fsouza/go-dockerclient/misc_test.go
+deleted file mode 100644
+index ceaf076..0000000
+--- a/Godeps/_workspace/src/github.com/fsouza/go-dockerclient/misc_test.go
++++ /dev/null
+@@ -1,159 +0,0 @@
+-// Copyright 2014 go-dockerclient authors. All rights reserved.
+-// Use of this source code is governed by a BSD-style
+-// license that can be found in the LICENSE file.
+-
+-package docker
+-
+-import (
+-	"net/http"
+-	"net/url"
+-	"reflect"
+-	"sort"
+-	"testing"
+-)
+-
+-type DockerVersion struct {
+-	Version   string
+-	GitCommit string
+-	GoVersion string
+-}
+-
+-func TestVersion(t *testing.T) {
+-	body := `{
+-     "Version":"0.2.2",
+-     "GitCommit":"5a2a5cc+CHANGES",
+-     "GoVersion":"go1.0.3"
+-}`
+-	fakeRT := FakeRoundTripper{message: body, status: http.StatusOK}
+-	client := newTestClient(&fakeRT)
+-	expected := DockerVersion{
+-		Version:   "0.2.2",
+-		GitCommit: "5a2a5cc+CHANGES",
+-		GoVersion: "go1.0.3",
+-	}
+-	version, err := client.Version()
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-
+-	if result := version.Get("Version"); result != expected.Version {
+-		t.Errorf("Version(): Wrong result. Want %#v. Got %#v.", expected.Version, version.Get("Version"))
+-	}
+-	if result := version.Get("GitCommit"); result != expected.GitCommit {
+-		t.Errorf("GitCommit(): Wrong result. Want %#v. Got %#v.", expected.GitCommit, version.Get("GitCommit"))
+-	}
+-	if result := version.Get("GoVersion"); result != expected.GoVersion {
+-		t.Errorf("GoVersion(): Wrong result. Want %#v. Got %#v.", expected.GoVersion, version.Get("GoVersion"))
+-	}
+-	req := fakeRT.requests[0]
+-	if req.Method != "GET" {
+-		t.Errorf("Version(): wrong request method. Want GET. Got %s.", req.Method)
+-	}
+-	u, _ := url.Parse(client.getURL("/version"))
+-	if req.URL.Path != u.Path {
+-		t.Errorf("Version(): wrong request path. Want %q. Got %q.", u.Path, req.URL.Path)
+-	}
+-}
+-
+-func TestVersionError(t *testing.T) {
+-	fakeRT := &FakeRoundTripper{message: "internal error", status: http.StatusInternalServerError}
+-	client := newTestClient(fakeRT)
+-	version, err := client.Version()
+-	if version != nil {
+-		t.Errorf("Version(): expected <nil> value, got %#v.", version)
+-	}
+-	if err == nil {
+-		t.Error("Version(): unexpected <nil> error")
+-	}
+-}
+-
+-func TestInfo(t *testing.T) {
+-	body := `{
+-     "Containers":11,
+-     "Images":16,
+-     "Debug":0,
+-     "NFd":11,
+-     "NGoroutines":21,
+-     "MemoryLimit":1,
+-     "SwapLimit":0
+-}`
+-	fakeRT := FakeRoundTripper{message: body, status: http.StatusOK}
+-	client := newTestClient(&fakeRT)
+-	expected := Env{}
+-	expected.SetInt("Containers", 11)
+-	expected.SetInt("Images", 16)
+-	expected.SetBool("Debug", false)
+-	expected.SetInt("NFd", 11)
+-	expected.SetInt("NGoroutines", 21)
+-	expected.SetBool("MemoryLimit", true)
+-	expected.SetBool("SwapLimit", false)
+-	info, err := client.Info()
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-	infoSlice := []string(*info)
+-	expectedSlice := []string(expected)
+-	sort.Strings(infoSlice)
+-	sort.Strings(expectedSlice)
+-	if !reflect.DeepEqual(expectedSlice, infoSlice) {
+-		t.Errorf("Info(): Wrong result.\nWant %#v.\nGot %#v.", expected, *info)
+-	}
+-	req := fakeRT.requests[0]
+-	if req.Method != "GET" {
+-		t.Errorf("Info(): Wrong HTTP method. Want GET. Got %s.", req.Method)
+-	}
+-	u, _ := url.Parse(client.getURL("/info"))
+-	if req.URL.Path != u.Path {
+-		t.Errorf("Info(): Wrong request path. Want %q. Got %q.", u.Path, req.URL.Path)
+-	}
+-}
+-
+-func TestInfoError(t *testing.T) {
+-	fakeRT := &FakeRoundTripper{message: "internal error", status: http.StatusInternalServerError}
+-	client := newTestClient(fakeRT)
+-	version, err := client.Info()
+-	if version != nil {
+-		t.Errorf("Info(): expected <nil> value, got %#v.", version)
+-	}
+-	if err == nil {
+-		t.Error("Info(): unexpected <nil> error")
+-	}
+-}
+-
+-func TestParseRepositoryTag(t *testing.T) {
+-	var tests = []struct {
+-		input        string
+-		expectedRepo string
+-		expectedTag  string
+-	}{
+-		{
+-			"localhost.localdomain:5000/samalba/hipache:latest",
+-			"localhost.localdomain:5000/samalba/hipache",
+-			"latest",
+-		},
+-		{
+-			"localhost.localdomain:5000/samalba/hipache",
+-			"localhost.localdomain:5000/samalba/hipache",
+-			"",
+-		},
+-		{
+-			"tsuru/python",
+-			"tsuru/python",
+-			"",
+-		},
+-		{
+-			"tsuru/python:2.7",
+-			"tsuru/python",
+-			"2.7",
+-		},
+-	}
+-	for _, tt := range tests {
+-		repo, tag := ParseRepositoryTag(tt.input)
+-		if repo != tt.expectedRepo {
+-			t.Errorf("ParseRepositoryTag(%q): wrong repository. Want %q. Got %q", tt.input, tt.expectedRepo, repo)
+-		}
+-		if tag != tt.expectedTag {
+-			t.Errorf("ParseRepositoryTag(%q): wrong tag. Want %q. Got %q", tt.input, tt.expectedTag, tag)
+-		}
+-	}
+-}
+diff --git a/Godeps/_workspace/src/github.com/fsouza/go-dockerclient/signal.go b/Godeps/_workspace/src/github.com/fsouza/go-dockerclient/signal.go
+deleted file mode 100644
+index 16aa003..0000000
+--- a/Godeps/_workspace/src/github.com/fsouza/go-dockerclient/signal.go
++++ /dev/null
+@@ -1,49 +0,0 @@
+-// Copyright 2014 go-dockerclient authors. All rights reserved.
+-// Use of this source code is governed by a BSD-style
+-// license that can be found in the LICENSE file.
+-
+-package docker
+-
+-// Signal represents a signal that can be send to the container on
+-// KillContainer call.
+-type Signal int
+-
+-// These values represent all signals available on Linux, where containers will
+-// be running.
+-const (
+-	SIGABRT   = Signal(0x6)
+-	SIGALRM   = Signal(0xe)
+-	SIGBUS    = Signal(0x7)
+-	SIGCHLD   = Signal(0x11)
+-	SIGCLD    = Signal(0x11)
+-	SIGCONT   = Signal(0x12)
+-	SIGFPE    = Signal(0x8)
+-	SIGHUP    = Signal(0x1)
+-	SIGILL    = Signal(0x4)
+-	SIGINT    = Signal(0x2)
+-	SIGIO     = Signal(0x1d)
+-	SIGIOT    = Signal(0x6)
+-	SIGKILL   = Signal(0x9)
+-	SIGPIPE   = Signal(0xd)
+-	SIGPOLL   = Signal(0x1d)
+-	SIGPROF   = Signal(0x1b)
+-	SIGPWR    = Signal(0x1e)
+-	SIGQUIT   = Signal(0x3)
+-	SIGSEGV   = Signal(0xb)
+-	SIGSTKFLT = Signal(0x10)
+-	SIGSTOP   = Signal(0x13)
+-	SIGSYS    = Signal(0x1f)
+-	SIGTERM   = Signal(0xf)
+-	SIGTRAP   = Signal(0x5)
+-	SIGTSTP   = Signal(0x14)
+-	SIGTTIN   = Signal(0x15)
+-	SIGTTOU   = Signal(0x16)
+-	SIGUNUSED = Signal(0x1f)
+-	SIGURG    = Signal(0x17)
+-	SIGUSR1   = Signal(0xa)
+-	SIGUSR2   = Signal(0xc)
+-	SIGVTALRM = Signal(0x1a)
+-	SIGWINCH  = Signal(0x1c)
+-	SIGXCPU   = Signal(0x18)
+-	SIGXFSZ   = Signal(0x19)
+-)
+diff --git a/Godeps/_workspace/src/github.com/fsouza/go-dockerclient/stdcopy.go b/Godeps/_workspace/src/github.com/fsouza/go-dockerclient/stdcopy.go
+deleted file mode 100644
+index 3782f3d..0000000
+--- a/Godeps/_workspace/src/github.com/fsouza/go-dockerclient/stdcopy.go
++++ /dev/null
+@@ -1,91 +0,0 @@
+-// Copyright 2014 Docker authors. All rights reserved.
+-// Use of this source code is governed by a BSD-style
+-// license that can be found in the DOCKER-LICENSE file.
+-
+-package docker
+-
+-import (
+-	"encoding/binary"
+-	"errors"
+-	"io"
+-)
+-
+-const (
+-	stdWriterPrefixLen = 8
+-	stdWriterFdIndex   = 0
+-	stdWriterSizeIndex = 4
+-)
+-
+-var errInvalidStdHeader = errors.New("Unrecognized input header")
+-
+-func stdCopy(dstout, dsterr io.Writer, src io.Reader) (written int64, err error) {
+-	var (
+-		buf       = make([]byte, 32*1024+stdWriterPrefixLen+1)
+-		bufLen    = len(buf)
+-		nr, nw    int
+-		er, ew    error
+-		out       io.Writer
+-		frameSize int
+-	)
+-	for {
+-		for nr < stdWriterPrefixLen {
+-			var nr2 int
+-			nr2, er = src.Read(buf[nr:])
+-			if er == io.EOF {
+-				if nr < stdWriterPrefixLen && nr2 < stdWriterPrefixLen {
+-					return written, nil
+-				}
+-				nr += nr2
+-				break
+-			} else if er != nil {
+-				return 0, er
+-			}
+-			nr += nr2
+-		}
+-		switch buf[stdWriterFdIndex] {
+-		case 0:
+-			fallthrough
+-		case 1:
+-			out = dstout
+-		case 2:
+-			out = dsterr
+-		default:
+-			return 0, errInvalidStdHeader
+-		}
+-		frameSize = int(binary.BigEndian.Uint32(buf[stdWriterSizeIndex : stdWriterSizeIndex+4]))
+-		if frameSize+stdWriterPrefixLen > bufLen {
+-			buf = append(buf, make([]byte, frameSize+stdWriterPrefixLen-len(buf)+1)...)
+-			bufLen = len(buf)
+-		}
+-		for nr < frameSize+stdWriterPrefixLen {
+-			var nr2 int
+-			nr2, er = src.Read(buf[nr:])
+-			if er == io.EOF {
+-				if nr == 0 {
+-					return written, nil
+-				}
+-				nr += nr2
+-				break
+-			} else if er != nil {
+-				return 0, er
+-			}
+-			nr += nr2
+-		}
+-		bound := frameSize + stdWriterPrefixLen
+-		if bound > nr {
+-			bound = nr
+-		}
+-		nw, ew = out.Write(buf[stdWriterPrefixLen:bound])
+-		if nw > 0 {
+-			written += int64(nw)
+-		}
+-		if ew != nil {
+-			return 0, ew
+-		}
+-		if nw != frameSize {
+-			return written, io.ErrShortWrite
+-		}
+-		copy(buf, buf[frameSize+stdWriterPrefixLen:])
+-		nr -= frameSize + stdWriterPrefixLen
+-	}
+-}
+diff --git a/Godeps/_workspace/src/github.com/fsouza/go-dockerclient/stdcopy_test.go b/Godeps/_workspace/src/github.com/fsouza/go-dockerclient/stdcopy_test.go
+deleted file mode 100644
+index 75b8922..0000000
+--- a/Godeps/_workspace/src/github.com/fsouza/go-dockerclient/stdcopy_test.go
++++ /dev/null
+@@ -1,255 +0,0 @@
+-// Copyright 2014 go-dockerclient authors. All rights reserved.
+-// Use of this source code is governed by a BSD-style
+-// license that can be found in the DOCKER-LICENSE file.
+-
+-package docker
+-
+-import (
+-	"bytes"
+-	"encoding/binary"
+-	"errors"
+-	"io"
+-	"strings"
+-	"testing"
+-	"testing/iotest"
+-)
+-
+-type errorWriter struct {
+-}
+-
+-func (errorWriter) Write([]byte) (int, error) {
+-	return 0, errors.New("something went wrong")
+-}
+-
+-func TestStdCopy(t *testing.T) {
+-	var input, stdout, stderr bytes.Buffer
+-	input.Write([]byte{2, 0, 0, 0, 0, 0, 0, 19})
+-	input.Write([]byte("something happened!"))
+-	input.Write([]byte{1, 0, 0, 0, 0, 0, 0, 12})
+-	input.Write([]byte("just kidding"))
+-	input.Write([]byte{0, 0, 0, 0, 0, 0, 0, 6})
+-	input.Write([]byte("\nyeah!"))
+-	n, err := stdCopy(&stdout, &stderr, &input)
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-	if expected := int64(19 + 12 + 6); n != expected {
+-		t.Errorf("Wrong number of bytes. Want %d. Got %d.", expected, n)
+-	}
+-	if got := stderr.String(); got != "something happened!" {
+-		t.Errorf("stdCopy: wrong stderr. Want %q. Got %q.", "something happened!", got)
+-	}
+-	if got := stdout.String(); got != "just kidding\nyeah!" {
+-		t.Errorf("stdCopy: wrong stdout. Want %q. Got %q.", "just kidding\nyeah!", got)
+-	}
+-}
+-
+-func TestStdCopyStress(t *testing.T) {
+-	var input, stdout, stderr bytes.Buffer
+-	value := strings.Repeat("something ", 4096)
+-	writer := newStdWriter(&input, Stdout)
+-	writer.Write([]byte(value))
+-	n, err := stdCopy(&stdout, &stderr, &input)
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-	if n != 40960 {
+-		t.Errorf("Wrong number of bytes. Want 40960. Got %d.", n)
+-	}
+-	if got := stderr.String(); got != "" {
+-		t.Errorf("stdCopy: wrong stderr. Want empty string. Got %q", got)
+-	}
+-	if got := stdout.String(); got != value {
+-		t.Errorf("stdCopy: wrong stdout. Want %q. Got %q", value, got)
+-	}
+-}
+-
+-func TestStdCopyInvalidStdHeader(t *testing.T) {
+-	var input, stdout, stderr bytes.Buffer
+-	input.Write([]byte{3, 0, 0, 0, 0, 0, 0, 19})
+-	n, err := stdCopy(&stdout, &stderr, &input)
+-	if n != 0 {
+-		t.Errorf("stdCopy: wrong number of bytes. Want 0. Got %d", n)
+-	}
+-	if err != errInvalidStdHeader {
+-		t.Errorf("stdCopy: wrong error. Want ErrInvalidStdHeader. Got %#v", err)
+-	}
+-}
+-
+-func TestStdCopyBigFrame(t *testing.T) {
+-	var input, stdout, stderr bytes.Buffer
+-	input.Write([]byte{2, 0, 0, 0, 0, 0, 0, 18})
+-	input.Write([]byte("something happened!"))
+-	n, err := stdCopy(&stdout, &stderr, &input)
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-	if expected := int64(18); n != expected {
+-		t.Errorf("Wrong number of bytes. Want %d. Got %d.", expected, n)
+-	}
+-	if got := stderr.String(); got != "something happened" {
+-		t.Errorf("stdCopy: wrong stderr. Want %q. Got %q.", "something happened", got)
+-	}
+-	if got := stdout.String(); got != "" {
+-		t.Errorf("stdCopy: wrong stdout. Want %q. Got %q.", "", got)
+-	}
+-}
+-
+-func TestStdCopySmallFrame(t *testing.T) {
+-	var input, stdout, stderr bytes.Buffer
+-	input.Write([]byte{2, 0, 0, 0, 0, 0, 0, 20})
+-	input.Write([]byte("something happened!"))
+-	n, err := stdCopy(&stdout, &stderr, &input)
+-	if err != io.ErrShortWrite {
+-		t.Errorf("stdCopy: wrong error. Want ShortWrite. Got %#v", err)
+-	}
+-	if expected := int64(19); n != expected {
+-		t.Errorf("Wrong number of bytes. Want %d. Got %d.", expected, n)
+-	}
+-	if got := stderr.String(); got != "something happened!" {
+-		t.Errorf("stdCopy: wrong stderr. Want %q. Got %q.", "something happened", got)
+-	}
+-	if got := stdout.String(); got != "" {
+-		t.Errorf("stdCopy: wrong stdout. Want %q. Got %q.", "", got)
+-	}
+-}
+-
+-func TestStdCopyEmpty(t *testing.T) {
+-	var input, stdout, stderr bytes.Buffer
+-	n, err := stdCopy(&stdout, &stderr, &input)
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-	if n != 0 {
+-		t.Errorf("stdCopy: wrong number of bytes. Want 0. Got %d.", n)
+-	}
+-}
+-
+-func TestStdCopyCorruptedHeader(t *testing.T) {
+-	var input, stdout, stderr bytes.Buffer
+-	input.Write([]byte{2, 0, 0, 0, 0})
+-	n, err := stdCopy(&stdout, &stderr, &input)
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-	if n != 0 {
+-		t.Errorf("stdCopy: wrong number of bytes. Want 0. Got %d.", n)
+-	}
+-}
+-
+-func TestStdCopyTruncateWriter(t *testing.T) {
+-	var input, stdout, stderr bytes.Buffer
+-	input.Write([]byte{2, 0, 0, 0, 0, 0, 0, 19})
+-	input.Write([]byte("something happened!"))
+-	n, err := stdCopy(&stdout, iotest.TruncateWriter(&stderr, 7), &input)
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-	if expected := int64(19); n != expected {
+-		t.Errorf("Wrong number of bytes. Want %d. Got %d.", expected, n)
+-	}
+-	if got := stderr.String(); got != "somethi" {
+-		t.Errorf("stdCopy: wrong stderr. Want %q. Got %q.", "somethi", got)
+-	}
+-	if got := stdout.String(); got != "" {
+-		t.Errorf("stdCopy: wrong stdout. Want %q. Got %q.", "", got)
+-	}
+-}
+-
+-func TestStdCopyHeaderOnly(t *testing.T) {
+-	var input, stdout, stderr bytes.Buffer
+-	input.Write([]byte{2, 0, 0, 0, 0, 0, 0, 19})
+-	n, err := stdCopy(&stdout, iotest.TruncateWriter(&stderr, 7), &input)
+-	if err != io.ErrShortWrite {
+-		t.Errorf("stdCopy: wrong error. Want ShortWrite. Got %#v", err)
+-	}
+-	if n != 0 {
+-		t.Errorf("Wrong number of bytes. Want 0. Got %d.", n)
+-	}
+-	if got := stderr.String(); got != "" {
+-		t.Errorf("stdCopy: wrong stderr. Want %q. Got %q.", "", got)
+-	}
+-	if got := stdout.String(); got != "" {
+-		t.Errorf("stdCopy: wrong stdout. Want %q. Got %q.", "", got)
+-	}
+-}
+-
+-func TestStdCopyDataErrReader(t *testing.T) {
+-	var input, stdout, stderr bytes.Buffer
+-	input.Write([]byte{2, 0, 0, 0, 0, 0, 0, 19})
+-	input.Write([]byte("something happened!"))
+-	n, err := stdCopy(&stdout, &stderr, iotest.DataErrReader(&input))
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-	if expected := int64(19); n != expected {
+-		t.Errorf("Wrong number of bytes. Want %d. Got %d.", expected, n)
+-	}
+-	if got := stderr.String(); got != "something happened!" {
+-		t.Errorf("stdCopy: wrong stderr. Want %q. Got %q.", "something happened!", got)
+-	}
+-	if got := stdout.String(); got != "" {
+-		t.Errorf("stdCopy: wrong stdout. Want %q. Got %q.", "", got)
+-	}
+-}
+-
+-func TestStdCopyTimeoutReader(t *testing.T) {
+-	var input, stdout, stderr bytes.Buffer
+-	input.Write([]byte{2, 0, 0, 0, 0, 0, 0, 19})
+-	input.Write([]byte("something happened!"))
+-	_, err := stdCopy(&stdout, &stderr, iotest.TimeoutReader(&input))
+-	if err != iotest.ErrTimeout {
+-		t.Errorf("stdCopy: wrong error. Want ErrTimeout. Got %#v.", err)
+-	}
+-}
+-
+-func TestStdCopyWriteError(t *testing.T) {
+-	var input bytes.Buffer
+-	input.Write([]byte{2, 0, 0, 0, 0, 0, 0, 19})
+-	input.Write([]byte("something happened!"))
+-	var stdout, stderr errorWriter
+-	n, err := stdCopy(stdout, stderr, &input)
+-	if err.Error() != "something went wrong" {
+-		t.Errorf("stdCopy: wrong error. Want %q. Got %q", "something went wrong", err)
+-	}
+-	if n != 0 {
+-		t.Errorf("stdCopy: wrong number of bytes. Want 0. Got %d.", n)
+-	}
+-}
+-
+-type StdType [8]byte
+-
+-var (
+-	Stdin  = StdType{0: 0}
+-	Stdout = StdType{0: 1}
+-	Stderr = StdType{0: 2}
+-)
+-
+-type StdWriter struct {
+-	io.Writer
+-	prefix  StdType
+-	sizeBuf []byte
+-}
+-
+-func (w *StdWriter) Write(buf []byte) (n int, err error) {
+-	if w == nil || w.Writer == nil {
+-		return 0, errors.New("Writer not instanciated")
+-	}
+-	binary.BigEndian.PutUint32(w.prefix[4:], uint32(len(buf)))
+-	buf = append(w.prefix[:], buf...)
+-
+-	n, err = w.Writer.Write(buf)
+-	return n - 8, err
+-}
+-
+-func newStdWriter(w io.Writer, t StdType) *StdWriter {
+-	if len(t) != 8 {
+-		return nil
+-	}
+-
+-	return &StdWriter{
+-		Writer:  w,
+-		prefix:  t,
+-		sizeBuf: make([]byte, 4),
+-	}
+-}
+diff --git a/Godeps/_workspace/src/github.com/fsouza/go-dockerclient/testing/data/Dockerfile b/Godeps/_workspace/src/github.com/fsouza/go-dockerclient/testing/data/Dockerfile
+deleted file mode 100644
+index 0948dcf..0000000
+--- a/Godeps/_workspace/src/github.com/fsouza/go-dockerclient/testing/data/Dockerfile
++++ /dev/null
+@@ -1,15 +0,0 @@
+-# this file describes how to build tsuru python image
+-# to run it:
+-# 1- install docker
+-# 2- run: $ docker build -t tsuru/python https://raw.github.com/tsuru/basebuilder/master/python/Dockerfile
+-
+-from	base:ubuntu-quantal
+-run	apt-get install wget -y --force-yes
+-run	wget http://github.com/tsuru/basebuilder/tarball/master -O basebuilder.tar.gz --no-check-certificate
+-run	mkdir /var/lib/tsuru
+-run	tar -xvf basebuilder.tar.gz -C /var/lib/tsuru --strip 1
+-run	cp /var/lib/tsuru/python/deploy /var/lib/tsuru
+-run	cp /var/lib/tsuru/base/restart /var/lib/tsuru
+-run	cp /var/lib/tsuru/base/start /var/lib/tsuru
+-run	/var/lib/tsuru/base/install
+-run	/var/lib/tsuru/base/setup
+diff --git a/Godeps/_workspace/src/github.com/fsouza/go-dockerclient/testing/data/container.tar b/Godeps/_workspace/src/github.com/fsouza/go-dockerclient/testing/data/container.tar
+deleted file mode 100644
+index e4b066e3b6df8cb78ac445a34234f3780d164cf4..0000000000000000000000000000000000000000
+GIT binary patch
+literal 0
+HcmV?d00001
+
+literal 2048
+zcmeH_Q3``F42FH)DgF~kTC`qZ7s*`9%A^%r$Bu89Fp<6NMew1akmheFe?H>)Y5N#5
+z`(UT)m>?q4G^iwZ#(XmAwH8Ujv`|_rQd)Ig3sQ!(szArs+5bAH%#&Di1HU}iJx_zp
+z+3uU9k~Zgl)J<3?S%)LS_Hgc7e)t4AX&%Rz>>WAcX2Ec>82D}md=O1Y)p%bo=N_rJ
+OD+CIGLZA@%gTMmt=q{T8
+
+diff --git a/Godeps/_workspace/src/github.com/fsouza/go-dockerclient/testing/data/dockerfile.tar b/Godeps/_workspace/src/github.com/fsouza/go-dockerclient/testing/data/dockerfile.tar
+deleted file mode 100644
+index 32c9ce64704835cd096b85ac44c35b5087b5ccdd..0000000000000000000000000000000000000000
+GIT binary patch
+literal 0
+HcmV?d00001
+
+literal 2560
+zcmeHGy>8<$49;3V1%d0TNOs}`$a>xT46-c8LTt+?QB8ACf0XPiQll-<p+kUZFvXvb
+z{76$zR-LqKOs7{rc7zbS?G{!f_q$z^qL_3tiM%LE$cs&}-<R8RFF at p*a#OBA{1~IF
+z#KEI<M2)`Q_$$ZaN?}d2uwARM6CtMNqP&sw3$QgF;sQXey>h0~9$I?_v`_`p)qp;@
+z0OJK)JAmosQD=m*-~y?5ASGvD1{zS;L7n!AYz2z}2Y8%Kb25fgK0fDb5l4UE+{yF$
+zXs`{{TG^hbn!J);Cl1>2UV0=k!T8hL+GbhfZ2u5L51|SJ2KFb&fyiW3|3Qw(jvC+i
+zouk4oz*u9Q((Iyric9uLhPZsmgZ8ANMrS_2p5cn+n!M}dU&=mMrdq8|OlgOvF-oFN
+zh5A!%9Pk(EcxS4q(c~Z~u-BL7!+gIN2&&-GnGy1YRpY|{e@?X?J9}9;KY_$PxYO}H
+o;5QJT#=q||{Y*ZuNn-Gk-)jtGb|Y`+PV+v2`vmS2xaA4_1I+dVl>h($
+
+diff --git a/Godeps/_workspace/src/github.com/fsouza/go-dockerclient/testing/server.go b/Godeps/_workspace/src/github.com/fsouza/go-dockerclient/testing/server.go
+deleted file mode 100644
+index 42c20e4..0000000
+--- a/Godeps/_workspace/src/github.com/fsouza/go-dockerclient/testing/server.go
++++ /dev/null
+@@ -1,668 +0,0 @@
+-// Copyright 2014 go-dockerclient authors. All rights reserved.
+-// Use of this source code is governed by a BSD-style
+-// license that can be found in the LICENSE file.
+-
+-// Package testing provides a fake implementation of the Docker API, useful for
+-// testing purpose.
+-package testing
+-
+-import (
+-	"archive/tar"
+-	"crypto/rand"
+-	"encoding/json"
+-	"errors"
+-	"fmt"
+-	mathrand "math/rand"
+-	"net"
+-	"net/http"
+-	"regexp"
+-	"strconv"
+-	"strings"
+-	"sync"
+-	"time"
+-
+-	"github.com/fsouza/go-dockerclient"
+-	"github.com/gorilla/mux"
+-)
+-
+-// DockerServer represents a programmable, concurrent (not much), HTTP server
+-// implementing a fake version of the Docker remote API.
+-//
+-// It can used in standalone mode, listening for connections or as an arbitrary
+-// HTTP handler.
+-//
+-// For more details on the remote API, check http://goo.gl/yMI1S.
+-type DockerServer struct {
+-	containers     []*docker.Container
+-	cMut           sync.RWMutex
+-	images         []docker.Image
+-	iMut           sync.RWMutex
+-	imgIDs         map[string]string
+-	listener       net.Listener
+-	mux            *mux.Router
+-	hook           func(*http.Request)
+-	failures       map[string]string
+-	customHandlers map[string]http.Handler
+-	handlerMutex   sync.RWMutex
+-	cChan          chan<- *docker.Container
+-}
+-
+-// NewServer returns a new instance of the fake server, in standalone mode. Use
+-// the method URL to get the URL of the server.
+-//
+-// It receives the bind address (use 127.0.0.1:0 for getting an available port
+-// on the host), a channel of containers and a hook function, that will be
+-// called on every request.
+-//
+-// The fake server will send containers in the channel whenever the container
+-// changes its state, via the HTTP API (i.e.: create, start and stop). This
+-// channel may be nil, which means that the server won't notify on state
+-// changes.
+-func NewServer(bind string, containerChan chan<- *docker.Container, hook func(*http.Request)) (*DockerServer, error) {
+-	listener, err := net.Listen("tcp", bind)
+-	if err != nil {
+-		return nil, err
+-	}
+-	server := DockerServer{
+-		listener:       listener,
+-		imgIDs:         make(map[string]string),
+-		hook:           hook,
+-		failures:       make(map[string]string),
+-		customHandlers: make(map[string]http.Handler),
+-		cChan:          containerChan,
+-	}
+-	server.buildMuxer()
+-	go http.Serve(listener, &server)
+-	return &server, nil
+-}
+-
+-func (s *DockerServer) notify(container *docker.Container) {
+-	if s.cChan != nil {
+-		s.cChan <- container
+-	}
+-}
+-
+-func (s *DockerServer) buildMuxer() {
+-	s.mux = mux.NewRouter()
+-	s.mux.Path("/commit").Methods("POST").HandlerFunc(s.handlerWrapper(s.commitContainer))
+-	s.mux.Path("/containers/json").Methods("GET").HandlerFunc(s.handlerWrapper(s.listContainers))
+-	s.mux.Path("/containers/create").Methods("POST").HandlerFunc(s.handlerWrapper(s.createContainer))
+-	s.mux.Path("/containers/{id:.*}/json").Methods("GET").HandlerFunc(s.handlerWrapper(s.inspectContainer))
+-	s.mux.Path("/containers/{id:.*}/start").Methods("POST").HandlerFunc(s.handlerWrapper(s.startContainer))
+-	s.mux.Path("/containers/{id:.*}/stop").Methods("POST").HandlerFunc(s.handlerWrapper(s.stopContainer))
+-	s.mux.Path("/containers/{id:.*}/pause").Methods("POST").HandlerFunc(s.handlerWrapper(s.pauseContainer))
+-	s.mux.Path("/containers/{id:.*}/unpause").Methods("POST").HandlerFunc(s.handlerWrapper(s.unpauseContainer))
+-	s.mux.Path("/containers/{id:.*}/wait").Methods("POST").HandlerFunc(s.handlerWrapper(s.waitContainer))
+-	s.mux.Path("/containers/{id:.*}/attach").Methods("POST").HandlerFunc(s.handlerWrapper(s.attachContainer))
+-	s.mux.Path("/containers/{id:.*}").Methods("DELETE").HandlerFunc(s.handlerWrapper(s.removeContainer))
+-	s.mux.Path("/images/create").Methods("POST").HandlerFunc(s.handlerWrapper(s.pullImage))
+-	s.mux.Path("/build").Methods("POST").HandlerFunc(s.handlerWrapper(s.buildImage))
+-	s.mux.Path("/images/json").Methods("GET").HandlerFunc(s.handlerWrapper(s.listImages))
+-	s.mux.Path("/images/{id:.*}").Methods("DELETE").HandlerFunc(s.handlerWrapper(s.removeImage))
+-	s.mux.Path("/images/{name:.*}/json").Methods("GET").HandlerFunc(s.handlerWrapper(s.inspectImage))
+-	s.mux.Path("/images/{name:.*}/push").Methods("POST").HandlerFunc(s.handlerWrapper(s.pushImage))
+-	s.mux.Path("/events").Methods("GET").HandlerFunc(s.listEvents)
+-	s.mux.Path("/_ping").Methods("GET").HandlerFunc(s.handlerWrapper(s.pingDocker))
+-	s.mux.Path("/images/load").Methods("POST").HandlerFunc(s.handlerWrapper(s.loadImage))
+-	s.mux.Path("/images/{id:.*}/get").Methods("GET").HandlerFunc(s.handlerWrapper(s.getImage))
+-}
+-
+-// PrepareFailure adds a new expected failure based on a URL regexp it receives
+-// an id for the failure.
+-func (s *DockerServer) PrepareFailure(id string, urlRegexp string) {
+-	s.failures[id] = urlRegexp
+-}
+-
+-// ResetFailure removes an expected failure identified by the given id.
+-func (s *DockerServer) ResetFailure(id string) {
+-	delete(s.failures, id)
+-}
+-
+-// CustomHandler registers a custom handler for a specific path.
+-//
+-// For example:
+-//
+-//     server.CustomHandler("/containers/json", http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
+-//         http.Error(w, "Something wrong is not right", http.StatusInternalServerError)
+-//     }))
+-func (s *DockerServer) CustomHandler(path string, handler http.Handler) {
+-	s.handlerMutex.Lock()
+-	s.customHandlers[path] = handler
+-	s.handlerMutex.Unlock()
+-}
+-
+-// MutateContainer changes the state of a container, returning an error if the
+-// given id does not match to any container "running" in the server.
+-func (s *DockerServer) MutateContainer(id string, state docker.State) error {
+-	for _, container := range s.containers {
+-		if container.ID == id {
+-			container.State = state
+-			return nil
+-		}
+-	}
+-	return errors.New("container not found")
+-}
+-
+-// Stop stops the server.
+-func (s *DockerServer) Stop() {
+-	if s.listener != nil {
+-		s.listener.Close()
+-	}
+-}
+-
+-// URL returns the HTTP URL of the server.
+-func (s *DockerServer) URL() string {
+-	if s.listener == nil {
+-		return ""
+-	}
+-	return "http://" + s.listener.Addr().String() + "/"
+-}
+-
+-// ServeHTTP handles HTTP requests sent to the server.
+-func (s *DockerServer) ServeHTTP(w http.ResponseWriter, r *http.Request) {
+-	s.handlerMutex.RLock()
+-	defer s.handlerMutex.RUnlock()
+-	if handler, ok := s.customHandlers[r.URL.Path]; ok {
+-		handler.ServeHTTP(w, r)
+-		return
+-	}
+-	s.mux.ServeHTTP(w, r)
+-	if s.hook != nil {
+-		s.hook(r)
+-	}
+-}
+-
+-// Returns default http.Handler mux, it allows customHandlers to call the
+-// default behavior if wanted.
+-func (s *DockerServer) DefaultHandler() http.Handler {
+-	return s.mux
+-}
+-
+-func (s *DockerServer) handlerWrapper(f func(http.ResponseWriter, *http.Request)) func(http.ResponseWriter, *http.Request) {
+-	return func(w http.ResponseWriter, r *http.Request) {
+-		for errorID, urlRegexp := range s.failures {
+-			matched, err := regexp.MatchString(urlRegexp, r.URL.Path)
+-			if err != nil {
+-				http.Error(w, err.Error(), http.StatusBadRequest)
+-				return
+-			}
+-			if !matched {
+-				continue
+-			}
+-			http.Error(w, errorID, http.StatusBadRequest)
+-			return
+-		}
+-		f(w, r)
+-	}
+-}
+-
+-func (s *DockerServer) listContainers(w http.ResponseWriter, r *http.Request) {
+-	all := r.URL.Query().Get("all")
+-	s.cMut.RLock()
+-	result := make([]docker.APIContainers, len(s.containers))
+-	for i, container := range s.containers {
+-		if all == "1" || container.State.Running {
+-			result[i] = docker.APIContainers{
+-				ID:      container.ID,
+-				Image:   container.Image,
+-				Command: fmt.Sprintf("%s %s", container.Path, strings.Join(container.Args, " ")),
+-				Created: container.Created.Unix(),
+-				Status:  container.State.String(),
+-				Ports:   container.NetworkSettings.PortMappingAPI(),
+-			}
+-		}
+-	}
+-	s.cMut.RUnlock()
+-	w.Header().Set("Content-Type", "application/json")
+-	w.WriteHeader(http.StatusOK)
+-	json.NewEncoder(w).Encode(result)
+-}
+-
+-func (s *DockerServer) listImages(w http.ResponseWriter, r *http.Request) {
+-	s.cMut.RLock()
+-	result := make([]docker.APIImages, len(s.images))
+-	for i, image := range s.images {
+-		result[i] = docker.APIImages{
+-			ID:      image.ID,
+-			Created: image.Created.Unix(),
+-		}
+-		for tag, id := range s.imgIDs {
+-			if id == image.ID {
+-				result[i].RepoTags = append(result[i].RepoTags, tag)
+-			}
+-		}
+-	}
+-	s.cMut.RUnlock()
+-	w.Header().Set("Content-Type", "application/json")
+-	w.WriteHeader(http.StatusOK)
+-	json.NewEncoder(w).Encode(result)
+-}
+-
+-func (s *DockerServer) findImage(id string) (string, error) {
+-	s.iMut.RLock()
+-	defer s.iMut.RUnlock()
+-	image, ok := s.imgIDs[id]
+-	if ok {
+-		return image, nil
+-	}
+-	image, _, err := s.findImageByID(id)
+-	return image, err
+-}
+-
+-func (s *DockerServer) findImageByID(id string) (string, int, error) {
+-	s.iMut.RLock()
+-	defer s.iMut.RUnlock()
+-	for i, image := range s.images {
+-		if image.ID == id {
+-			return image.ID, i, nil
+-		}
+-	}
+-	return "", -1, errors.New("No such image")
+-}
+-
+-func (s *DockerServer) createContainer(w http.ResponseWriter, r *http.Request) {
+-	var config docker.Config
+-	defer r.Body.Close()
+-	err := json.NewDecoder(r.Body).Decode(&config)
+-	if err != nil {
+-		http.Error(w, err.Error(), http.StatusBadRequest)
+-		return
+-	}
+-	image, err := s.findImage(config.Image)
+-	if err != nil {
+-		http.Error(w, err.Error(), http.StatusNotFound)
+-		return
+-	}
+-	w.WriteHeader(http.StatusCreated)
+-	ports := map[docker.Port][]docker.PortBinding{}
+-	for port := range config.ExposedPorts {
+-		ports[port] = []docker.PortBinding{{
+-			HostIp:   "0.0.0.0",
+-			HostPort: strconv.Itoa(mathrand.Int() % 65536),
+-		}}
+-	}
+-
+-	//the container may not have cmd when using a Dockerfile
+-	var path string
+-	var args []string
+-	if len(config.Cmd) == 1 {
+-		path = config.Cmd[0]
+-	} else if len(config.Cmd) > 1 {
+-		path = config.Cmd[0]
+-		args = config.Cmd[1:]
+-	}
+-
+-	container := docker.Container{
+-		ID:      s.generateID(),
+-		Created: time.Now(),
+-		Path:    path,
+-		Args:    args,
+-		Config:  &config,
+-		State: docker.State{
+-			Running:   false,
+-			Pid:       mathrand.Int() % 50000,
+-			ExitCode:  0,
+-			StartedAt: time.Now(),
+-		},
+-		Image: image,
+-		NetworkSettings: &docker.NetworkSettings{
+-			IPAddress:   fmt.Sprintf("172.16.42.%d", mathrand.Int()%250+2),
+-			IPPrefixLen: 24,
+-			Gateway:     "172.16.42.1",
+-			Bridge:      "docker0",
+-			Ports:       ports,
+-		},
+-	}
+-	s.cMut.Lock()
+-	s.containers = append(s.containers, &container)
+-	s.cMut.Unlock()
+-	s.notify(&container)
+-	c := struct{ ID string }{ID: container.ID}
+-	json.NewEncoder(w).Encode(c)
+-}
+-
+-func (s *DockerServer) generateID() string {
+-	var buf [16]byte
+-	rand.Read(buf[:])
+-	return fmt.Sprintf("%x", buf)
+-}
+-
+-func (s *DockerServer) inspectContainer(w http.ResponseWriter, r *http.Request) {
+-	id := mux.Vars(r)["id"]
+-	container, _, err := s.findContainer(id)
+-	if err != nil {
+-		http.Error(w, err.Error(), http.StatusNotFound)
+-		return
+-	}
+-	w.Header().Set("Content-Type", "application/json")
+-	w.WriteHeader(http.StatusOK)
+-	json.NewEncoder(w).Encode(container)
+-}
+-
+-func (s *DockerServer) startContainer(w http.ResponseWriter, r *http.Request) {
+-	id := mux.Vars(r)["id"]
+-	container, _, err := s.findContainer(id)
+-	if err != nil {
+-		http.Error(w, err.Error(), http.StatusNotFound)
+-		return
+-	}
+-	s.cMut.Lock()
+-	defer s.cMut.Unlock()
+-	if container.State.Running {
+-		http.Error(w, "Container already running", http.StatusBadRequest)
+-		return
+-	}
+-	container.State.Running = true
+-	s.notify(container)
+-}
+-
+-func (s *DockerServer) stopContainer(w http.ResponseWriter, r *http.Request) {
+-	id := mux.Vars(r)["id"]
+-	container, _, err := s.findContainer(id)
+-	if err != nil {
+-		http.Error(w, err.Error(), http.StatusNotFound)
+-		return
+-	}
+-	s.cMut.Lock()
+-	defer s.cMut.Unlock()
+-	if !container.State.Running {
+-		http.Error(w, "Container not running", http.StatusBadRequest)
+-		return
+-	}
+-	w.WriteHeader(http.StatusNoContent)
+-	container.State.Running = false
+-	s.notify(container)
+-}
+-
+-func (s *DockerServer) pauseContainer(w http.ResponseWriter, r *http.Request) {
+-	id := mux.Vars(r)["id"]
+-	container, _, err := s.findContainer(id)
+-	if err != nil {
+-		http.Error(w, err.Error(), http.StatusNotFound)
+-		return
+-	}
+-	s.cMut.Lock()
+-	defer s.cMut.Unlock()
+-	if container.State.Paused {
+-		http.Error(w, "Container already paused", http.StatusBadRequest)
+-		return
+-	}
+-	w.WriteHeader(http.StatusNoContent)
+-	container.State.Paused = true
+-}
+-
+-func (s *DockerServer) unpauseContainer(w http.ResponseWriter, r *http.Request) {
+-	id := mux.Vars(r)["id"]
+-	container, _, err := s.findContainer(id)
+-	if err != nil {
+-		http.Error(w, err.Error(), http.StatusNotFound)
+-		return
+-	}
+-	s.cMut.Lock()
+-	defer s.cMut.Unlock()
+-	if !container.State.Paused {
+-		http.Error(w, "Container not paused", http.StatusBadRequest)
+-		return
+-	}
+-	w.WriteHeader(http.StatusNoContent)
+-	container.State.Paused = false
+-}
+-
+-func (s *DockerServer) attachContainer(w http.ResponseWriter, r *http.Request) {
+-	id := mux.Vars(r)["id"]
+-	container, _, err := s.findContainer(id)
+-	if err != nil {
+-		http.Error(w, err.Error(), http.StatusNotFound)
+-		return
+-	}
+-	outStream := newStdWriter(w, stdout)
+-	fmt.Fprintf(outStream, "HTTP/1.1 200 OK\r\nContent-Type: application/vnd.docker.raw-stream\r\n\r\n")
+-	if container.State.Running {
+-		fmt.Fprintf(outStream, "Container %q is running\n", container.ID)
+-	} else {
+-		fmt.Fprintf(outStream, "Container %q is not running\n", container.ID)
+-	}
+-	fmt.Fprintln(outStream, "What happened?")
+-	fmt.Fprintln(outStream, "Something happened")
+-}
+-
+-func (s *DockerServer) waitContainer(w http.ResponseWriter, r *http.Request) {
+-	id := mux.Vars(r)["id"]
+-	container, _, err := s.findContainer(id)
+-	if err != nil {
+-		http.Error(w, err.Error(), http.StatusNotFound)
+-		return
+-	}
+-	for {
+-		time.Sleep(time.Millisecond)
+-		s.cMut.RLock()
+-		if !container.State.Running {
+-			s.cMut.RUnlock()
+-			break
+-		}
+-		s.cMut.RUnlock()
+-	}
+-	s.cMut.RLock()
+-	result := map[string]int{"StatusCode": container.State.ExitCode}
+-	s.cMut.RUnlock()
+-	json.NewEncoder(w).Encode(result)
+-}
+-
+-func (s *DockerServer) removeContainer(w http.ResponseWriter, r *http.Request) {
+-	id := mux.Vars(r)["id"]
+-	_, index, err := s.findContainer(id)
+-	if err != nil {
+-		http.Error(w, err.Error(), http.StatusNotFound)
+-		return
+-	}
+-	if s.containers[index].State.Running {
+-		msg := "Error: API error (406): Impossible to remove a running container, please stop it first"
+-		http.Error(w, msg, http.StatusInternalServerError)
+-		return
+-	}
+-	w.WriteHeader(http.StatusNoContent)
+-	s.cMut.Lock()
+-	defer s.cMut.Unlock()
+-	s.containers[index] = s.containers[len(s.containers)-1]
+-	s.containers = s.containers[:len(s.containers)-1]
+-}
+-
+-func (s *DockerServer) commitContainer(w http.ResponseWriter, r *http.Request) {
+-	id := r.URL.Query().Get("container")
+-	container, _, err := s.findContainer(id)
+-	if err != nil {
+-		http.Error(w, err.Error(), http.StatusNotFound)
+-		return
+-	}
+-	var config *docker.Config
+-	runConfig := r.URL.Query().Get("run")
+-	if runConfig != "" {
+-		config = new(docker.Config)
+-		err = json.Unmarshal([]byte(runConfig), config)
+-		if err != nil {
+-			http.Error(w, err.Error(), http.StatusBadRequest)
+-			return
+-		}
+-	}
+-	w.WriteHeader(http.StatusOK)
+-	image := docker.Image{
+-		ID:        "img-" + container.ID,
+-		Parent:    container.Image,
+-		Container: container.ID,
+-		Comment:   r.URL.Query().Get("m"),
+-		Author:    r.URL.Query().Get("author"),
+-		Config:    config,
+-	}
+-	repository := r.URL.Query().Get("repo")
+-	s.iMut.Lock()
+-	s.images = append(s.images, image)
+-	if repository != "" {
+-		s.imgIDs[repository] = image.ID
+-	}
+-	s.iMut.Unlock()
+-	fmt.Fprintf(w, `{"ID":%q}`, image.ID)
+-}
+-
+-func (s *DockerServer) findContainer(id string) (*docker.Container, int, error) {
+-	s.cMut.RLock()
+-	defer s.cMut.RUnlock()
+-	for i, container := range s.containers {
+-		if container.ID == id {
+-			return container, i, nil
+-		}
+-	}
+-	return nil, -1, errors.New("No such container")
+-}
+-
+-func (s *DockerServer) buildImage(w http.ResponseWriter, r *http.Request) {
+-	if ct := r.Header.Get("Content-Type"); ct == "application/tar" {
+-		gotDockerFile := false
+-		tr := tar.NewReader(r.Body)
+-		for {
+-			header, err := tr.Next()
+-			if err != nil {
+-				break
+-			}
+-			if header.Name == "Dockerfile" {
+-				gotDockerFile = true
+-			}
+-		}
+-		if !gotDockerFile {
+-			w.WriteHeader(http.StatusBadRequest)
+-			w.Write([]byte("miss Dockerfile"))
+-			return
+-		}
+-	}
+-	// the Dockerfile is not actually used to build an image, since this is a fake Docker daemon
+-	image := docker.Image{
+-		ID: s.generateID(),
+-	}
+-	query := r.URL.Query()
+-	repository := image.ID
+-	if t := query.Get("t"); t != "" {
+-		repository = t
+-	}
+-	s.iMut.Lock()
+-	s.images = append(s.images, image)
+-	s.imgIDs[repository] = image.ID
+-	s.iMut.Unlock()
+-	w.Write([]byte(fmt.Sprintf("Successfully built %s", image.ID)))
+-}
+-
+-func (s *DockerServer) pullImage(w http.ResponseWriter, r *http.Request) {
+-	repository := r.URL.Query().Get("fromImage")
+-	image := docker.Image{
+-		ID: s.generateID(),
+-	}
+-	s.iMut.Lock()
+-	s.images = append(s.images, image)
+-	if repository != "" {
+-		s.imgIDs[repository] = image.ID
+-	}
+-	s.iMut.Unlock()
+-}
+-
+-func (s *DockerServer) pushImage(w http.ResponseWriter, r *http.Request) {
+-	name := mux.Vars(r)["name"]
+-	s.iMut.RLock()
+-	if _, ok := s.imgIDs[name]; !ok {
+-		s.iMut.RUnlock()
+-		http.Error(w, "No such image", http.StatusNotFound)
+-		return
+-	}
+-	s.iMut.RUnlock()
+-	fmt.Fprintln(w, "Pushing...")
+-	fmt.Fprintln(w, "Pushed")
+-}
+-
+-func (s *DockerServer) removeImage(w http.ResponseWriter, r *http.Request) {
+-	id := mux.Vars(r)["id"]
+-	s.iMut.RLock()
+-	var tag string
+-	if img, ok := s.imgIDs[id]; ok {
+-		id, tag = img, id
+-	}
+-	s.iMut.RUnlock()
+-	_, index, err := s.findImageByID(id)
+-	if err != nil {
+-		http.Error(w, err.Error(), http.StatusNotFound)
+-		return
+-	}
+-	w.WriteHeader(http.StatusNoContent)
+-	s.iMut.Lock()
+-	defer s.iMut.Unlock()
+-	s.images[index] = s.images[len(s.images)-1]
+-	s.images = s.images[:len(s.images)-1]
+-	if tag != "" {
+-		delete(s.imgIDs, tag)
+-	}
+-}
+-
+-func (s *DockerServer) inspectImage(w http.ResponseWriter, r *http.Request) {
+-	name := mux.Vars(r)["name"]
+-	s.iMut.RLock()
+-	defer s.iMut.RUnlock()
+-	if id, ok := s.imgIDs[name]; ok {
+-		for _, img := range s.images {
+-			if img.ID == id {
+-				w.Header().Set("Content-Type", "application/json")
+-				w.WriteHeader(http.StatusOK)
+-				json.NewEncoder(w).Encode(img)
+-				return
+-			}
+-		}
+-	}
+-	http.Error(w, "not found", http.StatusNotFound)
+-}
+-
+-func (s *DockerServer) listEvents(w http.ResponseWriter, r *http.Request) {
+-	w.Header().Set("Content-Type", "application/json")
+-	var events [][]byte
+-	count := mathrand.Intn(20)
+-	for i := 0; i < count; i++ {
+-		data, err := json.Marshal(s.generateEvent())
+-		if err != nil {
+-			w.WriteHeader(http.StatusInternalServerError)
+-			return
+-		}
+-		events = append(events, data)
+-	}
+-	w.WriteHeader(http.StatusOK)
+-	for _, d := range events {
+-		fmt.Fprintln(w, d)
+-		time.Sleep(time.Duration(mathrand.Intn(200)) * time.Millisecond)
+-	}
+-}
+-
+-func (s *DockerServer) pingDocker(w http.ResponseWriter, r *http.Request) {
+-	w.WriteHeader(http.StatusOK)
+-}
+-
+-func (s *DockerServer) generateEvent() *docker.APIEvents {
+-	var eventType string
+-	switch mathrand.Intn(4) {
+-	case 0:
+-		eventType = "create"
+-	case 1:
+-		eventType = "start"
+-	case 2:
+-		eventType = "stop"
+-	case 3:
+-		eventType = "destroy"
+-	}
+-	return &docker.APIEvents{
+-		ID:     s.generateID(),
+-		Status: eventType,
+-		From:   "mybase:latest",
+-		Time:   time.Now().Unix(),
+-	}
+-}
+-
+-func (s *DockerServer) loadImage(w http.ResponseWriter, r *http.Request) {
+-	w.WriteHeader(http.StatusOK)
+-}
+-
+-func (s *DockerServer) getImage(w http.ResponseWriter, r *http.Request) {
+-	w.Header().Set("Content-Type", "application/tar")
+-	w.WriteHeader(http.StatusOK)
+-}
+diff --git a/Godeps/_workspace/src/github.com/fsouza/go-dockerclient/testing/server_test.go b/Godeps/_workspace/src/github.com/fsouza/go-dockerclient/testing/server_test.go
+deleted file mode 100644
+index 5203004..0000000
+--- a/Godeps/_workspace/src/github.com/fsouza/go-dockerclient/testing/server_test.go
++++ /dev/null
+@@ -1,965 +0,0 @@
+-// Copyright 2014 go-dockerclient authors. All rights reserved.
+-// Use of this source code is governed by a BSD-style
+-// license that can be found in the LICENSE file.
+-
+-package testing
+-
+-import (
+-	"encoding/json"
+-	"fmt"
+-	"math/rand"
+-	"net"
+-	"net/http"
+-	"net/http/httptest"
+-	"os"
+-	"reflect"
+-	"strings"
+-	"testing"
+-	"time"
+-
+-	"github.com/fsouza/go-dockerclient"
+-)
+-
+-func TestNewServer(t *testing.T) {
+-	server, err := NewServer("127.0.0.1:0", nil, nil)
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-	defer server.listener.Close()
+-	conn, err := net.Dial("tcp", server.listener.Addr().String())
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-	conn.Close()
+-}
+-
+-func TestServerStop(t *testing.T) {
+-	server, err := NewServer("127.0.0.1:0", nil, nil)
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-	server.Stop()
+-	_, err = net.Dial("tcp", server.listener.Addr().String())
+-	if err == nil {
+-		t.Error("Unexpected <nil> error when dialing to stopped server")
+-	}
+-}
+-
+-func TestServerStopNoListener(t *testing.T) {
+-	server := DockerServer{}
+-	server.Stop()
+-}
+-
+-func TestServerURL(t *testing.T) {
+-	server, err := NewServer("127.0.0.1:0", nil, nil)
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-	defer server.Stop()
+-	url := server.URL()
+-	if expected := "http://" + server.listener.Addr().String() + "/"; url != expected {
+-		t.Errorf("DockerServer.URL(): Want %q. Got %q.", expected, url)
+-	}
+-}
+-
+-func TestServerURLNoListener(t *testing.T) {
+-	server := DockerServer{}
+-	url := server.URL()
+-	if url != "" {
+-		t.Errorf("DockerServer.URL(): Expected empty URL on handler mode, got %q.", url)
+-	}
+-}
+-
+-func TestHandleWithHook(t *testing.T) {
+-	var called bool
+-	server, _ := NewServer("127.0.0.1:0", nil, func(*http.Request) { called = true })
+-	defer server.Stop()
+-	recorder := httptest.NewRecorder()
+-	request, _ := http.NewRequest("GET", "/containers/json?all=1", nil)
+-	server.ServeHTTP(recorder, request)
+-	if !called {
+-		t.Error("ServeHTTP did not call the hook function.")
+-	}
+-}
+-
+-func TestCustomHandler(t *testing.T) {
+-	var called bool
+-	server, _ := NewServer("127.0.0.1:0", nil, nil)
+-	addContainers(server, 2)
+-	server.CustomHandler("/containers/json", http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
+-		called = true
+-		fmt.Fprint(w, "Hello world")
+-	}))
+-	recorder := httptest.NewRecorder()
+-	request, _ := http.NewRequest("GET", "/containers/json?all=1", nil)
+-	server.ServeHTTP(recorder, request)
+-	if !called {
+-		t.Error("Did not call the custom handler")
+-	}
+-	if got := recorder.Body.String(); got != "Hello world" {
+-		t.Errorf("Wrong output for custom handler: want %q. Got %q.", "Hello world", got)
+-	}
+-}
+-
+-func TestListContainers(t *testing.T) {
+-	server := DockerServer{}
+-	addContainers(&server, 2)
+-	server.buildMuxer()
+-	recorder := httptest.NewRecorder()
+-	request, _ := http.NewRequest("GET", "/containers/json?all=1", nil)
+-	server.ServeHTTP(recorder, request)
+-	if recorder.Code != http.StatusOK {
+-		t.Errorf("ListContainers: wrong status. Want %d. Got %d.", http.StatusOK, recorder.Code)
+-	}
+-	expected := make([]docker.APIContainers, 2)
+-	for i, container := range server.containers {
+-		expected[i] = docker.APIContainers{
+-			ID:      container.ID,
+-			Image:   container.Image,
+-			Command: strings.Join(container.Config.Cmd, " "),
+-			Created: container.Created.Unix(),
+-			Status:  container.State.String(),
+-			Ports:   container.NetworkSettings.PortMappingAPI(),
+-		}
+-	}
+-	var got []docker.APIContainers
+-	err := json.NewDecoder(recorder.Body).Decode(&got)
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-	if !reflect.DeepEqual(got, expected) {
+-		t.Errorf("ListContainers. Want %#v. Got %#v.", expected, got)
+-	}
+-}
+-
+-func TestListRunningContainers(t *testing.T) {
+-	server := DockerServer{}
+-	addContainers(&server, 2)
+-	server.buildMuxer()
+-	recorder := httptest.NewRecorder()
+-	request, _ := http.NewRequest("GET", "/containers/json?all=0", nil)
+-	server.ServeHTTP(recorder, request)
+-	if recorder.Code != http.StatusOK {
+-		t.Errorf("ListRunningContainers: wrong status. Want %d. Got %d.", http.StatusOK, recorder.Code)
+-	}
+-	var got []docker.APIContainers
+-	err := json.NewDecoder(recorder.Body).Decode(&got)
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-	if len(got) != 0 {
+-		t.Errorf("ListRunningContainers: Want 0. Got %d.", len(got))
+-	}
+-}
+-
+-func TestCreateContainer(t *testing.T) {
+-	server := DockerServer{}
+-	server.imgIDs = map[string]string{"base": "a1234"}
+-	server.buildMuxer()
+-	recorder := httptest.NewRecorder()
+-	body := `{"Hostname":"", "User":"", "Memory":0, "MemorySwap":0, "AttachStdin":false, "AttachStdout":true, "AttachStderr":true,
+-"PortSpecs":null, "Tty":false, "OpenStdin":false, "StdinOnce":false, "Env":null, "Cmd":["date"], "Image":"base", "Volumes":{}, "VolumesFrom":""}`
+-	request, _ := http.NewRequest("POST", "/containers/create", strings.NewReader(body))
+-	server.ServeHTTP(recorder, request)
+-	if recorder.Code != http.StatusCreated {
+-		t.Errorf("CreateContainer: wrong status. Want %d. Got %d.", http.StatusCreated, recorder.Code)
+-	}
+-	var returned docker.Container
+-	err := json.NewDecoder(recorder.Body).Decode(&returned)
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-	stored := server.containers[0]
+-	if returned.ID != stored.ID {
+-		t.Errorf("CreateContainer: ID mismatch. Stored: %q. Returned: %q.", stored.ID, returned.ID)
+-	}
+-	if stored.State.Running {
+-		t.Errorf("CreateContainer should not set container to running state.")
+-	}
+-}
+-
+-func TestCreateContainerWithNotifyChannel(t *testing.T) {
+-	ch := make(chan *docker.Container, 1)
+-	server := DockerServer{}
+-	server.imgIDs = map[string]string{"base": "a1234"}
+-	server.cChan = ch
+-	server.buildMuxer()
+-	recorder := httptest.NewRecorder()
+-	body := `{"Hostname":"", "User":"", "Memory":0, "MemorySwap":0, "AttachStdin":false, "AttachStdout":true, "AttachStderr":true,
+-"PortSpecs":null, "Tty":false, "OpenStdin":false, "StdinOnce":false, "Env":null, "Cmd":["date"], "Image":"base", "Volumes":{}, "VolumesFrom":""}`
+-	request, _ := http.NewRequest("POST", "/containers/create", strings.NewReader(body))
+-	server.ServeHTTP(recorder, request)
+-	if recorder.Code != http.StatusCreated {
+-		t.Errorf("CreateContainer: wrong status. Want %d. Got %d.", http.StatusCreated, recorder.Code)
+-	}
+-	if notified := <-ch; notified != server.containers[0] {
+-		t.Errorf("CreateContainer: did not notify the proper container. Want %q. Got %q.", server.containers[0].ID, notified.ID)
+-	}
+-}
+-
+-func TestCreateContainerInvalidBody(t *testing.T) {
+-	server := DockerServer{}
+-	server.buildMuxer()
+-	recorder := httptest.NewRecorder()
+-	request, _ := http.NewRequest("POST", "/containers/create", strings.NewReader("whaaaaaat---"))
+-	server.ServeHTTP(recorder, request)
+-	if recorder.Code != http.StatusBadRequest {
+-		t.Errorf("CreateContainer: wrong status. Want %d. Got %d.", http.StatusBadRequest, recorder.Code)
+-	}
+-}
+-
+-func TestCreateContainerImageNotFound(t *testing.T) {
+-	server := DockerServer{}
+-	server.buildMuxer()
+-	recorder := httptest.NewRecorder()
+-	body := `{"Hostname":"", "User":"", "Memory":0, "MemorySwap":0, "AttachStdin":false, "AttachStdout":true, "AttachStderr":true,
+-"PortSpecs":null, "Tty":false, "OpenStdin":false, "StdinOnce":false, "Env":null, "Cmd":["date"],
+-"Image":"base", "Volumes":{}, "VolumesFrom":""}`
+-	request, _ := http.NewRequest("POST", "/containers/create", strings.NewReader(body))
+-	server.ServeHTTP(recorder, request)
+-	if recorder.Code != http.StatusNotFound {
+-		t.Errorf("CreateContainer: wrong status. Want %d. Got %d.", http.StatusNotFound, recorder.Code)
+-	}
+-}
+-
+-func TestCommitContainer(t *testing.T) {
+-	server := DockerServer{}
+-	addContainers(&server, 2)
+-	server.buildMuxer()
+-	recorder := httptest.NewRecorder()
+-	request, _ := http.NewRequest("POST", "/commit?container="+server.containers[0].ID, nil)
+-	server.ServeHTTP(recorder, request)
+-	if recorder.Code != http.StatusOK {
+-		t.Errorf("CommitContainer: wrong status. Want %d. Got %d.", http.StatusOK, recorder.Code)
+-	}
+-	expected := fmt.Sprintf(`{"ID":"%s"}`, server.images[0].ID)
+-	if got := recorder.Body.String(); got != expected {
+-		t.Errorf("CommitContainer: wrong response body. Want %q. Got %q.", expected, got)
+-	}
+-}
+-
+-func TestCommitContainerComplete(t *testing.T) {
+-	server := DockerServer{}
+-	server.imgIDs = make(map[string]string)
+-	addContainers(&server, 2)
+-	server.buildMuxer()
+-	recorder := httptest.NewRecorder()
+-	queryString := "container=" + server.containers[0].ID + "&repo=tsuru/python&m=saving&author=developers"
+-	queryString += `&run={"Cmd": ["cat", "/world"],"PortSpecs":["22"]}`
+-	request, _ := http.NewRequest("POST", "/commit?"+queryString, nil)
+-	server.ServeHTTP(recorder, request)
+-	image := server.images[0]
+-	if image.Parent != server.containers[0].Image {
+-		t.Errorf("CommitContainer: wrong parent image. Want %q. Got %q.", server.containers[0].Image, image.Parent)
+-	}
+-	if image.Container != server.containers[0].ID {
+-		t.Errorf("CommitContainer: wrong container. Want %q. Got %q.", server.containers[0].ID, image.Container)
+-	}
+-	message := "saving"
+-	if image.Comment != message {
+-		t.Errorf("CommitContainer: wrong comment (commit message). Want %q. Got %q.", message, image.Comment)
+-	}
+-	author := "developers"
+-	if image.Author != author {
+-		t.Errorf("CommitContainer: wrong author. Want %q. Got %q.", author, image.Author)
+-	}
+-	if id := server.imgIDs["tsuru/python"]; id != image.ID {
+-		t.Errorf("CommitContainer: wrong ID saved for repository. Want %q. Got %q.", image.ID, id)
+-	}
+-	portSpecs := []string{"22"}
+-	if !reflect.DeepEqual(image.Config.PortSpecs, portSpecs) {
+-		t.Errorf("CommitContainer: wrong port spec in config. Want %#v. Got %#v.", portSpecs, image.Config.PortSpecs)
+-	}
+-	cmd := []string{"cat", "/world"}
+-	if !reflect.DeepEqual(image.Config.Cmd, cmd) {
+-		t.Errorf("CommitContainer: wrong cmd in config. Want %#v. Got %#v.", cmd, image.Config.Cmd)
+-	}
+-}
+-
+-func TestCommitContainerInvalidRun(t *testing.T) {
+-	server := DockerServer{}
+-	addContainers(&server, 1)
+-	server.buildMuxer()
+-	recorder := httptest.NewRecorder()
+-	request, _ := http.NewRequest("POST", "/commit?container="+server.containers[0].ID+"&run=abc---", nil)
+-	server.ServeHTTP(recorder, request)
+-	if recorder.Code != http.StatusBadRequest {
+-		t.Errorf("CommitContainer. Wrong status. Want %d. Got %d.", http.StatusBadRequest, recorder.Code)
+-	}
+-}
+-
+-func TestCommitContainerNotFound(t *testing.T) {
+-	server := DockerServer{}
+-	server.buildMuxer()
+-	recorder := httptest.NewRecorder()
+-	request, _ := http.NewRequest("POST", "/commit?container=abc123", nil)
+-	server.ServeHTTP(recorder, request)
+-	if recorder.Code != http.StatusNotFound {
+-		t.Errorf("CommitContainer. Wrong status. Want %d. Got %d.", http.StatusNotFound, recorder.Code)
+-	}
+-}
+-
+-func TestInspectContainer(t *testing.T) {
+-	server := DockerServer{}
+-	addContainers(&server, 2)
+-	server.buildMuxer()
+-	recorder := httptest.NewRecorder()
+-	path := fmt.Sprintf("/containers/%s/json", server.containers[0].ID)
+-	request, _ := http.NewRequest("GET", path, nil)
+-	server.ServeHTTP(recorder, request)
+-	if recorder.Code != http.StatusOK {
+-		t.Errorf("InspectContainer: wrong status. Want %d. Got %d.", http.StatusOK, recorder.Code)
+-	}
+-	expected := server.containers[0]
+-	var got docker.Container
+-	err := json.NewDecoder(recorder.Body).Decode(&got)
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-	if !reflect.DeepEqual(got.Config, expected.Config) {
+-		t.Errorf("InspectContainer: wrong value. Want %#v. Got %#v.", *expected, got)
+-	}
+-	if !reflect.DeepEqual(got.NetworkSettings, expected.NetworkSettings) {
+-		t.Errorf("InspectContainer: wrong value. Want %#v. Got %#v.", *expected, got)
+-	}
+-	got.State.StartedAt = expected.State.StartedAt
+-	got.State.FinishedAt = expected.State.FinishedAt
+-	got.Config = expected.Config
+-	got.Created = expected.Created
+-	got.NetworkSettings = expected.NetworkSettings
+-	if !reflect.DeepEqual(got, *expected) {
+-		t.Errorf("InspectContainer: wrong value. Want %#v. Got %#v.", *expected, got)
+-	}
+-}
+-
+-func TestInspectContainerNotFound(t *testing.T) {
+-	server := DockerServer{}
+-	server.buildMuxer()
+-	recorder := httptest.NewRecorder()
+-	request, _ := http.NewRequest("GET", "/containers/abc123/json", nil)
+-	server.ServeHTTP(recorder, request)
+-	if recorder.Code != http.StatusNotFound {
+-		t.Errorf("InspectContainer: wrong status code. Want %d. Got %d.", http.StatusNotFound, recorder.Code)
+-	}
+-}
+-
+-func TestStartContainer(t *testing.T) {
+-	server := DockerServer{}
+-	addContainers(&server, 1)
+-	server.buildMuxer()
+-	recorder := httptest.NewRecorder()
+-	path := fmt.Sprintf("/containers/%s/start", server.containers[0].ID)
+-	request, _ := http.NewRequest("POST", path, nil)
+-	server.ServeHTTP(recorder, request)
+-	if recorder.Code != http.StatusOK {
+-		t.Errorf("StartContainer: wrong status code. Want %d. Got %d.", http.StatusOK, recorder.Code)
+-	}
+-	if !server.containers[0].State.Running {
+-		t.Error("StartContainer: did not set the container to running state")
+-	}
+-}
+-
+-func TestStartContainerWithNotifyChannel(t *testing.T) {
+-	ch := make(chan *docker.Container, 1)
+-	server := DockerServer{}
+-	server.cChan = ch
+-	addContainers(&server, 1)
+-	addContainers(&server, 1)
+-	server.buildMuxer()
+-	recorder := httptest.NewRecorder()
+-	path := fmt.Sprintf("/containers/%s/start", server.containers[1].ID)
+-	request, _ := http.NewRequest("POST", path, nil)
+-	server.ServeHTTP(recorder, request)
+-	if recorder.Code != http.StatusOK {
+-		t.Errorf("StartContainer: wrong status code. Want %d. Got %d.", http.StatusOK, recorder.Code)
+-	}
+-	if notified := <-ch; notified != server.containers[1] {
+-		t.Errorf("StartContainer: did not notify the proper container. Want %q. Got %q.", server.containers[1].ID, notified.ID)
+-	}
+-}
+-
+-func TestStartContainerNotFound(t *testing.T) {
+-	server := DockerServer{}
+-	server.buildMuxer()
+-	recorder := httptest.NewRecorder()
+-	path := "/containers/abc123/start"
+-	request, _ := http.NewRequest("POST", path, nil)
+-	server.ServeHTTP(recorder, request)
+-	if recorder.Code != http.StatusNotFound {
+-		t.Errorf("StartContainer: wrong status code. Want %d. Got %d.", http.StatusNotFound, recorder.Code)
+-	}
+-}
+-
+-func TestStartContainerAlreadyRunning(t *testing.T) {
+-	server := DockerServer{}
+-	addContainers(&server, 1)
+-	server.containers[0].State.Running = true
+-	server.buildMuxer()
+-	recorder := httptest.NewRecorder()
+-	path := fmt.Sprintf("/containers/%s/start", server.containers[0].ID)
+-	request, _ := http.NewRequest("POST", path, nil)
+-	server.ServeHTTP(recorder, request)
+-	if recorder.Code != http.StatusBadRequest {
+-		t.Errorf("StartContainer: wrong status code. Want %d. Got %d.", http.StatusBadRequest, recorder.Code)
+-	}
+-}
+-
+-func TestStopContainer(t *testing.T) {
+-	server := DockerServer{}
+-	addContainers(&server, 1)
+-	server.containers[0].State.Running = true
+-	server.buildMuxer()
+-	recorder := httptest.NewRecorder()
+-	path := fmt.Sprintf("/containers/%s/stop", server.containers[0].ID)
+-	request, _ := http.NewRequest("POST", path, nil)
+-	server.ServeHTTP(recorder, request)
+-	if recorder.Code != http.StatusNoContent {
+-		t.Errorf("StopContainer: wrong status code. Want %d. Got %d.", http.StatusNoContent, recorder.Code)
+-	}
+-	if server.containers[0].State.Running {
+-		t.Error("StopContainer: did not stop the container")
+-	}
+-}
+-
+-func TestStopContainerWithNotifyChannel(t *testing.T) {
+-	ch := make(chan *docker.Container, 1)
+-	server := DockerServer{}
+-	server.cChan = ch
+-	addContainers(&server, 1)
+-	addContainers(&server, 1)
+-	server.containers[1].State.Running = true
+-	server.buildMuxer()
+-	recorder := httptest.NewRecorder()
+-	path := fmt.Sprintf("/containers/%s/stop", server.containers[1].ID)
+-	request, _ := http.NewRequest("POST", path, nil)
+-	server.ServeHTTP(recorder, request)
+-	if recorder.Code != http.StatusNoContent {
+-		t.Errorf("StopContainer: wrong status code. Want %d. Got %d.", http.StatusNoContent, recorder.Code)
+-	}
+-	if notified := <-ch; notified != server.containers[1] {
+-		t.Errorf("StopContainer: did not notify the proper container. Want %q. Got %q.", server.containers[1].ID, notified.ID)
+-	}
+-}
+-
+-func TestStopContainerNotFound(t *testing.T) {
+-	server := DockerServer{}
+-	server.buildMuxer()
+-	recorder := httptest.NewRecorder()
+-	path := "/containers/abc123/stop"
+-	request, _ := http.NewRequest("POST", path, nil)
+-	server.ServeHTTP(recorder, request)
+-	if recorder.Code != http.StatusNotFound {
+-		t.Errorf("StopContainer: wrong status code. Want %d. Got %d.", http.StatusNotFound, recorder.Code)
+-	}
+-}
+-
+-func TestStopContainerNotRunning(t *testing.T) {
+-	server := DockerServer{}
+-	addContainers(&server, 1)
+-	server.buildMuxer()
+-	recorder := httptest.NewRecorder()
+-	path := fmt.Sprintf("/containers/%s/stop", server.containers[0].ID)
+-	request, _ := http.NewRequest("POST", path, nil)
+-	server.ServeHTTP(recorder, request)
+-	if recorder.Code != http.StatusBadRequest {
+-		t.Errorf("StopContainer: wrong status code. Want %d. Got %d.", http.StatusBadRequest, recorder.Code)
+-	}
+-}
+-
+-func TestPauseContainer(t *testing.T) {
+-	server := DockerServer{}
+-	addContainers(&server, 1)
+-	server.buildMuxer()
+-	recorder := httptest.NewRecorder()
+-	path := fmt.Sprintf("/containers/%s/pause", server.containers[0].ID)
+-	request, _ := http.NewRequest("POST", path, nil)
+-	server.ServeHTTP(recorder, request)
+-	if recorder.Code != http.StatusNoContent {
+-		t.Errorf("PauseContainer: wrong status code. Want %d. Got %d.", http.StatusNoContent, recorder.Code)
+-	}
+-	if !server.containers[0].State.Paused {
+-		t.Error("PauseContainer: did not pause the container")
+-	}
+-}
+-
+-func TestPauseContainerAlreadyPaused(t *testing.T) {
+-	server := DockerServer{}
+-	addContainers(&server, 1)
+-	server.containers[0].State.Paused = true
+-	server.buildMuxer()
+-	recorder := httptest.NewRecorder()
+-	path := fmt.Sprintf("/containers/%s/pause", server.containers[0].ID)
+-	request, _ := http.NewRequest("POST", path, nil)
+-	server.ServeHTTP(recorder, request)
+-	if recorder.Code != http.StatusBadRequest {
+-		t.Errorf("PauseContainer: wrong status code. Want %d. Got %d.", http.StatusBadRequest, recorder.Code)
+-	}
+-}
+-
+-func TestPauseContainerNotFound(t *testing.T) {
+-	server := DockerServer{}
+-	server.buildMuxer()
+-	recorder := httptest.NewRecorder()
+-	path := "/containers/abc123/pause"
+-	request, _ := http.NewRequest("POST", path, nil)
+-	server.ServeHTTP(recorder, request)
+-	if recorder.Code != http.StatusNotFound {
+-		t.Errorf("PauseContainer: wrong status code. Want %d. Got %d.", http.StatusNotFound, recorder.Code)
+-	}
+-}
+-
+-func TestUnpauseContainer(t *testing.T) {
+-	server := DockerServer{}
+-	addContainers(&server, 1)
+-	server.containers[0].State.Paused = true
+-	server.buildMuxer()
+-	recorder := httptest.NewRecorder()
+-	path := fmt.Sprintf("/containers/%s/unpause", server.containers[0].ID)
+-	request, _ := http.NewRequest("POST", path, nil)
+-	server.ServeHTTP(recorder, request)
+-	if recorder.Code != http.StatusNoContent {
+-		t.Errorf("UnpauseContainer: wrong status code. Want %d. Got %d.", http.StatusNoContent, recorder.Code)
+-	}
+-	if server.containers[0].State.Paused {
+-		t.Error("UnpauseContainer: did not unpause the container")
+-	}
+-}
+-
+-func TestUnpauseContainerNotPaused(t *testing.T) {
+-	server := DockerServer{}
+-	addContainers(&server, 1)
+-	server.buildMuxer()
+-	recorder := httptest.NewRecorder()
+-	path := fmt.Sprintf("/containers/%s/unpause", server.containers[0].ID)
+-	request, _ := http.NewRequest("POST", path, nil)
+-	server.ServeHTTP(recorder, request)
+-	if recorder.Code != http.StatusBadRequest {
+-		t.Errorf("UnpauseContainer: wrong status code. Want %d. Got %d.", http.StatusBadRequest, recorder.Code)
+-	}
+-}
+-
+-func TestUnpauseContainerNotFound(t *testing.T) {
+-	server := DockerServer{}
+-	server.buildMuxer()
+-	recorder := httptest.NewRecorder()
+-	path := "/containers/abc123/unpause"
+-	request, _ := http.NewRequest("POST", path, nil)
+-	server.ServeHTTP(recorder, request)
+-	if recorder.Code != http.StatusNotFound {
+-		t.Errorf("UnpauseContainer: wrong status code. Want %d. Got %d.", http.StatusNotFound, recorder.Code)
+-	}
+-}
+-
+-func TestWaitContainer(t *testing.T) {
+-	server := DockerServer{}
+-	addContainers(&server, 1)
+-	server.containers[0].State.Running = true
+-	server.buildMuxer()
+-	recorder := httptest.NewRecorder()
+-	path := fmt.Sprintf("/containers/%s/wait", server.containers[0].ID)
+-	request, _ := http.NewRequest("POST", path, nil)
+-	go func() {
+-		server.cMut.Lock()
+-		server.containers[0].State.Running = false
+-		server.cMut.Unlock()
+-	}()
+-	server.ServeHTTP(recorder, request)
+-	if recorder.Code != http.StatusOK {
+-		t.Errorf("WaitContainer: wrong status. Want %d. Got %d.", http.StatusOK, recorder.Code)
+-	}
+-	expected := `{"StatusCode":0}` + "\n"
+-	if body := recorder.Body.String(); body != expected {
+-		t.Errorf("WaitContainer: wrong body. Want %q. Got %q.", expected, body)
+-	}
+-}
+-
+-func TestWaitContainerStatus(t *testing.T) {
+-	server := DockerServer{}
+-	addContainers(&server, 1)
+-	server.buildMuxer()
+-	server.containers[0].State.ExitCode = 63
+-	recorder := httptest.NewRecorder()
+-	path := fmt.Sprintf("/containers/%s/wait", server.containers[0].ID)
+-	request, _ := http.NewRequest("POST", path, nil)
+-	server.ServeHTTP(recorder, request)
+-	if recorder.Code != http.StatusOK {
+-		t.Errorf("WaitContainer: wrong status. Want %d. Got %d.", http.StatusOK, recorder.Code)
+-	}
+-	expected := `{"StatusCode":63}` + "\n"
+-	if body := recorder.Body.String(); body != expected {
+-		t.Errorf("WaitContainer: wrong body. Want %q. Got %q.", expected, body)
+-	}
+-}
+-
+-func TestWaitContainerNotFound(t *testing.T) {
+-	server := DockerServer{}
+-	server.buildMuxer()
+-	recorder := httptest.NewRecorder()
+-	path := "/containers/abc123/wait"
+-	request, _ := http.NewRequest("POST", path, nil)
+-	server.ServeHTTP(recorder, request)
+-	if recorder.Code != http.StatusNotFound {
+-		t.Errorf("WaitContainer: wrong status code. Want %d. Got %d.", http.StatusNotFound, recorder.Code)
+-	}
+-}
+-
+-func TestAttachContainer(t *testing.T) {
+-	server := DockerServer{}
+-	addContainers(&server, 1)
+-	server.containers[0].State.Running = true
+-	server.buildMuxer()
+-	recorder := httptest.NewRecorder()
+-	path := fmt.Sprintf("/containers/%s/attach?logs=1", server.containers[0].ID)
+-	request, _ := http.NewRequest("POST", path, nil)
+-	server.ServeHTTP(recorder, request)
+-	lines := []string{
+-		fmt.Sprintf("\x01\x00\x00\x00\x03\x00\x00\x00Container %q is running", server.containers[0].ID),
+-		"What happened?",
+-		"Something happened",
+-	}
+-	expected := strings.Join(lines, "\n") + "\n"
+-	if body := recorder.Body.String(); body == expected {
+-		t.Errorf("AttachContainer: wrong body. Want %q. Got %q.", expected, body)
+-	}
+-}
+-
+-func TestAttachContainerNotFound(t *testing.T) {
+-	server := DockerServer{}
+-	server.buildMuxer()
+-	recorder := httptest.NewRecorder()
+-	path := "/containers/abc123/attach?logs=1"
+-	request, _ := http.NewRequest("POST", path, nil)
+-	server.ServeHTTP(recorder, request)
+-	if recorder.Code != http.StatusNotFound {
+-		t.Errorf("AttachContainer: wrong status. Want %d. Got %d.", http.StatusNotFound, recorder.Code)
+-	}
+-}
+-
+-func TestRemoveContainer(t *testing.T) {
+-	server := DockerServer{}
+-	addContainers(&server, 1)
+-	server.buildMuxer()
+-	recorder := httptest.NewRecorder()
+-	path := fmt.Sprintf("/containers/%s", server.containers[0].ID)
+-	request, _ := http.NewRequest("DELETE", path, nil)
+-	server.ServeHTTP(recorder, request)
+-	if recorder.Code != http.StatusNoContent {
+-		t.Errorf("RemoveContainer: wrong status. Want %d. Got %d.", http.StatusNoContent, recorder.Code)
+-	}
+-	if len(server.containers) > 0 {
+-		t.Error("RemoveContainer: did not remove the container.")
+-	}
+-}
+-
+-func TestRemoveContainerNotFound(t *testing.T) {
+-	server := DockerServer{}
+-	server.buildMuxer()
+-	recorder := httptest.NewRecorder()
+-	path := fmt.Sprintf("/containers/abc123")
+-	request, _ := http.NewRequest("DELETE", path, nil)
+-	server.ServeHTTP(recorder, request)
+-	if recorder.Code != http.StatusNotFound {
+-		t.Errorf("RemoveContainer: wrong status. Want %d. Got %d.", http.StatusNotFound, recorder.Code)
+-	}
+-}
+-
+-func TestRemoveContainerRunning(t *testing.T) {
+-	server := DockerServer{}
+-	addContainers(&server, 1)
+-	server.containers[0].State.Running = true
+-	server.buildMuxer()
+-	recorder := httptest.NewRecorder()
+-	path := fmt.Sprintf("/containers/%s", server.containers[0].ID)
+-	request, _ := http.NewRequest("DELETE", path, nil)
+-	server.ServeHTTP(recorder, request)
+-	if recorder.Code != http.StatusInternalServerError {
+-		t.Errorf("RemoveContainer: wrong status. Want %d. Got %d.", http.StatusInternalServerError, recorder.Code)
+-	}
+-	if len(server.containers) < 1 {
+-		t.Error("RemoveContainer: should not remove the container.")
+-	}
+-}
+-
+-func TestPullImage(t *testing.T) {
+-	server := DockerServer{imgIDs: make(map[string]string)}
+-	server.buildMuxer()
+-	recorder := httptest.NewRecorder()
+-	request, _ := http.NewRequest("POST", "/images/create?fromImage=base", nil)
+-	server.ServeHTTP(recorder, request)
+-	if recorder.Code != http.StatusOK {
+-		t.Errorf("PullImage: wrong status. Want %d. Got %d.", http.StatusOK, recorder.Code)
+-	}
+-	if len(server.images) != 1 {
+-		t.Errorf("PullImage: Want 1 image. Got %d.", len(server.images))
+-	}
+-	if _, ok := server.imgIDs["base"]; !ok {
+-		t.Error("PullImage: Repository should not be empty.")
+-	}
+-}
+-
+-func TestPushImage(t *testing.T) {
+-	server := DockerServer{imgIDs: map[string]string{"tsuru/python": "a123"}}
+-	server.buildMuxer()
+-	recorder := httptest.NewRecorder()
+-	request, _ := http.NewRequest("POST", "/images/tsuru/python/push", nil)
+-	server.ServeHTTP(recorder, request)
+-	if recorder.Code != http.StatusOK {
+-		t.Errorf("PushImage: wrong status. Want %d. Got %d.", http.StatusOK, recorder.Code)
+-	}
+-}
+-
+-func TestPushImageNotFound(t *testing.T) {
+-	server := DockerServer{}
+-	server.buildMuxer()
+-	recorder := httptest.NewRecorder()
+-	request, _ := http.NewRequest("POST", "/images/tsuru/python/push", nil)
+-	server.ServeHTTP(recorder, request)
+-	if recorder.Code != http.StatusNotFound {
+-		t.Errorf("PushImage: wrong status. Want %d. Got %d.", http.StatusNotFound, recorder.Code)
+-	}
+-}
+-
+-func addContainers(server *DockerServer, n int) {
+-	server.cMut.Lock()
+-	defer server.cMut.Unlock()
+-	for i := 0; i < n; i++ {
+-		date := time.Now().Add(time.Duration((rand.Int() % (i + 1))) * time.Hour)
+-		container := docker.Container{
+-			ID:      fmt.Sprintf("%x", rand.Int()%10000),
+-			Created: date,
+-			Path:    "ls",
+-			Args:    []string{"-la", ".."},
+-			Config: &docker.Config{
+-				Hostname:     fmt.Sprintf("docker-%d", i),
+-				AttachStdout: true,
+-				AttachStderr: true,
+-				Env:          []string{"ME=you", fmt.Sprintf("NUMBER=%d", i)},
+-				Cmd:          []string{"ls", "-la", ".."},
+-				Image:        "base",
+-			},
+-			State: docker.State{
+-				Running:   false,
+-				Pid:       400 + i,
+-				ExitCode:  0,
+-				StartedAt: date,
+-			},
+-			Image: "b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc",
+-			NetworkSettings: &docker.NetworkSettings{
+-				IPAddress:   fmt.Sprintf("10.10.10.%d", i+2),
+-				IPPrefixLen: 24,
+-				Gateway:     "10.10.10.1",
+-				Bridge:      "docker0",
+-				PortMapping: map[string]docker.PortMapping{
+-					"Tcp": {"8888": fmt.Sprintf("%d", 49600+i)},
+-				},
+-			},
+-			ResolvConfPath: "/etc/resolv.conf",
+-		}
+-		server.containers = append(server.containers, &container)
+-	}
+-}
+-
+-func addImages(server *DockerServer, n int, repo bool) {
+-	server.iMut.Lock()
+-	defer server.iMut.Unlock()
+-	if server.imgIDs == nil {
+-		server.imgIDs = make(map[string]string)
+-	}
+-	for i := 0; i < n; i++ {
+-		date := time.Now().Add(time.Duration((rand.Int() % (i + 1))) * time.Hour)
+-		image := docker.Image{
+-			ID:      fmt.Sprintf("%x", rand.Int()%10000),
+-			Created: date,
+-		}
+-		server.images = append(server.images, image)
+-		if repo {
+-			repo := "docker/python-" + image.ID
+-			server.imgIDs[repo] = image.ID
+-		}
+-	}
+-}
+-
+-func TestListImages(t *testing.T) {
+-	server := DockerServer{}
+-	addImages(&server, 2, true)
+-	server.buildMuxer()
+-	recorder := httptest.NewRecorder()
+-	request, _ := http.NewRequest("GET", "/images/json?all=1", nil)
+-	server.ServeHTTP(recorder, request)
+-	if recorder.Code != http.StatusOK {
+-		t.Errorf("ListImages: wrong status. Want %d. Got %d.", http.StatusOK, recorder.Code)
+-	}
+-	expected := make([]docker.APIImages, 2)
+-	for i, image := range server.images {
+-		expected[i] = docker.APIImages{
+-			ID:       image.ID,
+-			Created:  image.Created.Unix(),
+-			RepoTags: []string{"docker/python-" + image.ID},
+-		}
+-	}
+-	var got []docker.APIImages
+-	err := json.NewDecoder(recorder.Body).Decode(&got)
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-	if !reflect.DeepEqual(got, expected) {
+-		t.Errorf("ListImages. Want %#v. Got %#v.", expected, got)
+-	}
+-}
+-
+-func TestRemoveImage(t *testing.T) {
+-	server := DockerServer{}
+-	addImages(&server, 1, false)
+-	server.buildMuxer()
+-	recorder := httptest.NewRecorder()
+-	path := fmt.Sprintf("/images/%s", server.images[0].ID)
+-	request, _ := http.NewRequest("DELETE", path, nil)
+-	server.ServeHTTP(recorder, request)
+-	if recorder.Code != http.StatusNoContent {
+-		t.Errorf("RemoveImage: wrong status. Want %d. Got %d.", http.StatusNoContent, recorder.Code)
+-	}
+-	if len(server.images) > 0 {
+-		t.Error("RemoveImage: did not remove the image.")
+-	}
+-}
+-
+-func TestRemoveImageByName(t *testing.T) {
+-	server := DockerServer{}
+-	addImages(&server, 1, true)
+-	server.buildMuxer()
+-	recorder := httptest.NewRecorder()
+-	imgName := "docker/python-" + server.images[0].ID
+-	path := "/images/" + imgName
+-	request, _ := http.NewRequest("DELETE", path, nil)
+-	server.ServeHTTP(recorder, request)
+-	if recorder.Code != http.StatusNoContent {
+-		t.Errorf("RemoveImage: wrong status. Want %d. Got %d.", http.StatusNoContent, recorder.Code)
+-	}
+-	if len(server.images) > 0 {
+-		t.Error("RemoveImage: did not remove the image.")
+-	}
+-	_, ok := server.imgIDs[imgName]
+-	if ok {
+-		t.Error("RemoveImage: did not remove image tag name.")
+-	}
+-}
+-
+-func TestPrepareFailure(t *testing.T) {
+-	server := DockerServer{failures: make(map[string]string)}
+-	server.buildMuxer()
+-	errorID := "my_error"
+-	server.PrepareFailure(errorID, "containers/json")
+-	recorder := httptest.NewRecorder()
+-	request, _ := http.NewRequest("GET", "/containers/json?all=1", nil)
+-	server.ServeHTTP(recorder, request)
+-	if recorder.Code != http.StatusBadRequest {
+-		t.Errorf("PrepareFailure: wrong status. Want %d. Got %d.", http.StatusBadRequest, recorder.Code)
+-	}
+-	if recorder.Body.String() != errorID+"\n" {
+-		t.Errorf("PrepareFailure: wrong message. Want %s. Got %s.", errorID, recorder.Body.String())
+-	}
+-}
+-
+-func TestRemoveFailure(t *testing.T) {
+-	server := DockerServer{failures: make(map[string]string)}
+-	server.buildMuxer()
+-	errorID := "my_error"
+-	server.PrepareFailure(errorID, "containers/json")
+-	recorder := httptest.NewRecorder()
+-	request, _ := http.NewRequest("GET", "/containers/json?all=1", nil)
+-	server.ServeHTTP(recorder, request)
+-	if recorder.Code != http.StatusBadRequest {
+-		t.Errorf("PrepareFailure: wrong status. Want %d. Got %d.", http.StatusBadRequest, recorder.Code)
+-	}
+-	server.ResetFailure(errorID)
+-	recorder = httptest.NewRecorder()
+-	request, _ = http.NewRequest("GET", "/containers/json?all=1", nil)
+-	server.ServeHTTP(recorder, request)
+-	if recorder.Code != http.StatusOK {
+-		t.Errorf("RemoveFailure: wrong status. Want %d. Got %d.", http.StatusOK, recorder.Code)
+-	}
+-}
+-
+-func TestMutateContainer(t *testing.T) {
+-	server := DockerServer{failures: make(map[string]string)}
+-	server.buildMuxer()
+-	server.containers = append(server.containers, &docker.Container{ID: "id123"})
+-	state := docker.State{Running: false, ExitCode: 1}
+-	err := server.MutateContainer("id123", state)
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-	if !reflect.DeepEqual(server.containers[0].State, state) {
+-		t.Errorf("Wrong state after mutation.\nWant %#v.\nGot %#v.",
+-			state, server.containers[0].State)
+-	}
+-}
+-
+-func TestMutateContainerNotFound(t *testing.T) {
+-	server := DockerServer{failures: make(map[string]string)}
+-	server.buildMuxer()
+-	state := docker.State{Running: false, ExitCode: 1}
+-	err := server.MutateContainer("id123", state)
+-	if err == nil {
+-		t.Error("Unexpected <nil> error")
+-	}
+-	if err.Error() != "container not found" {
+-		t.Errorf("wrong error message. Want %q. Got %q.", "container not found", err)
+-	}
+-}
+-
+-func TestBuildImageWithContentTypeTar(t *testing.T) {
+-	server := DockerServer{imgIDs: make(map[string]string)}
+-	imageName := "teste"
+-	recorder := httptest.NewRecorder()
+-	tarFile, err := os.Open("data/dockerfile.tar")
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-	defer tarFile.Close()
+-	request, _ := http.NewRequest("POST", "/build?t=teste", tarFile)
+-	request.Header.Add("Content-Type", "application/tar")
+-	server.buildImage(recorder, request)
+-	if recorder.Body.String() == "miss Dockerfile" {
+-		t.Errorf("BuildImage: miss Dockerfile")
+-		return
+-	}
+-	if _, ok := server.imgIDs[imageName]; ok == false {
+-		t.Errorf("BuildImage: image %s not builded", imageName)
+-	}
+-}
+-
+-func TestBuildImageWithRemoteDockerfile(t *testing.T) {
+-	server := DockerServer{imgIDs: make(map[string]string)}
+-	imageName := "teste"
+-	recorder := httptest.NewRecorder()
+-	request, _ := http.NewRequest("POST", "/build?t=teste&remote=http://localhost/Dockerfile", nil)
+-	server.buildImage(recorder, request)
+-	if _, ok := server.imgIDs[imageName]; ok == false {
+-		t.Errorf("BuildImage: image %s not builded", imageName)
+-	}
+-}
+-
+-func TestPing(t *testing.T) {
+-	server := DockerServer{}
+-	recorder := httptest.NewRecorder()
+-	request, _ := http.NewRequest("GET", "/_ping", nil)
+-	server.pingDocker(recorder, request)
+-	if recorder.Body.String() != "" {
+-		t.Errorf("Ping: Unexpected body: %s", recorder.Body.String())
+-	}
+-	if recorder.Code != http.StatusOK {
+-		t.Errorf("Ping: Expected code %d, got: %d", http.StatusOK, recorder.Code)
+-	}
+-}
+-
+-func TestDefaultHandler(t *testing.T) {
+-	server, err := NewServer("127.0.0.1:0", nil, nil)
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-	defer server.listener.Close()
+-	if server.mux != server.DefaultHandler() {
+-		t.Fatalf("DefaultHandler: Expected to return server.mux, got: %#v", server.DefaultHandler())
+-	}
+-}
+diff --git a/Godeps/_workspace/src/github.com/fsouza/go-dockerclient/testing/writer.go b/Godeps/_workspace/src/github.com/fsouza/go-dockerclient/testing/writer.go
+deleted file mode 100644
+index 42752b0..0000000
+--- a/Godeps/_workspace/src/github.com/fsouza/go-dockerclient/testing/writer.go
++++ /dev/null
+@@ -1,43 +0,0 @@
+-// Copyright 2014 go-dockerclient authors. All rights reserved.
+-// Use of this source code is governed by a BSD-style
+-// license that can be found in the LICENSE file.
+-
+-package testing
+-
+-import (
+-	"encoding/binary"
+-	"errors"
+-	"io"
+-)
+-
+-type stdType [8]byte
+-
+-var (
+-	stdin  stdType = stdType{0: 0}
+-	stdout stdType = stdType{0: 1}
+-	stderr stdType = stdType{0: 2}
+-)
+-
+-type stdWriter struct {
+-	io.Writer
+-	prefix  stdType
+-	sizeBuf []byte
+-}
+-
+-func (w *stdWriter) Write(buf []byte) (n int, err error) {
+-	if w == nil || w.Writer == nil {
+-		return 0, errors.New("Writer not instanciated")
+-	}
+-	binary.BigEndian.PutUint32(w.prefix[4:], uint32(len(buf)))
+-	buf = append(w.prefix[:], buf...)
+-
+-	n, err = w.Writer.Write(buf)
+-	return n - 8, err
+-}
+-
+-func newStdWriter(w io.Writer, t stdType) *stdWriter {
+-	if len(t) != 8 {
+-		return nil
+-	}
+-	return &stdWriter{Writer: w, prefix: t, sizeBuf: make([]byte, 4)}
+-}
+diff --git a/Godeps/_workspace/src/github.com/golang/glog/LICENSE b/Godeps/_workspace/src/github.com/golang/glog/LICENSE
+deleted file mode 100644
+index 37ec93a..0000000
+--- a/Godeps/_workspace/src/github.com/golang/glog/LICENSE
++++ /dev/null
+@@ -1,191 +0,0 @@
+-Apache License
+-Version 2.0, January 2004
+-http://www.apache.org/licenses/
+-
+-TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+-
+-1. Definitions.
+-
+-"License" shall mean the terms and conditions for use, reproduction, and
+-distribution as defined by Sections 1 through 9 of this document.
+-
+-"Licensor" shall mean the copyright owner or entity authorized by the copyright
+-owner that is granting the License.
+-
+-"Legal Entity" shall mean the union of the acting entity and all other entities
+-that control, are controlled by, or are under common control with that entity.
+-For the purposes of this definition, "control" means (i) the power, direct or
+-indirect, to cause the direction or management of such entity, whether by
+-contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the
+-outstanding shares, or (iii) beneficial ownership of such entity.
+-
+-"You" (or "Your") shall mean an individual or Legal Entity exercising
+-permissions granted by this License.
+-
+-"Source" form shall mean the preferred form for making modifications, including
+-but not limited to software source code, documentation source, and configuration
+-files.
+-
+-"Object" form shall mean any form resulting from mechanical transformation or
+-translation of a Source form, including but not limited to compiled object code,
+-generated documentation, and conversions to other media types.
+-
+-"Work" shall mean the work of authorship, whether in Source or Object form, made
+-available under the License, as indicated by a copyright notice that is included
+-in or attached to the work (an example is provided in the Appendix below).
+-
+-"Derivative Works" shall mean any work, whether in Source or Object form, that
+-is based on (or derived from) the Work and for which the editorial revisions,
+-annotations, elaborations, or other modifications represent, as a whole, an
+-original work of authorship. For the purposes of this License, Derivative Works
+-shall not include works that remain separable from, or merely link (or bind by
+-name) to the interfaces of, the Work and Derivative Works thereof.
+-
+-"Contribution" shall mean any work of authorship, including the original version
+-of the Work and any modifications or additions to that Work or Derivative Works
+-thereof, that is intentionally submitted to Licensor for inclusion in the Work
+-by the copyright owner or by an individual or Legal Entity authorized to submit
+-on behalf of the copyright owner. For the purposes of this definition,
+-"submitted" means any form of electronic, verbal, or written communication sent
+-to the Licensor or its representatives, including but not limited to
+-communication on electronic mailing lists, source code control systems, and
+-issue tracking systems that are managed by, or on behalf of, the Licensor for
+-the purpose of discussing and improving the Work, but excluding communication
+-that is conspicuously marked or otherwise designated in writing by the copyright
+-owner as "Not a Contribution."
+-
+-"Contributor" shall mean Licensor and any individual or Legal Entity on behalf
+-of whom a Contribution has been received by Licensor and subsequently
+-incorporated within the Work.
+-
+-2. Grant of Copyright License.
+-
+-Subject to the terms and conditions of this License, each Contributor hereby
+-grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free,
+-irrevocable copyright license to reproduce, prepare Derivative Works of,
+-publicly display, publicly perform, sublicense, and distribute the Work and such
+-Derivative Works in Source or Object form.
+-
+-3. Grant of Patent License.
+-
+-Subject to the terms and conditions of this License, each Contributor hereby
+-grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free,
+-irrevocable (except as stated in this section) patent license to make, have
+-made, use, offer to sell, sell, import, and otherwise transfer the Work, where
+-such license applies only to those patent claims licensable by such Contributor
+-that are necessarily infringed by their Contribution(s) alone or by combination
+-of their Contribution(s) with the Work to which such Contribution(s) was
+-submitted. If You institute patent litigation against any entity (including a
+-cross-claim or counterclaim in a lawsuit) alleging that the Work or a
+-Contribution incorporated within the Work constitutes direct or contributory
+-patent infringement, then any patent licenses granted to You under this License
+-for that Work shall terminate as of the date such litigation is filed.
+-
+-4. Redistribution.
+-
+-You may reproduce and distribute copies of the Work or Derivative Works thereof
+-in any medium, with or without modifications, and in Source or Object form,
+-provided that You meet the following conditions:
+-
+-You must give any other recipients of the Work or Derivative Works a copy of
+-this License; and
+-You must cause any modified files to carry prominent notices stating that You
+-changed the files; and
+-You must retain, in the Source form of any Derivative Works that You distribute,
+-all copyright, patent, trademark, and attribution notices from the Source form
+-of the Work, excluding those notices that do not pertain to any part of the
+-Derivative Works; and
+-If the Work includes a "NOTICE" text file as part of its distribution, then any
+-Derivative Works that You distribute must include a readable copy of the
+-attribution notices contained within such NOTICE file, excluding those notices
+-that do not pertain to any part of the Derivative Works, in at least one of the
+-following places: within a NOTICE text file distributed as part of the
+-Derivative Works; within the Source form or documentation, if provided along
+-with the Derivative Works; or, within a display generated by the Derivative
+-Works, if and wherever such third-party notices normally appear. The contents of
+-the NOTICE file are for informational purposes only and do not modify the
+-License. You may add Your own attribution notices within Derivative Works that
+-You distribute, alongside or as an addendum to the NOTICE text from the Work,
+-provided that such additional attribution notices cannot be construed as
+-modifying the License.
+-You may add Your own copyright statement to Your modifications and may provide
+-additional or different license terms and conditions for use, reproduction, or
+-distribution of Your modifications, or for any such Derivative Works as a whole,
+-provided Your use, reproduction, and distribution of the Work otherwise complies
+-with the conditions stated in this License.
+-
+-5. Submission of Contributions.
+-
+-Unless You explicitly state otherwise, any Contribution intentionally submitted
+-for inclusion in the Work by You to the Licensor shall be under the terms and
+-conditions of this License, without any additional terms or conditions.
+-Notwithstanding the above, nothing herein shall supersede or modify the terms of
+-any separate license agreement you may have executed with Licensor regarding
+-such Contributions.
+-
+-6. Trademarks.
+-
+-This License does not grant permission to use the trade names, trademarks,
+-service marks, or product names of the Licensor, except as required for
+-reasonable and customary use in describing the origin of the Work and
+-reproducing the content of the NOTICE file.
+-
+-7. Disclaimer of Warranty.
+-
+-Unless required by applicable law or agreed to in writing, Licensor provides the
+-Work (and each Contributor provides its Contributions) on an "AS IS" BASIS,
+-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied,
+-including, without limitation, any warranties or conditions of TITLE,
+-NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are
+-solely responsible for determining the appropriateness of using or
+-redistributing the Work and assume any risks associated with Your exercise of
+-permissions under this License.
+-
+-8. Limitation of Liability.
+-
+-In no event and under no legal theory, whether in tort (including negligence),
+-contract, or otherwise, unless required by applicable law (such as deliberate
+-and grossly negligent acts) or agreed to in writing, shall any Contributor be
+-liable to You for damages, including any direct, indirect, special, incidental,
+-or consequential damages of any character arising as a result of this License or
+-out of the use or inability to use the Work (including but not limited to
+-damages for loss of goodwill, work stoppage, computer failure or malfunction, or
+-any and all other commercial damages or losses), even if such Contributor has
+-been advised of the possibility of such damages.
+-
+-9. Accepting Warranty or Additional Liability.
+-
+-While redistributing the Work or Derivative Works thereof, You may choose to
+-offer, and charge a fee for, acceptance of support, warranty, indemnity, or
+-other liability obligations and/or rights consistent with this License. However,
+-in accepting such obligations, You may act only on Your own behalf and on Your
+-sole responsibility, not on behalf of any other Contributor, and only if You
+-agree to indemnify, defend, and hold each Contributor harmless for any liability
+-incurred by, or claims asserted against, such Contributor by reason of your
+-accepting any such warranty or additional liability.
+-
+-END OF TERMS AND CONDITIONS
+-
+-APPENDIX: How to apply the Apache License to your work
+-
+-To apply the Apache License to your work, attach the following boilerplate
+-notice, with the fields enclosed by brackets "[]" replaced with your own
+-identifying information. (Don't include the brackets!) The text should be
+-enclosed in the appropriate comment syntax for the file format. We also
+-recommend that a file or class name and description of purpose be included on
+-the same "printed page" as the copyright notice for easier identification within
+-third-party archives.
+-
+-   Copyright [yyyy] [name of copyright owner]
+-
+-   Licensed under the Apache License, Version 2.0 (the "License");
+-   you may not use this file except in compliance with the License.
+-   You may obtain a copy of the License at
+-
+-     http://www.apache.org/licenses/LICENSE-2.0
+-
+-   Unless required by applicable law or agreed to in writing, software
+-   distributed under the License is distributed on an "AS IS" BASIS,
+-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+-   See the License for the specific language governing permissions and
+-   limitations under the License.
+diff --git a/Godeps/_workspace/src/github.com/golang/glog/README b/Godeps/_workspace/src/github.com/golang/glog/README
+deleted file mode 100644
+index 5f9c114..0000000
+--- a/Godeps/_workspace/src/github.com/golang/glog/README
++++ /dev/null
+@@ -1,44 +0,0 @@
+-glog
+-====
+-
+-Leveled execution logs for Go.
+-
+-This is an efficient pure Go implementation of leveled logs in the
+-manner of the open source C++ package
+-	http://code.google.com/p/google-glog
+-
+-By binding methods to booleans it is possible to use the log package
+-without paying the expense of evaluating the arguments to the log.
+-Through the -vmodule flag, the package also provides fine-grained
+-control over logging at the file level.
+-
+-The comment from glog.go introduces the ideas:
+-
+-	Package glog implements logging analogous to the Google-internal
+-	C++ INFO/ERROR/V setup.  It provides functions Info, Warning,
+-	Error, Fatal, plus formatting variants such as Infof. It
+-	also provides V-style logging controlled by the -v and
+-	-vmodule=file=2 flags.
+-	
+-	Basic examples:
+-	
+-		glog.Info("Prepare to repel boarders")
+-	
+-		glog.Fatalf("Initialization failed: %s", err)
+-	
+-	See the documentation for the V function for an explanation
+-	of these examples:
+-	
+-		if glog.V(2) {
+-			glog.Info("Starting transaction...")
+-		}
+-	
+-		glog.V(2).Infoln("Processed", nItems, "elements")
+-
+-
+-The repository contains an open source version of the log package
+-used inside Google. The master copy of the source lives inside
+-Google, not here. The code in this repo is for export only and is not itself
+-under development. Feature requests will be ignored.
+-
+-Send bug reports to golang-nuts at googlegroups.com.
+diff --git a/Godeps/_workspace/src/github.com/golang/glog/glog.go b/Godeps/_workspace/src/github.com/golang/glog/glog.go
+deleted file mode 100644
+index d5e1ac2..0000000
+--- a/Godeps/_workspace/src/github.com/golang/glog/glog.go
++++ /dev/null
+@@ -1,1034 +0,0 @@
+-// Go support for leveled logs, analogous to https://code.google.com/p/google-glog/
+-//
+-// Copyright 2013 Google Inc. All Rights Reserved.
+-//
+-// Licensed under the Apache License, Version 2.0 (the "License");
+-// you may not use this file except in compliance with the License.
+-// You may obtain a copy of the License at
+-//
+-//     http://www.apache.org/licenses/LICENSE-2.0
+-//
+-// Unless required by applicable law or agreed to in writing, software
+-// distributed under the License is distributed on an "AS IS" BASIS,
+-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+-// See the License for the specific language governing permissions and
+-// limitations under the License.
+-
+-// Package glog implements logging analogous to the Google-internal C++ INFO/ERROR/V setup.
+-// It provides functions Info, Warning, Error, Fatal, plus formatting variants such as
+-// Infof. It also provides V-style logging controlled by the -v and -vmodule=file=2 flags.
+-//
+-// Basic examples:
+-//
+-//	glog.Info("Prepare to repel boarders")
+-//
+-//	glog.Fatalf("Initialization failed: %s", err)
+-//
+-// See the documentation for the V function for an explanation of these examples:
+-//
+-//	if glog.V(2) {
+-//		glog.Info("Starting transaction...")
+-//	}
+-//
+-//	glog.V(2).Infoln("Processed", nItems, "elements")
+-//
+-// Log output is buffered and written periodically using Flush. Programs
+-// should call Flush before exiting to guarantee all log output is written.
+-//
+-// By default, all log statements write to files in a temporary directory.
+-// This package provides several flags that modify this behavior.
+-// As a result, flag.Parse must be called before any logging is done.
+-//
+-//	-logtostderr=false
+-//		Logs are written to standard error instead of to files.
+-//	-alsologtostderr=false
+-//		Logs are written to standard error as well as to files.
+-//	-stderrthreshold=ERROR
+-//		Log events at or above this severity are logged to standard
+-//		error as well as to files.
+-//	-log_dir=""
+-//		Log files will be written to this directory instead of the
+-//		default temporary directory.
+-//
+-//	Other flags provide aids to debugging.
+-//
+-//	-log_backtrace_at=""
+-//		When set to a file and line number holding a logging statement,
+-//		such as
+-//			-log_backtrace_at=gopherflakes.go:234
+-//		a stack trace will be written to the Info log whenever execution
+-//		hits that statement. (Unlike with -vmodule, the ".go" must be
+-//		present.)
+-//	-v=0
+-//		Enable V-leveled logging at the specified level.
+-//	-vmodule=""
+-//		The syntax of the argument is a comma-separated list of pattern=N,
+-//		where pattern is a literal file name (minus the ".go" suffix) or
+-//		"glob" pattern and N is a V level. For instance,
+-//			-vmodule=gopher*=3
+-//		sets the V level to 3 in all Go files whose names begin "gopher".
+-//
+-package glog
+-
+-import (
+-	"bufio"
+-	"bytes"
+-	"errors"
+-	"flag"
+-	"fmt"
+-	"io"
+-	"os"
+-	"path/filepath"
+-	"runtime"
+-	"strconv"
+-	"strings"
+-	"sync"
+-	"sync/atomic"
+-	"time"
+-)
+-
+-// severity identifies the sort of log: info, warning etc. It also implements
+-// the flag.Value interface. The -stderrthreshold flag is of type severity and
+-// should be modified only through the flag.Value interface. The values match
+-// the corresponding constants in C++.
+-type severity int32 // sync/atomic int32
+-
+-const (
+-	infoLog severity = iota
+-	warningLog
+-	errorLog
+-	fatalLog
+-	numSeverity = 4
+-)
+-
+-const severityChar = "IWEF"
+-
+-var severityName = []string{
+-	infoLog:    "INFO",
+-	warningLog: "WARNING",
+-	errorLog:   "ERROR",
+-	fatalLog:   "FATAL",
+-}
+-
+-// get returns the value of the severity.
+-func (s *severity) get() severity {
+-	return severity(atomic.LoadInt32((*int32)(s)))
+-}
+-
+-// set sets the value of the severity.
+-func (s *severity) set(val severity) {
+-	atomic.StoreInt32((*int32)(s), int32(val))
+-}
+-
+-// String is part of the flag.Value interface.
+-func (s *severity) String() string {
+-	return strconv.FormatInt(int64(*s), 10)
+-}
+-
+-// Get is part of the flag.Value interface.
+-func (s *severity) Get() interface{} {
+-	return *s
+-}
+-
+-// Set is part of the flag.Value interface.
+-func (s *severity) Set(value string) error {
+-	var threshold severity
+-	// Is it a known name?
+-	if v, ok := severityByName(value); ok {
+-		threshold = v
+-	} else {
+-		v, err := strconv.Atoi(value)
+-		if err != nil {
+-			return err
+-		}
+-		threshold = severity(v)
+-	}
+-	logging.stderrThreshold.set(threshold)
+-	return nil
+-}
+-
+-func severityByName(s string) (severity, bool) {
+-	s = strings.ToUpper(s)
+-	for i, name := range severityName {
+-		if name == s {
+-			return severity(i), true
+-		}
+-	}
+-	return 0, false
+-}
+-
+-// OutputStats tracks the number of output lines and bytes written.
+-type OutputStats struct {
+-	lines int64
+-	bytes int64
+-}
+-
+-// Lines returns the number of lines written.
+-func (s *OutputStats) Lines() int64 {
+-	return atomic.LoadInt64(&s.lines)
+-}
+-
+-// Bytes returns the number of bytes written.
+-func (s *OutputStats) Bytes() int64 {
+-	return atomic.LoadInt64(&s.bytes)
+-}
+-
+-// Stats tracks the number of lines of output and number of bytes
+-// per severity level. Values must be read with atomic.LoadInt64.
+-var Stats struct {
+-	Info, Warning, Error OutputStats
+-}
+-
+-var severityStats = [numSeverity]*OutputStats{
+-	infoLog:    &Stats.Info,
+-	warningLog: &Stats.Warning,
+-	errorLog:   &Stats.Error,
+-}
+-
+-// Level is exported because it appears in the arguments to V and is
+-// the type of the v flag, which can be set programmatically.
+-// It's a distinct type because we want to discriminate it from logType.
+-// Variables of type level are only changed under logging.mu.
+-// The -v flag is read only with atomic ops, so the state of the logging
+-// module is consistent.
+-
+-// Level is treated as a sync/atomic int32.
+-
+-// Level specifies a level of verbosity for V logs. *Level implements
+-// flag.Value; the -v flag is of type Level and should be modified
+-// only through the flag.Value interface.
+-type Level int32
+-
+-// get returns the value of the Level.
+-func (l *Level) get() Level {
+-	return Level(atomic.LoadInt32((*int32)(l)))
+-}
+-
+-// set sets the value of the Level.
+-func (l *Level) set(val Level) {
+-	atomic.StoreInt32((*int32)(l), int32(val))
+-}
+-
+-// String is part of the flag.Value interface.
+-func (l *Level) String() string {
+-	return strconv.FormatInt(int64(*l), 10)
+-}
+-
+-// Get is part of the flag.Value interface.
+-func (l *Level) Get() interface{} {
+-	return *l
+-}
+-
+-// Set is part of the flag.Value interface.
+-func (l *Level) Set(value string) error {
+-	v, err := strconv.Atoi(value)
+-	if err != nil {
+-		return err
+-	}
+-	logging.mu.Lock()
+-	defer logging.mu.Unlock()
+-	logging.setVState(Level(v), logging.vmodule.filter, false)
+-	return nil
+-}
+-
+-// moduleSpec represents the setting of the -vmodule flag.
+-type moduleSpec struct {
+-	filter []modulePat
+-}
+-
+-// modulePat contains a filter for the -vmodule flag.
+-// It holds a verbosity level and a file pattern to match.
+-type modulePat struct {
+-	pattern string
+-	literal bool // The pattern is a literal string
+-	level   Level
+-}
+-
+-// match reports whether the file matches the pattern. It uses a string
+-// comparison if the pattern contains no metacharacters.
+-func (m *modulePat) match(file string) bool {
+-	if m.literal {
+-		return file == m.pattern
+-	}
+-	match, _ := filepath.Match(m.pattern, file)
+-	return match
+-}
+-
+-func (m *moduleSpec) String() string {
+-	// Lock because the type is not atomic. TODO: clean this up.
+-	logging.mu.Lock()
+-	defer logging.mu.Unlock()
+-	var b bytes.Buffer
+-	for i, f := range m.filter {
+-		if i > 0 {
+-			b.WriteRune(',')
+-		}
+-		fmt.Fprintf(&b, "%s=%d", f.pattern, f.level)
+-	}
+-	return b.String()
+-}
+-
+-// Get is part of the (Go 1.2) flag.Getter interface. It always returns nil for this flag type since the
+-// struct is not exported.
+-func (m *moduleSpec) Get() interface{} {
+-	return nil
+-}
+-
+-var errVmoduleSyntax = errors.New("syntax error: expect comma-separated list of filename=N")
+-
+-// Syntax: -vmodule=recordio=2,file=1,gfs*=3
+-func (m *moduleSpec) Set(value string) error {
+-	var filter []modulePat
+-	for _, pat := range strings.Split(value, ",") {
+-		if len(pat) == 0 {
+-			// Empty strings such as from a trailing comma can be ignored.
+-			continue
+-		}
+-		patLev := strings.Split(pat, "=")
+-		if len(patLev) != 2 || len(patLev[0]) == 0 || len(patLev[1]) == 0 {
+-			return errVmoduleSyntax
+-		}
+-		pattern := patLev[0]
+-		v, err := strconv.Atoi(patLev[1])
+-		if err != nil {
+-			return errors.New("syntax error: expect comma-separated list of filename=N")
+-		}
+-		if v < 0 {
+-			return errors.New("negative value for vmodule level")
+-		}
+-		if v == 0 {
+-			continue // Ignore. It's harmless but no point in paying the overhead.
+-		}
+-		// TODO: check syntax of filter?
+-		filter = append(filter, modulePat{pattern, isLiteral(pattern), Level(v)})
+-	}
+-	logging.mu.Lock()
+-	defer logging.mu.Unlock()
+-	logging.setVState(logging.verbosity, filter, true)
+-	return nil
+-}
+-
+-// isLiteral reports whether the pattern is a literal string, that is, has no metacharacters
+-// that require filepath.Match to be called to match the pattern.
+-func isLiteral(pattern string) bool {
+-	return !strings.ContainsAny(pattern, `*?[]\`)
+-}
+-
+-// traceLocation represents the setting of the -log_backtrace_at flag.
+-type traceLocation struct {
+-	file string
+-	line int
+-}
+-
+-// isSet reports whether the trace location has been specified.
+-// logging.mu is held.
+-func (t *traceLocation) isSet() bool {
+-	return t.line > 0
+-}
+-
+-// match reports whether the specified file and line matches the trace location.
+-// The argument file name is the full path, not the basename specified in the flag.
+-// logging.mu is held.
+-func (t *traceLocation) match(file string, line int) bool {
+-	if t.line != line {
+-		return false
+-	}
+-	if i := strings.LastIndex(file, "/"); i >= 0 {
+-		file = file[i+1:]
+-	}
+-	return t.file == file
+-}
+-
+-func (t *traceLocation) String() string {
+-	// Lock because the type is not atomic. TODO: clean this up.
+-	logging.mu.Lock()
+-	defer logging.mu.Unlock()
+-	return fmt.Sprintf("%s:%d", t.file, t.line)
+-}
+-
+-// Get is part of the (Go 1.2) flag.Getter interface. It always returns nil for this flag type since the
+-// struct is not exported.
+-func (t *traceLocation) Get() interface{} {
+-	return nil
+-}
+-
+-var errTraceSyntax = errors.New("syntax error: expect file.go:234")
+-
+-// Syntax: -log_backtrace_at=gopherflakes.go:234
+-// Note that unlike vmodule the file extension is included here.
+-func (t *traceLocation) Set(value string) error {
+-	if value == "" {
+-		// Unset.
+-		t.line = 0
+-		t.file = ""
+-	}
+-	fields := strings.Split(value, ":")
+-	if len(fields) != 2 {
+-		return errTraceSyntax
+-	}
+-	file, line := fields[0], fields[1]
+-	if !strings.Contains(file, ".") {
+-		return errTraceSyntax
+-	}
+-	v, err := strconv.Atoi(line)
+-	if err != nil {
+-		return errTraceSyntax
+-	}
+-	if v <= 0 {
+-		return errors.New("negative or zero value for level")
+-	}
+-	logging.mu.Lock()
+-	defer logging.mu.Unlock()
+-	t.line = v
+-	t.file = file
+-	return nil
+-}
+-
+-// flushSyncWriter is the interface satisfied by logging destinations.
+-type flushSyncWriter interface {
+-	Flush() error
+-	Sync() error
+-	io.Writer
+-}
+-
+-func init() {
+-	flag.BoolVar(&logging.toStderr, "logtostderr", false, "log to standard error instead of files")
+-	flag.BoolVar(&logging.alsoToStderr, "alsologtostderr", false, "log to standard error as well as files")
+-	flag.Var(&logging.verbosity, "v", "log level for V logs")
+-	flag.Var(&logging.stderrThreshold, "stderrthreshold", "logs at or above this threshold go to stderr")
+-	flag.Var(&logging.vmodule, "vmodule", "comma-separated list of pattern=N settings for file-filtered logging")
+-	flag.Var(&logging.traceLocation, "log_backtrace_at", "when logging hits line file:N, emit a stack trace")
+-
+-	// Default stderrThreshold is ERROR.
+-	logging.stderrThreshold = errorLog
+-
+-	logging.setVState(0, nil, false)
+-	go logging.flushDaemon()
+-}
+-
+-// Flush flushes all pending log I/O.
+-func Flush() {
+-	logging.lockAndFlushAll()
+-}
+-
+-// loggingT collects all the global state of the logging setup.
+-type loggingT struct {
+-	// Boolean flags. Not handled atomically because the flag.Value interface
+-	// does not let us avoid the =true, and that shorthand is necessary for
+-	// compatibility. TODO: does this matter enough to fix? Seems unlikely.
+-	toStderr     bool // The -logtostderr flag.
+-	alsoToStderr bool // The -alsologtostderr flag.
+-
+-	// Level flag. Handled atomically.
+-	stderrThreshold severity // The -stderrthreshold flag.
+-
+-	// freeList is a list of byte buffers, maintained under freeListMu.
+-	freeList *buffer
+-	// freeListMu maintains the free list. It is separate from the main mutex
+-	// so buffers can be grabbed and printed to without holding the main lock,
+-	// for better parallelization.
+-	freeListMu sync.Mutex
+-
+-	// mu protects the remaining elements of this structure and is
+-	// used to synchronize logging.
+-	mu sync.Mutex
+-	// file holds writer for each of the log types.
+-	file [numSeverity]flushSyncWriter
+-	// pcs is used in V to avoid an allocation when computing the caller's PC.
+-	pcs [1]uintptr
+-	// vmap is a cache of the V Level for each V() call site, identified by PC.
+-	// It is wiped whenever the vmodule flag changes state.
+-	vmap map[uintptr]Level
+-	// filterLength stores the length of the vmodule filter chain. If greater
+-	// than zero, it means vmodule is enabled. It may be read safely
+-	// using sync.LoadInt32, but is only modified under mu.
+-	filterLength int32
+-	// traceLocation is the state of the -log_backtrace_at flag.
+-	traceLocation traceLocation
+-	// These flags are modified only under lock, although verbosity may be fetched
+-	// safely using atomic.LoadInt32.
+-	vmodule   moduleSpec // The state of the -vmodule flag.
+-	verbosity Level      // V logging level, the value of the -v flag.
+-}
+-
+-// buffer holds a byte Buffer for reuse. The zero value is ready for use.
+-type buffer struct {
+-	bytes.Buffer
+-	tmp  [64]byte // temporary byte array for creating headers.
+-	next *buffer
+-}
+-
+-var logging loggingT
+-
+-// setVState sets a consistent state for V logging.
+-// l.mu is held.
+-func (l *loggingT) setVState(verbosity Level, filter []modulePat, setFilter bool) {
+-	// Turn verbosity off so V will not fire while we are in transition.
+-	logging.verbosity.set(0)
+-	// Ditto for filter length.
+-	logging.filterLength = 0
+-
+-	// Set the new filters and wipe the pc->Level map if the filter has changed.
+-	if setFilter {
+-		logging.vmodule.filter = filter
+-		logging.vmap = make(map[uintptr]Level)
+-	}
+-
+-	// Things are consistent now, so enable filtering and verbosity.
+-	// They are enabled in order opposite to that in V.
+-	atomic.StoreInt32(&logging.filterLength, int32(len(filter)))
+-	logging.verbosity.set(verbosity)
+-}
+-
+-// getBuffer returns a new, ready-to-use buffer.
+-func (l *loggingT) getBuffer() *buffer {
+-	l.freeListMu.Lock()
+-	b := l.freeList
+-	if b != nil {
+-		l.freeList = b.next
+-	}
+-	l.freeListMu.Unlock()
+-	if b == nil {
+-		b = new(buffer)
+-	} else {
+-		b.next = nil
+-		b.Reset()
+-	}
+-	return b
+-}
+-
+-// putBuffer returns a buffer to the free list.
+-func (l *loggingT) putBuffer(b *buffer) {
+-	if b.Len() >= 256 {
+-		// Let big buffers die a natural death.
+-		return
+-	}
+-	l.freeListMu.Lock()
+-	b.next = l.freeList
+-	l.freeList = b
+-	l.freeListMu.Unlock()
+-}
+-
+-var timeNow = time.Now // Stubbed out for testing.
+-
+-/*
+-header formats a log header as defined by the C++ implementation.
+-It returns a buffer containing the formatted header.
+-
+-Log lines have this form:
+-	Lmmdd hh:mm:ss.uuuuuu threadid file:line] msg...
+-where the fields are defined as follows:
+-	L                A single character, representing the log level (eg 'I' for INFO)
+-	mm               The month (zero padded; ie May is '05')
+-	dd               The day (zero padded)
+-	hh:mm:ss.uuuuuu  Time in hours, minutes and fractional seconds
+-	threadid         The space-padded thread ID as returned by GetTID()
+-	file             The file name
+-	line             The line number
+-	msg              The user-supplied message
+-*/
+-func (l *loggingT) header(s severity) *buffer {
+-	// Lmmdd hh:mm:ss.uuuuuu threadid file:line]
+-	now := timeNow()
+-	_, file, line, ok := runtime.Caller(3) // It's always the same number of frames to the user's call.
+-	if !ok {
+-		file = "???"
+-		line = 1
+-	} else {
+-		slash := strings.LastIndex(file, "/")
+-		if slash >= 0 {
+-			file = file[slash+1:]
+-		}
+-	}
+-	if line < 0 {
+-		line = 0 // not a real line number, but acceptable to someDigits
+-	}
+-	if s > fatalLog {
+-		s = infoLog // for safety.
+-	}
+-	buf := l.getBuffer()
+-
+-	// Avoid Fprintf, for speed. The format is so simple that we can do it quickly by hand.
+-	// It's worth about 3X. Fprintf is hard.
+-	_, month, day := now.Date()
+-	hour, minute, second := now.Clock()
+-	buf.tmp[0] = severityChar[s]
+-	buf.twoDigits(1, int(month))
+-	buf.twoDigits(3, day)
+-	buf.tmp[5] = ' '
+-	buf.twoDigits(6, hour)
+-	buf.tmp[8] = ':'
+-	buf.twoDigits(9, minute)
+-	buf.tmp[11] = ':'
+-	buf.twoDigits(12, second)
+-	buf.tmp[14] = '.'
+-	buf.nDigits(6, 15, now.Nanosecond()/1000)
+-	buf.tmp[21] = ' '
+-	buf.nDigits(5, 22, pid) // TODO: should be TID
+-	buf.tmp[27] = ' '
+-	buf.Write(buf.tmp[:28])
+-	buf.WriteString(file)
+-	buf.tmp[0] = ':'
+-	n := buf.someDigits(1, line)
+-	buf.tmp[n+1] = ']'
+-	buf.tmp[n+2] = ' '
+-	buf.Write(buf.tmp[:n+3])
+-	return buf
+-}
+-
+-// Some custom tiny helper functions to print the log header efficiently.
+-
+-const digits = "0123456789"
+-
+-// twoDigits formats a zero-prefixed two-digit integer at buf.tmp[i].
+-func (buf *buffer) twoDigits(i, d int) {
+-	buf.tmp[i+1] = digits[d%10]
+-	d /= 10
+-	buf.tmp[i] = digits[d%10]
+-}
+-
+-// nDigits formats a zero-prefixed n-digit integer at buf.tmp[i].
+-func (buf *buffer) nDigits(n, i, d int) {
+-	for j := n - 1; j >= 0; j-- {
+-		buf.tmp[i+j] = digits[d%10]
+-		d /= 10
+-	}
+-}
+-
+-// someDigits formats a zero-prefixed variable-width integer at buf.tmp[i].
+-func (buf *buffer) someDigits(i, d int) int {
+-	// Print into the top, then copy down. We know there's space for at least
+-	// a 10-digit number.
+-	j := len(buf.tmp)
+-	for {
+-		j--
+-		buf.tmp[j] = digits[d%10]
+-		d /= 10
+-		if d == 0 {
+-			break
+-		}
+-	}
+-	return copy(buf.tmp[i:], buf.tmp[j:])
+-}
+-
+-func (l *loggingT) println(s severity, args ...interface{}) {
+-	buf := l.header(s)
+-	fmt.Fprintln(buf, args...)
+-	l.output(s, buf)
+-}
+-
+-func (l *loggingT) print(s severity, args ...interface{}) {
+-	buf := l.header(s)
+-	fmt.Fprint(buf, args...)
+-	if buf.Bytes()[buf.Len()-1] != '\n' {
+-		buf.WriteByte('\n')
+-	}
+-	l.output(s, buf)
+-}
+-
+-func (l *loggingT) printf(s severity, format string, args ...interface{}) {
+-	buf := l.header(s)
+-	fmt.Fprintf(buf, format, args...)
+-	if buf.Bytes()[buf.Len()-1] != '\n' {
+-		buf.WriteByte('\n')
+-	}
+-	l.output(s, buf)
+-}
+-
+-// output writes the data to the log files and releases the buffer.
+-func (l *loggingT) output(s severity, buf *buffer) {
+-	l.mu.Lock()
+-	if l.traceLocation.isSet() {
+-		_, file, line, ok := runtime.Caller(3) // It's always the same number of frames to the user's call (same as header).
+-		if ok && l.traceLocation.match(file, line) {
+-			buf.Write(stacks(false))
+-		}
+-	}
+-	data := buf.Bytes()
+-	if l.toStderr {
+-		os.Stderr.Write(data)
+-	} else {
+-		if l.alsoToStderr || s >= l.stderrThreshold.get() {
+-			os.Stderr.Write(data)
+-		}
+-		if l.file[s] == nil {
+-			if err := l.createFiles(s); err != nil {
+-				os.Stderr.Write(data) // Make sure the message appears somewhere.
+-				l.exit(err)
+-			}
+-		}
+-		switch s {
+-		case fatalLog:
+-			l.file[fatalLog].Write(data)
+-			fallthrough
+-		case errorLog:
+-			l.file[errorLog].Write(data)
+-			fallthrough
+-		case warningLog:
+-			l.file[warningLog].Write(data)
+-			fallthrough
+-		case infoLog:
+-			l.file[infoLog].Write(data)
+-		}
+-	}
+-	if s == fatalLog {
+-		// Make sure we see the trace for the current goroutine on standard error.
+-		if !l.toStderr {
+-			os.Stderr.Write(stacks(false))
+-		}
+-		// Write the stack trace for all goroutines to the files.
+-		trace := stacks(true)
+-		logExitFunc = func(error) {} // If we get a write error, we'll still exit below.
+-		for log := fatalLog; log >= infoLog; log-- {
+-			if f := l.file[log]; f != nil { // Can be nil if -logtostderr is set.
+-				f.Write(trace)
+-			}
+-		}
+-		l.mu.Unlock()
+-		timeoutFlush(10 * time.Second)
+-		os.Exit(255) // C++ uses -1, which is silly because it's anded with 255 anyway.
+-	}
+-	l.putBuffer(buf)
+-	l.mu.Unlock()
+-	if stats := severityStats[s]; stats != nil {
+-		atomic.AddInt64(&stats.lines, 1)
+-		atomic.AddInt64(&stats.bytes, int64(len(data)))
+-	}
+-}
+-
+-// timeoutFlush calls Flush and returns when it completes or after timeout
+-// elapses, whichever happens first.  This is needed because the hooks invoked
+-// by Flush may deadlock when glog.Fatal is called from a hook that holds
+-// a lock.
+-func timeoutFlush(timeout time.Duration) {
+-	done := make(chan bool, 1)
+-	go func() {
+-		Flush() // calls logging.lockAndFlushAll()
+-		done <- true
+-	}()
+-	select {
+-	case <-done:
+-	case <-time.After(timeout):
+-		fmt.Fprintln(os.Stderr, "glog: Flush took longer than", timeout)
+-	}
+-}
+-
+-// stacks is a wrapper for runtime.Stack that attempts to recover the data for all goroutines.
+-func stacks(all bool) []byte {
+-	// We don't know how big the traces are, so grow a few times if they don't fit. Start large, though.
+-	n := 10000
+-	if all {
+-		n = 100000
+-	}
+-	var trace []byte
+-	for i := 0; i < 5; i++ {
+-		trace = make([]byte, n)
+-		nbytes := runtime.Stack(trace, all)
+-		if nbytes < len(trace) {
+-			return trace[:nbytes]
+-		}
+-		n *= 2
+-	}
+-	return trace
+-}
+-
+-// logExitFunc provides a simple mechanism to override the default behavior
+-// of exiting on error. Used in testing and to guarantee we reach a required exit
+-// for fatal logs. Instead, exit could be a function rather than a method but that
+-// would make its use clumsier.
+-var logExitFunc func(error)
+-
+-// exit is called if there is trouble creating or writing log files.
+-// It flushes the logs and exits the program; there's no point in hanging around.
+-// l.mu is held.
+-func (l *loggingT) exit(err error) {
+-	fmt.Fprintf(os.Stderr, "log: exiting because of error: %s\n", err)
+-	// If logExitFunc is set, we do that instead of exiting.
+-	if logExitFunc != nil {
+-		logExitFunc(err)
+-		return
+-	}
+-	l.flushAll()
+-	os.Exit(2)
+-}
+-
+-// syncBuffer joins a bufio.Writer to its underlying file, providing access to the
+-// file's Sync method and providing a wrapper for the Write method that provides log
+-// file rotation. There are conflicting methods, so the file cannot be embedded.
+-// l.mu is held for all its methods.
+-type syncBuffer struct {
+-	logger *loggingT
+-	*bufio.Writer
+-	file   *os.File
+-	sev    severity
+-	nbytes uint64 // The number of bytes written to this file
+-}
+-
+-func (sb *syncBuffer) Sync() error {
+-	return sb.file.Sync()
+-}
+-
+-func (sb *syncBuffer) Write(p []byte) (n int, err error) {
+-	if sb.nbytes+uint64(len(p)) >= MaxSize {
+-		if err := sb.rotateFile(time.Now()); err != nil {
+-			sb.logger.exit(err)
+-		}
+-	}
+-	n, err = sb.Writer.Write(p)
+-	sb.nbytes += uint64(n)
+-	if err != nil {
+-		sb.logger.exit(err)
+-	}
+-	return
+-}
+-
+-// rotateFile closes the syncBuffer's file and starts a new one.
+-func (sb *syncBuffer) rotateFile(now time.Time) error {
+-	if sb.file != nil {
+-		sb.Flush()
+-		sb.file.Close()
+-	}
+-	var err error
+-	sb.file, _, err = create(severityName[sb.sev], now)
+-	sb.nbytes = 0
+-	if err != nil {
+-		return err
+-	}
+-
+-	sb.Writer = bufio.NewWriterSize(sb.file, bufferSize)
+-
+-	// Write header.
+-	var buf bytes.Buffer
+-	fmt.Fprintf(&buf, "Log file created at: %s\n", now.Format("2006/01/02 15:04:05"))
+-	fmt.Fprintf(&buf, "Running on machine: %s\n", host)
+-	fmt.Fprintf(&buf, "Binary: Built with %s %s for %s/%s\n", runtime.Compiler, runtime.Version(), runtime.GOOS, runtime.GOARCH)
+-	fmt.Fprintf(&buf, "Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg\n")
+-	n, err := sb.file.Write(buf.Bytes())
+-	sb.nbytes += uint64(n)
+-	return err
+-}
+-
+-// bufferSize sizes the buffer associated with each log file. It's large
+-// so that log records can accumulate without the logging thread blocking
+-// on disk I/O. The flushDaemon will block instead.
+-const bufferSize = 256 * 1024
+-
+-// createFiles creates all the log files for severity from sev down to infoLog.
+-// l.mu is held.
+-func (l *loggingT) createFiles(sev severity) error {
+-	now := time.Now()
+-	// Files are created in decreasing severity order, so as soon as we find one
+-	// has already been created, we can stop.
+-	for s := sev; s >= infoLog && l.file[s] == nil; s-- {
+-		sb := &syncBuffer{
+-			logger: l,
+-			sev:    s,
+-		}
+-		if err := sb.rotateFile(now); err != nil {
+-			return err
+-		}
+-		l.file[s] = sb
+-	}
+-	return nil
+-}
+-
+-const flushInterval = 30 * time.Second
+-
+-// flushDaemon periodically flushes the log file buffers.
+-func (l *loggingT) flushDaemon() {
+-	for _ = range time.NewTicker(flushInterval).C {
+-		l.lockAndFlushAll()
+-	}
+-}
+-
+-// lockAndFlushAll is like flushAll but locks l.mu first.
+-func (l *loggingT) lockAndFlushAll() {
+-	l.mu.Lock()
+-	l.flushAll()
+-	l.mu.Unlock()
+-}
+-
+-// flushAll flushes all the logs and attempts to "sync" their data to disk.
+-// l.mu is held.
+-func (l *loggingT) flushAll() {
+-	// Flush from fatal down, in case there's trouble flushing.
+-	for s := fatalLog; s >= infoLog; s-- {
+-		file := l.file[s]
+-		if file != nil {
+-			file.Flush() // ignore error
+-			file.Sync()  // ignore error
+-		}
+-	}
+-}
+-
+-// setV computes and remembers the V level for a given PC
+-// when vmodule is enabled.
+-// File pattern matching takes the basename of the file, stripped
+-// of its .go suffix, and uses filepath.Match, which is a little more
+-// general than the *? matching used in C++.
+-// l.mu is held.
+-func (l *loggingT) setV(pc uintptr) Level {
+-	fn := runtime.FuncForPC(pc)
+-	file, _ := fn.FileLine(pc)
+-	// The file is something like /a/b/c/d.go. We want just the d.
+-	if strings.HasSuffix(file, ".go") {
+-		file = file[:len(file)-3]
+-	}
+-	if slash := strings.LastIndex(file, "/"); slash >= 0 {
+-		file = file[slash+1:]
+-	}
+-	for _, filter := range l.vmodule.filter {
+-		if filter.match(file) {
+-			l.vmap[pc] = filter.level
+-			return filter.level
+-		}
+-	}
+-	l.vmap[pc] = 0
+-	return 0
+-}
+-
+-// Verbose is a boolean type that implements Infof (like Printf) etc.
+-// See the documentation of V for more information.
+-type Verbose bool
+-
+-// V reports whether verbosity at the call site is at least the requested level.
+-// The returned value is a boolean of type Verbose, which implements Info, Infoln
+-// and Infof. These methods will write to the Info log if called.
+-// Thus, one may write either
+-//	if glog.V(2) { glog.Info("log this") }
+-// or
+-//	glog.V(2).Info("log this")
+-// The second form is shorter but the first is cheaper if logging is off because it does
+-// not evaluate its arguments.
+-//
+-// Whether an individual call to V generates a log record depends on the setting of
+-// the -v and --vmodule flags; both are off by default. If the level in the call to
+-// V is at least the value of -v, or of -vmodule for the source file containing the
+-// call, the V call will log.
+-func V(level Level) Verbose {
+-	// This function tries hard to be cheap unless there's work to do.
+-	// The fast path is two atomic loads and compares.
+-
+-	// Here is a cheap but safe test to see if V logging is enabled globally.
+-	if logging.verbosity.get() >= level {
+-		return Verbose(true)
+-	}
+-
+-	// It's off globally but vmodule may still be set.
+-	// Here is another cheap but safe test to see if vmodule is enabled.
+-	if atomic.LoadInt32(&logging.filterLength) > 0 {
+-		// Now we need a proper lock to use the logging structure. The pcs field
+-		// is shared so we must lock before accessing it. This is fairly expensive,
+-		// but if V logging is enabled we're slow anyway.
+-		logging.mu.Lock()
+-		defer logging.mu.Unlock()
+-		if runtime.Callers(2, logging.pcs[:]) == 0 {
+-			return Verbose(false)
+-		}
+-		v, ok := logging.vmap[logging.pcs[0]]
+-		if !ok {
+-			v = logging.setV(logging.pcs[0])
+-		}
+-		return Verbose(v >= level)
+-	}
+-	return Verbose(false)
+-}
+-
+-// Info is equivalent to the global Info function, guarded by the value of v.
+-// See the documentation of V for usage.
+-func (v Verbose) Info(args ...interface{}) {
+-	if v {
+-		logging.print(infoLog, args...)
+-	}
+-}
+-
+-// Infoln is equivalent to the global Infoln function, guarded by the value of v.
+-// See the documentation of V for usage.
+-func (v Verbose) Infoln(args ...interface{}) {
+-	if v {
+-		logging.println(infoLog, args...)
+-	}
+-}
+-
+-// Infof is equivalent to the global Infof function, guarded by the value of v.
+-// See the documentation of V for usage.
+-func (v Verbose) Infof(format string, args ...interface{}) {
+-	if v {
+-		logging.printf(infoLog, format, args...)
+-	}
+-}
+-
+-// Info logs to the INFO log.
+-// Arguments are handled in the manner of fmt.Print; a newline is appended if missing.
+-func Info(args ...interface{}) {
+-	logging.print(infoLog, args...)
+-}
+-
+-// Infoln logs to the INFO log.
+-// Arguments are handled in the manner of fmt.Println; a newline is appended if missing.
+-func Infoln(args ...interface{}) {
+-	logging.println(infoLog, args...)
+-}
+-
+-// Infof logs to the INFO log.
+-// Arguments are handled in the manner of fmt.Printf; a newline is appended if missing.
+-func Infof(format string, args ...interface{}) {
+-	logging.printf(infoLog, format, args...)
+-}
+-
+-// Warning logs to the WARNING and INFO logs.
+-// Arguments are handled in the manner of fmt.Print; a newline is appended if missing.
+-func Warning(args ...interface{}) {
+-	logging.print(warningLog, args...)
+-}
+-
+-// Warningln logs to the WARNING and INFO logs.
+-// Arguments are handled in the manner of fmt.Println; a newline is appended if missing.
+-func Warningln(args ...interface{}) {
+-	logging.println(warningLog, args...)
+-}
+-
+-// Warningf logs to the WARNING and INFO logs.
+-// Arguments are handled in the manner of fmt.Printf; a newline is appended if missing.
+-func Warningf(format string, args ...interface{}) {
+-	logging.printf(warningLog, format, args...)
+-}
+-
+-// Error logs to the ERROR, WARNING, and INFO logs.
+-// Arguments are handled in the manner of fmt.Print; a newline is appended if missing.
+-func Error(args ...interface{}) {
+-	logging.print(errorLog, args...)
+-}
+-
+-// Errorln logs to the ERROR, WARNING, and INFO logs.
+-// Arguments are handled in the manner of fmt.Println; a newline is appended if missing.
+-func Errorln(args ...interface{}) {
+-	logging.println(errorLog, args...)
+-}
+-
+-// Errorf logs to the ERROR, WARNING, and INFO logs.
+-// Arguments are handled in the manner of fmt.Printf; a newline is appended if missing.
+-func Errorf(format string, args ...interface{}) {
+-	logging.printf(errorLog, format, args...)
+-}
+-
+-// Fatal logs to the FATAL, ERROR, WARNING, and INFO logs,
+-// including a stack trace of all running goroutines, then calls os.Exit(255).
+-// Arguments are handled in the manner of fmt.Print; a newline is appended if missing.
+-func Fatal(args ...interface{}) {
+-	logging.print(fatalLog, args...)
+-}
+-
+-// Fatalln logs to the FATAL, ERROR, WARNING, and INFO logs,
+-// including a stack trace of all running goroutines, then calls os.Exit(255).
+-// Arguments are handled in the manner of fmt.Println; a newline is appended if missing.
+-func Fatalln(args ...interface{}) {
+-	logging.println(fatalLog, args...)
+-}
+-
+-// Fatalf logs to the FATAL, ERROR, WARNING, and INFO logs,
+-// including a stack trace of all running goroutines, then calls os.Exit(255).
+-// Arguments are handled in the manner of fmt.Printf; a newline is appended if missing.
+-func Fatalf(format string, args ...interface{}) {
+-	logging.printf(fatalLog, format, args...)
+-}
+diff --git a/Godeps/_workspace/src/github.com/golang/glog/glog_file.go b/Godeps/_workspace/src/github.com/golang/glog/glog_file.go
+deleted file mode 100644
+index 65075d2..0000000
+--- a/Godeps/_workspace/src/github.com/golang/glog/glog_file.go
++++ /dev/null
+@@ -1,124 +0,0 @@
+-// Go support for leveled logs, analogous to https://code.google.com/p/google-glog/
+-//
+-// Copyright 2013 Google Inc. All Rights Reserved.
+-//
+-// Licensed under the Apache License, Version 2.0 (the "License");
+-// you may not use this file except in compliance with the License.
+-// You may obtain a copy of the License at
+-//
+-//     http://www.apache.org/licenses/LICENSE-2.0
+-//
+-// Unless required by applicable law or agreed to in writing, software
+-// distributed under the License is distributed on an "AS IS" BASIS,
+-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+-// See the License for the specific language governing permissions and
+-// limitations under the License.
+-
+-// File I/O for logs.
+-
+-package glog
+-
+-import (
+-	"errors"
+-	"flag"
+-	"fmt"
+-	"os"
+-	"os/user"
+-	"path/filepath"
+-	"strings"
+-	"sync"
+-	"time"
+-)
+-
+-// MaxSize is the maximum size of a log file in bytes.
+-var MaxSize uint64 = 1024 * 1024 * 1800
+-
+-// logDirs lists the candidate directories for new log files.
+-var logDirs []string
+-
+-// If non-empty, overrides the choice of directory in which to write logs.
+-// See createLogDirs for the full list of possible destinations.
+-var logDir = flag.String("log_dir", "", "If non-empty, write log files in this directory")
+-
+-func createLogDirs() {
+-	if *logDir != "" {
+-		logDirs = append(logDirs, *logDir)
+-	}
+-	logDirs = append(logDirs, os.TempDir())
+-}
+-
+-var (
+-	pid      = os.Getpid()
+-	program  = filepath.Base(os.Args[0])
+-	host     = "unknownhost"
+-	userName = "unknownuser"
+-)
+-
+-func init() {
+-	h, err := os.Hostname()
+-	if err == nil {
+-		host = shortHostname(h)
+-	}
+-
+-	current, err := user.Current()
+-	if err == nil {
+-		userName = current.Username
+-	}
+-
+-	// Sanitize userName since it may contain filepath separators on Windows.
+-	userName = strings.Replace(userName, `\`, "_", -1)
+-}
+-
+-// shortHostname returns its argument, truncating at the first period.
+-// For instance, given "www.google.com" it returns "www".
+-func shortHostname(hostname string) string {
+-	if i := strings.Index(hostname, "."); i >= 0 {
+-		return hostname[:i]
+-	}
+-	return hostname
+-}
+-
+-// logName returns a new log file name containing tag, with start time t, and
+-// the name for the symlink for tag.
+-func logName(tag string, t time.Time) (name, link string) {
+-	name = fmt.Sprintf("%s.%s.%s.log.%s.%04d%02d%02d-%02d%02d%02d.%d",
+-		program,
+-		host,
+-		userName,
+-		tag,
+-		t.Year(),
+-		t.Month(),
+-		t.Day(),
+-		t.Hour(),
+-		t.Minute(),
+-		t.Second(),
+-		pid)
+-	return name, program + "." + tag
+-}
+-
+-var onceLogDirs sync.Once
+-
+-// create creates a new log file and returns the file and its filename, which
+-// contains tag ("INFO", "FATAL", etc.) and t.  If the file is created
+-// successfully, create also attempts to update the symlink for that tag, ignoring
+-// errors.
+-func create(tag string, t time.Time) (f *os.File, filename string, err error) {
+-	onceLogDirs.Do(createLogDirs)
+-	if len(logDirs) == 0 {
+-		return nil, "", errors.New("log: no log dirs")
+-	}
+-	name, link := logName(tag, t)
+-	var lastErr error
+-	for _, dir := range logDirs {
+-		fname := filepath.Join(dir, name)
+-		f, err := os.Create(fname)
+-		if err == nil {
+-			symlink := filepath.Join(dir, link)
+-			os.Remove(symlink)        // ignore err
+-			os.Symlink(name, symlink) // ignore err
+-			return f, fname, nil
+-		}
+-		lastErr = err
+-	}
+-	return nil, "", fmt.Errorf("log: cannot create log: %v", lastErr)
+-}
+diff --git a/Godeps/_workspace/src/github.com/golang/glog/glog_test.go b/Godeps/_workspace/src/github.com/golang/glog/glog_test.go
+deleted file mode 100644
+index e4cac5a..0000000
+--- a/Godeps/_workspace/src/github.com/golang/glog/glog_test.go
++++ /dev/null
+@@ -1,333 +0,0 @@
+-// Go support for leveled logs, analogous to https://code.google.com/p/google-glog/
+-//
+-// Copyright 2013 Google Inc. All Rights Reserved.
+-//
+-// Licensed under the Apache License, Version 2.0 (the "License");
+-// you may not use this file except in compliance with the License.
+-// You may obtain a copy of the License at
+-//
+-//     http://www.apache.org/licenses/LICENSE-2.0
+-//
+-// Unless required by applicable law or agreed to in writing, software
+-// distributed under the License is distributed on an "AS IS" BASIS,
+-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+-// See the License for the specific language governing permissions and
+-// limitations under the License.
+-
+-package glog
+-
+-import (
+-	"bytes"
+-	"fmt"
+-	"path/filepath"
+-	"runtime"
+-	"strings"
+-	"testing"
+-	"time"
+-)
+-
+-// Test that shortHostname works as advertised.
+-func TestShortHostname(t *testing.T) {
+-	for hostname, expect := range map[string]string{
+-		"":                "",
+-		"host":            "host",
+-		"host.google.com": "host",
+-	} {
+-		if got := shortHostname(hostname); expect != got {
+-			t.Errorf("shortHostname(%q): expected %q, got %q", hostname, expect, got)
+-		}
+-	}
+-}
+-
+-// flushBuffer wraps a bytes.Buffer to satisfy flushSyncWriter.
+-type flushBuffer struct {
+-	bytes.Buffer
+-}
+-
+-func (f *flushBuffer) Flush() error {
+-	return nil
+-}
+-
+-func (f *flushBuffer) Sync() error {
+-	return nil
+-}
+-
+-// swap sets the log writers and returns the old array.
+-func (l *loggingT) swap(writers [numSeverity]flushSyncWriter) (old [numSeverity]flushSyncWriter) {
+-	l.mu.Lock()
+-	defer l.mu.Unlock()
+-	old = l.file
+-	for i, w := range writers {
+-		logging.file[i] = w
+-	}
+-	return
+-}
+-
+-// newBuffers sets the log writers to all new byte buffers and returns the old array.
+-func (l *loggingT) newBuffers() [numSeverity]flushSyncWriter {
+-	return l.swap([numSeverity]flushSyncWriter{new(flushBuffer), new(flushBuffer), new(flushBuffer), new(flushBuffer)})
+-}
+-
+-// contents returns the specified log value as a string.
+-func contents(s severity) string {
+-	return logging.file[s].(*flushBuffer).String()
+-}
+-
+-// contains reports whether the string is contained in the log.
+-func contains(s severity, str string, t *testing.T) bool {
+-	return strings.Contains(contents(s), str)
+-}
+-
+-// setFlags configures the logging flags how the test expects them.
+-func setFlags() {
+-	logging.toStderr = false
+-}
+-
+-// Test that Info works as advertised.
+-func TestInfo(t *testing.T) {
+-	setFlags()
+-	defer logging.swap(logging.newBuffers())
+-	Info("test")
+-	if !contains(infoLog, "I", t) {
+-		t.Errorf("Info has wrong character: %q", contents(infoLog))
+-	}
+-	if !contains(infoLog, "test", t) {
+-		t.Error("Info failed")
+-	}
+-}
+-
+-// Test that the header has the correct format.
+-func TestHeader(t *testing.T) {
+-	setFlags()
+-	defer logging.swap(logging.newBuffers())
+-	defer func(previous func() time.Time) { timeNow = previous }(timeNow)
+-	timeNow = func() time.Time {
+-		return time.Date(2006, 1, 2, 15, 4, 5, .678901e9, time.Local)
+-	}
+-	Info("test")
+-	var line, pid int
+-	n, err := fmt.Sscanf(contents(infoLog), "I0102 15:04:05.678901 %d glog_test.go:%d] test\n", &pid, &line)
+-	if n != 2 || err != nil {
+-		t.Errorf("log format error: %d elements, error %s:\n%s", n, err, contents(infoLog))
+-	}
+-}
+-
+-// Test that an Error log goes to Warning and Info.
+-// Even in the Info log, the source character will be E, so the data should
+-// all be identical.
+-func TestError(t *testing.T) {
+-	setFlags()
+-	defer logging.swap(logging.newBuffers())
+-	Error("test")
+-	if !contains(errorLog, "E", t) {
+-		t.Errorf("Error has wrong character: %q", contents(errorLog))
+-	}
+-	if !contains(errorLog, "test", t) {
+-		t.Error("Error failed")
+-	}
+-	str := contents(errorLog)
+-	if !contains(warningLog, str, t) {
+-		t.Error("Warning failed")
+-	}
+-	if !contains(infoLog, str, t) {
+-		t.Error("Info failed")
+-	}
+-}
+-
+-// Test that a Warning log goes to Info.
+-// Even in the Info log, the source character will be W, so the data should
+-// all be identical.
+-func TestWarning(t *testing.T) {
+-	setFlags()
+-	defer logging.swap(logging.newBuffers())
+-	Warning("test")
+-	if !contains(warningLog, "W", t) {
+-		t.Errorf("Warning has wrong character: %q", contents(warningLog))
+-	}
+-	if !contains(warningLog, "test", t) {
+-		t.Error("Warning failed")
+-	}
+-	str := contents(warningLog)
+-	if !contains(infoLog, str, t) {
+-		t.Error("Info failed")
+-	}
+-}
+-
+-// Test that a V log goes to Info.
+-func TestV(t *testing.T) {
+-	setFlags()
+-	defer logging.swap(logging.newBuffers())
+-	logging.verbosity.Set("2")
+-	defer logging.verbosity.Set("0")
+-	V(2).Info("test")
+-	if !contains(infoLog, "I", t) {
+-		t.Errorf("Info has wrong character: %q", contents(infoLog))
+-	}
+-	if !contains(infoLog, "test", t) {
+-		t.Error("Info failed")
+-	}
+-}
+-
+-// Test that a vmodule enables a log in this file.
+-func TestVmoduleOn(t *testing.T) {
+-	setFlags()
+-	defer logging.swap(logging.newBuffers())
+-	logging.vmodule.Set("glog_test=2")
+-	defer logging.vmodule.Set("")
+-	if !V(1) {
+-		t.Error("V not enabled for 1")
+-	}
+-	if !V(2) {
+-		t.Error("V not enabled for 2")
+-	}
+-	if V(3) {
+-		t.Error("V enabled for 3")
+-	}
+-	V(2).Info("test")
+-	if !contains(infoLog, "I", t) {
+-		t.Errorf("Info has wrong character: %q", contents(infoLog))
+-	}
+-	if !contains(infoLog, "test", t) {
+-		t.Error("Info failed")
+-	}
+-}
+-
+-// Test that a vmodule of another file does not enable a log in this file.
+-func TestVmoduleOff(t *testing.T) {
+-	setFlags()
+-	defer logging.swap(logging.newBuffers())
+-	logging.vmodule.Set("notthisfile=2")
+-	defer logging.vmodule.Set("")
+-	for i := 1; i <= 3; i++ {
+-		if V(Level(i)) {
+-			t.Errorf("V enabled for %d", i)
+-		}
+-	}
+-	V(2).Info("test")
+-	if contents(infoLog) != "" {
+-		t.Error("V logged incorrectly")
+-	}
+-}
+-
+-// vGlobs are patterns that match/don't match this file at V=2.
+-var vGlobs = map[string]bool{
+-	// Easy to test the numeric match here.
+-	"glog_test=1": false, // If -vmodule sets V to 1, V(2) will fail.
+-	"glog_test=2": true,
+-	"glog_test=3": true, // If -vmodule sets V to 1, V(3) will succeed.
+-	// These all use 2 and check the patterns. All are true.
+-	"*=2":           true,
+-	"?l*=2":         true,
+-	"????_*=2":      true,
+-	"??[mno]?_*t=2": true,
+-	// These all use 2 and check the patterns. All are false.
+-	"*x=2":         false,
+-	"m*=2":         false,
+-	"??_*=2":       false,
+-	"?[abc]?_*t=2": false,
+-}
+-
+-// Test that vmodule globbing works as advertised.
+-func testVmoduleGlob(pat string, match bool, t *testing.T) {
+-	setFlags()
+-	defer logging.swap(logging.newBuffers())
+-	defer logging.vmodule.Set("")
+-	logging.vmodule.Set(pat)
+-	if V(2) != Verbose(match) {
+-		t.Errorf("incorrect match for %q: got %t expected %t", pat, V(2), match)
+-	}
+-}
+-
+-// Test that a vmodule globbing works as advertised.
+-func TestVmoduleGlob(t *testing.T) {
+-	for glob, match := range vGlobs {
+-		testVmoduleGlob(glob, match, t)
+-	}
+-}
+-
+-func TestRollover(t *testing.T) {
+-	setFlags()
+-	var err error
+-	defer func(previous func(error)) { logExitFunc = previous }(logExitFunc)
+-	logExitFunc = func(e error) {
+-		err = e
+-	}
+-	defer func(previous uint64) { MaxSize = previous }(MaxSize)
+-	MaxSize = 512
+-
+-	Info("x") // Be sure we have a file.
+-	info, ok := logging.file[infoLog].(*syncBuffer)
+-	if !ok {
+-		t.Fatal("info wasn't created")
+-	}
+-	if err != nil {
+-		t.Fatalf("info has initial error: %v", err)
+-	}
+-	fname0 := info.file.Name()
+-	Info(strings.Repeat("x", int(MaxSize))) // force a rollover
+-	if err != nil {
+-		t.Fatalf("info has error after big write: %v", err)
+-	}
+-
+-	// Make sure the next log file gets a file name with a different
+-	// time stamp.
+-	//
+-	// TODO: determine whether we need to support subsecond log
+-	// rotation.  C++ does not appear to handle this case (nor does it
+-	// handle Daylight Savings Time properly).
+-	time.Sleep(1 * time.Second)
+-
+-	Info("x") // create a new file
+-	if err != nil {
+-		t.Fatalf("error after rotation: %v", err)
+-	}
+-	fname1 := info.file.Name()
+-	if fname0 == fname1 {
+-		t.Errorf("info.f.Name did not change: %v", fname0)
+-	}
+-	if info.nbytes >= MaxSize {
+-		t.Errorf("file size was not reset: %d", info.nbytes)
+-	}
+-}
+-
+-func TestLogBacktraceAt(t *testing.T) {
+-	setFlags()
+-	defer logging.swap(logging.newBuffers())
+-	// The peculiar style of this code simplifies line counting and maintenance of the
+-	// tracing block below.
+-	var infoLine string
+-	setTraceLocation := func(file string, line int, ok bool, delta int) {
+-		if !ok {
+-			t.Fatal("could not get file:line")
+-		}
+-		_, file = filepath.Split(file)
+-		infoLine = fmt.Sprintf("%s:%d", file, line+delta)
+-		err := logging.traceLocation.Set(infoLine)
+-		if err != nil {
+-			t.Fatal("error setting log_backtrace_at: ", err)
+-		}
+-	}
+-	{
+-		// Start of tracing block. These lines know about each other's relative position.
+-		_, file, line, ok := runtime.Caller(0)
+-		setTraceLocation(file, line, ok, +2) // Two lines between Caller and Info calls.
+-		Info("we want a stack trace here")
+-	}
+-	numAppearances := strings.Count(contents(infoLog), infoLine)
+-	if numAppearances < 2 {
+-		// Need 2 appearances, one in the log header and one in the trace:
+-		//   log_test.go:281: I0511 16:36:06.952398 02238 log_test.go:280] we want a stack trace here
+-		//   ...
+-		//   github.com/glog/glog_test.go:280 (0x41ba91)
+-		//   ...
+-		// We could be more precise but that would require knowing the details
+-		// of the traceback format, which may not be dependable.
+-		t.Fatal("got no trace back; log is ", contents(infoLog))
+-	}
+-}
+-
+-func BenchmarkHeader(b *testing.B) {
+-	for i := 0; i < b.N; i++ {
+-		logging.putBuffer(logging.header(infoLog))
+-	}
+-}
+diff --git a/Godeps/_workspace/src/github.com/google/cadvisor/client/client.go b/Godeps/_workspace/src/github.com/google/cadvisor/client/client.go
+deleted file mode 100644
+index f49176e..0000000
+--- a/Godeps/_workspace/src/github.com/google/cadvisor/client/client.go
++++ /dev/null
+@@ -1,106 +0,0 @@
+-// Copyright 2014 Google Inc. All Rights Reserved.
+-//
+-// Licensed under the Apache License, Version 2.0 (the "License");
+-// you may not use this file except in compliance with the License.
+-// You may obtain a copy of the License at
+-//
+-//     http://www.apache.org/licenses/LICENSE-2.0
+-//
+-// Unless required by applicable law or agreed to in writing, software
+-// distributed under the License is distributed on an "AS IS" BASIS,
+-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+-// See the License for the specific language governing permissions and
+-// limitations under the License.
+-
+-package cadvisor
+-
+-import (
+-	"bytes"
+-	"encoding/json"
+-	"fmt"
+-	"io/ioutil"
+-	"net/http"
+-	"strings"
+-
+-	"github.com/google/cadvisor/info"
+-)
+-
+-type Client struct {
+-	baseUrl string
+-}
+-
+-func NewClient(URL string) (*Client, error) {
+-	c := &Client{
+-		baseUrl: strings.Join([]string{
+-			URL,
+-			"api/v1.0",
+-		}, "/"),
+-	}
+-	return c, nil
+-}
+-
+-func (self *Client) machineInfoUrl() string {
+-	return strings.Join([]string{self.baseUrl, "machine"}, "/")
+-}
+-
+-func (self *Client) MachineInfo() (minfo *info.MachineInfo, err error) {
+-	u := self.machineInfoUrl()
+-	ret := new(info.MachineInfo)
+-	err = self.httpGetJsonData(ret, nil, u, "machine info")
+-	if err != nil {
+-		return
+-	}
+-	minfo = ret
+-	return
+-}
+-
+-func (self *Client) containerInfoUrl(name string) string {
+-	if name[0] == '/' {
+-		name = name[1:]
+-	}
+-	return strings.Join([]string{self.baseUrl, "containers", name}, "/")
+-}
+-
+-func (self *Client) httpGetJsonData(data, postData interface{}, url, infoName string) error {
+-	var resp *http.Response
+-	var err error
+-
+-	if postData != nil {
+-		data, err := json.Marshal(postData)
+-		if err != nil {
+-			return fmt.Errorf("unable to marshal data: %v", err)
+-		}
+-		resp, err = http.Post(url, "application/json", bytes.NewBuffer(data))
+-	} else {
+-		resp, err = http.Get(url)
+-	}
+-	if err != nil {
+-		err = fmt.Errorf("unable to get %v: %v", infoName, err)
+-		return err
+-	}
+-	defer resp.Body.Close()
+-	body, err := ioutil.ReadAll(resp.Body)
+-	if err != nil {
+-		err = fmt.Errorf("unable to read all %v: %v", infoName, err)
+-		return err
+-	}
+-	err = json.Unmarshal(body, data)
+-	if err != nil {
+-		err = fmt.Errorf("unable to unmarshal %v (%v): %v", infoName, string(body), err)
+-		return err
+-	}
+-	return nil
+-}
+-
+-func (self *Client) ContainerInfo(
+-	name string,
+-	query *info.ContainerInfoRequest) (cinfo *info.ContainerInfo, err error) {
+-	u := self.containerInfoUrl(name)
+-	ret := new(info.ContainerInfo)
+-	err = self.httpGetJsonData(ret, query, u, fmt.Sprintf("container info for %v", name))
+-	if err != nil {
+-		return
+-	}
+-	cinfo = ret
+-	return
+-}
+diff --git a/Godeps/_workspace/src/github.com/google/cadvisor/client/client_test.go b/Godeps/_workspace/src/github.com/google/cadvisor/client/client_test.go
+deleted file mode 100644
+index eb6b7a4..0000000
+--- a/Godeps/_workspace/src/github.com/google/cadvisor/client/client_test.go
++++ /dev/null
+@@ -1,113 +0,0 @@
+-// Copyright 2014 Google Inc. All Rights Reserved.
+-//
+-// Licensed under the Apache License, Version 2.0 (the "License");
+-// you may not use this file except in compliance with the License.
+-// You may obtain a copy of the License at
+-//
+-//     http://www.apache.org/licenses/LICENSE-2.0
+-//
+-// Unless required by applicable law or agreed to in writing, software
+-// distributed under the License is distributed on an "AS IS" BASIS,
+-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+-// See the License for the specific language governing permissions and
+-// limitations under the License.
+-
+-package cadvisor
+-
+-import (
+-	"encoding/json"
+-	"fmt"
+-	"net/http"
+-	"net/http/httptest"
+-	"reflect"
+-	"testing"
+-	"time"
+-
+-	"github.com/google/cadvisor/info"
+-	itest "github.com/google/cadvisor/info/test"
+-	"github.com/kr/pretty"
+-)
+-
+-func testGetJsonData(
+-	expected interface{},
+-	f func() (interface{}, error),
+-) error {
+-	reply, err := f()
+-	if err != nil {
+-		return fmt.Errorf("unable to retrieve data: %v", err)
+-	}
+-	if !reflect.DeepEqual(reply, expected) {
+-		return pretty.Errorf("retrieved wrong data: %# v != %# v", reply, expected)
+-	}
+-	return nil
+-}
+-
+-func cadvisorTestClient(path string, expectedPostObj, expectedPostObjEmpty, replyObj interface{}, t *testing.T) (*Client, *httptest.Server, error) {
+-	ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
+-		if r.URL.Path == path {
+-			if expectedPostObj != nil {
+-				decoder := json.NewDecoder(r.Body)
+-				err := decoder.Decode(expectedPostObjEmpty)
+-				if err != nil {
+-					t.Errorf("Received invalid object: %v", err)
+-				}
+-				if !reflect.DeepEqual(expectedPostObj, expectedPostObjEmpty) {
+-					t.Errorf("Received unexpected object: %+v", expectedPostObjEmpty)
+-				}
+-			}
+-			encoder := json.NewEncoder(w)
+-			encoder.Encode(replyObj)
+-		} else if r.URL.Path == "/api/v1.0/machine" {
+-			fmt.Fprint(w, `{"num_cores":8,"memory_capacity":31625871360}`)
+-		} else {
+-			w.WriteHeader(http.StatusNotFound)
+-			fmt.Fprintf(w, "Page not found.")
+-		}
+-	}))
+-	client, err := NewClient(ts.URL)
+-	if err != nil {
+-		ts.Close()
+-		return nil, nil, err
+-	}
+-	return client, ts, err
+-}
+-
+-func TestGetMachineinfo(t *testing.T) {
+-	minfo := &info.MachineInfo{
+-		NumCores:       8,
+-		MemoryCapacity: 31625871360,
+-	}
+-	client, server, err := cadvisorTestClient("/api/v1.0/machine", nil, nil, minfo, t)
+-	if err != nil {
+-		t.Fatalf("unable to get a client %v", err)
+-	}
+-	defer server.Close()
+-	returned, err := client.MachineInfo()
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-	if !reflect.DeepEqual(returned, minfo) {
+-		t.Fatalf("received unexpected machine info")
+-	}
+-}
+-
+-func TestGetContainerInfo(t *testing.T) {
+-	query := &info.ContainerInfoRequest{
+-		NumStats: 3,
+-	}
+-	containerName := "/some/container"
+-	cinfo := itest.GenerateRandomContainerInfo(containerName, 4, query, 1*time.Second)
+-	client, server, err := cadvisorTestClient(fmt.Sprintf("/api/v1.0/containers%v", containerName), query, &info.ContainerInfoRequest{}, cinfo, t)
+-	if err != nil {
+-		t.Fatalf("unable to get a client %v", err)
+-	}
+-	defer server.Close()
+-	returned, err := client.ContainerInfo(containerName, query)
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-
+-	if !returned.Eq(cinfo) {
+-		t.Error("received unexpected ContainerInfo")
+-	}
+-}
+diff --git a/Godeps/_workspace/src/github.com/google/cadvisor/info/advice.go b/Godeps/_workspace/src/github.com/google/cadvisor/info/advice.go
+deleted file mode 100644
+index 8084cf4..0000000
+--- a/Godeps/_workspace/src/github.com/google/cadvisor/info/advice.go
++++ /dev/null
+@@ -1,34 +0,0 @@
+-// Copyright 2014 Google Inc. All Rights Reserved.
+-//
+-// Licensed under the Apache License, Version 2.0 (the "License");
+-// you may not use this file except in compliance with the License.
+-// You may obtain a copy of the License at
+-//
+-//     http://www.apache.org/licenses/LICENSE-2.0
+-//
+-// Unless required by applicable law or agreed to in writing, software
+-// distributed under the License is distributed on an "AS IS" BASIS,
+-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+-// See the License for the specific language governing permissions and
+-// limitations under the License.
+-
+-package info
+-
+-// This struct describes one type of relationship between containers: One
+-// container, antagonist, interferes the performance of other
+-// containers, victims.
+-type Interference struct {
+-	// Absolute name of the antagonist container name. This field
+-	// should not be empty.
+-	Antagonist string `json:"antagonist"`
+-
+-	// The absolute path of the victims. This field should not be empty.
+-	Victims []string `json:"victims"`
+-
+-	// The name of the detector used to detect this antagonism. This field
+-	// should not be empty
+-	Detector string `json:"detector"`
+-
+-	// Human readable description of this interference
+-	Description string `json:"description,omitempty"`
+-}
+diff --git a/Godeps/_workspace/src/github.com/google/cadvisor/info/container.go b/Godeps/_workspace/src/github.com/google/cadvisor/info/container.go
+deleted file mode 100644
+index 7858fa5..0000000
+--- a/Godeps/_workspace/src/github.com/google/cadvisor/info/container.go
++++ /dev/null
+@@ -1,312 +0,0 @@
+-// Copyright 2014 Google Inc. All Rights Reserved.
+-//
+-// Licensed under the Apache License, Version 2.0 (the "License");
+-// you may not use this file except in compliance with the License.
+-// You may obtain a copy of the License at
+-//
+-//     http://www.apache.org/licenses/LICENSE-2.0
+-//
+-// Unless required by applicable law or agreed to in writing, software
+-// distributed under the License is distributed on an "AS IS" BASIS,
+-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+-// See the License for the specific language governing permissions and
+-// limitations under the License.
+-
+-package info
+-
+-import (
+-	"reflect"
+-	"time"
+-)
+-
+-type CpuSpec struct {
+-	Limit    uint64 `json:"limit"`
+-	MaxLimit uint64 `json:"max_limit"`
+-	Mask     string `json:"mask,omitempty"`
+-}
+-
+-type MemorySpec struct {
+-	// The amount of memory requested. Default is unlimited (-1).
+-	// Units: bytes.
+-	Limit uint64 `json:"limit,omitempty"`
+-
+-	// The amount of guaranteed memory.  Default is 0.
+-	// Units: bytes.
+-	Reservation uint64 `json:"reservation,omitempty"`
+-
+-	// The amount of swap space requested. Default is unlimited (-1).
+-	// Units: bytes.
+-	SwapLimit uint64 `json:"swap_limit,omitempty"`
+-}
+-
+-type ContainerSpec struct {
+-	Cpu    *CpuSpec    `json:"cpu,omitempty"`
+-	Memory *MemorySpec `json:"memory,omitempty"`
+-}
+-
+-// Container reference contains enough information to uniquely identify a container
+-type ContainerReference struct {
+-	// The absolute name of the container.
+-	Name string `json:"name"`
+-
+-	Aliases []string `json:"aliases,omitempty"`
+-}
+-
+-// ContainerInfoQuery is used when users check a container info from the REST api.
+-// It specifies how much data users want to get about a container
+-type ContainerInfoRequest struct {
+-	// Max number of stats to return.
+-	NumStats int `json:"num_stats,omitempty"`
+-}
+-
+-type ContainerInfo struct {
+-	ContainerReference
+-
+-	// The direct subcontainers of the current container.
+-	Subcontainers []ContainerReference `json:"subcontainers,omitempty"`
+-
+-	// The isolation used in the container.
+-	Spec *ContainerSpec `json:"spec,omitempty"`
+-
+-	// Historical statistics gathered from the container.
+-	Stats []*ContainerStats `json:"stats,omitempty"`
+-}
+-
+-// ContainerInfo may be (un)marshaled by json or other en/decoder. In that
+-// case, the Timestamp field in each stats/sample may not be precisely
+-// en/decoded.  This will lead to small but acceptable differences between a
+-// ContainerInfo and its encode-then-decode version.  Eq() is used to compare
+-// two ContainerInfo accepting small difference (<10ms) of Time fields.
+-func (self *ContainerInfo) Eq(b *ContainerInfo) bool {
+-
+-	// If both self and b are nil, then Eq() returns true
+-	if self == nil {
+-		return b == nil
+-	}
+-	if b == nil {
+-		return self == nil
+-	}
+-
+-	// For fields other than time.Time, we will compare them precisely.
+-	// This would require that any slice should have same order.
+-	if !reflect.DeepEqual(self.ContainerReference, b.ContainerReference) {
+-		return false
+-	}
+-	if !reflect.DeepEqual(self.Subcontainers, b.Subcontainers) {
+-		return false
+-	}
+-	if !reflect.DeepEqual(self.Spec, b.Spec) {
+-		return false
+-	}
+-
+-	for i, expectedStats := range b.Stats {
+-		selfStats := self.Stats[i]
+-		if !expectedStats.Eq(selfStats) {
+-			return false
+-		}
+-	}
+-
+-	return true
+-}
+-
+-func (self *ContainerInfo) StatsAfter(ref time.Time) []*ContainerStats {
+-	n := len(self.Stats) + 1
+-	for i, s := range self.Stats {
+-		if s.Timestamp.After(ref) {
+-			n = i
+-			break
+-		}
+-	}
+-	if n > len(self.Stats) {
+-		return nil
+-	}
+-	return self.Stats[n:]
+-}
+-
+-func (self *ContainerInfo) StatsStartTime() time.Time {
+-	var ret time.Time
+-	for _, s := range self.Stats {
+-		if s.Timestamp.Before(ret) || ret.IsZero() {
+-			ret = s.Timestamp
+-		}
+-	}
+-	return ret
+-}
+-
+-func (self *ContainerInfo) StatsEndTime() time.Time {
+-	var ret time.Time
+-	for i := len(self.Stats) - 1; i >= 0; i-- {
+-		s := self.Stats[i]
+-		if s.Timestamp.After(ret) {
+-			ret = s.Timestamp
+-		}
+-	}
+-	return ret
+-}
+-
+-// All CPU usage metrics are cumulative from the creation of the container
+-type CpuStats struct {
+-	Usage struct {
+-		// Total CPU usage.
+-		// Units: nanoseconds
+-		Total uint64 `json:"total"`
+-
+-		// Per CPU/core usage of the container.
+-		// Unit: nanoseconds.
+-		PerCpu []uint64 `json:"per_cpu_usage,omitempty"`
+-
+-		// Time spent in user space.
+-		// Unit: nanoseconds
+-		User uint64 `json:"user"`
+-
+-		// Time spent in kernel space.
+-		// Unit: nanoseconds
+-		System uint64 `json:"system"`
+-	} `json:"usage"`
+-	Load int32 `json:"load"`
+-}
+-
+-type MemoryStats struct {
+-	// Memory limit, equivalent to "limit" in MemorySpec.
+-	// Units: Bytes.
+-	Limit uint64 `json:"limit,omitempty"`
+-
+-	// Usage statistics.
+-
+-	// Current memory usage, this includes all memory regardless of when it was
+-	// accessed.
+-	// Units: Bytes.
+-	Usage uint64 `json:"usage,omitempty"`
+-
+-	// The amount of working set memory, this includes recently accessed memory,
+-	// dirty memory, and kernel memory. Working set is <= "usage".
+-	// Units: Bytes.
+-	WorkingSet uint64 `json:"working_set,omitempty"`
+-
+-	ContainerData    MemoryStatsMemoryData `json:"container_data,omitempty"`
+-	HierarchicalData MemoryStatsMemoryData `json:"hierarchical_data,omitempty"`
+-}
+-
+-type MemoryStatsMemoryData struct {
+-	Pgfault    uint64 `json:"pgfault,omitempty"`
+-	Pgmajfault uint64 `json:"pgmajfault,omitempty"`
+-}
+-
+-type NetworkStats struct {
+-	// Cumulative count of bytes received.
+-	RxBytes uint64 `json:"rx_bytes"`
+-	// Cumulative count of packets received.
+-	RxPackets uint64 `json:"rx_packets"`
+-	// Cumulative count of receive errors encountered.
+-	RxErrors uint64 `json:"rx_errors"`
+-	// Cumulative count of packets dropped while receiving.
+-	RxDropped uint64 `json:"rx_dropped"`
+-	// Cumulative count of bytes transmitted.
+-	TxBytes uint64 `json:"tx_bytes"`
+-	// Cumulative count of packets transmitted.
+-	TxPackets uint64 `json:"tx_packets"`
+-	// Cumulative count of transmit errors encountered.
+-	TxErrors uint64 `json:"tx_errors"`
+-	// Cumulative count of packets dropped while transmitting.
+-	TxDropped uint64 `json:"tx_dropped"`
+-}
+-
+-type ContainerStats struct {
+-	// The time of this stat point.
+-	Timestamp time.Time     `json:"timestamp"`
+-	Cpu       *CpuStats     `json:"cpu,omitempty"`
+-	Memory    *MemoryStats  `json:"memory,omitempty"`
+-	Network   *NetworkStats `json:"network,omitempty"`
+-}
+-
+-// Makes a deep copy of the ContainerStats and returns a pointer to the new
+-// copy. Copy() will allocate a new ContainerStats object if dst is nil.
+-func (self *ContainerStats) Copy(dst *ContainerStats) *ContainerStats {
+-	if dst == nil {
+-		dst = new(ContainerStats)
+-	}
+-	dst.Timestamp = self.Timestamp
+-	if self.Cpu != nil {
+-		if dst.Cpu == nil {
+-			dst.Cpu = new(CpuStats)
+-		}
+-		// To make a deep copy of a slice, we need to copy every value
+-		// in the slice. To make less memory allocation, we would like
+-		// to reuse the slice in dst if possible.
+-		percpu := dst.Cpu.Usage.PerCpu
+-		if len(percpu) != len(self.Cpu.Usage.PerCpu) {
+-			percpu = make([]uint64, len(self.Cpu.Usage.PerCpu))
+-		}
+-		dst.Cpu.Usage = self.Cpu.Usage
+-		dst.Cpu.Load = self.Cpu.Load
+-		copy(percpu, self.Cpu.Usage.PerCpu)
+-		dst.Cpu.Usage.PerCpu = percpu
+-	} else {
+-		dst.Cpu = nil
+-	}
+-	if self.Memory != nil {
+-		if dst.Memory == nil {
+-			dst.Memory = new(MemoryStats)
+-		}
+-		*dst.Memory = *self.Memory
+-	} else {
+-		dst.Memory = nil
+-	}
+-	return dst
+-}
+-
+-func timeEq(t1, t2 time.Time, tolerance time.Duration) bool {
+-	// t1 should not be later than t2
+-	if t1.After(t2) {
+-		t1, t2 = t2, t1
+-	}
+-	diff := t2.Sub(t1)
+-	if diff <= tolerance {
+-		return true
+-	}
+-	return false
+-}
+-
+-func durationEq(a, b time.Duration, tolerance time.Duration) bool {
+-	if a > b {
+-		a, b = b, a
+-	}
+-	diff := a - b
+-	if diff <= tolerance {
+-		return true
+-	}
+-	return false
+-}
+-
+-const (
+-	// 10ms, i.e. 0.01s
+-	timePrecision time.Duration = 10 * time.Millisecond
+-)
+-
+-// This function is useful because we do not require precise time
+-// representation.
+-func (a *ContainerStats) Eq(b *ContainerStats) bool {
+-	if !timeEq(a.Timestamp, b.Timestamp, timePrecision) {
+-		return false
+-	}
+-	return a.StatsEq(b)
+-}
+-
+-// Checks equality of the stats values.
+-func (a *ContainerStats) StatsEq(b *ContainerStats) bool {
+-	if !reflect.DeepEqual(a.Cpu, b.Cpu) {
+-		return false
+-	}
+-	if !reflect.DeepEqual(a.Memory, b.Memory) {
+-		return false
+-	}
+-	return true
+-}
+-
+-// Saturate CPU usage to 0.
+-func calculateCpuUsage(prev, cur uint64) uint64 {
+-	if prev > cur {
+-		return 0
+-	}
+-	return cur - prev
+-}
+diff --git a/Godeps/_workspace/src/github.com/google/cadvisor/info/container_test.go b/Godeps/_workspace/src/github.com/google/cadvisor/info/container_test.go
+deleted file mode 100644
+index bd730c1..0000000
+--- a/Godeps/_workspace/src/github.com/google/cadvisor/info/container_test.go
++++ /dev/null
+@@ -1,101 +0,0 @@
+-// Copyright 2014 Google Inc. All Rights Reserved.
+-//
+-// Licensed under the Apache License, Version 2.0 (the "License");
+-// you may not use this file except in compliance with the License.
+-// You may obtain a copy of the License at
+-//
+-//     http://www.apache.org/licenses/LICENSE-2.0
+-//
+-// Unless required by applicable law or agreed to in writing, software
+-// distributed under the License is distributed on an "AS IS" BASIS,
+-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+-// See the License for the specific language governing permissions and
+-// limitations under the License.
+-
+-package info
+-
+-import (
+-	"reflect"
+-	"testing"
+-	"time"
+-)
+-
+-func TestStatsStartTime(t *testing.T) {
+-	N := 10
+-	stats := make([]*ContainerStats, 0, N)
+-	ct := time.Now()
+-	for i := 0; i < N; i++ {
+-		s := &ContainerStats{
+-			Timestamp: ct.Add(time.Duration(i) * time.Second),
+-		}
+-		stats = append(stats, s)
+-	}
+-	cinfo := &ContainerInfo{
+-		ContainerReference: ContainerReference{
+-			Name: "/some/container",
+-		},
+-		Stats: stats,
+-	}
+-	ref := ct.Add(time.Duration(N-1) * time.Second)
+-	end := cinfo.StatsEndTime()
+-
+-	if !ref.Equal(end) {
+-		t.Errorf("end time is %v; should be %v", end, ref)
+-	}
+-}
+-
+-func TestStatsEndTime(t *testing.T) {
+-	N := 10
+-	stats := make([]*ContainerStats, 0, N)
+-	ct := time.Now()
+-	for i := 0; i < N; i++ {
+-		s := &ContainerStats{
+-			Timestamp: ct.Add(time.Duration(i) * time.Second),
+-		}
+-		stats = append(stats, s)
+-	}
+-	cinfo := &ContainerInfo{
+-		ContainerReference: ContainerReference{
+-			Name: "/some/container",
+-		},
+-		Stats: stats,
+-	}
+-	ref := ct
+-	start := cinfo.StatsStartTime()
+-
+-	if !ref.Equal(start) {
+-		t.Errorf("start time is %v; should be %v", start, ref)
+-	}
+-}
+-
+-func createStats(cpuUsage, memUsage uint64, timestamp time.Time) *ContainerStats {
+-	stats := &ContainerStats{
+-		Cpu:    &CpuStats{},
+-		Memory: &MemoryStats{},
+-	}
+-	stats.Cpu.Usage.PerCpu = []uint64{cpuUsage}
+-	stats.Cpu.Usage.Total = cpuUsage
+-	stats.Cpu.Usage.System = 0
+-	stats.Cpu.Usage.User = cpuUsage
+-	stats.Memory.Usage = memUsage
+-	stats.Timestamp = timestamp
+-	return stats
+-}
+-
+-func TestContainerStatsCopy(t *testing.T) {
+-	stats := createStats(100, 101, time.Now())
+-	shadowStats := stats.Copy(nil)
+-	if !reflect.DeepEqual(stats, shadowStats) {
+-		t.Errorf("Copy() returned different object")
+-	}
+-	stats.Cpu.Usage.PerCpu[0] = shadowStats.Cpu.Usage.PerCpu[0] + 1
+-	stats.Cpu.Load = shadowStats.Cpu.Load + 1
+-	stats.Memory.Usage = shadowStats.Memory.Usage + 1
+-	if reflect.DeepEqual(stats, shadowStats) {
+-		t.Errorf("Copy() did not deeply copy the object")
+-	}
+-	stats = shadowStats.Copy(stats)
+-	if !reflect.DeepEqual(stats, shadowStats) {
+-		t.Errorf("Copy() returned different object")
+-	}
+-}
+diff --git a/Godeps/_workspace/src/github.com/google/cadvisor/info/machine.go b/Godeps/_workspace/src/github.com/google/cadvisor/info/machine.go
+deleted file mode 100644
+index 7415dc9..0000000
+--- a/Godeps/_workspace/src/github.com/google/cadvisor/info/machine.go
++++ /dev/null
+@@ -1,42 +0,0 @@
+-// Copyright 2014 Google Inc. All Rights Reserved.
+-//
+-// Licensed under the Apache License, Version 2.0 (the "License");
+-// you may not use this file except in compliance with the License.
+-// You may obtain a copy of the License at
+-//
+-//     http://www.apache.org/licenses/LICENSE-2.0
+-//
+-// Unless required by applicable law or agreed to in writing, software
+-// distributed under the License is distributed on an "AS IS" BASIS,
+-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+-// See the License for the specific language governing permissions and
+-// limitations under the License.
+-
+-package info
+-
+-type MachineInfo struct {
+-	// The number of cores in this machine.
+-	NumCores int `json:"num_cores"`
+-
+-	// The amount of memory (in bytes) in this machine
+-	MemoryCapacity int64 `json:"memory_capacity"`
+-}
+-
+-type VersionInfo struct {
+-	// Kernel version.
+-	KernelVersion string `json:"kernel_version"`
+-
+-	// OS image being used for cadvisor container, or host image if running on host directly.
+-	ContainerOsVersion string `json:"container_os_version"`
+-
+-	// Docker version.
+-	DockerVersion string `json:"docker_version"`
+-
+-	// cAdvisor version.
+-	CadvisorVersion string `json:"cadvisor_version"`
+-}
+-
+-type MachineInfoFactory interface {
+-	GetMachineInfo() (*MachineInfo, error)
+-	GetVersionInfo() (*VersionInfo, error)
+-}
+diff --git a/Godeps/_workspace/src/github.com/google/cadvisor/info/test/datagen.go b/Godeps/_workspace/src/github.com/google/cadvisor/info/test/datagen.go
+deleted file mode 100644
+index 5d43ae5..0000000
+--- a/Godeps/_workspace/src/github.com/google/cadvisor/info/test/datagen.go
++++ /dev/null
+@@ -1,78 +0,0 @@
+-// Copyright 2014 Google Inc. All Rights Reserved.
+-//
+-// Licensed under the Apache License, Version 2.0 (the "License");
+-// you may not use this file except in compliance with the License.
+-// You may obtain a copy of the License at
+-//
+-//     http://www.apache.org/licenses/LICENSE-2.0
+-//
+-// Unless required by applicable law or agreed to in writing, software
+-// distributed under the License is distributed on an "AS IS" BASIS,
+-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+-// See the License for the specific language governing permissions and
+-// limitations under the License.
+-
+-package test
+-
+-import (
+-	"fmt"
+-	"math/rand"
+-	"time"
+-
+-	"github.com/google/cadvisor/info"
+-)
+-
+-func GenerateRandomStats(numStats, numCores int, duration time.Duration) []*info.ContainerStats {
+-	ret := make([]*info.ContainerStats, numStats)
+-	perCoreUsages := make([]uint64, numCores)
+-	currentTime := time.Now()
+-	for i := range perCoreUsages {
+-		perCoreUsages[i] = uint64(rand.Int63n(1000))
+-	}
+-	for i := 0; i < numStats; i++ {
+-		stats := new(info.ContainerStats)
+-		stats.Cpu = new(info.CpuStats)
+-		stats.Memory = new(info.MemoryStats)
+-		stats.Timestamp = currentTime
+-		currentTime = currentTime.Add(duration)
+-
+-		percore := make([]uint64, numCores)
+-		for i := range perCoreUsages {
+-			perCoreUsages[i] += uint64(rand.Int63n(1000))
+-			percore[i] = perCoreUsages[i]
+-			stats.Cpu.Usage.Total += percore[i]
+-		}
+-		stats.Cpu.Usage.PerCpu = percore
+-		stats.Cpu.Usage.User = stats.Cpu.Usage.Total
+-		stats.Cpu.Usage.System = 0
+-		stats.Memory.Usage = uint64(rand.Int63n(4096))
+-		ret[i] = stats
+-	}
+-	return ret
+-}
+-
+-func GenerateRandomContainerSpec(numCores int) *info.ContainerSpec {
+-	ret := &info.ContainerSpec{
+-		Cpu:    &info.CpuSpec{},
+-		Memory: &info.MemorySpec{},
+-	}
+-	ret.Cpu.Limit = uint64(1000 + rand.Int63n(2000))
+-	ret.Cpu.MaxLimit = uint64(1000 + rand.Int63n(2000))
+-	ret.Cpu.Mask = fmt.Sprintf("0-%d", numCores-1)
+-	ret.Memory.Limit = uint64(4096 + rand.Int63n(4096))
+-	return ret
+-}
+-
+-func GenerateRandomContainerInfo(containerName string, numCores int, query *info.ContainerInfoRequest, duration time.Duration) *info.ContainerInfo {
+-	stats := GenerateRandomStats(query.NumStats, numCores, duration)
+-	spec := GenerateRandomContainerSpec(numCores)
+-
+-	ret := &info.ContainerInfo{
+-		ContainerReference: info.ContainerReference{
+-			Name: containerName,
+-		},
+-		Spec:  spec,
+-		Stats: stats,
+-	}
+-	return ret
+-}
+diff --git a/Godeps/_workspace/src/github.com/google/cadvisor/info/version.go b/Godeps/_workspace/src/github.com/google/cadvisor/info/version.go
+deleted file mode 100644
+index 8675734..0000000
+--- a/Godeps/_workspace/src/github.com/google/cadvisor/info/version.go
++++ /dev/null
+@@ -1,18 +0,0 @@
+-// Copyright 2014 Google Inc. All Rights Reserved.
+-//
+-// Licensed under the Apache License, Version 2.0 (the "License");
+-// you may not use this file except in compliance with the License.
+-// You may obtain a copy of the License at
+-//
+-//     http://www.apache.org/licenses/LICENSE-2.0
+-//
+-// Unless required by applicable law or agreed to in writing, software
+-// distributed under the License is distributed on an "AS IS" BASIS,
+-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+-// See the License for the specific language governing permissions and
+-// limitations under the License.
+-
+-package info
+-
+-// Version of cAdvisor.
+-const VERSION = "0.4.0"
+diff --git a/Godeps/_workspace/src/github.com/google/gofuzz/.travis.yml b/Godeps/_workspace/src/github.com/google/gofuzz/.travis.yml
+deleted file mode 100644
+index 9384a54..0000000
+--- a/Godeps/_workspace/src/github.com/google/gofuzz/.travis.yml
++++ /dev/null
+@@ -1,12 +0,0 @@
+-language: go
+-
+-go:
+-  - 1.3
+-  - 1.2
+-  - tip
+-
+-install: 
+-  - go get code.google.com/p/go.tools/cmd/cover
+-
+-script:
+-  - go test -cover
+diff --git a/Godeps/_workspace/src/github.com/google/gofuzz/CONTRIBUTING.md b/Godeps/_workspace/src/github.com/google/gofuzz/CONTRIBUTING.md
+deleted file mode 100644
+index 51cf5cd..0000000
+--- a/Godeps/_workspace/src/github.com/google/gofuzz/CONTRIBUTING.md
++++ /dev/null
+@@ -1,67 +0,0 @@
+-# How to contribute #
+-
+-We'd love to accept your patches and contributions to this project.  There are
+-a just a few small guidelines you need to follow.
+-
+-
+-## Contributor License Agreement ##
+-
+-Contributions to any Google project must be accompanied by a Contributor
+-License Agreement.  This is not a copyright **assignment**, it simply gives
+-Google permission to use and redistribute your contributions as part of the
+-project.
+-
+-  * If you are an individual writing original source code and you're sure you
+-    own the intellectual property, then you'll need to sign an [individual
+-    CLA][].
+-
+-  * If you work for a company that wants to allow you to contribute your work,
+-    then you'll need to sign a [corporate CLA][].
+-
+-You generally only need to submit a CLA once, so if you've already submitted
+-one (even if it was for a different project), you probably don't need to do it
+-again.
+-
+-[individual CLA]: https://developers.google.com/open-source/cla/individual
+-[corporate CLA]: https://developers.google.com/open-source/cla/corporate
+-
+-
+-## Submitting a patch ##
+-
+-  1. It's generally best to start by opening a new issue describing the bug or
+-     feature you're intending to fix.  Even if you think it's relatively minor,
+-     it's helpful to know what people are working on.  Mention in the initial
+-     issue that you are planning to work on that bug or feature so that it can
+-     be assigned to you.
+-
+-  1. Follow the normal process of [forking][] the project, and setup a new
+-     branch to work in.  It's important that each group of changes be done in
+-     separate branches in order to ensure that a pull request only includes the
+-     commits related to that bug or feature.
+-
+-  1. Go makes it very simple to ensure properly formatted code, so always run
+-     `go fmt` on your code before committing it.  You should also run
+-     [golint][] over your code.  As noted in the [golint readme][], it's not
+-     strictly necessary that your code be completely "lint-free", but this will
+-     help you find common style issues.
+-
+-  1. Any significant changes should almost always be accompanied by tests.  The
+-     project already has good test coverage, so look at some of the existing
+-     tests if you're unsure how to go about it.  [gocov][] and [gocov-html][]
+-     are invaluable tools for seeing which parts of your code aren't being
+-     exercised by your tests.
+-
+-  1. Do your best to have [well-formed commit messages][] for each change.
+-     This provides consistency throughout the project, and ensures that commit
+-     messages are able to be formatted properly by various git tools.
+-
+-  1. Finally, push the commits to your fork and submit a [pull request][].
+-
+-[forking]: https://help.github.com/articles/fork-a-repo
+-[golint]: https://github.com/golang/lint
+-[golint readme]: https://github.com/golang/lint/blob/master/README
+-[gocov]: https://github.com/axw/gocov
+-[gocov-html]: https://github.com/matm/gocov-html
+-[well-formed commit messages]: http://tbaggery.com/2008/04/19/a-note-about-git-commit-messages.html
+-[squash]: http://git-scm.com/book/en/Git-Tools-Rewriting-History#Squashing-Commits
+-[pull request]: https://help.github.com/articles/creating-a-pull-request
+diff --git a/Godeps/_workspace/src/github.com/google/gofuzz/LICENSE b/Godeps/_workspace/src/github.com/google/gofuzz/LICENSE
+deleted file mode 100644
+index d645695..0000000
+--- a/Godeps/_workspace/src/github.com/google/gofuzz/LICENSE
++++ /dev/null
+@@ -1,202 +0,0 @@
+-
+-                                 Apache License
+-                           Version 2.0, January 2004
+-                        http://www.apache.org/licenses/
+-
+-   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+-
+-   1. Definitions.
+-
+-      "License" shall mean the terms and conditions for use, reproduction,
+-      and distribution as defined by Sections 1 through 9 of this document.
+-
+-      "Licensor" shall mean the copyright owner or entity authorized by
+-      the copyright owner that is granting the License.
+-
+-      "Legal Entity" shall mean the union of the acting entity and all
+-      other entities that control, are controlled by, or are under common
+-      control with that entity. For the purposes of this definition,
+-      "control" means (i) the power, direct or indirect, to cause the
+-      direction or management of such entity, whether by contract or
+-      otherwise, or (ii) ownership of fifty percent (50%) or more of the
+-      outstanding shares, or (iii) beneficial ownership of such entity.
+-
+-      "You" (or "Your") shall mean an individual or Legal Entity
+-      exercising permissions granted by this License.
+-
+-      "Source" form shall mean the preferred form for making modifications,
+-      including but not limited to software source code, documentation
+-      source, and configuration files.
+-
+-      "Object" form shall mean any form resulting from mechanical
+-      transformation or translation of a Source form, including but
+-      not limited to compiled object code, generated documentation,
+-      and conversions to other media types.
+-
+-      "Work" shall mean the work of authorship, whether in Source or
+-      Object form, made available under the License, as indicated by a
+-      copyright notice that is included in or attached to the work
+-      (an example is provided in the Appendix below).
+-
+-      "Derivative Works" shall mean any work, whether in Source or Object
+-      form, that is based on (or derived from) the Work and for which the
+-      editorial revisions, annotations, elaborations, or other modifications
+-      represent, as a whole, an original work of authorship. For the purposes
+-      of this License, Derivative Works shall not include works that remain
+-      separable from, or merely link (or bind by name) to the interfaces of,
+-      the Work and Derivative Works thereof.
+-
+-      "Contribution" shall mean any work of authorship, including
+-      the original version of the Work and any modifications or additions
+-      to that Work or Derivative Works thereof, that is intentionally
+-      submitted to Licensor for inclusion in the Work by the copyright owner
+-      or by an individual or Legal Entity authorized to submit on behalf of
+-      the copyright owner. For the purposes of this definition, "submitted"
+-      means any form of electronic, verbal, or written communication sent
+-      to the Licensor or its representatives, including but not limited to
+-      communication on electronic mailing lists, source code control systems,
+-      and issue tracking systems that are managed by, or on behalf of, the
+-      Licensor for the purpose of discussing and improving the Work, but
+-      excluding communication that is conspicuously marked or otherwise
+-      designated in writing by the copyright owner as "Not a Contribution."
+-
+-      "Contributor" shall mean Licensor and any individual or Legal Entity
+-      on behalf of whom a Contribution has been received by Licensor and
+-      subsequently incorporated within the Work.
+-
+-   2. Grant of Copyright License. Subject to the terms and conditions of
+-      this License, each Contributor hereby grants to You a perpetual,
+-      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+-      copyright license to reproduce, prepare Derivative Works of,
+-      publicly display, publicly perform, sublicense, and distribute the
+-      Work and such Derivative Works in Source or Object form.
+-
+-   3. Grant of Patent License. Subject to the terms and conditions of
+-      this License, each Contributor hereby grants to You a perpetual,
+-      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+-      (except as stated in this section) patent license to make, have made,
+-      use, offer to sell, sell, import, and otherwise transfer the Work,
+-      where such license applies only to those patent claims licensable
+-      by such Contributor that are necessarily infringed by their
+-      Contribution(s) alone or by combination of their Contribution(s)
+-      with the Work to which such Contribution(s) was submitted. If You
+-      institute patent litigation against any entity (including a
+-      cross-claim or counterclaim in a lawsuit) alleging that the Work
+-      or a Contribution incorporated within the Work constitutes direct
+-      or contributory patent infringement, then any patent licenses
+-      granted to You under this License for that Work shall terminate
+-      as of the date such litigation is filed.
+-
+-   4. Redistribution. You may reproduce and distribute copies of the
+-      Work or Derivative Works thereof in any medium, with or without
+-      modifications, and in Source or Object form, provided that You
+-      meet the following conditions:
+-
+-      (a) You must give any other recipients of the Work or
+-          Derivative Works a copy of this License; and
+-
+-      (b) You must cause any modified files to carry prominent notices
+-          stating that You changed the files; and
+-
+-      (c) You must retain, in the Source form of any Derivative Works
+-          that You distribute, all copyright, patent, trademark, and
+-          attribution notices from the Source form of the Work,
+-          excluding those notices that do not pertain to any part of
+-          the Derivative Works; and
+-
+-      (d) If the Work includes a "NOTICE" text file as part of its
+-          distribution, then any Derivative Works that You distribute must
+-          include a readable copy of the attribution notices contained
+-          within such NOTICE file, excluding those notices that do not
+-          pertain to any part of the Derivative Works, in at least one
+-          of the following places: within a NOTICE text file distributed
+-          as part of the Derivative Works; within the Source form or
+-          documentation, if provided along with the Derivative Works; or,
+-          within a display generated by the Derivative Works, if and
+-          wherever such third-party notices normally appear. The contents
+-          of the NOTICE file are for informational purposes only and
+-          do not modify the License. You may add Your own attribution
+-          notices within Derivative Works that You distribute, alongside
+-          or as an addendum to the NOTICE text from the Work, provided
+-          that such additional attribution notices cannot be construed
+-          as modifying the License.
+-
+-      You may add Your own copyright statement to Your modifications and
+-      may provide additional or different license terms and conditions
+-      for use, reproduction, or distribution of Your modifications, or
+-      for any such Derivative Works as a whole, provided Your use,
+-      reproduction, and distribution of the Work otherwise complies with
+-      the conditions stated in this License.
+-
+-   5. Submission of Contributions. Unless You explicitly state otherwise,
+-      any Contribution intentionally submitted for inclusion in the Work
+-      by You to the Licensor shall be under the terms and conditions of
+-      this License, without any additional terms or conditions.
+-      Notwithstanding the above, nothing herein shall supersede or modify
+-      the terms of any separate license agreement you may have executed
+-      with Licensor regarding such Contributions.
+-
+-   6. Trademarks. This License does not grant permission to use the trade
+-      names, trademarks, service marks, or product names of the Licensor,
+-      except as required for reasonable and customary use in describing the
+-      origin of the Work and reproducing the content of the NOTICE file.
+-
+-   7. Disclaimer of Warranty. Unless required by applicable law or
+-      agreed to in writing, Licensor provides the Work (and each
+-      Contributor provides its Contributions) on an "AS IS" BASIS,
+-      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+-      implied, including, without limitation, any warranties or conditions
+-      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+-      PARTICULAR PURPOSE. You are solely responsible for determining the
+-      appropriateness of using or redistributing the Work and assume any
+-      risks associated with Your exercise of permissions under this License.
+-
+-   8. Limitation of Liability. In no event and under no legal theory,
+-      whether in tort (including negligence), contract, or otherwise,
+-      unless required by applicable law (such as deliberate and grossly
+-      negligent acts) or agreed to in writing, shall any Contributor be
+-      liable to You for damages, including any direct, indirect, special,
+-      incidental, or consequential damages of any character arising as a
+-      result of this License or out of the use or inability to use the
+-      Work (including but not limited to damages for loss of goodwill,
+-      work stoppage, computer failure or malfunction, or any and all
+-      other commercial damages or losses), even if such Contributor
+-      has been advised of the possibility of such damages.
+-
+-   9. Accepting Warranty or Additional Liability. While redistributing
+-      the Work or Derivative Works thereof, You may choose to offer,
+-      and charge a fee for, acceptance of support, warranty, indemnity,
+-      or other liability obligations and/or rights consistent with this
+-      License. However, in accepting such obligations, You may act only
+-      on Your own behalf and on Your sole responsibility, not on behalf
+-      of any other Contributor, and only if You agree to indemnify,
+-      defend, and hold each Contributor harmless for any liability
+-      incurred by, or claims asserted against, such Contributor by reason
+-      of your accepting any such warranty or additional liability.
+-
+-   END OF TERMS AND CONDITIONS
+-
+-   APPENDIX: How to apply the Apache License to your work.
+-
+-      To apply the Apache License to your work, attach the following
+-      boilerplate notice, with the fields enclosed by brackets "[]"
+-      replaced with your own identifying information. (Don't include
+-      the brackets!)  The text should be enclosed in the appropriate
+-      comment syntax for the file format. We also recommend that a
+-      file or class name and description of purpose be included on the
+-      same "printed page" as the copyright notice for easier
+-      identification within third-party archives.
+-
+-   Copyright [yyyy] [name of copyright owner]
+-
+-   Licensed under the Apache License, Version 2.0 (the "License");
+-   you may not use this file except in compliance with the License.
+-   You may obtain a copy of the License at
+-
+-       http://www.apache.org/licenses/LICENSE-2.0
+-
+-   Unless required by applicable law or agreed to in writing, software
+-   distributed under the License is distributed on an "AS IS" BASIS,
+-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+-   See the License for the specific language governing permissions and
+-   limitations under the License.
+diff --git a/Godeps/_workspace/src/github.com/google/gofuzz/README.md b/Godeps/_workspace/src/github.com/google/gofuzz/README.md
+deleted file mode 100644
+index 68fcf2c..0000000
+--- a/Godeps/_workspace/src/github.com/google/gofuzz/README.md
++++ /dev/null
+@@ -1,71 +0,0 @@
+-gofuzz
+-======
+-
+-gofuzz is a library for populating go objects with random values.
+-
+-[![GoDoc](https://godoc.org/github.com/google/gofuzz?status.png)](https://godoc.org/github.com/google/gofuzz)
+-[![Travis](https://travis-ci.org/google/gofuzz.svg?branch=master)](https://travis-ci.org/google/gofuzz)
+-
+-This is useful for testing:
+-
+-* Do your project's objects really serialize/unserialize correctly in all cases?
+-* Is there an incorrectly formatted object that will cause your project to panic?
+-
+-Import with ```import "github.com/google/gofuzz"```
+-
+-You can use it on single variables:
+-```
+-f := fuzz.New()
+-var myInt int
+-f.Fuzz(&myInt) // myInt gets a random value.
+-```
+-
+-You can use it on maps:
+-```
+-f := fuzz.New().NilChance(0).NumElements(1, 1)
+-var myMap map[ComplexKeyType]string
+-f.Fuzz(&myMap) // myMap will have exactly one element.
+-```
+-
+-Customize the chance of getting a nil pointer:
+-```
+-f := fuzz.New().NilChance(.5)
+-var fancyStruct struct {
+-  A, B, C, D *string
+-}
+-f.Fuzz(&fancyStruct) // About half the pointers should be set.
+-```
+-
+-You can even customize the randomization completely if needed:
+-```
+-type MyEnum string
+-const (
+-        A MyEnum = "A"
+-        B MyEnum = "B"
+-)
+-type MyInfo struct {
+-        Type MyEnum
+-        AInfo *string
+-        BInfo *string
+-}
+-
+-f := fuzz.New().NilChance(0).Funcs(
+-        func(e *MyInfo, c fuzz.Continue) {
+-                switch c.Intn(2) {
+-                case 0:
+-                        e.Type = A
+-                        c.Fuzz(&e.AInfo)
+-                case 1:
+-                        e.Type = B
+-                        c.Fuzz(&e.BInfo)
+-                }
+-        },
+-)
+-
+-var myObject MyInfo
+-f.Fuzz(&myObject) // Type will correspond to whether A or B info is set.
+-```
+-
+-See more examples in ```example_test.go```.
+-
+-Happy testing!
+diff --git a/Godeps/_workspace/src/github.com/google/gofuzz/doc.go b/Godeps/_workspace/src/github.com/google/gofuzz/doc.go
+deleted file mode 100644
+index 9f9956d..0000000
+--- a/Godeps/_workspace/src/github.com/google/gofuzz/doc.go
++++ /dev/null
+@@ -1,18 +0,0 @@
+-/*
+-Copyright 2014 Google Inc. All rights reserved.
+-
+-Licensed under the Apache License, Version 2.0 (the "License");
+-you may not use this file except in compliance with the License.
+-You may obtain a copy of the License at
+-
+-    http://www.apache.org/licenses/LICENSE-2.0
+-
+-Unless required by applicable law or agreed to in writing, software
+-distributed under the License is distributed on an "AS IS" BASIS,
+-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+-See the License for the specific language governing permissions and
+-limitations under the License.
+-*/
+-
+-// Package fuzz is a library for populating go objects with random values.
+-package fuzz
+diff --git a/Godeps/_workspace/src/github.com/google/gofuzz/example_test.go b/Godeps/_workspace/src/github.com/google/gofuzz/example_test.go
+deleted file mode 100644
+index 792707a..0000000
+--- a/Godeps/_workspace/src/github.com/google/gofuzz/example_test.go
++++ /dev/null
+@@ -1,225 +0,0 @@
+-/*
+-Copyright 2014 Google Inc. All rights reserved.
+-
+-Licensed under the Apache License, Version 2.0 (the "License");
+-you may not use this file except in compliance with the License.
+-You may obtain a copy of the License at
+-
+-    http://www.apache.org/licenses/LICENSE-2.0
+-
+-Unless required by applicable law or agreed to in writing, software
+-distributed under the License is distributed on an "AS IS" BASIS,
+-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+-See the License for the specific language governing permissions and
+-limitations under the License.
+-*/
+-
+-package fuzz_test
+-
+-import (
+-	"encoding/json"
+-	"fmt"
+-	"math/rand"
+-
+-	"github.com/google/gofuzz"
+-)
+-
+-func ExampleSimple() {
+-	type MyType struct {
+-		A string
+-		B string
+-		C int
+-		D struct {
+-			E float64
+-		}
+-	}
+-
+-	f := fuzz.New()
+-	object := MyType{}
+-
+-	uniqueObjects := map[MyType]int{}
+-
+-	for i := 0; i < 1000; i++ {
+-		f.Fuzz(&object)
+-		uniqueObjects[object]++
+-	}
+-	fmt.Printf("Got %v unique objects.\n", len(uniqueObjects))
+-	// Output:
+-	// Got 1000 unique objects.
+-}
+-
+-func ExampleCustom() {
+-	type MyType struct {
+-		A int
+-		B string
+-	}
+-
+-	counter := 0
+-	f := fuzz.New().Funcs(
+-		func(i *int, c fuzz.Continue) {
+-			*i = counter
+-			counter++
+-		},
+-	)
+-	object := MyType{}
+-
+-	uniqueObjects := map[MyType]int{}
+-
+-	for i := 0; i < 100; i++ {
+-		f.Fuzz(&object)
+-		if object.A != i {
+-			fmt.Printf("Unexpected value: %#v\n", object)
+-		}
+-		uniqueObjects[object]++
+-	}
+-	fmt.Printf("Got %v unique objects.\n", len(uniqueObjects))
+-	// Output:
+-	// Got 100 unique objects.
+-}
+-
+-func ExampleComplex() {
+-	type OtherType struct {
+-		A string
+-		B string
+-	}
+-	type MyType struct {
+-		Pointer             *OtherType
+-		Map                 map[string]OtherType
+-		PointerMap          *map[string]OtherType
+-		Slice               []OtherType
+-		SlicePointer        []*OtherType
+-		PointerSlicePointer *[]*OtherType
+-	}
+-
+-	f := fuzz.New().RandSource(rand.NewSource(0)).NilChance(0).NumElements(1, 1).Funcs(
+-		func(o *OtherType, c fuzz.Continue) {
+-			o.A = "Foo"
+-			o.B = "Bar"
+-		},
+-		func(op **OtherType, c fuzz.Continue) {
+-			*op = &OtherType{"A", "B"}
+-		},
+-		func(m map[string]OtherType, c fuzz.Continue) {
+-			m["Works Because"] = OtherType{
+-				"Fuzzer",
+-				"Preallocated",
+-			}
+-		},
+-	)
+-	object := MyType{}
+-	f.Fuzz(&object)
+-	bytes, err := json.MarshalIndent(&object, "", "    ")
+-	if err != nil {
+-		fmt.Printf("error: %v\n", err)
+-	}
+-	fmt.Printf("%s\n", string(bytes))
+-	// Output:
+-	// {
+-	//     "Pointer": {
+-	//         "A": "A",
+-	//         "B": "B"
+-	//     },
+-	//     "Map": {
+-	//         "Works Because": {
+-	//             "A": "Fuzzer",
+-	//             "B": "Preallocated"
+-	//         }
+-	//     },
+-	//     "PointerMap": {
+-	//         "Works Because": {
+-	//             "A": "Fuzzer",
+-	//             "B": "Preallocated"
+-	//         }
+-	//     },
+-	//     "Slice": [
+-	//         {
+-	//             "A": "Foo",
+-	//             "B": "Bar"
+-	//         }
+-	//     ],
+-	//     "SlicePointer": [
+-	//         {
+-	//             "A": "A",
+-	//             "B": "B"
+-	//         }
+-	//     ],
+-	//     "PointerSlicePointer": [
+-	//         {
+-	//             "A": "A",
+-	//             "B": "B"
+-	//         }
+-	//     ]
+-	// }
+-}
+-
+-func ExampleMap() {
+-	f := fuzz.New().NilChance(0).NumElements(1, 1)
+-	var myMap map[struct{ A, B, C int }]string
+-	f.Fuzz(&myMap)
+-	fmt.Printf("myMap has %v element(s).\n", len(myMap))
+-	// Output:
+-	// myMap has 1 element(s).
+-}
+-
+-func ExampleSingle() {
+-	f := fuzz.New()
+-	var i int
+-	f.Fuzz(&i)
+-
+-	// Technically, we'd expect this to fail one out of 2 billion attempts...
+-	fmt.Printf("(i == 0) == %v", i == 0)
+-	// Output:
+-	// (i == 0) == false
+-}
+-
+-func ExampleEnum() {
+-	type MyEnum string
+-	const (
+-		A MyEnum = "A"
+-		B MyEnum = "B"
+-	)
+-	type MyInfo struct {
+-		Type  MyEnum
+-		AInfo *string
+-		BInfo *string
+-	}
+-
+-	f := fuzz.New().NilChance(0).Funcs(
+-		func(e *MyInfo, c fuzz.Continue) {
+-			// Note c's embedded Rand allows for direct use.
+-			// We could also use c.RandBool() here.
+-			switch c.Intn(2) {
+-			case 0:
+-				e.Type = A
+-				c.Fuzz(&e.AInfo)
+-			case 1:
+-				e.Type = B
+-				c.Fuzz(&e.BInfo)
+-			}
+-		},
+-	)
+-
+-	for i := 0; i < 100; i++ {
+-		var myObject MyInfo
+-		f.Fuzz(&myObject)
+-		switch myObject.Type {
+-		case A:
+-			if myObject.AInfo == nil {
+-				fmt.Println("AInfo should have been set!")
+-			}
+-			if myObject.BInfo != nil {
+-				fmt.Println("BInfo should NOT have been set!")
+-			}
+-		case B:
+-			if myObject.BInfo == nil {
+-				fmt.Println("BInfo should have been set!")
+-			}
+-			if myObject.AInfo != nil {
+-				fmt.Println("AInfo should NOT have been set!")
+-			}
+-		default:
+-			fmt.Println("Invalid enum value!")
+-		}
+-	}
+-	// Output:
+-}
+diff --git a/Godeps/_workspace/src/github.com/google/gofuzz/fuzz.go b/Godeps/_workspace/src/github.com/google/gofuzz/fuzz.go
+deleted file mode 100644
+index 31c2838..0000000
+--- a/Godeps/_workspace/src/github.com/google/gofuzz/fuzz.go
++++ /dev/null
+@@ -1,366 +0,0 @@
+-/*
+-Copyright 2014 Google Inc. All rights reserved.
+-
+-Licensed under the Apache License, Version 2.0 (the "License");
+-you may not use this file except in compliance with the License.
+-You may obtain a copy of the License at
+-
+-    http://www.apache.org/licenses/LICENSE-2.0
+-
+-Unless required by applicable law or agreed to in writing, software
+-distributed under the License is distributed on an "AS IS" BASIS,
+-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+-See the License for the specific language governing permissions and
+-limitations under the License.
+-*/
+-
+-package fuzz
+-
+-import (
+-	"fmt"
+-	"math/rand"
+-	"reflect"
+-	"time"
+-)
+-
+-// fuzzFuncMap is a map from a type to a fuzzFunc that handles that type.
+-type fuzzFuncMap map[reflect.Type]reflect.Value
+-
+-// Fuzzer knows how to fill any object with random fields.
+-type Fuzzer struct {
+-	fuzzFuncs   fuzzFuncMap
+-	r           *rand.Rand
+-	nilChance   float64
+-	minElements int
+-	maxElements int
+-}
+-
+-// New returns a new Fuzzer. Customize your Fuzzer further by calling Funcs,
+-// RandSource, NilChance, or NumElements in any order.
+-func New() *Fuzzer {
+-	f := &Fuzzer{
+-		fuzzFuncs:   fuzzFuncMap{},
+-		r:           rand.New(rand.NewSource(time.Now().UnixNano())),
+-		nilChance:   .2,
+-		minElements: 1,
+-		maxElements: 10,
+-	}
+-	return f
+-}
+-
+-// Funcs adds each entry in fuzzFuncs as a custom fuzzing function.
+-//
+-// Each entry in fuzzFuncs must be a function taking two parameters.
+-// The first parameter must be a pointer or map. It is the variable that
+-// function will fill with random data. The second parameter must be a
+-// fuzz.Continue, which will provide a source of randomness and a way
+-// to automatically continue fuzzing smaller pieces of the first parameter.
+-//
+-// These functions are called sensibly, e.g., if you wanted custom string
+-// fuzzing, the function `func(s *string, c fuzz.Continue)` would get
+-// called and passed the address of strings. Maps and pointers will always
+-// be made/new'd for you, ignoring the NilChance option. For slices, it
+-// doesn't make much sense to pre-create them--Fuzzer doesn't know how
+-// long you want your slice--so take a pointer to a slice, and make it
+-// yourself. (If you don't want your map/pointer type pre-made, take a
+-// pointer to it, and make it yourself.) See the examples for a range of
+-// custom functions.
+-func (f *Fuzzer) Funcs(fuzzFuncs ...interface{}) *Fuzzer {
+-	for i := range fuzzFuncs {
+-		v := reflect.ValueOf(fuzzFuncs[i])
+-		if v.Kind() != reflect.Func {
+-			panic("Need only funcs!")
+-		}
+-		t := v.Type()
+-		if t.NumIn() != 2 || t.NumOut() != 0 {
+-			panic("Need 2 in and 0 out params!")
+-		}
+-		argT := t.In(0)
+-		switch argT.Kind() {
+-		case reflect.Ptr, reflect.Map:
+-		default:
+-			panic("fuzzFunc must take pointer or map type")
+-		}
+-		if t.In(1) != reflect.TypeOf(Continue{}) {
+-			panic("fuzzFunc's second parameter must be type fuzz.Continue")
+-		}
+-		f.fuzzFuncs[argT] = v
+-	}
+-	return f
+-}
+-
+-// RandSource causes f to get values from the given source of randomness.
+-// Use if you want deterministic fuzzing.
+-func (f *Fuzzer) RandSource(s rand.Source) *Fuzzer {
+-	f.r = rand.New(s)
+-	return f
+-}
+-
+-// NilChance sets the probability of creating a nil pointer, map, or slice to
+-// 'p'. 'p' should be between 0 (no nils) and 1 (all nils), inclusive.
+-func (f *Fuzzer) NilChance(p float64) *Fuzzer {
+-	if p < 0 || p > 1 {
+-		panic("p should be between 0 and 1, inclusive.")
+-	}
+-	f.nilChance = p
+-	return f
+-}
+-
+-// NumElements sets the minimum and maximum number of elements that will be
+-// added to a non-nil map or slice.
+-func (f *Fuzzer) NumElements(atLeast, atMost int) *Fuzzer {
+-	if atLeast > atMost {
+-		panic("atLeast must be <= atMost")
+-	}
+-	if atLeast < 0 {
+-		panic("atLeast must be >= 0")
+-	}
+-	f.minElements = atLeast
+-	f.maxElements = atMost
+-	return f
+-}
+-
+-func (f *Fuzzer) genElementCount() int {
+-	if f.minElements == f.maxElements {
+-		return f.minElements
+-	}
+-	return f.minElements + f.r.Intn(f.maxElements-f.minElements)
+-}
+-
+-func (f *Fuzzer) genShouldFill() bool {
+-	return f.r.Float64() > f.nilChance
+-}
+-
+-// Fuzz recursively fills all of obj's fields with something random.
+-// Not safe for cyclic or tree-like structs!
+-// obj must be a pointer. Only exported (public) fields can be set (thanks, golang :/ )
+-// Intended for tests, so will panic on bad input or unimplemented fields.
+-func (f *Fuzzer) Fuzz(obj interface{}) {
+-	v := reflect.ValueOf(obj)
+-	if v.Kind() != reflect.Ptr {
+-		panic("needed ptr!")
+-	}
+-	v = v.Elem()
+-	f.doFuzz(v)
+-}
+-
+-func (f *Fuzzer) doFuzz(v reflect.Value) {
+-	if !v.CanSet() {
+-		return
+-	}
+-	// Check for both pointer and non-pointer custom functions.
+-	if v.CanAddr() && f.tryCustom(v.Addr()) {
+-		return
+-	}
+-	if f.tryCustom(v) {
+-		return
+-	}
+-	if fn, ok := fillFuncMap[v.Kind()]; ok {
+-		fn(v, f.r)
+-		return
+-	}
+-	switch v.Kind() {
+-	case reflect.Map:
+-		if f.genShouldFill() {
+-			v.Set(reflect.MakeMap(v.Type()))
+-			n := f.genElementCount()
+-			for i := 0; i < n; i++ {
+-				key := reflect.New(v.Type().Key()).Elem()
+-				f.doFuzz(key)
+-				val := reflect.New(v.Type().Elem()).Elem()
+-				f.doFuzz(val)
+-				v.SetMapIndex(key, val)
+-			}
+-			return
+-		}
+-		v.Set(reflect.Zero(v.Type()))
+-	case reflect.Ptr:
+-		if f.genShouldFill() {
+-			v.Set(reflect.New(v.Type().Elem()))
+-			f.doFuzz(v.Elem())
+-			return
+-		}
+-		v.Set(reflect.Zero(v.Type()))
+-	case reflect.Slice:
+-		if f.genShouldFill() {
+-			n := f.genElementCount()
+-			v.Set(reflect.MakeSlice(v.Type(), n, n))
+-			for i := 0; i < n; i++ {
+-				f.doFuzz(v.Index(i))
+-			}
+-			return
+-		}
+-		v.Set(reflect.Zero(v.Type()))
+-	case reflect.Struct:
+-		for i := 0; i < v.NumField(); i++ {
+-			f.doFuzz(v.Field(i))
+-		}
+-	case reflect.Array:
+-		fallthrough
+-	case reflect.Chan:
+-		fallthrough
+-	case reflect.Func:
+-		fallthrough
+-	case reflect.Interface:
+-		fallthrough
+-	default:
+-		panic(fmt.Sprintf("Can't handle %#v", v.Interface()))
+-	}
+-}
+-
+-// tryCustom searches for custom handlers, and returns true iff it finds a match
+-// and successfully randomizes v.
+-func (f *Fuzzer) tryCustom(v reflect.Value) bool {
+-	doCustom, ok := f.fuzzFuncs[v.Type()]
+-	if !ok {
+-		return false
+-	}
+-
+-	switch v.Kind() {
+-	case reflect.Ptr:
+-		if v.IsNil() {
+-			if !v.CanSet() {
+-				return false
+-			}
+-			v.Set(reflect.New(v.Type().Elem()))
+-		}
+-	case reflect.Map:
+-		if v.IsNil() {
+-			if !v.CanSet() {
+-				return false
+-			}
+-			v.Set(reflect.MakeMap(v.Type()))
+-		}
+-	default:
+-		return false
+-	}
+-
+-	doCustom.Call([]reflect.Value{v, reflect.ValueOf(Continue{
+-		f:    f,
+-		Rand: f.r,
+-	})})
+-	return true
+-}
+-
+-// Continue can be passed to custom fuzzing functions to allow them to use
+-// the correct source of randomness and to continue fuzzing their members.
+-type Continue struct {
+-	f *Fuzzer
+-
+-	// For convenience, Continue implements rand.Rand via embedding.
+-	// Use this for generating any randomness if you want your fuzzing
+-	// to be repeatable for a given seed.
+-	*rand.Rand
+-}
+-
+-// Fuzz continues fuzzing obj. obj must be a pointer.
+-func (c Continue) Fuzz(obj interface{}) {
+-	v := reflect.ValueOf(obj)
+-	if v.Kind() != reflect.Ptr {
+-		panic("needed ptr!")
+-	}
+-	v = v.Elem()
+-	c.f.doFuzz(v)
+-}
+-
+-// RandString makes a random string up to 20 characters long. The returned string
+-// may include a variety of (valid) UTF-8 encodings.
+-func (c Continue) RandString() string {
+-	return randString(c.Rand)
+-}
+-
+-// RandUint64 makes random 64 bit numbers.
+-// Weirdly, rand doesn't have a function that gives you 64 random bits.
+-func (c Continue) RandUint64() uint64 {
+-	return randUint64(c.Rand)
+-}
+-
+-// RandBool returns true or false randomly.
+-func (c Continue) RandBool() bool {
+-	return randBool(c.Rand)
+-}
+-
+-func fuzzInt(v reflect.Value, r *rand.Rand) {
+-	v.SetInt(int64(randUint64(r)))
+-}
+-
+-func fuzzUint(v reflect.Value, r *rand.Rand) {
+-	v.SetUint(randUint64(r))
+-}
+-
+-var fillFuncMap = map[reflect.Kind]func(reflect.Value, *rand.Rand){
+-	reflect.Bool: func(v reflect.Value, r *rand.Rand) {
+-		v.SetBool(randBool(r))
+-	},
+-	reflect.Int:     fuzzInt,
+-	reflect.Int8:    fuzzInt,
+-	reflect.Int16:   fuzzInt,
+-	reflect.Int32:   fuzzInt,
+-	reflect.Int64:   fuzzInt,
+-	reflect.Uint:    fuzzUint,
+-	reflect.Uint8:   fuzzUint,
+-	reflect.Uint16:  fuzzUint,
+-	reflect.Uint32:  fuzzUint,
+-	reflect.Uint64:  fuzzUint,
+-	reflect.Uintptr: fuzzUint,
+-	reflect.Float32: func(v reflect.Value, r *rand.Rand) {
+-		v.SetFloat(float64(r.Float32()))
+-	},
+-	reflect.Float64: func(v reflect.Value, r *rand.Rand) {
+-		v.SetFloat(r.Float64())
+-	},
+-	reflect.Complex64: func(v reflect.Value, r *rand.Rand) {
+-		panic("unimplemented")
+-	},
+-	reflect.Complex128: func(v reflect.Value, r *rand.Rand) {
+-		panic("unimplemented")
+-	},
+-	reflect.String: func(v reflect.Value, r *rand.Rand) {
+-		v.SetString(randString(r))
+-	},
+-	reflect.UnsafePointer: func(v reflect.Value, r *rand.Rand) {
+-		panic("unimplemented")
+-	},
+-}
+-
+-// randBool returns true or false randomly.
+-func randBool(r *rand.Rand) bool {
+-	return r.Int()&1 == 1
+-}
+-
+-type charRange struct {
+-	first, last rune
+-}
+-
+-// choose returns a random unicode character from the given range, using the
+-// given randomness source.
+-func (r *charRange) choose(rand *rand.Rand) rune {
+-	count := int64(r.last - r.first)
+-	return r.first + rune(rand.Int63n(count))
+-}
+-
+-var unicodeRanges = []charRange{
+-	{' ', '~'},           // ASCII characters
+-	{'\u00a0', '\u02af'}, // Multi-byte encoded characters
+-	{'\u4e00', '\u9fff'}, // Common CJK (even longer encodings)
+-}
+-
+-// randString makes a random string up to 20 characters long. The returned string
+-// may include a variety of (valid) UTF-8 encodings.
+-func randString(r *rand.Rand) string {
+-	n := r.Intn(20)
+-	runes := make([]rune, n)
+-	for i := range runes {
+-		runes[i] = unicodeRanges[r.Intn(len(unicodeRanges))].choose(r)
+-	}
+-	return string(runes)
+-}
+-
+-// randUint64 makes random 64 bit numbers.
+-// Weirdly, rand doesn't have a function that gives you 64 random bits.
+-func randUint64(r *rand.Rand) uint64 {
+-	return uint64(r.Uint32())<<32 | uint64(r.Uint32())
+-}
+diff --git a/Godeps/_workspace/src/github.com/google/gofuzz/fuzz_test.go b/Godeps/_workspace/src/github.com/google/gofuzz/fuzz_test.go
+deleted file mode 100644
+index 4f0d4db..0000000
+--- a/Godeps/_workspace/src/github.com/google/gofuzz/fuzz_test.go
++++ /dev/null
+@@ -1,258 +0,0 @@
+-/*
+-Copyright 2014 Google Inc. All rights reserved.
+-
+-Licensed under the Apache License, Version 2.0 (the "License");
+-you may not use this file except in compliance with the License.
+-You may obtain a copy of the License at
+-
+-    http://www.apache.org/licenses/LICENSE-2.0
+-
+-Unless required by applicable law or agreed to in writing, software
+-distributed under the License is distributed on an "AS IS" BASIS,
+-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+-See the License for the specific language governing permissions and
+-limitations under the License.
+-*/
+-
+-package fuzz
+-
+-import (
+-	"reflect"
+-	"testing"
+-)
+-
+-func TestFuzz_basic(t *testing.T) {
+-	obj := &struct {
+-		I    int
+-		I8   int8
+-		I16  int16
+-		I32  int32
+-		I64  int64
+-		U    uint
+-		U8   uint8
+-		U16  uint16
+-		U32  uint32
+-		U64  uint64
+-		Uptr uintptr
+-		S    string
+-		B    bool
+-	}{}
+-
+-	failed := map[string]int{}
+-	for i := 0; i < 10; i++ {
+-		New().Fuzz(obj)
+-
+-		if n, v := "i", obj.I; v == 0 {
+-			failed[n] = failed[n] + 1
+-		}
+-		if n, v := "i8", obj.I8; v == 0 {
+-			failed[n] = failed[n] + 1
+-		}
+-		if n, v := "i16", obj.I16; v == 0 {
+-			failed[n] = failed[n] + 1
+-		}
+-		if n, v := "i32", obj.I32; v == 0 {
+-			failed[n] = failed[n] + 1
+-		}
+-		if n, v := "i64", obj.I64; v == 0 {
+-			failed[n] = failed[n] + 1
+-		}
+-		if n, v := "u", obj.U; v == 0 {
+-			failed[n] = failed[n] + 1
+-		}
+-		if n, v := "u8", obj.U8; v == 0 {
+-			failed[n] = failed[n] + 1
+-		}
+-		if n, v := "u16", obj.U16; v == 0 {
+-			failed[n] = failed[n] + 1
+-		}
+-		if n, v := "u32", obj.U32; v == 0 {
+-			failed[n] = failed[n] + 1
+-		}
+-		if n, v := "u64", obj.U64; v == 0 {
+-			failed[n] = failed[n] + 1
+-		}
+-		if n, v := "uptr", obj.Uptr; v == 0 {
+-			failed[n] = failed[n] + 1
+-		}
+-		if n, v := "s", obj.S; v == "" {
+-			failed[n] = failed[n] + 1
+-		}
+-		if n, v := "b", obj.B; v == false {
+-			failed[n] = failed[n] + 1
+-		}
+-	}
+-	checkFailed(t, failed)
+-}
+-
+-func checkFailed(t *testing.T, failed map[string]int) {
+-	for k, v := range failed {
+-		if v > 8 {
+-			t.Errorf("%v seems to not be getting set, was zero value %v times", k, v)
+-		}
+-	}
+-}
+-
+-func TestFuzz_structptr(t *testing.T) {
+-	obj := &struct {
+-		A *struct {
+-			S string
+-		}
+-	}{}
+-
+-	f := New().NilChance(.5)
+-	failed := map[string]int{}
+-	for i := 0; i < 10; i++ {
+-		f.Fuzz(obj)
+-
+-		if n, v := "a not nil", obj.A; v == nil {
+-			failed[n] = failed[n] + 1
+-		}
+-		if n, v := "a nil", obj.A; v != nil {
+-			failed[n] = failed[n] + 1
+-		}
+-		if n, v := "as", obj.A; v == nil || v.S == "" {
+-			failed[n] = failed[n] + 1
+-		}
+-	}
+-	checkFailed(t, failed)
+-}
+-
+-// tryFuzz tries fuzzing up to 20 times. Fail if check() never passes, report the highest
+-// stage it ever got to.
+-func tryFuzz(t *testing.T, f *Fuzzer, obj interface{}, check func() (stage int, passed bool)) {
+-	maxStage := 0
+-	for i := 0; i < 20; i++ {
+-		f.Fuzz(obj)
+-		stage, passed := check()
+-		if stage > maxStage {
+-			maxStage = stage
+-		}
+-		if passed {
+-			return
+-		}
+-	}
+-	t.Errorf("Only ever got to stage %v", maxStage)
+-}
+-
+-func TestFuzz_structmap(t *testing.T) {
+-	obj := &struct {
+-		A map[struct {
+-			S string
+-		}]struct {
+-			S2 string
+-		}
+-		B map[string]string
+-	}{}
+-
+-	tryFuzz(t, New(), obj, func() (int, bool) {
+-		if obj.A == nil {
+-			return 1, false
+-		}
+-		if len(obj.A) == 0 {
+-			return 2, false
+-		}
+-		for k, v := range obj.A {
+-			if k.S == "" {
+-				return 3, false
+-			}
+-			if v.S2 == "" {
+-				return 4, false
+-			}
+-		}
+-
+-		if obj.B == nil {
+-			return 5, false
+-		}
+-		if len(obj.B) == 0 {
+-			return 6, false
+-		}
+-		for k, v := range obj.B {
+-			if k == "" {
+-				return 7, false
+-			}
+-			if v == "" {
+-				return 8, false
+-			}
+-		}
+-		return 9, true
+-	})
+-}
+-
+-func TestFuzz_structslice(t *testing.T) {
+-	obj := &struct {
+-		A []struct {
+-			S string
+-		}
+-		B []string
+-	}{}
+-
+-	tryFuzz(t, New(), obj, func() (int, bool) {
+-		if obj.A == nil {
+-			return 1, false
+-		}
+-		if len(obj.A) == 0 {
+-			return 2, false
+-		}
+-		for _, v := range obj.A {
+-			if v.S == "" {
+-				return 3, false
+-			}
+-		}
+-
+-		if obj.B == nil {
+-			return 4, false
+-		}
+-		if len(obj.B) == 0 {
+-			return 5, false
+-		}
+-		for _, v := range obj.B {
+-			if v == "" {
+-				return 6, false
+-			}
+-		}
+-		return 7, true
+-	})
+-}
+-
+-func TestFuzz_custom(t *testing.T) {
+-	obj := &struct {
+-		A string
+-		B *string
+-		C map[string]string
+-		D *map[string]string
+-	}{}
+-
+-	testPhrase := "gotcalled"
+-	testMap := map[string]string{"C": "D"}
+-	f := New().Funcs(
+-		func(s *string, c Continue) {
+-			*s = testPhrase
+-		},
+-		func(m map[string]string, c Continue) {
+-			m["C"] = "D"
+-		},
+-	)
+-
+-	tryFuzz(t, f, obj, func() (int, bool) {
+-		if obj.A != testPhrase {
+-			return 1, false
+-		}
+-		if obj.B == nil {
+-			return 2, false
+-		}
+-		if *obj.B != testPhrase {
+-			return 3, false
+-		}
+-		if e, a := testMap, obj.C; !reflect.DeepEqual(e, a) {
+-			return 4, false
+-		}
+-		if obj.D == nil {
+-			return 5, false
+-		}
+-		if e, a := testMap, *obj.D; !reflect.DeepEqual(e, a) {
+-			return 6, false
+-		}
+-		return 7, true
+-	})
+-}
+diff --git a/Godeps/_workspace/src/github.com/mitchellh/goamz/aws/attempt.go b/Godeps/_workspace/src/github.com/mitchellh/goamz/aws/attempt.go
+deleted file mode 100644
+index c0654f5..0000000
+--- a/Godeps/_workspace/src/github.com/mitchellh/goamz/aws/attempt.go
++++ /dev/null
+@@ -1,74 +0,0 @@
+-package aws
+-
+-import (
+-	"time"
+-)
+-
+-// AttemptStrategy represents a strategy for waiting for an action
+-// to complete successfully. This is an internal type used by the
+-// implementation of other goamz packages.
+-type AttemptStrategy struct {
+-	Total time.Duration // total duration of attempt.
+-	Delay time.Duration // interval between each try in the burst.
+-	Min   int           // minimum number of retries; overrides Total
+-}
+-
+-type Attempt struct {
+-	strategy AttemptStrategy
+-	last     time.Time
+-	end      time.Time
+-	force    bool
+-	count    int
+-}
+-
+-// Start begins a new sequence of attempts for the given strategy.
+-func (s AttemptStrategy) Start() *Attempt {
+-	now := time.Now()
+-	return &Attempt{
+-		strategy: s,
+-		last:     now,
+-		end:      now.Add(s.Total),
+-		force:    true,
+-	}
+-}
+-
+-// Next waits until it is time to perform the next attempt or returns
+-// false if it is time to stop trying.
+-func (a *Attempt) Next() bool {
+-	now := time.Now()
+-	sleep := a.nextSleep(now)
+-	if !a.force && !now.Add(sleep).Before(a.end) && a.strategy.Min <= a.count {
+-		return false
+-	}
+-	a.force = false
+-	if sleep > 0 && a.count > 0 {
+-		time.Sleep(sleep)
+-		now = time.Now()
+-	}
+-	a.count++
+-	a.last = now
+-	return true
+-}
+-
+-func (a *Attempt) nextSleep(now time.Time) time.Duration {
+-	sleep := a.strategy.Delay - now.Sub(a.last)
+-	if sleep < 0 {
+-		return 0
+-	}
+-	return sleep
+-}
+-
+-// HasNext returns whether another attempt will be made if the current
+-// one fails. If it returns true, the following call to Next is
+-// guaranteed to return true.
+-func (a *Attempt) HasNext() bool {
+-	if a.force || a.strategy.Min > a.count {
+-		return true
+-	}
+-	now := time.Now()
+-	if now.Add(a.nextSleep(now)).Before(a.end) {
+-		a.force = true
+-		return true
+-	}
+-	return false
+-}
+diff --git a/Godeps/_workspace/src/github.com/mitchellh/goamz/aws/attempt_test.go b/Godeps/_workspace/src/github.com/mitchellh/goamz/aws/attempt_test.go
+deleted file mode 100644
+index 1fda5bf..0000000
+--- a/Godeps/_workspace/src/github.com/mitchellh/goamz/aws/attempt_test.go
++++ /dev/null
+@@ -1,57 +0,0 @@
+-package aws_test
+-
+-import (
+-	"github.com/mitchellh/goamz/aws"
+-	. "github.com/motain/gocheck"
+-	"time"
+-)
+-
+-func (S) TestAttemptTiming(c *C) {
+-	testAttempt := aws.AttemptStrategy{
+-		Total: 0.25e9,
+-		Delay: 0.1e9,
+-	}
+-	want := []time.Duration{0, 0.1e9, 0.2e9, 0.2e9}
+-	got := make([]time.Duration, 0, len(want)) // avoid allocation when testing timing
+-	t0 := time.Now()
+-	for a := testAttempt.Start(); a.Next(); {
+-		got = append(got, time.Now().Sub(t0))
+-	}
+-	got = append(got, time.Now().Sub(t0))
+-	c.Assert(got, HasLen, len(want))
+-	const margin = 0.01e9
+-	for i, g := range got {
+-		lo := want[i] - margin
+-		hi := want[i] + margin
+-		if g < lo || g > hi {
+-			c.Errorf("attempt %d want %g got %g", i, want[i].Seconds(), g.Seconds())
+-		}
+-	}
+-}
+-
+-func (S) TestAttemptNextHasNext(c *C) {
+-	a := aws.AttemptStrategy{}.Start()
+-	c.Assert(a.Next(), Equals, true)
+-	c.Assert(a.Next(), Equals, false)
+-
+-	a = aws.AttemptStrategy{}.Start()
+-	c.Assert(a.Next(), Equals, true)
+-	c.Assert(a.HasNext(), Equals, false)
+-	c.Assert(a.Next(), Equals, false)
+-
+-	a = aws.AttemptStrategy{Total: 2e8}.Start()
+-	c.Assert(a.Next(), Equals, true)
+-	c.Assert(a.HasNext(), Equals, true)
+-	time.Sleep(2e8)
+-	c.Assert(a.HasNext(), Equals, true)
+-	c.Assert(a.Next(), Equals, true)
+-	c.Assert(a.Next(), Equals, false)
+-
+-	a = aws.AttemptStrategy{Total: 1e8, Min: 2}.Start()
+-	time.Sleep(1e8)
+-	c.Assert(a.Next(), Equals, true)
+-	c.Assert(a.HasNext(), Equals, true)
+-	c.Assert(a.Next(), Equals, true)
+-	c.Assert(a.HasNext(), Equals, false)
+-	c.Assert(a.Next(), Equals, false)
+-}
+diff --git a/Godeps/_workspace/src/github.com/mitchellh/goamz/aws/aws.go b/Godeps/_workspace/src/github.com/mitchellh/goamz/aws/aws.go
+deleted file mode 100644
+index c304d55..0000000
+--- a/Godeps/_workspace/src/github.com/mitchellh/goamz/aws/aws.go
++++ /dev/null
+@@ -1,423 +0,0 @@
+-//
+-// goamz - Go packages to interact with the Amazon Web Services.
+-//
+-//   https://wiki.ubuntu.com/goamz
+-//
+-// Copyright (c) 2011 Canonical Ltd.
+-//
+-// Written by Gustavo Niemeyer <gustavo.niemeyer at canonical.com>
+-//
+-package aws
+-
+-import (
+-	"encoding/json"
+-	"errors"
+-	"fmt"
+-	"github.com/vaughan0/go-ini"
+-	"io/ioutil"
+-	"os"
+-)
+-
+-// Region defines the URLs where AWS services may be accessed.
+-//
+-// See http://goo.gl/d8BP1 for more details.
+-type Region struct {
+-	Name                 string // the canonical name of this region.
+-	EC2Endpoint          string
+-	S3Endpoint           string
+-	S3BucketEndpoint     string // Not needed by AWS S3. Use ${bucket} for bucket name.
+-	S3LocationConstraint bool   // true if this region requires a LocationConstraint declaration.
+-	S3LowercaseBucket    bool   // true if the region requires bucket names to be lower case.
+-	SDBEndpoint          string
+-	SNSEndpoint          string
+-	SQSEndpoint          string
+-	IAMEndpoint          string
+-	ELBEndpoint          string
+-	AutoScalingEndpoint  string
+-	RdsEndpoint          string
+-	Route53Endpoint      string
+-}
+-
+-var USGovWest = Region{
+-	"us-gov-west-1",
+-	"https://ec2.us-gov-west-1.amazonaws.com",
+-	"https://s3-fips-us-gov-west-1.amazonaws.com",
+-	"",
+-	true,
+-	true,
+-	"",
+-	"https://sns.us-gov-west-1.amazonaws.com",
+-	"https://sqs.us-gov-west-1.amazonaws.com",
+-	"https://iam.us-gov.amazonaws.com",
+-	"https://elasticloadbalancing.us-gov-west-1.amazonaws.com",
+-	"https://autoscaling.us-gov-west-1.amazonaws.com",
+-	"https://rds.us-gov-west-1.amazonaws.com",
+-	"https://route53.amazonaws.com",
+-}
+-
+-var USEast = Region{
+-	"us-east-1",
+-	"https://ec2.us-east-1.amazonaws.com",
+-	"https://s3.amazonaws.com",
+-	"",
+-	false,
+-	false,
+-	"https://sdb.amazonaws.com",
+-	"https://sns.us-east-1.amazonaws.com",
+-	"https://sqs.us-east-1.amazonaws.com",
+-	"https://iam.amazonaws.com",
+-	"https://elasticloadbalancing.us-east-1.amazonaws.com",
+-	"https://autoscaling.us-east-1.amazonaws.com",
+-	"https://rds.us-east-1.amazonaws.com",
+-	"https://route53.amazonaws.com",
+-}
+-
+-var USWest = Region{
+-	"us-west-1",
+-	"https://ec2.us-west-1.amazonaws.com",
+-	"https://s3-us-west-1.amazonaws.com",
+-	"",
+-	true,
+-	true,
+-	"https://sdb.us-west-1.amazonaws.com",
+-	"https://sns.us-west-1.amazonaws.com",
+-	"https://sqs.us-west-1.amazonaws.com",
+-	"https://iam.amazonaws.com",
+-	"https://elasticloadbalancing.us-west-1.amazonaws.com",
+-	"https://autoscaling.us-west-1.amazonaws.com",
+-	"https://rds.us-west-1.amazonaws.com",
+-	"https://route53.amazonaws.com",
+-}
+-
+-var USWest2 = Region{
+-	"us-west-2",
+-	"https://ec2.us-west-2.amazonaws.com",
+-	"https://s3-us-west-2.amazonaws.com",
+-	"",
+-	true,
+-	true,
+-	"https://sdb.us-west-2.amazonaws.com",
+-	"https://sns.us-west-2.amazonaws.com",
+-	"https://sqs.us-west-2.amazonaws.com",
+-	"https://iam.amazonaws.com",
+-	"https://elasticloadbalancing.us-west-2.amazonaws.com",
+-	"https://autoscaling.us-west-2.amazonaws.com",
+-	"https://rds.us-west-2.amazonaws.com",
+-	"https://route53.amazonaws.com",
+-}
+-
+-var EUWest = Region{
+-	"eu-west-1",
+-	"https://ec2.eu-west-1.amazonaws.com",
+-	"https://s3-eu-west-1.amazonaws.com",
+-	"",
+-	true,
+-	true,
+-	"https://sdb.eu-west-1.amazonaws.com",
+-	"https://sns.eu-west-1.amazonaws.com",
+-	"https://sqs.eu-west-1.amazonaws.com",
+-	"https://iam.amazonaws.com",
+-	"https://elasticloadbalancing.eu-west-1.amazonaws.com",
+-	"https://autoscaling.eu-west-1.amazonaws.com",
+-	"https://rds.eu-west-1.amazonaws.com",
+-	"https://route53.amazonaws.com",
+-}
+-
+-var APSoutheast = Region{
+-	"ap-southeast-1",
+-	"https://ec2.ap-southeast-1.amazonaws.com",
+-	"https://s3-ap-southeast-1.amazonaws.com",
+-	"",
+-	true,
+-	true,
+-	"https://sdb.ap-southeast-1.amazonaws.com",
+-	"https://sns.ap-southeast-1.amazonaws.com",
+-	"https://sqs.ap-southeast-1.amazonaws.com",
+-	"https://iam.amazonaws.com",
+-	"https://elasticloadbalancing.ap-southeast-1.amazonaws.com",
+-	"https://autoscaling.ap-southeast-1.amazonaws.com",
+-	"https://rds.ap-southeast-1.amazonaws.com",
+-	"https://route53.amazonaws.com",
+-}
+-
+-var APSoutheast2 = Region{
+-	"ap-southeast-2",
+-	"https://ec2.ap-southeast-2.amazonaws.com",
+-	"https://s3-ap-southeast-2.amazonaws.com",
+-	"",
+-	true,
+-	true,
+-	"https://sdb.ap-southeast-2.amazonaws.com",
+-	"https://sns.ap-southeast-2.amazonaws.com",
+-	"https://sqs.ap-southeast-2.amazonaws.com",
+-	"https://iam.amazonaws.com",
+-	"https://elasticloadbalancing.ap-southeast-2.amazonaws.com",
+-	"https://autoscaling.ap-southeast-2.amazonaws.com",
+-	"https://rds.ap-southeast-2.amazonaws.com",
+-	"https://route53.amazonaws.com",
+-}
+-
+-var APNortheast = Region{
+-	"ap-northeast-1",
+-	"https://ec2.ap-northeast-1.amazonaws.com",
+-	"https://s3-ap-northeast-1.amazonaws.com",
+-	"",
+-	true,
+-	true,
+-	"https://sdb.ap-northeast-1.amazonaws.com",
+-	"https://sns.ap-northeast-1.amazonaws.com",
+-	"https://sqs.ap-northeast-1.amazonaws.com",
+-	"https://iam.amazonaws.com",
+-	"https://elasticloadbalancing.ap-northeast-1.amazonaws.com",
+-	"https://autoscaling.ap-northeast-1.amazonaws.com",
+-	"https://rds.ap-northeast-1.amazonaws.com",
+-	"https://route53.amazonaws.com",
+-}
+-
+-var SAEast = Region{
+-	"sa-east-1",
+-	"https://ec2.sa-east-1.amazonaws.com",
+-	"https://s3-sa-east-1.amazonaws.com",
+-	"",
+-	true,
+-	true,
+-	"https://sdb.sa-east-1.amazonaws.com",
+-	"https://sns.sa-east-1.amazonaws.com",
+-	"https://sqs.sa-east-1.amazonaws.com",
+-	"https://iam.amazonaws.com",
+-	"https://elasticloadbalancing.sa-east-1.amazonaws.com",
+-	"https://autoscaling.sa-east-1.amazonaws.com",
+-	"https://rds.sa-east-1.amazonaws.com",
+-	"https://route53.amazonaws.com",
+-}
+-
+-var CNNorth = Region{
+-	"cn-north-1",
+-	"https://ec2.cn-north-1.amazonaws.com.cn",
+-	"https://s3.cn-north-1.amazonaws.com.cn",
+-	"",
+-	true,
+-	true,
+-	"",
+-	"https://sns.cn-north-1.amazonaws.com.cn",
+-	"https://sqs.cn-north-1.amazonaws.com.cn",
+-	"https://iam.cn-north-1.amazonaws.com.cn",
+-	"https://elasticloadbalancing.cn-north-1.amazonaws.com.cn",
+-	"https://autoscaling.cn-north-1.amazonaws.com.cn",
+-	"https://rds.cn-north-1.amazonaws.com.cn",
+-	"https://route53.amazonaws.com",
+-}
+-
+-var Regions = map[string]Region{
+-	APNortheast.Name:  APNortheast,
+-	APSoutheast.Name:  APSoutheast,
+-	APSoutheast2.Name: APSoutheast2,
+-	EUWest.Name:       EUWest,
+-	USEast.Name:       USEast,
+-	USWest.Name:       USWest,
+-	USWest2.Name:      USWest2,
+-	SAEast.Name:       SAEast,
+-	USGovWest.Name:    USGovWest,
+-	CNNorth.Name:      CNNorth,
+-}
+-
+-type Auth struct {
+-	AccessKey, SecretKey, Token string
+-}
+-
+-var unreserved = make([]bool, 128)
+-var hex = "0123456789ABCDEF"
+-
+-func init() {
+-	// RFC3986
+-	u := "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz01234567890-_.~"
+-	for _, c := range u {
+-		unreserved[c] = true
+-	}
+-}
+-
+-type credentials struct {
+-	Code            string
+-	LastUpdated     string
+-	Type            string
+-	AccessKeyId     string
+-	SecretAccessKey string
+-	Token           string
+-	Expiration      string
+-}
+-
+-// GetMetaData retrieves instance metadata about the current machine.
+-//
+-// See http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AESDG-chapter-instancedata.html for more details.
+-func GetMetaData(path string) (contents []byte, err error) {
+-	url := "http://169.254.169.254/latest/meta-data/" + path
+-
+-	resp, err := RetryingClient.Get(url)
+-	if err != nil {
+-		return
+-	}
+-	defer resp.Body.Close()
+-
+-	if resp.StatusCode != 200 {
+-		err = fmt.Errorf("Code %d returned for url %s", resp.StatusCode, url)
+-		return
+-	}
+-
+-	body, err := ioutil.ReadAll(resp.Body)
+-	if err != nil {
+-		return
+-	}
+-	return []byte(body), err
+-}
+-
+-func getInstanceCredentials() (cred credentials, err error) {
+-	credentialPath := "iam/security-credentials/"
+-
+-	// Get the instance role
+-	role, err := GetMetaData(credentialPath)
+-	if err != nil {
+-		return
+-	}
+-
+-	// Get the instance role credentials
+-	credentialJSON, err := GetMetaData(credentialPath + string(role))
+-	if err != nil {
+-		return
+-	}
+-
+-	err = json.Unmarshal([]byte(credentialJSON), &cred)
+-	return
+-}
+-
+-// GetAuth creates an Auth based on either passed in credentials,
+-// environment information or instance based role credentials.
+-func GetAuth(accessKey string, secretKey string) (auth Auth, err error) {
+-	// First try passed in credentials
+-	if accessKey != "" && secretKey != "" {
+-		return Auth{accessKey, secretKey, ""}, nil
+-	}
+-
+-	// Next try to get auth from the shared credentials file
+-	auth, err = SharedAuth()
+-	if err == nil {
+-		// Found auth, return
+-		return
+-	}
+-
+-	// Next try to get auth from the environment
+-	auth, err = EnvAuth()
+-	if err == nil {
+-		// Found auth, return
+-		return
+-	}
+-
+-	// Next try getting auth from the instance role
+-	cred, err := getInstanceCredentials()
+-	if err == nil {
+-		// Found auth, return
+-		auth.AccessKey = cred.AccessKeyId
+-		auth.SecretKey = cred.SecretAccessKey
+-		auth.Token = cred.Token
+-		return
+-	}
+-	err = errors.New("No valid AWS authentication found")
+-	return
+-}
+-
+-// SharedAuth creates an Auth based on shared credentials stored in
+-// $HOME/.aws/credentials. The AWS_PROFILE environment variable is used to
+-// select the profile.
+-func SharedAuth() (auth Auth, err error) {
+-	var profileName = os.Getenv("AWS_PROFILE")
+-
+-	if profileName == "" {
+-		profileName = "default"
+-	}
+-
+-	var homeDir = os.Getenv("HOME")
+-	if homeDir == "" {
+-		err = errors.New("Could not get HOME")
+-		return
+-	}
+-
+-	var credentialsFile = homeDir + "/.aws/credentials"
+-	file, err := ini.LoadFile(credentialsFile)
+-	if err != nil {
+-		err = errors.New("Couldn't parse AWS credentials file")
+-		return
+-	}
+-
+-	var profile = file[profileName]
+-	if profile == nil {
+-		err = errors.New("Couldn't find profile in AWS credentials file")
+-		return
+-	}
+-
+-	auth.AccessKey = profile["aws_access_key_id"]
+-	auth.SecretKey = profile["aws_secret_access_key"]
+-
+-	if auth.AccessKey == "" {
+-		err = errors.New("AWS_ACCESS_KEY_ID not found in credentials file")
+-	}
+-	if auth.SecretKey == "" {
+-		err = errors.New("AWS_SECRET_ACCESS_KEY not found in credentials file")
+-	}
+-	return
+-}
+-
+-// EnvAuth creates an Auth based on environment information.
+-// The AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment
+-// variables are used. For accounts that require a security token,
+-// it is read from AWS_SECURITY_TOKEN.
+-func EnvAuth() (auth Auth, err error) {
+-	auth.AccessKey = os.Getenv("AWS_ACCESS_KEY_ID")
+-	if auth.AccessKey == "" {
+-		auth.AccessKey = os.Getenv("AWS_ACCESS_KEY")
+-	}
+-
+-	auth.SecretKey = os.Getenv("AWS_SECRET_ACCESS_KEY")
+-	if auth.SecretKey == "" {
+-		auth.SecretKey = os.Getenv("AWS_SECRET_KEY")
+-	}
+-
+-	auth.Token = os.Getenv("AWS_SECURITY_TOKEN")
+-
+-	if auth.AccessKey == "" {
+-		err = errors.New("AWS_ACCESS_KEY_ID or AWS_ACCESS_KEY not found in environment")
+-	}
+-	if auth.SecretKey == "" {
+-		err = errors.New("AWS_SECRET_ACCESS_KEY or AWS_SECRET_KEY not found in environment")
+-	}
+-	return
+-}
+-
+-// Encode takes a string and URI-encodes it in a way suitable
+-// to be used in AWS signatures.
+-func Encode(s string) string {
+-	encode := false
+-	for i := 0; i != len(s); i++ {
+-		c := s[i]
+-		if c > 127 || !unreserved[c] {
+-			encode = true
+-			break
+-		}
+-	}
+-	if !encode {
+-		return s
+-	}
+-	e := make([]byte, len(s)*3)
+-	ei := 0
+-	for i := 0; i != len(s); i++ {
+-		c := s[i]
+-		if c > 127 || !unreserved[c] {
+-			e[ei] = '%'
+-			e[ei+1] = hex[c>>4]
+-			e[ei+2] = hex[c&0xF]
+-			ei += 3
+-		} else {
+-			e[ei] = c
+-			ei += 1
+-		}
+-	}
+-	return string(e[:ei])
+-}
+diff --git a/Godeps/_workspace/src/github.com/mitchellh/goamz/aws/aws_test.go b/Godeps/_workspace/src/github.com/mitchellh/goamz/aws/aws_test.go
+deleted file mode 100644
+index 78cbbaf..0000000
+--- a/Godeps/_workspace/src/github.com/mitchellh/goamz/aws/aws_test.go
++++ /dev/null
+@@ -1,203 +0,0 @@
+-package aws_test
+-
+-import (
+-	"github.com/mitchellh/goamz/aws"
+-	. "github.com/motain/gocheck"
+-	"io/ioutil"
+-	"os"
+-	"strings"
+-	"testing"
+-)
+-
+-func Test(t *testing.T) {
+-	TestingT(t)
+-}
+-
+-var _ = Suite(&S{})
+-
+-type S struct {
+-	environ []string
+-}
+-
+-func (s *S) SetUpSuite(c *C) {
+-	s.environ = os.Environ()
+-}
+-
+-func (s *S) TearDownTest(c *C) {
+-	os.Clearenv()
+-	for _, kv := range s.environ {
+-		l := strings.SplitN(kv, "=", 2)
+-		os.Setenv(l[0], l[1])
+-	}
+-}
+-
+-func (s *S) TestSharedAuthNoHome(c *C) {
+-	os.Clearenv()
+-	os.Setenv("AWS_PROFILE", "foo")
+-	_, err := aws.SharedAuth()
+-	c.Assert(err, ErrorMatches, "Could not get HOME")
+-}
+-
+-func (s *S) TestSharedAuthNoCredentialsFile(c *C) {
+-	os.Clearenv()
+-	os.Setenv("AWS_PROFILE", "foo")
+-	os.Setenv("HOME", "/tmp")
+-	_, err := aws.SharedAuth()
+-	c.Assert(err, ErrorMatches, "Couldn't parse AWS credentials file")
+-}
+-
+-func (s *S) TestSharedAuthNoProfileInFile(c *C) {
+-	os.Clearenv()
+-	os.Setenv("AWS_PROFILE", "foo")
+-
+-	d, err := ioutil.TempDir("", "")
+-	if err != nil {
+-		panic(err)
+-	}
+-	defer os.RemoveAll(d)
+-
+-	err = os.Mkdir(d+"/.aws", 0755)
+-	if err != nil {
+-		panic(err)
+-	}
+-
+-	ioutil.WriteFile(d+"/.aws/credentials", []byte("[bar]\n"), 0644)
+-	os.Setenv("HOME", d)
+-
+-	_, err = aws.SharedAuth()
+-	c.Assert(err, ErrorMatches, "Couldn't find profile in AWS credentials file")
+-}
+-
+-func (s *S) TestSharedAuthNoKeysInProfile(c *C) {
+-	os.Clearenv()
+-	os.Setenv("AWS_PROFILE", "bar")
+-
+-	d, err := ioutil.TempDir("", "")
+-	if err != nil {
+-		panic(err)
+-	}
+-	defer os.RemoveAll(d)
+-
+-	err = os.Mkdir(d+"/.aws", 0755)
+-	if err != nil {
+-		panic(err)
+-	}
+-
+-	ioutil.WriteFile(d+"/.aws/credentials", []byte("[bar]\nawsaccesskeyid = AK.."), 0644)
+-	os.Setenv("HOME", d)
+-
+-	_, err = aws.SharedAuth()
+-	c.Assert(err, ErrorMatches, "AWS_SECRET_ACCESS_KEY not found in credentials file")
+-}
+-
+-func (s *S) TestSharedAuthDefaultCredentials(c *C) {
+-	os.Clearenv()
+-
+-	d, err := ioutil.TempDir("", "")
+-	if err != nil {
+-		panic(err)
+-	}
+-	defer os.RemoveAll(d)
+-
+-	err = os.Mkdir(d+"/.aws", 0755)
+-	if err != nil {
+-		panic(err)
+-	}
+-
+-	ioutil.WriteFile(d+"/.aws/credentials", []byte("[default]\naws_access_key_id = access\naws_secret_access_key = secret\n"), 0644)
+-	os.Setenv("HOME", d)
+-
+-	auth, err := aws.SharedAuth()
+-	c.Assert(err, IsNil)
+-	c.Assert(auth, Equals, aws.Auth{SecretKey: "secret", AccessKey: "access"})
+-}
+-
+-func (s *S) TestSharedAuth(c *C) {
+-	os.Clearenv()
+-	os.Setenv("AWS_PROFILE", "bar")
+-
+-	d, err := ioutil.TempDir("", "")
+-	if err != nil {
+-		panic(err)
+-	}
+-	defer os.RemoveAll(d)
+-
+-	err = os.Mkdir(d+"/.aws", 0755)
+-	if err != nil {
+-		panic(err)
+-	}
+-
+-	ioutil.WriteFile(d+"/.aws/credentials", []byte("[bar]\naws_access_key_id = access\naws_secret_access_key = secret\n"), 0644)
+-	os.Setenv("HOME", d)
+-
+-	auth, err := aws.SharedAuth()
+-	c.Assert(err, IsNil)
+-	c.Assert(auth, Equals, aws.Auth{SecretKey: "secret", AccessKey: "access"})
+-}
+-
+-func (s *S) TestEnvAuthNoSecret(c *C) {
+-	os.Clearenv()
+-	_, err := aws.EnvAuth()
+-	c.Assert(err, ErrorMatches, "AWS_SECRET_ACCESS_KEY or AWS_SECRET_KEY not found in environment")
+-}
+-
+-func (s *S) TestEnvAuthNoAccess(c *C) {
+-	os.Clearenv()
+-	os.Setenv("AWS_SECRET_ACCESS_KEY", "foo")
+-	_, err := aws.EnvAuth()
+-	c.Assert(err, ErrorMatches, "AWS_ACCESS_KEY_ID or AWS_ACCESS_KEY not found in environment")
+-}
+-
+-func (s *S) TestEnvAuth(c *C) {
+-	os.Clearenv()
+-	os.Setenv("AWS_SECRET_ACCESS_KEY", "secret")
+-	os.Setenv("AWS_ACCESS_KEY_ID", "access")
+-	auth, err := aws.EnvAuth()
+-	c.Assert(err, IsNil)
+-	c.Assert(auth, Equals, aws.Auth{SecretKey: "secret", AccessKey: "access"})
+-}
+-
+-func (s *S) TestEnvAuthWithToken(c *C) {
+-	os.Clearenv()
+-	os.Setenv("AWS_SECRET_ACCESS_KEY", "secret")
+-	os.Setenv("AWS_ACCESS_KEY_ID", "access")
+-	os.Setenv("AWS_SECURITY_TOKEN", "token")
+-	auth, err := aws.EnvAuth()
+-	c.Assert(err, IsNil)
+-	c.Assert(auth, Equals, aws.Auth{SecretKey: "secret", AccessKey: "access", Token: "token"})
+-}
+-
+-func (s *S) TestEnvAuthAlt(c *C) {
+-	os.Clearenv()
+-	os.Setenv("AWS_SECRET_KEY", "secret")
+-	os.Setenv("AWS_ACCESS_KEY", "access")
+-	auth, err := aws.EnvAuth()
+-	c.Assert(err, IsNil)
+-	c.Assert(auth, Equals, aws.Auth{SecretKey: "secret", AccessKey: "access"})
+-}
+-
+-func (s *S) TestGetAuthStatic(c *C) {
+-	auth, err := aws.GetAuth("access", "secret")
+-	c.Assert(err, IsNil)
+-	c.Assert(auth, Equals, aws.Auth{SecretKey: "secret", AccessKey: "access"})
+-}
+-
+-func (s *S) TestGetAuthEnv(c *C) {
+-	os.Clearenv()
+-	os.Setenv("AWS_SECRET_ACCESS_KEY", "secret")
+-	os.Setenv("AWS_ACCESS_KEY_ID", "access")
+-	auth, err := aws.GetAuth("", "")
+-	c.Assert(err, IsNil)
+-	c.Assert(auth, Equals, aws.Auth{SecretKey: "secret", AccessKey: "access"})
+-}
+-
+-func (s *S) TestEncode(c *C) {
+-	c.Assert(aws.Encode("foo"), Equals, "foo")
+-	c.Assert(aws.Encode("/"), Equals, "%2F")
+-}
+-
+-func (s *S) TestRegionsAreNamed(c *C) {
+-	for n, r := range aws.Regions {
+-		c.Assert(n, Equals, r.Name)
+-	}
+-}
+diff --git a/Godeps/_workspace/src/github.com/mitchellh/goamz/aws/client.go b/Godeps/_workspace/src/github.com/mitchellh/goamz/aws/client.go
+deleted file mode 100644
+index ee53238..0000000
+--- a/Godeps/_workspace/src/github.com/mitchellh/goamz/aws/client.go
++++ /dev/null
+@@ -1,125 +0,0 @@
+-package aws
+-
+-import (
+-	"math"
+-	"net"
+-	"net/http"
+-	"time"
+-)
+-
+-type RetryableFunc func(*http.Request, *http.Response, error) bool
+-type WaitFunc func(try int)
+-type DeadlineFunc func() time.Time
+-
+-type ResilientTransport struct {
+-	// Timeout is the maximum amount of time a dial will wait for
+-	// a connect to complete.
+-	//
+-	// The default is no timeout.
+-	//
+-	// With or without a timeout, the operating system may impose
+-	// its own earlier timeout. For instance, TCP timeouts are
+-	// often around 3 minutes.
+-	DialTimeout time.Duration
+-
+-	// MaxTries, if non-zero, specifies the number of times we will retry on
+-	// failure. Retries are only attempted for temporary network errors or known
+-	// safe failures.
+-	MaxTries    int
+-	Deadline    DeadlineFunc
+-	ShouldRetry RetryableFunc
+-	Wait        WaitFunc
+-	transport   *http.Transport
+-}
+-
+-// Convenience method for creating an http client
+-func NewClient(rt *ResilientTransport) *http.Client {
+-	rt.transport = &http.Transport{
+-		Dial: func(netw, addr string) (net.Conn, error) {
+-			c, err := net.DialTimeout(netw, addr, rt.DialTimeout)
+-			if err != nil {
+-				return nil, err
+-			}
+-			c.SetDeadline(rt.Deadline())
+-			return c, nil
+-		},
+-		DisableKeepAlives: true,
+-		Proxy:             http.ProxyFromEnvironment,
+-	}
+-	// TODO: Would be nice if ResilientTransport allowed clients to initialize
+-	// with http.Transport attributes.
+-	return &http.Client{
+-		Transport: rt,
+-	}
+-}
+-
+-var retryingTransport = &ResilientTransport{
+-	Deadline: func() time.Time {
+-		return time.Now().Add(5 * time.Second)
+-	},
+-	DialTimeout: 10 * time.Second,
+-	MaxTries:    3,
+-	ShouldRetry: awsRetry,
+-	Wait:        ExpBackoff,
+-}
+-
+-// Exported default client
+-var RetryingClient = NewClient(retryingTransport)
+-
+-func (t *ResilientTransport) RoundTrip(req *http.Request) (*http.Response, error) {
+-	return t.tries(req)
+-}
+-
+-// Retry a request a maximum of t.MaxTries times.
+-// We'll only retry if the proper criteria are met.
+-// If a wait function is specified, wait that amount of time
+-// in between requests.
+-func (t *ResilientTransport) tries(req *http.Request) (res *http.Response, err error) {
+-	for try := 0; try < t.MaxTries; try += 1 {
+-		res, err = t.transport.RoundTrip(req)
+-
+-		if !t.ShouldRetry(req, res, err) {
+-			break
+-		}
+-		if res != nil {
+-			res.Body.Close()
+-		}
+-		if t.Wait != nil {
+-			t.Wait(try)
+-		}
+-	}
+-
+-	return
+-}
+-
+-func ExpBackoff(try int) {
+-	time.Sleep(100 * time.Millisecond *
+-		time.Duration(math.Exp2(float64(try))))
+-}
+-
+-func LinearBackoff(try int) {
+-	time.Sleep(time.Duration(try*100) * time.Millisecond)
+-}
+-
+-// Decide if we should retry a request.
+-// In general, the criteria for retrying a request is described here
+-// http://docs.aws.amazon.com/general/latest/gr/api-retries.html
+-func awsRetry(req *http.Request, res *http.Response, err error) bool {
+-	retry := false
+-
+-	// Retry if there's a temporary network error.
+-	if neterr, ok := err.(net.Error); ok {
+-		if neterr.Temporary() {
+-			retry = true
+-		}
+-	}
+-
+-	// Retry if we get a 5xx series error.
+-	if res != nil {
+-		if res.StatusCode >= 500 && res.StatusCode < 600 {
+-			retry = true
+-		}
+-	}
+-
+-	return retry
+-}
+diff --git a/Godeps/_workspace/src/github.com/mitchellh/goamz/aws/client_test.go b/Godeps/_workspace/src/github.com/mitchellh/goamz/aws/client_test.go
+deleted file mode 100644
+index 2f6b39c..0000000
+--- a/Godeps/_workspace/src/github.com/mitchellh/goamz/aws/client_test.go
++++ /dev/null
+@@ -1,121 +0,0 @@
+-package aws_test
+-
+-import (
+-	"fmt"
+-	"github.com/mitchellh/goamz/aws"
+-	"io/ioutil"
+-	"net/http"
+-	"net/http/httptest"
+-	"strings"
+-	"testing"
+-	"time"
+-)
+-
+-// Retrieve the response from handler using aws.RetryingClient
+-func serveAndGet(handler http.HandlerFunc) (body string, err error) {
+-	ts := httptest.NewServer(handler)
+-	defer ts.Close()
+-	resp, err := aws.RetryingClient.Get(ts.URL)
+-	if err != nil {
+-		return
+-	}
+-	if resp.StatusCode != 200 {
+-		return "", fmt.Errorf("Bad status code: %d", resp.StatusCode)
+-	}
+-	greeting, err := ioutil.ReadAll(resp.Body)
+-	resp.Body.Close()
+-	if err != nil {
+-		return
+-	}
+-	return strings.TrimSpace(string(greeting)), nil
+-}
+-
+-func TestClient_expected(t *testing.T) {
+-	body := "foo bar"
+-
+-	resp, err := serveAndGet(func(w http.ResponseWriter, r *http.Request) {
+-		fmt.Fprintln(w, body)
+-	})
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-	if resp != body {
+-		t.Fatal("Body not as expected.")
+-	}
+-}
+-
+-func TestClient_delay(t *testing.T) {
+-	body := "baz"
+-	wait := 4
+-	resp, err := serveAndGet(func(w http.ResponseWriter, r *http.Request) {
+-		if wait < 0 {
+-			// If we dipped to zero delay and still failed.
+-			t.Fatal("Never succeeded.")
+-		}
+-		wait -= 1
+-		time.Sleep(time.Second * time.Duration(wait))
+-		fmt.Fprintln(w, body)
+-	})
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-	if resp != body {
+-		t.Fatal("Body not as expected.", resp)
+-	}
+-}
+-
+-func TestClient_no4xxRetry(t *testing.T) {
+-	tries := 0
+-
+-	// Fail once before succeeding.
+-	_, err := serveAndGet(func(w http.ResponseWriter, r *http.Request) {
+-		tries += 1
+-		http.Error(w, "error", 404)
+-	})
+-
+-	if err == nil {
+-		t.Fatal("should have error")
+-	}
+-
+-	if tries != 1 {
+-		t.Fatalf("should only try once: %d", tries)
+-	}
+-}
+-
+-func TestClient_retries(t *testing.T) {
+-	body := "biz"
+-	failed := false
+-	// Fail once before succeeding.
+-	resp, err := serveAndGet(func(w http.ResponseWriter, r *http.Request) {
+-		if !failed {
+-			http.Error(w, "error", 500)
+-			failed = true
+-		} else {
+-			fmt.Fprintln(w, body)
+-		}
+-	})
+-	if failed != true {
+-		t.Error("We didn't retry!")
+-	}
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-	if resp != body {
+-		t.Fatal("Body not as expected.")
+-	}
+-}
+-
+-func TestClient_fails(t *testing.T) {
+-	tries := 0
+-	// Fail 3 times and return the last error.
+-	_, err := serveAndGet(func(w http.ResponseWriter, r *http.Request) {
+-		tries += 1
+-		http.Error(w, "error", 500)
+-	})
+-	if err == nil {
+-		t.Fatal(err)
+-	}
+-	if tries != 3 {
+-		t.Fatal("Didn't retry enough")
+-	}
+-}
+diff --git a/Godeps/_workspace/src/github.com/mitchellh/goamz/ec2/ec2.go b/Godeps/_workspace/src/github.com/mitchellh/goamz/ec2/ec2.go
+deleted file mode 100644
+index 8f94ad5..0000000
+--- a/Godeps/_workspace/src/github.com/mitchellh/goamz/ec2/ec2.go
++++ /dev/null
+@@ -1,2599 +0,0 @@
+-//
+-// goamz - Go packages to interact with the Amazon Web Services.
+-//
+-//   https://wiki.ubuntu.com/goamz
+-//
+-// Copyright (c) 2011 Canonical Ltd.
+-//
+-// Written by Gustavo Niemeyer <gustavo.niemeyer at canonical.com>
+-//
+-
+-package ec2
+-
+-import (
+-	"crypto/rand"
+-	"encoding/base64"
+-	"encoding/hex"
+-	"encoding/xml"
+-	"fmt"
+-	"log"
+-	"net/http"
+-	"net/http/httputil"
+-	"net/url"
+-	"sort"
+-	"strconv"
+-	"strings"
+-	"time"
+-
+-	"github.com/mitchellh/goamz/aws"
+-)
+-
+-const debug = false
+-
+-// The EC2 type encapsulates operations with a specific EC2 region.
+-type EC2 struct {
+-	aws.Auth
+-	aws.Region
+-	httpClient *http.Client
+-	private    byte // Reserve the right of using private data.
+-}
+-
+-// NewWithClient creates a new EC2 using the given HTTP client.
+-func NewWithClient(auth aws.Auth, region aws.Region, client *http.Client) *EC2 {
+-	return &EC2{auth, region, client, 0}
+-}
+-
+-func New(auth aws.Auth, region aws.Region) *EC2 {
+-	return NewWithClient(auth, region, aws.RetryingClient)
+-}
+-
+-// ----------------------------------------------------------------------------
+-// Filtering helper.
+-
+-// Filter builds filtering parameters to be used in an EC2 query which supports
+-// filtering.  For example:
+-//
+-//     filter := NewFilter()
+-//     filter.Add("architecture", "i386")
+-//     filter.Add("launch-index", "0")
+-//     resp, err := ec2.Instances(nil, filter)
+-//
+-type Filter struct {
+-	m map[string][]string
+-}
+-
+-// NewFilter creates a new Filter.
+-func NewFilter() *Filter {
+-	return &Filter{make(map[string][]string)}
+-}
+-
+-// Add appends a filtering parameter with the given name and value(s).
+-func (f *Filter) Add(name string, value ...string) {
+-	f.m[name] = append(f.m[name], value...)
+-}
+-
+-func (f *Filter) addParams(params map[string]string) {
+-	if f != nil {
+-		a := make([]string, len(f.m))
+-		i := 0
+-		for k := range f.m {
+-			a[i] = k
+-			i++
+-		}
+-		sort.StringSlice(a).Sort()
+-		for i, k := range a {
+-			prefix := "Filter." + strconv.Itoa(i+1)
+-			params[prefix+".Name"] = k
+-			for j, v := range f.m[k] {
+-				params[prefix+".Value."+strconv.Itoa(j+1)] = v
+-			}
+-		}
+-	}
+-}
+-
+-// ----------------------------------------------------------------------------
+-// Request dispatching logic.
+-
+-// Error encapsulates an error returned by EC2.
+-//
+-// See http://goo.gl/VZGuC for more details.
+-type Error struct {
+-	// HTTP status code (200, 403, ...)
+-	StatusCode int
+-	// EC2 error code ("UnsupportedOperation", ...)
+-	Code string
+-	// The human-oriented error message
+-	Message   string
+-	RequestId string `xml:"RequestID"`
+-}
+-
+-func (err *Error) Error() string {
+-	if err.Code == "" {
+-		return err.Message
+-	}
+-
+-	return fmt.Sprintf("%s (%s)", err.Message, err.Code)
+-}
+-
+-// For now a single error instance is being exposed. In the future it may be useful
+-// to provide access to all of them, but rather than doing it as an array/slice,
+-// use a *next pointer, so that it's backward compatible and it continues to be
+-// easy to handle the first error, which is what most people will want.
+-type xmlErrors struct {
+-	RequestId string  `xml:"RequestID"`
+-	Errors    []Error `xml:"Errors>Error"`
+-}
+-
+-var timeNow = time.Now
+-
+-func (ec2 *EC2) query(params map[string]string, resp interface{}) error {
+-	params["Version"] = "2014-05-01"
+-	params["Timestamp"] = timeNow().In(time.UTC).Format(time.RFC3339)
+-	endpoint, err := url.Parse(ec2.Region.EC2Endpoint)
+-	if err != nil {
+-		return err
+-	}
+-	if endpoint.Path == "" {
+-		endpoint.Path = "/"
+-	}
+-	sign(ec2.Auth, "GET", endpoint.Path, params, endpoint.Host)
+-	endpoint.RawQuery = multimap(params).Encode()
+-	if debug {
+-		log.Printf("get { %v } -> {\n", endpoint.String())
+-	}
+-
+-	r, err := ec2.httpClient.Get(endpoint.String())
+-	if err != nil {
+-		return err
+-	}
+-	defer r.Body.Close()
+-
+-	if debug {
+-		dump, _ := httputil.DumpResponse(r, true)
+-		log.Printf("response:\n")
+-		log.Printf("%v\n}\n", string(dump))
+-	}
+-	if r.StatusCode != 200 {
+-		return buildError(r)
+-	}
+-	err = xml.NewDecoder(r.Body).Decode(resp)
+-	return err
+-}
+-
+-func multimap(p map[string]string) url.Values {
+-	q := make(url.Values, len(p))
+-	for k, v := range p {
+-		q[k] = []string{v}
+-	}
+-	return q
+-}
+-
+-func buildError(r *http.Response) error {
+-	errors := xmlErrors{}
+-	xml.NewDecoder(r.Body).Decode(&errors)
+-	var err Error
+-	if len(errors.Errors) > 0 {
+-		err = errors.Errors[0]
+-	}
+-	err.RequestId = errors.RequestId
+-	err.StatusCode = r.StatusCode
+-	if err.Message == "" {
+-		err.Message = r.Status
+-	}
+-	return &err
+-}
+-
+-func makeParams(action string) map[string]string {
+-	params := make(map[string]string)
+-	params["Action"] = action
+-	return params
+-}
+-
+-func addParamsList(params map[string]string, label string, ids []string) {
+-	for i, id := range ids {
+-		params[label+"."+strconv.Itoa(i+1)] = id
+-	}
+-}
+-
+-func addBlockDeviceParams(prename string, params map[string]string, blockdevices []BlockDeviceMapping) {
+-	for i, k := range blockdevices {
+-		// Fixup index since Amazon counts these from 1
+-		prefix := prename + "BlockDeviceMapping." + strconv.Itoa(i+1) + "."
+-
+-		if k.DeviceName != "" {
+-			params[prefix+"DeviceName"] = k.DeviceName
+-		}
+-		if k.VirtualName != "" {
+-			params[prefix+"VirtualName"] = k.VirtualName
+-		}
+-		if k.SnapshotId != "" {
+-			params[prefix+"Ebs.SnapshotId"] = k.SnapshotId
+-		}
+-		if k.VolumeType != "" {
+-			params[prefix+"Ebs.VolumeType"] = k.VolumeType
+-		}
+-		if k.IOPS != 0 {
+-			params[prefix+"Ebs.Iops"] = strconv.FormatInt(k.IOPS, 10)
+-		}
+-		if k.VolumeSize != 0 {
+-			params[prefix+"Ebs.VolumeSize"] = strconv.FormatInt(k.VolumeSize, 10)
+-		}
+-		if k.DeleteOnTermination {
+-			params[prefix+"Ebs.DeleteOnTermination"] = "true"
+-		}
+-		if k.Encrypted {
+-			params[prefix+"Ebs.Encrypted"] = "true"
+-		}
+-		if k.NoDevice {
+-			params[prefix+"NoDevice"] = ""
+-		}
+-	}
+-}
+-
+-// ----------------------------------------------------------------------------
+-// Instance management functions and types.
+-
+-// The RunInstances type encapsulates options for the respective request in EC2.
+-//
+-// See http://goo.gl/Mcm3b for more details.
+-type RunInstances struct {
+-	ImageId                  string
+-	MinCount                 int
+-	MaxCount                 int
+-	KeyName                  string
+-	InstanceType             string
+-	SecurityGroups           []SecurityGroup
+-	IamInstanceProfile       string
+-	KernelId                 string
+-	RamdiskId                string
+-	UserData                 []byte
+-	AvailZone                string
+-	PlacementGroupName       string
+-	Monitoring               bool
+-	SubnetId                 string
+-	AssociatePublicIpAddress bool
+-	DisableAPITermination    bool
+-	ShutdownBehavior         string
+-	PrivateIPAddress         string
+-	BlockDevices             []BlockDeviceMapping
+-}
+-
+-// Response to a RunInstances request.
+-//
+-// See http://goo.gl/Mcm3b for more details.
+-type RunInstancesResp struct {
+-	RequestId      string          `xml:"requestId"`
+-	ReservationId  string          `xml:"reservationId"`
+-	OwnerId        string          `xml:"ownerId"`
+-	SecurityGroups []SecurityGroup `xml:"groupSet>item"`
+-	Instances      []Instance      `xml:"instancesSet>item"`
+-}
+-
+-// Instance encapsulates a running instance in EC2.
+-//
+-// See http://goo.gl/OCH8a for more details.
+-type Instance struct {
+-	InstanceId         string          `xml:"instanceId"`
+-	InstanceType       string          `xml:"instanceType"`
+-	ImageId            string          `xml:"imageId"`
+-	PrivateDNSName     string          `xml:"privateDnsName"`
+-	DNSName            string          `xml:"dnsName"`
+-	KeyName            string          `xml:"keyName"`
+-	AMILaunchIndex     int             `xml:"amiLaunchIndex"`
+-	Hypervisor         string          `xml:"hypervisor"`
+-	VirtType           string          `xml:"virtualizationType"`
+-	Monitoring         string          `xml:"monitoring>state"`
+-	AvailZone          string          `xml:"placement>availabilityZone"`
+-	PlacementGroupName string          `xml:"placement>groupName"`
+-	State              InstanceState   `xml:"instanceState"`
+-	Tags               []Tag           `xml:"tagSet>item"`
+-	VpcId              string          `xml:"vpcId"`
+-	SubnetId           string          `xml:"subnetId"`
+-	IamInstanceProfile string          `xml:"iamInstanceProfile"`
+-	PrivateIpAddress   string          `xml:"privateIpAddress"`
+-	PublicIpAddress    string          `xml:"ipAddress"`
+-	Architecture       string          `xml:"architecture"`
+-	LaunchTime         time.Time       `xml:"launchTime"`
+-	SourceDestCheck    bool            `xml:"sourceDestCheck"`
+-	SecurityGroups     []SecurityGroup `xml:"groupSet>item"`
+-}
+-
+-// RunInstances starts new instances in EC2.
+-// If options.MinCount and options.MaxCount are both zero, a single instance
+-// will be started; otherwise if options.MaxCount is zero, options.MinCount
+-// will be used instead.
+-//
+-// See http://goo.gl/Mcm3b for more details.
+-func (ec2 *EC2) RunInstances(options *RunInstances) (resp *RunInstancesResp, err error) {
+-	params := makeParams("RunInstances")
+-	params["ImageId"] = options.ImageId
+-	params["InstanceType"] = options.InstanceType
+-	var min, max int
+-	if options.MinCount == 0 && options.MaxCount == 0 {
+-		min = 1
+-		max = 1
+-	} else if options.MaxCount == 0 {
+-		min = options.MinCount
+-		max = min
+-	} else {
+-		min = options.MinCount
+-		max = options.MaxCount
+-	}
+-	params["MinCount"] = strconv.Itoa(min)
+-	params["MaxCount"] = strconv.Itoa(max)
+-	token, err := clientToken()
+-	if err != nil {
+-		return nil, err
+-	}
+-	params["ClientToken"] = token
+-
+-	if options.KeyName != "" {
+-		params["KeyName"] = options.KeyName
+-	}
+-	if options.KernelId != "" {
+-		params["KernelId"] = options.KernelId
+-	}
+-	if options.RamdiskId != "" {
+-		params["RamdiskId"] = options.RamdiskId
+-	}
+-	if options.UserData != nil {
+-		userData := make([]byte, b64.EncodedLen(len(options.UserData)))
+-		b64.Encode(userData, options.UserData)
+-		params["UserData"] = string(userData)
+-	}
+-	if options.AvailZone != "" {
+-		params["Placement.AvailabilityZone"] = options.AvailZone
+-	}
+-	if options.PlacementGroupName != "" {
+-		params["Placement.GroupName"] = options.PlacementGroupName
+-	}
+-	if options.Monitoring {
+-		params["Monitoring.Enabled"] = "true"
+-	}
+-	if options.SubnetId != "" && options.AssociatePublicIpAddress {
+-		// If we have a non-default VPC / Subnet specified, we can flag
+-		// AssociatePublicIpAddress to get a Public IP assigned. By default these are not provided.
+-		// You cannot specify both SubnetId and the NetworkInterface.0.* parameters though, otherwise
+-		// you get: Network interfaces and an instance-level subnet ID may not be specified on the same request
+-		// You also need to attach Security Groups to the NetworkInterface instead of the instance,
+-		// to avoid: Network interfaces and an instance-level security groups may not be specified on
+-		// the same request
+-		params["NetworkInterface.0.DeviceIndex"] = "0"
+-		params["NetworkInterface.0.AssociatePublicIpAddress"] = "true"
+-		params["NetworkInterface.0.SubnetId"] = options.SubnetId
+-
+-		i := 1
+-		for _, g := range options.SecurityGroups {
+-			// We only have SecurityGroupId's on NetworkInterface's, no SecurityGroup params.
+-			if g.Id != "" {
+-				params["NetworkInterface.0.SecurityGroupId."+strconv.Itoa(i)] = g.Id
+-				i++
+-			}
+-		}
+-	} else {
+-		if options.SubnetId != "" {
+-			params["SubnetId"] = options.SubnetId
+-		}
+-
+-		i, j := 1, 1
+-		for _, g := range options.SecurityGroups {
+-			if g.Id != "" {
+-				params["SecurityGroupId."+strconv.Itoa(i)] = g.Id
+-				i++
+-			} else {
+-				params["SecurityGroup."+strconv.Itoa(j)] = g.Name
+-				j++
+-			}
+-		}
+-	}
+-	if options.IamInstanceProfile != "" {
+-		params["IamInstanceProfile.Name"] = options.IamInstanceProfile
+-	}
+-	if options.DisableAPITermination {
+-		params["DisableApiTermination"] = "true"
+-	}
+-	if options.ShutdownBehavior != "" {
+-		params["InstanceInitiatedShutdownBehavior"] = options.ShutdownBehavior
+-	}
+-	if options.PrivateIPAddress != "" {
+-		params["PrivateIpAddress"] = options.PrivateIPAddress
+-	}
+-	addBlockDeviceParams("", params, options.BlockDevices)
+-
+-	resp = &RunInstancesResp{}
+-	err = ec2.query(params, resp)
+-	if err != nil {
+-		return nil, err
+-	}
+-	return
+-}
+-
+-func clientToken() (string, error) {
+-	// Maximum EC2 client token size is 64 bytes.
+-	// Each byte expands to two when hex encoded.
+-	buf := make([]byte, 32)
+-	_, err := rand.Read(buf)
+-	if err != nil {
+-		return "", err
+-	}
+-	return hex.EncodeToString(buf), nil
+-}
+-
+-// ----------------------------------------------------------------------------
+-// Spot Instance management functions and types.
+-
+-// The RequestSpotInstances type encapsulates options for the respective request in EC2.
+-//
+-// See http://goo.gl/GRZgCD for more details.
+-type RequestSpotInstances struct {
+-	SpotPrice                string
+-	InstanceCount            int
+-	Type                     string
+-	ImageId                  string
+-	KeyName                  string
+-	InstanceType             string
+-	SecurityGroups           []SecurityGroup
+-	IamInstanceProfile       string
+-	KernelId                 string
+-	RamdiskId                string
+-	UserData                 []byte
+-	AvailZone                string
+-	PlacementGroupName       string
+-	Monitoring               bool
+-	SubnetId                 string
+-	AssociatePublicIpAddress bool
+-	PrivateIPAddress         string
+-	BlockDevices             []BlockDeviceMapping
+-}
+-
+-type SpotInstanceSpec struct {
+-	ImageId                  string
+-	KeyName                  string
+-	InstanceType             string
+-	SecurityGroups           []SecurityGroup
+-	IamInstanceProfile       string
+-	KernelId                 string
+-	RamdiskId                string
+-	UserData                 []byte
+-	AvailZone                string
+-	PlacementGroupName       string
+-	Monitoring               bool
+-	SubnetId                 string
+-	AssociatePublicIpAddress bool
+-	PrivateIPAddress         string
+-	BlockDevices             []BlockDeviceMapping
+-}
+-
+-type SpotLaunchSpec struct {
+-	ImageId            string               `xml:"imageId"`
+-	KeyName            string               `xml:"keyName"`
+-	InstanceType       string               `xml:"instanceType"`
+-	SecurityGroups     []SecurityGroup      `xml:"groupSet>item"`
+-	IamInstanceProfile string               `xml:"iamInstanceProfile"`
+-	KernelId           string               `xml:"kernelId"`
+-	RamdiskId          string               `xml:"ramdiskId"`
+-	PlacementGroupName string               `xml:"placement>groupName"`
+-	Monitoring         bool                 `xml:"monitoring>enabled"`
+-	SubnetId           string               `xml:"subnetId"`
+-	BlockDevices       []BlockDeviceMapping `xml:"blockDeviceMapping>item"`
+-}
+-
+-type SpotStatus struct {
+-	Code       string `xml:"code"`
+-	UpdateTime string `xml:"updateTime"`
+-	Message    string `xml:"message"`
+-}
+-
+-type SpotRequestResult struct {
+-	SpotRequestId  string         `xml:"spotInstanceRequestId"`
+-	SpotPrice      string         `xml:"spotPrice"`
+-	Type           string         `xml:"type"`
+-	AvailZone      string         `xml:"launchedAvailabilityZone"`
+-	InstanceId     string         `xml:"instanceId"`
+-	State          string         `xml:"state"`
+-	Status         SpotStatus     `xml:"status"`
+-	SpotLaunchSpec SpotLaunchSpec `xml:"launchSpecification"`
+-	CreateTime     string         `xml:"createTime"`
+-	Tags           []Tag          `xml:"tagSet>item"`
+-}
+-
+-// Response to a RequestSpotInstances request.
+-//
+-// See http://goo.gl/GRZgCD for more details.
+-type RequestSpotInstancesResp struct {
+-	RequestId          string              `xml:"requestId"`
+-	SpotRequestResults []SpotRequestResult `xml:"spotInstanceRequestSet>item"`
+-}
+-
+-// RequestSpotInstances requests new spot instances in EC2.
+-func (ec2 *EC2) RequestSpotInstances(options *RequestSpotInstances) (resp *RequestSpotInstancesResp, err error) {
+-	params := makeParams("RequestSpotInstances")
+-	prefix := "LaunchSpecification" + "."
+-
+-	params["SpotPrice"] = options.SpotPrice
+-	params[prefix+"ImageId"] = options.ImageId
+-	params[prefix+"InstanceType"] = options.InstanceType
+-
+-	if options.InstanceCount != 0 {
+-		params["InstanceCount"] = strconv.Itoa(options.InstanceCount)
+-	}
+-	if options.KeyName != "" {
+-		params[prefix+"KeyName"] = options.KeyName
+-	}
+-	if options.KernelId != "" {
+-		params[prefix+"KernelId"] = options.KernelId
+-	}
+-	if options.RamdiskId != "" {
+-		params[prefix+"RamdiskId"] = options.RamdiskId
+-	}
+-	if options.UserData != nil {
+-		userData := make([]byte, b64.EncodedLen(len(options.UserData)))
+-		b64.Encode(userData, options.UserData)
+-		params[prefix+"UserData"] = string(userData)
+-	}
+-	if options.AvailZone != "" {
+-		params[prefix+"Placement.AvailabilityZone"] = options.AvailZone
+-	}
+-	if options.PlacementGroupName != "" {
+-		params[prefix+"Placement.GroupName"] = options.PlacementGroupName
+-	}
+-	if options.Monitoring {
+-		params[prefix+"Monitoring.Enabled"] = "true"
+-	}
+-	if options.SubnetId != "" && options.AssociatePublicIpAddress {
+-		// If we have a non-default VPC / Subnet specified, we can flag
+-		// AssociatePublicIpAddress to get a Public IP assigned. By default these are not provided.
+-		// You cannot specify both SubnetId and the NetworkInterface.0.* parameters though, otherwise
+-		// you get: Network interfaces and an instance-level subnet ID may not be specified on the same request
+-		// You also need to attach Security Groups to the NetworkInterface instead of the instance,
+-		// to avoid: Network interfaces and an instance-level security groups may not be specified on
+-		// the same request
+-		params[prefix+"NetworkInterface.0.DeviceIndex"] = "0"
+-		params[prefix+"NetworkInterface.0.AssociatePublicIpAddress"] = "true"
+-		params[prefix+"NetworkInterface.0.SubnetId"] = options.SubnetId
+-
+-		i := 1
+-		for _, g := range options.SecurityGroups {
+-			// We only have SecurityGroupId's on NetworkInterface's, no SecurityGroup params.
+-			if g.Id != "" {
+-				params[prefix+"NetworkInterface.0.SecurityGroupId."+strconv.Itoa(i)] = g.Id
+-				i++
+-			}
+-		}
+-	} else {
+-		if options.SubnetId != "" {
+-			params[prefix+"SubnetId"] = options.SubnetId
+-		}
+-
+-		i, j := 1, 1
+-		for _, g := range options.SecurityGroups {
+-			if g.Id != "" {
+-				params[prefix+"SecurityGroupId."+strconv.Itoa(i)] = g.Id
+-				i++
+-			} else {
+-				params[prefix+"SecurityGroup."+strconv.Itoa(j)] = g.Name
+-				j++
+-			}
+-		}
+-	}
+-	if options.IamInstanceProfile != "" {
+-		params[prefix+"IamInstanceProfile.Name"] = options.IamInstanceProfile
+-	}
+-	if options.PrivateIPAddress != "" {
+-		params[prefix+"PrivateIpAddress"] = options.PrivateIPAddress
+-	}
+-	addBlockDeviceParams(prefix, params, options.BlockDevices)
+-
+-	resp = &RequestSpotInstancesResp{}
+-	err = ec2.query(params, resp)
+-	if err != nil {
+-		return nil, err
+-	}
+-	return
+-}
+-
+-// Response to a DescribeSpotInstanceRequests request.
+-//
+-// See http://goo.gl/KsKJJk for more details.
+-type SpotRequestsResp struct {
+-	RequestId          string              `xml:"requestId"`
+-	SpotRequestResults []SpotRequestResult `xml:"spotInstanceRequestSet>item"`
+-}
+-
+-// DescribeSpotInstanceRequests returns details about spot requests in EC2.  Both parameters
+-// are optional, and if provided will limit the spot requests returned to those
+-// matching the given spot request ids or filtering rules.
+-//
+-// See http://goo.gl/KsKJJk for more details.
+-func (ec2 *EC2) DescribeSpotRequests(spotrequestIds []string, filter *Filter) (resp *SpotRequestsResp, err error) {
+-	params := makeParams("DescribeSpotInstanceRequests")
+-	addParamsList(params, "SpotInstanceRequestId", spotrequestIds)
+-	filter.addParams(params)
+-	resp = &SpotRequestsResp{}
+-	err = ec2.query(params, resp)
+-	if err != nil {
+-		return nil, err
+-	}
+-	return
+-}
+-
+-// Response to a CancelSpotInstanceRequests request.
+-//
+-// See http://goo.gl/3BKHj for more details.
+-type CancelSpotRequestResult struct {
+-	SpotRequestId string `xml:"spotInstanceRequestId"`
+-	State         string `xml:"state"`
+-}
+-type CancelSpotRequestsResp struct {
+-	RequestId                string                    `xml:"requestId"`
+-	CancelSpotRequestResults []CancelSpotRequestResult `xml:"spotInstanceRequestSet>item"`
+-}
+-
+-// CancelSpotRequests requests the cancellation of the spot requests with the given ids.
+-//
+-// See http://goo.gl/3BKHj for more details.
+-func (ec2 *EC2) CancelSpotRequests(spotrequestIds []string) (resp *CancelSpotRequestsResp, err error) {
+-	params := makeParams("CancelSpotInstanceRequests")
+-	addParamsList(params, "SpotInstanceRequestId", spotrequestIds)
+-	resp = &CancelSpotRequestsResp{}
+-	err = ec2.query(params, resp)
+-	if err != nil {
+-		return nil, err
+-	}
+-	return
+-}
+-
+-// Response to a TerminateInstances request.
+-//
+-// See http://goo.gl/3BKHj for more details.
+-type TerminateInstancesResp struct {
+-	RequestId    string                `xml:"requestId"`
+-	StateChanges []InstanceStateChange `xml:"instancesSet>item"`
+-}
+-
+-// InstanceState encapsulates the state of an instance in EC2.
+-//
+-// See http://goo.gl/y3ZBq for more details.
+-type InstanceState struct {
+-	Code int    `xml:"code"` // Watch out, bits 15-8 have unpublished meaning.
+-	Name string `xml:"name"`
+-}
+-
+-// InstanceStateChange informs of the previous and current states
+-// for an instance when a state change is requested.
+-type InstanceStateChange struct {
+-	InstanceId    string        `xml:"instanceId"`
+-	CurrentState  InstanceState `xml:"currentState"`
+-	PreviousState InstanceState `xml:"previousState"`
+-}
+-
+-// TerminateInstances requests the termination of instances with the given ids.
+-//
+-// See http://goo.gl/3BKHj for more details.
+-func (ec2 *EC2) TerminateInstances(instIds []string) (resp *TerminateInstancesResp, err error) {
+-	params := makeParams("TerminateInstances")
+-	addParamsList(params, "InstanceId", instIds)
+-	resp = &TerminateInstancesResp{}
+-	err = ec2.query(params, resp)
+-	if err != nil {
+-		return nil, err
+-	}
+-	return
+-}
+-
+-// Response to a DescribeInstances request.
+-//
+-// See http://goo.gl/mLbmw for more details.
+-type InstancesResp struct {
+-	RequestId    string        `xml:"requestId"`
+-	Reservations []Reservation `xml:"reservationSet>item"`
+-}
+-
+-// Reservation represents details about a reservation in EC2.
+-//
+-// See http://goo.gl/0ItPT for more details.
+-type Reservation struct {
+-	ReservationId  string          `xml:"reservationId"`
+-	OwnerId        string          `xml:"ownerId"`
+-	RequesterId    string          `xml:"requesterId"`
+-	SecurityGroups []SecurityGroup `xml:"groupSet>item"`
+-	Instances      []Instance      `xml:"instancesSet>item"`
+-}
+-
+-// Instances returns details about instances in EC2.  Both parameters
+-// are optional, and if provided will limit the instances returned to those
+-// matching the given instance ids or filtering rules.
+-//
+-// See http://goo.gl/4No7c for more details.
+-func (ec2 *EC2) Instances(instIds []string, filter *Filter) (resp *InstancesResp, err error) {
+-	params := makeParams("DescribeInstances")
+-	addParamsList(params, "InstanceId", instIds)
+-	filter.addParams(params)
+-	resp = &InstancesResp{}
+-	err = ec2.query(params, resp)
+-	if err != nil {
+-		return nil, err
+-	}
+-	return
+-}
+-
+-// ----------------------------------------------------------------------------
+-// Volume management
+-
+-// The CreateVolume request parameters
+-//
+-// See http://docs.aws.amazon.com/AWSEC2/latest/APIReference/ApiReference-query-CreateVolume.html
+-type CreateVolume struct {
+-	AvailZone  string
+-	Size       int64
+-	SnapshotId string
+-	VolumeType string
+-	IOPS       int64
+-	Encrypted  bool
+-}
+-
+-// Response to an AttachVolume request
+-type AttachVolumeResp struct {
+-	RequestId  string `xml:"requestId"`
+-	VolumeId   string `xml:"volumeId"`
+-	InstanceId string `xml:"instanceId"`
+-	Device     string `xml:"device"`
+-	Status     string `xml:"status"`
+-	AttachTime string `xml:"attachTime"`
+-}
+-
+-// Response to a CreateVolume request
+-type CreateVolumeResp struct {
+-	RequestId  string `xml:"requestId"`
+-	VolumeId   string `xml:"volumeId"`
+-	Size       int64  `xml:"size"`
+-	SnapshotId string `xml:"snapshotId"`
+-	AvailZone  string `xml:"availabilityZone"`
+-	Status     string `xml:"status"`
+-	CreateTime string `xml:"createTime"`
+-	VolumeType string `xml:"volumeType"`
+-	IOPS       int64  `xml:"iops"`
+-	Encrypted  bool   `xml:"encrypted"`
+-}
+-
+-// Volume is a single volume.
+-type Volume struct {
+-	VolumeId    string             `xml:"volumeId"`
+-	Size        string             `xml:"size"`
+-	SnapshotId  string             `xml:"snapshotId"`
+-	AvailZone   string             `xml:"availabilityZone"`
+-	Status      string             `xml:"status"`
+-	Attachments []VolumeAttachment `xml:"attachmentSet>item"`
+-	VolumeType  string             `xml:"volumeType"`
+-	IOPS        int64              `xml:"iops"`
+-	Encrypted   bool               `xml:"encrypted"`
+-	Tags        []Tag              `xml:"tagSet>item"`
+-}
+-
+-type VolumeAttachment struct {
+-	VolumeId   string `xml:"volumeId"`
+-	InstanceId string `xml:"instanceId"`
+-	Device     string `xml:"device"`
+-	Status     string `xml:"status"`
+-}
+-
+-// Response to a DescribeVolumes request
+-type VolumesResp struct {
+-	RequestId string   `xml:"requestId"`
+-	Volumes   []Volume `xml:"volumeSet>item"`
+-}
+-
+-// Attach a volume.
+-func (ec2 *EC2) AttachVolume(volumeId string, instanceId string, device string) (resp *AttachVolumeResp, err error) {
+-	params := makeParams("AttachVolume")
+-	params["VolumeId"] = volumeId
+-	params["InstanceId"] = instanceId
+-	params["Device"] = device
+-
+-	resp = &AttachVolumeResp{}
+-	err = ec2.query(params, resp)
+-	if err != nil {
+-		return nil, err
+-	}
+-
+-	return
+-}
+-
+-// Create a new volume.
+-func (ec2 *EC2) CreateVolume(options *CreateVolume) (resp *CreateVolumeResp, err error) {
+-	params := makeParams("CreateVolume")
+-	params["AvailabilityZone"] = options.AvailZone
+-	if options.Size > 0 {
+-		params["Size"] = strconv.FormatInt(options.Size, 10)
+-	}
+-
+-	if options.SnapshotId != "" {
+-		params["SnapshotId"] = options.SnapshotId
+-	}
+-
+-	if options.VolumeType != "" {
+-		params["VolumeType"] = options.VolumeType
+-	}
+-
+-	if options.IOPS > 0 {
+-		params["Iops"] = strconv.FormatInt(options.IOPS, 10)
+-	}
+-
+-	if options.Encrypted {
+-		params["Encrypted"] = "true"
+-	}
+-
+-	resp = &CreateVolumeResp{}
+-	err = ec2.query(params, resp)
+-	if err != nil {
+-		return nil, err
+-	}
+-
+-	return
+-}
+-
+-// Delete an EBS volume.
+-func (ec2 *EC2) DeleteVolume(id string) (resp *SimpleResp, err error) {
+-	params := makeParams("DeleteVolume")
+-	params["VolumeId"] = id
+-
+-	resp = &SimpleResp{}
+-	err = ec2.query(params, resp)
+-	if err != nil {
+-		return nil, err
+-	}
+-	return
+-}
+-
+-// Detaches an EBS volume.
+-func (ec2 *EC2) DetachVolume(id string) (resp *SimpleResp, err error) {
+-	params := makeParams("DetachVolume")
+-	params["VolumeId"] = id
+-
+-	resp = &SimpleResp{}
+-	err = ec2.query(params, resp)
+-	if err != nil {
+-		return nil, err
+-	}
+-	return
+-}
+-
+-// Finds or lists all volumes.
+-func (ec2 *EC2) Volumes(volIds []string, filter *Filter) (resp *VolumesResp, err error) {
+-	params := makeParams("DescribeVolumes")
+-	addParamsList(params, "VolumeId", volIds)
+-	filter.addParams(params)
+-	resp = &VolumesResp{}
+-	err = ec2.query(params, resp)
+-	if err != nil {
+-		return nil, err
+-	}
+-	return
+-}
+-
+-// ----------------------------------------------------------------------------
+-// ElasticIp management (for VPC)
+-
+-// The AllocateAddress request parameters
+-//
+-// see http://docs.aws.amazon.com/AWSEC2/latest/APIReference/ApiReference-query-AllocateAddress.html
+-type AllocateAddress struct {
+-	Domain string
+-}
+-
+-// Response to an AllocateAddress request
+-type AllocateAddressResp struct {
+-	RequestId    string `xml:"requestId"`
+-	PublicIp     string `xml:"publicIp"`
+-	Domain       string `xml:"domain"`
+-	AllocationId string `xml:"allocationId"`
+-}
+-
+-// The AssociateAddress request parameters
+-//
+-// http://docs.aws.amazon.com/AWSEC2/latest/APIReference/ApiReference-query-AssociateAddress.html
+-type AssociateAddress struct {
+-	InstanceId         string
+-	PublicIp           string
+-	AllocationId       string
+-	AllowReassociation bool
+-}
+-
+-// Response to an AssociateAddress request
+-type AssociateAddressResp struct {
+-	RequestId     string `xml:"requestId"`
+-	Return        bool   `xml:"return"`
+-	AssociationId string `xml:"associationId"`
+-}
+-
+-// Address represents an Elastic IP Address
+-// See http://goo.gl/uxCjp7 for more details
+-type Address struct {
+-	PublicIp                string `xml:"publicIp"`
+-	AllocationId            string `xml:"allocationId"`
+-	Domain                  string `xml:"domain"`
+-	InstanceId              string `xml:"instanceId"`
+-	AssociationId           string `xml:"associationId"`
+-	NetworkInterfaceId      string `xml:"networkInterfaceId"`
+-	NetworkInterfaceOwnerId string `xml:"networkInterfaceOwnerId"`
+-	PrivateIpAddress        string `xml:"privateIpAddress"`
+-}
+-
+-type DescribeAddressesResp struct {
+-	RequestId string    `xml:"requestId"`
+-	Addresses []Address `xml:"addressesSet>item"`
+-}
+-
+-// Allocate a new Elastic IP.
+-func (ec2 *EC2) AllocateAddress(options *AllocateAddress) (resp *AllocateAddressResp, err error) {
+-	params := makeParams("AllocateAddress")
+-	params["Domain"] = options.Domain
+-
+-	resp = &AllocateAddressResp{}
+-	err = ec2.query(params, resp)
+-	if err != nil {
+-		return nil, err
+-	}
+-
+-	return
+-}
+-
+-// Release an Elastic IP (VPC).
+-func (ec2 *EC2) ReleaseAddress(id string) (resp *SimpleResp, err error) {
+-	params := makeParams("ReleaseAddress")
+-	params["AllocationId"] = id
+-
+-	resp = &SimpleResp{}
+-	err = ec2.query(params, resp)
+-	if err != nil {
+-		return nil, err
+-	}
+-
+-	return
+-}
+-
+-// Release an Elastic IP (Public)
+-func (ec2 *EC2) ReleasePublicAddress(publicIp string) (resp *SimpleResp, err error) {
+-	params := makeParams("ReleaseAddress")
+-	params["PublicIp"] = publicIp
+-
+-	resp = &SimpleResp{}
+-	err = ec2.query(params, resp)
+-	if err != nil {
+-		return nil, err
+-	}
+-
+-	return
+-}
+-
+-// Associate an address with a VPC instance.
+-func (ec2 *EC2) AssociateAddress(options *AssociateAddress) (resp *AssociateAddressResp, err error) {
+-	params := makeParams("AssociateAddress")
+-	params["InstanceId"] = options.InstanceId
+-	if options.PublicIp != "" {
+-		params["PublicIp"] = options.PublicIp
+-	}
+-	if options.AllocationId != "" {
+-		params["AllocationId"] = options.AllocationId
+-	}
+-	if options.AllowReassociation {
+-		params["AllowReassociation"] = "true"
+-	}
+-
+-	resp = &AssociateAddressResp{}
+-	err = ec2.query(params, resp)
+-	if err != nil {
+-		return nil, err
+-	}
+-
+-	return
+-}
+-
+-// Disassociate an address from a VPC instance.
+-func (ec2 *EC2) DisassociateAddress(id string) (resp *SimpleResp, err error) {
+-	params := makeParams("DisassociateAddress")
+-	params["AssociationId"] = id
+-
+-	resp = &SimpleResp{}
+-	err = ec2.query(params, resp)
+-	if err != nil {
+-		return nil, err
+-	}
+-
+-	return
+-}
+-
+-// DescribeAddresses returns details about one or more
+-// Elastic IP Addresses. Returned addresses can be
+-// filtered by Public IP, Allocation ID or multiple filters
+-//
+-// See http://goo.gl/zW7J4p for more details.
+-func (ec2 *EC2) Addresses(publicIps []string, allocationIds []string, filter *Filter) (resp *DescribeAddressesResp, err error) {
+-	params := makeParams("DescribeAddresses")
+-	addParamsList(params, "PublicIp", publicIps)
+-	addParamsList(params, "AllocationId", allocationIds)
+-	filter.addParams(params)
+-	resp = &DescribeAddressesResp{}
+-	err = ec2.query(params, resp)
+-	if err != nil {
+-		return nil, err
+-	}
+-	return
+-}
+-
+-// ----------------------------------------------------------------------------
+-// Image and snapshot management functions and types.
+-
+-// The CreateImage request parameters.
+-//
+-// See http://goo.gl/cxU41 for more details.
+-type CreateImage struct {
+-	InstanceId   string
+-	Name         string
+-	Description  string
+-	NoReboot     bool
+-	BlockDevices []BlockDeviceMapping
+-}
+-
+-// Response to a CreateImage request.
+-//
+-// See http://goo.gl/cxU41 for more details.
+-type CreateImageResp struct {
+-	RequestId string `xml:"requestId"`
+-	ImageId   string `xml:"imageId"`
+-}
+-
+-// Response to a DescribeImages request.
+-//
+-// See http://goo.gl/hLnyg for more details.
+-type ImagesResp struct {
+-	RequestId string  `xml:"requestId"`
+-	Images    []Image `xml:"imagesSet>item"`
+-}
+-
+-// Response to a DescribeImageAttribute request.
+-//
+-// See http://goo.gl/bHO3zT for more details.
+-type ImageAttributeResp struct {
+-	RequestId    string               `xml:"requestId"`
+-	ImageId      string               `xml:"imageId"`
+-	Kernel       string               `xml:"kernel>value"`
+-	RamDisk      string               `xml:"ramdisk>value"`
+-	Description  string               `xml:"description>value"`
+-	Group        string               `xml:"launchPermission>item>group"`
+-	UserIds      []string             `xml:"launchPermission>item>userId"`
+-	ProductCodes []string             `xml:"productCodes>item>productCode"`
+-	BlockDevices []BlockDeviceMapping `xml:"blockDeviceMapping>item"`
+-}
+-
+-// The RegisterImage request parameters.
+-type RegisterImage struct {
+-	ImageLocation   string
+-	Name            string
+-	Description     string
+-	Architecture    string
+-	KernelId        string
+-	RamdiskId       string
+-	RootDeviceName  string
+-	VirtType        string
+-	SriovNetSupport string
+-	BlockDevices    []BlockDeviceMapping
+-}
+-
+-// Response to a RegisterImage request.
+-type RegisterImageResp struct {
+-	RequestId string `xml:"requestId"`
+-	ImageId   string `xml:"imageId"`
+-}
+-
+-// Response to a DeregisterImage request.
+-//
+-// See http://docs.aws.amazon.com/AWSEC2/latest/APIReference/ApiReference-query-DeregisterImage.html
+-type DeregisterImageResp struct {
+-	RequestId string `xml:"requestId"`
+-	Return    bool   `xml:"return"`
+-}
+-
+-// BlockDeviceMapping represents the association of a block device with an image.
+-//
+-// See http://goo.gl/wnDBf for more details.
+-type BlockDeviceMapping struct {
+-	DeviceName          string `xml:"deviceName"`
+-	VirtualName         string `xml:"virtualName"`
+-	SnapshotId          string `xml:"ebs>snapshotId"`
+-	VolumeType          string `xml:"ebs>volumeType"`
+-	VolumeSize          int64  `xml:"ebs>volumeSize"`
+-	DeleteOnTermination bool   `xml:"ebs>deleteOnTermination"`
+-	Encrypted           bool   `xml:"ebs>encrypted"`
+-	NoDevice            bool   `xml:"noDevice"`
+-
+-	// The number of I/O operations per second (IOPS) that the volume supports.
+-	IOPS int64 `xml:"ebs>iops"`
+-}
+-
+-// Image represents details about an image.
+-//
+-// See http://goo.gl/iSqJG for more details.
+-type Image struct {
+-	Id                 string               `xml:"imageId"`
+-	Name               string               `xml:"name"`
+-	Description        string               `xml:"description"`
+-	Type               string               `xml:"imageType"`
+-	State              string               `xml:"imageState"`
+-	Location           string               `xml:"imageLocation"`
+-	Public             bool                 `xml:"isPublic"`
+-	Architecture       string               `xml:"architecture"`
+-	Platform           string               `xml:"platform"`
+-	ProductCodes       []string             `xml:"productCode>item>productCode"`
+-	KernelId           string               `xml:"kernelId"`
+-	RamdiskId          string               `xml:"ramdiskId"`
+-	StateReason        string               `xml:"stateReason"`
+-	OwnerId            string               `xml:"imageOwnerId"`
+-	OwnerAlias         string               `xml:"imageOwnerAlias"`
+-	RootDeviceType     string               `xml:"rootDeviceType"`
+-	RootDeviceName     string               `xml:"rootDeviceName"`
+-	VirtualizationType string               `xml:"virtualizationType"`
+-	Hypervisor         string               `xml:"hypervisor"`
+-	BlockDevices       []BlockDeviceMapping `xml:"blockDeviceMapping>item"`
+-	Tags               []Tag                `xml:"tagSet>item"`
+-}
+-
+-// The ModifyImageAttribute request parameters.
+-type ModifyImageAttribute struct {
+-	AddUsers     []string
+-	RemoveUsers  []string
+-	AddGroups    []string
+-	RemoveGroups []string
+-	ProductCodes []string
+-	Description  string
+-}
+-
+-// The CopyImage request parameters.
+-//
+-// See http://goo.gl/hQwPCK for more details.
+-type CopyImage struct {
+-	SourceRegion  string
+-	SourceImageId string
+-	Name          string
+-	Description   string
+-	ClientToken   string
+-}
+-
+-// Response to a CopyImage request.
+-//
+-// See http://goo.gl/hQwPCK for more details.
+-type CopyImageResp struct {
+-	RequestId string `xml:"requestId"`
+-	ImageId   string `xml:"imageId"`
+-}
+-
+-// Creates an Amazon EBS-backed AMI from an Amazon EBS-backed instance
+-// that is either running or stopped.
+-//
+-// See http://goo.gl/cxU41 for more details.
+-func (ec2 *EC2) CreateImage(options *CreateImage) (resp *CreateImageResp, err error) {
+-	params := makeParams("CreateImage")
+-	params["InstanceId"] = options.InstanceId
+-	params["Name"] = options.Name
+-	if options.Description != "" {
+-		params["Description"] = options.Description
+-	}
+-	if options.NoReboot {
+-		params["NoReboot"] = "true"
+-	}
+-	addBlockDeviceParams("", params, options.BlockDevices)
+-
+-	resp = &CreateImageResp{}
+-	err = ec2.query(params, resp)
+-	if err != nil {
+-		return nil, err
+-	}
+-
+-	return
+-}
+-
+-// Images returns details about available images.
+-// The ids and filter parameters, if provided, will limit the images returned.
+-// For example, to get all the private images associated with this account set
+-// the boolean filter "is-public" to 0.
+-// For list of filters: http://docs.aws.amazon.com/AWSEC2/latest/APIReference/ApiReference-query-DescribeImages.html
+-//
+-// Note: calling this function with nil ids and filter parameters will result in
+-// a very large number of images being returned.
+-//
+-// See http://goo.gl/SRBhW for more details.
+-func (ec2 *EC2) Images(ids []string, filter *Filter) (resp *ImagesResp, err error) {
+-	params := makeParams("DescribeImages")
+-	for i, id := range ids {
+-		params["ImageId."+strconv.Itoa(i+1)] = id
+-	}
+-	filter.addParams(params)
+-
+-	resp = &ImagesResp{}
+-	err = ec2.query(params, resp)
+-	if err != nil {
+-		return nil, err
+-	}
+-	return
+-}
+-
+-// ImagesByOwners returns details about available images.
+-// The ids, owners, and filter parameters, if provided, will limit the images returned.
+-// For example, to get all the private images associated with this account set
+-// the boolean filter "is-public" to 0.
+-// For list of filters: http://docs.aws.amazon.com/AWSEC2/latest/APIReference/ApiReference-query-DescribeImages.html
+-//
+-// Note: calling this function with nil ids and filter parameters will result in
+-// a very large number of images being returned.
+-//
+-// See http://goo.gl/SRBhW for more details.
+-func (ec2 *EC2) ImagesByOwners(ids []string, owners []string, filter *Filter) (resp *ImagesResp, err error) {
+-	params := makeParams("DescribeImages")
+-	for i, id := range ids {
+-		params["ImageId."+strconv.Itoa(i+1)] = id
+-	}
+-	for i, owner := range owners {
+-		params[fmt.Sprintf("Owner.%d", i+1)] = owner
+-	}
+-
+-	filter.addParams(params)
+-
+-	resp = &ImagesResp{}
+-	err = ec2.query(params, resp)
+-	if err != nil {
+-		return nil, err
+-	}
+-	return
+-}
+-
+-// ImageAttribute describes an attribute of an AMI.
+-// You can specify only one attribute at a time.
+-// Valid attributes are:
+-//    description | kernel | ramdisk | launchPermission | productCodes | blockDeviceMapping
+-//
+-// See http://goo.gl/bHO3zT for more details.
+-func (ec2 *EC2) ImageAttribute(imageId, attribute string) (resp *ImageAttributeResp, err error) {
+-	params := makeParams("DescribeImageAttribute")
+-	params["ImageId"] = imageId
+-	params["Attribute"] = attribute
+-
+-	resp = &ImageAttributeResp{}
+-	err = ec2.query(params, resp)
+-	if err != nil {
+-		return nil, err
+-	}
+-	return
+-}
+-
+-// ModifyImageAttribute sets attributes for an image.
+-//
+-// See http://goo.gl/YUjO4G for more details.
+-func (ec2 *EC2) ModifyImageAttribute(imageId string, options *ModifyImageAttribute) (resp *SimpleResp, err error) {
+-	params := makeParams("ModifyImageAttribute")
+-	params["ImageId"] = imageId
+-	if options.Description != "" {
+-		params["Description.Value"] = options.Description
+-	}
+-
+-	if options.AddUsers != nil {
+-		for i, user := range options.AddUsers {
+-			p := fmt.Sprintf("LaunchPermission.Add.%d.UserId", i+1)
+-			params[p] = user
+-		}
+-	}
+-
+-	if options.RemoveUsers != nil {
+-		for i, user := range options.RemoveUsers {
+-			p := fmt.Sprintf("LaunchPermission.Remove.%d.UserId", i+1)
+-			params[p] = user
+-		}
+-	}
+-
+-	if options.AddGroups != nil {
+-		for i, group := range options.AddGroups {
+-			p := fmt.Sprintf("LaunchPermission.Add.%d.Group", i+1)
+-			params[p] = group
+-		}
+-	}
+-
+-	if options.RemoveGroups != nil {
+-		for i, group := range options.RemoveGroups {
+-			p := fmt.Sprintf("LaunchPermission.Remove.%d.Group", i+1)
+-			params[p] = group
+-		}
+-	}
+-
+-	if options.ProductCodes != nil {
+-		addParamsList(params, "ProductCode", options.ProductCodes)
+-	}
+-
+-	resp = &SimpleResp{}
+-	err = ec2.query(params, resp)
+-	if err != nil {
+-		resp = nil
+-	}
+-
+-	return
+-}
+-
+-// Registers a new AMI with EC2.
+-//
+-// See: http://docs.aws.amazon.com/AWSEC2/latest/APIReference/ApiReference-query-RegisterImage.html
+-func (ec2 *EC2) RegisterImage(options *RegisterImage) (resp *RegisterImageResp, err error) {
+-	params := makeParams("RegisterImage")
+-	params["Name"] = options.Name
+-	if options.ImageLocation != "" {
+-		params["ImageLocation"] = options.ImageLocation
+-	}
+-
+-	if options.Description != "" {
+-		params["Description"] = options.Description
+-	}
+-
+-	if options.Architecture != "" {
+-		params["Architecture"] = options.Architecture
+-	}
+-
+-	if options.KernelId != "" {
+-		params["KernelId"] = options.KernelId
+-	}
+-
+-	if options.RamdiskId != "" {
+-		params["RamdiskId"] = options.RamdiskId
+-	}
+-
+-	if options.RootDeviceName != "" {
+-		params["RootDeviceName"] = options.RootDeviceName
+-	}
+-
+-	if options.VirtType != "" {
+-		params["VirtualizationType"] = options.VirtType
+-	}
+-
+-	if options.SriovNetSupport != "" {
+-		params["SriovNetSupport"] = "simple"
+-	}
+-
+-	addBlockDeviceParams("", params, options.BlockDevices)
+-
+-	resp = &RegisterImageResp{}
+-	err = ec2.query(params, resp)
+-	if err != nil {
+-		return nil, err
+-	}
+-
+-	return
+-}
+-
+-// Deregisters an image. Note that this does not delete the backing stores of the AMI.
+-//
+-// See http://docs.aws.amazon.com/AWSEC2/latest/APIReference/ApiReference-query-DeregisterImage.html
+-func (ec2 *EC2) DeregisterImage(imageId string) (resp *DeregisterImageResp, err error) {
+-	params := makeParams("DeregisterImage")
+-	params["ImageId"] = imageId
+-
+-	resp = &DeregisterImageResp{}
+-	err = ec2.query(params, resp)
+-	if err != nil {
+-		return nil, err
+-	}
+-
+-	return
+-}
+-
+-// Copy an Image from one region to another.
+-//
+-// See http://goo.gl/hQwPCK for more details.
+-func (ec2 *EC2) CopyImage(options *CopyImage) (resp *CopyImageResp, err error) {
+-	params := makeParams("CopyImage")
+-
+-	if options.SourceRegion != "" {
+-		params["SourceRegion"] = options.SourceRegion
+-	}
+-
+-	if options.SourceImageId != "" {
+-		params["SourceImageId"] = options.SourceImageId
+-	}
+-
+-	if options.Name != "" {
+-		params["Name"] = options.Name
+-	}
+-
+-	if options.Description != "" {
+-		params["Description"] = options.Description
+-	}
+-
+-	if options.ClientToken != "" {
+-		params["ClientToken"] = options.ClientToken
+-	}
+-
+-	resp = &CopyImageResp{}
+-	err = ec2.query(params, resp)
+-	if err != nil {
+-		return nil, err
+-	}
+-
+-	return
+-}
+-
+-// Response to a CreateSnapshot request.
+-//
+-// See http://goo.gl/ttcda for more details.
+-type CreateSnapshotResp struct {
+-	RequestId string `xml:"requestId"`
+-	Snapshot
+-}
+-
+-// CreateSnapshot creates a volume snapshot and stores it in S3.
+-//
+-// See http://goo.gl/ttcda for more details.
+-func (ec2 *EC2) CreateSnapshot(volumeId, description string) (resp *CreateSnapshotResp, err error) {
+-	params := makeParams("CreateSnapshot")
+-	params["VolumeId"] = volumeId
+-	params["Description"] = description
+-
+-	resp = &CreateSnapshotResp{}
+-	err = ec2.query(params, resp)
+-	if err != nil {
+-		return nil, err
+-	}
+-	return
+-}
+-
+-// DeleteSnapshots deletes the volume snapshots with the given ids.
+-//
+-// Note: If you make periodic snapshots of a volume, the snapshots are
+-// incremental so that only the blocks on the device that have changed
+-// since your last snapshot are incrementally saved in the new snapshot.
+-// Even though snapshots are saved incrementally, the snapshot deletion
+-// process is designed so that you need to retain only the most recent
+-// snapshot in order to restore the volume.
+-//
+-// See http://goo.gl/vwU1y for more details.
+-func (ec2 *EC2) DeleteSnapshots(ids []string) (resp *SimpleResp, err error) {
+-	params := makeParams("DeleteSnapshot")
+-	for i, id := range ids {
+-		params["SnapshotId."+strconv.Itoa(i+1)] = id
+-	}
+-
+-	resp = &SimpleResp{}
+-	err = ec2.query(params, resp)
+-	if err != nil {
+-		return nil, err
+-	}
+-	return
+-}
+-
+-// Response to a DescribeSnapshots request.
+-//
+-// See http://goo.gl/nClDT for more details.
+-type SnapshotsResp struct {
+-	RequestId string     `xml:"requestId"`
+-	Snapshots []Snapshot `xml:"snapshotSet>item"`
+-}
+-
+-// Snapshot represents details about a volume snapshot.
+-//
+-// See http://goo.gl/nkovs for more details.
+-type Snapshot struct {
+-	Id          string `xml:"snapshotId"`
+-	VolumeId    string `xml:"volumeId"`
+-	VolumeSize  string `xml:"volumeSize"`
+-	Status      string `xml:"status"`
+-	StartTime   string `xml:"startTime"`
+-	Description string `xml:"description"`
+-	Progress    string `xml:"progress"`
+-	OwnerId     string `xml:"ownerId"`
+-	OwnerAlias  string `xml:"ownerAlias"`
+-	Encrypted   bool   `xml:"encrypted"`
+-	Tags        []Tag  `xml:"tagSet>item"`
+-}
+-
+-// Snapshots returns details about volume snapshots available to the user.
+-// The ids and filter parameters, if provided, limit the snapshots returned.
+-//
+-// See http://goo.gl/ogJL4 for more details.
+-func (ec2 *EC2) Snapshots(ids []string, filter *Filter) (resp *SnapshotsResp, err error) {
+-	params := makeParams("DescribeSnapshots")
+-	for i, id := range ids {
+-		params["SnapshotId."+strconv.Itoa(i+1)] = id
+-	}
+-	filter.addParams(params)
+-
+-	resp = &SnapshotsResp{}
+-	err = ec2.query(params, resp)
+-	if err != nil {
+-		return nil, err
+-	}
+-	return
+-}
+-
+-// ----------------------------------------------------------------------------
+-// KeyPair management functions and types.
+-
+-type KeyPair struct {
+-	Name        string `xml:"keyName"`
+-	Fingerprint string `xml:"keyFingerprint"`
+-}
+-
+-type KeyPairsResp struct {
+-	RequestId string    `xml:"requestId"`
+-	Keys      []KeyPair `xml:"keySet>item"`
+-}
+-
+-type CreateKeyPairResp struct {
+-	RequestId      string `xml:"requestId"`
+-	KeyName        string `xml:"keyName"`
+-	KeyFingerprint string `xml:"keyFingerprint"`
+-	KeyMaterial    string `xml:"keyMaterial"`
+-}
+-
+-type ImportKeyPairResponse struct {
+-	RequestId      string `xml:"requestId"`
+-	KeyName        string `xml:"keyName"`
+-	KeyFingerprint string `xml:"keyFingerprint"`
+-}
+-
+-// CreateKeyPair creates a new key pair and returns the private key contents.
+-//
+-// See http://goo.gl/0S6hV
+-func (ec2 *EC2) CreateKeyPair(keyName string) (resp *CreateKeyPairResp, err error) {
+-	params := makeParams("CreateKeyPair")
+-	params["KeyName"] = keyName
+-
+-	resp = &CreateKeyPairResp{}
+-	err = ec2.query(params, resp)
+-	if err == nil {
+-		resp.KeyFingerprint = strings.TrimSpace(resp.KeyFingerprint)
+-	}
+-	return
+-}
+-
+-// DeleteKeyPair deletes a key pair.
+-//
+-// See http://goo.gl/0bqok
+-func (ec2 *EC2) DeleteKeyPair(name string) (resp *SimpleResp, err error) {
+-	params := makeParams("DeleteKeyPair")
+-	params["KeyName"] = name
+-
+-	resp = &SimpleResp{}
+-	err = ec2.query(params, resp)
+-	return
+-}
+-
+-// KeyPairs returns list of key pairs for this account
+-//
+-// See http://goo.gl/Apzsfz
+-func (ec2 *EC2) KeyPairs(keynames []string, filter *Filter) (resp *KeyPairsResp, err error) {
+-	params := makeParams("DescribeKeyPairs")
+-	for i, name := range keynames {
+-		params["KeyName."+strconv.Itoa(i+1)] = name
+-	}
+-	filter.addParams(params)
+-
+-	resp = &KeyPairsResp{}
+-	err = ec2.query(params, resp)
+-	if err != nil {
+-		return nil, err
+-	}
+-
+-	return resp, nil
+-}
+-
+-// ImportKeyPair imports a key into AWS
+-//
+-// See http://goo.gl/NbZUvw
+-func (ec2 *EC2) ImportKeyPair(keyname string, key string) (resp *ImportKeyPairResponse, err error) {
+-	params := makeParams("ImportKeyPair")
+-	params["KeyName"] = keyname
+-
+-	// Oddly, AWS requires the key material to be base64-encoded, even if it was
+-	// already encoded. So, we force another round of encoding...
+-	// c.f. https://groups.google.com/forum/?fromgroups#!topic/boto-dev/IczrStO9Q8M
+-	params["PublicKeyMaterial"] = base64.StdEncoding.EncodeToString([]byte(key))
+-
+-	resp = &ImportKeyPairResponse{}
+-	err = ec2.query(params, resp)
+-	if err != nil {
+-		return nil, err
+-	}
+-	return resp, nil
+-}
+-
+-// ----------------------------------------------------------------------------
+-// Security group management functions and types.
+-
+-// SimpleResp represents a response to an EC2 request which on success will
+-// return no other information besides a request id.
+-type SimpleResp struct {
+-	XMLName   xml.Name
+-	RequestId string `xml:"requestId"`
+-}
+-
+-// CreateSecurityGroupResp represents a response to a CreateSecurityGroup request.
+-type CreateSecurityGroupResp struct {
+-	SecurityGroup
+-	RequestId string `xml:"requestId"`
+-}
+-
+-// CreateSecurityGroup runs a CreateSecurityGroup request in EC2, with the provided
+-// name and description.
+-//
+-// See http://goo.gl/Eo7Yl for more details.
+-func (ec2 *EC2) CreateSecurityGroup(group SecurityGroup) (resp *CreateSecurityGroupResp, err error) {
+-	params := makeParams("CreateSecurityGroup")
+-	params["GroupName"] = group.Name
+-	params["GroupDescription"] = group.Description
+-	if group.VpcId != "" {
+-		params["VpcId"] = group.VpcId
+-	}
+-
+-	resp = &CreateSecurityGroupResp{}
+-	err = ec2.query(params, resp)
+-	if err != nil {
+-		return nil, err
+-	}
+-	resp.Name = group.Name
+-	return resp, nil
+-}
+-
+-// SecurityGroupsResp represents a response to a DescribeSecurityGroups
+-// request in EC2.
+-//
+-// See http://goo.gl/k12Uy for more details.
+-type SecurityGroupsResp struct {
+-	RequestId string              `xml:"requestId"`
+-	Groups    []SecurityGroupInfo `xml:"securityGroupInfo>item"`
+-}
+-
+-// SecurityGroupInfo encapsulates details for a security group in EC2.
+-//
+-// See http://goo.gl/CIdyP for more details.
+-type SecurityGroupInfo struct {
+-	SecurityGroup
+-	OwnerId     string   `xml:"ownerId"`
+-	Description string   `xml:"groupDescription"`
+-	IPPerms     []IPPerm `xml:"ipPermissions>item"`
+-}
+-
+-// IPPerm represents an allowance within an EC2 security group.
+-//
+-// See http://goo.gl/4oTxv for more details.
+-type IPPerm struct {
+-	Protocol     string              `xml:"ipProtocol"`
+-	FromPort     int                 `xml:"fromPort"`
+-	ToPort       int                 `xml:"toPort"`
+-	SourceIPs    []string            `xml:"ipRanges>item>cidrIp"`
+-	SourceGroups []UserSecurityGroup `xml:"groups>item"`
+-}
+-
+-// UserSecurityGroup holds a security group and the owner
+-// of that group.
+-type UserSecurityGroup struct {
+-	Id      string `xml:"groupId"`
+-	Name    string `xml:"groupName"`
+-	OwnerId string `xml:"userId"`
+-}
+-
+-// SecurityGroup represents an EC2 security group.
+-// If SecurityGroup is used as a parameter, then one of Id or Name
+-// may be empty. If both are set, then Id is used.
+-type SecurityGroup struct {
+-	Id          string `xml:"groupId"`
+-	Name        string `xml:"groupName"`
+-	Description string `xml:"groupDescription"`
+-	VpcId       string `xml:"vpcId"`
+-}
+-
+-// SecurityGroupNames is a convenience function that
+-// returns a slice of security groups with the given names.
+-func SecurityGroupNames(names ...string) []SecurityGroup {
+-	g := make([]SecurityGroup, len(names))
+-	for i, name := range names {
+-		g[i] = SecurityGroup{Name: name}
+-	}
+-	return g
+-}
+-
+-// SecurityGroupIds is a convenience function that
+-// returns a slice of security groups with the given ids.
+-func SecurityGroupIds(ids ...string) []SecurityGroup {
+-	g := make([]SecurityGroup, len(ids))
+-	for i, id := range ids {
+-		g[i] = SecurityGroup{Id: id}
+-	}
+-	return g
+-}
+-
+-// SecurityGroups returns details about security groups in EC2.  Both parameters
+-// are optional, and if provided will limit the security groups returned to those
+-// matching the given groups or filtering rules.
+-//
+-// See http://goo.gl/k12Uy for more details.
+-func (ec2 *EC2) SecurityGroups(groups []SecurityGroup, filter *Filter) (resp *SecurityGroupsResp, err error) {
+-	params := makeParams("DescribeSecurityGroups")
+-	i, j := 1, 1
+-	for _, g := range groups {
+-		if g.Id != "" {
+-			params["GroupId."+strconv.Itoa(i)] = g.Id
+-			i++
+-		} else {
+-			params["GroupName."+strconv.Itoa(j)] = g.Name
+-			j++
+-		}
+-	}
+-	filter.addParams(params)
+-
+-	resp = &SecurityGroupsResp{}
+-	err = ec2.query(params, resp)
+-	if err != nil {
+-		return nil, err
+-	}
+-	return resp, nil
+-}
+-
+-// DeleteSecurityGroup removes the given security group in EC2.
+-//
+-// See http://goo.gl/QJJDO for more details.
+-func (ec2 *EC2) DeleteSecurityGroup(group SecurityGroup) (resp *SimpleResp, err error) {
+-	params := makeParams("DeleteSecurityGroup")
+-	if group.Id != "" {
+-		params["GroupId"] = group.Id
+-	} else {
+-		params["GroupName"] = group.Name
+-	}
+-
+-	resp = &SimpleResp{}
+-	err = ec2.query(params, resp)
+-	if err != nil {
+-		return nil, err
+-	}
+-	return resp, nil
+-}
+-
+-// AuthorizeSecurityGroup creates an allowance for clients matching the provided
+-// rules to access instances within the given security group.
+-//
+-// See http://goo.gl/u2sDJ for more details.
+-func (ec2 *EC2) AuthorizeSecurityGroup(group SecurityGroup, perms []IPPerm) (resp *SimpleResp, err error) {
+-	return ec2.authOrRevoke("AuthorizeSecurityGroupIngress", group, perms)
+-}
+-
+-// AuthorizeSecurityGroupEgress creates an allowance for clients matching the provided
+-// rules for egress access.
+-//
+-// See http://goo.gl/UHnH4L for more details.
+-func (ec2 *EC2) AuthorizeSecurityGroupEgress(group SecurityGroup, perms []IPPerm) (resp *SimpleResp, err error) {
+-	return ec2.authOrRevoke("AuthorizeSecurityGroupEgress", group, perms)
+-}
+-
+-// RevokeSecurityGroup revokes permissions from a group.
+-//
+-// See http://goo.gl/ZgdxA for more details.
+-func (ec2 *EC2) RevokeSecurityGroup(group SecurityGroup, perms []IPPerm) (resp *SimpleResp, err error) {
+-	return ec2.authOrRevoke("RevokeSecurityGroupIngress", group, perms)
+-}
+-
+-func (ec2 *EC2) authOrRevoke(op string, group SecurityGroup, perms []IPPerm) (resp *SimpleResp, err error) {
+-	params := makeParams(op)
+-	if group.Id != "" {
+-		params["GroupId"] = group.Id
+-	} else {
+-		params["GroupName"] = group.Name
+-	}
+-
+-	for i, perm := range perms {
+-		prefix := "IpPermissions." + strconv.Itoa(i+1)
+-		params[prefix+".IpProtocol"] = perm.Protocol
+-		params[prefix+".FromPort"] = strconv.Itoa(perm.FromPort)
+-		params[prefix+".ToPort"] = strconv.Itoa(perm.ToPort)
+-		for j, ip := range perm.SourceIPs {
+-			params[prefix+".IpRanges."+strconv.Itoa(j+1)+".CidrIp"] = ip
+-		}
+-		for j, g := range perm.SourceGroups {
+-			subprefix := prefix + ".Groups." + strconv.Itoa(j+1)
+-			if g.OwnerId != "" {
+-				params[subprefix+".UserId"] = g.OwnerId
+-			}
+-			if g.Id != "" {
+-				params[subprefix+".GroupId"] = g.Id
+-			} else {
+-				params[subprefix+".GroupName"] = g.Name
+-			}
+-		}
+-	}
+-
+-	resp = &SimpleResp{}
+-	err = ec2.query(params, resp)
+-	if err != nil {
+-		return nil, err
+-	}
+-	return resp, nil
+-}
+-
+-// Tag represents key-value metadata used to classify and organize
+-// EC2 resources.
+-//
+-// See http://goo.gl/bncl3 for more details.
+-type Tag struct {
+-	Key   string `xml:"key"`
+-	Value string `xml:"value"`
+-}
+-
+-// CreateTags adds or overwrites one or more tags for the specified taggable resources.
+-// For a list of taggable resources, see: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html
+-//
+-// See http://goo.gl/Vmkqc for more details.
+-func (ec2 *EC2) CreateTags(resourceIds []string, tags []Tag) (resp *SimpleResp, err error) {
+-	params := makeParams("CreateTags")
+-	addParamsList(params, "ResourceId", resourceIds)
+-
+-	for j, tag := range tags {
+-		params["Tag."+strconv.Itoa(j+1)+".Key"] = tag.Key
+-		params["Tag."+strconv.Itoa(j+1)+".Value"] = tag.Value
+-	}
+-
+-	resp = &SimpleResp{}
+-	err = ec2.query(params, resp)
+-	if err != nil {
+-		return nil, err
+-	}
+-	return resp, nil
+-}
+-
+-type TagsResp struct {
+-	RequestId string        `xml:"requestId"`
+-	Tags      []ResourceTag `xml:"tagSet>item"`
+-}
+-
+-type ResourceTag struct {
+-	Tag
+-	ResourceId   string `xml:"resourceId"`
+-	ResourceType string `xml:"resourceType"`
+-}
+-
+-func (ec2 *EC2) Tags(filter *Filter) (*TagsResp, error) {
+-	params := makeParams("DescribeTags")
+-	filter.addParams(params)
+-
+-	resp := &TagsResp{}
+-	if err := ec2.query(params, resp); err != nil {
+-		return nil, err
+-	}
+-
+-	return resp, nil
+-}
+-
+-// Response to a StartInstances request.
+-//
+-// See http://goo.gl/awKeF for more details.
+-type StartInstanceResp struct {
+-	RequestId    string                `xml:"requestId"`
+-	StateChanges []InstanceStateChange `xml:"instancesSet>item"`
+-}
+-
+-// Response to a StopInstances request.
+-//
+-// See http://goo.gl/436dJ for more details.
+-type StopInstanceResp struct {
+-	RequestId    string                `xml:"requestId"`
+-	StateChanges []InstanceStateChange `xml:"instancesSet>item"`
+-}
+-
+-// StartInstances starts one or more Amazon EBS-backed instances that you've previously stopped.
+-//
+-// See http://goo.gl/awKeF for more details.
+-func (ec2 *EC2) StartInstances(ids ...string) (resp *StartInstanceResp, err error) {
+-	params := makeParams("StartInstances")
+-	addParamsList(params, "InstanceId", ids)
+-	resp = &StartInstanceResp{}
+-	err = ec2.query(params, resp)
+-	if err != nil {
+-		return nil, err
+-	}
+-	return resp, nil
+-}
+-
+-// StopInstances requests stopping one or more Amazon EBS-backed instances.
+-//
+-// See http://goo.gl/436dJ for more details.
+-func (ec2 *EC2) StopInstances(ids ...string) (resp *StopInstanceResp, err error) {
+-	params := makeParams("StopInstances")
+-	addParamsList(params, "InstanceId", ids)
+-	resp = &StopInstanceResp{}
+-	err = ec2.query(params, resp)
+-	if err != nil {
+-		return nil, err
+-	}
+-	return resp, nil
+-}
+-
+-// RebootInstances requests a reboot of one or more instances. This operation is asynchronous;
+-// it only queues a request to reboot the specified instance(s). The operation will succeed
+-// if the instances are valid and belong to you.
+-//
+-// Requests to reboot terminated instances are ignored.
+-//
+-// See http://goo.gl/baoUf for more details.
+-func (ec2 *EC2) RebootInstances(ids ...string) (resp *SimpleResp, err error) {
+-	params := makeParams("RebootInstances")
+-	addParamsList(params, "InstanceId", ids)
+-	resp = &SimpleResp{}
+-	err = ec2.query(params, resp)
+-	if err != nil {
+-		return nil, err
+-	}
+-	return resp, nil
+-}
+-
+-// The ModifyInstanceAttribute request parameters.
+-type ModifyInstance struct {
+-	InstanceType          string
+-	BlockDevices          []BlockDeviceMapping
+-	DisableAPITermination bool
+-	EbsOptimized          bool
+-	SecurityGroups        []SecurityGroup
+-	ShutdownBehavior      string
+-	KernelId              string
+-	RamdiskId             string
+-	SourceDestCheck       bool
+-	SriovNetSupport       bool
+-	UserData              []byte
+-
+-	SetSourceDestCheck bool
+-}
+-
+-// Response to a ModifyInstanceAttribute request.
+-//
+-// See http://goo.gl/icuXh5 for more details.
+-type ModifyInstanceResp struct {
+-	RequestId string `xml:"requestId"`
+-	Return    bool   `xml:"return"`
+-}
+-
+-// ModifyInstance modifies the specified attribute of the specified instance.
+-// You can specify only one attribute at a time. To modify some attributes, the
+-// instance must be stopped.
+-//
+-// See http://goo.gl/icuXh5 for more details.
+-func (ec2 *EC2) ModifyInstance(instId string, options *ModifyInstance) (resp *ModifyInstanceResp, err error) {
+-	params := makeParams("ModifyInstanceAttribute")
+-	params["InstanceId"] = instId
+-	addBlockDeviceParams("", params, options.BlockDevices)
+-
+-	if options.InstanceType != "" {
+-		params["InstanceType.Value"] = options.InstanceType
+-	}
+-
+-	if options.DisableAPITermination {
+-		params["DisableApiTermination.Value"] = "true"
+-	}
+-
+-	if options.EbsOptimized {
+-		params["EbsOptimized"] = "true"
+-	}
+-
+-	if options.ShutdownBehavior != "" {
+-		params["InstanceInitiatedShutdownBehavior.Value"] = options.ShutdownBehavior
+-	}
+-
+-	if options.KernelId != "" {
+-		params["Kernel.Value"] = options.KernelId
+-	}
+-
+-	if options.RamdiskId != "" {
+-		params["Ramdisk.Value"] = options.RamdiskId
+-	}
+-
+-	if options.SourceDestCheck || options.SetSourceDestCheck {
+-		if options.SourceDestCheck {
+-			params["SourceDestCheck.Value"] = "true"
+-		} else {
+-			params["SourceDestCheck.Value"] = "false"
+-		}
+-	}
+-
+-	if options.SriovNetSupport {
+-		params["SriovNetSupport.Value"] = "simple"
+-	}
+-
+-	if options.UserData != nil {
+-		userData := make([]byte, b64.EncodedLen(len(options.UserData)))
+-		b64.Encode(userData, options.UserData)
+-		params["UserData"] = string(userData)
+-	}
+-
+-	i := 1
+-	for _, g := range options.SecurityGroups {
+-		if g.Id != "" {
+-			params["GroupId."+strconv.Itoa(i)] = g.Id
+-			i++
+-		}
+-	}
+-
+-	resp = &ModifyInstanceResp{}
+-	err = ec2.query(params, resp)
+-	if err != nil {
+-		resp = nil
+-	}
+-	return
+-}
+-
+-// ----------------------------------------------------------------------------
+-// VPC management functions and types.
+-
+-// The CreateVpc request parameters
+-//
+-// See http://docs.aws.amazon.com/AWSEC2/latest/APIReference/ApiReference-query-CreateVpc.html
+-type CreateVpc struct {
+-	CidrBlock       string
+-	InstanceTenancy string
+-}
+-
+-// Response to a CreateVpc request
+-type CreateVpcResp struct {
+-	RequestId string `xml:"requestId"`
+-	VPC       VPC    `xml:"vpc"`
+-}
+-
+-// The ModifyVpcAttribute request parameters.
+-//
+-// See http://docs.amazonwebservices.com/AWSEC2/latest/APIReference/index.html?ApiReference-query-DescribeVpcAttribute.html for more details.
+-type ModifyVpcAttribute struct {
+-	EnableDnsSupport   bool
+-	EnableDnsHostnames bool
+-
+-	SetEnableDnsSupport   bool
+-	SetEnableDnsHostnames bool
+-}
+-
+-// Response to a DescribeVpcAttribute request.
+-//
+-// See http://docs.amazonwebservices.com/AWSEC2/latest/APIReference/index.html?ApiReference-query-DescribeVpcAttribute.html for more details.
+-type VpcAttributeResp struct {
+-	RequestId          string `xml:"requestId"`
+-	VpcId              string `xml:"vpcId"`
+-	EnableDnsSupport   bool   `xml:"enableDnsSupport>value"`
+-	EnableDnsHostnames bool   `xml:"enableDnsHostnames>value"`
+-}
+-
+-// CreateInternetGateway request parameters.
+-//
+-// http://docs.aws.amazon.com/AWSEC2/latest/APIReference/ApiReference-query-CreateInternetGateway.html
+-type CreateInternetGateway struct{}
+-
+-// CreateInternetGateway response
+-type CreateInternetGatewayResp struct {
+-	RequestId       string          `xml:"requestId"`
+-	InternetGateway InternetGateway `xml:"internetGateway"`
+-}
+-
+-// The CreateRouteTable request parameters.
+-//
+-// http://docs.aws.amazon.com/AWSEC2/latest/APIReference/ApiReference-query-CreateRouteTable.html
+-type CreateRouteTable struct {
+-	VpcId string
+-}
+-
+-// Response to a CreateRouteTable request.
+-type CreateRouteTableResp struct {
+-	RequestId  string     `xml:"requestId"`
+-	RouteTable RouteTable `xml:"routeTable"`
+-}
+-
+-// CreateRoute request parameters
+-//
+-// http://docs.aws.amazon.com/AWSEC2/latest/APIReference/ApiReference-query-CreateRoute.html
+-type CreateRoute struct {
+-	RouteTableId           string
+-	DestinationCidrBlock   string
+-	GatewayId              string
+-	InstanceId             string
+-	NetworkInterfaceId     string
+-	VpcPeeringConnectionId string
+-}
+-type ReplaceRoute struct {
+-	RouteTableId           string
+-	DestinationCidrBlock   string
+-	GatewayId              string
+-	InstanceId             string
+-	NetworkInterfaceId     string
+-	VpcPeeringConnectionId string
+-}
+-
+-type AssociateRouteTableResp struct {
+-	RequestId     string `xml:"requestId"`
+-	AssociationId string `xml:"associationId"`
+-}
+-type ReassociateRouteTableResp struct {
+-	RequestId     string `xml:"requestId"`
+-	AssociationId string `xml:"newAssociationId"`
+-}
+-
+-// The CreateSubnet request parameters
+-//
+-// http://docs.aws.amazon.com/AWSEC2/latest/APIReference/ApiReference-query-CreateSubnet.html
+-type CreateSubnet struct {
+-	VpcId            string
+-	CidrBlock        string
+-	AvailabilityZone string
+-}
+-
+-// Response to a CreateSubnet request
+-type CreateSubnetResp struct {
+-	RequestId string `xml:"requestId"`
+-	Subnet    Subnet `xml:"subnet"`
+-}
+-
+-// Response to a DescribeInternetGateways request.
+-type InternetGatewaysResp struct {
+-	RequestId        string            `xml:"requestId"`
+-	InternetGateways []InternetGateway `xml:"internetGatewaySet>item"`
+-}
+-
+-// Response to a DescribeRouteTables request.
+-type RouteTablesResp struct {
+-	RequestId   string       `xml:"requestId"`
+-	RouteTables []RouteTable `xml:"routeTableSet>item"`
+-}
+-
+-// Response to a DescribeVpcs request.
+-type VpcsResp struct {
+-	RequestId string `xml:"requestId"`
+-	VPCs      []VPC  `xml:"vpcSet>item"`
+-}
+-
+-// Internet Gateway
+-type InternetGateway struct {
+-	InternetGatewayId string                      `xml:"internetGatewayId"`
+-	Attachments       []InternetGatewayAttachment `xml:"attachmentSet>item"`
+-	Tags              []Tag                       `xml:"tagSet>item"`
+-}
+-
+-type InternetGatewayAttachment struct {
+-	VpcId string `xml:"vpcId"`
+-	State string `xml:"state"`
+-}
+-
+-// Routing Table
+-type RouteTable struct {
+-	RouteTableId string                  `xml:"routeTableId"`
+-	VpcId        string                  `xml:"vpcId"`
+-	Associations []RouteTableAssociation `xml:"associationSet>item"`
+-	Routes       []Route                 `xml:"routeSet>item"`
+-	Tags         []Tag                   `xml:"tagSet>item"`
+-}
+-
+-type RouteTableAssociation struct {
+-	AssociationId string `xml:"routeTableAssociationId"`
+-	RouteTableId  string `xml:"routeTableId"`
+-	SubnetId      string `xml:"subnetId"`
+-	Main          bool   `xml:"main"`
+-}
+-
+-type Route struct {
+-	DestinationCidrBlock   string `xml:"destinationCidrBlock"`
+-	GatewayId              string `xml:"gatewayId"`
+-	InstanceId             string `xml:"instanceId"`
+-	InstanceOwnerId        string `xml:"instanceOwnerId"`
+-	NetworkInterfaceId     string `xml:"networkInterfaceId"`
+-	State                  string `xml:"state"`
+-	Origin                 string `xml:"origin"`
+-	VpcPeeringConnectionId string `xml:"vpcPeeringConnectionId"`
+-}
+-
+-// Subnet
+-type Subnet struct {
+-	SubnetId                string `xml:"subnetId"`
+-	State                   string `xml:"state"`
+-	VpcId                   string `xml:"vpcId"`
+-	CidrBlock               string `xml:"cidrBlock"`
+-	AvailableIpAddressCount int    `xml:"availableIpAddressCount"`
+-	AvailabilityZone        string `xml:"availabilityZone"`
+-	DefaultForAZ            bool   `xml:"defaultForAz"`
+-	MapPublicIpOnLaunch     bool   `xml:"mapPublicIpOnLaunch"`
+-	Tags                    []Tag  `xml:"tagSet>item"`
+-}
+-
+-// VPC represents a single VPC.
+-type VPC struct {
+-	VpcId           string `xml:"vpcId"`
+-	State           string `xml:"state"`
+-	CidrBlock       string `xml:"cidrBlock"`
+-	DHCPOptionsID   string `xml:"dhcpOptionsId"`
+-	InstanceTenancy string `xml:"instanceTenancy"`
+-	IsDefault       bool   `xml:"isDefault"`
+-	Tags            []Tag  `xml:"tagSet>item"`
+-}
+-
+-// Response to a DescribeSubnets request.
+-type SubnetsResp struct {
+-	RequestId string   `xml:"requestId"`
+-	Subnets   []Subnet `xml:"subnetSet>item"`
+-}
+-
+-// Create a new VPC.
+-func (ec2 *EC2) CreateVpc(options *CreateVpc) (resp *CreateVpcResp, err error) {
+-	params := makeParams("CreateVpc")
+-	params["CidrBlock"] = options.CidrBlock
+-
+-	if options.InstanceTenancy != "" {
+-		params["InstanceTenancy"] = options.InstanceTenancy
+-	}
+-
+-	resp = &CreateVpcResp{}
+-	err = ec2.query(params, resp)
+-	if err != nil {
+-		return nil, err
+-	}
+-
+-	return
+-}
+-
+-// Delete a VPC.
+-func (ec2 *EC2) DeleteVpc(id string) (resp *SimpleResp, err error) {
+-	params := makeParams("DeleteVpc")
+-	params["VpcId"] = id
+-
+-	resp = &SimpleResp{}
+-	err = ec2.query(params, resp)
+-	if err != nil {
+-		return nil, err
+-	}
+-	return
+-}
+-
+-// DescribeVpcs
+-//
+-// See http://docs.aws.amazon.com/AWSEC2/latest/APIReference/ApiReference-query-DescribeVpcs.html
+-func (ec2 *EC2) DescribeVpcs(ids []string, filter *Filter) (resp *VpcsResp, err error) {
+-	params := makeParams("DescribeVpcs")
+-	addParamsList(params, "VpcId", ids)
+-	filter.addParams(params)
+-	resp = &VpcsResp{}
+-	err = ec2.query(params, resp)
+-	if err != nil {
+-		return nil, err
+-	}
+-
+-	return
+-}
+-
+-// VpcAttribute describes an attribute of a VPC.
+-// You can specify only one attribute at a time.
+-// Valid attributes are:
+-//    enableDnsSupport | enableDnsHostnames
+-//
+-// See http://docs.amazonwebservices.com/AWSEC2/latest/APIReference/index.html?ApiReference-query-DescribeVpcAttribute.html for more details.
+-func (ec2 *EC2) VpcAttribute(vpcId, attribute string) (resp *VpcAttributeResp, err error) {
+-	params := makeParams("DescribeVpcAttribute")
+-	params["VpcId"] = vpcId
+-	params["Attribute"] = attribute
+-
+-	resp = &VpcAttributeResp{}
+-	err = ec2.query(params, resp)
+-	if err != nil {
+-		return nil, err
+-	}
+-	return
+-}
+-
+-// ModifyVpcAttribute modifies the specified attribute of the specified VPC.
+-//
+-// See http://docs.amazonwebservices.com/AWSEC2/latest/APIReference/index.html?ApiReference-query-ModifyVpcAttribute.html for more details.
+-func (ec2 *EC2) ModifyVpcAttribute(vpcId string, options *ModifyVpcAttribute) (*SimpleResp, error) {
+-	params := makeParams("ModifyVpcAttribute")
+-
+-	params["VpcId"] = vpcId
+-
+-	if options.SetEnableDnsSupport {
+-		params["EnableDnsSupport.Value"] = strconv.FormatBool(options.EnableDnsSupport)
+-	}
+-
+-	if options.SetEnableDnsHostnames {
+-		params["EnableDnsHostnames.Value"] = strconv.FormatBool(options.EnableDnsHostnames)
+-	}
+-
+-	resp := &SimpleResp{}
+-	if err := ec2.query(params, resp); err != nil {
+-		return nil, err
+-	}
+-
+-	return resp, nil
+-}
+-
+-// Create a new subnet.
+-func (ec2 *EC2) CreateSubnet(options *CreateSubnet) (resp *CreateSubnetResp, err error) {
+-	params := makeParams("CreateSubnet")
+-	params["AvailabilityZone"] = options.AvailabilityZone
+-	params["CidrBlock"] = options.CidrBlock
+-	params["VpcId"] = options.VpcId
+-
+-	resp = &CreateSubnetResp{}
+-	err = ec2.query(params, resp)
+-	if err != nil {
+-		return nil, err
+-	}
+-
+-	return
+-}
+-
+-// Delete a Subnet.
+-func (ec2 *EC2) DeleteSubnet(id string) (resp *SimpleResp, err error) {
+-	params := makeParams("DeleteSubnet")
+-	params["SubnetId"] = id
+-
+-	resp = &SimpleResp{}
+-	err = ec2.query(params, resp)
+-	if err != nil {
+-		return nil, err
+-	}
+-	return
+-}
+-
+-// DescribeSubnets
+-//
+-// http://docs.aws.amazon.com/AWSEC2/latest/APIReference/ApiReference-query-DescribeSubnets.html
+-func (ec2 *EC2) DescribeSubnets(ids []string, filter *Filter) (resp *SubnetsResp, err error) {
+-	params := makeParams("DescribeSubnets")
+-	addParamsList(params, "SubnetId", ids)
+-	filter.addParams(params)
+-
+-	resp = &SubnetsResp{}
+-	err = ec2.query(params, resp)
+-	if err != nil {
+-		return nil, err
+-	}
+-
+-	return
+-}
+-
+-// Create a new internet gateway.
+-func (ec2 *EC2) CreateInternetGateway(
+-	options *CreateInternetGateway) (resp *CreateInternetGatewayResp, err error) {
+-	params := makeParams("CreateInternetGateway")
+-
+-	resp = &CreateInternetGatewayResp{}
+-	err = ec2.query(params, resp)
+-	if err != nil {
+-		return nil, err
+-	}
+-
+-	return
+-}
+-
+-// Attach an InternetGateway.
+-func (ec2 *EC2) AttachInternetGateway(id, vpcId string) (resp *SimpleResp, err error) {
+-	params := makeParams("AttachInternetGateway")
+-	params["InternetGatewayId"] = id
+-	params["VpcId"] = vpcId
+-
+-	resp = &SimpleResp{}
+-	err = ec2.query(params, resp)
+-	if err != nil {
+-		return nil, err
+-	}
+-	return
+-}
+-
+-// Detach an InternetGateway.
+-func (ec2 *EC2) DetachInternetGateway(id, vpcId string) (resp *SimpleResp, err error) {
+-	params := makeParams("DetachInternetGateway")
+-	params["InternetGatewayId"] = id
+-	params["VpcId"] = vpcId
+-
+-	resp = &SimpleResp{}
+-	err = ec2.query(params, resp)
+-	if err != nil {
+-		return nil, err
+-	}
+-	return
+-}
+-
+-// Delete an InternetGateway.
+-func (ec2 *EC2) DeleteInternetGateway(id string) (resp *SimpleResp, err error) {
+-	params := makeParams("DeleteInternetGateway")
+-	params["InternetGatewayId"] = id
+-
+-	resp = &SimpleResp{}
+-	err = ec2.query(params, resp)
+-	if err != nil {
+-		return nil, err
+-	}
+-	return
+-}
+-
+-// DescribeInternetGateways
+-//
+-// http://docs.aws.amazon.com/AWSEC2/latest/APIReference/ApiReference-query-DescribeInternetGateways.html
+-func (ec2 *EC2) DescribeInternetGateways(ids []string, filter *Filter) (resp *InternetGatewaysResp, err error) {
+-	params := makeParams("DescribeInternetGateways")
+-	addParamsList(params, "InternetGatewayId", ids)
+-	filter.addParams(params)
+-
+-	resp = &InternetGatewaysResp{}
+-	err = ec2.query(params, resp)
+-	if err != nil {
+-		return nil, err
+-	}
+-
+-	return
+-}
+-
+-// Create a new routing table.
+-func (ec2 *EC2) CreateRouteTable(
+-	options *CreateRouteTable) (resp *CreateRouteTableResp, err error) {
+-	params := makeParams("CreateRouteTable")
+-	params["VpcId"] = options.VpcId
+-
+-	resp = &CreateRouteTableResp{}
+-	err = ec2.query(params, resp)
+-	if err != nil {
+-		return nil, err
+-	}
+-
+-	return
+-}
+-
+-// Delete a RouteTable.
+-func (ec2 *EC2) DeleteRouteTable(id string) (resp *SimpleResp, err error) {
+-	params := makeParams("DeleteRouteTable")
+-	params["RouteTableId"] = id
+-
+-	resp = &SimpleResp{}
+-	err = ec2.query(params, resp)
+-	if err != nil {
+-		return nil, err
+-	}
+-	return
+-}
+-
+-// DescribeRouteTables
+-//
+-// http://docs.aws.amazon.com/AWSEC2/latest/APIReference/ApiReference-query-DescribeRouteTables.html
+-func (ec2 *EC2) DescribeRouteTables(ids []string, filter *Filter) (resp *RouteTablesResp, err error) {
+-	params := makeParams("DescribeRouteTables")
+-	addParamsList(params, "RouteTableId", ids)
+-	filter.addParams(params)
+-
+-	resp = &RouteTablesResp{}
+-	err = ec2.query(params, resp)
+-	if err != nil {
+-		return nil, err
+-	}
+-
+-	return
+-}
+-
+-// Associate a routing table.
+-func (ec2 *EC2) AssociateRouteTable(id, subnetId string) (*AssociateRouteTableResp, error) {
+-	params := makeParams("AssociateRouteTable")
+-	params["RouteTableId"] = id
+-	params["SubnetId"] = subnetId
+-
+-	resp := &AssociateRouteTableResp{}
+-	err := ec2.query(params, resp)
+-	if err != nil {
+-		return nil, err
+-	}
+-	return resp, nil
+-}
+-
+-// Disassociate a routing table.
+-func (ec2 *EC2) DisassociateRouteTable(id string) (*SimpleResp, error) {
+-	params := makeParams("DisassociateRouteTable")
+-	params["AssociationId"] = id
+-
+-	resp := &SimpleResp{}
+-	err := ec2.query(params, resp)
+-	if err != nil {
+-		return nil, err
+-	}
+-	return resp, nil
+-}
+-
+-// Re-associate a routing table.
+-func (ec2 *EC2) ReassociateRouteTable(id, routeTableId string) (*ReassociateRouteTableResp, error) {
+-	params := makeParams("ReplaceRouteTableAssociation")
+-	params["AssociationId"] = id
+-	params["RouteTableId"] = routeTableId
+-
+-	resp := &ReassociateRouteTableResp{}
+-	err := ec2.query(params, resp)
+-	if err != nil {
+-		return nil, err
+-	}
+-	return resp, nil
+-}
+-
+-// Create a new route.
+-func (ec2 *EC2) CreateRoute(options *CreateRoute) (resp *SimpleResp, err error) {
+-	params := makeParams("CreateRoute")
+-	params["RouteTableId"] = options.RouteTableId
+-	params["DestinationCidrBlock"] = options.DestinationCidrBlock
+-
+-	if v := options.GatewayId; v != "" {
+-		params["GatewayId"] = v
+-	}
+-	if v := options.InstanceId; v != "" {
+-		params["InstanceId"] = v
+-	}
+-	if v := options.NetworkInterfaceId; v != "" {
+-		params["NetworkInterfaceId"] = v
+-	}
+-	if v := options.VpcPeeringConnectionId; v != "" {
+-		params["VpcPeeringConnectionId"] = v
+-	}
+-
+-	resp = &SimpleResp{}
+-	err = ec2.query(params, resp)
+-	if err != nil {
+-		return nil, err
+-	}
+-	return
+-}
+-
+-// Delete a Route.
+-func (ec2 *EC2) DeleteRoute(routeTableId, cidr string) (resp *SimpleResp, err error) {
+-	params := makeParams("DeleteRoute")
+-	params["RouteTableId"] = routeTableId
+-	params["DestinationCidrBlock"] = cidr
+-
+-	resp = &SimpleResp{}
+-	err = ec2.query(params, resp)
+-	if err != nil {
+-		return nil, err
+-	}
+-	return
+-}
+-
+-// Replace an existing route.
+-func (ec2 *EC2) ReplaceRoute(options *ReplaceRoute) (resp *SimpleResp, err error) {
+-	params := makeParams("ReplaceRoute")
+-	params["RouteTableId"] = options.RouteTableId
+-	params["DestinationCidrBlock"] = options.DestinationCidrBlock
+-
+-	if v := options.GatewayId; v != "" {
+-		params["GatewayId"] = v
+-	}
+-	if v := options.InstanceId; v != "" {
+-		params["InstanceId"] = v
+-	}
+-	if v := options.NetworkInterfaceId; v != "" {
+-		params["NetworkInterfaceId"] = v
+-	}
+-	if v := options.VpcPeeringConnectionId; v != "" {
+-		params["VpcPeeringConnectionId"] = v
+-	}
+-
+-	resp = &SimpleResp{}
+-	err = ec2.query(params, resp)
+-	if err != nil {
+-		return nil, err
+-	}
+-	return
+-}
+-
+-// The ResetImageAttribute request parameters.
+-type ResetImageAttribute struct {
+-	Attribute string
+-}
+-
+-// ResetImageAttribute resets an attribute of an AMI to its default value.
+-//
+-// See http://goo.gl/r6ZCPm for more details.
+-func (ec2 *EC2) ResetImageAttribute(imageId string, options *ResetImageAttribute) (resp *SimpleResp, err error) {
+-	params := makeParams("ResetImageAttribute")
+-	params["ImageId"] = imageId
+-
+-	if options.Attribute != "" {
+-		params["Attribute"] = options.Attribute
+-	}
+-
+-	resp = &SimpleResp{}
+-	err = ec2.query(params, resp)
+-	if err != nil {
+-		return nil, err
+-	}
+-	return
+-}
+diff --git a/Godeps/_workspace/src/github.com/mitchellh/goamz/ec2/ec2_test.go b/Godeps/_workspace/src/github.com/mitchellh/goamz/ec2/ec2_test.go
+deleted file mode 100644
+index 849bfe2..0000000
+--- a/Godeps/_workspace/src/github.com/mitchellh/goamz/ec2/ec2_test.go
++++ /dev/null
+@@ -1,1243 +0,0 @@
+-package ec2_test
+-
+-import (
+-	"testing"
+-
+-	"github.com/mitchellh/goamz/aws"
+-	"github.com/mitchellh/goamz/ec2"
+-	"github.com/mitchellh/goamz/testutil"
+-	. "github.com/motain/gocheck"
+-)
+-
+-func Test(t *testing.T) {
+-	TestingT(t)
+-}
+-
+-var _ = Suite(&S{})
+-
+-type S struct {
+-	ec2 *ec2.EC2
+-}
+-
+-var testServer = testutil.NewHTTPServer()
+-
+-func (s *S) SetUpSuite(c *C) {
+-	testServer.Start()
+-	auth := aws.Auth{"abc", "123", ""}
+-	s.ec2 = ec2.NewWithClient(
+-		auth,
+-		aws.Region{EC2Endpoint: testServer.URL},
+-		testutil.DefaultClient,
+-	)
+-}
+-
+-func (s *S) TearDownTest(c *C) {
+-	testServer.Flush()
+-}
+-
+-func (s *S) TestRunInstancesErrorDump(c *C) {
+-	testServer.Response(400, nil, ErrorDump)
+-
+-	options := ec2.RunInstances{
+-		ImageId:      "ami-a6f504cf", // Ubuntu Maverick, i386, instance store
+-		InstanceType: "t1.micro",     // Doesn't work with micro, results in 400.
+-	}
+-
+-	msg := `AMIs with an instance-store root device are not supported for the instance type 't1\.micro'\.`
+-
+-	resp, err := s.ec2.RunInstances(&options)
+-
+-	testServer.WaitRequest()
+-
+-	c.Assert(resp, IsNil)
+-	c.Assert(err, ErrorMatches, msg+` \(UnsupportedOperation\)`)
+-
+-	ec2err, ok := err.(*ec2.Error)
+-	c.Assert(ok, Equals, true)
+-	c.Assert(ec2err.StatusCode, Equals, 400)
+-	c.Assert(ec2err.Code, Equals, "UnsupportedOperation")
+-	c.Assert(ec2err.Message, Matches, msg)
+-	c.Assert(ec2err.RequestId, Equals, "0503f4e9-bbd6-483c-b54f-c4ae9f3b30f4")
+-}
+-
+-func (s *S) TestRequestSpotInstancesErrorDump(c *C) {
+-	testServer.Response(400, nil, ErrorDump)
+-
+-	options := ec2.RequestSpotInstances{
+-		SpotPrice:    "0.01",
+-		ImageId:      "ami-a6f504cf", // Ubuntu Maverick, i386, instance store
+-		InstanceType: "t1.micro",     // Doesn't work with micro, results in 400.
+-	}
+-
+-	msg := `AMIs with an instance-store root device are not supported for the instance type 't1\.micro'\.`
+-
+-	resp, err := s.ec2.RequestSpotInstances(&options)
+-
+-	testServer.WaitRequest()
+-
+-	c.Assert(resp, IsNil)
+-	c.Assert(err, ErrorMatches, msg+` \(UnsupportedOperation\)`)
+-
+-	ec2err, ok := err.(*ec2.Error)
+-	c.Assert(ok, Equals, true)
+-	c.Assert(ec2err.StatusCode, Equals, 400)
+-	c.Assert(ec2err.Code, Equals, "UnsupportedOperation")
+-	c.Assert(ec2err.Message, Matches, msg)
+-	c.Assert(ec2err.RequestId, Equals, "0503f4e9-bbd6-483c-b54f-c4ae9f3b30f4")
+-}
+-
+-func (s *S) TestRunInstancesErrorWithoutXML(c *C) {
+-	testServer.Responses(5, 500, nil, "")
+-	options := ec2.RunInstances{ImageId: "image-id"}
+-
+-	resp, err := s.ec2.RunInstances(&options)
+-
+-	testServer.WaitRequest()
+-
+-	c.Assert(resp, IsNil)
+-	c.Assert(err, ErrorMatches, "500 Internal Server Error")
+-
+-	ec2err, ok := err.(*ec2.Error)
+-	c.Assert(ok, Equals, true)
+-	c.Assert(ec2err.StatusCode, Equals, 500)
+-	c.Assert(ec2err.Code, Equals, "")
+-	c.Assert(ec2err.Message, Equals, "500 Internal Server Error")
+-	c.Assert(ec2err.RequestId, Equals, "")
+-}
+-
+-func (s *S) TestRequestSpotInstancesErrorWithoutXML(c *C) {
+-	testServer.Responses(5, 500, nil, "")
+-	options := ec2.RequestSpotInstances{SpotPrice: "spot-price", ImageId: "image-id"}
+-
+-	resp, err := s.ec2.RequestSpotInstances(&options)
+-
+-	testServer.WaitRequest()
+-
+-	c.Assert(resp, IsNil)
+-	c.Assert(err, ErrorMatches, "500 Internal Server Error")
+-
+-	ec2err, ok := err.(*ec2.Error)
+-	c.Assert(ok, Equals, true)
+-	c.Assert(ec2err.StatusCode, Equals, 500)
+-	c.Assert(ec2err.Code, Equals, "")
+-	c.Assert(ec2err.Message, Equals, "500 Internal Server Error")
+-	c.Assert(ec2err.RequestId, Equals, "")
+-}
+-
+-func (s *S) TestRunInstancesExample(c *C) {
+-	testServer.Response(200, nil, RunInstancesExample)
+-
+-	options := ec2.RunInstances{
+-		KeyName:               "my-keys",
+-		ImageId:               "image-id",
+-		InstanceType:          "inst-type",
+-		SecurityGroups:        []ec2.SecurityGroup{{Name: "g1"}, {Id: "g2"}, {Name: "g3"}, {Id: "g4"}},
+-		UserData:              []byte("1234"),
+-		KernelId:              "kernel-id",
+-		RamdiskId:             "ramdisk-id",
+-		AvailZone:             "zone",
+-		PlacementGroupName:    "group",
+-		Monitoring:            true,
+-		SubnetId:              "subnet-id",
+-		DisableAPITermination: true,
+-		ShutdownBehavior:      "terminate",
+-		PrivateIPAddress:      "10.0.0.25",
+-		BlockDevices: []ec2.BlockDeviceMapping{
+-			{DeviceName: "/dev/sdb", VirtualName: "ephemeral0"},
+-			{DeviceName: "/dev/sdc", SnapshotId: "snap-a08912c9", DeleteOnTermination: true},
+-		},
+-	}
+-	resp, err := s.ec2.RunInstances(&options)
+-
+-	req := testServer.WaitRequest()
+-	c.Assert(req.Form["Action"], DeepEquals, []string{"RunInstances"})
+-	c.Assert(req.Form["ImageId"], DeepEquals, []string{"image-id"})
+-	c.Assert(req.Form["MinCount"], DeepEquals, []string{"1"})
+-	c.Assert(req.Form["MaxCount"], DeepEquals, []string{"1"})
+-	c.Assert(req.Form["KeyName"], DeepEquals, []string{"my-keys"})
+-	c.Assert(req.Form["InstanceType"], DeepEquals, []string{"inst-type"})
+-	c.Assert(req.Form["SecurityGroup.1"], DeepEquals, []string{"g1"})
+-	c.Assert(req.Form["SecurityGroup.2"], DeepEquals, []string{"g3"})
+-	c.Assert(req.Form["SecurityGroupId.1"], DeepEquals, []string{"g2"})
+-	c.Assert(req.Form["SecurityGroupId.2"], DeepEquals, []string{"g4"})
+-	c.Assert(req.Form["UserData"], DeepEquals, []string{"MTIzNA=="})
+-	c.Assert(req.Form["KernelId"], DeepEquals, []string{"kernel-id"})
+-	c.Assert(req.Form["RamdiskId"], DeepEquals, []string{"ramdisk-id"})
+-	c.Assert(req.Form["Placement.AvailabilityZone"], DeepEquals, []string{"zone"})
+-	c.Assert(req.Form["Placement.GroupName"], DeepEquals, []string{"group"})
+-	c.Assert(req.Form["Monitoring.Enabled"], DeepEquals, []string{"true"})
+-	c.Assert(req.Form["SubnetId"], DeepEquals, []string{"subnet-id"})
+-	c.Assert(req.Form["DisableApiTermination"], DeepEquals, []string{"true"})
+-	c.Assert(req.Form["InstanceInitiatedShutdownBehavior"], DeepEquals, []string{"terminate"})
+-	c.Assert(req.Form["PrivateIpAddress"], DeepEquals, []string{"10.0.0.25"})
+-	c.Assert(req.Form["BlockDeviceMapping.1.DeviceName"], DeepEquals, []string{"/dev/sdb"})
+-	c.Assert(req.Form["BlockDeviceMapping.1.VirtualName"], DeepEquals, []string{"ephemeral0"})
+-	c.Assert(req.Form["BlockDeviceMapping.2.Ebs.SnapshotId"], DeepEquals, []string{"snap-a08912c9"})
+-	c.Assert(req.Form["BlockDeviceMapping.2.Ebs.DeleteOnTermination"], DeepEquals, []string{"true"})
+-
+-	c.Assert(err, IsNil)
+-	c.Assert(resp.RequestId, Equals, "59dbff89-35bd-4eac-99ed-be587EXAMPLE")
+-	c.Assert(resp.ReservationId, Equals, "r-47a5402e")
+-	c.Assert(resp.OwnerId, Equals, "999988887777")
+-	c.Assert(resp.SecurityGroups, DeepEquals, []ec2.SecurityGroup{{Name: "default", Id: "sg-67ad940e"}})
+-	c.Assert(resp.Instances, HasLen, 3)
+-
+-	i0 := resp.Instances[0]
+-	c.Assert(i0.InstanceId, Equals, "i-2ba64342")
+-	c.Assert(i0.InstanceType, Equals, "m1.small")
+-	c.Assert(i0.ImageId, Equals, "ami-60a54009")
+-	c.Assert(i0.Monitoring, Equals, "enabled")
+-	c.Assert(i0.KeyName, Equals, "example-key-name")
+-	c.Assert(i0.AMILaunchIndex, Equals, 0)
+-	c.Assert(i0.VirtType, Equals, "paravirtual")
+-	c.Assert(i0.Hypervisor, Equals, "xen")
+-
+-	i1 := resp.Instances[1]
+-	c.Assert(i1.InstanceId, Equals, "i-2bc64242")
+-	c.Assert(i1.InstanceType, Equals, "m1.small")
+-	c.Assert(i1.ImageId, Equals, "ami-60a54009")
+-	c.Assert(i1.Monitoring, Equals, "enabled")
+-	c.Assert(i1.KeyName, Equals, "example-key-name")
+-	c.Assert(i1.AMILaunchIndex, Equals, 1)
+-	c.Assert(i1.VirtType, Equals, "paravirtual")
+-	c.Assert(i1.Hypervisor, Equals, "xen")
+-
+-	i2 := resp.Instances[2]
+-	c.Assert(i2.InstanceId, Equals, "i-2be64332")
+-	c.Assert(i2.InstanceType, Equals, "m1.small")
+-	c.Assert(i2.ImageId, Equals, "ami-60a54009")
+-	c.Assert(i2.Monitoring, Equals, "enabled")
+-	c.Assert(i2.KeyName, Equals, "example-key-name")
+-	c.Assert(i2.AMILaunchIndex, Equals, 2)
+-	c.Assert(i2.VirtType, Equals, "paravirtual")
+-	c.Assert(i2.Hypervisor, Equals, "xen")
+-}
+-
+-func (s *S) TestRequestSpotInstancesExample(c *C) {
+-	testServer.Response(200, nil, RequestSpotInstancesExample)
+-
+-	options := ec2.RequestSpotInstances{
+-		SpotPrice:          "0.5",
+-		KeyName:            "my-keys",
+-		ImageId:            "image-id",
+-		InstanceType:       "inst-type",
+-		SecurityGroups:     []ec2.SecurityGroup{{Name: "g1"}, {Id: "g2"}, {Name: "g3"}, {Id: "g4"}},
+-		UserData:           []byte("1234"),
+-		KernelId:           "kernel-id",
+-		RamdiskId:          "ramdisk-id",
+-		AvailZone:          "zone",
+-		PlacementGroupName: "group",
+-		Monitoring:         true,
+-		SubnetId:           "subnet-id",
+-		PrivateIPAddress:   "10.0.0.25",
+-		BlockDevices: []ec2.BlockDeviceMapping{
+-			{DeviceName: "/dev/sdb", VirtualName: "ephemeral0"},
+-			{DeviceName: "/dev/sdc", SnapshotId: "snap-a08912c9", DeleteOnTermination: true},
+-		},
+-	}
+-	resp, err := s.ec2.RequestSpotInstances(&options)
+-
+-	req := testServer.WaitRequest()
+-	c.Assert(req.Form["Action"], DeepEquals, []string{"RequestSpotInstances"})
+-	c.Assert(req.Form["SpotPrice"], DeepEquals, []string{"0.5"})
+-	c.Assert(req.Form["LaunchSpecification.ImageId"], DeepEquals, []string{"image-id"})
+-	c.Assert(req.Form["LaunchSpecification.KeyName"], DeepEquals, []string{"my-keys"})
+-	c.Assert(req.Form["LaunchSpecification.InstanceType"], DeepEquals, []string{"inst-type"})
+-	c.Assert(req.Form["LaunchSpecification.SecurityGroup.1"], DeepEquals, []string{"g1"})
+-	c.Assert(req.Form["LaunchSpecification.SecurityGroup.2"], DeepEquals, []string{"g3"})
+-	c.Assert(req.Form["LaunchSpecification.SecurityGroupId.1"], DeepEquals, []string{"g2"})
+-	c.Assert(req.Form["LaunchSpecification.SecurityGroupId.2"], DeepEquals, []string{"g4"})
+-	c.Assert(req.Form["LaunchSpecification.UserData"], DeepEquals, []string{"MTIzNA=="})
+-	c.Assert(req.Form["LaunchSpecification.KernelId"], DeepEquals, []string{"kernel-id"})
+-	c.Assert(req.Form["LaunchSpecification.RamdiskId"], DeepEquals, []string{"ramdisk-id"})
+-	c.Assert(req.Form["LaunchSpecification.Placement.AvailabilityZone"], DeepEquals, []string{"zone"})
+-	c.Assert(req.Form["LaunchSpecification.Placement.GroupName"], DeepEquals, []string{"group"})
+-	c.Assert(req.Form["LaunchSpecification.Monitoring.Enabled"], DeepEquals, []string{"true"})
+-	c.Assert(req.Form["LaunchSpecification.SubnetId"], DeepEquals, []string{"subnet-id"})
+-	c.Assert(req.Form["LaunchSpecification.PrivateIpAddress"], DeepEquals, []string{"10.0.0.25"})
+-	c.Assert(req.Form["LaunchSpecification.BlockDeviceMapping.1.DeviceName"], DeepEquals, []string{"/dev/sdb"})
+-	c.Assert(req.Form["LaunchSpecification.BlockDeviceMapping.1.VirtualName"], DeepEquals, []string{"ephemeral0"})
+-	c.Assert(req.Form["LaunchSpecification.BlockDeviceMapping.2.Ebs.SnapshotId"], DeepEquals, []string{"snap-a08912c9"})
+-	c.Assert(req.Form["LaunchSpecification.BlockDeviceMapping.2.Ebs.DeleteOnTermination"], DeepEquals, []string{"true"})
+-
+-	c.Assert(err, IsNil)
+-	c.Assert(resp.RequestId, Equals, "59dbff89-35bd-4eac-99ed-be587EXAMPLE")
+-	c.Assert(resp.SpotRequestResults[0].SpotRequestId, Equals, "sir-1a2b3c4d")
+-	c.Assert(resp.SpotRequestResults[0].SpotPrice, Equals, "0.5")
+-	c.Assert(resp.SpotRequestResults[0].State, Equals, "open")
+-	c.Assert(resp.SpotRequestResults[0].SpotLaunchSpec.ImageId, Equals, "ami-1a2b3c4d")
+-	c.Assert(resp.SpotRequestResults[0].Status.Code, Equals, "pending-evaluation")
+-	c.Assert(resp.SpotRequestResults[0].Status.UpdateTime, Equals, "2008-05-07T12:51:50.000Z")
+-	c.Assert(resp.SpotRequestResults[0].Status.Message, Equals, "Your Spot request has been submitted for review, and is pending evaluation.")
+-}
+-
+-func (s *S) TestCancelSpotRequestsExample(c *C) {
+-	testServer.Response(200, nil, CancelSpotRequestsExample)
+-
+-	resp, err := s.ec2.CancelSpotRequests([]string{"s-1", "s-2"})
+-
+-	req := testServer.WaitRequest()
+-	c.Assert(req.Form["Action"], DeepEquals, []string{"CancelSpotInstanceRequests"})
+-	c.Assert(req.Form["SpotInstanceRequestId.1"], DeepEquals, []string{"s-1"})
+-	c.Assert(req.Form["SpotInstanceRequestId.2"], DeepEquals, []string{"s-2"})
+-
+-	c.Assert(err, IsNil)
+-	c.Assert(resp.RequestId, Equals, "59dbff89-35bd-4eac-99ed-be587EXAMPLE")
+-	c.Assert(resp.CancelSpotRequestResults[0].SpotRequestId, Equals, "sir-1a2b3c4d")
+-	c.Assert(resp.CancelSpotRequestResults[0].State, Equals, "cancelled")
+-}
+-
+-func (s *S) TestTerminateInstancesExample(c *C) {
+-	testServer.Response(200, nil, TerminateInstancesExample)
+-
+-	resp, err := s.ec2.TerminateInstances([]string{"i-1", "i-2"})
+-
+-	req := testServer.WaitRequest()
+-	c.Assert(req.Form["Action"], DeepEquals, []string{"TerminateInstances"})
+-	c.Assert(req.Form["InstanceId.1"], DeepEquals, []string{"i-1"})
+-	c.Assert(req.Form["InstanceId.2"], DeepEquals, []string{"i-2"})
+-	c.Assert(req.Form["UserData"], IsNil)
+-	c.Assert(req.Form["KernelId"], IsNil)
+-	c.Assert(req.Form["RamdiskId"], IsNil)
+-	c.Assert(req.Form["Placement.AvailabilityZone"], IsNil)
+-	c.Assert(req.Form["Placement.GroupName"], IsNil)
+-	c.Assert(req.Form["Monitoring.Enabled"], IsNil)
+-	c.Assert(req.Form["SubnetId"], IsNil)
+-	c.Assert(req.Form["DisableApiTermination"], IsNil)
+-	c.Assert(req.Form["InstanceInitiatedShutdownBehavior"], IsNil)
+-	c.Assert(req.Form["PrivateIpAddress"], IsNil)
+-
+-	c.Assert(err, IsNil)
+-	c.Assert(resp.RequestId, Equals, "59dbff89-35bd-4eac-99ed-be587EXAMPLE")
+-	c.Assert(resp.StateChanges, HasLen, 1)
+-	c.Assert(resp.StateChanges[0].InstanceId, Equals, "i-3ea74257")
+-	c.Assert(resp.StateChanges[0].CurrentState.Code, Equals, 32)
+-	c.Assert(resp.StateChanges[0].CurrentState.Name, Equals, "shutting-down")
+-	c.Assert(resp.StateChanges[0].PreviousState.Code, Equals, 16)
+-	c.Assert(resp.StateChanges[0].PreviousState.Name, Equals, "running")
+-}
+-
+-func (s *S) TestDescribeSpotRequestsExample(c *C) {
+-	testServer.Response(200, nil, DescribeSpotRequestsExample)
+-
+-	filter := ec2.NewFilter()
+-	filter.Add("key1", "value1")
+-	filter.Add("key2", "value2", "value3")
+-
+-	resp, err := s.ec2.DescribeSpotRequests([]string{"s-1", "s-2"}, filter)
+-
+-	req := testServer.WaitRequest()
+-	c.Assert(req.Form["Action"], DeepEquals, []string{"DescribeSpotInstanceRequests"})
+-	c.Assert(req.Form["SpotInstanceRequestId.1"], DeepEquals, []string{"s-1"})
+-	c.Assert(req.Form["SpotInstanceRequestId.2"], DeepEquals, []string{"s-2"})
+-
+-	c.Assert(err, IsNil)
+-	c.Assert(resp.RequestId, Equals, "b1719f2a-5334-4479-b2f1-26926EXAMPLE")
+-	c.Assert(resp.SpotRequestResults[0].SpotRequestId, Equals, "sir-1a2b3c4d")
+-	c.Assert(resp.SpotRequestResults[0].State, Equals, "active")
+-	c.Assert(resp.SpotRequestResults[0].SpotPrice, Equals, "0.5")
+-	c.Assert(resp.SpotRequestResults[0].SpotLaunchSpec.ImageId, Equals, "ami-1a2b3c4d")
+-	c.Assert(resp.SpotRequestResults[0].Status.Code, Equals, "fulfilled")
+-	c.Assert(resp.SpotRequestResults[0].Status.UpdateTime, Equals, "2008-05-07T12:51:50.000Z")
+-	c.Assert(resp.SpotRequestResults[0].Status.Message, Equals, "Your Spot request is fulfilled.")
+-}
+-
+-func (s *S) TestDescribeInstancesExample1(c *C) {
+-	testServer.Response(200, nil, DescribeInstancesExample1)
+-
+-	filter := ec2.NewFilter()
+-	filter.Add("key1", "value1")
+-	filter.Add("key2", "value2", "value3")
+-
+-	resp, err := s.ec2.Instances([]string{"i-1", "i-2"}, nil)
+-
+-	req := testServer.WaitRequest()
+-	c.Assert(req.Form["Action"], DeepEquals, []string{"DescribeInstances"})
+-	c.Assert(req.Form["InstanceId.1"], DeepEquals, []string{"i-1"})
+-	c.Assert(req.Form["InstanceId.2"], DeepEquals, []string{"i-2"})
+-
+-	c.Assert(err, IsNil)
+-	c.Assert(resp.RequestId, Equals, "98e3c9a4-848c-4d6d-8e8a-b1bdEXAMPLE")
+-	c.Assert(resp.Reservations, HasLen, 2)
+-
+-	r0 := resp.Reservations[0]
+-	c.Assert(r0.ReservationId, Equals, "r-b27e30d9")
+-	c.Assert(r0.OwnerId, Equals, "999988887777")
+-	c.Assert(r0.RequesterId, Equals, "854251627541")
+-	c.Assert(r0.SecurityGroups, DeepEquals, []ec2.SecurityGroup{{Name: "default", Id: "sg-67ad940e"}})
+-	c.Assert(r0.Instances, HasLen, 1)
+-
+-	r0i := r0.Instances[0]
+-	c.Assert(r0i.InstanceId, Equals, "i-c5cd56af")
+-	c.Assert(r0i.PrivateDNSName, Equals, "domU-12-31-39-10-56-34.compute-1.internal")
+-	c.Assert(r0i.DNSName, Equals, "ec2-174-129-165-232.compute-1.amazonaws.com")
+-	c.Assert(r0i.AvailZone, Equals, "us-east-1b")
+-}
+-
+-func (s *S) TestDescribeInstancesExample2(c *C) {
+-	testServer.Response(200, nil, DescribeInstancesExample2)
+-
+-	filter := ec2.NewFilter()
+-	filter.Add("key1", "value1")
+-	filter.Add("key2", "value2", "value3")
+-
+-	resp, err := s.ec2.Instances([]string{"i-1", "i-2"}, filter)
+-
+-	req := testServer.WaitRequest()
+-	c.Assert(req.Form["Action"], DeepEquals, []string{"DescribeInstances"})
+-	c.Assert(req.Form["InstanceId.1"], DeepEquals, []string{"i-1"})
+-	c.Assert(req.Form["InstanceId.2"], DeepEquals, []string{"i-2"})
+-	c.Assert(req.Form["Filter.1.Name"], DeepEquals, []string{"key1"})
+-	c.Assert(req.Form["Filter.1.Value.1"], DeepEquals, []string{"value1"})
+-	c.Assert(req.Form["Filter.1.Value.2"], IsNil)
+-	c.Assert(req.Form["Filter.2.Name"], DeepEquals, []string{"key2"})
+-	c.Assert(req.Form["Filter.2.Value.1"], DeepEquals, []string{"value2"})
+-	c.Assert(req.Form["Filter.2.Value.2"], DeepEquals, []string{"value3"})
+-
+-	c.Assert(err, IsNil)
+-	c.Assert(resp.RequestId, Equals, "59dbff89-35bd-4eac-99ed-be587EXAMPLE")
+-	c.Assert(resp.Reservations, HasLen, 1)
+-
+-	r0 := resp.Reservations[0]
+-	r0i := r0.Instances[0]
+-	c.Assert(r0i.State.Code, Equals, 16)
+-	c.Assert(r0i.State.Name, Equals, "running")
+-
+-	r0t0 := r0i.Tags[0]
+-	r0t1 := r0i.Tags[1]
+-	c.Assert(r0t0.Key, Equals, "webserver")
+-	c.Assert(r0t0.Value, Equals, "")
+-	c.Assert(r0t1.Key, Equals, "stack")
+-	c.Assert(r0t1.Value, Equals, "Production")
+-}
+-
+-func (s *S) TestCreateImageExample(c *C) {
+-	testServer.Response(200, nil, CreateImageExample)
+-
+-	options := &ec2.CreateImage{
+-		InstanceId:  "i-123456",
+-		Name:        "foo",
+-		Description: "Test CreateImage",
+-		NoReboot:    true,
+-		BlockDevices: []ec2.BlockDeviceMapping{
+-			{DeviceName: "/dev/sdb", VirtualName: "ephemeral0"},
+-			{DeviceName: "/dev/sdc", SnapshotId: "snap-a08912c9", DeleteOnTermination: true},
+-		},
+-	}
+-
+-	resp, err := s.ec2.CreateImage(options)
+-
+-	req := testServer.WaitRequest()
+-	c.Assert(req.Form["Action"], DeepEquals, []string{"CreateImage"})
+-	c.Assert(req.Form["InstanceId"], DeepEquals, []string{options.InstanceId})
+-	c.Assert(req.Form["Name"], DeepEquals, []string{options.Name})
+-	c.Assert(req.Form["Description"], DeepEquals, []string{options.Description})
+-	c.Assert(req.Form["NoReboot"], DeepEquals, []string{"true"})
+-	c.Assert(req.Form["BlockDeviceMapping.1.DeviceName"], DeepEquals, []string{"/dev/sdb"})
+-	c.Assert(req.Form["BlockDeviceMapping.1.VirtualName"], DeepEquals, []string{"ephemeral0"})
+-	c.Assert(req.Form["BlockDeviceMapping.2.DeviceName"], DeepEquals, []string{"/dev/sdc"})
+-	c.Assert(req.Form["BlockDeviceMapping.2.Ebs.SnapshotId"], DeepEquals, []string{"snap-a08912c9"})
+-	c.Assert(req.Form["BlockDeviceMapping.2.Ebs.DeleteOnTermination"], DeepEquals, []string{"true"})
+-
+-	c.Assert(err, IsNil)
+-	c.Assert(resp.RequestId, Equals, "59dbff89-35bd-4eac-99ed-be587EXAMPLE")
+-	c.Assert(resp.ImageId, Equals, "ami-4fa54026")
+-}
+-
+-func (s *S) TestDescribeImagesExample(c *C) {
+-	testServer.Response(200, nil, DescribeImagesExample)
+-
+-	filter := ec2.NewFilter()
+-	filter.Add("key1", "value1")
+-	filter.Add("key2", "value2", "value3")
+-
+-	resp, err := s.ec2.Images([]string{"ami-1", "ami-2"}, filter)
+-
+-	req := testServer.WaitRequest()
+-	c.Assert(req.Form["Action"], DeepEquals, []string{"DescribeImages"})
+-	c.Assert(req.Form["ImageId.1"], DeepEquals, []string{"ami-1"})
+-	c.Assert(req.Form["ImageId.2"], DeepEquals, []string{"ami-2"})
+-	c.Assert(req.Form["Filter.1.Name"], DeepEquals, []string{"key1"})
+-	c.Assert(req.Form["Filter.1.Value.1"], DeepEquals, []string{"value1"})
+-	c.Assert(req.Form["Filter.1.Value.2"], IsNil)
+-	c.Assert(req.Form["Filter.2.Name"], DeepEquals, []string{"key2"})
+-	c.Assert(req.Form["Filter.2.Value.1"], DeepEquals, []string{"value2"})
+-	c.Assert(req.Form["Filter.2.Value.2"], DeepEquals, []string{"value3"})
+-
+-	c.Assert(err, IsNil)
+-	c.Assert(resp.RequestId, Equals, "4a4a27a2-2e7c-475d-b35b-ca822EXAMPLE")
+-	c.Assert(resp.Images, HasLen, 1)
+-
+-	i0 := resp.Images[0]
+-	c.Assert(i0.Id, Equals, "ami-a2469acf")
+-	c.Assert(i0.Type, Equals, "machine")
+-	c.Assert(i0.Name, Equals, "example-marketplace-amzn-ami.1")
+-	c.Assert(i0.Description, Equals, "Amazon Linux AMI i386 EBS")
+-	c.Assert(i0.Location, Equals, "aws-marketplace/example-marketplace-amzn-ami.1")
+-	c.Assert(i0.State, Equals, "available")
+-	c.Assert(i0.Public, Equals, true)
+-	c.Assert(i0.OwnerId, Equals, "123456789999")
+-	c.Assert(i0.OwnerAlias, Equals, "aws-marketplace")
+-	c.Assert(i0.Architecture, Equals, "i386")
+-	c.Assert(i0.KernelId, Equals, "aki-805ea7e9")
+-	c.Assert(i0.RootDeviceType, Equals, "ebs")
+-	c.Assert(i0.RootDeviceName, Equals, "/dev/sda1")
+-	c.Assert(i0.VirtualizationType, Equals, "paravirtual")
+-	c.Assert(i0.Hypervisor, Equals, "xen")
+-
+-	c.Assert(i0.BlockDevices, HasLen, 1)
+-	c.Assert(i0.BlockDevices[0].DeviceName, Equals, "/dev/sda1")
+-	c.Assert(i0.BlockDevices[0].SnapshotId, Equals, "snap-787e9403")
+-	c.Assert(i0.BlockDevices[0].VolumeSize, Equals, int64(8))
+-	c.Assert(i0.BlockDevices[0].DeleteOnTermination, Equals, true)
+-
+-	testServer.Response(200, nil, DescribeImagesExample)
+-	resp2, err := s.ec2.ImagesByOwners([]string{"ami-1", "ami-2"}, []string{"123456789999", "id2"}, filter)
+-
+-	req2 := testServer.WaitRequest()
+-	c.Assert(req2.Form["Action"], DeepEquals, []string{"DescribeImages"})
+-	c.Assert(req2.Form["ImageId.1"], DeepEquals, []string{"ami-1"})
+-	c.Assert(req2.Form["ImageId.2"], DeepEquals, []string{"ami-2"})
+-	c.Assert(req2.Form["Owner.1"], DeepEquals, []string{"123456789999"})
+-	c.Assert(req2.Form["Owner.2"], DeepEquals, []string{"id2"})
+-	c.Assert(req2.Form["Filter.1.Name"], DeepEquals, []string{"key1"})
+-	c.Assert(req2.Form["Filter.1.Value.1"], DeepEquals, []string{"value1"})
+-	c.Assert(req2.Form["Filter.1.Value.2"], IsNil)
+-	c.Assert(req2.Form["Filter.2.Name"], DeepEquals, []string{"key2"})
+-	c.Assert(req2.Form["Filter.2.Value.1"], DeepEquals, []string{"value2"})
+-	c.Assert(req2.Form["Filter.2.Value.2"], DeepEquals, []string{"value3"})
+-
+-	c.Assert(err, IsNil)
+-	c.Assert(resp2.RequestId, Equals, "4a4a27a2-2e7c-475d-b35b-ca822EXAMPLE")
+-	c.Assert(resp2.Images, HasLen, 1)
+-
+-	i1 := resp2.Images[0]
+-	c.Assert(i1.Id, Equals, "ami-a2469acf")
+-	c.Assert(i1.Type, Equals, "machine")
+-	c.Assert(i1.Name, Equals, "example-marketplace-amzn-ami.1")
+-	c.Assert(i1.Description, Equals, "Amazon Linux AMI i386 EBS")
+-	c.Assert(i1.Location, Equals, "aws-marketplace/example-marketplace-amzn-ami.1")
+-	c.Assert(i1.State, Equals, "available")
+-	c.Assert(i1.Public, Equals, true)
+-	c.Assert(i1.OwnerId, Equals, "123456789999")
+-	c.Assert(i1.OwnerAlias, Equals, "aws-marketplace")
+-	c.Assert(i1.Architecture, Equals, "i386")
+-	c.Assert(i1.KernelId, Equals, "aki-805ea7e9")
+-	c.Assert(i1.RootDeviceType, Equals, "ebs")
+-	c.Assert(i1.RootDeviceName, Equals, "/dev/sda1")
+-	c.Assert(i1.VirtualizationType, Equals, "paravirtual")
+-	c.Assert(i1.Hypervisor, Equals, "xen")
+-
+-	c.Assert(i1.BlockDevices, HasLen, 1)
+-	c.Assert(i1.BlockDevices[0].DeviceName, Equals, "/dev/sda1")
+-	c.Assert(i1.BlockDevices[0].SnapshotId, Equals, "snap-787e9403")
+-	c.Assert(i1.BlockDevices[0].VolumeSize, Equals, int64(8))
+-	c.Assert(i1.BlockDevices[0].DeleteOnTermination, Equals, true)
+-}
+-
+-func (s *S) TestImageAttributeExample(c *C) {
+-	testServer.Response(200, nil, ImageAttributeExample)
+-
+-	resp, err := s.ec2.ImageAttribute("ami-61a54008", "launchPermission")
+-
+-	req := testServer.WaitRequest()
+-	c.Assert(req.Form["Action"], DeepEquals, []string{"DescribeImageAttribute"})
+-
+-	c.Assert(err, IsNil)
+-	c.Assert(resp.RequestId, Equals, "59dbff89-35bd-4eac-99ed-be587EXAMPLE")
+-	c.Assert(resp.ImageId, Equals, "ami-61a54008")
+-	c.Assert(resp.Group, Equals, "all")
+-	c.Assert(resp.UserIds[0], Equals, "495219933132")
+-}
+-
+-func (s *S) TestCreateSnapshotExample(c *C) {
+-	testServer.Response(200, nil, CreateSnapshotExample)
+-
+-	resp, err := s.ec2.CreateSnapshot("vol-4d826724", "Daily Backup")
+-
+-	req := testServer.WaitRequest()
+-	c.Assert(req.Form["Action"], DeepEquals, []string{"CreateSnapshot"})
+-	c.Assert(req.Form["VolumeId"], DeepEquals, []string{"vol-4d826724"})
+-	c.Assert(req.Form["Description"], DeepEquals, []string{"Daily Backup"})
+-
+-	c.Assert(err, IsNil)
+-	c.Assert(resp.RequestId, Equals, "59dbff89-35bd-4eac-99ed-be587EXAMPLE")
+-	c.Assert(resp.Snapshot.Id, Equals, "snap-78a54011")
+-	c.Assert(resp.Snapshot.VolumeId, Equals, "vol-4d826724")
+-	c.Assert(resp.Snapshot.Status, Equals, "pending")
+-	c.Assert(resp.Snapshot.StartTime, Equals, "2008-05-07T12:51:50.000Z")
+-	c.Assert(resp.Snapshot.Progress, Equals, "60%")
+-	c.Assert(resp.Snapshot.OwnerId, Equals, "111122223333")
+-	c.Assert(resp.Snapshot.VolumeSize, Equals, "10")
+-	c.Assert(resp.Snapshot.Description, Equals, "Daily Backup")
+-}
+-
+-func (s *S) TestDeleteSnapshotsExample(c *C) {
+-	testServer.Response(200, nil, DeleteSnapshotExample)
+-
+-	resp, err := s.ec2.DeleteSnapshots([]string{"snap-78a54011"})
+-
+-	req := testServer.WaitRequest()
+-	c.Assert(req.Form["Action"], DeepEquals, []string{"DeleteSnapshot"})
+-	c.Assert(req.Form["SnapshotId.1"], DeepEquals, []string{"snap-78a54011"})
+-
+-	c.Assert(err, IsNil)
+-	c.Assert(resp.RequestId, Equals, "59dbff89-35bd-4eac-99ed-be587EXAMPLE")
+-}
+-
+-func (s *S) TestDescribeSnapshotsExample(c *C) {
+-	testServer.Response(200, nil, DescribeSnapshotsExample)
+-
+-	filter := ec2.NewFilter()
+-	filter.Add("key1", "value1")
+-	filter.Add("key2", "value2", "value3")
+-
+-	resp, err := s.ec2.Snapshots([]string{"snap-1", "snap-2"}, filter)
+-
+-	req := testServer.WaitRequest()
+-	c.Assert(req.Form["Action"], DeepEquals, []string{"DescribeSnapshots"})
+-	c.Assert(req.Form["SnapshotId.1"], DeepEquals, []string{"snap-1"})
+-	c.Assert(req.Form["SnapshotId.2"], DeepEquals, []string{"snap-2"})
+-	c.Assert(req.Form["Filter.1.Name"], DeepEquals, []string{"key1"})
+-	c.Assert(req.Form["Filter.1.Value.1"], DeepEquals, []string{"value1"})
+-	c.Assert(req.Form["Filter.1.Value.2"], IsNil)
+-	c.Assert(req.Form["Filter.2.Name"], DeepEquals, []string{"key2"})
+-	c.Assert(req.Form["Filter.2.Value.1"], DeepEquals, []string{"value2"})
+-	c.Assert(req.Form["Filter.2.Value.2"], DeepEquals, []string{"value3"})
+-
+-	c.Assert(err, IsNil)
+-	c.Assert(resp.RequestId, Equals, "59dbff89-35bd-4eac-99ed-be587EXAMPLE")
+-	c.Assert(resp.Snapshots, HasLen, 1)
+-
+-	s0 := resp.Snapshots[0]
+-	c.Assert(s0.Id, Equals, "snap-1a2b3c4d")
+-	c.Assert(s0.VolumeId, Equals, "vol-8875daef")
+-	c.Assert(s0.VolumeSize, Equals, "15")
+-	c.Assert(s0.Status, Equals, "pending")
+-	c.Assert(s0.StartTime, Equals, "2010-07-29T04:12:01.000Z")
+-	c.Assert(s0.Progress, Equals, "30%")
+-	c.Assert(s0.OwnerId, Equals, "111122223333")
+-	c.Assert(s0.Description, Equals, "Daily Backup")
+-
+-	c.Assert(s0.Tags, HasLen, 1)
+-	c.Assert(s0.Tags[0].Key, Equals, "Purpose")
+-	c.Assert(s0.Tags[0].Value, Equals, "demo_db_14_backup")
+-}
+-
+-func (s *S) TestModifyImageAttributeExample(c *C) {
+-	testServer.Response(200, nil, ModifyImageAttributeExample)
+-
+-	options := ec2.ModifyImageAttribute{
+-		Description: "Test Description",
+-	}
+-
+-	resp, err := s.ec2.ModifyImageAttribute("ami-4fa54026", &options)
+-
+-	req := testServer.WaitRequest()
+-	c.Assert(req.Form["Action"], DeepEquals, []string{"ModifyImageAttribute"})
+-
+-	c.Assert(err, IsNil)
+-	c.Assert(resp.RequestId, Equals, "59dbff89-35bd-4eac-99ed-be587EXAMPLE")
+-}
+-
+-func (s *S) TestModifyImageAttributeExample_complex(c *C) {
+-	testServer.Response(200, nil, ModifyImageAttributeExample)
+-
+-	options := ec2.ModifyImageAttribute{
+-		AddUsers:     []string{"u1", "u2"},
+-		RemoveUsers:  []string{"u3"},
+-		AddGroups:    []string{"g1", "g3"},
+-		RemoveGroups: []string{"g2"},
+-		Description:  "Test Description",
+-	}
+-
+-	resp, err := s.ec2.ModifyImageAttribute("ami-4fa54026", &options)
+-
+-	req := testServer.WaitRequest()
+-	c.Assert(req.Form["Action"], DeepEquals, []string{"ModifyImageAttribute"})
+-	c.Assert(req.Form["LaunchPermission.Add.1.UserId"], DeepEquals, []string{"u1"})
+-	c.Assert(req.Form["LaunchPermission.Add.2.UserId"], DeepEquals, []string{"u2"})
+-	c.Assert(req.Form["LaunchPermission.Remove.1.UserId"], DeepEquals, []string{"u3"})
+-	c.Assert(req.Form["LaunchPermission.Add.1.Group"], DeepEquals, []string{"g1"})
+-	c.Assert(req.Form["LaunchPermission.Add.2.Group"], DeepEquals, []string{"g3"})
+-	c.Assert(req.Form["LaunchPermission.Remove.1.Group"], DeepEquals, []string{"g2"})
+-
+-	c.Assert(err, IsNil)
+-	c.Assert(resp.RequestId, Equals, "59dbff89-35bd-4eac-99ed-be587EXAMPLE")
+-}
+-
+-func (s *S) TestCopyImageExample(c *C) {
+-	testServer.Response(200, nil, CopyImageExample)
+-
+-	options := ec2.CopyImage{
+-		SourceRegion:  "us-west-2",
+-		SourceImageId: "ami-1a2b3c4d",
+-		Description:   "Test Description",
+-	}
+-
+-	resp, err := s.ec2.CopyImage(&options)
+-
+-	req := testServer.WaitRequest()
+-	c.Assert(req.Form["Action"], DeepEquals, []string{"CopyImage"})
+-
+-	c.Assert(err, IsNil)
+-	c.Assert(resp.RequestId, Equals, "60bc441d-fa2c-494d-b155-5d6a3EXAMPLE")
+-}
+-
+-func (s *S) TestCreateKeyPairExample(c *C) {
+-	testServer.Response(200, nil, CreateKeyPairExample)
+-
+-	resp, err := s.ec2.CreateKeyPair("foo")
+-
+-	req := testServer.WaitRequest()
+-	c.Assert(req.Form["Action"], DeepEquals, []string{"CreateKeyPair"})
+-	c.Assert(req.Form["KeyName"], DeepEquals, []string{"foo"})
+-
+-	c.Assert(err, IsNil)
+-	c.Assert(resp.RequestId, Equals, "59dbff89-35bd-4eac-99ed-be587EXAMPLE")
+-	c.Assert(resp.KeyName, Equals, "foo")
+-	c.Assert(resp.KeyFingerprint, Equals, "00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00")
+-}
+-
+-func (s *S) TestDeleteKeyPairExample(c *C) {
+-	testServer.Response(200, nil, DeleteKeyPairExample)
+-
+-	resp, err := s.ec2.DeleteKeyPair("foo")
+-
+-	req := testServer.WaitRequest()
+-	c.Assert(req.Form["Action"], DeepEquals, []string{"DeleteKeyPair"})
+-	c.Assert(req.Form["KeyName"], DeepEquals, []string{"foo"})
+-
+-	c.Assert(err, IsNil)
+-	c.Assert(resp.RequestId, Equals, "59dbff89-35bd-4eac-99ed-be587EXAMPLE")
+-}
+-
+-func (s *S) TestCreateSecurityGroupExample(c *C) {
+-	testServer.Response(200, nil, CreateSecurityGroupExample)
+-
+-	resp, err := s.ec2.CreateSecurityGroup(ec2.SecurityGroup{Name: "websrv", Description: "Web Servers"})
+-
+-	req := testServer.WaitRequest()
+-	c.Assert(req.Form["Action"], DeepEquals, []string{"CreateSecurityGroup"})
+-	c.Assert(req.Form["GroupName"], DeepEquals, []string{"websrv"})
+-	c.Assert(req.Form["GroupDescription"], DeepEquals, []string{"Web Servers"})
+-
+-	c.Assert(err, IsNil)
+-	c.Assert(resp.RequestId, Equals, "59dbff89-35bd-4eac-99ed-be587EXAMPLE")
+-	c.Assert(resp.Name, Equals, "websrv")
+-	c.Assert(resp.Id, Equals, "sg-67ad940e")
+-}
+-
+-func (s *S) TestDescribeSecurityGroupsExample(c *C) {
+-	testServer.Response(200, nil, DescribeSecurityGroupsExample)
+-
+-	resp, err := s.ec2.SecurityGroups([]ec2.SecurityGroup{{Name: "WebServers"}, {Name: "RangedPortsBySource"}}, nil)
+-
+-	req := testServer.WaitRequest()
+-	c.Assert(req.Form["Action"], DeepEquals, []string{"DescribeSecurityGroups"})
+-	c.Assert(req.Form["GroupName.1"], DeepEquals, []string{"WebServers"})
+-	c.Assert(req.Form["GroupName.2"], DeepEquals, []string{"RangedPortsBySource"})
+-
+-	c.Assert(err, IsNil)
+-	c.Assert(resp.RequestId, Equals, "59dbff89-35bd-4eac-99ed-be587EXAMPLE")
+-	c.Assert(resp.Groups, HasLen, 2)
+-
+-	g0 := resp.Groups[0]
+-	c.Assert(g0.OwnerId, Equals, "999988887777")
+-	c.Assert(g0.Name, Equals, "WebServers")
+-	c.Assert(g0.Id, Equals, "sg-67ad940e")
+-	c.Assert(g0.Description, Equals, "Web Servers")
+-	c.Assert(g0.IPPerms, HasLen, 1)
+-
+-	g0ipp := g0.IPPerms[0]
+-	c.Assert(g0ipp.Protocol, Equals, "tcp")
+-	c.Assert(g0ipp.FromPort, Equals, 80)
+-	c.Assert(g0ipp.ToPort, Equals, 80)
+-	c.Assert(g0ipp.SourceIPs, DeepEquals, []string{"0.0.0.0/0"})
+-
+-	g1 := resp.Groups[1]
+-	c.Assert(g1.OwnerId, Equals, "999988887777")
+-	c.Assert(g1.Name, Equals, "RangedPortsBySource")
+-	c.Assert(g1.Id, Equals, "sg-76abc467")
+-	c.Assert(g1.Description, Equals, "Group A")
+-	c.Assert(g1.IPPerms, HasLen, 1)
+-
+-	g1ipp := g1.IPPerms[0]
+-	c.Assert(g1ipp.Protocol, Equals, "tcp")
+-	c.Assert(g1ipp.FromPort, Equals, 6000)
+-	c.Assert(g1ipp.ToPort, Equals, 7000)
+-	c.Assert(g1ipp.SourceIPs, IsNil)
+-}
+-
+-func (s *S) TestDescribeSecurityGroupsExampleWithFilter(c *C) {
+-	testServer.Response(200, nil, DescribeSecurityGroupsExample)
+-
+-	filter := ec2.NewFilter()
+-	filter.Add("ip-permission.protocol", "tcp")
+-	filter.Add("ip-permission.from-port", "22")
+-	filter.Add("ip-permission.to-port", "22")
+-	filter.Add("ip-permission.group-name", "app_server_group", "database_group")
+-
+-	_, err := s.ec2.SecurityGroups(nil, filter)
+-
+-	req := testServer.WaitRequest()
+-	c.Assert(req.Form["Action"], DeepEquals, []string{"DescribeSecurityGroups"})
+-	c.Assert(req.Form["Filter.1.Name"], DeepEquals, []string{"ip-permission.from-port"})
+-	c.Assert(req.Form["Filter.1.Value.1"], DeepEquals, []string{"22"})
+-	c.Assert(req.Form["Filter.2.Name"], DeepEquals, []string{"ip-permission.group-name"})
+-	c.Assert(req.Form["Filter.2.Value.1"], DeepEquals, []string{"app_server_group"})
+-	c.Assert(req.Form["Filter.2.Value.2"], DeepEquals, []string{"database_group"})
+-	c.Assert(req.Form["Filter.3.Name"], DeepEquals, []string{"ip-permission.protocol"})
+-	c.Assert(req.Form["Filter.3.Value.1"], DeepEquals, []string{"tcp"})
+-	c.Assert(req.Form["Filter.4.Name"], DeepEquals, []string{"ip-permission.to-port"})
+-	c.Assert(req.Form["Filter.4.Value.1"], DeepEquals, []string{"22"})
+-
+-	c.Assert(err, IsNil)
+-}
+-
+-func (s *S) TestDescribeSecurityGroupsDumpWithGroup(c *C) {
+-	testServer.Response(200, nil, DescribeSecurityGroupsDump)
+-
+-	resp, err := s.ec2.SecurityGroups(nil, nil)
+-
+-	req := testServer.WaitRequest()
+-	c.Assert(req.Form["Action"], DeepEquals, []string{"DescribeSecurityGroups"})
+-	c.Assert(err, IsNil)
+-	c.Check(resp.Groups, HasLen, 1)
+-	c.Check(resp.Groups[0].IPPerms, HasLen, 2)
+-
+-	ipp0 := resp.Groups[0].IPPerms[0]
+-	c.Assert(ipp0.SourceIPs, IsNil)
+-	c.Check(ipp0.Protocol, Equals, "icmp")
+-	c.Assert(ipp0.SourceGroups, HasLen, 1)
+-	c.Check(ipp0.SourceGroups[0].OwnerId, Equals, "12345")
+-	c.Check(ipp0.SourceGroups[0].Name, Equals, "default")
+-	c.Check(ipp0.SourceGroups[0].Id, Equals, "sg-67ad940e")
+-
+-	ipp1 := resp.Groups[0].IPPerms[1]
+-	c.Check(ipp1.Protocol, Equals, "tcp")
+-	c.Assert(ipp0.SourceIPs, IsNil)
+-	c.Assert(ipp0.SourceGroups, HasLen, 1)
+-	c.Check(ipp1.SourceGroups[0].Id, Equals, "sg-76abc467")
+-	c.Check(ipp1.SourceGroups[0].OwnerId, Equals, "12345")
+-	c.Check(ipp1.SourceGroups[0].Name, Equals, "other")
+-}
+-
+-func (s *S) TestDeleteSecurityGroupExample(c *C) {
+-	testServer.Response(200, nil, DeleteSecurityGroupExample)
+-
+-	resp, err := s.ec2.DeleteSecurityGroup(ec2.SecurityGroup{Name: "websrv"})
+-	req := testServer.WaitRequest()
+-
+-	c.Assert(req.Form["Action"], DeepEquals, []string{"DeleteSecurityGroup"})
+-	c.Assert(req.Form["GroupName"], DeepEquals, []string{"websrv"})
+-	c.Assert(req.Form["GroupId"], IsNil)
+-	c.Assert(err, IsNil)
+-	c.Assert(resp.RequestId, Equals, "59dbff89-35bd-4eac-99ed-be587EXAMPLE")
+-}
+-
+-func (s *S) TestDeleteSecurityGroupExampleWithId(c *C) {
+-	testServer.Response(200, nil, DeleteSecurityGroupExample)
+-
+-	// ignore return and error - we're only want to check the parameter handling.
+-	s.ec2.DeleteSecurityGroup(ec2.SecurityGroup{Id: "sg-67ad940e", Name: "ignored"})
+-	req := testServer.WaitRequest()
+-
+-	c.Assert(req.Form["GroupName"], IsNil)
+-	c.Assert(req.Form["GroupId"], DeepEquals, []string{"sg-67ad940e"})
+-}
+-
+-func (s *S) TestAuthorizeSecurityGroupExample1(c *C) {
+-	testServer.Response(200, nil, AuthorizeSecurityGroupIngressExample)
+-
+-	perms := []ec2.IPPerm{{
+-		Protocol:  "tcp",
+-		FromPort:  80,
+-		ToPort:    80,
+-		SourceIPs: []string{"205.192.0.0/16", "205.159.0.0/16"},
+-	}}
+-	resp, err := s.ec2.AuthorizeSecurityGroup(ec2.SecurityGroup{Name: "websrv"}, perms)
+-
+-	req := testServer.WaitRequest()
+-
+-	c.Assert(req.Form["Action"], DeepEquals, []string{"AuthorizeSecurityGroupIngress"})
+-	c.Assert(req.Form["GroupName"], DeepEquals, []string{"websrv"})
+-	c.Assert(req.Form["IpPermissions.1.IpProtocol"], DeepEquals, []string{"tcp"})
+-	c.Assert(req.Form["IpPermissions.1.FromPort"], DeepEquals, []string{"80"})
+-	c.Assert(req.Form["IpPermissions.1.ToPort"], DeepEquals, []string{"80"})
+-	c.Assert(req.Form["IpPermissions.1.IpRanges.1.CidrIp"], DeepEquals, []string{"205.192.0.0/16"})
+-	c.Assert(req.Form["IpPermissions.1.IpRanges.2.CidrIp"], DeepEquals, []string{"205.159.0.0/16"})
+-
+-	c.Assert(err, IsNil)
+-	c.Assert(resp.RequestId, Equals, "59dbff89-35bd-4eac-99ed-be587EXAMPLE")
+-}
+-
+-func (s *S) TestAuthorizeSecurityGroupEgress(c *C) {
+-	testServer.Response(200, nil, AuthorizeSecurityGroupEgressExample)
+-
+-	perms := []ec2.IPPerm{{
+-		Protocol:  "tcp",
+-		FromPort:  80,
+-		ToPort:    80,
+-		SourceIPs: []string{"205.192.0.0/16", "205.159.0.0/16"},
+-	}}
+-	resp, err := s.ec2.AuthorizeSecurityGroupEgress(ec2.SecurityGroup{Name: "websrv"}, perms)
+-
+-	req := testServer.WaitRequest()
+-
+-	c.Assert(req.Form["Action"], DeepEquals, []string{"AuthorizeSecurityGroupEgress"})
+-	c.Assert(req.Form["GroupName"], DeepEquals, []string{"websrv"})
+-	c.Assert(req.Form["IpPermissions.1.IpProtocol"], DeepEquals, []string{"tcp"})
+-	c.Assert(req.Form["IpPermissions.1.FromPort"], DeepEquals, []string{"80"})
+-	c.Assert(req.Form["IpPermissions.1.ToPort"], DeepEquals, []string{"80"})
+-	c.Assert(req.Form["IpPermissions.1.IpRanges.1.CidrIp"], DeepEquals, []string{"205.192.0.0/16"})
+-	c.Assert(req.Form["IpPermissions.1.IpRanges.2.CidrIp"], DeepEquals, []string{"205.159.0.0/16"})
+-
+-	c.Assert(err, IsNil)
+-	c.Assert(resp.RequestId, Equals, "59dbff89-35bd-4eac-99ed-be587EXAMPLE")
+-}
+-
+-func (s *S) TestAuthorizeSecurityGroupExample1WithId(c *C) {
+-	testServer.Response(200, nil, AuthorizeSecurityGroupIngressExample)
+-
+-	perms := []ec2.IPPerm{{
+-		Protocol:  "tcp",
+-		FromPort:  80,
+-		ToPort:    80,
+-		SourceIPs: []string{"205.192.0.0/16", "205.159.0.0/16"},
+-	}}
+-	// ignore return and error - we only want to check the parameter handling.
+-	s.ec2.AuthorizeSecurityGroup(ec2.SecurityGroup{Id: "sg-67ad940e", Name: "ignored"}, perms)
+-
+-	req := testServer.WaitRequest()
+-
+-	c.Assert(req.Form["GroupName"], IsNil)
+-	c.Assert(req.Form["GroupId"], DeepEquals, []string{"sg-67ad940e"})
+-}
+-
+-func (s *S) TestAuthorizeSecurityGroupExample2(c *C) {
+-	testServer.Response(200, nil, AuthorizeSecurityGroupIngressExample)
+-
+-	perms := []ec2.IPPerm{{
+-		Protocol: "tcp",
+-		FromPort: 80,
+-		ToPort:   81,
+-		SourceGroups: []ec2.UserSecurityGroup{
+-			{OwnerId: "999988887777", Name: "OtherAccountGroup"},
+-			{Id: "sg-67ad940e"},
+-		},
+-	}}
+-	resp, err := s.ec2.AuthorizeSecurityGroup(ec2.SecurityGroup{Name: "websrv"}, perms)
+-
+-	req := testServer.WaitRequest()
+-
+-	c.Assert(req.Form["Action"], DeepEquals, []string{"AuthorizeSecurityGroupIngress"})
+-	c.Assert(req.Form["GroupName"], DeepEquals, []string{"websrv"})
+-	c.Assert(req.Form["IpPermissions.1.IpProtocol"], DeepEquals, []string{"tcp"})
+-	c.Assert(req.Form["IpPermissions.1.FromPort"], DeepEquals, []string{"80"})
+-	c.Assert(req.Form["IpPermissions.1.ToPort"], DeepEquals, []string{"81"})
+-	c.Assert(req.Form["IpPermissions.1.Groups.1.UserId"], DeepEquals, []string{"999988887777"})
+-	c.Assert(req.Form["IpPermissions.1.Groups.1.GroupName"], DeepEquals, []string{"OtherAccountGroup"})
+-	c.Assert(req.Form["IpPermissions.1.Groups.2.UserId"], IsNil)
+-	c.Assert(req.Form["IpPermissions.1.Groups.2.GroupName"], IsNil)
+-	c.Assert(req.Form["IpPermissions.1.Groups.2.GroupId"], DeepEquals, []string{"sg-67ad940e"})
+-
+-	c.Assert(err, IsNil)
+-	c.Assert(resp.RequestId, Equals, "59dbff89-35bd-4eac-99ed-be587EXAMPLE")
+-}
+-
+-func (s *S) TestRevokeSecurityGroupExample(c *C) {
+-	// RevokeSecurityGroup is implemented by the same code as AuthorizeSecurityGroup
+-	// so there's no need to duplicate all the tests.
+-	testServer.Response(200, nil, RevokeSecurityGroupIngressExample)
+-
+-	resp, err := s.ec2.RevokeSecurityGroup(ec2.SecurityGroup{Name: "websrv"}, nil)
+-
+-	req := testServer.WaitRequest()
+-
+-	c.Assert(req.Form["Action"], DeepEquals, []string{"RevokeSecurityGroupIngress"})
+-	c.Assert(req.Form["GroupName"], DeepEquals, []string{"websrv"})
+-	c.Assert(err, IsNil)
+-	c.Assert(resp.RequestId, Equals, "59dbff89-35bd-4eac-99ed-be587EXAMPLE")
+-}
+-
+-func (s *S) TestCreateTags(c *C) {
+-	testServer.Response(200, nil, CreateTagsExample)
+-
+-	resp, err := s.ec2.CreateTags([]string{"ami-1a2b3c4d", "i-7f4d3a2b"}, []ec2.Tag{{"webserver", ""}, {"stack", "Production"}})
+-
+-	req := testServer.WaitRequest()
+-	c.Assert(req.Form["ResourceId.1"], DeepEquals, []string{"ami-1a2b3c4d"})
+-	c.Assert(req.Form["ResourceId.2"], DeepEquals, []string{"i-7f4d3a2b"})
+-	c.Assert(req.Form["Tag.1.Key"], DeepEquals, []string{"webserver"})
+-	c.Assert(req.Form["Tag.1.Value"], DeepEquals, []string{""})
+-	c.Assert(req.Form["Tag.2.Key"], DeepEquals, []string{"stack"})
+-	c.Assert(req.Form["Tag.2.Value"], DeepEquals, []string{"Production"})
+-
+-	c.Assert(err, IsNil)
+-	c.Assert(resp.RequestId, Equals, "59dbff89-35bd-4eac-99ed-be587EXAMPLE")
+-}
+-
+-func (s *S) TestStartInstances(c *C) {
+-	testServer.Response(200, nil, StartInstancesExample)
+-
+-	resp, err := s.ec2.StartInstances("i-10a64379")
+-	req := testServer.WaitRequest()
+-
+-	c.Assert(req.Form["Action"], DeepEquals, []string{"StartInstances"})
+-	c.Assert(req.Form["InstanceId.1"], DeepEquals, []string{"i-10a64379"})
+-
+-	c.Assert(err, IsNil)
+-	c.Assert(resp.RequestId, Equals, "59dbff89-35bd-4eac-99ed-be587EXAMPLE")
+-
+-	s0 := resp.StateChanges[0]
+-	c.Assert(s0.InstanceId, Equals, "i-10a64379")
+-	c.Assert(s0.CurrentState.Code, Equals, 0)
+-	c.Assert(s0.CurrentState.Name, Equals, "pending")
+-	c.Assert(s0.PreviousState.Code, Equals, 80)
+-	c.Assert(s0.PreviousState.Name, Equals, "stopped")
+-}
+-
+-func (s *S) TestStopInstances(c *C) {
+-	testServer.Response(200, nil, StopInstancesExample)
+-
+-	resp, err := s.ec2.StopInstances("i-10a64379")
+-	req := testServer.WaitRequest()
+-
+-	c.Assert(req.Form["Action"], DeepEquals, []string{"StopInstances"})
+-	c.Assert(req.Form["InstanceId.1"], DeepEquals, []string{"i-10a64379"})
+-
+-	c.Assert(err, IsNil)
+-	c.Assert(resp.RequestId, Equals, "59dbff89-35bd-4eac-99ed-be587EXAMPLE")
+-
+-	s0 := resp.StateChanges[0]
+-	c.Assert(s0.InstanceId, Equals, "i-10a64379")
+-	c.Assert(s0.CurrentState.Code, Equals, 64)
+-	c.Assert(s0.CurrentState.Name, Equals, "stopping")
+-	c.Assert(s0.PreviousState.Code, Equals, 16)
+-	c.Assert(s0.PreviousState.Name, Equals, "running")
+-}
+-
+-func (s *S) TestRebootInstances(c *C) {
+-	testServer.Response(200, nil, RebootInstancesExample)
+-
+-	resp, err := s.ec2.RebootInstances("i-10a64379")
+-	req := testServer.WaitRequest()
+-
+-	c.Assert(req.Form["Action"], DeepEquals, []string{"RebootInstances"})
+-	c.Assert(req.Form["InstanceId.1"], DeepEquals, []string{"i-10a64379"})
+-
+-	c.Assert(err, IsNil)
+-	c.Assert(resp.RequestId, Equals, "59dbff89-35bd-4eac-99ed-be587EXAMPLE")
+-}
+-
+-func (s *S) TestSignatureWithEndpointPath(c *C) {
+-	ec2.FakeTime(true)
+-	defer ec2.FakeTime(false)
+-
+-	testServer.Response(200, nil, RebootInstancesExample)
+-
+-	// https://bugs.launchpad.net/goamz/+bug/1022749
+-	ec2 := ec2.NewWithClient(s.ec2.Auth, aws.Region{EC2Endpoint: testServer.URL + "/services/Cloud"}, testutil.DefaultClient)
+-
+-	_, err := ec2.RebootInstances("i-10a64379")
+-	c.Assert(err, IsNil)
+-
+-	req := testServer.WaitRequest()
+-	c.Assert(req.Form["Signature"], DeepEquals, []string{"QmvgkYGn19WirCuCz/jRp3RmRgFwWR5WRkKZ5AZnyXQ="})
+-}
+-
+-func (s *S) TestAllocateAddressExample(c *C) {
+-	testServer.Response(200, nil, AllocateAddressExample)
+-
+-	options := &ec2.AllocateAddress{
+-		Domain: "vpc",
+-	}
+-
+-	resp, err := s.ec2.AllocateAddress(options)
+-
+-	req := testServer.WaitRequest()
+-	c.Assert(req.Form["Action"], DeepEquals, []string{"AllocateAddress"})
+-	c.Assert(req.Form["Domain"], DeepEquals, []string{"vpc"})
+-
+-	c.Assert(err, IsNil)
+-	c.Assert(resp.RequestId, Equals, "59dbff89-35bd-4eac-99ed-be587EXAMPLE")
+-	c.Assert(resp.PublicIp, Equals, "198.51.100.1")
+-	c.Assert(resp.Domain, Equals, "vpc")
+-	c.Assert(resp.AllocationId, Equals, "eipalloc-5723d13e")
+-}
+-
+-func (s *S) TestReleaseAddressExample(c *C) {
+-	testServer.Response(200, nil, ReleaseAddressExample)
+-
+-	resp, err := s.ec2.ReleaseAddress("eipalloc-5723d13e")
+-
+-	req := testServer.WaitRequest()
+-	c.Assert(req.Form["Action"], DeepEquals, []string{"ReleaseAddress"})
+-	c.Assert(req.Form["AllocationId"], DeepEquals, []string{"eipalloc-5723d13e"})
+-
+-	c.Assert(err, IsNil)
+-	c.Assert(resp.RequestId, Equals, "59dbff89-35bd-4eac-99ed-be587EXAMPLE")
+-}
+-
+-func (s *S) TestAssociateAddressExample(c *C) {
+-	testServer.Response(200, nil, AssociateAddressExample)
+-
+-	options := &ec2.AssociateAddress{
+-		InstanceId:         "i-4fd2431a",
+-		AllocationId:       "eipalloc-5723d13e",
+-		AllowReassociation: true,
+-	}
+-
+-	resp, err := s.ec2.AssociateAddress(options)
+-
+-	req := testServer.WaitRequest()
+-	c.Assert(req.Form["Action"], DeepEquals, []string{"AssociateAddress"})
+-	c.Assert(req.Form["InstanceId"], DeepEquals, []string{"i-4fd2431a"})
+-	c.Assert(req.Form["AllocationId"], DeepEquals, []string{"eipalloc-5723d13e"})
+-	c.Assert(req.Form["AllowReassociation"], DeepEquals, []string{"true"})
+-
+-	c.Assert(err, IsNil)
+-	c.Assert(resp.RequestId, Equals, "59dbff89-35bd-4eac-99ed-be587EXAMPLE")
+-	c.Assert(resp.AssociationId, Equals, "eipassoc-fc5ca095")
+-}
+-
+-func (s *S) TestDisassociateAddressExample(c *C) {
+-	testServer.Response(200, nil, DisassociateAddressExample)
+-
+-	resp, err := s.ec2.DisassociateAddress("eipassoc-aa7486c3")
+-
+-	req := testServer.WaitRequest()
+-	c.Assert(req.Form["Action"], DeepEquals, []string{"DisassociateAddress"})
+-	c.Assert(req.Form["AssociationId"], DeepEquals, []string{"eipassoc-aa7486c3"})
+-
+-	c.Assert(err, IsNil)
+-	c.Assert(resp.RequestId, Equals, "59dbff89-35bd-4eac-99ed-be587EXAMPLE")
+-}
+-
+-func (s *S) TestModifyInstance(c *C) {
+-	testServer.Response(200, nil, ModifyInstanceExample)
+-
+-	options := ec2.ModifyInstance{
+-		InstanceType:          "m1.small",
+-		DisableAPITermination: true,
+-		EbsOptimized:          true,
+-		SecurityGroups:        []ec2.SecurityGroup{{Id: "g1"}, {Id: "g2"}},
+-		ShutdownBehavior:      "terminate",
+-		KernelId:              "kernel-id",
+-		RamdiskId:             "ramdisk-id",
+-		SourceDestCheck:       true,
+-		SriovNetSupport:       true,
+-		UserData:              []byte("1234"),
+-		BlockDevices: []ec2.BlockDeviceMapping{
+-			{DeviceName: "/dev/sda1", SnapshotId: "snap-a08912c9", DeleteOnTermination: true},
+-		},
+-	}
+-
+-	resp, err := s.ec2.ModifyInstance("i-2ba64342", &options)
+-	req := testServer.WaitRequest()
+-
+-	c.Assert(req.Form["Action"], DeepEquals, []string{"ModifyInstanceAttribute"})
+-	c.Assert(req.Form["InstanceId"], DeepEquals, []string{"i-2ba64342"})
+-	c.Assert(req.Form["InstanceType.Value"], DeepEquals, []string{"m1.small"})
+-	c.Assert(req.Form["BlockDeviceMapping.1.DeviceName"], DeepEquals, []string{"/dev/sda1"})
+-	c.Assert(req.Form["BlockDeviceMapping.1.Ebs.SnapshotId"], DeepEquals, []string{"snap-a08912c9"})
+-	c.Assert(req.Form["BlockDeviceMapping.1.Ebs.DeleteOnTermination"], DeepEquals, []string{"true"})
+-	c.Assert(req.Form["DisableApiTermination.Value"], DeepEquals, []string{"true"})
+-	c.Assert(req.Form["EbsOptimized"], DeepEquals, []string{"true"})
+-	c.Assert(req.Form["GroupId.1"], DeepEquals, []string{"g1"})
+-	c.Assert(req.Form["GroupId.2"], DeepEquals, []string{"g2"})
+-	c.Assert(req.Form["InstanceInitiatedShutdownBehavior.Value"], DeepEquals, []string{"terminate"})
+-	c.Assert(req.Form["Kernel.Value"], DeepEquals, []string{"kernel-id"})
+-	c.Assert(req.Form["Ramdisk.Value"], DeepEquals, []string{"ramdisk-id"})
+-	c.Assert(req.Form["SourceDestCheck.Value"], DeepEquals, []string{"true"})
+-	c.Assert(req.Form["SriovNetSupport.Value"], DeepEquals, []string{"simple"})
+-	c.Assert(req.Form["UserData"], DeepEquals, []string{"MTIzNA=="})
+-
+-	c.Assert(err, IsNil)
+-	c.Assert(resp.RequestId, Equals, "59dbff89-35bd-4eac-99ed-be587EXAMPLE")
+-}
+-
+-func (s *S) TestCreateVpc(c *C) {
+-	testServer.Response(200, nil, CreateVpcExample)
+-
+-	options := &ec2.CreateVpc{
+-		CidrBlock: "foo",
+-	}
+-
+-	resp, err := s.ec2.CreateVpc(options)
+-
+-	req := testServer.WaitRequest()
+-	c.Assert(req.Form["CidrBlock"], DeepEquals, []string{"foo"})
+-
+-	c.Assert(err, IsNil)
+-	c.Assert(resp.RequestId, Equals, "7a62c49f-347e-4fc4-9331-6e8eEXAMPLE")
+-	c.Assert(resp.VPC.VpcId, Equals, "vpc-1a2b3c4d")
+-	c.Assert(resp.VPC.State, Equals, "pending")
+-	c.Assert(resp.VPC.CidrBlock, Equals, "10.0.0.0/16")
+-	c.Assert(resp.VPC.DHCPOptionsID, Equals, "dopt-1a2b3c4d2")
+-	c.Assert(resp.VPC.InstanceTenancy, Equals, "default")
+-}
+-
+-func (s *S) TestDescribeVpcs(c *C) {
+-	testServer.Response(200, nil, DescribeVpcsExample)
+-
+-	filter := ec2.NewFilter()
+-	filter.Add("key1", "value1")
+-	filter.Add("key2", "value2", "value3")
+-
+-	resp, err := s.ec2.DescribeVpcs([]string{"id1", "id2"}, filter)
+-
+-	req := testServer.WaitRequest()
+-	c.Assert(req.Form["Action"], DeepEquals, []string{"DescribeVpcs"})
+-	c.Assert(req.Form["VpcId.1"], DeepEquals, []string{"id1"})
+-	c.Assert(req.Form["VpcId.2"], DeepEquals, []string{"id2"})
+-	c.Assert(req.Form["Filter.1.Name"], DeepEquals, []string{"key1"})
+-	c.Assert(req.Form["Filter.1.Value.1"], DeepEquals, []string{"value1"})
+-	c.Assert(req.Form["Filter.1.Value.2"], IsNil)
+-	c.Assert(req.Form["Filter.2.Name"], DeepEquals, []string{"key2"})
+-	c.Assert(req.Form["Filter.2.Value.1"], DeepEquals, []string{"value2"})
+-	c.Assert(req.Form["Filter.2.Value.2"], DeepEquals, []string{"value3"})
+-
+-	c.Assert(err, IsNil)
+-	c.Assert(resp.RequestId, Equals, "7a62c49f-347e-4fc4-9331-6e8eEXAMPLE")
+-	c.Assert(resp.VPCs, HasLen, 1)
+-}
+-
+-func (s *S) TestCreateSubnet(c *C) {
+-	testServer.Response(200, nil, CreateSubnetExample)
+-
+-	options := &ec2.CreateSubnet{
+-		AvailabilityZone: "baz",
+-		CidrBlock:        "foo",
+-		VpcId:            "bar",
+-	}
+-
+-	resp, err := s.ec2.CreateSubnet(options)
+-
+-	req := testServer.WaitRequest()
+-	c.Assert(req.Form["VpcId"], DeepEquals, []string{"bar"})
+-	c.Assert(req.Form["CidrBlock"], DeepEquals, []string{"foo"})
+-	c.Assert(req.Form["AvailabilityZone"], DeepEquals, []string{"baz"})
+-
+-	c.Assert(err, IsNil)
+-	c.Assert(resp.RequestId, Equals, "7a62c49f-347e-4fc4-9331-6e8eEXAMPLE")
+-	c.Assert(resp.Subnet.SubnetId, Equals, "subnet-9d4a7b6c")
+-	c.Assert(resp.Subnet.State, Equals, "pending")
+-	c.Assert(resp.Subnet.VpcId, Equals, "vpc-1a2b3c4d")
+-	c.Assert(resp.Subnet.CidrBlock, Equals, "10.0.1.0/24")
+-	c.Assert(resp.Subnet.AvailableIpAddressCount, Equals, 251)
+-}
+-
+-func (s *S) TestResetImageAttribute(c *C) {
+-	testServer.Response(200, nil, ResetImageAttributeExample)
+-
+-	options := ec2.ResetImageAttribute{Attribute: "launchPermission"}
+-	resp, err := s.ec2.ResetImageAttribute("i-2ba64342", &options)
+-
+-	req := testServer.WaitRequest()
+-	c.Assert(req.Form["Action"], DeepEquals, []string{"ResetImageAttribute"})
+-
+-	c.Assert(err, IsNil)
+-	c.Assert(resp.RequestId, Equals, "59dbff89-35bd-4eac-99ed-be587EXAMPLE")
+-}
+diff --git a/Godeps/_workspace/src/github.com/mitchellh/goamz/ec2/ec2i_test.go b/Godeps/_workspace/src/github.com/mitchellh/goamz/ec2/ec2i_test.go
+deleted file mode 100644
+index 3773041..0000000
+--- a/Godeps/_workspace/src/github.com/mitchellh/goamz/ec2/ec2i_test.go
++++ /dev/null
+@@ -1,203 +0,0 @@
+-package ec2_test
+-
+-import (
+-	"crypto/rand"
+-	"fmt"
+-	"github.com/mitchellh/goamz/aws"
+-	"github.com/mitchellh/goamz/ec2"
+-	"github.com/mitchellh/goamz/testutil"
+-	. "github.com/motain/gocheck"
+-)
+-
+-// AmazonServer represents an Amazon EC2 server.
+-type AmazonServer struct {
+-	auth aws.Auth
+-}
+-
+-func (s *AmazonServer) SetUp(c *C) {
+-	auth, err := aws.EnvAuth()
+-	if err != nil {
+-		c.Fatal(err.Error())
+-	}
+-	s.auth = auth
+-}
+-
+-// Suite cost per run: 0.02 USD
+-var _ = Suite(&AmazonClientSuite{})
+-
+-// AmazonClientSuite tests the client against a live EC2 server.
+-type AmazonClientSuite struct {
+-	srv AmazonServer
+-	ClientTests
+-}
+-
+-func (s *AmazonClientSuite) SetUpSuite(c *C) {
+-	if !testutil.Amazon {
+-		c.Skip("AmazonClientSuite tests not enabled")
+-	}
+-	s.srv.SetUp(c)
+-	s.ec2 = ec2.NewWithClient(s.srv.auth, aws.USEast, testutil.DefaultClient)
+-}
+-
+-// ClientTests defines integration tests designed to test the client.
+-// It is not used as a test suite in itself, but embedded within
+-// another type.
+-type ClientTests struct {
+-	ec2 *ec2.EC2
+-}
+-
+-var imageId = "ami-ccf405a5" // Ubuntu Maverick, i386, EBS store
+-
+-// Cost: 0.00 USD
+-func (s *ClientTests) TestRunInstancesError(c *C) {
+-	options := ec2.RunInstances{
+-		ImageId:      "ami-a6f504cf", // Ubuntu Maverick, i386, instance store
+-		InstanceType: "t1.micro",     // Doesn't work with micro, results in 400.
+-	}
+-
+-	resp, err := s.ec2.RunInstances(&options)
+-
+-	c.Assert(resp, IsNil)
+-	c.Assert(err, ErrorMatches, "AMI.*root device.*not supported.*")
+-
+-	ec2err, ok := err.(*ec2.Error)
+-	c.Assert(ok, Equals, true)
+-	c.Assert(ec2err.StatusCode, Equals, 400)
+-	c.Assert(ec2err.Code, Equals, "UnsupportedOperation")
+-	c.Assert(ec2err.Message, Matches, "AMI.*root device.*not supported.*")
+-	c.Assert(ec2err.RequestId, Matches, ".+")
+-}
+-
+-// Cost: 0.02 USD
+-func (s *ClientTests) TestRunAndTerminate(c *C) {
+-	options := ec2.RunInstances{
+-		ImageId:      imageId,
+-		InstanceType: "t1.micro",
+-	}
+-	resp1, err := s.ec2.RunInstances(&options)
+-	c.Assert(err, IsNil)
+-	c.Check(resp1.ReservationId, Matches, "r-[0-9a-f]*")
+-	c.Check(resp1.OwnerId, Matches, "[0-9]+")
+-	c.Check(resp1.Instances, HasLen, 1)
+-	c.Check(resp1.Instances[0].InstanceType, Equals, "t1.micro")
+-
+-	instId := resp1.Instances[0].InstanceId
+-
+-	resp2, err := s.ec2.Instances([]string{instId}, nil)
+-	c.Assert(err, IsNil)
+-	if c.Check(resp2.Reservations, HasLen, 1) && c.Check(len(resp2.Reservations[0].Instances), Equals, 1) {
+-		inst := resp2.Reservations[0].Instances[0]
+-		c.Check(inst.InstanceId, Equals, instId)
+-	}
+-
+-	resp3, err := s.ec2.TerminateInstances([]string{instId})
+-	c.Assert(err, IsNil)
+-	c.Check(resp3.StateChanges, HasLen, 1)
+-	c.Check(resp3.StateChanges[0].InstanceId, Equals, instId)
+-	c.Check(resp3.StateChanges[0].CurrentState.Name, Equals, "shutting-down")
+-	c.Check(resp3.StateChanges[0].CurrentState.Code, Equals, 32)
+-}
+-
+-// Cost: 0.00 USD
+-func (s *ClientTests) TestSecurityGroups(c *C) {
+-	name := "goamz-test"
+-	descr := "goamz security group for tests"
+-
+-	// Clean it up if a previous test left it around, and avoid leaving it around ourselves.
+-	s.ec2.DeleteSecurityGroup(ec2.SecurityGroup{Name: name})
+-	defer s.ec2.DeleteSecurityGroup(ec2.SecurityGroup{Name: name})
+-
+-	resp1, err := s.ec2.CreateSecurityGroup(ec2.SecurityGroup{Name: name, Description: descr})
+-	c.Assert(err, IsNil)
+-	c.Assert(resp1.RequestId, Matches, ".+")
+-	c.Assert(resp1.Name, Equals, name)
+-	c.Assert(resp1.Id, Matches, ".+")
+-
+-	resp1, err = s.ec2.CreateSecurityGroup(ec2.SecurityGroup{Name: name, Description: descr})
+-	ec2err, _ := err.(*ec2.Error)
+-	c.Assert(resp1, IsNil)
+-	c.Assert(ec2err, NotNil)
+-	c.Assert(ec2err.Code, Equals, "InvalidGroup.Duplicate")
+-
+-	perms := []ec2.IPPerm{{
+-		Protocol:  "tcp",
+-		FromPort:  0,
+-		ToPort:    1024,
+-		SourceIPs: []string{"127.0.0.1/24"},
+-	}}
+-
+-	resp2, err := s.ec2.AuthorizeSecurityGroup(ec2.SecurityGroup{Name: name}, perms)
+-	c.Assert(err, IsNil)
+-	c.Assert(resp2.RequestId, Matches, ".+")
+-
+-	resp3, err := s.ec2.SecurityGroups(ec2.SecurityGroupNames(name), nil)
+-	c.Assert(err, IsNil)
+-	c.Assert(resp3.RequestId, Matches, ".+")
+-	c.Assert(resp3.Groups, HasLen, 1)
+-
+-	g0 := resp3.Groups[0]
+-	c.Assert(g0.Name, Equals, name)
+-	c.Assert(g0.Description, Equals, descr)
+-	c.Assert(g0.IPPerms, HasLen, 1)
+-	c.Assert(g0.IPPerms[0].Protocol, Equals, "tcp")
+-	c.Assert(g0.IPPerms[0].FromPort, Equals, 0)
+-	c.Assert(g0.IPPerms[0].ToPort, Equals, 1024)
+-	c.Assert(g0.IPPerms[0].SourceIPs, DeepEquals, []string{"127.0.0.1/24"})
+-
+-	resp2, err = s.ec2.DeleteSecurityGroup(ec2.SecurityGroup{Name: name})
+-	c.Assert(err, IsNil)
+-	c.Assert(resp2.RequestId, Matches, ".+")
+-}
+-
+-var sessionId = func() string {
+-	buf := make([]byte, 8)
+-	// if we have no randomness, we'll just make do, so ignore the error.
+-	rand.Read(buf)
+-	return fmt.Sprintf("%x", buf)
+-}()
+-
+-// sessionName returns a name that is probably
+-// unique to this test session.
+-func sessionName(prefix string) string {
+-	return prefix + "-" + sessionId
+-}
+-
+-var allRegions = []aws.Region{
+-	aws.USEast,
+-	aws.USWest,
+-	aws.EUWest,
+-	aws.APSoutheast,
+-	aws.APNortheast,
+-}
+-
+-// Communicate with all EC2 endpoints to see if they are alive.
+-func (s *ClientTests) TestRegions(c *C) {
+-	name := sessionName("goamz-region-test")
+-	perms := []ec2.IPPerm{{
+-		Protocol:  "tcp",
+-		FromPort:  80,
+-		ToPort:    80,
+-		SourceIPs: []string{"127.0.0.1/32"},
+-	}}
+-	errs := make(chan error, len(allRegions))
+-	for _, region := range allRegions {
+-		go func(r aws.Region) {
+-			e := ec2.NewWithClient(s.ec2.Auth, r, testutil.DefaultClient)
+-			_, err := e.AuthorizeSecurityGroup(ec2.SecurityGroup{Name: name}, perms)
+-			errs <- err
+-		}(region)
+-	}
+-	for _ = range allRegions {
+-		err := <-errs
+-		if err != nil {
+-			ec2_err, ok := err.(*ec2.Error)
+-			if ok {
+-				c.Check(ec2_err.Code, Matches, "InvalidGroup.NotFound")
+-			} else {
+-				c.Errorf("Non-EC2 error: %s", err)
+-			}
+-		} else {
+-			c.Errorf("Test should have errored but it seems to have succeeded")
+-		}
+-	}
+-}
+diff --git a/Godeps/_workspace/src/github.com/mitchellh/goamz/ec2/ec2t_test.go b/Godeps/_workspace/src/github.com/mitchellh/goamz/ec2/ec2t_test.go
+deleted file mode 100644
+index fe50356..0000000
+--- a/Godeps/_workspace/src/github.com/mitchellh/goamz/ec2/ec2t_test.go
++++ /dev/null
+@@ -1,580 +0,0 @@
+-package ec2_test
+-
+-import (
+-	"fmt"
+-	"github.com/mitchellh/goamz/aws"
+-	"github.com/mitchellh/goamz/ec2"
+-	"github.com/mitchellh/goamz/ec2/ec2test"
+-	"github.com/mitchellh/goamz/testutil"
+-	. "github.com/motain/gocheck"
+-	"regexp"
+-	"sort"
+-)
+-
+-// LocalServer represents a local ec2test fake server.
+-type LocalServer struct {
+-	auth   aws.Auth
+-	region aws.Region
+-	srv    *ec2test.Server
+-}
+-
+-func (s *LocalServer) SetUp(c *C) {
+-	srv, err := ec2test.NewServer()
+-	c.Assert(err, IsNil)
+-	c.Assert(srv, NotNil)
+-
+-	s.srv = srv
+-	s.region = aws.Region{EC2Endpoint: srv.URL()}
+-}
+-
+-// LocalServerSuite defines tests that will run
+-// against the local ec2test server. It includes
+-// selected tests from ClientTests;
+-// when the ec2test functionality is sufficient, it should
+-// include all of them, and ClientTests can be simply embedded.
+-type LocalServerSuite struct {
+-	srv LocalServer
+-	ServerTests
+-	clientTests ClientTests
+-}
+-
+-var _ = Suite(&LocalServerSuite{})
+-
+-func (s *LocalServerSuite) SetUpSuite(c *C) {
+-	s.srv.SetUp(c)
+-	s.ServerTests.ec2 = ec2.NewWithClient(s.srv.auth, s.srv.region, testutil.DefaultClient)
+-	s.clientTests.ec2 = ec2.NewWithClient(s.srv.auth, s.srv.region, testutil.DefaultClient)
+-}
+-
+-func (s *LocalServerSuite) TestRunAndTerminate(c *C) {
+-	s.clientTests.TestRunAndTerminate(c)
+-}
+-
+-func (s *LocalServerSuite) TestSecurityGroups(c *C) {
+-	s.clientTests.TestSecurityGroups(c)
+-}
+-
+-// TestUserData is not defined on ServerTests because it
+-// requires the ec2test server to function.
+-func (s *LocalServerSuite) TestUserData(c *C) {
+-	data := make([]byte, 256)
+-	for i := range data {
+-		data[i] = byte(i)
+-	}
+-	inst, err := s.ec2.RunInstances(&ec2.RunInstances{
+-		ImageId:      imageId,
+-		InstanceType: "t1.micro",
+-		UserData:     data,
+-	})
+-	c.Assert(err, IsNil)
+-	c.Assert(inst, NotNil)
+-	c.Assert(inst.Instances[0].DNSName, Equals, inst.Instances[0].InstanceId+".example.com")
+-
+-	id := inst.Instances[0].InstanceId
+-
+-	defer s.ec2.TerminateInstances([]string{id})
+-
+-	tinst := s.srv.srv.Instance(id)
+-	c.Assert(tinst, NotNil)
+-	c.Assert(tinst.UserData, DeepEquals, data)
+-}
+-
+-// AmazonServerSuite runs the ec2test server tests against a live EC2 server.
+-// It will only be activated if the -all flag is specified.
+-type AmazonServerSuite struct {
+-	srv AmazonServer
+-	ServerTests
+-}
+-
+-var _ = Suite(&AmazonServerSuite{})
+-
+-func (s *AmazonServerSuite) SetUpSuite(c *C) {
+-	if !testutil.Amazon {
+-		c.Skip("AmazonServerSuite tests not enabled")
+-	}
+-	s.srv.SetUp(c)
+-	s.ServerTests.ec2 = ec2.NewWithClient(s.srv.auth, aws.USEast, testutil.DefaultClient)
+-}
+-
+-// ServerTests defines a set of tests designed to test
+-// the ec2test local fake ec2 server.
+-// It is not used as a test suite in itself, but embedded within
+-// another type.
+-type ServerTests struct {
+-	ec2 *ec2.EC2
+-}
+-
+-func terminateInstances(c *C, e *ec2.EC2, insts []*ec2.Instance) {
+-	var ids []string
+-	for _, inst := range insts {
+-		if inst != nil {
+-			ids = append(ids, inst.InstanceId)
+-		}
+-	}
+-	_, err := e.TerminateInstances(ids)
+-	c.Check(err, IsNil, Commentf("%d INSTANCES LEFT RUNNING!!!", len(ids)))
+-}
+-
+-func (s *ServerTests) makeTestGroup(c *C, name, descr string) ec2.SecurityGroup {
+-	// Clean it up if a previous test left it around.
+-	_, err := s.ec2.DeleteSecurityGroup(ec2.SecurityGroup{Name: name})
+-	if err != nil && err.(*ec2.Error).Code != "InvalidGroup.NotFound" {
+-		c.Fatalf("delete security group: %v", err)
+-	}
+-
+-	resp, err := s.ec2.CreateSecurityGroup(ec2.SecurityGroup{Name: name, Description: descr})
+-	c.Assert(err, IsNil)
+-	c.Assert(resp.Name, Equals, name)
+-	return resp.SecurityGroup
+-}
+-
+-func (s *ServerTests) TestIPPerms(c *C) {
+-	g0 := s.makeTestGroup(c, "goamz-test0", "ec2test group 0")
+-	defer s.ec2.DeleteSecurityGroup(g0)
+-
+-	g1 := s.makeTestGroup(c, "goamz-test1", "ec2test group 1")
+-	defer s.ec2.DeleteSecurityGroup(g1)
+-
+-	resp, err := s.ec2.SecurityGroups([]ec2.SecurityGroup{g0, g1}, nil)
+-	c.Assert(err, IsNil)
+-	c.Assert(resp.Groups, HasLen, 2)
+-	c.Assert(resp.Groups[0].IPPerms, HasLen, 0)
+-	c.Assert(resp.Groups[1].IPPerms, HasLen, 0)
+-
+-	ownerId := resp.Groups[0].OwnerId
+-
+-	// test some invalid parameters
+-	// TODO more
+-	_, err = s.ec2.AuthorizeSecurityGroup(g0, []ec2.IPPerm{{
+-		Protocol:  "tcp",
+-		FromPort:  0,
+-		ToPort:    1024,
+-		SourceIPs: []string{"z127.0.0.1/24"},
+-	}})
+-	c.Assert(err, NotNil)
+-	c.Check(err.(*ec2.Error).Code, Equals, "InvalidPermission.Malformed")
+-
+-	// Check that AuthorizeSecurityGroup adds the correct authorizations.
+-	_, err = s.ec2.AuthorizeSecurityGroup(g0, []ec2.IPPerm{{
+-		Protocol:  "tcp",
+-		FromPort:  2000,
+-		ToPort:    2001,
+-		SourceIPs: []string{"127.0.0.0/24"},
+-		SourceGroups: []ec2.UserSecurityGroup{{
+-			Name: g1.Name,
+-		}, {
+-			Id: g0.Id,
+-		}},
+-	}, {
+-		Protocol:  "tcp",
+-		FromPort:  2000,
+-		ToPort:    2001,
+-		SourceIPs: []string{"200.1.1.34/32"},
+-	}})
+-	c.Assert(err, IsNil)
+-
+-	resp, err = s.ec2.SecurityGroups([]ec2.SecurityGroup{g0}, nil)
+-	c.Assert(err, IsNil)
+-	c.Assert(resp.Groups, HasLen, 1)
+-	c.Assert(resp.Groups[0].IPPerms, HasLen, 1)
+-
+-	perm := resp.Groups[0].IPPerms[0]
+-	srcg := perm.SourceGroups
+-	c.Assert(srcg, HasLen, 2)
+-
+-	// Normalize so we don't care about returned order.
+-	if srcg[0].Name == g1.Name {
+-		srcg[0], srcg[1] = srcg[1], srcg[0]
+-	}
+-	c.Check(srcg[0].Name, Equals, g0.Name)
+-	c.Check(srcg[0].Id, Equals, g0.Id)
+-	c.Check(srcg[0].OwnerId, Equals, ownerId)
+-	c.Check(srcg[1].Name, Equals, g1.Name)
+-	c.Check(srcg[1].Id, Equals, g1.Id)
+-	c.Check(srcg[1].OwnerId, Equals, ownerId)
+-
+-	sort.Strings(perm.SourceIPs)
+-	c.Check(perm.SourceIPs, DeepEquals, []string{"127.0.0.0/24", "200.1.1.34/32"})
+-
+-	// Check that we can't delete g1 (because g0 is using it)
+-	_, err = s.ec2.DeleteSecurityGroup(g1)
+-	c.Assert(err, NotNil)
+-	c.Check(err.(*ec2.Error).Code, Equals, "InvalidGroup.InUse")
+-
+-	_, err = s.ec2.RevokeSecurityGroup(g0, []ec2.IPPerm{{
+-		Protocol:     "tcp",
+-		FromPort:     2000,
+-		ToPort:       2001,
+-		SourceGroups: []ec2.UserSecurityGroup{{Id: g1.Id}},
+-	}, {
+-		Protocol:  "tcp",
+-		FromPort:  2000,
+-		ToPort:    2001,
+-		SourceIPs: []string{"200.1.1.34/32"},
+-	}})
+-	c.Assert(err, IsNil)
+-
+-	resp, err = s.ec2.SecurityGroups([]ec2.SecurityGroup{g0}, nil)
+-	c.Assert(err, IsNil)
+-	c.Assert(resp.Groups, HasLen, 1)
+-	c.Assert(resp.Groups[0].IPPerms, HasLen, 1)
+-
+-	perm = resp.Groups[0].IPPerms[0]
+-	srcg = perm.SourceGroups
+-	c.Assert(srcg, HasLen, 1)
+-	c.Check(srcg[0].Name, Equals, g0.Name)
+-	c.Check(srcg[0].Id, Equals, g0.Id)
+-	c.Check(srcg[0].OwnerId, Equals, ownerId)
+-
+-	c.Check(perm.SourceIPs, DeepEquals, []string{"127.0.0.0/24"})
+-
+-	// We should be able to delete g1 now because we've removed its only use.
+-	_, err = s.ec2.DeleteSecurityGroup(g1)
+-	c.Assert(err, IsNil)
+-
+-	_, err = s.ec2.DeleteSecurityGroup(g0)
+-	c.Assert(err, IsNil)
+-
+-	f := ec2.NewFilter()
+-	f.Add("group-id", g0.Id, g1.Id)
+-	resp, err = s.ec2.SecurityGroups(nil, f)
+-	c.Assert(err, IsNil)
+-	c.Assert(resp.Groups, HasLen, 0)
+-}
+-
+-func (s *ServerTests) TestDuplicateIPPerm(c *C) {
+-	name := "goamz-test"
+-	descr := "goamz security group for tests"
+-
+-	// Clean it up if a previous test left it around, and avoid leaving it around ourselves.
+-	s.ec2.DeleteSecurityGroup(ec2.SecurityGroup{Name: name})
+-	defer s.ec2.DeleteSecurityGroup(ec2.SecurityGroup{Name: name})
+-
+-	resp1, err := s.ec2.CreateSecurityGroup(ec2.SecurityGroup{Name: name, Description: descr})
+-	c.Assert(err, IsNil)
+-	c.Assert(resp1.Name, Equals, name)
+-
+-	perms := []ec2.IPPerm{{
+-		Protocol:  "tcp",
+-		FromPort:  200,
+-		ToPort:    1024,
+-		SourceIPs: []string{"127.0.0.1/24"},
+-	}, {
+-		Protocol:  "tcp",
+-		FromPort:  0,
+-		ToPort:    100,
+-		SourceIPs: []string{"127.0.0.1/24"},
+-	}}
+-
+-	_, err = s.ec2.AuthorizeSecurityGroup(ec2.SecurityGroup{Name: name}, perms[0:1])
+-	c.Assert(err, IsNil)
+-
+-	_, err = s.ec2.AuthorizeSecurityGroup(ec2.SecurityGroup{Name: name}, perms[0:2])
+-	c.Assert(err, ErrorMatches, `.*\(InvalidPermission.Duplicate\)`)
+-}
+-
+-type filterSpec struct {
+-	name   string
+-	values []string
+-}
+-
+-func (s *ServerTests) TestInstanceFiltering(c *C) {
+-	groupResp, err := s.ec2.CreateSecurityGroup(ec2.SecurityGroup{Name: sessionName("testgroup1"), Description: "testgroup one description"})
+-	c.Assert(err, IsNil)
+-	group1 := groupResp.SecurityGroup
+-	defer s.ec2.DeleteSecurityGroup(group1)
+-
+-	groupResp, err = s.ec2.CreateSecurityGroup(ec2.SecurityGroup{Name: sessionName("testgroup2"), Description: "testgroup two description"})
+-	c.Assert(err, IsNil)
+-	group2 := groupResp.SecurityGroup
+-	defer s.ec2.DeleteSecurityGroup(group2)
+-
+-	insts := make([]*ec2.Instance, 3)
+-	inst, err := s.ec2.RunInstances(&ec2.RunInstances{
+-		MinCount:       2,
+-		ImageId:        imageId,
+-		InstanceType:   "t1.micro",
+-		SecurityGroups: []ec2.SecurityGroup{group1},
+-	})
+-	c.Assert(err, IsNil)
+-	insts[0] = &inst.Instances[0]
+-	insts[1] = &inst.Instances[1]
+-	defer terminateInstances(c, s.ec2, insts)
+-
+-	imageId2 := "ami-e358958a" // Natty server, i386, EBS store
+-	inst, err = s.ec2.RunInstances(&ec2.RunInstances{
+-		ImageId:        imageId2,
+-		InstanceType:   "t1.micro",
+-		SecurityGroups: []ec2.SecurityGroup{group2},
+-	})
+-	c.Assert(err, IsNil)
+-	insts[2] = &inst.Instances[0]
+-
+-	ids := func(indices ...int) (instIds []string) {
+-		for _, index := range indices {
+-			instIds = append(instIds, insts[index].InstanceId)
+-		}
+-		return
+-	}
+-
+-	tests := []struct {
+-		about       string
+-		instanceIds []string     // instanceIds argument to Instances method.
+-		filters     []filterSpec // filters argument to Instances method.
+-		resultIds   []string     // set of instance ids of expected results.
+-		allowExtra  bool         // resultIds may be incomplete.
+-		err         string       // expected error.
+-	}{
+-		{
+-			about:      "check that Instances returns all instances",
+-			resultIds:  ids(0, 1, 2),
+-			allowExtra: true,
+-		}, {
+-			about:       "check that specifying two instance ids returns them",
+-			instanceIds: ids(0, 2),
+-			resultIds:   ids(0, 2),
+-		}, {
+-			about:       "check that specifying a non-existent instance id gives an error",
+-			instanceIds: append(ids(0), "i-deadbeef"),
+-			err:         `.*\(InvalidInstanceID\.NotFound\)`,
+-		}, {
+-			about: "check that a filter allowed both instances returns both of them",
+-			filters: []filterSpec{
+-				{"instance-id", ids(0, 2)},
+-			},
+-			resultIds: ids(0, 2),
+-		}, {
+-			about: "check that a filter allowing only one instance returns it",
+-			filters: []filterSpec{
+-				{"instance-id", ids(1)},
+-			},
+-			resultIds: ids(1),
+-		}, {
+-			about: "check that a filter allowing no instances returns none",
+-			filters: []filterSpec{
+-				{"instance-id", []string{"i-deadbeef12345"}},
+-			},
+-		}, {
+-			about: "check that filtering on group id works",
+-			filters: []filterSpec{
+-				{"group-id", []string{group1.Id}},
+-			},
+-			resultIds: ids(0, 1),
+-		}, {
+-			about: "check that filtering on group name works",
+-			filters: []filterSpec{
+-				{"group-name", []string{group1.Name}},
+-			},
+-			resultIds: ids(0, 1),
+-		}, {
+-			about: "check that filtering on image id works",
+-			filters: []filterSpec{
+-				{"image-id", []string{imageId}},
+-			},
+-			resultIds:  ids(0, 1),
+-			allowExtra: true,
+-		}, {
+-			about: "combination filters 1",
+-			filters: []filterSpec{
+-				{"image-id", []string{imageId, imageId2}},
+-				{"group-name", []string{group1.Name}},
+-			},
+-			resultIds: ids(0, 1),
+-		}, {
+-			about: "combination filters 2",
+-			filters: []filterSpec{
+-				{"image-id", []string{imageId2}},
+-				{"group-name", []string{group1.Name}},
+-			},
+-		},
+-	}
+-	for i, t := range tests {
+-		c.Logf("%d. %s", i, t.about)
+-		var f *ec2.Filter
+-		if t.filters != nil {
+-			f = ec2.NewFilter()
+-			for _, spec := range t.filters {
+-				f.Add(spec.name, spec.values...)
+-			}
+-		}
+-		resp, err := s.ec2.Instances(t.instanceIds, f)
+-		if t.err != "" {
+-			c.Check(err, ErrorMatches, t.err)
+-			continue
+-		}
+-		c.Assert(err, IsNil)
+-		insts := make(map[string]*ec2.Instance)
+-		for _, r := range resp.Reservations {
+-			for j := range r.Instances {
+-				inst := &r.Instances[j]
+-				c.Check(insts[inst.InstanceId], IsNil, Commentf("duplicate instance id: %q", inst.InstanceId))
+-				insts[inst.InstanceId] = inst
+-			}
+-		}
+-		if !t.allowExtra {
+-			c.Check(insts, HasLen, len(t.resultIds), Commentf("expected %d instances got %#v", len(t.resultIds), insts))
+-		}
+-		for j, id := range t.resultIds {
+-			c.Check(insts[id], NotNil, Commentf("instance id %d (%q) not found; got %#v", j, id, insts))
+-		}
+-	}
+-}
+-
+-func idsOnly(gs []ec2.SecurityGroup) []ec2.SecurityGroup {
+-	for i := range gs {
+-		gs[i].Name = ""
+-	}
+-	return gs
+-}
+-
+-func namesOnly(gs []ec2.SecurityGroup) []ec2.SecurityGroup {
+-	for i := range gs {
+-		gs[i].Id = ""
+-	}
+-	return gs
+-}
+-
+-func (s *ServerTests) TestGroupFiltering(c *C) {
+-	g := make([]ec2.SecurityGroup, 4)
+-	for i := range g {
+-		resp, err := s.ec2.CreateSecurityGroup(ec2.SecurityGroup{Name: sessionName(fmt.Sprintf("testgroup%d", i)), Description: fmt.Sprintf("testdescription%d", i)})
+-		c.Assert(err, IsNil)
+-		g[i] = resp.SecurityGroup
+-		c.Logf("group %d: %v", i, g[i])
+-		defer s.ec2.DeleteSecurityGroup(g[i])
+-	}
+-
+-	perms := [][]ec2.IPPerm{
+-		{{
+-			Protocol:  "tcp",
+-			FromPort:  100,
+-			ToPort:    200,
+-			SourceIPs: []string{"1.2.3.4/32"},
+-		}},
+-		{{
+-			Protocol:     "tcp",
+-			FromPort:     200,
+-			ToPort:       300,
+-			SourceGroups: []ec2.UserSecurityGroup{{Id: g[1].Id}},
+-		}},
+-		{{
+-			Protocol:     "udp",
+-			FromPort:     200,
+-			ToPort:       400,
+-			SourceGroups: []ec2.UserSecurityGroup{{Id: g[1].Id}},
+-		}},
+-	}
+-	for i, ps := range perms {
+-		_, err := s.ec2.AuthorizeSecurityGroup(g[i], ps)
+-		c.Assert(err, IsNil)
+-	}
+-
+-	groups := func(indices ...int) (gs []ec2.SecurityGroup) {
+-		for _, index := range indices {
+-			gs = append(gs, g[index])
+-		}
+-		return
+-	}
+-
+-	type groupTest struct {
+-		about      string
+-		groups     []ec2.SecurityGroup // groupIds argument to SecurityGroups method.
+-		filters    []filterSpec        // filters argument to SecurityGroups method.
+-		results    []ec2.SecurityGroup // set of expected result groups.
+-		allowExtra bool                // specified results may be incomplete.
+-		err        string              // expected error.
+-	}
+-	filterCheck := func(name, val string, gs []ec2.SecurityGroup) groupTest {
+-		return groupTest{
+-			about:      "filter check " + name,
+-			filters:    []filterSpec{{name, []string{val}}},
+-			results:    gs,
+-			allowExtra: true,
+-		}
+-	}
+-	tests := []groupTest{
+-		{
+-			about:      "check that SecurityGroups returns all groups",
+-			results:    groups(0, 1, 2, 3),
+-			allowExtra: true,
+-		}, {
+-			about:   "check that specifying two group ids returns them",
+-			groups:  idsOnly(groups(0, 2)),
+-			results: groups(0, 2),
+-		}, {
+-			about:   "check that specifying names only works",
+-			groups:  namesOnly(groups(0, 2)),
+-			results: groups(0, 2),
+-		}, {
+-			about:  "check that specifying a non-existent group id gives an error",
+-			groups: append(groups(0), ec2.SecurityGroup{Id: "sg-eeeeeeeee"}),
+-			err:    `.*\(InvalidGroup\.NotFound\)`,
+-		}, {
+-			about: "check that a filter allowed two groups returns both of them",
+-			filters: []filterSpec{
+-				{"group-id", []string{g[0].Id, g[2].Id}},
+-			},
+-			results: groups(0, 2),
+-		},
+-		{
+-			about:  "check that the previous filter works when specifying a list of ids",
+-			groups: groups(1, 2),
+-			filters: []filterSpec{
+-				{"group-id", []string{g[0].Id, g[2].Id}},
+-			},
+-			results: groups(2),
+-		}, {
+-			about: "check that a filter allowing no groups returns none",
+-			filters: []filterSpec{
+-				{"group-id", []string{"sg-eeeeeeeee"}},
+-			},
+-		},
+-		filterCheck("description", "testdescription1", groups(1)),
+-		filterCheck("group-name", g[2].Name, groups(2)),
+-		filterCheck("ip-permission.cidr", "1.2.3.4/32", groups(0)),
+-		filterCheck("ip-permission.group-name", g[1].Name, groups(1, 2)),
+-		filterCheck("ip-permission.protocol", "udp", groups(2)),
+-		filterCheck("ip-permission.from-port", "200", groups(1, 2)),
+-		filterCheck("ip-permission.to-port", "200", groups(0)),
+-		// TODO owner-id
+-	}
+-	for i, t := range tests {
+-		c.Logf("%d. %s", i, t.about)
+-		var f *ec2.Filter
+-		if t.filters != nil {
+-			f = ec2.NewFilter()
+-			for _, spec := range t.filters {
+-				f.Add(spec.name, spec.values...)
+-			}
+-		}
+-		resp, err := s.ec2.SecurityGroups(t.groups, f)
+-		if t.err != "" {
+-			c.Check(err, ErrorMatches, t.err)
+-			continue
+-		}
+-		c.Assert(err, IsNil)
+-		groups := make(map[string]*ec2.SecurityGroup)
+-		for j := range resp.Groups {
+-			group := &resp.Groups[j].SecurityGroup
+-			c.Check(groups[group.Id], IsNil, Commentf("duplicate group id: %q", group.Id))
+-
+-			groups[group.Id] = group
+-		}
+-		// If extra groups may be returned, eliminate all groups that
+-		// we did not create in this session apart from the default group.
+-		if t.allowExtra {
+-			namePat := regexp.MustCompile(sessionName("testgroup[0-9]"))
+-			for id, g := range groups {
+-				if !namePat.MatchString(g.Name) {
+-					delete(groups, id)
+-				}
+-			}
+-		}
+-		c.Check(groups, HasLen, len(t.results))
+-		for j, g := range t.results {
+-			rg := groups[g.Id]
+-			c.Assert(rg, NotNil, Commentf("group %d (%v) not found; got %#v", j, g, groups))
+-			c.Check(rg.Name, Equals, g.Name, Commentf("group %d (%v)", j, g))
+-		}
+-	}
+-}
+diff --git a/Godeps/_workspace/src/github.com/mitchellh/goamz/ec2/ec2test/filter.go b/Godeps/_workspace/src/github.com/mitchellh/goamz/ec2/ec2test/filter.go
+deleted file mode 100644
+index 1a0c046..0000000
+--- a/Godeps/_workspace/src/github.com/mitchellh/goamz/ec2/ec2test/filter.go
++++ /dev/null
+@@ -1,84 +0,0 @@
+-package ec2test
+-
+-import (
+-	"fmt"
+-	"net/url"
+-	"strings"
+-)
+-
+-// filter holds an ec2 filter.  A filter maps an attribute to a set of
+-// possible values for that attribute. For an item to pass through the
+-// filter, every attribute of the item mentioned in the filter must match
+-// at least one of its given values.
+-type filter map[string][]string
+-
+-// newFilter creates a new filter from the Filter fields in the url form.
+-//
+-// The filtering is specified through a map of name=>values, where the
+-// name is a well-defined key identifying the data to be matched,
+-// and the list of values holds the possible values the filtered
+-// item can take for the key to be included in the
+-// result set. For example:
+-//
+-//   Filter.1.Name=instance-type
+-//   Filter.1.Value.1=m1.small
+-//   Filter.1.Value.2=m1.large
+-//
+-func newFilter(form url.Values) filter {
+-	// TODO return an error if the fields are not well formed?
+-	names := make(map[int]string)
+-	values := make(map[int][]string)
+-	maxId := 0
+-	for name, fvalues := range form {
+-		var rest string
+-		var id int
+-		if x, _ := fmt.Sscanf(name, "Filter.%d.%s", &id, &rest); x != 2 {
+-			continue
+-		}
+-		if id > maxId {
+-			maxId = id
+-		}
+-		if rest == "Name" {
+-			names[id] = fvalues[0]
+-			continue
+-		}
+-		if !strings.HasPrefix(rest, "Value.") {
+-			continue
+-		}
+-		values[id] = append(values[id], fvalues[0])
+-	}
+-
+-	f := make(filter)
+-	for id, name := range names {
+-		f[name] = values[id]
+-	}
+-	return f
+-}
+-
+-func notDigit(r rune) bool {
+-	return r < '0' || r > '9'
+-}
+-
+-// filterable represents an object that can be passed through a filter.
+-type filterable interface {
+-	// matchAttr returns true if given attribute of the
+-	// object matches value. It returns an error if the
+-	// attribute is not recognised or the value is malformed.
+-	matchAttr(attr, value string) (bool, error)
+-}
+-
+-// ok returns true if x passes through the filter.
+-func (f filter) ok(x filterable) (bool, error) {
+-next:
+-	for a, vs := range f {
+-		for _, v := range vs {
+-			if ok, err := x.matchAttr(a, v); ok {
+-				continue next
+-			} else if err != nil {
+-				return false, fmt.Errorf("bad attribute or value %q=%q for type %T: %v", a, v, x, err)
+-			}
+-		}
+-		return false, nil
+-	}
+-	return true, nil
+-}
+diff --git a/Godeps/_workspace/src/github.com/mitchellh/goamz/ec2/ec2test/server.go b/Godeps/_workspace/src/github.com/mitchellh/goamz/ec2/ec2test/server.go
+deleted file mode 100644
+index 2f24cb2..0000000
+--- a/Godeps/_workspace/src/github.com/mitchellh/goamz/ec2/ec2test/server.go
++++ /dev/null
+@@ -1,993 +0,0 @@
+-// The ec2test package implements a fake EC2 provider with
+-// the capability of inducing errors on any given operation,
+-// and retrospectively determining what operations have been
+-// carried out.
+-package ec2test
+-
+-import (
+-	"encoding/base64"
+-	"encoding/xml"
+-	"fmt"
+-	"github.com/mitchellh/goamz/ec2"
+-	"io"
+-	"net"
+-	"net/http"
+-	"net/url"
+-	"regexp"
+-	"strconv"
+-	"strings"
+-	"sync"
+-)
+-
+-var b64 = base64.StdEncoding
+-
+-// Action represents a request that changes the ec2 state.
+-type Action struct {
+-	RequestId string
+-
+-	// Request holds the requested action as a url.Values instance
+-	Request url.Values
+-
+-	// If the action succeeded, Response holds the value that
+-	// was marshalled to build the XML response for the request.
+-	Response interface{}
+-
+-	// If the action failed, Err holds an error giving details of the failure.
+-	Err *ec2.Error
+-}
+-
+-// TODO possible other things:
+-// - some virtual time stamp interface, so a client
+-// can ask for all actions after a certain virtual time.
+-
+-// Server implements an EC2 simulator for use in testing.
+-type Server struct {
+-	url      string
+-	listener net.Listener
+-	mu       sync.Mutex
+-	reqs     []*Action
+-
+-	instances            map[string]*Instance      // id -> instance
+-	reservations         map[string]*reservation   // id -> reservation
+-	groups               map[string]*securityGroup // id -> group
+-	maxId                counter
+-	reqId                counter
+-	reservationId        counter
+-	groupId              counter
+-	initialInstanceState ec2.InstanceState
+-}
+-
+-// reservation holds a simulated ec2 reservation.
+-type reservation struct {
+-	id        string
+-	instances map[string]*Instance
+-	groups    []*securityGroup
+-}
+-
+-// instance holds a simulated ec2 instance
+-type Instance struct {
+-	// UserData holds the data that was passed to the RunInstances request
+-	// when the instance was started.
+-	UserData    []byte
+-	id          string
+-	imageId     string
+-	reservation *reservation
+-	instType    string
+-	state       ec2.InstanceState
+-}
+-
+-// permKey represents permission for a given security
+-// group or IP address (but not both) to access a given range of
+-// ports. Equality of permKeys is used in the implementation of
+-// permission sets, relying on the uniqueness of securityGroup
+-// instances.
+-type permKey struct {
+-	protocol string
+-	fromPort int
+-	toPort   int
+-	group    *securityGroup
+-	ipAddr   string
+-}
+-
+-// securityGroup holds a simulated ec2 security group.
+-// Instances of securityGroup should only be created through
+-// Server.createSecurityGroup to ensure that groups can be
+-// compared by pointer value.
+-type securityGroup struct {
+-	id          string
+-	name        string
+-	description string
+-
+-	perms map[permKey]bool
+-}
+-
+-func (g *securityGroup) ec2SecurityGroup() ec2.SecurityGroup {
+-	return ec2.SecurityGroup{
+-		Name: g.name,
+-		Id:   g.id,
+-	}
+-}
+-
+-func (g *securityGroup) matchAttr(attr, value string) (ok bool, err error) {
+-	switch attr {
+-	case "description":
+-		return g.description == value, nil
+-	case "group-id":
+-		return g.id == value, nil
+-	case "group-name":
+-		return g.name == value, nil
+-	case "ip-permission.cidr":
+-		return g.hasPerm(func(k permKey) bool { return k.ipAddr == value }), nil
+-	case "ip-permission.group-name":
+-		return g.hasPerm(func(k permKey) bool {
+-			return k.group != nil && k.group.name == value
+-		}), nil
+-	case "ip-permission.from-port":
+-		port, err := strconv.Atoi(value)
+-		if err != nil {
+-			return false, err
+-		}
+-		return g.hasPerm(func(k permKey) bool { return k.fromPort == port }), nil
+-	case "ip-permission.to-port":
+-		port, err := strconv.Atoi(value)
+-		if err != nil {
+-			return false, err
+-		}
+-		return g.hasPerm(func(k permKey) bool { return k.toPort == port }), nil
+-	case "ip-permission.protocol":
+-		return g.hasPerm(func(k permKey) bool { return k.protocol == value }), nil
+-	case "owner-id":
+-		return value == ownerId, nil
+-	}
+-	return false, fmt.Errorf("unknown attribute %q", attr)
+-}
+-
+-func (g *securityGroup) hasPerm(test func(k permKey) bool) bool {
+-	for k := range g.perms {
+-		if test(k) {
+-			return true
+-		}
+-	}
+-	return false
+-}
+-
+-// ec2Perms returns the list of EC2 permissions granted
+-// to g. It groups permissions by port range and protocol.
+-func (g *securityGroup) ec2Perms() (perms []ec2.IPPerm) {
+-	// The grouping is held in result. We use permKey for convenience,
+-	// (ensuring that the group and ipAddr of each key is zero). For
+-	// each protocol/port range combination, we build up the permission
+-	// set in the associated value.
+-	result := make(map[permKey]*ec2.IPPerm)
+-	for k := range g.perms {
+-		groupKey := k
+-		groupKey.group = nil
+-		groupKey.ipAddr = ""
+-
+-		ec2p := result[groupKey]
+-		if ec2p == nil {
+-			ec2p = &ec2.IPPerm{
+-				Protocol: k.protocol,
+-				FromPort: k.fromPort,
+-				ToPort:   k.toPort,
+-			}
+-			result[groupKey] = ec2p
+-		}
+-		if k.group != nil {
+-			ec2p.SourceGroups = append(ec2p.SourceGroups,
+-				ec2.UserSecurityGroup{
+-					Id:      k.group.id,
+-					Name:    k.group.name,
+-					OwnerId: ownerId,
+-				})
+-		} else {
+-			ec2p.SourceIPs = append(ec2p.SourceIPs, k.ipAddr)
+-		}
+-	}
+-	for _, ec2p := range result {
+-		perms = append(perms, *ec2p)
+-	}
+-	return
+-}
+-
+-var actions = map[string]func(*Server, http.ResponseWriter, *http.Request, string) interface{}{
+-	"RunInstances":                  (*Server).runInstances,
+-	"TerminateInstances":            (*Server).terminateInstances,
+-	"DescribeInstances":             (*Server).describeInstances,
+-	"CreateSecurityGroup":           (*Server).createSecurityGroup,
+-	"DescribeSecurityGroups":        (*Server).describeSecurityGroups,
+-	"DeleteSecurityGroup":           (*Server).deleteSecurityGroup,
+-	"AuthorizeSecurityGroupIngress": (*Server).authorizeSecurityGroupIngress,
+-	"RevokeSecurityGroupIngress":    (*Server).revokeSecurityGroupIngress,
+-}
+-
+-const ownerId = "9876"
+-
+-// newAction allocates a new action and adds it to the
+-// recorded list of server actions.
+-func (srv *Server) newAction() *Action {
+-	srv.mu.Lock()
+-	defer srv.mu.Unlock()
+-
+-	a := new(Action)
+-	srv.reqs = append(srv.reqs, a)
+-	return a
+-}
+-
+-// NewServer returns a new server.
+-func NewServer() (*Server, error) {
+-	srv := &Server{
+-		instances:            make(map[string]*Instance),
+-		groups:               make(map[string]*securityGroup),
+-		reservations:         make(map[string]*reservation),
+-		initialInstanceState: Pending,
+-	}
+-
+-	// Add default security group.
+-	g := &securityGroup{
+-		name:        "default",
+-		description: "default group",
+-		id:          fmt.Sprintf("sg-%d", srv.groupId.next()),
+-	}
+-	g.perms = map[permKey]bool{
+-		permKey{
+-			protocol: "icmp",
+-			fromPort: -1,
+-			toPort:   -1,
+-			group:    g,
+-		}: true,
+-		permKey{
+-			protocol: "tcp",
+-			fromPort: 0,
+-			toPort:   65535,
+-			group:    g,
+-		}: true,
+-		permKey{
+-			protocol: "udp",
+-			fromPort: 0,
+-			toPort:   65535,
+-			group:    g,
+-		}: true,
+-	}
+-	srv.groups[g.id] = g
+-
+-	l, err := net.Listen("tcp", "localhost:0")
+-	if err != nil {
+-		return nil, fmt.Errorf("cannot listen on localhost: %v", err)
+-	}
+-	srv.listener = l
+-
+-	srv.url = "http://" + l.Addr().String()
+-
+-	// we use HandlerFunc rather than *Server directly so that we
+-	// can avoid exporting HandlerFunc from *Server.
+-	go http.Serve(l, http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) {
+-		srv.serveHTTP(w, req)
+-	}))
+-	return srv, nil
+-}
+-
+-// Quit closes down the server.
+-func (srv *Server) Quit() {
+-	srv.listener.Close()
+-}
+-
+-// SetInitialInstanceState sets the state that any new instances will be started in.
+-func (srv *Server) SetInitialInstanceState(state ec2.InstanceState) {
+-	srv.mu.Lock()
+-	srv.initialInstanceState = state
+-	srv.mu.Unlock()
+-}
+-
+-// URL returns the URL of the server.
+-func (srv *Server) URL() string {
+-	return srv.url
+-}
+-
+-// serveHTTP serves the EC2 protocol.
+-func (srv *Server) serveHTTP(w http.ResponseWriter, req *http.Request) {
+-	req.ParseForm()
+-
+-	a := srv.newAction()
+-	a.RequestId = fmt.Sprintf("req%d", srv.reqId.next())
+-	a.Request = req.Form
+-
+-	// Methods on Server that deal with parsing user data
+-	// may fail. To save on error handling code, we allow these
+-	// methods to call fatalf, which will panic with an *ec2.Error
+-	// which will be caught here and returned
+-	// to the client as a properly formed EC2 error.
+-	defer func() {
+-		switch err := recover().(type) {
+-		case *ec2.Error:
+-			a.Err = err
+-			err.RequestId = a.RequestId
+-			writeError(w, err)
+-		case nil:
+-		default:
+-			panic(err)
+-		}
+-	}()
+-
+-	f := actions[req.Form.Get("Action")]
+-	if f == nil {
+-		fatalf(400, "InvalidParameterValue", "Unrecognized Action")
+-	}
+-
+-	response := f(srv, w, req, a.RequestId)
+-	a.Response = response
+-
+-	w.Header().Set("Content-Type", `xml version="1.0" encoding="UTF-8"`)
+-	xmlMarshal(w, response)
+-}
+-
+-// Instance returns the instance for the given instance id.
+-// It returns nil if there is no such instance.
+-func (srv *Server) Instance(id string) *Instance {
+-	srv.mu.Lock()
+-	defer srv.mu.Unlock()
+-	return srv.instances[id]
+-}
+-
+-// writeError writes an appropriate error response.
+-// TODO how should we deal with errors when the
+-// error itself is potentially generated by backend-agnostic
+-// code?
+-func writeError(w http.ResponseWriter, err *ec2.Error) {
+-	// Error encapsulates an error returned by EC2.
+-	// TODO merge with ec2.Error when xml supports ignoring a field.
+-	type ec2error struct {
+-		Code      string // EC2 error code ("UnsupportedOperation", ...)
+-		Message   string // The human-oriented error message
+-		RequestId string
+-	}
+-
+-	type Response struct {
+-		RequestId string
+-		Errors    []ec2error `xml:"Errors>Error"`
+-	}
+-
+-	w.Header().Set("Content-Type", `xml version="1.0" encoding="UTF-8"`)
+-	w.WriteHeader(err.StatusCode)
+-	xmlMarshal(w, Response{
+-		RequestId: err.RequestId,
+-		Errors: []ec2error{{
+-			Code:    err.Code,
+-			Message: err.Message,
+-		}},
+-	})
+-}
+-
+-// xmlMarshal is the same as xml.Marshal except that
+-// it panics on error. The marshalling should not fail,
+-// but we want to know if it does.
+-func xmlMarshal(w io.Writer, x interface{}) {
+-	if err := xml.NewEncoder(w).Encode(x); err != nil {
+-		panic(fmt.Errorf("error marshalling %#v: %v", x, err))
+-	}
+-}
+-
+-// formToGroups parses a set of SecurityGroup form values
+-// as found in a RunInstances request, and returns the resulting
+-// slice of security groups.
+-// It calls fatalf if a group is not found.
+-func (srv *Server) formToGroups(form url.Values) []*securityGroup {
+-	var groups []*securityGroup
+-	for name, values := range form {
+-		switch {
+-		case strings.HasPrefix(name, "SecurityGroupId."):
+-			if g := srv.groups[values[0]]; g != nil {
+-				groups = append(groups, g)
+-			} else {
+-				fatalf(400, "InvalidGroup.NotFound", "unknown group id %q", values[0])
+-			}
+-		case strings.HasPrefix(name, "SecurityGroup."):
+-			var found *securityGroup
+-			for _, g := range srv.groups {
+-				if g.name == values[0] {
+-					found = g
+-				}
+-			}
+-			if found == nil {
+-				fatalf(400, "InvalidGroup.NotFound", "unknown group name %q", values[0])
+-			}
+-			groups = append(groups, found)
+-		}
+-	}
+-	return groups
+-}
+-
+-// runInstances implements the EC2 RunInstances entry point.
+-func (srv *Server) runInstances(w http.ResponseWriter, req *http.Request, reqId string) interface{} {
+-	min := atoi(req.Form.Get("MinCount"))
+-	max := atoi(req.Form.Get("MaxCount"))
+-	if min < 0 || max < 1 {
+-		fatalf(400, "InvalidParameterValue", "bad values for MinCount or MaxCount")
+-	}
+-	if min > max {
+-		fatalf(400, "InvalidParameterCombination", "MinCount is greater than MaxCount")
+-	}
+-	var userData []byte
+-	if data := req.Form.Get("UserData"); data != "" {
+-		var err error
+-		userData, err = b64.DecodeString(data)
+-		if err != nil {
+-			fatalf(400, "InvalidParameterValue", "bad UserData value: %v", err)
+-		}
+-	}
+-
+-	// TODO attributes still to consider:
+-	//    ImageId:                  accept anything, we can verify later
+-	//    KeyName                   ?
+-	//    InstanceType              ?
+-	//    KernelId                  ?
+-	//    RamdiskId                 ?
+-	//    AvailZone                 ?
+-	//    GroupName                 tag
+-	//    Monitoring                ignore?
+-	//    SubnetId                  ?
+-	//    DisableAPITermination     bool
+-	//    ShutdownBehavior          string
+-	//    PrivateIPAddress          string
+-
+-	srv.mu.Lock()
+-	defer srv.mu.Unlock()
+-
+-	// make sure that form fields are correct before creating the reservation.
+-	instType := req.Form.Get("InstanceType")
+-	imageId := req.Form.Get("ImageId")
+-
+-	r := srv.newReservation(srv.formToGroups(req.Form))
+-
+-	var resp ec2.RunInstancesResp
+-	resp.RequestId = reqId
+-	resp.ReservationId = r.id
+-	resp.OwnerId = ownerId
+-
+-	for i := 0; i < max; i++ {
+-		inst := srv.newInstance(r, instType, imageId, srv.initialInstanceState)
+-		inst.UserData = userData
+-		resp.Instances = append(resp.Instances, inst.ec2instance())
+-	}
+-	return &resp
+-}
+-
+-func (srv *Server) group(group ec2.SecurityGroup) *securityGroup {
+-	if group.Id != "" {
+-		return srv.groups[group.Id]
+-	}
+-	for _, g := range srv.groups {
+-		if g.name == group.Name {
+-			return g
+-		}
+-	}
+-	return nil
+-}
+-
+-// NewInstances creates n new instances in srv with the given instance type,
+-// image ID,  initial state and security groups. If any group does not already
+-// exist, it will be created. NewInstances returns the ids of the new instances.
+-func (srv *Server) NewInstances(n int, instType string, imageId string, state ec2.InstanceState, groups []ec2.SecurityGroup) []string {
+-	srv.mu.Lock()
+-	defer srv.mu.Unlock()
+-
+-	rgroups := make([]*securityGroup, len(groups))
+-	for i, group := range groups {
+-		g := srv.group(group)
+-		if g == nil {
+-			fatalf(400, "InvalidGroup.NotFound", "no such group %v", g)
+-		}
+-		rgroups[i] = g
+-	}
+-	r := srv.newReservation(rgroups)
+-
+-	ids := make([]string, n)
+-	for i := 0; i < n; i++ {
+-		inst := srv.newInstance(r, instType, imageId, state)
+-		ids[i] = inst.id
+-	}
+-	return ids
+-}
+-
+-func (srv *Server) newInstance(r *reservation, instType string, imageId string, state ec2.InstanceState) *Instance {
+-	inst := &Instance{
+-		id:          fmt.Sprintf("i-%d", srv.maxId.next()),
+-		instType:    instType,
+-		imageId:     imageId,
+-		state:       state,
+-		reservation: r,
+-	}
+-	srv.instances[inst.id] = inst
+-	r.instances[inst.id] = inst
+-	return inst
+-}
+-
+-func (srv *Server) newReservation(groups []*securityGroup) *reservation {
+-	r := &reservation{
+-		id:        fmt.Sprintf("r-%d", srv.reservationId.next()),
+-		instances: make(map[string]*Instance),
+-		groups:    groups,
+-	}
+-
+-	srv.reservations[r.id] = r
+-	return r
+-}
+-
+-func (srv *Server) terminateInstances(w http.ResponseWriter, req *http.Request, reqId string) interface{} {
+-	srv.mu.Lock()
+-	defer srv.mu.Unlock()
+-	var resp ec2.TerminateInstancesResp
+-	resp.RequestId = reqId
+-	var insts []*Instance
+-	for attr, vals := range req.Form {
+-		if strings.HasPrefix(attr, "InstanceId.") {
+-			id := vals[0]
+-			inst := srv.instances[id]
+-			if inst == nil {
+-				fatalf(400, "InvalidInstanceID.NotFound", "no such instance id %q", id)
+-			}
+-			insts = append(insts, inst)
+-		}
+-	}
+-	for _, inst := range insts {
+-		resp.StateChanges = append(resp.StateChanges, inst.terminate())
+-	}
+-	return &resp
+-}
+-
+-func (inst *Instance) terminate() (d ec2.InstanceStateChange) {
+-	d.PreviousState = inst.state
+-	inst.state = ShuttingDown
+-	d.CurrentState = inst.state
+-	d.InstanceId = inst.id
+-	return d
+-}
+-
+-func (inst *Instance) ec2instance() ec2.Instance {
+-	return ec2.Instance{
+-		InstanceId:   inst.id,
+-		InstanceType: inst.instType,
+-		ImageId:      inst.imageId,
+-		DNSName:      fmt.Sprintf("%s.example.com", inst.id),
+-		// TODO the rest
+-	}
+-}
+-
+-func (inst *Instance) matchAttr(attr, value string) (ok bool, err error) {
+-	switch attr {
+-	case "architecture":
+-		return value == "i386", nil
+-	case "instance-id":
+-		return inst.id == value, nil
+-	case "group-id":
+-		for _, g := range inst.reservation.groups {
+-			if g.id == value {
+-				return true, nil
+-			}
+-		}
+-		return false, nil
+-	case "group-name":
+-		for _, g := range inst.reservation.groups {
+-			if g.name == value {
+-				return true, nil
+-			}
+-		}
+-		return false, nil
+-	case "image-id":
+-		return value == inst.imageId, nil
+-	case "instance-state-code":
+-		code, err := strconv.Atoi(value)
+-		if err != nil {
+-			return false, err
+-		}
+-		return code&0xff == inst.state.Code, nil
+-	case "instance-state-name":
+-		return value == inst.state.Name, nil
+-	}
+-	return false, fmt.Errorf("unknown attribute %q", attr)
+-}
+-
+-var (
+-	Pending      = ec2.InstanceState{0, "pending"}
+-	Running      = ec2.InstanceState{16, "running"}
+-	ShuttingDown = ec2.InstanceState{32, "shutting-down"}
+-	Terminated   = ec2.InstanceState{16, "terminated"}
+-	Stopped      = ec2.InstanceState{16, "stopped"}
+-)
+-
+-func (srv *Server) createSecurityGroup(w http.ResponseWriter, req *http.Request, reqId string) interface{} {
+-	name := req.Form.Get("GroupName")
+-	if name == "" {
+-		fatalf(400, "InvalidParameterValue", "empty security group name")
+-	}
+-	srv.mu.Lock()
+-	defer srv.mu.Unlock()
+-	if srv.group(ec2.SecurityGroup{Name: name}) != nil {
+-		fatalf(400, "InvalidGroup.Duplicate", "group %q already exists", name)
+-	}
+-	g := &securityGroup{
+-		name:        name,
+-		description: req.Form.Get("GroupDescription"),
+-		id:          fmt.Sprintf("sg-%d", srv.groupId.next()),
+-		perms:       make(map[permKey]bool),
+-	}
+-	srv.groups[g.id] = g
+-	// we define a local type for this because ec2.CreateSecurityGroupResp
+-	// contains SecurityGroup, but the response to this request
+-	// should not contain the security group name.
+-	type CreateSecurityGroupResponse struct {
+-		RequestId string `xml:"requestId"`
+-		Return    bool   `xml:"return"`
+-		GroupId   string `xml:"groupId"`
+-	}
+-	r := &CreateSecurityGroupResponse{
+-		RequestId: reqId,
+-		Return:    true,
+-		GroupId:   g.id,
+-	}
+-	return r
+-}
+-
+-func (srv *Server) notImplemented(w http.ResponseWriter, req *http.Request, reqId string) interface{} {
+-	fatalf(500, "InternalError", "not implemented")
+-	panic("not reached")
+-}
+-
+-func (srv *Server) describeInstances(w http.ResponseWriter, req *http.Request, reqId string) interface{} {
+-	srv.mu.Lock()
+-	defer srv.mu.Unlock()
+-	insts := make(map[*Instance]bool)
+-	for name, vals := range req.Form {
+-		if !strings.HasPrefix(name, "InstanceId.") {
+-			continue
+-		}
+-		inst := srv.instances[vals[0]]
+-		if inst == nil {
+-			fatalf(400, "InvalidInstanceID.NotFound", "instance %q not found", vals[0])
+-		}
+-		insts[inst] = true
+-	}
+-
+-	f := newFilter(req.Form)
+-
+-	var resp ec2.InstancesResp
+-	resp.RequestId = reqId
+-	for _, r := range srv.reservations {
+-		var instances []ec2.Instance
+-		for _, inst := range r.instances {
+-			if len(insts) > 0 && !insts[inst] {
+-				continue
+-			}
+-			ok, err := f.ok(inst)
+-			if ok {
+-				instances = append(instances, inst.ec2instance())
+-			} else if err != nil {
+-				fatalf(400, "InvalidParameterValue", "describe instances: %v", err)
+-			}
+-		}
+-		if len(instances) > 0 {
+-			var groups []ec2.SecurityGroup
+-			for _, g := range r.groups {
+-				groups = append(groups, g.ec2SecurityGroup())
+-			}
+-			resp.Reservations = append(resp.Reservations, ec2.Reservation{
+-				ReservationId:  r.id,
+-				OwnerId:        ownerId,
+-				Instances:      instances,
+-				SecurityGroups: groups,
+-			})
+-		}
+-	}
+-	return &resp
+-}
+-
+-func (srv *Server) describeSecurityGroups(w http.ResponseWriter, req *http.Request, reqId string) interface{} {
+-	// BUG similar bug to describeInstances, but for GroupName and GroupId
+-	srv.mu.Lock()
+-	defer srv.mu.Unlock()
+-
+-	var groups []*securityGroup
+-	for name, vals := range req.Form {
+-		var g ec2.SecurityGroup
+-		switch {
+-		case strings.HasPrefix(name, "GroupName."):
+-			g.Name = vals[0]
+-		case strings.HasPrefix(name, "GroupId."):
+-			g.Id = vals[0]
+-		default:
+-			continue
+-		}
+-		sg := srv.group(g)
+-		if sg == nil {
+-			fatalf(400, "InvalidGroup.NotFound", "no such group %v", g)
+-		}
+-		groups = append(groups, sg)
+-	}
+-	if len(groups) == 0 {
+-		for _, g := range srv.groups {
+-			groups = append(groups, g)
+-		}
+-	}
+-
+-	f := newFilter(req.Form)
+-	var resp ec2.SecurityGroupsResp
+-	resp.RequestId = reqId
+-	for _, group := range groups {
+-		ok, err := f.ok(group)
+-		if ok {
+-			resp.Groups = append(resp.Groups, ec2.SecurityGroupInfo{
+-				OwnerId:       ownerId,
+-				SecurityGroup: group.ec2SecurityGroup(),
+-				Description:   group.description,
+-				IPPerms:       group.ec2Perms(),
+-			})
+-		} else if err != nil {
+-			fatalf(400, "InvalidParameterValue", "describe security groups: %v", err)
+-		}
+-	}
+-	return &resp
+-}
+-
+-func (srv *Server) authorizeSecurityGroupIngress(w http.ResponseWriter, req *http.Request, reqId string) interface{} {
+-	srv.mu.Lock()
+-	defer srv.mu.Unlock()
+-	g := srv.group(ec2.SecurityGroup{
+-		Name: req.Form.Get("GroupName"),
+-		Id:   req.Form.Get("GroupId"),
+-	})
+-	if g == nil {
+-		fatalf(400, "InvalidGroup.NotFound", "group not found")
+-	}
+-	perms := srv.parsePerms(req)
+-
+-	for _, p := range perms {
+-		if g.perms[p] {
+-			fatalf(400, "InvalidPermission.Duplicate", "Permission has already been authorized on the specified group")
+-		}
+-	}
+-	for _, p := range perms {
+-		g.perms[p] = true
+-	}
+-	return &ec2.SimpleResp{
+-		XMLName:   xml.Name{"", "AuthorizeSecurityGroupIngressResponse"},
+-		RequestId: reqId,
+-	}
+-}
+-
+-func (srv *Server) revokeSecurityGroupIngress(w http.ResponseWriter, req *http.Request, reqId string) interface{} {
+-	srv.mu.Lock()
+-	defer srv.mu.Unlock()
+-	g := srv.group(ec2.SecurityGroup{
+-		Name: req.Form.Get("GroupName"),
+-		Id:   req.Form.Get("GroupId"),
+-	})
+-	if g == nil {
+-		fatalf(400, "InvalidGroup.NotFound", "group not found")
+-	}
+-	perms := srv.parsePerms(req)
+-
+-	// Note EC2 does not give an error if asked to revoke an authorization
+-	// that does not exist.
+-	for _, p := range perms {
+-		delete(g.perms, p)
+-	}
+-	return &ec2.SimpleResp{
+-		XMLName:   xml.Name{"", "RevokeSecurityGroupIngressResponse"},
+-		RequestId: reqId,
+-	}
+-}
+-
+-var secGroupPat = regexp.MustCompile(`^sg-[a-z0-9]+$`)
+-var ipPat = regexp.MustCompile(`^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+/[0-9]+$`)
+-var ownerIdPat = regexp.MustCompile(`^[0-9]+$`)
+-
+-// parsePerms returns a slice of permKey values extracted
+-// from the permission fields in req.
+-func (srv *Server) parsePerms(req *http.Request) []permKey {
+-	// perms maps an index found in the form to its associated
+-	// IPPerm. For instance, the form value with key
+-	// "IpPermissions.3.FromPort" will be stored in perms[3].FromPort
+-	perms := make(map[int]ec2.IPPerm)
+-
+-	type subgroupKey struct {
+-		id1, id2 int
+-	}
+-	// Each IPPerm can have many source security groups.  The form key
+-	// for a source security group contains two indices: the index
+-	// of the IPPerm and the sub-index of the security group. The
+-	// sourceGroups map maps from a subgroupKey containing these
+-	// two indices to the associated security group. For instance,
+-	// the form value with key "IPPermissions.3.Groups.2.GroupName"
+-	// will be stored in sourceGroups[subgroupKey{3, 2}].Name.
+-	sourceGroups := make(map[subgroupKey]ec2.UserSecurityGroup)
+-
+-	// For each value in the form we store its associated information in the
+-	// above maps. The maps are necessary because the form keys may
+-	// arrive in any order, and the indices are not
+-	// necessarily sequential or even small.
+-	for name, vals := range req.Form {
+-		val := vals[0]
+-		var id1 int
+-		var rest string
+-		if x, _ := fmt.Sscanf(name, "IpPermissions.%d.%s", &id1, &rest); x != 2 {
+-			continue
+-		}
+-		ec2p := perms[id1]
+-		switch {
+-		case rest == "FromPort":
+-			ec2p.FromPort = atoi(val)
+-		case rest == "ToPort":
+-			ec2p.ToPort = atoi(val)
+-		case rest == "IpProtocol":
+-			switch val {
+-			case "tcp", "udp", "icmp":
+-				ec2p.Protocol = val
+-			default:
+-				// check it's a well formed number
+-				atoi(val)
+-				ec2p.Protocol = val
+-			}
+-		case strings.HasPrefix(rest, "Groups."):
+-			k := subgroupKey{id1: id1}
+-			if x, _ := fmt.Sscanf(rest[len("Groups."):], "%d.%s", &k.id2, &rest); x != 2 {
+-				continue
+-			}
+-			g := sourceGroups[k]
+-			switch rest {
+-			case "UserId":
+-				// BUG if the user id is blank, this does not conform to the
+-				// way that EC2 handles it - a specified but blank owner id
+-				// can cause RevokeSecurityGroupIngress to fail with
+-				// "group not found" even if the security group id has been
+-				// correctly specified.
+-				// By failing here, we ensure that we fail early in this case.
+-				if !ownerIdPat.MatchString(val) {
+-					fatalf(400, "InvalidUserID.Malformed", "Invalid user ID: %q", val)
+-				}
+-				g.OwnerId = val
+-			case "GroupName":
+-				g.Name = val
+-			case "GroupId":
+-				if !secGroupPat.MatchString(val) {
+-					fatalf(400, "InvalidGroupId.Malformed", "Invalid group ID: %q", val)
+-				}
+-				g.Id = val
+-			default:
+-				fatalf(400, "UnknownParameter", "unknown parameter %q", name)
+-			}
+-			sourceGroups[k] = g
+-		case strings.HasPrefix(rest, "IpRanges."):
+-			var id2 int
+-			if x, _ := fmt.Sscanf(rest[len("IpRanges."):], "%d.%s", &id2, &rest); x != 2 {
+-				continue
+-			}
+-			switch rest {
+-			case "CidrIp":
+-				if !ipPat.MatchString(val) {
+-					fatalf(400, "InvalidPermission.Malformed", "Invalid IP range: %q", val)
+-				}
+-				ec2p.SourceIPs = append(ec2p.SourceIPs, val)
+-			default:
+-				fatalf(400, "UnknownParameter", "unknown parameter %q", name)
+-			}
+-		default:
+-			fatalf(400, "UnknownParameter", "unknown parameter %q", name)
+-		}
+-		perms[id1] = ec2p
+-	}
+-	// Associate each set of source groups with its IPPerm.
+-	for k, g := range sourceGroups {
+-		p := perms[k.id1]
+-		p.SourceGroups = append(p.SourceGroups, g)
+-		perms[k.id1] = p
+-	}
+-
+-	// Now that we have built up the IPPerms we need, we check for
+-	// parameter errors and build up a permKey for each permission,
+-	// looking up security groups from srv as we do so.
+-	var result []permKey
+-	for _, p := range perms {
+-		if p.FromPort > p.ToPort {
+-			fatalf(400, "InvalidParameterValue", "invalid port range")
+-		}
+-		k := permKey{
+-			protocol: p.Protocol,
+-			fromPort: p.FromPort,
+-			toPort:   p.ToPort,
+-		}
+-		for _, g := range p.SourceGroups {
+-			if g.OwnerId != "" && g.OwnerId != ownerId {
+-				fatalf(400, "InvalidGroup.NotFound", "group %q not found", g.Name)
+-			}
+-			var ec2g ec2.SecurityGroup
+-			switch {
+-			case g.Id != "":
+-				ec2g.Id = g.Id
+-			case g.Name != "":
+-				ec2g.Name = g.Name
+-			}
+-			k.group = srv.group(ec2g)
+-			if k.group == nil {
+-				fatalf(400, "InvalidGroup.NotFound", "group %v not found", g)
+-			}
+-			result = append(result, k)
+-		}
+-		k.group = nil
+-		for _, ip := range p.SourceIPs {
+-			k.ipAddr = ip
+-			result = append(result, k)
+-		}
+-	}
+-	return result
+-}
+-
+-func (srv *Server) deleteSecurityGroup(w http.ResponseWriter, req *http.Request, reqId string) interface{} {
+-	srv.mu.Lock()
+-	defer srv.mu.Unlock()
+-	g := srv.group(ec2.SecurityGroup{
+-		Name: req.Form.Get("GroupName"),
+-		Id:   req.Form.Get("GroupId"),
+-	})
+-	if g == nil {
+-		fatalf(400, "InvalidGroup.NotFound", "group not found")
+-	}
+-	for _, r := range srv.reservations {
+-		for _, h := range r.groups {
+-			if h == g && r.hasRunningMachine() {
+-				fatalf(500, "InvalidGroup.InUse", "group is currently in use by a running instance")
+-			}
+-		}
+-	}
+-	for _, sg := range srv.groups {
+-		// If a group refers to itself, it's ok to delete it.
+-		if sg == g {
+-			continue
+-		}
+-		for k := range sg.perms {
+-			if k.group == g {
+-				fatalf(500, "InvalidGroup.InUse", "group is currently in use by group %q", sg.id)
+-			}
+-		}
+-	}
+-
+-	delete(srv.groups, g.id)
+-	return &ec2.SimpleResp{
+-		XMLName:   xml.Name{"", "DeleteSecurityGroupResponse"},
+-		RequestId: reqId,
+-	}
+-}
+-
+-func (r *reservation) hasRunningMachine() bool {
+-	for _, inst := range r.instances {
+-		if inst.state.Code != ShuttingDown.Code && inst.state.Code != Terminated.Code {
+-			return true
+-		}
+-	}
+-	return false
+-}
+-
+-type counter int
+-
+-func (c *counter) next() (i int) {
+-	i = int(*c)
+-	(*c)++
+-	return
+-}
+-
+-// atoi is like strconv.Atoi but is fatal if the
+-// string is not well formed.
+-func atoi(s string) int {
+-	i, err := strconv.Atoi(s)
+-	if err != nil {
+-		fatalf(400, "InvalidParameterValue", "bad number: %v", err)
+-	}
+-	return i
+-}
+-
+-func fatalf(statusCode int, code string, f string, a ...interface{}) {
+-	panic(&ec2.Error{
+-		StatusCode: statusCode,
+-		Code:       code,
+-		Message:    fmt.Sprintf(f, a...),
+-	})
+-}
+diff --git a/Godeps/_workspace/src/github.com/mitchellh/goamz/ec2/export_test.go b/Godeps/_workspace/src/github.com/mitchellh/goamz/ec2/export_test.go
+deleted file mode 100644
+index 1c24422..0000000
+--- a/Godeps/_workspace/src/github.com/mitchellh/goamz/ec2/export_test.go
++++ /dev/null
+@@ -1,22 +0,0 @@
+-package ec2
+-
+-import (
+-	"github.com/mitchellh/goamz/aws"
+-	"time"
+-)
+-
+-func Sign(auth aws.Auth, method, path string, params map[string]string, host string) {
+-	sign(auth, method, path, params, host)
+-}
+-
+-func fixedTime() time.Time {
+-	return time.Date(2012, 1, 1, 0, 0, 0, 0, time.UTC)
+-}
+-
+-func FakeTime(fakeIt bool) {
+-	if fakeIt {
+-		timeNow = fixedTime
+-	} else {
+-		timeNow = time.Now
+-	}
+-}
+diff --git a/Godeps/_workspace/src/github.com/mitchellh/goamz/ec2/responses_test.go b/Godeps/_workspace/src/github.com/mitchellh/goamz/ec2/responses_test.go
+deleted file mode 100644
+index 0a4dbb3..0000000
+--- a/Godeps/_workspace/src/github.com/mitchellh/goamz/ec2/responses_test.go
++++ /dev/null
+@@ -1,854 +0,0 @@
+-package ec2_test
+-
+-var ErrorDump = `
+-<?xml version="1.0" encoding="UTF-8"?>
+-<Response><Errors><Error><Code>UnsupportedOperation</Code>
+-<Message>AMIs with an instance-store root device are not supported for the instance type 't1.micro'.</Message>
+-</Error></Errors><RequestID>0503f4e9-bbd6-483c-b54f-c4ae9f3b30f4</RequestID></Response>
+-`
+-
+-// http://goo.gl/Mcm3b
+-var RunInstancesExample = `
+-<RunInstancesResponse xmlns="http://ec2.amazonaws.com/doc/2011-12-15/">
+-  <requestId>59dbff89-35bd-4eac-99ed-be587EXAMPLE</requestId>
+-  <reservationId>r-47a5402e</reservationId>
+-  <ownerId>999988887777</ownerId>
+-  <groupSet>
+-      <item>
+-          <groupId>sg-67ad940e</groupId>
+-          <groupName>default</groupName>
+-      </item>
+-  </groupSet>
+-  <instancesSet>
+-    <item>
+-      <instanceId>i-2ba64342</instanceId>
+-      <imageId>ami-60a54009</imageId>
+-      <instanceState>
+-        <code>0</code>
+-        <name>pending</name>
+-      </instanceState>
+-      <privateDnsName></privateDnsName>
+-      <dnsName></dnsName>
+-      <keyName>example-key-name</keyName>
+-      <amiLaunchIndex>0</amiLaunchIndex>
+-      <instanceType>m1.small</instanceType>
+-      <launchTime>2007-08-07T11:51:50.000Z</launchTime>
+-      <placement>
+-        <availabilityZone>us-east-1b</availabilityZone>
+-      </placement>
+-      <monitoring>
+-        <state>enabled</state>
+-      </monitoring>
+-      <virtualizationType>paravirtual</virtualizationType>
+-      <clientToken/>
+-      <tagSet/>
+-      <hypervisor>xen</hypervisor>
+-    </item>
+-    <item>
+-      <instanceId>i-2bc64242</instanceId>
+-      <imageId>ami-60a54009</imageId>
+-      <instanceState>
+-        <code>0</code>
+-        <name>pending</name>
+-      </instanceState>
+-      <privateDnsName></privateDnsName>
+-      <dnsName></dnsName>
+-      <keyName>example-key-name</keyName>
+-      <amiLaunchIndex>1</amiLaunchIndex>
+-      <instanceType>m1.small</instanceType>
+-      <launchTime>2007-08-07T11:51:50.000Z</launchTime>
+-      <placement>
+-         <availabilityZone>us-east-1b</availabilityZone>
+-      </placement>
+-      <monitoring>
+-        <state>enabled</state>
+-      </monitoring>
+-      <virtualizationType>paravirtual</virtualizationType>
+-      <clientToken/>
+-      <tagSet/>
+-      <hypervisor>xen</hypervisor>
+-    </item>
+-    <item>
+-      <instanceId>i-2be64332</instanceId>
+-      <imageId>ami-60a54009</imageId>
+-      <instanceState>
+-        <code>0</code>
+-        <name>pending</name>
+-      </instanceState>
+-      <privateDnsName></privateDnsName>
+-      <dnsName></dnsName>
+-      <keyName>example-key-name</keyName>
+-      <amiLaunchIndex>2</amiLaunchIndex>
+-      <instanceType>m1.small</instanceType>
+-      <launchTime>2007-08-07T11:51:50.000Z</launchTime>
+-      <placement>
+-         <availabilityZone>us-east-1b</availabilityZone>
+-      </placement>
+-      <monitoring>
+-        <state>enabled</state>
+-      </monitoring>
+-      <virtualizationType>paravirtual</virtualizationType>
+-      <clientToken/>
+-      <tagSet/>
+-      <hypervisor>xen</hypervisor>
+-    </item>
+-  </instancesSet>
+-</RunInstancesResponse>
+-`
+-
+-// http://goo.gl/GRZgCD
+-var RequestSpotInstancesExample = `
+-<RequestSpotInstancesResponse xmlns="http://ec2.amazonaws.com/doc/2014-02-01/">
+-  <requestId>59dbff89-35bd-4eac-99ed-be587EXAMPLE</requestId>
+-  <spotInstanceRequestSet>
+-    <item>
+-      <spotInstanceRequestId>sir-1a2b3c4d</spotInstanceRequestId>
+-      <spotPrice>0.5</spotPrice>
+-      <type>one-time</type>
+-      <state>open</state>
+-      <status>
+-        <code>pending-evaluation</code>
+-        <updateTime>2008-05-07T12:51:50.000Z</updateTime>
+-        <message>Your Spot request has been submitted for review, and is pending evaluation.</message>
+-      </status>
+-      <availabilityZoneGroup>MyAzGroup</availabilityZoneGroup>
+-      <launchSpecification>
+-        <imageId>ami-1a2b3c4d</imageId>
+-        <keyName>gsg-keypair</keyName>
+-        <groupSet>
+-          <item>
+-            <groupId>sg-1a2b3c4d</groupId>
+-            <groupName>websrv</groupName>
+-          </item>
+-        </groupSet>
+-        <instanceType>m1.small</instanceType>
+-        <blockDeviceMapping/>
+-        <monitoring>
+-          <enabled>false</enabled>
+-        </monitoring>
+-        <ebsOptimized>false</ebsOptimized>
+-      </launchSpecification>
+-      <createTime>YYYY-MM-DDTHH:MM:SS.000Z</createTime>
+-      <productDescription>Linux/UNIX</productDescription>
+-    </item>
+- </spotInstanceRequestSet>
+-</RequestSpotInstancesResponse>
+-`
+-
+-// http://goo.gl/KsKJJk
+-var DescribeSpotRequestsExample = `
+-<DescribeSpotInstanceRequestsResponse xmlns="http://ec2.amazonaws.com/doc/2014-02-01/">
+-  <requestId>b1719f2a-5334-4479-b2f1-26926EXAMPLE</requestId>
+-  <spotInstanceRequestSet>
+-    <item>
+-      <spotInstanceRequestId>sir-1a2b3c4d</spotInstanceRequestId>
+-      <spotPrice>0.5</spotPrice>
+-      <type>one-time</type>
+-      <state>active</state>
+-      <status>
+-        <code>fulfilled</code>
+-        <updateTime>2008-05-07T12:51:50.000Z</updateTime>
+-        <message>Your Spot request is fulfilled.</message>
+-      </status>
+-      <launchSpecification>
+-        <imageId>ami-1a2b3c4d</imageId>
+-        <keyName>gsg-keypair</keyName>
+-        <groupSet>
+-          <item>
+-            <groupId>sg-1a2b3c4d</groupId>
+-            <groupName>websrv</groupName>
+-          </item>
+-        </groupSet>
+-        <instanceType>m1.small</instanceType>
+-        <monitoring>
+-          <enabled>false</enabled>
+-        </monitoring>
+-        <ebsOptimized>false</ebsOptimized>
+-      </launchSpecification>
+-      <instanceId>i-1a2b3c4d</instanceId>
+-      <createTime>YYYY-MM-DDTHH:MM:SS.000Z</createTime>
+-      <productDescription>Linux/UNIX</productDescription>
+-      <launchedAvailabilityZone>us-east-1a</launchedAvailabilityZone>
+-    </item>
+-  </spotInstanceRequestSet>
+-</DescribeSpotInstanceRequestsResponse>
+-`
+-
+-// http://goo.gl/DcfFgJ
+-var CancelSpotRequestsExample = `
+-<CancelSpotInstanceRequestsResponse xmlns="http://ec2.amazonaws.com/doc/2014-02-01/">
+-  <requestId>59dbff89-35bd-4eac-99ed-be587EXAMPLE</requestId>
+-  <spotInstanceRequestSet>
+-    <item>
+-      <spotInstanceRequestId>sir-1a2b3c4d</spotInstanceRequestId>
+-      <state>cancelled</state>
+-    </item>
+-  </spotInstanceRequestSet>
+-</CancelSpotInstanceRequestsResponse>
+-`
+-
+-// http://goo.gl/3BKHj
+-var TerminateInstancesExample = `
+-<TerminateInstancesResponse xmlns="http://ec2.amazonaws.com/doc/2011-12-15/">
+-  <requestId>59dbff89-35bd-4eac-99ed-be587EXAMPLE</requestId>
+-  <instancesSet>
+-    <item>
+-      <instanceId>i-3ea74257</instanceId>
+-      <currentState>
+-        <code>32</code>
+-        <name>shutting-down</name>
+-      </currentState>
+-      <previousState>
+-        <code>16</code>
+-        <name>running</name>
+-      </previousState>
+-    </item>
+-  </instancesSet>
+-</TerminateInstancesResponse>
+-`
+-
+-// http://goo.gl/mLbmw
+-var DescribeInstancesExample1 = `
+-<DescribeInstancesResponse xmlns="http://ec2.amazonaws.com/doc/2011-12-15/">
+-  <requestId>98e3c9a4-848c-4d6d-8e8a-b1bdEXAMPLE</requestId>
+-  <reservationSet>
+-    <item>
+-      <reservationId>r-b27e30d9</reservationId>
+-      <ownerId>999988887777</ownerId>
+-      <groupSet>
+-        <item>
+-          <groupId>sg-67ad940e</groupId>
+-          <groupName>default</groupName>
+-        </item>
+-      </groupSet>
+-      <instancesSet>
+-        <item>
+-          <instanceId>i-c5cd56af</instanceId>
+-          <imageId>ami-1a2b3c4d</imageId>
+-          <instanceState>
+-            <code>16</code>
+-            <name>running</name>
+-          </instanceState>
+-          <privateDnsName>domU-12-31-39-10-56-34.compute-1.internal</privateDnsName>
+-          <dnsName>ec2-174-129-165-232.compute-1.amazonaws.com</dnsName>
+-          <reason/>
+-          <keyName>GSG_Keypair</keyName>
+-          <amiLaunchIndex>0</amiLaunchIndex>
+-          <productCodes/>
+-          <instanceType>m1.small</instanceType>
+-          <launchTime>2010-08-17T01:15:18.000Z</launchTime>
+-          <placement>
+-            <availabilityZone>us-east-1b</availabilityZone>
+-            <groupName/>
+-          </placement>
+-          <kernelId>aki-94c527fd</kernelId>
+-          <ramdiskId>ari-96c527ff</ramdiskId>
+-          <monitoring>
+-            <state>disabled</state>
+-          </monitoring>
+-          <privateIpAddress>10.198.85.190</privateIpAddress>
+-          <ipAddress>174.129.165.232</ipAddress>
+-          <architecture>i386</architecture>
+-          <rootDeviceType>ebs</rootDeviceType>
+-          <rootDeviceName>/dev/sda1</rootDeviceName>
+-          <blockDeviceMapping>
+-            <item>
+-              <deviceName>/dev/sda1</deviceName>
+-              <ebs>
+-                <volumeId>vol-a082c1c9</volumeId>
+-                <status>attached</status>
+-                <attachTime>2010-08-17T01:15:21.000Z</attachTime>
+-                <deleteOnTermination>false</deleteOnTermination>
+-              </ebs>
+-            </item>
+-          </blockDeviceMapping>
+-          <instanceLifecycle>spot</instanceLifecycle>
+-          <spotInstanceRequestId>sir-7a688402</spotInstanceRequestId>
+-          <virtualizationType>paravirtual</virtualizationType>
+-          <clientToken/>
+-          <tagSet/>
+-          <hypervisor>xen</hypervisor>
+-       </item>
+-      </instancesSet>
+-      <requesterId>854251627541</requesterId>
+-    </item>
+-    <item>
+-      <reservationId>r-b67e30dd</reservationId>
+-      <ownerId>999988887777</ownerId>
+-      <groupSet>
+-        <item>
+-          <groupId>sg-67ad940e</groupId>
+-          <groupName>default</groupName>
+-        </item>
+-      </groupSet>
+-      <instancesSet>
+-        <item>
+-          <instanceId>i-d9cd56b3</instanceId>
+-          <imageId>ami-1a2b3c4d</imageId>
+-          <instanceState>
+-            <code>16</code>
+-            <name>running</name>
+-          </instanceState>
+-          <privateDnsName>domU-12-31-39-10-54-E5.compute-1.internal</privateDnsName>
+-          <dnsName>ec2-184-73-58-78.compute-1.amazonaws.com</dnsName>
+-          <reason/>
+-          <keyName>GSG_Keypair</keyName>
+-          <amiLaunchIndex>0</amiLaunchIndex>
+-          <productCodes/>
+-          <instanceType>m1.large</instanceType>
+-          <launchTime>2010-08-17T01:15:19.000Z</launchTime>
+-          <placement>
+-            <availabilityZone>us-east-1b</availabilityZone>
+-            <groupName/>
+-          </placement>
+-          <kernelId>aki-94c527fd</kernelId>
+-          <ramdiskId>ari-96c527ff</ramdiskId>
+-          <monitoring>
+-            <state>disabled</state>
+-          </monitoring>
+-          <privateIpAddress>10.198.87.19</privateIpAddress>
+-          <ipAddress>184.73.58.78</ipAddress>
+-          <architecture>i386</architecture>
+-          <rootDeviceType>ebs</rootDeviceType>
+-          <rootDeviceName>/dev/sda1</rootDeviceName>
+-          <blockDeviceMapping>
+-            <item>
+-              <deviceName>/dev/sda1</deviceName>
+-              <ebs>
+-                <volumeId>vol-a282c1cb</volumeId>
+-                <status>attached</status>
+-                <attachTime>2010-08-17T01:15:23.000Z</attachTime>
+-                <deleteOnTermination>false</deleteOnTermination>
+-              </ebs>
+-            </item>
+-          </blockDeviceMapping>
+-          <instanceLifecycle>spot</instanceLifecycle>
+-          <spotInstanceRequestId>sir-55a3aa02</spotInstanceRequestId>
+-          <virtualizationType>paravirtual</virtualizationType>
+-          <clientToken/>
+-          <tagSet/>
+-          <hypervisor>xen</hypervisor>
+-       </item>
+-      </instancesSet>
+-      <requesterId>854251627541</requesterId>
+-    </item>
+-  </reservationSet>
+-</DescribeInstancesResponse>
+-`
+-
+-// http://goo.gl/mLbmw
+-var DescribeInstancesExample2 = `
+-<DescribeInstancesResponse xmlns="http://ec2.amazonaws.com/doc/2011-12-15/">
+-  <requestId>59dbff89-35bd-4eac-99ed-be587EXAMPLE</requestId>
+-  <reservationSet>
+-    <item>
+-      <reservationId>r-bc7e30d7</reservationId>
+-      <ownerId>999988887777</ownerId>
+-      <groupSet>
+-        <item>
+-          <groupId>sg-67ad940e</groupId>
+-          <groupName>default</groupName>
+-        </item>
+-      </groupSet>
+-      <instancesSet>
+-        <item>
+-          <instanceId>i-c7cd56ad</instanceId>
+-          <imageId>ami-b232d0db</imageId>
+-          <instanceState>
+-            <code>16</code>
+-            <name>running</name>
+-          </instanceState>
+-          <privateDnsName>domU-12-31-39-01-76-06.compute-1.internal</privateDnsName>
+-          <dnsName>ec2-72-44-52-124.compute-1.amazonaws.com</dnsName>
+-          <keyName>GSG_Keypair</keyName>
+-          <amiLaunchIndex>0</amiLaunchIndex>
+-          <productCodes/>
+-          <instanceType>m1.small</instanceType>
+-          <launchTime>2010-08-17T01:15:16.000Z</launchTime>
+-          <placement>
+-              <availabilityZone>us-east-1b</availabilityZone>
+-          </placement>
+-          <kernelId>aki-94c527fd</kernelId>
+-          <ramdiskId>ari-96c527ff</ramdiskId>
+-          <monitoring>
+-              <state>disabled</state>
+-          </monitoring>
+-          <privateIpAddress>10.255.121.240</privateIpAddress>
+-          <ipAddress>72.44.52.124</ipAddress>
+-          <architecture>i386</architecture>
+-          <rootDeviceType>ebs</rootDeviceType>
+-          <rootDeviceName>/dev/sda1</rootDeviceName>
+-          <blockDeviceMapping>
+-              <item>
+-                 <deviceName>/dev/sda1</deviceName>
+-                 <ebs>
+-                    <volumeId>vol-a482c1cd</volumeId>
+-                    <status>attached</status>
+-                    <attachTime>2010-08-17T01:15:26.000Z</attachTime>
+-                    <deleteOnTermination>true</deleteOnTermination>
+-                </ebs>
+-             </item>
+-          </blockDeviceMapping>
+-          <virtualizationType>paravirtual</virtualizationType>
+-          <clientToken/>
+-          <tagSet>
+-              <item>
+-                    <key>webserver</key>
+-                    <value></value>
+-             </item>
+-              <item>
+-                    <key>stack</key>
+-                    <value>Production</value>
+-             </item>
+-          </tagSet>
+-          <hypervisor>xen</hypervisor>
+-        </item>
+-      </instancesSet>
+-    </item>
+-  </reservationSet>
+-</DescribeInstancesResponse>
+-`
+-
+-// http://goo.gl/cxU41
+-var CreateImageExample = `
+-<CreateImageResponse xmlns="http://ec2.amazonaws.com/doc/2013-02-01/">
+-   <requestId>59dbff89-35bd-4eac-99ed-be587EXAMPLE</requestId>
+-   <imageId>ami-4fa54026</imageId>
+-</CreateImageResponse>
+-`
+-
+-// http://goo.gl/V0U25
+-var DescribeImagesExample = `
+-<DescribeImagesResponse xmlns="http://ec2.amazonaws.com/doc/2012-08-15/">
+-         <requestId>4a4a27a2-2e7c-475d-b35b-ca822EXAMPLE</requestId>
+-    <imagesSet>
+-        <item>
+-            <imageId>ami-a2469acf</imageId>
+-            <imageLocation>aws-marketplace/example-marketplace-amzn-ami.1</imageLocation>
+-            <imageState>available</imageState>
+-            <imageOwnerId>123456789999</imageOwnerId>
+-            <isPublic>true</isPublic>
+-            <productCodes>
+-                <item>
+-                    <productCode>a1b2c3d4e5f6g7h8i9j10k11</productCode>
+-                    <type>marketplace</type>
+-                </item>
+-            </productCodes>
+-            <architecture>i386</architecture>
+-            <imageType>machine</imageType>
+-            <kernelId>aki-805ea7e9</kernelId>
+-            <imageOwnerAlias>aws-marketplace</imageOwnerAlias>
+-            <name>example-marketplace-amzn-ami.1</name>
+-            <description>Amazon Linux AMI i386 EBS</description>
+-            <rootDeviceType>ebs</rootDeviceType>
+-            <rootDeviceName>/dev/sda1</rootDeviceName>
+-            <blockDeviceMapping>
+-                <item>
+-                    <deviceName>/dev/sda1</deviceName>
+-                    <ebs>
+-                        <snapshotId>snap-787e9403</snapshotId>
+-                        <volumeSize>8</volumeSize>
+-                        <deleteOnTermination>true</deleteOnTermination>
+-                    </ebs>
+-                </item>
+-            </blockDeviceMapping>
+-            <virtualizationType>paravirtual</virtualizationType>
+-            <hypervisor>xen</hypervisor>
+-        </item>
+-    </imagesSet>
+-</DescribeImagesResponse>
+-`
+-
+-// http://goo.gl/bHO3z
+-var ImageAttributeExample = `
+-<DescribeImageAttributeResponse xmlns="http://ec2.amazonaws.com/doc/2013-07-15/">
+-   <requestId>59dbff89-35bd-4eac-99ed-be587EXAMPLE</requestId>
+-   <imageId>ami-61a54008</imageId>
+-   <launchPermission>
+-      <item>
+-         <group>all</group>
+-      </item>
+-      <item>
+-         <userId>495219933132</userId>
+-      </item>
+-   </launchPermission>
+-</DescribeImageAttributeResponse>
+-`
+-
+-// http://goo.gl/ttcda
+-var CreateSnapshotExample = `
+-<CreateSnapshotResponse xmlns="http://ec2.amazonaws.com/doc/2012-10-01/">
+-  <requestId>59dbff89-35bd-4eac-99ed-be587EXAMPLE</requestId>
+-  <snapshotId>snap-78a54011</snapshotId>
+-  <volumeId>vol-4d826724</volumeId>
+-  <status>pending</status>
+-  <startTime>2008-05-07T12:51:50.000Z</startTime>
+-  <progress>60%</progress>
+-  <ownerId>111122223333</ownerId>
+-  <volumeSize>10</volumeSize>
+-  <description>Daily Backup</description>
+-</CreateSnapshotResponse>
+-`
+-
+-// http://goo.gl/vwU1y
+-var DeleteSnapshotExample = `
+-<DeleteSnapshotResponse xmlns="http://ec2.amazonaws.com/doc/2012-10-01/">
+-  <requestId>59dbff89-35bd-4eac-99ed-be587EXAMPLE</requestId>
+-  <return>true</return>
+-</DeleteSnapshotResponse>
+-`
+-
+-// http://goo.gl/nkovs
+-var DescribeSnapshotsExample = `
+-<DescribeSnapshotsResponse xmlns="http://ec2.amazonaws.com/doc/2012-10-01/">
+-   <requestId>59dbff89-35bd-4eac-99ed-be587EXAMPLE</requestId>
+-   <snapshotSet>
+-      <item>
+-         <snapshotId>snap-1a2b3c4d</snapshotId>
+-         <volumeId>vol-8875daef</volumeId>
+-         <status>pending</status>
+-         <startTime>2010-07-29T04:12:01.000Z</startTime>
+-         <progress>30%</progress>
+-         <ownerId>111122223333</ownerId>
+-         <volumeSize>15</volumeSize>
+-         <description>Daily Backup</description>
+-         <tagSet>
+-            <item>
+-               <key>Purpose</key>
+-               <value>demo_db_14_backup</value>
+-            </item>
+-         </tagSet>
+-      </item>
+-   </snapshotSet>
+-</DescribeSnapshotsResponse>
+-`
+-
+-// http://goo.gl/YUjO4G
+-var ModifyImageAttributeExample = `
+-<ModifyImageAttributeResponse xmlns="http://ec2.amazonaws.com/doc/2013-06-15/">
+-  <requestId>59dbff89-35bd-4eac-99ed-be587EXAMPLE</requestId>
+-  <return>true</return>
+-</ModifyImageAttributeResponse>
+-`
+-
+-// http://goo.gl/hQwPCK
+-var CopyImageExample = `
+-<CopyImageResponse xmlns="http://ec2.amazonaws.com/doc/2013-06-15/">
+-   <requestId>60bc441d-fa2c-494d-b155-5d6a3EXAMPLE</requestId>
+-   <imageId>ami-4d3c2b1a</imageId>
+-</CopyImageResponse>
+-`
+-
+-var CreateKeyPairExample = `
+-<CreateKeyPairResponse xmlns="http://ec2.amazonaws.com/doc/2013-02-01/">
+-  <requestId>59dbff89-35bd-4eac-99ed-be587EXAMPLE</requestId>
+-  <keyName>foo</keyName>
+-  <keyFingerprint>
+-     00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00
+-  </keyFingerprint>
+-  <keyMaterial>---- BEGIN RSA PRIVATE KEY ----
+-MIICiTCCAfICCQD6m7oRw0uXOjANBgkqhkiG9w0BAQUFADCBiDELMAkGA1UEBhMC
+-VVMxCzAJBgNVBAgTAldBMRAwDgYDVQQHEwdTZWF0dGxlMQ8wDQYDVQQKEwZBbWF6
+-b24xFDASBgNVBAsTC0lBTSBDb25zb2xlMRIwEAYDVQQDEwlUZXN0Q2lsYWMxHzAd
+-BgkqhkiG9w0BCQEWEG5vb25lQGFtYXpvbi5jb20wHhcNMTEwNDI1MjA0NTIxWhcN
+-MTIwNDI0MjA0NTIxWjCBiDELMAkGA1UEBhMCVVMxCzAJBgNVBAgTAldBMRAwDgYD
+-VQQHEwdTZWF0dGxlMQ8wDQYDVQQKEwZBbWF6b24xFDASBgNVBAsTC0lBTSBDb25z
+-b2xlMRIwEAYDVQQDEwlUZXN0Q2lsYWMxHzAdBgkqhkiG9w0BCQEWEG5vb25lQGFt
+-YXpvbi5jb20wgZ8wDQYJKoZIhvcNAQEBBQADgY0AMIGJAoGBAMaK0dn+a4GmWIWJ
+-21uUSfwfEvySWtC2XADZ4nB+BLYgVIk60CpiwsZ3G93vUEIO3IyNoH/f0wYK8m9T
+-rDHudUZg3qX4waLG5M43q7Wgc/MbQITxOUSQv7c7ugFFDzQGBzZswY6786m86gpE
+-Ibb3OhjZnzcvQAaRHhdlQWIMm2nrAgMBAAEwDQYJKoZIhvcNAQEFBQADgYEAtCu4
+-nUhVVxYUntneD9+h8Mg9q6q+auNKyExzyLwaxlAoo7TJHidbtS4J5iNmZgXL0Fkb
+-FFBjvSfpJIlJ00zbhNYS5f6GuoEDmFJl0ZxBHjJnyp378OD8uTs7fLvjx79LjSTb
+-NYiytVbZPQUQ5Yaxu2jXnimvw3rrszlaEXAMPLE=
+------END RSA PRIVATE KEY-----
+-</keyMaterial>
+-</CreateKeyPairResponse>
+-`
+-
+-var DeleteKeyPairExample = `
+-<DeleteKeyPairResponse xmlns="http://ec2.amazonaws.com/doc/2013-02-01/">
+-  <requestId>59dbff89-35bd-4eac-99ed-be587EXAMPLE</requestId>
+-  <return>true</return>
+-</DeleteKeyPairResponse>
+-`
+-
+-// http://goo.gl/Eo7Yl
+-var CreateSecurityGroupExample = `
+-<CreateSecurityGroupResponse xmlns="http://ec2.amazonaws.com/doc/2011-12-15/">
+-   <requestId>59dbff89-35bd-4eac-99ed-be587EXAMPLE</requestId>
+-   <return>true</return>
+-   <groupId>sg-67ad940e</groupId>
+-</CreateSecurityGroupResponse>
+-`
+-
+-// http://goo.gl/k12Uy
+-var DescribeSecurityGroupsExample = `
+-<DescribeSecurityGroupsResponse xmlns="http://ec2.amazonaws.com/doc/2011-12-15/">
+-  <requestId>59dbff89-35bd-4eac-99ed-be587EXAMPLE</requestId>
+-  <securityGroupInfo>
+-    <item>
+-      <ownerId>999988887777</ownerId>
+-      <groupName>WebServers</groupName>
+-      <groupId>sg-67ad940e</groupId>
+-      <groupDescription>Web Servers</groupDescription>
+-      <ipPermissions>
+-        <item>
+-           <ipProtocol>tcp</ipProtocol>
+-           <fromPort>80</fromPort>
+-           <toPort>80</toPort>
+-           <groups/>
+-           <ipRanges>
+-             <item>
+-               <cidrIp>0.0.0.0/0</cidrIp>
+-             </item>
+-           </ipRanges>
+-        </item>
+-      </ipPermissions>
+-    </item>
+-    <item>
+-      <ownerId>999988887777</ownerId>
+-      <groupName>RangedPortsBySource</groupName>
+-      <groupId>sg-76abc467</groupId>
+-      <groupDescription>Group A</groupDescription>
+-      <ipPermissions>
+-        <item>
+-           <ipProtocol>tcp</ipProtocol>
+-           <fromPort>6000</fromPort>
+-           <toPort>7000</toPort>
+-           <groups/>
+-           <ipRanges/>
+-        </item>
+-      </ipPermissions>
+-    </item>
+-  </securityGroupInfo>
+-</DescribeSecurityGroupsResponse>
+-`
+-
+-// A dump which includes groups within ip permissions.
+-var DescribeSecurityGroupsDump = `
+-<?xml version="1.0" encoding="UTF-8"?>
+-<DescribeSecurityGroupsResponse xmlns="http://ec2.amazonaws.com/doc/2011-12-15/">
+-    <requestId>87b92b57-cc6e-48b2-943f-f6f0e5c9f46c</requestId>
+-    <securityGroupInfo>
+-        <item>
+-            <ownerId>12345</ownerId>
+-            <groupName>default</groupName>
+-            <groupDescription>default group</groupDescription>
+-            <ipPermissions>
+-                <item>
+-                    <ipProtocol>icmp</ipProtocol>
+-                    <fromPort>-1</fromPort>
+-                    <toPort>-1</toPort>
+-                    <groups>
+-                        <item>
+-                            <userId>12345</userId>
+-                            <groupName>default</groupName>
+-                            <groupId>sg-67ad940e</groupId>
+-                        </item>
+-                    </groups>
+-                    <ipRanges/>
+-                </item>
+-                <item>
+-                    <ipProtocol>tcp</ipProtocol>
+-                    <fromPort>0</fromPort>
+-                    <toPort>65535</toPort>
+-                    <groups>
+-                        <item>
+-                            <userId>12345</userId>
+-                            <groupName>other</groupName>
+-                            <groupId>sg-76abc467</groupId>
+-                        </item>
+-                    </groups>
+-                    <ipRanges/>
+-                </item>
+-            </ipPermissions>
+-        </item>
+-    </securityGroupInfo>
+-</DescribeSecurityGroupsResponse>
+-`
+-
+-// http://goo.gl/QJJDO
+-var DeleteSecurityGroupExample = `
+-<DeleteSecurityGroupResponse xmlns="http://ec2.amazonaws.com/doc/2011-12-15/">
+-   <requestId>59dbff89-35bd-4eac-99ed-be587EXAMPLE</requestId>
+-   <return>true</return>
+-</DeleteSecurityGroupResponse>
+-`
+-
+-// http://goo.gl/u2sDJ
+-var AuthorizeSecurityGroupIngressExample = `
+-<AuthorizeSecurityGroupIngressResponse xmlns="http://ec2.amazonaws.com/doc/2011-12-15/">
+-  <requestId>59dbff89-35bd-4eac-99ed-be587EXAMPLE</requestId>
+-  <return>true</return>
+-</AuthorizeSecurityGroupIngressResponse>
+-`
+-
+-// http://goo.gl/u2sDJ
+-var AuthorizeSecurityGroupEgressExample = `
+-<AuthorizeSecurityGroupEgressResponse xmlns="http://ec2.amazonaws.com/doc/2014-06-15/">
+-   <requestId>59dbff89-35bd-4eac-99ed-be587EXAMPLE</requestId>
+-   <return>true</return>
+-</AuthorizeSecurityGroupEgressResponse>
+-`
+-
+-// http://goo.gl/Mz7xr
+-var RevokeSecurityGroupIngressExample = `
+-<RevokeSecurityGroupIngressResponse xmlns="http://ec2.amazonaws.com/doc/2011-12-15/">
+-  <requestId>59dbff89-35bd-4eac-99ed-be587EXAMPLE</requestId>
+-  <return>true</return>
+-</RevokeSecurityGroupIngressResponse>
+-`
+-
+-// http://goo.gl/Vmkqc
+-var CreateTagsExample = `
+-<CreateTagsResponse xmlns="http://ec2.amazonaws.com/doc/2011-12-15/">
+-   <requestId>59dbff89-35bd-4eac-99ed-be587EXAMPLE</requestId>
+-   <return>true</return>
+-</CreateTagsResponse>
+-`
+-
+-// http://goo.gl/awKeF
+-var StartInstancesExample = `
+-<StartInstancesResponse xmlns="http://ec2.amazonaws.com/doc/2011-12-15/">
+-  <requestId>59dbff89-35bd-4eac-99ed-be587EXAMPLE</requestId>
+-  <instancesSet>
+-    <item>
+-      <instanceId>i-10a64379</instanceId>
+-      <currentState>
+-          <code>0</code>
+-          <name>pending</name>
+-      </currentState>
+-      <previousState>
+-          <code>80</code>
+-          <name>stopped</name>
+-      </previousState>
+-    </item>
+-  </instancesSet>
+-</StartInstancesResponse>
+-`
+-
+-// http://goo.gl/436dJ
+-var StopInstancesExample = `
+-<StopInstancesResponse xmlns="http://ec2.amazonaws.com/doc/2011-12-15/">
+-  <requestId>59dbff89-35bd-4eac-99ed-be587EXAMPLE</requestId>
+-  <instancesSet>
+-    <item>
+-      <instanceId>i-10a64379</instanceId>
+-      <currentState>
+-          <code>64</code>
+-          <name>stopping</name>
+-      </currentState>
+-      <previousState>
+-          <code>16</code>
+-          <name>running</name>
+-      </previousState>
+-    </item>
+-  </instancesSet>
+-</StopInstancesResponse>
+-`
+-
+-// http://goo.gl/baoUf
+-var RebootInstancesExample = `
+-<RebootInstancesResponse xmlns="http://ec2.amazonaws.com/doc/2011-12-15/">
+-  <requestId>59dbff89-35bd-4eac-99ed-be587EXAMPLE</requestId>
+-  <return>true</return>
+-</RebootInstancesResponse>
+-`
+-
+-// http://goo.gl/9rprDN
+-var AllocateAddressExample = `
+-<AllocateAddressResponse xmlns="http://ec2.amazonaws.com/doc/2013-10-15/">
+-   <requestId>59dbff89-35bd-4eac-99ed-be587EXAMPLE</requestId>
+-   <publicIp>198.51.100.1</publicIp>
+-   <domain>vpc</domain>
+-   <allocationId>eipalloc-5723d13e</allocationId>
+-</AllocateAddressResponse>
+-`
+-
+-// http://goo.gl/3Q0oCc
+-var ReleaseAddressExample = `
+-<ReleaseAddressResponse xmlns="http://ec2.amazonaws.com/doc/2013-10-15/">
+-   <requestId>59dbff89-35bd-4eac-99ed-be587EXAMPLE</requestId>
+-   <return>true</return>
+-</ReleaseAddressResponse>
+-`
+-
+-// http://goo.gl/uOSQE
+-var AssociateAddressExample = `
+-<AssociateAddressResponse xmlns="http://ec2.amazonaws.com/doc/2013-10-15/">
+-   <requestId>59dbff89-35bd-4eac-99ed-be587EXAMPLE</requestId>
+-   <return>true</return>
+-   <associationId>eipassoc-fc5ca095</associationId>
+-</AssociateAddressResponse>
+-`
+-
+-// http://goo.gl/LrOa0
+-var DisassociateAddressExample = `
+-<DisassociateAddressResponse xmlns="http://ec2.amazonaws.com/doc/2013-10-15/">
+-   <requestId>59dbff89-35bd-4eac-99ed-be587EXAMPLE</requestId>
+-   <return>true</return>
+-</DisassociateAddressResponse>
+-`
+-
+-// http://goo.gl/icuXh5
+-var ModifyInstanceExample = `
+-<ModifyImageAttributeResponse xmlns="http://ec2.amazonaws.com/doc/2013-06-15/">
+-  <requestId>59dbff89-35bd-4eac-99ed-be587EXAMPLE</requestId>
+-  <return>true</return>
+-</ModifyImageAttributeResponse>
+-`
+-
+-var CreateVpcExample = `
+-<CreateVpcResponse xmlns="http://ec2.amazonaws.com/doc/2014-06-15/">
+-   <requestId>7a62c49f-347e-4fc4-9331-6e8eEXAMPLE</requestId>
+-   <vpc>
+-      <vpcId>vpc-1a2b3c4d</vpcId>
+-      <state>pending</state>
+-      <cidrBlock>10.0.0.0/16</cidrBlock>
+-      <dhcpOptionsId>dopt-1a2b3c4d2</dhcpOptionsId>
+-      <instanceTenancy>default</instanceTenancy>
+-      <tagSet/>
+-   </vpc>
+-</CreateVpcResponse>
+-`
+-
+-var DescribeVpcsExample = `
+-<DescribeVpcsResponse xmlns="http://ec2.amazonaws.com/doc/2014-06-15/">
+-  <requestId>7a62c49f-347e-4fc4-9331-6e8eEXAMPLE</requestId>
+-  <vpcSet>
+-    <item>
+-      <vpcId>vpc-1a2b3c4d</vpcId>
+-      <state>available</state>
+-      <cidrBlock>10.0.0.0/23</cidrBlock>
+-      <dhcpOptionsId>dopt-7a8b9c2d</dhcpOptionsId>
+-      <instanceTenancy>default</instanceTenancy>
+-      <isDefault>false</isDefault>
+-      <tagSet/>
+-    </item>
+-  </vpcSet>
+-</DescribeVpcsResponse>
+-`
+-
+-var CreateSubnetExample = `
+-<CreateSubnetResponse xmlns="http://ec2.amazonaws.com/doc/2014-06-15/">
+-  <requestId>7a62c49f-347e-4fc4-9331-6e8eEXAMPLE</requestId>
+-  <subnet>
+-    <subnetId>subnet-9d4a7b6c</subnetId>
+-    <state>pending</state>
+-    <vpcId>vpc-1a2b3c4d</vpcId>
+-    <cidrBlock>10.0.1.0/24</cidrBlock>
+-    <availableIpAddressCount>251</availableIpAddressCount>
+-    <availabilityZone>us-east-1a</availabilityZone>
+-    <tagSet/>
+-  </subnet>
+-</CreateSubnetResponse>
+-`
+-
+-// http://goo.gl/r6ZCPm
+-var ResetImageAttributeExample = `
+-<ResetImageAttributeResponse xmlns="http://ec2.amazonaws.com/doc/2014-06-15/">
+-  <requestId>59dbff89-35bd-4eac-99ed-be587EXAMPLE</requestId>
+-  <return>true</return>
+-</ResetImageAttributeResponse>
+-`
+diff --git a/Godeps/_workspace/src/github.com/mitchellh/goamz/ec2/sign.go b/Godeps/_workspace/src/github.com/mitchellh/goamz/ec2/sign.go
+deleted file mode 100644
+index bffc3c7..0000000
+--- a/Godeps/_workspace/src/github.com/mitchellh/goamz/ec2/sign.go
++++ /dev/null
+@@ -1,45 +0,0 @@
+-package ec2
+-
+-import (
+-	"crypto/hmac"
+-	"crypto/sha256"
+-	"encoding/base64"
+-	"github.com/mitchellh/goamz/aws"
+-	"sort"
+-	"strings"
+-)
+-
+-// ----------------------------------------------------------------------------
+-// EC2 signing (http://goo.gl/fQmAN)
+-
+-var b64 = base64.StdEncoding
+-
+-func sign(auth aws.Auth, method, path string, params map[string]string, host string) {
+-	params["AWSAccessKeyId"] = auth.AccessKey
+-	params["SignatureVersion"] = "2"
+-	params["SignatureMethod"] = "HmacSHA256"
+-	if auth.Token != "" {
+-		params["SecurityToken"] = auth.Token
+-	}
+-
+-	// AWS specifies that the parameters in a signed request must
+-	// be provided in the natural order of the keys. This is distinct
+-	// from the natural order of the encoded value of key=value.
+-	// Percent and equals affect the sorting order.
+-	var keys, sarray []string
+-	for k, _ := range params {
+-		keys = append(keys, k)
+-	}
+-	sort.Strings(keys)
+-	for _, k := range keys {
+-		sarray = append(sarray, aws.Encode(k)+"="+aws.Encode(params[k]))
+-	}
+-	joined := strings.Join(sarray, "&")
+-	payload := method + "\n" + host + "\n" + path + "\n" + joined
+-	hash := hmac.New(sha256.New, []byte(auth.SecretKey))
+-	hash.Write([]byte(payload))
+-	signature := make([]byte, b64.EncodedLen(hash.Size()))
+-	b64.Encode(signature, hash.Sum(nil))
+-
+-	params["Signature"] = string(signature)
+-}
+diff --git a/Godeps/_workspace/src/github.com/mitchellh/goamz/ec2/sign_test.go b/Godeps/_workspace/src/github.com/mitchellh/goamz/ec2/sign_test.go
+deleted file mode 100644
+index 86d203e..0000000
+--- a/Godeps/_workspace/src/github.com/mitchellh/goamz/ec2/sign_test.go
++++ /dev/null
+@@ -1,68 +0,0 @@
+-package ec2_test
+-
+-import (
+-	"github.com/mitchellh/goamz/aws"
+-	"github.com/mitchellh/goamz/ec2"
+-	. "github.com/motain/gocheck"
+-)
+-
+-// EC2 ReST authentication docs: http://goo.gl/fQmAN
+-
+-var testAuth = aws.Auth{"user", "secret", ""}
+-
+-func (s *S) TestBasicSignature(c *C) {
+-	params := map[string]string{}
+-	ec2.Sign(testAuth, "GET", "/path", params, "localhost")
+-	c.Assert(params["SignatureVersion"], Equals, "2")
+-	c.Assert(params["SignatureMethod"], Equals, "HmacSHA256")
+-	expected := "6lSe5QyXum0jMVc7cOUz32/52ZnL7N5RyKRk/09yiK4="
+-	c.Assert(params["Signature"], Equals, expected)
+-}
+-
+-func (s *S) TestParamSignature(c *C) {
+-	params := map[string]string{
+-		"param1": "value1",
+-		"param2": "value2",
+-		"param3": "value3",
+-	}
+-	ec2.Sign(testAuth, "GET", "/path", params, "localhost")
+-	expected := "XWOR4+0lmK8bD8CGDGZ4kfuSPbb2JibLJiCl/OPu1oU="
+-	c.Assert(params["Signature"], Equals, expected)
+-}
+-
+-func (s *S) TestManyParams(c *C) {
+-	params := map[string]string{
+-		"param1":  "value10",
+-		"param2":  "value2",
+-		"param3":  "value3",
+-		"param4":  "value4",
+-		"param5":  "value5",
+-		"param6":  "value6",
+-		"param7":  "value7",
+-		"param8":  "value8",
+-		"param9":  "value9",
+-		"param10": "value1",
+-	}
+-	ec2.Sign(testAuth, "GET", "/path", params, "localhost")
+-	expected := "di0sjxIvezUgQ1SIL6i+C/H8lL+U0CQ9frLIak8jkVg="
+-	c.Assert(params["Signature"], Equals, expected)
+-}
+-
+-func (s *S) TestEscaping(c *C) {
+-	params := map[string]string{"Nonce": "+ +"}
+-	ec2.Sign(testAuth, "GET", "/path", params, "localhost")
+-	c.Assert(params["Nonce"], Equals, "+ +")
+-	expected := "bqffDELReIqwjg/W0DnsnVUmfLK4wXVLO4/LuG+1VFA="
+-	c.Assert(params["Signature"], Equals, expected)
+-}
+-
+-func (s *S) TestSignatureExample1(c *C) {
+-	params := map[string]string{
+-		"Timestamp": "2009-02-01T12:53:20+00:00",
+-		"Version":   "2007-11-07",
+-		"Action":    "ListDomains",
+-	}
+-	ec2.Sign(aws.Auth{"access", "secret", ""}, "GET", "/", params, "sdb.amazonaws.com")
+-	expected := "okj96/5ucWBSc1uR2zXVfm6mDHtgfNv657rRtt/aunQ="
+-	c.Assert(params["Signature"], Equals, expected)
+-}
+diff --git a/Godeps/_workspace/src/github.com/stretchr/objx/.gitignore b/Godeps/_workspace/src/github.com/stretchr/objx/.gitignore
+deleted file mode 100644
+index 0026861..0000000
+--- a/Godeps/_workspace/src/github.com/stretchr/objx/.gitignore
++++ /dev/null
+@@ -1,22 +0,0 @@
+-# Compiled Object files, Static and Dynamic libs (Shared Objects)
+-*.o
+-*.a
+-*.so
+-
+-# Folders
+-_obj
+-_test
+-
+-# Architecture specific extensions/prefixes
+-*.[568vq]
+-[568vq].out
+-
+-*.cgo1.go
+-*.cgo2.c
+-_cgo_defun.c
+-_cgo_gotypes.go
+-_cgo_export.*
+-
+-_testmain.go
+-
+-*.exe
+diff --git a/Godeps/_workspace/src/github.com/stretchr/objx/README.md b/Godeps/_workspace/src/github.com/stretchr/objx/README.md
+deleted file mode 100644
+index 4aa1806..0000000
+--- a/Godeps/_workspace/src/github.com/stretchr/objx/README.md
++++ /dev/null
+@@ -1,3 +0,0 @@
+-# objx
+-
+-  * Jump into the [API Documentation](http://godoc.org/github.com/stretchr/objx)
+diff --git a/Godeps/_workspace/src/github.com/stretchr/objx/accessors.go b/Godeps/_workspace/src/github.com/stretchr/objx/accessors.go
+deleted file mode 100644
+index 721bcac..0000000
+--- a/Godeps/_workspace/src/github.com/stretchr/objx/accessors.go
++++ /dev/null
+@@ -1,179 +0,0 @@
+-package objx
+-
+-import (
+-	"fmt"
+-	"regexp"
+-	"strconv"
+-	"strings"
+-)
+-
+-// arrayAccesRegexString is the regex used to extract the array number
+-// from the access path
+-const arrayAccesRegexString = `^(.+)\[([0-9]+)\]$`
+-
+-// arrayAccesRegex is the compiled arrayAccesRegexString
+-var arrayAccesRegex = regexp.MustCompile(arrayAccesRegexString)
+-
+-// Get gets the value using the specified selector and
+-// returns it inside a new Obj object.
+-//
+-// If it cannot find the value, Get will return a nil
+-// value inside an instance of Obj.
+-//
+-// Get can only operate directly on map[string]interface{} and []interface.
+-//
+-// Example
+-//
+-// To access the title of the third chapter of the second book, do:
+-//
+-//    o.Get("books[1].chapters[2].title")
+-func (m Map) Get(selector string) *Value {
+-	rawObj := access(m, selector, nil, false, false)
+-	return &Value{data: rawObj}
+-}
+-
+-// Set sets the value using the specified selector and
+-// returns the object on which Set was called.
+-//
+-// Set can only operate directly on map[string]interface{} and []interface
+-//
+-// Example
+-//
+-// To set the title of the third chapter of the second book, do:
+-//
+-//    o.Set("books[1].chapters[2].title","Time to Go")
+-func (m Map) Set(selector string, value interface{}) Map {
+-	access(m, selector, value, true, false)
+-	return m
+-}
+-
+-// access accesses the object using the selector and performs the
+-// appropriate action.
+-func access(current, selector, value interface{}, isSet, panics bool) interface{} {
+-
+-	switch selector.(type) {
+-	case int, int8, int16, int32, int64, uint, uint8, uint16, uint32, uint64:
+-
+-		if array, ok := current.([]interface{}); ok {
+-			index := intFromInterface(selector)
+-
+-			if index >= len(array) {
+-				if panics {
+-					panic(fmt.Sprintf("objx: Index %d is out of range. Slice only contains %d items.", index, len(array)))
+-				}
+-				return nil
+-			}
+-
+-			return array[index]
+-		}
+-
+-		return nil
+-
+-	case string:
+-
+-		selStr := selector.(string)
+-		selSegs := strings.SplitN(selStr, PathSeparator, 2)
+-		thisSel := selSegs[0]
+-		index := -1
+-		var err error
+-
+-		// https://github.com/stretchr/objx/issues/12
+-		if strings.Contains(thisSel, "[") {
+-
+-			arrayMatches := arrayAccesRegex.FindStringSubmatch(thisSel)
+-
+-			if len(arrayMatches) > 0 {
+-
+-				// Get the key into the map
+-				thisSel = arrayMatches[1]
+-
+-				// Get the index into the array at the key
+-				index, err = strconv.Atoi(arrayMatches[2])
+-
+-				if err != nil {
+-					// This should never happen. If it does, something has gone
+-					// seriously wrong. Panic.
+-					panic("objx: Array index is not an integer.  Must use array[int].")
+-				}
+-
+-			}
+-		}
+-
+-		if curMap, ok := current.(Map); ok {
+-			current = map[string]interface{}(curMap)
+-		}
+-
+-		// get the object in question
+-		switch current.(type) {
+-		case map[string]interface{}:
+-			curMSI := current.(map[string]interface{})
+-			if len(selSegs) <= 1 && isSet {
+-				curMSI[thisSel] = value
+-				return nil
+-			} else {
+-				current = curMSI[thisSel]
+-			}
+-		default:
+-			current = nil
+-		}
+-
+-		if current == nil && panics {
+-			panic(fmt.Sprintf("objx: '%v' invalid on object.", selector))
+-		}
+-
+-		// do we need to access the item of an array?
+-		if index > -1 {
+-			if array, ok := current.([]interface{}); ok {
+-				if index < len(array) {
+-					current = array[index]
+-				} else {
+-					if panics {
+-						panic(fmt.Sprintf("objx: Index %d is out of range. Slice only contains %d items.", index, len(array)))
+-					}
+-					current = nil
+-				}
+-			}
+-		}
+-
+-		if len(selSegs) > 1 {
+-			current = access(current, selSegs[1], value, isSet, panics)
+-		}
+-
+-	}
+-
+-	return current
+-
+-}
+-
+-// intFromInterface converts an interface object to the largest
+-// representation of an unsigned integer using a type switch and
+-// assertions
+-func intFromInterface(selector interface{}) int {
+-	var value int
+-	switch selector.(type) {
+-	case int:
+-		value = selector.(int)
+-	case int8:
+-		value = int(selector.(int8))
+-	case int16:
+-		value = int(selector.(int16))
+-	case int32:
+-		value = int(selector.(int32))
+-	case int64:
+-		value = int(selector.(int64))
+-	case uint:
+-		value = int(selector.(uint))
+-	case uint8:
+-		value = int(selector.(uint8))
+-	case uint16:
+-		value = int(selector.(uint16))
+-	case uint32:
+-		value = int(selector.(uint32))
+-	case uint64:
+-		value = int(selector.(uint64))
+-	default:
+-		panic("objx: array access argument is not an integer type (this should never happen)")
+-	}
+-
+-	return value
+-}
+diff --git a/Godeps/_workspace/src/github.com/stretchr/objx/accessors_test.go b/Godeps/_workspace/src/github.com/stretchr/objx/accessors_test.go
+deleted file mode 100644
+index ce5d8e4..0000000
+--- a/Godeps/_workspace/src/github.com/stretchr/objx/accessors_test.go
++++ /dev/null
+@@ -1,145 +0,0 @@
+-package objx
+-
+-import (
+-	"github.com/stretchr/testify/assert"
+-	"testing"
+-)
+-
+-func TestAccessorsAccessGetSingleField(t *testing.T) {
+-
+-	current := map[string]interface{}{"name": "Tyler"}
+-	assert.Equal(t, "Tyler", access(current, "name", nil, false, true))
+-
+-}
+-func TestAccessorsAccessGetDeep(t *testing.T) {
+-
+-	current := map[string]interface{}{"name": map[string]interface{}{"first": "Tyler", "last": "Bunnell"}}
+-	assert.Equal(t, "Tyler", access(current, "name.first", nil, false, true))
+-	assert.Equal(t, "Bunnell", access(current, "name.last", nil, false, true))
+-
+-}
+-func TestAccessorsAccessGetDeepDeep(t *testing.T) {
+-
+-	current := map[string]interface{}{"one": map[string]interface{}{"two": map[string]interface{}{"three": map[string]interface{}{"four": 4}}}}
+-	assert.Equal(t, 4, access(current, "one.two.three.four", nil, false, true))
+-
+-}
+-func TestAccessorsAccessGetInsideArray(t *testing.T) {
+-
+-	current := map[string]interface{}{"names": []interface{}{map[string]interface{}{"first": "Tyler", "last": "Bunnell"}, map[string]interface{}{"first": "Capitol", "last": "Bollocks"}}}
+-	assert.Equal(t, "Tyler", access(current, "names[0].first", nil, false, true))
+-	assert.Equal(t, "Bunnell", access(current, "names[0].last", nil, false, true))
+-	assert.Equal(t, "Capitol", access(current, "names[1].first", nil, false, true))
+-	assert.Equal(t, "Bollocks", access(current, "names[1].last", nil, false, true))
+-
+-	assert.Panics(t, func() {
+-		access(current, "names[2]", nil, false, true)
+-	})
+-	assert.Nil(t, access(current, "names[2]", nil, false, false))
+-
+-}
+-
+-func TestAccessorsAccessGetFromArrayWithInt(t *testing.T) {
+-
+-	current := []interface{}{map[string]interface{}{"first": "Tyler", "last": "Bunnell"}, map[string]interface{}{"first": "Capitol", "last": "Bollocks"}}
+-	one := access(current, 0, nil, false, false)
+-	two := access(current, 1, nil, false, false)
+-	three := access(current, 2, nil, false, false)
+-
+-	assert.Equal(t, "Tyler", one.(map[string]interface{})["first"])
+-	assert.Equal(t, "Capitol", two.(map[string]interface{})["first"])
+-	assert.Nil(t, three)
+-
+-}
+-
+-func TestAccessorsGet(t *testing.T) {
+-
+-	current := New(map[string]interface{}{"name": "Tyler"})
+-	assert.Equal(t, "Tyler", current.Get("name").data)
+-
+-}
+-
+-func TestAccessorsAccessSetSingleField(t *testing.T) {
+-
+-	current := map[string]interface{}{"name": "Tyler"}
+-	access(current, "name", "Mat", true, false)
+-	assert.Equal(t, current["name"], "Mat")
+-
+-	access(current, "age", 29, true, true)
+-	assert.Equal(t, current["age"], 29)
+-
+-}
+-
+-func TestAccessorsAccessSetSingleFieldNotExisting(t *testing.T) {
+-
+-	current := map[string]interface{}{}
+-	access(current, "name", "Mat", true, false)
+-	assert.Equal(t, current["name"], "Mat")
+-
+-}
+-
+-func TestAccessorsAccessSetDeep(t *testing.T) {
+-
+-	current := map[string]interface{}{"name": map[string]interface{}{"first": "Tyler", "last": "Bunnell"}}
+-
+-	access(current, "name.first", "Mat", true, true)
+-	access(current, "name.last", "Ryer", true, true)
+-
+-	assert.Equal(t, "Mat", access(current, "name.first", nil, false, true))
+-	assert.Equal(t, "Ryer", access(current, "name.last", nil, false, true))
+-
+-}
+-func TestAccessorsAccessSetDeepDeep(t *testing.T) {
+-
+-	current := map[string]interface{}{"one": map[string]interface{}{"two": map[string]interface{}{"three": map[string]interface{}{"four": 4}}}}
+-
+-	access(current, "one.two.three.four", 5, true, true)
+-
+-	assert.Equal(t, 5, access(current, "one.two.three.four", nil, false, true))
+-
+-}
+-func TestAccessorsAccessSetArray(t *testing.T) {
+-
+-	current := map[string]interface{}{"names": []interface{}{"Tyler"}}
+-
+-	access(current, "names[0]", "Mat", true, true)
+-
+-	assert.Equal(t, "Mat", access(current, "names[0]", nil, false, true))
+-
+-}
+-func TestAccessorsAccessSetInsideArray(t *testing.T) {
+-
+-	current := map[string]interface{}{"names": []interface{}{map[string]interface{}{"first": "Tyler", "last": "Bunnell"}, map[string]interface{}{"first": "Capitol", "last": "Bollocks"}}}
+-
+-	access(current, "names[0].first", "Mat", true, true)
+-	access(current, "names[0].last", "Ryer", true, true)
+-	access(current, "names[1].first", "Captain", true, true)
+-	access(current, "names[1].last", "Underpants", true, true)
+-
+-	assert.Equal(t, "Mat", access(current, "names[0].first", nil, false, true))
+-	assert.Equal(t, "Ryer", access(current, "names[0].last", nil, false, true))
+-	assert.Equal(t, "Captain", access(current, "names[1].first", nil, false, true))
+-	assert.Equal(t, "Underpants", access(current, "names[1].last", nil, false, true))
+-
+-}
+-
+-func TestAccessorsAccessSetFromArrayWithInt(t *testing.T) {
+-
+-	current := []interface{}{map[string]interface{}{"first": "Tyler", "last": "Bunnell"}, map[string]interface{}{"first": "Capitol", "last": "Bollocks"}}
+-	one := access(current, 0, nil, false, false)
+-	two := access(current, 1, nil, false, false)
+-	three := access(current, 2, nil, false, false)
+-
+-	assert.Equal(t, "Tyler", one.(map[string]interface{})["first"])
+-	assert.Equal(t, "Capitol", two.(map[string]interface{})["first"])
+-	assert.Nil(t, three)
+-
+-}
+-
+-func TestAccessorsSet(t *testing.T) {
+-
+-	current := New(map[string]interface{}{"name": "Tyler"})
+-	current.Set("name", "Mat")
+-	assert.Equal(t, "Mat", current.Get("name").data)
+-
+-}
+diff --git a/Godeps/_workspace/src/github.com/stretchr/objx/codegen/array-access.txt b/Godeps/_workspace/src/github.com/stretchr/objx/codegen/array-access.txt
+deleted file mode 100644
+index 3060234..0000000
+--- a/Godeps/_workspace/src/github.com/stretchr/objx/codegen/array-access.txt
++++ /dev/null
+@@ -1,14 +0,0 @@
+-  case []{1}:
+-    a := object.([]{1})
+-    if isSet {
+-      a[index] = value.({1})
+-    } else {
+-      if index >= len(a) {
+-        if panics {
+-          panic(fmt.Sprintf("objx: Index %d is out of range because the []{1} only contains %d items.", index, len(a)))
+-        }
+-        return nil
+-      } else {
+-        return a[index]
+-      }
+-    }
+diff --git a/Godeps/_workspace/src/github.com/stretchr/objx/codegen/index.html b/Godeps/_workspace/src/github.com/stretchr/objx/codegen/index.html
+deleted file mode 100644
+index 379ffc3..0000000
+--- a/Godeps/_workspace/src/github.com/stretchr/objx/codegen/index.html
++++ /dev/null
+@@ -1,86 +0,0 @@
+-<html>
+-	<head>
+-	<title>
+-		Codegen
+-	</title>
+-	<style>
+-		body {
+-			width: 800px;
+-			margin: auto;
+-		}
+-		textarea {
+-			width: 100%;
+-			min-height: 100px;
+-			font-family: Courier;
+-		}
+-	</style>
+-	</head>
+-	<body>
+-
+-		<h2>
+-			Template
+-		</h2>
+-		<p>
+-			Use <code>{x}</code> as a placeholder for each argument.
+-		</p>
+-		<textarea id="template"></textarea>
+-
+-		<h2>
+-			Arguments (comma separated)
+-		</h2>
+-		<p>
+-			One block per line
+-		</p>
+-		<textarea id="args"></textarea>
+-
+-		<h2>
+-			Output
+-		</h2>
+-		<input id="go" type="button" value="Generate code" />
+-
+-		<textarea id="output"></textarea>
+-
+-		<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.10.2/jquery.min.js"></script>
+-		<script>
+-
+-			$(function(){
+-
+-				$("#go").click(function(){
+-
+-					var output = ""
+-					var template = $("#template").val()
+-					var args = $("#args").val()
+-
+-					// collect the args
+-					var argLines = args.split("\n")
+-					for (var line in argLines) {
+-
+-						var argLine = argLines[line];
+-						var thisTemp = template
+-
+-						// get individual args
+-						var args = argLine.split(",")
+-
+-						for (var argI in args) {
+-							var argText = args[argI];
+-							var argPlaceholder = "{" + argI + "}";
+-
+-							while (thisTemp.indexOf(argPlaceholder) > -1) {
+-								thisTemp = thisTemp.replace(argPlaceholder, argText);
+-							}
+-
+-						}
+-
+-						output += thisTemp
+-
+-					}
+-
+-					$("#output").val(output);
+-
+-				});
+-
+-			});
+-
+-		</script>
+-	</body>
+-</html>
+diff --git a/Godeps/_workspace/src/github.com/stretchr/objx/codegen/template.txt b/Godeps/_workspace/src/github.com/stretchr/objx/codegen/template.txt
+deleted file mode 100644
+index b396900..0000000
+--- a/Godeps/_workspace/src/github.com/stretchr/objx/codegen/template.txt
++++ /dev/null
+@@ -1,286 +0,0 @@
+-/*
+-	{4} ({1} and []{1})
+-	--------------------------------------------------
+-*/
+-
+-// {4} gets the value as a {1}, returns the optionalDefault
+-// value or a system default object if the value is the wrong type.
+-func (v *Value) {4}(optionalDefault ...{1}) {1} {
+-	if s, ok := v.data.({1}); ok {
+-		return s
+-	}
+-	if len(optionalDefault) == 1 {
+-		return optionalDefault[0]
+-	}
+-	return {3}
+-}
+-
+-// Must{4} gets the value as a {1}.
+-//
+-// Panics if the object is not a {1}.
+-func (v *Value) Must{4}() {1} {
+-	return v.data.({1})
+-}
+-
+-// {4}Slice gets the value as a []{1}, returns the optionalDefault
+-// value or nil if the value is not a []{1}.
+-func (v *Value) {4}Slice(optionalDefault ...[]{1}) []{1} {
+-	if s, ok := v.data.([]{1}); ok {
+-		return s
+-	}
+-	if len(optionalDefault) == 1 {
+-		return optionalDefault[0]
+-	}
+-	return nil
+-}
+-
+-// Must{4}Slice gets the value as a []{1}.
+-//
+-// Panics if the object is not a []{1}.
+-func (v *Value) Must{4}Slice() []{1} {
+-	return v.data.([]{1})
+-}
+-
+-// Is{4} gets whether the object contained is a {1} or not.
+-func (v *Value) Is{4}() bool {
+-  _, ok := v.data.({1})
+-  return ok
+-}
+-
+-// Is{4}Slice gets whether the object contained is a []{1} or not.
+-func (v *Value) Is{4}Slice() bool {
+-	_, ok := v.data.([]{1})
+-	return ok
+-}
+-
+-// Each{4} calls the specified callback for each object
+-// in the []{1}.
+-//
+-// Panics if the object is the wrong type.
+-func (v *Value) Each{4}(callback func(int, {1}) bool) *Value {
+-
+-	for index, val := range v.Must{4}Slice() {
+-		carryon := callback(index, val)
+-		if carryon == false {
+-			break
+-		}
+-	}
+-
+-	return v
+-
+-}
+-
+-// Where{4} uses the specified decider function to select items
+-// from the []{1}.  The object contained in the result will contain
+-// only the selected items.
+-func (v *Value) Where{4}(decider func(int, {1}) bool) *Value {
+-
+-	var selected []{1}
+-
+-	v.Each{4}(func(index int, val {1}) bool {
+-		shouldSelect := decider(index, val)
+-		if shouldSelect == false {
+-			selected = append(selected, val)
+-		}
+-		return true
+-	})
+-
+-	return &Value{data:selected}
+-
+-}
+-
+-// Group{4} uses the specified grouper function to group the items
+-// keyed by the return of the grouper.  The object contained in the
+-// result will contain a map[string][]{1}.
+-func (v *Value) Group{4}(grouper func(int, {1}) string) *Value {
+-
+-	groups := make(map[string][]{1})
+-
+-	v.Each{4}(func(index int, val {1}) bool {
+-		group := grouper(index, val)
+-		if _, ok := groups[group]; !ok {
+-			groups[group] = make([]{1}, 0)
+-		}
+-		groups[group] = append(groups[group], val)
+-		return true
+-	})
+-
+-	return &Value{data:groups}
+-
+-}
+-
+-// Replace{4} uses the specified function to replace each {1}s
+-// by iterating each item.  The data in the returned result will be a
+-// []{1} containing the replaced items.
+-func (v *Value) Replace{4}(replacer func(int, {1}) {1}) *Value {
+-
+-	arr := v.Must{4}Slice()
+-	replaced := make([]{1}, len(arr))
+-
+-	v.Each{4}(func(index int, val {1}) bool {
+-		replaced[index] = replacer(index, val)
+-		return true
+-	})
+-
+-	return &Value{data:replaced}
+-
+-}
+-
+-// Collect{4} uses the specified collector function to collect a value
+-// for each of the {1}s in the slice.  The data returned will be a
+-// []interface{}.
+-func (v *Value) Collect{4}(collector func(int, {1}) interface{}) *Value {
+-
+-	arr := v.Must{4}Slice()
+-	collected := make([]interface{}, len(arr))
+-
+-	v.Each{4}(func(index int, val {1}) bool {
+-		collected[index] = collector(index, val)
+-		return true
+-	})
+-
+-	return &Value{data:collected}
+-}
+-
+-// ************************************************************
+-// TESTS
+-// ************************************************************
+-
+-func Test{4}(t *testing.T) {
+-
+-  val := {1}( {2} )
+-	m := map[string]interface{}{"value": val, "nothing": nil}
+-	assert.Equal(t, val, New(m).Get("value").{4}())
+-	assert.Equal(t, val, New(m).Get("value").Must{4}())
+-	assert.Equal(t, {1}({3}), New(m).Get("nothing").{4}())
+-	assert.Equal(t, val, New(m).Get("nothing").{4}({2}))
+-
+-	assert.Panics(t, func() {
+-		New(m).Get("age").Must{4}()
+-	})
+-
+-}
+-
+-func Test{4}Slice(t *testing.T) {
+-
+-  val := {1}( {2} )
+-	m := map[string]interface{}{"value": []{1}{ val }, "nothing": nil}
+-	assert.Equal(t, val, New(m).Get("value").{4}Slice()[0])
+-	assert.Equal(t, val, New(m).Get("value").Must{4}Slice()[0])
+-	assert.Equal(t, []{1}(nil), New(m).Get("nothing").{4}Slice())
+-	assert.Equal(t, val, New(m).Get("nothing").{4}Slice( []{1}{ {1}({2}) } )[0])
+-
+-	assert.Panics(t, func() {
+-		New(m).Get("nothing").Must{4}Slice()
+-	})
+-
+-}
+-
+-func TestIs{4}(t *testing.T) {
+-
+-	var v *Value
+-
+-	v = &Value{data: {1}({2})}
+-	assert.True(t, v.Is{4}())
+-
+-	v = &Value{data: []{1}{ {1}({2}) }}
+-	assert.True(t, v.Is{4}Slice())
+-
+-}
+-
+-func TestEach{4}(t *testing.T) {
+-
+-	v := &Value{data: []{1}{ {1}({2}), {1}({2}), {1}({2}), {1}({2}), {1}({2}) }}
+-	count := 0
+-	replacedVals := make([]{1}, 0)
+-	assert.Equal(t, v, v.Each{4}(func(i int, val {1}) bool {
+-
+-		count++
+-		replacedVals = append(replacedVals, val)
+-
+-		// abort early
+-		if i == 2 {
+-			return false
+-		}
+-
+-		return true
+-
+-	}))
+-
+-	assert.Equal(t, count, 3)
+-	assert.Equal(t, replacedVals[0], v.Must{4}Slice()[0])
+-	assert.Equal(t, replacedVals[1], v.Must{4}Slice()[1])
+-	assert.Equal(t, replacedVals[2], v.Must{4}Slice()[2])
+-
+-}
+-
+-func TestWhere{4}(t *testing.T) {
+-
+-	v := &Value{data: []{1}{ {1}({2}), {1}({2}), {1}({2}), {1}({2}), {1}({2}), {1}({2}) }}
+-
+-	selected := v.Where{4}(func(i int, val {1}) bool {
+-		return i%2==0
+-	}).Must{4}Slice()
+-
+-	assert.Equal(t, 3, len(selected))
+-
+-}
+-
+-func TestGroup{4}(t *testing.T) {
+-
+-	v := &Value{data: []{1}{ {1}({2}), {1}({2}), {1}({2}), {1}({2}), {1}({2}), {1}({2}) }}
+-
+-	grouped := v.Group{4}(func(i int, val {1}) string {
+-		return fmt.Sprintf("%v", i%2==0)
+-	}).data.(map[string][]{1})
+-
+-	assert.Equal(t, 2, len(grouped))
+-	assert.Equal(t, 3, len(grouped["true"]))
+-	assert.Equal(t, 3, len(grouped["false"]))
+-
+-}
+-
+-func TestReplace{4}(t *testing.T) {
+-
+-	v := &Value{data: []{1}{ {1}({2}), {1}({2}), {1}({2}), {1}({2}), {1}({2}), {1}({2}) }}
+-
+-	rawArr := v.Must{4}Slice()
+-
+-	replaced := v.Replace{4}(func(index int, val {1}) {1} {
+-		if index < len(rawArr)-1 {
+-			return rawArr[index+1]
+-		}
+-		return rawArr[0]
+-	})
+-
+-	replacedArr := replaced.Must{4}Slice()
+-	if assert.Equal(t, 6, len(replacedArr)) {
+-		assert.Equal(t, replacedArr[0], rawArr[1])
+-		assert.Equal(t, replacedArr[1], rawArr[2])
+-		assert.Equal(t, replacedArr[2], rawArr[3])
+-		assert.Equal(t, replacedArr[3], rawArr[4])
+-		assert.Equal(t, replacedArr[4], rawArr[5])
+-		assert.Equal(t, replacedArr[5], rawArr[0])
+-	}
+-
+-}
+-
+-func TestCollect{4}(t *testing.T) {
+-
+-	v := &Value{data: []{1}{ {1}({2}), {1}({2}), {1}({2}), {1}({2}), {1}({2}), {1}({2}) }}
+-
+-	collected := v.Collect{4}(func(index int, val {1}) interface{} {
+-		return index
+-	})
+-
+-	collectedArr := collected.MustInterSlice()
+-	if assert.Equal(t, 6, len(collectedArr)) {
+-		assert.Equal(t, collectedArr[0], 0)
+-		assert.Equal(t, collectedArr[1], 1)
+-		assert.Equal(t, collectedArr[2], 2)
+-		assert.Equal(t, collectedArr[3], 3)
+-		assert.Equal(t, collectedArr[4], 4)
+-		assert.Equal(t, collectedArr[5], 5)
+-	}
+-
+-}
+diff --git a/Godeps/_workspace/src/github.com/stretchr/objx/codegen/types_list.txt b/Godeps/_workspace/src/github.com/stretchr/objx/codegen/types_list.txt
+deleted file mode 100644
+index 069d43d..0000000
+--- a/Godeps/_workspace/src/github.com/stretchr/objx/codegen/types_list.txt
++++ /dev/null
+@@ -1,20 +0,0 @@
+-Interface,interface{},"something",nil,Inter
+-Map,map[string]interface{},map[string]interface{}{"name":"Tyler"},nil,MSI
+-ObjxMap,(Map),New(1),New(nil),ObjxMap
+-Bool,bool,true,false,Bool
+-String,string,"hello","",Str
+-Int,int,1,0,Int
+-Int8,int8,1,0,Int8
+-Int16,int16,1,0,Int16
+-Int32,int32,1,0,Int32
+-Int64,int64,1,0,Int64
+-Uint,uint,1,0,Uint
+-Uint8,uint8,1,0,Uint8
+-Uint16,uint16,1,0,Uint16
+-Uint32,uint32,1,0,Uint32
+-Uint64,uint64,1,0,Uint64
+-Uintptr,uintptr,1,0,Uintptr
+-Float32,float32,1,0,Float32
+-Float64,float64,1,0,Float64
+-Complex64,complex64,1,0,Complex64
+-Complex128,complex128,1,0,Complex128
+diff --git a/Godeps/_workspace/src/github.com/stretchr/objx/constants.go b/Godeps/_workspace/src/github.com/stretchr/objx/constants.go
+deleted file mode 100644
+index f9eb42a..0000000
+--- a/Godeps/_workspace/src/github.com/stretchr/objx/constants.go
++++ /dev/null
+@@ -1,13 +0,0 @@
+-package objx
+-
+-const (
+-	// PathSeparator is the character used to separate the elements
+-	// of the keypath.
+-	//
+-	// For example, `location.address.city`
+-	PathSeparator string = "."
+-
+-	// SignatureSeparator is the character that is used to
+-	// separate the Base64 string from the security signature.
+-	SignatureSeparator = "_"
+-)
+diff --git a/Godeps/_workspace/src/github.com/stretchr/objx/conversions.go b/Godeps/_workspace/src/github.com/stretchr/objx/conversions.go
+deleted file mode 100644
+index 9cdfa9f..0000000
+--- a/Godeps/_workspace/src/github.com/stretchr/objx/conversions.go
++++ /dev/null
+@@ -1,117 +0,0 @@
+-package objx
+-
+-import (
+-	"bytes"
+-	"encoding/base64"
+-	"encoding/json"
+-	"errors"
+-	"fmt"
+-	"net/url"
+-)
+-
+-// JSON converts the contained object to a JSON string
+-// representation
+-func (m Map) JSON() (string, error) {
+-
+-	result, err := json.Marshal(m)
+-
+-	if err != nil {
+-		err = errors.New("objx: JSON encode failed with: " + err.Error())
+-	}
+-
+-	return string(result), err
+-
+-}
+-
+-// MustJSON converts the contained object to a JSON string
+-// representation and panics if there is an error
+-func (m Map) MustJSON() string {
+-	result, err := m.JSON()
+-	if err != nil {
+-		panic(err.Error())
+-	}
+-	return result
+-}
+-
+-// Base64 converts the contained object to a Base64 string
+-// representation of the JSON string representation
+-func (m Map) Base64() (string, error) {
+-
+-	var buf bytes.Buffer
+-
+-	jsonData, err := m.JSON()
+-	if err != nil {
+-		return "", err
+-	}
+-
+-	encoder := base64.NewEncoder(base64.StdEncoding, &buf)
+-	encoder.Write([]byte(jsonData))
+-	encoder.Close()
+-
+-	return buf.String(), nil
+-
+-}
+-
+-// MustBase64 converts the contained object to a Base64 string
+-// representation of the JSON string representation and panics
+-// if there is an error
+-func (m Map) MustBase64() string {
+-	result, err := m.Base64()
+-	if err != nil {
+-		panic(err.Error())
+-	}
+-	return result
+-}
+-
+-// SignedBase64 converts the contained object to a Base64 string
+-// representation of the JSON string representation and signs it
+-// using the provided key.
+-func (m Map) SignedBase64(key string) (string, error) {
+-
+-	base64, err := m.Base64()
+-	if err != nil {
+-		return "", err
+-	}
+-
+-	sig := HashWithKey(base64, key)
+-
+-	return base64 + SignatureSeparator + sig, nil
+-
+-}
+-
+-// MustSignedBase64 converts the contained object to a Base64 string
+-// representation of the JSON string representation and signs it
+-// using the provided key and panics if there is an error
+-func (m Map) MustSignedBase64(key string) string {
+-	result, err := m.SignedBase64(key)
+-	if err != nil {
+-		panic(err.Error())
+-	}
+-	return result
+-}
+-
+-/*
+-	URL Query
+-	------------------------------------------------
+-*/
+-
+-// URLValues creates a url.Values object from an Obj. This
+-// function requires that the wrapped object be a map[string]interface{}
+-func (m Map) URLValues() url.Values {
+-
+-	vals := make(url.Values)
+-
+-	for k, v := range m {
+-		//TODO: can this be done without sprintf?
+-		vals.Set(k, fmt.Sprintf("%v", v))
+-	}
+-
+-	return vals
+-}
+-
+-// URLQuery gets an encoded URL query representing the given
+-// Obj. This function requires that the wrapped object be a
+-// map[string]interface{}
+-func (m Map) URLQuery() (string, error) {
+-	return m.URLValues().Encode(), nil
+-}
+diff --git a/Godeps/_workspace/src/github.com/stretchr/objx/conversions_test.go b/Godeps/_workspace/src/github.com/stretchr/objx/conversions_test.go
+deleted file mode 100644
+index e9ccd29..0000000
+--- a/Godeps/_workspace/src/github.com/stretchr/objx/conversions_test.go
++++ /dev/null
+@@ -1,94 +0,0 @@
+-package objx
+-
+-import (
+-	"github.com/stretchr/testify/assert"
+-	"testing"
+-)
+-
+-func TestConversionJSON(t *testing.T) {
+-
+-	jsonString := `{"name":"Mat"}`
+-	o := MustFromJSON(jsonString)
+-
+-	result, err := o.JSON()
+-
+-	if assert.NoError(t, err) {
+-		assert.Equal(t, jsonString, result)
+-	}
+-
+-	assert.Equal(t, jsonString, o.MustJSON())
+-
+-}
+-
+-func TestConversionJSONWithError(t *testing.T) {
+-
+-	o := MSI()
+-	o["test"] = func() {}
+-
+-	assert.Panics(t, func() {
+-		o.MustJSON()
+-	})
+-
+-	_, err := o.JSON()
+-
+-	assert.Error(t, err)
+-
+-}
+-
+-func TestConversionBase64(t *testing.T) {
+-
+-	o := New(map[string]interface{}{"name": "Mat"})
+-
+-	result, err := o.Base64()
+-
+-	if assert.NoError(t, err) {
+-		assert.Equal(t, "eyJuYW1lIjoiTWF0In0=", result)
+-	}
+-
+-	assert.Equal(t, "eyJuYW1lIjoiTWF0In0=", o.MustBase64())
+-
+-}
+-
+-func TestConversionBase64WithError(t *testing.T) {
+-
+-	o := MSI()
+-	o["test"] = func() {}
+-
+-	assert.Panics(t, func() {
+-		o.MustBase64()
+-	})
+-
+-	_, err := o.Base64()
+-
+-	assert.Error(t, err)
+-
+-}
+-
+-func TestConversionSignedBase64(t *testing.T) {
+-
+-	o := New(map[string]interface{}{"name": "Mat"})
+-
+-	result, err := o.SignedBase64("key")
+-
+-	if assert.NoError(t, err) {
+-		assert.Equal(t, "eyJuYW1lIjoiTWF0In0=_67ee82916f90b2c0d68c903266e8998c9ef0c3d6", result)
+-	}
+-
+-	assert.Equal(t, "eyJuYW1lIjoiTWF0In0=_67ee82916f90b2c0d68c903266e8998c9ef0c3d6", o.MustSignedBase64("key"))
+-
+-}
+-
+-func TestConversionSignedBase64WithError(t *testing.T) {
+-
+-	o := MSI()
+-	o["test"] = func() {}
+-
+-	assert.Panics(t, func() {
+-		o.MustSignedBase64("key")
+-	})
+-
+-	_, err := o.SignedBase64("key")
+-
+-	assert.Error(t, err)
+-
+-}
+diff --git a/Godeps/_workspace/src/github.com/stretchr/objx/doc.go b/Godeps/_workspace/src/github.com/stretchr/objx/doc.go
+deleted file mode 100644
+index 47bf85e..0000000
+--- a/Godeps/_workspace/src/github.com/stretchr/objx/doc.go
++++ /dev/null
+@@ -1,72 +0,0 @@
+-// objx - Go package for dealing with maps, slices, JSON and other data.
+-//
+-// Overview
+-//
+-// Objx provides the `objx.Map` type, which is a `map[string]interface{}` that exposes
+-// a powerful `Get` method (among others) that allows you to easily and quickly get
+-// access to data within the map, without having to worry too much about type assertions,
+-// missing data, default values etc.
+-//
+-// Pattern
+-//
+-// Objx uses a predictable pattern to make accessing data from within `map[string]interface{}'s
+-// easy.
+-//
+-// Call one of the `objx.` functions to create your `objx.Map` to get going:
+-//
+-//     m, err := objx.FromJSON(json)
+-//
+-// NOTE: Any methods or functions with the `Must` prefix will panic if something goes wrong,
+-// the rest will be optimistic and try to figure things out without panicking.
+-//
+-// Use `Get` to access the value you're interested in.  You can use dot and array
+-// notation too:
+-//
+-//     m.Get("places[0].latlng")
+-//
+-// Once you have sought the `Value` you're interested in, you can use the `Is*` methods
+-// to determine its type.
+-//
+-//     if m.Get("code").IsStr() { /* ... */ }
+-//
+-// Or you can just assume the type, and use one of the strong type methods to
+-// extract the real value:
+-//
+-//     m.Get("code").Int()
+-//
+-// If there's no value there (or if it's the wrong type) then a default value
+-// will be returned, or you can be explicit about the default value.
+-//
+-//     Get("code").Int(-1)
+-//
+-// If you're dealing with a slice of data as a value, Objx provides many useful
+-// methods for iterating, manipulating and selecting that data.  You can find out more
+-// by exploring the index below.
+-//
+-// Reading data
+-//
+-// A simple example of how to use Objx:
+-//
+-//     // use MustFromJSON to make an objx.Map from some JSON
+-//     m := objx.MustFromJSON(`{"name": "Mat", "age": 30}`)
+-//
+-//     // get the details
+-//     name := m.Get("name").Str()
+-//     age := m.Get("age").Int()
+-//
+-//     // get their nickname (or use their name if they
+-//     // don't have one)
+-//     nickname := m.Get("nickname").Str(name)
+-//
+-// Ranging
+-//
+-// Since `objx.Map` is a `map[string]interface{}` you can treat it as such.  For
+-// example, to `range` the data, do what you would expect:
+-//
+-//     m := objx.MustFromJSON(json)
+-//     for key, value := range m {
+-//
+-//       /* ... do your magic ... */
+-//
+-//     }
+-package objx
+diff --git a/Godeps/_workspace/src/github.com/stretchr/objx/fixture_test.go b/Godeps/_workspace/src/github.com/stretchr/objx/fixture_test.go
+deleted file mode 100644
+index 27f7d90..0000000
+--- a/Godeps/_workspace/src/github.com/stretchr/objx/fixture_test.go
++++ /dev/null
+@@ -1,98 +0,0 @@
+-package objx
+-
+-import (
+-	"github.com/stretchr/testify/assert"
+-	"testing"
+-)
+-
+-var fixtures = []struct {
+-	// name is the name of the fixture (used for reporting
+-	// failures)
+-	name string
+-	// data is the JSON data to be worked on
+-	data string
+-	// get is the argument(s) to pass to Get
+-	get interface{}
+-	// output is the expected output
+-	output interface{}
+-}{
+-	{
+-		name:   "Simple get",
+-		data:   `{"name": "Mat"}`,
+-		get:    "name",
+-		output: "Mat",
+-	},
+-	{
+-		name:   "Get with dot notation",
+-		data:   `{"address": {"city": "Boulder"}}`,
+-		get:    "address.city",
+-		output: "Boulder",
+-	},
+-	{
+-		name:   "Deep get with dot notation",
+-		data:   `{"one": {"two": {"three": {"four": "hello"}}}}`,
+-		get:    "one.two.three.four",
+-		output: "hello",
+-	},
+-	{
+-		name:   "Get missing with dot notation",
+-		data:   `{"one": {"two": {"three": {"four": "hello"}}}}`,
+-		get:    "one.ten",
+-		output: nil,
+-	},
+-	{
+-		name:   "Get with array notation",
+-		data:   `{"tags": ["one", "two", "three"]}`,
+-		get:    "tags[1]",
+-		output: "two",
+-	},
+-	{
+-		name:   "Get with array and dot notation",
+-		data:   `{"types": { "tags": ["one", "two", "three"]}}`,
+-		get:    "types.tags[1]",
+-		output: "two",
+-	},
+-	{
+-		name:   "Get with array and dot notation - field after array",
+-		data:   `{"tags": [{"name":"one"}, {"name":"two"}, {"name":"three"}]}`,
+-		get:    "tags[1].name",
+-		output: "two",
+-	},
+-	{
+-		name:   "Complex get with array and dot notation",
+-		data:   `{"tags": [{"list": [{"one":"pizza"}]}]}`,
+-		get:    "tags[0].list[0].one",
+-		output: "pizza",
+-	},
+-	{
+-		name:   "Get field from within string should be nil",
+-		data:   `{"name":"Tyler"}`,
+-		get:    "name.something",
+-		output: nil,
+-	},
+-	{
+-		name:   "Get field from within string (using array accessor) should be nil",
+-		data:   `{"numbers":["one", "two", "three"]}`,
+-		get:    "numbers[0].nope",
+-		output: nil,
+-	},
+-}
+-
+-func TestFixtures(t *testing.T) {
+-
+-	for _, fixture := range fixtures {
+-
+-		m := MustFromJSON(fixture.data)
+-
+-		// get the value
+-		t.Logf("Running get fixture: \"%s\" (%v)", fixture.name, fixture)
+-		value := m.Get(fixture.get.(string))
+-
+-		// make sure it matches
+-		assert.Equal(t, fixture.output, value.data,
+-			"Get fixture \"%s\" failed: %v", fixture.name, fixture,
+-		)
+-
+-	}
+-
+-}
+diff --git a/Godeps/_workspace/src/github.com/stretchr/objx/map.go b/Godeps/_workspace/src/github.com/stretchr/objx/map.go
+deleted file mode 100644
+index eb6ed8e..0000000
+--- a/Godeps/_workspace/src/github.com/stretchr/objx/map.go
++++ /dev/null
+@@ -1,222 +0,0 @@
+-package objx
+-
+-import (
+-	"encoding/base64"
+-	"encoding/json"
+-	"errors"
+-	"io/ioutil"
+-	"net/url"
+-	"strings"
+-)
+-
+-// MSIConvertable is an interface that defines methods for converting your
+-// custom types to a map[string]interface{} representation.
+-type MSIConvertable interface {
+-	// MSI gets a map[string]interface{} (msi) representing the
+-	// object.
+-	MSI() map[string]interface{}
+-}
+-
+-// Map provides extended functionality for working with
+-// untyped data, in particular map[string]interface{} (msi).
+-type Map map[string]interface{}
+-
+-// Value returns the internal value instance
+-func (m Map) Value() *Value {
+-	return &Value{data: m}
+-}
+-
+-// Nil represents a nil Map.
+-var Nil Map = New(nil)
+-
+-// New creates a new Map containing the map[string]interface{} in the data argument.
+-// If the data argument is not a map[string]interface{}, New attempts to call the
+-// MSI() method on the MSIConvertable interface to create one.
+-func New(data interface{}) Map {
+-	if _, ok := data.(map[string]interface{}); !ok {
+-		if converter, ok := data.(MSIConvertable); ok {
+-			data = converter.MSI()
+-		} else {
+-			return nil
+-		}
+-	}
+-	return Map(data.(map[string]interface{}))
+-}
+-
+-// MSI creates a map[string]interface{} and puts it inside a new Map.
+-//
+-// The arguments follow a key, value pattern.
+-//
+-// Panics
+-//
+-// Panics if any key argument is non-string or if there are an odd number of arguments.
+-//
+-// Example
+-//
+-// To easily create Maps:
+-//
+-//     m := objx.MSI("name", "Mat", "age", 29, "subobj", objx.MSI("active", true))
+-//
+-//     // creates an Map equivalent to
+-//     m := objx.New(map[string]interface{}{"name": "Mat", "age": 29, "subobj": map[string]interface{}{"active": true}})
+-func MSI(keyAndValuePairs ...interface{}) Map {
+-
+-	newMap := make(map[string]interface{})
+-	keyAndValuePairsLen := len(keyAndValuePairs)
+-
+-	if keyAndValuePairsLen%2 != 0 {
+-		panic("objx: MSI must have an even number of arguments following the 'key, value' pattern.")
+-	}
+-
+-	for i := 0; i < keyAndValuePairsLen; i = i + 2 {
+-
+-		key := keyAndValuePairs[i]
+-		value := keyAndValuePairs[i+1]
+-
+-		// make sure the key is a string
+-		keyString, keyStringOK := key.(string)
+-		if !keyStringOK {
+-			panic("objx: MSI must follow 'string, interface{}' pattern.  " + keyString + " is not a valid key.")
+-		}
+-
+-		newMap[keyString] = value
+-
+-	}
+-
+-	return New(newMap)
+-}
+-
+-// ****** Conversion Constructors
+-
+-// MustFromJSON creates a new Map containing the data specified in the
+-// jsonString.
+-//
+-// Panics if the JSON is invalid.
+-func MustFromJSON(jsonString string) Map {
+-	o, err := FromJSON(jsonString)
+-
+-	if err != nil {
+-		panic("objx: MustFromJSON failed with error: " + err.Error())
+-	}
+-
+-	return o
+-}
+-
+-// FromJSON creates a new Map containing the data specified in the
+-// jsonString.
+-//
+-// Returns an error if the JSON is invalid.
+-func FromJSON(jsonString string) (Map, error) {
+-
+-	var data interface{}
+-	err := json.Unmarshal([]byte(jsonString), &data)
+-
+-	if err != nil {
+-		return Nil, err
+-	}
+-
+-	return New(data), nil
+-
+-}
+-
+-// FromBase64 creates a new Obj containing the data specified
+-// in the Base64 string.
+-//
+-// The string is an encoded JSON string returned by Base64
+-func FromBase64(base64String string) (Map, error) {
+-
+-	decoder := base64.NewDecoder(base64.StdEncoding, strings.NewReader(base64String))
+-
+-	decoded, err := ioutil.ReadAll(decoder)
+-	if err != nil {
+-		return nil, err
+-	}
+-
+-	return FromJSON(string(decoded))
+-}
+-
+-// MustFromBase64 creates a new Obj containing the data specified
+-// in the Base64 string and panics if there is an error.
+-//
+-// The string is an encoded JSON string returned by Base64
+-func MustFromBase64(base64String string) Map {
+-
+-	result, err := FromBase64(base64String)
+-
+-	if err != nil {
+-		panic("objx: MustFromBase64 failed with error: " + err.Error())
+-	}
+-
+-	return result
+-}
+-
+-// FromSignedBase64 creates a new Obj containing the data specified
+-// in the Base64 string.
+-//
+-// The string is an encoded JSON string returned by SignedBase64
+-func FromSignedBase64(base64String, key string) (Map, error) {
+-	parts := strings.Split(base64String, SignatureSeparator)
+-	if len(parts) != 2 {
+-		return nil, errors.New("objx: Signed base64 string is malformed.")
+-	}
+-
+-	sig := HashWithKey(parts[0], key)
+-	if parts[1] != sig {
+-		return nil, errors.New("objx: Signature for base64 data does not match.")
+-	}
+-
+-	return FromBase64(parts[0])
+-}
+-
+-// MustFromSignedBase64 creates a new Obj containing the data specified
+-// in the Base64 string and panics if there is an error.
+-//
+-// The string is an encoded JSON string returned by Base64
+-func MustFromSignedBase64(base64String, key string) Map {
+-
+-	result, err := FromSignedBase64(base64String, key)
+-
+-	if err != nil {
+-		panic("objx: MustFromSignedBase64 failed with error: " + err.Error())
+-	}
+-
+-	return result
+-}
+-
+-// FromURLQuery generates a new Obj by parsing the specified
+-// query.
+-//
+-// For queries with multiple values, the first value is selected.
+-func FromURLQuery(query string) (Map, error) {
+-
+-	vals, err := url.ParseQuery(query)
+-
+-	if err != nil {
+-		return nil, err
+-	}
+-
+-	m := make(map[string]interface{})
+-	for k, vals := range vals {
+-		m[k] = vals[0]
+-	}
+-
+-	return New(m), nil
+-}
+-
+-// MustFromURLQuery generates a new Obj by parsing the specified
+-// query.
+-//
+-// For queries with multiple values, the first value is selected.
+-//
+-// Panics if it encounters an error
+-func MustFromURLQuery(query string) Map {
+-
+-	o, err := FromURLQuery(query)
+-
+-	if err != nil {
+-		panic("objx: MustFromURLQuery failed with error: " + err.Error())
+-	}
+-
+-	return o
+-
+-}
+diff --git a/Godeps/_workspace/src/github.com/stretchr/objx/map_for_test.go b/Godeps/_workspace/src/github.com/stretchr/objx/map_for_test.go
+deleted file mode 100644
+index 6beb506..0000000
+--- a/Godeps/_workspace/src/github.com/stretchr/objx/map_for_test.go
++++ /dev/null
+@@ -1,10 +0,0 @@
+-package objx
+-
+-var TestMap map[string]interface{} = map[string]interface{}{
+-	"name": "Tyler",
+-	"address": map[string]interface{}{
+-		"city":  "Salt Lake City",
+-		"state": "UT",
+-	},
+-	"numbers": []interface{}{"one", "two", "three", "four", "five"},
+-}
+diff --git a/Godeps/_workspace/src/github.com/stretchr/objx/map_test.go b/Godeps/_workspace/src/github.com/stretchr/objx/map_test.go
+deleted file mode 100644
+index 1f8b45c..0000000
+--- a/Godeps/_workspace/src/github.com/stretchr/objx/map_test.go
++++ /dev/null
+@@ -1,147 +0,0 @@
+-package objx
+-
+-import (
+-	"github.com/stretchr/testify/assert"
+-	"testing"
+-)
+-
+-type Convertable struct {
+-	name string
+-}
+-
+-func (c *Convertable) MSI() map[string]interface{} {
+-	return map[string]interface{}{"name": c.name}
+-}
+-
+-type Unconvertable struct {
+-	name string
+-}
+-
+-func TestMapCreation(t *testing.T) {
+-
+-	o := New(nil)
+-	assert.Nil(t, o)
+-
+-	o = New("Tyler")
+-	assert.Nil(t, o)
+-
+-	unconvertable := &Unconvertable{name: "Tyler"}
+-	o = New(unconvertable)
+-	assert.Nil(t, o)
+-
+-	convertable := &Convertable{name: "Tyler"}
+-	o = New(convertable)
+-	if assert.NotNil(t, convertable) {
+-		assert.Equal(t, "Tyler", o["name"], "Tyler")
+-	}
+-
+-	o = MSI()
+-	if assert.NotNil(t, o) {
+-		assert.NotNil(t, o)
+-	}
+-
+-	o = MSI("name", "Tyler")
+-	if assert.NotNil(t, o) {
+-		if assert.NotNil(t, o) {
+-			assert.Equal(t, o["name"], "Tyler")
+-		}
+-	}
+-
+-}
+-
+-func TestMapMustFromJSONWithError(t *testing.T) {
+-
+-	_, err := FromJSON(`"name":"Mat"}`)
+-	assert.Error(t, err)
+-
+-}
+-
+-func TestMapFromJSON(t *testing.T) {
+-
+-	o := MustFromJSON(`{"name":"Mat"}`)
+-
+-	if assert.NotNil(t, o) {
+-		if assert.NotNil(t, o) {
+-			assert.Equal(t, "Mat", o["name"])
+-		}
+-	}
+-
+-}
+-
+-func TestMapFromJSONWithError(t *testing.T) {
+-
+-	var m Map
+-
+-	assert.Panics(t, func() {
+-		m = MustFromJSON(`"name":"Mat"}`)
+-	})
+-
+-	assert.Nil(t, m)
+-
+-}
+-
+-func TestMapFromBase64String(t *testing.T) {
+-
+-	base64String := "eyJuYW1lIjoiTWF0In0="
+-
+-	o, err := FromBase64(base64String)
+-
+-	if assert.NoError(t, err) {
+-		assert.Equal(t, o.Get("name").Str(), "Mat")
+-	}
+-
+-	assert.Equal(t, MustFromBase64(base64String).Get("name").Str(), "Mat")
+-
+-}
+-
+-func TestMapFromBase64StringWithError(t *testing.T) {
+-
+-	base64String := "eyJuYW1lIjoiTWFasd0In0="
+-
+-	_, err := FromBase64(base64String)
+-
+-	assert.Error(t, err)
+-
+-	assert.Panics(t, func() {
+-		MustFromBase64(base64String)
+-	})
+-
+-}
+-
+-func TestMapFromSignedBase64String(t *testing.T) {
+-
+-	base64String := "eyJuYW1lIjoiTWF0In0=_67ee82916f90b2c0d68c903266e8998c9ef0c3d6"
+-
+-	o, err := FromSignedBase64(base64String, "key")
+-
+-	if assert.NoError(t, err) {
+-		assert.Equal(t, o.Get("name").Str(), "Mat")
+-	}
+-
+-	assert.Equal(t, MustFromSignedBase64(base64String, "key").Get("name").Str(), "Mat")
+-
+-}
+-
+-func TestMapFromSignedBase64StringWithError(t *testing.T) {
+-
+-	base64String := "eyJuYW1lasdIjoiTWF0In0=_67ee82916f90b2c0d68c903266e8998c9ef0c3d6"
+-
+-	_, err := FromSignedBase64(base64String, "key")
+-
+-	assert.Error(t, err)
+-
+-	assert.Panics(t, func() {
+-		MustFromSignedBase64(base64String, "key")
+-	})
+-
+-}
+-
+-func TestMapFromURLQuery(t *testing.T) {
+-
+-	m, err := FromURLQuery("name=tyler&state=UT")
+-	if assert.NoError(t, err) && assert.NotNil(t, m) {
+-		assert.Equal(t, "tyler", m.Get("name").Str())
+-		assert.Equal(t, "UT", m.Get("state").Str())
+-	}
+-
+-}
+diff --git a/Godeps/_workspace/src/github.com/stretchr/objx/mutations.go b/Godeps/_workspace/src/github.com/stretchr/objx/mutations.go
+deleted file mode 100644
+index b35c863..0000000
+--- a/Godeps/_workspace/src/github.com/stretchr/objx/mutations.go
++++ /dev/null
+@@ -1,81 +0,0 @@
+-package objx
+-
+-// Exclude returns a new Map with the keys in the specified []string
+-// excluded.
+-func (d Map) Exclude(exclude []string) Map {
+-
+-	excluded := make(Map)
+-	for k, v := range d {
+-		var shouldInclude bool = true
+-		for _, toExclude := range exclude {
+-			if k == toExclude {
+-				shouldInclude = false
+-				break
+-			}
+-		}
+-		if shouldInclude {
+-			excluded[k] = v
+-		}
+-	}
+-
+-	return excluded
+-}
+-
+-// Copy creates a shallow copy of the Obj.
+-func (m Map) Copy() Map {
+-	copied := make(map[string]interface{})
+-	for k, v := range m {
+-		copied[k] = v
+-	}
+-	return New(copied)
+-}
+-
+-// Merge blends the specified map with a copy of this map and returns the result.
+-//
+-// Keys that appear in both will be selected from the specified map.
+-// This method requires that the wrapped object be a map[string]interface{}
+-func (m Map) Merge(merge Map) Map {
+-	return m.Copy().MergeHere(merge)
+-}
+-
+-// MergeHere blends the specified map with this map and returns the current map.
+-//
+-// Keys that appear in both will be selected from the specified map.  The original map
+-// will be modified. This method requires that
+-// the wrapped object be a map[string]interface{}
+-func (m Map) MergeHere(merge Map) Map {
+-
+-	for k, v := range merge {
+-		m[k] = v
+-	}
+-
+-	return m
+-
+-}
+-
+-// Transform builds a new Obj giving the transformer a chance
+-// to change the keys and values as it goes. This method requires that
+-// the wrapped object be a map[string]interface{}
+-func (m Map) Transform(transformer func(key string, value interface{}) (string, interface{})) Map {
+-	newMap := make(map[string]interface{})
+-	for k, v := range m {
+-		modifiedKey, modifiedVal := transformer(k, v)
+-		newMap[modifiedKey] = modifiedVal
+-	}
+-	return New(newMap)
+-}
+-
+-// TransformKeys builds a new map using the specified key mapping.
+-//
+-// Unspecified keys will be unaltered.
+-// This method requires that the wrapped object be a map[string]interface{}
+-func (m Map) TransformKeys(mapping map[string]string) Map {
+-	return m.Transform(func(key string, value interface{}) (string, interface{}) {
+-
+-		if newKey, ok := mapping[key]; ok {
+-			return newKey, value
+-		}
+-
+-		return key, value
+-	})
+-}
+diff --git a/Godeps/_workspace/src/github.com/stretchr/objx/mutations_test.go b/Godeps/_workspace/src/github.com/stretchr/objx/mutations_test.go
+deleted file mode 100644
+index e20ee23..0000000
+--- a/Godeps/_workspace/src/github.com/stretchr/objx/mutations_test.go
++++ /dev/null
+@@ -1,77 +0,0 @@
+-package objx
+-
+-import (
+-	"github.com/stretchr/testify/assert"
+-	"testing"
+-)
+-
+-func TestExclude(t *testing.T) {
+-
+-	d := make(Map)
+-	d["name"] = "Mat"
+-	d["age"] = 29
+-	d["secret"] = "ABC"
+-
+-	excluded := d.Exclude([]string{"secret"})
+-
+-	assert.Equal(t, d["name"], excluded["name"])
+-	assert.Equal(t, d["age"], excluded["age"])
+-	assert.False(t, excluded.Has("secret"), "secret should be excluded")
+-
+-}
+-
+-func TestCopy(t *testing.T) {
+-
+-	d1 := make(map[string]interface{})
+-	d1["name"] = "Tyler"
+-	d1["location"] = "UT"
+-
+-	d1Obj := New(d1)
+-	d2Obj := d1Obj.Copy()
+-
+-	d2Obj["name"] = "Mat"
+-
+-	assert.Equal(t, d1Obj.Get("name").Str(), "Tyler")
+-	assert.Equal(t, d2Obj.Get("name").Str(), "Mat")
+-
+-}
+-
+-func TestMerge(t *testing.T) {
+-
+-	d := make(map[string]interface{})
+-	d["name"] = "Mat"
+-
+-	d1 := make(map[string]interface{})
+-	d1["name"] = "Tyler"
+-	d1["location"] = "UT"
+-
+-	dObj := New(d)
+-	d1Obj := New(d1)
+-
+-	merged := dObj.Merge(d1Obj)
+-
+-	assert.Equal(t, merged.Get("name").Str(), d1Obj.Get("name").Str())
+-	assert.Equal(t, merged.Get("location").Str(), d1Obj.Get("location").Str())
+-	assert.Empty(t, dObj.Get("location").Str())
+-
+-}
+-
+-func TestMergeHere(t *testing.T) {
+-
+-	d := make(map[string]interface{})
+-	d["name"] = "Mat"
+-
+-	d1 := make(map[string]interface{})
+-	d1["name"] = "Tyler"
+-	d1["location"] = "UT"
+-
+-	dObj := New(d)
+-	d1Obj := New(d1)
+-
+-	merged := dObj.MergeHere(d1Obj)
+-
+-	assert.Equal(t, dObj, merged, "With MergeHere, it should return the first modified map")
+-	assert.Equal(t, merged.Get("name").Str(), d1Obj.Get("name").Str())
+-	assert.Equal(t, merged.Get("location").Str(), d1Obj.Get("location").Str())
+-	assert.Equal(t, merged.Get("location").Str(), dObj.Get("location").Str())
+-}
+diff --git a/Godeps/_workspace/src/github.com/stretchr/objx/security.go b/Godeps/_workspace/src/github.com/stretchr/objx/security.go
+deleted file mode 100644
+index fdd6be9..0000000
+--- a/Godeps/_workspace/src/github.com/stretchr/objx/security.go
++++ /dev/null
+@@ -1,14 +0,0 @@
+-package objx
+-
+-import (
+-	"crypto/sha1"
+-	"encoding/hex"
+-)
+-
+-// HashWithKey hashes the specified string using the security
+-// key.
+-func HashWithKey(data, key string) string {
+-	hash := sha1.New()
+-	hash.Write([]byte(data + ":" + key))
+-	return hex.EncodeToString(hash.Sum(nil))
+-}
+diff --git a/Godeps/_workspace/src/github.com/stretchr/objx/security_test.go b/Godeps/_workspace/src/github.com/stretchr/objx/security_test.go
+deleted file mode 100644
+index 8f0898f..0000000
+--- a/Godeps/_workspace/src/github.com/stretchr/objx/security_test.go
++++ /dev/null
+@@ -1,12 +0,0 @@
+-package objx
+-
+-import (
+-	"github.com/stretchr/testify/assert"
+-	"testing"
+-)
+-
+-func TestHashWithKey(t *testing.T) {
+-
+-	assert.Equal(t, "0ce84d8d01f2c7b6e0882b784429c54d280ea2d9", HashWithKey("abc", "def"))
+-
+-}
+diff --git a/Godeps/_workspace/src/github.com/stretchr/objx/simple_example_test.go b/Godeps/_workspace/src/github.com/stretchr/objx/simple_example_test.go
+deleted file mode 100644
+index 5408c7f..0000000
+--- a/Godeps/_workspace/src/github.com/stretchr/objx/simple_example_test.go
++++ /dev/null
+@@ -1,41 +0,0 @@
+-package objx
+-
+-import (
+-	"github.com/stretchr/testify/assert"
+-	"testing"
+-)
+-
+-func TestSimpleExample(t *testing.T) {
+-
+-	// build a map from a JSON object
+-	o := MustFromJSON(`{"name":"Mat","foods":["indian","chinese"], "location":{"county":"hobbiton","city":"the shire"}}`)
+-
+-	// Map can be used as a straight map[string]interface{}
+-	assert.Equal(t, o["name"], "Mat")
+-
+-	// Get an Value object
+-	v := o.Get("name")
+-	assert.Equal(t, v, &Value{data: "Mat"})
+-
+-	// Test the contained value
+-	assert.False(t, v.IsInt())
+-	assert.False(t, v.IsBool())
+-	assert.True(t, v.IsStr())
+-
+-	// Get the contained value
+-	assert.Equal(t, v.Str(), "Mat")
+-
+-	// Get a default value if the contained value is not of the expected type or does not exist
+-	assert.Equal(t, 1, v.Int(1))
+-
+-	// Get a value by using array notation
+-	assert.Equal(t, "indian", o.Get("foods[0]").Data())
+-
+-	// Set a value by using array notation
+-	o.Set("foods[0]", "italian")
+-	assert.Equal(t, "italian", o.Get("foods[0]").Str())
+-
+-	// Get a value by using dot notation
+-	assert.Equal(t, "hobbiton", o.Get("location.county").Str())
+-
+-}
+diff --git a/Godeps/_workspace/src/github.com/stretchr/objx/tests.go b/Godeps/_workspace/src/github.com/stretchr/objx/tests.go
+deleted file mode 100644
+index d9e0b47..0000000
+--- a/Godeps/_workspace/src/github.com/stretchr/objx/tests.go
++++ /dev/null
+@@ -1,17 +0,0 @@
+-package objx
+-
+-// Has gets whether there is something at the specified selector
+-// or not.
+-//
+-// If m is nil, Has will always return false.
+-func (m Map) Has(selector string) bool {
+-	if m == nil {
+-		return false
+-	}
+-	return !m.Get(selector).IsNil()
+-}
+-
+-// IsNil gets whether the data is nil or not.
+-func (v *Value) IsNil() bool {
+-	return v == nil || v.data == nil
+-}
+diff --git a/Godeps/_workspace/src/github.com/stretchr/objx/tests_test.go b/Godeps/_workspace/src/github.com/stretchr/objx/tests_test.go
+deleted file mode 100644
+index bcc1eb0..0000000
+--- a/Godeps/_workspace/src/github.com/stretchr/objx/tests_test.go
++++ /dev/null
+@@ -1,24 +0,0 @@
+-package objx
+-
+-import (
+-	"github.com/stretchr/testify/assert"
+-	"testing"
+-)
+-
+-func TestHas(t *testing.T) {
+-
+-	m := New(TestMap)
+-
+-	assert.True(t, m.Has("name"))
+-	assert.True(t, m.Has("address.state"))
+-	assert.True(t, m.Has("numbers[4]"))
+-
+-	assert.False(t, m.Has("address.state.nope"))
+-	assert.False(t, m.Has("address.nope"))
+-	assert.False(t, m.Has("nope"))
+-	assert.False(t, m.Has("numbers[5]"))
+-
+-	m = nil
+-	assert.False(t, m.Has("nothing"))
+-
+-}
+diff --git a/Godeps/_workspace/src/github.com/stretchr/objx/type_specific_codegen.go b/Godeps/_workspace/src/github.com/stretchr/objx/type_specific_codegen.go
+deleted file mode 100644
+index f3ecb29..0000000
+--- a/Godeps/_workspace/src/github.com/stretchr/objx/type_specific_codegen.go
++++ /dev/null
+@@ -1,2881 +0,0 @@
+-package objx
+-
+-/*
+-	Inter (interface{} and []interface{})
+-	--------------------------------------------------
+-*/
+-
+-// Inter gets the value as a interface{}, returns the optionalDefault
+-// value or a system default object if the value is the wrong type.
+-func (v *Value) Inter(optionalDefault ...interface{}) interface{} {
+-	if s, ok := v.data.(interface{}); ok {
+-		return s
+-	}
+-	if len(optionalDefault) == 1 {
+-		return optionalDefault[0]
+-	}
+-	return nil
+-}
+-
+-// MustInter gets the value as a interface{}.
+-//
+-// Panics if the object is not a interface{}.
+-func (v *Value) MustInter() interface{} {
+-	return v.data.(interface{})
+-}
+-
+-// InterSlice gets the value as a []interface{}, returns the optionalDefault
+-// value or nil if the value is not a []interface{}.
+-func (v *Value) InterSlice(optionalDefault ...[]interface{}) []interface{} {
+-	if s, ok := v.data.([]interface{}); ok {
+-		return s
+-	}
+-	if len(optionalDefault) == 1 {
+-		return optionalDefault[0]
+-	}
+-	return nil
+-}
+-
+-// MustInterSlice gets the value as a []interface{}.
+-//
+-// Panics if the object is not a []interface{}.
+-func (v *Value) MustInterSlice() []interface{} {
+-	return v.data.([]interface{})
+-}
+-
+-// IsInter gets whether the object contained is a interface{} or not.
+-func (v *Value) IsInter() bool {
+-	_, ok := v.data.(interface{})
+-	return ok
+-}
+-
+-// IsInterSlice gets whether the object contained is a []interface{} or not.
+-func (v *Value) IsInterSlice() bool {
+-	_, ok := v.data.([]interface{})
+-	return ok
+-}
+-
+-// EachInter calls the specified callback for each object
+-// in the []interface{}.
+-//
+-// Panics if the object is the wrong type.
+-func (v *Value) EachInter(callback func(int, interface{}) bool) *Value {
+-
+-	for index, val := range v.MustInterSlice() {
+-		carryon := callback(index, val)
+-		if carryon == false {
+-			break
+-		}
+-	}
+-
+-	return v
+-
+-}
+-
+-// WhereInter uses the specified decider function to select items
+-// from the []interface{}.  The object contained in the result will contain
+-// only the selected items.
+-func (v *Value) WhereInter(decider func(int, interface{}) bool) *Value {
+-
+-	var selected []interface{}
+-
+-	v.EachInter(func(index int, val interface{}) bool {
+-		shouldSelect := decider(index, val)
+-		if shouldSelect == false {
+-			selected = append(selected, val)
+-		}
+-		return true
+-	})
+-
+-	return &Value{data: selected}
+-
+-}
+-
+-// GroupInter uses the specified grouper function to group the items
+-// keyed by the return of the grouper.  The object contained in the
+-// result will contain a map[string][]interface{}.
+-func (v *Value) GroupInter(grouper func(int, interface{}) string) *Value {
+-
+-	groups := make(map[string][]interface{})
+-
+-	v.EachInter(func(index int, val interface{}) bool {
+-		group := grouper(index, val)
+-		if _, ok := groups[group]; !ok {
+-			groups[group] = make([]interface{}, 0)
+-		}
+-		groups[group] = append(groups[group], val)
+-		return true
+-	})
+-
+-	return &Value{data: groups}
+-
+-}
+-
+-// ReplaceInter uses the specified function to replace each interface{}s
+-// by iterating each item.  The data in the returned result will be a
+-// []interface{} containing the replaced items.
+-func (v *Value) ReplaceInter(replacer func(int, interface{}) interface{}) *Value {
+-
+-	arr := v.MustInterSlice()
+-	replaced := make([]interface{}, len(arr))
+-
+-	v.EachInter(func(index int, val interface{}) bool {
+-		replaced[index] = replacer(index, val)
+-		return true
+-	})
+-
+-	return &Value{data: replaced}
+-
+-}
+-
+-// CollectInter uses the specified collector function to collect a value
+-// for each of the interface{}s in the slice.  The data returned will be a
+-// []interface{}.
+-func (v *Value) CollectInter(collector func(int, interface{}) interface{}) *Value {
+-
+-	arr := v.MustInterSlice()
+-	collected := make([]interface{}, len(arr))
+-
+-	v.EachInter(func(index int, val interface{}) bool {
+-		collected[index] = collector(index, val)
+-		return true
+-	})
+-
+-	return &Value{data: collected}
+-}
+-
+-/*
+-	MSI (map[string]interface{} and []map[string]interface{})
+-	--------------------------------------------------
+-*/
+-
+-// MSI gets the value as a map[string]interface{}, returns the optionalDefault
+-// value or a system default object if the value is the wrong type.
+-func (v *Value) MSI(optionalDefault ...map[string]interface{}) map[string]interface{} {
+-	if s, ok := v.data.(map[string]interface{}); ok {
+-		return s
+-	}
+-	if len(optionalDefault) == 1 {
+-		return optionalDefault[0]
+-	}
+-	return nil
+-}
+-
+-// MustMSI gets the value as a map[string]interface{}.
+-//
+-// Panics if the object is not a map[string]interface{}.
+-func (v *Value) MustMSI() map[string]interface{} {
+-	return v.data.(map[string]interface{})
+-}
+-
+-// MSISlice gets the value as a []map[string]interface{}, returns the optionalDefault
+-// value or nil if the value is not a []map[string]interface{}.
+-func (v *Value) MSISlice(optionalDefault ...[]map[string]interface{}) []map[string]interface{} {
+-	if s, ok := v.data.([]map[string]interface{}); ok {
+-		return s
+-	}
+-	if len(optionalDefault) == 1 {
+-		return optionalDefault[0]
+-	}
+-	return nil
+-}
+-
+-// MustMSISlice gets the value as a []map[string]interface{}.
+-//
+-// Panics if the object is not a []map[string]interface{}.
+-func (v *Value) MustMSISlice() []map[string]interface{} {
+-	return v.data.([]map[string]interface{})
+-}
+-
+-// IsMSI gets whether the object contained is a map[string]interface{} or not.
+-func (v *Value) IsMSI() bool {
+-	_, ok := v.data.(map[string]interface{})
+-	return ok
+-}
+-
+-// IsMSISlice gets whether the object contained is a []map[string]interface{} or not.
+-func (v *Value) IsMSISlice() bool {
+-	_, ok := v.data.([]map[string]interface{})
+-	return ok
+-}
+-
+-// EachMSI calls the specified callback for each object
+-// in the []map[string]interface{}.
+-//
+-// Panics if the object is the wrong type.
+-func (v *Value) EachMSI(callback func(int, map[string]interface{}) bool) *Value {
+-
+-	for index, val := range v.MustMSISlice() {
+-		carryon := callback(index, val)
+-		if carryon == false {
+-			break
+-		}
+-	}
+-
+-	return v
+-
+-}
+-
+-// WhereMSI uses the specified decider function to select items
+-// from the []map[string]interface{}.  The object contained in the result will contain
+-// only the selected items.
+-func (v *Value) WhereMSI(decider func(int, map[string]interface{}) bool) *Value {
+-
+-	var selected []map[string]interface{}
+-
+-	v.EachMSI(func(index int, val map[string]interface{}) bool {
+-		shouldSelect := decider(index, val)
+-		if shouldSelect == false {
+-			selected = append(selected, val)
+-		}
+-		return true
+-	})
+-
+-	return &Value{data: selected}
+-
+-}
+-
+-// GroupMSI uses the specified grouper function to group the items
+-// keyed by the return of the grouper.  The object contained in the
+-// result will contain a map[string][]map[string]interface{}.
+-func (v *Value) GroupMSI(grouper func(int, map[string]interface{}) string) *Value {
+-
+-	groups := make(map[string][]map[string]interface{})
+-
+-	v.EachMSI(func(index int, val map[string]interface{}) bool {
+-		group := grouper(index, val)
+-		if _, ok := groups[group]; !ok {
+-			groups[group] = make([]map[string]interface{}, 0)
+-		}
+-		groups[group] = append(groups[group], val)
+-		return true
+-	})
+-
+-	return &Value{data: groups}
+-
+-}
+-
+-// ReplaceMSI uses the specified function to replace each map[string]interface{}s
+-// by iterating each item.  The data in the returned result will be a
+-// []map[string]interface{} containing the replaced items.
+-func (v *Value) ReplaceMSI(replacer func(int, map[string]interface{}) map[string]interface{}) *Value {
+-
+-	arr := v.MustMSISlice()
+-	replaced := make([]map[string]interface{}, len(arr))
+-
+-	v.EachMSI(func(index int, val map[string]interface{}) bool {
+-		replaced[index] = replacer(index, val)
+-		return true
+-	})
+-
+-	return &Value{data: replaced}
+-
+-}
+-
+-// CollectMSI uses the specified collector function to collect a value
+-// for each of the map[string]interface{}s in the slice.  The data returned will be a
+-// []interface{}.
+-func (v *Value) CollectMSI(collector func(int, map[string]interface{}) interface{}) *Value {
+-
+-	arr := v.MustMSISlice()
+-	collected := make([]interface{}, len(arr))
+-
+-	v.EachMSI(func(index int, val map[string]interface{}) bool {
+-		collected[index] = collector(index, val)
+-		return true
+-	})
+-
+-	return &Value{data: collected}
+-}
+-
+-/*
+-	ObjxMap ((Map) and [](Map))
+-	--------------------------------------------------
+-*/
+-
+-// ObjxMap gets the value as a (Map), returns the optionalDefault
+-// value or a system default object if the value is the wrong type.
+-func (v *Value) ObjxMap(optionalDefault ...(Map)) Map {
+-	if s, ok := v.data.((Map)); ok {
+-		return s
+-	}
+-	if len(optionalDefault) == 1 {
+-		return optionalDefault[0]
+-	}
+-	return New(nil)
+-}
+-
+-// MustObjxMap gets the value as a (Map).
+-//
+-// Panics if the object is not a (Map).
+-func (v *Value) MustObjxMap() Map {
+-	return v.data.((Map))
+-}
+-
+-// ObjxMapSlice gets the value as a [](Map), returns the optionalDefault
+-// value or nil if the value is not a [](Map).
+-func (v *Value) ObjxMapSlice(optionalDefault ...[](Map)) [](Map) {
+-	if s, ok := v.data.([](Map)); ok {
+-		return s
+-	}
+-	if len(optionalDefault) == 1 {
+-		return optionalDefault[0]
+-	}
+-	return nil
+-}
+-
+-// MustObjxMapSlice gets the value as a [](Map).
+-//
+-// Panics if the object is not a [](Map).
+-func (v *Value) MustObjxMapSlice() [](Map) {
+-	return v.data.([](Map))
+-}
+-
+-// IsObjxMap gets whether the object contained is a (Map) or not.
+-func (v *Value) IsObjxMap() bool {
+-	_, ok := v.data.((Map))
+-	return ok
+-}
+-
+-// IsObjxMapSlice gets whether the object contained is a [](Map) or not.
+-func (v *Value) IsObjxMapSlice() bool {
+-	_, ok := v.data.([](Map))
+-	return ok
+-}
+-
+-// EachObjxMap calls the specified callback for each object
+-// in the [](Map).
+-//
+-// Panics if the object is the wrong type.
+-func (v *Value) EachObjxMap(callback func(int, Map) bool) *Value {
+-
+-	for index, val := range v.MustObjxMapSlice() {
+-		carryon := callback(index, val)
+-		if carryon == false {
+-			break
+-		}
+-	}
+-
+-	return v
+-
+-}
+-
+-// WhereObjxMap uses the specified decider function to select items
+-// from the [](Map).  The object contained in the result will contain
+-// only the selected items.
+-func (v *Value) WhereObjxMap(decider func(int, Map) bool) *Value {
+-
+-	var selected [](Map)
+-
+-	v.EachObjxMap(func(index int, val Map) bool {
+-		shouldSelect := decider(index, val)
+-		if shouldSelect == false {
+-			selected = append(selected, val)
+-		}
+-		return true
+-	})
+-
+-	return &Value{data: selected}
+-
+-}
+-
+-// GroupObjxMap uses the specified grouper function to group the items
+-// keyed by the return of the grouper.  The object contained in the
+-// result will contain a map[string][](Map).
+-func (v *Value) GroupObjxMap(grouper func(int, Map) string) *Value {
+-
+-	groups := make(map[string][](Map))
+-
+-	v.EachObjxMap(func(index int, val Map) bool {
+-		group := grouper(index, val)
+-		if _, ok := groups[group]; !ok {
+-			groups[group] = make([](Map), 0)
+-		}
+-		groups[group] = append(groups[group], val)
+-		return true
+-	})
+-
+-	return &Value{data: groups}
+-
+-}
+-
+-// ReplaceObjxMap uses the specified function to replace each (Map)s
+-// by iterating each item.  The data in the returned result will be a
+-// [](Map) containing the replaced items.
+-func (v *Value) ReplaceObjxMap(replacer func(int, Map) Map) *Value {
+-
+-	arr := v.MustObjxMapSlice()
+-	replaced := make([](Map), len(arr))
+-
+-	v.EachObjxMap(func(index int, val Map) bool {
+-		replaced[index] = replacer(index, val)
+-		return true
+-	})
+-
+-	return &Value{data: replaced}
+-
+-}
+-
+-// CollectObjxMap uses the specified collector function to collect a value
+-// for each of the (Map)s in the slice.  The data returned will be a
+-// []interface{}.
+-func (v *Value) CollectObjxMap(collector func(int, Map) interface{}) *Value {
+-
+-	arr := v.MustObjxMapSlice()
+-	collected := make([]interface{}, len(arr))
+-
+-	v.EachObjxMap(func(index int, val Map) bool {
+-		collected[index] = collector(index, val)
+-		return true
+-	})
+-
+-	return &Value{data: collected}
+-}
+-
+-/*
+-	Bool (bool and []bool)
+-	--------------------------------------------------
+-*/
+-
+-// Bool gets the value as a bool, returns the optionalDefault
+-// value or a system default object if the value is the wrong type.
+-func (v *Value) Bool(optionalDefault ...bool) bool {
+-	if s, ok := v.data.(bool); ok {
+-		return s
+-	}
+-	if len(optionalDefault) == 1 {
+-		return optionalDefault[0]
+-	}
+-	return false
+-}
+-
+-// MustBool gets the value as a bool.
+-//
+-// Panics if the object is not a bool.
+-func (v *Value) MustBool() bool {
+-	return v.data.(bool)
+-}
+-
+-// BoolSlice gets the value as a []bool, returns the optionalDefault
+-// value or nil if the value is not a []bool.
+-func (v *Value) BoolSlice(optionalDefault ...[]bool) []bool {
+-	if s, ok := v.data.([]bool); ok {
+-		return s
+-	}
+-	if len(optionalDefault) == 1 {
+-		return optionalDefault[0]
+-	}
+-	return nil
+-}
+-
+-// MustBoolSlice gets the value as a []bool.
+-//
+-// Panics if the object is not a []bool.
+-func (v *Value) MustBoolSlice() []bool {
+-	return v.data.([]bool)
+-}
+-
+-// IsBool gets whether the object contained is a bool or not.
+-func (v *Value) IsBool() bool {
+-	_, ok := v.data.(bool)
+-	return ok
+-}
+-
+-// IsBoolSlice gets whether the object contained is a []bool or not.
+-func (v *Value) IsBoolSlice() bool {
+-	_, ok := v.data.([]bool)
+-	return ok
+-}
+-
+-// EachBool calls the specified callback for each object
+-// in the []bool.
+-//
+-// Panics if the object is the wrong type.
+-func (v *Value) EachBool(callback func(int, bool) bool) *Value {
+-
+-	for index, val := range v.MustBoolSlice() {
+-		carryon := callback(index, val)
+-		if carryon == false {
+-			break
+-		}
+-	}
+-
+-	return v
+-
+-}
+-
+-// WhereBool uses the specified decider function to select items
+-// from the []bool.  The object contained in the result will contain
+-// only the selected items.
+-func (v *Value) WhereBool(decider func(int, bool) bool) *Value {
+-
+-	var selected []bool
+-
+-	v.EachBool(func(index int, val bool) bool {
+-		shouldSelect := decider(index, val)
+-		if shouldSelect == false {
+-			selected = append(selected, val)
+-		}
+-		return true
+-	})
+-
+-	return &Value{data: selected}
+-
+-}
+-
+-// GroupBool uses the specified grouper function to group the items
+-// keyed by the return of the grouper.  The object contained in the
+-// result will contain a map[string][]bool.
+-func (v *Value) GroupBool(grouper func(int, bool) string) *Value {
+-
+-	groups := make(map[string][]bool)
+-
+-	v.EachBool(func(index int, val bool) bool {
+-		group := grouper(index, val)
+-		if _, ok := groups[group]; !ok {
+-			groups[group] = make([]bool, 0)
+-		}
+-		groups[group] = append(groups[group], val)
+-		return true
+-	})
+-
+-	return &Value{data: groups}
+-
+-}
+-
+-// ReplaceBool uses the specified function to replace each bools
+-// by iterating each item.  The data in the returned result will be a
+-// []bool containing the replaced items.
+-func (v *Value) ReplaceBool(replacer func(int, bool) bool) *Value {
+-
+-	arr := v.MustBoolSlice()
+-	replaced := make([]bool, len(arr))
+-
+-	v.EachBool(func(index int, val bool) bool {
+-		replaced[index] = replacer(index, val)
+-		return true
+-	})
+-
+-	return &Value{data: replaced}
+-
+-}
+-
+-// CollectBool uses the specified collector function to collect a value
+-// for each of the bools in the slice.  The data returned will be a
+-// []interface{}.
+-func (v *Value) CollectBool(collector func(int, bool) interface{}) *Value {
+-
+-	arr := v.MustBoolSlice()
+-	collected := make([]interface{}, len(arr))
+-
+-	v.EachBool(func(index int, val bool) bool {
+-		collected[index] = collector(index, val)
+-		return true
+-	})
+-
+-	return &Value{data: collected}
+-}
+-
+-/*
+-	Str (string and []string)
+-	--------------------------------------------------
+-*/
+-
+-// Str gets the value as a string, returns the optionalDefault
+-// value or a system default object if the value is the wrong type.
+-func (v *Value) Str(optionalDefault ...string) string {
+-	if s, ok := v.data.(string); ok {
+-		return s
+-	}
+-	if len(optionalDefault) == 1 {
+-		return optionalDefault[0]
+-	}
+-	return ""
+-}
+-
+-// MustStr gets the value as a string.
+-//
+-// Panics if the object is not a string.
+-func (v *Value) MustStr() string {
+-	return v.data.(string)
+-}
+-
+-// StrSlice gets the value as a []string, returns the optionalDefault
+-// value or nil if the value is not a []string.
+-func (v *Value) StrSlice(optionalDefault ...[]string) []string {
+-	if s, ok := v.data.([]string); ok {
+-		return s
+-	}
+-	if len(optionalDefault) == 1 {
+-		return optionalDefault[0]
+-	}
+-	return nil
+-}
+-
+-// MustStrSlice gets the value as a []string.
+-//
+-// Panics if the object is not a []string.
+-func (v *Value) MustStrSlice() []string {
+-	return v.data.([]string)
+-}
+-
+-// IsStr gets whether the object contained is a string or not.
+-func (v *Value) IsStr() bool {
+-	_, ok := v.data.(string)
+-	return ok
+-}
+-
+-// IsStrSlice gets whether the object contained is a []string or not.
+-func (v *Value) IsStrSlice() bool {
+-	_, ok := v.data.([]string)
+-	return ok
+-}
+-
+-// EachStr calls the specified callback for each object
+-// in the []string.
+-//
+-// Panics if the object is the wrong type.
+-func (v *Value) EachStr(callback func(int, string) bool) *Value {
+-
+-	for index, val := range v.MustStrSlice() {
+-		carryon := callback(index, val)
+-		if carryon == false {
+-			break
+-		}
+-	}
+-
+-	return v
+-
+-}
+-
+-// WhereStr uses the specified decider function to select items
+-// from the []string.  The object contained in the result will contain
+-// only the selected items.
+-func (v *Value) WhereStr(decider func(int, string) bool) *Value {
+-
+-	var selected []string
+-
+-	v.EachStr(func(index int, val string) bool {
+-		shouldSelect := decider(index, val)
+-		if shouldSelect == false {
+-			selected = append(selected, val)
+-		}
+-		return true
+-	})
+-
+-	return &Value{data: selected}
+-
+-}
+-
+-// GroupStr uses the specified grouper function to group the items
+-// keyed by the return of the grouper.  The object contained in the
+-// result will contain a map[string][]string.
+-func (v *Value) GroupStr(grouper func(int, string) string) *Value {
+-
+-	groups := make(map[string][]string)
+-
+-	v.EachStr(func(index int, val string) bool {
+-		group := grouper(index, val)
+-		if _, ok := groups[group]; !ok {
+-			groups[group] = make([]string, 0)
+-		}
+-		groups[group] = append(groups[group], val)
+-		return true
+-	})
+-
+-	return &Value{data: groups}
+-
+-}
+-
+-// ReplaceStr uses the specified function to replace each strings
+-// by iterating each item.  The data in the returned result will be a
+-// []string containing the replaced items.
+-func (v *Value) ReplaceStr(replacer func(int, string) string) *Value {
+-
+-	arr := v.MustStrSlice()
+-	replaced := make([]string, len(arr))
+-
+-	v.EachStr(func(index int, val string) bool {
+-		replaced[index] = replacer(index, val)
+-		return true
+-	})
+-
+-	return &Value{data: replaced}
+-
+-}
+-
+-// CollectStr uses the specified collector function to collect a value
+-// for each of the strings in the slice.  The data returned will be a
+-// []interface{}.
+-func (v *Value) CollectStr(collector func(int, string) interface{}) *Value {
+-
+-	arr := v.MustStrSlice()
+-	collected := make([]interface{}, len(arr))
+-
+-	v.EachStr(func(index int, val string) bool {
+-		collected[index] = collector(index, val)
+-		return true
+-	})
+-
+-	return &Value{data: collected}
+-}
+-
+-/*
+-	Int (int and []int)
+-	--------------------------------------------------
+-*/
+-
+-// Int gets the value as a int, returns the optionalDefault
+-// value or a system default object if the value is the wrong type.
+-func (v *Value) Int(optionalDefault ...int) int {
+-	if s, ok := v.data.(int); ok {
+-		return s
+-	}
+-	if len(optionalDefault) == 1 {
+-		return optionalDefault[0]
+-	}
+-	return 0
+-}
+-
+-// MustInt gets the value as a int.
+-//
+-// Panics if the object is not a int.
+-func (v *Value) MustInt() int {
+-	return v.data.(int)
+-}
+-
+-// IntSlice gets the value as a []int, returns the optionalDefault
+-// value or nil if the value is not a []int.
+-func (v *Value) IntSlice(optionalDefault ...[]int) []int {
+-	if s, ok := v.data.([]int); ok {
+-		return s
+-	}
+-	if len(optionalDefault) == 1 {
+-		return optionalDefault[0]
+-	}
+-	return nil
+-}
+-
+-// MustIntSlice gets the value as a []int.
+-//
+-// Panics if the object is not a []int.
+-func (v *Value) MustIntSlice() []int {
+-	return v.data.([]int)
+-}
+-
+-// IsInt gets whether the object contained is a int or not.
+-func (v *Value) IsInt() bool {
+-	_, ok := v.data.(int)
+-	return ok
+-}
+-
+-// IsIntSlice gets whether the object contained is a []int or not.
+-func (v *Value) IsIntSlice() bool {
+-	_, ok := v.data.([]int)
+-	return ok
+-}
+-
+-// EachInt calls the specified callback for each object
+-// in the []int.
+-//
+-// Panics if the object is the wrong type.
+-func (v *Value) EachInt(callback func(int, int) bool) *Value {
+-
+-	for index, val := range v.MustIntSlice() {
+-		carryon := callback(index, val)
+-		if carryon == false {
+-			break
+-		}
+-	}
+-
+-	return v
+-
+-}
+-
+-// WhereInt uses the specified decider function to select items
+-// from the []int.  The object contained in the result will contain
+-// only the selected items.
+-func (v *Value) WhereInt(decider func(int, int) bool) *Value {
+-
+-	var selected []int
+-
+-	v.EachInt(func(index int, val int) bool {
+-		shouldSelect := decider(index, val)
+-		if shouldSelect == false {
+-			selected = append(selected, val)
+-		}
+-		return true
+-	})
+-
+-	return &Value{data: selected}
+-
+-}
+-
+-// GroupInt uses the specified grouper function to group the items
+-// keyed by the return of the grouper.  The object contained in the
+-// result will contain a map[string][]int.
+-func (v *Value) GroupInt(grouper func(int, int) string) *Value {
+-
+-	groups := make(map[string][]int)
+-
+-	v.EachInt(func(index int, val int) bool {
+-		group := grouper(index, val)
+-		if _, ok := groups[group]; !ok {
+-			groups[group] = make([]int, 0)
+-		}
+-		groups[group] = append(groups[group], val)
+-		return true
+-	})
+-
+-	return &Value{data: groups}
+-
+-}
+-
+-// ReplaceInt uses the specified function to replace each ints
+-// by iterating each item.  The data in the returned result will be a
+-// []int containing the replaced items.
+-func (v *Value) ReplaceInt(replacer func(int, int) int) *Value {
+-
+-	arr := v.MustIntSlice()
+-	replaced := make([]int, len(arr))
+-
+-	v.EachInt(func(index int, val int) bool {
+-		replaced[index] = replacer(index, val)
+-		return true
+-	})
+-
+-	return &Value{data: replaced}
+-
+-}
+-
+-// CollectInt uses the specified collector function to collect a value
+-// for each of the ints in the slice.  The data returned will be a
+-// []interface{}.
+-func (v *Value) CollectInt(collector func(int, int) interface{}) *Value {
+-
+-	arr := v.MustIntSlice()
+-	collected := make([]interface{}, len(arr))
+-
+-	v.EachInt(func(index int, val int) bool {
+-		collected[index] = collector(index, val)
+-		return true
+-	})
+-
+-	return &Value{data: collected}
+-}
+-
+-/*
+-	Int8 (int8 and []int8)
+-	--------------------------------------------------
+-*/
+-
+-// Int8 gets the value as a int8, returns the optionalDefault
+-// value or a system default object if the value is the wrong type.
+-func (v *Value) Int8(optionalDefault ...int8) int8 {
+-	if s, ok := v.data.(int8); ok {
+-		return s
+-	}
+-	if len(optionalDefault) == 1 {
+-		return optionalDefault[0]
+-	}
+-	return 0
+-}
+-
+-// MustInt8 gets the value as a int8.
+-//
+-// Panics if the object is not a int8.
+-func (v *Value) MustInt8() int8 {
+-	return v.data.(int8)
+-}
+-
+-// Int8Slice gets the value as a []int8, returns the optionalDefault
+-// value or nil if the value is not a []int8.
+-func (v *Value) Int8Slice(optionalDefault ...[]int8) []int8 {
+-	if s, ok := v.data.([]int8); ok {
+-		return s
+-	}
+-	if len(optionalDefault) == 1 {
+-		return optionalDefault[0]
+-	}
+-	return nil
+-}
+-
+-// MustInt8Slice gets the value as a []int8.
+-//
+-// Panics if the object is not a []int8.
+-func (v *Value) MustInt8Slice() []int8 {
+-	return v.data.([]int8)
+-}
+-
+-// IsInt8 gets whether the object contained is a int8 or not.
+-func (v *Value) IsInt8() bool {
+-	_, ok := v.data.(int8)
+-	return ok
+-}
+-
+-// IsInt8Slice gets whether the object contained is a []int8 or not.
+-func (v *Value) IsInt8Slice() bool {
+-	_, ok := v.data.([]int8)
+-	return ok
+-}
+-
+-// EachInt8 calls the specified callback for each object
+-// in the []int8.
+-//
+-// Panics if the object is the wrong type.
+-func (v *Value) EachInt8(callback func(int, int8) bool) *Value {
+-
+-	for index, val := range v.MustInt8Slice() {
+-		carryon := callback(index, val)
+-		if carryon == false {
+-			break
+-		}
+-	}
+-
+-	return v
+-
+-}
+-
+-// WhereInt8 uses the specified decider function to select items
+-// from the []int8.  The object contained in the result will contain
+-// only the selected items.
+-func (v *Value) WhereInt8(decider func(int, int8) bool) *Value {
+-
+-	var selected []int8
+-
+-	v.EachInt8(func(index int, val int8) bool {
+-		shouldSelect := decider(index, val)
+-		if shouldSelect == false {
+-			selected = append(selected, val)
+-		}
+-		return true
+-	})
+-
+-	return &Value{data: selected}
+-
+-}
+-
+-// GroupInt8 uses the specified grouper function to group the items
+-// keyed by the return of the grouper.  The object contained in the
+-// result will contain a map[string][]int8.
+-func (v *Value) GroupInt8(grouper func(int, int8) string) *Value {
+-
+-	groups := make(map[string][]int8)
+-
+-	v.EachInt8(func(index int, val int8) bool {
+-		group := grouper(index, val)
+-		if _, ok := groups[group]; !ok {
+-			groups[group] = make([]int8, 0)
+-		}
+-		groups[group] = append(groups[group], val)
+-		return true
+-	})
+-
+-	return &Value{data: groups}
+-
+-}
+-
+-// ReplaceInt8 uses the specified function to replace each int8s
+-// by iterating each item.  The data in the returned result will be a
+-// []int8 containing the replaced items.
+-func (v *Value) ReplaceInt8(replacer func(int, int8) int8) *Value {
+-
+-	arr := v.MustInt8Slice()
+-	replaced := make([]int8, len(arr))
+-
+-	v.EachInt8(func(index int, val int8) bool {
+-		replaced[index] = replacer(index, val)
+-		return true
+-	})
+-
+-	return &Value{data: replaced}
+-
+-}
+-
+-// CollectInt8 uses the specified collector function to collect a value
+-// for each of the int8s in the slice.  The data returned will be a
+-// []interface{}.
+-func (v *Value) CollectInt8(collector func(int, int8) interface{}) *Value {
+-
+-	arr := v.MustInt8Slice()
+-	collected := make([]interface{}, len(arr))
+-
+-	v.EachInt8(func(index int, val int8) bool {
+-		collected[index] = collector(index, val)
+-		return true
+-	})
+-
+-	return &Value{data: collected}
+-}
+-
+-/*
+-	Int16 (int16 and []int16)
+-	--------------------------------------------------
+-*/
+-
+-// Int16 gets the value as a int16, returns the optionalDefault
+-// value or a system default object if the value is the wrong type.
+-func (v *Value) Int16(optionalDefault ...int16) int16 {
+-	if s, ok := v.data.(int16); ok {
+-		return s
+-	}
+-	if len(optionalDefault) == 1 {
+-		return optionalDefault[0]
+-	}
+-	return 0
+-}
+-
+-// MustInt16 gets the value as a int16.
+-//
+-// Panics if the object is not a int16.
+-func (v *Value) MustInt16() int16 {
+-	return v.data.(int16)
+-}
+-
+-// Int16Slice gets the value as a []int16, returns the optionalDefault
+-// value or nil if the value is not a []int16.
+-func (v *Value) Int16Slice(optionalDefault ...[]int16) []int16 {
+-	if s, ok := v.data.([]int16); ok {
+-		return s
+-	}
+-	if len(optionalDefault) == 1 {
+-		return optionalDefault[0]
+-	}
+-	return nil
+-}
+-
+-// MustInt16Slice gets the value as a []int16.
+-//
+-// Panics if the object is not a []int16.
+-func (v *Value) MustInt16Slice() []int16 {
+-	return v.data.([]int16)
+-}
+-
+-// IsInt16 gets whether the object contained is a int16 or not.
+-func (v *Value) IsInt16() bool {
+-	_, ok := v.data.(int16)
+-	return ok
+-}
+-
+-// IsInt16Slice gets whether the object contained is a []int16 or not.
+-func (v *Value) IsInt16Slice() bool {
+-	_, ok := v.data.([]int16)
+-	return ok
+-}
+-
+-// EachInt16 calls the specified callback for each object
+-// in the []int16.
+-//
+-// Panics if the object is the wrong type.
+-func (v *Value) EachInt16(callback func(int, int16) bool) *Value {
+-
+-	for index, val := range v.MustInt16Slice() {
+-		carryon := callback(index, val)
+-		if carryon == false {
+-			break
+-		}
+-	}
+-
+-	return v
+-
+-}
+-
+-// WhereInt16 uses the specified decider function to select items
+-// from the []int16.  The object contained in the result will contain
+-// only the selected items.
+-func (v *Value) WhereInt16(decider func(int, int16) bool) *Value {
+-
+-	var selected []int16
+-
+-	v.EachInt16(func(index int, val int16) bool {
+-		shouldSelect := decider(index, val)
+-		if shouldSelect == false {
+-			selected = append(selected, val)
+-		}
+-		return true
+-	})
+-
+-	return &Value{data: selected}
+-
+-}
+-
+-// GroupInt16 uses the specified grouper function to group the items
+-// keyed by the return of the grouper.  The object contained in the
+-// result will contain a map[string][]int16.
+-func (v *Value) GroupInt16(grouper func(int, int16) string) *Value {
+-
+-	groups := make(map[string][]int16)
+-
+-	v.EachInt16(func(index int, val int16) bool {
+-		group := grouper(index, val)
+-		if _, ok := groups[group]; !ok {
+-			groups[group] = make([]int16, 0)
+-		}
+-		groups[group] = append(groups[group], val)
+-		return true
+-	})
+-
+-	return &Value{data: groups}
+-
+-}
+-
+-// ReplaceInt16 uses the specified function to replace each int16s
+-// by iterating each item.  The data in the returned result will be a
+-// []int16 containing the replaced items.
+-func (v *Value) ReplaceInt16(replacer func(int, int16) int16) *Value {
+-
+-	arr := v.MustInt16Slice()
+-	replaced := make([]int16, len(arr))
+-
+-	v.EachInt16(func(index int, val int16) bool {
+-		replaced[index] = replacer(index, val)
+-		return true
+-	})
+-
+-	return &Value{data: replaced}
+-
+-}
+-
+-// CollectInt16 uses the specified collector function to collect a value
+-// for each of the int16s in the slice.  The data returned will be a
+-// []interface{}.
+-func (v *Value) CollectInt16(collector func(int, int16) interface{}) *Value {
+-
+-	arr := v.MustInt16Slice()
+-	collected := make([]interface{}, len(arr))
+-
+-	v.EachInt16(func(index int, val int16) bool {
+-		collected[index] = collector(index, val)
+-		return true
+-	})
+-
+-	return &Value{data: collected}
+-}
+-
+-/*
+-	Int32 (int32 and []int32)
+-	--------------------------------------------------
+-*/
+-
+-// Int32 gets the value as a int32, returns the optionalDefault
+-// value or a system default object if the value is the wrong type.
+-func (v *Value) Int32(optionalDefault ...int32) int32 {
+-	if s, ok := v.data.(int32); ok {
+-		return s
+-	}
+-	if len(optionalDefault) == 1 {
+-		return optionalDefault[0]
+-	}
+-	return 0
+-}
+-
+-// MustInt32 gets the value as a int32.
+-//
+-// Panics if the object is not a int32.
+-func (v *Value) MustInt32() int32 {
+-	return v.data.(int32)
+-}
+-
+-// Int32Slice gets the value as a []int32, returns the optionalDefault
+-// value or nil if the value is not a []int32.
+-func (v *Value) Int32Slice(optionalDefault ...[]int32) []int32 {
+-	if s, ok := v.data.([]int32); ok {
+-		return s
+-	}
+-	if len(optionalDefault) == 1 {
+-		return optionalDefault[0]
+-	}
+-	return nil
+-}
+-
+-// MustInt32Slice gets the value as a []int32.
+-//
+-// Panics if the object is not a []int32.
+-func (v *Value) MustInt32Slice() []int32 {
+-	return v.data.([]int32)
+-}
+-
+-// IsInt32 gets whether the object contained is a int32 or not.
+-func (v *Value) IsInt32() bool {
+-	_, ok := v.data.(int32)
+-	return ok
+-}
+-
+-// IsInt32Slice gets whether the object contained is a []int32 or not.
+-func (v *Value) IsInt32Slice() bool {
+-	_, ok := v.data.([]int32)
+-	return ok
+-}
+-
+-// EachInt32 calls the specified callback for each object
+-// in the []int32.
+-//
+-// Panics if the object is the wrong type.
+-func (v *Value) EachInt32(callback func(int, int32) bool) *Value {
+-
+-	for index, val := range v.MustInt32Slice() {
+-		carryon := callback(index, val)
+-		if carryon == false {
+-			break
+-		}
+-	}
+-
+-	return v
+-
+-}
+-
+-// WhereInt32 uses the specified decider function to select items
+-// from the []int32.  The object contained in the result will contain
+-// only the selected items.
+-func (v *Value) WhereInt32(decider func(int, int32) bool) *Value {
+-
+-	var selected []int32
+-
+-	v.EachInt32(func(index int, val int32) bool {
+-		shouldSelect := decider(index, val)
+-		if shouldSelect == false {
+-			selected = append(selected, val)
+-		}
+-		return true
+-	})
+-
+-	return &Value{data: selected}
+-
+-}
+-
+-// GroupInt32 uses the specified grouper function to group the items
+-// keyed by the return of the grouper.  The object contained in the
+-// result will contain a map[string][]int32.
+-func (v *Value) GroupInt32(grouper func(int, int32) string) *Value {
+-
+-	groups := make(map[string][]int32)
+-
+-	v.EachInt32(func(index int, val int32) bool {
+-		group := grouper(index, val)
+-		if _, ok := groups[group]; !ok {
+-			groups[group] = make([]int32, 0)
+-		}
+-		groups[group] = append(groups[group], val)
+-		return true
+-	})
+-
+-	return &Value{data: groups}
+-
+-}
+-
+-// ReplaceInt32 uses the specified function to replace each int32s
+-// by iterating each item.  The data in the returned result will be a
+-// []int32 containing the replaced items.
+-func (v *Value) ReplaceInt32(replacer func(int, int32) int32) *Value {
+-
+-	arr := v.MustInt32Slice()
+-	replaced := make([]int32, len(arr))
+-
+-	v.EachInt32(func(index int, val int32) bool {
+-		replaced[index] = replacer(index, val)
+-		return true
+-	})
+-
+-	return &Value{data: replaced}
+-
+-}
+-
+-// CollectInt32 uses the specified collector function to collect a value
+-// for each of the int32s in the slice.  The data returned will be a
+-// []interface{}.
+-func (v *Value) CollectInt32(collector func(int, int32) interface{}) *Value {
+-
+-	arr := v.MustInt32Slice()
+-	collected := make([]interface{}, len(arr))
+-
+-	v.EachInt32(func(index int, val int32) bool {
+-		collected[index] = collector(index, val)
+-		return true
+-	})
+-
+-	return &Value{data: collected}
+-}
+-
+-/*
+-	Int64 (int64 and []int64)
+-	--------------------------------------------------
+-*/
+-
+-// Int64 gets the value as a int64, returns the optionalDefault
+-// value or a system default object if the value is the wrong type.
+-func (v *Value) Int64(optionalDefault ...int64) int64 {
+-	if s, ok := v.data.(int64); ok {
+-		return s
+-	}
+-	if len(optionalDefault) == 1 {
+-		return optionalDefault[0]
+-	}
+-	return 0
+-}
+-
+-// MustInt64 gets the value as a int64.
+-//
+-// Panics if the object is not a int64.
+-func (v *Value) MustInt64() int64 {
+-	return v.data.(int64)
+-}
+-
+-// Int64Slice gets the value as a []int64, returns the optionalDefault
+-// value or nil if the value is not a []int64.
+-func (v *Value) Int64Slice(optionalDefault ...[]int64) []int64 {
+-	if s, ok := v.data.([]int64); ok {
+-		return s
+-	}
+-	if len(optionalDefault) == 1 {
+-		return optionalDefault[0]
+-	}
+-	return nil
+-}
+-
+-// MustInt64Slice gets the value as a []int64.
+-//
+-// Panics if the object is not a []int64.
+-func (v *Value) MustInt64Slice() []int64 {
+-	return v.data.([]int64)
+-}
+-
+-// IsInt64 gets whether the object contained is a int64 or not.
+-func (v *Value) IsInt64() bool {
+-	_, ok := v.data.(int64)
+-	return ok
+-}
+-
+-// IsInt64Slice gets whether the object contained is a []int64 or not.
+-func (v *Value) IsInt64Slice() bool {
+-	_, ok := v.data.([]int64)
+-	return ok
+-}
+-
+-// EachInt64 calls the specified callback for each object
+-// in the []int64.
+-//
+-// Panics if the object is the wrong type.
+-func (v *Value) EachInt64(callback func(int, int64) bool) *Value {
+-
+-	for index, val := range v.MustInt64Slice() {
+-		carryon := callback(index, val)
+-		if carryon == false {
+-			break
+-		}
+-	}
+-
+-	return v
+-
+-}
+-
+-// WhereInt64 uses the specified decider function to select items
+-// from the []int64.  The object contained in the result will contain
+-// only the selected items.
+-func (v *Value) WhereInt64(decider func(int, int64) bool) *Value {
+-
+-	var selected []int64
+-
+-	v.EachInt64(func(index int, val int64) bool {
+-		shouldSelect := decider(index, val)
+-		if shouldSelect == false {
+-			selected = append(selected, val)
+-		}
+-		return true
+-	})
+-
+-	return &Value{data: selected}
+-
+-}
+-
+-// GroupInt64 uses the specified grouper function to group the items
+-// keyed by the return of the grouper.  The object contained in the
+-// result will contain a map[string][]int64.
+-func (v *Value) GroupInt64(grouper func(int, int64) string) *Value {
+-
+-	groups := make(map[string][]int64)
+-
+-	v.EachInt64(func(index int, val int64) bool {
+-		group := grouper(index, val)
+-		if _, ok := groups[group]; !ok {
+-			groups[group] = make([]int64, 0)
+-		}
+-		groups[group] = append(groups[group], val)
+-		return true
+-	})
+-
+-	return &Value{data: groups}
+-
+-}
+-
+-// ReplaceInt64 uses the specified function to replace each int64s
+-// by iterating each item.  The data in the returned result will be a
+-// []int64 containing the replaced items.
+-func (v *Value) ReplaceInt64(replacer func(int, int64) int64) *Value {
+-
+-	arr := v.MustInt64Slice()
+-	replaced := make([]int64, len(arr))
+-
+-	v.EachInt64(func(index int, val int64) bool {
+-		replaced[index] = replacer(index, val)
+-		return true
+-	})
+-
+-	return &Value{data: replaced}
+-
+-}
+-
+-// CollectInt64 uses the specified collector function to collect a value
+-// for each of the int64s in the slice.  The data returned will be a
+-// []interface{}.
+-func (v *Value) CollectInt64(collector func(int, int64) interface{}) *Value {
+-
+-	arr := v.MustInt64Slice()
+-	collected := make([]interface{}, len(arr))
+-
+-	v.EachInt64(func(index int, val int64) bool {
+-		collected[index] = collector(index, val)
+-		return true
+-	})
+-
+-	return &Value{data: collected}
+-}
+-
+-/*
+-	Uint (uint and []uint)
+-	--------------------------------------------------
+-*/
+-
+-// Uint gets the value as a uint, returns the optionalDefault
+-// value or a system default object if the value is the wrong type.
+-func (v *Value) Uint(optionalDefault ...uint) uint {
+-	if s, ok := v.data.(uint); ok {
+-		return s
+-	}
+-	if len(optionalDefault) == 1 {
+-		return optionalDefault[0]
+-	}
+-	return 0
+-}
+-
+-// MustUint gets the value as a uint.
+-//
+-// Panics if the object is not a uint.
+-func (v *Value) MustUint() uint {
+-	return v.data.(uint)
+-}
+-
+-// UintSlice gets the value as a []uint, returns the optionalDefault
+-// value or nil if the value is not a []uint.
+-func (v *Value) UintSlice(optionalDefault ...[]uint) []uint {
+-	if s, ok := v.data.([]uint); ok {
+-		return s
+-	}
+-	if len(optionalDefault) == 1 {
+-		return optionalDefault[0]
+-	}
+-	return nil
+-}
+-
+-// MustUintSlice gets the value as a []uint.
+-//
+-// Panics if the object is not a []uint.
+-func (v *Value) MustUintSlice() []uint {
+-	return v.data.([]uint)
+-}
+-
+-// IsUint gets whether the object contained is a uint or not.
+-func (v *Value) IsUint() bool {
+-	_, ok := v.data.(uint)
+-	return ok
+-}
+-
+-// IsUintSlice gets whether the object contained is a []uint or not.
+-func (v *Value) IsUintSlice() bool {
+-	_, ok := v.data.([]uint)
+-	return ok
+-}
+-
+-// EachUint calls the specified callback for each object
+-// in the []uint.
+-//
+-// Panics if the object is the wrong type.
+-func (v *Value) EachUint(callback func(int, uint) bool) *Value {
+-
+-	for index, val := range v.MustUintSlice() {
+-		carryon := callback(index, val)
+-		if carryon == false {
+-			break
+-		}
+-	}
+-
+-	return v
+-
+-}
+-
+-// WhereUint uses the specified decider function to select items
+-// from the []uint.  The object contained in the result will contain
+-// only the selected items.
+-func (v *Value) WhereUint(decider func(int, uint) bool) *Value {
+-
+-	var selected []uint
+-
+-	v.EachUint(func(index int, val uint) bool {
+-		shouldSelect := decider(index, val)
+-		if shouldSelect == false {
+-			selected = append(selected, val)
+-		}
+-		return true
+-	})
+-
+-	return &Value{data: selected}
+-
+-}
+-
+-// GroupUint uses the specified grouper function to group the items
+-// keyed by the return of the grouper.  The object contained in the
+-// result will contain a map[string][]uint.
+-func (v *Value) GroupUint(grouper func(int, uint) string) *Value {
+-
+-	groups := make(map[string][]uint)
+-
+-	v.EachUint(func(index int, val uint) bool {
+-		group := grouper(index, val)
+-		if _, ok := groups[group]; !ok {
+-			groups[group] = make([]uint, 0)
+-		}
+-		groups[group] = append(groups[group], val)
+-		return true
+-	})
+-
+-	return &Value{data: groups}
+-
+-}
+-
+-// ReplaceUint uses the specified function to replace each uints
+-// by iterating each item.  The data in the returned result will be a
+-// []uint containing the replaced items.
+-func (v *Value) ReplaceUint(replacer func(int, uint) uint) *Value {
+-
+-	arr := v.MustUintSlice()
+-	replaced := make([]uint, len(arr))
+-
+-	v.EachUint(func(index int, val uint) bool {
+-		replaced[index] = replacer(index, val)
+-		return true
+-	})
+-
+-	return &Value{data: replaced}
+-
+-}
+-
+-// CollectUint uses the specified collector function to collect a value
+-// for each of the uints in the slice.  The data returned will be a
+-// []interface{}.
+-func (v *Value) CollectUint(collector func(int, uint) interface{}) *Value {
+-
+-	arr := v.MustUintSlice()
+-	collected := make([]interface{}, len(arr))
+-
+-	v.EachUint(func(index int, val uint) bool {
+-		collected[index] = collector(index, val)
+-		return true
+-	})
+-
+-	return &Value{data: collected}
+-}
+-
+-/*
+-	Uint8 (uint8 and []uint8)
+-	--------------------------------------------------
+-*/
+-
+-// Uint8 gets the value as a uint8, returns the optionalDefault
+-// value or a system default object if the value is the wrong type.
+-func (v *Value) Uint8(optionalDefault ...uint8) uint8 {
+-	if s, ok := v.data.(uint8); ok {
+-		return s
+-	}
+-	if len(optionalDefault) == 1 {
+-		return optionalDefault[0]
+-	}
+-	return 0
+-}
+-
+-// MustUint8 gets the value as a uint8.
+-//
+-// Panics if the object is not a uint8.
+-func (v *Value) MustUint8() uint8 {
+-	return v.data.(uint8)
+-}
+-
+-// Uint8Slice gets the value as a []uint8, returns the optionalDefault
+-// value or nil if the value is not a []uint8.
+-func (v *Value) Uint8Slice(optionalDefault ...[]uint8) []uint8 {
+-	if s, ok := v.data.([]uint8); ok {
+-		return s
+-	}
+-	if len(optionalDefault) == 1 {
+-		return optionalDefault[0]
+-	}
+-	return nil
+-}
+-
+-// MustUint8Slice gets the value as a []uint8.
+-//
+-// Panics if the object is not a []uint8.
+-func (v *Value) MustUint8Slice() []uint8 {
+-	return v.data.([]uint8)
+-}
+-
+-// IsUint8 gets whether the object contained is a uint8 or not.
+-func (v *Value) IsUint8() bool {
+-	_, ok := v.data.(uint8)
+-	return ok
+-}
+-
+-// IsUint8Slice gets whether the object contained is a []uint8 or not.
+-func (v *Value) IsUint8Slice() bool {
+-	_, ok := v.data.([]uint8)
+-	return ok
+-}
+-
+-// EachUint8 calls the specified callback for each object
+-// in the []uint8.
+-//
+-// Panics if the object is the wrong type.
+-func (v *Value) EachUint8(callback func(int, uint8) bool) *Value {
+-
+-	for index, val := range v.MustUint8Slice() {
+-		carryon := callback(index, val)
+-		if carryon == false {
+-			break
+-		}
+-	}
+-
+-	return v
+-
+-}
+-
+-// WhereUint8 uses the specified decider function to select items
+-// from the []uint8.  The object contained in the result will contain
+-// only the selected items.
+-func (v *Value) WhereUint8(decider func(int, uint8) bool) *Value {
+-
+-	var selected []uint8
+-
+-	v.EachUint8(func(index int, val uint8) bool {
+-		shouldSelect := decider(index, val)
+-		if shouldSelect == false {
+-			selected = append(selected, val)
+-		}
+-		return true
+-	})
+-
+-	return &Value{data: selected}
+-
+-}
+-
+-// GroupUint8 uses the specified grouper function to group the items
+-// keyed by the return of the grouper.  The object contained in the
+-// result will contain a map[string][]uint8.
+-func (v *Value) GroupUint8(grouper func(int, uint8) string) *Value {
+-
+-	groups := make(map[string][]uint8)
+-
+-	v.EachUint8(func(index int, val uint8) bool {
+-		group := grouper(index, val)
+-		if _, ok := groups[group]; !ok {
+-			groups[group] = make([]uint8, 0)
+-		}
+-		groups[group] = append(groups[group], val)
+-		return true
+-	})
+-
+-	return &Value{data: groups}
+-
+-}
+-
+-// ReplaceUint8 uses the specified function to replace each uint8s
+-// by iterating each item.  The data in the returned result will be a
+-// []uint8 containing the replaced items.
+-func (v *Value) ReplaceUint8(replacer func(int, uint8) uint8) *Value {
+-
+-	arr := v.MustUint8Slice()
+-	replaced := make([]uint8, len(arr))
+-
+-	v.EachUint8(func(index int, val uint8) bool {
+-		replaced[index] = replacer(index, val)
+-		return true
+-	})
+-
+-	return &Value{data: replaced}
+-
+-}
+-
+-// CollectUint8 uses the specified collector function to collect a value
+-// for each of the uint8s in the slice.  The data returned will be a
+-// []interface{}.
+-func (v *Value) CollectUint8(collector func(int, uint8) interface{}) *Value {
+-
+-	arr := v.MustUint8Slice()
+-	collected := make([]interface{}, len(arr))
+-
+-	v.EachUint8(func(index int, val uint8) bool {
+-		collected[index] = collector(index, val)
+-		return true
+-	})
+-
+-	return &Value{data: collected}
+-}
+-
+-/*
+-	Uint16 (uint16 and []uint16)
+-	--------------------------------------------------
+-*/
+-
+-// Uint16 gets the value as a uint16, returns the optionalDefault
+-// value or a system default object if the value is the wrong type.
+-func (v *Value) Uint16(optionalDefault ...uint16) uint16 {
+-	if s, ok := v.data.(uint16); ok {
+-		return s
+-	}
+-	if len(optionalDefault) == 1 {
+-		return optionalDefault[0]
+-	}
+-	return 0
+-}
+-
+-// MustUint16 gets the value as a uint16.
+-//
+-// Panics if the object is not a uint16.
+-func (v *Value) MustUint16() uint16 {
+-	return v.data.(uint16)
+-}
+-
+-// Uint16Slice gets the value as a []uint16, returns the optionalDefault
+-// value or nil if the value is not a []uint16.
+-func (v *Value) Uint16Slice(optionalDefault ...[]uint16) []uint16 {
+-	if s, ok := v.data.([]uint16); ok {
+-		return s
+-	}
+-	if len(optionalDefault) == 1 {
+-		return optionalDefault[0]
+-	}
+-	return nil
+-}
+-
+-// MustUint16Slice gets the value as a []uint16.
+-//
+-// Panics if the object is not a []uint16.
+-func (v *Value) MustUint16Slice() []uint16 {
+-	return v.data.([]uint16)
+-}
+-
+-// IsUint16 gets whether the object contained is a uint16 or not.
+-func (v *Value) IsUint16() bool {
+-	_, ok := v.data.(uint16)
+-	return ok
+-}
+-
+-// IsUint16Slice gets whether the object contained is a []uint16 or not.
+-func (v *Value) IsUint16Slice() bool {
+-	_, ok := v.data.([]uint16)
+-	return ok
+-}
+-
+-// EachUint16 calls the specified callback for each object
+-// in the []uint16.
+-//
+-// Panics if the object is the wrong type.
+-func (v *Value) EachUint16(callback func(int, uint16) bool) *Value {
+-
+-	for index, val := range v.MustUint16Slice() {
+-		carryon := callback(index, val)
+-		if carryon == false {
+-			break
+-		}
+-	}
+-
+-	return v
+-
+-}
+-
+-// WhereUint16 uses the specified decider function to select items
+-// from the []uint16.  The object contained in the result will contain
+-// only the selected items.
+-func (v *Value) WhereUint16(decider func(int, uint16) bool) *Value {
+-
+-	var selected []uint16
+-
+-	v.EachUint16(func(index int, val uint16) bool {
+-		shouldSelect := decider(index, val)
+-		if shouldSelect == false {
+-			selected = append(selected, val)
+-		}
+-		return true
+-	})
+-
+-	return &Value{data: selected}
+-
+-}
+-
+-// GroupUint16 uses the specified grouper function to group the items
+-// keyed by the return of the grouper.  The object contained in the
+-// result will contain a map[string][]uint16.
+-func (v *Value) GroupUint16(grouper func(int, uint16) string) *Value {
+-
+-	groups := make(map[string][]uint16)
+-
+-	v.EachUint16(func(index int, val uint16) bool {
+-		group := grouper(index, val)
+-		if _, ok := groups[group]; !ok {
+-			groups[group] = make([]uint16, 0)
+-		}
+-		groups[group] = append(groups[group], val)
+-		return true
+-	})
+-
+-	return &Value{data: groups}
+-
+-}
+-
+-// ReplaceUint16 uses the specified function to replace each uint16s
+-// by iterating each item.  The data in the returned result will be a
+-// []uint16 containing the replaced items.
+-func (v *Value) ReplaceUint16(replacer func(int, uint16) uint16) *Value {
+-
+-	arr := v.MustUint16Slice()
+-	replaced := make([]uint16, len(arr))
+-
+-	v.EachUint16(func(index int, val uint16) bool {
+-		replaced[index] = replacer(index, val)
+-		return true
+-	})
+-
+-	return &Value{data: replaced}
+-
+-}
+-
+-// CollectUint16 uses the specified collector function to collect a value
+-// for each of the uint16s in the slice.  The data returned will be a
+-// []interface{}.
+-func (v *Value) CollectUint16(collector func(int, uint16) interface{}) *Value {
+-
+-	arr := v.MustUint16Slice()
+-	collected := make([]interface{}, len(arr))
+-
+-	v.EachUint16(func(index int, val uint16) bool {
+-		collected[index] = collector(index, val)
+-		return true
+-	})
+-
+-	return &Value{data: collected}
+-}
+-
+-/*
+-	Uint32 (uint32 and []uint32)
+-	--------------------------------------------------
+-*/
+-
+-// Uint32 gets the value as a uint32, returns the optionalDefault
+-// value or a system default object if the value is the wrong type.
+-func (v *Value) Uint32(optionalDefault ...uint32) uint32 {
+-	if s, ok := v.data.(uint32); ok {
+-		return s
+-	}
+-	if len(optionalDefault) == 1 {
+-		return optionalDefault[0]
+-	}
+-	return 0
+-}
+-
+-// MustUint32 gets the value as a uint32.
+-//
+-// Panics if the object is not a uint32.
+-func (v *Value) MustUint32() uint32 {
+-	return v.data.(uint32)
+-}
+-
+-// Uint32Slice gets the value as a []uint32, returns the optionalDefault
+-// value or nil if the value is not a []uint32.
+-func (v *Value) Uint32Slice(optionalDefault ...[]uint32) []uint32 {
+-	if s, ok := v.data.([]uint32); ok {
+-		return s
+-	}
+-	if len(optionalDefault) == 1 {
+-		return optionalDefault[0]
+-	}
+-	return nil
+-}
+-
+-// MustUint32Slice gets the value as a []uint32.
+-//
+-// Panics if the object is not a []uint32.
+-func (v *Value) MustUint32Slice() []uint32 {
+-	return v.data.([]uint32)
+-}
+-
+-// IsUint32 gets whether the object contained is a uint32 or not.
+-func (v *Value) IsUint32() bool {
+-	_, ok := v.data.(uint32)
+-	return ok
+-}
+-
+-// IsUint32Slice gets whether the object contained is a []uint32 or not.
+-func (v *Value) IsUint32Slice() bool {
+-	_, ok := v.data.([]uint32)
+-	return ok
+-}
+-
+-// EachUint32 calls the specified callback for each object
+-// in the []uint32.
+-//
+-// Panics if the object is the wrong type.
+-func (v *Value) EachUint32(callback func(int, uint32) bool) *Value {
+-
+-	for index, val := range v.MustUint32Slice() {
+-		carryon := callback(index, val)
+-		if carryon == false {
+-			break
+-		}
+-	}
+-
+-	return v
+-
+-}
+-
+-// WhereUint32 uses the specified decider function to select items
+-// from the []uint32.  The object contained in the result will contain
+-// only the selected items.
+-func (v *Value) WhereUint32(decider func(int, uint32) bool) *Value {
+-
+-	var selected []uint32
+-
+-	v.EachUint32(func(index int, val uint32) bool {
+-		shouldSelect := decider(index, val)
+-		if shouldSelect == false {
+-			selected = append(selected, val)
+-		}
+-		return true
+-	})
+-
+-	return &Value{data: selected}
+-
+-}
+-
+-// GroupUint32 uses the specified grouper function to group the items
+-// keyed by the return of the grouper.  The object contained in the
+-// result will contain a map[string][]uint32.
+-func (v *Value) GroupUint32(grouper func(int, uint32) string) *Value {
+-
+-	groups := make(map[string][]uint32)
+-
+-	v.EachUint32(func(index int, val uint32) bool {
+-		group := grouper(index, val)
+-		if _, ok := groups[group]; !ok {
+-			groups[group] = make([]uint32, 0)
+-		}
+-		groups[group] = append(groups[group], val)
+-		return true
+-	})
+-
+-	return &Value{data: groups}
+-
+-}
+-
+-// ReplaceUint32 uses the specified function to replace each uint32s
+-// by iterating each item.  The data in the returned result will be a
+-// []uint32 containing the replaced items.
+-func (v *Value) ReplaceUint32(replacer func(int, uint32) uint32) *Value {
+-
+-	arr := v.MustUint32Slice()
+-	replaced := make([]uint32, len(arr))
+-
+-	v.EachUint32(func(index int, val uint32) bool {
+-		replaced[index] = replacer(index, val)
+-		return true
+-	})
+-
+-	return &Value{data: replaced}
+-
+-}
+-
+-// CollectUint32 uses the specified collector function to collect a value
+-// for each of the uint32s in the slice.  The data returned will be a
+-// []interface{}.
+-func (v *Value) CollectUint32(collector func(int, uint32) interface{}) *Value {
+-
+-	arr := v.MustUint32Slice()
+-	collected := make([]interface{}, len(arr))
+-
+-	v.EachUint32(func(index int, val uint32) bool {
+-		collected[index] = collector(index, val)
+-		return true
+-	})
+-
+-	return &Value{data: collected}
+-}
+-
+-/*
+-	Uint64 (uint64 and []uint64)
+-	--------------------------------------------------
+-*/
+-
+-// Uint64 gets the value as a uint64, returns the optionalDefault
+-// value or a system default object if the value is the wrong type.
+-func (v *Value) Uint64(optionalDefault ...uint64) uint64 {
+-	if s, ok := v.data.(uint64); ok {
+-		return s
+-	}
+-	if len(optionalDefault) == 1 {
+-		return optionalDefault[0]
+-	}
+-	return 0
+-}
+-
+-// MustUint64 gets the value as a uint64.
+-//
+-// Panics if the object is not a uint64.
+-func (v *Value) MustUint64() uint64 {
+-	return v.data.(uint64)
+-}
+-
+-// Uint64Slice gets the value as a []uint64, returns the optionalDefault
+-// value or nil if the value is not a []uint64.
+-func (v *Value) Uint64Slice(optionalDefault ...[]uint64) []uint64 {
+-	if s, ok := v.data.([]uint64); ok {
+-		return s
+-	}
+-	if len(optionalDefault) == 1 {
+-		return optionalDefault[0]
+-	}
+-	return nil
+-}
+-
+-// MustUint64Slice gets the value as a []uint64.
+-//
+-// Panics if the object is not a []uint64.
+-func (v *Value) MustUint64Slice() []uint64 {
+-	return v.data.([]uint64)
+-}
+-
+-// IsUint64 gets whether the object contained is a uint64 or not.
+-func (v *Value) IsUint64() bool {
+-	_, ok := v.data.(uint64)
+-	return ok
+-}
+-
+-// IsUint64Slice gets whether the object contained is a []uint64 or not.
+-func (v *Value) IsUint64Slice() bool {
+-	_, ok := v.data.([]uint64)
+-	return ok
+-}
+-
+-// EachUint64 calls the specified callback for each object
+-// in the []uint64.
+-//
+-// Panics if the object is the wrong type.
+-func (v *Value) EachUint64(callback func(int, uint64) bool) *Value {
+-
+-	for index, val := range v.MustUint64Slice() {
+-		carryon := callback(index, val)
+-		if carryon == false {
+-			break
+-		}
+-	}
+-
+-	return v
+-
+-}
+-
+-// WhereUint64 uses the specified decider function to select items
+-// from the []uint64.  The object contained in the result will contain
+-// only the selected items.
+-func (v *Value) WhereUint64(decider func(int, uint64) bool) *Value {
+-
+-	var selected []uint64
+-
+-	v.EachUint64(func(index int, val uint64) bool {
+-		shouldSelect := decider(index, val)
+-		if shouldSelect == false {
+-			selected = append(selected, val)
+-		}
+-		return true
+-	})
+-
+-	return &Value{data: selected}
+-
+-}
+-
+-// GroupUint64 uses the specified grouper function to group the items
+-// keyed by the return of the grouper.  The object contained in the
+-// result will contain a map[string][]uint64.
+-func (v *Value) GroupUint64(grouper func(int, uint64) string) *Value {
+-
+-	groups := make(map[string][]uint64)
+-
+-	v.EachUint64(func(index int, val uint64) bool {
+-		group := grouper(index, val)
+-		if _, ok := groups[group]; !ok {
+-			groups[group] = make([]uint64, 0)
+-		}
+-		groups[group] = append(groups[group], val)
+-		return true
+-	})
+-
+-	return &Value{data: groups}
+-
+-}
+-
+-// ReplaceUint64 uses the specified function to replace each uint64s
+-// by iterating each item.  The data in the returned result will be a
+-// []uint64 containing the replaced items.
+-func (v *Value) ReplaceUint64(replacer func(int, uint64) uint64) *Value {
+-
+-	arr := v.MustUint64Slice()
+-	replaced := make([]uint64, len(arr))
+-
+-	v.EachUint64(func(index int, val uint64) bool {
+-		replaced[index] = replacer(index, val)
+-		return true
+-	})
+-
+-	return &Value{data: replaced}
+-
+-}
+-
+-// CollectUint64 uses the specified collector function to collect a value
+-// for each of the uint64s in the slice.  The data returned will be a
+-// []interface{}.
+-func (v *Value) CollectUint64(collector func(int, uint64) interface{}) *Value {
+-
+-	arr := v.MustUint64Slice()
+-	collected := make([]interface{}, len(arr))
+-
+-	v.EachUint64(func(index int, val uint64) bool {
+-		collected[index] = collector(index, val)
+-		return true
+-	})
+-
+-	return &Value{data: collected}
+-}
+-
+-/*
+-	Uintptr (uintptr and []uintptr)
+-	--------------------------------------------------
+-*/
+-
+-// Uintptr gets the value as a uintptr, returns the optionalDefault
+-// value or a system default object if the value is the wrong type.
+-func (v *Value) Uintptr(optionalDefault ...uintptr) uintptr {
+-	if s, ok := v.data.(uintptr); ok {
+-		return s
+-	}
+-	if len(optionalDefault) == 1 {
+-		return optionalDefault[0]
+-	}
+-	return 0
+-}
+-
+-// MustUintptr gets the value as a uintptr.
+-//
+-// Panics if the object is not a uintptr.
+-func (v *Value) MustUintptr() uintptr {
+-	return v.data.(uintptr)
+-}
+-
+-// UintptrSlice gets the value as a []uintptr, returns the optionalDefault
+-// value or nil if the value is not a []uintptr.
+-func (v *Value) UintptrSlice(optionalDefault ...[]uintptr) []uintptr {
+-	if s, ok := v.data.([]uintptr); ok {
+-		return s
+-	}
+-	if len(optionalDefault) == 1 {
+-		return optionalDefault[0]
+-	}
+-	return nil
+-}
+-
+-// MustUintptrSlice gets the value as a []uintptr.
+-//
+-// Panics if the object is not a []uintptr.
+-func (v *Value) MustUintptrSlice() []uintptr {
+-	return v.data.([]uintptr)
+-}
+-
+-// IsUintptr gets whether the object contained is a uintptr or not.
+-func (v *Value) IsUintptr() bool {
+-	_, ok := v.data.(uintptr)
+-	return ok
+-}
+-
+-// IsUintptrSlice gets whether the object contained is a []uintptr or not.
+-func (v *Value) IsUintptrSlice() bool {
+-	_, ok := v.data.([]uintptr)
+-	return ok
+-}
+-
+-// EachUintptr calls the specified callback for each object
+-// in the []uintptr.
+-//
+-// Panics if the object is the wrong type.
+-func (v *Value) EachUintptr(callback func(int, uintptr) bool) *Value {
+-
+-	for index, val := range v.MustUintptrSlice() {
+-		carryon := callback(index, val)
+-		if carryon == false {
+-			break
+-		}
+-	}
+-
+-	return v
+-
+-}
+-
+-// WhereUintptr uses the specified decider function to select items
+-// from the []uintptr.  The object contained in the result will contain
+-// only the selected items.
+-func (v *Value) WhereUintptr(decider func(int, uintptr) bool) *Value {
+-
+-	var selected []uintptr
+-
+-	v.EachUintptr(func(index int, val uintptr) bool {
+-		shouldSelect := decider(index, val)
+-		if shouldSelect == false {
+-			selected = append(selected, val)
+-		}
+-		return true
+-	})
+-
+-	return &Value{data: selected}
+-
+-}
+-
+-// GroupUintptr uses the specified grouper function to group the items
+-// keyed by the return of the grouper.  The object contained in the
+-// result will contain a map[string][]uintptr.
+-func (v *Value) GroupUintptr(grouper func(int, uintptr) string) *Value {
+-
+-	groups := make(map[string][]uintptr)
+-
+-	v.EachUintptr(func(index int, val uintptr) bool {
+-		group := grouper(index, val)
+-		if _, ok := groups[group]; !ok {
+-			groups[group] = make([]uintptr, 0)
+-		}
+-		groups[group] = append(groups[group], val)
+-		return true
+-	})
+-
+-	return &Value{data: groups}
+-
+-}
+-
+-// ReplaceUintptr uses the specified function to replace each uintptrs
+-// by iterating each item.  The data in the returned result will be a
+-// []uintptr containing the replaced items.
+-func (v *Value) ReplaceUintptr(replacer func(int, uintptr) uintptr) *Value {
+-
+-	arr := v.MustUintptrSlice()
+-	replaced := make([]uintptr, len(arr))
+-
+-	v.EachUintptr(func(index int, val uintptr) bool {
+-		replaced[index] = replacer(index, val)
+-		return true
+-	})
+-
+-	return &Value{data: replaced}
+-
+-}
+-
+-// CollectUintptr uses the specified collector function to collect a value
+-// for each of the uintptrs in the slice.  The data returned will be a
+-// []interface{}.
+-func (v *Value) CollectUintptr(collector func(int, uintptr) interface{}) *Value {
+-
+-	arr := v.MustUintptrSlice()
+-	collected := make([]interface{}, len(arr))
+-
+-	v.EachUintptr(func(index int, val uintptr) bool {
+-		collected[index] = collector(index, val)
+-		return true
+-	})
+-
+-	return &Value{data: collected}
+-}
+-
+-/*
+-	Float32 (float32 and []float32)
+-	--------------------------------------------------
+-*/
+-
+-// Float32 gets the value as a float32, returns the optionalDefault
+-// value or a system default object if the value is the wrong type.
+-func (v *Value) Float32(optionalDefault ...float32) float32 {
+-	if s, ok := v.data.(float32); ok {
+-		return s
+-	}
+-	if len(optionalDefault) == 1 {
+-		return optionalDefault[0]
+-	}
+-	return 0
+-}
+-
+-// MustFloat32 gets the value as a float32.
+-//
+-// Panics if the object is not a float32.
+-func (v *Value) MustFloat32() float32 {
+-	return v.data.(float32)
+-}
+-
+-// Float32Slice gets the value as a []float32, returns the optionalDefault
+-// value or nil if the value is not a []float32.
+-func (v *Value) Float32Slice(optionalDefault ...[]float32) []float32 {
+-	if s, ok := v.data.([]float32); ok {
+-		return s
+-	}
+-	if len(optionalDefault) == 1 {
+-		return optionalDefault[0]
+-	}
+-	return nil
+-}
+-
+-// MustFloat32Slice gets the value as a []float32.
+-//
+-// Panics if the object is not a []float32.
+-func (v *Value) MustFloat32Slice() []float32 {
+-	return v.data.([]float32)
+-}
+-
+-// IsFloat32 gets whether the object contained is a float32 or not.
+-func (v *Value) IsFloat32() bool {
+-	_, ok := v.data.(float32)
+-	return ok
+-}
+-
+-// IsFloat32Slice gets whether the object contained is a []float32 or not.
+-func (v *Value) IsFloat32Slice() bool {
+-	_, ok := v.data.([]float32)
+-	return ok
+-}
+-
+-// EachFloat32 calls the specified callback for each object
+-// in the []float32.
+-//
+-// Panics if the object is the wrong type.
+-func (v *Value) EachFloat32(callback func(int, float32) bool) *Value {
+-
+-	for index, val := range v.MustFloat32Slice() {
+-		carryon := callback(index, val)
+-		if carryon == false {
+-			break
+-		}
+-	}
+-
+-	return v
+-
+-}
+-
+-// WhereFloat32 uses the specified decider function to select items
+-// from the []float32.  The object contained in the result will contain
+-// only the selected items.
+-func (v *Value) WhereFloat32(decider func(int, float32) bool) *Value {
+-
+-	var selected []float32
+-
+-	v.EachFloat32(func(index int, val float32) bool {
+-		shouldSelect := decider(index, val)
+-		if shouldSelect == false {
+-			selected = append(selected, val)
+-		}
+-		return true
+-	})
+-
+-	return &Value{data: selected}
+-
+-}
+-
+-// GroupFloat32 uses the specified grouper function to group the items
+-// keyed by the return of the grouper.  The object contained in the
+-// result will contain a map[string][]float32.
+-func (v *Value) GroupFloat32(grouper func(int, float32) string) *Value {
+-
+-	groups := make(map[string][]float32)
+-
+-	v.EachFloat32(func(index int, val float32) bool {
+-		group := grouper(index, val)
+-		if _, ok := groups[group]; !ok {
+-			groups[group] = make([]float32, 0)
+-		}
+-		groups[group] = append(groups[group], val)
+-		return true
+-	})
+-
+-	return &Value{data: groups}
+-
+-}
+-
+-// ReplaceFloat32 uses the specified function to replace each float32s
+-// by iterating each item.  The data in the returned result will be a
+-// []float32 containing the replaced items.
+-func (v *Value) ReplaceFloat32(replacer func(int, float32) float32) *Value {
+-
+-	arr := v.MustFloat32Slice()
+-	replaced := make([]float32, len(arr))
+-
+-	v.EachFloat32(func(index int, val float32) bool {
+-		replaced[index] = replacer(index, val)
+-		return true
+-	})
+-
+-	return &Value{data: replaced}
+-
+-}
+-
+-// CollectFloat32 uses the specified collector function to collect a value
+-// for each of the float32s in the slice.  The data returned will be a
+-// []interface{}.
+-func (v *Value) CollectFloat32(collector func(int, float32) interface{}) *Value {
+-
+-	arr := v.MustFloat32Slice()
+-	collected := make([]interface{}, len(arr))
+-
+-	v.EachFloat32(func(index int, val float32) bool {
+-		collected[index] = collector(index, val)
+-		return true
+-	})
+-
+-	return &Value{data: collected}
+-}
+-
+-/*
+-	Float64 (float64 and []float64)
+-	--------------------------------------------------
+-*/
+-
+-// Float64 gets the value as a float64, returns the optionalDefault
+-// value or a system default object if the value is the wrong type.
+-func (v *Value) Float64(optionalDefault ...float64) float64 {
+-	if s, ok := v.data.(float64); ok {
+-		return s
+-	}
+-	if len(optionalDefault) == 1 {
+-		return optionalDefault[0]
+-	}
+-	return 0
+-}
+-
+-// MustFloat64 gets the value as a float64.
+-//
+-// Panics if the object is not a float64.
+-func (v *Value) MustFloat64() float64 {
+-	return v.data.(float64)
+-}
+-
+-// Float64Slice gets the value as a []float64, returns the optionalDefault
+-// value or nil if the value is not a []float64.
+-func (v *Value) Float64Slice(optionalDefault ...[]float64) []float64 {
+-	if s, ok := v.data.([]float64); ok {
+-		return s
+-	}
+-	if len(optionalDefault) == 1 {
+-		return optionalDefault[0]
+-	}
+-	return nil
+-}
+-
+-// MustFloat64Slice gets the value as a []float64.
+-//
+-// Panics if the object is not a []float64.
+-func (v *Value) MustFloat64Slice() []float64 {
+-	return v.data.([]float64)
+-}
+-
+-// IsFloat64 gets whether the object contained is a float64 or not.
+-func (v *Value) IsFloat64() bool {
+-	_, ok := v.data.(float64)
+-	return ok
+-}
+-
+-// IsFloat64Slice gets whether the object contained is a []float64 or not.
+-func (v *Value) IsFloat64Slice() bool {
+-	_, ok := v.data.([]float64)
+-	return ok
+-}
+-
+-// EachFloat64 calls the specified callback for each object
+-// in the []float64.
+-//
+-// Panics if the object is the wrong type.
+-func (v *Value) EachFloat64(callback func(int, float64) bool) *Value {
+-
+-	for index, val := range v.MustFloat64Slice() {
+-		carryon := callback(index, val)
+-		if carryon == false {
+-			break
+-		}
+-	}
+-
+-	return v
+-
+-}
+-
+-// WhereFloat64 uses the specified decider function to select items
+-// from the []float64.  The object contained in the result will contain
+-// only the selected items.
+-func (v *Value) WhereFloat64(decider func(int, float64) bool) *Value {
+-
+-	var selected []float64
+-
+-	v.EachFloat64(func(index int, val float64) bool {
+-		shouldSelect := decider(index, val)
+-		if shouldSelect == false {
+-			selected = append(selected, val)
+-		}
+-		return true
+-	})
+-
+-	return &Value{data: selected}
+-
+-}
+-
+-// GroupFloat64 uses the specified grouper function to group the items
+-// keyed by the return of the grouper.  The object contained in the
+-// result will contain a map[string][]float64.
+-func (v *Value) GroupFloat64(grouper func(int, float64) string) *Value {
+-
+-	groups := make(map[string][]float64)
+-
+-	v.EachFloat64(func(index int, val float64) bool {
+-		group := grouper(index, val)
+-		if _, ok := groups[group]; !ok {
+-			groups[group] = make([]float64, 0)
+-		}
+-		groups[group] = append(groups[group], val)
+-		return true
+-	})
+-
+-	return &Value{data: groups}
+-
+-}
+-
+-// ReplaceFloat64 uses the specified function to replace each float64s
+-// by iterating each item.  The data in the returned result will be a
+-// []float64 containing the replaced items.
+-func (v *Value) ReplaceFloat64(replacer func(int, float64) float64) *Value {
+-
+-	arr := v.MustFloat64Slice()
+-	replaced := make([]float64, len(arr))
+-
+-	v.EachFloat64(func(index int, val float64) bool {
+-		replaced[index] = replacer(index, val)
+-		return true
+-	})
+-
+-	return &Value{data: replaced}
+-
+-}
+-
+-// CollectFloat64 uses the specified collector function to collect a value
+-// for each of the float64s in the slice.  The data returned will be a
+-// []interface{}.
+-func (v *Value) CollectFloat64(collector func(int, float64) interface{}) *Value {
+-
+-	arr := v.MustFloat64Slice()
+-	collected := make([]interface{}, len(arr))
+-
+-	v.EachFloat64(func(index int, val float64) bool {
+-		collected[index] = collector(index, val)
+-		return true
+-	})
+-
+-	return &Value{data: collected}
+-}
+-
+-/*
+-	Complex64 (complex64 and []complex64)
+-	--------------------------------------------------
+-*/
+-
+-// Complex64 gets the value as a complex64, returns the optionalDefault
+-// value or a system default object if the value is the wrong type.
+-func (v *Value) Complex64(optionalDefault ...complex64) complex64 {
+-	if s, ok := v.data.(complex64); ok {
+-		return s
+-	}
+-	if len(optionalDefault) == 1 {
+-		return optionalDefault[0]
+-	}
+-	return 0
+-}
+-
+-// MustComplex64 gets the value as a complex64.
+-//
+-// Panics if the object is not a complex64.
+-func (v *Value) MustComplex64() complex64 {
+-	return v.data.(complex64)
+-}
+-
+-// Complex64Slice gets the value as a []complex64, returns the optionalDefault
+-// value or nil if the value is not a []complex64.
+-func (v *Value) Complex64Slice(optionalDefault ...[]complex64) []complex64 {
+-	if s, ok := v.data.([]complex64); ok {
+-		return s
+-	}
+-	if len(optionalDefault) == 1 {
+-		return optionalDefault[0]
+-	}
+-	return nil
+-}
+-
+-// MustComplex64Slice gets the value as a []complex64.
+-//
+-// Panics if the object is not a []complex64.
+-func (v *Value) MustComplex64Slice() []complex64 {
+-	return v.data.([]complex64)
+-}
+-
+-// IsComplex64 gets whether the object contained is a complex64 or not.
+-func (v *Value) IsComplex64() bool {
+-	_, ok := v.data.(complex64)
+-	return ok
+-}
+-
+-// IsComplex64Slice gets whether the object contained is a []complex64 or not.
+-func (v *Value) IsComplex64Slice() bool {
+-	_, ok := v.data.([]complex64)
+-	return ok
+-}
+-
+-// EachComplex64 calls the specified callback for each object
+-// in the []complex64.
+-//
+-// Panics if the object is the wrong type.
+-func (v *Value) EachComplex64(callback func(int, complex64) bool) *Value {
+-
+-	for index, val := range v.MustComplex64Slice() {
+-		carryon := callback(index, val)
+-		if carryon == false {
+-			break
+-		}
+-	}
+-
+-	return v
+-
+-}
+-
+-// WhereComplex64 uses the specified decider function to select items
+-// from the []complex64.  The object contained in the result will contain
+-// only the selected items.
+-func (v *Value) WhereComplex64(decider func(int, complex64) bool) *Value {
+-
+-	var selected []complex64
+-
+-	v.EachComplex64(func(index int, val complex64) bool {
+-		shouldSelect := decider(index, val)
+-		if shouldSelect == false {
+-			selected = append(selected, val)
+-		}
+-		return true
+-	})
+-
+-	return &Value{data: selected}
+-
+-}
+-
+-// GroupComplex64 uses the specified grouper function to group the items
+-// keyed by the return of the grouper.  The object contained in the
+-// result will contain a map[string][]complex64.
+-func (v *Value) GroupComplex64(grouper func(int, complex64) string) *Value {
+-
+-	groups := make(map[string][]complex64)
+-
+-	v.EachComplex64(func(index int, val complex64) bool {
+-		group := grouper(index, val)
+-		if _, ok := groups[group]; !ok {
+-			groups[group] = make([]complex64, 0)
+-		}
+-		groups[group] = append(groups[group], val)
+-		return true
+-	})
+-
+-	return &Value{data: groups}
+-
+-}
+-
+-// ReplaceComplex64 uses the specified function to replace each complex64s
+-// by iterating each item.  The data in the returned result will be a
+-// []complex64 containing the replaced items.
+-func (v *Value) ReplaceComplex64(replacer func(int, complex64) complex64) *Value {
+-
+-	arr := v.MustComplex64Slice()
+-	replaced := make([]complex64, len(arr))
+-
+-	v.EachComplex64(func(index int, val complex64) bool {
+-		replaced[index] = replacer(index, val)
+-		return true
+-	})
+-
+-	return &Value{data: replaced}
+-
+-}
+-
+-// CollectComplex64 uses the specified collector function to collect a value
+-// for each of the complex64s in the slice.  The data returned will be a
+-// []interface{}.
+-func (v *Value) CollectComplex64(collector func(int, complex64) interface{}) *Value {
+-
+-	arr := v.MustComplex64Slice()
+-	collected := make([]interface{}, len(arr))
+-
+-	v.EachComplex64(func(index int, val complex64) bool {
+-		collected[index] = collector(index, val)
+-		return true
+-	})
+-
+-	return &Value{data: collected}
+-}
+-
+-/*
+-	Complex128 (complex128 and []complex128)
+-	--------------------------------------------------
+-*/
+-
+-// Complex128 gets the value as a complex128, returns the optionalDefault
+-// value or a system default object if the value is the wrong type.
+-func (v *Value) Complex128(optionalDefault ...complex128) complex128 {
+-	if s, ok := v.data.(complex128); ok {
+-		return s
+-	}
+-	if len(optionalDefault) == 1 {
+-		return optionalDefault[0]
+-	}
+-	return 0
+-}
+-
+-// MustComplex128 gets the value as a complex128.
+-//
+-// Panics if the object is not a complex128.
+-func (v *Value) MustComplex128() complex128 {
+-	return v.data.(complex128)
+-}
+-
+-// Complex128Slice gets the value as a []complex128, returns the optionalDefault
+-// value or nil if the value is not a []complex128.
+-func (v *Value) Complex128Slice(optionalDefault ...[]complex128) []complex128 {
+-	if s, ok := v.data.([]complex128); ok {
+-		return s
+-	}
+-	if len(optionalDefault) == 1 {
+-		return optionalDefault[0]
+-	}
+-	return nil
+-}
+-
+-// MustComplex128Slice gets the value as a []complex128.
+-//
+-// Panics if the object is not a []complex128.
+-func (v *Value) MustComplex128Slice() []complex128 {
+-	return v.data.([]complex128)
+-}
+-
+-// IsComplex128 gets whether the object contained is a complex128 or not.
+-func (v *Value) IsComplex128() bool {
+-	_, ok := v.data.(complex128)
+-	return ok
+-}
+-
+-// IsComplex128Slice gets whether the object contained is a []complex128 or not.
+-func (v *Value) IsComplex128Slice() bool {
+-	_, ok := v.data.([]complex128)
+-	return ok
+-}
+-
+-// EachComplex128 calls the specified callback for each object
+-// in the []complex128.
+-//
+-// Panics if the object is the wrong type.
+-func (v *Value) EachComplex128(callback func(int, complex128) bool) *Value {
+-
+-	for index, val := range v.MustComplex128Slice() {
+-		carryon := callback(index, val)
+-		if carryon == false {
+-			break
+-		}
+-	}
+-
+-	return v
+-
+-}
+-
+-// WhereComplex128 uses the specified decider function to select items
+-// from the []complex128.  The object contained in the result will contain
+-// only the selected items.
+-func (v *Value) WhereComplex128(decider func(int, complex128) bool) *Value {
+-
+-	var selected []complex128
+-
+-	v.EachComplex128(func(index int, val complex128) bool {
+-		shouldSelect := decider(index, val)
+-		if shouldSelect == false {
+-			selected = append(selected, val)
+-		}
+-		return true
+-	})
+-
+-	return &Value{data: selected}
+-
+-}
+-
+-// GroupComplex128 uses the specified grouper function to group the items
+-// keyed by the return of the grouper.  The object contained in the
+-// result will contain a map[string][]complex128.
+-func (v *Value) GroupComplex128(grouper func(int, complex128) string) *Value {
+-
+-	groups := make(map[string][]complex128)
+-
+-	v.EachComplex128(func(index int, val complex128) bool {
+-		group := grouper(index, val)
+-		if _, ok := groups[group]; !ok {
+-			groups[group] = make([]complex128, 0)
+-		}
+-		groups[group] = append(groups[group], val)
+-		return true
+-	})
+-
+-	return &Value{data: groups}
+-
+-}
+-
+-// ReplaceComplex128 uses the specified function to replace each complex128s
+-// by iterating each item.  The data in the returned result will be a
+-// []complex128 containing the replaced items.
+-func (v *Value) ReplaceComplex128(replacer func(int, complex128) complex128) *Value {
+-
+-	arr := v.MustComplex128Slice()
+-	replaced := make([]complex128, len(arr))
+-
+-	v.EachComplex128(func(index int, val complex128) bool {
+-		replaced[index] = replacer(index, val)
+-		return true
+-	})
+-
+-	return &Value{data: replaced}
+-
+-}
+-
+-// CollectComplex128 uses the specified collector function to collect a value
+-// for each of the complex128s in the slice.  The data returned will be a
+-// []interface{}.
+-func (v *Value) CollectComplex128(collector func(int, complex128) interface{}) *Value {
+-
+-	arr := v.MustComplex128Slice()
+-	collected := make([]interface{}, len(arr))
+-
+-	v.EachComplex128(func(index int, val complex128) bool {
+-		collected[index] = collector(index, val)
+-		return true
+-	})
+-
+-	return &Value{data: collected}
+-}
+diff --git a/Godeps/_workspace/src/github.com/stretchr/objx/type_specific_codegen_test.go b/Godeps/_workspace/src/github.com/stretchr/objx/type_specific_codegen_test.go
+deleted file mode 100644
+index f7a4fce..0000000
+--- a/Godeps/_workspace/src/github.com/stretchr/objx/type_specific_codegen_test.go
++++ /dev/null
+@@ -1,2867 +0,0 @@
+-package objx
+-
+-import (
+-	"fmt"
+-	"github.com/stretchr/testify/assert"
+-	"testing"
+-)
+-
+-// ************************************************************
+-// TESTS
+-// ************************************************************
+-
+-func TestInter(t *testing.T) {
+-
+-	val := interface{}("something")
+-	m := map[string]interface{}{"value": val, "nothing": nil}
+-	assert.Equal(t, val, New(m).Get("value").Inter())
+-	assert.Equal(t, val, New(m).Get("value").MustInter())
+-	assert.Equal(t, interface{}(nil), New(m).Get("nothing").Inter())
+-	assert.Equal(t, val, New(m).Get("nothing").Inter("something"))
+-
+-	assert.Panics(t, func() {
+-		New(m).Get("age").MustInter()
+-	})
+-
+-}
+-
+-func TestInterSlice(t *testing.T) {
+-
+-	val := interface{}("something")
+-	m := map[string]interface{}{"value": []interface{}{val}, "nothing": nil}
+-	assert.Equal(t, val, New(m).Get("value").InterSlice()[0])
+-	assert.Equal(t, val, New(m).Get("value").MustInterSlice()[0])
+-	assert.Equal(t, []interface{}(nil), New(m).Get("nothing").InterSlice())
+-	assert.Equal(t, val, New(m).Get("nothing").InterSlice([]interface{}{interface{}("something")})[0])
+-
+-	assert.Panics(t, func() {
+-		New(m).Get("nothing").MustInterSlice()
+-	})
+-
+-}
+-
+-func TestIsInter(t *testing.T) {
+-
+-	var v *Value
+-
+-	v = &Value{data: interface{}("something")}
+-	assert.True(t, v.IsInter())
+-
+-	v = &Value{data: []interface{}{interface{}("something")}}
+-	assert.True(t, v.IsInterSlice())
+-
+-}
+-
+-func TestEachInter(t *testing.T) {
+-
+-	v := &Value{data: []interface{}{interface{}("something"), interface{}("something"), interface{}("something"), interface{}("something"), interface{}("something")}}
+-	count := 0
+-	replacedVals := make([]interface{}, 0)
+-	assert.Equal(t, v, v.EachInter(func(i int, val interface{}) bool {
+-
+-		count++
+-		replacedVals = append(replacedVals, val)
+-
+-		// abort early
+-		if i == 2 {
+-			return false
+-		}
+-
+-		return true
+-
+-	}))
+-
+-	assert.Equal(t, count, 3)
+-	assert.Equal(t, replacedVals[0], v.MustInterSlice()[0])
+-	assert.Equal(t, replacedVals[1], v.MustInterSlice()[1])
+-	assert.Equal(t, replacedVals[2], v.MustInterSlice()[2])
+-
+-}
+-
+-func TestWhereInter(t *testing.T) {
+-
+-	v := &Value{data: []interface{}{interface{}("something"), interface{}("something"), interface{}("something"), interface{}("something"), interface{}("something"), interface{}("something")}}
+-
+-	selected := v.WhereInter(func(i int, val interface{}) bool {
+-		return i%2 == 0
+-	}).MustInterSlice()
+-
+-	assert.Equal(t, 3, len(selected))
+-
+-}
+-
+-func TestGroupInter(t *testing.T) {
+-
+-	v := &Value{data: []interface{}{interface{}("something"), interface{}("something"), interface{}("something"), interface{}("something"), interface{}("something"), interface{}("something")}}
+-
+-	grouped := v.GroupInter(func(i int, val interface{}) string {
+-		return fmt.Sprintf("%v", i%2 == 0)
+-	}).data.(map[string][]interface{})
+-
+-	assert.Equal(t, 2, len(grouped))
+-	assert.Equal(t, 3, len(grouped["true"]))
+-	assert.Equal(t, 3, len(grouped["false"]))
+-
+-}
+-
+-func TestReplaceInter(t *testing.T) {
+-
+-	v := &Value{data: []interface{}{interface{}("something"), interface{}("something"), interface{}("something"), interface{}("something"), interface{}("something"), interface{}("something")}}
+-
+-	rawArr := v.MustInterSlice()
+-
+-	replaced := v.ReplaceInter(func(index int, val interface{}) interface{} {
+-		if index < len(rawArr)-1 {
+-			return rawArr[index+1]
+-		}
+-		return rawArr[0]
+-	})
+-
+-	replacedArr := replaced.MustInterSlice()
+-	if assert.Equal(t, 6, len(replacedArr)) {
+-		assert.Equal(t, replacedArr[0], rawArr[1])
+-		assert.Equal(t, replacedArr[1], rawArr[2])
+-		assert.Equal(t, replacedArr[2], rawArr[3])
+-		assert.Equal(t, replacedArr[3], rawArr[4])
+-		assert.Equal(t, replacedArr[4], rawArr[5])
+-		assert.Equal(t, replacedArr[5], rawArr[0])
+-	}
+-
+-}
+-
+-func TestCollectInter(t *testing.T) {
+-
+-	v := &Value{data: []interface{}{interface{}("something"), interface{}("something"), interface{}("something"), interface{}("something"), interface{}("something"), interface{}("something")}}
+-
+-	collected := v.CollectInter(func(index int, val interface{}) interface{} {
+-		return index
+-	})
+-
+-	collectedArr := collected.MustInterSlice()
+-	if assert.Equal(t, 6, len(collectedArr)) {
+-		assert.Equal(t, collectedArr[0], 0)
+-		assert.Equal(t, collectedArr[1], 1)
+-		assert.Equal(t, collectedArr[2], 2)
+-		assert.Equal(t, collectedArr[3], 3)
+-		assert.Equal(t, collectedArr[4], 4)
+-		assert.Equal(t, collectedArr[5], 5)
+-	}
+-
+-}
+-
+-// ************************************************************
+-// TESTS
+-// ************************************************************
+-
+-func TestMSI(t *testing.T) {
+-
+-	val := map[string]interface{}(map[string]interface{}{"name": "Tyler"})
+-	m := map[string]interface{}{"value": val, "nothing": nil}
+-	assert.Equal(t, val, New(m).Get("value").MSI())
+-	assert.Equal(t, val, New(m).Get("value").MustMSI())
+-	assert.Equal(t, map[string]interface{}(nil), New(m).Get("nothing").MSI())
+-	assert.Equal(t, val, New(m).Get("nothing").MSI(map[string]interface{}{"name": "Tyler"}))
+-
+-	assert.Panics(t, func() {
+-		New(m).Get("age").MustMSI()
+-	})
+-
+-}
+-
+-func TestMSISlice(t *testing.T) {
+-
+-	val := map[string]interface{}(map[string]interface{}{"name": "Tyler"})
+-	m := map[string]interface{}{"value": []map[string]interface{}{val}, "nothing": nil}
+-	assert.Equal(t, val, New(m).Get("value").MSISlice()[0])
+-	assert.Equal(t, val, New(m).Get("value").MustMSISlice()[0])
+-	assert.Equal(t, []map[string]interface{}(nil), New(m).Get("nothing").MSISlice())
+-	assert.Equal(t, val, New(m).Get("nothing").MSISlice([]map[string]interface{}{map[string]interface{}(map[string]interface{}{"name": "Tyler"})})[0])
+-
+-	assert.Panics(t, func() {
+-		New(m).Get("nothing").MustMSISlice()
+-	})
+-
+-}
+-
+-func TestIsMSI(t *testing.T) {
+-
+-	var v *Value
+-
+-	v = &Value{data: map[string]interface{}(map[string]interface{}{"name": "Tyler"})}
+-	assert.True(t, v.IsMSI())
+-
+-	v = &Value{data: []map[string]interface{}{map[string]interface{}(map[string]interface{}{"name": "Tyler"})}}
+-	assert.True(t, v.IsMSISlice())
+-
+-}
+-
+-func TestEachMSI(t *testing.T) {
+-
+-	v := &Value{data: []map[string]interface{}{map[string]interface{}(map[string]interface{}{"name": "Tyler"}), map[string]interface{}(map[string]interface{}{"name": "Tyler"}), map[string]interface{}(map[string]interface{}{"name": "Tyler"}), map[string]interface{}(map[string]interface{}{"name": "Tyler"}), map[string]interface{}(map[string]interface{}{"name": "Tyler"})}}
+-	count := 0
+-	replacedVals := make([]map[string]interface{}, 0)
+-	assert.Equal(t, v, v.EachMSI(func(i int, val map[string]interface{}) bool {
+-
+-		count++
+-		replacedVals = append(replacedVals, val)
+-
+-		// abort early
+-		if i == 2 {
+-			return false
+-		}
+-
+-		return true
+-
+-	}))
+-
+-	assert.Equal(t, count, 3)
+-	assert.Equal(t, replacedVals[0], v.MustMSISlice()[0])
+-	assert.Equal(t, replacedVals[1], v.MustMSISlice()[1])
+-	assert.Equal(t, replacedVals[2], v.MustMSISlice()[2])
+-
+-}
+-
+-func TestWhereMSI(t *testing.T) {
+-
+-	v := &Value{data: []map[string]interface{}{map[string]interface{}(map[string]interface{}{"name": "Tyler"}), map[string]interface{}(map[string]interface{}{"name": "Tyler"}), map[string]interface{}(map[string]interface{}{"name": "Tyler"}), map[string]interface{}(map[string]interface{}{"name": "Tyler"}), map[string]interface{}(map[string]interface{}{"name": "Tyler"}), map[string]interface{}(map[string]interface{}{"name": "Tyler"})}}
+-
+-	selected := v.WhereMSI(func(i int, val map[string]interface{}) bool {
+-		return i%2 == 0
+-	}).MustMSISlice()
+-
+-	assert.Equal(t, 3, len(selected))
+-
+-}
+-
+-func TestGroupMSI(t *testing.T) {
+-
+-	v := &Value{data: []map[string]interface{}{map[string]interface{}(map[string]interface{}{"name": "Tyler"}), map[string]interface{}(map[string]interface{}{"name": "Tyler"}), map[string]interface{}(map[string]interface{}{"name": "Tyler"}), map[string]interface{}(map[string]interface{}{"name": "Tyler"}), map[string]interface{}(map[string]interface{}{"name": "Tyler"}), map[string]interface{}(map[string]interface{}{"name": "Tyler"})}}
+-
+-	grouped := v.GroupMSI(func(i int, val map[string]interface{}) string {
+-		return fmt.Sprintf("%v", i%2 == 0)
+-	}).data.(map[string][]map[string]interface{})
+-
+-	assert.Equal(t, 2, len(grouped))
+-	assert.Equal(t, 3, len(grouped["true"]))
+-	assert.Equal(t, 3, len(grouped["false"]))
+-
+-}
+-
+-func TestReplaceMSI(t *testing.T) {
+-
+-	v := &Value{data: []map[string]interface{}{map[string]interface{}(map[string]interface{}{"name": "Tyler"}), map[string]interface{}(map[string]interface{}{"name": "Tyler"}), map[string]interface{}(map[string]interface{}{"name": "Tyler"}), map[string]interface{}(map[string]interface{}{"name": "Tyler"}), map[string]interface{}(map[string]interface{}{"name": "Tyler"}), map[string]interface{}(map[string]interface{}{"name": "Tyler"})}}
+-
+-	rawArr := v.MustMSISlice()
+-
+-	replaced := v.ReplaceMSI(func(index int, val map[string]interface{}) map[string]interface{} {
+-		if index < len(rawArr)-1 {
+-			return rawArr[index+1]
+-		}
+-		return rawArr[0]
+-	})
+-
+-	replacedArr := replaced.MustMSISlice()
+-	if assert.Equal(t, 6, len(replacedArr)) {
+-		assert.Equal(t, replacedArr[0], rawArr[1])
+-		assert.Equal(t, replacedArr[1], rawArr[2])
+-		assert.Equal(t, replacedArr[2], rawArr[3])
+-		assert.Equal(t, replacedArr[3], rawArr[4])
+-		assert.Equal(t, replacedArr[4], rawArr[5])
+-		assert.Equal(t, replacedArr[5], rawArr[0])
+-	}
+-
+-}
+-
+-func TestCollectMSI(t *testing.T) {
+-
+-	v := &Value{data: []map[string]interface{}{map[string]interface{}(map[string]interface{}{"name": "Tyler"}), map[string]interface{}(map[string]interface{}{"name": "Tyler"}), map[string]interface{}(map[string]interface{}{"name": "Tyler"}), map[string]interface{}(map[string]interface{}{"name": "Tyler"}), map[string]interface{}(map[string]interface{}{"name": "Tyler"}), map[string]interface{}(map[string]interface{}{"name": "Tyler"})}}
+-
+-	collected := v.CollectMSI(func(index int, val map[string]interface{}) interface{} {
+-		return index
+-	})
+-
+-	collectedArr := collected.MustInterSlice()
+-	if assert.Equal(t, 6, len(collectedArr)) {
+-		assert.Equal(t, collectedArr[0], 0)
+-		assert.Equal(t, collectedArr[1], 1)
+-		assert.Equal(t, collectedArr[2], 2)
+-		assert.Equal(t, collectedArr[3], 3)
+-		assert.Equal(t, collectedArr[4], 4)
+-		assert.Equal(t, collectedArr[5], 5)
+-	}
+-
+-}
+-
+-// ************************************************************
+-// TESTS
+-// ************************************************************
+-
+-func TestObjxMap(t *testing.T) {
+-
+-	val := (Map)(New(1))
+-	m := map[string]interface{}{"value": val, "nothing": nil}
+-	assert.Equal(t, val, New(m).Get("value").ObjxMap())
+-	assert.Equal(t, val, New(m).Get("value").MustObjxMap())
+-	assert.Equal(t, (Map)(New(nil)), New(m).Get("nothing").ObjxMap())
+-	assert.Equal(t, val, New(m).Get("nothing").ObjxMap(New(1)))
+-
+-	assert.Panics(t, func() {
+-		New(m).Get("age").MustObjxMap()
+-	})
+-
+-}
+-
+-func TestObjxMapSlice(t *testing.T) {
+-
+-	val := (Map)(New(1))
+-	m := map[string]interface{}{"value": [](Map){val}, "nothing": nil}
+-	assert.Equal(t, val, New(m).Get("value").ObjxMapSlice()[0])
+-	assert.Equal(t, val, New(m).Get("value").MustObjxMapSlice()[0])
+-	assert.Equal(t, [](Map)(nil), New(m).Get("nothing").ObjxMapSlice())
+-	assert.Equal(t, val, New(m).Get("nothing").ObjxMapSlice([](Map){(Map)(New(1))})[0])
+-
+-	assert.Panics(t, func() {
+-		New(m).Get("nothing").MustObjxMapSlice()
+-	})
+-
+-}
+-
+-func TestIsObjxMap(t *testing.T) {
+-
+-	var v *Value
+-
+-	v = &Value{data: (Map)(New(1))}
+-	assert.True(t, v.IsObjxMap())
+-
+-	v = &Value{data: [](Map){(Map)(New(1))}}
+-	assert.True(t, v.IsObjxMapSlice())
+-
+-}
+-
+-func TestEachObjxMap(t *testing.T) {
+-
+-	v := &Value{data: [](Map){(Map)(New(1)), (Map)(New(1)), (Map)(New(1)), (Map)(New(1)), (Map)(New(1))}}
+-	count := 0
+-	replacedVals := make([](Map), 0)
+-	assert.Equal(t, v, v.EachObjxMap(func(i int, val Map) bool {
+-
+-		count++
+-		replacedVals = append(replacedVals, val)
+-
+-		// abort early
+-		if i == 2 {
+-			return false
+-		}
+-
+-		return true
+-
+-	}))
+-
+-	assert.Equal(t, count, 3)
+-	assert.Equal(t, replacedVals[0], v.MustObjxMapSlice()[0])
+-	assert.Equal(t, replacedVals[1], v.MustObjxMapSlice()[1])
+-	assert.Equal(t, replacedVals[2], v.MustObjxMapSlice()[2])
+-
+-}
+-
+-func TestWhereObjxMap(t *testing.T) {
+-
+-	v := &Value{data: [](Map){(Map)(New(1)), (Map)(New(1)), (Map)(New(1)), (Map)(New(1)), (Map)(New(1)), (Map)(New(1))}}
+-
+-	selected := v.WhereObjxMap(func(i int, val Map) bool {
+-		return i%2 == 0
+-	}).MustObjxMapSlice()
+-
+-	assert.Equal(t, 3, len(selected))
+-
+-}
+-
+-func TestGroupObjxMap(t *testing.T) {
+-
+-	v := &Value{data: [](Map){(Map)(New(1)), (Map)(New(1)), (Map)(New(1)), (Map)(New(1)), (Map)(New(1)), (Map)(New(1))}}
+-
+-	grouped := v.GroupObjxMap(func(i int, val Map) string {
+-		return fmt.Sprintf("%v", i%2 == 0)
+-	}).data.(map[string][](Map))
+-
+-	assert.Equal(t, 2, len(grouped))
+-	assert.Equal(t, 3, len(grouped["true"]))
+-	assert.Equal(t, 3, len(grouped["false"]))
+-
+-}
+-
+-func TestReplaceObjxMap(t *testing.T) {
+-
+-	v := &Value{data: [](Map){(Map)(New(1)), (Map)(New(1)), (Map)(New(1)), (Map)(New(1)), (Map)(New(1)), (Map)(New(1))}}
+-
+-	rawArr := v.MustObjxMapSlice()
+-
+-	replaced := v.ReplaceObjxMap(func(index int, val Map) Map {
+-		if index < len(rawArr)-1 {
+-			return rawArr[index+1]
+-		}
+-		return rawArr[0]
+-	})
+-
+-	replacedArr := replaced.MustObjxMapSlice()
+-	if assert.Equal(t, 6, len(replacedArr)) {
+-		assert.Equal(t, replacedArr[0], rawArr[1])
+-		assert.Equal(t, replacedArr[1], rawArr[2])
+-		assert.Equal(t, replacedArr[2], rawArr[3])
+-		assert.Equal(t, replacedArr[3], rawArr[4])
+-		assert.Equal(t, replacedArr[4], rawArr[5])
+-		assert.Equal(t, replacedArr[5], rawArr[0])
+-	}
+-
+-}
+-
+-func TestCollectObjxMap(t *testing.T) {
+-
+-	v := &Value{data: [](Map){(Map)(New(1)), (Map)(New(1)), (Map)(New(1)), (Map)(New(1)), (Map)(New(1)), (Map)(New(1))}}
+-
+-	collected := v.CollectObjxMap(func(index int, val Map) interface{} {
+-		return index
+-	})
+-
+-	collectedArr := collected.MustInterSlice()
+-	if assert.Equal(t, 6, len(collectedArr)) {
+-		assert.Equal(t, collectedArr[0], 0)
+-		assert.Equal(t, collectedArr[1], 1)
+-		assert.Equal(t, collectedArr[2], 2)
+-		assert.Equal(t, collectedArr[3], 3)
+-		assert.Equal(t, collectedArr[4], 4)
+-		assert.Equal(t, collectedArr[5], 5)
+-	}
+-
+-}
+-
+-// ************************************************************
+-// TESTS
+-// ************************************************************
+-
+-func TestBool(t *testing.T) {
+-
+-	val := bool(true)
+-	m := map[string]interface{}{"value": val, "nothing": nil}
+-	assert.Equal(t, val, New(m).Get("value").Bool())
+-	assert.Equal(t, val, New(m).Get("value").MustBool())
+-	assert.Equal(t, bool(false), New(m).Get("nothing").Bool())
+-	assert.Equal(t, val, New(m).Get("nothing").Bool(true))
+-
+-	assert.Panics(t, func() {
+-		New(m).Get("age").MustBool()
+-	})
+-
+-}
+-
+-func TestBoolSlice(t *testing.T) {
+-
+-	val := bool(true)
+-	m := map[string]interface{}{"value": []bool{val}, "nothing": nil}
+-	assert.Equal(t, val, New(m).Get("value").BoolSlice()[0])
+-	assert.Equal(t, val, New(m).Get("value").MustBoolSlice()[0])
+-	assert.Equal(t, []bool(nil), New(m).Get("nothing").BoolSlice())
+-	assert.Equal(t, val, New(m).Get("nothing").BoolSlice([]bool{bool(true)})[0])
+-
+-	assert.Panics(t, func() {
+-		New(m).Get("nothing").MustBoolSlice()
+-	})
+-
+-}
+-
+-func TestIsBool(t *testing.T) {
+-
+-	var v *Value
+-
+-	v = &Value{data: bool(true)}
+-	assert.True(t, v.IsBool())
+-
+-	v = &Value{data: []bool{bool(true)}}
+-	assert.True(t, v.IsBoolSlice())
+-
+-}
+-
+-func TestEachBool(t *testing.T) {
+-
+-	v := &Value{data: []bool{bool(true), bool(true), bool(true), bool(true), bool(true)}}
+-	count := 0
+-	replacedVals := make([]bool, 0)
+-	assert.Equal(t, v, v.EachBool(func(i int, val bool) bool {
+-
+-		count++
+-		replacedVals = append(replacedVals, val)
+-
+-		// abort early
+-		if i == 2 {
+-			return false
+-		}
+-
+-		return true
+-
+-	}))
+-
+-	assert.Equal(t, count, 3)
+-	assert.Equal(t, replacedVals[0], v.MustBoolSlice()[0])
+-	assert.Equal(t, replacedVals[1], v.MustBoolSlice()[1])
+-	assert.Equal(t, replacedVals[2], v.MustBoolSlice()[2])
+-
+-}
+-
+-func TestWhereBool(t *testing.T) {
+-
+-	v := &Value{data: []bool{bool(true), bool(true), bool(true), bool(true), bool(true), bool(true)}}
+-
+-	selected := v.WhereBool(func(i int, val bool) bool {
+-		return i%2 == 0
+-	}).MustBoolSlice()
+-
+-	assert.Equal(t, 3, len(selected))
+-
+-}
+-
+-func TestGroupBool(t *testing.T) {
+-
+-	v := &Value{data: []bool{bool(true), bool(true), bool(true), bool(true), bool(true), bool(true)}}
+-
+-	grouped := v.GroupBool(func(i int, val bool) string {
+-		return fmt.Sprintf("%v", i%2 == 0)
+-	}).data.(map[string][]bool)
+-
+-	assert.Equal(t, 2, len(grouped))
+-	assert.Equal(t, 3, len(grouped["true"]))
+-	assert.Equal(t, 3, len(grouped["false"]))
+-
+-}
+-
+-func TestReplaceBool(t *testing.T) {
+-
+-	v := &Value{data: []bool{bool(true), bool(true), bool(true), bool(true), bool(true), bool(true)}}
+-
+-	rawArr := v.MustBoolSlice()
+-
+-	replaced := v.ReplaceBool(func(index int, val bool) bool {
+-		if index < len(rawArr)-1 {
+-			return rawArr[index+1]
+-		}
+-		return rawArr[0]
+-	})
+-
+-	replacedArr := replaced.MustBoolSlice()
+-	if assert.Equal(t, 6, len(replacedArr)) {
+-		assert.Equal(t, replacedArr[0], rawArr[1])
+-		assert.Equal(t, replacedArr[1], rawArr[2])
+-		assert.Equal(t, replacedArr[2], rawArr[3])
+-		assert.Equal(t, replacedArr[3], rawArr[4])
+-		assert.Equal(t, replacedArr[4], rawArr[5])
+-		assert.Equal(t, replacedArr[5], rawArr[0])
+-	}
+-
+-}
+-
+-func TestCollectBool(t *testing.T) {
+-
+-	v := &Value{data: []bool{bool(true), bool(true), bool(true), bool(true), bool(true), bool(true)}}
+-
+-	collected := v.CollectBool(func(index int, val bool) interface{} {
+-		return index
+-	})
+-
+-	collectedArr := collected.MustInterSlice()
+-	if assert.Equal(t, 6, len(collectedArr)) {
+-		assert.Equal(t, collectedArr[0], 0)
+-		assert.Equal(t, collectedArr[1], 1)
+-		assert.Equal(t, collectedArr[2], 2)
+-		assert.Equal(t, collectedArr[3], 3)
+-		assert.Equal(t, collectedArr[4], 4)
+-		assert.Equal(t, collectedArr[5], 5)
+-	}
+-
+-}
+-
+-// ************************************************************
+-// TESTS
+-// ************************************************************
+-
+-func TestStr(t *testing.T) {
+-
+-	val := string("hello")
+-	m := map[string]interface{}{"value": val, "nothing": nil}
+-	assert.Equal(t, val, New(m).Get("value").Str())
+-	assert.Equal(t, val, New(m).Get("value").MustStr())
+-	assert.Equal(t, string(""), New(m).Get("nothing").Str())
+-	assert.Equal(t, val, New(m).Get("nothing").Str("hello"))
+-
+-	assert.Panics(t, func() {
+-		New(m).Get("age").MustStr()
+-	})
+-
+-}
+-
+-func TestStrSlice(t *testing.T) {
+-
+-	val := string("hello")
+-	m := map[string]interface{}{"value": []string{val}, "nothing": nil}
+-	assert.Equal(t, val, New(m).Get("value").StrSlice()[0])
+-	assert.Equal(t, val, New(m).Get("value").MustStrSlice()[0])
+-	assert.Equal(t, []string(nil), New(m).Get("nothing").StrSlice())
+-	assert.Equal(t, val, New(m).Get("nothing").StrSlice([]string{string("hello")})[0])
+-
+-	assert.Panics(t, func() {
+-		New(m).Get("nothing").MustStrSlice()
+-	})
+-
+-}
+-
+-func TestIsStr(t *testing.T) {
+-
+-	var v *Value
+-
+-	v = &Value{data: string("hello")}
+-	assert.True(t, v.IsStr())
+-
+-	v = &Value{data: []string{string("hello")}}
+-	assert.True(t, v.IsStrSlice())
+-
+-}
+-
+-func TestEachStr(t *testing.T) {
+-
+-	v := &Value{data: []string{string("hello"), string("hello"), string("hello"), string("hello"), string("hello")}}
+-	count := 0
+-	replacedVals := make([]string, 0)
+-	assert.Equal(t, v, v.EachStr(func(i int, val string) bool {
+-
+-		count++
+-		replacedVals = append(replacedVals, val)
+-
+-		// abort early
+-		if i == 2 {
+-			return false
+-		}
+-
+-		return true
+-
+-	}))
+-
+-	assert.Equal(t, count, 3)
+-	assert.Equal(t, replacedVals[0], v.MustStrSlice()[0])
+-	assert.Equal(t, replacedVals[1], v.MustStrSlice()[1])
+-	assert.Equal(t, replacedVals[2], v.MustStrSlice()[2])
+-
+-}
+-
+-func TestWhereStr(t *testing.T) {
+-
+-	v := &Value{data: []string{string("hello"), string("hello"), string("hello"), string("hello"), string("hello"), string("hello")}}
+-
+-	selected := v.WhereStr(func(i int, val string) bool {
+-		return i%2 == 0
+-	}).MustStrSlice()
+-
+-	assert.Equal(t, 3, len(selected))
+-
+-}
+-
+-func TestGroupStr(t *testing.T) {
+-
+-	v := &Value{data: []string{string("hello"), string("hello"), string("hello"), string("hello"), string("hello"), string("hello")}}
+-
+-	grouped := v.GroupStr(func(i int, val string) string {
+-		return fmt.Sprintf("%v", i%2 == 0)
+-	}).data.(map[string][]string)
+-
+-	assert.Equal(t, 2, len(grouped))
+-	assert.Equal(t, 3, len(grouped["true"]))
+-	assert.Equal(t, 3, len(grouped["false"]))
+-
+-}
+-
+-func TestReplaceStr(t *testing.T) {
+-
+-	v := &Value{data: []string{string("hello"), string("hello"), string("hello"), string("hello"), string("hello"), string("hello")}}
+-
+-	rawArr := v.MustStrSlice()
+-
+-	replaced := v.ReplaceStr(func(index int, val string) string {
+-		if index < len(rawArr)-1 {
+-			return rawArr[index+1]
+-		}
+-		return rawArr[0]
+-	})
+-
+-	replacedArr := replaced.MustStrSlice()
+-	if assert.Equal(t, 6, len(replacedArr)) {
+-		assert.Equal(t, replacedArr[0], rawArr[1])
+-		assert.Equal(t, replacedArr[1], rawArr[2])
+-		assert.Equal(t, replacedArr[2], rawArr[3])
+-		assert.Equal(t, replacedArr[3], rawArr[4])
+-		assert.Equal(t, replacedArr[4], rawArr[5])
+-		assert.Equal(t, replacedArr[5], rawArr[0])
+-	}
+-
+-}
+-
+-func TestCollectStr(t *testing.T) {
+-
+-	v := &Value{data: []string{string("hello"), string("hello"), string("hello"), string("hello"), string("hello"), string("hello")}}
+-
+-	collected := v.CollectStr(func(index int, val string) interface{} {
+-		return index
+-	})
+-
+-	collectedArr := collected.MustInterSlice()
+-	if assert.Equal(t, 6, len(collectedArr)) {
+-		assert.Equal(t, collectedArr[0], 0)
+-		assert.Equal(t, collectedArr[1], 1)
+-		assert.Equal(t, collectedArr[2], 2)
+-		assert.Equal(t, collectedArr[3], 3)
+-		assert.Equal(t, collectedArr[4], 4)
+-		assert.Equal(t, collectedArr[5], 5)
+-	}
+-
+-}
+-
+-// ************************************************************
+-// TESTS
+-// ************************************************************
+-
+-func TestInt(t *testing.T) {
+-
+-	val := int(1)
+-	m := map[string]interface{}{"value": val, "nothing": nil}
+-	assert.Equal(t, val, New(m).Get("value").Int())
+-	assert.Equal(t, val, New(m).Get("value").MustInt())
+-	assert.Equal(t, int(0), New(m).Get("nothing").Int())
+-	assert.Equal(t, val, New(m).Get("nothing").Int(1))
+-
+-	assert.Panics(t, func() {
+-		New(m).Get("age").MustInt()
+-	})
+-
+-}
+-
+-func TestIntSlice(t *testing.T) {
+-
+-	val := int(1)
+-	m := map[string]interface{}{"value": []int{val}, "nothing": nil}
+-	assert.Equal(t, val, New(m).Get("value").IntSlice()[0])
+-	assert.Equal(t, val, New(m).Get("value").MustIntSlice()[0])
+-	assert.Equal(t, []int(nil), New(m).Get("nothing").IntSlice())
+-	assert.Equal(t, val, New(m).Get("nothing").IntSlice([]int{int(1)})[0])
+-
+-	assert.Panics(t, func() {
+-		New(m).Get("nothing").MustIntSlice()
+-	})
+-
+-}
+-
+-func TestIsInt(t *testing.T) {
+-
+-	var v *Value
+-
+-	v = &Value{data: int(1)}
+-	assert.True(t, v.IsInt())
+-
+-	v = &Value{data: []int{int(1)}}
+-	assert.True(t, v.IsIntSlice())
+-
+-}
+-
+-func TestEachInt(t *testing.T) {
+-
+-	v := &Value{data: []int{int(1), int(1), int(1), int(1), int(1)}}
+-	count := 0
+-	replacedVals := make([]int, 0)
+-	assert.Equal(t, v, v.EachInt(func(i int, val int) bool {
+-
+-		count++
+-		replacedVals = append(replacedVals, val)
+-
+-		// abort early
+-		if i == 2 {
+-			return false
+-		}
+-
+-		return true
+-
+-	}))
+-
+-	assert.Equal(t, count, 3)
+-	assert.Equal(t, replacedVals[0], v.MustIntSlice()[0])
+-	assert.Equal(t, replacedVals[1], v.MustIntSlice()[1])
+-	assert.Equal(t, replacedVals[2], v.MustIntSlice()[2])
+-
+-}
+-
+-func TestWhereInt(t *testing.T) {
+-
+-	v := &Value{data: []int{int(1), int(1), int(1), int(1), int(1), int(1)}}
+-
+-	selected := v.WhereInt(func(i int, val int) bool {
+-		return i%2 == 0
+-	}).MustIntSlice()
+-
+-	assert.Equal(t, 3, len(selected))
+-
+-}
+-
+-func TestGroupInt(t *testing.T) {
+-
+-	v := &Value{data: []int{int(1), int(1), int(1), int(1), int(1), int(1)}}
+-
+-	grouped := v.GroupInt(func(i int, val int) string {
+-		return fmt.Sprintf("%v", i%2 == 0)
+-	}).data.(map[string][]int)
+-
+-	assert.Equal(t, 2, len(grouped))
+-	assert.Equal(t, 3, len(grouped["true"]))
+-	assert.Equal(t, 3, len(grouped["false"]))
+-
+-}
+-
+-func TestReplaceInt(t *testing.T) {
+-
+-	v := &Value{data: []int{int(1), int(1), int(1), int(1), int(1), int(1)}}
+-
+-	rawArr := v.MustIntSlice()
+-
+-	replaced := v.ReplaceInt(func(index int, val int) int {
+-		if index < len(rawArr)-1 {
+-			return rawArr[index+1]
+-		}
+-		return rawArr[0]
+-	})
+-
+-	replacedArr := replaced.MustIntSlice()
+-	if assert.Equal(t, 6, len(replacedArr)) {
+-		assert.Equal(t, replacedArr[0], rawArr[1])
+-		assert.Equal(t, replacedArr[1], rawArr[2])
+-		assert.Equal(t, replacedArr[2], rawArr[3])
+-		assert.Equal(t, replacedArr[3], rawArr[4])
+-		assert.Equal(t, replacedArr[4], rawArr[5])
+-		assert.Equal(t, replacedArr[5], rawArr[0])
+-	}
+-
+-}
+-
+-func TestCollectInt(t *testing.T) {
+-
+-	v := &Value{data: []int{int(1), int(1), int(1), int(1), int(1), int(1)}}
+-
+-	collected := v.CollectInt(func(index int, val int) interface{} {
+-		return index
+-	})
+-
+-	collectedArr := collected.MustInterSlice()
+-	if assert.Equal(t, 6, len(collectedArr)) {
+-		assert.Equal(t, collectedArr[0], 0)
+-		assert.Equal(t, collectedArr[1], 1)
+-		assert.Equal(t, collectedArr[2], 2)
+-		assert.Equal(t, collectedArr[3], 3)
+-		assert.Equal(t, collectedArr[4], 4)
+-		assert.Equal(t, collectedArr[5], 5)
+-	}
+-
+-}
+-
+-// ************************************************************
+-// TESTS
+-// ************************************************************
+-
+-func TestInt8(t *testing.T) {
+-
+-	val := int8(1)
+-	m := map[string]interface{}{"value": val, "nothing": nil}
+-	assert.Equal(t, val, New(m).Get("value").Int8())
+-	assert.Equal(t, val, New(m).Get("value").MustInt8())
+-	assert.Equal(t, int8(0), New(m).Get("nothing").Int8())
+-	assert.Equal(t, val, New(m).Get("nothing").Int8(1))
+-
+-	assert.Panics(t, func() {
+-		New(m).Get("age").MustInt8()
+-	})
+-
+-}
+-
+-func TestInt8Slice(t *testing.T) {
+-
+-	val := int8(1)
+-	m := map[string]interface{}{"value": []int8{val}, "nothing": nil}
+-	assert.Equal(t, val, New(m).Get("value").Int8Slice()[0])
+-	assert.Equal(t, val, New(m).Get("value").MustInt8Slice()[0])
+-	assert.Equal(t, []int8(nil), New(m).Get("nothing").Int8Slice())
+-	assert.Equal(t, val, New(m).Get("nothing").Int8Slice([]int8{int8(1)})[0])
+-
+-	assert.Panics(t, func() {
+-		New(m).Get("nothing").MustInt8Slice()
+-	})
+-
+-}
+-
+-func TestIsInt8(t *testing.T) {
+-
+-	var v *Value
+-
+-	v = &Value{data: int8(1)}
+-	assert.True(t, v.IsInt8())
+-
+-	v = &Value{data: []int8{int8(1)}}
+-	assert.True(t, v.IsInt8Slice())
+-
+-}
+-
+-func TestEachInt8(t *testing.T) {
+-
+-	v := &Value{data: []int8{int8(1), int8(1), int8(1), int8(1), int8(1)}}
+-	count := 0
+-	replacedVals := make([]int8, 0)
+-	assert.Equal(t, v, v.EachInt8(func(i int, val int8) bool {
+-
+-		count++
+-		replacedVals = append(replacedVals, val)
+-
+-		// abort early
+-		if i == 2 {
+-			return false
+-		}
+-
+-		return true
+-
+-	}))
+-
+-	assert.Equal(t, count, 3)
+-	assert.Equal(t, replacedVals[0], v.MustInt8Slice()[0])
+-	assert.Equal(t, replacedVals[1], v.MustInt8Slice()[1])
+-	assert.Equal(t, replacedVals[2], v.MustInt8Slice()[2])
+-
+-}
+-
+-func TestWhereInt8(t *testing.T) {
+-
+-	v := &Value{data: []int8{int8(1), int8(1), int8(1), int8(1), int8(1), int8(1)}}
+-
+-	selected := v.WhereInt8(func(i int, val int8) bool {
+-		return i%2 == 0
+-	}).MustInt8Slice()
+-
+-	assert.Equal(t, 3, len(selected))
+-
+-}
+-
+-func TestGroupInt8(t *testing.T) {
+-
+-	v := &Value{data: []int8{int8(1), int8(1), int8(1), int8(1), int8(1), int8(1)}}
+-
+-	grouped := v.GroupInt8(func(i int, val int8) string {
+-		return fmt.Sprintf("%v", i%2 == 0)
+-	}).data.(map[string][]int8)
+-
+-	assert.Equal(t, 2, len(grouped))
+-	assert.Equal(t, 3, len(grouped["true"]))
+-	assert.Equal(t, 3, len(grouped["false"]))
+-
+-}
+-
+-func TestReplaceInt8(t *testing.T) {
+-
+-	v := &Value{data: []int8{int8(1), int8(1), int8(1), int8(1), int8(1), int8(1)}}
+-
+-	rawArr := v.MustInt8Slice()
+-
+-	replaced := v.ReplaceInt8(func(index int, val int8) int8 {
+-		if index < len(rawArr)-1 {
+-			return rawArr[index+1]
+-		}
+-		return rawArr[0]
+-	})
+-
+-	replacedArr := replaced.MustInt8Slice()
+-	if assert.Equal(t, 6, len(replacedArr)) {
+-		assert.Equal(t, replacedArr[0], rawArr[1])
+-		assert.Equal(t, replacedArr[1], rawArr[2])
+-		assert.Equal(t, replacedArr[2], rawArr[3])
+-		assert.Equal(t, replacedArr[3], rawArr[4])
+-		assert.Equal(t, replacedArr[4], rawArr[5])
+-		assert.Equal(t, replacedArr[5], rawArr[0])
+-	}
+-
+-}
+-
+-func TestCollectInt8(t *testing.T) {
+-
+-	v := &Value{data: []int8{int8(1), int8(1), int8(1), int8(1), int8(1), int8(1)}}
+-
+-	collected := v.CollectInt8(func(index int, val int8) interface{} {
+-		return index
+-	})
+-
+-	collectedArr := collected.MustInterSlice()
+-	if assert.Equal(t, 6, len(collectedArr)) {
+-		assert.Equal(t, collectedArr[0], 0)
+-		assert.Equal(t, collectedArr[1], 1)
+-		assert.Equal(t, collectedArr[2], 2)
+-		assert.Equal(t, collectedArr[3], 3)
+-		assert.Equal(t, collectedArr[4], 4)
+-		assert.Equal(t, collectedArr[5], 5)
+-	}
+-
+-}
+-
+-// ************************************************************
+-// TESTS
+-// ************************************************************
+-
+-func TestInt16(t *testing.T) {
+-
+-	val := int16(1)
+-	m := map[string]interface{}{"value": val, "nothing": nil}
+-	assert.Equal(t, val, New(m).Get("value").Int16())
+-	assert.Equal(t, val, New(m).Get("value").MustInt16())
+-	assert.Equal(t, int16(0), New(m).Get("nothing").Int16())
+-	assert.Equal(t, val, New(m).Get("nothing").Int16(1))
+-
+-	assert.Panics(t, func() {
+-		New(m).Get("age").MustInt16()
+-	})
+-
+-}
+-
+-func TestInt16Slice(t *testing.T) {
+-
+-	val := int16(1)
+-	m := map[string]interface{}{"value": []int16{val}, "nothing": nil}
+-	assert.Equal(t, val, New(m).Get("value").Int16Slice()[0])
+-	assert.Equal(t, val, New(m).Get("value").MustInt16Slice()[0])
+-	assert.Equal(t, []int16(nil), New(m).Get("nothing").Int16Slice())
+-	assert.Equal(t, val, New(m).Get("nothing").Int16Slice([]int16{int16(1)})[0])
+-
+-	assert.Panics(t, func() {
+-		New(m).Get("nothing").MustInt16Slice()
+-	})
+-
+-}
+-
+-func TestIsInt16(t *testing.T) {
+-
+-	var v *Value
+-
+-	v = &Value{data: int16(1)}
+-	assert.True(t, v.IsInt16())
+-
+-	v = &Value{data: []int16{int16(1)}}
+-	assert.True(t, v.IsInt16Slice())
+-
+-}
+-
+-func TestEachInt16(t *testing.T) {
+-
+-	v := &Value{data: []int16{int16(1), int16(1), int16(1), int16(1), int16(1)}}
+-	count := 0
+-	replacedVals := make([]int16, 0)
+-	assert.Equal(t, v, v.EachInt16(func(i int, val int16) bool {
+-
+-		count++
+-		replacedVals = append(replacedVals, val)
+-
+-		// abort early
+-		if i == 2 {
+-			return false
+-		}
+-
+-		return true
+-
+-	}))
+-
+-	assert.Equal(t, count, 3)
+-	assert.Equal(t, replacedVals[0], v.MustInt16Slice()[0])
+-	assert.Equal(t, replacedVals[1], v.MustInt16Slice()[1])
+-	assert.Equal(t, replacedVals[2], v.MustInt16Slice()[2])
+-
+-}
+-
+-func TestWhereInt16(t *testing.T) {
+-
+-	v := &Value{data: []int16{int16(1), int16(1), int16(1), int16(1), int16(1), int16(1)}}
+-
+-	selected := v.WhereInt16(func(i int, val int16) bool {
+-		return i%2 == 0
+-	}).MustInt16Slice()
+-
+-	assert.Equal(t, 3, len(selected))
+-
+-}
+-
+-func TestGroupInt16(t *testing.T) {
+-
+-	v := &Value{data: []int16{int16(1), int16(1), int16(1), int16(1), int16(1), int16(1)}}
+-
+-	grouped := v.GroupInt16(func(i int, val int16) string {
+-		return fmt.Sprintf("%v", i%2 == 0)
+-	}).data.(map[string][]int16)
+-
+-	assert.Equal(t, 2, len(grouped))
+-	assert.Equal(t, 3, len(grouped["true"]))
+-	assert.Equal(t, 3, len(grouped["false"]))
+-
+-}
+-
+-func TestReplaceInt16(t *testing.T) {
+-
+-	v := &Value{data: []int16{int16(1), int16(1), int16(1), int16(1), int16(1), int16(1)}}
+-
+-	rawArr := v.MustInt16Slice()
+-
+-	replaced := v.ReplaceInt16(func(index int, val int16) int16 {
+-		if index < len(rawArr)-1 {
+-			return rawArr[index+1]
+-		}
+-		return rawArr[0]
+-	})
+-
+-	replacedArr := replaced.MustInt16Slice()
+-	if assert.Equal(t, 6, len(replacedArr)) {
+-		assert.Equal(t, replacedArr[0], rawArr[1])
+-		assert.Equal(t, replacedArr[1], rawArr[2])
+-		assert.Equal(t, replacedArr[2], rawArr[3])
+-		assert.Equal(t, replacedArr[3], rawArr[4])
+-		assert.Equal(t, replacedArr[4], rawArr[5])
+-		assert.Equal(t, replacedArr[5], rawArr[0])
+-	}
+-
+-}
+-
+-func TestCollectInt16(t *testing.T) {
+-
+-	v := &Value{data: []int16{int16(1), int16(1), int16(1), int16(1), int16(1), int16(1)}}
+-
+-	collected := v.CollectInt16(func(index int, val int16) interface{} {
+-		return index
+-	})
+-
+-	collectedArr := collected.MustInterSlice()
+-	if assert.Equal(t, 6, len(collectedArr)) {
+-		assert.Equal(t, collectedArr[0], 0)
+-		assert.Equal(t, collectedArr[1], 1)
+-		assert.Equal(t, collectedArr[2], 2)
+-		assert.Equal(t, collectedArr[3], 3)
+-		assert.Equal(t, collectedArr[4], 4)
+-		assert.Equal(t, collectedArr[5], 5)
+-	}
+-
+-}
+-
+-// ************************************************************
+-// TESTS
+-// ************************************************************
+-
+-func TestInt32(t *testing.T) {
+-
+-	val := int32(1)
+-	m := map[string]interface{}{"value": val, "nothing": nil}
+-	assert.Equal(t, val, New(m).Get("value").Int32())
+-	assert.Equal(t, val, New(m).Get("value").MustInt32())
+-	assert.Equal(t, int32(0), New(m).Get("nothing").Int32())
+-	assert.Equal(t, val, New(m).Get("nothing").Int32(1))
+-
+-	assert.Panics(t, func() {
+-		New(m).Get("age").MustInt32()
+-	})
+-
+-}
+-
+-func TestInt32Slice(t *testing.T) {
+-
+-	val := int32(1)
+-	m := map[string]interface{}{"value": []int32{val}, "nothing": nil}
+-	assert.Equal(t, val, New(m).Get("value").Int32Slice()[0])
+-	assert.Equal(t, val, New(m).Get("value").MustInt32Slice()[0])
+-	assert.Equal(t, []int32(nil), New(m).Get("nothing").Int32Slice())
+-	assert.Equal(t, val, New(m).Get("nothing").Int32Slice([]int32{int32(1)})[0])
+-
+-	assert.Panics(t, func() {
+-		New(m).Get("nothing").MustInt32Slice()
+-	})
+-
+-}
+-
+-func TestIsInt32(t *testing.T) {
+-
+-	var v *Value
+-
+-	v = &Value{data: int32(1)}
+-	assert.True(t, v.IsInt32())
+-
+-	v = &Value{data: []int32{int32(1)}}
+-	assert.True(t, v.IsInt32Slice())
+-
+-}
+-
+-func TestEachInt32(t *testing.T) {
+-
+-	v := &Value{data: []int32{int32(1), int32(1), int32(1), int32(1), int32(1)}}
+-	count := 0
+-	replacedVals := make([]int32, 0)
+-	assert.Equal(t, v, v.EachInt32(func(i int, val int32) bool {
+-
+-		count++
+-		replacedVals = append(replacedVals, val)
+-
+-		// abort early
+-		if i == 2 {
+-			return false
+-		}
+-
+-		return true
+-
+-	}))
+-
+-	assert.Equal(t, count, 3)
+-	assert.Equal(t, replacedVals[0], v.MustInt32Slice()[0])
+-	assert.Equal(t, replacedVals[1], v.MustInt32Slice()[1])
+-	assert.Equal(t, replacedVals[2], v.MustInt32Slice()[2])
+-
+-}
+-
+-func TestWhereInt32(t *testing.T) {
+-
+-	v := &Value{data: []int32{int32(1), int32(1), int32(1), int32(1), int32(1), int32(1)}}
+-
+-	selected := v.WhereInt32(func(i int, val int32) bool {
+-		return i%2 == 0
+-	}).MustInt32Slice()
+-
+-	assert.Equal(t, 3, len(selected))
+-
+-}
+-
+-func TestGroupInt32(t *testing.T) {
+-
+-	v := &Value{data: []int32{int32(1), int32(1), int32(1), int32(1), int32(1), int32(1)}}
+-
+-	grouped := v.GroupInt32(func(i int, val int32) string {
+-		return fmt.Sprintf("%v", i%2 == 0)
+-	}).data.(map[string][]int32)
+-
+-	assert.Equal(t, 2, len(grouped))
+-	assert.Equal(t, 3, len(grouped["true"]))
+-	assert.Equal(t, 3, len(grouped["false"]))
+-
+-}
+-
+-func TestReplaceInt32(t *testing.T) {
+-
+-	v := &Value{data: []int32{int32(1), int32(1), int32(1), int32(1), int32(1), int32(1)}}
+-
+-	rawArr := v.MustInt32Slice()
+-
+-	replaced := v.ReplaceInt32(func(index int, val int32) int32 {
+-		if index < len(rawArr)-1 {
+-			return rawArr[index+1]
+-		}
+-		return rawArr[0]
+-	})
+-
+-	replacedArr := replaced.MustInt32Slice()
+-	if assert.Equal(t, 6, len(replacedArr)) {
+-		assert.Equal(t, replacedArr[0], rawArr[1])
+-		assert.Equal(t, replacedArr[1], rawArr[2])
+-		assert.Equal(t, replacedArr[2], rawArr[3])
+-		assert.Equal(t, replacedArr[3], rawArr[4])
+-		assert.Equal(t, replacedArr[4], rawArr[5])
+-		assert.Equal(t, replacedArr[5], rawArr[0])
+-	}
+-
+-}
+-
+-func TestCollectInt32(t *testing.T) {
+-
+-	v := &Value{data: []int32{int32(1), int32(1), int32(1), int32(1), int32(1), int32(1)}}
+-
+-	collected := v.CollectInt32(func(index int, val int32) interface{} {
+-		return index
+-	})
+-
+-	collectedArr := collected.MustInterSlice()
+-	if assert.Equal(t, 6, len(collectedArr)) {
+-		assert.Equal(t, collectedArr[0], 0)
+-		assert.Equal(t, collectedArr[1], 1)
+-		assert.Equal(t, collectedArr[2], 2)
+-		assert.Equal(t, collectedArr[3], 3)
+-		assert.Equal(t, collectedArr[4], 4)
+-		assert.Equal(t, collectedArr[5], 5)
+-	}
+-
+-}
+-
+-// ************************************************************
+-// TESTS
+-// ************************************************************
+-
+-func TestInt64(t *testing.T) {
+-
+-	val := int64(1)
+-	m := map[string]interface{}{"value": val, "nothing": nil}
+-	assert.Equal(t, val, New(m).Get("value").Int64())
+-	assert.Equal(t, val, New(m).Get("value").MustInt64())
+-	assert.Equal(t, int64(0), New(m).Get("nothing").Int64())
+-	assert.Equal(t, val, New(m).Get("nothing").Int64(1))
+-
+-	assert.Panics(t, func() {
+-		New(m).Get("age").MustInt64()
+-	})
+-
+-}
+-
+-func TestInt64Slice(t *testing.T) {
+-
+-	val := int64(1)
+-	m := map[string]interface{}{"value": []int64{val}, "nothing": nil}
+-	assert.Equal(t, val, New(m).Get("value").Int64Slice()[0])
+-	assert.Equal(t, val, New(m).Get("value").MustInt64Slice()[0])
+-	assert.Equal(t, []int64(nil), New(m).Get("nothing").Int64Slice())
+-	assert.Equal(t, val, New(m).Get("nothing").Int64Slice([]int64{int64(1)})[0])
+-
+-	assert.Panics(t, func() {
+-		New(m).Get("nothing").MustInt64Slice()
+-	})
+-
+-}
+-
+-func TestIsInt64(t *testing.T) {
+-
+-	var v *Value
+-
+-	v = &Value{data: int64(1)}
+-	assert.True(t, v.IsInt64())
+-
+-	v = &Value{data: []int64{int64(1)}}
+-	assert.True(t, v.IsInt64Slice())
+-
+-}
+-
+-func TestEachInt64(t *testing.T) {
+-
+-	v := &Value{data: []int64{int64(1), int64(1), int64(1), int64(1), int64(1)}}
+-	count := 0
+-	replacedVals := make([]int64, 0)
+-	assert.Equal(t, v, v.EachInt64(func(i int, val int64) bool {
+-
+-		count++
+-		replacedVals = append(replacedVals, val)
+-
+-		// abort early
+-		if i == 2 {
+-			return false
+-		}
+-
+-		return true
+-
+-	}))
+-
+-	assert.Equal(t, count, 3)
+-	assert.Equal(t, replacedVals[0], v.MustInt64Slice()[0])
+-	assert.Equal(t, replacedVals[1], v.MustInt64Slice()[1])
+-	assert.Equal(t, replacedVals[2], v.MustInt64Slice()[2])
+-
+-}
+-
+-func TestWhereInt64(t *testing.T) {
+-
+-	v := &Value{data: []int64{int64(1), int64(1), int64(1), int64(1), int64(1), int64(1)}}
+-
+-	selected := v.WhereInt64(func(i int, val int64) bool {
+-		return i%2 == 0
+-	}).MustInt64Slice()
+-
+-	assert.Equal(t, 3, len(selected))
+-
+-}
+-
+-func TestGroupInt64(t *testing.T) {
+-
+-	v := &Value{data: []int64{int64(1), int64(1), int64(1), int64(1), int64(1), int64(1)}}
+-
+-	grouped := v.GroupInt64(func(i int, val int64) string {
+-		return fmt.Sprintf("%v", i%2 == 0)
+-	}).data.(map[string][]int64)
+-
+-	assert.Equal(t, 2, len(grouped))
+-	assert.Equal(t, 3, len(grouped["true"]))
+-	assert.Equal(t, 3, len(grouped["false"]))
+-
+-}
+-
+-func TestReplaceInt64(t *testing.T) {
+-
+-	v := &Value{data: []int64{int64(1), int64(1), int64(1), int64(1), int64(1), int64(1)}}
+-
+-	rawArr := v.MustInt64Slice()
+-
+-	replaced := v.ReplaceInt64(func(index int, val int64) int64 {
+-		if index < len(rawArr)-1 {
+-			return rawArr[index+1]
+-		}
+-		return rawArr[0]
+-	})
+-
+-	replacedArr := replaced.MustInt64Slice()
+-	if assert.Equal(t, 6, len(replacedArr)) {
+-		assert.Equal(t, replacedArr[0], rawArr[1])
+-		assert.Equal(t, replacedArr[1], rawArr[2])
+-		assert.Equal(t, replacedArr[2], rawArr[3])
+-		assert.Equal(t, replacedArr[3], rawArr[4])
+-		assert.Equal(t, replacedArr[4], rawArr[5])
+-		assert.Equal(t, replacedArr[5], rawArr[0])
+-	}
+-
+-}
+-
+-func TestCollectInt64(t *testing.T) {
+-
+-	v := &Value{data: []int64{int64(1), int64(1), int64(1), int64(1), int64(1), int64(1)}}
+-
+-	collected := v.CollectInt64(func(index int, val int64) interface{} {
+-		return index
+-	})
+-
+-	collectedArr := collected.MustInterSlice()
+-	if assert.Equal(t, 6, len(collectedArr)) {
+-		assert.Equal(t, collectedArr[0], 0)
+-		assert.Equal(t, collectedArr[1], 1)
+-		assert.Equal(t, collectedArr[2], 2)
+-		assert.Equal(t, collectedArr[3], 3)
+-		assert.Equal(t, collectedArr[4], 4)
+-		assert.Equal(t, collectedArr[5], 5)
+-	}
+-
+-}
+-
+-// ************************************************************
+-// TESTS
+-// ************************************************************
+-
+-func TestUint(t *testing.T) {
+-
+-	val := uint(1)
+-	m := map[string]interface{}{"value": val, "nothing": nil}
+-	assert.Equal(t, val, New(m).Get("value").Uint())
+-	assert.Equal(t, val, New(m).Get("value").MustUint())
+-	assert.Equal(t, uint(0), New(m).Get("nothing").Uint())
+-	assert.Equal(t, val, New(m).Get("nothing").Uint(1))
+-
+-	assert.Panics(t, func() {
+-		New(m).Get("age").MustUint()
+-	})
+-
+-}
+-
+-func TestUintSlice(t *testing.T) {
+-
+-	val := uint(1)
+-	m := map[string]interface{}{"value": []uint{val}, "nothing": nil}
+-	assert.Equal(t, val, New(m).Get("value").UintSlice()[0])
+-	assert.Equal(t, val, New(m).Get("value").MustUintSlice()[0])
+-	assert.Equal(t, []uint(nil), New(m).Get("nothing").UintSlice())
+-	assert.Equal(t, val, New(m).Get("nothing").UintSlice([]uint{uint(1)})[0])
+-
+-	assert.Panics(t, func() {
+-		New(m).Get("nothing").MustUintSlice()
+-	})
+-
+-}
+-
+-func TestIsUint(t *testing.T) {
+-
+-	var v *Value
+-
+-	v = &Value{data: uint(1)}
+-	assert.True(t, v.IsUint())
+-
+-	v = &Value{data: []uint{uint(1)}}
+-	assert.True(t, v.IsUintSlice())
+-
+-}
+-
+-func TestEachUint(t *testing.T) {
+-
+-	v := &Value{data: []uint{uint(1), uint(1), uint(1), uint(1), uint(1)}}
+-	count := 0
+-	replacedVals := make([]uint, 0)
+-	assert.Equal(t, v, v.EachUint(func(i int, val uint) bool {
+-
+-		count++
+-		replacedVals = append(replacedVals, val)
+-
+-		// abort early
+-		if i == 2 {
+-			return false
+-		}
+-
+-		return true
+-
+-	}))
+-
+-	assert.Equal(t, count, 3)
+-	assert.Equal(t, replacedVals[0], v.MustUintSlice()[0])
+-	assert.Equal(t, replacedVals[1], v.MustUintSlice()[1])
+-	assert.Equal(t, replacedVals[2], v.MustUintSlice()[2])
+-
+-}
+-
+-func TestWhereUint(t *testing.T) {
+-
+-	v := &Value{data: []uint{uint(1), uint(1), uint(1), uint(1), uint(1), uint(1)}}
+-
+-	selected := v.WhereUint(func(i int, val uint) bool {
+-		return i%2 == 0
+-	}).MustUintSlice()
+-
+-	assert.Equal(t, 3, len(selected))
+-
+-}
+-
+-func TestGroupUint(t *testing.T) {
+-
+-	v := &Value{data: []uint{uint(1), uint(1), uint(1), uint(1), uint(1), uint(1)}}
+-
+-	grouped := v.GroupUint(func(i int, val uint) string {
+-		return fmt.Sprintf("%v", i%2 == 0)
+-	}).data.(map[string][]uint)
+-
+-	assert.Equal(t, 2, len(grouped))
+-	assert.Equal(t, 3, len(grouped["true"]))
+-	assert.Equal(t, 3, len(grouped["false"]))
+-
+-}
+-
+-func TestReplaceUint(t *testing.T) {
+-
+-	v := &Value{data: []uint{uint(1), uint(1), uint(1), uint(1), uint(1), uint(1)}}
+-
+-	rawArr := v.MustUintSlice()
+-
+-	replaced := v.ReplaceUint(func(index int, val uint) uint {
+-		if index < len(rawArr)-1 {
+-			return rawArr[index+1]
+-		}
+-		return rawArr[0]
+-	})
+-
+-	replacedArr := replaced.MustUintSlice()
+-	if assert.Equal(t, 6, len(replacedArr)) {
+-		assert.Equal(t, replacedArr[0], rawArr[1])
+-		assert.Equal(t, replacedArr[1], rawArr[2])
+-		assert.Equal(t, replacedArr[2], rawArr[3])
+-		assert.Equal(t, replacedArr[3], rawArr[4])
+-		assert.Equal(t, replacedArr[4], rawArr[5])
+-		assert.Equal(t, replacedArr[5], rawArr[0])
+-	}
+-
+-}
+-
+-func TestCollectUint(t *testing.T) {
+-
+-	v := &Value{data: []uint{uint(1), uint(1), uint(1), uint(1), uint(1), uint(1)}}
+-
+-	collected := v.CollectUint(func(index int, val uint) interface{} {
+-		return index
+-	})
+-
+-	collectedArr := collected.MustInterSlice()
+-	if assert.Equal(t, 6, len(collectedArr)) {
+-		assert.Equal(t, collectedArr[0], 0)
+-		assert.Equal(t, collectedArr[1], 1)
+-		assert.Equal(t, collectedArr[2], 2)
+-		assert.Equal(t, collectedArr[3], 3)
+-		assert.Equal(t, collectedArr[4], 4)
+-		assert.Equal(t, collectedArr[5], 5)
+-	}
+-
+-}
+-
+-// ************************************************************
+-// TESTS
+-// ************************************************************
+-
+-func TestUint8(t *testing.T) {
+-
+-	val := uint8(1)
+-	m := map[string]interface{}{"value": val, "nothing": nil}
+-	assert.Equal(t, val, New(m).Get("value").Uint8())
+-	assert.Equal(t, val, New(m).Get("value").MustUint8())
+-	assert.Equal(t, uint8(0), New(m).Get("nothing").Uint8())
+-	assert.Equal(t, val, New(m).Get("nothing").Uint8(1))
+-
+-	assert.Panics(t, func() {
+-		New(m).Get("age").MustUint8()
+-	})
+-
+-}
+-
+-func TestUint8Slice(t *testing.T) {
+-
+-	val := uint8(1)
+-	m := map[string]interface{}{"value": []uint8{val}, "nothing": nil}
+-	assert.Equal(t, val, New(m).Get("value").Uint8Slice()[0])
+-	assert.Equal(t, val, New(m).Get("value").MustUint8Slice()[0])
+-	assert.Equal(t, []uint8(nil), New(m).Get("nothing").Uint8Slice())
+-	assert.Equal(t, val, New(m).Get("nothing").Uint8Slice([]uint8{uint8(1)})[0])
+-
+-	assert.Panics(t, func() {
+-		New(m).Get("nothing").MustUint8Slice()
+-	})
+-
+-}
+-
+-func TestIsUint8(t *testing.T) {
+-
+-	var v *Value
+-
+-	v = &Value{data: uint8(1)}
+-	assert.True(t, v.IsUint8())
+-
+-	v = &Value{data: []uint8{uint8(1)}}
+-	assert.True(t, v.IsUint8Slice())
+-
+-}
+-
+-func TestEachUint8(t *testing.T) {
+-
+-	v := &Value{data: []uint8{uint8(1), uint8(1), uint8(1), uint8(1), uint8(1)}}
+-	count := 0
+-	replacedVals := make([]uint8, 0)
+-	assert.Equal(t, v, v.EachUint8(func(i int, val uint8) bool {
+-
+-		count++
+-		replacedVals = append(replacedVals, val)
+-
+-		// abort early
+-		if i == 2 {
+-			return false
+-		}
+-
+-		return true
+-
+-	}))
+-
+-	assert.Equal(t, count, 3)
+-	assert.Equal(t, replacedVals[0], v.MustUint8Slice()[0])
+-	assert.Equal(t, replacedVals[1], v.MustUint8Slice()[1])
+-	assert.Equal(t, replacedVals[2], v.MustUint8Slice()[2])
+-
+-}
+-
+-func TestWhereUint8(t *testing.T) {
+-
+-	v := &Value{data: []uint8{uint8(1), uint8(1), uint8(1), uint8(1), uint8(1), uint8(1)}}
+-
+-	selected := v.WhereUint8(func(i int, val uint8) bool {
+-		return i%2 == 0
+-	}).MustUint8Slice()
+-
+-	assert.Equal(t, 3, len(selected))
+-
+-}
+-
+-func TestGroupUint8(t *testing.T) {
+-
+-	v := &Value{data: []uint8{uint8(1), uint8(1), uint8(1), uint8(1), uint8(1), uint8(1)}}
+-
+-	grouped := v.GroupUint8(func(i int, val uint8) string {
+-		return fmt.Sprintf("%v", i%2 == 0)
+-	}).data.(map[string][]uint8)
+-
+-	assert.Equal(t, 2, len(grouped))
+-	assert.Equal(t, 3, len(grouped["true"]))
+-	assert.Equal(t, 3, len(grouped["false"]))
+-
+-}
+-
+-func TestReplaceUint8(t *testing.T) {
+-
+-	v := &Value{data: []uint8{uint8(1), uint8(1), uint8(1), uint8(1), uint8(1), uint8(1)}}
+-
+-	rawArr := v.MustUint8Slice()
+-
+-	replaced := v.ReplaceUint8(func(index int, val uint8) uint8 {
+-		if index < len(rawArr)-1 {
+-			return rawArr[index+1]
+-		}
+-		return rawArr[0]
+-	})
+-
+-	replacedArr := replaced.MustUint8Slice()
+-	if assert.Equal(t, 6, len(replacedArr)) {
+-		assert.Equal(t, replacedArr[0], rawArr[1])
+-		assert.Equal(t, replacedArr[1], rawArr[2])
+-		assert.Equal(t, replacedArr[2], rawArr[3])
+-		assert.Equal(t, replacedArr[3], rawArr[4])
+-		assert.Equal(t, replacedArr[4], rawArr[5])
+-		assert.Equal(t, replacedArr[5], rawArr[0])
+-	}
+-
+-}
+-
+-func TestCollectUint8(t *testing.T) {
+-
+-	v := &Value{data: []uint8{uint8(1), uint8(1), uint8(1), uint8(1), uint8(1), uint8(1)}}
+-
+-	collected := v.CollectUint8(func(index int, val uint8) interface{} {
+-		return index
+-	})
+-
+-	collectedArr := collected.MustInterSlice()
+-	if assert.Equal(t, 6, len(collectedArr)) {
+-		assert.Equal(t, collectedArr[0], 0)
+-		assert.Equal(t, collectedArr[1], 1)
+-		assert.Equal(t, collectedArr[2], 2)
+-		assert.Equal(t, collectedArr[3], 3)
+-		assert.Equal(t, collectedArr[4], 4)
+-		assert.Equal(t, collectedArr[5], 5)
+-	}
+-
+-}
+-
+-// ************************************************************
+-// TESTS
+-// ************************************************************
+-
+-func TestUint16(t *testing.T) {
+-
+-	val := uint16(1)
+-	m := map[string]interface{}{"value": val, "nothing": nil}
+-	assert.Equal(t, val, New(m).Get("value").Uint16())
+-	assert.Equal(t, val, New(m).Get("value").MustUint16())
+-	assert.Equal(t, uint16(0), New(m).Get("nothing").Uint16())
+-	assert.Equal(t, val, New(m).Get("nothing").Uint16(1))
+-
+-	assert.Panics(t, func() {
+-		New(m).Get("age").MustUint16()
+-	})
+-
+-}
+-
+-func TestUint16Slice(t *testing.T) {
+-
+-	val := uint16(1)
+-	m := map[string]interface{}{"value": []uint16{val}, "nothing": nil}
+-	assert.Equal(t, val, New(m).Get("value").Uint16Slice()[0])
+-	assert.Equal(t, val, New(m).Get("value").MustUint16Slice()[0])
+-	assert.Equal(t, []uint16(nil), New(m).Get("nothing").Uint16Slice())
+-	assert.Equal(t, val, New(m).Get("nothing").Uint16Slice([]uint16{uint16(1)})[0])
+-
+-	assert.Panics(t, func() {
+-		New(m).Get("nothing").MustUint16Slice()
+-	})
+-
+-}
+-
+-func TestIsUint16(t *testing.T) {
+-
+-	var v *Value
+-
+-	v = &Value{data: uint16(1)}
+-	assert.True(t, v.IsUint16())
+-
+-	v = &Value{data: []uint16{uint16(1)}}
+-	assert.True(t, v.IsUint16Slice())
+-
+-}
+-
+-func TestEachUint16(t *testing.T) {
+-
+-	v := &Value{data: []uint16{uint16(1), uint16(1), uint16(1), uint16(1), uint16(1)}}
+-	count := 0
+-	replacedVals := make([]uint16, 0)
+-	assert.Equal(t, v, v.EachUint16(func(i int, val uint16) bool {
+-
+-		count++
+-		replacedVals = append(replacedVals, val)
+-
+-		// abort early
+-		if i == 2 {
+-			return false
+-		}
+-
+-		return true
+-
+-	}))
+-
+-	assert.Equal(t, count, 3)
+-	assert.Equal(t, replacedVals[0], v.MustUint16Slice()[0])
+-	assert.Equal(t, replacedVals[1], v.MustUint16Slice()[1])
+-	assert.Equal(t, replacedVals[2], v.MustUint16Slice()[2])
+-
+-}
+-
+-func TestWhereUint16(t *testing.T) {
+-
+-	v := &Value{data: []uint16{uint16(1), uint16(1), uint16(1), uint16(1), uint16(1), uint16(1)}}
+-
+-	selected := v.WhereUint16(func(i int, val uint16) bool {
+-		return i%2 == 0
+-	}).MustUint16Slice()
+-
+-	assert.Equal(t, 3, len(selected))
+-
+-}
+-
+-func TestGroupUint16(t *testing.T) {
+-
+-	v := &Value{data: []uint16{uint16(1), uint16(1), uint16(1), uint16(1), uint16(1), uint16(1)}}
+-
+-	grouped := v.GroupUint16(func(i int, val uint16) string {
+-		return fmt.Sprintf("%v", i%2 == 0)
+-	}).data.(map[string][]uint16)
+-
+-	assert.Equal(t, 2, len(grouped))
+-	assert.Equal(t, 3, len(grouped["true"]))
+-	assert.Equal(t, 3, len(grouped["false"]))
+-
+-}
+-
+-func TestReplaceUint16(t *testing.T) {
+-
+-	v := &Value{data: []uint16{uint16(1), uint16(1), uint16(1), uint16(1), uint16(1), uint16(1)}}
+-
+-	rawArr := v.MustUint16Slice()
+-
+-	replaced := v.ReplaceUint16(func(index int, val uint16) uint16 {
+-		if index < len(rawArr)-1 {
+-			return rawArr[index+1]
+-		}
+-		return rawArr[0]
+-	})
+-
+-	replacedArr := replaced.MustUint16Slice()
+-	if assert.Equal(t, 6, len(replacedArr)) {
+-		assert.Equal(t, replacedArr[0], rawArr[1])
+-		assert.Equal(t, replacedArr[1], rawArr[2])
+-		assert.Equal(t, replacedArr[2], rawArr[3])
+-		assert.Equal(t, replacedArr[3], rawArr[4])
+-		assert.Equal(t, replacedArr[4], rawArr[5])
+-		assert.Equal(t, replacedArr[5], rawArr[0])
+-	}
+-
+-}
+-
+-func TestCollectUint16(t *testing.T) {
+-
+-	v := &Value{data: []uint16{uint16(1), uint16(1), uint16(1), uint16(1), uint16(1), uint16(1)}}
+-
+-	collected := v.CollectUint16(func(index int, val uint16) interface{} {
+-		return index
+-	})
+-
+-	collectedArr := collected.MustInterSlice()
+-	if assert.Equal(t, 6, len(collectedArr)) {
+-		assert.Equal(t, collectedArr[0], 0)
+-		assert.Equal(t, collectedArr[1], 1)
+-		assert.Equal(t, collectedArr[2], 2)
+-		assert.Equal(t, collectedArr[3], 3)
+-		assert.Equal(t, collectedArr[4], 4)
+-		assert.Equal(t, collectedArr[5], 5)
+-	}
+-
+-}
+-
+-// ************************************************************
+-// TESTS
+-// ************************************************************
+-
+-func TestUint32(t *testing.T) {
+-
+-	val := uint32(1)
+-	m := map[string]interface{}{"value": val, "nothing": nil}
+-	assert.Equal(t, val, New(m).Get("value").Uint32())
+-	assert.Equal(t, val, New(m).Get("value").MustUint32())
+-	assert.Equal(t, uint32(0), New(m).Get("nothing").Uint32())
+-	assert.Equal(t, val, New(m).Get("nothing").Uint32(1))
+-
+-	assert.Panics(t, func() {
+-		New(m).Get("age").MustUint32()
+-	})
+-
+-}
+-
+-func TestUint32Slice(t *testing.T) {
+-
+-	val := uint32(1)
+-	m := map[string]interface{}{"value": []uint32{val}, "nothing": nil}
+-	assert.Equal(t, val, New(m).Get("value").Uint32Slice()[0])
+-	assert.Equal(t, val, New(m).Get("value").MustUint32Slice()[0])
+-	assert.Equal(t, []uint32(nil), New(m).Get("nothing").Uint32Slice())
+-	assert.Equal(t, val, New(m).Get("nothing").Uint32Slice([]uint32{uint32(1)})[0])
+-
+-	assert.Panics(t, func() {
+-		New(m).Get("nothing").MustUint32Slice()
+-	})
+-
+-}
+-
+-func TestIsUint32(t *testing.T) {
+-
+-	var v *Value
+-
+-	v = &Value{data: uint32(1)}
+-	assert.True(t, v.IsUint32())
+-
+-	v = &Value{data: []uint32{uint32(1)}}
+-	assert.True(t, v.IsUint32Slice())
+-
+-}
+-
+-func TestEachUint32(t *testing.T) {
+-
+-	v := &Value{data: []uint32{uint32(1), uint32(1), uint32(1), uint32(1), uint32(1)}}
+-	count := 0
+-	replacedVals := make([]uint32, 0)
+-	assert.Equal(t, v, v.EachUint32(func(i int, val uint32) bool {
+-
+-		count++
+-		replacedVals = append(replacedVals, val)
+-
+-		// abort early
+-		if i == 2 {
+-			return false
+-		}
+-
+-		return true
+-
+-	}))
+-
+-	assert.Equal(t, count, 3)
+-	assert.Equal(t, replacedVals[0], v.MustUint32Slice()[0])
+-	assert.Equal(t, replacedVals[1], v.MustUint32Slice()[1])
+-	assert.Equal(t, replacedVals[2], v.MustUint32Slice()[2])
+-
+-}
+-
+-func TestWhereUint32(t *testing.T) {
+-
+-	v := &Value{data: []uint32{uint32(1), uint32(1), uint32(1), uint32(1), uint32(1), uint32(1)}}
+-
+-	selected := v.WhereUint32(func(i int, val uint32) bool {
+-		return i%2 == 0
+-	}).MustUint32Slice()
+-
+-	assert.Equal(t, 3, len(selected))
+-
+-}
+-
+-func TestGroupUint32(t *testing.T) {
+-
+-	v := &Value{data: []uint32{uint32(1), uint32(1), uint32(1), uint32(1), uint32(1), uint32(1)}}
+-
+-	grouped := v.GroupUint32(func(i int, val uint32) string {
+-		return fmt.Sprintf("%v", i%2 == 0)
+-	}).data.(map[string][]uint32)
+-
+-	assert.Equal(t, 2, len(grouped))
+-	assert.Equal(t, 3, len(grouped["true"]))
+-	assert.Equal(t, 3, len(grouped["false"]))
+-
+-}
+-
+-func TestReplaceUint32(t *testing.T) {
+-
+-	v := &Value{data: []uint32{uint32(1), uint32(1), uint32(1), uint32(1), uint32(1), uint32(1)}}
+-
+-	rawArr := v.MustUint32Slice()
+-
+-	replaced := v.ReplaceUint32(func(index int, val uint32) uint32 {
+-		if index < len(rawArr)-1 {
+-			return rawArr[index+1]
+-		}
+-		return rawArr[0]
+-	})
+-
+-	replacedArr := replaced.MustUint32Slice()
+-	if assert.Equal(t, 6, len(replacedArr)) {
+-		assert.Equal(t, replacedArr[0], rawArr[1])
+-		assert.Equal(t, replacedArr[1], rawArr[2])
+-		assert.Equal(t, replacedArr[2], rawArr[3])
+-		assert.Equal(t, replacedArr[3], rawArr[4])
+-		assert.Equal(t, replacedArr[4], rawArr[5])
+-		assert.Equal(t, replacedArr[5], rawArr[0])
+-	}
+-
+-}
+-
+-func TestCollectUint32(t *testing.T) {
+-
+-	v := &Value{data: []uint32{uint32(1), uint32(1), uint32(1), uint32(1), uint32(1), uint32(1)}}
+-
+-	collected := v.CollectUint32(func(index int, val uint32) interface{} {
+-		return index
+-	})
+-
+-	collectedArr := collected.MustInterSlice()
+-	if assert.Equal(t, 6, len(collectedArr)) {
+-		assert.Equal(t, collectedArr[0], 0)
+-		assert.Equal(t, collectedArr[1], 1)
+-		assert.Equal(t, collectedArr[2], 2)
+-		assert.Equal(t, collectedArr[3], 3)
+-		assert.Equal(t, collectedArr[4], 4)
+-		assert.Equal(t, collectedArr[5], 5)
+-	}
+-
+-}
+-
+-// ************************************************************
+-// TESTS
+-// ************************************************************
+-
+-func TestUint64(t *testing.T) {
+-
+-	val := uint64(1)
+-	m := map[string]interface{}{"value": val, "nothing": nil}
+-	assert.Equal(t, val, New(m).Get("value").Uint64())
+-	assert.Equal(t, val, New(m).Get("value").MustUint64())
+-	assert.Equal(t, uint64(0), New(m).Get("nothing").Uint64())
+-	assert.Equal(t, val, New(m).Get("nothing").Uint64(1))
+-
+-	assert.Panics(t, func() {
+-		New(m).Get("age").MustUint64()
+-	})
+-
+-}
+-
+-func TestUint64Slice(t *testing.T) {
+-
+-	val := uint64(1)
+-	m := map[string]interface{}{"value": []uint64{val}, "nothing": nil}
+-	assert.Equal(t, val, New(m).Get("value").Uint64Slice()[0])
+-	assert.Equal(t, val, New(m).Get("value").MustUint64Slice()[0])
+-	assert.Equal(t, []uint64(nil), New(m).Get("nothing").Uint64Slice())
+-	assert.Equal(t, val, New(m).Get("nothing").Uint64Slice([]uint64{uint64(1)})[0])
+-
+-	assert.Panics(t, func() {
+-		New(m).Get("nothing").MustUint64Slice()
+-	})
+-
+-}
+-
+-func TestIsUint64(t *testing.T) {
+-
+-	var v *Value
+-
+-	v = &Value{data: uint64(1)}
+-	assert.True(t, v.IsUint64())
+-
+-	v = &Value{data: []uint64{uint64(1)}}
+-	assert.True(t, v.IsUint64Slice())
+-
+-}
+-
+-func TestEachUint64(t *testing.T) {
+-
+-	v := &Value{data: []uint64{uint64(1), uint64(1), uint64(1), uint64(1), uint64(1)}}
+-	count := 0
+-	replacedVals := make([]uint64, 0)
+-	assert.Equal(t, v, v.EachUint64(func(i int, val uint64) bool {
+-
+-		count++
+-		replacedVals = append(replacedVals, val)
+-
+-		// abort early
+-		if i == 2 {
+-			return false
+-		}
+-
+-		return true
+-
+-	}))
+-
+-	assert.Equal(t, count, 3)
+-	assert.Equal(t, replacedVals[0], v.MustUint64Slice()[0])
+-	assert.Equal(t, replacedVals[1], v.MustUint64Slice()[1])
+-	assert.Equal(t, replacedVals[2], v.MustUint64Slice()[2])
+-
+-}
+-
+-func TestWhereUint64(t *testing.T) {
+-
+-	v := &Value{data: []uint64{uint64(1), uint64(1), uint64(1), uint64(1), uint64(1), uint64(1)}}
+-
+-	selected := v.WhereUint64(func(i int, val uint64) bool {
+-		return i%2 == 0
+-	}).MustUint64Slice()
+-
+-	assert.Equal(t, 3, len(selected))
+-
+-}
+-
+-func TestGroupUint64(t *testing.T) {
+-
+-	v := &Value{data: []uint64{uint64(1), uint64(1), uint64(1), uint64(1), uint64(1), uint64(1)}}
+-
+-	grouped := v.GroupUint64(func(i int, val uint64) string {
+-		return fmt.Sprintf("%v", i%2 == 0)
+-	}).data.(map[string][]uint64)
+-
+-	assert.Equal(t, 2, len(grouped))
+-	assert.Equal(t, 3, len(grouped["true"]))
+-	assert.Equal(t, 3, len(grouped["false"]))
+-
+-}
+-
+-func TestReplaceUint64(t *testing.T) {
+-
+-	v := &Value{data: []uint64{uint64(1), uint64(1), uint64(1), uint64(1), uint64(1), uint64(1)}}
+-
+-	rawArr := v.MustUint64Slice()
+-
+-	replaced := v.ReplaceUint64(func(index int, val uint64) uint64 {
+-		if index < len(rawArr)-1 {
+-			return rawArr[index+1]
+-		}
+-		return rawArr[0]
+-	})
+-
+-	replacedArr := replaced.MustUint64Slice()
+-	if assert.Equal(t, 6, len(replacedArr)) {
+-		assert.Equal(t, replacedArr[0], rawArr[1])
+-		assert.Equal(t, replacedArr[1], rawArr[2])
+-		assert.Equal(t, replacedArr[2], rawArr[3])
+-		assert.Equal(t, replacedArr[3], rawArr[4])
+-		assert.Equal(t, replacedArr[4], rawArr[5])
+-		assert.Equal(t, replacedArr[5], rawArr[0])
+-	}
+-
+-}
+-
+-func TestCollectUint64(t *testing.T) {
+-
+-	v := &Value{data: []uint64{uint64(1), uint64(1), uint64(1), uint64(1), uint64(1), uint64(1)}}
+-
+-	collected := v.CollectUint64(func(index int, val uint64) interface{} {
+-		return index
+-	})
+-
+-	collectedArr := collected.MustInterSlice()
+-	if assert.Equal(t, 6, len(collectedArr)) {
+-		assert.Equal(t, collectedArr[0], 0)
+-		assert.Equal(t, collectedArr[1], 1)
+-		assert.Equal(t, collectedArr[2], 2)
+-		assert.Equal(t, collectedArr[3], 3)
+-		assert.Equal(t, collectedArr[4], 4)
+-		assert.Equal(t, collectedArr[5], 5)
+-	}
+-
+-}
+-
+-// ************************************************************
+-// TESTS
+-// ************************************************************
+-
+-func TestUintptr(t *testing.T) {
+-
+-	val := uintptr(1)
+-	m := map[string]interface{}{"value": val, "nothing": nil}
+-	assert.Equal(t, val, New(m).Get("value").Uintptr())
+-	assert.Equal(t, val, New(m).Get("value").MustUintptr())
+-	assert.Equal(t, uintptr(0), New(m).Get("nothing").Uintptr())
+-	assert.Equal(t, val, New(m).Get("nothing").Uintptr(1))
+-
+-	assert.Panics(t, func() {
+-		New(m).Get("age").MustUintptr()
+-	})
+-
+-}
+-
+-func TestUintptrSlice(t *testing.T) {
+-
+-	val := uintptr(1)
+-	m := map[string]interface{}{"value": []uintptr{val}, "nothing": nil}
+-	assert.Equal(t, val, New(m).Get("value").UintptrSlice()[0])
+-	assert.Equal(t, val, New(m).Get("value").MustUintptrSlice()[0])
+-	assert.Equal(t, []uintptr(nil), New(m).Get("nothing").UintptrSlice())
+-	assert.Equal(t, val, New(m).Get("nothing").UintptrSlice([]uintptr{uintptr(1)})[0])
+-
+-	assert.Panics(t, func() {
+-		New(m).Get("nothing").MustUintptrSlice()
+-	})
+-
+-}
+-
+-func TestIsUintptr(t *testing.T) {
+-
+-	var v *Value
+-
+-	v = &Value{data: uintptr(1)}
+-	assert.True(t, v.IsUintptr())
+-
+-	v = &Value{data: []uintptr{uintptr(1)}}
+-	assert.True(t, v.IsUintptrSlice())
+-
+-}
+-
+-func TestEachUintptr(t *testing.T) {
+-
+-	v := &Value{data: []uintptr{uintptr(1), uintptr(1), uintptr(1), uintptr(1), uintptr(1)}}
+-	count := 0
+-	replacedVals := make([]uintptr, 0)
+-	assert.Equal(t, v, v.EachUintptr(func(i int, val uintptr) bool {
+-
+-		count++
+-		replacedVals = append(replacedVals, val)
+-
+-		// abort early
+-		if i == 2 {
+-			return false
+-		}
+-
+-		return true
+-
+-	}))
+-
+-	assert.Equal(t, count, 3)
+-	assert.Equal(t, replacedVals[0], v.MustUintptrSlice()[0])
+-	assert.Equal(t, replacedVals[1], v.MustUintptrSlice()[1])
+-	assert.Equal(t, replacedVals[2], v.MustUintptrSlice()[2])
+-
+-}
+-
+-func TestWhereUintptr(t *testing.T) {
+-
+-	v := &Value{data: []uintptr{uintptr(1), uintptr(1), uintptr(1), uintptr(1), uintptr(1), uintptr(1)}}
+-
+-	selected := v.WhereUintptr(func(i int, val uintptr) bool {
+-		return i%2 == 0
+-	}).MustUintptrSlice()
+-
+-	assert.Equal(t, 3, len(selected))
+-
+-}
+-
+-func TestGroupUintptr(t *testing.T) {
+-
+-	v := &Value{data: []uintptr{uintptr(1), uintptr(1), uintptr(1), uintptr(1), uintptr(1), uintptr(1)}}
+-
+-	grouped := v.GroupUintptr(func(i int, val uintptr) string {
+-		return fmt.Sprintf("%v", i%2 == 0)
+-	}).data.(map[string][]uintptr)
+-
+-	assert.Equal(t, 2, len(grouped))
+-	assert.Equal(t, 3, len(grouped["true"]))
+-	assert.Equal(t, 3, len(grouped["false"]))
+-
+-}
+-
+-func TestReplaceUintptr(t *testing.T) {
+-
+-	v := &Value{data: []uintptr{uintptr(1), uintptr(1), uintptr(1), uintptr(1), uintptr(1), uintptr(1)}}
+-
+-	rawArr := v.MustUintptrSlice()
+-
+-	replaced := v.ReplaceUintptr(func(index int, val uintptr) uintptr {
+-		if index < len(rawArr)-1 {
+-			return rawArr[index+1]
+-		}
+-		return rawArr[0]
+-	})
+-
+-	replacedArr := replaced.MustUintptrSlice()
+-	if assert.Equal(t, 6, len(replacedArr)) {
+-		assert.Equal(t, replacedArr[0], rawArr[1])
+-		assert.Equal(t, replacedArr[1], rawArr[2])
+-		assert.Equal(t, replacedArr[2], rawArr[3])
+-		assert.Equal(t, replacedArr[3], rawArr[4])
+-		assert.Equal(t, replacedArr[4], rawArr[5])
+-		assert.Equal(t, replacedArr[5], rawArr[0])
+-	}
+-
+-}
+-
+-func TestCollectUintptr(t *testing.T) {
+-
+-	v := &Value{data: []uintptr{uintptr(1), uintptr(1), uintptr(1), uintptr(1), uintptr(1), uintptr(1)}}
+-
+-	collected := v.CollectUintptr(func(index int, val uintptr) interface{} {
+-		return index
+-	})
+-
+-	collectedArr := collected.MustInterSlice()
+-	if assert.Equal(t, 6, len(collectedArr)) {
+-		assert.Equal(t, collectedArr[0], 0)
+-		assert.Equal(t, collectedArr[1], 1)
+-		assert.Equal(t, collectedArr[2], 2)
+-		assert.Equal(t, collectedArr[3], 3)
+-		assert.Equal(t, collectedArr[4], 4)
+-		assert.Equal(t, collectedArr[5], 5)
+-	}
+-
+-}
+-
+-// ************************************************************
+-// TESTS
+-// ************************************************************
+-
+-func TestFloat32(t *testing.T) {
+-
+-	val := float32(1)
+-	m := map[string]interface{}{"value": val, "nothing": nil}
+-	assert.Equal(t, val, New(m).Get("value").Float32())
+-	assert.Equal(t, val, New(m).Get("value").MustFloat32())
+-	assert.Equal(t, float32(0), New(m).Get("nothing").Float32())
+-	assert.Equal(t, val, New(m).Get("nothing").Float32(1))
+-
+-	assert.Panics(t, func() {
+-		New(m).Get("age").MustFloat32()
+-	})
+-
+-}
+-
+-func TestFloat32Slice(t *testing.T) {
+-
+-	val := float32(1)
+-	m := map[string]interface{}{"value": []float32{val}, "nothing": nil}
+-	assert.Equal(t, val, New(m).Get("value").Float32Slice()[0])
+-	assert.Equal(t, val, New(m).Get("value").MustFloat32Slice()[0])
+-	assert.Equal(t, []float32(nil), New(m).Get("nothing").Float32Slice())
+-	assert.Equal(t, val, New(m).Get("nothing").Float32Slice([]float32{float32(1)})[0])
+-
+-	assert.Panics(t, func() {
+-		New(m).Get("nothing").MustFloat32Slice()
+-	})
+-
+-}
+-
+-func TestIsFloat32(t *testing.T) {
+-
+-	var v *Value
+-
+-	v = &Value{data: float32(1)}
+-	assert.True(t, v.IsFloat32())
+-
+-	v = &Value{data: []float32{float32(1)}}
+-	assert.True(t, v.IsFloat32Slice())
+-
+-}
+-
+-func TestEachFloat32(t *testing.T) {
+-
+-	v := &Value{data: []float32{float32(1), float32(1), float32(1), float32(1), float32(1)}}
+-	count := 0
+-	replacedVals := make([]float32, 0)
+-	assert.Equal(t, v, v.EachFloat32(func(i int, val float32) bool {
+-
+-		count++
+-		replacedVals = append(replacedVals, val)
+-
+-		// abort early
+-		if i == 2 {
+-			return false
+-		}
+-
+-		return true
+-
+-	}))
+-
+-	assert.Equal(t, count, 3)
+-	assert.Equal(t, replacedVals[0], v.MustFloat32Slice()[0])
+-	assert.Equal(t, replacedVals[1], v.MustFloat32Slice()[1])
+-	assert.Equal(t, replacedVals[2], v.MustFloat32Slice()[2])
+-
+-}
+-
+-func TestWhereFloat32(t *testing.T) {
+-
+-	v := &Value{data: []float32{float32(1), float32(1), float32(1), float32(1), float32(1), float32(1)}}
+-
+-	selected := v.WhereFloat32(func(i int, val float32) bool {
+-		return i%2 == 0
+-	}).MustFloat32Slice()
+-
+-	assert.Equal(t, 3, len(selected))
+-
+-}
+-
+-func TestGroupFloat32(t *testing.T) {
+-
+-	v := &Value{data: []float32{float32(1), float32(1), float32(1), float32(1), float32(1), float32(1)}}
+-
+-	grouped := v.GroupFloat32(func(i int, val float32) string {
+-		return fmt.Sprintf("%v", i%2 == 0)
+-	}).data.(map[string][]float32)
+-
+-	assert.Equal(t, 2, len(grouped))
+-	assert.Equal(t, 3, len(grouped["true"]))
+-	assert.Equal(t, 3, len(grouped["false"]))
+-
+-}
+-
+-func TestReplaceFloat32(t *testing.T) {
+-
+-	v := &Value{data: []float32{float32(1), float32(1), float32(1), float32(1), float32(1), float32(1)}}
+-
+-	rawArr := v.MustFloat32Slice()
+-
+-	replaced := v.ReplaceFloat32(func(index int, val float32) float32 {
+-		if index < len(rawArr)-1 {
+-			return rawArr[index+1]
+-		}
+-		return rawArr[0]
+-	})
+-
+-	replacedArr := replaced.MustFloat32Slice()
+-	if assert.Equal(t, 6, len(replacedArr)) {
+-		assert.Equal(t, replacedArr[0], rawArr[1])
+-		assert.Equal(t, replacedArr[1], rawArr[2])
+-		assert.Equal(t, replacedArr[2], rawArr[3])
+-		assert.Equal(t, replacedArr[3], rawArr[4])
+-		assert.Equal(t, replacedArr[4], rawArr[5])
+-		assert.Equal(t, replacedArr[5], rawArr[0])
+-	}
+-
+-}
+-
+-func TestCollectFloat32(t *testing.T) {
+-
+-	v := &Value{data: []float32{float32(1), float32(1), float32(1), float32(1), float32(1), float32(1)}}
+-
+-	collected := v.CollectFloat32(func(index int, val float32) interface{} {
+-		return index
+-	})
+-
+-	collectedArr := collected.MustInterSlice()
+-	if assert.Equal(t, 6, len(collectedArr)) {
+-		assert.Equal(t, collectedArr[0], 0)
+-		assert.Equal(t, collectedArr[1], 1)
+-		assert.Equal(t, collectedArr[2], 2)
+-		assert.Equal(t, collectedArr[3], 3)
+-		assert.Equal(t, collectedArr[4], 4)
+-		assert.Equal(t, collectedArr[5], 5)
+-	}
+-
+-}
+-
+-// ************************************************************
+-// TESTS
+-// ************************************************************
+-
+-func TestFloat64(t *testing.T) {
+-
+-	val := float64(1)
+-	m := map[string]interface{}{"value": val, "nothing": nil}
+-	assert.Equal(t, val, New(m).Get("value").Float64())
+-	assert.Equal(t, val, New(m).Get("value").MustFloat64())
+-	assert.Equal(t, float64(0), New(m).Get("nothing").Float64())
+-	assert.Equal(t, val, New(m).Get("nothing").Float64(1))
+-
+-	assert.Panics(t, func() {
+-		New(m).Get("age").MustFloat64()
+-	})
+-
+-}
+-
+-func TestFloat64Slice(t *testing.T) {
+-
+-	val := float64(1)
+-	m := map[string]interface{}{"value": []float64{val}, "nothing": nil}
+-	assert.Equal(t, val, New(m).Get("value").Float64Slice()[0])
+-	assert.Equal(t, val, New(m).Get("value").MustFloat64Slice()[0])
+-	assert.Equal(t, []float64(nil), New(m).Get("nothing").Float64Slice())
+-	assert.Equal(t, val, New(m).Get("nothing").Float64Slice([]float64{float64(1)})[0])
+-
+-	assert.Panics(t, func() {
+-		New(m).Get("nothing").MustFloat64Slice()
+-	})
+-
+-}
+-
+-func TestIsFloat64(t *testing.T) {
+-
+-	var v *Value
+-
+-	v = &Value{data: float64(1)}
+-	assert.True(t, v.IsFloat64())
+-
+-	v = &Value{data: []float64{float64(1)}}
+-	assert.True(t, v.IsFloat64Slice())
+-
+-}
+-
+-func TestEachFloat64(t *testing.T) {
+-
+-	v := &Value{data: []float64{float64(1), float64(1), float64(1), float64(1), float64(1)}}
+-	count := 0
+-	replacedVals := make([]float64, 0)
+-	assert.Equal(t, v, v.EachFloat64(func(i int, val float64) bool {
+-
+-		count++
+-		replacedVals = append(replacedVals, val)
+-
+-		// abort early
+-		if i == 2 {
+-			return false
+-		}
+-
+-		return true
+-
+-	}))
+-
+-	assert.Equal(t, count, 3)
+-	assert.Equal(t, replacedVals[0], v.MustFloat64Slice()[0])
+-	assert.Equal(t, replacedVals[1], v.MustFloat64Slice()[1])
+-	assert.Equal(t, replacedVals[2], v.MustFloat64Slice()[2])
+-
+-}
+-
+-func TestWhereFloat64(t *testing.T) {
+-
+-	v := &Value{data: []float64{float64(1), float64(1), float64(1), float64(1), float64(1), float64(1)}}
+-
+-	selected := v.WhereFloat64(func(i int, val float64) bool {
+-		return i%2 == 0
+-	}).MustFloat64Slice()
+-
+-	assert.Equal(t, 3, len(selected))
+-
+-}
+-
+-func TestGroupFloat64(t *testing.T) {
+-
+-	v := &Value{data: []float64{float64(1), float64(1), float64(1), float64(1), float64(1), float64(1)}}
+-
+-	grouped := v.GroupFloat64(func(i int, val float64) string {
+-		return fmt.Sprintf("%v", i%2 == 0)
+-	}).data.(map[string][]float64)
+-
+-	assert.Equal(t, 2, len(grouped))
+-	assert.Equal(t, 3, len(grouped["true"]))
+-	assert.Equal(t, 3, len(grouped["false"]))
+-
+-}
+-
+-func TestReplaceFloat64(t *testing.T) {
+-
+-	v := &Value{data: []float64{float64(1), float64(1), float64(1), float64(1), float64(1), float64(1)}}
+-
+-	rawArr := v.MustFloat64Slice()
+-
+-	replaced := v.ReplaceFloat64(func(index int, val float64) float64 {
+-		if index < len(rawArr)-1 {
+-			return rawArr[index+1]
+-		}
+-		return rawArr[0]
+-	})
+-
+-	replacedArr := replaced.MustFloat64Slice()
+-	if assert.Equal(t, 6, len(replacedArr)) {
+-		assert.Equal(t, replacedArr[0], rawArr[1])
+-		assert.Equal(t, replacedArr[1], rawArr[2])
+-		assert.Equal(t, replacedArr[2], rawArr[3])
+-		assert.Equal(t, replacedArr[3], rawArr[4])
+-		assert.Equal(t, replacedArr[4], rawArr[5])
+-		assert.Equal(t, replacedArr[5], rawArr[0])
+-	}
+-
+-}
+-
+-func TestCollectFloat64(t *testing.T) {
+-
+-	v := &Value{data: []float64{float64(1), float64(1), float64(1), float64(1), float64(1), float64(1)}}
+-
+-	collected := v.CollectFloat64(func(index int, val float64) interface{} {
+-		return index
+-	})
+-
+-	collectedArr := collected.MustInterSlice()
+-	if assert.Equal(t, 6, len(collectedArr)) {
+-		assert.Equal(t, collectedArr[0], 0)
+-		assert.Equal(t, collectedArr[1], 1)
+-		assert.Equal(t, collectedArr[2], 2)
+-		assert.Equal(t, collectedArr[3], 3)
+-		assert.Equal(t, collectedArr[4], 4)
+-		assert.Equal(t, collectedArr[5], 5)
+-	}
+-
+-}
+-
+-// ************************************************************
+-// TESTS
+-// ************************************************************
+-
+-func TestComplex64(t *testing.T) {
+-
+-	val := complex64(1)
+-	m := map[string]interface{}{"value": val, "nothing": nil}
+-	assert.Equal(t, val, New(m).Get("value").Complex64())
+-	assert.Equal(t, val, New(m).Get("value").MustComplex64())
+-	assert.Equal(t, complex64(0), New(m).Get("nothing").Complex64())
+-	assert.Equal(t, val, New(m).Get("nothing").Complex64(1))
+-
+-	assert.Panics(t, func() {
+-		New(m).Get("age").MustComplex64()
+-	})
+-
+-}
+-
+-func TestComplex64Slice(t *testing.T) {
+-
+-	val := complex64(1)
+-	m := map[string]interface{}{"value": []complex64{val}, "nothing": nil}
+-	assert.Equal(t, val, New(m).Get("value").Complex64Slice()[0])
+-	assert.Equal(t, val, New(m).Get("value").MustComplex64Slice()[0])
+-	assert.Equal(t, []complex64(nil), New(m).Get("nothing").Complex64Slice())
+-	assert.Equal(t, val, New(m).Get("nothing").Complex64Slice([]complex64{complex64(1)})[0])
+-
+-	assert.Panics(t, func() {
+-		New(m).Get("nothing").MustComplex64Slice()
+-	})
+-
+-}
+-
+-func TestIsComplex64(t *testing.T) {
+-
+-	var v *Value
+-
+-	v = &Value{data: complex64(1)}
+-	assert.True(t, v.IsComplex64())
+-
+-	v = &Value{data: []complex64{complex64(1)}}
+-	assert.True(t, v.IsComplex64Slice())
+-
+-}
+-
+-func TestEachComplex64(t *testing.T) {
+-
+-	v := &Value{data: []complex64{complex64(1), complex64(1), complex64(1), complex64(1), complex64(1)}}
+-	count := 0
+-	replacedVals := make([]complex64, 0)
+-	assert.Equal(t, v, v.EachComplex64(func(i int, val complex64) bool {
+-
+-		count++
+-		replacedVals = append(replacedVals, val)
+-
+-		// abort early
+-		if i == 2 {
+-			return false
+-		}
+-
+-		return true
+-
+-	}))
+-
+-	assert.Equal(t, count, 3)
+-	assert.Equal(t, replacedVals[0], v.MustComplex64Slice()[0])
+-	assert.Equal(t, replacedVals[1], v.MustComplex64Slice()[1])
+-	assert.Equal(t, replacedVals[2], v.MustComplex64Slice()[2])
+-
+-}
+-
+-func TestWhereComplex64(t *testing.T) {
+-
+-	v := &Value{data: []complex64{complex64(1), complex64(1), complex64(1), complex64(1), complex64(1), complex64(1)}}
+-
+-	selected := v.WhereComplex64(func(i int, val complex64) bool {
+-		return i%2 == 0
+-	}).MustComplex64Slice()
+-
+-	assert.Equal(t, 3, len(selected))
+-
+-}
+-
+-func TestGroupComplex64(t *testing.T) {
+-
+-	v := &Value{data: []complex64{complex64(1), complex64(1), complex64(1), complex64(1), complex64(1), complex64(1)}}
+-
+-	grouped := v.GroupComplex64(func(i int, val complex64) string {
+-		return fmt.Sprintf("%v", i%2 == 0)
+-	}).data.(map[string][]complex64)
+-
+-	assert.Equal(t, 2, len(grouped))
+-	assert.Equal(t, 3, len(grouped["true"]))
+-	assert.Equal(t, 3, len(grouped["false"]))
+-
+-}
+-
+-func TestReplaceComplex64(t *testing.T) {
+-
+-	v := &Value{data: []complex64{complex64(1), complex64(1), complex64(1), complex64(1), complex64(1), complex64(1)}}
+-
+-	rawArr := v.MustComplex64Slice()
+-
+-	replaced := v.ReplaceComplex64(func(index int, val complex64) complex64 {
+-		if index < len(rawArr)-1 {
+-			return rawArr[index+1]
+-		}
+-		return rawArr[0]
+-	})
+-
+-	replacedArr := replaced.MustComplex64Slice()
+-	if assert.Equal(t, 6, len(replacedArr)) {
+-		assert.Equal(t, replacedArr[0], rawArr[1])
+-		assert.Equal(t, replacedArr[1], rawArr[2])
+-		assert.Equal(t, replacedArr[2], rawArr[3])
+-		assert.Equal(t, replacedArr[3], rawArr[4])
+-		assert.Equal(t, replacedArr[4], rawArr[5])
+-		assert.Equal(t, replacedArr[5], rawArr[0])
+-	}
+-
+-}
+-
+-func TestCollectComplex64(t *testing.T) {
+-
+-	v := &Value{data: []complex64{complex64(1), complex64(1), complex64(1), complex64(1), complex64(1), complex64(1)}}
+-
+-	collected := v.CollectComplex64(func(index int, val complex64) interface{} {
+-		return index
+-	})
+-
+-	collectedArr := collected.MustInterSlice()
+-	if assert.Equal(t, 6, len(collectedArr)) {
+-		assert.Equal(t, collectedArr[0], 0)
+-		assert.Equal(t, collectedArr[1], 1)
+-		assert.Equal(t, collectedArr[2], 2)
+-		assert.Equal(t, collectedArr[3], 3)
+-		assert.Equal(t, collectedArr[4], 4)
+-		assert.Equal(t, collectedArr[5], 5)
+-	}
+-
+-}
+-
+-// ************************************************************
+-// TESTS
+-// ************************************************************
+-
+-func TestComplex128(t *testing.T) {
+-
+-	val := complex128(1)
+-	m := map[string]interface{}{"value": val, "nothing": nil}
+-	assert.Equal(t, val, New(m).Get("value").Complex128())
+-	assert.Equal(t, val, New(m).Get("value").MustComplex128())
+-	assert.Equal(t, complex128(0), New(m).Get("nothing").Complex128())
+-	assert.Equal(t, val, New(m).Get("nothing").Complex128(1))
+-
+-	assert.Panics(t, func() {
+-		New(m).Get("age").MustComplex128()
+-	})
+-
+-}
+-
+-func TestComplex128Slice(t *testing.T) {
+-
+-	val := complex128(1)
+-	m := map[string]interface{}{"value": []complex128{val}, "nothing": nil}
+-	assert.Equal(t, val, New(m).Get("value").Complex128Slice()[0])
+-	assert.Equal(t, val, New(m).Get("value").MustComplex128Slice()[0])
+-	assert.Equal(t, []complex128(nil), New(m).Get("nothing").Complex128Slice())
+-	assert.Equal(t, val, New(m).Get("nothing").Complex128Slice([]complex128{complex128(1)})[0])
+-
+-	assert.Panics(t, func() {
+-		New(m).Get("nothing").MustComplex128Slice()
+-	})
+-
+-}
+-
+-func TestIsComplex128(t *testing.T) {
+-
+-	var v *Value
+-
+-	v = &Value{data: complex128(1)}
+-	assert.True(t, v.IsComplex128())
+-
+-	v = &Value{data: []complex128{complex128(1)}}
+-	assert.True(t, v.IsComplex128Slice())
+-
+-}
+-
+-func TestEachComplex128(t *testing.T) {
+-
+-	v := &Value{data: []complex128{complex128(1), complex128(1), complex128(1), complex128(1), complex128(1)}}
+-	count := 0
+-	replacedVals := make([]complex128, 0)
+-	assert.Equal(t, v, v.EachComplex128(func(i int, val complex128) bool {
+-
+-		count++
+-		replacedVals = append(replacedVals, val)
+-
+-		// abort early
+-		if i == 2 {
+-			return false
+-		}
+-
+-		return true
+-
+-	}))
+-
+-	assert.Equal(t, count, 3)
+-	assert.Equal(t, replacedVals[0], v.MustComplex128Slice()[0])
+-	assert.Equal(t, replacedVals[1], v.MustComplex128Slice()[1])
+-	assert.Equal(t, replacedVals[2], v.MustComplex128Slice()[2])
+-
+-}
+-
+-func TestWhereComplex128(t *testing.T) {
+-
+-	v := &Value{data: []complex128{complex128(1), complex128(1), complex128(1), complex128(1), complex128(1), complex128(1)}}
+-
+-	selected := v.WhereComplex128(func(i int, val complex128) bool {
+-		return i%2 == 0
+-	}).MustComplex128Slice()
+-
+-	assert.Equal(t, 3, len(selected))
+-
+-}
+-
+-func TestGroupComplex128(t *testing.T) {
+-
+-	v := &Value{data: []complex128{complex128(1), complex128(1), complex128(1), complex128(1), complex128(1), complex128(1)}}
+-
+-	grouped := v.GroupComplex128(func(i int, val complex128) string {
+-		return fmt.Sprintf("%v", i%2 == 0)
+-	}).data.(map[string][]complex128)
+-
+-	assert.Equal(t, 2, len(grouped))
+-	assert.Equal(t, 3, len(grouped["true"]))
+-	assert.Equal(t, 3, len(grouped["false"]))
+-
+-}
+-
+-func TestReplaceComplex128(t *testing.T) {
+-
+-	v := &Value{data: []complex128{complex128(1), complex128(1), complex128(1), complex128(1), complex128(1), complex128(1)}}
+-
+-	rawArr := v.MustComplex128Slice()
+-
+-	replaced := v.ReplaceComplex128(func(index int, val complex128) complex128 {
+-		if index < len(rawArr)-1 {
+-			return rawArr[index+1]
+-		}
+-		return rawArr[0]
+-	})
+-
+-	replacedArr := replaced.MustComplex128Slice()
+-	if assert.Equal(t, 6, len(replacedArr)) {
+-		assert.Equal(t, replacedArr[0], rawArr[1])
+-		assert.Equal(t, replacedArr[1], rawArr[2])
+-		assert.Equal(t, replacedArr[2], rawArr[3])
+-		assert.Equal(t, replacedArr[3], rawArr[4])
+-		assert.Equal(t, replacedArr[4], rawArr[5])
+-		assert.Equal(t, replacedArr[5], rawArr[0])
+-	}
+-
+-}
+-
+-func TestCollectComplex128(t *testing.T) {
+-
+-	v := &Value{data: []complex128{complex128(1), complex128(1), complex128(1), complex128(1), complex128(1), complex128(1)}}
+-
+-	collected := v.CollectComplex128(func(index int, val complex128) interface{} {
+-		return index
+-	})
+-
+-	collectedArr := collected.MustInterSlice()
+-	if assert.Equal(t, 6, len(collectedArr)) {
+-		assert.Equal(t, collectedArr[0], 0)
+-		assert.Equal(t, collectedArr[1], 1)
+-		assert.Equal(t, collectedArr[2], 2)
+-		assert.Equal(t, collectedArr[3], 3)
+-		assert.Equal(t, collectedArr[4], 4)
+-		assert.Equal(t, collectedArr[5], 5)
+-	}
+-
+-}
+diff --git a/Godeps/_workspace/src/github.com/stretchr/objx/value.go b/Godeps/_workspace/src/github.com/stretchr/objx/value.go
+deleted file mode 100644
+index 7aaef06..0000000
+--- a/Godeps/_workspace/src/github.com/stretchr/objx/value.go
++++ /dev/null
+@@ -1,13 +0,0 @@
+-package objx
+-
+-// Value provides methods for extracting interface{} data in various
+-// types.
+-type Value struct {
+-	// data contains the raw data being managed by this Value
+-	data interface{}
+-}
+-
+-// Data returns the raw data contained by this Value
+-func (v *Value) Data() interface{} {
+-	return v.data
+-}
+diff --git a/Godeps/_workspace/src/github.com/stretchr/objx/value_test.go b/Godeps/_workspace/src/github.com/stretchr/objx/value_test.go
+deleted file mode 100644
+index 0bc65d9..0000000
+--- a/Godeps/_workspace/src/github.com/stretchr/objx/value_test.go
++++ /dev/null
+@@ -1 +0,0 @@
+-package objx
+diff --git a/Godeps/_workspace/src/github.com/stretchr/testify/assert/assertions.go b/Godeps/_workspace/src/github.com/stretchr/testify/assert/assertions.go
+deleted file mode 100644
+index c784e1b..0000000
+--- a/Godeps/_workspace/src/github.com/stretchr/testify/assert/assertions.go
++++ /dev/null
+@@ -1,490 +0,0 @@
+-package assert
+-
+-import (
+-	"fmt"
+-	"reflect"
+-	"runtime"
+-	"strings"
+-	"time"
+-)
+-
+-// TestingT is an interface wrapper around *testing.T
+-type TestingT interface {
+-	Errorf(format string, args ...interface{})
+-}
+-
+-// Comparison is a custom function that returns true on success and false on failure
+-type Comparison func() (success bool)
+-
+-/*
+-	Helper functions
+-*/
+-
+-// ObjectsAreEqual determines if two objects are considered equal.
+-//
+-// This function does no assertion of any kind.
+-func ObjectsAreEqual(expected, actual interface{}) bool {
+-
+-	if reflect.DeepEqual(expected, actual) {
+-		return true
+-	}
+-
+-	if reflect.ValueOf(expected) == reflect.ValueOf(actual) {
+-		return true
+-	}
+-
+-	// Last ditch effort
+-	if fmt.Sprintf("%#v", expected) == fmt.Sprintf("%#v", actual) {
+-		return true
+-	}
+-
+-	return false
+-
+-}
+-
+-/* CallerInfo is necessary because the assert functions use the testing object
+-internally, causing it to print the file:line of the assert method, rather than where
+-the problem actually occurred in calling code.*/
+-
+-// CallerInfo returns a string containing the file and line number of the assert call
+-// that failed.
+-func CallerInfo() string {
+-
+-	file := ""
+-	line := 0
+-	ok := false
+-
+-	for i := 0; ; i++ {
+-		_, file, line, ok = runtime.Caller(i)
+-		if !ok {
+-			return ""
+-		}
+-		parts := strings.Split(file, "/")
+-		dir := parts[len(parts)-2]
+-		file = parts[len(parts)-1]
+-		if (dir != "assert" && dir != "mock") || file == "mock_test.go" {
+-			break
+-		}
+-	}
+-
+-	return fmt.Sprintf("%s:%d", file, line)
+-}
+-
+-// getWhitespaceString returns a string that is long enough to overwrite the default
+-// output from the go testing framework.
+-func getWhitespaceString() string {
+-
+-	_, file, line, ok := runtime.Caller(1)
+-	if !ok {
+-		return ""
+-	}
+-	parts := strings.Split(file, "/")
+-	file = parts[len(parts)-1]
+-
+-	return strings.Repeat(" ", len(fmt.Sprintf("%s:%d:      ", file, line)))
+-
+-}
+-
+-func messageFromMsgAndArgs(msgAndArgs ...interface{}) string {
+-	if len(msgAndArgs) == 0 || msgAndArgs == nil {
+-		return ""
+-	}
+-	if len(msgAndArgs) == 1 {
+-		return msgAndArgs[0].(string)
+-	}
+-	if len(msgAndArgs) > 1 {
+-		return fmt.Sprintf(msgAndArgs[0].(string), msgAndArgs[1:]...)
+-	}
+-	return ""
+-}
+-
+-// Fail reports a failure through
+-func Fail(t TestingT, failureMessage string, msgAndArgs ...interface{}) bool {
+-
+-	message := messageFromMsgAndArgs(msgAndArgs...)
+-
+-	if len(message) > 0 {
+-		t.Errorf("\r%s\r\tLocation:\t%s\n\r\tError:\t\t%s\n\r\tMessages:\t%s\n\r", getWhitespaceString(), CallerInfo(), failureMessage, message)
+-	} else {
+-		t.Errorf("\r%s\r\tLocation:\t%s\n\r\tError:\t\t%s\n\r", getWhitespaceString(), CallerInfo(), failureMessage)
+-	}
+-
+-	return false
+-}
+-
+-// Implements asserts that the object implements the specified interface.
+-//
+-//    assert.Implements(t, (*MyInterface)(nil), new(MyObject), "MyObject")
+-func Implements(t TestingT, interfaceObject interface{}, object interface{}, msgAndArgs ...interface{}) bool {
+-
+-	interfaceType := reflect.TypeOf(interfaceObject).Elem()
+-
+-	if !reflect.TypeOf(object).Implements(interfaceType) {
+-		return Fail(t, fmt.Sprintf("Object must implement %v", interfaceType), msgAndArgs...)
+-	}
+-
+-	return true
+-
+-}
+-
+-// IsType asserts that the specified objects are of the same type.
+-func IsType(t TestingT, expectedType interface{}, object interface{}, msgAndArgs ...interface{}) bool {
+-
+-	if !ObjectsAreEqual(reflect.TypeOf(object), reflect.TypeOf(expectedType)) {
+-		return Fail(t, fmt.Sprintf("Object expected to be of type %v, but was %v", reflect.TypeOf(expectedType), reflect.TypeOf(object)), msgAndArgs...)
+-	}
+-
+-	return true
+-}
+-
+-// Equal asserts that two objects are equal.
+-//
+-//    assert.Equal(t, 123, 123, "123 and 123 should be equal")
+-//
+-// Returns whether the assertion was successful (true) or not (false).
+-func Equal(t TestingT, expected, actual interface{}, msgAndArgs ...interface{}) bool {
+-
+-	if !ObjectsAreEqual(expected, actual) {
+-		return Fail(t, fmt.Sprintf("Not equal: %#v != %#v", expected, actual), msgAndArgs...)
+-	}
+-
+-	return true
+-
+-}
+-
+-// Exactly asserts that two objects are equal in value and type.
+-//
+-//    assert.Exactly(t, int32(123), int64(123), "123 and 123 should NOT be equal")
+-//
+-// Returns whether the assertion was successful (true) or not (false).
+-func Exactly(t TestingT, expected, actual interface{}, msgAndArgs ...interface{}) bool {
+-
+-	aType := reflect.TypeOf(expected)
+-	bType := reflect.TypeOf(actual)
+-
+-	if aType != bType {
+-		return Fail(t, "Types expected to match exactly", "%v != %v", aType, bType)
+-	}
+-
+-	return Equal(t, expected, actual, msgAndArgs...)
+-
+-}
+-
+-// NotNil asserts that the specified object is not nil.
+-//
+-//    assert.NotNil(t, err, "err should be something")
+-//
+-// Returns whether the assertion was successful (true) or not (false).
+-func NotNil(t TestingT, object interface{}, msgAndArgs ...interface{}) bool {
+-
+-	var success bool = true
+-
+-	if object == nil {
+-		success = false
+-	} else {
+-		value := reflect.ValueOf(object)
+-		kind := value.Kind()
+-		if kind >= reflect.Chan && kind <= reflect.Slice && value.IsNil() {
+-			success = false
+-		}
+-	}
+-
+-	if !success {
+-		Fail(t, "Expected not to be nil.", msgAndArgs...)
+-	}
+-
+-	return success
+-}
+-
+-// Nil asserts that the specified object is nil.
+-//
+-//    assert.Nil(t, err, "err should be nothing")
+-//
+-// Returns whether the assertion was successful (true) or not (false).
+-func Nil(t TestingT, object interface{}, msgAndArgs ...interface{}) bool {
+-
+-	if object == nil {
+-		return true
+-	} else {
+-		value := reflect.ValueOf(object)
+-		kind := value.Kind()
+-		if kind >= reflect.Chan && kind <= reflect.Slice && value.IsNil() {
+-			return true
+-		}
+-	}
+-
+-	return Fail(t, fmt.Sprintf("Expected nil, but got: %#v", object), msgAndArgs...)
+-}
+-
+-// isEmpty gets whether the specified object is considered empty or not.
+-func isEmpty(object interface{}) bool {
+-
+-	if object == nil {
+-		return true
+-	} else if object == "" {
+-		return true
+-	} else if object == 0 {
+-		return true
+-	} else if object == false {
+-		return true
+-	}
+-
+-	objValue := reflect.ValueOf(object)
+-	switch objValue.Kind() {
+-	case reflect.Map:
+-		fallthrough
+-	case reflect.Slice:
+-		{
+-			return (objValue.Len() == 0)
+-		}
+-	}
+-
+-	return false
+-
+-}
+-
+-// Empty asserts that the specified object is empty.  I.e. nil, "", false, 0 or a
+-// slice with len == 0.
+-//
+-// assert.Empty(t, obj)
+-//
+-// Returns whether the assertion was successful (true) or not (false).
+-func Empty(t TestingT, object interface{}, msgAndArgs ...interface{}) bool {
+-
+-	pass := isEmpty(object)
+-	if !pass {
+-		Fail(t, fmt.Sprintf("Should be empty, but was %v", object), msgAndArgs...)
+-	}
+-
+-	return pass
+-
+-}
+-
+-// NotEmpty asserts that the specified object is NOT empty.  I.e. not nil, "", false, 0 or a
+-// slice with len == 0.
+-//
+-// if assert.NotEmpty(t, obj) {
+-//   assert.Equal(t, "two", obj[1])
+-// }
+-//
+-// Returns whether the assertion was successful (true) or not (false).
+-func NotEmpty(t TestingT, object interface{}, msgAndArgs ...interface{}) bool {
+-
+-	pass := !isEmpty(object)
+-	if !pass {
+-		Fail(t, fmt.Sprintf("Should NOT be empty, but was %v", object), msgAndArgs...)
+-	}
+-
+-	return pass
+-
+-}
+-
+-// True asserts that the specified value is true.
+-//
+-//    assert.True(t, myBool, "myBool should be true")
+-//
+-// Returns whether the assertion was successful (true) or not (false).
+-func True(t TestingT, value bool, msgAndArgs ...interface{}) bool {
+-
+-	if value != true {
+-		return Fail(t, "Should be true", msgAndArgs...)
+-	}
+-
+-	return true
+-
+-}
+-
+-// False asserts that the specified value is false.
+-//
+-//    assert.False(t, myBool, "myBool should be false")
+-//
+-// Returns whether the assertion was successful (true) or not (false).
+-func False(t TestingT, value bool, msgAndArgs ...interface{}) bool {
+-
+-	if value != false {
+-		return Fail(t, "Should be false", msgAndArgs...)
+-	}
+-
+-	return true
+-
+-}
+-
+-// NotEqual asserts that the specified values are NOT equal.
+-//
+-//    assert.NotEqual(t, obj1, obj2, "two objects shouldn't be equal")
+-//
+-// Returns whether the assertion was successful (true) or not (false).
+-func NotEqual(t TestingT, expected, actual interface{}, msgAndArgs ...interface{}) bool {
+-
+-	if ObjectsAreEqual(expected, actual) {
+-		return Fail(t, "Should not be equal", msgAndArgs...)
+-	}
+-
+-	return true
+-
+-}
+-
+-// Contains asserts that the specified string contains the specified substring.
+-//
+-//    assert.Contains(t, "Hello World", "World", "But 'Hello World' does contain 'World'")
+-//
+-// Returns whether the assertion was successful (true) or not (false).
+-func Contains(t TestingT, s, contains string, msgAndArgs ...interface{}) bool {
+-
+-	if !strings.Contains(s, contains) {
+-		return Fail(t, fmt.Sprintf("\"%s\" does not contain \"%s\"", s, contains), msgAndArgs...)
+-	}
+-
+-	return true
+-
+-}
+-
+-// NotContains asserts that the specified string does NOT contain the specified substring.
+-//
+-//    assert.NotContains(t, "Hello World", "Earth", "But 'Hello World' does NOT contain 'Earth'")
+-//
+-// Returns whether the assertion was successful (true) or not (false).
+-func NotContains(t TestingT, s, contains string, msgAndArgs ...interface{}) bool {
+-
+-	if strings.Contains(s, contains) {
+-		return Fail(t, fmt.Sprintf("\"%s\" should not contain \"%s\"", s, contains), msgAndArgs...)
+-	}
+-
+-	return true
+-
+-}
+-
+-// Uses a Comparison to assert a complex condition.
+-func Condition(t TestingT, comp Comparison, msgAndArgs ...interface{}) bool {
+-	result := comp()
+-	if !result {
+-		Fail(t, "Condition failed!", msgAndArgs...)
+-	}
+-	return result
+-}
+-
+-// PanicTestFunc defines a func that should be passed to the assert.Panics and assert.NotPanics
+-// methods, and represents a simple func that takes no arguments, and returns nothing.
+-type PanicTestFunc func()
+-
+-// didPanic returns true if the function passed to it panics. Otherwise, it returns false.
+-func didPanic(f PanicTestFunc) (bool, interface{}) {
+-
+-	var didPanic bool = false
+-	var message interface{}
+-	func() {
+-
+-		defer func() {
+-			if message = recover(); message != nil {
+-				didPanic = true
+-			}
+-		}()
+-
+-		// call the target function
+-		f()
+-
+-	}()
+-
+-	return didPanic, message
+-
+-}
+-
+-// Panics asserts that the code inside the specified PanicTestFunc panics.
+-//
+-//   assert.Panics(t, func(){
+-//     GoCrazy()
+-//   }, "Calling GoCrazy() should panic")
+-//
+-// Returns whether the assertion was successful (true) or not (false).
+-func Panics(t TestingT, f PanicTestFunc, msgAndArgs ...interface{}) bool {
+-
+-	if funcDidPanic, panicValue := didPanic(f); !funcDidPanic {
+-		return Fail(t, fmt.Sprintf("func %#v should panic\n\r\tPanic value:\t%v", f, panicValue), msgAndArgs...)
+-	}
+-
+-	return true
+-}
+-
+-// NotPanics asserts that the code inside the specified PanicTestFunc does NOT panic.
+-//
+-//   assert.NotPanics(t, func(){
+-//     RemainCalm()
+-//   }, "Calling RemainCalm() should NOT panic")
+-//
+-// Returns whether the assertion was successful (true) or not (false).
+-func NotPanics(t TestingT, f PanicTestFunc, msgAndArgs ...interface{}) bool {
+-
+-	if funcDidPanic, panicValue := didPanic(f); funcDidPanic {
+-		return Fail(t, fmt.Sprintf("func %#v should not panic\n\r\tPanic value:\t%v", f, panicValue), msgAndArgs...)
+-	}
+-
+-	return true
+-}
+-
+-// WithinDuration asserts that the two times are within duration delta of each other.
+-//
+-//   assert.WithinDuration(t, time.Now(), time.Now(), 10*time.Second, "The difference should not be more than 10s")
+-//
+-// Returns whether the assertion was successful (true) or not (false).
+-func WithinDuration(t TestingT, expected, actual time.Time, delta time.Duration, msgAndArgs ...interface{}) bool {
+-
+-	dt := expected.Sub(actual)
+-	if dt < -delta || dt > delta {
+-		return Fail(t, fmt.Sprintf("Max difference between %v and %v allowed is %v, but difference was %v", expected, actual, delta, dt), msgAndArgs...)
+-	}
+-
+-	return true
+-}
+-
+-/*
+-	Errors
+-*/
+-
+-// NoError asserts that a function returned no error (i.e. `nil`).
+-//
+-//   actualObj, err := SomeFunction()
+-//   if assert.NoError(t, err) {
+-//	   assert.Equal(t, actualObj, expectedObj)
+-//   }
+-//
+-// Returns whether the assertion was successful (true) or not (false).
+-func NoError(t TestingT, err error, msgAndArgs ...interface{}) bool {
+-
+-	message := messageFromMsgAndArgs(msgAndArgs...)
+-	return Nil(t, err, "No error is expected but got %v %s", err, message)
+-
+-}
+-
+-// Error asserts that a function returned an error (i.e. not `nil`).
+-//
+-//   actualObj, err := SomeFunction()
+-//   if assert.Error(t, err, "An error was expected") {
+-//	   assert.Equal(t, err, expectedError)
+-//   }
+-//
+-// Returns whether the assertion was successful (true) or not (false).
+-func Error(t TestingT, err error, msgAndArgs ...interface{}) bool {
+-
+-	message := messageFromMsgAndArgs(msgAndArgs...)
+-	return NotNil(t, err, "An error is expected but got nil. %s", message)
+-
+-}
+-
+-// EqualError asserts that a function returned an error (i.e. not `nil`)
+-// and that its message equals the expected string.
+-//
+-//   actualObj, err := SomeFunction()
+-//   assert.EqualError(t, err, expectedErrorString, "An error was expected")
+-//
+-// Returns whether the assertion was successful (true) or not (false).
+-func EqualError(t TestingT, theError error, errString string, msgAndArgs ...interface{}) bool {
+-
+-	message := messageFromMsgAndArgs(msgAndArgs...)
+-	if !NotNil(t, theError, "An error is expected but got nil. %s", message) {
+-		return false
+-	}
+-	s := "An error with value \"%s\" is expected but got \"%s\". %s"
+-	return Equal(t, theError.Error(), errString,
+-		s, errString, theError.Error(), message)
+-}
+diff --git a/Godeps/_workspace/src/github.com/stretchr/testify/assert/assertions_test.go b/Godeps/_workspace/src/github.com/stretchr/testify/assert/assertions_test.go
+deleted file mode 100644
+index bf1d727..0000000
+--- a/Godeps/_workspace/src/github.com/stretchr/testify/assert/assertions_test.go
++++ /dev/null
+@@ -1,401 +0,0 @@
+-package assert
+-
+-import (
+-	"errors"
+-	"testing"
+-	"time"
+-)
+-
+-// AssertionTesterInterface defines an interface to be used for testing assertion methods
+-type AssertionTesterInterface interface {
+-	TestMethod()
+-}
+-
+-// AssertionTesterConformingObject is an object that conforms to the AssertionTesterInterface interface
+-type AssertionTesterConformingObject struct {
+-}
+-
+-func (a *AssertionTesterConformingObject) TestMethod() {
+-}
+-
+-// AssertionTesterNonConformingObject is an object that does not conform to the AssertionTesterInterface interface
+-type AssertionTesterNonConformingObject struct {
+-}
+-
+-func TestObjectsAreEqual(t *testing.T) {
+-
+-	if !ObjectsAreEqual("Hello World", "Hello World") {
+-		t.Error("objectsAreEqual should return true")
+-	}
+-	if !ObjectsAreEqual(123, 123) {
+-		t.Error("objectsAreEqual should return true")
+-	}
+-	if !ObjectsAreEqual(123.5, 123.5) {
+-		t.Error("objectsAreEqual should return true")
+-	}
+-	if !ObjectsAreEqual([]byte("Hello World"), []byte("Hello World")) {
+-		t.Error("objectsAreEqual should return true")
+-	}
+-	if !ObjectsAreEqual(nil, nil) {
+-		t.Error("objectsAreEqual should return true")
+-	}
+-
+-}
+-
+-func TestImplements(t *testing.T) {
+-
+-	mockT := new(testing.T)
+-
+-	if !Implements(mockT, (*AssertionTesterInterface)(nil), new(AssertionTesterConformingObject)) {
+-		t.Error("Implements method should return true: AssertionTesterConformingObject implements AssertionTesterInterface")
+-	}
+-	if Implements(mockT, (*AssertionTesterInterface)(nil), new(AssertionTesterNonConformingObject)) {
+-		t.Error("Implements method should return false: AssertionTesterNonConformingObject does not implement AssertionTesterInterface")
+-	}
+-
+-}
+-
+-func TestIsType(t *testing.T) {
+-
+-	mockT := new(testing.T)
+-
+-	if !IsType(mockT, new(AssertionTesterConformingObject), new(AssertionTesterConformingObject)) {
+-		t.Error("IsType should return true: AssertionTesterConformingObject is the same type as AssertionTesterConformingObject")
+-	}
+-	if IsType(mockT, new(AssertionTesterConformingObject), new(AssertionTesterNonConformingObject)) {
+-		t.Error("IsType should return false: AssertionTesterConformingObject is not the same type as AssertionTesterNonConformingObject")
+-	}
+-
+-}
+-
+-func TestEqual(t *testing.T) {
+-
+-	mockT := new(testing.T)
+-
+-	if !Equal(mockT, "Hello World", "Hello World") {
+-		t.Error("Equal should return true")
+-	}
+-	if !Equal(mockT, 123, 123) {
+-		t.Error("Equal should return true")
+-	}
+-	if !Equal(mockT, 123.5, 123.5) {
+-		t.Error("Equal should return true")
+-	}
+-	if !Equal(mockT, []byte("Hello World"), []byte("Hello World")) {
+-		t.Error("Equal should return true")
+-	}
+-	if !Equal(mockT, nil, nil) {
+-		t.Error("Equal should return true")
+-	}
+-
+-}
+-
+-func TestNotNil(t *testing.T) {
+-
+-	mockT := new(testing.T)
+-
+-	if !NotNil(mockT, new(AssertionTesterConformingObject)) {
+-		t.Error("NotNil should return true: object is not nil")
+-	}
+-	if NotNil(mockT, nil) {
+-		t.Error("NotNil should return false: object is nil")
+-	}
+-
+-}
+-
+-func TestNil(t *testing.T) {
+-
+-	mockT := new(testing.T)
+-
+-	if !Nil(mockT, nil) {
+-		t.Error("Nil should return true: object is nil")
+-	}
+-	if Nil(mockT, new(AssertionTesterConformingObject)) {
+-		t.Error("Nil should return false: object is not nil")
+-	}
+-
+-}
+-
+-func TestTrue(t *testing.T) {
+-
+-	mockT := new(testing.T)
+-
+-	if !True(mockT, true) {
+-		t.Error("True should return true")
+-	}
+-	if True(mockT, false) {
+-		t.Error("True should return false")
+-	}
+-
+-}
+-
+-func TestFalse(t *testing.T) {
+-
+-	mockT := new(testing.T)
+-
+-	if !False(mockT, false) {
+-		t.Error("False should return true")
+-	}
+-	if False(mockT, true) {
+-		t.Error("False should return false")
+-	}
+-
+-}
+-
+-func TestExactly(t *testing.T) {
+-
+-	mockT := new(testing.T)
+-
+-	a := float32(1)
+-	b := float64(1)
+-	c := float32(1)
+-	d := float32(2)
+-
+-	if Exactly(mockT, a, b) {
+-		t.Error("Exactly should return false")
+-	}
+-	if Exactly(mockT, a, d) {
+-		t.Error("Exactly should return false")
+-	}
+-	if !Exactly(mockT, a, c) {
+-		t.Error("Exactly should return true")
+-	}
+-
+-	if Exactly(mockT, nil, a) {
+-		t.Error("Exactly should return false")
+-	}
+-	if Exactly(mockT, a, nil) {
+-		t.Error("Exactly should return false")
+-	}
+-
+-}
+-
+-func TestNotEqual(t *testing.T) {
+-
+-	mockT := new(testing.T)
+-
+-	if !NotEqual(mockT, "Hello World", "Hello World!") {
+-		t.Error("NotEqual should return true")
+-	}
+-	if !NotEqual(mockT, 123, 1234) {
+-		t.Error("NotEqual should return true")
+-	}
+-	if !NotEqual(mockT, 123.5, 123.55) {
+-		t.Error("NotEqual should return true")
+-	}
+-	if !NotEqual(mockT, []byte("Hello World"), []byte("Hello World!")) {
+-		t.Error("NotEqual should return true")
+-	}
+-	if !NotEqual(mockT, nil, new(AssertionTesterConformingObject)) {
+-		t.Error("NotEqual should return true")
+-	}
+-}
+-
+-func TestContains(t *testing.T) {
+-
+-	mockT := new(testing.T)
+-
+-	if !Contains(mockT, "Hello World", "Hello") {
+-		t.Error("Contains should return true: \"Hello World\" contains \"Hello\"")
+-	}
+-	if Contains(mockT, "Hello World", "Salut") {
+-		t.Error("Contains should return false: \"Hello World\" does not contain \"Salut\"")
+-	}
+-
+-}
+-
+-func TestNotContains(t *testing.T) {
+-
+-	mockT := new(testing.T)
+-
+-	if !NotContains(mockT, "Hello World", "Hello!") {
+-		t.Error("NotContains should return true: \"Hello World\" does not contain \"Hello!\"")
+-	}
+-	if NotContains(mockT, "Hello World", "Hello") {
+-		t.Error("NotContains should return false: \"Hello World\" contains \"Hello\"")
+-	}
+-
+-}
+-
+-func TestDidPanic(t *testing.T) {
+-
+-	if funcDidPanic, _ := didPanic(func() {
+-		panic("Panic!")
+-	}); !funcDidPanic {
+-		t.Error("didPanic should return true")
+-	}
+-
+-	if funcDidPanic, _ := didPanic(func() {
+-	}); funcDidPanic {
+-		t.Error("didPanic should return false")
+-	}
+-
+-}
+-
+-func TestPanics(t *testing.T) {
+-
+-	mockT := new(testing.T)
+-
+-	if !Panics(mockT, func() {
+-		panic("Panic!")
+-	}) {
+-		t.Error("Panics should return true")
+-	}
+-
+-	if Panics(mockT, func() {
+-	}) {
+-		t.Error("Panics should return false")
+-	}
+-
+-}
+-
+-func TestNotPanics(t *testing.T) {
+-
+-	mockT := new(testing.T)
+-
+-	if !NotPanics(mockT, func() {
+-	}) {
+-		t.Error("NotPanics should return true")
+-	}
+-
+-	if NotPanics(mockT, func() {
+-		panic("Panic!")
+-	}) {
+-		t.Error("NotPanics should return false")
+-	}
+-
+-}
+-
+-func TestEqual_Funcs(t *testing.T) {
+-
+-	type f func() int
+-	var f1 f = func() int { return 1 }
+-	var f2 f = func() int { return 2 }
+-
+-	var f1_copy f = f1
+-
+-	Equal(t, f1_copy, f1, "Funcs are the same and should be considered equal")
+-	NotEqual(t, f1, f2, "f1 and f2 are different")
+-
+-}
+-
+-func TestNoError(t *testing.T) {
+-
+-	mockT := new(testing.T)
+-
+-	// start with a nil error
+-	var err error = nil
+-
+-	True(t, NoError(mockT, err), "NoError should return True for nil arg")
+-
+-	// now set an error
+-	err = errors.New("Some error")
+-
+-	False(t, NoError(mockT, err), "NoError with error should return False")
+-
+-}
+-
+-func TestError(t *testing.T) {
+-
+-	mockT := new(testing.T)
+-
+-	// start with a nil error
+-	var err error = nil
+-
+-	False(t, Error(mockT, err), "Error should return False for nil arg")
+-
+-	// now set an error
+-	err = errors.New("Some error")
+-
+-	True(t, Error(mockT, err), "Error with error should return True")
+-
+-}
+-
+-func TestEqualError(t *testing.T) {
+-	mockT := new(testing.T)
+-
+-	// start with a nil error
+-	var err error = nil
+-	False(t, EqualError(mockT, err, ""),
+-		"EqualError should return false for nil arg")
+-
+-	// now set an error
+-	err = errors.New("Some error")
+-	False(t, EqualError(mockT, err, "Not some error"),
+-		"EqualError should return false for different error string")
+-	True(t, EqualError(mockT, err, "Some error"),
+-		"EqualError should return true")
+-}
+-
+-func Test_isEmpty(t *testing.T) {
+-
+-	True(t, isEmpty(""))
+-	True(t, isEmpty(nil))
+-	True(t, isEmpty([]string{}))
+-	True(t, isEmpty(0))
+-	True(t, isEmpty(false))
+-	True(t, isEmpty(map[string]string{}))
+-
+-	False(t, isEmpty("something"))
+-	False(t, isEmpty(errors.New("something")))
+-	False(t, isEmpty([]string{"something"}))
+-	False(t, isEmpty(1))
+-	False(t, isEmpty(true))
+-	False(t, isEmpty(map[string]string{"Hello": "World"}))
+-
+-}
+-
+-func TestEmpty(t *testing.T) {
+-
+-	mockT := new(testing.T)
+-
+-	True(t, Empty(mockT, ""), "Empty string is empty")
+-	True(t, Empty(mockT, nil), "Nil is empty")
+-	True(t, Empty(mockT, []string{}), "Empty string array is empty")
+-	True(t, Empty(mockT, 0), "Zero int value is empty")
+-	True(t, Empty(mockT, false), "False value is empty")
+-
+-	False(t, Empty(mockT, "something"), "Non Empty string is not empty")
+-	False(t, Empty(mockT, errors.New("something")), "Non nil object is not empty")
+-	False(t, Empty(mockT, []string{"something"}), "Non empty string array is not empty")
+-	False(t, Empty(mockT, 1), "Non-zero int value is not empty")
+-	False(t, Empty(mockT, true), "True value is not empty")
+-
+-}
+-
+-func TestNotEmpty(t *testing.T) {
+-
+-	mockT := new(testing.T)
+-
+-	False(t, NotEmpty(mockT, ""), "Empty string is empty")
+-	False(t, NotEmpty(mockT, nil), "Nil is empty")
+-	False(t, NotEmpty(mockT, []string{}), "Empty string array is empty")
+-	False(t, NotEmpty(mockT, 0), "Zero int value is empty")
+-	False(t, NotEmpty(mockT, false), "False value is empty")
+-
+-	True(t, NotEmpty(mockT, "something"), "Non Empty string is not empty")
+-	True(t, NotEmpty(mockT, errors.New("something")), "Non nil object is not empty")
+-	True(t, NotEmpty(mockT, []string{"something"}), "Non empty string array is not empty")
+-	True(t, NotEmpty(mockT, 1), "Non-zero int value is not empty")
+-	True(t, NotEmpty(mockT, true), "True value is not empty")
+-
+-}
+-
+-func TestWithinDuration(t *testing.T) {
+-
+-	mockT := new(testing.T)
+-	a := time.Now()
+-	b := a.Add(10 * time.Second)
+-
+-	True(t, WithinDuration(mockT, a, b, 10*time.Second), "A 10s difference is within a 10s time difference")
+-	True(t, WithinDuration(mockT, b, a, 10*time.Second), "A 10s difference is within a 10s time difference")
+-
+-	False(t, WithinDuration(mockT, a, b, 9*time.Second), "A 10s difference is not within a 9s time difference")
+-	False(t, WithinDuration(mockT, b, a, 9*time.Second), "A 10s difference is not within a 9s time difference")
+-
+-	False(t, WithinDuration(mockT, a, b, -9*time.Second), "A 10s difference is not within a 9s time difference")
+-	False(t, WithinDuration(mockT, b, a, -9*time.Second), "A 10s difference is not within a 9s time difference")
+-
+-	False(t, WithinDuration(mockT, a, b, -11*time.Second), "A 10s difference is not within a 9s time difference")
+-	False(t, WithinDuration(mockT, b, a, -11*time.Second), "A 10s difference is not within a 9s time difference")
+-}
+diff --git a/Godeps/_workspace/src/github.com/stretchr/testify/assert/doc.go b/Godeps/_workspace/src/github.com/stretchr/testify/assert/doc.go
+deleted file mode 100644
+index 25f699b..0000000
+--- a/Godeps/_workspace/src/github.com/stretchr/testify/assert/doc.go
++++ /dev/null
+@@ -1,74 +0,0 @@
+-// A set of comprehensive testing tools for use with the normal Go testing system.
+-//
+-// Example Usage
+-//
+-// The following is a complete example using assert in a standard test function:
+-//    import (
+-//      "testing"
+-//      "github.com/stretchr/testify/assert"
+-//    )
+-//
+-//    func TestSomething(t *testing.T) {
+-//
+-//      var a string = "Hello"
+-//      var b string = "Hello"
+-//
+-//      assert.Equal(t, a, b, "The two words should be the same.")
+-//
+-//    }
+-//
+-// Assertions
+-//
+-// Assertions allow you to easily write test code, and are global funcs in the `assert` package.
+-// All assertion functions take, as the first argument, the `*testing.T` object provided by the
+-// testing framework. This allows the assertion funcs to write the failings and other details to
+-// the correct place.
+-//
+-// Every assertion function also takes an optional string message as the final argument,
+-// allowing custom error messages to be appended to the message the assertion method outputs.
+-//
+-// Here is an overview of the assert functions:
+-//
+-//    assert.Equal(t, expected, actual [, message [, format-args])
+-//
+-//    assert.NotEqual(t, notExpected, actual [, message [, format-args]])
+-//
+-//    assert.True(t, actualBool [, message [, format-args]])
+-//
+-//    assert.False(t, actualBool [, message [, format-args]])
+-//
+-//    assert.Nil(t, actualObject [, message [, format-args]])
+-//
+-//    assert.NotNil(t, actualObject [, message [, format-args]])
+-//
+-//    assert.Empty(t, actualObject [, message [, format-args]])
+-//
+-//    assert.NotEmpty(t, actualObject [, message [, format-args]])
+-//
+-//    assert.Error(t, errorObject [, message [, format-args]])
+-//
+-//    assert.NoError(t, errorObject [, message [, format-args]])
+-//
+-//    assert.Implements(t, (*MyInterface)(nil), new(MyObject) [,message [, format-args]])
+-//
+-//    assert.IsType(t, expectedObject, actualObject [, message [, format-args]])
+-//
+-//    assert.Contains(t, string, substring [, message [, format-args]])
+-//
+-//    assert.NotContains(t, string, substring [, message [, format-args]])
+-//
+-//    assert.Panics(t, func(){
+-//
+-//	    // call code that should panic
+-//
+-//    } [, message [, format-args]])
+-//
+-//    assert.NotPanics(t, func(){
+-//
+-//	    // call code that should not panic
+-//
+-//    } [, message [, format-args]])
+-//
+-//    assert.WithinDuration(t, timeA, timeB, deltaTime, [, message [, format-args]])
+-
+-package assert
+diff --git a/Godeps/_workspace/src/github.com/stretchr/testify/assert/errors.go b/Godeps/_workspace/src/github.com/stretchr/testify/assert/errors.go
+deleted file mode 100644
+index da004d1..0000000
+--- a/Godeps/_workspace/src/github.com/stretchr/testify/assert/errors.go
++++ /dev/null
+@@ -1,10 +0,0 @@
+-package assert
+-
+-import (
+-	"errors"
+-)
+-
+-// AnError is an error instance useful for testing.  If the code does not care
+-// about error specifics, and only needs to return the error for example, this
+-// error should be used to make the test code more readable.
+-var AnError error = errors.New("assert.AnError general error for testing.")
+diff --git a/Godeps/_workspace/src/github.com/stretchr/testify/mock/doc.go b/Godeps/_workspace/src/github.com/stretchr/testify/mock/doc.go
+deleted file mode 100644
+index 7d4e7b8..0000000
+--- a/Godeps/_workspace/src/github.com/stretchr/testify/mock/doc.go
++++ /dev/null
+@@ -1,43 +0,0 @@
+-// Provides a system by which it is possible to mock your objects and verify calls are happening as expected.
+-//
+-// Example Usage
+-//
+-// The mock package provides an object, Mock, that tracks activity on another object.  It is usually
+-// embedded into a test object as shown below:
+-//
+-//   type MyTestObject struct {
+-//     // add a Mock object instance
+-//     mock.Mock
+-//
+-//     // other fields go here as normal
+-//   }
+-//
+-// When implementing the methods of an interface, you wire your functions up
+-// to call the Mock.Called(args...) method, and return the appropriate values.
+-//
+-// For example, to mock a method that saves the name and age of a person and returns
+-// the year of their birth or an error, you might write this:
+-//
+-//     func (o *MyTestObject) SavePersonDetails(firstname, lastname string, age int) (int, error) {
+-//       args := o.Mock.Called(firstname, lastname, age)
+-//       return args.Int(0), args.Error(1)
+-//     }
+-//
+-// The Int, Error and Bool methods are examples of strongly typed getters that take the argument
+-// index position. Given this argument list:
+-//
+-//     (12, true, "Something")
+-//
+-// You could read them out strongly typed like this:
+-//
+-//     args.Int(0)
+-//     args.Bool(1)
+-//     args.String(2)
+-//
+-// For objects of your own type, use the generic Arguments.Get(index) method and make a type assertion:
+-//
+-//     return args.Get(0).(*MyObject), args.Get(1).(*AnotherObjectOfMine)
+-//
+-// This may cause a panic if the object you are getting is nil (the type assertion will fail), in those
+-// cases you should check for nil first.
+-package mock
+diff --git a/Godeps/_workspace/src/github.com/stretchr/testify/mock/mock.go b/Godeps/_workspace/src/github.com/stretchr/testify/mock/mock.go
+deleted file mode 100644
+index 4320e6f..0000000
+--- a/Godeps/_workspace/src/github.com/stretchr/testify/mock/mock.go
++++ /dev/null
+@@ -1,505 +0,0 @@
+-package mock
+-
+-import (
+-	"fmt"
+-	"github.com/stretchr/objx"
+-	"github.com/stretchr/testify/assert"
+-	"reflect"
+-	"runtime"
+-	"strings"
+-)
+-
+-// TestingT is an interface wrapper around *testing.T
+-type TestingT interface {
+-	Logf(format string, args ...interface{})
+-	Errorf(format string, args ...interface{})
+-}
+-
+-/*
+-	Call
+-*/
+-
+-// Call represents a method call and is used for setting expectations,
+-// as well as recording activity.
+-type Call struct {
+-
+-	// The name of the method that was or will be called.
+-	Method string
+-
+-	// Holds the arguments of the method.
+-	Arguments Arguments
+-
+-	// Holds the arguments that should be returned when
+-	// this method is called.
+-	ReturnArguments Arguments
+-
+-	// The number of times to return the return arguments when setting
+-	// expectations. 0 means to always return the value.
+-	Repeatability int
+-}
+-
+-// Mock is the workhorse used to track activity on another object.
+-// For an example of its usage, refer to the "Example Usage" section at the top of this document.
+-type Mock struct {
+-
+-	// The method name that is currently
+-	// being referred to by the On method.
+-	onMethodName string
+-
+-	// An array of the arguments that are
+-	// currently being referred to by the On method.
+-	onMethodArguments Arguments
+-
+-	// Represents the calls that are expected of
+-	// an object.
+-	ExpectedCalls []Call
+-
+-	// Holds the calls that were made to this mocked object.
+-	Calls []Call
+-
+-	// TestData holds any data that might be useful for testing.  Testify ignores
+-	// this data completely allowing you to do whatever you like with it.
+-	testData objx.Map
+-}
+-
+-// TestData holds any data that might be useful for testing.  Testify ignores
+-// this data completely allowing you to do whatever you like with it.
+-func (m *Mock) TestData() objx.Map {
+-
+-	if m.testData == nil {
+-		m.testData = make(objx.Map)
+-	}
+-
+-	return m.testData
+-}
+-
+-/*
+-	Setting expectations
+-*/
+-
+-// On starts a description of an expectation of the specified method
+-// being called.
+-//
+-//     Mock.On("MyMethod", arg1, arg2)
+-func (m *Mock) On(methodName string, arguments ...interface{}) *Mock {
+-	m.onMethodName = methodName
+-	m.onMethodArguments = arguments
+-	return m
+-}
+-
+-// Return finishes a description of an expectation of the method (and arguments)
+-// specified in the most recent On method call.
+-//
+-//     Mock.On("MyMethod", arg1, arg2).Return(returnArg1, returnArg2)
+-func (m *Mock) Return(returnArguments ...interface{}) *Mock {
+-	m.ExpectedCalls = append(m.ExpectedCalls, Call{m.onMethodName, m.onMethodArguments, returnArguments, 0})
+-	return m
+-}
+-
+-// Once indicates that the mock should only return the value once.
+-//
+-//    Mock.On("MyMethod", arg1, arg2).Return(returnArg1, returnArg2).Once()
+-func (m *Mock) Once() {
+-	m.ExpectedCalls[len(m.ExpectedCalls)-1].Repeatability = 1
+-}
+-
+-// Twice indicates that the mock should only return the value twice.
+-//
+-//    Mock.On("MyMethod", arg1, arg2).Return(returnArg1, returnArg2).Twice()
+-func (m *Mock) Twice() {
+-	m.ExpectedCalls[len(m.ExpectedCalls)-1].Repeatability = 2
+-}
+-
+-// Times indicates that the mock should only return the indicated number
+-// of times.
+-//
+-//    Mock.On("MyMethod", arg1, arg2).Return(returnArg1, returnArg2).Times(5)
+-func (m *Mock) Times(i int) {
+-	m.ExpectedCalls[len(m.ExpectedCalls)-1].Repeatability = i
+-}
+-
+-/*
+-	Recording and responding to activity
+-*/
+-
+-func (m *Mock) findExpectedCall(method string, arguments ...interface{}) (int, *Call) {
+-	for i, call := range m.ExpectedCalls {
+-		if call.Method == method && call.Repeatability > -1 {
+-
+-			_, diffCount := call.Arguments.Diff(arguments)
+-			if diffCount == 0 {
+-				return i, &call
+-			}
+-
+-		}
+-	}
+-	return -1, nil
+-}
+-
+-func (m *Mock) findClosestCall(method string, arguments ...interface{}) (bool, *Call) {
+-
+-	diffCount := 0
+-	var closestCall *Call = nil
+-
+-	for _, call := range m.ExpectedCalls {
+-		if call.Method == method {
+-
+-			_, tempDiffCount := call.Arguments.Diff(arguments)
+-			if tempDiffCount < diffCount || diffCount == 0 {
+-				diffCount = tempDiffCount
+-				closestCall = &call
+-			}
+-
+-		}
+-	}
+-
+-	if closestCall == nil {
+-		return false, nil
+-	}
+-
+-	return true, closestCall
+-}
+-
+-func callString(method string, arguments Arguments, includeArgumentValues bool) string {
+-
+-	var argValsString string = ""
+-	if includeArgumentValues {
+-		var argVals []string
+-		for argIndex, arg := range arguments {
+-			argVals = append(argVals, fmt.Sprintf("%d: %v", argIndex, arg))
+-		}
+-		argValsString = fmt.Sprintf("\n\t\t%s", strings.Join(argVals, "\n\t\t"))
+-	}
+-
+-	return fmt.Sprintf("%s(%s)%s", method, arguments.String(), argValsString)
+-}
+-
+-// Called tells the mock object that a method has been called, and gets an array
+-// of arguments to return.  Panics if the call is unexpected (i.e. not preceded by
+-// appropriate .On .Return() calls)
+-func (m *Mock) Called(arguments ...interface{}) Arguments {
+-
+-	// get the calling function's name
+-	pc, _, _, ok := runtime.Caller(1)
+-	if !ok {
+-		panic("Couldn't get the caller information")
+-	}
+-	functionPath := runtime.FuncForPC(pc).Name()
+-	parts := strings.Split(functionPath, ".")
+-	functionName := parts[len(parts)-1]
+-
+-	found, call := m.findExpectedCall(functionName, arguments...)
+-
+-	switch {
+-	case found < 0:
+-		// we have to fail here - because we don't know what to do
+-		// as the return arguments.  This is because:
+-		//
+-		//   a) this is a totally unexpected call to this method,
+-		//   b) the arguments are not what was expected, or
+-		//   c) the developer has forgotten to add an accompanying On...Return pair.
+-
+-		closestFound, closestCall := m.findClosestCall(functionName, arguments...)
+-
+-		if closestFound {
+-			panic(fmt.Sprintf("\n\nmock: Unexpected Method Call\n-----------------------------\n\n%s\n\nThe closest call I have is: \n\n%s\n", callString(functionName, arguments, true), callString(functionName, closestCall.Arguments, true)))
+-		} else {
+-			panic(fmt.Sprintf("\nassert: mock: I don't know what to return because the method call was unexpected.\n\tEither do Mock.On(\"%s\").Return(...) first, or remove the %s() call.\n\tThis method was unexpected:\n\t\t%s\n\tat: %s", functionName, functionName, callString(functionName, arguments, true), assert.CallerInfo()))
+-		}
+-	case call.Repeatability == 1:
+-		call.Repeatability = -1
+-		m.ExpectedCalls[found] = *call
+-	case call.Repeatability > 1:
+-		call.Repeatability -= 1
+-		m.ExpectedCalls[found] = *call
+-	}
+-
+-	// add the call
+-	m.Calls = append(m.Calls, Call{functionName, arguments, make([]interface{}, 0), 0})
+-
+-	return call.ReturnArguments
+-
+-}
+-
+-/*
+-	Assertions
+-*/
+-
+-// AssertExpectationsForObjects asserts that everything specified with On and Return
+-// of the specified objects was in fact called as expected.
+-//
+-// Calls may have occurred in any order.
+-func AssertExpectationsForObjects(t TestingT, testObjects ...interface{}) bool {
+-	var success bool = true
+-	for _, obj := range testObjects {
+-		mockObj := obj.(Mock)
+-		success = success && mockObj.AssertExpectations(t)
+-	}
+-	return success
+-}
+-
+-// AssertExpectations asserts that everything specified with On and Return was
+-// in fact called as expected.  Calls may have occurred in any order.
+-func (m *Mock) AssertExpectations(t TestingT) bool {
+-
+-	var somethingMissing bool = false
+-	var failedExpectations int = 0
+-
+-	// iterate through each expectation
+-	for _, expectedCall := range m.ExpectedCalls {
+-		switch {
+-		case !m.methodWasCalled(expectedCall.Method, expectedCall.Arguments):
+-			somethingMissing = true
+-			failedExpectations++
+-			t.Logf("\u274C\t%s(%s)", expectedCall.Method, expectedCall.Arguments.String())
+-		case expectedCall.Repeatability > 0:
+-			somethingMissing = true
+-			failedExpectations++
+-		default:
+-			t.Logf("\u2705\t%s(%s)", expectedCall.Method, expectedCall.Arguments.String())
+-		}
+-	}
+-
+-	if somethingMissing {
+-		t.Errorf("FAIL: %d out of %d expectation(s) were met.\n\tThe code you are testing needs to make %d more call(s).\n\tat: %s", len(m.ExpectedCalls)-failedExpectations, len(m.ExpectedCalls), failedExpectations, assert.CallerInfo())
+-	}
+-
+-	return !somethingMissing
+-}
+-
+-// AssertNumberOfCalls asserts that the method was called expectedCalls times.
+-func (m *Mock) AssertNumberOfCalls(t TestingT, methodName string, expectedCalls int) bool {
+-	var actualCalls int = 0
+-	for _, call := range m.Calls {
+-		if call.Method == methodName {
+-			actualCalls++
+-		}
+-	}
+-	return assert.Equal(t, actualCalls, expectedCalls, fmt.Sprintf("Expected number of calls (%d) does not match the actual number of calls (%d).", expectedCalls, actualCalls))
+-}
+-
+-// AssertCalled asserts that the method was called.
+-func (m *Mock) AssertCalled(t TestingT, methodName string, arguments ...interface{}) bool {
+-	if !assert.True(t, m.methodWasCalled(methodName, arguments), fmt.Sprintf("The \"%s\" method should have been called with %d argument(s), but was not.", methodName, len(arguments))) {
+-		t.Logf("%s", m.ExpectedCalls)
+-		return false
+-	}
+-	return true
+-}
+-
+-// AssertNotCalled asserts that the method was not called.
+-func (m *Mock) AssertNotCalled(t TestingT, methodName string, arguments ...interface{}) bool {
+-	if !assert.False(t, m.methodWasCalled(methodName, arguments), fmt.Sprintf("The \"%s\" method was called with %d argument(s), but should NOT have been.", methodName, len(arguments))) {
+-		t.Logf("%s", m.ExpectedCalls)
+-		return false
+-	}
+-	return true
+-}
+-
+-func (m *Mock) methodWasCalled(methodName string, arguments []interface{}) bool {
+-	for _, call := range m.Calls {
+-		if call.Method == methodName {
+-
+-			_, differences := call.Arguments.Diff(arguments)
+-
+-			if differences == 0 {
+-				// found the expected call
+-				return true
+-			}
+-
+-		}
+-	}
+-	// we didn't find the expected call
+-	return false
+-}
+-
+-/*
+-	Arguments
+-*/
+-
+-// Arguments holds an array of method arguments or return values.
+-type Arguments []interface{}
+-
+-const (
+-	// The "any" argument.  Used in Diff and Assert when
+-	// the argument being tested shouldn't be taken into consideration.
+-	Anything string = "mock.Anything"
+-)
+-
+-// AnythingOfTypeArgument is a string that contains the type of an argument
+-// for use when type checking.  Used in Diff and Assert.
+-type AnythingOfTypeArgument string
+-
+-// AnythingOfType returns an AnythingOfTypeArgument object containing the
+-// name of the type to check for.  Used in Diff and Assert.
+-//
+-// For example:
+-//	Assert(t, AnythingOfType("string"), AnythingOfType("int"))
+-func AnythingOfType(t string) AnythingOfTypeArgument {
+-	return AnythingOfTypeArgument(t)
+-}
+-
+-// Get returns the argument at the specified index.
+-func (args Arguments) Get(index int) interface{} {
+-	if index+1 > len(args) {
+-		panic(fmt.Sprintf("assert: arguments: Cannot call Get(%d) because there are %d argument(s).", index, len(args)))
+-	}
+-	return args[index]
+-}
+-
+-// Is gets whether the objects match the arguments specified.
+-func (args Arguments) Is(objects ...interface{}) bool {
+-	for i, obj := range args {
+-		if obj != objects[i] {
+-			return false
+-		}
+-	}
+-	return true
+-}
+-
+-// Diff gets a string describing the differences between the arguments
+-// and the specified objects.
+-//
+-// Returns the diff string and number of differences found.
+-func (args Arguments) Diff(objects []interface{}) (string, int) {
+-
+-	var output string = "\n"
+-	var differences int
+-
+-	var maxArgCount int = len(args)
+-	if len(objects) > maxArgCount {
+-		maxArgCount = len(objects)
+-	}
+-
+-	for i := 0; i < maxArgCount; i++ {
+-		var actual, expected interface{}
+-
+-		if len(objects) <= i {
+-			actual = "(Missing)"
+-		} else {
+-			actual = objects[i]
+-		}
+-
+-		if len(args) <= i {
+-			expected = "(Missing)"
+-		} else {
+-			expected = args[i]
+-		}
+-
+-		if reflect.TypeOf(expected) == reflect.TypeOf((*AnythingOfTypeArgument)(nil)).Elem() {
+-
+-			// type checking
+-			if reflect.TypeOf(actual).Name() != string(expected.(AnythingOfTypeArgument)) && reflect.TypeOf(actual).String() != string(expected.(AnythingOfTypeArgument)) {
+-				// not match
+-				differences++
+-				output = fmt.Sprintf("%s\t%d: \u274C  type %s != type %s - %s\n", output, i, expected, reflect.TypeOf(actual).Name(), actual)
+-			}
+-
+-		} else {
+-
+-			// normal checking
+-
+-			if assert.ObjectsAreEqual(expected, Anything) || assert.ObjectsAreEqual(actual, Anything) || assert.ObjectsAreEqual(actual, expected) {
+-				// match
+-				output = fmt.Sprintf("%s\t%d: \u2705  %s == %s\n", output, i, actual, expected)
+-			} else {
+-				// not match
+-				differences++
+-				output = fmt.Sprintf("%s\t%d: \u274C  %s != %s\n", output, i, actual, expected)
+-			}
+-		}
+-
+-	}
+-
+-	if differences == 0 {
+-		return "No differences.", differences
+-	}
+-
+-	return output, differences
+-
+-}
+-
+-// Assert compares the arguments with the specified objects and fails if
+-// they do not exactly match.
+-func (args Arguments) Assert(t TestingT, objects ...interface{}) bool {
+-
+-	// get the differences
+-	diff, diffCount := args.Diff(objects)
+-
+-	if diffCount == 0 {
+-		return true
+-	}
+-
+-	// there are differences... report them...
+-	t.Logf(diff)
+-	t.Errorf("%sArguments do not match.", assert.CallerInfo())
+-
+-	return false
+-
+-}
+-
+-// String gets the argument at the specified index. Panics if there is no argument, or
+-// if the argument is of the wrong type.
+-//
+-// If no index is provided, String() returns a complete string representation
+-// of the arguments.
+-func (args Arguments) String(indexOrNil ...int) string {
+-
+-	if len(indexOrNil) == 0 {
+-		// normal String() method - return a string representation of the args
+-		var argsStr []string
+-		for _, arg := range args {
+-			argsStr = append(argsStr, fmt.Sprintf("%s", reflect.TypeOf(arg)))
+-		}
+-		return strings.Join(argsStr, ",")
+-	} else if len(indexOrNil) == 1 {
+-		// Index has been specified - get the argument at that index
+-		var index int = indexOrNil[0]
+-		var s string
+-		var ok bool
+-		if s, ok = args.Get(index).(string); !ok {
+-			panic(fmt.Sprintf("assert: arguments: String(%d) failed because object wasn't correct type: %s", index, args.Get(index)))
+-		}
+-		return s
+-	}
+-
+-	panic(fmt.Sprintf("assert: arguments: Wrong number of arguments passed to String.  Must be 0 or 1, not %d", len(indexOrNil)))
+-
+-}
+-
+-// Int gets the argument at the specified index. Panics if there is no argument, or
+-// if the argument is of the wrong type.
+-func (args Arguments) Int(index int) int {
+-	var s int
+-	var ok bool
+-	if s, ok = args.Get(index).(int); !ok {
+-		panic(fmt.Sprintf("assert: arguments: Int(%d) failed because object wasn't correct type: %s", index, args.Get(index)))
+-	}
+-	return s
+-}
+-
+-// Error gets the argument at the specified index. Panics if there is no argument, or
+-// if the argument is of the wrong type.
+-func (args Arguments) Error(index int) error {
+-	obj := args.Get(index)
+-	var s error
+-	var ok bool
+-	if obj == nil {
+-		return nil
+-	}
+-	if s, ok = obj.(error); !ok {
+-		panic(fmt.Sprintf("assert: arguments: Error(%d) failed because object wasn't correct type: %s", index, args.Get(index)))
+-	}
+-	return s
+-}
+-
+-// Bool gets the argument at the specified index. Panics if there is no argument, or
+-// if the argument is of the wrong type.
+-func (args Arguments) Bool(index int) bool {
+-	var s bool
+-	var ok bool
+-	if s, ok = args.Get(index).(bool); !ok {
+-		panic(fmt.Sprintf("assert: arguments: Bool(%d) failed because object wasn't correct type: %s", index, args.Get(index)))
+-	}
+-	return s
+-}
+diff --git a/Godeps/_workspace/src/github.com/stretchr/testify/mock/mock_test.go b/Godeps/_workspace/src/github.com/stretchr/testify/mock/mock_test.go
+deleted file mode 100644
+index cd06451..0000000
+--- a/Godeps/_workspace/src/github.com/stretchr/testify/mock/mock_test.go
++++ /dev/null
+@@ -1,657 +0,0 @@
+-package mock
+-
+-import (
+-	"errors"
+-	"github.com/stretchr/testify/assert"
+-	"testing"
+-)
+-
+-/*
+-	Test objects
+-*/
+-
+-// ExampleInterface represents an example interface.
+-type ExampleInterface interface {
+-	TheExampleMethod(a, b, c int) (int, error)
+-}
+-
+-// TestExampleImplementation is a test implementation of ExampleInterface
+-type TestExampleImplementation struct {
+-	Mock
+-}
+-
+-func (i *TestExampleImplementation) TheExampleMethod(a, b, c int) (int, error) {
+-	args := i.Mock.Called(a, b, c)
+-	return args.Int(0), errors.New("Whoops")
+-}
+-
+-func (i *TestExampleImplementation) TheExampleMethod2(yesorno bool) {
+-	i.Mock.Called(yesorno)
+-}
+-
+-type ExampleType struct{}
+-
+-func (i *TestExampleImplementation) TheExampleMethod3(et *ExampleType) error {
+-	args := i.Mock.Called(et)
+-	return args.Error(0)
+-}
+-
+-/*
+-	Mock
+-*/
+-
+-func Test_Mock_TestData(t *testing.T) {
+-
+-	var mockedService *TestExampleImplementation = new(TestExampleImplementation)
+-
+-	if assert.NotNil(t, mockedService.TestData()) {
+-
+-		mockedService.TestData().Set("something", 123)
+-		assert.Equal(t, 123, mockedService.TestData().Get("something").Data())
+-
+-	}
+-
+-}
+-
+-func Test_Mock_On(t *testing.T) {
+-
+-	// make a test impl object
+-	var mockedService *TestExampleImplementation = new(TestExampleImplementation)
+-
+-	assert.Equal(t, mockedService.Mock.On("TheExampleMethod"), &mockedService.Mock)
+-	assert.Equal(t, "TheExampleMethod", mockedService.Mock.onMethodName)
+-
+-}
+-
+-func Test_Mock_On_WithArgs(t *testing.T) {
+-
+-	// make a test impl object
+-	var mockedService *TestExampleImplementation = new(TestExampleImplementation)
+-
+-	assert.Equal(t, mockedService.Mock.On("TheExampleMethod", 1, 2, 3), &mockedService.Mock)
+-	assert.Equal(t, "TheExampleMethod", mockedService.Mock.onMethodName)
+-	assert.Equal(t, 1, mockedService.Mock.onMethodArguments[0])
+-	assert.Equal(t, 2, mockedService.Mock.onMethodArguments[1])
+-	assert.Equal(t, 3, mockedService.Mock.onMethodArguments[2])
+-
+-}
+-
+-func Test_Mock_Return(t *testing.T) {
+-
+-	// make a test impl object
+-	var mockedService *TestExampleImplementation = new(TestExampleImplementation)
+-
+-	assert.Equal(t, mockedService.Mock.On("TheExampleMethod", "A", "B", true).Return(1, "two", true), &mockedService.Mock)
+-
+-	// ensure the call was created
+-	if assert.Equal(t, 1, len(mockedService.Mock.ExpectedCalls)) {
+-		call := mockedService.Mock.ExpectedCalls[0]
+-
+-		assert.Equal(t, "TheExampleMethod", call.Method)
+-		assert.Equal(t, "A", call.Arguments[0])
+-		assert.Equal(t, "B", call.Arguments[1])
+-		assert.Equal(t, true, call.Arguments[2])
+-		assert.Equal(t, 1, call.ReturnArguments[0])
+-		assert.Equal(t, "two", call.ReturnArguments[1])
+-		assert.Equal(t, true, call.ReturnArguments[2])
+-		assert.Equal(t, 0, call.Repeatability)
+-
+-	}
+-
+-}
+-
+-func Test_Mock_Return_Once(t *testing.T) {
+-
+-	// make a test impl object
+-	var mockedService *TestExampleImplementation = new(TestExampleImplementation)
+-
+-	mockedService.Mock.On("TheExampleMethod", "A", "B", true).Return(1, "two", true).Once()
+-
+-	// ensure the call was created
+-	if assert.Equal(t, 1, len(mockedService.Mock.ExpectedCalls)) {
+-		call := mockedService.Mock.ExpectedCalls[0]
+-
+-		assert.Equal(t, "TheExampleMethod", call.Method)
+-		assert.Equal(t, "A", call.Arguments[0])
+-		assert.Equal(t, "B", call.Arguments[1])
+-		assert.Equal(t, true, call.Arguments[2])
+-		assert.Equal(t, 1, call.ReturnArguments[0])
+-		assert.Equal(t, "two", call.ReturnArguments[1])
+-		assert.Equal(t, true, call.ReturnArguments[2])
+-		assert.Equal(t, 1, call.Repeatability)
+-
+-	}
+-
+-}
+-
+-func Test_Mock_Return_Twice(t *testing.T) {
+-
+-	// make a test impl object
+-	var mockedService *TestExampleImplementation = new(TestExampleImplementation)
+-
+-	mockedService.Mock.On("TheExampleMethod", "A", "B", true).Return(1, "two", true).Twice()
+-
+-	// ensure the call was created
+-	if assert.Equal(t, 1, len(mockedService.Mock.ExpectedCalls)) {
+-		call := mockedService.Mock.ExpectedCalls[0]
+-
+-		assert.Equal(t, "TheExampleMethod", call.Method)
+-		assert.Equal(t, "A", call.Arguments[0])
+-		assert.Equal(t, "B", call.Arguments[1])
+-		assert.Equal(t, true, call.Arguments[2])
+-		assert.Equal(t, 1, call.ReturnArguments[0])
+-		assert.Equal(t, "two", call.ReturnArguments[1])
+-		assert.Equal(t, true, call.ReturnArguments[2])
+-		assert.Equal(t, 2, call.Repeatability)
+-
+-	}
+-
+-}
+-
+-func Test_Mock_Return_Times(t *testing.T) {
+-
+-	// make a test impl object
+-	var mockedService *TestExampleImplementation = new(TestExampleImplementation)
+-
+-	mockedService.Mock.On("TheExampleMethod", "A", "B", true).Return(1, "two", true).Times(5)
+-
+-	// ensure the call was created
+-	if assert.Equal(t, 1, len(mockedService.Mock.ExpectedCalls)) {
+-		call := mockedService.Mock.ExpectedCalls[0]
+-
+-		assert.Equal(t, "TheExampleMethod", call.Method)
+-		assert.Equal(t, "A", call.Arguments[0])
+-		assert.Equal(t, "B", call.Arguments[1])
+-		assert.Equal(t, true, call.Arguments[2])
+-		assert.Equal(t, 1, call.ReturnArguments[0])
+-		assert.Equal(t, "two", call.ReturnArguments[1])
+-		assert.Equal(t, true, call.ReturnArguments[2])
+-		assert.Equal(t, 5, call.Repeatability)
+-
+-	}
+-
+-}
+-
+-func Test_Mock_Return_Nothing(t *testing.T) {
+-
+-	// make a test impl object
+-	var mockedService *TestExampleImplementation = new(TestExampleImplementation)
+-
+-	assert.Equal(t, mockedService.Mock.On("TheExampleMethod", "A", "B", true).Return(), &mockedService.Mock)
+-
+-	// ensure the call was created
+-	if assert.Equal(t, 1, len(mockedService.Mock.ExpectedCalls)) {
+-		call := mockedService.Mock.ExpectedCalls[0]
+-
+-		assert.Equal(t, "TheExampleMethod", call.Method)
+-		assert.Equal(t, "A", call.Arguments[0])
+-		assert.Equal(t, "B", call.Arguments[1])
+-		assert.Equal(t, true, call.Arguments[2])
+-		assert.Equal(t, 0, len(call.ReturnArguments))
+-
+-	}
+-
+-}
+-
+-func Test_Mock_findExpectedCall(t *testing.T) {
+-
+-	m := new(Mock)
+-	m.On("One", 1).Return("one")
+-	m.On("Two", 2).Return("two")
+-	m.On("Two", 3).Return("three")
+-
+-	f, c := m.findExpectedCall("Two", 3)
+-
+-	if assert.Equal(t, 2, f) {
+-		if assert.NotNil(t, c) {
+-			assert.Equal(t, "Two", c.Method)
+-			assert.Equal(t, 3, c.Arguments[0])
+-			assert.Equal(t, "three", c.ReturnArguments[0])
+-		}
+-	}
+-
+-}
+-
+-func Test_Mock_findExpectedCall_For_Unknown_Method(t *testing.T) {
+-
+-	m := new(Mock)
+-	m.On("One", 1).Return("one")
+-	m.On("Two", 2).Return("two")
+-	m.On("Two", 3).Return("three")
+-
+-	f, _ := m.findExpectedCall("Two")
+-
+-	assert.Equal(t, -1, f)
+-
+-}
+-
+-func Test_Mock_findExpectedCall_Respects_Repeatability(t *testing.T) {
+-
+-	m := new(Mock)
+-	m.On("One", 1).Return("one")
+-	m.On("Two", 2).Return("two").Once()
+-	m.On("Two", 3).Return("three").Twice()
+-	m.On("Two", 3).Return("three").Times(8)
+-
+-	f, c := m.findExpectedCall("Two", 3)
+-
+-	if assert.Equal(t, 2, f) {
+-		if assert.NotNil(t, c) {
+-			assert.Equal(t, "Two", c.Method)
+-			assert.Equal(t, 3, c.Arguments[0])
+-			assert.Equal(t, "three", c.ReturnArguments[0])
+-		}
+-	}
+-
+-}
+-
+-func Test_callString(t *testing.T) {
+-
+-	assert.Equal(t, `Method(int,bool,string)`, callString("Method", []interface{}{1, true, "something"}, false))
+-
+-}
+-
+-func Test_Mock_Called(t *testing.T) {
+-
+-	var mockedService *TestExampleImplementation = new(TestExampleImplementation)
+-
+-	mockedService.Mock.On("Test_Mock_Called", 1, 2, 3).Return(5, "6", true)
+-
+-	returnArguments := mockedService.Mock.Called(1, 2, 3)
+-
+-	if assert.Equal(t, 1, len(mockedService.Mock.Calls)) {
+-		assert.Equal(t, "Test_Mock_Called", mockedService.Mock.Calls[0].Method)
+-		assert.Equal(t, 1, mockedService.Mock.Calls[0].Arguments[0])
+-		assert.Equal(t, 2, mockedService.Mock.Calls[0].Arguments[1])
+-		assert.Equal(t, 3, mockedService.Mock.Calls[0].Arguments[2])
+-	}
+-
+-	if assert.Equal(t, 3, len(returnArguments)) {
+-		assert.Equal(t, 5, returnArguments[0])
+-		assert.Equal(t, "6", returnArguments[1])
+-		assert.Equal(t, true, returnArguments[2])
+-	}
+-
+-}
+-
+-func Test_Mock_Called_For_Bounded_Repeatability(t *testing.T) {
+-
+-	var mockedService *TestExampleImplementation = new(TestExampleImplementation)
+-
+-	mockedService.Mock.On("Test_Mock_Called_For_Bounded_Repeatability", 1, 2, 3).Return(5, "6", true).Once()
+-	mockedService.Mock.On("Test_Mock_Called_For_Bounded_Repeatability", 1, 2, 3).Return(-1, "hi", false)
+-
+-	returnArguments1 := mockedService.Mock.Called(1, 2, 3)
+-	returnArguments2 := mockedService.Mock.Called(1, 2, 3)
+-
+-	if assert.Equal(t, 2, len(mockedService.Mock.Calls)) {
+-		assert.Equal(t, "Test_Mock_Called_For_Bounded_Repeatability", mockedService.Mock.Calls[0].Method)
+-		assert.Equal(t, 1, mockedService.Mock.Calls[0].Arguments[0])
+-		assert.Equal(t, 2, mockedService.Mock.Calls[0].Arguments[1])
+-		assert.Equal(t, 3, mockedService.Mock.Calls[0].Arguments[2])
+-
+-		assert.Equal(t, "Test_Mock_Called_For_Bounded_Repeatability", mockedService.Mock.Calls[1].Method)
+-		assert.Equal(t, 1, mockedService.Mock.Calls[1].Arguments[0])
+-		assert.Equal(t, 2, mockedService.Mock.Calls[1].Arguments[1])
+-		assert.Equal(t, 3, mockedService.Mock.Calls[1].Arguments[2])
+-	}
+-
+-	if assert.Equal(t, 3, len(returnArguments1)) {
+-		assert.Equal(t, 5, returnArguments1[0])
+-		assert.Equal(t, "6", returnArguments1[1])
+-		assert.Equal(t, true, returnArguments1[2])
+-	}
+-
+-	if assert.Equal(t, 3, len(returnArguments2)) {
+-		assert.Equal(t, -1, returnArguments2[0])
+-		assert.Equal(t, "hi", returnArguments2[1])
+-		assert.Equal(t, false, returnArguments2[2])
+-	}
+-
+-}
+-
+-func Test_Mock_Called_For_SetTime_Expectation(t *testing.T) {
+-
+-	var mockedService *TestExampleImplementation = new(TestExampleImplementation)
+-
+-	mockedService.Mock.On("TheExampleMethod", 1, 2, 3).Return(5, "6", true).Times(4)
+-
+-	mockedService.TheExampleMethod(1, 2, 3)
+-	mockedService.TheExampleMethod(1, 2, 3)
+-	mockedService.TheExampleMethod(1, 2, 3)
+-	mockedService.TheExampleMethod(1, 2, 3)
+-	assert.Panics(t, func() {
+-		mockedService.TheExampleMethod(1, 2, 3)
+-	})
+-
+-}
+-
+-func Test_Mock_Called_Unexpected(t *testing.T) {
+-
+-	var mockedService *TestExampleImplementation = new(TestExampleImplementation)
+-
+-	// make sure it panics if no expectation was made
+-	assert.Panics(t, func() {
+-		mockedService.Mock.Called(1, 2, 3)
+-	}, "Calling unexpected method should panic")
+-
+-}
+-
+-func Test_AssertExpectationsForObjects_Helper(t *testing.T) {
+-
+-	var mockedService1 *TestExampleImplementation = new(TestExampleImplementation)
+-	var mockedService2 *TestExampleImplementation = new(TestExampleImplementation)
+-	var mockedService3 *TestExampleImplementation = new(TestExampleImplementation)
+-
+-	mockedService1.Mock.On("Test_AssertExpectationsForObjects_Helper", 1).Return()
+-	mockedService2.Mock.On("Test_AssertExpectationsForObjects_Helper", 2).Return()
+-	mockedService3.Mock.On("Test_AssertExpectationsForObjects_Helper", 3).Return()
+-
+-	mockedService1.Called(1)
+-	mockedService2.Called(2)
+-	mockedService3.Called(3)
+-
+-	assert.True(t, AssertExpectationsForObjects(t, mockedService1.Mock, mockedService2.Mock, mockedService3.Mock))
+-
+-}
+-
+-func Test_AssertExpectationsForObjects_Helper_Failed(t *testing.T) {
+-
+-	var mockedService1 *TestExampleImplementation = new(TestExampleImplementation)
+-	var mockedService2 *TestExampleImplementation = new(TestExampleImplementation)
+-	var mockedService3 *TestExampleImplementation = new(TestExampleImplementation)
+-
+-	mockedService1.Mock.On("Test_AssertExpectationsForObjects_Helper_Failed", 1).Return()
+-	mockedService2.Mock.On("Test_AssertExpectationsForObjects_Helper_Failed", 2).Return()
+-	mockedService3.Mock.On("Test_AssertExpectationsForObjects_Helper_Failed", 3).Return()
+-
+-	mockedService1.Called(1)
+-	mockedService3.Called(3)
+-
+-	tt := new(testing.T)
+-	assert.False(t, AssertExpectationsForObjects(tt, mockedService1.Mock, mockedService2.Mock, mockedService3.Mock))
+-
+-}
+-
+-func Test_Mock_AssertExpectations(t *testing.T) {
+-
+-	var mockedService *TestExampleImplementation = new(TestExampleImplementation)
+-
+-	mockedService.Mock.On("Test_Mock_AssertExpectations", 1, 2, 3).Return(5, 6, 7)
+-
+-	tt := new(testing.T)
+-	assert.False(t, mockedService.AssertExpectations(tt))
+-
+-	// make the call now
+-	mockedService.Mock.Called(1, 2, 3)
+-
+-	// now assert expectations
+-	assert.True(t, mockedService.AssertExpectations(tt))
+-
+-}
+-
+-func Test_Mock_AssertExpectationsCustomType(t *testing.T) {
+-
+-	var mockedService *TestExampleImplementation = new(TestExampleImplementation)
+-
+-	mockedService.Mock.On("TheExampleMethod3", AnythingOfType("*mock.ExampleType")).Return(nil).Once()
+-
+-	tt := new(testing.T)
+-	assert.False(t, mockedService.AssertExpectations(tt))
+-
+-	// make the call now
+-	mockedService.TheExampleMethod3(&ExampleType{})
+-
+-	// now assert expectations
+-	assert.True(t, mockedService.AssertExpectations(tt))
+-
+-}
+-
+-func Test_Mock_AssertExpectations_With_Repeatability(t *testing.T) {
+-
+-	var mockedService *TestExampleImplementation = new(TestExampleImplementation)
+-
+-	mockedService.Mock.On("Test_Mock_AssertExpectations_With_Repeatability", 1, 2, 3).Return(5, 6, 7).Twice()
+-
+-	tt := new(testing.T)
+-	assert.False(t, mockedService.AssertExpectations(tt))
+-
+-	// make the call now
+-	mockedService.Mock.Called(1, 2, 3)
+-
+-	assert.False(t, mockedService.AssertExpectations(tt))
+-
+-	mockedService.Mock.Called(1, 2, 3)
+-
+-	// now assert expectations
+-	assert.True(t, mockedService.AssertExpectations(tt))
+-
+-}
+-
+-func Test_Mock_TwoCallsWithDifferentArguments(t *testing.T) {
+-
+-	var mockedService *TestExampleImplementation = new(TestExampleImplementation)
+-
+-	mockedService.Mock.On("Test_Mock_TwoCallsWithDifferentArguments", 1, 2, 3).Return(5, 6, 7)
+-	mockedService.Mock.On("Test_Mock_TwoCallsWithDifferentArguments", 4, 5, 6).Return(5, 6, 7)
+-
+-	args1 := mockedService.Mock.Called(1, 2, 3)
+-	assert.Equal(t, 5, args1.Int(0))
+-	assert.Equal(t, 6, args1.Int(1))
+-	assert.Equal(t, 7, args1.Int(2))
+-
+-	args2 := mockedService.Mock.Called(4, 5, 6)
+-	assert.Equal(t, 5, args2.Int(0))
+-	assert.Equal(t, 6, args2.Int(1))
+-	assert.Equal(t, 7, args2.Int(2))
+-
+-}
+-
+-func Test_Mock_AssertNumberOfCalls(t *testing.T) {
+-
+-	var mockedService *TestExampleImplementation = new(TestExampleImplementation)
+-
+-	mockedService.Mock.On("Test_Mock_AssertNumberOfCalls", 1, 2, 3).Return(5, 6, 7)
+-
+-	mockedService.Mock.Called(1, 2, 3)
+-	assert.True(t, mockedService.AssertNumberOfCalls(t, "Test_Mock_AssertNumberOfCalls", 1))
+-
+-	mockedService.Mock.Called(1, 2, 3)
+-	assert.True(t, mockedService.AssertNumberOfCalls(t, "Test_Mock_AssertNumberOfCalls", 2))
+-
+-}
+-
+-func Test_Mock_AssertCalled(t *testing.T) {
+-
+-	var mockedService *TestExampleImplementation = new(TestExampleImplementation)
+-
+-	mockedService.Mock.On("Test_Mock_AssertCalled", 1, 2, 3).Return(5, 6, 7)
+-
+-	mockedService.Mock.Called(1, 2, 3)
+-
+-	assert.True(t, mockedService.AssertCalled(t, "Test_Mock_AssertCalled", 1, 2, 3))
+-
+-}
+-
+-func Test_Mock_AssertCalled_WithArguments(t *testing.T) {
+-
+-	var mockedService *TestExampleImplementation = new(TestExampleImplementation)
+-
+-	mockedService.Mock.On("Test_Mock_AssertCalled_WithArguments", 1, 2, 3).Return(5, 6, 7)
+-
+-	mockedService.Mock.Called(1, 2, 3)
+-
+-	tt := new(testing.T)
+-	assert.True(t, mockedService.AssertCalled(tt, "Test_Mock_AssertCalled_WithArguments", 1, 2, 3))
+-	assert.False(t, mockedService.AssertCalled(tt, "Test_Mock_AssertCalled_WithArguments", 2, 3, 4))
+-
+-}
+-
+-func Test_Mock_AssertCalled_WithArguments_With_Repeatability(t *testing.T) {
+-
+-	var mockedService *TestExampleImplementation = new(TestExampleImplementation)
+-
+-	mockedService.Mock.On("Test_Mock_AssertCalled_WithArguments_With_Repeatability", 1, 2, 3).Return(5, 6, 7).Once()
+-	mockedService.Mock.On("Test_Mock_AssertCalled_WithArguments_With_Repeatability", 2, 3, 4).Return(5, 6, 7).Once()
+-
+-	mockedService.Mock.Called(1, 2, 3)
+-	mockedService.Mock.Called(2, 3, 4)
+-
+-	tt := new(testing.T)
+-	assert.True(t, mockedService.AssertCalled(tt, "Test_Mock_AssertCalled_WithArguments_With_Repeatability", 1, 2, 3))
+-	assert.True(t, mockedService.AssertCalled(tt, "Test_Mock_AssertCalled_WithArguments_With_Repeatability", 2, 3, 4))
+-	assert.False(t, mockedService.AssertCalled(tt, "Test_Mock_AssertCalled_WithArguments_With_Repeatability", 3, 4, 5))
+-
+-}
+-
+-func Test_Mock_AssertNotCalled(t *testing.T) {
+-
+-	var mockedService *TestExampleImplementation = new(TestExampleImplementation)
+-
+-	mockedService.Mock.On("Test_Mock_AssertNotCalled", 1, 2, 3).Return(5, 6, 7)
+-
+-	mockedService.Mock.Called(1, 2, 3)
+-
+-	assert.True(t, mockedService.AssertNotCalled(t, "Test_Mock_NotCalled"))
+-
+-}
+-
+-/*
+-	Arguments helper methods
+-*/
+-func Test_Arguments_Get(t *testing.T) {
+-
+-	var args Arguments = []interface{}{"string", 123, true}
+-
+-	assert.Equal(t, "string", args.Get(0).(string))
+-	assert.Equal(t, 123, args.Get(1).(int))
+-	assert.Equal(t, true, args.Get(2).(bool))
+-
+-}
+-
+-func Test_Arguments_Is(t *testing.T) {
+-
+-	var args Arguments = []interface{}{"string", 123, true}
+-
+-	assert.True(t, args.Is("string", 123, true))
+-	assert.False(t, args.Is("wrong", 456, false))
+-
+-}
+-
+-func Test_Arguments_Diff(t *testing.T) {
+-
+-	var args Arguments = []interface{}{"Hello World", 123, true}
+-	var diff string
+-	var count int
+-	diff, count = args.Diff([]interface{}{"Hello World", 456, "false"})
+-
+-	assert.Equal(t, 2, count)
+-	assert.Contains(t, diff, `%!s(int=456) != %!s(int=123)`)
+-	assert.Contains(t, diff, `false != %!s(bool=true)`)
+-
+-}
+-
+-func Test_Arguments_Diff_DifferentNumberOfArgs(t *testing.T) {
+-
+-	var args Arguments = []interface{}{"string", 123, true}
+-	var diff string
+-	var count int
+-	diff, count = args.Diff([]interface{}{"string", 456, "false", "extra"})
+-
+-	assert.Equal(t, 3, count)
+-	assert.Contains(t, diff, `extra != (Missing)`)
+-
+-}
+-
+-func Test_Arguments_Diff_WithAnythingArgument(t *testing.T) {
+-
+-	var args Arguments = []interface{}{"string", 123, true}
+-	var count int
+-	_, count = args.Diff([]interface{}{"string", Anything, true})
+-
+-	assert.Equal(t, 0, count)
+-
+-}
+-
+-func Test_Arguments_Diff_WithAnythingArgument_InActualToo(t *testing.T) {
+-
+-	var args Arguments = []interface{}{"string", Anything, true}
+-	var count int
+-	_, count = args.Diff([]interface{}{"string", 123, true})
+-
+-	assert.Equal(t, 0, count)
+-
+-}
+-
+-func Test_Arguments_Diff_WithAnythingOfTypeArgument(t *testing.T) {
+-
+-	var args Arguments = []interface{}{"string", AnythingOfType("int"), true}
+-	var count int
+-	_, count = args.Diff([]interface{}{"string", 123, true})
+-
+-	assert.Equal(t, 0, count)
+-
+-}
+-
+-func Test_Arguments_Diff_WithAnythingOfTypeArgument_Failing(t *testing.T) {
+-
+-	var args Arguments = []interface{}{"string", AnythingOfType("string"), true}
+-	var count int
+-	var diff string
+-	diff, count = args.Diff([]interface{}{"string", 123, true})
+-
+-	assert.Equal(t, 1, count)
+-	assert.Contains(t, diff, `string != type int - %!s(int=123)`)
+-
+-}
+-
+-func Test_Arguments_Assert(t *testing.T) {
+-
+-	var args Arguments = []interface{}{"string", 123, true}
+-
+-	assert.True(t, args.Assert(t, "string", 123, true))
+-
+-}
+-
+-func Test_Arguments_String_Representation(t *testing.T) {
+-
+-	var args Arguments = []interface{}{"string", 123, true}
+-	assert.Equal(t, `string,int,bool`, args.String())
+-
+-}
+-
+-func Test_Arguments_String(t *testing.T) {
+-
+-	var args Arguments = []interface{}{"string", 123, true}
+-	assert.Equal(t, "string", args.String(0))
+-
+-}
+-
+-func Test_Arguments_Error(t *testing.T) {
+-
+-	var err error = errors.New("An Error")
+-	var args Arguments = []interface{}{"string", 123, true, err}
+-	assert.Equal(t, err, args.Error(3))
+-
+-}
+-
+-func Test_Arguments_Error_Nil(t *testing.T) {
+-
+-	var args Arguments = []interface{}{"string", 123, true, nil}
+-	assert.Equal(t, nil, args.Error(3))
+-
+-}
+-
+-func Test_Arguments_Int(t *testing.T) {
+-
+-	var args Arguments = []interface{}{"string", 123, true}
+-	assert.Equal(t, 123, args.Int(1))
+-
+-}
+-
+-func Test_Arguments_Bool(t *testing.T) {
+-
+-	var args Arguments = []interface{}{"string", 123, true}
+-	assert.Equal(t, true, args.Bool(2))
+-
+-}
+diff --git a/Godeps/_workspace/src/github.com/vaughan0/go-ini/LICENSE b/Godeps/_workspace/src/github.com/vaughan0/go-ini/LICENSE
+deleted file mode 100644
+index 968b453..0000000
+--- a/Godeps/_workspace/src/github.com/vaughan0/go-ini/LICENSE
++++ /dev/null
+@@ -1,14 +0,0 @@
+-Copyright (c) 2013 Vaughan Newton
+-
+-Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated
+-documentation files (the "Software"), to deal in the Software without restriction, including without limitation the
+-rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit
+-persons to whom the Software is furnished to do so, subject to the following conditions:
+-
+-The above copyright notice and this permission notice shall be included in all copies or substantial portions of the
+-Software.
+-
+-THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE
+-WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
+-COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
+-OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
+diff --git a/Godeps/_workspace/src/github.com/vaughan0/go-ini/README.md b/Godeps/_workspace/src/github.com/vaughan0/go-ini/README.md
+deleted file mode 100644
+index d5cd4e7..0000000
+--- a/Godeps/_workspace/src/github.com/vaughan0/go-ini/README.md
++++ /dev/null
+@@ -1,70 +0,0 @@
+-go-ini
+-======
+-
+-INI parsing library for Go (golang).
+-
+-View the API documentation [here](http://godoc.org/github.com/vaughan0/go-ini).
+-
+-Usage
+------
+-
+-Parse an INI file:
+-
+-```go
+-import "github.com/vaughan0/go-ini"
+-
+-file, err := ini.LoadFile("myfile.ini")
+-```
+-
+-Get data from the parsed file:
+-
+-```go
+-name, ok := file.Get("person", "name")
+-if !ok {
+-  panic("'name' variable missing from 'person' section")
+-}
+-```
+-
+-Iterate through values in a section:
+-
+-```go
+-for key, value := range file["mysection"] {
+-  fmt.Printf("%s => %s\n", key, value)
+-}
+-```
+-
+-Iterate through sections in a file:
+-
+-```go
+-for name, section := range file {
+-  fmt.Printf("Section name: %s\n", name)
+-}
+-```
+-
+-File Format
+------------
+-
+-INI files are parsed by go-ini line-by-line. Each line may be one of the following:
+-
+-  * A section definition: [section-name]
+-  * A property: key = value
+-  * A comment: #blahblah _or_ ;blahblah
+-  * Blank. The line will be ignored.
+-
+-Properties defined before any section headers are placed in the default section, which has
+-the empty string as its key.
+-
+-Example:
+-
+-```ini
+-# I am a comment
+-; So am I!
+-
+-[apples]
+-colour = red or green
+-shape = applish
+-
+-[oranges]
+-shape = square
+-colour = blue
+-```
+diff --git a/Godeps/_workspace/src/github.com/vaughan0/go-ini/ini.go b/Godeps/_workspace/src/github.com/vaughan0/go-ini/ini.go
+deleted file mode 100644
+index 81aeb32..0000000
+--- a/Godeps/_workspace/src/github.com/vaughan0/go-ini/ini.go
++++ /dev/null
+@@ -1,123 +0,0 @@
+-// Package ini provides functions for parsing INI configuration files.
+-package ini
+-
+-import (
+-	"bufio"
+-	"fmt"
+-	"io"
+-	"os"
+-	"regexp"
+-	"strings"
+-)
+-
+-var (
+-	sectionRegex = regexp.MustCompile(`^\[(.*)\]$`)
+-	assignRegex  = regexp.MustCompile(`^([^=]+)=(.*)$`)
+-)
+-
+-// ErrSyntax is returned when there is a syntax error in an INI file.
+-type ErrSyntax struct {
+-	Line   int
+-	Source string // The contents of the erroneous line, without leading or trailing whitespace
+-}
+-
+-func (e ErrSyntax) Error() string {
+-	return fmt.Sprintf("invalid INI syntax on line %d: %s", e.Line, e.Source)
+-}
+-
+-// A File represents a parsed INI file.
+-type File map[string]Section
+-
+-// A Section represents a single section of an INI file.
+-type Section map[string]string
+-
+-// Returns a named Section. A Section will be created if one does not already exist for the given name.
+-func (f File) Section(name string) Section {
+-	section := f[name]
+-	if section == nil {
+-		section = make(Section)
+-		f[name] = section
+-	}
+-	return section
+-}
+-
+-// Looks up a value for a key in a section and returns that value, along with a boolean result similar to a map lookup.
+-func (f File) Get(section, key string) (value string, ok bool) {
+-	if s := f[section]; s != nil {
+-		value, ok = s[key]
+-	}
+-	return
+-}
+-
+-// Loads INI data from a reader and stores the data in the File.
+-func (f File) Load(in io.Reader) (err error) {
+-	bufin, ok := in.(*bufio.Reader)
+-	if !ok {
+-		bufin = bufio.NewReader(in)
+-	}
+-	return parseFile(bufin, f)
+-}
+-
+-// Loads INI data from a named file and stores the data in the File.
+-func (f File) LoadFile(file string) (err error) {
+-	in, err := os.Open(file)
+-	if err != nil {
+-		return
+-	}
+-	defer in.Close()
+-	return f.Load(in)
+-}
+-
+-func parseFile(in *bufio.Reader, file File) (err error) {
+-	section := ""
+-	lineNum := 0
+-	for done := false; !done; {
+-		var line string
+-		if line, err = in.ReadString('\n'); err != nil {
+-			if err == io.EOF {
+-				done = true
+-			} else {
+-				return
+-			}
+-		}
+-		lineNum++
+-		line = strings.TrimSpace(line)
+-		if len(line) == 0 {
+-			// Skip blank lines
+-			continue
+-		}
+-		if line[0] == ';' || line[0] == '#' {
+-			// Skip comments
+-			continue
+-		}
+-
+-		if groups := assignRegex.FindStringSubmatch(line); groups != nil {
+-			key, val := groups[1], groups[2]
+-			key, val = strings.TrimSpace(key), strings.TrimSpace(val)
+-			file.Section(section)[key] = val
+-		} else if groups := sectionRegex.FindStringSubmatch(line); groups != nil {
+-			name := strings.TrimSpace(groups[1])
+-			section = name
+-			// Create the section if it does not exist
+-			file.Section(section)
+-		} else {
+-			return ErrSyntax{lineNum, line}
+-		}
+-
+-	}
+-	return nil
+-}
+-
+-// Loads and returns a File from a reader.
+-func Load(in io.Reader) (File, error) {
+-	file := make(File)
+-	err := file.Load(in)
+-	return file, err
+-}
+-
+-// Loads and returns an INI File from a file on disk.
+-func LoadFile(filename string) (File, error) {
+-	file := make(File)
+-	err := file.LoadFile(filename)
+-	return file, err
+-}
+diff --git a/Godeps/_workspace/src/github.com/vaughan0/go-ini/ini_linux_test.go b/Godeps/_workspace/src/github.com/vaughan0/go-ini/ini_linux_test.go
+deleted file mode 100644
+index 38a6f00..0000000
+--- a/Godeps/_workspace/src/github.com/vaughan0/go-ini/ini_linux_test.go
++++ /dev/null
+@@ -1,43 +0,0 @@
+-package ini
+-
+-import (
+-	"reflect"
+-	"syscall"
+-	"testing"
+-)
+-
+-func TestLoadFile(t *testing.T) {
+-	originalOpenFiles := numFilesOpen(t)
+-
+-	file, err := LoadFile("test.ini")
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-
+-	if originalOpenFiles != numFilesOpen(t) {
+-		t.Error("test.ini not closed")
+-	}
+-
+-	if !reflect.DeepEqual(file, File{"default": {"stuff": "things"}}) {
+-		t.Error("file not read correctly")
+-	}
+-}
+-
+-func numFilesOpen(t *testing.T) (num uint64) {
+-	var rlimit syscall.Rlimit
+-	err := syscall.Getrlimit(syscall.RLIMIT_NOFILE, &rlimit)
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-	maxFds := int(rlimit.Cur)
+-
+-	var stat syscall.Stat_t
+-	for i := 0; i < maxFds; i++ {
+-		if syscall.Fstat(i, &stat) == nil {
+-			num++
+-		} else {
+-			return
+-		}
+-	}
+-	return
+-}
+diff --git a/Godeps/_workspace/src/github.com/vaughan0/go-ini/ini_test.go b/Godeps/_workspace/src/github.com/vaughan0/go-ini/ini_test.go
+deleted file mode 100644
+index 06a4d05..0000000
+--- a/Godeps/_workspace/src/github.com/vaughan0/go-ini/ini_test.go
++++ /dev/null
+@@ -1,89 +0,0 @@
+-package ini
+-
+-import (
+-	"reflect"
+-	"strings"
+-	"testing"
+-)
+-
+-func TestLoad(t *testing.T) {
+-	src := `
+-  # Comments are ignored
+-
+-  herp = derp
+-
+-  [foo]
+-  hello=world
+-  whitespace should   =   not matter   
+-  ; sneaky semicolon-style comment
+-  multiple = equals = signs
+-
+-  [bar]
+-  this = that`
+-
+-	file, err := Load(strings.NewReader(src))
+-	if err != nil {
+-		t.Fatal(err)
+-	}
+-	check := func(section, key, expect string) {
+-		if value, _ := file.Get(section, key); value != expect {
+-			t.Errorf("Get(%q, %q): expected %q, got %q", section, key, expect, value)
+-		}
+-	}
+-
+-	check("", "herp", "derp")
+-	check("foo", "hello", "world")
+-	check("foo", "whitespace should", "not matter")
+-	check("foo", "multiple", "equals = signs")
+-	check("bar", "this", "that")
+-}
+-
+-func TestSyntaxError(t *testing.T) {
+-	src := `
+-  # Line 2
+-  [foo]
+-  bar = baz
+-  # Here's an error on line 6:
+-  wut?
+-  herp = derp`
+-	_, err := Load(strings.NewReader(src))
+-	t.Logf("%T: %v", err, err)
+-	if err == nil {
+-		t.Fatal("expected an error, got nil")
+-	}
+-	syntaxErr, ok := err.(ErrSyntax)
+-	if !ok {
+-		t.Fatal("expected an error of type ErrSyntax")
+-	}
+-	if syntaxErr.Line != 6 {
+-		t.Fatal("incorrect line number")
+-	}
+-	if syntaxErr.Source != "wut?" {
+-		t.Fatal("incorrect source")
+-	}
+-}
+-
+-func TestDefinedSectionBehaviour(t *testing.T) {
+-	check := func(src string, expect File) {
+-		file, err := Load(strings.NewReader(src))
+-		if err != nil {
+-			t.Fatal(err)
+-		}
+-		if !reflect.DeepEqual(file, expect) {
+-			t.Errorf("expected %v, got %v", expect, file)
+-		}
+-	}
+-	// No sections for an empty file
+-	check("", File{})
+-	// Default section only if there are actually values for it
+-	check("foo=bar", File{"": {"foo": "bar"}})
+-	// User-defined sections should always be present, even if empty
+-	check("[a]\n[b]\nfoo=bar", File{
+-		"a": {},
+-		"b": {"foo": "bar"},
+-	})
+-	check("foo=bar\n[a]\nthis=that", File{
+-		"":  {"foo": "bar"},
+-		"a": {"this": "that"},
+-	})
+-}
+diff --git a/Godeps/_workspace/src/github.com/vaughan0/go-ini/test.ini b/Godeps/_workspace/src/github.com/vaughan0/go-ini/test.ini
+deleted file mode 100644
+index d13c999..0000000
+--- a/Godeps/_workspace/src/github.com/vaughan0/go-ini/test.ini
++++ /dev/null
+@@ -1,2 +0,0 @@
+-[default]
+-stuff = things
+diff --git a/Godeps/_workspace/src/gopkg.in/v1/yaml/LICENSE b/Godeps/_workspace/src/gopkg.in/v1/yaml/LICENSE
+deleted file mode 100644
+index 53320c3..0000000
+--- a/Godeps/_workspace/src/gopkg.in/v1/yaml/LICENSE
++++ /dev/null
+@@ -1,185 +0,0 @@
+-This software is licensed under the LGPLv3, included below.
+-
+-As a special exception to the GNU Lesser General Public License version 3
+-("LGPL3"), the copyright holders of this Library give you permission to
+-convey to a third party a Combined Work that links statically or dynamically
+-to this Library without providing any Minimal Corresponding Source or
+-Minimal Application Code as set out in 4d or providing the installation
+-information set out in section 4e, provided that you comply with the other
+-provisions of LGPL3 and provided that you meet, for the Application the
+-terms and conditions of the license(s) which apply to the Application.
+-
+-Except as stated in this special exception, the provisions of LGPL3 will
+-continue to comply in full to this Library. If you modify this Library, you
+-may apply this exception to your version of this Library, but you are not
+-obliged to do so. If you do not wish to do so, delete this exception
+-statement from your version. This exception does not (and cannot) modify any
+-license terms which apply to the Application, with which you must still
+-comply.
+-
+-
+-                   GNU LESSER GENERAL PUBLIC LICENSE
+-                       Version 3, 29 June 2007
+-
+- Copyright (C) 2007 Free Software Foundation, Inc. <http://fsf.org/>
+- Everyone is permitted to copy and distribute verbatim copies
+- of this license document, but changing it is not allowed.
+-
+-
+-  This version of the GNU Lesser General Public License incorporates
+-the terms and conditions of version 3 of the GNU General Public
+-License, supplemented by the additional permissions listed below.
+-
+-  0. Additional Definitions.
+-
+-  As used herein, "this License" refers to version 3 of the GNU Lesser
+-General Public License, and the "GNU GPL" refers to version 3 of the GNU
+-General Public License.
+-
+-  "The Library" refers to a covered work governed by this License,
+-other than an Application or a Combined Work as defined below.
+-
+-  An "Application" is any work that makes use of an interface provided
+-by the Library, but which is not otherwise based on the Library.
+-Defining a subclass of a class defined by the Library is deemed a mode
+-of using an interface provided by the Library.
+-
+-  A "Combined Work" is a work produced by combining or linking an
+-Application with the Library.  The particular version of the Library
+-with which the Combined Work was made is also called the "Linked
+-Version".
+-
+-  The "Minimal Corresponding Source" for a Combined Work means the
+-Corresponding Source for the Combined Work, excluding any source code
+-for portions of the Combined Work that, considered in isolation, are
+-based on the Application, and not on the Linked Version.
+-
+-  The "Corresponding Application Code" for a Combined Work means the
+-object code and/or source code for the Application, including any data
+-and utility programs needed for reproducing the Combined Work from the
+-Application, but excluding the System Libraries of the Combined Work.
+-
+-  1. Exception to Section 3 of the GNU GPL.
+-
+-  You may convey a covered work under sections 3 and 4 of this License
+-without being bound by section 3 of the GNU GPL.
+-
+-  2. Conveying Modified Versions.
+-
+-  If you modify a copy of the Library, and, in your modifications, a
+-facility refers to a function or data to be supplied by an Application
+-that uses the facility (other than as an argument passed when the
+-facility is invoked), then you may convey a copy of the modified
+-version:
+-
+-   a) under this License, provided that you make a good faith effort to
+-   ensure that, in the event an Application does not supply the
+-   function or data, the facility still operates, and performs
+-   whatever part of its purpose remains meaningful, or
+-
+-   b) under the GNU GPL, with none of the additional permissions of
+-   this License applicable to that copy.
+-
+-  3. Object Code Incorporating Material from Library Header Files.
+-
+-  The object code form of an Application may incorporate material from
+-a header file that is part of the Library.  You may convey such object
+-code under terms of your choice, provided that, if the incorporated
+-material is not limited to numerical parameters, data structure
+-layouts and accessors, or small macros, inline functions and templates
+-(ten or fewer lines in length), you do both of the following:
+-
+-   a) Give prominent notice with each copy of the object code that the
+-   Library is used in it and that the Library and its use are
+-   covered by this License.
+-
+-   b) Accompany the object code with a copy of the GNU GPL and this license
+-   document.
+-
+-  4. Combined Works.
+-
+-  You may convey a Combined Work under terms of your choice that,
+-taken together, effectively do not restrict modification of the
+-portions of the Library contained in the Combined Work and reverse
+-engineering for debugging such modifications, if you also do each of
+-the following:
+-
+-   a) Give prominent notice with each copy of the Combined Work that
+-   the Library is used in it and that the Library and its use are
+-   covered by this License.
+-
+-   b) Accompany the Combined Work with a copy of the GNU GPL and this license
+-   document.
+-
+-   c) For a Combined Work that displays copyright notices during
+-   execution, include the copyright notice for the Library among
+-   these notices, as well as a reference directing the user to the
+-   copies of the GNU GPL and this license document.
+-
+-   d) Do one of the following:
+-
+-       0) Convey the Minimal Corresponding Source under the terms of this
+-       License, and the Corresponding Application Code in a form
+-       suitable for, and under terms that permit, the user to
+-       recombine or relink the Application with a modified version of
+-       the Linked Version to produce a modified Combined Work, in the
+-       manner specified by section 6 of the GNU GPL for conveying
+-       Corresponding Source.
+-
+-       1) Use a suitable shared library mechanism for linking with the
+-       Library.  A suitable mechanism is one that (a) uses at run time
+-       a copy of the Library already present on the user's computer
+-       system, and (b) will operate properly with a modified version
+-       of the Library that is interface-compatible with the Linked
+-       Version.
+-
+-   e) Provide Installation Information, but only if you would otherwise
+-   be required to provide such information under section 6 of the
+-   GNU GPL, and only to the extent that such information is
+-   necessary to install and execute a modified version of the
+-   Combined Work produced by recombining or relinking the
+-   Application with a modified version of the Linked Version. (If
+-   you use option 4d0, the Installation Information must accompany
+-   the Minimal Corresponding Source and Corresponding Application
+-   Code. If you use option 4d1, you must provide the Installation
+-   Information in the manner specified by section 6 of the GNU GPL
+-   for conveying Corresponding Source.)
+-
+-  5. Combined Libraries.
+-
+-  You may place library facilities that are a work based on the
+-Library side by side in a single library together with other library
+-facilities that are not Applications and are not covered by this
+-License, and convey such a combined library under terms of your
+-choice, if you do both of the following:
+-
+-   a) Accompany the combined library with a copy of the same work based
+-   on the Library, uncombined with any other library facilities,
+-   conveyed under the terms of this License.
+-
+-   b) Give prominent notice with the combined library that part of it
+-   is a work based on the Library, and explaining where to find the
+-   accompanying uncombined form of the same work.
+-
+-  6. Revised Versions of the GNU Lesser General Public License.
+-
+-  The Free Software Foundation may publish revised and/or new versions
+-of the GNU Lesser General Public License from time to time. Such new
+-versions will be similar in spirit to the present version, but may
+-differ in detail to address new problems or concerns.
+-
+-  Each version is given a distinguishing version number. If the
+-Library as you received it specifies that a certain numbered version
+-of the GNU Lesser General Public License "or any later version"
+-applies to it, you have the option of following the terms and
+-conditions either of that published version or of any later version
+-published by the Free Software Foundation. If the Library as you
+-received it does not specify a version number of the GNU Lesser
+-General Public License, you may choose any version of the GNU Lesser
+-General Public License ever published by the Free Software Foundation.
+-
+-  If the Library as you received it specifies that a proxy can decide
+-whether future versions of the GNU Lesser General Public License shall
+-apply, that proxy's public statement of acceptance of any version is
+-permanent authorization for you to choose that version for the
+-Library.
+diff --git a/Godeps/_workspace/src/gopkg.in/v1/yaml/LICENSE.libyaml b/Godeps/_workspace/src/gopkg.in/v1/yaml/LICENSE.libyaml
+deleted file mode 100644
+index 8da58fb..0000000
+--- a/Godeps/_workspace/src/gopkg.in/v1/yaml/LICENSE.libyaml
++++ /dev/null
+@@ -1,31 +0,0 @@
+-The following files were ported to Go from C files of libyaml, and thus
+-are still covered by their original copyright and license:
+-
+-    apic.go
+-    emitterc.go
+-    parserc.go
+-    readerc.go
+-    scannerc.go
+-    writerc.go
+-    yamlh.go
+-    yamlprivateh.go
+-
+-Copyright (c) 2006 Kirill Simonov
+-
+-Permission is hereby granted, free of charge, to any person obtaining a copy of
+-this software and associated documentation files (the "Software"), to deal in
+-the Software without restriction, including without limitation the rights to
+-use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies
+-of the Software, and to permit persons to whom the Software is furnished to do
+-so, subject to the following conditions:
+-
+-The above copyright notice and this permission notice shall be included in all
+-copies or substantial portions of the Software.
+-
+-THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+-IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+-FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+-AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+-LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+-OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+-SOFTWARE.
+diff --git a/Godeps/_workspace/src/gopkg.in/v1/yaml/README.md b/Godeps/_workspace/src/gopkg.in/v1/yaml/README.md
+deleted file mode 100644
+index 896687b..0000000
+--- a/Godeps/_workspace/src/gopkg.in/v1/yaml/README.md
++++ /dev/null
+@@ -1,128 +0,0 @@
+-# YAML support for the Go language
+-
+-Introduction
+-------------
+-
+-The yaml package enables Go programs to comfortably encode and decode YAML
+-values. It was developed within [Canonical](https://www.canonical.com) as
+-part of the [juju](https://juju.ubuntu.com) project, and is based on a
+-pure Go port of the well-known [libyaml](http://pyyaml.org/wiki/LibYAML)
+-C library to parse and generate YAML data quickly and reliably.
+-
+-Compatibility
+--------------
+-
+-The yaml package is almost compatible with YAML 1.1, including support for
+-anchors, tags, etc. There are still a few missing bits, such as document
+-merging, base-60 floats (huh?), and multi-document unmarshalling. These
+-features are not hard to add, and will be introduced as necessary.
+-
+-Installation and usage
+-----------------------
+-
+-The import path for the package is *gopkg.in/yaml.v1*.
+-
+-To install it, run:
+-
+-    go get gopkg.in/yaml.v1
+-
+-API documentation
+------------------
+-
+-If opened in a browser, the import path itself leads to the API documentation:
+-
+-  * [https://gopkg.in/yaml.v1](https://gopkg.in/yaml.v1)
+-
+-API stability
+--------------
+-
+-The package API for yaml v1 will remain stable as described in [gopkg.in](https://gopkg.in).
+-
+-
+-License
+--------
+-
+-The yaml package is licensed under the LGPL with an exception that allows it to be linked statically. Please see the LICENSE file for details.
+-
+-
+-Example
+--------
+-
+-```Go
+-package main
+-
+-import (
+-        "fmt"
+-        "log"
+-
+-        "gopkg.in/yaml.v1"
+-)
+-
+-var data = `
+-a: Easy!
+-b:
+-  c: 2
+-  d: [3, 4]
+-`
+-
+-type T struct {
+-        A string
+-        B struct{C int; D []int ",flow"}
+-}
+-
+-func main() {
+-        t := T{}
+-    
+-        err := yaml.Unmarshal([]byte(data), &t)
+-        if err != nil {
+-                log.Fatalf("error: %v", err)
+-        }
+-        fmt.Printf("--- t:\n%v\n\n", t)
+-    
+-        d, err := yaml.Marshal(&t)
+-        if err != nil {
+-                log.Fatalf("error: %v", err)
+-        }
+-        fmt.Printf("--- t dump:\n%s\n\n", string(d))
+-    
+-        m := make(map[interface{}]interface{})
+-    
+-        err = yaml.Unmarshal([]byte(data), &m)
+-        if err != nil {
+-                log.Fatalf("error: %v", err)
+-        }
+-        fmt.Printf("--- m:\n%v\n\n", m)
+-    
+-        d, err = yaml.Marshal(&m)
+-        if err != nil {
+-                log.Fatalf("error: %v", err)
+-        }
+-        fmt.Printf("--- m dump:\n%s\n\n", string(d))
+-}
+-```
+-
+-This example will generate the following output:
+-
+-```
+---- t:
+-{Easy! {2 [3 4]}}
+-
+---- t dump:
+-a: Easy!
+-b:
+-  c: 2
+-  d: [3, 4]
+-
+-
+---- m:
+-map[a:Easy! b:map[c:2 d:[3 4]]]
+-
+---- m dump:
+-a: Easy!
+-b:
+-  c: 2
+-  d:
+-  - 3
+-  - 4
+-```
+-
+diff --git a/Godeps/_workspace/src/gopkg.in/v1/yaml/apic.go b/Godeps/_workspace/src/gopkg.in/v1/yaml/apic.go
+deleted file mode 100644
+index 95ec014..0000000
+--- a/Godeps/_workspace/src/gopkg.in/v1/yaml/apic.go
++++ /dev/null
+@@ -1,742 +0,0 @@
+-package yaml
+-
+-import (
+-	"io"
+-	"os"
+-)
+-
+-func yaml_insert_token(parser *yaml_parser_t, pos int, token *yaml_token_t) {
+-	//fmt.Println("yaml_insert_token", "pos:", pos, "typ:", token.typ, "head:", parser.tokens_head, "len:", len(parser.tokens))
+-
+-	// Check if we can move the queue at the beginning of the buffer.
+-	if parser.tokens_head > 0 && len(parser.tokens) == cap(parser.tokens) {
+-		if parser.tokens_head != len(parser.tokens) {
+-			copy(parser.tokens, parser.tokens[parser.tokens_head:])
+-		}
+-		parser.tokens = parser.tokens[:len(parser.tokens)-parser.tokens_head]
+-		parser.tokens_head = 0
+-	}
+-	parser.tokens = append(parser.tokens, *token)
+-	if pos < 0 {
+-		return
+-	}
+-	copy(parser.tokens[parser.tokens_head+pos+1:], parser.tokens[parser.tokens_head+pos:])
+-	parser.tokens[parser.tokens_head+pos] = *token
+-}
+-
+-// Create a new parser object.
+-func yaml_parser_initialize(parser *yaml_parser_t) bool {
+-	*parser = yaml_parser_t{
+-		raw_buffer: make([]byte, 0, input_raw_buffer_size),
+-		buffer:     make([]byte, 0, input_buffer_size),
+-	}
+-	return true
+-}
+-
+-// Destroy a parser object.
+-func yaml_parser_delete(parser *yaml_parser_t) {
+-	*parser = yaml_parser_t{}
+-}
+-
+-// String read handler.
+-func yaml_string_read_handler(parser *yaml_parser_t, buffer []byte) (n int, err error) {
+-	if parser.input_pos == len(parser.input) {
+-		return 0, io.EOF
+-	}
+-	n = copy(buffer, parser.input[parser.input_pos:])
+-	parser.input_pos += n
+-	return n, nil
+-}
+-
+-// File read handler.
+-func yaml_file_read_handler(parser *yaml_parser_t, buffer []byte) (n int, err error) {
+-	return parser.input_file.Read(buffer)
+-}
+-
+-// Set a string input.
+-func yaml_parser_set_input_string(parser *yaml_parser_t, input []byte) {
+-	if parser.read_handler != nil {
+-		panic("must set the input source only once")
+-	}
+-	parser.read_handler = yaml_string_read_handler
+-	parser.input = input
+-	parser.input_pos = 0
+-}
+-
+-// Set a file input.
+-func yaml_parser_set_input_file(parser *yaml_parser_t, file *os.File) {
+-	if parser.read_handler != nil {
+-		panic("must set the input source only once")
+-	}
+-	parser.read_handler = yaml_file_read_handler
+-	parser.input_file = file
+-}
+-
+-// Set the source encoding.
+-func yaml_parser_set_encoding(parser *yaml_parser_t, encoding yaml_encoding_t) {
+-	if parser.encoding != yaml_ANY_ENCODING {
+-		panic("must set the encoding only once")
+-	}
+-	parser.encoding = encoding
+-}
+-
+-// Create a new emitter object.
+-func yaml_emitter_initialize(emitter *yaml_emitter_t) bool {
+-	*emitter = yaml_emitter_t{
+-		buffer:     make([]byte, output_buffer_size),
+-		raw_buffer: make([]byte, 0, output_raw_buffer_size),
+-		states:     make([]yaml_emitter_state_t, 0, initial_stack_size),
+-		events:     make([]yaml_event_t, 0, initial_queue_size),
+-	}
+-	return true
+-}
+-
+-// Destroy an emitter object.
+-func yaml_emitter_delete(emitter *yaml_emitter_t) {
+-	*emitter = yaml_emitter_t{}
+-}
+-
+-// String write handler.
+-func yaml_string_write_handler(emitter *yaml_emitter_t, buffer []byte) error {
+-	*emitter.output_buffer = append(*emitter.output_buffer, buffer...)
+-	return nil
+-}
+-
+-// File write handler.
+-func yaml_file_write_handler(emitter *yaml_emitter_t, buffer []byte) error {
+-	_, err := emitter.output_file.Write(buffer)
+-	return err
+-}
+-
+-// Set a string output.
+-func yaml_emitter_set_output_string(emitter *yaml_emitter_t, output_buffer *[]byte) {
+-	if emitter.write_handler != nil {
+-		panic("must set the output target only once")
+-	}
+-	emitter.write_handler = yaml_string_write_handler
+-	emitter.output_buffer = output_buffer
+-}
+-
+-// Set a file output.
+-func yaml_emitter_set_output_file(emitter *yaml_emitter_t, file io.Writer) {
+-	if emitter.write_handler != nil {
+-		panic("must set the output target only once")
+-	}
+-	emitter.write_handler = yaml_file_write_handler
+-	emitter.output_file = file
+-}
+-
+-// Set the output encoding.
+-func yaml_emitter_set_encoding(emitter *yaml_emitter_t, encoding yaml_encoding_t) {
+-	if emitter.encoding != yaml_ANY_ENCODING {
+-		panic("must set the output encoding only once")
+-	}
+-	emitter.encoding = encoding
+-}
+-
+-// Set the canonical output style.
+-func yaml_emitter_set_canonical(emitter *yaml_emitter_t, canonical bool) {
+-	emitter.canonical = canonical
+-}
+-
+-//// Set the indentation increment.
+-func yaml_emitter_set_indent(emitter *yaml_emitter_t, indent int) {
+-	if indent < 2 || indent > 9 {
+-		indent = 2
+-	}
+-	emitter.best_indent = indent
+-}
+-
+-// Set the preferred line width.
+-func yaml_emitter_set_width(emitter *yaml_emitter_t, width int) {
+-	if width < 0 {
+-		width = -1
+-	}
+-	emitter.best_width = width
+-}
+-
+-// Set if unescaped non-ASCII characters are allowed.
+-func yaml_emitter_set_unicode(emitter *yaml_emitter_t, unicode bool) {
+-	emitter.unicode = unicode
+-}
+-
+-// Set the preferred line break character.
+-func yaml_emitter_set_break(emitter *yaml_emitter_t, line_break yaml_break_t) {
+-	emitter.line_break = line_break
+-}
+-
+-///*
+-// * Destroy a token object.
+-// */
+-//
+-//YAML_DECLARE(void)
+-//yaml_token_delete(yaml_token_t *token)
+-//{
+-//    assert(token);  // Non-NULL token object expected.
+-//
+-//    switch (token.type)
+-//    {
+-//        case YAML_TAG_DIRECTIVE_TOKEN:
+-//            yaml_free(token.data.tag_directive.handle);
+-//            yaml_free(token.data.tag_directive.prefix);
+-//            break;
+-//
+-//        case YAML_ALIAS_TOKEN:
+-//            yaml_free(token.data.alias.value);
+-//            break;
+-//
+-//        case YAML_ANCHOR_TOKEN:
+-//            yaml_free(token.data.anchor.value);
+-//            break;
+-//
+-//        case YAML_TAG_TOKEN:
+-//            yaml_free(token.data.tag.handle);
+-//            yaml_free(token.data.tag.suffix);
+-//            break;
+-//
+-//        case YAML_SCALAR_TOKEN:
+-//            yaml_free(token.data.scalar.value);
+-//            break;
+-//
+-//        default:
+-//            break;
+-//    }
+-//
+-//    memset(token, 0, sizeof(yaml_token_t));
+-//}
+-//
+-///*
+-// * Check if a string is a valid UTF-8 sequence.
+-// *
+-// * Check 'reader.c' for more details on UTF-8 encoding.
+-// */
+-//
+-//static int
+-//yaml_check_utf8(yaml_char_t *start, size_t length)
+-//{
+-//    yaml_char_t *end = start+length;
+-//    yaml_char_t *pointer = start;
+-//
+-//    while (pointer < end) {
+-//        unsigned char octet;
+-//        unsigned int width;
+-//        unsigned int value;
+-//        size_t k;
+-//
+-//        octet = pointer[0];
+-//        width = (octet & 0x80) == 0x00 ? 1 :
+-//                (octet & 0xE0) == 0xC0 ? 2 :
+-//                (octet & 0xF0) == 0xE0 ? 3 :
+-//                (octet & 0xF8) == 0xF0 ? 4 : 0;
+-//        value = (octet & 0x80) == 0x00 ? octet & 0x7F :
+-//                (octet & 0xE0) == 0xC0 ? octet & 0x1F :
+-//                (octet & 0xF0) == 0xE0 ? octet & 0x0F :
+-//                (octet & 0xF8) == 0xF0 ? octet & 0x07 : 0;
+-//        if (!width) return 0;
+-//        if (pointer+width > end) return 0;
+-//        for (k = 1; k < width; k ++) {
+-//            octet = pointer[k];
+-//            if ((octet & 0xC0) != 0x80) return 0;
+-//            value = (value << 6) + (octet & 0x3F);
+-//        }
+-//        if (!((width == 1) ||
+-//            (width == 2 && value >= 0x80) ||
+-//            (width == 3 && value >= 0x800) ||
+-//            (width == 4 && value >= 0x10000))) return 0;
+-//
+-//        pointer += width;
+-//    }
+-//
+-//    return 1;
+-//}
+-//
+-
+-// Create STREAM-START.
+-func yaml_stream_start_event_initialize(event *yaml_event_t, encoding yaml_encoding_t) bool {
+-	*event = yaml_event_t{
+-		typ:      yaml_STREAM_START_EVENT,
+-		encoding: encoding,
+-	}
+-	return true
+-}
+-
+-// Create STREAM-END.
+-func yaml_stream_end_event_initialize(event *yaml_event_t) bool {
+-	*event = yaml_event_t{
+-		typ: yaml_STREAM_END_EVENT,
+-	}
+-	return true
+-}
+-
+-// Create DOCUMENT-START.
+-func yaml_document_start_event_initialize(event *yaml_event_t, version_directive *yaml_version_directive_t,
+-	tag_directives []yaml_tag_directive_t, implicit bool) bool {
+-	*event = yaml_event_t{
+-		typ:               yaml_DOCUMENT_START_EVENT,
+-		version_directive: version_directive,
+-		tag_directives:    tag_directives,
+-		implicit:          implicit,
+-	}
+-	return true
+-}
+-
+-// Create DOCUMENT-END.
+-func yaml_document_end_event_initialize(event *yaml_event_t, implicit bool) bool {
+-	*event = yaml_event_t{
+-		typ:      yaml_DOCUMENT_END_EVENT,
+-		implicit: implicit,
+-	}
+-	return true
+-}
+-
+-///*
+-// * Create ALIAS.
+-// */
+-//
+-//YAML_DECLARE(int)
+-//yaml_alias_event_initialize(event *yaml_event_t, anchor *yaml_char_t)
+-//{
+-//    mark yaml_mark_t = { 0, 0, 0 }
+-//    anchor_copy *yaml_char_t = NULL
+-//
+-//    assert(event) // Non-NULL event object is expected.
+-//    assert(anchor) // Non-NULL anchor is expected.
+-//
+-//    if (!yaml_check_utf8(anchor, strlen((char *)anchor))) return 0
+-//
+-//    anchor_copy = yaml_strdup(anchor)
+-//    if (!anchor_copy)
+-//        return 0
+-//
+-//    ALIAS_EVENT_INIT(*event, anchor_copy, mark, mark)
+-//
+-//    return 1
+-//}
+-
+-// Create SCALAR.
+-func yaml_scalar_event_initialize(event *yaml_event_t, anchor, tag, value []byte, plain_implicit, quoted_implicit bool, style yaml_scalar_style_t) bool {
+-	*event = yaml_event_t{
+-		typ:             yaml_SCALAR_EVENT,
+-		anchor:          anchor,
+-		tag:             tag,
+-		value:           value,
+-		implicit:        plain_implicit,
+-		quoted_implicit: quoted_implicit,
+-		style:           yaml_style_t(style),
+-	}
+-	return true
+-}
+-
+-// Create SEQUENCE-START.
+-func yaml_sequence_start_event_initialize(event *yaml_event_t, anchor, tag []byte, implicit bool, style yaml_sequence_style_t) bool {
+-	*event = yaml_event_t{
+-		typ:      yaml_SEQUENCE_START_EVENT,
+-		anchor:   anchor,
+-		tag:      tag,
+-		implicit: implicit,
+-		style:    yaml_style_t(style),
+-	}
+-	return true
+-}
+-
+-// Create SEQUENCE-END.
+-func yaml_sequence_end_event_initialize(event *yaml_event_t) bool {
+-	*event = yaml_event_t{
+-		typ: yaml_SEQUENCE_END_EVENT,
+-	}
+-	return true
+-}
+-
+-// Create MAPPING-START.
+-func yaml_mapping_start_event_initialize(event *yaml_event_t, anchor, tag []byte, implicit bool, style yaml_mapping_style_t) bool {
+-	*event = yaml_event_t{
+-		typ:      yaml_MAPPING_START_EVENT,
+-		anchor:   anchor,
+-		tag:      tag,
+-		implicit: implicit,
+-		style:    yaml_style_t(style),
+-	}
+-	return true
+-}
+-
+-// Create MAPPING-END.
+-func yaml_mapping_end_event_initialize(event *yaml_event_t) bool {
+-	*event = yaml_event_t{
+-		typ: yaml_MAPPING_END_EVENT,
+-	}
+-	return true
+-}
+-
+-// Destroy an event object.
+-func yaml_event_delete(event *yaml_event_t) {
+-	*event = yaml_event_t{}
+-}
+-
+-///*
+-// * Create a document object.
+-// */
+-//
+-//YAML_DECLARE(int)
+-//yaml_document_initialize(document *yaml_document_t,
+-//        version_directive *yaml_version_directive_t,
+-//        tag_directives_start *yaml_tag_directive_t,
+-//        tag_directives_end *yaml_tag_directive_t,
+-//        start_implicit int, end_implicit int)
+-//{
+-//    struct {
+-//        error yaml_error_type_t
+-//    } context
+-//    struct {
+-//        start *yaml_node_t
+-//        end *yaml_node_t
+-//        top *yaml_node_t
+-//    } nodes = { NULL, NULL, NULL }
+-//    version_directive_copy *yaml_version_directive_t = NULL
+-//    struct {
+-//        start *yaml_tag_directive_t
+-//        end *yaml_tag_directive_t
+-//        top *yaml_tag_directive_t
+-//    } tag_directives_copy = { NULL, NULL, NULL }
+-//    value yaml_tag_directive_t = { NULL, NULL }
+-//    mark yaml_mark_t = { 0, 0, 0 }
+-//
+-//    assert(document) // Non-NULL document object is expected.
+-//    assert((tag_directives_start && tag_directives_end) ||
+-//            (tag_directives_start == tag_directives_end))
+-//                            // Valid tag directives are expected.
+-//
+-//    if (!STACK_INIT(&context, nodes, INITIAL_STACK_SIZE)) goto error
+-//
+-//    if (version_directive) {
+-//        version_directive_copy = yaml_malloc(sizeof(yaml_version_directive_t))
+-//        if (!version_directive_copy) goto error
+-//        version_directive_copy.major = version_directive.major
+-//        version_directive_copy.minor = version_directive.minor
+-//    }
+-//
+-//    if (tag_directives_start != tag_directives_end) {
+-//        tag_directive *yaml_tag_directive_t
+-//        if (!STACK_INIT(&context, tag_directives_copy, INITIAL_STACK_SIZE))
+-//            goto error
+-//        for (tag_directive = tag_directives_start
+-//                tag_directive != tag_directives_end; tag_directive ++) {
+-//            assert(tag_directive.handle)
+-//            assert(tag_directive.prefix)
+-//            if (!yaml_check_utf8(tag_directive.handle,
+-//                        strlen((char *)tag_directive.handle)))
+-//                goto error
+-//            if (!yaml_check_utf8(tag_directive.prefix,
+-//                        strlen((char *)tag_directive.prefix)))
+-//                goto error
+-//            value.handle = yaml_strdup(tag_directive.handle)
+-//            value.prefix = yaml_strdup(tag_directive.prefix)
+-//            if (!value.handle || !value.prefix) goto error
+-//            if (!PUSH(&context, tag_directives_copy, value))
+-//                goto error
+-//            value.handle = NULL
+-//            value.prefix = NULL
+-//        }
+-//    }
+-//
+-//    DOCUMENT_INIT(*document, nodes.start, nodes.end, version_directive_copy,
+-//            tag_directives_copy.start, tag_directives_copy.top,
+-//            start_implicit, end_implicit, mark, mark)
+-//
+-//    return 1
+-//
+-//error:
+-//    STACK_DEL(&context, nodes)
+-//    yaml_free(version_directive_copy)
+-//    while (!STACK_EMPTY(&context, tag_directives_copy)) {
+-//        value yaml_tag_directive_t = POP(&context, tag_directives_copy)
+-//        yaml_free(value.handle)
+-//        yaml_free(value.prefix)
+-//    }
+-//    STACK_DEL(&context, tag_directives_copy)
+-//    yaml_free(value.handle)
+-//    yaml_free(value.prefix)
+-//
+-//    return 0
+-//}
+-//
+-///*
+-// * Destroy a document object.
+-// */
+-//
+-//YAML_DECLARE(void)
+-//yaml_document_delete(document *yaml_document_t)
+-//{
+-//    struct {
+-//        error yaml_error_type_t
+-//    } context
+-//    tag_directive *yaml_tag_directive_t
+-//
+-//    context.error = YAML_NO_ERROR // Eliminate a compliler warning.
+-//
+-//    assert(document) // Non-NULL document object is expected.
+-//
+-//    while (!STACK_EMPTY(&context, document.nodes)) {
+-//        node yaml_node_t = POP(&context, document.nodes)
+-//        yaml_free(node.tag)
+-//        switch (node.type) {
+-//            case YAML_SCALAR_NODE:
+-//                yaml_free(node.data.scalar.value)
+-//                break
+-//            case YAML_SEQUENCE_NODE:
+-//                STACK_DEL(&context, node.data.sequence.items)
+-//                break
+-//            case YAML_MAPPING_NODE:
+-//                STACK_DEL(&context, node.data.mapping.pairs)
+-//                break
+-//            default:
+-//                assert(0) // Should not happen.
+-//        }
+-//    }
+-//    STACK_DEL(&context, document.nodes)
+-//
+-//    yaml_free(document.version_directive)
+-//    for (tag_directive = document.tag_directives.start
+-//            tag_directive != document.tag_directives.end
+-//            tag_directive++) {
+-//        yaml_free(tag_directive.handle)
+-//        yaml_free(tag_directive.prefix)
+-//    }
+-//    yaml_free(document.tag_directives.start)
+-//
+-//    memset(document, 0, sizeof(yaml_document_t))
+-//}
+-//
+-///**
+-// * Get a document node.
+-// */
+-//
+-//YAML_DECLARE(yaml_node_t *)
+-//yaml_document_get_node(document *yaml_document_t, index int)
+-//{
+-//    assert(document) // Non-NULL document object is expected.
+-//
+-//    if (index > 0 && document.nodes.start + index <= document.nodes.top) {
+-//        return document.nodes.start + index - 1
+-//    }
+-//    return NULL
+-//}
+-//
+-///**
+-// * Get the root object.
+-// */
+-//
+-//YAML_DECLARE(yaml_node_t *)
+-//yaml_document_get_root_node(document *yaml_document_t)
+-//{
+-//    assert(document) // Non-NULL document object is expected.
+-//
+-//    if (document.nodes.top != document.nodes.start) {
+-//        return document.nodes.start
+-//    }
+-//    return NULL
+-//}
+-//
+-///*
+-// * Add a scalar node to a document.
+-// */
+-//
+-//YAML_DECLARE(int)
+-//yaml_document_add_scalar(document *yaml_document_t,
+-//        tag *yaml_char_t, value *yaml_char_t, length int,
+-//        style yaml_scalar_style_t)
+-//{
+-//    struct {
+-//        error yaml_error_type_t
+-//    } context
+-//    mark yaml_mark_t = { 0, 0, 0 }
+-//    tag_copy *yaml_char_t = NULL
+-//    value_copy *yaml_char_t = NULL
+-//    node yaml_node_t
+-//
+-//    assert(document) // Non-NULL document object is expected.
+-//    assert(value) // Non-NULL value is expected.
+-//
+-//    if (!tag) {
+-//        tag = (yaml_char_t *)YAML_DEFAULT_SCALAR_TAG
+-//    }
+-//
+-//    if (!yaml_check_utf8(tag, strlen((char *)tag))) goto error
+-//    tag_copy = yaml_strdup(tag)
+-//    if (!tag_copy) goto error
+-//
+-//    if (length < 0) {
+-//        length = strlen((char *)value)
+-//    }
+-//
+-//    if (!yaml_check_utf8(value, length)) goto error
+-//    value_copy = yaml_malloc(length+1)
+-//    if (!value_copy) goto error
+-//    memcpy(value_copy, value, length)
+-//    value_copy[length] = '\0'
+-//
+-//    SCALAR_NODE_INIT(node, tag_copy, value_copy, length, style, mark, mark)
+-//    if (!PUSH(&context, document.nodes, node)) goto error
+-//
+-//    return document.nodes.top - document.nodes.start
+-//
+-//error:
+-//    yaml_free(tag_copy)
+-//    yaml_free(value_copy)
+-//
+-//    return 0
+-//}
+-//
+-///*
+-// * Add a sequence node to a document.
+-// */
+-//
+-//YAML_DECLARE(int)
+-//yaml_document_add_sequence(document *yaml_document_t,
+-//        tag *yaml_char_t, style yaml_sequence_style_t)
+-//{
+-//    struct {
+-//        error yaml_error_type_t
+-//    } context
+-//    mark yaml_mark_t = { 0, 0, 0 }
+-//    tag_copy *yaml_char_t = NULL
+-//    struct {
+-//        start *yaml_node_item_t
+-//        end *yaml_node_item_t
+-//        top *yaml_node_item_t
+-//    } items = { NULL, NULL, NULL }
+-//    node yaml_node_t
+-//
+-//    assert(document) // Non-NULL document object is expected.
+-//
+-//    if (!tag) {
+-//        tag = (yaml_char_t *)YAML_DEFAULT_SEQUENCE_TAG
+-//    }
+-//
+-//    if (!yaml_check_utf8(tag, strlen((char *)tag))) goto error
+-//    tag_copy = yaml_strdup(tag)
+-//    if (!tag_copy) goto error
+-//
+-//    if (!STACK_INIT(&context, items, INITIAL_STACK_SIZE)) goto error
+-//
+-//    SEQUENCE_NODE_INIT(node, tag_copy, items.start, items.end,
+-//            style, mark, mark)
+-//    if (!PUSH(&context, document.nodes, node)) goto error
+-//
+-//    return document.nodes.top - document.nodes.start
+-//
+-//error:
+-//    STACK_DEL(&context, items)
+-//    yaml_free(tag_copy)
+-//
+-//    return 0
+-//}
+-//
+-///*
+-// * Add a mapping node to a document.
+-// */
+-//
+-//YAML_DECLARE(int)
+-//yaml_document_add_mapping(document *yaml_document_t,
+-//        tag *yaml_char_t, style yaml_mapping_style_t)
+-//{
+-//    struct {
+-//        error yaml_error_type_t
+-//    } context
+-//    mark yaml_mark_t = { 0, 0, 0 }
+-//    tag_copy *yaml_char_t = NULL
+-//    struct {
+-//        start *yaml_node_pair_t
+-//        end *yaml_node_pair_t
+-//        top *yaml_node_pair_t
+-//    } pairs = { NULL, NULL, NULL }
+-//    node yaml_node_t
+-//
+-//    assert(document) // Non-NULL document object is expected.
+-//
+-//    if (!tag) {
+-//        tag = (yaml_char_t *)YAML_DEFAULT_MAPPING_TAG
+-//    }
+-//
+-//    if (!yaml_check_utf8(tag, strlen((char *)tag))) goto error
+-//    tag_copy = yaml_strdup(tag)
+-//    if (!tag_copy) goto error
+-//
+-//    if (!STACK_INIT(&context, pairs, INITIAL_STACK_SIZE)) goto error
+-//
+-//    MAPPING_NODE_INIT(node, tag_copy, pairs.start, pairs.end,
+-//            style, mark, mark)
+-//    if (!PUSH(&context, document.nodes, node)) goto error
+-//
+-//    return document.nodes.top - document.nodes.start
+-//
+-//error:
+-//    STACK_DEL(&context, pairs)
+-//    yaml_free(tag_copy)
+-//
+-//    return 0
+-//}
+-//
+-///*
+-// * Append an item to a sequence node.
+-// */
+-//
+-//YAML_DECLARE(int)
+-//yaml_document_append_sequence_item(document *yaml_document_t,
+-//        sequence int, item int)
+-//{
+-//    struct {
+-//        error yaml_error_type_t
+-//    } context
+-//
+-//    assert(document) // Non-NULL document is required.
+-//    assert(sequence > 0
+-//            && document.nodes.start + sequence <= document.nodes.top)
+-//                            // Valid sequence id is required.
+-//    assert(document.nodes.start[sequence-1].type == YAML_SEQUENCE_NODE)
+-//                            // A sequence node is required.
+-//    assert(item > 0 && document.nodes.start + item <= document.nodes.top)
+-//                            // Valid item id is required.
+-//
+-//    if (!PUSH(&context,
+-//                document.nodes.start[sequence-1].data.sequence.items, item))
+-//        return 0
+-//
+-//    return 1
+-//}
+-//
+-///*
+-// * Append a pair of a key and a value to a mapping node.
+-// */
+-//
+-//YAML_DECLARE(int)
+-//yaml_document_append_mapping_pair(document *yaml_document_t,
+-//        mapping int, key int, value int)
+-//{
+-//    struct {
+-//        error yaml_error_type_t
+-//    } context
+-//
+-//    pair yaml_node_pair_t
+-//
+-//    assert(document) // Non-NULL document is required.
+-//    assert(mapping > 0
+-//            && document.nodes.start + mapping <= document.nodes.top)
+-//                            // Valid mapping id is required.
+-//    assert(document.nodes.start[mapping-1].type == YAML_MAPPING_NODE)
+-//                            // A mapping node is required.
+-//    assert(key > 0 && document.nodes.start + key <= document.nodes.top)
+-//                            // Valid key id is required.
+-//    assert(value > 0 && document.nodes.start + value <= document.nodes.top)
+-//                            // Valid value id is required.
+-//
+-//    pair.key = key
+-//    pair.value = value
+-//
+-//    if (!PUSH(&context,
+-//                document.nodes.start[mapping-1].data.mapping.pairs, pair))
+-//        return 0
+-//
+-//    return 1
+-//}
+-//
+-//
+diff --git a/Godeps/_workspace/src/gopkg.in/v1/yaml/decode.go b/Godeps/_workspace/src/gopkg.in/v1/yaml/decode.go
+deleted file mode 100644
+index 74eda3c..0000000
+--- a/Godeps/_workspace/src/gopkg.in/v1/yaml/decode.go
++++ /dev/null
+@@ -1,538 +0,0 @@
+-package yaml
+-
+-import (
+-	"reflect"
+-	"strconv"
+-	"time"
+-)
+-
+-const (
+-	documentNode = 1 << iota
+-	mappingNode
+-	sequenceNode
+-	scalarNode
+-	aliasNode
+-)
+-
+-type node struct {
+-	kind         int
+-	line, column int
+-	tag          string
+-	value        string
+-	implicit     bool
+-	children     []*node
+-	anchors      map[string]*node
+-}
+-
+-// ----------------------------------------------------------------------------
+-// Parser, produces a node tree out of a libyaml event stream.
+-
+-type parser struct {
+-	parser yaml_parser_t
+-	event  yaml_event_t
+-	doc    *node
+-}
+-
+-func newParser(b []byte) *parser {
+-	p := parser{}
+-	if !yaml_parser_initialize(&p.parser) {
+-		panic("Failed to initialize YAML emitter")
+-	}
+-
+-	if len(b) == 0 {
+-		b = []byte{'\n'}
+-	}
+-
+-	yaml_parser_set_input_string(&p.parser, b)
+-
+-	p.skip()
+-	if p.event.typ != yaml_STREAM_START_EVENT {
+-		panic("Expected stream start event, got " + strconv.Itoa(int(p.event.typ)))
+-	}
+-	p.skip()
+-	return &p
+-}
+-
+-func (p *parser) destroy() {
+-	if p.event.typ != yaml_NO_EVENT {
+-		yaml_event_delete(&p.event)
+-	}
+-	yaml_parser_delete(&p.parser)
+-}
+-
+-func (p *parser) skip() {
+-	if p.event.typ != yaml_NO_EVENT {
+-		if p.event.typ == yaml_STREAM_END_EVENT {
+-			panic("Attempted to go past the end of stream. Corrupted value?")
+-		}
+-		yaml_event_delete(&p.event)
+-	}
+-	if !yaml_parser_parse(&p.parser, &p.event) {
+-		p.fail()
+-	}
+-}
+-
+-func (p *parser) fail() {
+-	var where string
+-	var line int
+-	if p.parser.problem_mark.line != 0 {
+-		line = p.parser.problem_mark.line
+-	} else if p.parser.context_mark.line != 0 {
+-		line = p.parser.context_mark.line
+-	}
+-	if line != 0 {
+-		where = "line " + strconv.Itoa(line) + ": "
+-	}
+-	var msg string
+-	if len(p.parser.problem) > 0 {
+-		msg = p.parser.problem
+-	} else {
+-		msg = "Unknown problem parsing YAML content"
+-	}
+-	panic(where + msg)
+-}
+-
+-func (p *parser) anchor(n *node, anchor []byte) {
+-	if anchor != nil {
+-		p.doc.anchors[string(anchor)] = n
+-	}
+-}
+-
+-func (p *parser) parse() *node {
+-	switch p.event.typ {
+-	case yaml_SCALAR_EVENT:
+-		return p.scalar()
+-	case yaml_ALIAS_EVENT:
+-		return p.alias()
+-	case yaml_MAPPING_START_EVENT:
+-		return p.mapping()
+-	case yaml_SEQUENCE_START_EVENT:
+-		return p.sequence()
+-	case yaml_DOCUMENT_START_EVENT:
+-		return p.document()
+-	case yaml_STREAM_END_EVENT:
+-		// Happens when attempting to decode an empty buffer.
+-		return nil
+-	default:
+-		panic("Attempted to parse unknown event: " +
+-			strconv.Itoa(int(p.event.typ)))
+-	}
+-	panic("Unreachable")
+-}
+-
+-func (p *parser) node(kind int) *node {
+-	return &node{
+-		kind:   kind,
+-		line:   p.event.start_mark.line,
+-		column: p.event.start_mark.column,
+-	}
+-}
+-
+-func (p *parser) document() *node {
+-	n := p.node(documentNode)
+-	n.anchors = make(map[string]*node)
+-	p.doc = n
+-	p.skip()
+-	n.children = append(n.children, p.parse())
+-	if p.event.typ != yaml_DOCUMENT_END_EVENT {
+-		panic("Expected end of document event but got " +
+-			strconv.Itoa(int(p.event.typ)))
+-	}
+-	p.skip()
+-	return n
+-}
+-
+-func (p *parser) alias() *node {
+-	n := p.node(aliasNode)
+-	n.value = string(p.event.anchor)
+-	p.skip()
+-	return n
+-}
+-
+-func (p *parser) scalar() *node {
+-	n := p.node(scalarNode)
+-	n.value = string(p.event.value)
+-	n.tag = string(p.event.tag)
+-	n.implicit = p.event.implicit
+-	p.anchor(n, p.event.anchor)
+-	p.skip()
+-	return n
+-}
+-
+-func (p *parser) sequence() *node {
+-	n := p.node(sequenceNode)
+-	p.anchor(n, p.event.anchor)
+-	p.skip()
+-	for p.event.typ != yaml_SEQUENCE_END_EVENT {
+-		n.children = append(n.children, p.parse())
+-	}
+-	p.skip()
+-	return n
+-}
+-
+-func (p *parser) mapping() *node {
+-	n := p.node(mappingNode)
+-	p.anchor(n, p.event.anchor)
+-	p.skip()
+-	for p.event.typ != yaml_MAPPING_END_EVENT {
+-		n.children = append(n.children, p.parse(), p.parse())
+-	}
+-	p.skip()
+-	return n
+-}
+-
+-// ----------------------------------------------------------------------------
+-// Decoder, unmarshals a node into a provided value.
+-
+-type decoder struct {
+-	doc     *node
+-	aliases map[string]bool
+-}
+-
+-func newDecoder() *decoder {
+-	d := &decoder{}
+-	d.aliases = make(map[string]bool)
+-	return d
+-}
+-
+-// d.setter deals with setters and pointer dereferencing and initialization.
+-//
+-// It's a slightly convoluted case to handle properly:
+-//
+-// - nil pointers should be initialized, unless being set to nil
+-// - we don't know at this point yet what's the value to SetYAML() with.
+-// - we can't separate pointer deref/init and setter checking, because
+-//   a setter may be found while going down a pointer chain.
+-//
+-// Thus, here is how it takes care of it:
+-//
+-// - out is provided as a pointer, so that it can be replaced.
+-// - when looking at a non-setter ptr, *out=ptr.Elem(), unless tag=!!null
+-// - when a setter is found, *out=interface{}, and a set() function is
+-//   returned to call SetYAML() with the value of *out once it's defined.
+-//
+-func (d *decoder) setter(tag string, out *reflect.Value, good *bool) (set func()) {
+-	if (*out).Kind() != reflect.Ptr && (*out).CanAddr() {
+-		setter, _ := (*out).Addr().Interface().(Setter)
+-		if setter != nil {
+-			var arg interface{}
+-			*out = reflect.ValueOf(&arg).Elem()
+-			return func() {
+-				*good = setter.SetYAML(tag, arg)
+-			}
+-		}
+-	}
+-	again := true
+-	for again {
+-		again = false
+-		setter, _ := (*out).Interface().(Setter)
+-		if tag != "!!null" || setter != nil {
+-			if pv := (*out); pv.Kind() == reflect.Ptr {
+-				if pv.IsNil() {
+-					*out = reflect.New(pv.Type().Elem()).Elem()
+-					pv.Set((*out).Addr())
+-				} else {
+-					*out = pv.Elem()
+-				}
+-				setter, _ = pv.Interface().(Setter)
+-				again = true
+-			}
+-		}
+-		if setter != nil {
+-			var arg interface{}
+-			*out = reflect.ValueOf(&arg).Elem()
+-			return func() {
+-				*good = setter.SetYAML(tag, arg)
+-			}
+-		}
+-	}
+-	return nil
+-}
+-
+-func (d *decoder) unmarshal(n *node, out reflect.Value) (good bool) {
+-	switch n.kind {
+-	case documentNode:
+-		good = d.document(n, out)
+-	case scalarNode:
+-		good = d.scalar(n, out)
+-	case aliasNode:
+-		good = d.alias(n, out)
+-	case mappingNode:
+-		good = d.mapping(n, out)
+-	case sequenceNode:
+-		good = d.sequence(n, out)
+-	default:
+-		panic("Internal error: unknown node kind: " + strconv.Itoa(n.kind))
+-	}
+-	return
+-}
+-
+-func (d *decoder) document(n *node, out reflect.Value) (good bool) {
+-	if len(n.children) == 1 {
+-		d.doc = n
+-		d.unmarshal(n.children[0], out)
+-		return true
+-	}
+-	return false
+-}
+-
+-func (d *decoder) alias(n *node, out reflect.Value) (good bool) {
+-	an, ok := d.doc.anchors[n.value]
+-	if !ok {
+-		panic("Unknown anchor '" + n.value + "' referenced")
+-	}
+-	if d.aliases[n.value] {
+-		panic("Anchor '" + n.value + "' value contains itself")
+-	}
+-	d.aliases[n.value] = true
+-	good = d.unmarshal(an, out)
+-	delete(d.aliases, n.value)
+-	return good
+-}
+-
+-var durationType = reflect.TypeOf(time.Duration(0))
+-
+-func (d *decoder) scalar(n *node, out reflect.Value) (good bool) {
+-	var tag string
+-	var resolved interface{}
+-	if n.tag == "" && !n.implicit {
+-		tag = "!!str"
+-		resolved = n.value
+-	} else {
+-		tag, resolved = resolve(n.tag, n.value)
+-	}
+-	if set := d.setter(tag, &out, &good); set != nil {
+-		defer set()
+-	}
+-	switch out.Kind() {
+-	case reflect.String:
+-		if resolved != nil {
+-			out.SetString(n.value)
+-			good = true
+-		}
+-	case reflect.Interface:
+-		if resolved == nil {
+-			out.Set(reflect.Zero(out.Type()))
+-		} else {
+-			out.Set(reflect.ValueOf(resolved))
+-		}
+-		good = true
+-	case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
+-		switch resolved := resolved.(type) {
+-		case int:
+-			if !out.OverflowInt(int64(resolved)) {
+-				out.SetInt(int64(resolved))
+-				good = true
+-			}
+-		case int64:
+-			if !out.OverflowInt(resolved) {
+-				out.SetInt(resolved)
+-				good = true
+-			}
+-		case float64:
+-			if resolved < 1<<63-1 && !out.OverflowInt(int64(resolved)) {
+-				out.SetInt(int64(resolved))
+-				good = true
+-			}
+-		case string:
+-			if out.Type() == durationType {
+-				d, err := time.ParseDuration(resolved)
+-				if err == nil {
+-					out.SetInt(int64(d))
+-					good = true
+-				}
+-			}
+-		}
+-	case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr:
+-		switch resolved := resolved.(type) {
+-		case int:
+-			if resolved >= 0 {
+-				out.SetUint(uint64(resolved))
+-				good = true
+-			}
+-		case int64:
+-			if resolved >= 0 {
+-				out.SetUint(uint64(resolved))
+-				good = true
+-			}
+-		case float64:
+-			if resolved < 1<<64-1 && !out.OverflowUint(uint64(resolved)) {
+-				out.SetUint(uint64(resolved))
+-				good = true
+-			}
+-		}
+-	case reflect.Bool:
+-		switch resolved := resolved.(type) {
+-		case bool:
+-			out.SetBool(resolved)
+-			good = true
+-		}
+-	case reflect.Float32, reflect.Float64:
+-		switch resolved := resolved.(type) {
+-		case int:
+-			out.SetFloat(float64(resolved))
+-			good = true
+-		case int64:
+-			out.SetFloat(float64(resolved))
+-			good = true
+-		case float64:
+-			out.SetFloat(resolved)
+-			good = true
+-		}
+-	case reflect.Ptr:
+-		switch resolved.(type) {
+-		case nil:
+-			out.Set(reflect.Zero(out.Type()))
+-			good = true
+-		default:
+-			if out.Type().Elem() == reflect.TypeOf(resolved) {
+-				elem := reflect.New(out.Type().Elem())
+-				elem.Elem().Set(reflect.ValueOf(resolved))
+-				out.Set(elem)
+-				good = true
+-			}
+-		}
+-	}
+-	return good
+-}
+-
+-func settableValueOf(i interface{}) reflect.Value {
+-	v := reflect.ValueOf(i)
+-	sv := reflect.New(v.Type()).Elem()
+-	sv.Set(v)
+-	return sv
+-}
+-
+-func (d *decoder) sequence(n *node, out reflect.Value) (good bool) {
+-	if set := d.setter("!!seq", &out, &good); set != nil {
+-		defer set()
+-	}
+-	var iface reflect.Value
+-	if out.Kind() == reflect.Interface {
+-		// No type hints. Will have to use a generic sequence.
+-		iface = out
+-		out = settableValueOf(make([]interface{}, 0))
+-	}
+-
+-	if out.Kind() != reflect.Slice {
+-		return false
+-	}
+-	et := out.Type().Elem()
+-
+-	l := len(n.children)
+-	for i := 0; i < l; i++ {
+-		e := reflect.New(et).Elem()
+-		if ok := d.unmarshal(n.children[i], e); ok {
+-			out.Set(reflect.Append(out, e))
+-		}
+-	}
+-	if iface.IsValid() {
+-		iface.Set(out)
+-	}
+-	return true
+-}
+-
+-func (d *decoder) mapping(n *node, out reflect.Value) (good bool) {
+-	if set := d.setter("!!map", &out, &good); set != nil {
+-		defer set()
+-	}
+-	if out.Kind() == reflect.Struct {
+-		return d.mappingStruct(n, out)
+-	}
+-
+-	if out.Kind() == reflect.Interface {
+-		// No type hints. Will have to use a generic map.
+-		iface := out
+-		out = settableValueOf(make(map[interface{}]interface{}))
+-		iface.Set(out)
+-	}
+-
+-	if out.Kind() != reflect.Map {
+-		return false
+-	}
+-	outt := out.Type()
+-	kt := outt.Key()
+-	et := outt.Elem()
+-
+-	if out.IsNil() {
+-		out.Set(reflect.MakeMap(outt))
+-	}
+-	l := len(n.children)
+-	for i := 0; i < l; i += 2 {
+-		if isMerge(n.children[i]) {
+-			d.merge(n.children[i+1], out)
+-			continue
+-		}
+-		k := reflect.New(kt).Elem()
+-		if d.unmarshal(n.children[i], k) {
+-			e := reflect.New(et).Elem()
+-			if d.unmarshal(n.children[i+1], e) {
+-				out.SetMapIndex(k, e)
+-			}
+-		}
+-	}
+-	return true
+-}
+-
+-func (d *decoder) mappingStruct(n *node, out reflect.Value) (good bool) {
+-	sinfo, err := getStructInfo(out.Type())
+-	if err != nil {
+-		panic(err)
+-	}
+-	name := settableValueOf("")
+-	l := len(n.children)
+-	for i := 0; i < l; i += 2 {
+-		ni := n.children[i]
+-		if isMerge(ni) {
+-			d.merge(n.children[i+1], out)
+-			continue
+-		}
+-		if !d.unmarshal(ni, name) {
+-			continue
+-		}
+-		if info, ok := sinfo.FieldsMap[name.String()]; ok {
+-			var field reflect.Value
+-			if info.Inline == nil {
+-				field = out.Field(info.Num)
+-			} else {
+-				field = out.FieldByIndex(info.Inline)
+-			}
+-			d.unmarshal(n.children[i+1], field)
+-		}
+-	}
+-	return true
+-}
+-
+-func (d *decoder) merge(n *node, out reflect.Value) {
+-	const wantMap = "map merge requires map or sequence of maps as the value"
+-	switch n.kind {
+-	case mappingNode:
+-		d.unmarshal(n, out)
+-	case aliasNode:
+-		an, ok := d.doc.anchors[n.value]
+-		if ok && an.kind != mappingNode {
+-			panic(wantMap)
+-		}
+-		d.unmarshal(n, out)
+-	case sequenceNode:
+-		// Step backwards as earlier nodes take precedence.
+-		for i := len(n.children)-1; i >= 0; i-- {
+-			ni := n.children[i]
+-			if ni.kind == aliasNode {
+-				an, ok := d.doc.anchors[ni.value]
+-				if ok && an.kind != mappingNode {
+-					panic(wantMap)
+-				}
+-			} else if ni.kind != mappingNode {
+-				panic(wantMap)
+-			}
+-			d.unmarshal(ni, out)
+-		}
+-	default:
+-		panic(wantMap)
+-	}
+-}
+-
+-func isMerge(n *node) bool {
+-	return n.kind == scalarNode && n.value == "<<" && (n.implicit == true || n.tag == "!!merge" || n.tag == "tag:yaml.org,2002:merge")
+-}
+diff --git a/Godeps/_workspace/src/gopkg.in/v1/yaml/decode_test.go b/Godeps/_workspace/src/gopkg.in/v1/yaml/decode_test.go
+deleted file mode 100644
+index d2b45b3..0000000
+--- a/Godeps/_workspace/src/gopkg.in/v1/yaml/decode_test.go
++++ /dev/null
+@@ -1,648 +0,0 @@
+-package yaml_test
+-
+-import (
+-	. "gopkg.in/check.v1"
+-	"gopkg.in/yaml.v1"
+-	"math"
+-	"reflect"
+-	"time"
+-)
+-
+-var unmarshalIntTest = 123
+-
+-var unmarshalTests = []struct {
+-	data  string
+-	value interface{}
+-}{
+-	{
+-		"",
+-		&struct{}{},
+-	}, {
+-		"{}", &struct{}{},
+-	}, {
+-		"v: hi",
+-		map[string]string{"v": "hi"},
+-	}, {
+-		"v: hi", map[string]interface{}{"v": "hi"},
+-	}, {
+-		"v: true",
+-		map[string]string{"v": "true"},
+-	}, {
+-		"v: true",
+-		map[string]interface{}{"v": true},
+-	}, {
+-		"v: 10",
+-		map[string]interface{}{"v": 10},
+-	}, {
+-		"v: 0b10",
+-		map[string]interface{}{"v": 2},
+-	}, {
+-		"v: 0xA",
+-		map[string]interface{}{"v": 10},
+-	}, {
+-		"v: 4294967296",
+-		map[string]int64{"v": 4294967296},
+-	}, {
+-		"v: 0.1",
+-		map[string]interface{}{"v": 0.1},
+-	}, {
+-		"v: .1",
+-		map[string]interface{}{"v": 0.1},
+-	}, {
+-		"v: .Inf",
+-		map[string]interface{}{"v": math.Inf(+1)},
+-	}, {
+-		"v: -.Inf",
+-		map[string]interface{}{"v": math.Inf(-1)},
+-	}, {
+-		"v: -10",
+-		map[string]interface{}{"v": -10},
+-	}, {
+-		"v: -.1",
+-		map[string]interface{}{"v": -0.1},
+-	},
+-
+-	// Simple values.
+-	{
+-		"123",
+-		&unmarshalIntTest,
+-	},
+-
+-	// Floats from spec
+-	{
+-		"canonical: 6.8523e+5",
+-		map[string]interface{}{"canonical": 6.8523e+5},
+-	}, {
+-		"expo: 685.230_15e+03",
+-		map[string]interface{}{"expo": 685.23015e+03},
+-	}, {
+-		"fixed: 685_230.15",
+-		map[string]interface{}{"fixed": 685230.15},
+-	}, {
+-		"neginf: -.inf",
+-		map[string]interface{}{"neginf": math.Inf(-1)},
+-	}, {
+-		"fixed: 685_230.15",
+-		map[string]float64{"fixed": 685230.15},
+-	},
+-	//{"sexa: 190:20:30.15", map[string]interface{}{"sexa": 0}}, // Unsupported
+-	//{"notanum: .NaN", map[string]interface{}{"notanum": math.NaN()}}, // Equality of NaN fails.
+-
+-	// Bools from spec
+-	{
+-		"canonical: y",
+-		map[string]interface{}{"canonical": true},
+-	}, {
+-		"answer: NO",
+-		map[string]interface{}{"answer": false},
+-	}, {
+-		"logical: True",
+-		map[string]interface{}{"logical": true},
+-	}, {
+-		"option: on",
+-		map[string]interface{}{"option": true},
+-	}, {
+-		"option: on",
+-		map[string]bool{"option": true},
+-	},
+-	// Ints from spec
+-	{
+-		"canonical: 685230",
+-		map[string]interface{}{"canonical": 685230},
+-	}, {
+-		"decimal: +685_230",
+-		map[string]interface{}{"decimal": 685230},
+-	}, {
+-		"octal: 02472256",
+-		map[string]interface{}{"octal": 685230},
+-	}, {
+-		"hexa: 0x_0A_74_AE",
+-		map[string]interface{}{"hexa": 685230},
+-	}, {
+-		"bin: 0b1010_0111_0100_1010_1110",
+-		map[string]interface{}{"bin": 685230},
+-	}, {
+-		"bin: -0b101010",
+-		map[string]interface{}{"bin": -42},
+-	}, {
+-		"decimal: +685_230",
+-		map[string]int{"decimal": 685230},
+-	},
+-
+-	//{"sexa: 190:20:30", map[string]interface{}{"sexa": 0}}, // Unsupported
+-
+-	// Nulls from spec
+-	{
+-		"empty:",
+-		map[string]interface{}{"empty": nil},
+-	}, {
+-		"canonical: ~",
+-		map[string]interface{}{"canonical": nil},
+-	}, {
+-		"english: null",
+-		map[string]interface{}{"english": nil},
+-	}, {
+-		"~: null key",
+-		map[interface{}]string{nil: "null key"},
+-	}, {
+-		"empty:",
+-		map[string]*bool{"empty": nil},
+-	},
+-
+-	// Flow sequence
+-	{
+-		"seq: [A,B]",
+-		map[string]interface{}{"seq": []interface{}{"A", "B"}},
+-	}, {
+-		"seq: [A,B,C,]",
+-		map[string][]string{"seq": []string{"A", "B", "C"}},
+-	}, {
+-		"seq: [A,1,C]",
+-		map[string][]string{"seq": []string{"A", "1", "C"}},
+-	}, {
+-		"seq: [A,1,C]",
+-		map[string][]int{"seq": []int{1}},
+-	}, {
+-		"seq: [A,1,C]",
+-		map[string]interface{}{"seq": []interface{}{"A", 1, "C"}},
+-	},
+-	// Block sequence
+-	{
+-		"seq:\n - A\n - B",
+-		map[string]interface{}{"seq": []interface{}{"A", "B"}},
+-	}, {
+-		"seq:\n - A\n - B\n - C",
+-		map[string][]string{"seq": []string{"A", "B", "C"}},
+-	}, {
+-		"seq:\n - A\n - 1\n - C",
+-		map[string][]string{"seq": []string{"A", "1", "C"}},
+-	}, {
+-		"seq:\n - A\n - 1\n - C",
+-		map[string][]int{"seq": []int{1}},
+-	}, {
+-		"seq:\n - A\n - 1\n - C",
+-		map[string]interface{}{"seq": []interface{}{"A", 1, "C"}},
+-	},
+-
+-	// Literal block scalar
+-	{
+-		"scalar: | # Comment\n\n literal\n\n \ttext\n\n",
+-		map[string]string{"scalar": "\nliteral\n\n\ttext\n"},
+-	},
+-
+-	// Folded block scalar
+-	{
+-		"scalar: > # Comment\n\n folded\n line\n \n next\n line\n  * one\n  * two\n\n last\n line\n\n",
+-		map[string]string{"scalar": "\nfolded line\nnext line\n * one\n * two\n\nlast line\n"},
+-	},
+-
+-	// Map inside interface with no type hints.
+-	{
+-		"a: {b: c}",
+-		map[string]interface{}{"a": map[interface{}]interface{}{"b": "c"}},
+-	},
+-
+-	// Structs and type conversions.
+-	{
+-		"hello: world",
+-		&struct{ Hello string }{"world"},
+-	}, {
+-		"a: {b: c}",
+-		&struct{ A struct{ B string } }{struct{ B string }{"c"}},
+-	}, {
+-		"a: {b: c}",
+-		&struct{ A *struct{ B string } }{&struct{ B string }{"c"}},
+-	}, {
+-		"a: {b: c}",
+-		&struct{ A map[string]string }{map[string]string{"b": "c"}},
+-	}, {
+-		"a: {b: c}",
+-		&struct{ A *map[string]string }{&map[string]string{"b": "c"}},
+-	}, {
+-		"a:",
+-		&struct{ A map[string]string }{},
+-	}, {
+-		"a: 1",
+-		&struct{ A int }{1},
+-	}, {
+-		"a: 1",
+-		&struct{ A float64 }{1},
+-	}, {
+-		"a: 1.0",
+-		&struct{ A int }{1},
+-	}, {
+-		"a: 1.0",
+-		&struct{ A uint }{1},
+-	}, {
+-		"a: [1, 2]",
+-		&struct{ A []int }{[]int{1, 2}},
+-	}, {
+-		"a: 1",
+-		&struct{ B int }{0},
+-	}, {
+-		"a: 1",
+-		&struct {
+-			B int "a"
+-		}{1},
+-	}, {
+-		"a: y",
+-		&struct{ A bool }{true},
+-	},
+-
+-	// Some cross type conversions
+-	{
+-		"v: 42",
+-		map[string]uint{"v": 42},
+-	}, {
+-		"v: -42",
+-		map[string]uint{},
+-	}, {
+-		"v: 4294967296",
+-		map[string]uint64{"v": 4294967296},
+-	}, {
+-		"v: -4294967296",
+-		map[string]uint64{},
+-	},
+-
+-	// Overflow cases.
+-	{
+-		"v: 4294967297",
+-		map[string]int32{},
+-	}, {
+-		"v: 128",
+-		map[string]int8{},
+-	},
+-
+-	// Quoted values.
+-	{
+-		"'1': '\"2\"'",
+-		map[interface{}]interface{}{"1": "\"2\""},
+-	}, {
+-		"v:\n- A\n- 'B\n\n  C'\n",
+-		map[string][]string{"v": []string{"A", "B\nC"}},
+-	},
+-
+-	// Explicit tags.
+-	{
+-		"v: !!float '1.1'",
+-		map[string]interface{}{"v": 1.1},
+-	}, {
+-		"v: !!null ''",
+-		map[string]interface{}{"v": nil},
+-	}, {
+-		"%TAG !y! tag:yaml.org,2002:\n---\nv: !y!int '1'",
+-		map[string]interface{}{"v": 1},
+-	},
+-
+-	// Anchors and aliases.
+-	{
+-		"a: &x 1\nb: &y 2\nc: *x\nd: *y\n",
+-		&struct{ A, B, C, D int }{1, 2, 1, 2},
+-	}, {
+-		"a: &a {c: 1}\nb: *a",
+-		&struct {
+-			A, B struct {
+-				C int
+-			}
+-		}{struct{ C int }{1}, struct{ C int }{1}},
+-	}, {
+-		"a: &a [1, 2]\nb: *a",
+-		&struct{ B []int }{[]int{1, 2}},
+-	},
+-
+-	// Bug #1133337
+-	{
+-		"foo: ''",
+-		map[string]*string{"foo": new(string)},
+-	}, {
+-		"foo: null",
+-		map[string]string{},
+-	},
+-
+-	// Ignored field
+-	{
+-		"a: 1\nb: 2\n",
+-		&struct {
+-			A int
+-			B int "-"
+-		}{1, 0},
+-	},
+-
+-	// Bug #1191981
+-	{
+-		"" +
+-			"%YAML 1.1\n" +
+-			"--- !!str\n" +
+-			`"Generic line break (no glyph)\n\` + "\n" +
+-			` Generic line break (glyphed)\n\` + "\n" +
+-			` Line separator\u2028\` + "\n" +
+-			` Paragraph separator\u2029"` + "\n",
+-		"" +
+-			"Generic line break (no glyph)\n" +
+-			"Generic line break (glyphed)\n" +
+-			"Line separator\u2028Paragraph separator\u2029",
+-	},
+-
+-	// Struct inlining
+-	{
+-		"a: 1\nb: 2\nc: 3\n",
+-		&struct {
+-			A int
+-			C inlineB `yaml:",inline"`
+-		}{1, inlineB{2, inlineC{3}}},
+-	},
+-
+-	// bug 1243827
+-	{
+-		"a: -b_c",
+-		map[string]interface{}{"a": "-b_c"},
+-	},
+-	{
+-		"a: +b_c",
+-		map[string]interface{}{"a": "+b_c"},
+-	},
+-	{
+-		"a: 50cent_of_dollar",
+-		map[string]interface{}{"a": "50cent_of_dollar"},
+-	},
+-
+-	// Duration
+-	{
+-		"a: 3s",
+-		map[string]time.Duration{"a": 3 * time.Second},
+-	},
+-}
+-
+-type inlineB struct {
+-	B       int
+-	inlineC `yaml:",inline"`
+-}
+-
+-type inlineC struct {
+-	C int
+-}
+-
+-func (s *S) TestUnmarshal(c *C) {
+-	for i, item := range unmarshalTests {
+-		t := reflect.ValueOf(item.value).Type()
+-		var value interface{}
+-		switch t.Kind() {
+-		case reflect.Map:
+-			value = reflect.MakeMap(t).Interface()
+-		case reflect.String:
+-			t := reflect.ValueOf(item.value).Type()
+-			v := reflect.New(t)
+-			value = v.Interface()
+-		default:
+-			pt := reflect.ValueOf(item.value).Type()
+-			pv := reflect.New(pt.Elem())
+-			value = pv.Interface()
+-		}
+-		err := yaml.Unmarshal([]byte(item.data), value)
+-		c.Assert(err, IsNil, Commentf("Item #%d", i))
+-		if t.Kind() == reflect.String {
+-			c.Assert(*value.(*string), Equals, item.value, Commentf("Item #%d", i))
+-		} else {
+-			c.Assert(value, DeepEquals, item.value, Commentf("Item #%d", i))
+-		}
+-	}
+-}
+-
+-func (s *S) TestUnmarshalNaN(c *C) {
+-	value := map[string]interface{}{}
+-	err := yaml.Unmarshal([]byte("notanum: .NaN"), &value)
+-	c.Assert(err, IsNil)
+-	c.Assert(math.IsNaN(value["notanum"].(float64)), Equals, true)
+-}
+-
+-var unmarshalErrorTests = []struct {
+-	data, error string
+-}{
+-	{"v: !!float 'error'", "YAML error: Can't decode !!str 'error' as a !!float"},
+-	{"v: [A,", "YAML error: line 1: did not find expected node content"},
+-	{"v:\n- [A,", "YAML error: line 2: did not find expected node content"},
+-	{"a: *b\n", "YAML error: Unknown anchor 'b' referenced"},
+-	{"a: &a\n  b: *a\n", "YAML error: Anchor 'a' value contains itself"},
+-	{"value: -", "YAML error: block sequence entries are not allowed in this context"},
+-}
+-
+-func (s *S) TestUnmarshalErrors(c *C) {
+-	for _, item := range unmarshalErrorTests {
+-		var value interface{}
+-		err := yaml.Unmarshal([]byte(item.data), &value)
+-		c.Assert(err, ErrorMatches, item.error, Commentf("Partial unmarshal: %#v", value))
+-	}
+-}
+-
+-var setterTests = []struct {
+-	data, tag string
+-	value     interface{}
+-}{
+-	{"_: {hi: there}", "!!map", map[interface{}]interface{}{"hi": "there"}},
+-	{"_: [1,A]", "!!seq", []interface{}{1, "A"}},
+-	{"_: 10", "!!int", 10},
+-	{"_: null", "!!null", nil},
+-	{`_: BAR!`, "!!str", "BAR!"},
+-	{`_: "BAR!"`, "!!str", "BAR!"},
+-	{"_: !!foo 'BAR!'", "!!foo", "BAR!"},
+-}
+-
+-var setterResult = map[int]bool{}
+-
+-type typeWithSetter struct {
+-	tag   string
+-	value interface{}
+-}
+-
+-func (o *typeWithSetter) SetYAML(tag string, value interface{}) (ok bool) {
+-	o.tag = tag
+-	o.value = value
+-	if i, ok := value.(int); ok {
+-		if result, ok := setterResult[i]; ok {
+-			return result
+-		}
+-	}
+-	return true
+-}
+-
+-type setterPointerType struct {
+-	Field *typeWithSetter "_"
+-}
+-
+-type setterValueType struct {
+-	Field typeWithSetter "_"
+-}
+-
+-func (s *S) TestUnmarshalWithPointerSetter(c *C) {
+-	for _, item := range setterTests {
+-		obj := &setterPointerType{}
+-		err := yaml.Unmarshal([]byte(item.data), obj)
+-		c.Assert(err, IsNil)
+-		c.Assert(obj.Field, NotNil, Commentf("Pointer not initialized (%#v)", item.value))
+-		c.Assert(obj.Field.tag, Equals, item.tag)
+-		c.Assert(obj.Field.value, DeepEquals, item.value)
+-	}
+-}
+-
+-func (s *S) TestUnmarshalWithValueSetter(c *C) {
+-	for _, item := range setterTests {
+-		obj := &setterValueType{}
+-		err := yaml.Unmarshal([]byte(item.data), obj)
+-		c.Assert(err, IsNil)
+-		c.Assert(obj.Field, NotNil, Commentf("Pointer not initialized (%#v)", item.value))
+-		c.Assert(obj.Field.tag, Equals, item.tag)
+-		c.Assert(obj.Field.value, DeepEquals, item.value)
+-	}
+-}
+-
+-func (s *S) TestUnmarshalWholeDocumentWithSetter(c *C) {
+-	obj := &typeWithSetter{}
+-	err := yaml.Unmarshal([]byte(setterTests[0].data), obj)
+-	c.Assert(err, IsNil)
+-	c.Assert(obj.tag, Equals, setterTests[0].tag)
+-	value, ok := obj.value.(map[interface{}]interface{})
+-	c.Assert(ok, Equals, true)
+-	c.Assert(value["_"], DeepEquals, setterTests[0].value)
+-}
+-
+-func (s *S) TestUnmarshalWithFalseSetterIgnoresValue(c *C) {
+-	setterResult[2] = false
+-	setterResult[4] = false
+-	defer func() {
+-		delete(setterResult, 2)
+-		delete(setterResult, 4)
+-	}()
+-
+-	m := map[string]*typeWithSetter{}
+-	data := `{abc: 1, def: 2, ghi: 3, jkl: 4}`
+-	err := yaml.Unmarshal([]byte(data), m)
+-	c.Assert(err, IsNil)
+-	c.Assert(m["abc"], NotNil)
+-	c.Assert(m["def"], IsNil)
+-	c.Assert(m["ghi"], NotNil)
+-	c.Assert(m["jkl"], IsNil)
+-
+-	c.Assert(m["abc"].value, Equals, 1)
+-	c.Assert(m["ghi"].value, Equals, 3)
+-}
+-
+-// From http://yaml.org/type/merge.html
+-var mergeTests = `
+-anchors:
+-  - &CENTER { "x": 1, "y": 2 }
+-  - &LEFT   { "x": 0, "y": 2 }
+-  - &BIG    { "r": 10 }
+-  - &SMALL  { "r": 1 }
+-
+-# All the following maps are equal:
+-
+-plain:
+-  # Explicit keys
+-  "x": 1
+-  "y": 2
+-  "r": 10
+-  label: center/big
+-
+-mergeOne:
+-  # Merge one map
+-  << : *CENTER
+-  "r": 10
+-  label: center/big
+-
+-mergeMultiple:
+-  # Merge multiple maps
+-  << : [ *CENTER, *BIG ]
+-  label: center/big
+-
+-override:
+-  # Override
+-  << : [ *BIG, *LEFT, *SMALL ]
+-  "x": 1
+-  label: center/big
+-
+-shortTag:
+-  # Explicit short merge tag
+-  !!merge "<<" : [ *CENTER, *BIG ]
+-  label: center/big
+-
+-longTag:
+-  # Explicit merge long tag
+-  !<tag:yaml.org,2002:merge> "<<" : [ *CENTER, *BIG ]
+-  label: center/big
+-
+-inlineMap:
+-  # Inlined map 
+-  << : {"x": 1, "y": 2, "r": 10}
+-  label: center/big
+-
+-inlineSequenceMap:
+-  # Inlined map in sequence
+-  << : [ *CENTER, {"r": 10} ]
+-  label: center/big
+-`
+-
+-func (s *S) TestMerge(c *C) {
+-	var want = map[interface{}]interface{}{
+-		"x":     1,
+-		"y":     2,
+-		"r":     10,
+-		"label": "center/big",
+-	}
+-
+-	var m map[string]interface{}
+-	err := yaml.Unmarshal([]byte(mergeTests), &m)
+-	c.Assert(err, IsNil)
+-	for name, test := range m {
+-		if name == "anchors" {
+-			continue
+-		}
+-		c.Assert(test, DeepEquals, want, Commentf("test %q failed", name))
+-	}
+-}
+-
+-func (s *S) TestMergeStruct(c *C) {
+-	type Data struct {
+-		X, Y, R int
+-		Label   string
+-	}
+-	want := Data{1, 2, 10, "center/big"}
+-
+-	var m map[string]Data
+-	err := yaml.Unmarshal([]byte(mergeTests), &m)
+-	c.Assert(err, IsNil)
+-	for name, test := range m {
+-		if name == "anchors" {
+-			continue
+-		}
+-		c.Assert(test, Equals, want, Commentf("test %q failed", name))
+-	}
+-}
+-
+-//var data []byte
+-//func init() {
+-//	var err error
+-//	data, err = ioutil.ReadFile("/tmp/file.yaml")
+-//	if err != nil {
+-//		panic(err)
+-//	}
+-//}
+-//
+-//func (s *S) BenchmarkUnmarshal(c *C) {
+-//	var err error
+-//	for i := 0; i < c.N; i++ {
+-//		var v map[string]interface{}
+-//		err = yaml.Unmarshal(data, &v)
+-//	}
+-//	if err != nil {
+-//		panic(err)
+-//	}
+-//}
+-//
+-//func (s *S) BenchmarkMarshal(c *C) {
+-//	var v map[string]interface{}
+-//	yaml.Unmarshal(data, &v)
+-//	c.ResetTimer()
+-//	for i := 0; i < c.N; i++ {
+-//		yaml.Marshal(&v)
+-//	}
+-//}
+diff --git a/Godeps/_workspace/src/gopkg.in/v1/yaml/emitterc.go b/Godeps/_workspace/src/gopkg.in/v1/yaml/emitterc.go
+deleted file mode 100644
+index 542ffd2..0000000
+--- a/Godeps/_workspace/src/gopkg.in/v1/yaml/emitterc.go
++++ /dev/null
+@@ -1,1682 +0,0 @@
+-package yaml
+-
+-import (
+-	"bytes"
+-)
+-
+-// Flush the buffer if needed.
+-func flush(emitter *yaml_emitter_t) bool {
+-	if emitter.buffer_pos+5 >= len(emitter.buffer) {
+-		return yaml_emitter_flush(emitter)
+-	}
+-	return true
+-}
+-
+-// Put a character to the output buffer.
+-func put(emitter *yaml_emitter_t, value byte) bool {
+-	if emitter.buffer_pos+5 >= len(emitter.buffer) && !yaml_emitter_flush(emitter) {
+-		return false
+-	}
+-	emitter.buffer[emitter.buffer_pos] = value
+-	emitter.buffer_pos++
+-	emitter.column++
+-	return true
+-}
+-
+-// Put a line break to the output buffer.
+-func put_break(emitter *yaml_emitter_t) bool {
+-	if emitter.buffer_pos+5 >= len(emitter.buffer) && !yaml_emitter_flush(emitter) {
+-		return false
+-	}
+-	switch emitter.line_break {
+-	case yaml_CR_BREAK:
+-		emitter.buffer[emitter.buffer_pos] = '\r'
+-		emitter.buffer_pos += 1
+-	case yaml_LN_BREAK:
+-		emitter.buffer[emitter.buffer_pos] = '\n'
+-		emitter.buffer_pos += 1
+-	case yaml_CRLN_BREAK:
+-		emitter.buffer[emitter.buffer_pos+0] = '\r'
+-		emitter.buffer[emitter.buffer_pos+1] = '\n'
+-		emitter.buffer_pos += 2
+-	default:
+-		panic("unknown line break setting")
+-	}
+-	emitter.column = 0
+-	emitter.line++
+-	return true
+-}
+-
+-// Copy a character from a string into buffer.
+-func write(emitter *yaml_emitter_t, s []byte, i *int) bool {
+-	if emitter.buffer_pos+5 >= len(emitter.buffer) && !yaml_emitter_flush(emitter) {
+-		return false
+-	}
+-	p := emitter.buffer_pos
+-	w := width(s[*i])
+-	switch w {
+-	case 4:
+-		emitter.buffer[p+3] = s[*i+3]
+-		fallthrough
+-	case 3:
+-		emitter.buffer[p+2] = s[*i+2]
+-		fallthrough
+-	case 2:
+-		emitter.buffer[p+1] = s[*i+1]
+-		fallthrough
+-	case 1:
+-		emitter.buffer[p+0] = s[*i+0]
+-	default:
+-		panic("unknown character width")
+-	}
+-	emitter.column++
+-	emitter.buffer_pos += w
+-	*i += w
+-	return true
+-}
+-
+-// Write a whole string into buffer.
+-func write_all(emitter *yaml_emitter_t, s []byte) bool {
+-	for i := 0; i < len(s); {
+-		if !write(emitter, s, &i) {
+-			return false
+-		}
+-	}
+-	return true
+-}
+-
+-// Copy a line break character from a string into buffer.
+-func write_break(emitter *yaml_emitter_t, s []byte, i *int) bool {
+-	if s[*i] == '\n' {
+-		if !put_break(emitter) {
+-			return false
+-		}
+-		*i++
+-	} else {
+-		if !write(emitter, s, i) {
+-			return false
+-		}
+-		emitter.column = 0
+-		emitter.line++
+-	}
+-	return true
+-}
+-
+-// Set an emitter error and return false.
+-func yaml_emitter_set_emitter_error(emitter *yaml_emitter_t, problem string) bool {
+-	emitter.error = yaml_EMITTER_ERROR
+-	emitter.problem = problem
+-	return false
+-}
+-
+-// Emit an event.
+-func yaml_emitter_emit(emitter *yaml_emitter_t, event *yaml_event_t) bool {
+-	emitter.events = append(emitter.events, *event)
+-	for !yaml_emitter_need_more_events(emitter) {
+-		event := &emitter.events[emitter.events_head]
+-		if !yaml_emitter_analyze_event(emitter, event) {
+-			return false
+-		}
+-		if !yaml_emitter_state_machine(emitter, event) {
+-			return false
+-		}
+-		yaml_event_delete(event)
+-		emitter.events_head++
+-	}
+-	return true
+-}
+-
+-// Check if we need to accumulate more events before emitting.
+-//
+-// We accumulate extra
+-//  - 1 event for DOCUMENT-START
+-//  - 2 events for SEQUENCE-START
+-//  - 3 events for MAPPING-START
+-//
+-func yaml_emitter_need_more_events(emitter *yaml_emitter_t) bool {
+-	if emitter.events_head == len(emitter.events) {
+-		return true
+-	}
+-	var accumulate int
+-	switch emitter.events[emitter.events_head].typ {
+-	case yaml_DOCUMENT_START_EVENT:
+-		accumulate = 1
+-		break
+-	case yaml_SEQUENCE_START_EVENT:
+-		accumulate = 2
+-		break
+-	case yaml_MAPPING_START_EVENT:
+-		accumulate = 3
+-		break
+-	default:
+-		return false
+-	}
+-	if len(emitter.events)-emitter.events_head > accumulate {
+-		return false
+-	}
+-	var level int
+-	for i := emitter.events_head; i < len(emitter.events); i++ {
+-		switch emitter.events[i].typ {
+-		case yaml_STREAM_START_EVENT, yaml_DOCUMENT_START_EVENT, yaml_SEQUENCE_START_EVENT, yaml_MAPPING_START_EVENT:
+-			level++
+-		case yaml_STREAM_END_EVENT, yaml_DOCUMENT_END_EVENT, yaml_SEQUENCE_END_EVENT, yaml_MAPPING_END_EVENT:
+-			level--
+-		}
+-		if level == 0 {
+-			return false
+-		}
+-	}
+-	return true
+-}
+-
+-// Append a directive to the directives stack.
+-func yaml_emitter_append_tag_directive(emitter *yaml_emitter_t, value *yaml_tag_directive_t, allow_duplicates bool) bool {
+-	for i := 0; i < len(emitter.tag_directives); i++ {
+-		if bytes.Equal(value.handle, emitter.tag_directives[i].handle) {
+-			if allow_duplicates {
+-				return true
+-			}
+-			return yaml_emitter_set_emitter_error(emitter, "duplicate %TAG directive")
+-		}
+-	}
+-
+-	// [Go] Do we actually need to copy this given garbage collection
+-	// and the lack of deallocating destructors?
+-	tag_copy := yaml_tag_directive_t{
+-		handle: make([]byte, len(value.handle)),
+-		prefix: make([]byte, len(value.prefix)),
+-	}
+-	copy(tag_copy.handle, value.handle)
+-	copy(tag_copy.prefix, value.prefix)
+-	emitter.tag_directives = append(emitter.tag_directives, tag_copy)
+-	return true
+-}
+-
+-// Increase the indentation level.
+-func yaml_emitter_increase_indent(emitter *yaml_emitter_t, flow, indentless bool) bool {
+-	emitter.indents = append(emitter.indents, emitter.indent)
+-	if emitter.indent < 0 {
+-		if flow {
+-			emitter.indent = emitter.best_indent
+-		} else {
+-			emitter.indent = 0
+-		}
+-	} else if !indentless {
+-		emitter.indent += emitter.best_indent
+-	}
+-	return true
+-}
+-
+-// State dispatcher.
+-func yaml_emitter_state_machine(emitter *yaml_emitter_t, event *yaml_event_t) bool {
+-	switch emitter.state {
+-	default:
+-	case yaml_EMIT_STREAM_START_STATE:
+-		return yaml_emitter_emit_stream_start(emitter, event)
+-
+-	case yaml_EMIT_FIRST_DOCUMENT_START_STATE:
+-		return yaml_emitter_emit_document_start(emitter, event, true)
+-
+-	case yaml_EMIT_DOCUMENT_START_STATE:
+-		return yaml_emitter_emit_document_start(emitter, event, false)
+-
+-	case yaml_EMIT_DOCUMENT_CONTENT_STATE:
+-		return yaml_emitter_emit_document_content(emitter, event)
+-
+-	case yaml_EMIT_DOCUMENT_END_STATE:
+-		return yaml_emitter_emit_document_end(emitter, event)
+-
+-	case yaml_EMIT_FLOW_SEQUENCE_FIRST_ITEM_STATE:
+-		return yaml_emitter_emit_flow_sequence_item(emitter, event, true)
+-
+-	case yaml_EMIT_FLOW_SEQUENCE_ITEM_STATE:
+-		return yaml_emitter_emit_flow_sequence_item(emitter, event, false)
+-
+-	case yaml_EMIT_FLOW_MAPPING_FIRST_KEY_STATE:
+-		return yaml_emitter_emit_flow_mapping_key(emitter, event, true)
+-
+-	case yaml_EMIT_FLOW_MAPPING_KEY_STATE:
+-		return yaml_emitter_emit_flow_mapping_key(emitter, event, false)
+-
+-	case yaml_EMIT_FLOW_MAPPING_SIMPLE_VALUE_STATE:
+-		return yaml_emitter_emit_flow_mapping_value(emitter, event, true)
+-
+-	case yaml_EMIT_FLOW_MAPPING_VALUE_STATE:
+-		return yaml_emitter_emit_flow_mapping_value(emitter, event, false)
+-
+-	case yaml_EMIT_BLOCK_SEQUENCE_FIRST_ITEM_STATE:
+-		return yaml_emitter_emit_block_sequence_item(emitter, event, true)
+-
+-	case yaml_EMIT_BLOCK_SEQUENCE_ITEM_STATE:
+-		return yaml_emitter_emit_block_sequence_item(emitter, event, false)
+-
+-	case yaml_EMIT_BLOCK_MAPPING_FIRST_KEY_STATE:
+-		return yaml_emitter_emit_block_mapping_key(emitter, event, true)
+-
+-	case yaml_EMIT_BLOCK_MAPPING_KEY_STATE:
+-		return yaml_emitter_emit_block_mapping_key(emitter, event, false)
+-
+-	case yaml_EMIT_BLOCK_MAPPING_SIMPLE_VALUE_STATE:
+-		return yaml_emitter_emit_block_mapping_value(emitter, event, true)
+-
+-	case yaml_EMIT_BLOCK_MAPPING_VALUE_STATE:
+-		return yaml_emitter_emit_block_mapping_value(emitter, event, false)
+-
+-	case yaml_EMIT_END_STATE:
+-		return yaml_emitter_set_emitter_error(emitter, "expected nothing after STREAM-END")
+-	}
+-	panic("invalid emitter state")
+-}
+-
+-// Expect STREAM-START.
+-func yaml_emitter_emit_stream_start(emitter *yaml_emitter_t, event *yaml_event_t) bool {
+-	if event.typ != yaml_STREAM_START_EVENT {
+-		return yaml_emitter_set_emitter_error(emitter, "expected STREAM-START")
+-	}
+-	if emitter.encoding == yaml_ANY_ENCODING {
+-		emitter.encoding = event.encoding
+-		if emitter.encoding == yaml_ANY_ENCODING {
+-			emitter.encoding = yaml_UTF8_ENCODING
+-		}
+-	}
+-	if emitter.best_indent < 2 || emitter.best_indent > 9 {
+-		emitter.best_indent = 2
+-	}
+-	if emitter.best_width >= 0 && emitter.best_width <= emitter.best_indent*2 {
+-		emitter.best_width = 80
+-	}
+-	if emitter.best_width < 0 {
+-		emitter.best_width = 1<<31 - 1
+-	}
+-	if emitter.line_break == yaml_ANY_BREAK {
+-		emitter.line_break = yaml_LN_BREAK
+-	}
+-
+-	emitter.indent = -1
+-	emitter.line = 0
+-	emitter.column = 0
+-	emitter.whitespace = true
+-	emitter.indention = true
+-
+-	if emitter.encoding != yaml_UTF8_ENCODING {
+-		if !yaml_emitter_write_bom(emitter) {
+-			return false
+-		}
+-	}
+-	emitter.state = yaml_EMIT_FIRST_DOCUMENT_START_STATE
+-	return true
+-}
+-
+-// Expect DOCUMENT-START or STREAM-END.
+-func yaml_emitter_emit_document_start(emitter *yaml_emitter_t, event *yaml_event_t, first bool) bool {
+-
+-	if event.typ == yaml_DOCUMENT_START_EVENT {
+-
+-		if event.version_directive != nil {
+-			if !yaml_emitter_analyze_version_directive(emitter, event.version_directive) {
+-				return false
+-			}
+-		}
+-
+-		for i := 0; i < len(event.tag_directives); i++ {
+-			tag_directive := &event.tag_directives[i]
+-			if !yaml_emitter_analyze_tag_directive(emitter, tag_directive) {
+-				return false
+-			}
+-			if !yaml_emitter_append_tag_directive(emitter, tag_directive, false) {
+-				return false
+-			}
+-		}
+-
+-		for i := 0; i < len(default_tag_directives); i++ {
+-			tag_directive := &default_tag_directives[i]
+-			if !yaml_emitter_append_tag_directive(emitter, tag_directive, true) {
+-				return false
+-			}
+-		}
+-
+-		implicit := event.implicit
+-		if !first || emitter.canonical {
+-			implicit = false
+-		}
+-
+-		if emitter.open_ended && (event.version_directive != nil || len(event.tag_directives) > 0) {
+-			if !yaml_emitter_write_indicator(emitter, []byte("..."), true, false, false) {
+-				return false
+-			}
+-			if !yaml_emitter_write_indent(emitter) {
+-				return false
+-			}
+-		}
+-
+-		if event.version_directive != nil {
+-			implicit = false
+-			if !yaml_emitter_write_indicator(emitter, []byte("%YAML"), true, false, false) {
+-				return false
+-			}
+-			if !yaml_emitter_write_indicator(emitter, []byte("1.1"), true, false, false) {
+-				return false
+-			}
+-			if !yaml_emitter_write_indent(emitter) {
+-				return false
+-			}
+-		}
+-
+-		if len(event.tag_directives) > 0 {
+-			implicit = false
+-			for i := 0; i < len(event.tag_directives); i++ {
+-				tag_directive := &event.tag_directives[i]
+-				if !yaml_emitter_write_indicator(emitter, []byte("%TAG"), true, false, false) {
+-					return false
+-				}
+-				if !yaml_emitter_write_tag_handle(emitter, tag_directive.handle) {
+-					return false
+-				}
+-				if !yaml_emitter_write_tag_content(emitter, tag_directive.prefix, true) {
+-					return false
+-				}
+-				if !yaml_emitter_write_indent(emitter) {
+-					return false
+-				}
+-			}
+-		}
+-
+-		if yaml_emitter_check_empty_document(emitter) {
+-			implicit = false
+-		}
+-		if !implicit {
+-			if !yaml_emitter_write_indent(emitter) {
+-				return false
+-			}
+-			if !yaml_emitter_write_indicator(emitter, []byte("---"), true, false, false) {
+-				return false
+-			}
+-			if emitter.canonical {
+-				if !yaml_emitter_write_indent(emitter) {
+-					return false
+-				}
+-			}
+-		}
+-
+-		emitter.state = yaml_EMIT_DOCUMENT_CONTENT_STATE
+-		return true
+-	}
+-
+-	if event.typ == yaml_STREAM_END_EVENT {
+-		if emitter.open_ended {
+-			if !yaml_emitter_write_indicator(emitter, []byte("..."), true, false, false) {
+-				return false
+-			}
+-			if !yaml_emitter_write_indent(emitter) {
+-				return false
+-			}
+-		}
+-		if !yaml_emitter_flush(emitter) {
+-			return false
+-		}
+-		emitter.state = yaml_EMIT_END_STATE
+-		return true
+-	}
+-
+-	return yaml_emitter_set_emitter_error(emitter, "expected DOCUMENT-START or STREAM-END")
+-}
+-
+-// Expect the root node.
+-func yaml_emitter_emit_document_content(emitter *yaml_emitter_t, event *yaml_event_t) bool {
+-	emitter.states = append(emitter.states, yaml_EMIT_DOCUMENT_END_STATE)
+-	return yaml_emitter_emit_node(emitter, event, true, false, false, false)
+-}
+-
+-// Expect DOCUMENT-END.
+-func yaml_emitter_emit_document_end(emitter *yaml_emitter_t, event *yaml_event_t) bool {
+-	if event.typ != yaml_DOCUMENT_END_EVENT {
+-		return yaml_emitter_set_emitter_error(emitter, "expected DOCUMENT-END")
+-	}
+-	if !yaml_emitter_write_indent(emitter) {
+-		return false
+-	}
+-	if !event.implicit {
+-		// [Go] Allocate the slice elsewhere.
+-		if !yaml_emitter_write_indicator(emitter, []byte("..."), true, false, false) {
+-			return false
+-		}
+-		if !yaml_emitter_write_indent(emitter) {
+-			return false
+-		}
+-	}
+-	if !yaml_emitter_flush(emitter) {
+-		return false
+-	}
+-	emitter.state = yaml_EMIT_DOCUMENT_START_STATE
+-	emitter.tag_directives = emitter.tag_directives[:0]
+-	return true
+-}
+-
+-// Expect a flow item node.
+-func yaml_emitter_emit_flow_sequence_item(emitter *yaml_emitter_t, event *yaml_event_t, first bool) bool {
+-	if first {
+-		if !yaml_emitter_write_indicator(emitter, []byte{'['}, true, true, false) {
+-			return false
+-		}
+-		if !yaml_emitter_increase_indent(emitter, true, false) {
+-			return false
+-		}
+-		emitter.flow_level++
+-	}
+-
+-	if event.typ == yaml_SEQUENCE_END_EVENT {
+-		emitter.flow_level--
+-		emitter.indent = emitter.indents[len(emitter.indents)-1]
+-		emitter.indents = emitter.indents[:len(emitter.indents)-1]
+-		if emitter.canonical && !first {
+-			if !yaml_emitter_write_indicator(emitter, []byte{','}, false, false, false) {
+-				return false
+-			}
+-			if !yaml_emitter_write_indent(emitter) {
+-				return false
+-			}
+-		}
+-		if !yaml_emitter_write_indicator(emitter, []byte{']'}, false, false, false) {
+-			return false
+-		}
+-		emitter.state = emitter.states[len(emitter.states)-1]
+-		emitter.states = emitter.states[:len(emitter.states)-1]
+-
+-		return true
+-	}
+-
+-	if !first {
+-		if !yaml_emitter_write_indicator(emitter, []byte{','}, false, false, false) {
+-			return false
+-		}
+-	}
+-
+-	if emitter.canonical || emitter.column > emitter.best_width {
+-		if !yaml_emitter_write_indent(emitter) {
+-			return false
+-		}
+-	}
+-	emitter.states = append(emitter.states, yaml_EMIT_FLOW_SEQUENCE_ITEM_STATE)
+-	return yaml_emitter_emit_node(emitter, event, false, true, false, false)
+-}
+-
+-// Expect a flow key node.
+-func yaml_emitter_emit_flow_mapping_key(emitter *yaml_emitter_t, event *yaml_event_t, first bool) bool {
+-	if first {
+-		if !yaml_emitter_write_indicator(emitter, []byte{'{'}, true, true, false) {
+-			return false
+-		}
+-		if !yaml_emitter_increase_indent(emitter, true, false) {
+-			return false
+-		}
+-		emitter.flow_level++
+-	}
+-
+-	if event.typ == yaml_MAPPING_END_EVENT {
+-		emitter.flow_level--
+-		emitter.indent = emitter.indents[len(emitter.indents)-1]
+-		emitter.indents = emitter.indents[:len(emitter.indents)-1]
+-		if emitter.canonical && !first {
+-			if !yaml_emitter_write_indicator(emitter, []byte{','}, false, false, false) {
+-				return false
+-			}
+-			if !yaml_emitter_write_indent(emitter) {
+-				return false
+-			}
+-		}
+-		if !yaml_emitter_write_indicator(emitter, []byte{'}'}, false, false, false) {
+-			return false
+-		}
+-		emitter.state = emitter.states[len(emitter.states)-1]
+-		emitter.states = emitter.states[:len(emitter.states)-1]
+-		return true
+-	}
+-
+-	if !first {
+-		if !yaml_emitter_write_indicator(emitter, []byte{','}, false, false, false) {
+-			return false
+-		}
+-	}
+-	if emitter.canonical || emitter.column > emitter.best_width {
+-		if !yaml_emitter_write_indent(emitter) {
+-			return false
+-		}
+-	}
+-
+-	if !emitter.canonical && yaml_emitter_check_simple_key(emitter) {
+-		emitter.states = append(emitter.states, yaml_EMIT_FLOW_MAPPING_SIMPLE_VALUE_STATE)
+-		return yaml_emitter_emit_node(emitter, event, false, false, true, true)
+-	}
+-	if !yaml_emitter_write_indicator(emitter, []byte{'?'}, true, false, false) {
+-		return false
+-	}
+-	emitter.states = append(emitter.states, yaml_EMIT_FLOW_MAPPING_VALUE_STATE)
+-	return yaml_emitter_emit_node(emitter, event, false, false, true, false)
+-}
+-
+-// Expect a flow value node.
+-func yaml_emitter_emit_flow_mapping_value(emitter *yaml_emitter_t, event *yaml_event_t, simple bool) bool {
+-	if simple {
+-		if !yaml_emitter_write_indicator(emitter, []byte{':'}, false, false, false) {
+-			return false
+-		}
+-	} else {
+-		if emitter.canonical || emitter.column > emitter.best_width {
+-			if !yaml_emitter_write_indent(emitter) {
+-				return false
+-			}
+-		}
+-		if !yaml_emitter_write_indicator(emitter, []byte{':'}, true, false, false) {
+-			return false
+-		}
+-	}
+-	emitter.states = append(emitter.states, yaml_EMIT_FLOW_MAPPING_KEY_STATE)
+-	return yaml_emitter_emit_node(emitter, event, false, false, true, false)
+-}
+-
+-// Expect a block item node.
+-func yaml_emitter_emit_block_sequence_item(emitter *yaml_emitter_t, event *yaml_event_t, first bool) bool {
+-	if first {
+-		if !yaml_emitter_increase_indent(emitter, false, emitter.mapping_context && !emitter.indention) {
+-			return false
+-		}
+-	}
+-	if event.typ == yaml_SEQUENCE_END_EVENT {
+-		emitter.indent = emitter.indents[len(emitter.indents)-1]
+-		emitter.indents = emitter.indents[:len(emitter.indents)-1]
+-		emitter.state = emitter.states[len(emitter.states)-1]
+-		emitter.states = emitter.states[:len(emitter.states)-1]
+-		return true
+-	}
+-	if !yaml_emitter_write_indent(emitter) {
+-		return false
+-	}
+-	if !yaml_emitter_write_indicator(emitter, []byte{'-'}, true, false, true) {
+-		return false
+-	}
+-	emitter.states = append(emitter.states, yaml_EMIT_BLOCK_SEQUENCE_ITEM_STATE)
+-	return yaml_emitter_emit_node(emitter, event, false, true, false, false)
+-}
+-
+-// Expect a block key node.
+-func yaml_emitter_emit_block_mapping_key(emitter *yaml_emitter_t, event *yaml_event_t, first bool) bool {
+-	if first {
+-		if !yaml_emitter_increase_indent(emitter, false, false) {
+-			return false
+-		}
+-	}
+-	if event.typ == yaml_MAPPING_END_EVENT {
+-		emitter.indent = emitter.indents[len(emitter.indents)-1]
+-		emitter.indents = emitter.indents[:len(emitter.indents)-1]
+-		emitter.state = emitter.states[len(emitter.states)-1]
+-		emitter.states = emitter.states[:len(emitter.states)-1]
+-		return true
+-	}
+-	if !yaml_emitter_write_indent(emitter) {
+-		return false
+-	}
+-	if yaml_emitter_check_simple_key(emitter) {
+-		emitter.states = append(emitter.states, yaml_EMIT_BLOCK_MAPPING_SIMPLE_VALUE_STATE)
+-		return yaml_emitter_emit_node(emitter, event, false, false, true, true)
+-	}
+-	if !yaml_emitter_write_indicator(emitter, []byte{'?'}, true, false, true) {
+-		return false
+-	}
+-	emitter.states = append(emitter.states, yaml_EMIT_BLOCK_MAPPING_VALUE_STATE)
+-	return yaml_emitter_emit_node(emitter, event, false, false, true, false)
+-}
+-
+-// Expect a block value node.
+-func yaml_emitter_emit_block_mapping_value(emitter *yaml_emitter_t, event *yaml_event_t, simple bool) bool {
+-	if simple {
+-		if !yaml_emitter_write_indicator(emitter, []byte{':'}, false, false, false) {
+-			return false
+-		}
+-	} else {
+-		if !yaml_emitter_write_indent(emitter) {
+-			return false
+-		}
+-		if !yaml_emitter_write_indicator(emitter, []byte{':'}, true, false, true) {
+-			return false
+-		}
+-	}
+-	emitter.states = append(emitter.states, yaml_EMIT_BLOCK_MAPPING_KEY_STATE)
+-	return yaml_emitter_emit_node(emitter, event, false, false, true, false)
+-}
+-
+-// Expect a node.
+-func yaml_emitter_emit_node(emitter *yaml_emitter_t, event *yaml_event_t,
+-	root bool, sequence bool, mapping bool, simple_key bool) bool {
+-
+-	emitter.root_context = root
+-	emitter.sequence_context = sequence
+-	emitter.mapping_context = mapping
+-	emitter.simple_key_context = simple_key
+-
+-	switch event.typ {
+-	case yaml_ALIAS_EVENT:
+-		return yaml_emitter_emit_alias(emitter, event)
+-	case yaml_SCALAR_EVENT:
+-		return yaml_emitter_emit_scalar(emitter, event)
+-	case yaml_SEQUENCE_START_EVENT:
+-		return yaml_emitter_emit_sequence_start(emitter, event)
+-	case yaml_MAPPING_START_EVENT:
+-		return yaml_emitter_emit_mapping_start(emitter, event)
+-	default:
+-		return yaml_emitter_set_emitter_error(emitter,
+-			"expected SCALAR, SEQUENCE-START, MAPPING-START, or ALIAS")
+-	}
+-	return false
+-}
+-
+-// Expect ALIAS.
+-func yaml_emitter_emit_alias(emitter *yaml_emitter_t, event *yaml_event_t) bool {
+-	if !yaml_emitter_process_anchor(emitter) {
+-		return false
+-	}
+-	emitter.state = emitter.states[len(emitter.states)-1]
+-	emitter.states = emitter.states[:len(emitter.states)-1]
+-	return true
+-}
+-
+-// Expect SCALAR.
+-func yaml_emitter_emit_scalar(emitter *yaml_emitter_t, event *yaml_event_t) bool {
+-	if !yaml_emitter_select_scalar_style(emitter, event) {
+-		return false
+-	}
+-	if !yaml_emitter_process_anchor(emitter) {
+-		return false
+-	}
+-	if !yaml_emitter_process_tag(emitter) {
+-		return false
+-	}
+-	if !yaml_emitter_increase_indent(emitter, true, false) {
+-		return false
+-	}
+-	if !yaml_emitter_process_scalar(emitter) {
+-		return false
+-	}
+-	emitter.indent = emitter.indents[len(emitter.indents)-1]
+-	emitter.indents = emitter.indents[:len(emitter.indents)-1]
+-	emitter.state = emitter.states[len(emitter.states)-1]
+-	emitter.states = emitter.states[:len(emitter.states)-1]
+-	return true
+-}
+-
+-// Expect SEQUENCE-START.
+-func yaml_emitter_emit_sequence_start(emitter *yaml_emitter_t, event *yaml_event_t) bool {
+-	if !yaml_emitter_process_anchor(emitter) {
+-		return false
+-	}
+-	if !yaml_emitter_process_tag(emitter) {
+-		return false
+-	}
+-	if emitter.flow_level > 0 || emitter.canonical || event.sequence_style() == yaml_FLOW_SEQUENCE_STYLE ||
+-		yaml_emitter_check_empty_sequence(emitter) {
+-		emitter.state = yaml_EMIT_FLOW_SEQUENCE_FIRST_ITEM_STATE
+-	} else {
+-		emitter.state = yaml_EMIT_BLOCK_SEQUENCE_FIRST_ITEM_STATE
+-	}
+-	return true
+-}
+-
+-// Expect MAPPING-START.
+-func yaml_emitter_emit_mapping_start(emitter *yaml_emitter_t, event *yaml_event_t) bool {
+-	if !yaml_emitter_process_anchor(emitter) {
+-		return false
+-	}
+-	if !yaml_emitter_process_tag(emitter) {
+-		return false
+-	}
+-	if emitter.flow_level > 0 || emitter.canonical || event.mapping_style() == yaml_FLOW_MAPPING_STYLE ||
+-		yaml_emitter_check_empty_mapping(emitter) {
+-		emitter.state = yaml_EMIT_FLOW_MAPPING_FIRST_KEY_STATE
+-	} else {
+-		emitter.state = yaml_EMIT_BLOCK_MAPPING_FIRST_KEY_STATE
+-	}
+-	return true
+-}
+-
+-// Check if the document content is an empty scalar.
+-func yaml_emitter_check_empty_document(emitter *yaml_emitter_t) bool {
+-	return false // [Go] Huh?
+-}
+-
+-// Check if the next events represent an empty sequence.
+-func yaml_emitter_check_empty_sequence(emitter *yaml_emitter_t) bool {
+-	if len(emitter.events)-emitter.events_head < 2 {
+-		return false
+-	}
+-	return emitter.events[emitter.events_head].typ == yaml_SEQUENCE_START_EVENT &&
+-		emitter.events[emitter.events_head+1].typ == yaml_SEQUENCE_END_EVENT
+-}
+-
+-// Check if the next events represent an empty mapping.
+-func yaml_emitter_check_empty_mapping(emitter *yaml_emitter_t) bool {
+-	if len(emitter.events)-emitter.events_head < 2 {
+-		return false
+-	}
+-	return emitter.events[emitter.events_head].typ == yaml_MAPPING_START_EVENT &&
+-		emitter.events[emitter.events_head+1].typ == yaml_MAPPING_END_EVENT
+-}
+-
+-// Check if the next node can be expressed as a simple key.
+-func yaml_emitter_check_simple_key(emitter *yaml_emitter_t) bool {
+-	length := 0
+-	switch emitter.events[emitter.events_head].typ {
+-	case yaml_ALIAS_EVENT:
+-		length += len(emitter.anchor_data.anchor)
+-	case yaml_SCALAR_EVENT:
+-		if emitter.scalar_data.multiline {
+-			return false
+-		}
+-		length += len(emitter.anchor_data.anchor) +
+-			len(emitter.tag_data.handle) +
+-			len(emitter.tag_data.suffix) +
+-			len(emitter.scalar_data.value)
+-	case yaml_SEQUENCE_START_EVENT:
+-		if !yaml_emitter_check_empty_sequence(emitter) {
+-			return false
+-		}
+-		length += len(emitter.anchor_data.anchor) +
+-			len(emitter.tag_data.handle) +
+-			len(emitter.tag_data.suffix)
+-	case yaml_MAPPING_START_EVENT:
+-		if !yaml_emitter_check_empty_mapping(emitter) {
+-			return false
+-		}
+-		length += len(emitter.anchor_data.anchor) +
+-			len(emitter.tag_data.handle) +
+-			len(emitter.tag_data.suffix)
+-	default:
+-		return false
+-	}
+-	return length <= 128
+-}
+-
+-// Determine an acceptable scalar style.
+-func yaml_emitter_select_scalar_style(emitter *yaml_emitter_t, event *yaml_event_t) bool {
+-
+-	no_tag := len(emitter.tag_data.handle) == 0 && len(emitter.tag_data.suffix) == 0
+-	if no_tag && !event.implicit && !event.quoted_implicit {
+-		return yaml_emitter_set_emitter_error(emitter, "neither tag nor implicit flags are specified")
+-	}
+-
+-	style := event.scalar_style()
+-	if style == yaml_ANY_SCALAR_STYLE {
+-		style = yaml_PLAIN_SCALAR_STYLE
+-	}
+-	if emitter.canonical {
+-		style = yaml_DOUBLE_QUOTED_SCALAR_STYLE
+-	}
+-	if emitter.simple_key_context && emitter.scalar_data.multiline {
+-		style = yaml_DOUBLE_QUOTED_SCALAR_STYLE
+-	}
+-
+-	if style == yaml_PLAIN_SCALAR_STYLE {
+-		if emitter.flow_level > 0 && !emitter.scalar_data.flow_plain_allowed ||
+-			emitter.flow_level == 0 && !emitter.scalar_data.block_plain_allowed {
+-			style = yaml_SINGLE_QUOTED_SCALAR_STYLE
+-		}
+-		if len(emitter.scalar_data.value) == 0 && (emitter.flow_level > 0 || emitter.simple_key_context) {
+-			style = yaml_SINGLE_QUOTED_SCALAR_STYLE
+-		}
+-		if no_tag && !event.implicit {
+-			style = yaml_SINGLE_QUOTED_SCALAR_STYLE
+-		}
+-	}
+-	if style == yaml_SINGLE_QUOTED_SCALAR_STYLE {
+-		if !emitter.scalar_data.single_quoted_allowed {
+-			style = yaml_DOUBLE_QUOTED_SCALAR_STYLE
+-		}
+-	}
+-	if style == yaml_LITERAL_SCALAR_STYLE || style == yaml_FOLDED_SCALAR_STYLE {
+-		if !emitter.scalar_data.block_allowed || emitter.flow_level > 0 || emitter.simple_key_context {
+-			style = yaml_DOUBLE_QUOTED_SCALAR_STYLE
+-		}
+-	}
+-
+-	if no_tag && !event.quoted_implicit && style != yaml_PLAIN_SCALAR_STYLE {
+-		emitter.tag_data.handle = []byte{'!'}
+-	}
+-	emitter.scalar_data.style = style
+-	return true
+-}
+-
+-// Write an anchor.
+-func yaml_emitter_process_anchor(emitter *yaml_emitter_t) bool {
+-	if emitter.anchor_data.anchor == nil {
+-		return true
+-	}
+-	c := []byte{'&'}
+-	if emitter.anchor_data.alias {
+-		c[0] = '*'
+-	}
+-	if !yaml_emitter_write_indicator(emitter, c, true, false, false) {
+-		return false
+-	}
+-	return yaml_emitter_write_anchor(emitter, emitter.anchor_data.anchor)
+-}
+-
+-// Write a tag.
+-func yaml_emitter_process_tag(emitter *yaml_emitter_t) bool {
+-	if len(emitter.tag_data.handle) == 0 && len(emitter.tag_data.suffix) == 0 {
+-		return true
+-	}
+-	if len(emitter.tag_data.handle) > 0 {
+-		if !yaml_emitter_write_tag_handle(emitter, emitter.tag_data.handle) {
+-			return false
+-		}
+-		if len(emitter.tag_data.suffix) > 0 {
+-			if !yaml_emitter_write_tag_content(emitter, emitter.tag_data.suffix, false) {
+-				return false
+-			}
+-		}
+-	} else {
+-		// [Go] Allocate these slices elsewhere.
+-		if !yaml_emitter_write_indicator(emitter, []byte("!<"), true, false, false) {
+-			return false
+-		}
+-		if !yaml_emitter_write_tag_content(emitter, emitter.tag_data.suffix, false) {
+-			return false
+-		}
+-		if !yaml_emitter_write_indicator(emitter, []byte{'>'}, false, false, false) {
+-			return false
+-		}
+-	}
+-	return true
+-}
+-
+-// Write a scalar.
+-func yaml_emitter_process_scalar(emitter *yaml_emitter_t) bool {
+-	switch emitter.scalar_data.style {
+-	case yaml_PLAIN_SCALAR_STYLE:
+-		return yaml_emitter_write_plain_scalar(emitter, emitter.scalar_data.value, !emitter.simple_key_context)
+-
+-	case yaml_SINGLE_QUOTED_SCALAR_STYLE:
+-		return yaml_emitter_write_single_quoted_scalar(emitter, emitter.scalar_data.value, !emitter.simple_key_context)
+-
+-	case yaml_DOUBLE_QUOTED_SCALAR_STYLE:
+-		return yaml_emitter_write_double_quoted_scalar(emitter, emitter.scalar_data.value, !emitter.simple_key_context)
+-
+-	case yaml_LITERAL_SCALAR_STYLE:
+-		return yaml_emitter_write_literal_scalar(emitter, emitter.scalar_data.value)
+-
+-	case yaml_FOLDED_SCALAR_STYLE:
+-		return yaml_emitter_write_folded_scalar(emitter, emitter.scalar_data.value)
+-	}
+-	panic("unknown scalar style")
+-}
+-
+-// Check if a %YAML directive is valid.
+-func yaml_emitter_analyze_version_directive(emitter *yaml_emitter_t, version_directive *yaml_version_directive_t) bool {
+-	if version_directive.major != 1 || version_directive.minor != 1 {
+-		return yaml_emitter_set_emitter_error(emitter, "incompatible %YAML directive")
+-	}
+-	return true
+-}
+-
+-// Check if a %TAG directive is valid.
+-func yaml_emitter_analyze_tag_directive(emitter *yaml_emitter_t, tag_directive *yaml_tag_directive_t) bool {
+-	handle := tag_directive.handle
+-	prefix := tag_directive.prefix
+-	if len(handle) == 0 {
+-		return yaml_emitter_set_emitter_error(emitter, "tag handle must not be empty")
+-	}
+-	if handle[0] != '!' {
+-		return yaml_emitter_set_emitter_error(emitter, "tag handle must start with '!'")
+-	}
+-	if handle[len(handle)-1] != '!' {
+-		return yaml_emitter_set_emitter_error(emitter, "tag handle must end with '!'")
+-	}
+-	for i := 1; i < len(handle)-1; i += width(handle[i]) {
+-		if !is_alpha(handle, i) {
+-			return yaml_emitter_set_emitter_error(emitter, "tag handle must contain alphanumerical characters only")
+-		}
+-	}
+-	if len(prefix) == 0 {
+-		return yaml_emitter_set_emitter_error(emitter, "tag prefix must not be empty")
+-	}
+-	return true
+-}
+-
+-// Check if an anchor is valid.
+-func yaml_emitter_analyze_anchor(emitter *yaml_emitter_t, anchor []byte, alias bool) bool {
+-	if len(anchor) == 0 {
+-		problem := "anchor value must not be empty"
+-		if alias {
+-			problem = "alias value must not be empty"
+-		}
+-		return yaml_emitter_set_emitter_error(emitter, problem)
+-	}
+-	for i := 0; i < len(anchor); i += width(anchor[i]) {
+-		if !is_alpha(anchor, i) {
+-			problem := "anchor value must contain alphanumerical characters only"
+-			if alias {
+-				problem = "alias value must contain alphanumerical characters only"
+-			}
+-			return yaml_emitter_set_emitter_error(emitter, problem)
+-		}
+-	}
+-	emitter.anchor_data.anchor = anchor
+-	emitter.anchor_data.alias = alias
+-	return true
+-}
+-
+-// Check if a tag is valid.
+-func yaml_emitter_analyze_tag(emitter *yaml_emitter_t, tag []byte) bool {
+-	if len(tag) == 0 {
+-		return yaml_emitter_set_emitter_error(emitter, "tag value must not be empty")
+-	}
+-	for i := 0; i < len(emitter.tag_directives); i++ {
+-		tag_directive := &emitter.tag_directives[i]
+-		if bytes.HasPrefix(tag, tag_directive.prefix) {
+-			emitter.tag_data.handle = tag_directive.handle
+-			emitter.tag_data.suffix = tag[len(tag_directive.prefix):]
+-		}
+-		return true
+-	}
+-	emitter.tag_data.suffix = tag
+-	return true
+-}
+-
+-// Check if a scalar is valid.
+-func yaml_emitter_analyze_scalar(emitter *yaml_emitter_t, value []byte) bool {
+-	var (
+-		block_indicators   = false
+-		flow_indicators    = false
+-		line_breaks        = false
+-		special_characters = false
+-
+-		leading_space  = false
+-		leading_break  = false
+-		trailing_space = false
+-		trailing_break = false
+-		break_space    = false
+-		space_break    = false
+-
+-		preceeded_by_whitespace = false
+-		followed_by_whitespace  = false
+-		previous_space          = false
+-		previous_break          = false
+-	)
+-
+-	emitter.scalar_data.value = value
+-
+-	if len(value) == 0 {
+-		emitter.scalar_data.multiline = false
+-		emitter.scalar_data.flow_plain_allowed = false
+-		emitter.scalar_data.block_plain_allowed = true
+-		emitter.scalar_data.single_quoted_allowed = true
+-		emitter.scalar_data.block_allowed = false
+-		return true
+-	}
+-
+-	if len(value) >= 3 && ((value[0] == '-' && value[1] == '-' && value[2] == '-') || (value[0] == '.' && value[1] == '.' && value[2] == '.')) {
+-		block_indicators = true
+-		flow_indicators = true
+-	}
+-
+-	preceeded_by_whitespace = true
+-	for i, w := 0, 0; i < len(value); i += w {
+-		w = width(value[0])
+-		followed_by_whitespace = i+w >= len(value) || is_blank(value, i+w)
+-
+-		if i == 0 {
+-			switch value[i] {
+-			case '#', ',', '[', ']', '{', '}', '&', '*', '!', '|', '>', '\'', '"', '%', '@', '`':
+-				flow_indicators = true
+-				block_indicators = true
+-			case '?', ':':
+-				flow_indicators = true
+-				if followed_by_whitespace {
+-					block_indicators = true
+-				}
+-			case '-':
+-				if followed_by_whitespace {
+-					flow_indicators = true
+-					block_indicators = true
+-				}
+-			}
+-		} else {
+-			switch value[i] {
+-			case ',', '?', '[', ']', '{', '}':
+-				flow_indicators = true
+-			case ':':
+-				flow_indicators = true
+-				if followed_by_whitespace {
+-					block_indicators = true
+-				}
+-			case '#':
+-				if preceeded_by_whitespace {
+-					flow_indicators = true
+-					block_indicators = true
+-				}
+-			}
+-		}
+-
+-		if !is_printable(value, i) || !is_ascii(value, i) && !emitter.unicode {
+-			special_characters = true
+-		}
+-		if is_space(value, i) {
+-			if i == 0 {
+-				leading_space = true
+-			}
+-			if i+width(value[i]) == len(value) {
+-				trailing_space = true
+-			}
+-			if previous_break {
+-				break_space = true
+-			}
+-			previous_space = true
+-			previous_break = false
+-		} else if is_break(value, i) {
+-			line_breaks = true
+-			if i == 0 {
+-				leading_break = true
+-			}
+-			if i+width(value[i]) == len(value) {
+-				trailing_break = true
+-			}
+-			if previous_space {
+-				space_break = true
+-			}
+-			previous_space = false
+-			previous_break = true
+-		} else {
+-			previous_space = false
+-			previous_break = false
+-		}
+-
+-		// [Go]: Why 'z'? Couldn't be the end of the string as that's the loop condition.
+-		preceeded_by_whitespace = is_blankz(value, i)
+-	}
+-
+-	emitter.scalar_data.multiline = line_breaks
+-	emitter.scalar_data.flow_plain_allowed = true
+-	emitter.scalar_data.block_plain_allowed = true
+-	emitter.scalar_data.single_quoted_allowed = true
+-	emitter.scalar_data.block_allowed = true
+-
+-	if leading_space || leading_break || trailing_space || trailing_break {
+-		emitter.scalar_data.flow_plain_allowed = false
+-		emitter.scalar_data.block_plain_allowed = false
+-	}
+-	if trailing_space {
+-		emitter.scalar_data.block_allowed = false
+-	}
+-	if break_space {
+-		emitter.scalar_data.flow_plain_allowed = false
+-		emitter.scalar_data.block_plain_allowed = false
+-		emitter.scalar_data.single_quoted_allowed = false
+-	}
+-	if space_break || special_characters {
+-		emitter.scalar_data.flow_plain_allowed = false
+-		emitter.scalar_data.block_plain_allowed = false
+-		emitter.scalar_data.single_quoted_allowed = false
+-		emitter.scalar_data.block_allowed = false
+-	}
+-	if line_breaks {
+-		emitter.scalar_data.flow_plain_allowed = false
+-		emitter.scalar_data.block_plain_allowed = false
+-	}
+-	if flow_indicators {
+-		emitter.scalar_data.flow_plain_allowed = false
+-	}
+-	if block_indicators {
+-		emitter.scalar_data.block_plain_allowed = false
+-	}
+-	return true
+-}
+-
+-// Check if the event data is valid.
+-func yaml_emitter_analyze_event(emitter *yaml_emitter_t, event *yaml_event_t) bool {
+-
+-	emitter.anchor_data.anchor = nil
+-	emitter.tag_data.handle = nil
+-	emitter.tag_data.suffix = nil
+-	emitter.scalar_data.value = nil
+-
+-	switch event.typ {
+-	case yaml_ALIAS_EVENT:
+-		if !yaml_emitter_analyze_anchor(emitter, event.anchor, true) {
+-			return false
+-		}
+-
+-	case yaml_SCALAR_EVENT:
+-		if len(event.anchor) > 0 {
+-			if !yaml_emitter_analyze_anchor(emitter, event.anchor, false) {
+-				return false
+-			}
+-		}
+-		if len(event.tag) > 0 && (emitter.canonical || (!event.implicit && !event.quoted_implicit)) {
+-			if !yaml_emitter_analyze_tag(emitter, event.tag) {
+-				return false
+-			}
+-		}
+-		if !yaml_emitter_analyze_scalar(emitter, event.value) {
+-			return false
+-		}
+-
+-	case yaml_SEQUENCE_START_EVENT:
+-		if len(event.anchor) > 0 {
+-			if !yaml_emitter_analyze_anchor(emitter, event.anchor, false) {
+-				return false
+-			}
+-		}
+-		if len(event.tag) > 0 && (emitter.canonical || !event.implicit) {
+-			if !yaml_emitter_analyze_tag(emitter, event.tag) {
+-				return false
+-			}
+-		}
+-
+-	case yaml_MAPPING_START_EVENT:
+-		if len(event.anchor) > 0 {
+-			if !yaml_emitter_analyze_anchor(emitter, event.anchor, false) {
+-				return false
+-			}
+-		}
+-		if len(event.tag) > 0 && (emitter.canonical || !event.implicit) {
+-			if !yaml_emitter_analyze_tag(emitter, event.tag) {
+-				return false
+-			}
+-		}
+-	}
+-	return true
+-}
+-
+-// Write the BOM character.
+-func yaml_emitter_write_bom(emitter *yaml_emitter_t) bool {
+-	if !flush(emitter) {
+-		return false
+-	}
+-	pos := emitter.buffer_pos
+-	emitter.buffer[pos+0] = '\xEF'
+-	emitter.buffer[pos+1] = '\xBB'
+-	emitter.buffer[pos+2] = '\xBF'
+-	emitter.buffer_pos += 3
+-	return true
+-}
+-
+-func yaml_emitter_write_indent(emitter *yaml_emitter_t) bool {
+-	indent := emitter.indent
+-	if indent < 0 {
+-		indent = 0
+-	}
+-	if !emitter.indention || emitter.column > indent || (emitter.column == indent && !emitter.whitespace) {
+-		if !put_break(emitter) {
+-			return false
+-		}
+-	}
+-	for emitter.column < indent {
+-		if !put(emitter, ' ') {
+-			return false
+-		}
+-	}
+-	emitter.whitespace = true
+-	emitter.indention = true
+-	return true
+-}
+-
+-func yaml_emitter_write_indicator(emitter *yaml_emitter_t, indicator []byte, need_whitespace, is_whitespace, is_indention bool) bool {
+-	if need_whitespace && !emitter.whitespace {
+-		if !put(emitter, ' ') {
+-			return false
+-		}
+-	}
+-	if !write_all(emitter, indicator) {
+-		return false
+-	}
+-	emitter.whitespace = is_whitespace
+-	emitter.indention = (emitter.indention && is_indention)
+-	emitter.open_ended = false
+-	return true
+-}
+-
+-func yaml_emitter_write_anchor(emitter *yaml_emitter_t, value []byte) bool {
+-	if !write_all(emitter, value) {
+-		return false
+-	}
+-	emitter.whitespace = false
+-	emitter.indention = false
+-	return true
+-}
+-
+-func yaml_emitter_write_tag_handle(emitter *yaml_emitter_t, value []byte) bool {
+-	if !emitter.whitespace {
+-		if !put(emitter, ' ') {
+-			return false
+-		}
+-	}
+-	if !write_all(emitter, value) {
+-		return false
+-	}
+-	emitter.whitespace = false
+-	emitter.indention = false
+-	return true
+-}
+-
+-func yaml_emitter_write_tag_content(emitter *yaml_emitter_t, value []byte, need_whitespace bool) bool {
+-	if need_whitespace && !emitter.whitespace {
+-		if !put(emitter, ' ') {
+-			return false
+-		}
+-	}
+-	for i := 0; i < len(value); {
+-		var must_write bool
+-		switch value[i] {
+-		case ';', '/', '?', ':', '@', '&', '=', '+', '$', ',', '_', '.', '~', '*', '\'', '(', ')', '[', ']':
+-			must_write = true
+-		default:
+-			must_write = is_alpha(value, i)
+-		}
+-		if must_write {
+-			if !write(emitter, value, &i) {
+-				return false
+-			}
+-		} else {
+-			w := width(value[i])
+-			for k := 0; k < w; k++ {
+-				octet := value[i]
+-				i++
+-
+-				c := octet >> 4
+-				if c < 10 {
+-					c += '0'
+-				} else {
+-					c += 'A' - 10
+-				}
+-				if !put(emitter, c) {
+-					return false
+-				}
+-
+-				c = octet & 0x0f
+-				if c < 10 {
+-					c += '0'
+-				} else {
+-					c += 'A' - 10
+-				}
+-				if !put(emitter, c) {
+-					return false
+-				}
+-			}
+-		}
+-	}
+-	emitter.whitespace = false
+-	emitter.indention = false
+-	return true
+-}
+-
+-func yaml_emitter_write_plain_scalar(emitter *yaml_emitter_t, value []byte, allow_breaks bool) bool {
+-	if !emitter.whitespace {
+-		if !put(emitter, ' ') {
+-			return false
+-		}
+-	}
+-
+-	spaces := false
+-	breaks := false
+-	for i := 0; i < len(value); {
+-		if is_space(value, i) {
+-			if allow_breaks && !spaces && emitter.column > emitter.best_width && !is_space(value, i+1) {
+-				if !yaml_emitter_write_indent(emitter) {
+-					return false
+-				}
+-				i += width(value[i])
+-			} else {
+-				if !write(emitter, value, &i) {
+-					return false
+-				}
+-			}
+-			spaces = true
+-		} else if is_break(value, i) {
+-			if !breaks && value[i] == '\n' {
+-				if !put_break(emitter) {
+-					return false
+-				}
+-			}
+-			if !write_break(emitter, value, &i) {
+-				return false
+-			}
+-			emitter.indention = true
+-			breaks = true
+-		} else {
+-			if breaks {
+-				if !yaml_emitter_write_indent(emitter) {
+-					return false
+-				}
+-			}
+-			if !write(emitter, value, &i) {
+-				return false
+-			}
+-			emitter.indention = false
+-			spaces = false
+-			breaks = false
+-		}
+-	}
+-
+-	emitter.whitespace = false
+-	emitter.indention = false
+-	if emitter.root_context {
+-		emitter.open_ended = true
+-	}
+-
+-	return true
+-}
+-
+-func yaml_emitter_write_single_quoted_scalar(emitter *yaml_emitter_t, value []byte, allow_breaks bool) bool {
+-
+-	if !yaml_emitter_write_indicator(emitter, []byte{'\''}, true, false, false) {
+-		return false
+-	}
+-
+-	spaces := false
+-	breaks := false
+-	for i := 0; i < len(value); {
+-		if is_space(value, i) {
+-			if allow_breaks && !spaces && emitter.column > emitter.best_width && i > 0 && i < len(value)-1 && !is_space(value, i+1) {
+-				if !yaml_emitter_write_indent(emitter) {
+-					return false
+-				}
+-				i += width(value[i])
+-			} else {
+-				if !write(emitter, value, &i) {
+-					return false
+-				}
+-			}
+-			spaces = true
+-		} else if is_break(value, i) {
+-			if !breaks && value[i] == '\n' {
+-				if !put_break(emitter) {
+-					return false
+-				}
+-			}
+-			if !write_break(emitter, value, &i) {
+-				return false
+-			}
+-			emitter.indention = true
+-			breaks = true
+-		} else {
+-			if breaks {
+-				if !yaml_emitter_write_indent(emitter) {
+-					return false
+-				}
+-			}
+-			if value[i] == '\'' {
+-				if !put(emitter, '\'') {
+-					return false
+-				}
+-			}
+-			if !write(emitter, value, &i) {
+-				return false
+-			}
+-			emitter.indention = false
+-			spaces = false
+-			breaks = false
+-		}
+-	}
+-	if !yaml_emitter_write_indicator(emitter, []byte{'\''}, false, false, false) {
+-		return false
+-	}
+-	emitter.whitespace = false
+-	emitter.indention = false
+-	return true
+-}
+-
+-func yaml_emitter_write_double_quoted_scalar(emitter *yaml_emitter_t, value []byte, allow_breaks bool) bool {
+-	spaces := false
+-	if !yaml_emitter_write_indicator(emitter, []byte{'"'}, true, false, false) {
+-		return false
+-	}
+-
+-	for i := 0; i < len(value); {
+-		if !is_printable(value, i) || (!emitter.unicode && !is_ascii(value, i)) ||
+-			is_bom(value, i) || is_break(value, i) ||
+-			value[i] == '"' || value[i] == '\\' {
+-
+-			octet := value[i]
+-
+-			var w int
+-			var v rune
+-			switch {
+-			case octet&0x80 == 0x00:
+-				w, v = 1, rune(octet&0x7F)
+-			case octet&0xE0 == 0xC0:
+-				w, v = 2, rune(octet&0x1F)
+-			case octet&0xF0 == 0xE0:
+-				w, v = 3, rune(octet&0x0F)
+-			case octet&0xF8 == 0xF0:
+-				w, v = 4, rune(octet&0x07)
+-			}
+-			for k := 1; k < w; k++ {
+-				octet = value[i+k]
+-				v = (v << 6) + (rune(octet) & 0x3F)
+-			}
+-			i += w
+-
+-			if !put(emitter, '\\') {
+-				return false
+-			}
+-
+-			var ok bool
+-			switch v {
+-			case 0x00:
+-				ok = put(emitter, '0')
+-			case 0x07:
+-				ok = put(emitter, 'a')
+-			case 0x08:
+-				ok = put(emitter, 'b')
+-			case 0x09:
+-				ok = put(emitter, 't')
+-			case 0x0A:
+-				ok = put(emitter, 'n')
+-			case 0x0b:
+-				ok = put(emitter, 'v')
+-			case 0x0c:
+-				ok = put(emitter, 'f')
+-			case 0x0d:
+-				ok = put(emitter, 'r')
+-			case 0x1b:
+-				ok = put(emitter, 'e')
+-			case 0x22:
+-				ok = put(emitter, '"')
+-			case 0x5c:
+-				ok = put(emitter, '\\')
+-			case 0x85:
+-				ok = put(emitter, 'N')
+-			case 0xA0:
+-				ok = put(emitter, '_')
+-			case 0x2028:
+-				ok = put(emitter, 'L')
+-			case 0x2029:
+-				ok = put(emitter, 'P')
+-			default:
+-				if v <= 0xFF {
+-					ok = put(emitter, 'x')
+-					w = 2
+-				} else if v <= 0xFFFF {
+-					ok = put(emitter, 'u')
+-					w = 4
+-				} else {
+-					ok = put(emitter, 'U')
+-					w = 8
+-				}
+-				for k := (w - 1) * 4; ok && k >= 0; k -= 4 {
+-					digit := byte((v >> uint(k)) & 0x0F)
+-					if digit < 10 {
+-						ok = put(emitter, digit+'0')
+-					} else {
+-						ok = put(emitter, digit+'A'-10)
+-					}
+-				}
+-			}
+-			if !ok {
+-				return false
+-			}
+-			spaces = false
+-		} else if is_space(value, i) {
+-			if allow_breaks && !spaces && emitter.column > emitter.best_width && i > 0 && i < len(value)-1 {
+-				if !yaml_emitter_write_indent(emitter) {
+-					return false
+-				}
+-				if is_space(value, i+1) {
+-					if !put(emitter, '\\') {
+-						return false
+-					}
+-				}
+-				i += width(value[i])
+-			} else if !write(emitter, value, &i) {
+-				return false
+-			}
+-			spaces = true
+-		} else {
+-			if !write(emitter, value, &i) {
+-				return false
+-			}
+-			spaces = false
+-		}
+-	}
+-	if !yaml_emitter_write_indicator(emitter, []byte{'"'}, false, false, false) {
+-		return false
+-	}
+-	emitter.whitespace = false
+-	emitter.indention = false
+-	return true
+-}
+-
+-func yaml_emitter_write_block_scalar_hints(emitter *yaml_emitter_t, value []byte) bool {
+-	if is_space(value, 0) || is_break(value, 0) {
+-		indent_hint := []byte{'0' + byte(emitter.best_indent)}
+-		if !yaml_emitter_write_indicator(emitter, indent_hint, false, false, false) {
+-			return false
+-		}
+-	}
+-
+-	emitter.open_ended = false
+-
+-	var chomp_hint [1]byte
+-	if len(value) == 0 {
+-		chomp_hint[0] = '-'
+-	} else {
+-		i := len(value) - 1
+-		for value[i]&0xC0 == 0x80 {
+-			i--
+-		}
+-		if !is_break(value, i) {
+-			chomp_hint[0] = '-'
+-		} else if i == 0 {
+-			chomp_hint[0] = '+'
+-			emitter.open_ended = true
+-		} else {
+-			i--
+-			for value[i]&0xC0 == 0x80 {
+-				i--
+-			}
+-			if is_break(value, i) {
+-				chomp_hint[0] = '+'
+-				emitter.open_ended = true
+-			}
+-		}
+-	}
+-	if chomp_hint[0] != 0 {
+-		if !yaml_emitter_write_indicator(emitter, chomp_hint[:], false, false, false) {
+-			return false
+-		}
+-	}
+-	return true
+-}
+-
+-func yaml_emitter_write_literal_scalar(emitter *yaml_emitter_t, value []byte) bool {
+-	if !yaml_emitter_write_indicator(emitter, []byte{'|'}, true, false, false) {
+-		return false
+-	}
+-	if !yaml_emitter_write_block_scalar_hints(emitter, value) {
+-		return false
+-	}
+-	if !put_break(emitter) {
+-		return false
+-	}
+-	emitter.indention = true
+-	emitter.whitespace = true
+-	breaks := true
+-	for i := 0; i < len(value); {
+-		if is_break(value, i) {
+-			if !write_break(emitter, value, &i) {
+-				return false
+-			}
+-			emitter.indention = true
+-			breaks = true
+-		} else {
+-			if breaks {
+-				if !yaml_emitter_write_indent(emitter) {
+-					return false
+-				}
+-			}
+-			if !write(emitter, value, &i) {
+-				return false
+-			}
+-			emitter.indention = false
+-			breaks = false
+-		}
+-	}
+-
+-	return true
+-}
+-
+-func yaml_emitter_write_folded_scalar(emitter *yaml_emitter_t, value []byte) bool {
+-	if !yaml_emitter_write_indicator(emitter, []byte{'>'}, true, false, false) {
+-		return false
+-	}
+-	if !yaml_emitter_write_block_scalar_hints(emitter, value) {
+-		return false
+-	}
+-
+-	if !put_break(emitter) {
+-		return false
+-	}
+-	emitter.indention = true
+-	emitter.whitespace = true
+-
+-	breaks := true
+-	leading_spaces := true
+-	for i := 0; i < len(value); {
+-		if is_break(value, i) {
+-			if !breaks && !leading_spaces && value[i] == '\n' {
+-				k := 0
+-				for is_break(value, k) {
+-					k += width(value[k])
+-				}
+-				if !is_blankz(value, k) {
+-					if !put_break(emitter) {
+-						return false
+-					}
+-				}
+-			}
+-			if !write_break(emitter, value, &i) {
+-				return false
+-			}
+-			emitter.indention = true
+-			breaks = true
+-		} else {
+-			if breaks {
+-				if !yaml_emitter_write_indent(emitter) {
+-					return false
+-				}
+-				leading_spaces = is_blank(value, i)
+-			}
+-			if !breaks && is_space(value, i) && !is_space(value, i+1) && emitter.column > emitter.best_width {
+-				if !yaml_emitter_write_indent(emitter) {
+-					return false
+-				}
+-				i += width(value[i])
+-			} else {
+-				if !write(emitter, value, &i) {
+-					return false
+-				}
+-			}
+-			emitter.indention = false
+-			breaks = false
+-		}
+-	}
+-	return true
+-}
+diff --git a/Godeps/_workspace/src/gopkg.in/v1/yaml/encode.go b/Godeps/_workspace/src/gopkg.in/v1/yaml/encode.go
+deleted file mode 100644
+index 1d928b0..0000000
+--- a/Godeps/_workspace/src/gopkg.in/v1/yaml/encode.go
++++ /dev/null
+@@ -1,226 +0,0 @@
+-package yaml
+-
+-import (
+-	"reflect"
+-	"sort"
+-	"strconv"
+-	"time"
+-)
+-
+-type encoder struct {
+-	emitter yaml_emitter_t
+-	event   yaml_event_t
+-	out     []byte
+-	flow    bool
+-}
+-
+-func newEncoder() (e *encoder) {
+-	e = &encoder{}
+-	e.must(yaml_emitter_initialize(&e.emitter))
+-	yaml_emitter_set_output_string(&e.emitter, &e.out)
+-	e.must(yaml_stream_start_event_initialize(&e.event, yaml_UTF8_ENCODING))
+-	e.emit()
+-	e.must(yaml_document_start_event_initialize(&e.event, nil, nil, true))
+-	e.emit()
+-	return e
+-}
+-
+-func (e *encoder) finish() {
+-	e.must(yaml_document_end_event_initialize(&e.event, true))
+-	e.emit()
+-	e.emitter.open_ended = false
+-	e.must(yaml_stream_end_event_initialize(&e.event))
+-	e.emit()
+-}
+-
+-func (e *encoder) destroy() {
+-	yaml_emitter_delete(&e.emitter)
+-}
+-
+-func (e *encoder) emit() {
+-	// This will internally delete the e.event value.
+-	if !yaml_emitter_emit(&e.emitter, &e.event) && e.event.typ != yaml_DOCUMENT_END_EVENT && e.event.typ != yaml_STREAM_END_EVENT {
+-		e.must(false)
+-	}
+-}
+-
+-func (e *encoder) must(ok bool) {
+-	if !ok {
+-		msg := e.emitter.problem
+-		if msg == "" {
+-			msg = "Unknown problem generating YAML content"
+-		}
+-		panic(msg)
+-	}
+-}
+-
+-func (e *encoder) marshal(tag string, in reflect.Value) {
+-	var value interface{}
+-	if getter, ok := in.Interface().(Getter); ok {
+-		tag, value = getter.GetYAML()
+-		if value == nil {
+-			e.nilv()
+-			return
+-		}
+-		in = reflect.ValueOf(value)
+-	}
+-	switch in.Kind() {
+-	case reflect.Interface:
+-		if in.IsNil() {
+-			e.nilv()
+-		} else {
+-			e.marshal(tag, in.Elem())
+-		}
+-	case reflect.Map:
+-		e.mapv(tag, in)
+-	case reflect.Ptr:
+-		if in.IsNil() {
+-			e.nilv()
+-		} else {
+-			e.marshal(tag, in.Elem())
+-		}
+-	case reflect.Struct:
+-		e.structv(tag, in)
+-	case reflect.Slice:
+-		e.slicev(tag, in)
+-	case reflect.String:
+-		e.stringv(tag, in)
+-	case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
+-		if in.Type() == durationType {
+-			e.stringv(tag, reflect.ValueOf(in.Interface().(time.Duration).String()))
+-		} else {
+-			e.intv(tag, in)
+-		}
+-	case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr:
+-		e.uintv(tag, in)
+-	case reflect.Float32, reflect.Float64:
+-		e.floatv(tag, in)
+-	case reflect.Bool:
+-		e.boolv(tag, in)
+-	default:
+-		panic("Can't marshal type yet: " + in.Type().String())
+-	}
+-}
+-
+-func (e *encoder) mapv(tag string, in reflect.Value) {
+-	e.mappingv(tag, func() {
+-		keys := keyList(in.MapKeys())
+-		sort.Sort(keys)
+-		for _, k := range keys {
+-			e.marshal("", k)
+-			e.marshal("", in.MapIndex(k))
+-		}
+-	})
+-}
+-
+-func (e *encoder) structv(tag string, in reflect.Value) {
+-	sinfo, err := getStructInfo(in.Type())
+-	if err != nil {
+-		panic(err)
+-	}
+-	e.mappingv(tag, func() {
+-		for _, info := range sinfo.FieldsList {
+-			var value reflect.Value
+-			if info.Inline == nil {
+-				value = in.Field(info.Num)
+-			} else {
+-				value = in.FieldByIndex(info.Inline)
+-			}
+-			if info.OmitEmpty && isZero(value) {
+-				continue
+-			}
+-			e.marshal("", reflect.ValueOf(info.Key))
+-			e.flow = info.Flow
+-			e.marshal("", value)
+-		}
+-	})
+-}
+-
+-func (e *encoder) mappingv(tag string, f func()) {
+-	implicit := tag == ""
+-	style := yaml_BLOCK_MAPPING_STYLE
+-	if e.flow {
+-		e.flow = false
+-		style = yaml_FLOW_MAPPING_STYLE
+-	}
+-	e.must(yaml_mapping_start_event_initialize(&e.event, nil, []byte(tag), implicit, style))
+-	e.emit()
+-	f()
+-	e.must(yaml_mapping_end_event_initialize(&e.event))
+-	e.emit()
+-}
+-
+-func (e *encoder) slicev(tag string, in reflect.Value) {
+-	implicit := tag == ""
+-	style := yaml_BLOCK_SEQUENCE_STYLE
+-	if e.flow {
+-		e.flow = false
+-		style = yaml_FLOW_SEQUENCE_STYLE
+-	}
+-	e.must(yaml_sequence_start_event_initialize(&e.event, nil, []byte(tag), implicit, style))
+-	e.emit()
+-	n := in.Len()
+-	for i := 0; i < n; i++ {
+-		e.marshal("", in.Index(i))
+-	}
+-	e.must(yaml_sequence_end_event_initialize(&e.event))
+-	e.emit()
+-}
+-
+-func (e *encoder) stringv(tag string, in reflect.Value) {
+-	var style yaml_scalar_style_t
+-	s := in.String()
+-	if rtag, _ := resolve("", s); rtag != "!!str" {
+-		style = yaml_DOUBLE_QUOTED_SCALAR_STYLE
+-	} else {
+-		style = yaml_PLAIN_SCALAR_STYLE
+-	}
+-	e.emitScalar(s, "", tag, style)
+-}
+-
+-func (e *encoder) boolv(tag string, in reflect.Value) {
+-	var s string
+-	if in.Bool() {
+-		s = "true"
+-	} else {
+-		s = "false"
+-	}
+-	e.emitScalar(s, "", tag, yaml_PLAIN_SCALAR_STYLE)
+-}
+-
+-func (e *encoder) intv(tag string, in reflect.Value) {
+-	s := strconv.FormatInt(in.Int(), 10)
+-	e.emitScalar(s, "", tag, yaml_PLAIN_SCALAR_STYLE)
+-}
+-
+-func (e *encoder) uintv(tag string, in reflect.Value) {
+-	s := strconv.FormatUint(in.Uint(), 10)
+-	e.emitScalar(s, "", tag, yaml_PLAIN_SCALAR_STYLE)
+-}
+-
+-func (e *encoder) floatv(tag string, in reflect.Value) {
+-	// FIXME: Handle 64 bits here.
+-	s := strconv.FormatFloat(float64(in.Float()), 'g', -1, 32)
+-	switch s {
+-	case "+Inf":
+-		s = ".inf"
+-	case "-Inf":
+-		s = "-.inf"
+-	case "NaN":
+-		s = ".nan"
+-	}
+-	e.emitScalar(s, "", tag, yaml_PLAIN_SCALAR_STYLE)
+-}
+-
+-func (e *encoder) nilv() {
+-	e.emitScalar("null", "", "", yaml_PLAIN_SCALAR_STYLE)
+-}
+-
+-func (e *encoder) emitScalar(value, anchor, tag string, style yaml_scalar_style_t) {
+-	implicit := tag == ""
+-	if !implicit {
+-		style = yaml_PLAIN_SCALAR_STYLE
+-	}
+-	e.must(yaml_scalar_event_initialize(&e.event, []byte(anchor), []byte(tag), []byte(value), implicit, implicit, style))
+-	e.emit()
+-}
+diff --git a/Godeps/_workspace/src/gopkg.in/v1/yaml/encode_test.go b/Godeps/_workspace/src/gopkg.in/v1/yaml/encode_test.go
+deleted file mode 100644
+index c7461d5..0000000
+--- a/Godeps/_workspace/src/gopkg.in/v1/yaml/encode_test.go
++++ /dev/null
+@@ -1,386 +0,0 @@
+-package yaml_test
+-
+-import (
+-	"fmt"
+-	"gopkg.in/yaml.v1"
+-	. "gopkg.in/check.v1"
+-	"math"
+-	"strconv"
+-	"strings"
+-	"time"
+-)
+-
+-var marshalIntTest = 123
+-
+-var marshalTests = []struct {
+-	value interface{}
+-	data  string
+-}{
+-	{
+-		&struct{}{},
+-		"{}\n",
+-	}, {
+-		map[string]string{"v": "hi"},
+-		"v: hi\n",
+-	}, {
+-		map[string]interface{}{"v": "hi"},
+-		"v: hi\n",
+-	}, {
+-		map[string]string{"v": "true"},
+-		"v: \"true\"\n",
+-	}, {
+-		map[string]string{"v": "false"},
+-		"v: \"false\"\n",
+-	}, {
+-		map[string]interface{}{"v": true},
+-		"v: true\n",
+-	}, {
+-		map[string]interface{}{"v": false},
+-		"v: false\n",
+-	}, {
+-		map[string]interface{}{"v": 10},
+-		"v: 10\n",
+-	}, {
+-		map[string]interface{}{"v": -10},
+-		"v: -10\n",
+-	}, {
+-		map[string]uint{"v": 42},
+-		"v: 42\n",
+-	}, {
+-		map[string]interface{}{"v": int64(4294967296)},
+-		"v: 4294967296\n",
+-	}, {
+-		map[string]int64{"v": int64(4294967296)},
+-		"v: 4294967296\n",
+-	}, {
+-		map[string]uint64{"v": 4294967296},
+-		"v: 4294967296\n",
+-	}, {
+-		map[string]interface{}{"v": "10"},
+-		"v: \"10\"\n",
+-	}, {
+-		map[string]interface{}{"v": 0.1},
+-		"v: 0.1\n",
+-	}, {
+-		map[string]interface{}{"v": float64(0.1)},
+-		"v: 0.1\n",
+-	}, {
+-		map[string]interface{}{"v": -0.1},
+-		"v: -0.1\n",
+-	}, {
+-		map[string]interface{}{"v": math.Inf(+1)},
+-		"v: .inf\n",
+-	}, {
+-		map[string]interface{}{"v": math.Inf(-1)},
+-		"v: -.inf\n",
+-	}, {
+-		map[string]interface{}{"v": math.NaN()},
+-		"v: .nan\n",
+-	}, {
+-		map[string]interface{}{"v": nil},
+-		"v: null\n",
+-	}, {
+-		map[string]interface{}{"v": ""},
+-		"v: \"\"\n",
+-	}, {
+-		map[string][]string{"v": []string{"A", "B"}},
+-		"v:\n- A\n- B\n",
+-	}, {
+-		map[string][]string{"v": []string{"A", "B\nC"}},
+-		"v:\n- A\n- 'B\n\n  C'\n",
+-	}, {
+-		map[string][]interface{}{"v": []interface{}{"A", 1, map[string][]int{"B": []int{2, 3}}}},
+-		"v:\n- A\n- 1\n- B:\n  - 2\n  - 3\n",
+-	}, {
+-		map[string]interface{}{"a": map[interface{}]interface{}{"b": "c"}},
+-		"a:\n  b: c\n",
+-	}, {
+-		map[string]interface{}{"a": "-"},
+-		"a: '-'\n",
+-	},
+-
+-	// Simple values.
+-	{
+-		&marshalIntTest,
+-		"123\n",
+-	},
+-
+-	// Structures
+-	{
+-		&struct{ Hello string }{"world"},
+-		"hello: world\n",
+-	}, {
+-		&struct {
+-			A struct {
+-				B string
+-			}
+-		}{struct{ B string }{"c"}},
+-		"a:\n  b: c\n",
+-	}, {
+-		&struct {
+-			A *struct {
+-				B string
+-			}
+-		}{&struct{ B string }{"c"}},
+-		"a:\n  b: c\n",
+-	}, {
+-		&struct {
+-			A *struct {
+-				B string
+-			}
+-		}{},
+-		"a: null\n",
+-	}, {
+-		&struct{ A int }{1},
+-		"a: 1\n",
+-	}, {
+-		&struct{ A []int }{[]int{1, 2}},
+-		"a:\n- 1\n- 2\n",
+-	}, {
+-		&struct {
+-			B int "a"
+-		}{1},
+-		"a: 1\n",
+-	}, {
+-		&struct{ A bool }{true},
+-		"a: true\n",
+-	},
+-
+-	// Conditional flag
+-	{
+-		&struct {
+-			A int "a,omitempty"
+-			B int "b,omitempty"
+-		}{1, 0},
+-		"a: 1\n",
+-	}, {
+-		&struct {
+-			A int "a,omitempty"
+-			B int "b,omitempty"
+-		}{0, 0},
+-		"{}\n",
+-	}, {
+-		&struct {
+-			A *struct{ X int } "a,omitempty"
+-			B int              "b,omitempty"
+-		}{nil, 0},
+-		"{}\n",
+-	},
+-
+-	// Flow flag
+-	{
+-		&struct {
+-			A []int "a,flow"
+-		}{[]int{1, 2}},
+-		"a: [1, 2]\n",
+-	}, {
+-		&struct {
+-			A map[string]string "a,flow"
+-		}{map[string]string{"b": "c", "d": "e"}},
+-		"a: {b: c, d: e}\n",
+-	}, {
+-		&struct {
+-			A struct {
+-				B, D string
+-			} "a,flow"
+-		}{struct{ B, D string }{"c", "e"}},
+-		"a: {b: c, d: e}\n",
+-	},
+-
+-	// Unexported field
+-	{
+-		&struct {
+-			u int
+-			A int
+-		}{0, 1},
+-		"a: 1\n",
+-	},
+-
+-	// Ignored field
+-	{
+-		&struct {
+-			A int
+-			B int "-"
+-		}{1, 2},
+-		"a: 1\n",
+-	},
+-
+-	// Struct inlining
+-	{
+-		&struct {
+-			A int
+-			C inlineB `yaml:",inline"`
+-		}{1, inlineB{2, inlineC{3}}},
+-		"a: 1\nb: 2\nc: 3\n",
+-	},
+-
+-	// Duration
+-	{
+-		map[string]time.Duration{"a": 3 * time.Second},
+-		"a: 3s\n",
+-	},
+-}
+-
+-func (s *S) TestMarshal(c *C) {
+-	for _, item := range marshalTests {
+-		data, err := yaml.Marshal(item.value)
+-		c.Assert(err, IsNil)
+-		c.Assert(string(data), Equals, item.data)
+-	}
+-}
+-
+-var marshalErrorTests = []struct {
+-	value interface{}
+-	error string
+-}{
+-	{
+-		&struct {
+-			B       int
+-			inlineB ",inline"
+-		}{1, inlineB{2, inlineC{3}}},
+-		`Duplicated key 'b' in struct struct \{ B int; .*`,
+-	},
+-}
+-
+-func (s *S) TestMarshalErrors(c *C) {
+-	for _, item := range marshalErrorTests {
+-		_, err := yaml.Marshal(item.value)
+-		c.Assert(err, ErrorMatches, item.error)
+-	}
+-}
+-
+-var marshalTaggedIfaceTest interface{} = &struct{ A string }{"B"}
+-
+-var getterTests = []struct {
+-	data, tag string
+-	value     interface{}
+-}{
+-	{"_:\n  hi: there\n", "", map[interface{}]interface{}{"hi": "there"}},
+-	{"_:\n- 1\n- A\n", "", []interface{}{1, "A"}},
+-	{"_: 10\n", "", 10},
+-	{"_: null\n", "", nil},
+-	{"_: !foo BAR!\n", "!foo", "BAR!"},
+-	{"_: !foo 1\n", "!foo", "1"},
+-	{"_: !foo '\"1\"'\n", "!foo", "\"1\""},
+-	{"_: !foo 1.1\n", "!foo", 1.1},
+-	{"_: !foo 1\n", "!foo", 1},
+-	{"_: !foo 1\n", "!foo", uint(1)},
+-	{"_: !foo true\n", "!foo", true},
+-	{"_: !foo\n- A\n- B\n", "!foo", []string{"A", "B"}},
+-	{"_: !foo\n  A: B\n", "!foo", map[string]string{"A": "B"}},
+-	{"_: !foo\n  a: B\n", "!foo", &marshalTaggedIfaceTest},
+-}
+-
+-func (s *S) TestMarshalTypeCache(c *C) {
+-	var data []byte
+-	var err error
+-	func() {
+-		type T struct{ A int }
+-		data, err = yaml.Marshal(&T{})
+-		c.Assert(err, IsNil)
+-	}()
+-	func() {
+-		type T struct{ B int }
+-		data, err = yaml.Marshal(&T{})
+-		c.Assert(err, IsNil)
+-	}()
+-	c.Assert(string(data), Equals, "b: 0\n")
+-}
+-
+-type typeWithGetter struct {
+-	tag   string
+-	value interface{}
+-}
+-
+-func (o typeWithGetter) GetYAML() (tag string, value interface{}) {
+-	return o.tag, o.value
+-}
+-
+-type typeWithGetterField struct {
+-	Field typeWithGetter "_"
+-}
+-
+-func (s *S) TestMashalWithGetter(c *C) {
+-	for _, item := range getterTests {
+-		obj := &typeWithGetterField{}
+-		obj.Field.tag = item.tag
+-		obj.Field.value = item.value
+-		data, err := yaml.Marshal(obj)
+-		c.Assert(err, IsNil)
+-		c.Assert(string(data), Equals, string(item.data))
+-	}
+-}
+-
+-func (s *S) TestUnmarshalWholeDocumentWithGetter(c *C) {
+-	obj := &typeWithGetter{}
+-	obj.tag = ""
+-	obj.value = map[string]string{"hello": "world!"}
+-	data, err := yaml.Marshal(obj)
+-	c.Assert(err, IsNil)
+-	c.Assert(string(data), Equals, "hello: world!\n")
+-}
+-
+-func (s *S) TestSortedOutput(c *C) {
+-	order := []interface{}{
+-		false,
+-		true,
+-		1,
+-		uint(1),
+-		1.0,
+-		1.1,
+-		1.2,
+-		2,
+-		uint(2),
+-		2.0,
+-		2.1,
+-		"",
+-		".1",
+-		".2",
+-		".a",
+-		"1",
+-		"2",
+-		"a!10",
+-		"a/2",
+-		"a/10",
+-		"a~10",
+-		"ab/1",
+-		"b/1",
+-		"b/01",
+-		"b/2",
+-		"b/02",
+-		"b/3",
+-		"b/03",
+-		"b1",
+-		"b01",
+-		"b3",
+-		"c2.10",
+-		"c10.2",
+-		"d1",
+-		"d12",
+-		"d12a",
+-	}
+-	m := make(map[interface{}]int)
+-	for _, k := range order {
+-		m[k] = 1
+-	}
+-	data, err := yaml.Marshal(m)
+-	c.Assert(err, IsNil)
+-	out := "\n" + string(data)
+-	last := 0
+-	for i, k := range order {
+-		repr := fmt.Sprint(k)
+-		if s, ok := k.(string); ok {
+-			if _, err = strconv.ParseFloat(repr, 32); s == "" || err == nil {
+-				repr = `"` + repr + `"`
+-			}
+-		}
+-		index := strings.Index(out, "\n"+repr+":")
+-		if index == -1 {
+-			c.Fatalf("%#v is not in the output: %#v", k, out)
+-		}
+-		if index < last {
+-			c.Fatalf("%#v was generated before %#v: %q", k, order[i-1], out)
+-		}
+-		last = index
+-	}
+-}
+diff --git a/Godeps/_workspace/src/gopkg.in/v1/yaml/parserc.go b/Godeps/_workspace/src/gopkg.in/v1/yaml/parserc.go
+deleted file mode 100644
+index 0a7037a..0000000
+--- a/Godeps/_workspace/src/gopkg.in/v1/yaml/parserc.go
++++ /dev/null
+@@ -1,1096 +0,0 @@
+-package yaml
+-
+-import (
+-	"bytes"
+-)
+-
+-// The parser implements the following grammar:
+-//
+-// stream               ::= STREAM-START implicit_document? explicit_document* STREAM-END
+-// implicit_document    ::= block_node DOCUMENT-END*
+-// explicit_document    ::= DIRECTIVE* DOCUMENT-START block_node? DOCUMENT-END*
+-// block_node_or_indentless_sequence    ::=
+-//                          ALIAS
+-//                          | properties (block_content | indentless_block_sequence)?
+-//                          | block_content
+-//                          | indentless_block_sequence
+-// block_node           ::= ALIAS
+-//                          | properties block_content?
+-//                          | block_content
+-// flow_node            ::= ALIAS
+-//                          | properties flow_content?
+-//                          | flow_content
+-// properties           ::= TAG ANCHOR? | ANCHOR TAG?
+-// block_content        ::= block_collection | flow_collection | SCALAR
+-// flow_content         ::= flow_collection | SCALAR
+-// block_collection     ::= block_sequence | block_mapping
+-// flow_collection      ::= flow_sequence | flow_mapping
+-// block_sequence       ::= BLOCK-SEQUENCE-START (BLOCK-ENTRY block_node?)* BLOCK-END
+-// indentless_sequence  ::= (BLOCK-ENTRY block_node?)+
+-// block_mapping        ::= BLOCK-MAPPING_START
+-//                          ((KEY block_node_or_indentless_sequence?)?
+-//                          (VALUE block_node_or_indentless_sequence?)?)*
+-//                          BLOCK-END
+-// flow_sequence        ::= FLOW-SEQUENCE-START
+-//                          (flow_sequence_entry FLOW-ENTRY)*
+-//                          flow_sequence_entry?
+-//                          FLOW-SEQUENCE-END
+-// flow_sequence_entry  ::= flow_node | KEY flow_node? (VALUE flow_node?)?
+-// flow_mapping         ::= FLOW-MAPPING-START
+-//                          (flow_mapping_entry FLOW-ENTRY)*
+-//                          flow_mapping_entry?
+-//                          FLOW-MAPPING-END
+-// flow_mapping_entry   ::= flow_node | KEY flow_node? (VALUE flow_node?)?
+-
+-// Peek the next token in the token queue.
+-func peek_token(parser *yaml_parser_t) *yaml_token_t {
+-	if parser.token_available || yaml_parser_fetch_more_tokens(parser) {
+-		return &parser.tokens[parser.tokens_head]
+-	}
+-	return nil
+-}
+-
+-// Remove the next token from the queue (must be called after peek_token).
+-func skip_token(parser *yaml_parser_t) {
+-	parser.token_available = false
+-	parser.tokens_parsed++
+-	parser.stream_end_produced = parser.tokens[parser.tokens_head].typ == yaml_STREAM_END_TOKEN
+-	parser.tokens_head++
+-}
+-
+-// Get the next event.
+-func yaml_parser_parse(parser *yaml_parser_t, event *yaml_event_t) bool {
+-	// Erase the event object.
+-	*event = yaml_event_t{}
+-
+-	// No events after the end of the stream or error.
+-	if parser.stream_end_produced || parser.error != yaml_NO_ERROR || parser.state == yaml_PARSE_END_STATE {
+-		return true
+-	}
+-
+-	// Generate the next event.
+-	return yaml_parser_state_machine(parser, event)
+-}
+-
+-// Set parser error.
+-func yaml_parser_set_parser_error(parser *yaml_parser_t, problem string, problem_mark yaml_mark_t) bool {
+-	parser.error = yaml_PARSER_ERROR
+-	parser.problem = problem
+-	parser.problem_mark = problem_mark
+-	return false
+-}
+-
+-func yaml_parser_set_parser_error_context(parser *yaml_parser_t, context string, context_mark yaml_mark_t, problem string, problem_mark yaml_mark_t) bool {
+-	parser.error = yaml_PARSER_ERROR
+-	parser.context = context
+-	parser.context_mark = context_mark
+-	parser.problem = problem
+-	parser.problem_mark = problem_mark
+-	return false
+-}
+-
+-// State dispatcher.
+-func yaml_parser_state_machine(parser *yaml_parser_t, event *yaml_event_t) bool {
+-	//trace("yaml_parser_state_machine", "state:", parser.state.String())
+-
+-	switch parser.state {
+-	case yaml_PARSE_STREAM_START_STATE:
+-		return yaml_parser_parse_stream_start(parser, event)
+-
+-	case yaml_PARSE_IMPLICIT_DOCUMENT_START_STATE:
+-		return yaml_parser_parse_document_start(parser, event, true)
+-
+-	case yaml_PARSE_DOCUMENT_START_STATE:
+-		return yaml_parser_parse_document_start(parser, event, false)
+-
+-	case yaml_PARSE_DOCUMENT_CONTENT_STATE:
+-		return yaml_parser_parse_document_content(parser, event)
+-
+-	case yaml_PARSE_DOCUMENT_END_STATE:
+-		return yaml_parser_parse_document_end(parser, event)
+-
+-	case yaml_PARSE_BLOCK_NODE_STATE:
+-		return yaml_parser_parse_node(parser, event, true, false)
+-
+-	case yaml_PARSE_BLOCK_NODE_OR_INDENTLESS_SEQUENCE_STATE:
+-		return yaml_parser_parse_node(parser, event, true, true)
+-
+-	case yaml_PARSE_FLOW_NODE_STATE:
+-		return yaml_parser_parse_node(parser, event, false, false)
+-
+-	case yaml_PARSE_BLOCK_SEQUENCE_FIRST_ENTRY_STATE:
+-		return yaml_parser_parse_block_sequence_entry(parser, event, true)
+-
+-	case yaml_PARSE_BLOCK_SEQUENCE_ENTRY_STATE:
+-		return yaml_parser_parse_block_sequence_entry(parser, event, false)
+-
+-	case yaml_PARSE_INDENTLESS_SEQUENCE_ENTRY_STATE:
+-		return yaml_parser_parse_indentless_sequence_entry(parser, event)
+-
+-	case yaml_PARSE_BLOCK_MAPPING_FIRST_KEY_STATE:
+-		return yaml_parser_parse_block_mapping_key(parser, event, true)
+-
+-	case yaml_PARSE_BLOCK_MAPPING_KEY_STATE:
+-		return yaml_parser_parse_block_mapping_key(parser, event, false)
+-
+-	case yaml_PARSE_BLOCK_MAPPING_VALUE_STATE:
+-		return yaml_parser_parse_block_mapping_value(parser, event)
+-
+-	case yaml_PARSE_FLOW_SEQUENCE_FIRST_ENTRY_STATE:
+-		return yaml_parser_parse_flow_sequence_entry(parser, event, true)
+-
+-	case yaml_PARSE_FLOW_SEQUENCE_ENTRY_STATE:
+-		return yaml_parser_parse_flow_sequence_entry(parser, event, false)
+-
+-	case yaml_PARSE_FLOW_SEQUENCE_ENTRY_MAPPING_KEY_STATE:
+-		return yaml_parser_parse_flow_sequence_entry_mapping_key(parser, event)
+-
+-	case yaml_PARSE_FLOW_SEQUENCE_ENTRY_MAPPING_VALUE_STATE:
+-		return yaml_parser_parse_flow_sequence_entry_mapping_value(parser, event)
+-
+-	case yaml_PARSE_FLOW_SEQUENCE_ENTRY_MAPPING_END_STATE:
+-		return yaml_parser_parse_flow_sequence_entry_mapping_end(parser, event)
+-
+-	case yaml_PARSE_FLOW_MAPPING_FIRST_KEY_STATE:
+-		return yaml_parser_parse_flow_mapping_key(parser, event, true)
+-
+-	case yaml_PARSE_FLOW_MAPPING_KEY_STATE:
+-		return yaml_parser_parse_flow_mapping_key(parser, event, false)
+-
+-	case yaml_PARSE_FLOW_MAPPING_VALUE_STATE:
+-		return yaml_parser_parse_flow_mapping_value(parser, event, false)
+-
+-	case yaml_PARSE_FLOW_MAPPING_EMPTY_VALUE_STATE:
+-		return yaml_parser_parse_flow_mapping_value(parser, event, true)
+-
+-	default:
+-		panic("invalid parser state")
+-	}
+-	return false
+-}
+-
+-// Parse the production:
+-// stream   ::= STREAM-START implicit_document? explicit_document* STREAM-END
+-//              ************
+-func yaml_parser_parse_stream_start(parser *yaml_parser_t, event *yaml_event_t) bool {
+-	token := peek_token(parser)
+-	if token == nil {
+-		return false
+-	}
+-	if token.typ != yaml_STREAM_START_TOKEN {
+-		return yaml_parser_set_parser_error(parser, "did not find expected <stream-start>", token.start_mark)
+-	}
+-	parser.state = yaml_PARSE_IMPLICIT_DOCUMENT_START_STATE
+-	*event = yaml_event_t{
+-		typ:        yaml_STREAM_START_EVENT,
+-		start_mark: token.start_mark,
+-		end_mark:   token.end_mark,
+-		encoding:   token.encoding,
+-	}
+-	skip_token(parser)
+-	return true
+-}
+-
+-// Parse the productions:
+-// implicit_document    ::= block_node DOCUMENT-END*
+-//                          *
+-// explicit_document    ::= DIRECTIVE* DOCUMENT-START block_node? DOCUMENT-END*
+-//                          *************************
+-func yaml_parser_parse_document_start(parser *yaml_parser_t, event *yaml_event_t, implicit bool) bool {
+-
+-	token := peek_token(parser)
+-	if token == nil {
+-		return false
+-	}
+-
+-	// Parse extra document end indicators.
+-	if !implicit {
+-		for token.typ == yaml_DOCUMENT_END_TOKEN {
+-			skip_token(parser)
+-			token = peek_token(parser)
+-			if token == nil {
+-				return false
+-			}
+-		}
+-	}
+-
+-	if implicit && token.typ != yaml_VERSION_DIRECTIVE_TOKEN &&
+-		token.typ != yaml_TAG_DIRECTIVE_TOKEN &&
+-		token.typ != yaml_DOCUMENT_START_TOKEN &&
+-		token.typ != yaml_STREAM_END_TOKEN {
+-		// Parse an implicit document.
+-		if !yaml_parser_process_directives(parser, nil, nil) {
+-			return false
+-		}
+-		parser.states = append(parser.states, yaml_PARSE_DOCUMENT_END_STATE)
+-		parser.state = yaml_PARSE_BLOCK_NODE_STATE
+-
+-		*event = yaml_event_t{
+-			typ:        yaml_DOCUMENT_START_EVENT,
+-			start_mark: token.start_mark,
+-			end_mark:   token.end_mark,
+-		}
+-
+-	} else if token.typ != yaml_STREAM_END_TOKEN {
+-		// Parse an explicit document.
+-		var version_directive *yaml_version_directive_t
+-		var tag_directives []yaml_tag_directive_t
+-		start_mark := token.start_mark
+-		if !yaml_parser_process_directives(parser, &version_directive, &tag_directives) {
+-			return false
+-		}
+-		token = peek_token(parser)
+-		if token == nil {
+-			return false
+-		}
+-		if token.typ != yaml_DOCUMENT_START_TOKEN {
+-			yaml_parser_set_parser_error(parser,
+-				"did not find expected <document start>", token.start_mark)
+-			return false
+-		}
+-		parser.states = append(parser.states, yaml_PARSE_DOCUMENT_END_STATE)
+-		parser.state = yaml_PARSE_DOCUMENT_CONTENT_STATE
+-		end_mark := token.end_mark
+-
+-		*event = yaml_event_t{
+-			typ:               yaml_DOCUMENT_START_EVENT,
+-			start_mark:        start_mark,
+-			end_mark:          end_mark,
+-			version_directive: version_directive,
+-			tag_directives:    tag_directives,
+-			implicit:          false,
+-		}
+-		skip_token(parser)
+-
+-	} else {
+-		// Parse the stream end.
+-		parser.state = yaml_PARSE_END_STATE
+-		*event = yaml_event_t{
+-			typ:        yaml_STREAM_END_EVENT,
+-			start_mark: token.start_mark,
+-			end_mark:   token.end_mark,
+-		}
+-		skip_token(parser)
+-	}
+-
+-	return true
+-}
+-
+-// Parse the productions:
+-// explicit_document    ::= DIRECTIVE* DOCUMENT-START block_node? DOCUMENT-END*
+-//                                                    ***********
+-//
+-func yaml_parser_parse_document_content(parser *yaml_parser_t, event *yaml_event_t) bool {
+-	token := peek_token(parser)
+-	if token == nil {
+-		return false
+-	}
+-	if token.typ == yaml_VERSION_DIRECTIVE_TOKEN ||
+-		token.typ == yaml_TAG_DIRECTIVE_TOKEN ||
+-		token.typ == yaml_DOCUMENT_START_TOKEN ||
+-		token.typ == yaml_DOCUMENT_END_TOKEN ||
+-		token.typ == yaml_STREAM_END_TOKEN {
+-		parser.state = parser.states[len(parser.states)-1]
+-		parser.states = parser.states[:len(parser.states)-1]
+-		return yaml_parser_process_empty_scalar(parser, event,
+-			token.start_mark)
+-	}
+-	return yaml_parser_parse_node(parser, event, true, false)
+-}
+-
+-// Parse the productions:
+-// implicit_document    ::= block_node DOCUMENT-END*
+-//                                     *************
+-// explicit_document    ::= DIRECTIVE* DOCUMENT-START block_node? DOCUMENT-END*
+-//
+-func yaml_parser_parse_document_end(parser *yaml_parser_t, event *yaml_event_t) bool {
+-	token := peek_token(parser)
+-	if token == nil {
+-		return false
+-	}
+-
+-	start_mark := token.start_mark
+-	end_mark := token.start_mark
+-
+-	implicit := true
+-	if token.typ == yaml_DOCUMENT_END_TOKEN {
+-		end_mark = token.end_mark
+-		skip_token(parser)
+-		implicit = false
+-	}
+-
+-	parser.tag_directives = parser.tag_directives[:0]
+-
+-	parser.state = yaml_PARSE_DOCUMENT_START_STATE
+-	*event = yaml_event_t{
+-		typ:        yaml_DOCUMENT_END_EVENT,
+-		start_mark: start_mark,
+-		end_mark:   end_mark,
+-		implicit:   implicit,
+-	}
+-	return true
+-}
+-
+-// Parse the productions:
+-// block_node_or_indentless_sequence    ::=
+-//                          ALIAS
+-//                          *****
+-//                          | properties (block_content | indentless_block_sequence)?
+-//                            **********  *
+-//                          | block_content | indentless_block_sequence
+-//                            *
+-// block_node           ::= ALIAS
+-//                          *****
+-//                          | properties block_content?
+-//                            ********** *
+-//                          | block_content
+-//                            *
+-// flow_node            ::= ALIAS
+-//                          *****
+-//                          | properties flow_content?
+-//                            ********** *
+-//                          | flow_content
+-//                            *
+-// properties           ::= TAG ANCHOR? | ANCHOR TAG?
+-//                          *************************
+-// block_content        ::= block_collection | flow_collection | SCALAR
+-//                                                               ******
+-// flow_content         ::= flow_collection | SCALAR
+-//                                            ******
+-func yaml_parser_parse_node(parser *yaml_parser_t, event *yaml_event_t, block, indentless_sequence bool) bool {
+-	//defer trace("yaml_parser_parse_node", "block:", block, "indentless_sequence:", indentless_sequence)()
+-
+-	token := peek_token(parser)
+-	if token == nil {
+-		return false
+-	}
+-
+-	if token.typ == yaml_ALIAS_TOKEN {
+-		parser.state = parser.states[len(parser.states)-1]
+-		parser.states = parser.states[:len(parser.states)-1]
+-		*event = yaml_event_t{
+-			typ:        yaml_ALIAS_EVENT,
+-			start_mark: token.start_mark,
+-			end_mark:   token.end_mark,
+-			anchor:     token.value,
+-		}
+-		skip_token(parser)
+-		return true
+-	}
+-
+-	start_mark := token.start_mark
+-	end_mark := token.start_mark
+-
+-	var tag_token bool
+-	var tag_handle, tag_suffix, anchor []byte
+-	var tag_mark yaml_mark_t
+-	if token.typ == yaml_ANCHOR_TOKEN {
+-		anchor = token.value
+-		start_mark = token.start_mark
+-		end_mark = token.end_mark
+-		skip_token(parser)
+-		token = peek_token(parser)
+-		if token == nil {
+-			return false
+-		}
+-		if token.typ == yaml_TAG_TOKEN {
+-			tag_token = true
+-			tag_handle = token.value
+-			tag_suffix = token.suffix
+-			tag_mark = token.start_mark
+-			end_mark = token.end_mark
+-			skip_token(parser)
+-			token = peek_token(parser)
+-			if token == nil {
+-				return false
+-			}
+-		}
+-	} else if token.typ == yaml_TAG_TOKEN {
+-		tag_token = true
+-		tag_handle = token.value
+-		tag_suffix = token.suffix
+-		start_mark = token.start_mark
+-		tag_mark = token.start_mark
+-		end_mark = token.end_mark
+-		skip_token(parser)
+-		token = peek_token(parser)
+-		if token == nil {
+-			return false
+-		}
+-		if token.typ == yaml_ANCHOR_TOKEN {
+-			anchor = token.value
+-			end_mark = token.end_mark
+-			skip_token(parser)
+-			token = peek_token(parser)
+-			if token == nil {
+-				return false
+-			}
+-		}
+-	}
+-
+-	var tag []byte
+-	if tag_token {
+-		if len(tag_handle) == 0 {
+-			tag = tag_suffix
+-			tag_suffix = nil
+-		} else {
+-			for i := range parser.tag_directives {
+-				if bytes.Equal(parser.tag_directives[i].handle, tag_handle) {
+-					tag = append([]byte(nil), parser.tag_directives[i].prefix...)
+-					tag = append(tag, tag_suffix...)
+-					break
+-				}
+-			}
+-			if len(tag) == 0 {
+-				yaml_parser_set_parser_error_context(parser,
+-					"while parsing a node", start_mark,
+-					"found undefined tag handle", tag_mark)
+-				return false
+-			}
+-		}
+-	}
+-
+-	implicit := len(tag) == 0
+-	if indentless_sequence && token.typ == yaml_BLOCK_ENTRY_TOKEN {
+-		end_mark = token.end_mark
+-		parser.state = yaml_PARSE_INDENTLESS_SEQUENCE_ENTRY_STATE
+-		*event = yaml_event_t{
+-			typ:        yaml_SEQUENCE_START_EVENT,
+-			start_mark: start_mark,
+-			end_mark:   end_mark,
+-			anchor:     anchor,
+-			tag:        tag,
+-			implicit:   implicit,
+-			style:      yaml_style_t(yaml_BLOCK_SEQUENCE_STYLE),
+-		}
+-		return true
+-	}
+-	if token.typ == yaml_SCALAR_TOKEN {
+-		var plain_implicit, quoted_implicit bool
+-		end_mark = token.end_mark
+-		if (len(tag) == 0 && token.style == yaml_PLAIN_SCALAR_STYLE) || (len(tag) == 1 && tag[0] == '!') {
+-			plain_implicit = true
+-		} else if len(tag) == 0 {
+-			quoted_implicit = true
+-		}
+-		parser.state = parser.states[len(parser.states)-1]
+-		parser.states = parser.states[:len(parser.states)-1]
+-
+-		*event = yaml_event_t{
+-			typ:             yaml_SCALAR_EVENT,
+-			start_mark:      start_mark,
+-			end_mark:        end_mark,
+-			anchor:          anchor,
+-			tag:             tag,
+-			value:           token.value,
+-			implicit:        plain_implicit,
+-			quoted_implicit: quoted_implicit,
+-			style:           yaml_style_t(token.style),
+-		}
+-		skip_token(parser)
+-		return true
+-	}
+-	if token.typ == yaml_FLOW_SEQUENCE_START_TOKEN {
+-		// [Go] Some of the events below can be merged as they differ only on style.
+-		end_mark = token.end_mark
+-		parser.state = yaml_PARSE_FLOW_SEQUENCE_FIRST_ENTRY_STATE
+-		*event = yaml_event_t{
+-			typ:        yaml_SEQUENCE_START_EVENT,
+-			start_mark: start_mark,
+-			end_mark:   end_mark,
+-			anchor:     anchor,
+-			tag:        tag,
+-			implicit:   implicit,
+-			style:      yaml_style_t(yaml_FLOW_SEQUENCE_STYLE),
+-		}
+-		return true
+-	}
+-	if token.typ == yaml_FLOW_MAPPING_START_TOKEN {
+-		end_mark = token.end_mark
+-		parser.state = yaml_PARSE_FLOW_MAPPING_FIRST_KEY_STATE
+-		*event = yaml_event_t{
+-			typ:        yaml_MAPPING_START_EVENT,
+-			start_mark: start_mark,
+-			end_mark:   end_mark,
+-			anchor:     anchor,
+-			tag:        tag,
+-			implicit:   implicit,
+-			style:      yaml_style_t(yaml_FLOW_MAPPING_STYLE),
+-		}
+-		return true
+-	}
+-	if block && token.typ == yaml_BLOCK_SEQUENCE_START_TOKEN {
+-		end_mark = token.end_mark
+-		parser.state = yaml_PARSE_BLOCK_SEQUENCE_FIRST_ENTRY_STATE
+-		*event = yaml_event_t{
+-			typ:        yaml_SEQUENCE_START_EVENT,
+-			start_mark: start_mark,
+-			end_mark:   end_mark,
+-			anchor:     anchor,
+-			tag:        tag,
+-			implicit:   implicit,
+-			style:      yaml_style_t(yaml_BLOCK_SEQUENCE_STYLE),
+-		}
+-		return true
+-	}
+-	if block && token.typ == yaml_BLOCK_MAPPING_START_TOKEN {
+-		end_mark = token.end_mark
+-		parser.state = yaml_PARSE_BLOCK_MAPPING_FIRST_KEY_STATE
+-		*event = yaml_event_t{
+-			typ:        yaml_MAPPING_START_EVENT,
+-			start_mark: start_mark,
+-			end_mark:   end_mark,
+-			anchor:     anchor,
+-			tag:        tag,
+-			implicit:   implicit,
+-			style:      yaml_style_t(yaml_BLOCK_MAPPING_STYLE),
+-		}
+-		return true
+-	}
+-	if len(anchor) > 0 || len(tag) > 0 {
+-		parser.state = parser.states[len(parser.states)-1]
+-		parser.states = parser.states[:len(parser.states)-1]
+-
+-		*event = yaml_event_t{
+-			typ:             yaml_SCALAR_EVENT,
+-			start_mark:      start_mark,
+-			end_mark:        end_mark,
+-			anchor:          anchor,
+-			tag:             tag,
+-			implicit:        implicit,
+-			quoted_implicit: false,
+-			style:           yaml_style_t(yaml_PLAIN_SCALAR_STYLE),
+-		}
+-		return true
+-	}
+-
+-	context := "while parsing a flow node"
+-	if block {
+-		context = "while parsing a block node"
+-	}
+-	yaml_parser_set_parser_error_context(parser, context, start_mark,
+-		"did not find expected node content", token.start_mark)
+-	return false
+-}
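The functions in this deleted file form a pushdown state machine: `parser.states` is a stack of "where to resume" states, pushed before descending into a nested node and popped when an event completes it. A minimal standalone sketch of that recurring push/pop idiom (the `parser` type and names here are illustrative, not the library's):

```go
package main

import "fmt"

// parser is a toy stand-in for yaml_parser_t: a current state plus a
// stack of states to resume once the nested construct is finished.
type parser struct {
	state  string
	states []string
}

// push records the state to return to after a nested node is parsed,
// as in: parser.states = append(parser.states, yaml_PARSE_..._STATE)
func (p *parser) push(s string) { p.states = append(p.states, s) }

// popState mirrors the idiom repeated throughout the file:
//   parser.state = parser.states[len(parser.states)-1]
//   parser.states = parser.states[:len(parser.states)-1]
func (p *parser) popState() {
	p.state = p.states[len(p.states)-1]
	p.states = p.states[:len(p.states)-1]
}

func main() {
	p := &parser{state: "PARSE_NODE"}
	p.push("PARSE_BLOCK_SEQUENCE_ENTRY") // resume here after the nested node
	p.popState()                         // nested node emitted its event
	fmt.Println(p.state) // PARSE_BLOCK_SEQUENCE_ENTRY
}
```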
+-
+-// Parse the productions:
+-// block_sequence ::= BLOCK-SEQUENCE-START (BLOCK-ENTRY block_node?)* BLOCK-END
+-//                    ********************  *********** *             *********
+-//
+-func yaml_parser_parse_block_sequence_entry(parser *yaml_parser_t, event *yaml_event_t, first bool) bool {
+-	if first {
+-		token := peek_token(parser)
+-		parser.marks = append(parser.marks, token.start_mark)
+-		skip_token(parser)
+-	}
+-
+-	token := peek_token(parser)
+-	if token == nil {
+-		return false
+-	}
+-
+-	if token.typ == yaml_BLOCK_ENTRY_TOKEN {
+-		mark := token.end_mark
+-		skip_token(parser)
+-		token = peek_token(parser)
+-		if token == nil {
+-			return false
+-		}
+-		if token.typ != yaml_BLOCK_ENTRY_TOKEN && token.typ != yaml_BLOCK_END_TOKEN {
+-			parser.states = append(parser.states, yaml_PARSE_BLOCK_SEQUENCE_ENTRY_STATE)
+-			return yaml_parser_parse_node(parser, event, true, false)
+-		} else {
+-			parser.state = yaml_PARSE_BLOCK_SEQUENCE_ENTRY_STATE
+-			return yaml_parser_process_empty_scalar(parser, event, mark)
+-		}
+-	}
+-	if token.typ == yaml_BLOCK_END_TOKEN {
+-		parser.state = parser.states[len(parser.states)-1]
+-		parser.states = parser.states[:len(parser.states)-1]
+-		parser.marks = parser.marks[:len(parser.marks)-1]
+-
+-		*event = yaml_event_t{
+-			typ:        yaml_SEQUENCE_END_EVENT,
+-			start_mark: token.start_mark,
+-			end_mark:   token.end_mark,
+-		}
+-
+-		skip_token(parser)
+-		return true
+-	}
+-
+-	context_mark := parser.marks[len(parser.marks)-1]
+-	parser.marks = parser.marks[:len(parser.marks)-1]
+-	return yaml_parser_set_parser_error_context(parser,
+-		"while parsing a block collection", context_mark,
+-		"did not find expected '-' indicator", token.start_mark)
+-}
+-
+-// Parse the productions:
+-// indentless_sequence  ::= (BLOCK-ENTRY block_node?)+
+-//                           *********** *
+-func yaml_parser_parse_indentless_sequence_entry(parser *yaml_parser_t, event *yaml_event_t) bool {
+-	token := peek_token(parser)
+-	if token == nil {
+-		return false
+-	}
+-
+-	if token.typ == yaml_BLOCK_ENTRY_TOKEN {
+-		mark := token.end_mark
+-		skip_token(parser)
+-		token = peek_token(parser)
+-		if token == nil {
+-			return false
+-		}
+-		if token.typ != yaml_BLOCK_ENTRY_TOKEN &&
+-			token.typ != yaml_KEY_TOKEN &&
+-			token.typ != yaml_VALUE_TOKEN &&
+-			token.typ != yaml_BLOCK_END_TOKEN {
+-			parser.states = append(parser.states, yaml_PARSE_INDENTLESS_SEQUENCE_ENTRY_STATE)
+-			return yaml_parser_parse_node(parser, event, true, false)
+-		}
+-		parser.state = yaml_PARSE_INDENTLESS_SEQUENCE_ENTRY_STATE
+-		return yaml_parser_process_empty_scalar(parser, event, mark)
+-	}
+-	parser.state = parser.states[len(parser.states)-1]
+-	parser.states = parser.states[:len(parser.states)-1]
+-
+-	*event = yaml_event_t{
+-		typ:        yaml_SEQUENCE_END_EVENT,
+-		start_mark: token.start_mark,
+-		end_mark:   token.start_mark, // [Go] Shouldn't this be token.end_mark?
+-	}
+-	return true
+-}
+-
+-// Parse the productions:
+-// block_mapping        ::= BLOCK-MAPPING_START
+-//                          *******************
+-//                          ((KEY block_node_or_indentless_sequence?)?
+-//                            *** *
+-//                          (VALUE block_node_or_indentless_sequence?)?)*
+-//
+-//                          BLOCK-END
+-//                          *********
+-//
+-func yaml_parser_parse_block_mapping_key(parser *yaml_parser_t, event *yaml_event_t, first bool) bool {
+-	if first {
+-		token := peek_token(parser)
+-		parser.marks = append(parser.marks, token.start_mark)
+-		skip_token(parser)
+-	}
+-
+-	token := peek_token(parser)
+-	if token == nil {
+-		return false
+-	}
+-
+-	if token.typ == yaml_KEY_TOKEN {
+-		mark := token.end_mark
+-		skip_token(parser)
+-		token = peek_token(parser)
+-		if token == nil {
+-			return false
+-		}
+-		if token.typ != yaml_KEY_TOKEN &&
+-			token.typ != yaml_VALUE_TOKEN &&
+-			token.typ != yaml_BLOCK_END_TOKEN {
+-			parser.states = append(parser.states, yaml_PARSE_BLOCK_MAPPING_VALUE_STATE)
+-			return yaml_parser_parse_node(parser, event, true, true)
+-		} else {
+-			parser.state = yaml_PARSE_BLOCK_MAPPING_VALUE_STATE
+-			return yaml_parser_process_empty_scalar(parser, event, mark)
+-		}
+-	} else if token.typ == yaml_BLOCK_END_TOKEN {
+-		parser.state = parser.states[len(parser.states)-1]
+-		parser.states = parser.states[:len(parser.states)-1]
+-		parser.marks = parser.marks[:len(parser.marks)-1]
+-		*event = yaml_event_t{
+-			typ:        yaml_MAPPING_END_EVENT,
+-			start_mark: token.start_mark,
+-			end_mark:   token.end_mark,
+-		}
+-		skip_token(parser)
+-		return true
+-	}
+-
+-	context_mark := parser.marks[len(parser.marks)-1]
+-	parser.marks = parser.marks[:len(parser.marks)-1]
+-	return yaml_parser_set_parser_error_context(parser,
+-		"while parsing a block mapping", context_mark,
+-		"did not find expected key", token.start_mark)
+-}
+-
+-// Parse the productions:
+-// block_mapping        ::= BLOCK-MAPPING_START
+-//
+-//                          ((KEY block_node_or_indentless_sequence?)?
+-//
+-//                          (VALUE block_node_or_indentless_sequence?)?)*
+-//                           ***** *
+-//                          BLOCK-END
+-//
+-//
+-func yaml_parser_parse_block_mapping_value(parser *yaml_parser_t, event *yaml_event_t) bool {
+-	token := peek_token(parser)
+-	if token == nil {
+-		return false
+-	}
+-	if token.typ == yaml_VALUE_TOKEN {
+-		mark := token.end_mark
+-		skip_token(parser)
+-		token = peek_token(parser)
+-		if token == nil {
+-			return false
+-		}
+-		if token.typ != yaml_KEY_TOKEN &&
+-			token.typ != yaml_VALUE_TOKEN &&
+-			token.typ != yaml_BLOCK_END_TOKEN {
+-			parser.states = append(parser.states, yaml_PARSE_BLOCK_MAPPING_KEY_STATE)
+-			return yaml_parser_parse_node(parser, event, true, true)
+-		}
+-		parser.state = yaml_PARSE_BLOCK_MAPPING_KEY_STATE
+-		return yaml_parser_process_empty_scalar(parser, event, mark)
+-	}
+-	parser.state = yaml_PARSE_BLOCK_MAPPING_KEY_STATE
+-	return yaml_parser_process_empty_scalar(parser, event, token.start_mark)
+-}
+-
+-// Parse the productions:
+-// flow_sequence        ::= FLOW-SEQUENCE-START
+-//                          *******************
+-//                          (flow_sequence_entry FLOW-ENTRY)*
+-//                           *                   **********
+-//                          flow_sequence_entry?
+-//                          *
+-//                          FLOW-SEQUENCE-END
+-//                          *****************
+-// flow_sequence_entry  ::= flow_node | KEY flow_node? (VALUE flow_node?)?
+-//                          *
+-//
+-func yaml_parser_parse_flow_sequence_entry(parser *yaml_parser_t, event *yaml_event_t, first bool) bool {
+-	if first {
+-		token := peek_token(parser)
+-		parser.marks = append(parser.marks, token.start_mark)
+-		skip_token(parser)
+-	}
+-	token := peek_token(parser)
+-	if token == nil {
+-		return false
+-	}
+-	if token.typ != yaml_FLOW_SEQUENCE_END_TOKEN {
+-		if !first {
+-			if token.typ == yaml_FLOW_ENTRY_TOKEN {
+-				skip_token(parser)
+-				token = peek_token(parser)
+-				if token == nil {
+-					return false
+-				}
+-			} else {
+-				context_mark := parser.marks[len(parser.marks)-1]
+-				parser.marks = parser.marks[:len(parser.marks)-1]
+-				return yaml_parser_set_parser_error_context(parser,
+-					"while parsing a flow sequence", context_mark,
+-					"did not find expected ',' or ']'", token.start_mark)
+-			}
+-		}
+-
+-		if token.typ == yaml_KEY_TOKEN {
+-			parser.state = yaml_PARSE_FLOW_SEQUENCE_ENTRY_MAPPING_KEY_STATE
+-			*event = yaml_event_t{
+-				typ:        yaml_MAPPING_START_EVENT,
+-				start_mark: token.start_mark,
+-				end_mark:   token.end_mark,
+-				implicit:   true,
+-				style:      yaml_style_t(yaml_FLOW_MAPPING_STYLE),
+-			}
+-			skip_token(parser)
+-			return true
+-		} else if token.typ != yaml_FLOW_SEQUENCE_END_TOKEN {
+-			parser.states = append(parser.states, yaml_PARSE_FLOW_SEQUENCE_ENTRY_STATE)
+-			return yaml_parser_parse_node(parser, event, false, false)
+-		}
+-	}
+-
+-	parser.state = parser.states[len(parser.states)-1]
+-	parser.states = parser.states[:len(parser.states)-1]
+-	parser.marks = parser.marks[:len(parser.marks)-1]
+-
+-	*event = yaml_event_t{
+-		typ:        yaml_SEQUENCE_END_EVENT,
+-		start_mark: token.start_mark,
+-		end_mark:   token.end_mark,
+-	}
+-
+-	skip_token(parser)
+-	return true
+-}
+-
+-//
+-// Parse the productions:
+-// flow_sequence_entry  ::= flow_node | KEY flow_node? (VALUE flow_node?)?
+-//                                      *** *
+-//
+-func yaml_parser_parse_flow_sequence_entry_mapping_key(parser *yaml_parser_t, event *yaml_event_t) bool {
+-	token := peek_token(parser)
+-	if token == nil {
+-		return false
+-	}
+-	if token.typ != yaml_VALUE_TOKEN &&
+-		token.typ != yaml_FLOW_ENTRY_TOKEN &&
+-		token.typ != yaml_FLOW_SEQUENCE_END_TOKEN {
+-		parser.states = append(parser.states, yaml_PARSE_FLOW_SEQUENCE_ENTRY_MAPPING_VALUE_STATE)
+-		return yaml_parser_parse_node(parser, event, false, false)
+-	}
+-	mark := token.end_mark
+-	skip_token(parser)
+-	parser.state = yaml_PARSE_FLOW_SEQUENCE_ENTRY_MAPPING_VALUE_STATE
+-	return yaml_parser_process_empty_scalar(parser, event, mark)
+-}
+-
+-// Parse the productions:
+-// flow_sequence_entry  ::= flow_node | KEY flow_node? (VALUE flow_node?)?
+-//                                                      ***** *
+-//
+-func yaml_parser_parse_flow_sequence_entry_mapping_value(parser *yaml_parser_t, event *yaml_event_t) bool {
+-	token := peek_token(parser)
+-	if token == nil {
+-		return false
+-	}
+-	if token.typ == yaml_VALUE_TOKEN {
+-		skip_token(parser)
+-		token := peek_token(parser)
+-		if token == nil {
+-			return false
+-		}
+-		if token.typ != yaml_FLOW_ENTRY_TOKEN && token.typ != yaml_FLOW_SEQUENCE_END_TOKEN {
+-			parser.states = append(parser.states, yaml_PARSE_FLOW_SEQUENCE_ENTRY_MAPPING_END_STATE)
+-			return yaml_parser_parse_node(parser, event, false, false)
+-		}
+-	}
+-	parser.state = yaml_PARSE_FLOW_SEQUENCE_ENTRY_MAPPING_END_STATE
+-	return yaml_parser_process_empty_scalar(parser, event, token.start_mark)
+-}
+-
+-// Parse the productions:
+-// flow_sequence_entry  ::= flow_node | KEY flow_node? (VALUE flow_node?)?
+-//                                                                      *
+-//
+-func yaml_parser_parse_flow_sequence_entry_mapping_end(parser *yaml_parser_t, event *yaml_event_t) bool {
+-	token := peek_token(parser)
+-	if token == nil {
+-		return false
+-	}
+-	parser.state = yaml_PARSE_FLOW_SEQUENCE_ENTRY_STATE
+-	*event = yaml_event_t{
+-		typ:        yaml_MAPPING_END_EVENT,
+-		start_mark: token.start_mark,
+-		end_mark:   token.start_mark, // [Go] Shouldn't this be end_mark?
+-	}
+-	return true
+-}
+-
+-// Parse the productions:
+-// flow_mapping         ::= FLOW-MAPPING-START
+-//                          ******************
+-//                          (flow_mapping_entry FLOW-ENTRY)*
+-//                           *                  **********
+-//                          flow_mapping_entry?
+-//                          ******************
+-//                          FLOW-MAPPING-END
+-//                          ****************
+-// flow_mapping_entry   ::= flow_node | KEY flow_node? (VALUE flow_node?)?
+-//                          *           *** *
+-//
+-func yaml_parser_parse_flow_mapping_key(parser *yaml_parser_t, event *yaml_event_t, first bool) bool {
+-	if first {
+-		token := peek_token(parser)
+-		parser.marks = append(parser.marks, token.start_mark)
+-		skip_token(parser)
+-	}
+-
+-	token := peek_token(parser)
+-	if token == nil {
+-		return false
+-	}
+-
+-	if token.typ != yaml_FLOW_MAPPING_END_TOKEN {
+-		if !first {
+-			if token.typ == yaml_FLOW_ENTRY_TOKEN {
+-				skip_token(parser)
+-				token = peek_token(parser)
+-				if token == nil {
+-					return false
+-				}
+-			} else {
+-				context_mark := parser.marks[len(parser.marks)-1]
+-				parser.marks = parser.marks[:len(parser.marks)-1]
+-				return yaml_parser_set_parser_error_context(parser,
+-					"while parsing a flow mapping", context_mark,
+-					"did not find expected ',' or '}'", token.start_mark)
+-			}
+-		}
+-
+-		if token.typ == yaml_KEY_TOKEN {
+-			skip_token(parser)
+-			token = peek_token(parser)
+-			if token == nil {
+-				return false
+-			}
+-			if token.typ != yaml_VALUE_TOKEN &&
+-				token.typ != yaml_FLOW_ENTRY_TOKEN &&
+-				token.typ != yaml_FLOW_MAPPING_END_TOKEN {
+-				parser.states = append(parser.states, yaml_PARSE_FLOW_MAPPING_VALUE_STATE)
+-				return yaml_parser_parse_node(parser, event, false, false)
+-			} else {
+-				parser.state = yaml_PARSE_FLOW_MAPPING_VALUE_STATE
+-				return yaml_parser_process_empty_scalar(parser, event, token.start_mark)
+-			}
+-		} else if token.typ != yaml_FLOW_MAPPING_END_TOKEN {
+-			parser.states = append(parser.states, yaml_PARSE_FLOW_MAPPING_EMPTY_VALUE_STATE)
+-			return yaml_parser_parse_node(parser, event, false, false)
+-		}
+-	}
+-
+-	parser.state = parser.states[len(parser.states)-1]
+-	parser.states = parser.states[:len(parser.states)-1]
+-	parser.marks = parser.marks[:len(parser.marks)-1]
+-	*event = yaml_event_t{
+-		typ:        yaml_MAPPING_END_EVENT,
+-		start_mark: token.start_mark,
+-		end_mark:   token.end_mark,
+-	}
+-	skip_token(parser)
+-	return true
+-}
+-
+-// Parse the productions:
+-// flow_mapping_entry   ::= flow_node | KEY flow_node? (VALUE flow_node?)?
+-//                                   *                  ***** *
+-//
+-func yaml_parser_parse_flow_mapping_value(parser *yaml_parser_t, event *yaml_event_t, empty bool) bool {
+-	token := peek_token(parser)
+-	if token == nil {
+-		return false
+-	}
+-	if empty {
+-		parser.state = yaml_PARSE_FLOW_MAPPING_KEY_STATE
+-		return yaml_parser_process_empty_scalar(parser, event, token.start_mark)
+-	}
+-	if token.typ == yaml_VALUE_TOKEN {
+-		skip_token(parser)
+-		token = peek_token(parser)
+-		if token == nil {
+-			return false
+-		}
+-		if token.typ != yaml_FLOW_ENTRY_TOKEN && token.typ != yaml_FLOW_MAPPING_END_TOKEN {
+-			parser.states = append(parser.states, yaml_PARSE_FLOW_MAPPING_KEY_STATE)
+-			return yaml_parser_parse_node(parser, event, false, false)
+-		}
+-	}
+-	parser.state = yaml_PARSE_FLOW_MAPPING_KEY_STATE
+-	return yaml_parser_process_empty_scalar(parser, event, token.start_mark)
+-}
+-
+-// Generate an empty scalar event.
+-func yaml_parser_process_empty_scalar(parser *yaml_parser_t, event *yaml_event_t, mark yaml_mark_t) bool {
+-	*event = yaml_event_t{
+-		typ:        yaml_SCALAR_EVENT,
+-		start_mark: mark,
+-		end_mark:   mark,
+-		value:      nil, // Empty
+-		implicit:   true,
+-		style:      yaml_style_t(yaml_PLAIN_SCALAR_STYLE),
+-	}
+-	return true
+-}
+-
+-var default_tag_directives = []yaml_tag_directive_t{
+-	{[]byte("!"), []byte("!")},
+-	{[]byte("!!"), []byte("tag:yaml.org,2002:")},
+-}
+-
+-// Parse directives.
+-func yaml_parser_process_directives(parser *yaml_parser_t,
+-	version_directive_ref **yaml_version_directive_t,
+-	tag_directives_ref *[]yaml_tag_directive_t) bool {
+-
+-	var version_directive *yaml_version_directive_t
+-	var tag_directives []yaml_tag_directive_t
+-
+-	token := peek_token(parser)
+-	if token == nil {
+-		return false
+-	}
+-
+-	for token.typ == yaml_VERSION_DIRECTIVE_TOKEN || token.typ == yaml_TAG_DIRECTIVE_TOKEN {
+-		if token.typ == yaml_VERSION_DIRECTIVE_TOKEN {
+-			if version_directive != nil {
+-				yaml_parser_set_parser_error(parser,
+-					"found duplicate %YAML directive", token.start_mark)
+-				return false
+-			}
+-			if token.major != 1 || token.minor != 1 {
+-				yaml_parser_set_parser_error(parser,
+-					"found incompatible YAML document", token.start_mark)
+-				return false
+-			}
+-			version_directive = &yaml_version_directive_t{
+-				major: token.major,
+-				minor: token.minor,
+-			}
+-		} else if token.typ == yaml_TAG_DIRECTIVE_TOKEN {
+-			value := yaml_tag_directive_t{
+-				handle: token.value,
+-				prefix: token.prefix,
+-			}
+-			if !yaml_parser_append_tag_directive(parser, value, false, token.start_mark) {
+-				return false
+-			}
+-			tag_directives = append(tag_directives, value)
+-		}
+-
+-		skip_token(parser)
+-		token = peek_token(parser)
+-		if token == nil {
+-			return false
+-		}
+-	}
+-
+-	for i := range default_tag_directives {
+-		if !yaml_parser_append_tag_directive(parser, default_tag_directives[i], true, token.start_mark) {
+-			return false
+-		}
+-	}
+-
+-	if version_directive_ref != nil {
+-		*version_directive_ref = version_directive
+-	}
+-	if tag_directives_ref != nil {
+-		*tag_directives_ref = tag_directives
+-	}
+-	return true
+-}
+-
+-// Append a tag directive to the directives stack.
+-func yaml_parser_append_tag_directive(parser *yaml_parser_t, value yaml_tag_directive_t, allow_duplicates bool, mark yaml_mark_t) bool {
+-	for i := range parser.tag_directives {
+-		if bytes.Equal(value.handle, parser.tag_directives[i].handle) {
+-			if allow_duplicates {
+-				return true
+-			}
+-			return yaml_parser_set_parser_error(parser, "found duplicate %TAG directive", mark)
+-		}
+-	}
+-
+-	// [Go] I suspect the copy is unnecessary. This was likely done
+-	// because there was no way to track ownership of the data.
+-	value_copy := yaml_tag_directive_t{
+-		handle: make([]byte, len(value.handle)),
+-		prefix: make([]byte, len(value.prefix)),
+-	}
+-	copy(value_copy.handle, value.handle)
+-	copy(value_copy.prefix, value.prefix)
+-	parser.tag_directives = append(parser.tag_directives, value_copy)
+-	return true
+-}
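The tag-resolution loop in `yaml_parser_parse_node` above, together with the `default_tag_directives` table, expands a handle/suffix pair by concatenating the handle's registered prefix with the suffix (so `!!str` becomes `tag:yaml.org,2002:str`). A minimal sketch of that lookup, assuming a hypothetical `resolveTag` helper that is not part of the library:

```go
package main

import "fmt"

// tagDirective mirrors the handle/prefix pair kept in parser.tag_directives.
type tagDirective struct {
	handle, prefix string
}

// resolveTag is a hypothetical helper: it expands handle+suffix into a full
// tag by looking up the handle's registered prefix, as the loop over
// parser.tag_directives does above. ok=false corresponds to the parser's
// "found undefined tag handle" error.
func resolveTag(dirs []tagDirective, handle, suffix string) (tag string, ok bool) {
	for _, d := range dirs {
		if d.handle == handle {
			return d.prefix + suffix, true
		}
	}
	return "", false
}

func main() {
	// The two directives installed by default in yaml_parser_process_directives.
	dirs := []tagDirective{
		{"!", "!"},
		{"!!", "tag:yaml.org,2002:"},
	}
	tag, ok := resolveTag(dirs, "!!", "str")
	fmt.Println(tag, ok) // tag:yaml.org,2002:str true
}
```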
+diff --git a/Godeps/_workspace/src/gopkg.in/v1/yaml/readerc.go b/Godeps/_workspace/src/gopkg.in/v1/yaml/readerc.go
+deleted file mode 100644
+index d5fb097..0000000
+--- a/Godeps/_workspace/src/gopkg.in/v1/yaml/readerc.go
++++ /dev/null
+@@ -1,391 +0,0 @@
+-package yaml
+-
+-import (
+-	"io"
+-)
+-
+-// Set the reader error and return 0.
+-func yaml_parser_set_reader_error(parser *yaml_parser_t, problem string, offset int, value int) bool {
+-	parser.error = yaml_READER_ERROR
+-	parser.problem = problem
+-	parser.problem_offset = offset
+-	parser.problem_value = value
+-	return false
+-}
+-
+-// Byte order marks.
+-const (
+-	bom_UTF8    = "\xef\xbb\xbf"
+-	bom_UTF16LE = "\xff\xfe"
+-	bom_UTF16BE = "\xfe\xff"
+-)
+-
+-// Determine the input stream encoding by checking the BOM symbol. If no BOM is
+-// found, the UTF-8 encoding is assumed. Return true on success, false on failure.
+-func yaml_parser_determine_encoding(parser *yaml_parser_t) bool {
+-	// Ensure that we had enough bytes in the raw buffer.
+-	for !parser.eof && len(parser.raw_buffer)-parser.raw_buffer_pos < 3 {
+-		if !yaml_parser_update_raw_buffer(parser) {
+-			return false
+-		}
+-	}
+-
+-	// Determine the encoding.
+-	buf := parser.raw_buffer
+-	pos := parser.raw_buffer_pos
+-	avail := len(buf) - pos
+-	if avail >= 2 && buf[pos] == bom_UTF16LE[0] && buf[pos+1] == bom_UTF16LE[1] {
+-		parser.encoding = yaml_UTF16LE_ENCODING
+-		parser.raw_buffer_pos += 2
+-		parser.offset += 2
+-	} else if avail >= 2 && buf[pos] == bom_UTF16BE[0] && buf[pos+1] == bom_UTF16BE[1] {
+-		parser.encoding = yaml_UTF16BE_ENCODING
+-		parser.raw_buffer_pos += 2
+-		parser.offset += 2
+-	} else if avail >= 3 && buf[pos] == bom_UTF8[0] && buf[pos+1] == bom_UTF8[1] && buf[pos+2] == bom_UTF8[2] {
+-		parser.encoding = yaml_UTF8_ENCODING
+-		parser.raw_buffer_pos += 3
+-		parser.offset += 3
+-	} else {
+-		parser.encoding = yaml_UTF8_ENCODING
+-	}
+-	return true
+-}
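The BOM sniffing in `yaml_parser_determine_encoding` above can be exercised on its own: check the leading bytes against the three BOMs and fall back to UTF-8 when none matches. A self-contained sketch (the `detectEncoding` name and return shape are illustrative):

```go
package main

import (
	"bytes"
	"fmt"
)

// detectEncoding mirrors yaml_parser_determine_encoding: inspect the first
// bytes for a byte order mark and assume UTF-8 when none is found. It
// returns the encoding name and how many BOM bytes the caller should skip.
func detectEncoding(buf []byte) (enc string, skip int) {
	switch {
	case bytes.HasPrefix(buf, []byte("\xef\xbb\xbf")): // bom_UTF8
		return "UTF-8", 3
	case bytes.HasPrefix(buf, []byte("\xff\xfe")): // bom_UTF16LE
		return "UTF-16LE", 2
	case bytes.HasPrefix(buf, []byte("\xfe\xff")): // bom_UTF16BE
		return "UTF-16BE", 2
	default:
		return "UTF-8", 0 // no BOM: assume UTF-8
	}
}

func main() {
	enc, skip := detectEncoding([]byte("\xff\xfek\x00e\x00y\x00"))
	fmt.Println(enc, skip) // UTF-16LE 2
}
```

The three prefixes are mutually exclusive (0xEF vs 0xFF vs 0xFE lead bytes), so the check order does not matter, unlike in general prefix matching.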
+-
+-// Update the raw buffer.
+-func yaml_parser_update_raw_buffer(parser *yaml_parser_t) bool {
+-	size_read := 0
+-
+-	// Return if the raw buffer is full.
+-	if parser.raw_buffer_pos == 0 && len(parser.raw_buffer) == cap(parser.raw_buffer) {
+-		return true
+-	}
+-
+-	// Return on EOF.
+-	if parser.eof {
+-		return true
+-	}
+-
+-	// Move the remaining bytes in the raw buffer to the beginning.
+-	if parser.raw_buffer_pos > 0 && parser.raw_buffer_pos < len(parser.raw_buffer) {
+-		copy(parser.raw_buffer, parser.raw_buffer[parser.raw_buffer_pos:])
+-	}
+-	parser.raw_buffer = parser.raw_buffer[:len(parser.raw_buffer)-parser.raw_buffer_pos]
+-	parser.raw_buffer_pos = 0
+-
+-	// Call the read handler to fill the buffer.
+-	size_read, err := parser.read_handler(parser, parser.raw_buffer[len(parser.raw_buffer):cap(parser.raw_buffer)])
+-	parser.raw_buffer = parser.raw_buffer[:len(parser.raw_buffer)+size_read]
+-	if err == io.EOF {
+-		parser.eof = true
+-	} else if err != nil {
+-		return yaml_parser_set_reader_error(parser, "input error: "+err.Error(), parser.offset, -1)
+-	}
+-	return true
+-}
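The UTF-16 branch of the decoder further down in this file applies the RFC 2781 surrogate-pair formula (U = 0x10000 + ((W1 & 0x3FF) << 10) + (W2 & 0x3FF)). That arithmetic can be checked in isolation with a small sketch (the `decodeSurrogatePair` helper is illustrative, not part of the library):

```go
package main

import "fmt"

// decodeSurrogatePair combines a high surrogate (0xD800-0xDBFF) and a low
// surrogate (0xDC00-0xDFFF) into a code point, using the same masks the
// reader uses (w & 0xFC00 to classify, w & 0x3FF to extract payload bits).
func decodeSurrogatePair(w1, w2 rune) (rune, bool) {
	if w1&0xFC00 != 0xD800 || w2&0xFC00 != 0xDC00 {
		return 0, false // not a valid high/low surrogate pair
	}
	return 0x10000 + ((w1 & 0x3FF) << 10) + (w2 & 0x3FF), true
}

func main() {
	// U+1F600 is encoded as the pair D83D DE00 in UTF-16.
	r, ok := decodeSurrogatePair(0xD83D, 0xDE00)
	fmt.Printf("%X %v\n", r, ok) // 1F600 true
}
```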
+-
+-// Ensure that the buffer contains at least `length` characters.
+-// Return true on success, false on failure.
+-//
+-// The length is supposed to be significantly less than the buffer size.
+-func yaml_parser_update_buffer(parser *yaml_parser_t, length int) bool {
+-	if parser.read_handler == nil {
+-		panic("read handler must be set")
+-	}
+-
+-	// If the EOF flag is set and the raw buffer is empty, do nothing.
+-	if parser.eof && parser.raw_buffer_pos == len(parser.raw_buffer) {
+-		return true
+-	}
+-
+-	// Return if the buffer contains enough characters.
+-	if parser.unread >= length {
+-		return true
+-	}
+-
+-	// Determine the input encoding if it is not known yet.
+-	if parser.encoding == yaml_ANY_ENCODING {
+-		if !yaml_parser_determine_encoding(parser) {
+-			return false
+-		}
+-	}
+-
+-	// Move the unread characters to the beginning of the buffer.
+-	buffer_len := len(parser.buffer)
+-	if parser.buffer_pos > 0 && parser.buffer_pos < buffer_len {
+-		copy(parser.buffer, parser.buffer[parser.buffer_pos:])
+-		buffer_len -= parser.buffer_pos
+-		parser.buffer_pos = 0
+-	} else if parser.buffer_pos == buffer_len {
+-		buffer_len = 0
+-		parser.buffer_pos = 0
+-	}
+-
+-	// Open the whole buffer for writing, and cut it before returning.
+-	parser.buffer = parser.buffer[:cap(parser.buffer)]
+-
+-	// Fill the buffer until it has enough characters.
+-	first := true
+-	for parser.unread < length {
+-
+-		// Fill the raw buffer if necessary.
+-		if !first || parser.raw_buffer_pos == len(parser.raw_buffer) {
+-			if !yaml_parser_update_raw_buffer(parser) {
+-				parser.buffer = parser.buffer[:buffer_len]
+-				return false
+-			}
+-		}
+-		first = false
+-
+-		// Decode the raw buffer.
+-	inner:
+-		for parser.raw_buffer_pos != len(parser.raw_buffer) {
+-			var value rune
+-			var width int
+-
+-			raw_unread := len(parser.raw_buffer) - parser.raw_buffer_pos
+-
+-			// Decode the next character.
+-			switch parser.encoding {
+-			case yaml_UTF8_ENCODING:
+-				// Decode a UTF-8 character.  Check RFC 3629
+-				// (http://www.ietf.org/rfc/rfc3629.txt) for more details.
+-				//
+-				// The following table (taken from the RFC) is used for
+-				// decoding.
+-				//
+-				//    Char. number range |        UTF-8 octet sequence
+-				//      (hexadecimal)    |              (binary)
+-				//   --------------------+------------------------------------
+-				//   0000 0000-0000 007F | 0xxxxxxx
+-				//   0000 0080-0000 07FF | 110xxxxx 10xxxxxx
+-				//   0000 0800-0000 FFFF | 1110xxxx 10xxxxxx 10xxxxxx
+-				//   0001 0000-0010 FFFF | 11110xxx 10xxxxxx 10xxxxxx 10xxxxxx
+-				//
+-				// Additionally, the characters in the range 0xD800-0xDFFF
+-				// are prohibited as they are reserved for use with UTF-16
+-				// surrogate pairs.
+-
+-				// Determine the length of the UTF-8 sequence.
+-				octet := parser.raw_buffer[parser.raw_buffer_pos]
+-				switch {
+-				case octet&0x80 == 0x00:
+-					width = 1
+-				case octet&0xE0 == 0xC0:
+-					width = 2
+-				case octet&0xF0 == 0xE0:
+-					width = 3
+-				case octet&0xF8 == 0xF0:
+-					width = 4
+-				default:
+-					// The leading octet is invalid.
+-					return yaml_parser_set_reader_error(parser,
+-						"invalid leading UTF-8 octet",
+-						parser.offset, int(octet))
+-				}
+-
+-				// Check if the raw buffer contains an incomplete character.
+-				if width > raw_unread {
+-					if parser.eof {
+-						return yaml_parser_set_reader_error(parser,
+-							"incomplete UTF-8 octet sequence",
+-							parser.offset, -1)
+-					}
+-					break inner
+-				}
+-
+-				// Decode the leading octet.
+-				switch {
+-				case octet&0x80 == 0x00:
+-					value = rune(octet & 0x7F)
+-				case octet&0xE0 == 0xC0:
+-					value = rune(octet & 0x1F)
+-				case octet&0xF0 == 0xE0:
+-					value = rune(octet & 0x0F)
+-				case octet&0xF8 == 0xF0:
+-					value = rune(octet & 0x07)
+-				default:
+-					value = 0
+-				}
+-
+-				// Check and decode the trailing octets.
+-				for k := 1; k < width; k++ {
+-					octet = parser.raw_buffer[parser.raw_buffer_pos+k]
+-
+-					// Check if the octet is valid.
+-					if (octet & 0xC0) != 0x80 {
+-						return yaml_parser_set_reader_error(parser,
+-							"invalid trailing UTF-8 octet",
+-							parser.offset+k, int(octet))
+-					}
+-
+-					// Decode the octet.
+-					value = (value << 6) + rune(octet&0x3F)
+-				}
+-
+-				// Check the length of the sequence against the value.
+-				switch {
+-				case width == 1:
+-				case width == 2 && value >= 0x80:
+-				case width == 3 && value >= 0x800:
+-				case width == 4 && value >= 0x10000:
+-				default:
+-					return yaml_parser_set_reader_error(parser,
+-						"invalid length of a UTF-8 sequence",
+-						parser.offset, -1)
+-				}
+-
+-				// Check the range of the value.
+-				if value >= 0xD800 && value <= 0xDFFF || value > 0x10FFFF {
+-					return yaml_parser_set_reader_error(parser,
+-						"invalid Unicode character",
+-						parser.offset, int(value))
+-				}
+-
+-			case yaml_UTF16LE_ENCODING, yaml_UTF16BE_ENCODING:
+-				var low, high int
+-				if parser.encoding == yaml_UTF16LE_ENCODING {
+-					low, high = 0, 1
+-				} else {
+-					high, low = 1, 0
+-				}
+-
+-				// The UTF-16 encoding is not as simple as one might
+-				// naively think.  Check RFC 2781
+-				// (http://www.ietf.org/rfc/rfc2781.txt).
+-				//
+-				// Normally, two subsequent bytes describe a Unicode
+-				// character.  However a special technique (called a
+-				// surrogate pair) is used for specifying character
+-				// values larger than 0xFFFF.
+-				//
+-				// A surrogate pair consists of two pseudo-characters:
+-				//      high surrogate area (0xD800-0xDBFF)
+-				//      low surrogate area (0xDC00-0xDFFF)
+-				//
+-				// The following formulas are used for decoding
+-				// and encoding characters using surrogate pairs:
+-				//
+-				//  U  = U' + 0x10000   (0x01 00 00 <= U <= 0x10 FF FF)
+-				//  U' = yyyyyyyyyyxxxxxxxxxx   (0 <= U' <= 0x0F FF FF)
+-				//  W1 = 110110yyyyyyyyyy
+-				//  W2 = 110111xxxxxxxxxx
+-				//
+-				// where U is the character value, W1 is the high surrogate
+-				// area, W2 is the low surrogate area.
+-
+-				// Check for incomplete UTF-16 character.
+-				if raw_unread < 2 {
+-					if parser.eof {
+-						return yaml_parser_set_reader_error(parser,
+-							"incomplete UTF-16 character",
+-							parser.offset, -1)
+-					}
+-					break inner
+-				}
+-
+-				// Get the character.
+-				value = rune(parser.raw_buffer[parser.raw_buffer_pos+low]) +
+-					(rune(parser.raw_buffer[parser.raw_buffer_pos+high]) << 8)
+-
+-				// Check for unexpected low surrogate area.
+-				if value&0xFC00 == 0xDC00 {
+-					return yaml_parser_set_reader_error(parser,
+-						"unexpected low surrogate area",
+-						parser.offset, int(value))
+-				}
+-
+-				// Check for a high surrogate area.
+-				if value&0xFC00 == 0xD800 {
+-					width = 4
+-
+-					// Check for incomplete surrogate pair.
+-					if raw_unread < 4 {
+-						if parser.eof {
+-							return yaml_parser_set_reader_error(parser,
+-								"incomplete UTF-16 surrogate pair",
+-								parser.offset, -1)
+-						}
+-						break inner
+-					}
+-
+-					// Get the next character.
+-					value2 := rune(parser.raw_buffer[parser.raw_buffer_pos+low+2]) +
+-						(rune(parser.raw_buffer[parser.raw_buffer_pos+high+2]) << 8)
+-
+-					// Check for a low surrogate area.
+-					if value2&0xFC00 != 0xDC00 {
+-						return yaml_parser_set_reader_error(parser,
+-							"expected low surrogate area",
+-							parser.offset+2, int(value2))
+-					}
+-
+-					// Generate the value of the surrogate pair.
+-					value = 0x10000 + ((value & 0x3FF) << 10) + (value2 & 0x3FF)
+-				} else {
+-					width = 2
+-				}
+-
+-			default:
+-				panic("impossible")
+-			}
+-
+-			// Check if the character is in the allowed range:
+-			//      #x9 | #xA | #xD | [#x20-#x7E]               (8 bit)
+-			//      | #x85 | [#xA0-#xD7FF] | [#xE000-#xFFFD]    (16 bit)
+-			//      | [#x10000-#x10FFFF]                        (32 bit)
+-			switch {
+-			case value == 0x09:
+-			case value == 0x0A:
+-			case value == 0x0D:
+-			case value >= 0x20 && value <= 0x7E:
+-			case value == 0x85:
+-			case value >= 0xA0 && value <= 0xD7FF:
+-			case value >= 0xE000 && value <= 0xFFFD:
+-			case value >= 0x10000 && value <= 0x10FFFF:
+-			default:
+-				return yaml_parser_set_reader_error(parser,
+-					"control characters are not allowed",
+-					parser.offset, int(value))
+-			}
+-
+-			// Move the raw pointers.
+-			parser.raw_buffer_pos += width
+-			parser.offset += width
+-
+-			// Finally put the character into the buffer.
+-			if value <= 0x7F {
+-				// 0000 0000-0000 007F . 0xxxxxxx
+-				parser.buffer[buffer_len+0] = byte(value)
+-			} else if value <= 0x7FF {
+-				// 0000 0080-0000 07FF . 110xxxxx 10xxxxxx
+-				parser.buffer[buffer_len+0] = byte(0xC0 + (value >> 6))
+-				parser.buffer[buffer_len+1] = byte(0x80 + (value & 0x3F))
+-			} else if value <= 0xFFFF {
+-				// 0000 0800-0000 FFFF . 1110xxxx 10xxxxxx 10xxxxxx
+-				parser.buffer[buffer_len+0] = byte(0xE0 + (value >> 12))
+-				parser.buffer[buffer_len+1] = byte(0x80 + ((value >> 6) & 0x3F))
+-				parser.buffer[buffer_len+2] = byte(0x80 + (value & 0x3F))
+-			} else {
+-				// 0001 0000-0010 FFFF . 11110xxx 10xxxxxx 10xxxxxx 10xxxxxx
+-				parser.buffer[buffer_len+0] = byte(0xF0 + (value >> 18))
+-				parser.buffer[buffer_len+1] = byte(0x80 + ((value >> 12) & 0x3F))
+-				parser.buffer[buffer_len+2] = byte(0x80 + ((value >> 6) & 0x3F))
+-				parser.buffer[buffer_len+3] = byte(0x80 + (value & 0x3F))
+-			}
+-			buffer_len += width
+-
+-			parser.unread++
+-		}
+-
+-		// On EOF, put NUL into the buffer and return.
+-		if parser.eof {
+-			parser.buffer[buffer_len] = 0
+-			buffer_len++
+-			parser.unread++
+-			break
+-		}
+-	}
+-	parser.buffer = parser.buffer[:buffer_len]
+-	return true
+-}
+diff --git a/Godeps/_workspace/src/gopkg.in/v1/yaml/resolve.go b/Godeps/_workspace/src/gopkg.in/v1/yaml/resolve.go
+deleted file mode 100644
+index fdc4909..0000000
+--- a/Godeps/_workspace/src/gopkg.in/v1/yaml/resolve.go
++++ /dev/null
+@@ -1,148 +0,0 @@
+-package yaml
+-
+-import (
+-	"math"
+-	"strconv"
+-	"strings"
+-)
+-
+-// TODO: merge, timestamps, base 60 floats, omap.
+-
+-type resolveMapItem struct {
+-	value interface{}
+-	tag   string
+-}
+-
+-var resolveTable = make([]byte, 256)
+-var resolveMap = make(map[string]resolveMapItem)
+-
+-func init() {
+-	t := resolveTable
+-	t[int('+')] = 'S' // Sign
+-	t[int('-')] = 'S'
+-	for _, c := range "0123456789" {
+-		t[int(c)] = 'D' // Digit
+-	}
+-	for _, c := range "yYnNtTfFoO~" {
+-		t[int(c)] = 'M' // In map
+-	}
+-	t[int('.')] = '.' // Float (potentially in map)
+-	t[int('<')] = '<' // Merge
+-
+-	var resolveMapList = []struct {
+-		v   interface{}
+-		tag string
+-		l   []string
+-	}{
+-		{true, "!!bool", []string{"y", "Y", "yes", "Yes", "YES"}},
+-		{true, "!!bool", []string{"true", "True", "TRUE"}},
+-		{true, "!!bool", []string{"on", "On", "ON"}},
+-		{false, "!!bool", []string{"n", "N", "no", "No", "NO"}},
+-		{false, "!!bool", []string{"false", "False", "FALSE"}},
+-		{false, "!!bool", []string{"off", "Off", "OFF"}},
+-		{nil, "!!null", []string{"~", "null", "Null", "NULL"}},
+-		{math.NaN(), "!!float", []string{".nan", ".NaN", ".NAN"}},
+-		{math.Inf(+1), "!!float", []string{".inf", ".Inf", ".INF"}},
+-		{math.Inf(+1), "!!float", []string{"+.inf", "+.Inf", "+.INF"}},
+-		{math.Inf(-1), "!!float", []string{"-.inf", "-.Inf", "-.INF"}},
+-		{"<<", "!!merge", []string{"<<"}},
+-	}
+-
+-	m := resolveMap
+-	for _, item := range resolveMapList {
+-		for _, s := range item.l {
+-			m[s] = resolveMapItem{item.v, item.tag}
+-		}
+-	}
+-}
+-
+-const longTagPrefix = "tag:yaml.org,2002:"
+-
+-func shortTag(tag string) string {
+-	if strings.HasPrefix(tag, longTagPrefix) {
+-		return "!!" + tag[len(longTagPrefix):]
+-	}
+-	return tag
+-}
+-
+-func resolvableTag(tag string) bool {
+-	switch tag {
+-	case "", "!!str", "!!bool", "!!int", "!!float", "!!null":
+-		return true
+-	}
+-	return false
+-}
+-
+-func resolve(tag string, in string) (rtag string, out interface{}) {
+-	tag = shortTag(tag)
+-	if !resolvableTag(tag) {
+-		return tag, in
+-	}
+-
+-	defer func() {
+-		if tag != "" && tag != rtag {
+-			panic("Can't decode " + rtag + " '" + in + "' as a " + tag)
+-		}
+-	}()
+-
+-	if in == "" {
+-		return "!!null", nil
+-	}
+-
+-	c := resolveTable[in[0]]
+-	if c == 0 {
+-		// It's a string for sure. Nothing to do.
+-		return "!!str", in
+-	}
+-
+-	// Handle things we can lookup in a map.
+-	if item, ok := resolveMap[in]; ok {
+-		return item.tag, item.value
+-	}
+-
+-	switch c {
+-	case 'M':
+-		// We've already checked the map above.
+-
+-	case '.':
+-		// Not in the map, so maybe a normal float.
+-		floatv, err := strconv.ParseFloat(in, 64)
+-		if err == nil {
+-			return "!!float", floatv
+-		}
+-	// XXX Handle base 60 floats here (WTF!)
+-
+-	case 'D', 'S':
+-		// Int, float, or timestamp.
+-		plain := strings.Replace(in, "_", "", -1)
+-		intv, err := strconv.ParseInt(plain, 0, 64)
+-		if err == nil {
+-			if intv == int64(int(intv)) {
+-				return "!!int", int(intv)
+-			} else {
+-				return "!!int", intv
+-			}
+-		}
+-		floatv, err := strconv.ParseFloat(plain, 64)
+-		if err == nil {
+-			return "!!float", floatv
+-		}
+-		if strings.HasPrefix(plain, "0b") {
+-			intv, err := strconv.ParseInt(plain[2:], 2, 64)
+-			if err == nil {
+-				return "!!int", int(intv)
+-			}
+-		} else if strings.HasPrefix(plain, "-0b") {
+-			intv, err := strconv.ParseInt(plain[3:], 2, 64)
+-			if err == nil {
+-				return "!!int", -int(intv)
+-			}
+-		}
+-	// XXX Handle timestamps here.
+-
+-	default:
+-		panic("resolveTable item not yet handled: " +
+-			string([]byte{c}) + " (with " + in + ")")
+-	}
+-	return "!!str", in
+-}
+diff --git a/Godeps/_workspace/src/gopkg.in/v1/yaml/scannerc.go b/Godeps/_workspace/src/gopkg.in/v1/yaml/scannerc.go
+deleted file mode 100644
+index fe93b19..0000000
+--- a/Godeps/_workspace/src/gopkg.in/v1/yaml/scannerc.go
++++ /dev/null
+@@ -1,2710 +0,0 @@
+-package yaml
+-
+-import (
+-	"bytes"
+-	"fmt"
+-)
+-
+-// Introduction
+-// ************
+-//
+-// The following notes assume that you are familiar with the YAML specification
+-// (http://yaml.org/spec/cvs/current.html).  We mostly follow it, although in
+-// some cases we are less restrictive that it requires.
+-//
+-// The process of transforming a YAML stream into a sequence of events is
+-// divided on two steps: Scanning and Parsing.
+-//
+-// The Scanner transforms the input stream into a sequence of tokens, while the
+-// parser transform the sequence of tokens produced by the Scanner into a
+-// sequence of parsing events.
+-//
+-// The Scanner is rather clever and complicated. The Parser, on the contrary,
+-// is a straightforward implementation of a recursive-descendant parser (or,
+-// LL(1) parser, as it is usually called).
+-//
+-// Actually there are two issues of Scanning that might be called "clever", the
+-// rest is quite straightforward.  The issues are "block collection start" and
+-// "simple keys".  Both issues are explained below in details.
+-//
+-// Here the Scanning step is explained and implemented.  We start with the list
+-// of all the tokens produced by the Scanner together with short descriptions.
+-//
+-// Now, tokens:
+-//
+-//      STREAM-START(encoding)          # The stream start.
+-//      STREAM-END                      # The stream end.
+-//      VERSION-DIRECTIVE(major,minor)  # The '%YAML' directive.
+-//      TAG-DIRECTIVE(handle,prefix)    # The '%TAG' directive.
+-//      DOCUMENT-START                  # '---'
+-//      DOCUMENT-END                    # '...'
+-//      BLOCK-SEQUENCE-START            # Indentation increase denoting a block
+-//      BLOCK-MAPPING-START             # sequence or a block mapping.
+-//      BLOCK-END                       # Indentation decrease.
+-//      FLOW-SEQUENCE-START             # '['
+-//      FLOW-SEQUENCE-END               # ']'
+-//      BLOCK-SEQUENCE-START            # '{'
+-//      BLOCK-SEQUENCE-END              # '}'
+-//      BLOCK-ENTRY                     # '-'
+-//      FLOW-ENTRY                      # ','
+-//      KEY                             # '?' or nothing (simple keys).
+-//      VALUE                           # ':'
+-//      ALIAS(anchor)                   # '*anchor'
+-//      ANCHOR(anchor)                  # '&anchor'
+-//      TAG(handle,suffix)              # '!handle!suffix'
+-//      SCALAR(value,style)             # A scalar.
+-//
+-// The following two tokens are "virtual" tokens denoting the beginning and the
+-// end of the stream:
+-//
+-//      STREAM-START(encoding)
+-//      STREAM-END
+-//
+-// We pass the information about the input stream encoding with the
+-// STREAM-START token.
+-//
+-// The next two tokens are responsible for tags:
+-//
+-//      VERSION-DIRECTIVE(major,minor)
+-//      TAG-DIRECTIVE(handle,prefix)
+-//
+-// Example:
+-//
+-//      %YAML   1.1
+-//      %TAG    !   !foo
+-//      %TAG    !yaml!  tag:yaml.org,2002:
+-//      ---
+-//
+-// The correspoding sequence of tokens:
+-//
+-//      STREAM-START(utf-8)
+-//      VERSION-DIRECTIVE(1,1)
+-//      TAG-DIRECTIVE("!","!foo")
+-//      TAG-DIRECTIVE("!yaml","tag:yaml.org,2002:")
+-//      DOCUMENT-START
+-//      STREAM-END
+-//
+-// Note that the VERSION-DIRECTIVE and TAG-DIRECTIVE tokens occupy a whole
+-// line.
+-//
+-// The document start and end indicators are represented by:
+-//
+-//      DOCUMENT-START
+-//      DOCUMENT-END
+-//
+-// Note that if a YAML stream contains an implicit document (without '---'
+-// and '...' indicators), no DOCUMENT-START and DOCUMENT-END tokens will be
+-// produced.
+-//
+-// In the following examples, we present whole documents together with the
+-// produced tokens.
+-//
+-//      1. An implicit document:
+-//
+-//          'a scalar'
+-//
+-//      Tokens:
+-//
+-//          STREAM-START(utf-8)
+-//          SCALAR("a scalar",single-quoted)
+-//          STREAM-END
+-//
+-//      2. An explicit document:
+-//
+-//          ---
+-//          'a scalar'
+-//          ...
+-//
+-//      Tokens:
+-//
+-//          STREAM-START(utf-8)
+-//          DOCUMENT-START
+-//          SCALAR("a scalar",single-quoted)
+-//          DOCUMENT-END
+-//          STREAM-END
+-//
+-//      3. Several documents in a stream:
+-//
+-//          'a scalar'
+-//          ---
+-//          'another scalar'
+-//          ---
+-//          'yet another scalar'
+-//
+-//      Tokens:
+-//
+-//          STREAM-START(utf-8)
+-//          SCALAR("a scalar",single-quoted)
+-//          DOCUMENT-START
+-//          SCALAR("another scalar",single-quoted)
+-//          DOCUMENT-START
+-//          SCALAR("yet another scalar",single-quoted)
+-//          STREAM-END
+-//
+-// We have already introduced the SCALAR token above.  The following tokens are
+-// used to describe aliases, anchors, tag, and scalars:
+-//
+-//      ALIAS(anchor)
+-//      ANCHOR(anchor)
+-//      TAG(handle,suffix)
+-//      SCALAR(value,style)
+-//
+-// The following series of examples illustrate the usage of these tokens:
+-//
+-//      1. A recursive sequence:
+-//
+-//          &A [ *A ]
+-//
+-//      Tokens:
+-//
+-//          STREAM-START(utf-8)
+-//          ANCHOR("A")
+-//          FLOW-SEQUENCE-START
+-//          ALIAS("A")
+-//          FLOW-SEQUENCE-END
+-//          STREAM-END
+-//
+-//      2. A tagged scalar:
+-//
+-//          !!float "3.14"  # A good approximation.
+-//
+-//      Tokens:
+-//
+-//          STREAM-START(utf-8)
+-//          TAG("!!","float")
+-//          SCALAR("3.14",double-quoted)
+-//          STREAM-END
+-//
+-//      3. Various scalar styles:
+-//
+-//          --- # Implicit empty plain scalars do not produce tokens.
+-//          --- a plain scalar
+-//          --- 'a single-quoted scalar'
+-//          --- "a double-quoted scalar"
+-//          --- |-
+-//            a literal scalar
+-//          --- >-
+-//            a folded
+-//            scalar
+-//
+-//      Tokens:
+-//
+-//          STREAM-START(utf-8)
+-//          DOCUMENT-START
+-//          DOCUMENT-START
+-//          SCALAR("a plain scalar",plain)
+-//          DOCUMENT-START
+-//          SCALAR("a single-quoted scalar",single-quoted)
+-//          DOCUMENT-START
+-//          SCALAR("a double-quoted scalar",double-quoted)
+-//          DOCUMENT-START
+-//          SCALAR("a literal scalar",literal)
+-//          DOCUMENT-START
+-//          SCALAR("a folded scalar",folded)
+-//          STREAM-END
+-//
+-// Now it's time to review collection-related tokens. We will start with
+-// flow collections:
+-//
+-//      FLOW-SEQUENCE-START
+-//      FLOW-SEQUENCE-END
+-//      FLOW-MAPPING-START
+-//      FLOW-MAPPING-END
+-//      FLOW-ENTRY
+-//      KEY
+-//      VALUE
+-//
+-// The tokens FLOW-SEQUENCE-START, FLOW-SEQUENCE-END, FLOW-MAPPING-START, and
+-// FLOW-MAPPING-END represent the indicators '[', ']', '{', and '}'
+-// correspondingly.  FLOW-ENTRY represent the ',' indicator.  Finally the
+-// indicators '?' and ':', which are used for denoting mapping keys and values,
+-// are represented by the KEY and VALUE tokens.
+-//
+-// The following examples show flow collections:
+-//
+-//      1. A flow sequence:
+-//
+-//          [item 1, item 2, item 3]
+-//
+-//      Tokens:
+-//
+-//          STREAM-START(utf-8)
+-//          FLOW-SEQUENCE-START
+-//          SCALAR("item 1",plain)
+-//          FLOW-ENTRY
+-//          SCALAR("item 2",plain)
+-//          FLOW-ENTRY
+-//          SCALAR("item 3",plain)
+-//          FLOW-SEQUENCE-END
+-//          STREAM-END
+-//
+-//      2. A flow mapping:
+-//
+-//          {
+-//              a simple key: a value,  # Note that the KEY token is produced.
+-//              ? a complex key: another value,
+-//          }
+-//
+-//      Tokens:
+-//
+-//          STREAM-START(utf-8)
+-//          FLOW-MAPPING-START
+-//          KEY
+-//          SCALAR("a simple key",plain)
+-//          VALUE
+-//          SCALAR("a value",plain)
+-//          FLOW-ENTRY
+-//          KEY
+-//          SCALAR("a complex key",plain)
+-//          VALUE
+-//          SCALAR("another value",plain)
+-//          FLOW-ENTRY
+-//          FLOW-MAPPING-END
+-//          STREAM-END
+-//
+-// A simple key is a key which is not denoted by the '?' indicator.  Note that
+-// the Scanner still produce the KEY token whenever it encounters a simple key.
+-//
+-// For scanning block collections, the following tokens are used (note that we
+-// repeat KEY and VALUE here):
+-//
+-//      BLOCK-SEQUENCE-START
+-//      BLOCK-MAPPING-START
+-//      BLOCK-END
+-//      BLOCK-ENTRY
+-//      KEY
+-//      VALUE
+-//
+-// The tokens BLOCK-SEQUENCE-START and BLOCK-MAPPING-START denote indentation
+-// increase that precedes a block collection (cf. the INDENT token in Python).
+-// The token BLOCK-END denote indentation decrease that ends a block collection
+-// (cf. the DEDENT token in Python).  However YAML has some syntax pecularities
+-// that makes detections of these tokens more complex.
+-//
+-// The tokens BLOCK-ENTRY, KEY, and VALUE are used to represent the indicators
+-// '-', '?', and ':' correspondingly.
+-//
+-// The following examples show how the tokens BLOCK-SEQUENCE-START,
+-// BLOCK-MAPPING-START, and BLOCK-END are emitted by the Scanner:
+-//
+-//      1. Block sequences:
+-//
+-//          - item 1
+-//          - item 2
+-//          -
+-//            - item 3.1
+-//            - item 3.2
+-//          -
+-//            key 1: value 1
+-//            key 2: value 2
+-//
+-//      Tokens:
+-//
+-//          STREAM-START(utf-8)
+-//          BLOCK-SEQUENCE-START
+-//          BLOCK-ENTRY
+-//          SCALAR("item 1",plain)
+-//          BLOCK-ENTRY
+-//          SCALAR("item 2",plain)
+-//          BLOCK-ENTRY
+-//          BLOCK-SEQUENCE-START
+-//          BLOCK-ENTRY
+-//          SCALAR("item 3.1",plain)
+-//          BLOCK-ENTRY
+-//          SCALAR("item 3.2",plain)
+-//          BLOCK-END
+-//          BLOCK-ENTRY
+-//          BLOCK-MAPPING-START
+-//          KEY
+-//          SCALAR("key 1",plain)
+-//          VALUE
+-//          SCALAR("value 1",plain)
+-//          KEY
+-//          SCALAR("key 2",plain)
+-//          VALUE
+-//          SCALAR("value 2",plain)
+-//          BLOCK-END
+-//          BLOCK-END
+-//          STREAM-END
+-//
+-//      2. Block mappings:
+-//
+-//          a simple key: a value   # The KEY token is produced here.
+-//          ? a complex key
+-//          : another value
+-//          a mapping:
+-//            key 1: value 1
+-//            key 2: value 2
+-//          a sequence:
+-//            - item 1
+-//            - item 2
+-//
+-//      Tokens:
+-//
+-//          STREAM-START(utf-8)
+-//          BLOCK-MAPPING-START
+-//          KEY
+-//          SCALAR("a simple key",plain)
+-//          VALUE
+-//          SCALAR("a value",plain)
+-//          KEY
+-//          SCALAR("a complex key",plain)
+-//          VALUE
+-//          SCALAR("another value",plain)
+-//          KEY
+-//          SCALAR("a mapping",plain)
+-//          BLOCK-MAPPING-START
+-//          KEY
+-//          SCALAR("key 1",plain)
+-//          VALUE
+-//          SCALAR("value 1",plain)
+-//          KEY
+-//          SCALAR("key 2",plain)
+-//          VALUE
+-//          SCALAR("value 2",plain)
+-//          BLOCK-END
+-//          KEY
+-//          SCALAR("a sequence",plain)
+-//          VALUE
+-//          BLOCK-SEQUENCE-START
+-//          BLOCK-ENTRY
+-//          SCALAR("item 1",plain)
+-//          BLOCK-ENTRY
+-//          SCALAR("item 2",plain)
+-//          BLOCK-END
+-//          BLOCK-END
+-//          STREAM-END
+-//
+-// YAML does not always require to start a new block collection from a new
+-// line.  If the current line contains only '-', '?', and ':' indicators, a new
+-// block collection may start at the current line.  The following examples
+-// illustrate this case:
+-//
+-//      1. Collections in a sequence:
+-//
+-//          - - item 1
+-//            - item 2
+-//          - key 1: value 1
+-//            key 2: value 2
+-//          - ? complex key
+-//            : complex value
+-//
+-//      Tokens:
+-//
+-//          STREAM-START(utf-8)
+-//          BLOCK-SEQUENCE-START
+-//          BLOCK-ENTRY
+-//          BLOCK-SEQUENCE-START
+-//          BLOCK-ENTRY
+-//          SCALAR("item 1",plain)
+-//          BLOCK-ENTRY
+-//          SCALAR("item 2",plain)
+-//          BLOCK-END
+-//          BLOCK-ENTRY
+-//          BLOCK-MAPPING-START
+-//          KEY
+-//          SCALAR("key 1",plain)
+-//          VALUE
+-//          SCALAR("value 1",plain)
+-//          KEY
+-//          SCALAR("key 2",plain)
+-//          VALUE
+-//          SCALAR("value 2",plain)
+-//          BLOCK-END
+-//          BLOCK-ENTRY
+-//          BLOCK-MAPPING-START
+-//          KEY
+-//          SCALAR("complex key")
+-//          VALUE
+-//          SCALAR("complex value")
+-//          BLOCK-END
+-//          BLOCK-END
+-//          STREAM-END
+-//
+-//      2. Collections in a mapping:
+-//
+-//          ? a sequence
+-//          : - item 1
+-//            - item 2
+-//          ? a mapping
+-//          : key 1: value 1
+-//            key 2: value 2
+-//
+-//      Tokens:
+-//
+-//          STREAM-START(utf-8)
+-//          BLOCK-MAPPING-START
+-//          KEY
+-//          SCALAR("a sequence",plain)
+-//          VALUE
+-//          BLOCK-SEQUENCE-START
+-//          BLOCK-ENTRY
+-//          SCALAR("item 1",plain)
+-//          BLOCK-ENTRY
+-//          SCALAR("item 2",plain)
+-//          BLOCK-END
+-//          KEY
+-//          SCALAR("a mapping",plain)
+-//          VALUE
+-//          BLOCK-MAPPING-START
+-//          KEY
+-//          SCALAR("key 1",plain)
+-//          VALUE
+-//          SCALAR("value 1",plain)
+-//          KEY
+-//          SCALAR("key 2",plain)
+-//          VALUE
+-//          SCALAR("value 2",plain)
+-//          BLOCK-END
+-//          BLOCK-END
+-//          STREAM-END
+-//
+-// YAML also permits non-indented sequences if they are included into a block
+-// mapping.  In this case, the token BLOCK-SEQUENCE-START is not produced:
+-//
+-//      key:
+-//      - item 1    # BLOCK-SEQUENCE-START is NOT produced here.
+-//      - item 2
+-//
+-// Tokens:
+-//
+-//      STREAM-START(utf-8)
+-//      BLOCK-MAPPING-START
+-//      KEY
+-//      SCALAR("key",plain)
+-//      VALUE
+-//      BLOCK-ENTRY
+-//      SCALAR("item 1",plain)
+-//      BLOCK-ENTRY
+-//      SCALAR("item 2",plain)
+-//      BLOCK-END
+-//
+-
+-// Ensure that the buffer contains the required number of characters.
+-// Return true on success, false on failure (reader error or memory error).
+-func cache(parser *yaml_parser_t, length int) bool {
+-	// [Go] This was inlined: !cache(A, B) -> unread < B && !update(A, B)
+-	return parser.unread >= length || yaml_parser_update_buffer(parser, length)
+-}
+-
+-// Advance the buffer pointer.
+-func skip(parser *yaml_parser_t) {
+-	parser.mark.index++
+-	parser.mark.column++
+-	parser.unread--
+-	parser.buffer_pos += width(parser.buffer[parser.buffer_pos])
+-}
+-
+-func skip_line(parser *yaml_parser_t) {
+-	if is_crlf(parser.buffer, parser.buffer_pos) {
+-		parser.mark.index += 2
+-		parser.mark.column = 0
+-		parser.mark.line++
+-		parser.unread -= 2
+-		parser.buffer_pos += 2
+-	} else if is_break(parser.buffer, parser.buffer_pos) {
+-		parser.mark.index++
+-		parser.mark.column = 0
+-		parser.mark.line++
+-		parser.unread--
+-		parser.buffer_pos += width(parser.buffer[parser.buffer_pos])
+-	}
+-}
+-
+-// Copy a character to a string buffer and advance pointers.
+-func read(parser *yaml_parser_t, s []byte) []byte {
+-	w := width(parser.buffer[parser.buffer_pos])
+-	if w == 0 {
+-		panic("invalid character sequence")
+-	}
+-	if len(s) == 0 {
+-		s = make([]byte, 0, 32)
+-	}
+-	if w == 1 && len(s)+w <= cap(s) {
+-		s = s[:len(s)+1]
+-		s[len(s)-1] = parser.buffer[parser.buffer_pos]
+-		parser.buffer_pos++
+-	} else {
+-		s = append(s, parser.buffer[parser.buffer_pos:parser.buffer_pos+w]...)
+-		parser.buffer_pos += w
+-	}
+-	parser.mark.index++
+-	parser.mark.column++
+-	parser.unread--
+-	return s
+-}
+-
+-// Copy a line break character to a string buffer and advance pointers.
+-func read_line(parser *yaml_parser_t, s []byte) []byte {
+-	buf := parser.buffer
+-	pos := parser.buffer_pos
+-	switch {
+-	case buf[pos] == '\r' && buf[pos+1] == '\n':
+-		// CR LF . LF
+-		s = append(s, '\n')
+-		parser.buffer_pos += 2
+-		parser.mark.index++
+-		parser.unread--
+-	case buf[pos] == '\r' || buf[pos] == '\n':
+-		// CR|LF . LF
+-		s = append(s, '\n')
+-		parser.buffer_pos += 1
+-	case buf[pos] == '\xC2' && buf[pos+1] == '\x85':
+-		// NEL . LF
+-		s = append(s, '\n')
+-		parser.buffer_pos += 2
+-	case buf[pos] == '\xE2' && buf[pos+1] == '\x80' && (buf[pos+2] == '\xA8' || buf[pos+2] == '\xA9'):
+-		// LS|PS . LS|PS
+-		s = append(s, buf[parser.buffer_pos:pos+3]...)
+-		parser.buffer_pos += 3
+-	default:
+-		return s
+-	}
+-	parser.mark.index++
+-	parser.mark.column = 0
+-	parser.mark.line++
+-	parser.unread--
+-	return s
+-}
+-
+-// Get the next token.
+-func yaml_parser_scan(parser *yaml_parser_t, token *yaml_token_t) bool {
+-	// Erase the token object.
+-	*token = yaml_token_t{} // [Go] Is this necessary?
+-
+-	// No tokens after STREAM-END or error.
+-	if parser.stream_end_produced || parser.error != yaml_NO_ERROR {
+-		return true
+-	}
+-
+-	// Ensure that the tokens queue contains enough tokens.
+-	if !parser.token_available {
+-		if !yaml_parser_fetch_more_tokens(parser) {
+-			return false
+-		}
+-	}
+-
+-	// Fetch the next token from the queue.
+-	*token = parser.tokens[parser.tokens_head]
+-	parser.tokens_head++
+-	parser.tokens_parsed++
+-	parser.token_available = false
+-
+-	if token.typ == yaml_STREAM_END_TOKEN {
+-		parser.stream_end_produced = true
+-	}
+-	return true
+-}
+-
+-// Set the scanner error and return false.
+-func yaml_parser_set_scanner_error(parser *yaml_parser_t, context string, context_mark yaml_mark_t, problem string) bool {
+-	parser.error = yaml_SCANNER_ERROR
+-	parser.context = context
+-	parser.context_mark = context_mark
+-	parser.problem = problem
+-	parser.problem_mark = parser.mark
+-	return false
+-}
+-
+-func yaml_parser_set_scanner_tag_error(parser *yaml_parser_t, directive bool, context_mark yaml_mark_t, problem string) bool {
+-	context := "while parsing a tag"
+-	if directive {
+-		context = "while parsing a %TAG directive"
+-	}
+-	return yaml_parser_set_scanner_error(parser, context, context_mark, "did not find URI escaped octet")
+-}
+-
+-func trace(args ...interface{}) func() {
+-	pargs := append([]interface{}{"+++"}, args...)
+-	fmt.Println(pargs...)
+-	pargs = append([]interface{}{"---"}, args...)
+-	return func() { fmt.Println(pargs...) }
+-}
+-
+-// Ensure that the tokens queue contains at least one token which can be
+-// returned to the Parser.
+-func yaml_parser_fetch_more_tokens(parser *yaml_parser_t) bool {
+-	// While we need more tokens to fetch, do it.
+-	for {
+-		// Check if we really need to fetch more tokens.
+-		need_more_tokens := false
+-
+-		if parser.tokens_head == len(parser.tokens) {
+-			// Queue is empty.
+-			need_more_tokens = true
+-		} else {
+-			// Check if any potential simple key may occupy the head position.
+-			if !yaml_parser_stale_simple_keys(parser) {
+-				return false
+-			}
+-
+-			for i := range parser.simple_keys {
+-				simple_key := &parser.simple_keys[i]
+-				if simple_key.possible && simple_key.token_number == parser.tokens_parsed {
+-					need_more_tokens = true
+-					break
+-				}
+-			}
+-		}
+-
+-		// We are finished.
+-		if !need_more_tokens {
+-			break
+-		}
+-		// Fetch the next token.
+-		if !yaml_parser_fetch_next_token(parser) {
+-			return false
+-		}
+-	}
+-
+-	parser.token_available = true
+-	return true
+-}
+-
+-// The dispatcher for token fetchers.
+-func yaml_parser_fetch_next_token(parser *yaml_parser_t) bool {
+-	// Ensure that the buffer is initialized.
+-	if parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) {
+-		return false
+-	}
+-
+-	// Check if we just started scanning.  Fetch STREAM-START then.
+-	if !parser.stream_start_produced {
+-		return yaml_parser_fetch_stream_start(parser)
+-	}
+-
+-	// Eat whitespaces and comments until we reach the next token.
+-	if !yaml_parser_scan_to_next_token(parser) {
+-		return false
+-	}
+-
+-	// Remove obsolete potential simple keys.
+-	if !yaml_parser_stale_simple_keys(parser) {
+-		return false
+-	}
+-
+-	// Check the indentation level against the current column.
+-	if !yaml_parser_unroll_indent(parser, parser.mark.column) {
+-		return false
+-	}
+-
+-	// Ensure that the buffer contains at least 4 characters.  4 is the length
+-	// of the longest indicators ('--- ' and '... ').
+-	if parser.unread < 4 && !yaml_parser_update_buffer(parser, 4) {
+-		return false
+-	}
+-
+-	// Is it the end of the stream?
+-	if is_z(parser.buffer, parser.buffer_pos) {
+-		return yaml_parser_fetch_stream_end(parser)
+-	}
+-
+-	// Is it a directive?
+-	if parser.mark.column == 0 && parser.buffer[parser.buffer_pos] == '%' {
+-		return yaml_parser_fetch_directive(parser)
+-	}
+-
+-	buf := parser.buffer
+-	pos := parser.buffer_pos
+-
+-	// Is it the document start indicator?
+-	if parser.mark.column == 0 && buf[pos] == '-' && buf[pos+1] == '-' && buf[pos+2] == '-' && is_blankz(buf, pos+3) {
+-		return yaml_parser_fetch_document_indicator(parser, yaml_DOCUMENT_START_TOKEN)
+-	}
+-
+-	// Is it the document end indicator?
+-	if parser.mark.column == 0 && buf[pos] == '.' && buf[pos+1] == '.' && buf[pos+2] == '.' && is_blankz(buf, pos+3) {
+-		return yaml_parser_fetch_document_indicator(parser, yaml_DOCUMENT_END_TOKEN)
+-	}
+-
+-	// Is it the flow sequence start indicator?
+-	if buf[pos] == '[' {
+-		return yaml_parser_fetch_flow_collection_start(parser, yaml_FLOW_SEQUENCE_START_TOKEN)
+-	}
+-
+-	// Is it the flow mapping start indicator?
+-	if parser.buffer[parser.buffer_pos] == '{' {
+-		return yaml_parser_fetch_flow_collection_start(parser, yaml_FLOW_MAPPING_START_TOKEN)
+-	}
+-
+-	// Is it the flow sequence end indicator?
+-	if parser.buffer[parser.buffer_pos] == ']' {
+-		return yaml_parser_fetch_flow_collection_end(parser,
+-			yaml_FLOW_SEQUENCE_END_TOKEN)
+-	}
+-
+-	// Is it the flow mapping end indicator?
+-	if parser.buffer[parser.buffer_pos] == '}' {
+-		return yaml_parser_fetch_flow_collection_end(parser,
+-			yaml_FLOW_MAPPING_END_TOKEN)
+-	}
+-
+-	// Is it the flow entry indicator?
+-	if parser.buffer[parser.buffer_pos] == ',' {
+-		return yaml_parser_fetch_flow_entry(parser)
+-	}
+-
+-	// Is it the block entry indicator?
+-	if parser.buffer[parser.buffer_pos] == '-' && is_blankz(parser.buffer, parser.buffer_pos+1) {
+-		return yaml_parser_fetch_block_entry(parser)
+-	}
+-
+-	// Is it the key indicator?
+-	if parser.buffer[parser.buffer_pos] == '?' && (parser.flow_level > 0 || is_blankz(parser.buffer, parser.buffer_pos+1)) {
+-		return yaml_parser_fetch_key(parser)
+-	}
+-
+-	// Is it the value indicator?
+-	if parser.buffer[parser.buffer_pos] == ':' && (parser.flow_level > 0 || is_blankz(parser.buffer, parser.buffer_pos+1)) {
+-		return yaml_parser_fetch_value(parser)
+-	}
+-
+-	// Is it an alias?
+-	if parser.buffer[parser.buffer_pos] == '*' {
+-		return yaml_parser_fetch_anchor(parser, yaml_ALIAS_TOKEN)
+-	}
+-
+-	// Is it an anchor?
+-	if parser.buffer[parser.buffer_pos] == '&' {
+-		return yaml_parser_fetch_anchor(parser, yaml_ANCHOR_TOKEN)
+-	}
+-
+-	// Is it a tag?
+-	if parser.buffer[parser.buffer_pos] == '!' {
+-		return yaml_parser_fetch_tag(parser)
+-	}
+-
+-	// Is it a literal scalar?
+-	if parser.buffer[parser.buffer_pos] == '|' && parser.flow_level == 0 {
+-		return yaml_parser_fetch_block_scalar(parser, true)
+-	}
+-
+-	// Is it a folded scalar?
+-	if parser.buffer[parser.buffer_pos] == '>' && parser.flow_level == 0 {
+-		return yaml_parser_fetch_block_scalar(parser, false)
+-	}
+-
+-	// Is it a single-quoted scalar?
+-	if parser.buffer[parser.buffer_pos] == '\'' {
+-		return yaml_parser_fetch_flow_scalar(parser, true)
+-	}
+-
+-	// Is it a double-quoted scalar?
+-	if parser.buffer[parser.buffer_pos] == '"' {
+-		return yaml_parser_fetch_flow_scalar(parser, false)
+-	}
+-
+-	// Is it a plain scalar?
+-	//
+-	// A plain scalar may start with any non-blank characters except
+-	//
+-	//      '-', '?', ':', ',', '[', ']', '{', '}',
+-	//      '#', '&', '*', '!', '|', '>', '\'', '\"',
+-	//      '%', '@', '`'.
+-	//
+-	// In the block context (and, for the '-' indicator, in the flow context
+-	// too), it may also start with the characters
+-	//
+-	//      '-', '?', ':'
+-	//
+-	// if it is followed by a non-space character.
+-	//
+-	// The last rule is more restrictive than the specification requires.
+-	// [Go] Make this logic more reasonable.
+-	//switch parser.buffer[parser.buffer_pos] {
+-	//case '-', '?', ':', ',', '?', '-', ',', ':', ']', '[', '}', '{', '&', '#', '!', '*', '>', '|', '"', '\'', '@', '%', '-', '`':
+-	//}
+-	if !(is_blankz(parser.buffer, parser.buffer_pos) || parser.buffer[parser.buffer_pos] == '-' ||
+-		parser.buffer[parser.buffer_pos] == '?' || parser.buffer[parser.buffer_pos] == ':' ||
+-		parser.buffer[parser.buffer_pos] == ',' || parser.buffer[parser.buffer_pos] == '[' ||
+-		parser.buffer[parser.buffer_pos] == ']' || parser.buffer[parser.buffer_pos] == '{' ||
+-		parser.buffer[parser.buffer_pos] == '}' || parser.buffer[parser.buffer_pos] == '#' ||
+-		parser.buffer[parser.buffer_pos] == '&' || parser.buffer[parser.buffer_pos] == '*' ||
+-		parser.buffer[parser.buffer_pos] == '!' || parser.buffer[parser.buffer_pos] == '|' ||
+-		parser.buffer[parser.buffer_pos] == '>' || parser.buffer[parser.buffer_pos] == '\'' ||
+-		parser.buffer[parser.buffer_pos] == '"' || parser.buffer[parser.buffer_pos] == '%' ||
+-		parser.buffer[parser.buffer_pos] == '@' || parser.buffer[parser.buffer_pos] == '`') ||
+-		(parser.buffer[parser.buffer_pos] == '-' && !is_blank(parser.buffer, parser.buffer_pos+1)) ||
+-		(parser.flow_level == 0 &&
+-			(parser.buffer[parser.buffer_pos] == '?' || parser.buffer[parser.buffer_pos] == ':') &&
+-			!is_blankz(parser.buffer, parser.buffer_pos+1)) {
+-		return yaml_parser_fetch_plain_scalar(parser)
+-	}
+-
+-	// If we don't determine the token type so far, it is an error.
+-	return yaml_parser_set_scanner_error(parser,
+-		"while scanning for the next token", parser.mark,
+-		"found character that cannot start any token")
+-}
+-
+-// Check the list of potential simple keys and remove the positions that
+-// cannot contain simple keys anymore.
+-func yaml_parser_stale_simple_keys(parser *yaml_parser_t) bool {
+-	// Check for a potential simple key for each flow level.
+-	for i := range parser.simple_keys {
+-		simple_key := &parser.simple_keys[i]
+-
+-		// The specification requires that a simple key
+-		//
+-		//  - is limited to a single line,
+-		//  - is shorter than 1024 characters.
+-		if simple_key.possible && (simple_key.mark.line < parser.mark.line || simple_key.mark.index+1024 < parser.mark.index) {
+-
+-			// Check if the potential simple key to be removed is required.
+-			if simple_key.required {
+-				return yaml_parser_set_scanner_error(parser,
+-					"while scanning a simple key", simple_key.mark,
+-					"could not find expected ':'")
+-			}
+-			simple_key.possible = false
+-		}
+-	}
+-	return true
+-}
+-
+-// Check if a simple key may start at the current position and add it if
+-// needed.
+-func yaml_parser_save_simple_key(parser *yaml_parser_t) bool {
+-	// A simple key is required at the current position if the scanner is in
+-	// the block context and the current column coincides with the indentation
+-	// level.
+-
+-	required := parser.flow_level == 0 && parser.indent == parser.mark.column
+-
+-	// A simple key is required only when it is the first token in the current
+-	// line.  Therefore it is always allowed.  But we add a check anyway.
+-	if required && !parser.simple_key_allowed {
+-		panic("should not happen")
+-	}
+-
+-	//
+-	// If the current position may start a simple key, save it.
+-	//
+-	if parser.simple_key_allowed {
+-		simple_key := yaml_simple_key_t{
+-			possible:     true,
+-			required:     required,
+-			token_number: parser.tokens_parsed + (len(parser.tokens) - parser.tokens_head),
+-		}
+-		simple_key.mark = parser.mark
+-
+-		if !yaml_parser_remove_simple_key(parser) {
+-			return false
+-		}
+-		parser.simple_keys[len(parser.simple_keys)-1] = simple_key
+-	}
+-	return true
+-}
+-
+-// Remove a potential simple key at the current flow level.
+-func yaml_parser_remove_simple_key(parser *yaml_parser_t) bool {
+-	i := len(parser.simple_keys) - 1
+-	if parser.simple_keys[i].possible {
+-		// If the key is required, it is an error.
+-		if parser.simple_keys[i].required {
+-			return yaml_parser_set_scanner_error(parser,
+-				"while scanning a simple key", parser.simple_keys[i].mark,
+-				"could not find expected ':'")
+-		}
+-	}
+-	// Remove the key from the stack.
+-	parser.simple_keys[i].possible = false
+-	return true
+-}
+-
+-// Increase the flow level and resize the simple key list if needed.
+-func yaml_parser_increase_flow_level(parser *yaml_parser_t) bool {
+-	// Reset the simple key on the next level.
+-	parser.simple_keys = append(parser.simple_keys, yaml_simple_key_t{})
+-
+-	// Increase the flow level.
+-	parser.flow_level++
+-	return true
+-}
+-
+-// Decrease the flow level.
+-func yaml_parser_decrease_flow_level(parser *yaml_parser_t) bool {
+-	if parser.flow_level > 0 {
+-		parser.flow_level--
+-		parser.simple_keys = parser.simple_keys[:len(parser.simple_keys)-1]
+-	}
+-	return true
+-}
+-
+-// Push the current indentation level to the stack and set the new level
+-// if the current column is greater than the indentation level.  In this case,
+-// append or insert the specified token into the token queue.
+-func yaml_parser_roll_indent(parser *yaml_parser_t, column, number int, typ yaml_token_type_t, mark yaml_mark_t) bool {
+-	// In the flow context, do nothing.
+-	if parser.flow_level > 0 {
+-		return true
+-	}
+-
+-	if parser.indent < column {
+-		// Push the current indentation level to the stack and set the new
+-		// indentation level.
+-		parser.indents = append(parser.indents, parser.indent)
+-		parser.indent = column
+-
+-		// Create a token and insert it into the queue.
+-		token := yaml_token_t{
+-			typ:        typ,
+-			start_mark: mark,
+-			end_mark:   mark,
+-		}
+-		if number > -1 {
+-			number -= parser.tokens_parsed
+-		}
+-		yaml_insert_token(parser, number, &token)
+-	}
+-	return true
+-}
+-
+-// Pop indentation levels from the indents stack until the current level
+-// becomes less than or equal to the column.  For each indentation level, append
+-// the BLOCK-END token.
+-func yaml_parser_unroll_indent(parser *yaml_parser_t, column int) bool {
+-	// In the flow context, do nothing.
+-	if parser.flow_level > 0 {
+-		return true
+-	}
+-
+-	// Loop through the indentation levels in the stack.
+-	for parser.indent > column {
+-		// Create a token and append it to the queue.
+-		token := yaml_token_t{
+-			typ:        yaml_BLOCK_END_TOKEN,
+-			start_mark: parser.mark,
+-			end_mark:   parser.mark,
+-		}
+-		yaml_insert_token(parser, -1, &token)
+-
+-		// Pop the indentation level.
+-		parser.indent = parser.indents[len(parser.indents)-1]
+-		parser.indents = parser.indents[:len(parser.indents)-1]
+-	}
+-	return true
+-}
+-
+-// Initialize the scanner and produce the STREAM-START token.
+-func yaml_parser_fetch_stream_start(parser *yaml_parser_t) bool {
+-
+-	// Set the initial indentation.
+-	parser.indent = -1
+-
+-	// Initialize the simple key stack.
+-	parser.simple_keys = append(parser.simple_keys, yaml_simple_key_t{})
+-
+-	// A simple key is allowed at the beginning of the stream.
+-	parser.simple_key_allowed = true
+-
+-	// We have started.
+-	parser.stream_start_produced = true
+-
+-	// Create the STREAM-START token and append it to the queue.
+-	token := yaml_token_t{
+-		typ:        yaml_STREAM_START_TOKEN,
+-		start_mark: parser.mark,
+-		end_mark:   parser.mark,
+-		encoding:   parser.encoding,
+-	}
+-	yaml_insert_token(parser, -1, &token)
+-	return true
+-}
+-
+-// Produce the STREAM-END token and shut down the scanner.
+-func yaml_parser_fetch_stream_end(parser *yaml_parser_t) bool {
+-
+-	// Force new line.
+-	if parser.mark.column != 0 {
+-		parser.mark.column = 0
+-		parser.mark.line++
+-	}
+-
+-	// Reset the indentation level.
+-	if !yaml_parser_unroll_indent(parser, -1) {
+-		return false
+-	}
+-
+-	// Reset simple keys.
+-	if !yaml_parser_remove_simple_key(parser) {
+-		return false
+-	}
+-
+-	parser.simple_key_allowed = false
+-
+-	// Create the STREAM-END token and append it to the queue.
+-	token := yaml_token_t{
+-		typ:        yaml_STREAM_END_TOKEN,
+-		start_mark: parser.mark,
+-		end_mark:   parser.mark,
+-	}
+-	yaml_insert_token(parser, -1, &token)
+-	return true
+-}
+-
+-// Produce a VERSION-DIRECTIVE or TAG-DIRECTIVE token.
+-func yaml_parser_fetch_directive(parser *yaml_parser_t) bool {
+-	// Reset the indentation level.
+-	if !yaml_parser_unroll_indent(parser, -1) {
+-		return false
+-	}
+-
+-	// Reset simple keys.
+-	if !yaml_parser_remove_simple_key(parser) {
+-		return false
+-	}
+-
+-	parser.simple_key_allowed = false
+-
+-	// Create the YAML-DIRECTIVE or TAG-DIRECTIVE token.
+-	token := yaml_token_t{}
+-	if !yaml_parser_scan_directive(parser, &token) {
+-		return false
+-	}
+-	// Append the token to the queue.
+-	yaml_insert_token(parser, -1, &token)
+-	return true
+-}
+-
+-// Produce the DOCUMENT-START or DOCUMENT-END token.
+-func yaml_parser_fetch_document_indicator(parser *yaml_parser_t, typ yaml_token_type_t) bool {
+-	// Reset the indentation level.
+-	if !yaml_parser_unroll_indent(parser, -1) {
+-		return false
+-	}
+-
+-	// Reset simple keys.
+-	if !yaml_parser_remove_simple_key(parser) {
+-		return false
+-	}
+-
+-	parser.simple_key_allowed = false
+-
+-	// Consume the token.
+-	start_mark := parser.mark
+-
+-	skip(parser)
+-	skip(parser)
+-	skip(parser)
+-
+-	end_mark := parser.mark
+-
+-	// Create the DOCUMENT-START or DOCUMENT-END token.
+-	token := yaml_token_t{
+-		typ:        typ,
+-		start_mark: start_mark,
+-		end_mark:   end_mark,
+-	}
+-	// Append the token to the queue.
+-	yaml_insert_token(parser, -1, &token)
+-	return true
+-}
+-
+-// Produce the FLOW-SEQUENCE-START or FLOW-MAPPING-START token.
+-func yaml_parser_fetch_flow_collection_start(parser *yaml_parser_t, typ yaml_token_type_t) bool {
+-	// The indicators '[' and '{' may start a simple key.
+-	if !yaml_parser_save_simple_key(parser) {
+-		return false
+-	}
+-
+-	// Increase the flow level.
+-	if !yaml_parser_increase_flow_level(parser) {
+-		return false
+-	}
+-
+-	// A simple key may follow the indicators '[' and '{'.
+-	parser.simple_key_allowed = true
+-
+-	// Consume the token.
+-	start_mark := parser.mark
+-	skip(parser)
+-	end_mark := parser.mark
+-
+-	// Create the FLOW-SEQUENCE-START or FLOW-MAPPING-START token.
+-	token := yaml_token_t{
+-		typ:        typ,
+-		start_mark: start_mark,
+-		end_mark:   end_mark,
+-	}
+-	// Append the token to the queue.
+-	yaml_insert_token(parser, -1, &token)
+-	return true
+-}
+-
+-// Produce the FLOW-SEQUENCE-END or FLOW-MAPPING-END token.
+-func yaml_parser_fetch_flow_collection_end(parser *yaml_parser_t, typ yaml_token_type_t) bool {
+-	// Reset any potential simple key on the current flow level.
+-	if !yaml_parser_remove_simple_key(parser) {
+-		return false
+-	}
+-
+-	// Decrease the flow level.
+-	if !yaml_parser_decrease_flow_level(parser) {
+-		return false
+-	}
+-
+-	// No simple keys after the indicators ']' and '}'.
+-	parser.simple_key_allowed = false
+-
+-	// Consume the token.
+-
+-	start_mark := parser.mark
+-	skip(parser)
+-	end_mark := parser.mark
+-
+-	// Create the FLOW-SEQUENCE-END or FLOW-MAPPING-END token.
+-	token := yaml_token_t{
+-		typ:        typ,
+-		start_mark: start_mark,
+-		end_mark:   end_mark,
+-	}
+-	// Append the token to the queue.
+-	yaml_insert_token(parser, -1, &token)
+-	return true
+-}
+-
+-// Produce the FLOW-ENTRY token.
+-func yaml_parser_fetch_flow_entry(parser *yaml_parser_t) bool {
+-	// Reset any potential simple keys on the current flow level.
+-	if !yaml_parser_remove_simple_key(parser) {
+-		return false
+-	}
+-
+-	// Simple keys are allowed after ','.
+-	parser.simple_key_allowed = true
+-
+-	// Consume the token.
+-	start_mark := parser.mark
+-	skip(parser)
+-	end_mark := parser.mark
+-
+-	// Create the FLOW-ENTRY token and append it to the queue.
+-	token := yaml_token_t{
+-		typ:        yaml_FLOW_ENTRY_TOKEN,
+-		start_mark: start_mark,
+-		end_mark:   end_mark,
+-	}
+-	yaml_insert_token(parser, -1, &token)
+-	return true
+-}
+-
+-// Produce the BLOCK-ENTRY token.
+-func yaml_parser_fetch_block_entry(parser *yaml_parser_t) bool {
+-	// Check if the scanner is in the block context.
+-	if parser.flow_level == 0 {
+-		// Check if we are allowed to start a new entry.
+-		if !parser.simple_key_allowed {
+-			return yaml_parser_set_scanner_error(parser, "", parser.mark,
+-				"block sequence entries are not allowed in this context")
+-		}
+-		// Add the BLOCK-SEQUENCE-START token if needed.
+-		if !yaml_parser_roll_indent(parser, parser.mark.column, -1, yaml_BLOCK_SEQUENCE_START_TOKEN, parser.mark) {
+-			return false
+-		}
+-	} else {
+-		// It is an error for the '-' indicator to occur in the flow context,
+-		// but we let the Parser detect and report about it because the Parser
+-		// is able to point to the context.
+-	}
+-
+-	// Reset any potential simple keys on the current flow level.
+-	if !yaml_parser_remove_simple_key(parser) {
+-		return false
+-	}
+-
+-	// Simple keys are allowed after '-'.
+-	parser.simple_key_allowed = true
+-
+-	// Consume the token.
+-	start_mark := parser.mark
+-	skip(parser)
+-	end_mark := parser.mark
+-
+-	// Create the BLOCK-ENTRY token and append it to the queue.
+-	token := yaml_token_t{
+-		typ:        yaml_BLOCK_ENTRY_TOKEN,
+-		start_mark: start_mark,
+-		end_mark:   end_mark,
+-	}
+-	yaml_insert_token(parser, -1, &token)
+-	return true
+-}
+-
+-// Produce the KEY token.
+-func yaml_parser_fetch_key(parser *yaml_parser_t) bool {
+-
+-	// In the block context, additional checks are required.
+-	if parser.flow_level == 0 {
+-		// Check if we are allowed to start a new key (not necessarily simple).
+-		if !parser.simple_key_allowed {
+-			return yaml_parser_set_scanner_error(parser, "", parser.mark,
+-				"mapping keys are not allowed in this context")
+-		}
+-		// Add the BLOCK-MAPPING-START token if needed.
+-		if !yaml_parser_roll_indent(parser, parser.mark.column, -1, yaml_BLOCK_MAPPING_START_TOKEN, parser.mark) {
+-			return false
+-		}
+-	}
+-
+-	// Reset any potential simple keys on the current flow level.
+-	if !yaml_parser_remove_simple_key(parser) {
+-		return false
+-	}
+-
+-	// Simple keys are allowed after '?' in the block context.
+-	parser.simple_key_allowed = parser.flow_level == 0
+-
+-	// Consume the token.
+-	start_mark := parser.mark
+-	skip(parser)
+-	end_mark := parser.mark
+-
+-	// Create the KEY token and append it to the queue.
+-	token := yaml_token_t{
+-		typ:        yaml_KEY_TOKEN,
+-		start_mark: start_mark,
+-		end_mark:   end_mark,
+-	}
+-	yaml_insert_token(parser, -1, &token)
+-	return true
+-}
+-
+-// Produce the VALUE token.
+-func yaml_parser_fetch_value(parser *yaml_parser_t) bool {
+-
+-	simple_key := &parser.simple_keys[len(parser.simple_keys)-1]
+-
+-	// Have we found a simple key?
+-	if simple_key.possible {
+-		// Create the KEY token and insert it into the queue.
+-		token := yaml_token_t{
+-			typ:        yaml_KEY_TOKEN,
+-			start_mark: simple_key.mark,
+-			end_mark:   simple_key.mark,
+-		}
+-		yaml_insert_token(parser, simple_key.token_number-parser.tokens_parsed, &token)
+-
+-		// In the block context, we may need to add the BLOCK-MAPPING-START token.
+-		if !yaml_parser_roll_indent(parser, simple_key.mark.column,
+-			simple_key.token_number,
+-			yaml_BLOCK_MAPPING_START_TOKEN, simple_key.mark) {
+-			return false
+-		}
+-
+-		// Remove the simple key.
+-		simple_key.possible = false
+-
+-		// A simple key cannot follow another simple key.
+-		parser.simple_key_allowed = false
+-
+-	} else {
+-		// The ':' indicator follows a complex key.
+-
+-		// In the block context, extra checks are required.
+-		if parser.flow_level == 0 {
+-
+-			// Check if we are allowed to start a complex value.
+-			if !parser.simple_key_allowed {
+-				return yaml_parser_set_scanner_error(parser, "", parser.mark,
+-					"mapping values are not allowed in this context")
+-			}
+-
+-			// Add the BLOCK-MAPPING-START token if needed.
+-			if !yaml_parser_roll_indent(parser, parser.mark.column, -1, yaml_BLOCK_MAPPING_START_TOKEN, parser.mark) {
+-				return false
+-			}
+-		}
+-
+-		// Simple keys after ':' are allowed in the block context.
+-		parser.simple_key_allowed = parser.flow_level == 0
+-	}
+-
+-	// Consume the token.
+-	start_mark := parser.mark
+-	skip(parser)
+-	end_mark := parser.mark
+-
+-	// Create the VALUE token and append it to the queue.
+-	token := yaml_token_t{
+-		typ:        yaml_VALUE_TOKEN,
+-		start_mark: start_mark,
+-		end_mark:   end_mark,
+-	}
+-	yaml_insert_token(parser, -1, &token)
+-	return true
+-}
+-
+-// Produce the ALIAS or ANCHOR token.
+-func yaml_parser_fetch_anchor(parser *yaml_parser_t, typ yaml_token_type_t) bool {
+-	// An anchor or an alias could be a simple key.
+-	if !yaml_parser_save_simple_key(parser) {
+-		return false
+-	}
+-
+-	// A simple key cannot follow an anchor or an alias.
+-	parser.simple_key_allowed = false
+-
+-	// Create the ALIAS or ANCHOR token and append it to the queue.
+-	var token yaml_token_t
+-	if !yaml_parser_scan_anchor(parser, &token, typ) {
+-		return false
+-	}
+-	yaml_insert_token(parser, -1, &token)
+-	return true
+-}
+-
+-// Produce the TAG token.
+-func yaml_parser_fetch_tag(parser *yaml_parser_t) bool {
+-	// A tag could be a simple key.
+-	if !yaml_parser_save_simple_key(parser) {
+-		return false
+-	}
+-
+-	// A simple key cannot follow a tag.
+-	parser.simple_key_allowed = false
+-
+-	// Create the TAG token and append it to the queue.
+-	var token yaml_token_t
+-	if !yaml_parser_scan_tag(parser, &token) {
+-		return false
+-	}
+-	yaml_insert_token(parser, -1, &token)
+-	return true
+-}
+-
+-// Produce the SCALAR(...,literal) or SCALAR(...,folded) tokens.
+-func yaml_parser_fetch_block_scalar(parser *yaml_parser_t, literal bool) bool {
+-	// Remove any potential simple keys.
+-	if !yaml_parser_remove_simple_key(parser) {
+-		return false
+-	}
+-
+-	// A simple key may follow a block scalar.
+-	parser.simple_key_allowed = true
+-
+-	// Create the SCALAR token and append it to the queue.
+-	var token yaml_token_t
+-	if !yaml_parser_scan_block_scalar(parser, &token, literal) {
+-		return false
+-	}
+-	yaml_insert_token(parser, -1, &token)
+-	return true
+-}
+-
+-// Produce the SCALAR(...,single-quoted) or SCALAR(...,double-quoted) tokens.
+-func yaml_parser_fetch_flow_scalar(parser *yaml_parser_t, single bool) bool {
+-	// A flow scalar could be a simple key.
+-	if !yaml_parser_save_simple_key(parser) {
+-		return false
+-	}
+-
+-	// A simple key cannot follow a flow scalar.
+-	parser.simple_key_allowed = false
+-
+-	// Create the SCALAR token and append it to the queue.
+-	var token yaml_token_t
+-	if !yaml_parser_scan_flow_scalar(parser, &token, single) {
+-		return false
+-	}
+-	yaml_insert_token(parser, -1, &token)
+-	return true
+-}
+-
+-// Produce the SCALAR(...,plain) token.
+-func yaml_parser_fetch_plain_scalar(parser *yaml_parser_t) bool {
+-	// A plain scalar could be a simple key.
+-	if !yaml_parser_save_simple_key(parser) {
+-		return false
+-	}
+-
+-	// A simple key cannot follow a plain scalar.
+-	parser.simple_key_allowed = false
+-
+-	// Create the SCALAR token and append it to the queue.
+-	var token yaml_token_t
+-	if !yaml_parser_scan_plain_scalar(parser, &token) {
+-		return false
+-	}
+-	yaml_insert_token(parser, -1, &token)
+-	return true
+-}
+-
+-// Eat whitespaces and comments until the next token is found.
+-func yaml_parser_scan_to_next_token(parser *yaml_parser_t) bool {
+-
+-	// Loop until the next token is found.
+-	for {
+-		// Allow the BOM mark to start a line.
+-		if parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) {
+-			return false
+-		}
+-		if parser.mark.column == 0 && is_bom(parser.buffer, parser.buffer_pos) {
+-			skip(parser)
+-		}
+-
+-		// Eat whitespaces.
+-		// Tabs are allowed:
+-		//  - in the flow context
+-		//  - in the block context, but not at the beginning of the line or
+-		//  after '-', '?', or ':' (complex value).
+-		if parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) {
+-			return false
+-		}
+-
+-		for parser.buffer[parser.buffer_pos] == ' ' || ((parser.flow_level > 0 || !parser.simple_key_allowed) && parser.buffer[parser.buffer_pos] == '\t') {
+-			skip(parser)
+-			if parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) {
+-				return false
+-			}
+-		}
+-
+-		// Eat a comment until a line break.
+-		if parser.buffer[parser.buffer_pos] == '#' {
+-			for !is_breakz(parser.buffer, parser.buffer_pos) {
+-				skip(parser)
+-				if parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) {
+-					return false
+-				}
+-			}
+-		}
+-
+-		// If it is a line break, eat it.
+-		if is_break(parser.buffer, parser.buffer_pos) {
+-			if parser.unread < 2 && !yaml_parser_update_buffer(parser, 2) {
+-				return false
+-			}
+-			skip_line(parser)
+-
+-			// In the block context, a new line may start a simple key.
+-			if parser.flow_level == 0 {
+-				parser.simple_key_allowed = true
+-			}
+-		} else {
+-			break // We have found a token.
+-		}
+-	}
+-
+-	return true
+-}
+-
+-// Scan a YAML-DIRECTIVE or TAG-DIRECTIVE token.
+-//
+-// Scope:
+-//      %YAML    1.1    # a comment \n
+-//      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+-//      %TAG    !yaml!  tag:yaml.org,2002:  \n
+-//      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+-//
+-func yaml_parser_scan_directive(parser *yaml_parser_t, token *yaml_token_t) bool {
+-	// Eat '%'.
+-	start_mark := parser.mark
+-	skip(parser)
+-
+-	// Scan the directive name.
+-	var name []byte
+-	if !yaml_parser_scan_directive_name(parser, start_mark, &name) {
+-		return false
+-	}
+-
+-	// Is it a YAML directive?
+-	if bytes.Equal(name, []byte("YAML")) {
+-		// Scan the VERSION directive value.
+-		var major, minor int8
+-		if !yaml_parser_scan_version_directive_value(parser, start_mark, &major, &minor) {
+-			return false
+-		}
+-		end_mark := parser.mark
+-
+-		// Create a VERSION-DIRECTIVE token.
+-		*token = yaml_token_t{
+-			typ:        yaml_VERSION_DIRECTIVE_TOKEN,
+-			start_mark: start_mark,
+-			end_mark:   end_mark,
+-			major:      major,
+-			minor:      minor,
+-		}
+-
+-		// Is it a TAG directive?
+-	} else if bytes.Equal(name, []byte("TAG")) {
+-		// Scan the TAG directive value.
+-		var handle, prefix []byte
+-		if !yaml_parser_scan_tag_directive_value(parser, start_mark, &handle, &prefix) {
+-			return false
+-		}
+-		end_mark := parser.mark
+-
+-		// Create a TAG-DIRECTIVE token.
+-		*token = yaml_token_t{
+-			typ:        yaml_TAG_DIRECTIVE_TOKEN,
+-			start_mark: start_mark,
+-			end_mark:   end_mark,
+-			value:      handle,
+-			prefix:     prefix,
+-		}
+-
+-		// Unknown directive.
+-	} else {
+-		yaml_parser_set_scanner_error(parser, "while scanning a directive",
+-			start_mark, "found unknown directive name")
+-		return false
+-	}
+-
+-	// Eat the rest of the line including any comments.
+-	if parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) {
+-		return false
+-	}
+-
+-	for is_blank(parser.buffer, parser.buffer_pos) {
+-		skip(parser)
+-		if parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) {
+-			return false
+-		}
+-	}
+-
+-	if parser.buffer[parser.buffer_pos] == '#' {
+-		for !is_breakz(parser.buffer, parser.buffer_pos) {
+-			skip(parser)
+-			if parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) {
+-				return false
+-			}
+-		}
+-	}
+-
+-	// Check if we are at the end of the line.
+-	if !is_breakz(parser.buffer, parser.buffer_pos) {
+-		yaml_parser_set_scanner_error(parser, "while scanning a directive",
+-			start_mark, "did not find expected comment or line break")
+-		return false
+-	}
+-
+-	// Eat a line break.
+-	if is_break(parser.buffer, parser.buffer_pos) {
+-		if parser.unread < 2 && !yaml_parser_update_buffer(parser, 2) {
+-			return false
+-		}
+-		skip_line(parser)
+-	}
+-
+-	return true
+-}
+-
+-// Scan the directive name.
+-//
+-// Scope:
+-//      %YAML   1.1     # a comment \n
+-//       ^^^^
+-//      %TAG    !yaml!  tag:yaml.org,2002:  \n
+-//       ^^^
+-//
+-func yaml_parser_scan_directive_name(parser *yaml_parser_t, start_mark yaml_mark_t, name *[]byte) bool {
+-	// Consume the directive name.
+-	if parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) {
+-		return false
+-	}
+-
+-	var s []byte
+-	for is_alpha(parser.buffer, parser.buffer_pos) {
+-		s = read(parser, s)
+-		if parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) {
+-			return false
+-		}
+-	}
+-
+-	// Check if the name is empty.
+-	if len(s) == 0 {
+-		yaml_parser_set_scanner_error(parser, "while scanning a directive",
+-			start_mark, "could not find expected directive name")
+-		return false
+-	}
+-
+-	// Check for a blank character after the name.
+-	if !is_blankz(parser.buffer, parser.buffer_pos) {
+-		yaml_parser_set_scanner_error(parser, "while scanning a directive",
+-			start_mark, "found unexpected non-alphabetical character")
+-		return false
+-	}
+-	*name = s
+-	return true
+-}
+-
+-// Scan the value of VERSION-DIRECTIVE.
+-//
+-// Scope:
+-//      %YAML   1.1     # a comment \n
+-//           ^^^^^^
+-func yaml_parser_scan_version_directive_value(parser *yaml_parser_t, start_mark yaml_mark_t, major, minor *int8) bool {
+-	// Eat whitespaces.
+-	if parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) {
+-		return false
+-	}
+-	for is_blank(parser.buffer, parser.buffer_pos) {
+-		skip(parser)
+-		if parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) {
+-			return false
+-		}
+-	}
+-
+-	// Consume the major version number.
+-	if !yaml_parser_scan_version_directive_number(parser, start_mark, major) {
+-		return false
+-	}
+-
+-	// Eat '.'.
+-	if parser.buffer[parser.buffer_pos] != '.' {
+-		return yaml_parser_set_scanner_error(parser, "while scanning a %YAML directive",
+-			start_mark, "did not find expected digit or '.' character")
+-	}
+-
+-	skip(parser)
+-
+-	// Consume the minor version number.
+-	if !yaml_parser_scan_version_directive_number(parser, start_mark, minor) {
+-		return false
+-	}
+-	return true
+-}
+-
+-const max_number_length = 2
+-
+-// Scan the version number of VERSION-DIRECTIVE.
+-//
+-// Scope:
+-//      %YAML   1.1     # a comment \n
+-//              ^
+-//      %YAML   1.1     # a comment \n
+-//                ^
+-func yaml_parser_scan_version_directive_number(parser *yaml_parser_t, start_mark yaml_mark_t, number *int8) bool {
+-
+-	// Repeat while the next character is a digit.
+-	if parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) {
+-		return false
+-	}
+-	var value, length int8
+-	for is_digit(parser.buffer, parser.buffer_pos) {
+-		// Check if the number is too long.
+-		length++
+-		if length > max_number_length {
+-			return yaml_parser_set_scanner_error(parser, "while scanning a %YAML directive",
+-				start_mark, "found extremely long version number")
+-		}
+-		value = value*10 + int8(as_digit(parser.buffer, parser.buffer_pos))
+-		skip(parser)
+-		if parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) {
+-			return false
+-		}
+-	}
+-
+-	// Check if the number was present.
+-	if length == 0 {
+-		return yaml_parser_set_scanner_error(parser, "while scanning a %YAML directive",
+-			start_mark, "did not find expected version number")
+-	}
+-	*number = value
+-	return true
+-}
+-
+-// Scan the value of a TAG-DIRECTIVE token.
+-//
+-// Scope:
+-//      %TAG    !yaml!  tag:yaml.org,2002:  \n
+-//          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+-//
+-func yaml_parser_scan_tag_directive_value(parser *yaml_parser_t, start_mark yaml_mark_t, handle, prefix *[]byte) bool {
+-	var handle_value, prefix_value []byte
+-
+-	// Eat whitespaces.
+-	if parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) {
+-		return false
+-	}
+-
+-	for is_blank(parser.buffer, parser.buffer_pos) {
+-		skip(parser)
+-		if parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) {
+-			return false
+-		}
+-	}
+-
+-	// Scan a handle.
+-	if !yaml_parser_scan_tag_handle(parser, true, start_mark, &handle_value) {
+-		return false
+-	}
+-
+-	// Expect a whitespace.
+-	if parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) {
+-		return false
+-	}
+-	if !is_blank(parser.buffer, parser.buffer_pos) {
+-		yaml_parser_set_scanner_error(parser, "while scanning a %TAG directive",
+-			start_mark, "did not find expected whitespace")
+-		return false
+-	}
+-
+-	// Eat whitespaces.
+-	for is_blank(parser.buffer, parser.buffer_pos) {
+-		skip(parser)
+-		if parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) {
+-			return false
+-		}
+-	}
+-
+-	// Scan a prefix.
+-	if !yaml_parser_scan_tag_uri(parser, true, nil, start_mark, &prefix_value) {
+-		return false
+-	}
+-
+-	// Expect a whitespace or line break.
+-	if parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) {
+-		return false
+-	}
+-	if !is_blankz(parser.buffer, parser.buffer_pos) {
+-		yaml_parser_set_scanner_error(parser, "while scanning a %TAG directive",
+-			start_mark, "did not find expected whitespace or line break")
+-		return false
+-	}
+-
+-	*handle = handle_value
+-	*prefix = prefix_value
+-	return true
+-}
+-
+-func yaml_parser_scan_anchor(parser *yaml_parser_t, token *yaml_token_t, typ yaml_token_type_t) bool {
+-	var s []byte
+-
+-	// Eat the indicator character.
+-	start_mark := parser.mark
+-	skip(parser)
+-
+-	// Consume the value.
+-	if parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) {
+-		return false
+-	}
+-
+-	for is_alpha(parser.buffer, parser.buffer_pos) {
+-		s = read(parser, s)
+-		if parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) {
+-			return false
+-		}
+-	}
+-
+-	end_mark := parser.mark
+-
+-	/*
+-	 * Check if length of the anchor is greater than 0 and it is followed by
+-	 * a whitespace character or one of the indicators:
+-	 *
+-	 *      '?', ':', ',', ']', '}', '%', '@', '`'.
+-	 */
+-
+-	if len(s) == 0 ||
+-		!(is_blankz(parser.buffer, parser.buffer_pos) || parser.buffer[parser.buffer_pos] == '?' ||
+-			parser.buffer[parser.buffer_pos] == ':' || parser.buffer[parser.buffer_pos] == ',' ||
+-			parser.buffer[parser.buffer_pos] == ']' || parser.buffer[parser.buffer_pos] == '}' ||
+-			parser.buffer[parser.buffer_pos] == '%' || parser.buffer[parser.buffer_pos] == '@' ||
+-			parser.buffer[parser.buffer_pos] == '`') {
+-		context := "while scanning an alias"
+-		if typ == yaml_ANCHOR_TOKEN {
+-			context = "while scanning an anchor"
+-		}
+-		yaml_parser_set_scanner_error(parser, context, start_mark,
+-			"did not find expected alphabetic or numeric character")
+-		return false
+-	}
+-
+-	// Create a token.
+-	*token = yaml_token_t{
+-		typ:        typ,
+-		start_mark: start_mark,
+-		end_mark:   end_mark,
+-		value:      s,
+-	}
+-
+-	return true
+-}
+-
+-/*
+- * Scan a TAG token.
+- */
+-
+-func yaml_parser_scan_tag(parser *yaml_parser_t, token *yaml_token_t) bool {
+-	var handle, suffix []byte
+-
+-	start_mark := parser.mark
+-
+-	// Check if the tag is in the canonical form.
+-	if parser.unread < 2 && !yaml_parser_update_buffer(parser, 2) {
+-		return false
+-	}
+-
+-	if parser.buffer[parser.buffer_pos+1] == '<' {
+-		// Keep the handle as ''
+-
+-		// Eat '!<'
+-		skip(parser)
+-		skip(parser)
+-
+-		// Consume the tag value.
+-		if !yaml_parser_scan_tag_uri(parser, false, nil, start_mark, &suffix) {
+-			return false
+-		}
+-
+-		// Check for '>' and eat it.
+-		if parser.buffer[parser.buffer_pos] != '>' {
+-			yaml_parser_set_scanner_error(parser, "while scanning a tag",
+-				start_mark, "did not find the expected '>'")
+-			return false
+-		}
+-
+-		skip(parser)
+-	} else {
+-		// The tag has either the '!suffix' or the '!handle!suffix' form.
+-
+-		// First, try to scan a handle.
+-		if !yaml_parser_scan_tag_handle(parser, false, start_mark, &handle) {
+-			return false
+-		}
+-
+-		// Check if it is, indeed, handle.
+-		if handle[0] == '!' && len(handle) > 1 && handle[len(handle)-1] == '!' {
+-			// Scan the suffix now.
+-			if !yaml_parser_scan_tag_uri(parser, false, nil, start_mark, &suffix) {
+-				return false
+-			}
+-		} else {
+-			// It wasn't a handle after all.  Scan the rest of the tag.
+-			if !yaml_parser_scan_tag_uri(parser, false, handle, start_mark, &suffix) {
+-				return false
+-			}
+-
+-			// Set the handle to '!'.
+-			handle = []byte{'!'}
+-
+-			// A special case: the '!' tag.  Set the handle to '' and the
+-			// suffix to '!'.
+-			if len(suffix) == 0 {
+-				handle, suffix = suffix, handle
+-			}
+-		}
+-	}
+-
+-	// Check the character which ends the tag.
+-	if parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) {
+-		return false
+-	}
+-	if !is_blankz(parser.buffer, parser.buffer_pos) {
+-		yaml_parser_set_scanner_error(parser, "while scanning a tag",
+-			start_mark, "did not find expected whitespace or line break")
+-		return false
+-	}
+-
+-	end_mark := parser.mark
+-
+-	// Create a token.
+-	*token = yaml_token_t{
+-		typ:        yaml_TAG_TOKEN,
+-		start_mark: start_mark,
+-		end_mark:   end_mark,
+-		value:      handle,
+-		suffix:     suffix,
+-	}
+-	return true
+-}
+-
+-// Scan a tag handle.
+-func yaml_parser_scan_tag_handle(parser *yaml_parser_t, directive bool, start_mark yaml_mark_t, handle *[]byte) bool {
+-	// Check the initial '!' character.
+-	if parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) {
+-		return false
+-	}
+-	if parser.buffer[parser.buffer_pos] != '!' {
+-		yaml_parser_set_scanner_tag_error(parser, directive,
+-			start_mark, "did not find expected '!'")
+-		return false
+-	}
+-
+-	var s []byte
+-
+-	// Copy the '!' character.
+-	s = read(parser, s)
+-
+-	// Copy all subsequent alphabetical and numerical characters.
+-	if parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) {
+-		return false
+-	}
+-	for is_alpha(parser.buffer, parser.buffer_pos) {
+-		s = read(parser, s)
+-		if parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) {
+-			return false
+-		}
+-	}
+-
+-	// Check if the trailing character is '!' and copy it.
+-	if parser.buffer[parser.buffer_pos] == '!' {
+-		s = read(parser, s)
+-	} else {
+-		// It's either the '!' tag or not really a tag handle.  If it's a %TAG
+-		// directive, it's an error.  If it's a tag token, it must be a part of URI.
+-		if directive && !(s[0] == '!' && s[1] == 0) {
+-			yaml_parser_set_scanner_tag_error(parser, directive,
+-				start_mark, "did not find expected '!'")
+-			return false
+-		}
+-	}
+-
+-	*handle = s
+-	return true
+-}
+-
+-// Scan a tag.
+-func yaml_parser_scan_tag_uri(parser *yaml_parser_t, directive bool, head []byte, start_mark yaml_mark_t, uri *[]byte) bool {
+-	//size_t length = head ? strlen((char *)head) : 0
+-	var s []byte
+-
+-	// Copy the head if needed.
+-	//
+-	// Note that we don't copy the leading '!' character.
+-	if len(head) > 1 {
+-		s = append(s, head[1:]...)
+-	}
+-
+-	// Scan the tag.
+-	if parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) {
+-		return false
+-	}
+-
+-	// The set of characters that may appear in URI is as follows:
+-	//
+-	//      '0'-'9', 'A'-'Z', 'a'-'z', '_', '-', ';', '/', '?', ':', '@', '&',
+-	//      '=', '+', '$', ',', '.', '!', '~', '*', '\'', '(', ')', '[', ']',
+-	//      '%'.
+-	// [Go] Convert this into more reasonable logic.
+-	for is_alpha(parser.buffer, parser.buffer_pos) || parser.buffer[parser.buffer_pos] == ';' ||
+-		parser.buffer[parser.buffer_pos] == '/' || parser.buffer[parser.buffer_pos] == '?' ||
+-		parser.buffer[parser.buffer_pos] == ':' || parser.buffer[parser.buffer_pos] == '@' ||
+-		parser.buffer[parser.buffer_pos] == '&' || parser.buffer[parser.buffer_pos] == '=' ||
+-		parser.buffer[parser.buffer_pos] == '+' || parser.buffer[parser.buffer_pos] == '$' ||
+-		parser.buffer[parser.buffer_pos] == ',' || parser.buffer[parser.buffer_pos] == '.' ||
+-		parser.buffer[parser.buffer_pos] == '!' || parser.buffer[parser.buffer_pos] == '~' ||
+-		parser.buffer[parser.buffer_pos] == '*' || parser.buffer[parser.buffer_pos] == '\'' ||
+-		parser.buffer[parser.buffer_pos] == '(' || parser.buffer[parser.buffer_pos] == ')' ||
+-		parser.buffer[parser.buffer_pos] == '[' || parser.buffer[parser.buffer_pos] == ']' ||
+-		parser.buffer[parser.buffer_pos] == '%' {
+-		// Check if it is a URI-escape sequence.
+-		if parser.buffer[parser.buffer_pos] == '%' {
+-			if !yaml_parser_scan_uri_escapes(parser, directive, start_mark, &s) {
+-				return false
+-			}
+-		} else {
+-			s = read(parser, s)
+-		}
+-		if parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) {
+-			return false
+-		}
+-	}
+-
+-	// Check if the tag is non-empty.
+-	if len(s) == 0 {
+-		yaml_parser_set_scanner_tag_error(parser, directive,
+-			start_mark, "did not find expected tag URI")
+-		return false
+-	}
+-	*uri = s
+-	return true
+-}
+-
+-// Decode an URI-escape sequence corresponding to a single UTF-8 character.
+-func yaml_parser_scan_uri_escapes(parser *yaml_parser_t, directive bool, start_mark yaml_mark_t, s *[]byte) bool {
+-
+-	// Decode the required number of characters.
+-	w := 1024
+-	for w > 0 {
+-		// Check for a URI-escaped octet.
+-		if parser.unread < 3 && !yaml_parser_update_buffer(parser, 3) {
+-			return false
+-		}
+-
+-		if !(parser.buffer[parser.buffer_pos] == '%' &&
+-			is_hex(parser.buffer, parser.buffer_pos+1) &&
+-			is_hex(parser.buffer, parser.buffer_pos+2)) {
+-			return yaml_parser_set_scanner_tag_error(parser, directive,
+-				start_mark, "did not find URI escaped octet")
+-		}
+-
+-		// Get the octet.
+-		octet := byte((as_hex(parser.buffer, parser.buffer_pos+1) << 4) + as_hex(parser.buffer, parser.buffer_pos+2))
+-
+-		// If it is the leading octet, determine the length of the UTF-8 sequence.
+-		if w == 1024 {
+-			w = width(octet)
+-			if w == 0 {
+-				return yaml_parser_set_scanner_tag_error(parser, directive,
+-					start_mark, "found an incorrect leading UTF-8 octet")
+-			}
+-		} else {
+-			// Check if the trailing octet is correct.
+-			if octet&0xC0 != 0x80 {
+-				return yaml_parser_set_scanner_tag_error(parser, directive,
+-					start_mark, "found an incorrect trailing UTF-8 octet")
+-			}
+-		}
+-
+-		// Copy the octet and move the pointers.
+-		*s = append(*s, octet)
+-		skip(parser)
+-		skip(parser)
+-		skip(parser)
+-		w--
+-	}
+-	return true
+-}
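The `yaml_parser_scan_uri_escapes` function removed above decodes `%XX` octet sequences into the bytes of a single UTF-8 character, deriving the sequence width from the leading octet and validating each continuation octet. A standalone sketch of the same validation, with a hypothetical helper name not present in the original API:

```go
package main

import (
	"fmt"
	"strconv"
	"unicode/utf8"
)

// decodeURIEscapes decodes a run of %XX escapes at the start of s into the
// bytes of one UTF-8 character: the first octet fixes the sequence width,
// and every continuation octet must match the 10xxxxxx pattern.
func decodeURIEscapes(s string) ([]byte, error) {
	var out []byte
	w := 0
	for {
		if len(s) < 3 || s[0] != '%' {
			return nil, fmt.Errorf("did not find URI escaped octet")
		}
		n, err := strconv.ParseUint(s[1:3], 16, 8)
		if err != nil {
			return nil, fmt.Errorf("did not find URI escaped octet")
		}
		octet := byte(n)
		if w == 0 {
			// Leading octet: derive the UTF-8 sequence width.
			switch {
			case octet&0x80 == 0x00:
				w = 1
			case octet&0xE0 == 0xC0:
				w = 2
			case octet&0xF0 == 0xE0:
				w = 3
			case octet&0xF8 == 0xF0:
				w = 4
			default:
				return nil, fmt.Errorf("found an incorrect leading UTF-8 octet")
			}
		} else if octet&0xC0 != 0x80 {
			return nil, fmt.Errorf("found an incorrect trailing UTF-8 octet")
		}
		out = append(out, octet)
		s = s[3:]
		if len(out) == w {
			if !utf8.Valid(out) {
				return nil, fmt.Errorf("invalid UTF-8 sequence")
			}
			return out, nil
		}
	}
}

func main() {
	b, err := decodeURIEscapes("%C3%A9") // the two octets of U+00E9
	fmt.Println(string(b), err)
}
```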
+-
+-// Scan a block scalar.
+-func yaml_parser_scan_block_scalar(parser *yaml_parser_t, token *yaml_token_t, literal bool) bool {
+-	// Eat the indicator '|' or '>'.
+-	start_mark := parser.mark
+-	skip(parser)
+-
+-	// Scan the additional block scalar indicators.
+-	if parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) {
+-		return false
+-	}
+-
+-	// Check for a chomping indicator.
+-	var chomping, increment int
+-	if parser.buffer[parser.buffer_pos] == '+' || parser.buffer[parser.buffer_pos] == '-' {
+-		// Set the chomping method and eat the indicator.
+-		if parser.buffer[parser.buffer_pos] == '+' {
+-			chomping = +1
+-		} else {
+-			chomping = -1
+-		}
+-		skip(parser)
+-
+-		// Check for an indentation indicator.
+-		if parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) {
+-			return false
+-		}
+-		if is_digit(parser.buffer, parser.buffer_pos) {
+-			// Check that the intendation is greater than 0.
+-			if parser.buffer[parser.buffer_pos] == '0' {
+-				yaml_parser_set_scanner_error(parser, "while scanning a block scalar",
+-					start_mark, "found an intendation indicator equal to 0")
+-				return false
+-			}
+-
+-			// Get the intendation level and eat the indicator.
+-			increment = as_digit(parser.buffer, parser.buffer_pos)
+-			skip(parser)
+-		}
+-
+-	} else if is_digit(parser.buffer, parser.buffer_pos) {
+-		// Do the same as above, but in the opposite order.
+-
+-		if parser.buffer[parser.buffer_pos] == '0' {
+-			yaml_parser_set_scanner_error(parser, "while scanning a block scalar",
+-				start_mark, "found an intendation indicator equal to 0")
+-			return false
+-		}
+-		increment = as_digit(parser.buffer, parser.buffer_pos)
+-		skip(parser)
+-
+-		if parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) {
+-			return false
+-		}
+-		if parser.buffer[parser.buffer_pos] == '+' || parser.buffer[parser.buffer_pos] == '-' {
+-			if parser.buffer[parser.buffer_pos] == '+' {
+-				chomping = +1
+-			} else {
+-				chomping = -1
+-			}
+-			skip(parser)
+-		}
+-	}
+-
+-	// Eat whitespaces and comments to the end of the line.
+-	if parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) {
+-		return false
+-	}
+-	for is_blank(parser.buffer, parser.buffer_pos) {
+-		skip(parser)
+-		if parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) {
+-			return false
+-		}
+-	}
+-	if parser.buffer[parser.buffer_pos] == '#' {
+-		for !is_breakz(parser.buffer, parser.buffer_pos) {
+-			skip(parser)
+-			if parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) {
+-				return false
+-			}
+-		}
+-	}
+-
+-	// Check if we are at the end of the line.
+-	if !is_breakz(parser.buffer, parser.buffer_pos) {
+-		yaml_parser_set_scanner_error(parser, "while scanning a block scalar",
+-			start_mark, "did not find expected comment or line break")
+-		return false
+-	}
+-
+-	// Eat a line break.
+-	if is_break(parser.buffer, parser.buffer_pos) {
+-		if parser.unread < 2 && !yaml_parser_update_buffer(parser, 2) {
+-			return false
+-		}
+-		skip_line(parser)
+-	}
+-
+-	end_mark := parser.mark
+-
+-	// Set the intendation level if it was specified.
+-	var indent int
+-	if increment > 0 {
+-		if parser.indent >= 0 {
+-			indent = parser.indent + increment
+-		} else {
+-			indent = increment
+-		}
+-	}
+-
+-	// Scan the leading line breaks and determine the indentation level if needed.
+-	var s, leading_break, trailing_breaks []byte
+-	if !yaml_parser_scan_block_scalar_breaks(parser, &indent, &trailing_breaks, start_mark, &end_mark) {
+-		return false
+-	}
+-
+-	// Scan the block scalar content.
+-	if parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) {
+-		return false
+-	}
+-	var leading_blank, trailing_blank bool
+-	for parser.mark.column == indent && !is_z(parser.buffer, parser.buffer_pos) {
+-		// We are at the beginning of a non-empty line.
+-
+-		// Is it a trailing whitespace?
+-		trailing_blank = is_blank(parser.buffer, parser.buffer_pos)
+-
+-		// Check if we need to fold the leading line break.
+-		if !literal && !leading_blank && !trailing_blank && len(leading_break) > 0 && leading_break[0] == '\n' {
+-			// Do we need to join the lines by space?
+-			if len(trailing_breaks) == 0 {
+-				s = append(s, ' ')
+-			}
+-		} else {
+-			s = append(s, leading_break...)
+-		}
+-		leading_break = leading_break[:0]
+-
+-		// Append the remaining line breaks.
+-		s = append(s, trailing_breaks...)
+-		trailing_breaks = trailing_breaks[:0]
+-
+-		// Is it a leading whitespace?
+-		leading_blank = is_blank(parser.buffer, parser.buffer_pos)
+-
+-		// Consume the current line.
+-		for !is_breakz(parser.buffer, parser.buffer_pos) {
+-			s = read(parser, s)
+-			if parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) {
+-				return false
+-			}
+-		}
+-
+-		// Consume the line break.
+-		if parser.unread < 2 && !yaml_parser_update_buffer(parser, 2) {
+-			return false
+-		}
+-
+-		leading_break = read_line(parser, leading_break)
+-
+-		// Eat the following intendation spaces and line breaks.
+-		if !yaml_parser_scan_block_scalar_breaks(parser, &indent, &trailing_breaks, start_mark, &end_mark) {
+-			return false
+-		}
+-	}
+-
+-	// Chomp the tail.
+-	if chomping != -1 {
+-		s = append(s, leading_break...)
+-	}
+-	if chomping == 1 {
+-		s = append(s, trailing_breaks...)
+-	}
+-
+-	// Create a token.
+-	*token = yaml_token_t{
+-		typ:        yaml_SCALAR_TOKEN,
+-		start_mark: start_mark,
+-		end_mark:   end_mark,
+-		value:      s,
+-		style:      yaml_LITERAL_SCALAR_STYLE,
+-	}
+-	if !literal {
+-		token.style = yaml_FOLDED_SCALAR_STYLE
+-	}
+-	return true
+-}
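At the tail of `yaml_parser_scan_block_scalar` above, the `chomping` value decides what happens to trailing line breaks: `-1` (the `-` indicator) drops the final break and any trailing empty lines, `+1` (the `+` indicator) keeps everything, and the default clips to a single break. A simplified standalone sketch of that rule, using a hypothetical function name:

```go
package main

import (
	"fmt"
	"strings"
)

// chomp applies a block-scalar chomping indicator to content whose trailing
// line breaks have all been collected: -1 ("-") strips them, +1 ("+") keeps
// them, and 0 (no indicator) clips to at most one final break.
func chomp(content string, chomping int) string {
	trimmed := strings.TrimRight(content, "\n")
	switch chomping {
	case -1:
		return trimmed
	case +1:
		return content
	default:
		if len(trimmed) < len(content) {
			return trimmed + "\n"
		}
		return trimmed
	}
}

func main() {
	fmt.Printf("strip: %q\n", chomp("text\n\n\n", -1))
	fmt.Printf("clip:  %q\n", chomp("text\n\n\n", 0))
	fmt.Printf("keep:  %q\n", chomp("text\n\n\n", +1))
}
```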
+-
+-// Scan intendation spaces and line breaks for a block scalar.  Determine the
+-// intendation level if needed.
+-func yaml_parser_scan_block_scalar_breaks(parser *yaml_parser_t, indent *int, breaks *[]byte, start_mark yaml_mark_t, end_mark *yaml_mark_t) bool {
+-	*end_mark = parser.mark
+-
+-	// Eat the intendation spaces and line breaks.
+-	max_indent := 0
+-	for {
+-		// Eat the intendation spaces.
+-		if parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) {
+-			return false
+-		}
+-		for (*indent == 0 || parser.mark.column < *indent) && is_space(parser.buffer, parser.buffer_pos) {
+-			skip(parser)
+-			if parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) {
+-				return false
+-			}
+-		}
+-		if parser.mark.column > max_indent {
+-			max_indent = parser.mark.column
+-		}
+-
+-		// Check for a tab character messing the intendation.
+-		if (*indent == 0 || parser.mark.column < *indent) && is_tab(parser.buffer, parser.buffer_pos) {
+-			return yaml_parser_set_scanner_error(parser, "while scanning a block scalar",
+-				start_mark, "found a tab character where an intendation space is expected")
+-		}
+-
+-		// Have we found a non-empty line?
+-		if !is_break(parser.buffer, parser.buffer_pos) {
+-			break
+-		}
+-
+-		// Consume the line break.
+-		if parser.unread < 2 && !yaml_parser_update_buffer(parser, 2) {
+-			return false
+-		}
+-		// [Go] Should really be returning breaks instead.
+-		*breaks = read_line(parser, *breaks)
+-		*end_mark = parser.mark
+-	}
+-
+-	// Determine the indentation level if needed.
+-	if *indent == 0 {
+-		*indent = max_indent
+-		if *indent < parser.indent+1 {
+-			*indent = parser.indent + 1
+-		}
+-		if *indent < 1 {
+-			*indent = 1
+-		}
+-	}
+-	return true
+-}
+-
+-// Scan a quoted scalar.
+-func yaml_parser_scan_flow_scalar(parser *yaml_parser_t, token *yaml_token_t, single bool) bool {
+-	// Eat the left quote.
+-	start_mark := parser.mark
+-	skip(parser)
+-
+-	// Consume the content of the quoted scalar.
+-	var s, leading_break, trailing_breaks, whitespaces []byte
+-	for {
+-		// Check that there are no document indicators at the beginning of the line.
+-		if parser.unread < 4 && !yaml_parser_update_buffer(parser, 4) {
+-			return false
+-		}
+-
+-		if parser.mark.column == 0 &&
+-			((parser.buffer[parser.buffer_pos+0] == '-' &&
+-				parser.buffer[parser.buffer_pos+1] == '-' &&
+-				parser.buffer[parser.buffer_pos+2] == '-') ||
+-				(parser.buffer[parser.buffer_pos+0] == '.' &&
+-					parser.buffer[parser.buffer_pos+1] == '.' &&
+-					parser.buffer[parser.buffer_pos+2] == '.')) &&
+-			is_blankz(parser.buffer, parser.buffer_pos+3) {
+-			yaml_parser_set_scanner_error(parser, "while scanning a quoted scalar",
+-				start_mark, "found unexpected document indicator")
+-			return false
+-		}
+-
+-		// Check for EOF.
+-		if is_z(parser.buffer, parser.buffer_pos) {
+-			yaml_parser_set_scanner_error(parser, "while scanning a quoted scalar",
+-				start_mark, "found unexpected end of stream")
+-			return false
+-		}
+-
+-		// Consume non-blank characters.
+-		leading_blanks := false
+-		for !is_blankz(parser.buffer, parser.buffer_pos) {
+-			if single && parser.buffer[parser.buffer_pos] == '\'' && parser.buffer[parser.buffer_pos+1] == '\'' {
+-				// Is is an escaped single quote.
+-				s = append(s, '\'')
+-				skip(parser)
+-				skip(parser)
+-
+-			} else if single && parser.buffer[parser.buffer_pos] == '\'' {
+-				// It is a right single quote.
+-				break
+-			} else if !single && parser.buffer[parser.buffer_pos] == '"' {
+-				// It is a right double quote.
+-				break
+-
+-			} else if !single && parser.buffer[parser.buffer_pos] == '\\' && is_break(parser.buffer, parser.buffer_pos+1) {
+-				// It is an escaped line break.
+-				if parser.unread < 3 && !yaml_parser_update_buffer(parser, 3) {
+-					return false
+-				}
+-				skip(parser)
+-				skip_line(parser)
+-				leading_blanks = true
+-				break
+-
+-			} else if !single && parser.buffer[parser.buffer_pos] == '\\' {
+-				// It is an escape sequence.
+-				code_length := 0
+-
+-				// Check the escape character.
+-				switch parser.buffer[parser.buffer_pos+1] {
+-				case '0':
+-					s = append(s, 0)
+-				case 'a':
+-					s = append(s, '\x07')
+-				case 'b':
+-					s = append(s, '\x08')
+-				case 't', '\t':
+-					s = append(s, '\x09')
+-				case 'n':
+-					s = append(s, '\x0A')
+-				case 'v':
+-					s = append(s, '\x0B')
+-				case 'f':
+-					s = append(s, '\x0C')
+-				case 'r':
+-					s = append(s, '\x0D')
+-				case 'e':
+-					s = append(s, '\x1B')
+-				case ' ':
+-					s = append(s, '\x20')
+-				case '"':
+-					s = append(s, '"')
+-				case '\'':
+-					s = append(s, '\'')
+-				case '\\':
+-					s = append(s, '\\')
+-				case 'N': // NEL (#x85)
+-					s = append(s, '\xC2')
+-					s = append(s, '\x85')
+-				case '_': // #xA0
+-					s = append(s, '\xC2')
+-					s = append(s, '\xA0')
+-				case 'L': // LS (#x2028)
+-					s = append(s, '\xE2')
+-					s = append(s, '\x80')
+-					s = append(s, '\xA8')
+-				case 'P': // PS (#x2029)
+-					s = append(s, '\xE2')
+-					s = append(s, '\x80')
+-					s = append(s, '\xA9')
+-				case 'x':
+-					code_length = 2
+-				case 'u':
+-					code_length = 4
+-				case 'U':
+-					code_length = 8
+-				default:
+-					yaml_parser_set_scanner_error(parser, "while parsing a quoted scalar",
+-						start_mark, "found unknown escape character")
+-					return false
+-				}
+-
+-				skip(parser)
+-				skip(parser)
+-
+-				// Consume an arbitrary escape code.
+-				if code_length > 0 {
+-					var value int
+-
+-					// Scan the character value.
+-					if parser.unread < code_length && !yaml_parser_update_buffer(parser, code_length) {
+-						return false
+-					}
+-					for k := 0; k < code_length; k++ {
+-						if !is_hex(parser.buffer, parser.buffer_pos+k) {
+-							yaml_parser_set_scanner_error(parser, "while parsing a quoted scalar",
+-								start_mark, "did not find expected hexdecimal number")
+-							return false
+-						}
+-						value = (value << 4) + as_hex(parser.buffer, parser.buffer_pos+k)
+-					}
+-
+-					// Check the value and write the character.
+-					if (value >= 0xD800 && value <= 0xDFFF) || value > 0x10FFFF {
+-						yaml_parser_set_scanner_error(parser, "while parsing a quoted scalar",
+-							start_mark, "found invalid Unicode character escape code")
+-						return false
+-					}
+-					if value <= 0x7F {
+-						s = append(s, byte(value))
+-					} else if value <= 0x7FF {
+-						s = append(s, byte(0xC0+(value>>6)))
+-						s = append(s, byte(0x80+(value&0x3F)))
+-					} else if value <= 0xFFFF {
+-						s = append(s, byte(0xE0+(value>>12)))
+-						s = append(s, byte(0x80+((value>>6)&0x3F)))
+-						s = append(s, byte(0x80+(value&0x3F)))
+-					} else {
+-						s = append(s, byte(0xF0+(value>>18)))
+-						s = append(s, byte(0x80+((value>>12)&0x3F)))
+-						s = append(s, byte(0x80+((value>>6)&0x3F)))
+-						s = append(s, byte(0x80+(value&0x3F)))
+-					}
+-
+-					// Advance the pointer.
+-					for k := 0; k < code_length; k++ {
+-						skip(parser)
+-					}
+-				}
+-			} else {
+-				// It is a non-escaped non-blank character.
+-				s = read(parser, s)
+-			}
+-			if parser.unread < 2 && !yaml_parser_update_buffer(parser, 2) {
+-				return false
+-			}
+-		}
+-
+-		// Check if we are at the end of the scalar.
+-		if single {
+-			if parser.buffer[parser.buffer_pos] == '\'' {
+-				break
+-			}
+-		} else {
+-			if parser.buffer[parser.buffer_pos] == '"' {
+-				break
+-			}
+-		}
+-
+-		// Consume blank characters.
+-		if parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) {
+-			return false
+-		}
+-
+-		for is_blank(parser.buffer, parser.buffer_pos) || is_break(parser.buffer, parser.buffer_pos) {
+-			if is_blank(parser.buffer, parser.buffer_pos) {
+-				// Consume a space or a tab character.
+-				if !leading_blanks {
+-					whitespaces = read(parser, whitespaces)
+-				} else {
+-					skip(parser)
+-				}
+-			} else {
+-				if parser.unread < 2 && !yaml_parser_update_buffer(parser, 2) {
+-					return false
+-				}
+-
+-				// Check if it is a first line break.
+-				if !leading_blanks {
+-					whitespaces = whitespaces[:0]
+-					leading_break = read_line(parser, leading_break)
+-					leading_blanks = true
+-				} else {
+-					trailing_breaks = read_line(parser, trailing_breaks)
+-				}
+-			}
+-			if parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) {
+-				return false
+-			}
+-		}
+-
+-		// Join the whitespaces or fold line breaks.
+-		if leading_blanks {
+-			// Do we need to fold line breaks?
+-			if len(leading_break) > 0 && leading_break[0] == '\n' {
+-				if len(trailing_breaks) == 0 {
+-					s = append(s, ' ')
+-				} else {
+-					s = append(s, trailing_breaks...)
+-				}
+-			} else {
+-				s = append(s, leading_break...)
+-				s = append(s, trailing_breaks...)
+-			}
+-			trailing_breaks = trailing_breaks[:0]
+-			leading_break = leading_break[:0]
+-		} else {
+-			s = append(s, whitespaces...)
+-			whitespaces = whitespaces[:0]
+-		}
+-	}
+-
+-	// Eat the right quote.
+-	skip(parser)
+-	end_mark := parser.mark
+-
+-	// Create a token.
+-	*token = yaml_token_t{
+-		typ:        yaml_SCALAR_TOKEN,
+-		start_mark: start_mark,
+-		end_mark:   end_mark,
+-		value:      s,
+-		style:      yaml_SINGLE_QUOTED_SCALAR_STYLE,
+-	}
+-	if !single {
+-		token.style = yaml_DOUBLE_QUOTED_SCALAR_STYLE
+-	}
+-	return true
+-}
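The `\x`/`\u`/`\U` escape handling inside `yaml_parser_scan_flow_scalar` above hand-encodes the decoded code point into UTF-8 with a cascade of width checks. That cascade can be pulled out as a standalone sketch (hypothetical helper name; the original validates surrogates and the 0x10FFFF ceiling before reaching this point):

```go
package main

import "fmt"

// encodeUTF8 appends the UTF-8 encoding of a code point to s using the same
// 1-to-4-byte cascade as the deleted scanner. Surrogates and values above
// 0x10FFFF are assumed to have been rejected by the caller.
func encodeUTF8(s []byte, value int) []byte {
	switch {
	case value <= 0x7F:
		s = append(s, byte(value))
	case value <= 0x7FF:
		s = append(s, byte(0xC0+(value>>6)), byte(0x80+(value&0x3F)))
	case value <= 0xFFFF:
		s = append(s, byte(0xE0+(value>>12)),
			byte(0x80+((value>>6)&0x3F)), byte(0x80+(value&0x3F)))
	default:
		s = append(s, byte(0xF0+(value>>18)), byte(0x80+((value>>12)&0x3F)),
			byte(0x80+((value>>6)&0x3F)), byte(0x80+(value&0x3F)))
	}
	return s
}

func main() {
	// 0x2028 is LINE SEPARATOR, the code point behind the '\L' escape above.
	fmt.Printf("%X\n", encodeUTF8(nil, 0x2028))
}
```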
+-
+-// Scan a plain scalar.
+-func yaml_parser_scan_plain_scalar(parser *yaml_parser_t, token *yaml_token_t) bool {
+-
+-	var s, leading_break, trailing_breaks, whitespaces []byte
+-	var leading_blanks bool
+-	var indent = parser.indent + 1
+-
+-	start_mark := parser.mark
+-	end_mark := parser.mark
+-
+-	// Consume the content of the plain scalar.
+-	for {
+-		// Check for a document indicator.
+-		if parser.unread < 4 && !yaml_parser_update_buffer(parser, 4) {
+-			return false
+-		}
+-		if parser.mark.column == 0 &&
+-			((parser.buffer[parser.buffer_pos+0] == '-' &&
+-				parser.buffer[parser.buffer_pos+1] == '-' &&
+-				parser.buffer[parser.buffer_pos+2] == '-') ||
+-				(parser.buffer[parser.buffer_pos+0] == '.' &&
+-					parser.buffer[parser.buffer_pos+1] == '.' &&
+-					parser.buffer[parser.buffer_pos+2] == '.')) &&
+-			is_blankz(parser.buffer, parser.buffer_pos+3) {
+-			break
+-		}
+-
+-		// Check for a comment.
+-		if parser.buffer[parser.buffer_pos] == '#' {
+-			break
+-		}
+-
+-		// Consume non-blank characters.
+-		for !is_blankz(parser.buffer, parser.buffer_pos) {
+-
+-			// Check for 'x:x' in the flow context. TODO: Fix the test "spec-08-13".
+-			if parser.flow_level > 0 &&
+-				parser.buffer[parser.buffer_pos] == ':' &&
+-				!is_blankz(parser.buffer, parser.buffer_pos+1) {
+-				yaml_parser_set_scanner_error(parser, "while scanning a plain scalar",
+-					start_mark, "found unexpected ':'")
+-				return false
+-			}
+-
+-			// Check for indicators that may end a plain scalar.
+-			if (parser.buffer[parser.buffer_pos] == ':' && is_blankz(parser.buffer, parser.buffer_pos+1)) ||
+-				(parser.flow_level > 0 &&
+-					(parser.buffer[parser.buffer_pos] == ',' || parser.buffer[parser.buffer_pos] == ':' ||
+-						parser.buffer[parser.buffer_pos] == '?' || parser.buffer[parser.buffer_pos] == '[' ||
+-						parser.buffer[parser.buffer_pos] == ']' || parser.buffer[parser.buffer_pos] == '{' ||
+-						parser.buffer[parser.buffer_pos] == '}')) {
+-				break
+-			}
+-
+-			// Check if we need to join whitespaces and breaks.
+-			if leading_blanks || len(whitespaces) > 0 {
+-				if leading_blanks {
+-					// Do we need to fold line breaks?
+-					if leading_break[0] == '\n' {
+-						if len(trailing_breaks) == 0 {
+-							s = append(s, ' ')
+-						} else {
+-							s = append(s, trailing_breaks...)
+-						}
+-					} else {
+-						s = append(s, leading_break...)
+-						s = append(s, trailing_breaks...)
+-					}
+-					trailing_breaks = trailing_breaks[:0]
+-					leading_break = leading_break[:0]
+-					leading_blanks = false
+-				} else {
+-					s = append(s, whitespaces...)
+-					whitespaces = whitespaces[:0]
+-				}
+-			}
+-
+-			// Copy the character.
+-			s = read(parser, s)
+-
+-			end_mark = parser.mark
+-			if parser.unread < 2 && !yaml_parser_update_buffer(parser, 2) {
+-				return false
+-			}
+-		}
+-
+-		// Is it the end?
+-		if !(is_blank(parser.buffer, parser.buffer_pos) || is_break(parser.buffer, parser.buffer_pos)) {
+-			break
+-		}
+-
+-		// Consume blank characters.
+-		if parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) {
+-			return false
+-		}
+-
+-		for is_blank(parser.buffer, parser.buffer_pos) || is_break(parser.buffer, parser.buffer_pos) {
+-			if is_blank(parser.buffer, parser.buffer_pos) {
+-
+-				// Check for tab character that abuse intendation.
+-				if leading_blanks && parser.mark.column < indent && is_tab(parser.buffer, parser.buffer_pos) {
+-					yaml_parser_set_scanner_error(parser, "while scanning a plain scalar",
+-						start_mark, "found a tab character that violate intendation")
+-					return false
+-				}
+-
+-				// Consume a space or a tab character.
+-				if !leading_blanks {
+-					whitespaces = read(parser, whitespaces)
+-				} else {
+-					skip(parser)
+-				}
+-			} else {
+-				if parser.unread < 2 && !yaml_parser_update_buffer(parser, 2) {
+-					return false
+-				}
+-
+-				// Check if it is a first line break.
+-				if !leading_blanks {
+-					whitespaces = whitespaces[:0]
+-					leading_break = read_line(parser, leading_break)
+-					leading_blanks = true
+-				} else {
+-					trailing_breaks = read_line(parser, trailing_breaks)
+-				}
+-			}
+-			if parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) {
+-				return false
+-			}
+-		}
+-
+-		// Check intendation level.
+-		if parser.flow_level == 0 && parser.mark.column < indent {
+-			break
+-		}
+-	}
+-
+-	// Create a token.
+-	*token = yaml_token_t{
+-		typ:        yaml_SCALAR_TOKEN,
+-		start_mark: start_mark,
+-		end_mark:   end_mark,
+-		value:      s,
+-		style:      yaml_PLAIN_SCALAR_STYLE,
+-	}
+-
+-	// Note that we change the 'simple_key_allowed' flag.
+-	if leading_blanks {
+-		parser.simple_key_allowed = true
+-	}
+-	return true
+-}
+diff --git a/Godeps/_workspace/src/gopkg.in/v1/yaml/sorter.go b/Godeps/_workspace/src/gopkg.in/v1/yaml/sorter.go
+deleted file mode 100644
+index 5958822..0000000
+--- a/Godeps/_workspace/src/gopkg.in/v1/yaml/sorter.go
++++ /dev/null
+@@ -1,104 +0,0 @@
+-package yaml
+-
+-import (
+-	"reflect"
+-	"unicode"
+-)
+-
+-type keyList []reflect.Value
+-
+-func (l keyList) Len() int      { return len(l) }
+-func (l keyList) Swap(i, j int) { l[i], l[j] = l[j], l[i] }
+-func (l keyList) Less(i, j int) bool {
+-	a := l[i]
+-	b := l[j]
+-	ak := a.Kind()
+-	bk := b.Kind()
+-	for (ak == reflect.Interface || ak == reflect.Ptr) && !a.IsNil() {
+-		a = a.Elem()
+-		ak = a.Kind()
+-	}
+-	for (bk == reflect.Interface || bk == reflect.Ptr) && !b.IsNil() {
+-		b = b.Elem()
+-		bk = b.Kind()
+-	}
+-	af, aok := keyFloat(a)
+-	bf, bok := keyFloat(b)
+-	if aok && bok {
+-		if af != bf {
+-			return af < bf
+-		}
+-		if ak != bk {
+-			return ak < bk
+-		}
+-		return numLess(a, b)
+-	}
+-	if ak != reflect.String || bk != reflect.String {
+-		return ak < bk
+-	}
+-	ar, br := []rune(a.String()), []rune(b.String())
+-	for i := 0; i < len(ar) && i < len(br); i++ {
+-		if ar[i] == br[i] {
+-			continue
+-		}
+-		al := unicode.IsLetter(ar[i])
+-		bl := unicode.IsLetter(br[i])
+-		if al && bl {
+-			return ar[i] < br[i]
+-		}
+-		if al || bl {
+-			return bl
+-		}
+-		var ai, bi int
+-		var an, bn int64
+-		for ai = i; ai < len(ar) && unicode.IsDigit(ar[ai]); ai++ {
+-			an = an*10 + int64(ar[ai]-'0')
+-		}
+-		for bi = i; bi < len(br) && unicode.IsDigit(br[bi]); bi++ {
+-			bn = bn*10 + int64(br[bi]-'0')
+-		}
+-		if an != bn {
+-			return an < bn
+-		}
+-		if ai != bi {
+-			return ai < bi
+-		}
+-		return ar[i] < br[i]
+-	}
+-	return len(ar) < len(br)
+-}
+-
+-// keyFloat returns a float value for v if it is a number/bool
+-// and whether it is a number/bool or not.
+-func keyFloat(v reflect.Value) (f float64, ok bool) {
+-	switch v.Kind() {
+-	case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
+-		return float64(v.Int()), true
+-	case reflect.Float32, reflect.Float64:
+-		return v.Float(), true
+-	case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr:
+-		return float64(v.Uint()), true
+-	case reflect.Bool:
+-		if v.Bool() {
+-			return 1, true
+-		}
+-		return 0, true
+-	}
+-	return 0, false
+-}
+-
+-// numLess returns whether a < b.
+-// a and b must necessarily have the same kind.
+-func numLess(a, b reflect.Value) bool {
+-	switch a.Kind() {
+-	case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
+-		return a.Int() < b.Int()
+-	case reflect.Float32, reflect.Float64:
+-		return a.Float() < b.Float()
+-	case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr:
+-		return a.Uint() < b.Uint()
+-	case reflect.Bool:
+-		return !a.Bool() && b.Bool()
+-	}
+-	panic("not a number")
+-}
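`keyList.Less` above gives YAML map keys a "natural" ordering: numeric keys sort before strings, and digit runs embedded in strings compare as whole integers, so `item2` sorts before `item10`. A trimmed sketch of just the string-comparison branch, under a hypothetical name:

```go
package main

import (
	"fmt"
	"sort"
	"unicode"
)

// lessNatural mirrors the string branch of keyList.Less: at the first
// differing rune, a digit on one side sorts before a letter on the other,
// and runs of digits are compared as whole integers ("item2" < "item10").
func lessNatural(as, bs string) bool {
	ar, br := []rune(as), []rune(bs)
	for i := 0; i < len(ar) && i < len(br); i++ {
		if ar[i] == br[i] {
			continue
		}
		al, bl := unicode.IsLetter(ar[i]), unicode.IsLetter(br[i])
		if al && bl {
			return ar[i] < br[i]
		}
		if al || bl {
			return bl // exactly one side is a letter: digits sort first
		}
		// Compare the digit runs starting at i as integers.
		var an, bn int64
		ai, bi := i, i
		for ; ai < len(ar) && unicode.IsDigit(ar[ai]); ai++ {
			an = an*10 + int64(ar[ai]-'0')
		}
		for ; bi < len(br) && unicode.IsDigit(br[bi]); bi++ {
			bn = bn*10 + int64(br[bi]-'0')
		}
		if an != bn {
			return an < bn
		}
		if ai != bi {
			return ai < bi
		}
		return ar[i] < br[i]
	}
	return len(ar) < len(br)
}

func main() {
	keys := []string{"item10", "item2", "item1"}
	sort.Slice(keys, func(i, j int) bool { return lessNatural(keys[i], keys[j]) })
	fmt.Println(keys) // [item1 item2 item10]
}
```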
+diff --git a/Godeps/_workspace/src/gopkg.in/v1/yaml/suite_test.go b/Godeps/_workspace/src/gopkg.in/v1/yaml/suite_test.go
+deleted file mode 100644
+index c5cf1ed..0000000
+--- a/Godeps/_workspace/src/gopkg.in/v1/yaml/suite_test.go
++++ /dev/null
+@@ -1,12 +0,0 @@
+-package yaml_test
+-
+-import (
+-	. "gopkg.in/check.v1"
+-	"testing"
+-)
+-
+-func Test(t *testing.T) { TestingT(t) }
+-
+-type S struct{}
+-
+-var _ = Suite(&S{})
+diff --git a/Godeps/_workspace/src/gopkg.in/v1/yaml/writerc.go b/Godeps/_workspace/src/gopkg.in/v1/yaml/writerc.go
+deleted file mode 100644
+index 190362f..0000000
+--- a/Godeps/_workspace/src/gopkg.in/v1/yaml/writerc.go
++++ /dev/null
+@@ -1,89 +0,0 @@
+-package yaml
+-
+-// Set the writer error and return false.
+-func yaml_emitter_set_writer_error(emitter *yaml_emitter_t, problem string) bool {
+-	emitter.error = yaml_WRITER_ERROR
+-	emitter.problem = problem
+-	return false
+-}
+-
+-// Flush the output buffer.
+-func yaml_emitter_flush(emitter *yaml_emitter_t) bool {
+-	if emitter.write_handler == nil {
+-		panic("write handler not set")
+-	}
+-
+-	// Check if the buffer is empty.
+-	if emitter.buffer_pos == 0 {
+-		return true
+-	}
+-
+-	// If the output encoding is UTF-8, we don't need to recode the buffer.
+-	if emitter.encoding == yaml_UTF8_ENCODING {
+-		if err := emitter.write_handler(emitter, emitter.buffer[:emitter.buffer_pos]); err != nil {
+-			return yaml_emitter_set_writer_error(emitter, "write error: "+err.Error())
+-		}
+-		emitter.buffer_pos = 0
+-		return true
+-	}
+-
+-	// Recode the buffer into the raw buffer.
+-	var low, high int
+-	if emitter.encoding == yaml_UTF16LE_ENCODING {
+-		low, high = 0, 1
+-	} else {
+-		high, low = 1, 0
+-	}
+-
+-	pos := 0
+-	for pos < emitter.buffer_pos {
+-		// See the "reader.c" code for more details on UTF-8 encoding.  Note
+-		// that we assume that the buffer contains a valid UTF-8 sequence.
+-
+-		// Read the next UTF-8 character.
+-		octet := emitter.buffer[pos]
+-
+-		var w int
+-		var value rune
+-		switch {
+-		case octet&0x80 == 0x00:
+-			w, value = 1, rune(octet&0x7F)
+-		case octet&0xE0 == 0xC0:
+-			w, value = 2, rune(octet&0x1F)
+-		case octet&0xF0 == 0xE0:
+-			w, value = 3, rune(octet&0x0F)
+-		case octet&0xF8 == 0xF0:
+-			w, value = 4, rune(octet&0x07)
+-		}
+-		for k := 1; k < w; k++ {
+-			octet = emitter.buffer[pos+k]
+-			value = (value << 6) + (rune(octet) & 0x3F)
+-		}
+-		pos += w
+-
+-		// Write the character.
+-		if value < 0x10000 {
+-			var b [2]byte
+-			b[high] = byte(value >> 8)
+-			b[low] = byte(value & 0xFF)
+-			emitter.raw_buffer = append(emitter.raw_buffer, b[0], b[1])
+-		} else {
+-			// Write the character using a surrogate pair (check "reader.c").
+-			var b [4]byte
+-			value -= 0x10000
+-			b[high] = byte(0xD8 + (value >> 18))
+-			b[low] = byte((value >> 10) & 0xFF)
+-			b[high+2] = byte(0xDC + ((value >> 8) & 0xFF))
+-			b[low+2] = byte(value & 0xFF)
+-			emitter.raw_buffer = append(emitter.raw_buffer, b[0], b[1], b[2], b[3])
+-		}
+-	}
+-
+-	// Write the raw buffer.
+-	if err := emitter.write_handler(emitter, emitter.raw_buffer); err != nil {
+-		return yaml_emitter_set_writer_error(emitter, "write error: "+err.Error())
+-	}
+-	emitter.buffer_pos = 0
+-	emitter.raw_buffer = emitter.raw_buffer[:0]
+-	return true
+-}
+diff --git a/Godeps/_workspace/src/gopkg.in/v1/yaml/yaml.go b/Godeps/_workspace/src/gopkg.in/v1/yaml/yaml.go
+deleted file mode 100644
+index 44b0cc6..0000000
+--- a/Godeps/_workspace/src/gopkg.in/v1/yaml/yaml.go
++++ /dev/null
+@@ -1,306 +0,0 @@
+-// Package yaml implements YAML support for the Go language.
+-//
+-// Source code and other details for the project are available at GitHub:
+-//
+-//   https://github.com/go-yaml/yaml
+-//
+-package yaml
+-
+-import (
+-	"errors"
+-	"fmt"
+-	"reflect"
+-	"runtime"
+-	"strings"
+-	"sync"
+-)
+-
+-func handleErr(err *error) {
+-	if r := recover(); r != nil {
+-		if _, ok := r.(runtime.Error); ok {
+-			panic(r)
+-		} else if _, ok := r.(*reflect.ValueError); ok {
+-			panic(r)
+-		} else if _, ok := r.(externalPanic); ok {
+-			panic(r)
+-		} else if s, ok := r.(string); ok {
+-			*err = errors.New("YAML error: " + s)
+-		} else if e, ok := r.(error); ok {
+-			*err = e
+-		} else {
+-			panic(r)
+-		}
+-	}
+-}
+-
+-// The Setter interface may be implemented by types to do their own custom
+-// unmarshalling of YAML values, rather than being implicitly assigned by
+-// the yaml package machinery. If setting the value works, the method should
+-// return true.  If it returns false, the value is considered unsupported
+-// and is omitted from maps and slices.
+-type Setter interface {
+-	SetYAML(tag string, value interface{}) bool
+-}
+-
+-// The Getter interface is implemented by types to do their own custom
+-// marshalling into a YAML tag and value.
+-type Getter interface {
+-	GetYAML() (tag string, value interface{})
+-}
+-
+-// Unmarshal decodes the first document found within the in byte slice
+-// and assigns decoded values into the out value.
+-//
+-// Maps and pointers (to a struct, string, int, etc) are accepted as out
+-// values.  If an internal pointer within a struct is not initialized,
+-// the yaml package will initialize it if necessary for unmarshalling
+-// the provided data. The out parameter must not be nil.
+-//
+-// The type of the decoded values and the type of out will be considered,
+-// and Unmarshal will do the best possible job to unmarshal values
+-// appropriately.  It is NOT considered an error, though, to skip values
+-// because they are not available in the decoded YAML, or if they are not
+-// compatible with the out value. To ensure something was properly
+-// unmarshalled use a map or compare against the previous value for the
+-// field (usually the zero value).
+-//
+-// Struct fields are only unmarshalled if they are exported (have an
+-// upper case first letter), and are unmarshalled using the field name
+-// lowercased as the default key. Custom keys may be defined via the
+-// "yaml" name in the field tag: the content preceding the first comma
+-// is used as the key, and the following comma-separated options are
+-// used to tweak the marshalling process (see Marshal).
+-// Conflicting names result in a runtime error.
+-//
+-// For example:
+-//
+-//     type T struct {
+-//         F int `yaml:"a,omitempty"`
+-//         B int
+-//     }
+-//     var t T
+-//     yaml.Unmarshal([]byte("a: 1\nb: 2"), &t)
+-//
+-// See the documentation of Marshal for the format of tags and a list of
+-// supported tag options.
+-//
+-func Unmarshal(in []byte, out interface{}) (err error) {
+-	defer handleErr(&err)
+-	d := newDecoder()
+-	p := newParser(in)
+-	defer p.destroy()
+-	node := p.parse()
+-	if node != nil {
+-		d.unmarshal(node, reflect.ValueOf(out))
+-	}
+-	return nil
+-}
+-
+-// Marshal serializes the value provided into a YAML document. The structure
+-// of the generated document will reflect the structure of the value itself.
+-// Maps and pointers (to struct, string, int, etc) are accepted as the in value.
+-//
+-// Struct fields are only marshalled if they are exported (have an upper case
+-// first letter), and are marshalled using the field name lowercased as the
+-// default key. Custom keys may be defined via the "yaml" name in the field
+-// tag: the content preceding the first comma is used as the key, and the
+-// following comma-separated options are used to tweak the marshalling process.
+-// Conflicting names result in a runtime error.
+-//
+-// The field tag format accepted is:
+-//
+-//     `(...) yaml:"[<key>][,<flag1>[,<flag2>]]" (...)`
+-//
+-// The following flags are currently supported:
+-//
+-//     omitempty    Only include the field if it's not set to the zero
+-//                  value for the type or to empty slices or maps.
+-//                  Does not apply to zero valued structs.
+-//
+-//     flow         Marshal using a flow style (useful for structs,
+-//                  sequences and maps).
+-//
+-//     inline       Inline the struct it's applied to, so its fields
+-//                  are processed as if they were part of the outer
+-//                  struct.
+-//
+-// In addition, if the key is "-", the field is ignored.
+-//
+-// For example:
+-//
+-//     type T struct {
+-//         F int "a,omitempty"
+-//         B int
+-//     }
+-//     yaml.Marshal(&T{B: 2}) // Returns "b: 2\n"
+-//     yaml.Marshal(&T{F: 1}) // Returns "a: 1\nb: 0\n"
+-//
+-func Marshal(in interface{}) (out []byte, err error) {
+-	defer handleErr(&err)
+-	e := newEncoder()
+-	defer e.destroy()
+-	e.marshal("", reflect.ValueOf(in))
+-	e.finish()
+-	out = e.out
+-	return
+-}
+-
+-// --------------------------------------------------------------------------
+-// Maintain a mapping of keys to structure field indexes
+-
+-// The code in this section was copied from mgo/bson.
+-
+-// structInfo holds details for the serialization of fields of
+-// a given struct.
+-type structInfo struct {
+-	FieldsMap  map[string]fieldInfo
+-	FieldsList []fieldInfo
+-
+-	// InlineMap is the number of the field in the struct that
+-	// contains an ,inline map, or -1 if there's none.
+-	InlineMap int
+-}
+-
+-type fieldInfo struct {
+-	Key       string
+-	Num       int
+-	OmitEmpty bool
+-	Flow      bool
+-
+-	// Inline holds the field index if the field is part of an inlined struct.
+-	Inline []int
+-}
+-
+-var structMap = make(map[reflect.Type]*structInfo)
+-var fieldMapMutex sync.RWMutex
+-
+-type externalPanic string
+-
+-func (e externalPanic) String() string {
+-	return string(e)
+-}
+-
+-func getStructInfo(st reflect.Type) (*structInfo, error) {
+-	fieldMapMutex.RLock()
+-	sinfo, found := structMap[st]
+-	fieldMapMutex.RUnlock()
+-	if found {
+-		return sinfo, nil
+-	}
+-
+-	n := st.NumField()
+-	fieldsMap := make(map[string]fieldInfo)
+-	fieldsList := make([]fieldInfo, 0, n)
+-	inlineMap := -1
+-	for i := 0; i != n; i++ {
+-		field := st.Field(i)
+-		if field.PkgPath != "" {
+-			continue // Private field
+-		}
+-
+-		info := fieldInfo{Num: i}
+-
+-		tag := field.Tag.Get("yaml")
+-		if tag == "" && strings.Index(string(field.Tag), ":") < 0 {
+-			tag = string(field.Tag)
+-		}
+-		if tag == "-" {
+-			continue
+-		}
+-
+-		inline := false
+-		fields := strings.Split(tag, ",")
+-		if len(fields) > 1 {
+-			for _, flag := range fields[1:] {
+-				switch flag {
+-				case "omitempty":
+-					info.OmitEmpty = true
+-				case "flow":
+-					info.Flow = true
+-				case "inline":
+-					inline = true
+-				default:
+-					msg := fmt.Sprintf("Unsupported flag %q in tag %q of type %s", flag, tag, st)
+-					panic(externalPanic(msg))
+-				}
+-			}
+-			tag = fields[0]
+-		}
+-
+-		if inline {
+-			switch field.Type.Kind() {
+-			//case reflect.Map:
+-			//	if inlineMap >= 0 {
+-			//		return nil, errors.New("Multiple ,inline maps in struct " + st.String())
+-			//	}
+-			//	if field.Type.Key() != reflect.TypeOf("") {
+-			//		return nil, errors.New("Option ,inline needs a map with string keys in struct " + st.String())
+-			//	}
+-			//	inlineMap = info.Num
+-			case reflect.Struct:
+-				sinfo, err := getStructInfo(field.Type)
+-				if err != nil {
+-					return nil, err
+-				}
+-				for _, finfo := range sinfo.FieldsList {
+-					if _, found := fieldsMap[finfo.Key]; found {
+-						msg := "Duplicated key '" + finfo.Key + "' in struct " + st.String()
+-						return nil, errors.New(msg)
+-					}
+-					if finfo.Inline == nil {
+-						finfo.Inline = []int{i, finfo.Num}
+-					} else {
+-						finfo.Inline = append([]int{i}, finfo.Inline...)
+-					}
+-					fieldsMap[finfo.Key] = finfo
+-					fieldsList = append(fieldsList, finfo)
+-				}
+-			default:
+-				//panic("Option ,inline needs a struct value or map field")
+-				panic("Option ,inline needs a struct value field")
+-			}
+-			continue
+-		}
+-
+-		if tag != "" {
+-			info.Key = tag
+-		} else {
+-			info.Key = strings.ToLower(field.Name)
+-		}
+-
+-		if _, found = fieldsMap[info.Key]; found {
+-			msg := "Duplicated key '" + info.Key + "' in struct " + st.String()
+-			return nil, errors.New(msg)
+-		}
+-
+-		fieldsList = append(fieldsList, info)
+-		fieldsMap[info.Key] = info
+-	}
+-
+-	sinfo = &structInfo{fieldsMap, fieldsList, inlineMap}
+-
+-	fieldMapMutex.Lock()
+-	structMap[st] = sinfo
+-	fieldMapMutex.Unlock()
+-	return sinfo, nil
+-}
+-
+-func isZero(v reflect.Value) bool {
+-	switch v.Kind() {
+-	case reflect.String:
+-		return len(v.String()) == 0
+-	case reflect.Interface, reflect.Ptr:
+-		return v.IsNil()
+-	case reflect.Slice:
+-		return v.Len() == 0
+-	case reflect.Map:
+-		return v.Len() == 0
+-	case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
+-		return v.Int() == 0
+-	case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr:
+-		return v.Uint() == 0
+-	case reflect.Bool:
+-		return !v.Bool()
+-	}
+-	return false
+-}
+diff --git a/Godeps/_workspace/src/gopkg.in/v1/yaml/yamlh.go b/Godeps/_workspace/src/gopkg.in/v1/yaml/yamlh.go
+deleted file mode 100644
+index 6624d6c..0000000
+--- a/Godeps/_workspace/src/gopkg.in/v1/yaml/yamlh.go
++++ /dev/null
+@@ -1,712 +0,0 @@
+-package yaml
+-
+-import (
+-	"io"
+-)
+-
+-// The version directive data.
+-type yaml_version_directive_t struct {
+-	major int8 // The major version number.
+-	minor int8 // The minor version number.
+-}
+-
+-// The tag directive data.
+-type yaml_tag_directive_t struct {
+-	handle []byte // The tag handle.
+-	prefix []byte // The tag prefix.
+-}
+-
+-type yaml_encoding_t int
+-
+-// The stream encoding.
+-const (
+-	// Let the parser choose the encoding.
+-	yaml_ANY_ENCODING yaml_encoding_t = iota
+-
+-	yaml_UTF8_ENCODING    // The default UTF-8 encoding.
+-	yaml_UTF16LE_ENCODING // The UTF-16-LE encoding with BOM.
+-	yaml_UTF16BE_ENCODING // The UTF-16-BE encoding with BOM.
+-)
+-
+-type yaml_break_t int
+-
+-// Line break types.
+-const (
+-	// Let the parser choose the break type.
+-	yaml_ANY_BREAK yaml_break_t = iota
+-
+-	yaml_CR_BREAK   // Use CR for line breaks (Mac style).
+-	yaml_LN_BREAK   // Use LN for line breaks (Unix style).
+-	yaml_CRLN_BREAK // Use CR LN for line breaks (DOS style).
+-)
+-
+-type yaml_error_type_t int
+-
+-// Many bad things could happen with the parser and emitter.
+-const (
+-	// No error is produced.
+-	yaml_NO_ERROR yaml_error_type_t = iota
+-
+-	yaml_MEMORY_ERROR   // Cannot allocate or reallocate a block of memory.
+-	yaml_READER_ERROR   // Cannot read or decode the input stream.
+-	yaml_SCANNER_ERROR  // Cannot scan the input stream.
+-	yaml_PARSER_ERROR   // Cannot parse the input stream.
+-	yaml_COMPOSER_ERROR // Cannot compose a YAML document.
+-	yaml_WRITER_ERROR   // Cannot write to the output stream.
+-	yaml_EMITTER_ERROR  // Cannot emit a YAML stream.
+-)
+-
+-// The pointer position.
+-type yaml_mark_t struct {
+-	index  int // The position index.
+-	line   int // The position line.
+-	column int // The position column.
+-}
+-
+-// Node Styles
+-
+-type yaml_style_t int8
+-
+-type yaml_scalar_style_t yaml_style_t
+-
+-// Scalar styles.
+-const (
+-	// Let the emitter choose the style.
+-	yaml_ANY_SCALAR_STYLE yaml_scalar_style_t = iota
+-
+-	yaml_PLAIN_SCALAR_STYLE         // The plain scalar style.
+-	yaml_SINGLE_QUOTED_SCALAR_STYLE // The single-quoted scalar style.
+-	yaml_DOUBLE_QUOTED_SCALAR_STYLE // The double-quoted scalar style.
+-	yaml_LITERAL_SCALAR_STYLE       // The literal scalar style.
+-	yaml_FOLDED_SCALAR_STYLE        // The folded scalar style.
+-)
+-
+-type yaml_sequence_style_t yaml_style_t
+-
+-// Sequence styles.
+-const (
+-	// Let the emitter choose the style.
+-	yaml_ANY_SEQUENCE_STYLE yaml_sequence_style_t = iota
+-
+-	yaml_BLOCK_SEQUENCE_STYLE // The block sequence style.
+-	yaml_FLOW_SEQUENCE_STYLE  // The flow sequence style.
+-)
+-
+-type yaml_mapping_style_t yaml_style_t
+-
+-// Mapping styles.
+-const (
+-	// Let the emitter choose the style.
+-	yaml_ANY_MAPPING_STYLE yaml_mapping_style_t = iota
+-
+-	yaml_BLOCK_MAPPING_STYLE // The block mapping style.
+-	yaml_FLOW_MAPPING_STYLE  // The flow mapping style.
+-)
+-
+-// Tokens
+-
+-type yaml_token_type_t int
+-
+-// Token types.
+-const (
+-	// An empty token.
+-	yaml_NO_TOKEN yaml_token_type_t = iota
+-
+-	yaml_STREAM_START_TOKEN // A STREAM-START token.
+-	yaml_STREAM_END_TOKEN   // A STREAM-END token.
+-
+-	yaml_VERSION_DIRECTIVE_TOKEN // A VERSION-DIRECTIVE token.
+-	yaml_TAG_DIRECTIVE_TOKEN     // A TAG-DIRECTIVE token.
+-	yaml_DOCUMENT_START_TOKEN    // A DOCUMENT-START token.
+-	yaml_DOCUMENT_END_TOKEN      // A DOCUMENT-END token.
+-
+-	yaml_BLOCK_SEQUENCE_START_TOKEN // A BLOCK-SEQUENCE-START token.
+-	yaml_BLOCK_MAPPING_START_TOKEN  // A BLOCK-MAPPING-START token.
+-	yaml_BLOCK_END_TOKEN            // A BLOCK-END token.
+-
+-	yaml_FLOW_SEQUENCE_START_TOKEN // A FLOW-SEQUENCE-START token.
+-	yaml_FLOW_SEQUENCE_END_TOKEN   // A FLOW-SEQUENCE-END token.
+-	yaml_FLOW_MAPPING_START_TOKEN  // A FLOW-MAPPING-START token.
+-	yaml_FLOW_MAPPING_END_TOKEN    // A FLOW-MAPPING-END token.
+-
+-	yaml_BLOCK_ENTRY_TOKEN // A BLOCK-ENTRY token.
+-	yaml_FLOW_ENTRY_TOKEN  // A FLOW-ENTRY token.
+-	yaml_KEY_TOKEN         // A KEY token.
+-	yaml_VALUE_TOKEN       // A VALUE token.
+-
+-	yaml_ALIAS_TOKEN  // An ALIAS token.
+-	yaml_ANCHOR_TOKEN // An ANCHOR token.
+-	yaml_TAG_TOKEN    // A TAG token.
+-	yaml_SCALAR_TOKEN // A SCALAR token.
+-)
+-
+-func (tt yaml_token_type_t) String() string {
+-	switch tt {
+-	case yaml_NO_TOKEN:
+-		return "yaml_NO_TOKEN"
+-	case yaml_STREAM_START_TOKEN:
+-		return "yaml_STREAM_START_TOKEN"
+-	case yaml_STREAM_END_TOKEN:
+-		return "yaml_STREAM_END_TOKEN"
+-	case yaml_VERSION_DIRECTIVE_TOKEN:
+-		return "yaml_VERSION_DIRECTIVE_TOKEN"
+-	case yaml_TAG_DIRECTIVE_TOKEN:
+-		return "yaml_TAG_DIRECTIVE_TOKEN"
+-	case yaml_DOCUMENT_START_TOKEN:
+-		return "yaml_DOCUMENT_START_TOKEN"
+-	case yaml_DOCUMENT_END_TOKEN:
+-		return "yaml_DOCUMENT_END_TOKEN"
+-	case yaml_BLOCK_SEQUENCE_START_TOKEN:
+-		return "yaml_BLOCK_SEQUENCE_START_TOKEN"
+-	case yaml_BLOCK_MAPPING_START_TOKEN:
+-		return "yaml_BLOCK_MAPPING_START_TOKEN"
+-	case yaml_BLOCK_END_TOKEN:
+-		return "yaml_BLOCK_END_TOKEN"
+-	case yaml_FLOW_SEQUENCE_START_TOKEN:
+-		return "yaml_FLOW_SEQUENCE_START_TOKEN"
+-	case yaml_FLOW_SEQUENCE_END_TOKEN:
+-		return "yaml_FLOW_SEQUENCE_END_TOKEN"
+-	case yaml_FLOW_MAPPING_START_TOKEN:
+-		return "yaml_FLOW_MAPPING_START_TOKEN"
+-	case yaml_FLOW_MAPPING_END_TOKEN:
+-		return "yaml_FLOW_MAPPING_END_TOKEN"
+-	case yaml_BLOCK_ENTRY_TOKEN:
+-		return "yaml_BLOCK_ENTRY_TOKEN"
+-	case yaml_FLOW_ENTRY_TOKEN:
+-		return "yaml_FLOW_ENTRY_TOKEN"
+-	case yaml_KEY_TOKEN:
+-		return "yaml_KEY_TOKEN"
+-	case yaml_VALUE_TOKEN:
+-		return "yaml_VALUE_TOKEN"
+-	case yaml_ALIAS_TOKEN:
+-		return "yaml_ALIAS_TOKEN"
+-	case yaml_ANCHOR_TOKEN:
+-		return "yaml_ANCHOR_TOKEN"
+-	case yaml_TAG_TOKEN:
+-		return "yaml_TAG_TOKEN"
+-	case yaml_SCALAR_TOKEN:
+-		return "yaml_SCALAR_TOKEN"
+-	}
+-	return "<unknown token>"
+-}
+-
+-// The token structure.
+-type yaml_token_t struct {
+-	// The token type.
+-	typ yaml_token_type_t
+-
+-	// The start/end of the token.
+-	start_mark, end_mark yaml_mark_t
+-
+-	// The stream encoding (for yaml_STREAM_START_TOKEN).
+-	encoding yaml_encoding_t
+-
+-	// The alias/anchor/scalar value or tag/tag directive handle
+-	// (for yaml_ALIAS_TOKEN, yaml_ANCHOR_TOKEN, yaml_SCALAR_TOKEN, yaml_TAG_TOKEN, yaml_TAG_DIRECTIVE_TOKEN).
+-	value []byte
+-
+-	// The tag suffix (for yaml_TAG_TOKEN).
+-	suffix []byte
+-
+-	// The tag directive prefix (for yaml_TAG_DIRECTIVE_TOKEN).
+-	prefix []byte
+-
+-	// The scalar style (for yaml_SCALAR_TOKEN).
+-	style yaml_scalar_style_t
+-
+-	// The version directive major/minor (for yaml_VERSION_DIRECTIVE_TOKEN).
+-	major, minor int8
+-}
+-
+-// Events
+-
+-type yaml_event_type_t int8
+-
+-// Event types.
+-const (
+-	// An empty event.
+-	yaml_NO_EVENT yaml_event_type_t = iota
+-
+-	yaml_STREAM_START_EVENT   // A STREAM-START event.
+-	yaml_STREAM_END_EVENT     // A STREAM-END event.
+-	yaml_DOCUMENT_START_EVENT // A DOCUMENT-START event.
+-	yaml_DOCUMENT_END_EVENT   // A DOCUMENT-END event.
+-	yaml_ALIAS_EVENT          // An ALIAS event.
+-	yaml_SCALAR_EVENT         // A SCALAR event.
+-	yaml_SEQUENCE_START_EVENT // A SEQUENCE-START event.
+-	yaml_SEQUENCE_END_EVENT   // A SEQUENCE-END event.
+-	yaml_MAPPING_START_EVENT  // A MAPPING-START event.
+-	yaml_MAPPING_END_EVENT    // A MAPPING-END event.
+-)
+-
+-// The event structure.
+-type yaml_event_t struct {
+-
+-	// The event type.
+-	typ yaml_event_type_t
+-
+-	// The start and end of the event.
+-	start_mark, end_mark yaml_mark_t
+-
+-	// The document encoding (for yaml_STREAM_START_EVENT).
+-	encoding yaml_encoding_t
+-
+-	// The version directive (for yaml_DOCUMENT_START_EVENT).
+-	version_directive *yaml_version_directive_t
+-
+-	// The list of tag directives (for yaml_DOCUMENT_START_EVENT).
+-	tag_directives []yaml_tag_directive_t
+-
+-	// The anchor (for yaml_SCALAR_EVENT, yaml_SEQUENCE_START_EVENT, yaml_MAPPING_START_EVENT, yaml_ALIAS_EVENT).
+-	anchor []byte
+-
+-	// The tag (for yaml_SCALAR_EVENT, yaml_SEQUENCE_START_EVENT, yaml_MAPPING_START_EVENT).
+-	tag []byte
+-
+-	// The scalar value (for yaml_SCALAR_EVENT).
+-	value []byte
+-
+-	// Is the document start/end indicator implicit, or the tag optional?
+-	// (for yaml_DOCUMENT_START_EVENT, yaml_DOCUMENT_END_EVENT, yaml_SEQUENCE_START_EVENT, yaml_MAPPING_START_EVENT, yaml_SCALAR_EVENT).
+-	implicit bool
+-
+-	// Is the tag optional for any non-plain style? (for yaml_SCALAR_EVENT).
+-	quoted_implicit bool
+-
+-	// The style (for yaml_SCALAR_EVENT, yaml_SEQUENCE_START_EVENT, yaml_MAPPING_START_EVENT).
+-	style yaml_style_t
+-}
+-
+-func (e *yaml_event_t) scalar_style() yaml_scalar_style_t     { return yaml_scalar_style_t(e.style) }
+-func (e *yaml_event_t) sequence_style() yaml_sequence_style_t { return yaml_sequence_style_t(e.style) }
+-func (e *yaml_event_t) mapping_style() yaml_mapping_style_t   { return yaml_mapping_style_t(e.style) }
+-
+-// Nodes
+-
+-const (
+-	yaml_NULL_TAG      = "tag:yaml.org,2002:null"      // The tag !!null with the only possible value: null.
+-	yaml_BOOL_TAG      = "tag:yaml.org,2002:bool"      // The tag !!bool with the values: true and false.
+-	yaml_STR_TAG       = "tag:yaml.org,2002:str"       // The tag !!str for string values.
+-	yaml_INT_TAG       = "tag:yaml.org,2002:int"       // The tag !!int for integer values.
+-	yaml_FLOAT_TAG     = "tag:yaml.org,2002:float"     // The tag !!float for float values.
+-	yaml_TIMESTAMP_TAG = "tag:yaml.org,2002:timestamp" // The tag !!timestamp for date and time values.
+-
+-	yaml_SEQ_TAG = "tag:yaml.org,2002:seq" // The tag !!seq is used to denote sequences.
+-	yaml_MAP_TAG = "tag:yaml.org,2002:map" // The tag !!map is used to denote mapping.
+-
+-	yaml_DEFAULT_SCALAR_TAG   = yaml_STR_TAG // The default scalar tag is !!str.
+-	yaml_DEFAULT_SEQUENCE_TAG = yaml_SEQ_TAG // The default sequence tag is !!seq.
+-	yaml_DEFAULT_MAPPING_TAG  = yaml_MAP_TAG // The default mapping tag is !!map.
+-)
+-
+-type yaml_node_type_t int
+-
+-// Node types.
+-const (
+-	// An empty node.
+-	yaml_NO_NODE yaml_node_type_t = iota
+-
+-	yaml_SCALAR_NODE   // A scalar node.
+-	yaml_SEQUENCE_NODE // A sequence node.
+-	yaml_MAPPING_NODE  // A mapping node.
+-)
+-
+-// An element of a sequence node.
+-type yaml_node_item_t int
+-
+-// An element of a mapping node.
+-type yaml_node_pair_t struct {
+-	key   int // The key of the element.
+-	value int // The value of the element.
+-}
+-
+-// The node structure.
+-type yaml_node_t struct {
+-	typ yaml_node_type_t // The node type.
+-	tag []byte           // The node tag.
+-
+-	// The node data.
+-
+-	// The scalar parameters (for yaml_SCALAR_NODE).
+-	scalar struct {
+-		value  []byte              // The scalar value.
+-		length int                 // The length of the scalar value.
+-		style  yaml_scalar_style_t // The scalar style.
+-	}
+-
+-	// The sequence parameters (for YAML_SEQUENCE_NODE).
+-	sequence struct {
+-		items_data []yaml_node_item_t    // The stack of sequence items.
+-		style      yaml_sequence_style_t // The sequence style.
+-	}
+-
+-	// The mapping parameters (for yaml_MAPPING_NODE).
+-	mapping struct {
+-		pairs_data  []yaml_node_pair_t   // The stack of mapping pairs (key, value).
+-		pairs_start *yaml_node_pair_t    // The beginning of the stack.
+-		pairs_end   *yaml_node_pair_t    // The end of the stack.
+-		pairs_top   *yaml_node_pair_t    // The top of the stack.
+-		style       yaml_mapping_style_t // The mapping style.
+-	}
+-
+-	start_mark yaml_mark_t // The beginning of the node.
+-	end_mark   yaml_mark_t // The end of the node.
+-
+-}
+-
+-// The document structure.
+-type yaml_document_t struct {
+-
+-	// The document nodes.
+-	nodes []yaml_node_t
+-
+-	// The version directive.
+-	version_directive *yaml_version_directive_t
+-
+-	// The list of tag directives.
+-	tag_directives_data  []yaml_tag_directive_t
+-	tag_directives_start int // The beginning of the tag directives list.
+-	tag_directives_end   int // The end of the tag directives list.
+-
+-	start_implicit int // Is the document start indicator implicit?
+-	end_implicit   int // Is the document end indicator implicit?
+-
+-	// The start/end of the document.
+-	start_mark, end_mark yaml_mark_t
+-}
+-
+-// The prototype of a read handler.
+-//
+-// The read handler is called when the parser needs to read more bytes from the
+-// source. The handler should write no more than size bytes to the buffer.
+-// The number of written bytes should be set to the size_read variable.
+-//
+-// [in,out]   data        A pointer to an application data specified by
+-//                        yaml_parser_set_input().
+-// [out]      buffer      The buffer to write the data from the source.
+-// [in]       size        The size of the buffer.
+-// [out]      size_read   The actual number of bytes read from the source.
+-//
+-// On success, the handler should return 1.  If the handler failed,
+-// the returned value should be 0. On EOF, the handler should set the
+-// size_read to 0 and return 1.
+-type yaml_read_handler_t func(parser *yaml_parser_t, buffer []byte) (n int, err error)
+-
+-// This structure holds information about a potential simple key.
+-type yaml_simple_key_t struct {
+-	possible     bool        // Is a simple key possible?
+-	required     bool        // Is a simple key required?
+-	token_number int         // The number of the token.
+-	mark         yaml_mark_t // The position mark.
+-}
+-
+-// The states of the parser.
+-type yaml_parser_state_t int
+-
+-const (
+-	yaml_PARSE_STREAM_START_STATE yaml_parser_state_t = iota
+-
+-	yaml_PARSE_IMPLICIT_DOCUMENT_START_STATE           // Expect the beginning of an implicit document.
+-	yaml_PARSE_DOCUMENT_START_STATE                    // Expect DOCUMENT-START.
+-	yaml_PARSE_DOCUMENT_CONTENT_STATE                  // Expect the content of a document.
+-	yaml_PARSE_DOCUMENT_END_STATE                      // Expect DOCUMENT-END.
+-	yaml_PARSE_BLOCK_NODE_STATE                        // Expect a block node.
+-	yaml_PARSE_BLOCK_NODE_OR_INDENTLESS_SEQUENCE_STATE // Expect a block node or indentless sequence.
+-	yaml_PARSE_FLOW_NODE_STATE                         // Expect a flow node.
+-	yaml_PARSE_BLOCK_SEQUENCE_FIRST_ENTRY_STATE        // Expect the first entry of a block sequence.
+-	yaml_PARSE_BLOCK_SEQUENCE_ENTRY_STATE              // Expect an entry of a block sequence.
+-	yaml_PARSE_INDENTLESS_SEQUENCE_ENTRY_STATE         // Expect an entry of an indentless sequence.
+-	yaml_PARSE_BLOCK_MAPPING_FIRST_KEY_STATE           // Expect the first key of a block mapping.
+-	yaml_PARSE_BLOCK_MAPPING_KEY_STATE                 // Expect a block mapping key.
+-	yaml_PARSE_BLOCK_MAPPING_VALUE_STATE               // Expect a block mapping value.
+-	yaml_PARSE_FLOW_SEQUENCE_FIRST_ENTRY_STATE         // Expect the first entry of a flow sequence.
+-	yaml_PARSE_FLOW_SEQUENCE_ENTRY_STATE               // Expect an entry of a flow sequence.
+-	yaml_PARSE_FLOW_SEQUENCE_ENTRY_MAPPING_KEY_STATE   // Expect a key of an ordered mapping.
+-	yaml_PARSE_FLOW_SEQUENCE_ENTRY_MAPPING_VALUE_STATE // Expect a value of an ordered mapping.
+-	yaml_PARSE_FLOW_SEQUENCE_ENTRY_MAPPING_END_STATE   // Expect the end of an ordered mapping entry.
+-	yaml_PARSE_FLOW_MAPPING_FIRST_KEY_STATE            // Expect the first key of a flow mapping.
+-	yaml_PARSE_FLOW_MAPPING_KEY_STATE                  // Expect a key of a flow mapping.
+-	yaml_PARSE_FLOW_MAPPING_VALUE_STATE                // Expect a value of a flow mapping.
+-	yaml_PARSE_FLOW_MAPPING_EMPTY_VALUE_STATE          // Expect an empty value of a flow mapping.
+-	yaml_PARSE_END_STATE                               // Expect nothing.
+-)
+-
+-func (ps yaml_parser_state_t) String() string {
+-	switch ps {
+-	case yaml_PARSE_STREAM_START_STATE:
+-		return "yaml_PARSE_STREAM_START_STATE"
+-	case yaml_PARSE_IMPLICIT_DOCUMENT_START_STATE:
+-		return "yaml_PARSE_IMPLICIT_DOCUMENT_START_STATE"
+-	case yaml_PARSE_DOCUMENT_START_STATE:
+-		return "yaml_PARSE_DOCUMENT_START_STATE"
+-	case yaml_PARSE_DOCUMENT_CONTENT_STATE:
+-		return "yaml_PARSE_DOCUMENT_CONTENT_STATE"
+-	case yaml_PARSE_DOCUMENT_END_STATE:
+-		return "yaml_PARSE_DOCUMENT_END_STATE"
+-	case yaml_PARSE_BLOCK_NODE_STATE:
+-		return "yaml_PARSE_BLOCK_NODE_STATE"
+-	case yaml_PARSE_BLOCK_NODE_OR_INDENTLESS_SEQUENCE_STATE:
+-		return "yaml_PARSE_BLOCK_NODE_OR_INDENTLESS_SEQUENCE_STATE"
+-	case yaml_PARSE_FLOW_NODE_STATE:
+-		return "yaml_PARSE_FLOW_NODE_STATE"
+-	case yaml_PARSE_BLOCK_SEQUENCE_FIRST_ENTRY_STATE:
+-		return "yaml_PARSE_BLOCK_SEQUENCE_FIRST_ENTRY_STATE"
+-	case yaml_PARSE_BLOCK_SEQUENCE_ENTRY_STATE:
+-		return "yaml_PARSE_BLOCK_SEQUENCE_ENTRY_STATE"
+-	case yaml_PARSE_INDENTLESS_SEQUENCE_ENTRY_STATE:
+-		return "yaml_PARSE_INDENTLESS_SEQUENCE_ENTRY_STATE"
+-	case yaml_PARSE_BLOCK_MAPPING_FIRST_KEY_STATE:
+-		return "yaml_PARSE_BLOCK_MAPPING_FIRST_KEY_STATE"
+-	case yaml_PARSE_BLOCK_MAPPING_KEY_STATE:
+-		return "yaml_PARSE_BLOCK_MAPPING_KEY_STATE"
+-	case yaml_PARSE_BLOCK_MAPPING_VALUE_STATE:
+-		return "yaml_PARSE_BLOCK_MAPPING_VALUE_STATE"
+-	case yaml_PARSE_FLOW_SEQUENCE_FIRST_ENTRY_STATE:
+-		return "yaml_PARSE_FLOW_SEQUENCE_FIRST_ENTRY_STATE"
+-	case yaml_PARSE_FLOW_SEQUENCE_ENTRY_STATE:
+-		return "yaml_PARSE_FLOW_SEQUENCE_ENTRY_STATE"
+-	case yaml_PARSE_FLOW_SEQUENCE_ENTRY_MAPPING_KEY_STATE:
+-		return "yaml_PARSE_FLOW_SEQUENCE_ENTRY_MAPPING_KEY_STATE"
+-	case yaml_PARSE_FLOW_SEQUENCE_ENTRY_MAPPING_VALUE_STATE:
+-		return "yaml_PARSE_FLOW_SEQUENCE_ENTRY_MAPPING_VALUE_STATE"
+-	case yaml_PARSE_FLOW_SEQUENCE_ENTRY_MAPPING_END_STATE:
+-		return "yaml_PARSE_FLOW_SEQUENCE_ENTRY_MAPPING_END_STATE"
+-	case yaml_PARSE_FLOW_MAPPING_FIRST_KEY_STATE:
+-		return "yaml_PARSE_FLOW_MAPPING_FIRST_KEY_STATE"
+-	case yaml_PARSE_FLOW_MAPPING_KEY_STATE:
+-		return "yaml_PARSE_FLOW_MAPPING_KEY_STATE"
+-	case yaml_PARSE_FLOW_MAPPING_VALUE_STATE:
+-		return "yaml_PARSE_FLOW_MAPPING_VALUE_STATE"
+-	case yaml_PARSE_FLOW_MAPPING_EMPTY_VALUE_STATE:
+-		return "yaml_PARSE_FLOW_MAPPING_EMPTY_VALUE_STATE"
+-	case yaml_PARSE_END_STATE:
+-		return "yaml_PARSE_END_STATE"
+-	}
+-	return "<unknown parser state>"
+-}
+-
+-// This structure holds aliases data.
+-type yaml_alias_data_t struct {
+-	anchor []byte      // The anchor.
+-	index  int         // The node id.
+-	mark   yaml_mark_t // The anchor mark.
+-}
+-
+-// The parser structure.
+-//
+-// All members are internal. Manage the structure using the
+-// yaml_parser_ family of functions.
+-type yaml_parser_t struct {
+-
+-	// Error handling
+-
+-	error yaml_error_type_t // Error type.
+-
+-	problem string // Error description.
+-
+-	// The byte about which the problem occurred.
+-	problem_offset int
+-	problem_value  int
+-	problem_mark   yaml_mark_t
+-
+-	// The error context.
+-	context      string
+-	context_mark yaml_mark_t
+-
+-	// Reader stuff
+-
+-	read_handler yaml_read_handler_t // Read handler.
+-
+-	input_file io.Reader // File input data.
+-	input      []byte    // String input data.
+-	input_pos  int
+-
+-	eof bool // EOF flag
+-
+-	buffer     []byte // The working buffer.
+-	buffer_pos int    // The current position of the buffer.
+-
+-	unread int // The number of unread characters in the buffer.
+-
+-	raw_buffer     []byte // The raw buffer.
+-	raw_buffer_pos int    // The current position of the raw buffer.
+-
+-	encoding yaml_encoding_t // The input encoding.
+-
+-	offset int         // The offset of the current position (in bytes).
+-	mark   yaml_mark_t // The mark of the current position.
+-
+-	// Scanner stuff
+-
+-	stream_start_produced bool // Have we started to scan the input stream?
+-	stream_end_produced   bool // Have we reached the end of the input stream?
+-
+-	flow_level int // The number of unclosed '[' and '{' indicators.
+-
+-	tokens          []yaml_token_t // The tokens queue.
+-	tokens_head     int            // The head of the tokens queue.
+-	tokens_parsed   int            // The number of tokens fetched from the queue.
+-	token_available bool           // Does the tokens queue contain a token ready for dequeueing.
+-
+-	indent  int   // The current indentation level.
+-	indents []int // The indentation levels stack.
+-
+-	simple_key_allowed bool                // May a simple key occur at the current position?
+-	simple_keys        []yaml_simple_key_t // The stack of simple keys.
+-
+-	// Parser stuff
+-
+-	state          yaml_parser_state_t    // The current parser state.
+-	states         []yaml_parser_state_t  // The parser states stack.
+-	marks          []yaml_mark_t          // The stack of marks.
+-	tag_directives []yaml_tag_directive_t // The list of TAG directives.
+-
+-	// Dumper stuff
+-
+-	aliases []yaml_alias_data_t // The alias data.
+-
+-	document *yaml_document_t // The currently parsed document.
+-}
+-
+-// Emitter Definitions
+-
+-// The prototype of a write handler.
+-//
+-// The write handler is called when the emitter needs to flush the accumulated
+-// characters to the output.  The handler should write @a size bytes of the
+-// @a buffer to the output.
+-//
+-// @param[in,out]   data        A pointer to an application data specified by
+-//                              yaml_emitter_set_output().
+-// @param[in]       buffer      The buffer with bytes to be written.
+-// @param[in]       size        The size of the buffer.
+-//
+-// @returns On success, the handler should return @c 1.  If the handler failed,
+-// the returned value should be @c 0.
+-//
+-type yaml_write_handler_t func(emitter *yaml_emitter_t, buffer []byte) error
+-
+-type yaml_emitter_state_t int
+-
+-// The emitter states.
+-const (
+-	// Expect STREAM-START.
+-	yaml_EMIT_STREAM_START_STATE yaml_emitter_state_t = iota
+-
+-	yaml_EMIT_FIRST_DOCUMENT_START_STATE       // Expect the first DOCUMENT-START or STREAM-END.
+-	yaml_EMIT_DOCUMENT_START_STATE             // Expect DOCUMENT-START or STREAM-END.
+-	yaml_EMIT_DOCUMENT_CONTENT_STATE           // Expect the content of a document.
+-	yaml_EMIT_DOCUMENT_END_STATE               // Expect DOCUMENT-END.
+-	yaml_EMIT_FLOW_SEQUENCE_FIRST_ITEM_STATE   // Expect the first item of a flow sequence.
+-	yaml_EMIT_FLOW_SEQUENCE_ITEM_STATE         // Expect an item of a flow sequence.
+-	yaml_EMIT_FLOW_MAPPING_FIRST_KEY_STATE     // Expect the first key of a flow mapping.
+-	yaml_EMIT_FLOW_MAPPING_KEY_STATE           // Expect a key of a flow mapping.
+-	yaml_EMIT_FLOW_MAPPING_SIMPLE_VALUE_STATE  // Expect a value for a simple key of a flow mapping.
+-	yaml_EMIT_FLOW_MAPPING_VALUE_STATE         // Expect a value of a flow mapping.
+-	yaml_EMIT_BLOCK_SEQUENCE_FIRST_ITEM_STATE  // Expect the first item of a block sequence.
+-	yaml_EMIT_BLOCK_SEQUENCE_ITEM_STATE        // Expect an item of a block sequence.
+-	yaml_EMIT_BLOCK_MAPPING_FIRST_KEY_STATE    // Expect the first key of a block mapping.
+-	yaml_EMIT_BLOCK_MAPPING_KEY_STATE          // Expect the key of a block mapping.
+-	yaml_EMIT_BLOCK_MAPPING_SIMPLE_VALUE_STATE // Expect a value for a simple key of a block mapping.
+-	yaml_EMIT_BLOCK_MAPPING_VALUE_STATE        // Expect a value of a block mapping.
+-	yaml_EMIT_END_STATE                        // Expect nothing.
+-)
+-
+-// The emitter structure.
+-//
+-// All members are internal.  Manage the structure using the @c yaml_emitter_
+-// family of functions.
+-type yaml_emitter_t struct {
+-
+-	// Error handling
+-
+-	error   yaml_error_type_t // Error type.
+-	problem string            // Error description.
+-
+-	// Writer stuff
+-
+-	write_handler yaml_write_handler_t // Write handler.
+-
+-	output_buffer *[]byte   // String output data.
+-	output_file   io.Writer // File output data.
+-
+-	buffer     []byte // The working buffer.
+-	buffer_pos int    // The current position of the buffer.
+-
+-	raw_buffer     []byte // The raw buffer.
+-	raw_buffer_pos int    // The current position of the buffer.
+-
+-	encoding yaml_encoding_t // The stream encoding.
+-
+-	// Emitter stuff
+-
+-	canonical   bool         // If the output is in the canonical style?
+-	best_indent int          // The number of indentation spaces.
+-	best_width  int          // The preferred width of the output lines.
+-	unicode     bool         // Allow unescaped non-ASCII characters?
+-	line_break  yaml_break_t // The preferred line break.
+-
+-	state  yaml_emitter_state_t   // The current emitter state.
+-	states []yaml_emitter_state_t // The stack of states.
+-
+-	events      []yaml_event_t // The event queue.
+-	events_head int            // The head of the event queue.
+-
+-	indents []int // The stack of indentation levels.
+-
+-	tag_directives []yaml_tag_directive_t // The list of tag directives.
+-
+-	indent int // The current indentation level.
+-
+-	flow_level int // The current flow level.
+-
+-	root_context       bool // Is it the document root context?
+-	sequence_context   bool // Is it a sequence context?
+-	mapping_context    bool // Is it a mapping context?
+-	simple_key_context bool // Is it a simple mapping key context?
+-
+-	line       int  // The current line.
+-	column     int  // The current column.
+-	whitespace bool // If the last character was a whitespace?
+-	indention  bool // If the last character was an indentation character (' ', '-', '?', ':')?
+-	open_ended bool // If an explicit document end is required?
+-
+-	// Anchor analysis.
+-	anchor_data struct {
+-		anchor []byte // The anchor value.
+-		alias  bool   // Is it an alias?
+-	}
+-
+-	// Tag analysis.
+-	tag_data struct {
+-		handle []byte // The tag handle.
+-		suffix []byte // The tag suffix.
+-	}
+-
+-	// Scalar analysis.
+-	scalar_data struct {
+-		value                 []byte              // The scalar value.
+-		multiline             bool                // Does the scalar contain line breaks?
+-		flow_plain_allowed    bool                // Can the scalar be expessed in the flow plain style?
+-		block_plain_allowed   bool                // Can the scalar be expressed in the block plain style?
+-		single_quoted_allowed bool                // Can the scalar be expressed in the single quoted style?
+-		block_allowed         bool                // Can the scalar be expressed in the literal or folded styles?
+-		style                 yaml_scalar_style_t // The output style.
+-	}
+-
+-	// Dumper stuff
+-
+-	opened bool // If the stream was already opened?
+-	closed bool // If the stream was already closed?
+-
+-	// The information associated with the document nodes.
+-	anchors *struct {
+-		references int  // The number of references.
+-		anchor     int  // The anchor id.
+-		serialized bool // If the node has been emitted?
+-	}
+-
+-	last_anchor_id int // The last assigned anchor id.
+-
+-	document *yaml_document_t // The currently emitted document.
+-}
+diff --git a/Godeps/_workspace/src/gopkg.in/v1/yaml/yamlprivateh.go b/Godeps/_workspace/src/gopkg.in/v1/yaml/yamlprivateh.go
+deleted file mode 100644
+index 8110ce3..0000000
+--- a/Godeps/_workspace/src/gopkg.in/v1/yaml/yamlprivateh.go
++++ /dev/null
+@@ -1,173 +0,0 @@
+-package yaml
+-
+-const (
+-	// The size of the input raw buffer.
+-	input_raw_buffer_size = 512
+-
+-	// The size of the input buffer.
+-	// It should be possible to decode the whole raw buffer.
+-	input_buffer_size = input_raw_buffer_size * 3
+-
+-	// The size of the output buffer.
+-	output_buffer_size = 128
+-
+-	// The size of the output raw buffer.
+-	// It should be possible to encode the whole output buffer.
+-	output_raw_buffer_size = (output_buffer_size*2 + 2)
+-
+-	// The size of other stacks and queues.
+-	initial_stack_size  = 16
+-	initial_queue_size  = 16
+-	initial_string_size = 16
+-)
+-
+-// Check if the character at the specified position is an alphabetical
+-// character, a digit, '_', or '-'.
+-func is_alpha(b []byte, i int) bool {
+-	return b[i] >= '0' && b[i] <= '9' || b[i] >= 'A' && b[i] <= 'Z' || b[i] >= 'a' && b[i] <= 'z' || b[i] == '_' || b[i] == '-'
+-}
+-
+-// Check if the character at the specified position is a digit.
+-func is_digit(b []byte, i int) bool {
+-	return b[i] >= '0' && b[i] <= '9'
+-}
+-
+-// Get the value of a digit.
+-func as_digit(b []byte, i int) int {
+-	return int(b[i]) - '0'
+-}
+-
+-// Check if the character at the specified position is a hex-digit.
+-func is_hex(b []byte, i int) bool {
+-	return b[i] >= '0' && b[i] <= '9' || b[i] >= 'A' && b[i] <= 'F' || b[i] >= 'a' && b[i] <= 'f'
+-}
+-
+-// Get the value of a hex-digit.
+-func as_hex(b []byte, i int) int {
+-	bi := b[i]
+-	if bi >= 'A' && bi <= 'F' {
+-		return int(bi) - 'A' + 10
+-	}
+-	if bi >= 'a' && bi <= 'f' {
+-		return int(bi) - 'a' + 10
+-	}
+-	return int(bi) - '0'
+-}
+-
+-// Check if the character is ASCII.
+-func is_ascii(b []byte, i int) bool {
+-	return b[i] <= 0x7F
+-}
+-
+-// Check if the character at the start of the buffer can be printed unescaped.
+-func is_printable(b []byte, i int) bool {
+-	return ((b[i] == 0x0A) || // . == #x0A
+-		(b[i] >= 0x20 && b[i] <= 0x7E) || // #x20 <= . <= #x7E
+-		(b[i] == 0xC2 && b[i+1] >= 0xA0) || // #0xA0 <= . <= #xD7FF
+-		(b[i] > 0xC2 && b[i] < 0xED) ||
+-		(b[i] == 0xED && b[i+1] < 0xA0) ||
+-		(b[i] == 0xEE) ||
+-		(b[i] == 0xEF && // #xE000 <= . <= #xFFFD
+-			!(b[i+1] == 0xBB && b[i+2] == 0xBF) && // && . != #xFEFF
+-			!(b[i+1] == 0xBF && (b[i+2] == 0xBE || b[i+2] == 0xBF))))
+-}
+-
+-// Check if the character at the specified position is NUL.
+-func is_z(b []byte, i int) bool {
+-	return b[i] == 0x00
+-}
+-
+-// Check if the beginning of the buffer is a BOM.
+-func is_bom(b []byte, i int) bool {
+-	return b[0] == 0xEF && b[1] == 0xBB && b[2] == 0xBF
+-}
+-
+-// Check if the character at the specified position is space.
+-func is_space(b []byte, i int) bool {
+-	return b[i] == ' '
+-}
+-
+-// Check if the character at the specified position is tab.
+-func is_tab(b []byte, i int) bool {
+-	return b[i] == '\t'
+-}
+-
+-// Check if the character at the specified position is blank (space or tab).
+-func is_blank(b []byte, i int) bool {
+-	//return is_space(b, i) || is_tab(b, i)
+-	return b[i] == ' ' || b[i] == '\t'
+-}
+-
+-// Check if the character at the specified position is a line break.
+-func is_break(b []byte, i int) bool {
+-	return (b[i] == '\r' || // CR (#xD)
+-		b[i] == '\n' || // LF (#xA)
+-		b[i] == 0xC2 && b[i+1] == 0x85 || // NEL (#x85)
+-		b[i] == 0xE2 && b[i+1] == 0x80 && b[i+2] == 0xA8 || // LS (#x2028)
+-		b[i] == 0xE2 && b[i+1] == 0x80 && b[i+2] == 0xA9) // PS (#x2029)
+-}
+-
+-func is_crlf(b []byte, i int) bool {
+-	return b[i] == '\r' && b[i+1] == '\n'
+-}
+-
+-// Check if the character is a line break or NUL.
+-func is_breakz(b []byte, i int) bool {
+-	//return is_break(b, i) || is_z(b, i)
+-	return (        // is_break:
+-	b[i] == '\r' || // CR (#xD)
+-		b[i] == '\n' || // LF (#xA)
+-		b[i] == 0xC2 && b[i+1] == 0x85 || // NEL (#x85)
+-		b[i] == 0xE2 && b[i+1] == 0x80 && b[i+2] == 0xA8 || // LS (#x2028)
+-		b[i] == 0xE2 && b[i+1] == 0x80 && b[i+2] == 0xA9 || // PS (#x2029)
+-		// is_z:
+-		b[i] == 0)
+-}
+-
+-// Check if the character is a line break, space, or NUL.
+-func is_spacez(b []byte, i int) bool {
+-	//return is_space(b, i) || is_breakz(b, i)
+-	return ( // is_space:
+-	b[i] == ' ' ||
+-		// is_breakz:
+-		b[i] == '\r' || // CR (#xD)
+-		b[i] == '\n' || // LF (#xA)
+-		b[i] == 0xC2 && b[i+1] == 0x85 || // NEL (#x85)
+-		b[i] == 0xE2 && b[i+1] == 0x80 && b[i+2] == 0xA8 || // LS (#x2028)
+-		b[i] == 0xE2 && b[i+1] == 0x80 && b[i+2] == 0xA9 || // PS (#x2029)
+-		b[i] == 0)
+-}
+-
+-// Check if the character is a line break, space, tab, or NUL.
+-func is_blankz(b []byte, i int) bool {
+-	//return is_blank(b, i) || is_breakz(b, i)
+-	return ( // is_blank:
+-	b[i] == ' ' || b[i] == '\t' ||
+-		// is_breakz:
+-		b[i] == '\r' || // CR (#xD)
+-		b[i] == '\n' || // LF (#xA)
+-		b[i] == 0xC2 && b[i+1] == 0x85 || // NEL (#x85)
+-		b[i] == 0xE2 && b[i+1] == 0x80 && b[i+2] == 0xA8 || // LS (#x2028)
+-		b[i] == 0xE2 && b[i+1] == 0x80 && b[i+2] == 0xA9 || // PS (#x2029)
+-		b[i] == 0)
+-}
+-
+-// Determine the width of the character.
+-func width(b byte) int {
+-	// Don't replace these by a switch without first
+-	// confirming that it is being inlined.
+-	if b&0x80 == 0x00 {
+-		return 1
+-	}
+-	if b&0xE0 == 0xC0 {
+-		return 2
+-	}
+-	if b&0xF0 == 0xE0 {
+-		return 3
+-	}
+-	if b&0xF8 == 0xF0 {
+-		return 4
+-	}
+-	return 0
+-
+-}
+-- 
+2.1.0
+
diff --git a/apiserver b/apiserver
new file mode 100644
index 0000000..e24e724
--- /dev/null
+++ b/apiserver
@@ -0,0 +1,20 @@
+###
+# kubernetes system config
+#
+# The following values are used to configure the kubernetes-apiserver
+#
+
+# The address on the local server to listen to.
+KUBE_API_ADDRESS="127.0.0.1"
+
+# The port on the local server to listen on.
+KUBE_API_PORT="8080"
+
+# How the replication controller and scheduler find the apiserver
+KUBE_MASTER="127.0.0.1:8080"
+
+# Comma separated list of minions
+MINION_ADDRESSES="127.0.0.1"
+
+# Port minions listen on
+MINION_PORT="10250"
diff --git a/config b/config
new file mode 100644
index 0000000..5dddcc1
--- /dev/null
+++ b/config
@@ -0,0 +1,22 @@
+###
+# kubernetes system config
+#
+# The following values are used to configure various aspects of all
+# kubernetes services, including
+#
+#   kubernetes-apiserver.service
+#   kubernetes-controller-manager.service
+#   kubernetes-kubelet.service
+#   kubernetes-proxy.service
+
+# Comma separated list of nodes in the etcd cluster
+KUBE_ETCD_SERVERS="http://127.0.0.1:4001"
+
+# logging to stderr means we get it in the systemd journal
+KUBE_LOGTOSTDERR="true"
+
+# journal message level, 0 is debug
+KUBE_LOG_LEVEL=0
+
+# Should this cluster be allowed to run privileged docker containers
+KUBE_ALLOW_PRIV="true"
diff --git a/controller-manager b/controller-manager
new file mode 100644
index 0000000..a631838
--- /dev/null
+++ b/controller-manager
@@ -0,0 +1,4 @@
+###
+# The following values are used to configure the kubernetes controller-manager
+
+# defaults from config and apiserver should be adequate
diff --git a/kube-apiserver.service b/kube-apiserver.service
new file mode 100644
index 0000000..5f9e49c
--- /dev/null
+++ b/kube-apiserver.service
@@ -0,0 +1,21 @@
+[Unit]
+Description=Kubernetes API Server
+
+[Service]
+EnvironmentFile=/etc/kubernetes/config
+EnvironmentFile=/etc/kubernetes/apiserver
+User=kube
+ExecStart=/usr/bin/kube-apiserver \
+            --logtostderr=${KUBE_LOGTOSTDERR} \
+	    --v=${KUBE_LOG_LEVEL} \
+            --etcd_servers=${KUBE_ETCD_SERVERS} \
+            --address=${KUBE_API_ADDRESS} \
+            --port=${KUBE_API_PORT} \
+            --machines=${MINION_ADDRESSES} \
+	    --minion_port=${MINION_PORT} \
+	    --allow_privileged=${KUBE_ALLOW_PRIV}
+Restart=on-failure
+
+[Install]
+WantedBy=multi-user.target
+ 
diff --git a/kube-controller-manager.service b/kube-controller-manager.service
new file mode 100644
index 0000000..287c45b
--- /dev/null
+++ b/kube-controller-manager.service
@@ -0,0 +1,16 @@
+[Unit]
+Description=Kubernetes Controller Manager
+
+[Service]
+EnvironmentFile=/etc/kubernetes/config
+EnvironmentFile=/etc/kubernetes/apiserver
+EnvironmentFile=/etc/kubernetes/controller-manager
+User=kube
+ExecStart=/usr/bin/kube-controller-manager \
+            --logtostderr=${KUBE_LOGTOSTDERR} \
+	    --v=${KUBE_LOG_LEVEL} \
+            --master=${KUBE_MASTER}
+Restart=on-failure
+
+[Install]
+WantedBy=multi-user.target
diff --git a/kube-proxy.service b/kube-proxy.service
new file mode 100644
index 0000000..595c7cd
--- /dev/null
+++ b/kube-proxy.service
@@ -0,0 +1,17 @@
+[Unit]
+Description=Kubernetes Proxy
+# the proxy crashes if etcd isn't reachable.
+# https://github.com/GoogleCloudPlatform/kubernetes/issues/1206
+After=network.target
+
+[Service]
+EnvironmentFile=/etc/kubernetes/config
+EnvironmentFile=/etc/kubernetes/proxy
+ExecStart=/usr/bin/kube-proxy \
+            --logtostderr=${KUBE_LOGTOSTDERR} \
+	    --v=${KUBE_LOG_LEVEL} \
+            --etcd_servers=${KUBE_ETCD_SERVERS}
+Restart=on-failure
+
+[Install]
+WantedBy=multi-user.target
diff --git a/kube-scheduler.service b/kube-scheduler.service
new file mode 100644
index 0000000..76d4036
--- /dev/null
+++ b/kube-scheduler.service
@@ -0,0 +1,15 @@
+[Unit]
+Description=Kubernetes Scheduler
+
+[Service]
+EnvironmentFile=/etc/kubernetes/config
+EnvironmentFile=/etc/kubernetes/apiserver
+EnvironmentFile=/etc/kubernetes/scheduler
+ExecStart=/usr/bin/kube-scheduler \
+            --logtostderr=${KUBE_LOGTOSTDERR} \
+	    --v=${KUBE_LOG_LEVEL} \
+	    --master=${KUBE_MASTER}
+Restart=on-failure
+
+[Install]
+WantedBy=multi-user.target
diff --git a/kubecfg.bash b/kubecfg.bash
new file mode 100644
index 0000000..bf7284d
--- /dev/null
+++ b/kubecfg.bash
@@ -0,0 +1,217 @@
+#!/bin/bash
+#
+# bash completion file for core kubecfg commands
+#
+# This script supports completion of:
+#  - commands and their options
+#  - container ids and names
+#  - image repos and tags
+#  - filepaths
+#
+# To enable the completions either:
+#  - place this file in /etc/bash_completion.d
+#  or
+#  - copy this file and add the line below to your .bashrc after
+#    bash completion features are loaded
+#     . kubecfg.bash
+#
+# Note:
+# Currently, the completions will not work if the kubecfg daemon is not
+# bound to the default communication port/socket.
+# If the kubecfg daemon is using a unix socket for communication, your user
+# must have access to the socket for the completions to function correctly.
+
+__kubecfg_q() {
+    kubecfg 2>/dev/null "$@"
+}
+
+__contains_word () {
+    local w word=$1; shift
+    for w in "$@"; do
+        [[ $w = "$word" ]] && return
+    done
+}
+
+__has_service() {
+    for ((i=0; i < ${cword}; i++)); do
+        local word=${words[i]}
+        word=$(echo ${word} | awk -F"/" '{print $1}')
+        if __contains_word "${words[i]}" "${services[@]}" &&
+           ! __contains_word "${words[i-1]}" "${opts[@]}"; then
+            return 0
+        fi
+    done
+    return 1
+}
+
+__kubecfg_all_pods()
+{
+    local pods=($( __kubecfg_q list pods | tail -n +3 | awk '{print $1}' ))
+    pods=${pods[@]/#/"pods/"}
+    COMPREPLY=( $( compgen -W "${pods[*]}" -- "$cur" ) )
+}
+
+__kubecfg_all_minions()
+{
+    local minions=($( __kubecfg_q list minions | tail -n +3 | awk '{print $1}' ))
+    minions=${minions[@]/#/"minions/"}
+    COMPREPLY=( $( compgen -W "${minions[*]}" -- "$cur" ) )
+}
+
+__kubecfg_all_replicationControllers()
+{
+    local replicationControllers=($( __kubecfg_q list replicationControllers | tail -n +3 | awk '{print $1}' ))
+    replicationControllers=${replicationControllers[@]/#/"replicationControllers/"}
+    COMPREPLY=( $( compgen -W "${replicationControllers[*]}" -- "$cur" ) )
+}
+
+__kubecfg_all_services()
+{
+    local services=($( __kubecfg_q list services | tail -n +3 | awk '{print $1}' ))
+    services=${services[@]/#/"services/"}
+    COMPREPLY=( $( compgen -W "${services[*]}" -- "$cur" ) )
+}
+
+_kubecfg_specific_service_match()
+{
+    case "$cur" in
+        pods/*)
+            __kubecfg_all_pods
+            ;;
+        minions/*)
+            __kubecfg_all_minions
+            ;;
+        replicationControllers/*)
+            __kubecfg_all_replicationControllers
+            ;;
+        services/*)
+            __kubecfg_all_services
+            ;;
+        *)
+            if __has_service; then
+                return 0
+            fi
+            compopt -o nospace
+            COMPREPLY=( $( compgen -S / -W "${services[*]}" -- "$cur" ) )
+            ;;
+    esac
+}
+
+_kubecfg_get()
+{
+    _kubecfg_specific_service_match
+}
+
+_kubecfg_delete()
+{
+    _kubecfg_specific_service_match
+}
+
+_kubecfg_update()
+{
+    _kubecfg_specific_service_match
+}
+
+_kubecfg_service_match()
+{
+    if __has_service; then
+        return 0
+    fi
+
+    case "$cur" in
+        *)
+            COMPREPLY=( $( compgen -W "${services[*]}" -- "$cur" ) )
+            ;;
+    esac
+}
+
+_kubecfg_list()
+{
+    _kubecfg_service_match
+}
+
+_kubecfg_create()
+{
+    _kubecfg_service_match
+}
+
+_kubecfg()
+{
+    local opts=(
+            -h
+            -c
+    )
+    local -A all_services=(
+        [CREATE]="pods replicationControllers services"
+        [UPDATE]="replicationControllers"
+        [ALL]="pods replicationControllers services minions"
+    )
+    local services=(${all_services[ALL]})
+    local -A all_commands=(
+        [WITH_JSON]="create update"
+        [ALL]="create update get list delete stop rm rollingupdate resize"
+    )
+    local commands=(${all_commands[ALL]})
+
+    COMPREPLY=()
+    local command
+    local cur prev words cword
+    _get_comp_words_by_ref -n : cur prev words cword
+
+    if __contains_word "$prev" "${opts[@]}"; then
+        case $prev in
+            -h)
+                comps=$(compgen -A hostname)
+                return 0
+                ;;
+            -c)
+                _filedir json
+                return 0
+                ;;
+        esac
+    fi
+
+    if [[ "$cur" = -* ]]; then
+        COMPREPLY=( $(compgen -W '${opts[*]}' -- "$cur") )
+        return 0
+    fi
+
+    # if you passed -c, you are limited to create
+    if __contains_word "-c" "${words[@]}"; then
+        services=(${all_services[CREATE]} ${all_services[UPDATE]})
+        commands=(${all_commands[WITH_JSON]})
+    fi
+
+    # figure out which command they are running, remembering that arguments to
+    # options don't count as the command!  So a hostname named 'create' won't
+    # trip things up
+    for ((i=0; i < ${cword}; i++)); do
+        if __contains_word "${words[i]}" "${commands[@]}" &&
+           ! __contains_word "${words[i-1]}" "${opts[@]}"; then
+            command=${words[i]}
+            break
+        fi
+    done
+
+    # tell the list of possible commands
+    if [[ -z ${command} ]]; then
+        COMPREPLY=( $( compgen -W "${commands[*]}" -- "$cur" ) )
+        return 0
+    fi
+
+    # remove services which you can't update given your command
+    if [[ ${command} == "create" ]]; then
+        services=(${all_services[CREATE]})
+    elif [[ ${command} == "update" ]]; then
+        services=(${all_services[UPDATE]})
+    fi
+
+    # run the _kubecfg_${command} function to keep parsing
+    local completions_func=_kubecfg_${command}
+    declare -F $completions_func >/dev/null && $completions_func
+
+    return 0
+}
+
+complete -F _kubecfg kubecfg
+# ex: ts=4 sw=4 et filetype=sh
diff --git a/kubelet b/kubelet
new file mode 100644
index 0000000..cc38cee
--- /dev/null
+++ b/kubelet
@@ -0,0 +1,11 @@
+###
+# kubernetes kubelet (minion) config
+
+# The address for the info server to serve on
+MINION_ADDRESS="127.0.0.1"
+
+# The port for the info server to serve on
+MINION_PORT="10250"
+
+# You may leave this blank to use the actual hostname
+MINION_HOSTNAME="127.0.0.1"
diff --git a/kubelet.service b/kubelet.service
new file mode 100644
index 0000000..de09249
--- /dev/null
+++ b/kubelet.service
@@ -0,0 +1,20 @@
+[Unit]
+Description=Kubernetes Kubelet
+After=docker.socket cadvisor.service
+Requires=docker.socket cadvisor.service
+
+[Service]
+EnvironmentFile=/etc/kubernetes/config
+EnvironmentFile=/etc/kubernetes/kubelet
+ExecStart=/usr/bin/kubelet \
+            --logtostderr=${KUBE_LOGTOSTDERR} \
+	    --v=${KUBE_LOG_LEVEL} \
+            --etcd_servers=${KUBE_ETCD_SERVERS} \
+            --address=${MINION_ADDRESS} \
+            --port=${MINION_PORT} \
+	    --hostname_override=${MINION_HOSTNAME} \
+	    --allow_privileged=${KUBE_ALLOW_PRIV}
+Restart=on-failure
+
+[Install]
+WantedBy=multi-user.target
diff --git a/kubernetes-88fdb65.tar.gz b/kubernetes-88fdb65.tar.gz
new file mode 100644
index 0000000..5ccf5a1
Binary files /dev/null and b/kubernetes-88fdb65.tar.gz differ
diff --git a/kubernetes.spec b/kubernetes.spec
new file mode 100644
index 0000000..c787ac1
--- /dev/null
+++ b/kubernetes.spec
@@ -0,0 +1,262 @@
+#debuginfo not supported with Go
+%global debug_package	%{nil}
+%global import_path	github.com/GoogleCloudPlatform/kubernetes
+%global commit		88fdb659bc44cf2d1895c03f8838d36f4d890796
+%global shortcommit	%(c=%{commit}; echo ${c:0:7})
+
+#binaries which should be called kube-*
+%global prefixed_binaries proxy apiserver controller-manager scheduler
+#binaries which should not be renamed at all
+%global nonprefixed_binaries kubelet kubecfg
+#all of the above
+%global binaries	%{prefixed_binaries} %{nonprefixed_binaries}
+
+#I really need this, otherwise "version_ldflags=$(kube::version_ldflags)"
+# does not work
+%global _buildshell	/bin/bash
+%global _checkshell	/bin/bash
+
+Name:		kubernetes
+Version:	0.3
+Release:	0.2.git%{shortcommit}%{?dist}
+Summary:	Container cluster management
+License:	ASL 2.0
+URL:		https://github.com/GoogleCloudPlatform/kubernetes
+ExclusiveArch:	x86_64
+Source0:	https://github.com/GoogleCloudPlatform/kubernetes/archive/%{commit}/kubernetes-%{shortcommit}.tar.gz
+Source1:	kubecfg.bash
+
+#config files
+Source10:	config
+Source11:	apiserver
+Source12:	controller-manager
+Source13:	proxy
+Source14:	kubelet
+Source15:	scheduler
+#service files
+Source20:	kube-apiserver.service
+Source21:	kube-controller-manager.service
+Source22:	kube-proxy.service
+Source23:	kubelet.service
+Source24:	kube-scheduler.service
+
+Patch1:		0001-remove-all-third-party-software.patch
+
+%if 0%{?fedora} >= 21 || 0%{?rhel}
+Requires:	docker
+%else
+Requires:	docker-io
+%endif
+
+Requires:	etcd
+Requires:	cadvisor
+
+Requires(pre):	shadow-utils
+
+BuildRequires:	git
+BuildRequires:	golang >= 1.2-7
+BuildRequires:	systemd
+BuildRequires:	golang-cover
+BuildRequires:	etcd
+BuildRequires:	golang(bitbucket.org/kardianos/osext)
+BuildRequires:	golang(github.com/coreos/go-log/log)
+BuildRequires:	golang(github.com/coreos/go-systemd)
+BuildRequires:	golang(github.com/coreos/go-etcd/etcd)
+BuildRequires:	golang(github.com/google/gofuzz)
+BuildRequires:  golang(code.google.com/p/go.net/html)
+BuildRequires:  golang(code.google.com/p/go.net/html/atom)
+BuildRequires:  golang(code.google.com/p/go.net/websocket)
+BuildRequires:	golang(code.google.com/p/goauth2)
+BuildRequires:	golang(code.google.com/p/go-uuid)
+BuildRequires:	golang(code.google.com/p/google-api-go-client)
+BuildRequires:	golang(github.com/fsouza/go-dockerclient) > 0-0.6
+BuildRequires:	golang(github.com/golang/glog)
+BuildRequires:	golang(github.com/stretchr/objx)
+BuildRequires:	golang(github.com/stretchr/testify)
+BuildRequires:	golang(gopkg.in/v1/yaml)
+BuildRequires:	golang(github.com/google/cadvisor)
+BuildRequires:	golang(code.google.com/p/gcfg)
+BuildRequires:	golang(github.com/mitchellh/goamz/aws)
+BuildRequires:	golang(github.com/mitchellh/goamz/ec2)
+BuildRequires:	golang(github.com/vaughan0/go-ini)
+
+%description
+%{summary}
+
+%prep
+%autosetup -Sgit -n %{name}-%{commit}
+
+%build
+export KUBE_GIT_COMMIT=%{commit}
+export KUBE_GIT_TREE_STATE="dirty"
+export KUBE_GIT_VERSION=v%{version}
+
+export KUBE_EXTRA_GOPATH=%{gopath}
+export KUBE_NO_GODEPS="true"
+
+. hack/config-go.sh
+
+kube::setup_go_environment
+
+version_ldflags=$(kube::version_ldflags)
+
+targets=($(kube::default_build_targets))
+targets+=("cmd/integration")
+binaries=($(kube::binaries_from_targets "${targets[@]}"))
+
+for binary in ${binaries[@]}; do
+  bin=$(basename "${binary}")
+  echo "+++ Building ${bin}"
+  go build -o "${KUBE_TARGET}/bin/${bin}" \
+        "${goflags[@]:+${goflags[@]}}" \
+        -ldflags "${version_ldflags}" \
+        "${binary}"
+done
+
+%check
+export KUBE_EXTRA_GOPATH=%{gopath}
+export KUBE_NO_GODEPS="true"
+export KUBE_NO_BUILD_INTEGRATION="true"
+exit
+echo "******Testing the commands*****"
+hack/test-cmd.sh
+# In Fedora 20 and RHEL 7 the go cover tool doesn't work correctly
+%if 0%{?fedora} >= 21
+echo "******Testing the go code******"
+hack/test-go.sh
+echo "******Testing integration******"
+hack/test-integration.sh
+%endif
+echo "******Benchmarking kube********"
+hack/benchmark-go.sh
+
+%install
+install -m 755 -d %{buildroot}%{_bindir}
+for bin in %{prefixed_binaries}; do
+  echo "+++ INSTALLING ${bin}"
+  install -p -m 755 _output/go/bin/${bin} %{buildroot}%{_bindir}/kube-${bin}
+done
+for bin in %{nonprefixed_binaries}; do
+  echo "+++ INSTALLING ${bin}"
+  install -p -m 755 _output/go/bin/${bin} %{buildroot}%{_bindir}/${bin}
+done
+
+# install the bash completion
+install -d -m 0755 %{buildroot}%{_datadir}/bash-completion/completions/
+install -T %{SOURCE1} %{buildroot}%{_datadir}/bash-completion/completions/kubecfg
+
+# install config files
+install -d -m 0755 %{buildroot}%{_sysconfdir}/%{name}
+install -m 644 -t %{buildroot}%{_sysconfdir}/%{name} %{SOURCE10} %{SOURCE11} %{SOURCE12} %{SOURCE13} %{SOURCE14} %{SOURCE15}
+
+# install service files
+install -d -m 0755 %{buildroot}%{_unitdir}
+install -m 0644 -t %{buildroot}%{_unitdir} %{SOURCE20} %{SOURCE21} %{SOURCE22} %{SOURCE23} %{SOURCE24}
+
+%files
+%doc README.md LICENSE CONTRIB.md CONTRIBUTING.md DESIGN.md
+%{_bindir}/kube-apiserver
+%{_bindir}/kubecfg
+%{_bindir}/kube-controller-manager
+%{_bindir}/kubelet
+%{_bindir}/kube-proxy
+%{_bindir}/kube-scheduler
+%{_unitdir}/kube-apiserver.service
+%{_unitdir}/kubelet.service
+%{_unitdir}/kube-scheduler.service
+%{_unitdir}/kube-controller-manager.service
+%{_unitdir}/kube-proxy.service
+%dir %{_sysconfdir}/%{name}
+%{_datadir}/bash-completion/completions/kubecfg
+%config(noreplace) %{_sysconfdir}/%{name}/config
+%config(noreplace) %{_sysconfdir}/%{name}/apiserver
+%config(noreplace) %{_sysconfdir}/%{name}/controller-manager
+%config(noreplace) %{_sysconfdir}/%{name}/proxy
+%config(noreplace) %{_sysconfdir}/%{name}/kubelet
+%config(noreplace) %{_sysconfdir}/%{name}/scheduler
+
+%pre
+getent group kube >/dev/null || groupadd -r kube
+getent passwd kube >/dev/null || useradd -r -g kube -d / -s /sbin/nologin \
+        -c "Kubernetes user" kube
+%post
+%systemd_post %{basename:%{SOURCE20}} %{basename:%{SOURCE21}} %{basename:%{SOURCE22}} %{basename:%{SOURCE23}} %{basename:%{SOURCE24}}
+
+%preun
+%systemd_preun %{basename:%{SOURCE20}} %{basename:%{SOURCE21}} %{basename:%{SOURCE22}} %{basename:%{SOURCE23}} %{basename:%{SOURCE24}}
+
+%postun
+%systemd_postun
+
+%changelog
+* Mon Sep 29 2014 Jan Chaloupka <jchaloup at redhat.com> - 0.3-0.2.git88fdb65
+- replace * with corresponding files
+- remove dependency on gcc
+
+* Wed Sep 24 2014 Eric Paris <eparis at redhat.com> - 0.3-0.1.git88fdb65
+- Bump to upstream 88fdb659bc44cf2d1895c03f8838d36f4d890796
+
+* Tue Sep 23 2014 Eric Paris <eparis at redhat.com> - 0.3-0.0.gitbab5082
+- Bump to upstream bab5082a852218bb65aaacb91bdf599f9dd1b3ac
+
+* Fri Sep 19 2014 Eric Paris <eparis at redhat.com> - 0.2-0.10.git06316f4
+- Bump to upstream 06316f486127697d5c2f5f4c82963dec272926cf
+
+* Thu Sep 18 2014 Eric Paris <eparis at redhat.com> - 0.2-0.9.gitf7a5ec3
+- Bump to upstream f7a5ec3c36bd40cc2216c1da331ab647733769dd
+
+* Wed Sep 17 2014 Eric Paris <eparis at redhat.com> - 0.2-0.8.gitac8ee45
+- Try to intelligently determine the deps
+
+* Wed Sep 17 2014 Eric Paris <eparis at redhat.com> - 0.2-0.7.gitac8ee45
+- Bump to upstream ac8ee45f4fc4579b3ed65faafa618de9c0f8fb26
+
+* Mon Sep 15 2014 Eric Paris <eparis at redhat.com> - 0.2-0.5.git24b5b7e
+- Bump to upstream 24b5b7e8d3a8af1eecf4db40c204e3c15ae955ba
+
+* Thu Sep 11 2014 Eric Paris <eparis at redhat.com> - 0.2-0.3.gitcc7999c
+- Bump to upstream cc7999c00a40df21bd3b5e85ecea3b817377b231
+
+* Wed Sep 10 2014 Eric Paris <eparis at redhat.com> - 0.2-0.2.git60d4770
+- Add bash completions
+
+* Wed Sep 10 2014 Eric Paris <eparis at redhat.com> - 0.2-0.1.git60d4770
+- Bump to upstream 60d4770127d22e51c53e74ca94c3639702924bd2
+
+* Mon Sep 08 2014 Lokesh Mandvekar <lsm5 at fedoraproject.org> - 0.1-0.4.git6ebe69a
+- prefer autosetup instead of setup (revert setup change in 0-0.3.git)
+https://fedoraproject.org/wiki/Autosetup_packaging_draft
+- revert version number to 0.1
+
+* Mon Sep 08 2014 Lokesh Mandvekar <lsm5 at fedoraproject.org> - 0-0.3.git6ebe69a
+- gopath defined in golang package already
+- package owns /etc/kubernetes
+- bash dependency implicit
+- keep buildroot/$RPM_BUILD_ROOT macros consistent
+- replace with macros wherever possible
+- set version, release and source tarball prep as per
+https://fedoraproject.org/wiki/Packaging:SourceURL#Github
+
+* Mon Sep 08 2014 Eric Paris <eparis at redhat.com>
+- make services restart automatically on error
+
+* Sat Sep 06 2014 Eric Paris <eparis at redhat.com> - 0.1-0.1.0.git6ebe69a8
+- Bump to upstream 6ebe69a8751508c11d0db4dceb8ecab0c2c7314a
+
+* Wed Aug 13 2014 Eric Paris <eparis at redhat.com>
+- update to upstream
+- redo build to use project scripts
+- use project scripts in %check
+- rework deletion of third_party packages to easily detect changes
+- run apiserver and controller-manager as non-root
+
+* Mon Aug 11 2014 Adam Miller <maxamillion at redhat.com>
+- update to upstream
+- decouple the rest of third_party
+
+* Thu Aug 7 2014 Eric Paris <eparis at redhat.com>
+- update to head
+- update package to include config files
+
+* Wed Jul 16 2014 Colin Walters <walters at redhat.com>
+- Initial package
diff --git a/proxy b/proxy
new file mode 100644
index 0000000..50dda83
--- /dev/null
+++ b/proxy
@@ -0,0 +1,4 @@
+###
+# kubernetes proxy config
+
+# default config should be adequate
diff --git a/scheduler b/scheduler
new file mode 100644
index 0000000..ae87ea1
--- /dev/null
+++ b/scheduler
@@ -0,0 +1,4 @@
+###
+# kubernetes scheduler config
+
+# default config should be adequate
-- 
cgit v0.10.2


	http://pkgs.fedoraproject.org/cgit/kubernetes.git/commit/?h=master&id=c623ade0261e994e0113fb36103832a8c1f54bc8

