The package rpms/qt5-qtwebengine.git has added or updated architecture-specific content in its spec file (ExclusiveArch/ExcludeArch or %ifarch/%ifnarch) in commit(s):
https://src.fedoraproject.org/cgit/rpms/qt5-qtwebengine.git/commit/?id=6e....
Change:
-%ifarch %{ix86}
Thanks.
Full change:
============
commit ef6768f9d68dffbf0b361d0e60ae5393be4c1373
Author: Rex Dieter <rdieter(a)gmail.com>
Date: Mon Mar 25 13:26:27 2019 -0500
+changelog wrt libxslt
diff --git a/qt5-qtwebengine.spec b/qt5-qtwebengine.spec
index 40cbcfe..300b6ba 100644
--- a/qt5-qtwebengine.spec
+++ b/qt5-qtwebengine.spec
@@ -570,6 +570,7 @@ done
%changelog
* Mon Mar 25 2019 Rex Dieter <rdieter(a)fedoraproject.org> - 5.12.2-1
- 5.12.2
+- use system libxml2/libxslt
* Sun Feb 24 2019 Rex Dieter <rdieter(a)fedoraproject.org> - 5.12.1-1
- 5.12.1
commit b597268292347faae47aee429f93ba2a50b0482a
Author: Rex Dieter <rdieter(a)gmail.com>
Date: Mon Mar 25 13:25:53 2019 -0500
use system libxslt/libxml2
diff --git a/qt5-qtwebengine.spec b/qt5-qtwebengine.spec
index 9f4d304..40cbcfe 100644
--- a/qt5-qtwebengine.spec
+++ b/qt5-qtwebengine.spec
@@ -159,6 +159,7 @@ BuildRequires: pkgconfig(libpci)
BuildRequires: pkgconfig(dbus-1)
BuildRequires: pkgconfig(nss)
BuildRequires: pkgconfig(lcms2)
+BuildRequires: pkgconfig(libxslt) pkgconfig(libxml-2.0)
BuildRequires: perl-interpreter
BuildRequires: python2-devel
%if 0%{?use_system_libvpx}
@@ -228,7 +229,7 @@ Provides: bundled(libwebp) = 0.6.0
%endif
# bundled as "libxml"
# see src/3rdparty/chromium/third_party/libxml/linux/include/libxml/xmlversion.h
-Provides: bundled(libxml2) = 2.9.4
+#Provides: bundled(libxml2) = 2.9.4
# see src/3rdparty/chromium/third_party/libxslt/linux/config.h for version
Provides: bundled(libxslt) = 1.1.29
Provides: bundled(libXNVCtrl) = 302.17
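The commit above swaps the bundled libxml2 for the system copies by adding pkgconfig() BuildRequires. A minimal sketch of the check a build root has to satisfy, using the same pkg-config module names as the diff (the `check_mod` helper is illustrative, not part of the spec):

```shell
# Probe the pkg-config modules the new BuildRequires lines resolve to.
# Falls back to "not found" when pkg-config or the module is absent.
check_mod() {
    if command -v pkg-config >/dev/null 2>&1 && pkg-config --exists "$1"; then
        echo "$1 found ($(pkg-config --modversion "$1"))"
    else
        echo "$1 not found"
    fi
}
check_mod libxslt
check_mod libxml-2.0
```

On a machine with libxslt-devel and libxml2-devel installed, both lines report the system versions that replace the bundled 2.9.4/1.1.29 copies.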
commit 021ded1dacee519cf4c29720fd9ac9d1ed33bdd3
Author: Rex Dieter <rdieter(a)gmail.com>
Date: Mon Mar 25 12:59:42 2019 -0500
5.12.2
diff --git a/.gitignore b/.gitignore
index 9b1512e..ea7e16c 100644
--- a/.gitignore
+++ b/.gitignore
@@ -2,3 +2,4 @@
/qtwebengine-everywhere-src-5.11.2-clean.tar.xz
/qtwebengine-everywhere-src-5.11.3-clean.tar.xz
/qtwebengine-everywhere-src-5.12.1-clean.tar.xz
+/qtwebengine-everywhere-src-5.12.2-clean.tar.xz
diff --git a/qt5-qtwebengine.spec b/qt5-qtwebengine.spec
index 8503dd1..9f4d304 100644
--- a/qt5-qtwebengine.spec
+++ b/qt5-qtwebengine.spec
@@ -46,7 +46,7 @@
Summary: Qt5 - QtWebEngine components
Name: qt5-qtwebengine
-Version: 5.12.1
+Version: 5.12.2
Release: 1%{?dist}
# See LICENSE.GPL LICENSE.LGPL LGPL_EXCEPTION.txt, for details
@@ -55,8 +55,8 @@ Release: 1%{?dist}
License: (LGPLv2 with exceptions or GPLv3 with exceptions) and BSD and LGPLv2+ and ASL 2.0 and IJG and MIT and GPLv2+ and ISC and OpenSSL and (MPLv1.1 or GPLv2 or LGPLv2)
URL: http://www.qt.io
# cleaned tarball with patent-encumbered codecs removed from the bundled FFmpeg
-# wget http://download.qt.io/official_releases/qt/5.11/5.11.1/submodules/qtweben...
-# ./clean_qtwebengine.sh 5.11.1
+# wget http://download.qt.io/official_releases/qt/5.12/5.12.2/submodules/qtweben...
+# ./clean_qtwebengine.sh 5.12.2
Source0: qtwebengine-everywhere-src-%{version}-clean.tar.xz
# cleanup scripts used above
Source1: clean_qtwebengine.sh
@@ -567,6 +567,9 @@ done
%changelog
+* Mon Mar 25 2019 Rex Dieter <rdieter(a)fedoraproject.org> - 5.12.2-1
+- 5.12.2
+
* Sun Feb 24 2019 Rex Dieter <rdieter(a)fedoraproject.org> - 5.12.1-1
- 5.12.1
- enable kerberos support
diff --git a/sources b/sources
index 135d182..5cf1218 100644
--- a/sources
+++ b/sources
@@ -1 +1 @@
-SHA512 (qtwebengine-everywhere-src-5.12.1-clean.tar.xz) = 779d63b93849a6a5b8ecea1c1480ce80c01cc678929947ba64ea5003f9de51e76b49f06d9f0dee89afb28a7713f11d2a7412b55acb3203c548ca8ebf564b30cb
+SHA512 (qtwebengine-everywhere-src-5.12.2-clean.tar.xz) = 96afcc0ea36d4d06da1b64bd67cbc52394653d8c4700f7c166a5a8f2159517d0f9036b90949f97182c4e55603405576c87d200a512ae80931ff0ed502e72db6c
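The `sources` update above records the new tarball's SHA512. A self-contained sketch of how such an entry can be verified locally; the filenames here are demo stand-ins, not the real tarball:

```shell
# Build a demo tarball stand-in and a matching sources-style entry,
# then verify the recorded hash the way a dist-git checkout would.
printf 'demo' > demo-src.tar.xz
hash=$(sha512sum demo-src.tar.xz | awk '{print $1}')
printf 'SHA512 (demo-src.tar.xz) = %s\n' "$hash" > sources.demo
# Extract the recorded hash and recompute it from the file.
expected=$(sed -n 's/^SHA512 (demo-src\.tar\.xz) = //p' sources.demo)
actual=$(sha512sum demo-src.tar.xz | awk '{print $1}')
[ "$expected" = "$actual" ] && echo OK
```

In practice the entry is written by tooling (e.g. `fedpkg new-sources`) rather than by hand, but the stored format is exactly this `SHA512 (<name>) = <hash>` line.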
commit 9086983e50231ee512c8bdff466b7180e997353d
Author: Rex Dieter <rdieter(a)gmail.com>
Date: Sun Feb 24 22:54:05 2019 -0600
enable kerberos support
and fix build (remove reference to old python2.patch)
diff --git a/qt5-qtwebengine.spec b/qt5-qtwebengine.spec
index 0018e2f..8503dd1 100644
--- a/qt5-qtwebengine.spec
+++ b/qt5-qtwebengine.spec
@@ -109,6 +109,7 @@ BuildRequires: gcc-c++
BuildRequires: libstdc++-static
BuildRequires: git-core
BuildRequires: gperf
+BuildRequires: krb5-devel
BuildRequires: libicu-devel
BuildRequires: libjpeg-devel
BuildRequires: re2-devel
@@ -343,7 +344,6 @@ BuildArch: noarch
## upstream patches
-%patch7 -p1 -b .python2
%patch10 -p1 -b .openmax-dl-neon
## NEEDSWORK
#patch21 -p1 -b .gn-bootstrap-verbose
@@ -403,7 +403,9 @@ export NINJA_PATH=%{__ninja}
%{qmake_qt5} \
CONFIG+="%{debug_config}" \
- QMAKE_EXTRA_ARGS+="-system-webengine-icu" .
+ QMAKE_EXTRA_ARGS+="-system-webengine-icu" \
+ QMAKE_EXTRA_ARGS+="-webengine-kerberos" \
+ .
# avoid %%make_build for now, the -O flag buffers output from intermediate build steps done via ninja
make %{?_smp_mflags}
@@ -565,8 +567,9 @@ done
%changelog
-* Wed Feb 13 2019 Rex Dieter <rdieter(a)fedoraproject.org> - 5.12.1-1
+* Sun Feb 24 2019 Rex Dieter <rdieter(a)fedoraproject.org> - 5.12.1-1
- 5.12.1
+- enable kerberos support
* Tue Feb 05 2019 Björn Esser <besser82(a)fedoraproject.org> - 5.11.3-5
- rebuilt (libvpx)
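The kerberos commit above splits the qmake call so each `QMAKE_EXTRA_ARGS+=` is its own argument; qmake appends them all to one variable. A hedged sketch with a stub standing in for `%{qmake_qt5}` (the stub name and flag values are illustrative) to make the expanded argv visible:

```shell
# qmake_stub prints one argument per line, mimicking how the shell
# hands the continued lines of the spec's invocation to qmake.
qmake_stub() { printf '%s\n' "$@"; }
args=$(qmake_stub \
    CONFIG+="force_debug_info" \
    QMAKE_EXTRA_ARGS+="-system-webengine-icu" \
    QMAKE_EXTRA_ARGS+="-webengine-kerberos" \
    .)
echo "$args"
```

Both `QMAKE_EXTRA_ARGS+=` assignments survive as separate arguments, which is why the spec can add `-webengine-kerberos` without touching the existing `-system-webengine-icu` line.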
commit 6df43488ca17cd03b6d1a82151a93021b568fa40
Author: Rex Dieter <rdieter(a)gmail.com>
Date: Wed Feb 13 16:34:16 2019 -0600
update linux-pri.patch for python2
...instead of fixing it separately
diff --git a/qt5-qtwebengine.spec b/qt5-qtwebengine.spec
index 1152ff3..0018e2f 100644
--- a/qt5-qtwebengine.spec
+++ b/qt5-qtwebengine.spec
@@ -77,8 +77,6 @@ Patch2: qtwebengine-opensource-src-5.12.1-fix-extractcflag.patch
# disable NEON vector instructions on ARM where the NEON code FTBFS due to
# GCC bug https://bugzilla.redhat.com/show_bug.cgi?id=1282495
Patch3: qtwebengine-opensource-src-5.9.0-no-neon.patch
-# python -> python2
-Patch7: qtwebengine-everywhere-src-5.12.1-python2.patch
# remove Android dependencies from openmax_dl ARM NEON detection (detect.c)
Patch10: qtwebengine-opensource-src-5.9.0-openmax-dl-neon.patch
# Force verbose output from the GN bootstrap process
diff --git a/qtwebengine-everywhere-src-5.10.0-linux-pri.patch b/qtwebengine-everywhere-src-5.10.0-linux-pri.patch
index 162f63e..4bcd376 100644
--- a/qtwebengine-everywhere-src-5.10.0-linux-pri.patch
+++ b/qtwebengine-everywhere-src-5.10.0-linux-pri.patch
@@ -19,5 +19,5 @@ diff -ur qtwebengine-everywhere-src-5.10.0/src/core/config/linux.pri qtwebengine
+CHROMIUM_SRC_DIR = "$$QTWEBENGINE_ROOT/$$getChromiumSrcDir()"
+R_G_F_PY = "$$CHROMIUM_SRC_DIR/build/linux/unbundle/replace_gn_files.py"
+R_G_F_PY_ARGS = "--system-libraries yasm"
-+log("Running python $$R_G_F_PY $$R_G_F_PY_ARGS$${EOL}")
-+!system("python $$R_G_F_PY $$R_G_F_PY_ARGS"): error("-- unbundling failed")
++log("Running python2 $$R_G_F_PY $$R_G_F_PY_ARGS$${EOL}")
++!system("python2 $$R_G_F_PY $$R_G_F_PY_ARGS"): error("-- unbundling failed")
diff --git a/qtwebengine-everywhere-src-5.12.1-python2.patch b/qtwebengine-everywhere-src-5.12.1-python2.patch
deleted file mode 100644
index 259952d..0000000
--- a/qtwebengine-everywhere-src-5.12.1-python2.patch
+++ /dev/null
@@ -1,11 +0,0 @@
-diff -up qtwebengine-everywhere-src-5.12.1/src/core/config/linux.pri.python2 qtwebengine-everywhere-src-5.12.1/src/core/config/linux.pri
---- qtwebengine-everywhere-src-5.12.1/src/core/config/linux.pri.python2 2019-02-01 09:30:19.194657298 -0600
-+++ qtwebengine-everywhere-src-5.12.1/src/core/config/linux.pri 2019-02-01 10:53:16.756357279 -0600
-@@ -205,5 +205,5 @@ gn_args += linux_link_libpci=true
- CHROMIUM_SRC_DIR = "$$QTWEBENGINE_ROOT/$$getChromiumSrcDir()"
- R_G_F_PY = "$$CHROMIUM_SRC_DIR/build/linux/unbundle/replace_gn_files.py"
- R_G_F_PY_ARGS = "--system-libraries yasm"
--log("Running python $$R_G_F_PY $$R_G_F_PY_ARGS$${EOL}")
--!system("python $$R_G_F_PY $$R_G_F_PY_ARGS"): error("-- unbundling failed")
-+log("Running python2 $$R_G_F_PY $$R_G_F_PY_ARGS$${EOL}")
-+!system("python2 $$R_G_F_PY $$R_G_F_PY_ARGS"): error("-- unbundling failed")
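The patch above pins the interpreter name to `python2` because an unversioned `python` may be absent, or may resolve to Python 3, on newer Fedora releases. A small sketch of the defensive alternative, probing for a usable name instead of assuming one (`sh` is included only so this demo always finds something):

```shell
# Walk a preference list of interpreter names and take the first
# one that exists on PATH.
found=""
for py in python2 python3 sh; do
    if command -v "$py" >/dev/null 2>&1; then
        found="$py"
        break
    fi
done
echo "using: $found"
```

The spec instead edits linux-pri.patch to hardcode `python2`, which is simpler and deterministic inside a build root whose BuildRequires guarantee the interpreter is present.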
commit 6ef5248a05092a0b333504987a4988f1fcaec8d2
Author: Rex Dieter <rdieter(a)gmail.com>
Date: Wed Feb 13 15:32:57 2019 -0600
5.12.1
diff --git a/.gitignore b/.gitignore
index 308d839..9b1512e 100644
--- a/.gitignore
+++ b/.gitignore
@@ -1,19 +1,4 @@
-/qtwebengine-opensource-src-5.6.0-beta-clean.tar.xz
-/qtwebengine-opensource-src-5.6.0-rc-clean.tar.xz
-/qtwebengine-opensource-src-5.6.0-clean.tar.xz
-/qtwebengine-opensource-src-5.6.1-clean.tar.xz
-/qtwebengine-opensource-src-5.6.2-clean.tar.xz
-/qtwebengine-opensource-src-5.6.3-ee719ad313e564d4e6f06d74b313ae179169466f-clean.tar.xz
-/qtwebengine-opensource-src-5.7.0-clean.tar.xz
-/qtwebengine-opensource-src-5.7.1-clean.tar.xz
-/qtwebengine-opensource-src-5.8.0-clean.tar.xz
-/qtwebengine-opensource-src-5.9.0-clean.tar.xz
-/qtwebengine-opensource-src-5.9.1-clean.tar.xz
-/qtwebengine-opensource-src-5.9.2-clean.tar.xz
-/qtwebengine-opensource-src-5.9.3-clean.tar.xz
-/qtwebengine-everywhere-src-5.10.0-clean.tar.xz
-/qtwebengine-everywhere-src-5.10.1-clean.tar.xz
-/qtwebengine-everywhere-src-5.11.0-clean.tar.xz
/qtwebengine-everywhere-src-5.11.1-clean.tar.xz
/qtwebengine-everywhere-src-5.11.2-clean.tar.xz
/qtwebengine-everywhere-src-5.11.3-clean.tar.xz
+/qtwebengine-everywhere-src-5.12.1-clean.tar.xz
diff --git a/clean_ffmpeg.sh b/clean_ffmpeg.sh
index 77f0b57..ab775a4 100755
--- a/clean_ffmpeg.sh
+++ b/clean_ffmpeg.sh
@@ -140,6 +140,7 @@ header_files=" libavutil/x86/asm.h \
libavcodec/mpegpicture.h \
libavcodec/mpegutils.h \
libavcodec/mpegvideo.h \
+ libavcodec/mpegvideodata.h \
libavcodec/mpegvideodsp.h \
libavcodec/mpegvideoencdsp.h \
libavcodec/old_codec_ids.h \
diff --git a/qt5-qtwebengine.spec b/qt5-qtwebengine.spec
index 6232444..1152ff3 100644
--- a/qt5-qtwebengine.spec
+++ b/qt5-qtwebengine.spec
@@ -1,17 +1,13 @@
-
%global qt_module qtwebengine
%global _hardened_build 1
# define to build docs, need to undef this for bootstrapping
# where qt5-qttools (qt5-doctools) builds are not yet available
-# disable on Rawhide for now
-%if 0%{?fedora} < 29
%global docs 1
-%endif
-%if 0%{?fedora} > 27
-# need libvpx >= 1.7.0 (need commit 297dfd869609d7c3c5cd5faa3ebc7b43a394434e)
+%if 0%{?fedora} > 29
+# need libvpx >= 1.8.0 (need commit 297dfd869609d7c3c5cd5faa3ebc7b43a394434e)
%global use_system_libvpx 1
%endif
# need libwebp >= 0.6.0
@@ -19,18 +15,18 @@
# NEON support on ARM (detected at runtime) - disable this if you are hitting
# FTBFS due to e.g. GCC bug https://bugzilla.redhat.com/show_bug.cgi?id=1282495
-%global arm_neon 1
+#global arm_neon 1
# the QMake CONFIG flags to force debugging information to be produced in
# release builds, and for all parts of the code
-#ifarch %{arm}
-%if 1
+%ifarch %{arm} aarch64
# the ARM builder runs out of memory during linking with the full setting below,
# so omit debugging information for the parts upstream deems it dispensable for
# (webcore, v8base)
-%global debug_config force_debug_info
+%global debug_config %{nil}
%else
-%global debug_config webcore_debug v8base_debug force_debug_info
+%global debug_config force_debug_info
+# webcore_debug v8base_debug
%endif
#global prerelease rc
@@ -50,8 +46,8 @@
Summary: Qt5 - QtWebEngine components
Name: qt5-qtwebengine
-Version: 5.11.3
-Release: 5%{?dist}
+Version: 5.12.1
+Release: 1%{?dist}
# See LICENSE.GPL LICENSE.LGPL LGPL_EXCEPTION.txt, for details
# See also http://qt-project.org/doc/qt-5.0/qtdoc/licensing.html
@@ -77,47 +73,16 @@ Patch0: qtwebengine-everywhere-src-5.10.0-linux-pri.patch
Patch1: qtwebengine-everywhere-src-5.11.0-no-icudtl-dat.patch
# fix extractCFlag to also look in QMAKE_CFLAGS_RELEASE, needed to detect the
# ARM flags with our %%qmake_qt5 macro, including for the next patch
-Patch2: qtwebengine-opensource-src-5.9.0-fix-extractcflag.patch
+Patch2: qtwebengine-opensource-src-5.12.1-fix-extractcflag.patch
# disable NEON vector instructions on ARM where the NEON code FTBFS due to
# GCC bug https://bugzilla.redhat.com/show_bug.cgi?id=1282495
Patch3: qtwebengine-opensource-src-5.9.0-no-neon.patch
-# use the system NSPR prtime (based on Debian patch)
-# We already depend on NSPR, so it is useless to copy these functions here.
-# Debian uses this just fine, and I don't see relevant modifications either.
-Patch4: qtwebengine-everywhere-src-5.10.0-system-nspr-prtime.patch
-# use the system ICU UTF functions
-# We already depend on ICU, so it is useless to copy these functions here.
-# I checked the history of that directory, and other than the renames I am
-# undoing, there were no modifications at all. Must be applied after Patch4.
-Patch5: qtwebengine-everywhere-src-5.10.0-system-icu-utf.patch
-# do not require SSE2 on i686
-# cumulative revert of Chromium reviews 187423002, 308003004, 511773002 (parts
-# relevant to QtWebEngine only), 516543004, 1152053004 and 1161853008, Chromium
-# Gerrit review 570351 and V8 Gerrit review 575756, along with some custom fixes
-# and improvements
-# also build V8 shared and twice on i686 (once for x87, once for SSE2)
-Patch6: qtwebengine-everywhere-src-5.10.1-no-sse2.patch
-# fix missing ARM -mfpu setting
-Patch9: qtwebengine-opensource-src-5.9.2-arm-fpu-fix.patch
+# python -> python2
+Patch7: qtwebengine-everywhere-src-5.12.1-python2.patch
# remove Android dependencies from openmax_dl ARM NEON detection (detect.c)
Patch10: qtwebengine-opensource-src-5.9.0-openmax-dl-neon.patch
-# restore NEON runtime detection in Skia: revert upstream review 1952953004,
-# restore the non-Android Linux NEON runtime detection code lost in upstream
-# review 1890483002, also add VFPv4 runtime detection
-Patch11: qtwebengine-everywhere-src-5.10.0-skia-neon.patch
-# webrtc: enable the CPU feature detection for ARM Linux also for Chromium
-Patch12: qtwebengine-opensource-src-5.9.0-webrtc-neon-detect.patch
# Force verbose output from the GN bootstrap process
-Patch21: qtwebengine-everywhere-src-5.10.0-gn-bootstrap-verbose.patch
-# Forward-port missing parts of build fix with system ICU >= 59 from 5.9:
-# https://codereview.qt-project.org/#/c/196922/
-# see QTBUG-60886 and QTBUG-65090
-Patch22: qtwebengine-everywhere-src-5.10.0-icu59.patch
-# Fix FTBFS with GCC 8 on i686: GCC8 has changed the alignof operator to return
-# the minimal alignment required by the target ABI instead of the preferred
-# alignment. This means int64_t is now 4 on i686 (instead of 8). Use __alignof__
-# to get the value we expect (and chromium checks for). Patch by spot.
-Patch23: qtwebengine-everywhere-src-5.10.1-gcc8-alignof.patch
+Patch21: qtwebengine-everywhere-src-5.12.0-gn-bootstrap-verbose.patch
# Fix/workaround FTBFS on aarch64 with newer glibc
Patch24: qtwebengine-everywhere-src-5.11.3-aarch64-new-stat.patch
## Upstream patches:
@@ -141,6 +106,9 @@ BuildRequires: ninja-build
BuildRequires: cmake
BuildRequires: bison
BuildRequires: flex
+BuildRequires: gcc-c++
+# gn links statically (for now)
+BuildRequires: libstdc++-static
BuildRequires: git-core
BuildRequires: gperf
BuildRequires: libicu-devel
@@ -157,6 +125,7 @@ BuildRequires: pkgconfig(fontconfig)
BuildRequires: pkgconfig(freetype2)
BuildRequires: pkgconfig(gl)
BuildRequires: pkgconfig(egl)
+BuildRequires: pkgconfig(jsoncpp)
BuildRequires: pkgconfig(libpng)
BuildRequires: pkgconfig(libudev)
%if 0%{?use_system_libwebp}
@@ -165,6 +134,7 @@ BuildRequires: pkgconfig(libwebp) >= 0.6.0
BuildRequires: pkgconfig(harfbuzz)
BuildRequires: pkgconfig(libdrm)
BuildRequires: pkgconfig(opus)
+BuildRequires: pkgconfig(protobuf)
BuildRequires: pkgconfig(libevent)
BuildRequires: pkgconfig(zlib)
%if 0%{?fedora} && 0%{?fedora} < 30
@@ -191,9 +161,7 @@ BuildRequires: pkgconfig(dbus-1)
BuildRequires: pkgconfig(nss)
BuildRequires: pkgconfig(lcms2)
BuildRequires: perl-interpreter
-# recommended workaround from
-# https://fedoraproject.org/wiki/Changes/Move_usr_bin_python_into_separate_...
-BuildRequires: /usr/bin/python
+BuildRequires: python2-devel
%if 0%{?use_system_libvpx}
BuildRequires: pkgconfig(vpx) >= 1.7.0
%endif
@@ -270,7 +238,7 @@ Provides: bundled(modp_b64)
Provides: bundled(openmax_dl) = 1.0.2
Provides: bundled(ots)
# see src/3rdparty/chromium/third_party/protobuf/CHANGES.txt for the version
-Provides: bundled(protobuf) = 3.0.0-0.1.beta3
+#Provides: bundled(protobuf) = 3.0.0-0.1.beta3
Provides: bundled(qcms) = 4
Provides: bundled(sfntly)
Provides: bundled(skia)
@@ -329,10 +297,10 @@ Provides: bundled(fdlibm) = 5.3
%package devel
Summary: Development files for %{name}
Requires: %{name}%{?_isa} = %{version}-%{release}
-# not arch'd for now, see if can get away with avoiding multilib'ing -- rex
-Requires: %{name}-devtools = %{version}-%{release}
Requires: qt5-qtbase-devel%{?_isa}
Requires: qt5-qtdeclarative-devel%{?_isa}
+# not arch'd for now, see if can get away with avoiding multilib'ing -- rex
+Requires: %{name}-devtools = %{version}-%{release}
%description devel
%{summary}.
@@ -377,25 +345,21 @@ BuildArch: noarch
## upstream patches
-##FIXME/TODO rebase
-#patch4 -p1 -b .system-nspr-prtime
-#patch5 -p1 -b .system-icu-utf
-#patch6 -p1 -b .no-sse2
-%ifarch %{ix86}
-#global sse2 1
-%endif
-%patch9 -p1 -b .arm-fpu-fix
+%patch7 -p1 -b .python2
%patch10 -p1 -b .openmax-dl-neon
-#patch11 -p1 -b .skia-neon
-%patch12 -p1 -b .webrtc-neon-detect
-%patch21 -p1 -b .gn-bootstrap-verbose
-#patch22 -p1 -b .icu59
-%patch23 -p1 -b .gcc8
+## NEEDSWORK
+#patch21 -p1 -b .gn-bootstrap-verbose
%patch24 -p1 -b .aarch64-new-stat
+# the xkbcommon config/feature was renamed in 5.12, so need to adjust QT_CONFIG references
+# when building on older Qt releases
+%if "%{_qt5_version}" < "5.12.0"
+sed -i -e 's|QT_CONFIG(xkbcommon)|QT_CONFIG(xkbcommon_evdev)|g' src/core/web_event_factory.cpp
+%endif
+
# fix // in #include in content/renderer/gpu to avoid debugedit failure
-sed -i -e 's!gpu//!gpu/!g' \
- src/3rdparty/chromium/content/renderer/gpu/compositor_forwarding_message_filter.cc
+#sed -i -e 's!gpu//!gpu/!g' \
+# src/3rdparty/chromium/content/renderer/gpu/compositor_forwarding_message_filter.cc
# and another one in 2 files in WebRTC
sed -i -e 's!audio_processing//!audio_processing/!g' \
src/3rdparty/chromium/third_party/webrtc/modules/audio_processing/utility/ooura_fft.cc \
@@ -424,7 +388,7 @@ sed -i -e 's/symbol_level=1/symbol_level=2/g' src/core/config/common.pri
# generate qtwebengine-3rdparty.qdoc, it is missing from the tarball
pushd src/3rdparty
-python chromium/tools/licenses.py \
+%{__python2} chromium/tools/licenses.py \
--file-template ../../tools/about_credits.tmpl \
--entry-template ../../tools/about_credits_entry.tmpl \
credits >../webengine/doc/src/qtwebengine-3rdparty.qdoc
@@ -433,12 +397,14 @@ popd
# copy the Chromium license so it is installed with the appropriate name
cp -p src/3rdparty/chromium/LICENSE LICENSE.Chromium
+
%build
export STRIP=strip
export NINJAFLAGS="%{__ninja_common_opts}"
export NINJA_PATH=%{__ninja}
-%{qmake_qt5} CONFIG+="%{debug_config}" \
+%{qmake_qt5} \
+ CONFIG+="%{debug_config}" \
QMAKE_EXTRA_ARGS+="-system-webengine-icu" .
# avoid %%make_build for now, the -O flag buffers output from intermediate build steps done via ninja
@@ -601,6 +567,9 @@ done
%changelog
+* Wed Feb 13 2019 Rex Dieter <rdieter(a)fedoraproject.org> - 5.12.1-1
+- 5.12.1
+
* Tue Feb 05 2019 Björn Esser <besser82(a)fedoraproject.org> - 5.11.3-5
- rebuilt (libvpx)
diff --git a/qtwebengine-everywhere-src-5.10.0-gn-bootstrap-verbose.patch b/qtwebengine-everywhere-src-5.10.0-gn-bootstrap-verbose.patch
deleted file mode 100644
index cac2e56..0000000
--- a/qtwebengine-everywhere-src-5.10.0-gn-bootstrap-verbose.patch
+++ /dev/null
@@ -1,12 +0,0 @@
-diff -ur qtwebengine-everywhere-src-5.10.0/src/buildtools/gn.pro qtwebengine-everywhere-src-5.10.0-gn-bootstrap-verbose/src/buildtools/gn.pro
---- qtwebengine-everywhere-src-5.10.0/src/buildtools/gn.pro 2017-11-29 09:42:29.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.0-gn-bootstrap-verbose/src/buildtools/gn.pro 2017-12-25 18:51:46.953799125 +0100
-@@ -25,7 +25,7 @@
- gn_args = $$replace(gn_args, "use_incremental_linking=true ", "")
- }
-
-- gn_configure = $$system_quote($$gn_bootstrap) --shadow --gn-gen-args=$$gn_args $$ninja_path
-+ gn_configure = $$system_quote($$gn_bootstrap) --verbose --shadow --gn-gen-args=$$gn_args $$ninja_path
- !system("cd $$system_quote($$system_path($$dirname(out))) && $$pythonPathForSystem() $$gn_configure") {
- error("GN build error!")
- }
diff --git a/qtwebengine-everywhere-src-5.10.0-icu59.patch b/qtwebengine-everywhere-src-5.10.0-icu59.patch
deleted file mode 100644
index 2b031f7..0000000
--- a/qtwebengine-everywhere-src-5.10.0-icu59.patch
+++ /dev/null
@@ -1,545 +0,0 @@
-diff -Nur qtwebengine-everywhere-src-5.10.0/src/3rdparty/chromium/base/BUILD.gn
qtwebengine-everywhere-src-5.10.0-icu59/src/3rdparty/chromium/base/BUILD.gn
---- qtwebengine-everywhere-src-5.10.0/src/3rdparty/chromium/base/BUILD.gn 2017-11-28
14:06:53.000000000 +0100
-+++
qtwebengine-everywhere-src-5.10.0-icu59/src/3rdparty/chromium/base/BUILD.gn 2017-12-26
00:08:24.179696335 +0100
-@@ -1134,6 +1134,10 @@
- ":debugging_flags",
- ]
-
-+ if (!is_win) {
-+ public_deps += [ "//third_party/icu:icuuc" ]
-+ }
-+
- # Needed for <atomic> if using newer C++ library than sysroot, except if
- # building inside the cros_sdk environment - use host_toolchain as a
- # more robust check for this.
-diff -Nur
qtwebengine-everywhere-src-5.10.0/src/3rdparty/chromium/base/i18n/bidi_line_iterator.cc
qtwebengine-everywhere-src-5.10.0-icu59/src/3rdparty/chromium/base/i18n/bidi_line_iterator.cc
----
qtwebengine-everywhere-src-5.10.0/src/3rdparty/chromium/base/i18n/bidi_line_iterator.cc 2017-11-28
14:06:53.000000000 +0100
-+++
qtwebengine-everywhere-src-5.10.0-icu59/src/3rdparty/chromium/base/i18n/bidi_line_iterator.cc 2017-12-25
23:52:46.376221561 +0100
-@@ -44,7 +44,7 @@
- bidi_ = ubidi_openSized(static_cast<int>(text.length()), 0, &error);
- if (U_FAILURE(error))
- return false;
-- ubidi_setPara(bidi_, text.data(), static_cast<int>(text.length()),
-+ ubidi_setPara(bidi_, reinterpret_cast<const UChar*>(text.data()),
static_cast<int>(text.length()),
- GetParagraphLevelForDirection(direction), NULL, &error);
- return (U_SUCCESS(error) == TRUE);
- }
-diff -Nur
qtwebengine-everywhere-src-5.10.0/src/3rdparty/chromium/base/i18n/break_iterator.cc
qtwebengine-everywhere-src-5.10.0-icu59/src/3rdparty/chromium/base/i18n/break_iterator.cc
----
qtwebengine-everywhere-src-5.10.0/src/3rdparty/chromium/base/i18n/break_iterator.cc 2017-11-28
14:06:53.000000000 +0100
-+++
qtwebengine-everywhere-src-5.10.0-icu59/src/3rdparty/chromium/base/i18n/break_iterator.cc 2017-12-25
23:52:46.376221561 +0100
-@@ -59,9 +59,9 @@
- return false;
- }
- if (break_type_ == RULE_BASED) {
-- iter_ = ubrk_openRules(rules_.c_str(),
-+ iter_ = ubrk_openRules(reinterpret_cast<const UChar*>(rules_.c_str()),
- static_cast<int32_t>(rules_.length()),
-- string_.data(),
-+ reinterpret_cast<const UChar*>(string_.data()),
- static_cast<int32_t>(string_.size()),
- &parse_error,
- &status);
-@@ -72,7 +72,7 @@
- } else {
- iter_ = ubrk_open(break_type,
- NULL,
-- string_.data(),
-+ reinterpret_cast<const UChar*>(string_.data()),
- static_cast<int32_t>(string_.size()),
- &status);
- if (U_FAILURE(status)) {
-@@ -128,7 +128,7 @@
- bool BreakIterator::SetText(const base::char16* text, const size_t length) {
- UErrorCode status = U_ZERO_ERROR;
- ubrk_setText(static_cast<UBreakIterator*>(iter_),
-- text, length, &status);
-+ reinterpret_cast<const UChar*>(text), length, &status);
- pos_ = 0; // implicit when ubrk_setText is done
- prev_ = npos;
- if (U_FAILURE(status)) {
-diff -Nur
qtwebengine-everywhere-src-5.10.0/src/3rdparty/chromium/base/i18n/case_conversion.cc
qtwebengine-everywhere-src-5.10.0-icu59/src/3rdparty/chromium/base/i18n/case_conversion.cc
----
qtwebengine-everywhere-src-5.10.0/src/3rdparty/chromium/base/i18n/case_conversion.cc 2017-11-28
14:06:53.000000000 +0100
-+++
qtwebengine-everywhere-src-5.10.0-icu59/src/3rdparty/chromium/base/i18n/case_conversion.cc 2017-12-25
23:52:46.376221561 +0100
-@@ -64,8 +64,8 @@
- // terminator, but will otherwise. So we don't need to save room for that.
- // Don't use WriteInto, which assumes null terminators.
- int32_t new_length = case_mapper(
-- &dest[0], saturated_cast<int32_t>(dest.size()),
-- string.data(), saturated_cast<int32_t>(string.size()),
-+ reinterpret_cast<UChar*>(&dest[0]),
saturated_cast<int32_t>(dest.size()),
-+ reinterpret_cast<const UChar*>(string.data()),
saturated_cast<int32_t>(string.size()),
- &error);
- dest.resize(new_length);
- } while (error == U_BUFFER_OVERFLOW_ERROR);
-diff -Nur
qtwebengine-everywhere-src-5.10.0/src/3rdparty/chromium/base/i18n/icu_string_conversions.cc
qtwebengine-everywhere-src-5.10.0-icu59/src/3rdparty/chromium/base/i18n/icu_string_conversions.cc
----
qtwebengine-everywhere-src-5.10.0/src/3rdparty/chromium/base/i18n/icu_string_conversions.cc 2017-11-28
14:06:53.000000000 +0100
-+++
qtwebengine-everywhere-src-5.10.0-icu59/src/3rdparty/chromium/base/i18n/icu_string_conversions.cc 2017-12-25
23:52:46.376221561 +0100
-@@ -151,7 +151,7 @@
- if (!U_SUCCESS(status))
- return false;
-
-- return ConvertFromUTF16(converter, utf16.c_str(),
-+ return ConvertFromUTF16(converter, reinterpret_cast<const
UChar*>(utf16.c_str()),
- static_cast<int>(utf16.length()), on_error, encoded);
- }
-
-@@ -178,7 +178,7 @@
-
- SetUpErrorHandlerForToUChars(on_error, converter, &status);
- std::unique_ptr<char16[]> buffer(new char16[uchar_max_length]);
-- int actual_size = ucnv_toUChars(converter, buffer.get(),
-+ int actual_size = ucnv_toUChars(converter,
reinterpret_cast<UChar*>(buffer.get()),
- static_cast<int>(uchar_max_length), encoded.data(),
- static_cast<int>(encoded.length()), &status);
- ucnv_close(converter);
-@@ -205,8 +205,8 @@
- string16 normalized_utf16;
- std::unique_ptr<char16[]> buffer(new char16[max_length]);
- int actual_length = unorm_normalize(
-- utf16.c_str(), utf16.length(), UNORM_NFC, 0,
-- buffer.get(), static_cast<int>(max_length), &status);
-+ reinterpret_cast<const UChar*>(utf16.c_str()), utf16.length(), UNORM_NFC,
0,
-+ reinterpret_cast<UChar*>(buffer.get()), static_cast<int>(max_length),
&status);
- if (!U_SUCCESS(status))
- return false;
- normalized_utf16.assign(buffer.get(), actual_length);
-diff -Nur qtwebengine-everywhere-src-5.10.0/src/3rdparty/chromium/base/i18n/rtl.cc
qtwebengine-everywhere-src-5.10.0-icu59/src/3rdparty/chromium/base/i18n/rtl.cc
---- qtwebengine-everywhere-src-5.10.0/src/3rdparty/chromium/base/i18n/rtl.cc 2017-11-28
14:06:53.000000000 +0100
-+++
qtwebengine-everywhere-src-5.10.0-icu59/src/3rdparty/chromium/base/i18n/rtl.cc 2017-12-25
23:54:24.681803775 +0100
-@@ -212,7 +212,7 @@
- }
-
- TextDirection GetFirstStrongCharacterDirection(const string16& text) {
-- const UChar* string = text.c_str();
-+ const UChar* string = reinterpret_cast<const UChar*>(text.c_str());
- size_t length = text.length();
- size_t position = 0;
- while (position < length) {
-@@ -228,7 +228,7 @@
- }
-
- TextDirection GetLastStrongCharacterDirection(const string16& text) {
-- const UChar* string = text.c_str();
-+ const UChar* string = reinterpret_cast<const UChar*>(text.c_str());
- size_t position = text.length();
- while (position > 0) {
- UChar32 character;
-@@ -243,7 +243,7 @@
- }
-
- TextDirection GetStringDirection(const string16& text) {
-- const UChar* string = text.c_str();
-+ const UChar* string = reinterpret_cast<const UChar*>(text.c_str());
- size_t length = text.length();
- size_t position = 0;
-
-@@ -374,7 +374,7 @@
- #endif // !OS_WIN
-
- bool StringContainsStrongRTLChars(const string16& text) {
-- const UChar* string = text.c_str();
-+ const UChar* string = reinterpret_cast<const UChar*>(text.c_str());
- size_t length = text.length();
- size_t position = 0;
- while (position < length) {
-diff -Nur
qtwebengine-everywhere-src-5.10.0/src/3rdparty/chromium/base/i18n/string_search.cc
qtwebengine-everywhere-src-5.10.0-icu59/src/3rdparty/chromium/base/i18n/string_search.cc
----
qtwebengine-everywhere-src-5.10.0/src/3rdparty/chromium/base/i18n/string_search.cc 2017-11-28
14:06:53.000000000 +0100
-+++
qtwebengine-everywhere-src-5.10.0-icu59/src/3rdparty/chromium/base/i18n/string_search.cc 2017-12-25
23:54:45.809499066 +0100
-@@ -20,8 +20,8 @@
- const string16& dummy = find_this_;
-
- UErrorCode status = U_ZERO_ERROR;
-- search_ = usearch_open(find_this_.data(), find_this_.size(),
-- dummy.data(), dummy.size(),
-+ search_ = usearch_open(reinterpret_cast<const UChar*>(find_this_.data()),
find_this_.size(),
-+ reinterpret_cast<const UChar*>(dummy.data()),
dummy.size(),
- uloc_getDefault(),
- NULL, // breakiter
- &status);
-@@ -41,7 +41,7 @@
- bool FixedPatternStringSearchIgnoringCaseAndAccents::Search(
- const string16& in_this, size_t* match_index, size_t* match_length) {
- UErrorCode status = U_ZERO_ERROR;
-- usearch_setText(search_, in_this.data(), in_this.size(), &status);
-+ usearch_setText(search_, reinterpret_cast<const UChar *>(in_this.data()),
in_this.size(), &status);
-
- // Default to basic substring search if usearch fails. According to
- //
http://icu-project.org/apiref/icu4c/usearch_8h.html, usearch_open will fail
-diff -Nur
qtwebengine-everywhere-src-5.10.0/src/3rdparty/chromium/base/i18n/unicodestring.h
qtwebengine-everywhere-src-5.10.0-icu59/src/3rdparty/chromium/base/i18n/unicodestring.h
----
qtwebengine-everywhere-src-5.10.0/src/3rdparty/chromium/base/i18n/unicodestring.h 2017-11-28
14:06:53.000000000 +0100
-+++
qtwebengine-everywhere-src-5.10.0-icu59/src/3rdparty/chromium/base/i18n/unicodestring.h 2017-12-26
01:22:00.605067404 +0100
-@@ -9,16 +9,12 @@
- #include "third_party/icu/source/common/unicode/unistr.h"
- #include "third_party/icu/source/common/unicode/uvernum.h"
-
--#if U_ICU_VERSION_MAJOR_NUM >= 59
--#include "third_party/icu/source/common/unicode/char16ptr.h"
--#endif
--
- namespace base {
- namespace i18n {
-
- inline string16 UnicodeStringToString16(const icu::UnicodeString& unistr) {
- #if U_ICU_VERSION_MAJOR_NUM >= 59
-- return base::string16(icu::toUCharPtr(unistr.getBuffer()),
-+ return base::string16(reinterpret_cast<const char16*>(unistr.getBuffer()),
- static_cast<size_t>(unistr.length()));
- #else
- return base::string16(unistr.getBuffer(),
-diff -Nur
qtwebengine-everywhere-src-5.10.0/src/3rdparty/chromium/components/url_formatter/idn_spoof_checker.cc
qtwebengine-everywhere-src-5.10.0-icu59/src/3rdparty/chromium/components/url_formatter/idn_spoof_checker.cc
----
qtwebengine-everywhere-src-5.10.0/src/3rdparty/chromium/components/url_formatter/idn_spoof_checker.cc 2017-11-28
14:06:53.000000000 +0100
-+++
qtwebengine-everywhere-src-5.10.0-icu59/src/3rdparty/chromium/components/url_formatter/idn_spoof_checker.cc 2017-12-26
00:16:45.791461970 +0100
-@@ -155,14 +155,14 @@
- bool is_tld_ascii) {
- UErrorCode status = U_ZERO_ERROR;
- int32_t result =
-- uspoof_check(checker_, label.data(),
-+ uspoof_check(checker_, (const UChar*)label.data(),
- base::checked_cast<int32_t>(label.size()), NULL, &status);
- // If uspoof_check fails (due to library failure), or if any of the checks
- // fail, treat the IDN as unsafe.
- if (U_FAILURE(status) || (result & USPOOF_ALL_CHECKS))
- return false;
-
-- icu::UnicodeString label_string(FALSE, label.data(),
-+ icu::UnicodeString label_string(FALSE, (const UChar*)label.data(),
- base::checked_cast<int32_t>(label.size()));
-
- // A punycode label with 'xn--' prefix is not subject to the URL
-diff -Nur
qtwebengine-everywhere-src-5.10.0/src/3rdparty/chromium/components/url_formatter/url_formatter.cc
qtwebengine-everywhere-src-5.10.0-icu59/src/3rdparty/chromium/components/url_formatter/url_formatter.cc
----
qtwebengine-everywhere-src-5.10.0/src/3rdparty/chromium/components/url_formatter/url_formatter.cc 2017-11-28
14:06:53.000000000 +0100
-+++
qtwebengine-everywhere-src-5.10.0-icu59/src/3rdparty/chromium/components/url_formatter/url_formatter.cc 2017-12-25
23:58:01.767672910 +0100
-@@ -374,7 +374,7 @@
- // code units, |status| will be U_BUFFER_OVERFLOW_ERROR and we'll try
- // the conversion again, but with a sufficiently large buffer.
- output_length = uidna_labelToUnicode(
--                 uidna, comp, static_cast<int32_t>(comp_len), &(*out)[original_length],
-+                 uidna, (const UChar*)comp, static_cast<int32_t>(comp_len), (UChar*)&(*out)[original_length],
- output_length, &info, &status);
- } while ((status == U_BUFFER_OVERFLOW_ERROR && info.errors == 0));
-
-diff -Nur qtwebengine-everywhere-src-5.10.0/src/3rdparty/chromium/content/child/browser_font_resource_trusted.cc qtwebengine-everywhere-src-5.10.0-icu59/src/3rdparty/chromium/content/child/browser_font_resource_trusted.cc
---- qtwebengine-everywhere-src-5.10.0/src/3rdparty/chromium/content/child/browser_font_resource_trusted.cc 2017-11-28 14:06:53.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.0-icu59/src/3rdparty/chromium/content/child/browser_font_resource_trusted.cc 2017-12-25 23:58:54.555911585 +0100
-@@ -77,7 +77,7 @@
- } else {
- bidi_ = ubidi_open();
- UErrorCode uerror = U_ZERO_ERROR;
-- ubidi_setPara(bidi_, text_.data(), text_.size(), run.rtl, NULL, &uerror);
-+    ubidi_setPara(bidi_, reinterpret_cast<const UChar*>(text_.data()), text_.size(), run.rtl, NULL, &uerror);
- if (U_SUCCESS(uerror))
- num_runs_ = ubidi_countRuns(bidi_, &uerror);
- }
-diff -Nur qtwebengine-everywhere-src-5.10.0/src/3rdparty/chromium/net/third_party/mozilla_security_manager/nsPKCS12Blob.cpp qtwebengine-everywhere-src-5.10.0-icu59/src/3rdparty/chromium/net/third_party/mozilla_security_manager/nsPKCS12Blob.cpp
---- qtwebengine-everywhere-src-5.10.0/src/3rdparty/chromium/net/third_party/mozilla_security_manager/nsPKCS12Blob.cpp 2017-11-28 14:06:53.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.0-icu59/src/3rdparty/chromium/net/third_party/mozilla_security_manager/nsPKCS12Blob.cpp 2017-12-26 00:00:40.801379288 +0100
-@@ -58,7 +58,7 @@
- // For the NSS PKCS#12 library, must convert PRUnichars (shorts) to
- // a buffer of octets. Must handle byte order correctly.
- // TODO: Is there a Mozilla way to do this? In the string lib?
--void unicodeToItem(const PRUnichar *uni, SECItem *item)
-+void unicodeToItem(const base::char16 *uni, SECItem *item)
- {
- int len = 0;
- while (uni[len++] != 0);
-diff -Nur qtwebengine-everywhere-src-5.10.0/src/3rdparty/chromium/ppapi/proxy/pdf_resource.cc qtwebengine-everywhere-src-5.10.0-icu59/src/3rdparty/chromium/ppapi/proxy/pdf_resource.cc
---- qtwebengine-everywhere-src-5.10.0/src/3rdparty/chromium/ppapi/proxy/pdf_resource.cc 2017-11-28 14:06:53.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.0-icu59/src/3rdparty/chromium/ppapi/proxy/pdf_resource.cc 2017-12-26 00:00:40.801379288 +0100
-@@ -58,10 +58,10 @@
- PP_PrivateFindResult** results, int* count) {
- if (locale_.empty())
- locale_ = GetLocale();
-- const base::char16* string =
-- reinterpret_cast<const base::char16*>(input_string);
-- const base::char16* term =
-- reinterpret_cast<const base::char16*>(input_term);
-+ const UChar* string =
-+ reinterpret_cast<const UChar*>(input_string);
-+ const UChar* term =
-+ reinterpret_cast<const UChar*>(input_term);
-
- UErrorCode status = U_ZERO_ERROR;
- UStringSearch* searcher = usearch_open(term, -1, string, -1, locale_.c_str(),
-diff -Nur qtwebengine-everywhere-src-5.10.0/src/3rdparty/chromium/third_party/sfntly/src/cpp/src/sample/chromium/subsetter_impl.cc qtwebengine-everywhere-src-5.10.0-icu59/src/3rdparty/chromium/third_party/sfntly/src/cpp/src/sample/chromium/subsetter_impl.cc
---- qtwebengine-everywhere-src-5.10.0/src/3rdparty/chromium/third_party/sfntly/src/cpp/src/sample/chromium/subsetter_impl.cc 2017-11-28 14:06:53.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.0-icu59/src/3rdparty/chromium/third_party/sfntly/src/cpp/src/sample/chromium/subsetter_impl.cc 2017-12-26 00:02:54.958444442 +0100
-@@ -27,6 +27,7 @@
- #include <unicode/unistr.h>
- #include <unicode/uversion.h>
-
-+#include "base/i18n/unicodestring.h"
- #include "sfntly/table/bitmap/eblc_table.h"
- #include "sfntly/table/bitmap/ebdt_table.h"
- #include "sfntly/table/bitmap/index_sub_table.h"
-diff -Nur qtwebengine-everywhere-src-5.10.0/src/3rdparty/chromium/third_party/WebKit/Source/platform/exported/FilePathConversion.cpp qtwebengine-everywhere-src-5.10.0-icu59/src/3rdparty/chromium/third_party/WebKit/Source/platform/exported/FilePathConversion.cpp
---- qtwebengine-everywhere-src-5.10.0/src/3rdparty/chromium/third_party/WebKit/Source/platform/exported/FilePathConversion.cpp 2017-11-28 14:06:53.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.0-icu59/src/3rdparty/chromium/third_party/WebKit/Source/platform/exported/FilePathConversion.cpp 2017-12-26 00:21:22.768467342 +0100
-@@ -19,7 +19,7 @@
- String str = web_string;
- if (!str.Is8Bit()) {
- return base::FilePath::FromUTF16Unsafe(
-- base::StringPiece16(str.Characters16(), str.length()));
-+ base::StringPiece16((const base::char16*)str.Characters16(), str.length()));
- }
-
- #if defined(OS_POSIX)
-diff -Nur qtwebengine-everywhere-src-5.10.0/src/3rdparty/chromium/third_party/WebKit/Source/platform/exported/URLConversion.cpp qtwebengine-everywhere-src-5.10.0-icu59/src/3rdparty/chromium/third_party/WebKit/Source/platform/exported/URLConversion.cpp
---- qtwebengine-everywhere-src-5.10.0/src/3rdparty/chromium/third_party/WebKit/Source/platform/exported/URLConversion.cpp 2017-11-28 14:06:53.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.0-icu59/src/3rdparty/chromium/third_party/WebKit/Source/platform/exported/URLConversion.cpp 2017-12-26 00:21:37.908248992 +0100
-@@ -23,7 +23,7 @@
- }
-
- // GURL can consume UTF-16 directly.
-- return GURL(base::StringPiece16(str.Characters16(), str.length()));
-+  return GURL(base::StringPiece16((const base::char16*)str.Characters16(), str.length()));
- }
-
- } // namespace blink
-diff -Nur qtwebengine-everywhere-src-5.10.0/src/3rdparty/chromium/third_party/WebKit/Source/platform/exported/WebString.cpp qtwebengine-everywhere-src-5.10.0-icu59/src/3rdparty/chromium/third_party/WebKit/Source/platform/exported/WebString.cpp
---- qtwebengine-everywhere-src-5.10.0/src/3rdparty/chromium/third_party/WebKit/Source/platform/exported/WebString.cpp 2017-11-28 14:06:53.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.0-icu59/src/3rdparty/chromium/third_party/WebKit/Source/platform/exported/WebString.cpp 2017-12-26 00:22:14.401722675 +0100
-@@ -59,7 +59,7 @@
- }
-
- void WebString::Assign(const WebUChar* data, size_t length) {
-- Assign(StringImpl::Create8BitIfPossible(data, length).Get());
-+ Assign(StringImpl::Create8BitIfPossible((const UChar*)data, length).Get());
- }
-
- size_t WebString::length() const {
-@@ -75,7 +75,7 @@
- }
-
- const WebUChar* WebString::Data16() const {
-- return !private_.IsNull() && !Is8Bit() ? private_->Characters16() : 0;
-+  return !private_.IsNull() && !Is8Bit() ? (const WebUChar*)private_->Characters16() : 0;
- }
-
- std::string WebString::Utf8(UTF8ConversionMode mode) const {
-diff -Nur qtwebengine-everywhere-src-5.10.0/src/3rdparty/chromium/third_party/WebKit/Source/platform/LinkHash.cpp qtwebengine-everywhere-src-5.10.0-icu59/src/3rdparty/chromium/third_party/WebKit/Source/platform/LinkHash.cpp
---- qtwebengine-everywhere-src-5.10.0/src/3rdparty/chromium/third_party/WebKit/Source/platform/LinkHash.cpp 2017-11-28 14:06:53.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.0-icu59/src/3rdparty/chromium/third_party/WebKit/Source/platform/LinkHash.cpp 2017-12-26 00:20:18.452394923 +0100
-@@ -51,7 +51,7 @@
- relative_utf8.length(), 0, buffer, &parsed);
- }
- return url::ResolveRelative(base_utf8.Data(), base_utf8.length(),
-- base.GetParsed(), relative.Characters16(),
-+                              base.GetParsed(), (const base::char16*)relative.Characters16(),
- relative.length(), 0, buffer, &parsed);
- }
-
-diff -Nur qtwebengine-everywhere-src-5.10.0/src/3rdparty/chromium/third_party/WebKit/Source/platform/weborigin/KURL.cpp qtwebengine-everywhere-src-5.10.0-icu59/src/3rdparty/chromium/third_party/WebKit/Source/platform/weborigin/KURL.cpp
---- qtwebengine-everywhere-src-5.10.0/src/3rdparty/chromium/third_party/WebKit/Source/platform/weborigin/KURL.cpp 2017-11-28 14:06:53.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.0-icu59/src/3rdparty/chromium/third_party/WebKit/Source/platform/weborigin/KURL.cpp 2017-12-26 00:25:55.112547976 +0100
-@@ -104,7 +104,7 @@
- int input_length,
- url::CanonOutput* output) override {
- CString encoded = encoding_->Encode(
-- String(input, input_length), WTF::kURLEncodedEntitiesForUnencodables);
-+        String((const UChar*)input, input_length), WTF::kURLEncodedEntitiesForUnencodables);
- output->Append(encoded.data(), static_cast<int>(encoded.length()));
- }
-
-@@ -341,7 +341,7 @@
- if (string_.Is8Bit())
- url::ExtractFileName(AsURLChar8Subtle(string_), path, &file);
- else
-- url::ExtractFileName(string_.Characters16(), path, &file);
-+ url::ExtractFileName((const base::char16*)string_.Characters16(), path, &file);
-
-  // Bug: https://bugs.webkit.org/show_bug.cgi?id=21015 this function returns
- // a null string when the path is empty, which we duplicate here.
-@@ -371,7 +371,7 @@
- DCHECK(!string_.IsNull());
- int port = string_.Is8Bit()
- ? url::ParsePort(AsURLChar8Subtle(string_), parsed_.port)
-- : url::ParsePort(string_.Characters16(), parsed_.port);
-+                 : url::ParsePort((const base::char16*)string_.Characters16(), parsed_.port);
- DCHECK_NE(port, url::PORT_UNSPECIFIED); // Checked port.len <= 0 before.
-
- if (port == url::PORT_INVALID ||
-@@ -666,7 +666,7 @@
- return false;
- return string_.Is8Bit()
- ? url::IsStandard(AsURLChar8Subtle(string_), parsed_.scheme)
-- : url::IsStandard(string_.Characters16(), parsed_.scheme);
-+             : url::IsStandard((const base::char16*)string_.Characters16(), parsed_.scheme);
- }
-
- bool EqualIgnoringFragmentIdentifier(const KURL& a, const KURL& b) {
-@@ -719,7 +719,7 @@
- if (string_.Is8Bit())
- url::ExtractFileName(AsURLChar8Subtle(string_), parsed_.path, &filename);
- else
-- url::ExtractFileName(string_.Characters16(), parsed_.path, &filename);
-+    url::ExtractFileName((const base::char16*)string_.Characters16(), parsed_.path, &filename);
- return filename.begin;
- }
-
-@@ -732,7 +732,7 @@
- if (url.Is8Bit())
- return url::FindAndCompareScheme(AsURLChar8Subtle(url), url.length(),
- protocol, 0);
-- return url::FindAndCompareScheme(url.Characters16(), url.length(), protocol,
-+  return url::FindAndCompareScheme((const base::char16*)url.Characters16(), url.length(), protocol,
- 0);
- }
-
-@@ -765,7 +765,7 @@
- charset_converter, &output, &parsed_);
- } else {
- is_valid_ = url::ResolveRelative(base_utf8.Data(), base_utf8.length(),
-- base.parsed_, relative.Characters16(),
-+                                     base.parsed_, (const base::char16*)relative.Characters16(),
- clampTo<int>(relative.length()),
- charset_converter, &output, &parsed_);
- }
-diff -Nur qtwebengine-everywhere-src-5.10.0/src/3rdparty/chromium/third_party/WebKit/Source/platform/weborigin/SecurityOrigin.cpp qtwebengine-everywhere-src-5.10.0-icu59/src/3rdparty/chromium/third_party/WebKit/Source/platform/weborigin/SecurityOrigin.cpp
---- qtwebengine-everywhere-src-5.10.0/src/3rdparty/chromium/third_party/WebKit/Source/platform/weborigin/SecurityOrigin.cpp 2017-11-28 14:06:53.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.0-icu59/src/3rdparty/chromium/third_party/WebKit/Source/platform/weborigin/SecurityOrigin.cpp 2017-12-26 00:27:48.865912016 +0100
-@@ -638,7 +638,7 @@
- url::CanonicalizeHost(utf8.Data(), url::Component(0, utf8.length()),
- &canon_output, &out_host);
- } else {
-- *success = url::CanonicalizeHost(host.Characters16(),
-+    *success = url::CanonicalizeHost(reinterpret_cast<const base::char16 *>(host.Characters16()),
- url::Component(0, host.length()),
- &canon_output, &out_host);
- }
-diff -Nur qtwebengine-everywhere-src-5.10.0/src/3rdparty/chromium/third_party/WebKit/Source/platform/wtf/text/AtomicString.h qtwebengine-everywhere-src-5.10.0-icu59/src/3rdparty/chromium/third_party/WebKit/Source/platform/wtf/text/AtomicString.h
---- qtwebengine-everywhere-src-5.10.0/src/3rdparty/chromium/third_party/WebKit/Source/platform/wtf/text/AtomicString.h 2017-11-28 14:06:53.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.0-icu59/src/3rdparty/chromium/third_party/WebKit/Source/platform/wtf/text/AtomicString.h 2017-12-26 00:02:31.246786418 +0100
-@@ -66,9 +66,10 @@
- AtomicString(const LChar* chars, unsigned length);
- AtomicString(const UChar* chars, unsigned length);
- AtomicString(const UChar* chars);
-+#if U_ICU_VERSION_MAJOR_NUM < 59
- AtomicString(const char16_t* chars)
- : AtomicString(reinterpret_cast<const UChar*>(chars)) {}
--
-+#endif
- template <size_t inlineCapacity>
- explicit AtomicString(const Vector<UChar, inlineCapacity>& vector)
- : AtomicString(vector.data(), vector.size()) {}
-diff -Nur qtwebengine-everywhere-src-5.10.0/src/3rdparty/chromium/third_party/WebKit/Source/platform/wtf/text/StringView.h qtwebengine-everywhere-src-5.10.0-icu59/src/3rdparty/chromium/third_party/WebKit/Source/platform/wtf/text/StringView.h
---- qtwebengine-everywhere-src-5.10.0/src/3rdparty/chromium/third_party/WebKit/Source/platform/wtf/text/StringView.h 2017-11-28 14:06:53.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.0-icu59/src/3rdparty/chromium/third_party/WebKit/Source/platform/wtf/text/StringView.h 2017-12-26 00:02:44.550594548 +0100
-@@ -83,8 +83,10 @@
- characters16_(chars),
- length_(length) {}
- StringView(const UChar* chars);
-+#if U_ICU_VERSION_MAJOR_NUM < 59
- StringView(const char16_t* chars)
- : StringView(reinterpret_cast<const UChar*>(chars)) {}
-+#endif
-
- #if DCHECK_IS_ON()
- ~StringView();
-diff -Nur qtwebengine-everywhere-src-5.10.0/src/3rdparty/chromium/third_party/WebKit/Source/platform/wtf/text/WTFString.h qtwebengine-everywhere-src-5.10.0-icu59/src/3rdparty/chromium/third_party/WebKit/Source/platform/wtf/text/WTFString.h
---- qtwebengine-everywhere-src-5.10.0/src/3rdparty/chromium/third_party/WebKit/Source/platform/wtf/text/WTFString.h 2017-11-28 14:06:53.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.0-icu59/src/3rdparty/chromium/third_party/WebKit/Source/platform/wtf/text/WTFString.h 2017-12-26 00:33:00.427431253 +0100
-@@ -36,6 +36,8 @@
- #include <algorithm>
- #include <iosfwd>
-
-+#include "third_party/icu/source/common/unicode/uvernum.h"
-+
- #ifdef __OBJC__
- #include <objc/objc.h>
- #endif
-@@ -82,8 +84,13 @@
-
- // Construct a string with UTF-16 data, from a null-terminated source.
- String(const UChar*);
-+#if U_ICU_VERSION_MAJOR_NUM < 59
- String(const char16_t* chars)
- : String(reinterpret_cast<const UChar*>(chars)) {}
-+#else
-+ String(const uint16_t* chars)
-+ : String(reinterpret_cast<const UChar*>(chars)) {}
-+#endif
-
- // Construct a string with latin1 data.
- String(const LChar* characters, unsigned length);
-diff -Nur qtwebengine-everywhere-src-5.10.0/src/3rdparty/chromium/ui/base/accelerators/accelerator.cc qtwebengine-everywhere-src-5.10.0-icu59/src/3rdparty/chromium/ui/base/accelerators/accelerator.cc
---- qtwebengine-everywhere-src-5.10.0/src/3rdparty/chromium/ui/base/accelerators/accelerator.cc 2017-11-28 14:06:53.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.0-icu59/src/3rdparty/chromium/ui/base/accelerators/accelerator.cc 2017-12-26 00:02:54.958444442 +0100
-@@ -225,7 +225,7 @@
- key = LOWORD(::MapVirtualKeyW(key_code_, MAPVK_VK_TO_CHAR));
- shortcut += key;
- #elif defined(USE_AURA) || defined(OS_MACOSX)
-- const uint16_t c = DomCodeToUsLayoutCharacter(
-+ const base::char16 c = DomCodeToUsLayoutCharacter(
- UsLayoutKeyboardCodeToDomCode(key_code_), false);
- if (c != 0)
- shortcut +=
-diff -Nur qtwebengine-everywhere-src-5.10.0/src/3rdparty/chromium/ui/base/l10n/l10n_util.cc qtwebengine-everywhere-src-5.10.0-icu59/src/3rdparty/chromium/ui/base/l10n/l10n_util.cc
---- qtwebengine-everywhere-src-5.10.0/src/3rdparty/chromium/ui/base/l10n/l10n_util.cc 2017-11-28 14:06:53.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.0-icu59/src/3rdparty/chromium/ui/base/l10n/l10n_util.cc 2017-12-26 00:02:54.959444427 +0100
-@@ -581,7 +581,7 @@
-
- int actual_size = uloc_getDisplayName(
- locale_code.c_str(), display_locale.c_str(),
-- base::WriteInto(&display_name, kBufferSize), kBufferSize - 1, &error);
-+      (UChar*)base::WriteInto(&display_name, kBufferSize), kBufferSize - 1, &error);
- DCHECK(U_SUCCESS(error));
- display_name.resize(actual_size);
- }
-diff -Nur qtwebengine-everywhere-src-5.10.0/src/3rdparty/chromium/ui/base/l10n/time_format.cc qtwebengine-everywhere-src-5.10.0-icu59/src/3rdparty/chromium/ui/base/l10n/time_format.cc
---- qtwebengine-everywhere-src-5.10.0/src/3rdparty/chromium/ui/base/l10n/time_format.cc 2017-11-28 14:06:53.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.0-icu59/src/3rdparty/chromium/ui/base/l10n/time_format.cc 2017-12-26 00:02:54.959444427 +0100
-@@ -141,7 +141,7 @@
- DCHECK_GT(capacity, 1);
- base::string16 result;
- UErrorCode error = U_ZERO_ERROR;
--  time_string.extract(static_cast<UChar*>(base::WriteInto(&result, capacity)),
-+  time_string.extract(reinterpret_cast<UChar*>(base::WriteInto(&result, capacity)),
- capacity, error);
- DCHECK(U_SUCCESS(error));
- return result;
-diff -Nur qtwebengine-everywhere-src-5.10.0/src/3rdparty/chromium/ui/base/x/selection_utils.cc qtwebengine-everywhere-src-5.10.0-icu59/src/3rdparty/chromium/ui/base/x/selection_utils.cc
---- qtwebengine-everywhere-src-5.10.0/src/3rdparty/chromium/ui/base/x/selection_utils.cc 2017-11-28 14:06:53.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.0-icu59/src/3rdparty/chromium/ui/base/x/selection_utils.cc 2017-12-26 00:02:54.959444427 +0100
-@@ -207,8 +207,8 @@
- // If the data starts with 0xFEFF, i.e., Byte Order Mark, assume it is
- // UTF-16, otherwise assume UTF-8.
- if (size >= 2 &&
-- reinterpret_cast<const uint16_t*>(data)[0] == 0xFEFF) {
-- markup.assign(reinterpret_cast<const uint16_t*>(data) + 1,
-+ reinterpret_cast<const base::char16*>(data)[0] == 0xFEFF) {
-+ markup.assign(reinterpret_cast<const base::char16*>(data) + 1,
- (size / 2) - 1);
- } else {
- base::UTF8ToUTF16(reinterpret_cast<const char*>(data), size, &markup);
-diff -Nur qtwebengine-everywhere-src-5.10.0/src/3rdparty/chromium/url/url_canon_icu.cc qtwebengine-everywhere-src-5.10.0-icu59/src/3rdparty/chromium/url/url_canon_icu.cc
---- qtwebengine-everywhere-src-5.10.0/src/3rdparty/chromium/url/url_canon_icu.cc 2017-11-28 14:06:53.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.0-icu59/src/3rdparty/chromium/url/url_canon_icu.cc 2017-12-26 00:02:54.959444427 +0100
-@@ -133,7 +133,7 @@
- UErrorCode err = U_ZERO_ERROR;
- char* dest = &output->data()[begin_offset];
- int required_capacity = ucnv_fromUChars(converter_, dest, dest_capacity,
-- input, input_len, &err);
-+ (const UChar*)input, input_len, &err);
- if (err != U_BUFFER_OVERFLOW_ERROR) {
- output->set_length(begin_offset + required_capacity);
- return;
-@@ -170,7 +170,7 @@
- while (true) {
- UErrorCode err = U_ZERO_ERROR;
- UIDNAInfo info = UIDNA_INFO_INITIALIZER;
-- int output_length = uidna_nameToASCII(uidna, src, src_len, output->data(),
-+    int output_length = uidna_nameToASCII(uidna, (const UChar*)src, src_len, (UChar*)output->data(),
- output->capacity(), &info, &err);
- if (U_SUCCESS(err) && info.errors == 0) {
- output->set_length(output_length);
-diff -Nur qtwebengine-everywhere-src-5.10.0/src/3rdparty/chromium/v8/src/runtime/runtime-intl.cc qtwebengine-everywhere-src-5.10.0-icu59/src/3rdparty/chromium/v8/src/runtime/runtime-intl.cc
---- qtwebengine-everywhere-src-5.10.0/src/3rdparty/chromium/v8/src/runtime/runtime-intl.cc 2017-11-28 14:06:53.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.0-icu59/src/3rdparty/chromium/v8/src/runtime/runtime-intl.cc 2017-12-26 00:38:34.568625756 +0100
-@@ -43,6 +43,7 @@
- #include "unicode/ucurr.h"
- #include "unicode/uloc.h"
- #include "unicode/unistr.h"
-+#include "unicode/ustring.h"
- #include "unicode/unum.h"
- #include "unicode/uvernum.h"
- #include "unicode/uversion.h"
diff --git a/qtwebengine-everywhere-src-5.10.0-skia-neon.patch b/qtwebengine-everywhere-src-5.10.0-skia-neon.patch
deleted file mode 100644
index 9424e9f..0000000
--- a/qtwebengine-everywhere-src-5.10.0-skia-neon.patch
+++ /dev/null
@@ -1,390 +0,0 @@
-diff -Nur qtwebengine-everywhere-src-5.10.0/src/3rdparty/chromium/skia/BUILD.gn qtwebengine-everywhere-src-5.10.0-skia-neon/src/3rdparty/chromium/skia/BUILD.gn
---- qtwebengine-everywhere-src-5.10.0/src/3rdparty/chromium/skia/BUILD.gn 2017-11-28 14:06:53.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.0-skia-neon/src/3rdparty/chromium/skia/BUILD.gn 2017-12-25 18:31:12.288797893 +0100
-@@ -508,6 +508,24 @@
- }
-
- # Separated out so it can be compiled with different flags for SSE.
-+if (current_cpu == "arm" && (arm_use_neon || arm_optionally_use_neon)) {
-+ source_set("skia_opts_neon") {
-+ sources = skia_opts.neon_sources
-+ # Root build config sets -mfpu=$arm_fpu, which we expect to be neon
-+ # when running this.
-+ if (!arm_use_neon) {
-+ configs -= [ "//build/config/compiler:compiler_arm_fpu" ]
-+ cflags = [ "-mfpu=neon" ]
-+ }
-+ visibility = [ ":skia_opts" ]
-+ configs -= [ "//build/config/compiler:chromium_code" ]
-+ configs += [
-+ ":skia_config",
-+ ":skia_library_config",
-+ "//build/config/compiler:no_chromium_code",
-+ ]
-+ }
-+}
- if (current_cpu == "arm64") {
- source_set("skia_opts_crc32") {
- sources = skia_opts.crc32_sources
-@@ -644,14 +662,7 @@
- if (arm_version >= 7) {
- sources = skia_opts.armv7_sources
- if (arm_use_neon || arm_optionally_use_neon) {
-- sources += skia_opts.neon_sources
--
-- # Root build config sets -mfpu=$arm_fpu, which we expect to be neon
-- # when running this.
-- if (!arm_use_neon) {
-- configs -= [ "//build/config/compiler:compiler_arm_fpu" ]
-- cflags += [ "-mfpu=neon" ]
-- }
-+ deps += [ ":skia_opts_neon" ]
- }
- } else {
- sources = skia_opts.none_sources
-diff -Nur qtwebengine-everywhere-src-5.10.0/src/3rdparty/chromium/third_party/skia/gn/opts.gni qtwebengine-everywhere-src-5.10.0-skia-neon/src/3rdparty/chromium/third_party/skia/gn/opts.gni
---- qtwebengine-everywhere-src-5.10.0/src/3rdparty/chromium/third_party/skia/gn/opts.gni 2017-11-28 14:06:53.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.0-skia-neon/src/3rdparty/chromium/third_party/skia/gn/opts.gni 2017-12-25 18:29:15.083480322 +0100
-@@ -23,6 +23,7 @@
- "$_src/opts/SkBitmapProcState_matrixProcs_neon.cpp",
- "$_src/opts/SkBlitMask_opts_arm_neon.cpp",
- "$_src/opts/SkBlitRow_opts_arm_neon.cpp",
-+ "$_src/opts/SkOpts_neon.cpp",
- ]
-
- arm64 = [
-@@ -33,6 +34,7 @@
- "$_src/opts/SkBlitMask_opts_arm_neon.cpp",
- "$_src/opts/SkBlitRow_opts_arm.cpp",
- "$_src/opts/SkBlitRow_opts_arm_neon.cpp",
-+ "$_src/opts/SkOpts_neon.cpp",
- ]
-
- crc32 = [ "$_src/opts/SkOpts_crc32.cpp" ]
-diff -Nur qtwebengine-everywhere-src-5.10.0/src/3rdparty/chromium/third_party/skia/src/core/SkBitmapProcState.cpp qtwebengine-everywhere-src-5.10.0-skia-neon/src/3rdparty/chromium/third_party/skia/src/core/SkBitmapProcState.cpp
---- qtwebengine-everywhere-src-5.10.0/src/3rdparty/chromium/third_party/skia/src/core/SkBitmapProcState.cpp 2017-11-28 14:06:53.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.0-skia-neon/src/3rdparty/chromium/third_party/skia/src/core/SkBitmapProcState.cpp 2017-12-25 18:29:22.449374588 +0100
-@@ -17,7 +17,7 @@
- #include "SkImageEncoder.h"
- #include "SkResourceCache.h"
-
--#if defined(SK_ARM_HAS_NEON)
-+#if !SK_ARM_NEON_IS_NONE
- // These are defined in src/opts/SkBitmapProcState_arm_neon.cpp
- extern const SkBitmapProcState::SampleProc32 gSkBitmapProcStateSample32_neon[];
- #endif
-@@ -212,7 +212,7 @@
- index |= 4;
- }
-
--#if !defined(SK_ARM_HAS_NEON)
-+#if !SK_ARM_NEON_IS_ALWAYS
- static const SampleProc32 gSkBitmapProcStateSample32[] = {
- S32_opaque_D32_nofilter_DXDY,
- S32_alpha_D32_nofilter_DXDY,
-diff -Nur qtwebengine-everywhere-src-5.10.0/src/3rdparty/chromium/third_party/skia/src/core/SkBitmapProcState_matrixProcs.cpp qtwebengine-everywhere-src-5.10.0-skia-neon/src/3rdparty/chromium/third_party/skia/src/core/SkBitmapProcState_matrixProcs.cpp
---- qtwebengine-everywhere-src-5.10.0/src/3rdparty/chromium/third_party/skia/src/core/SkBitmapProcState_matrixProcs.cpp 2017-11-28 14:06:53.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.0-skia-neon/src/3rdparty/chromium/third_party/skia/src/core/SkBitmapProcState_matrixProcs.cpp 2017-12-25 18:34:09.229257992 +0100
-@@ -46,16 +46,16 @@
- ///////////////////////////////////////////////////////////////////////////////
-
- // Compile neon code paths if needed
--#if defined(SK_ARM_HAS_NEON)
-+#if !SK_ARM_NEON_IS_NONE
-
- // These are defined in src/opts/SkBitmapProcState_matrixProcs_neon.cpp
- extern const SkBitmapProcState::MatrixProc ClampX_ClampY_Procs_neon[];
- extern const SkBitmapProcState::MatrixProc RepeatX_RepeatY_Procs_neon[];
-
--#endif // defined(SK_ARM_HAS_NEON)
-+#endif // !SK_ARM_NEON_IS_NONE
-
- // Compile non-neon code path if needed
--#if !defined(SK_ARM_HAS_NEON)
-+#if !SK_ARM_NEON_IS_ALWAYS
- #define MAKENAME(suffix) ClampX_ClampY ## suffix
- #define TILEX_PROCF(fx, max) SkClampMax((fx) >> 16, max)
- #define TILEY_PROCF(fy, max) SkClampMax((fy) >> 16, max)
-diff -Nur qtwebengine-everywhere-src-5.10.0/src/3rdparty/chromium/third_party/skia/src/core/SkCpu.cpp qtwebengine-everywhere-src-5.10.0-skia-neon/src/3rdparty/chromium/third_party/skia/src/core/SkCpu.cpp
---- qtwebengine-everywhere-src-5.10.0/src/3rdparty/chromium/third_party/skia/src/core/SkCpu.cpp 2017-11-28 14:06:53.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.0-skia-neon/src/3rdparty/chromium/third_party/skia/src/core/SkCpu.cpp 2017-12-25 18:37:45.974144769 +0100
-@@ -74,6 +74,124 @@
- return features;
- }
-
-+#elif defined(SK_CPU_ARM32) && \
-+ !defined(SK_BUILD_FOR_ANDROID)
-+#include <unistd.h>
-+#include <fcntl.h>
-+#include <errno.h>
-+#include <string.h>
-+#include <pthread.h>
-+
-+ static uint32_t read_cpu_features() {
-+ uint32_t features = 0;
-+
-+        // If we fail any of the following, assume we don't have NEON/VFPv4 instructions
-+ // This allows us to return immediately in case of error.
-+ bool have_neon = false;
-+ bool have_vfpv4 = false;
-+
-+ // There is no user-accessible CPUID instruction on ARM that we can use.
-+        // Instead, we must parse /proc/cpuinfo and look for the 'neon' feature.
-+ // For example, here's a typical output (Nexus S running ICS 4.0.3):
-+ /*
-+ Processor : ARMv7 Processor rev 2 (v7l)
-+ BogoMIPS : 994.65
-+ Features : swp half thumb fastmult vfp edsp thumbee neon vfpv3
-+ CPU implementer : 0x41
-+ CPU architecture: 7
-+ CPU variant : 0x2
-+ CPU part : 0xc08
-+ CPU revision : 2
-+
-+ Hardware : herring
-+ Revision : 000b
-+ Serial : 3833c77d6dc000ec
-+ */
-+ char buffer[4096];
-+
-+ do {
-+ // open /proc/cpuinfo
-+ int fd = TEMP_FAILURE_RETRY(open("/proc/cpuinfo", O_RDONLY));
-+ if (fd < 0) {
-+ SkDebugf("Could not open /proc/cpuinfo: %s\n",
strerror(errno));
-+ break;
-+ }
-+
-+ // Read the file. To simplify our search, we're going to place two
-+            // sentinel '\n' characters: one at the start of the buffer, and one at
-+ // the end. This means we reserve the first and last buffer bytes.
-+ buffer[0] = '\n';
-+ int size = TEMP_FAILURE_RETRY(read(fd, buffer+1, sizeof(buffer)-2));
-+ close(fd);
-+
-+ if (size < 0) { // should not happen
-+ SkDebugf("Could not read /proc/cpuinfo: %s\n",
strerror(errno));
-+ break;
-+ }
-+
-+ SkDebugf("START /proc/cpuinfo:\n%.*s\nEND /proc/cpuinfo\n",
-+ size, buffer+1);
-+
-+ // Compute buffer limit, and place final sentinel
-+ char* buffer_end = buffer + 1 + size;
-+ buffer_end[0] = '\n';
-+
-+ // Now, find a line that starts with "Features", i.e. look for
-+ // '\nFeatures ' in our buffer.
-+ const char features[] = "\nFeatures\t";
-+ const size_t features_len = sizeof(features)-1;
-+
-+ char* line = (char*) memmem(buffer, buffer_end - buffer,
-+ features, features_len);
-+ if (line == nullptr) { // Weird, no Features line, bad kernel?
-+ SkDebugf("Could not find a line starting with
'Features'"
-+ "in /proc/cpuinfo ?\n");
-+ break;
-+ }
-+
-+ line += features_len; // Skip the "\nFeatures\t" prefix
-+
-+ // Find the end of the current line
-+ char* line_end = (char*) memchr(line, '\n', buffer_end - line);
-+ if (line_end == nullptr)
-+ line_end = buffer_end;
-+
-+ // Now find an instance of 'neon' in the flags list. We want to
-+            // ensure it's only 'neon' and not something fancy like 'noneon'
-+ // so check that it follows a space.
-+ const char neon[] = " neon";
-+ const size_t neon_len = sizeof(neon)-1;
-+ const char* flag = (const char*) memmem(line, line_end - line,
-+ neon, neon_len);
-+ // Ensure it is followed by a space or a newline.
-+ if (flag != nullptr
-+                && (flag[neon_len] == ' ' || flag[neon_len] == '\n')) {
-+ // Fine, we support Arm NEON !
-+ have_neon = true;
-+ }
-+
-+ // Now find an instance of 'vfpv4' in the flags list. We want to
-+            // ensure it's only 'vfpv4' and not something fancy like 'novfpv4'
-+ // so check that it follows a space.
-+ const char vfpv4[] = " vfpv4";
-+ const size_t vfpv4_len = sizeof(vfpv4)-1;
-+ const char* vflag = (const char*) memmem(line, line_end - line,
-+ vfpv4, vfpv4_len);
-+ // Ensure it is followed by a space or a newline.
-+ if (vflag != nullptr
-+                && (vflag[vfpv4_len] == ' ' || vflag[vfpv4_len] == '\n')) {
-+ // Fine, we support Arm VFPv4 !
-+ have_vfpv4 = true;
-+ }
-+
-+ } while (0);
-+
-+ if (have_neon) { features |= SkCpu::NEON ; }
-+ if (have_neon && have_vfpv4) { features |= SkCpu::NEON_FMA; }
-+ if (have_vfpv4) { features |= SkCpu::VFP_FP16; }
-+ return features;
-+ }
-+
- #elif defined(SK_CPU_ARM64) && __has_include(<sys/auxv.h>)
- #include <sys/auxv.h>
-
-diff -Nur qtwebengine-everywhere-src-5.10.0/src/3rdparty/chromium/third_party/skia/src/core/SkOpts.cpp qtwebengine-everywhere-src-5.10.0-skia-neon/src/3rdparty/chromium/third_party/skia/src/core/SkOpts.cpp
---- qtwebengine-everywhere-src-5.10.0/src/3rdparty/chromium/third_party/skia/src/core/SkOpts.cpp 2017-11-28 14:06:53.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.0-skia-neon/src/3rdparty/chromium/third_party/skia/src/core/SkOpts.cpp 2017-12-25 18:34:52.777632875 +0100
-@@ -95,6 +95,7 @@
- void Init_sse42();
- void Init_avx();
- void Init_crc32();
-+ void Init_neon();
-
- static void init() {
- #if !defined(SK_BUILD_NO_OPTS)
-@@ -104,6 +105,9 @@
- if (SkCpu::Supports(SkCpu::SSE42)) { Init_sse42(); }
- if (SkCpu::Supports(SkCpu::AVX )) { Init_avx(); }
-
-+ #elif defined(SK_CPU_ARM32)
-+ if (SkCpu::Supports(SkCpu::NEON)) { Init_neon(); }
-+
- #elif defined(SK_CPU_ARM64)
- if (SkCpu::Supports(SkCpu::CRC32)) { Init_crc32(); }
-
-diff -Nur qtwebengine-everywhere-src-5.10.0/src/3rdparty/chromium/third_party/skia/src/core/SkUtilsArm.h qtwebengine-everywhere-src-5.10.0-skia-neon/src/3rdparty/chromium/third_party/skia/src/core/SkUtilsArm.h
---- qtwebengine-everywhere-src-5.10.0/src/3rdparty/chromium/third_party/skia/src/core/SkUtilsArm.h 2017-11-28 14:06:53.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.0-skia-neon/src/3rdparty/chromium/third_party/skia/src/core/SkUtilsArm.h 2017-12-25 18:34:52.777632875 +0100
-@@ -8,12 +8,75 @@
- #ifndef SkUtilsArm_DEFINED
- #define SkUtilsArm_DEFINED
-
--#include "SkTypes.h"
-+#include "SkCpu.h"
-+#include "SkUtils.h"
-
--#if defined(SK_ARM_HAS_NEON)
-- #define SK_ARM_NEON_WRAP(x) (x ## _neon)
-+// Define SK_ARM_NEON_MODE to one of the following values
-+// corresponding respectively to:
-+// - No ARM Neon support at all (not targetting ARMv7-A, or don't have NEON)
-+// - Full ARM Neon support (i.e. assume the CPU always supports it)
-+// - Optional ARM Neon support (i.e. probe CPU at runtime)
-+//
-+#define SK_ARM_NEON_MODE_NONE 0
-+#define SK_ARM_NEON_MODE_ALWAYS 1
-+#define SK_ARM_NEON_MODE_DYNAMIC 2
-+
-+#if defined(SK_ARM_HAS_OPTIONAL_NEON)
-+# define SK_ARM_NEON_MODE SK_ARM_NEON_MODE_DYNAMIC
-+#elif defined(SK_ARM_HAS_NEON)
-+# define SK_ARM_NEON_MODE SK_ARM_NEON_MODE_ALWAYS
-+#else
-+# define SK_ARM_NEON_MODE SK_ARM_NEON_MODE_NONE
-+#endif
-+
-+// Convenience test macros, always defined as 0 or 1
-+#define SK_ARM_NEON_IS_NONE (SK_ARM_NEON_MODE == SK_ARM_NEON_MODE_NONE)
-+#define SK_ARM_NEON_IS_ALWAYS (SK_ARM_NEON_MODE == SK_ARM_NEON_MODE_ALWAYS)
-+#define SK_ARM_NEON_IS_DYNAMIC (SK_ARM_NEON_MODE == SK_ARM_NEON_MODE_DYNAMIC)
-+
-+// The sk_cpu_arm_has_neon() function returns true iff the target device
-+// is ARMv7-A and supports Neon instructions. In DYNAMIC mode, this actually
-+// probes the CPU at runtime (and caches the result).
-+
-+static inline bool sk_cpu_arm_has_neon(void) {
-+#if SK_ARM_NEON_IS_NONE
-+ return false;
- #else
-- #define SK_ARM_NEON_WRAP(x) (x)
-+ return SkCpu::Supports(SkCpu::NEON);
-+#endif
-+}
-+
-+// Use SK_ARM_NEON_WRAP(symbol) to map 'symbol' to a NEON-specific symbol
-+// when applicable. This will transform 'symbol' differently depending on
-+// the current NEON configuration, i.e.:
-+//
-+// NONE -> 'symbol'
-+// ALWAYS -> 'symbol_neon'
-+// DYNAMIC -> 'symbol' or 'symbol_neon' depending on runtime check.
-+//
-+// The goal is to simplify user code, for example:
-+//
-+// return SK_ARM_NEON_WRAP(do_something)(params);
-+//
-+// Replaces the equivalent:
-+//
-+// #if SK_ARM_NEON_IS_NONE
-+// return do_something(params);
-+// #elif SK_ARM_NEON_IS_ALWAYS
-+// return do_something_neon(params);
-+// #elif SK_ARM_NEON_IS_DYNAMIC
-+// if (sk_cpu_arm_has_neon())
-+// return do_something_neon(params);
-+// else
-+// return do_something(params);
-+// #endif
-+//
-+#if SK_ARM_NEON_IS_NONE
-+# define SK_ARM_NEON_WRAP(x) (x)
-+#elif SK_ARM_NEON_IS_ALWAYS
-+# define SK_ARM_NEON_WRAP(x) (x ## _neon)
-+#elif SK_ARM_NEON_IS_DYNAMIC
-+# define SK_ARM_NEON_WRAP(x) (sk_cpu_arm_has_neon() ? x ## _neon : x)
- #endif
-
- #endif // SkUtilsArm_DEFINED
-diff -Nur qtwebengine-everywhere-src-5.10.0/src/3rdparty/chromium/third_party/skia/src/opts/SkOpts_neon.cpp qtwebengine-everywhere-src-5.10.0-skia-neon/src/3rdparty/chromium/third_party/skia/src/opts/SkOpts_neon.cpp
---- qtwebengine-everywhere-src-5.10.0/src/3rdparty/chromium/third_party/skia/src/opts/SkOpts_neon.cpp 1970-01-01 01:00:00.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.0-skia-neon/src/3rdparty/chromium/third_party/skia/src/opts/SkOpts_neon.cpp 2017-12-26 01:45:00.514114716 +0100
-@@ -0,0 +1,47 @@
-+/*
-+ * Copyright 2015 Google Inc.
-+ *
-+ * Use of this source code is governed by a BSD-style license that can be
-+ * found in the LICENSE file.
-+ */
-+
-+#include "SkOpts.h"
-+
-+#define SK_OPTS_NS sk_neon
-+#include "SkBlitMask_opts.h"
-+#include "SkBlitRow_opts.h"
-+#include "SkBlurImageFilter_opts.h"
-+#include "SkMorphologyImageFilter_opts.h"
-+#include "SkSwizzler_opts.h"
-+#include "SkXfermode_opts.h"
-+
-+namespace SkOpts {
-+ void Init_neon() {
-+ create_xfermode = sk_neon::create_xfermode;
-+
-+ box_blur_xx = sk_neon::box_blur_xx;
-+ box_blur_xy = sk_neon::box_blur_xy;
-+ box_blur_yx = sk_neon::box_blur_yx;
-+
-+ dilate_x = sk_neon::dilate_x;
-+ dilate_y = sk_neon::dilate_y;
-+ erode_x = sk_neon::erode_x;
-+ erode_y = sk_neon::erode_y;
-+
-+ blit_mask_d32_a8 = sk_neon::blit_mask_d32_a8;
-+
-+ blit_row_color32 = sk_neon::blit_row_color32;
-+ blit_row_s32a_opaque = sk_neon::blit_row_s32a_opaque;
-+
-+ RGBA_to_BGRA = sk_neon::RGBA_to_BGRA;
-+ RGBA_to_rgbA = sk_neon::RGBA_to_rgbA;
-+ RGBA_to_bgrA = sk_neon::RGBA_to_bgrA;
-+ RGB_to_RGB1 = sk_neon::RGB_to_RGB1;
-+ RGB_to_BGR1 = sk_neon::RGB_to_BGR1;
-+ gray_to_RGB1 = sk_neon::gray_to_RGB1;
-+ grayA_to_RGBA = sk_neon::grayA_to_RGBA;
-+ grayA_to_rgbA = sk_neon::grayA_to_rgbA;
-+ inverted_CMYK_to_RGB1 = sk_neon::inverted_CMYK_to_RGB1;
-+ inverted_CMYK_to_BGR1 = sk_neon::inverted_CMYK_to_BGR1;
-+ }
-+}
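The deleted skia-neon patch above relies on SkOpts-style runtime dispatch: function pointers start out at portable implementations, and Init_neon() swaps in NEON kernels once the CPU probe succeeds. A minimal sketch of that pattern (the names add, add_portable, add_fast, and has_fast_cpu are illustrative, not Skia's):

```cpp
#include <cstdio>

namespace opts {
// Function pointer that init() fills in; stands in for SkOpts entries
// like blit_row_color32 in the hunk above.
int (*add)(int, int) = nullptr;

int add_portable(int a, int b) { return a + b; }
// Stand-in for a NEON kernel; same contract as the portable version.
int add_fast(int a, int b) { return a + b; }

// Stand-in for SkCpu::Supports(SkCpu::NEON).
bool has_fast_cpu() { return true; }

void init() {
    add = add_portable;       // safe default for every CPU
    if (has_fast_cpu()) {
        add = add_fast;       // upgrade when the feature is present
    }
}
}  // namespace opts
```

Callers run opts::init() once at startup and then call through the pointers, which is the shape of SkOpts::init() and Init_neon() in the hunks above.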
diff --git a/qtwebengine-everywhere-src-5.10.0-system-icu-utf.patch b/qtwebengine-everywhere-src-5.10.0-system-icu-utf.patch
deleted file mode 100644
index e645de8..0000000
--- a/qtwebengine-everywhere-src-5.10.0-system-icu-utf.patch
+++ /dev/null
@@ -1,463 +0,0 @@
-diff -ur qtwebengine-everywhere-src-5.10.0-system-nspr-prtime/src/3rdparty/chromium/base/BUILD.gn qtwebengine-everywhere-src-5.10.0-system-icu-utf/src/3rdparty/chromium/base/BUILD.gn
---- qtwebengine-everywhere-src-5.10.0-system-nspr-prtime/src/3rdparty/chromium/base/BUILD.gn 2017-12-25 12:16:23.250517752 +0100
-+++ qtwebengine-everywhere-src-5.10.0-system-icu-utf/src/3rdparty/chromium/base/BUILD.gn 2017-12-25 12:26:21.502411527 +0100
-@@ -859,8 +859,6 @@
- "third_party/dmg_fp/dmg_fp.h",
- "third_party/dmg_fp/dtoa_wrapper.cc",
- "third_party/dmg_fp/g_fmt.cc",
-- "third_party/icu/icu_utf.cc",
-- "third_party/icu/icu_utf.h",
- "third_party/superfasthash/superfasthash.c",
- "third_party/valgrind/memcheck.h",
- "threading/platform_thread.h",
-diff -ur qtwebengine-everywhere-src-5.10.0-system-nspr-prtime/src/3rdparty/chromium/base/files/file_path.cc qtwebengine-everywhere-src-5.10.0-system-icu-utf/src/3rdparty/chromium/base/files/file_path.cc
---- qtwebengine-everywhere-src-5.10.0-system-nspr-prtime/src/3rdparty/chromium/base/files/file_path.cc 2017-11-28 14:06:53.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.0-system-icu-utf/src/3rdparty/chromium/base/files/file_path.cc 2017-12-25 12:26:21.503411511 +0100
-@@ -18,7 +18,7 @@
-
- #if defined(OS_MACOSX)
- #include "base/mac/scoped_cftyperef.h"
--#include "base/third_party/icu/icu_utf.h"
-+#include <unicode/utf.h>
- #endif
-
- #if defined(OS_WIN)
-@@ -1163,9 +1163,9 @@
- int* index) {
- int codepoint = 0;
- while (*index < length && codepoint == 0) {
-- // CBU8_NEXT returns a value < 0 in error cases. For purposes of string
-+ // U8_NEXT returns a value < 0 in error cases. For purposes of string
- // comparison, we just use that value and flag it with DCHECK.
-- CBU8_NEXT(string, *index, length, codepoint);
-+ U8_NEXT(string, *index, length, codepoint);
- DCHECK_GT(codepoint, 0);
- if (codepoint > 0) {
- // Check if there is a subtable for this upper byte.
-diff -ur qtwebengine-everywhere-src-5.10.0-system-nspr-prtime/src/3rdparty/chromium/base/json/json_parser.cc qtwebengine-everywhere-src-5.10.0-system-icu-utf/src/3rdparty/chromium/base/json/json_parser.cc
---- qtwebengine-everywhere-src-5.10.0-system-nspr-prtime/src/3rdparty/chromium/base/json/json_parser.cc 2017-11-28 14:06:53.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.0-system-icu-utf/src/3rdparty/chromium/base/json/json_parser.cc 2017-12-25 12:29:56.210138445 +0100
-@@ -16,7 +16,7 @@
- #include "base/strings/stringprintf.h"
- #include "base/strings/utf_string_conversion_utils.h"
- #include "base/strings/utf_string_conversions.h"
--#include "base/third_party/icu/icu_utf.h"
-+#include <unicode/utf.h>
- #include "base/values.h"
-
- namespace base {
-@@ -482,14 +482,14 @@
- // string character and the terminating closing quote.
- while (CanConsume(2)) {
- int start_index = index_;
-- pos_ = start_pos_ + index_; // CBU8_NEXT is postcrement.
-- CBU8_NEXT(start_pos_, index_, length, next_char);
-+ pos_ = start_pos_ + index_; // U8_NEXT is postcrement.
-+ U8_NEXT(start_pos_, index_, length, next_char);
- if (next_char < 0 || !IsValidCharacter(next_char)) {
- if ((options_ & JSON_REPLACE_INVALID_CHARACTERS) == 0) {
- ReportError(JSONReader::JSON_UNSUPPORTED_ENCODING, 1);
- return false;
- }
-- CBU8_NEXT(start_pos_, start_index, length, next_char);
-+ U8_NEXT(start_pos_, start_index, length, next_char);
- string.Convert();
- string.AppendString(kUnicodeReplacementString,
- arraysize(kUnicodeReplacementString) - 1);
-@@ -497,7 +497,7 @@
- }
-
- if (next_char == '"') {
-- --index_; // Rewind by one because of CBU8_NEXT.
-+ --index_; // Rewind by one because of U8_NEXT.
- *out = std::move(string);
- return true;
- }
-@@ -633,10 +633,10 @@
-
- // If this is a high surrogate, consume the next code unit to get the
- // low surrogate.
-- if (CBU16_IS_SURROGATE(code_unit16_high)) {
-+ if (U16_IS_SURROGATE(code_unit16_high)) {
- // Make sure this is the high surrogate. If not, it's an encoding
- // error.
-- if (!CBU16_IS_SURROGATE_LEAD(code_unit16_high))
-+ if (!U16_IS_SURROGATE_LEAD(code_unit16_high))
- return false;
-
- // Make sure that the token has more characters to consume the
-@@ -653,20 +653,20 @@
-
- NextNChars(3);
-
-- if (!CBU16_IS_TRAIL(code_unit16_low)) {
-+ if (!U16_IS_TRAIL(code_unit16_low)) {
- return false;
- }
-
- uint32_t code_point =
-- CBU16_GET_SUPPLEMENTARY(code_unit16_high, code_unit16_low);
-+ U16_GET_SUPPLEMENTARY(code_unit16_high, code_unit16_low);
- if (!IsValidCharacter(code_point))
- return false;
-
- offset = 0;
-- CBU8_APPEND_UNSAFE(code_unit8, offset, code_point);
-+ U8_APPEND_UNSAFE(code_unit8, offset, code_point);
- } else {
- // Not a surrogate.
-- DCHECK(CBU16_IS_SINGLE(code_unit16_high));
-+ DCHECK(U16_IS_SINGLE(code_unit16_high));
- if (!IsValidCharacter(code_unit16_high)) {
- if ((options_ & JSON_REPLACE_INVALID_CHARACTERS) == 0) {
- return false;
-@@ -675,7 +675,7 @@
- return true;
- }
-
-- CBU8_APPEND_UNSAFE(code_unit8, offset, code_unit16_high);
-+ U8_APPEND_UNSAFE(code_unit8, offset, code_unit16_high);
- }
-
- dest_string->append(code_unit8, offset);
-@@ -692,9 +692,9 @@
- } else {
- char utf8_units[4] = { 0 };
- int offset = 0;
-- CBU8_APPEND_UNSAFE(utf8_units, offset, point);
-+ U8_APPEND_UNSAFE(utf8_units, offset, point);
- dest->Convert();
-- // CBU8_APPEND_UNSAFE can overwrite up to 4 bytes, so utf8_units may not be
-+ // U8_APPEND_UNSAFE can overwrite up to 4 bytes, so utf8_units may not be
- // zero terminated at this point. |offset| contains the correct length.
- dest->AppendString(utf8_units, offset);
- }
-diff -ur qtwebengine-everywhere-src-5.10.0-system-nspr-prtime/src/3rdparty/chromium/base/json/string_escape.cc qtwebengine-everywhere-src-5.10.0-system-icu-utf/src/3rdparty/chromium/base/json/string_escape.cc
---- qtwebengine-everywhere-src-5.10.0-system-nspr-prtime/src/3rdparty/chromium/base/json/string_escape.cc 2017-11-28 14:06:53.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.0-system-icu-utf/src/3rdparty/chromium/base/json/string_escape.cc 2017-12-25 12:36:34.186118210 +0100
-@@ -14,7 +14,7 @@
- #include "base/strings/stringprintf.h"
- #include "base/strings/utf_string_conversion_utils.h"
- #include "base/strings/utf_string_conversions.h"
--#include "base/third_party/icu/icu_utf.h"
-+#include <unicode/utf.h>
-
- namespace base {
-
-@@ -92,7 +92,7 @@
- for (int32_t i = 0; i < length; ++i) {
- uint32_t code_point;
- if (!ReadUnicodeCharacter(str.data(), length, &i, &code_point) ||
-- code_point == static_cast<decltype(code_point)>(CBU_SENTINEL) ||
-+ code_point == static_cast<decltype(code_point)>(U_SENTINEL) ||
- !IsValidCharacter(code_point)) {
- code_point = kReplacementCodePoint;
- did_replacement = true;
-diff -ur qtwebengine-everywhere-src-5.10.0-system-nspr-prtime/src/3rdparty/chromium/base/strings/pattern.cc qtwebengine-everywhere-src-5.10.0-system-icu-utf/src/3rdparty/chromium/base/strings/pattern.cc
---- qtwebengine-everywhere-src-5.10.0-system-nspr-prtime/src/3rdparty/chromium/base/strings/pattern.cc 2017-11-28 14:06:53.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.0-system-icu-utf/src/3rdparty/chromium/base/strings/pattern.cc 2017-12-25 12:26:21.545410871 +0100
-@@ -4,13 +4,13 @@
-
- #include "base/strings/pattern.h"
-
--#include "base/third_party/icu/icu_utf.h"
-+#include <unicode/utf.h>
-
- namespace base {
-
- namespace {
-
--static bool IsWildcard(base_icu::UChar32 character) {
-+static bool IsWildcard(UChar32 character) {
- return character == '*' || character == '?';
- }
-
-@@ -37,9 +37,9 @@
- // Check if the chars match, if so, increment the ptrs.
- const CHAR* pattern_next = *pattern;
- const CHAR* string_next = *string;
-- base_icu::UChar32 pattern_char = next(&pattern_next, pattern_end);
-+ UChar32 pattern_char = next(&pattern_next, pattern_end);
- if (pattern_char == next(&string_next, string_end) &&
-- pattern_char != CBU_SENTINEL) {
-+ pattern_char != U_SENTINEL) {
- *pattern = pattern_next;
- *string = string_next;
- } else {
-@@ -133,20 +133,20 @@
- }
-
- struct NextCharUTF8 {
-- base_icu::UChar32 operator()(const char** p, const char* end) {
-- base_icu::UChar32 c;
-+ UChar32 operator()(const char** p, const char* end) {
-+ UChar32 c;
- int offset = 0;
-- CBU8_NEXT(*p, offset, end - *p, c);
-+ U8_NEXT(*p, offset, end - *p, c);
- *p += offset;
- return c;
- }
- };
-
- struct NextCharUTF16 {
-- base_icu::UChar32 operator()(const char16** p, const char16* end) {
-- base_icu::UChar32 c;
-+ UChar32 operator()(const char16** p, const char16* end) {
-+ UChar32 c;
- int offset = 0;
-- CBU16_NEXT(*p, offset, end - *p, c);
-+ U16_NEXT(*p, offset, end - *p, c);
- *p += offset;
- return c;
- }
-diff -ur qtwebengine-everywhere-src-5.10.0-system-nspr-prtime/src/3rdparty/chromium/base/strings/string_split.cc qtwebengine-everywhere-src-5.10.0-system-icu-utf/src/3rdparty/chromium/base/strings/string_split.cc
---- qtwebengine-everywhere-src-5.10.0-system-nspr-prtime/src/3rdparty/chromium/base/strings/string_split.cc 2017-11-28 14:06:53.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.0-system-icu-utf/src/3rdparty/chromium/base/strings/string_split.cc 2017-12-25 12:26:21.545410871 +0100
-@@ -8,7 +8,7 @@
-
- #include "base/logging.h"
- #include "base/strings/string_util.h"
--#include "base/third_party/icu/icu_utf.h"
-+#include <unicode/utf.h>
-
- namespace base {
-
-diff -ur qtwebengine-everywhere-src-5.10.0-system-nspr-prtime/src/3rdparty/chromium/base/strings/string_util.cc qtwebengine-everywhere-src-5.10.0-system-icu-utf/src/3rdparty/chromium/base/strings/string_util.cc
---- qtwebengine-everywhere-src-5.10.0-system-nspr-prtime/src/3rdparty/chromium/base/strings/string_util.cc 2017-11-28 14:06:53.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.0-system-icu-utf/src/3rdparty/chromium/base/strings/string_util.cc 2017-12-25 12:26:21.546410856 +0100
-@@ -25,7 +25,7 @@
- #include "base/memory/singleton.h"
- #include "base/strings/utf_string_conversion_utils.h"
- #include "base/strings/utf_string_conversions.h"
--#include "base/third_party/icu/icu_utf.h"
-+#include <unicode/utf.h>
- #include "build/build_config.h"
-
- namespace base {
-@@ -372,19 +372,19 @@
- }
- DCHECK_LE(byte_size,
- static_cast<uint32_t>(std::numeric_limits<int32_t>::max()));
-- // Note: This cast is necessary because CBU8_NEXT uses int32_ts.
-+ // Note: This cast is necessary because U8_NEXT uses int32_ts.
- int32_t truncation_length = static_cast<int32_t>(byte_size);
- int32_t char_index = truncation_length - 1;
- const char* data = input.data();
-
-- // Using CBU8, we will move backwards from the truncation point
-+ // Using U8, we will move backwards from the truncation point
- // to the beginning of the string looking for a valid UTF8
- // character. Once a full UTF8 character is found, we will
- // truncate the string to the end of that character.
- while (char_index >= 0) {
- int32_t prev = char_index;
-- base_icu::UChar32 code_point = 0;
-- CBU8_NEXT(data, char_index, truncation_length, code_point);
-+ UChar32 code_point = 0;
-+ U8_NEXT(data, char_index, truncation_length, code_point);
- if (!IsValidCharacter(code_point) ||
- !IsValidCodepoint(code_point)) {
- char_index = prev - 1;
-@@ -537,7 +537,7 @@
-
- while (char_index < src_len) {
- int32_t code_point;
-- CBU8_NEXT(src, char_index, src_len, code_point);
-+ U8_NEXT(src, char_index, src_len, code_point);
- if (!IsValidCharacter(code_point))
- return false;
- }
-diff -ur qtwebengine-everywhere-src-5.10.0-system-nspr-prtime/src/3rdparty/chromium/base/strings/utf_string_conversion_utils.cc qtwebengine-everywhere-src-5.10.0-system-icu-utf/src/3rdparty/chromium/base/strings/utf_string_conversion_utils.cc
---- qtwebengine-everywhere-src-5.10.0-system-nspr-prtime/src/3rdparty/chromium/base/strings/utf_string_conversion_utils.cc 2017-11-28 14:06:53.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.0-system-icu-utf/src/3rdparty/chromium/base/strings/utf_string_conversion_utils.cc 2017-12-25 12:26:21.546410856 +0100
-@@ -4,7 +4,7 @@
-
- #include "base/strings/utf_string_conversion_utils.h"
-
--#include "base/third_party/icu/icu_utf.h"
-+#include <unicode/utf.h>
-
- namespace base {
-
-@@ -18,7 +18,7 @@
- // use a signed type for code_point. But this function returns false
- // on error anyway, so code_point_out is unsigned.
- int32_t code_point;
-- CBU8_NEXT(src, *char_index, src_len, code_point);
-+ U8_NEXT(src, *char_index, src_len, code_point);
- *code_point_out = static_cast<uint32_t>(code_point);
-
- // The ICU macro above moves to the next char, we want to point to the last
-@@ -33,16 +33,16 @@
- int32_t src_len,
- int32_t* char_index,
- uint32_t* code_point) {
-- if (CBU16_IS_SURROGATE(src[*char_index])) {
-- if (!CBU16_IS_SURROGATE_LEAD(src[*char_index]) ||
-+ if (U16_IS_SURROGATE(src[*char_index])) {
-+ if (!U16_IS_SURROGATE_LEAD(src[*char_index]) ||
- *char_index + 1 >= src_len ||
-- !CBU16_IS_TRAIL(src[*char_index + 1])) {
-+ !U16_IS_TRAIL(src[*char_index + 1])) {
- // Invalid surrogate pair.
- return false;
- }
-
- // Valid surrogate pair.
-- *code_point = CBU16_GET_SUPPLEMENTARY(src[*char_index],
-+ *code_point = U16_GET_SUPPLEMENTARY(src[*char_index],
- src[*char_index + 1]);
- (*char_index)++;
- } else {
-@@ -76,30 +76,30 @@
- }
-
-
-- // CBU8_APPEND_UNSAFE can append up to 4 bytes.
-+ // U8_APPEND_UNSAFE can append up to 4 bytes.
- size_t char_offset = output->length();
- size_t original_char_offset = char_offset;
-- output->resize(char_offset + CBU8_MAX_LENGTH);
-+ output->resize(char_offset + U8_MAX_LENGTH);
-
-- CBU8_APPEND_UNSAFE(&(*output)[0], char_offset, code_point);
-+ U8_APPEND_UNSAFE(&(*output)[0], char_offset, code_point);
-
-- // CBU8_APPEND_UNSAFE will advance our pointer past the inserted character, so
-+ // U8_APPEND_UNSAFE will advance our pointer past the inserted character, so
- // it will represent the new length of the string.
- output->resize(char_offset);
- return char_offset - original_char_offset;
- }
-
- size_t WriteUnicodeCharacter(uint32_t code_point, string16* output) {
-- if (CBU16_LENGTH(code_point) == 1) {
-+ if (U16_LENGTH(code_point) == 1) {
- // Thie code point is in the Basic Multilingual Plane (BMP).
- output->push_back(static_cast<char16>(code_point));
- return 1;
- }
- // Non-BMP characters use a double-character encoding.
- size_t char_offset = output->length();
-- output->resize(char_offset + CBU16_MAX_LENGTH);
-- CBU16_APPEND_UNSAFE(&(*output)[0], char_offset, code_point);
-- return CBU16_MAX_LENGTH;
-+ output->resize(char_offset + U16_MAX_LENGTH);
-+ U16_APPEND_UNSAFE(&(*output)[0], char_offset, code_point);
-+ return U16_MAX_LENGTH;
- }
-
- // Generalized Unicode converter -----------------------------------------------
-diff -ur qtwebengine-everywhere-src-5.10.0-system-nspr-prtime/src/3rdparty/chromium/content/browser/devtools/devtools_io_context.cc qtwebengine-everywhere-src-5.10.0-system-icu-utf/src/3rdparty/chromium/content/browser/devtools/devtools_io_context.cc
---- qtwebengine-everywhere-src-5.10.0-system-nspr-prtime/src/3rdparty/chromium/content/browser/devtools/devtools_io_context.cc 2017-11-28 14:06:53.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.0-system-icu-utf/src/3rdparty/chromium/content/browser/devtools/devtools_io_context.cc 2017-12-25 12:37:08.791629561 +0100
-@@ -10,7 +10,7 @@
- #include "base/strings/string_number_conversions.h"
- #include "base/strings/string_util.h"
- #include "base/task_scheduler/post_task.h"
--#include "base/third_party/icu/icu_utf.h"
-+#include <unicode/utf.h>
- #include "base/threading/thread_restrictions.h"
- #include "content/public/browser/browser_thread.h"
-
-@@ -92,7 +92,7 @@
- } else {
- // Provided client has requested sufficient large block, make their
- // life easier by not truncating in the middle of a UTF-8 character.
-- if (size_got > 6 && !CBU8_IS_SINGLE(buffer[size_got - 1])) {
-+ if (size_got > 6 && !U8_IS_SINGLE(buffer[size_got - 1])) {
- base::TruncateUTF8ToByteSize(buffer, size_got, &buffer);
- size_got = buffer.size();
- } else {
-diff -ur qtwebengine-everywhere-src-5.10.0-system-nspr-prtime/src/3rdparty/chromium/net/cert/internal/parse_name.cc qtwebengine-everywhere-src-5.10.0-system-icu-utf/src/3rdparty/chromium/net/cert/internal/parse_name.cc
---- qtwebengine-everywhere-src-5.10.0-system-nspr-prtime/src/3rdparty/chromium/net/cert/internal/parse_name.cc 2017-11-28 14:06:53.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.0-system-icu-utf/src/3rdparty/chromium/net/cert/internal/parse_name.cc 2017-12-25 12:34:58.610528544 +0100
-@@ -9,7 +9,7 @@
- #include "base/strings/utf_string_conversion_utils.h"
- #include "base/strings/utf_string_conversions.h"
- #include "base/sys_byteorder.h"
--#include "base/third_party/icu/icu_utf.h"
-+#include <unicode/utf.h>
-
- #if !defined(OS_NACL)
- #include "net/base/net_string_util.h"
-@@ -38,7 +38,7 @@
-
- // BMPString only supports codepoints in the Basic Multilingual Plane;
- // surrogates are not allowed.
-- if (CBU_IS_SURROGATE(c))
-+ if (U_IS_SURROGATE(c))
- return false;
- }
- return base::UTF16ToUTF8(in_16bit.data(), in_16bit.size(), out);
-@@ -58,7 +58,7 @@
- for (const uint32_t c : in_32bit) {
- // UniversalString is UCS-4 in big-endian order.
- uint32_t codepoint = base::NetToHost32(c);
-- if (!CBU_IS_UNICODE_CHAR(codepoint))
-+ if (!U_IS_UNICODE_CHAR(codepoint))
- return false;
-
- base::WriteUnicodeCharacter(codepoint, out);
-diff -ur qtwebengine-everywhere-src-5.10.0-system-nspr-prtime/src/3rdparty/chromium/tools/gn/bootstrap/bootstrap.py qtwebengine-everywhere-src-5.10.0-system-icu-utf/src/3rdparty/chromium/tools/gn/bootstrap/bootstrap.py
---- qtwebengine-everywhere-src-5.10.0-system-nspr-prtime/src/3rdparty/chromium/tools/gn/bootstrap/bootstrap.py 2017-12-25 12:20:43.585562853 +0100
-+++ qtwebengine-everywhere-src-5.10.0-system-icu-utf/src/3rdparty/chromium/tools/gn/bootstrap/bootstrap.py 2017-12-25 12:41:57.071558915 +0100
-@@ -526,7 +526,6 @@
- 'base/task_scheduler/task_traits.cc',
- 'base/third_party/dmg_fp/dtoa_wrapper.cc',
- 'base/third_party/dmg_fp/g_fmt.cc',
-- 'base/third_party/icu/icu_utf.cc',
- 'base/threading/post_task_and_reply_impl.cc',
- 'base/threading/sequence_local_storage_map.cc',
- 'base/threading/sequenced_task_runner_handle.cc',
-@@ -679,7 +678,7 @@
- 'base/allocator/allocator_shim.cc',
- 'base/allocator/allocator_shim_default_dispatch_to_glibc.cc',
- ])
-- libs.extend(['-lrt', '-lnspr4'])
-+ libs.extend(['-lrt', '-lnspr4', '-licuuc'])
- static_libraries['libevent']['include_dirs'].extend([
- os.path.join(SRC_ROOT, 'base', 'third_party', 'libevent', 'linux')
- ])
-diff -ur qtwebengine-everywhere-src-5.10.0-system-nspr-prtime/src/3rdparty/chromium/tools/gn/BUILD.gn qtwebengine-everywhere-src-5.10.0-system-icu-utf/src/3rdparty/chromium/tools/gn/BUILD.gn
---- qtwebengine-everywhere-src-5.10.0-system-nspr-prtime/src/3rdparty/chromium/tools/gn/BUILD.gn 2017-12-25 12:16:48.744131902 +0100
-+++ qtwebengine-everywhere-src-5.10.0-system-icu-utf/src/3rdparty/chromium/tools/gn/BUILD.gn 2017-12-25 12:26:21.547410841 +0100
-@@ -278,6 +278,7 @@
-
- libs = [
- "nspr4",
-+ "icuuc",
- ]
- }
-
-diff -ur qtwebengine-everywhere-src-5.10.0-system-nspr-prtime/src/3rdparty/chromium/ui/base/ime/input_method_chromeos.cc qtwebengine-everywhere-src-5.10.0-system-icu-utf/src/3rdparty/chromium/ui/base/ime/input_method_chromeos.cc
---- qtwebengine-everywhere-src-5.10.0-system-nspr-prtime/src/3rdparty/chromium/ui/base/ime/input_method_chromeos.cc 2017-11-28 14:06:53.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.0-system-icu-utf/src/3rdparty/chromium/ui/base/ime/input_method_chromeos.cc 2017-12-25 12:40:50.356500963 +0100
-@@ -17,7 +17,6 @@
- #include "base/logging.h"
- #include "base/strings/string_util.h"
- #include "base/strings/utf_string_conversions.h"
--#include "base/third_party/icu/icu_utf.h"
- #include "chromeos/system/devicemode.h"
- #include "ui/base/ime/chromeos/ime_keyboard.h"
- #include "ui/base/ime/chromeos/input_method_manager.h"
-diff -ur qtwebengine-everywhere-src-5.10.0-system-nspr-prtime/src/3rdparty/chromium/ui/gfx/utf16_indexing.cc qtwebengine-everywhere-src-5.10.0-system-icu-utf/src/3rdparty/chromium/ui/gfx/utf16_indexing.cc
---- qtwebengine-everywhere-src-5.10.0-system-nspr-prtime/src/3rdparty/chromium/ui/gfx/utf16_indexing.cc 2017-11-28 14:06:53.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.0-system-icu-utf/src/3rdparty/chromium/ui/gfx/utf16_indexing.cc 2017-12-25 12:26:21.547410841 +0100
-@@ -5,13 +5,13 @@
- #include "ui/gfx/utf16_indexing.h"
-
- #include "base/logging.h"
--#include "base/third_party/icu/icu_utf.h"
-+#include <unicode/utf.h>
-
- namespace gfx {
-
- bool IsValidCodePointIndex(const base::string16& s, size_t index) {
- return index == 0 || index == s.length() ||
-- !(CBU16_IS_TRAIL(s[index]) && CBU16_IS_LEAD(s[index - 1]));
-+ !(U16_IS_TRAIL(s[index]) && U16_IS_LEAD(s[index - 1]));
- }
-
- ptrdiff_t UTF16IndexToOffset(const base::string16& s, size_t base, size_t pos) {
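The CBU8_NEXT → U8_NEXT replacements in the deleted patch above swap Chromium's bundled copy of ICU's UTF-8 macros for the system ICU header; both step an index forward by one code point. A hand-rolled sketch of that operation (u8_next is an illustrative name, not ICU's implementation, with simplified error handling):

```cpp
#include <cstdint>
#include <string>

// Decode one UTF-8 code point starting at s[i], advancing i past it.
// Returns the code point, or -1 on malformed input (mirroring U8_NEXT's
// negative result in error cases).
int32_t u8_next(const std::string& s, size_t& i) {
    if (i >= s.size()) return -1;
    unsigned char b = static_cast<unsigned char>(s[i++]);
    if (b < 0x80) return b;                           // ASCII fast path
    int extra = (b >= 0xF0) ? 3 : (b >= 0xE0) ? 2 : (b >= 0xC0) ? 1 : -1;
    if (extra < 0) return -1;                         // stray continuation byte
    int32_t cp = b & (0x3F >> extra);                 // payload bits of the lead byte
    for (int k = 0; k < extra; ++k) {
        if (i >= s.size() || (s[i] & 0xC0) != 0x80) return -1;
        cp = (cp << 6) | (s[i++] & 0x3F);
    }
    return cp;
}
```

For example, the two-byte sequence C3 A9 decodes to U+00E9 and advances the index by 2, which is the behavior the patched call sites depend on from either macro.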
diff --git a/qtwebengine-everywhere-src-5.10.0-system-nspr-prtime.patch b/qtwebengine-everywhere-src-5.10.0-system-nspr-prtime.patch
deleted file mode 100644
index ec4dce8..0000000
--- a/qtwebengine-everywhere-src-5.10.0-system-nspr-prtime.patch
+++ /dev/null
@@ -1,80 +0,0 @@
-diff -ur qtwebengine-everywhere-src-5.10.0/src/3rdparty/chromium/base/BUILD.gn qtwebengine-everywhere-src-5.10.0-system-nspr-prtime/src/3rdparty/chromium/base/BUILD.gn
---- qtwebengine-everywhere-src-5.10.0/src/3rdparty/chromium/base/BUILD.gn 2017-11-28 14:06:53.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.0-system-nspr-prtime/src/3rdparty/chromium/base/BUILD.gn 2017-12-25 12:16:23.250517752 +0100
-@@ -53,6 +53,9 @@
- "-Wno-char-subscripts",
- ]
- }
-+ ldflags = [
-+ "-lnspr4",
-+ ]
- }
-
- config("base_implementation") {
-@@ -858,8 +861,6 @@
- "third_party/dmg_fp/g_fmt.cc",
- "third_party/icu/icu_utf.cc",
- "third_party/icu/icu_utf.h",
-- "third_party/nspr/prtime.cc",
-- "third_party/nspr/prtime.h",
- "third_party/superfasthash/superfasthash.c",
- "third_party/valgrind/memcheck.h",
- "threading/platform_thread.h",
-diff -ur qtwebengine-everywhere-src-5.10.0/src/3rdparty/chromium/base/time/pr_time_unittest.cc qtwebengine-everywhere-src-5.10.0-system-nspr-prtime/src/3rdparty/chromium/base/time/pr_time_unittest.cc
---- qtwebengine-everywhere-src-5.10.0/src/3rdparty/chromium/base/time/pr_time_unittest.cc 2017-11-28 14:06:53.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.0-system-nspr-prtime/src/3rdparty/chromium/base/time/pr_time_unittest.cc 2017-12-25 12:16:23.250517752 +0100
-@@ -7,7 +7,7 @@
-
- #include "base/compiler_specific.h"
- #include "base/macros.h"
--#include "base/third_party/nspr/prtime.h"
-+#include <nspr4/prtime.h>
- #include "base/time/time.h"
- #include "build/build_config.h"
- #include "testing/gtest/include/gtest/gtest.h"
-diff -ur qtwebengine-everywhere-src-5.10.0/src/3rdparty/chromium/base/time/time.cc qtwebengine-everywhere-src-5.10.0-system-nspr-prtime/src/3rdparty/chromium/base/time/time.cc
---- qtwebengine-everywhere-src-5.10.0/src/3rdparty/chromium/base/time/time.cc 2017-11-28 14:06:53.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.0-system-nspr-prtime/src/3rdparty/chromium/base/time/time.cc 2017-12-25 12:16:48.710132416 +0100
-@@ -14,7 +14,7 @@
- #include "base/logging.h"
- #include "base/macros.h"
- #include "base/strings/stringprintf.h"
--#include "base/third_party/nspr/prtime.h"
-+#include <nspr4/prtime.h>
- #include "build/build_config.h"
-
- namespace base {
-diff -ur qtwebengine-everywhere-src-5.10.0/src/3rdparty/chromium/tools/gn/bootstrap/bootstrap.py qtwebengine-everywhere-src-5.10.0-system-nspr-prtime/src/3rdparty/chromium/tools/gn/bootstrap/bootstrap.py
---- qtwebengine-everywhere-src-5.10.0/src/3rdparty/chromium/tools/gn/bootstrap/bootstrap.py 2017-11-28 14:06:53.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.0-system-nspr-prtime/src/3rdparty/chromium/tools/gn/bootstrap/bootstrap.py 2017-12-25 12:20:43.585562853 +0100
-@@ -527,7 +527,6 @@
- 'base/third_party/dmg_fp/dtoa_wrapper.cc',
- 'base/third_party/dmg_fp/g_fmt.cc',
- 'base/third_party/icu/icu_utf.cc',
-- 'base/third_party/nspr/prtime.cc',
- 'base/threading/post_task_and_reply_impl.cc',
- 'base/threading/sequence_local_storage_map.cc',
- 'base/threading/sequenced_task_runner_handle.cc',
-@@ -680,7 +679,7 @@
- 'base/allocator/allocator_shim.cc',
- 'base/allocator/allocator_shim_default_dispatch_to_glibc.cc',
- ])
-- libs.extend(['-lrt'])
-+ libs.extend(['-lrt', '-lnspr4'])
- static_libraries['libevent']['include_dirs'].extend([
- os.path.join(SRC_ROOT, 'base', 'third_party', 'libevent', 'linux')
- ])
-diff -ur qtwebengine-everywhere-src-5.10.0/src/3rdparty/chromium/tools/gn/BUILD.gn qtwebengine-everywhere-src-5.10.0-system-nspr-prtime/src/3rdparty/chromium/tools/gn/BUILD.gn
---- qtwebengine-everywhere-src-5.10.0/src/3rdparty/chromium/tools/gn/BUILD.gn 2017-11-28 14:06:53.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.0-system-nspr-prtime/src/3rdparty/chromium/tools/gn/BUILD.gn 2017-12-25 12:16:48.744131902 +0100
-@@ -275,6 +275,10 @@
- "//build/config:exe_and_shlib_deps",
- "//build/win:default_exe_manifest",
- ]
-+
-+ libs = [
-+ "nspr4",
-+ ]
- }
-
- test("gn_unittests") {
diff --git a/qtwebengine-everywhere-src-5.10.1-gcc8-alignof.patch b/qtwebengine-everywhere-src-5.10.1-gcc8-alignof.patch
deleted file mode 100644
index ff007b7..0000000
--- a/qtwebengine-everywhere-src-5.10.1-gcc8-alignof.patch
+++ /dev/null
@@ -1,18 +0,0 @@
-diff -up qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/mojo/public/c/system/macros.h.gcc8-alignof qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/mojo/public/c/system/macros.h
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/mojo/public/c/system/macros.h.gcc8-alignof 2018-05-15 14:58:46.448912634 -0400
-+++ qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/mojo/public/c/system/macros.h 2018-05-15 14:58:52.041784613 -0400
-@@ -18,7 +18,13 @@
- #endif
-
- // Like the C++11 |alignof| operator.
--#if __cplusplus >= 201103L
-+#if defined(__GNUC__) && __GNUC__ >= 8
-+// GCC 8 has changed the alignof operator to return the minimal alignment
-+// required by the target ABI, instead of the preferred alignment.
-+// This means that on 32-bit x86, it will return 4 instead of 8.
-+// Use __alignof__ instead to avoid this.
-+#define MOJO_ALIGNOF(type) __alignof__(type)
-+#elif __cplusplus >= 201103L
- #define MOJO_ALIGNOF(type) alignof(type)
- #elif defined(__GNUC__)
- #define MOJO_ALIGNOF(type) __alignof__(type)
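For context on the deleted gcc8-alignof patch: per its own comment, GCC 8's alignof began returning the ABI-minimum alignment (4 for double on 32-bit x86) rather than the preferred alignment (8) that __alignof__ reports, so the patch routed GCC 8 through __alignof__. A minimal sketch of the same cascade (MY_ALIGNOF is an illustrative name standing in for MOJO_ALIGNOF):

```cpp
// Prefer __alignof__ on GCC >= 8 to keep the pre-GCC-8 (preferred-alignment)
// behavior; otherwise fall back the way the original MOJO_ALIGNOF did.
#if defined(__GNUC__) && __GNUC__ >= 8
#define MY_ALIGNOF(type) __alignof__(type)
#elif __cplusplus >= 201103L
#define MY_ALIGNOF(type) alignof(type)
#elif defined(__GNUC__)
#define MY_ALIGNOF(type) __alignof__(type)
#endif
```

The exact values MY_ALIGNOF yields are target-dependent; the point of the patch was only that the two operators can disagree on 32-bit x86 under GCC 8.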
diff --git a/qtwebengine-everywhere-src-5.10.1-no-sse2.patch b/qtwebengine-everywhere-src-5.10.1-no-sse2.patch
deleted file mode 100644
index 084b795..0000000
--- a/qtwebengine-everywhere-src-5.10.1-no-sse2.patch
+++ /dev/null
@@ -1,30292 +0,0 @@
-diff -Nur qtwebengine-everywhere-src-5.10.1/examples/webengine/customdialogs/customdialogs.pro qtwebengine-everywhere-src-5.10.1-no-sse2/examples/webengine/customdialogs/customdialogs.pro
---- qtwebengine-everywhere-src-5.10.1/examples/webengine/customdialogs/customdialogs.pro 2018-02-09 05:07:39.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/examples/webengine/customdialogs/customdialogs.pro 2018-02-18 19:00:43.343577798 +0100
-@@ -1,5 +1,7 @@
- QT += webengine
-
-+QMAKE_LFLAGS += -Wl,-rpath-link,$$OUT_PWD/../../../src/core/release
-+
- HEADERS += \
- server.h
-
-diff -Nur qtwebengine-everywhere-src-5.10.1/examples/webengine/minimal/minimal.pro qtwebengine-everywhere-src-5.10.1-no-sse2/examples/webengine/minimal/minimal.pro
---- qtwebengine-everywhere-src-5.10.1/examples/webengine/minimal/minimal.pro 2018-02-09 05:07:39.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/examples/webengine/minimal/minimal.pro 2018-02-18 19:00:44.647558618 +0100
-@@ -2,6 +2,8 @@
-
- QT += webengine
-
-+QMAKE_LFLAGS += -Wl,-rpath-link,$$OUT_PWD/../../../src/core/release
-+
- SOURCES += main.cpp
-
- RESOURCES += qml.qrc
-diff -Nur qtwebengine-everywhere-src-5.10.1/examples/webengine/quicknanobrowser/quicknanobrowser.pro qtwebengine-everywhere-src-5.10.1-no-sse2/examples/webengine/quicknanobrowser/quicknanobrowser.pro
---- qtwebengine-everywhere-src-5.10.1/examples/webengine/quicknanobrowser/quicknanobrowser.pro 2018-02-09 05:07:39.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/examples/webengine/quicknanobrowser/quicknanobrowser.pro 2018-02-18 19:00:51.606456259 +0100
-@@ -20,5 +20,7 @@
- QT += widgets # QApplication is required to get native styling with QtQuickControls
- }
-
-+QMAKE_LFLAGS += -Wl,-rpath-link,$$OUT_PWD/../../../src/core/release
-+
- target.path = $$[QT_INSTALL_EXAMPLES]/webengine/quicknanobrowser
- INSTALLS += target
-diff -Nur qtwebengine-everywhere-src-5.10.1/examples/webengine/recipebrowser/recipebrowser.pro qtwebengine-everywhere-src-5.10.1-no-sse2/examples/webengine/recipebrowser/recipebrowser.pro
---- qtwebengine-everywhere-src-5.10.1/examples/webengine/recipebrowser/recipebrowser.pro 2018-02-09 05:07:39.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/examples/webengine/recipebrowser/recipebrowser.pro 2018-02-18 19:00:52.096449052 +0100
-@@ -2,6 +2,8 @@
-
- QT += quick qml quickcontrols2 webengine
-
-+QMAKE_LFLAGS += -Wl,-rpath-link,$$OUT_PWD/../../../src/core/release
-+
- cross_compile {
- posix|qnx|linux: DEFINES += QTWEBENGINE_RECIPE_BROWSER_EMBEDDED
- }
-diff -Nur qtwebengine-everywhere-src-5.10.1/examples/webenginewidgets/contentmanipulation/contentmanipulation.pro qtwebengine-everywhere-src-5.10.1-no-sse2/examples/webenginewidgets/contentmanipulation/contentmanipulation.pro
---- qtwebengine-everywhere-src-5.10.1/examples/webenginewidgets/contentmanipulation/contentmanipulation.pro 2018-02-09 05:07:39.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/examples/webenginewidgets/contentmanipulation/contentmanipulation.pro 2018-02-18 19:00:52.381444860 +0100
-@@ -1,5 +1,7 @@
- QT += webenginewidgets
-
-+QMAKE_LFLAGS += -Wl,-rpath-link,$$OUT_PWD/../../../src/core/release
-+
- HEADERS = mainwindow.h
- SOURCES = main.cpp \
- mainwindow.cpp
-diff -Nur qtwebengine-everywhere-src-5.10.1/examples/webenginewidgets/cookiebrowser/cookiebrowser.pro qtwebengine-everywhere-src-5.10.1-no-sse2/examples/webenginewidgets/cookiebrowser/cookiebrowser.pro
---- qtwebengine-everywhere-src-5.10.1/examples/webenginewidgets/cookiebrowser/cookiebrowser.pro 2018-02-09 05:07:39.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/examples/webenginewidgets/cookiebrowser/cookiebrowser.pro 2018-02-18 19:00:52.432444110 +0100
-@@ -3,6 +3,8 @@
- TEMPLATE = app
- CONFIG += c++11
-
-+QMAKE_LFLAGS += -Wl,-rpath-link,$$OUT_PWD/../../../src/core/release
-+
- SOURCES += \
- main.cpp\
- mainwindow.cpp
-diff -Nur qtwebengine-everywhere-src-5.10.1/examples/webenginewidgets/html2pdf/html2pdf.pro qtwebengine-everywhere-src-5.10.1-no-sse2/examples/webenginewidgets/html2pdf/html2pdf.pro
---- qtwebengine-everywhere-src-5.10.1/examples/webenginewidgets/html2pdf/html2pdf.pro 2018-02-09 05:07:39.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/examples/webenginewidgets/html2pdf/html2pdf.pro 2018-02-18 19:00:52.563442183 +0100
-@@ -2,6 +2,8 @@
-
- QT += webenginewidgets
-
-+QMAKE_LFLAGS += -Wl,-rpath-link,$$OUT_PWD/../../../src/core/release
-+
- SOURCES += html2pdf.cpp
-
- target.path = $$[QT_INSTALL_EXAMPLES]/webenginewidgets/html2pdf
-diff -Nur qtwebengine-everywhere-src-5.10.1/examples/webenginewidgets/maps/maps.pro qtwebengine-everywhere-src-5.10.1-no-sse2/examples/webenginewidgets/maps/maps.pro
---- qtwebengine-everywhere-src-5.10.1/examples/webenginewidgets/maps/maps.pro 2018-02-09 05:07:39.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/examples/webenginewidgets/maps/maps.pro 2018-02-18 19:00:52.647440947 +0100
-@@ -2,6 +2,8 @@
-
- QT += webenginewidgets
-
-+QMAKE_LFLAGS += -Wl,-rpath-link,$$OUT_PWD/../../../src/core/release
-+
- HEADERS += \
- mainwindow.h
-
-diff -Nur qtwebengine-everywhere-src-5.10.1/examples/webenginewidgets/markdowneditor/markdowneditor.pro qtwebengine-everywhere-src-5.10.1-no-sse2/examples/webenginewidgets/markdowneditor/markdowneditor.pro
---- qtwebengine-everywhere-src-5.10.1/examples/webenginewidgets/markdowneditor/markdowneditor.pro 2018-02-09 05:07:39.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/examples/webenginewidgets/markdowneditor/markdowneditor.pro 2018-02-18 19:00:52.710440020 +0100
-@@ -3,6 +3,8 @@
- QT += webenginewidgets webchannel
- CONFIG += c++11
-
-+QMAKE_LFLAGS += -Wl,-rpath-link,$$OUT_PWD/../../../src/core/release
-+
- HEADERS += \
- mainwindow.h \
- previewpage.h \
-diff -Nur qtwebengine-everywhere-src-5.10.1/examples/webenginewidgets/minimal/minimal.pro qtwebengine-everywhere-src-5.10.1-no-sse2/examples/webenginewidgets/minimal/minimal.pro
---- qtwebengine-everywhere-src-5.10.1/examples/webenginewidgets/minimal/minimal.pro 2018-02-09 05:07:39.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/examples/webenginewidgets/minimal/minimal.pro 2018-02-18 19:00:52.766439197 +0100
-@@ -2,6 +2,8 @@
-
- QT += webenginewidgets
-
-+QMAKE_LFLAGS += -Wl,-rpath-link,$$OUT_PWD/../../../src/core/release
-+
- SOURCES += main.cpp
-
- target.path = $$[QT_INSTALL_EXAMPLES]/webenginewidgets/minimal
-diff -Nur qtwebengine-everywhere-src-5.10.1/examples/webenginewidgets/simplebrowser/simplebrowser.pro qtwebengine-everywhere-src-5.10.1-no-sse2/examples/webenginewidgets/simplebrowser/simplebrowser.pro
---- qtwebengine-everywhere-src-5.10.1/examples/webenginewidgets/simplebrowser/simplebrowser.pro 2018-02-09 05:07:39.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/examples/webenginewidgets/simplebrowser/simplebrowser.pro 2018-02-18 19:00:52.844438049 +0100
-@@ -3,6 +3,8 @@
- QT += webenginewidgets
- CONFIG += c++11
-
-+QMAKE_LFLAGS += -Wl,-rpath-link,$$OUT_PWD/../../../src/core/release
-+
- HEADERS += \
- browser.h \
- browserwindow.h \
-diff -Nur qtwebengine-everywhere-src-5.10.1/examples/webenginewidgets/spellchecker/spellchecker.pro qtwebengine-everywhere-src-5.10.1-no-sse2/examples/webenginewidgets/spellchecker/spellchecker.pro
---- qtwebengine-everywhere-src-5.10.1/examples/webenginewidgets/spellchecker/spellchecker.pro 2018-02-09 05:07:39.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/examples/webenginewidgets/spellchecker/spellchecker.pro 2018-02-18 19:00:52.899437241 +0100
-@@ -9,6 +9,8 @@
- error("Spellcheck example can not be built when using native OS dictionaries.")
- }
-
-+QMAKE_LFLAGS += -Wl,-rpath-link,$$OUT_PWD/../../../src/core/release
-+
- HEADERS += \
- webview.h
-
-diff -Nur qtwebengine-everywhere-src-5.10.1/examples/webenginewidgets/stylesheetbrowser/stylesheetbrowser.pro qtwebengine-everywhere-src-5.10.1-no-sse2/examples/webenginewidgets/stylesheetbrowser/stylesheetbrowser.pro
---- qtwebengine-everywhere-src-5.10.1/examples/webenginewidgets/stylesheetbrowser/stylesheetbrowser.pro 2018-02-09 05:07:39.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/examples/webenginewidgets/stylesheetbrowser/stylesheetbrowser.pro 2018-02-18 19:00:52.963436299 +0100
-@@ -3,6 +3,8 @@
- QT += webenginewidgets
- CONFIG += c++11
-
-+QMAKE_LFLAGS += -Wl,-rpath-link,$$OUT_PWD/../../../src/core/release
-+
- HEADERS += \
- mainwindow.h \
- stylesheetdialog.h
-diff -Nur qtwebengine-everywhere-src-5.10.1/examples/webenginewidgets/videoplayer/videoplayer.pro qtwebengine-everywhere-src-5.10.1-no-sse2/examples/webenginewidgets/videoplayer/videoplayer.pro
---- qtwebengine-everywhere-src-5.10.1/examples/webenginewidgets/videoplayer/videoplayer.pro 2018-02-09 05:07:39.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/examples/webenginewidgets/videoplayer/videoplayer.pro 2018-02-18 19:00:53.022435432 +0100
-@@ -2,6 +2,8 @@
-
- QT += webenginewidgets
-
-+QMAKE_LFLAGS += -Wl,-rpath-link,$$OUT_PWD/../../../src/core/release
-+
- HEADERS += \
- mainwindow.h \
- fullscreenwindow.h \
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/build/config/compiler/BUILD.gn qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/build/config/compiler/BUILD.gn
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/build/config/compiler/BUILD.gn 2018-02-02 11:39:52.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/build/config/compiler/BUILD.gn 2018-02-18 19:00:53.089434446 +0100
-@@ -604,13 +604,6 @@
- } else if (current_cpu == "x86") {
- cflags += [ "-m32" ]
- ldflags += [ "-m32" ]
-- if (!is_nacl) {
-- cflags += [
-- "-msse2",
-- "-mfpmath=sse",
-- "-mmmx",
-- ]
-- }
- } else if (current_cpu == "arm") {
- if (is_clang && !is_android && !is_nacl) {
- cflags += [ "--target=arm-linux-gnueabihf" ]
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/build/config/v8_target_cpu.gni qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/build/config/v8_target_cpu.gni
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/build/config/v8_target_cpu.gni 2018-02-02 11:39:52.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/build/config/v8_target_cpu.gni 2018-02-18 19:00:53.089434446 +0100
-@@ -59,3 +59,11 @@
- # It should never be explicitly set by the user.
- v8_current_cpu = v8_target_cpu
- }
-+
-+if (v8_current_cpu == "x86") {
-+ # If we are not building for the x86_sse2 toolchain, we actually want to build
-+ # the "x87" backend instead.
-+ if (current_toolchain != "//build/toolchain/linux:x86_sse2") {
-+ v8_current_cpu = "x87"
-+ }
-+}
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/build/toolchain/gcc_toolchain.gni qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/build/toolchain/gcc_toolchain.gni
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/build/toolchain/gcc_toolchain.gni 2018-02-02 11:39:52.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/build/toolchain/gcc_toolchain.gni 2018-02-18 19:00:53.143433652 +0100
-@@ -266,6 +266,10 @@
- enable_linker_map = defined(invoker.enable_linker_map) &&
- invoker.enable_linker_map && generate_linker_map
-
-+ if (defined(invoker.shlib_subdir)) {
-+ shlib_subdir = invoker.shlib_subdir
-+ }
-+
- # These library switches can apply to all tools below.
- lib_switch = "-l"
- lib_dir_switch = "-L"
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/build/toolchain/linux/BUILD.gn qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/build/toolchain/linux/BUILD.gn
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/build/toolchain/linux/BUILD.gn 2018-02-02 11:39:52.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/build/toolchain/linux/BUILD.gn 2018-02-18 19:00:53.144433637 +0100
-@@ -110,6 +110,26 @@
- }
- }
-
-+gcc_toolchain("x86_sse2") {
-+ cc = "gcc"
-+ cxx = "g++"
-+
-+ readelf = "readelf"
-+ nm = "nm"
-+ ar = "ar"
-+ ld = cxx
-+
-+ extra_cflags = "-msse2 -mfpmath=sse"
-+ extra_cxxflags = "-msse2 -mfpmath=sse"
-+ shlib_subdir = "lib/sse2"
-+
-+ toolchain_args = {
-+ current_cpu = "x86"
-+ current_os = "linux"
-+ is_clang = false
-+ }
-+}
-+
- clang_toolchain("clang_x64") {
- # Output linker map files for binary size analysis.
- enable_linker_map = true
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/cc/base/math_util.cc qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/cc/base/math_util.cc
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/cc/base/math_util.cc 2018-02-02 11:39:52.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/cc/base/math_util.cc 2018-02-18 19:00:53.144433637 +0100
-@@ -7,7 +7,7 @@
- #include <algorithm>
- #include <cmath>
- #include <limits>
--#if defined(ARCH_CPU_X86_FAMILY)
-+#ifdef __SSE__
- #include <xmmintrin.h>
- #endif
-
-@@ -810,7 +810,7 @@
- }
-
- ScopedSubnormalFloatDisabler::ScopedSubnormalFloatDisabler() {
--#if defined(ARCH_CPU_X86_FAMILY)
-+#ifdef __SSE__
- // Turn on "subnormals are zero" and "flush to zero" CSR flags.
- orig_state_ = _mm_getcsr();
- _mm_setcsr(orig_state_ | 0x8040);
-@@ -818,7 +818,7 @@
- }
-
- ScopedSubnormalFloatDisabler::~ScopedSubnormalFloatDisabler() {
--#if defined(ARCH_CPU_X86_FAMILY)
-+#ifdef __SSE__
- _mm_setcsr(orig_state_);
- #endif
- }
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/cc/base/math_util.h qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/cc/base/math_util.h
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/cc/base/math_util.h 2018-02-02 11:39:52.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/cc/base/math_util.h 2018-02-18 19:00:53.177433152 +0100
-@@ -11,7 +11,6 @@
- #include <vector>
-
- #include "base/logging.h"
--#include "build/build_config.h"
- #include "cc/base/base_export.h"
- #include "ui/gfx/geometry/box_f.h"
- #include "ui/gfx/geometry/point3_f.h"
-@@ -331,7 +330,7 @@
- ~ScopedSubnormalFloatDisabler();
-
- private:
--#if defined(ARCH_CPU_X86_FAMILY)
-+#ifdef __SSE__
- unsigned int orig_state_;
- #endif
- DISALLOW_COPY_AND_ASSIGN(ScopedSubnormalFloatDisabler);
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/cc/BUILD.gn qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/cc/BUILD.gn
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/cc/BUILD.gn 2018-02-02 11:39:52.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/cc/BUILD.gn 2018-02-18 19:00:53.177433152 +0100
-@@ -445,13 +445,6 @@
- "trees/tree_synchronizer.h",
- ]
-
-- if (current_cpu == "x86" || current_cpu == "x64") {
-- sources += [
-- "raster/texture_compressor_etc1_sse.cc",
-- "raster/texture_compressor_etc1_sse.h",
-- ]
-- }
--
- # TODO(khushalsagar): Remove once crbug.com/683263 is fixed.
- configs = [ "//build/config/compiler:no_size_t_to_int_warning" ]
-
-@@ -463,6 +456,7 @@
- deps = [
- "//base",
- "//base/third_party/dynamic_annotations",
-+ "//cc:cc_opts",
- "//cc/paint",
- "//components/viz/common",
- "//gpu",
-@@ -493,6 +487,36 @@
- }
- }
-
-+source_set("cc_opts") {
-+ public_deps = [
-+ "//cc:cc_opts_sse",
-+ ]
-+}
-+
-+source_set("cc_opts_sse") {
-+ if (current_cpu == "x86" || current_cpu == "x64") {
-+ deps = [
-+ "//base",
-+ ]
-+
-+ defines = [ "CC_IMPLEMENTATION=1" ]
-+
-+ if (!is_debug && (is_win || is_android)) {
-+ configs -= [ "//build/config/compiler:optimize" ]
-+ configs += [ "//build/config/compiler:optimize_max" ]
-+ }
-+
-+ sources = [
-+ "raster/texture_compressor.h",
-+ "raster/texture_compressor_etc1.h",
-+ "raster/texture_compressor_etc1_sse.cc",
-+ "raster/texture_compressor_etc1_sse.h",
-+ ]
-+
-+ cflags = [ "-msse2" ]
-+ }
-+}
-+
- cc_static_library("test_support") {
- testonly = true
- sources = [
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/content/renderer/BUILD.gn qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/content/renderer/BUILD.gn
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/content/renderer/BUILD.gn 2018-02-02 11:39:52.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/content/renderer/BUILD.gn 2018-02-18 19:00:53.178433137 +0100
-@@ -514,6 +514,13 @@
- "//ui/surface",
- "//v8",
- ]
-+
-+ if (current_cpu == "x86") {
-+ deps += [
-+ "//v8(//build/toolchain/linux:x86_sse2)",
-+ ]
-+ }
-+
- allow_circular_includes_from = []
-
- if (use_aura && !use_qt) {
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/media/base/BUILD.gn qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/media/base/BUILD.gn
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/media/base/BUILD.gn 2018-02-02 11:39:52.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/media/base/BUILD.gn 2018-02-18 19:00:53.242432196 +0100
-@@ -344,6 +344,12 @@
- defines += [ "DISABLE_USER_INPUT_MONITOR" ]
- }
-
-+ if (current_cpu == "x86" || current_cpu == "x64") {
-+ deps += [
-+ ":media_sse",
-+ ]
-+ }
-+
- if (is_linux || is_win) {
- sources += [
- "keyboard_event_counter.cc",
-@@ -366,6 +372,21 @@
- ]
- }
-
-+if (current_cpu == "x86" || current_cpu == "x64") {
-+ source_set("media_sse") {
-+ sources = [
-+ "sinc_resampler_sse.cc",
-+ ]
-+ configs += [
-+ "//media:media_config",
-+ "//media:media_implementation",
-+ ]
-+ if (!is_win) {
-+ cflags = [ "-msse" ]
-+ }
-+ }
-+}
-+
- if (is_android) {
- java_cpp_enum("java_enums") {
- sources = [
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/media/base/media.cc qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/media/base/media.cc
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/media/base/media.cc 2018-02-02 11:39:52.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/media/base/media.cc 2018-02-18 19:00:53.299431357 +0100
-@@ -10,6 +10,8 @@
- #include "base/metrics/field_trial.h"
- #include "base/trace_event/trace_event.h"
- #include "media/base/media_switches.h"
-+#include "media/base/sinc_resampler.h"
-+#include "media/base/vector_math.h"
- #include "third_party/libyuv/include/libyuv.h"
-
- #if defined(OS_ANDROID)
-@@ -30,6 +32,9 @@
- TRACE_EVENT_WARMUP_CATEGORY("audio");
- TRACE_EVENT_WARMUP_CATEGORY("media");
-
-+ // Perform initialization of libraries which require runtime CPU detection.
-+ vector_math::Initialize();
-+ SincResampler::InitializeCPUSpecificFeatures();
- libyuv::InitCpuFlags();
-
- #if !defined(MEDIA_DISABLE_FFMPEG)
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/media/base/sinc_resampler.cc qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/media/base/sinc_resampler.cc
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/media/base/sinc_resampler.cc 2018-02-02 11:39:52.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/media/base/sinc_resampler.cc 2018-02-18 19:00:53.299431357 +0100
-@@ -81,17 +81,12 @@
- #include <cmath>
- #include <limits>
-
-+#include "base/cpu.h"
- #include "base/logging.h"
- #include "build/build_config.h"
-
--#if defined(ARCH_CPU_X86_FAMILY)
--#include <xmmintrin.h>
--#define CONVOLVE_FUNC Convolve_SSE
--#elif defined(ARCH_CPU_ARM_FAMILY) && defined(USE_NEON)
-+#if defined(ARCH_CPU_ARM_FAMILY) && defined(USE_NEON)
- #include <arm_neon.h>
--#define CONVOLVE_FUNC Convolve_NEON
--#else
--#define CONVOLVE_FUNC Convolve_C
- #endif
-
- namespace media {
-@@ -112,10 +107,41 @@
- return sinc_scale_factor;
- }
-
-+#undef CONVOLVE_FUNC
-+
- static int CalculateChunkSize(int block_size_, double io_ratio) {
- return block_size_ / io_ratio;
- }
-
-+// If we know the minimum architecture at compile time, avoid CPU detection.
-+// Force NaCl code to use C routines since (at present) nothing there uses these
-+// methods and plumbing the -msse built library is non-trivial.
-+#if defined(ARCH_CPU_X86_FAMILY) && !defined(OS_NACL)
-+#if defined(__SSE__)
-+#define CONVOLVE_FUNC Convolve_SSE
-+void SincResampler::InitializeCPUSpecificFeatures() {}
-+#else
-+// X86 CPU detection required. Functions will be set by
-+// InitializeCPUSpecificFeatures().
-+#define CONVOLVE_FUNC g_convolve_proc_
-+
-+typedef float (*ConvolveProc)(const float*, const float*, const float*, double);
-+static ConvolveProc g_convolve_proc_ = NULL;
-+
-+void SincResampler::InitializeCPUSpecificFeatures() {
-+ CHECK(!g_convolve_proc_);
-+ g_convolve_proc_ = base::CPU().has_sse() ? Convolve_SSE : Convolve_C;
-+}
-+#endif
-+#elif defined(ARCH_CPU_ARM_FAMILY) && defined(USE_NEON)
-+#define CONVOLVE_FUNC Convolve_NEON
-+void SincResampler::InitializeCPUSpecificFeatures() {}
-+#else
-+// Unknown architecture.
-+#define CONVOLVE_FUNC Convolve_C
-+void SincResampler::InitializeCPUSpecificFeatures() {}
-+#endif
-+
- SincResampler::SincResampler(double io_sample_rate_ratio,
- int request_frames,
- const ReadCB& read_cb)
-@@ -328,46 +354,7 @@
- kernel_interpolation_factor * sum2);
- }
-
--#if defined(ARCH_CPU_X86_FAMILY)
--float SincResampler::Convolve_SSE(const float* input_ptr, const float* k1,
-- const float* k2,
-- double kernel_interpolation_factor) {
-- __m128 m_input;
-- __m128 m_sums1 = _mm_setzero_ps();
-- __m128 m_sums2 = _mm_setzero_ps();
--
-- // Based on |input_ptr| alignment, we need to use loadu or load. Unrolling
-- // these loops hurt performance in local testing.
-- if (reinterpret_cast<uintptr_t>(input_ptr) & 0x0F) {
-- for (int i = 0; i < kKernelSize; i += 4) {
-- m_input = _mm_loadu_ps(input_ptr + i);
-- m_sums1 = _mm_add_ps(m_sums1, _mm_mul_ps(m_input, _mm_load_ps(k1 + i)));
-- m_sums2 = _mm_add_ps(m_sums2, _mm_mul_ps(m_input, _mm_load_ps(k2 + i)));
-- }
-- } else {
-- for (int i = 0; i < kKernelSize; i += 4) {
-- m_input = _mm_load_ps(input_ptr + i);
-- m_sums1 = _mm_add_ps(m_sums1, _mm_mul_ps(m_input, _mm_load_ps(k1 + i)));
-- m_sums2 = _mm_add_ps(m_sums2, _mm_mul_ps(m_input, _mm_load_ps(k2 + i)));
-- }
-- }
--
-- // Linearly interpolate the two "convolutions".
-- m_sums1 = _mm_mul_ps(m_sums1, _mm_set_ps1(
-- static_cast<float>(1.0 - kernel_interpolation_factor)));
-- m_sums2 = _mm_mul_ps(m_sums2, _mm_set_ps1(
-- static_cast<float>(kernel_interpolation_factor)));
-- m_sums1 = _mm_add_ps(m_sums1, m_sums2);
--
-- // Sum components together.
-- float result;
-- m_sums2 = _mm_add_ps(_mm_movehl_ps(m_sums1, m_sums1), m_sums1);
-- _mm_store_ss(&result, _mm_add_ss(m_sums2, _mm_shuffle_ps(
-- m_sums2, m_sums2, 1)));
--
-- return result;
--}
--#elif defined(ARCH_CPU_ARM_FAMILY) && defined(USE_NEON)
-+#if defined(ARCH_CPU_ARM_FAMILY) && defined(USE_NEON)
- float SincResampler::Convolve_NEON(const float* input_ptr, const float* k1,
- const float* k2,
- double kernel_interpolation_factor) {
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/media/base/sinc_resampler.h qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/media/base/sinc_resampler.h
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/media/base/sinc_resampler.h 2018-02-02 11:39:52.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/media/base/sinc_resampler.h 2018-02-18 19:00:53.300431343 +0100
-@@ -36,6 +36,10 @@
- kKernelStorageSize = kKernelSize * (kKernelOffsetCount + 1),
- };
-
-+ // Selects runtime specific CPU features like SSE. Must be called before
-+ // using SincResampler.
-+ static void InitializeCPUSpecificFeatures();
-+
- // Callback type for providing more data into the resampler. Expects |frames|
- // of data to be rendered into |destination|; zero padded if not enough frames
- // are available to satisfy the request.
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/media/base/sinc_resampler_perftest.cc qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/media/base/sinc_resampler_perftest.cc
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/media/base/sinc_resampler_perftest.cc 2018-02-02 11:39:52.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/media/base/sinc_resampler_perftest.cc 2018-02-18 19:00:53.300431343 +0100
-@@ -4,6 +4,7 @@
-
- #include "base/bind.h"
- #include "base/bind_helpers.h"
-+#include "base/cpu.h"
- #include "base/time/time.h"
- #include "build/build_config.h"
- #include "media/base/sinc_resampler.h"
-@@ -61,6 +62,9 @@
- &resampler, SincResampler::Convolve_C, true, "unoptimized_aligned");
-
- #if defined(CONVOLVE_FUNC)
-+#if defined(ARCH_CPU_X86_FAMILY)
-+ ASSERT_TRUE(base::CPU().has_sse());
-+#endif
- RunConvolveBenchmark(
- &resampler, SincResampler::CONVOLVE_FUNC, true, "optimized_aligned");
- RunConvolveBenchmark(
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/media/base/sinc_resampler_sse.cc qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/media/base/sinc_resampler_sse.cc
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/media/base/sinc_resampler_sse.cc 1970-01-01 01:00:00.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/media/base/sinc_resampler_sse.cc 2018-02-18 19:00:53.300431343 +0100
-@@ -0,0 +1,50 @@
-+// Copyright 2013 The Chromium Authors. All rights reserved.
-+// Use of this source code is governed by a BSD-style license that can be
-+// found in the LICENSE file.
-+
-+#include "media/base/sinc_resampler.h"
-+
-+#include <xmmintrin.h>
-+
-+namespace media {
-+
-+float SincResampler::Convolve_SSE(const float* input_ptr, const float* k1,
-+ const float* k2,
-+ double kernel_interpolation_factor) {
-+ __m128 m_input;
-+ __m128 m_sums1 = _mm_setzero_ps();
-+ __m128 m_sums2 = _mm_setzero_ps();
-+
-+ // Based on |input_ptr| alignment, we need to use loadu or load. Unrolling
-+ // these loops hurt performance in local testing.
-+ if (reinterpret_cast<uintptr_t>(input_ptr) & 0x0F) {
-+ for (int i = 0; i < kKernelSize; i += 4) {
-+ m_input = _mm_loadu_ps(input_ptr + i);
-+ m_sums1 = _mm_add_ps(m_sums1, _mm_mul_ps(m_input, _mm_load_ps(k1 + i)));
-+ m_sums2 = _mm_add_ps(m_sums2, _mm_mul_ps(m_input, _mm_load_ps(k2 + i)));
-+ }
-+ } else {
-+ for (int i = 0; i < kKernelSize; i += 4) {
-+ m_input = _mm_load_ps(input_ptr + i);
-+ m_sums1 = _mm_add_ps(m_sums1, _mm_mul_ps(m_input, _mm_load_ps(k1 + i)));
-+ m_sums2 = _mm_add_ps(m_sums2, _mm_mul_ps(m_input, _mm_load_ps(k2 + i)));
-+ }
-+ }
-+
-+ // Linearly interpolate the two "convolutions".
-+ m_sums1 = _mm_mul_ps(m_sums1, _mm_set_ps1(
-+ static_cast<float>(1.0 - kernel_interpolation_factor)));
-+ m_sums2 = _mm_mul_ps(m_sums2, _mm_set_ps1(
-+ static_cast<float>(kernel_interpolation_factor)));
-+ m_sums1 = _mm_add_ps(m_sums1, m_sums2);
-+
-+ // Sum components together.
-+ float result;
-+ m_sums2 = _mm_add_ps(_mm_movehl_ps(m_sums1, m_sums1), m_sums1);
-+ _mm_store_ss(&result, _mm_add_ss(m_sums2, _mm_shuffle_ps(
-+ m_sums2, m_sums2, 1)));
-+
-+ return result;
-+}
-+
-+} // namespace media
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/media/base/sinc_resampler_unittest.cc qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/media/base/sinc_resampler_unittest.cc
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/media/base/sinc_resampler_unittest.cc 2018-02-02 11:39:52.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/media/base/sinc_resampler_unittest.cc 2018-02-18 19:00:53.301431328 +0100
-@@ -10,6 +10,7 @@
-
- #include "base/bind.h"
- #include "base/bind_helpers.h"
-+#include "base/cpu.h"
- #include "base/macros.h"
- #include "base/strings/string_number_conversions.h"
- #include "base/time/time.h"
-@@ -166,6 +167,10 @@
- static const double kKernelInterpolationFactor = 0.5;
-
- TEST(SincResamplerTest, Convolve) {
-+#if defined(ARCH_CPU_X86_FAMILY)
-+ ASSERT_TRUE(base::CPU().has_sse());
-+#endif
-+
- // Initialize a dummy resampler.
- MockSource mock_source;
- SincResampler resampler(
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/media/base/vector_math.cc qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/media/base/vector_math.cc
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/media/base/vector_math.cc 2018-02-02 11:39:52.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/media/base/vector_math.cc 2018-02-18 19:00:53.301431328 +0100
-@@ -7,12 +7,17 @@
-
- #include <algorithm>
-
-+#include "base/cpu.h"
- #include "base/logging.h"
- #include "build/build_config.h"
-
-+namespace media {
-+namespace vector_math {
-+
-+// If we know the minimum architecture at compile time, avoid CPU detection.
- // NaCl does not allow intrinsics.
- #if defined(ARCH_CPU_X86_FAMILY) && !defined(OS_NACL)
--#include <xmmintrin.h>
-+#if defined(__SSE__)
- // Don't use custom SSE versions where the auto-vectorized C version performs
- // better, which is anywhere clang is used.
- // TODO(pcc): Linux currently uses ThinLTO which has broken auto-vectorization
-@@ -25,20 +30,52 @@
- #define FMUL_FUNC FMUL_C
- #endif
- #define EWMAAndMaxPower_FUNC EWMAAndMaxPower_SSE
-+void Initialize() {}
-+#else
-+// X86 CPU detection required. Functions will be set by Initialize().
-+#if !defined(__clang__)
-+#define FMAC_FUNC g_fmac_proc_
-+#define FMUL_FUNC g_fmul_proc_
-+#else
-+#define FMAC_FUNC FMAC_C
-+#define FMUL_FUNC FMUL_C
-+#endif
-+#define EWMAAndMaxPower_FUNC g_ewma_power_proc_
-+
-+#if !defined(__clang__)
-+typedef void (*MathProc)(const float src[], float scale, int len, float dest[]);
-+static MathProc g_fmac_proc_ = NULL;
-+static MathProc g_fmul_proc_ = NULL;
-+#endif
-+typedef std::pair<float, float> (*EWMAAndMaxPowerProc)(
-+ float initial_value, const float src[], int len, float smoothing_factor);
-+static EWMAAndMaxPowerProc g_ewma_power_proc_ = NULL;
-+
-+void Initialize() {
-+ CHECK(!g_fmac_proc_);
-+ CHECK(!g_fmul_proc_);
-+ CHECK(!g_ewma_power_proc_);
-+ const bool kUseSSE = base::CPU().has_sse();
-+#if !defined(__clang__)
-+ g_fmac_proc_ = kUseSSE ? FMAC_SSE : FMAC_C;
-+ g_fmul_proc_ = kUseSSE ? FMUL_SSE : FMUL_C;
-+#endif
-+ g_ewma_power_proc_ = kUseSSE ? EWMAAndMaxPower_SSE : EWMAAndMaxPower_C;
-+}
-+#endif
- #elif defined(ARCH_CPU_ARM_FAMILY) && defined(USE_NEON)
- #include <arm_neon.h>
- #define FMAC_FUNC FMAC_NEON
- #define FMUL_FUNC FMUL_NEON
- #define EWMAAndMaxPower_FUNC EWMAAndMaxPower_NEON
-+void Initialize() {}
- #else
- #define FMAC_FUNC FMAC_C
- #define FMUL_FUNC FMUL_C
- #define EWMAAndMaxPower_FUNC EWMAAndMaxPower_C
-+void Initialize() {}
- #endif
-
--namespace media {
--namespace vector_math {
--
- void FMAC(const float src[], float scale, int len, float dest[]) {
- // Ensure |src| and |dest| are 16-byte aligned.
- DCHECK_EQ(0u, reinterpret_cast<uintptr_t>(src) & (kRequiredAlignment - 1));
-@@ -91,111 +128,6 @@
- return result;
- }
-
--#if defined(ARCH_CPU_X86_FAMILY) && !defined(OS_NACL)
--void FMUL_SSE(const float src[], float scale, int len, float dest[]) {
-- const int rem = len % 4;
-- const int last_index = len - rem;
-- __m128 m_scale = _mm_set_ps1(scale);
-- for (int i = 0; i < last_index; i += 4)
-- _mm_store_ps(dest + i, _mm_mul_ps(_mm_load_ps(src + i), m_scale));
--
-- // Handle any remaining values that wouldn't fit in an SSE pass.
-- for (int i = last_index; i < len; ++i)
-- dest[i] = src[i] * scale;
--}
--
--void FMAC_SSE(const float src[], float scale, int len, float dest[]) {
-- const int rem = len % 4;
-- const int last_index = len - rem;
-- __m128 m_scale = _mm_set_ps1(scale);
-- for (int i = 0; i < last_index; i += 4) {
-- _mm_store_ps(dest + i, _mm_add_ps(_mm_load_ps(dest + i),
-- _mm_mul_ps(_mm_load_ps(src + i), m_scale)));
-- }
--
-- // Handle any remaining values that wouldn't fit in an SSE pass.
-- for (int i = last_index; i < len; ++i)
-- dest[i] += src[i] * scale;
--}
--
--// Convenience macro to extract float 0 through 3 from the vector |a|. This is
--// needed because compilers other than clang don't support access via
--// operator[]().
--#define EXTRACT_FLOAT(a, i) \
-- (i == 0 ? \
-- _mm_cvtss_f32(a) : \
-- _mm_cvtss_f32(_mm_shuffle_ps(a, a, i)))
--
--std::pair<float, float> EWMAAndMaxPower_SSE(
-- float initial_value, const float src[], int len, float smoothing_factor) {
-- // When the recurrence is unrolled, we see that we can split it into 4
-- // separate lanes of evaluation:
-- //
-- // y[n] = a(S[n]^2) + (1-a)(y[n-1])
-- // = a(S[n]^2) + (1-a)^1(aS[n-1]^2) + (1-a)^2(aS[n-2]^2) + ...
-- // = z[n] + (1-a)^1(z[n-1]) + (1-a)^2(z[n-2]) + (1-a)^3(z[n-3])
-- //
-- // where z[n] = a(S[n]^2) + (1-a)^4(z[n-4]) + (1-a)^8(z[n-8]) + ...
-- //
-- // Thus, the strategy here is to compute z[n], z[n-1], z[n-2], and z[n-3] in
-- // each of the 4 lanes, and then combine them to give y[n].
--
-- const int rem = len % 4;
-- const int last_index = len - rem;
--
-- const __m128 smoothing_factor_x4 = _mm_set_ps1(smoothing_factor);
-- const float weight_prev = 1.0f - smoothing_factor;
-- const __m128 weight_prev_x4 = _mm_set_ps1(weight_prev);
-- const __m128 weight_prev_squared_x4 =
-- _mm_mul_ps(weight_prev_x4, weight_prev_x4);
-- const __m128 weight_prev_4th_x4 =
-- _mm_mul_ps(weight_prev_squared_x4, weight_prev_squared_x4);
--
-- // Compute z[n], z[n-1], z[n-2], and z[n-3] in parallel in lanes 3, 2, 1 and
-- // 0, respectively.
-- __m128 max_x4 = _mm_setzero_ps();
-- __m128 ewma_x4 = _mm_setr_ps(0.0f, 0.0f, 0.0f, initial_value);
-- int i;
-- for (i = 0; i < last_index; i += 4) {
-- ewma_x4 = _mm_mul_ps(ewma_x4, weight_prev_4th_x4);
-- const __m128 sample_x4 = _mm_load_ps(src + i);
-- const __m128 sample_squared_x4 = _mm_mul_ps(sample_x4, sample_x4);
-- max_x4 = _mm_max_ps(max_x4, sample_squared_x4);
-- // Note: The compiler optimizes this to a single multiply-and-accumulate
-- // instruction:
-- ewma_x4 = _mm_add_ps(ewma_x4,
-- _mm_mul_ps(sample_squared_x4, smoothing_factor_x4));
-- }
--
-- // y[n] = z[n] + (1-a)^1(z[n-1]) + (1-a)^2(z[n-2]) + (1-a)^3(z[n-3])
-- float ewma = EXTRACT_FLOAT(ewma_x4, 3);
-- ewma_x4 = _mm_mul_ps(ewma_x4, weight_prev_x4);
-- ewma += EXTRACT_FLOAT(ewma_x4, 2);
-- ewma_x4 = _mm_mul_ps(ewma_x4, weight_prev_x4);
-- ewma += EXTRACT_FLOAT(ewma_x4, 1);
-- ewma_x4 = _mm_mul_ss(ewma_x4, weight_prev_x4);
-- ewma += EXTRACT_FLOAT(ewma_x4, 0);
--
-- // Fold the maximums together to get the overall maximum.
-- max_x4 = _mm_max_ps(max_x4,
-- _mm_shuffle_ps(max_x4, max_x4, _MM_SHUFFLE(3, 3, 1, 1)));
-- max_x4 = _mm_max_ss(max_x4, _mm_shuffle_ps(max_x4, max_x4, 2));
--
-- std::pair<float, float> result(ewma, EXTRACT_FLOAT(max_x4, 0));
--
-- // Handle remaining values at the end of |src|.
-- for (; i < len; ++i) {
-- result.first *= weight_prev;
-- const float sample = src[i];
-- const float sample_squared = sample * sample;
-- result.first += sample_squared * smoothing_factor;
-- result.second = std::max(result.second, sample_squared);
-- }
--
-- return result;
--}
--#endif
--
- #if defined(ARCH_CPU_ARM_FAMILY) && defined(USE_NEON)
- void FMAC_NEON(const float src[], float scale, int len, float dest[]) {
- const int rem = len % 4;
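The removed SSE routine above unrolls a simple recurrence into four lanes. For orientation, here is a scalar sketch of the same computation (a hypothetical reference written for this note, mirroring what the plain-C fallback computes, not Chromium's actual `EWMAAndMaxPower_C`):

```cpp
#include <algorithm>
#include <utility>

// Hypothetical scalar reference: y[n] = a*S[n]^2 + (1-a)*y[n-1],
// alongside the running maximum of S[n]^2. The SSE version removed
// above computes exactly this, four samples at a time.
std::pair<float, float> EWMAAndMaxPower_Scalar(
    float initial_value, const float src[], int len, float smoothing_factor) {
  std::pair<float, float> result(initial_value, 0.0f);
  const float weight_prev = 1.0f - smoothing_factor;
  for (int i = 0; i < len; ++i) {
    const float sample_squared = src[i] * src[i];
    result.first =
        result.first * weight_prev + sample_squared * smoothing_factor;
    result.second = std::max(result.second, sample_squared);
  }
  return result;
}
```

The four-lane SSE variant is a pure performance rewrite of this loop; both must agree bit-for-bit only up to float rounding order.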
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/media/base/vector_math.h qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/media/base/vector_math.h
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/media/base/vector_math.h 2018-02-02 11:39:52.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/media/base/vector_math.h 2018-02-18 19:00:53.301431328 +0100
-@@ -15,6 +15,11 @@
- // Required alignment for inputs and outputs to all vector math functions
- enum { kRequiredAlignment = 16 };
-
-+// Selects runtime specific optimizations such as SSE. Must be called prior to
-+// calling FMAC() or FMUL(). Called during media library initialization; most
-+// users should never have to call this.
-+MEDIA_EXPORT void Initialize();
-+
- // Multiply each element of |src| (up to |len|) by |scale| and add to |dest|.
- // |src| and |dest| must be aligned by kRequiredAlignment.
- MEDIA_EXPORT void FMAC(const float src[], float scale, int len, float dest[]);
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/media/base/vector_math_perftest.cc qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/media/base/vector_math_perftest.cc
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/media/base/vector_math_perftest.cc 2018-02-02 11:39:52.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/media/base/vector_math_perftest.cc 2018-02-18 19:00:53.302431313 +0100
-@@ -5,6 +5,7 @@
- #include <memory>
-
- #include "base/macros.h"
-+#include "base/cpu.h"
- #include "base/memory/aligned_memory.h"
- #include "base/time/time.h"
- #include "build/build_config.h"
-@@ -82,15 +83,11 @@
- DISALLOW_COPY_AND_ASSIGN(VectorMathPerfTest);
- };
-
--// Define platform dependent function names for SIMD optimized methods.
-+// Define platform independent function name for FMAC* perf tests.
- #if defined(ARCH_CPU_X86_FAMILY)
- #define FMAC_FUNC FMAC_SSE
--#define FMUL_FUNC FMUL_SSE
--#define EWMAAndMaxPower_FUNC EWMAAndMaxPower_SSE
- #elif defined(ARCH_CPU_ARM_FAMILY) && defined(USE_NEON)
- #define FMAC_FUNC FMAC_NEON
--#define FMUL_FUNC FMUL_NEON
--#define EWMAAndMaxPower_FUNC EWMAAndMaxPower_NEON
- #endif
-
- // Benchmark for each optimized vector_math::FMAC() method.
-@@ -99,6 +96,9 @@
- RunBenchmark(
- vector_math::FMAC_C, true, "vector_math_fmac", "unoptimized");
- #if defined(FMAC_FUNC)
-+#if defined(ARCH_CPU_X86_FAMILY)
-+ ASSERT_TRUE(base::CPU().has_sse());
-+#endif
- // Benchmark FMAC_FUNC() with unaligned size.
- ASSERT_NE((kVectorSize - 1) % (vector_math::kRequiredAlignment /
- sizeof(float)), 0U);
-@@ -112,12 +112,24 @@
- #endif
- }
-
-+#undef FMAC_FUNC
-+
-+// Define platform independent function name for FMULBenchmark* tests.
-+#if defined(ARCH_CPU_X86_FAMILY)
-+#define FMUL_FUNC FMUL_SSE
-+#elif defined(ARCH_CPU_ARM_FAMILY) && defined(USE_NEON)
-+#define FMUL_FUNC FMUL_NEON
-+#endif
-+
- // Benchmark for each optimized vector_math::FMUL() method.
- TEST_F(VectorMathPerfTest, FMUL) {
- // Benchmark FMUL_C().
- RunBenchmark(
- vector_math::FMUL_C, true, "vector_math_fmul", "unoptimized");
- #if defined(FMUL_FUNC)
-+#if defined(ARCH_CPU_X86_FAMILY)
-+ ASSERT_TRUE(base::CPU().has_sse());
-+#endif
- // Benchmark FMUL_FUNC() with unaligned size.
- ASSERT_NE((kVectorSize - 1) % (vector_math::kRequiredAlignment /
- sizeof(float)), 0U);
-@@ -131,6 +143,14 @@
- #endif
- }
-
-+#undef FMUL_FUNC
-+
-+#if defined(ARCH_CPU_X86_FAMILY)
-+#define EWMAAndMaxPower_FUNC EWMAAndMaxPower_SSE
-+#elif defined(ARCH_CPU_ARM_FAMILY) && defined(USE_NEON)
-+#define EWMAAndMaxPower_FUNC EWMAAndMaxPower_NEON
-+#endif
-+
- // Benchmark for each optimized vector_math::EWMAAndMaxPower() method.
- TEST_F(VectorMathPerfTest, EWMAAndMaxPower) {
- // Benchmark EWMAAndMaxPower_C().
-@@ -139,6 +159,9 @@
- "vector_math_ewma_and_max_power",
- "unoptimized");
- #if defined(EWMAAndMaxPower_FUNC)
-+#if defined(ARCH_CPU_X86_FAMILY)
-+ ASSERT_TRUE(base::CPU().has_sse());
-+#endif
- // Benchmark EWMAAndMaxPower_FUNC() with unaligned size.
- ASSERT_NE((kVectorSize - 1) % (vector_math::kRequiredAlignment /
- sizeof(float)), 0U);
-@@ -156,4 +179,6 @@
- #endif
- }
-
-+#undef EWMAAndMaxPower_FUNC
-+
- } // namespace media
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/media/base/vector_math_sse.cc qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/media/base/vector_math_sse.cc
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/media/base/vector_math_sse.cc 1970-01-01 01:00:00.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/media/base/vector_math_sse.cc 2018-02-18 19:00:53.302431313 +0100
-@@ -0,0 +1,118 @@
-+// Copyright 2013 The Chromium Authors. All rights reserved.
-+// Use of this source code is governed by a BSD-style license that can be
-+// found in the LICENSE file.
-+
-+#include "media/base/vector_math_testing.h"
-+
-+#include <algorithm>
-+
-+#include <xmmintrin.h> // NOLINT
-+
-+namespace media {
-+namespace vector_math {
-+
-+void FMUL_SSE(const float src[], float scale, int len, float dest[]) {
-+ const int rem = len % 4;
-+ const int last_index = len - rem;
-+ __m128 m_scale = _mm_set_ps1(scale);
-+ for (int i = 0; i < last_index; i += 4)
-+ _mm_store_ps(dest + i, _mm_mul_ps(_mm_load_ps(src + i), m_scale));
-+
-+ // Handle any remaining values that wouldn't fit in an SSE pass.
-+ for (int i = last_index; i < len; ++i)
-+ dest[i] = src[i] * scale;
-+}
-+
-+void FMAC_SSE(const float src[], float scale, int len, float dest[]) {
-+ const int rem = len % 4;
-+ const int last_index = len - rem;
-+ __m128 m_scale = _mm_set_ps1(scale);
-+ for (int i = 0; i < last_index; i += 4) {
-+ _mm_store_ps(dest + i, _mm_add_ps(_mm_load_ps(dest + i),
-+ _mm_mul_ps(_mm_load_ps(src + i), m_scale)));
-+ }
-+
-+ // Handle any remaining values that wouldn't fit in an SSE pass.
-+ for (int i = last_index; i < len; ++i)
-+ dest[i] += src[i] * scale;
-+}
-+
-+// Convenience macro to extract float 0 through 3 from the vector |a|. This is
-+// needed because compilers other than clang don't support access via
-+// operator[]().
-+#define EXTRACT_FLOAT(a, i) \
-+ (i == 0 ? \
-+ _mm_cvtss_f32(a) : \
-+ _mm_cvtss_f32(_mm_shuffle_ps(a, a, i)))
-+
-+std::pair<float, float> EWMAAndMaxPower_SSE(
-+ float initial_value, const float src[], int len, float smoothing_factor) {
-+ // When the recurrence is unrolled, we see that we can split it into 4
-+ // separate lanes of evaluation:
-+ //
-+ // y[n] = a(S[n]^2) + (1-a)(y[n-1])
-+ // = a(S[n]^2) + (1-a)^1(aS[n-1]^2) + (1-a)^2(aS[n-2]^2) + ...
-+ // = z[n] + (1-a)^1(z[n-1]) + (1-a)^2(z[n-2]) + (1-a)^3(z[n-3])
-+ //
-+ // where z[n] = a(S[n]^2) + (1-a)^4(z[n-4]) + (1-a)^8(z[n-8]) + ...
-+ //
-+ // Thus, the strategy here is to compute z[n], z[n-1], z[n-2], and z[n-3] in
-+ // each of the 4 lanes, and then combine them to give y[n].
-+
-+ const int rem = len % 4;
-+ const int last_index = len - rem;
-+
-+ const __m128 smoothing_factor_x4 = _mm_set_ps1(smoothing_factor);
-+ const float weight_prev = 1.0f - smoothing_factor;
-+ const __m128 weight_prev_x4 = _mm_set_ps1(weight_prev);
-+ const __m128 weight_prev_squared_x4 =
-+ _mm_mul_ps(weight_prev_x4, weight_prev_x4);
-+ const __m128 weight_prev_4th_x4 =
-+ _mm_mul_ps(weight_prev_squared_x4, weight_prev_squared_x4);
-+
-+ // Compute z[n], z[n-1], z[n-2], and z[n-3] in parallel in lanes 3, 2, 1 and
-+ // 0, respectively.
-+ __m128 max_x4 = _mm_setzero_ps();
-+ __m128 ewma_x4 = _mm_setr_ps(0.0f, 0.0f, 0.0f, initial_value);
-+ int i;
-+ for (i = 0; i < last_index; i += 4) {
-+ ewma_x4 = _mm_mul_ps(ewma_x4, weight_prev_4th_x4);
-+ const __m128 sample_x4 = _mm_load_ps(src + i);
-+ const __m128 sample_squared_x4 = _mm_mul_ps(sample_x4, sample_x4);
-+ max_x4 = _mm_max_ps(max_x4, sample_squared_x4);
-+ // Note: The compiler optimizes this to a single multiply-and-accumulate
-+ // instruction:
-+ ewma_x4 = _mm_add_ps(ewma_x4,
-+ _mm_mul_ps(sample_squared_x4, smoothing_factor_x4));
-+ }
-+
-+ // y[n] = z[n] + (1-a)^1(z[n-1]) + (1-a)^2(z[n-2]) + (1-a)^3(z[n-3])
-+ float ewma = EXTRACT_FLOAT(ewma_x4, 3);
-+ ewma_x4 = _mm_mul_ps(ewma_x4, weight_prev_x4);
-+ ewma += EXTRACT_FLOAT(ewma_x4, 2);
-+ ewma_x4 = _mm_mul_ps(ewma_x4, weight_prev_x4);
-+ ewma += EXTRACT_FLOAT(ewma_x4, 1);
-+ ewma_x4 = _mm_mul_ss(ewma_x4, weight_prev_x4);
-+ ewma += EXTRACT_FLOAT(ewma_x4, 0);
-+
-+ // Fold the maximums together to get the overall maximum.
-+ max_x4 = _mm_max_ps(max_x4,
-+ _mm_shuffle_ps(max_x4, max_x4, _MM_SHUFFLE(3, 3, 1, 1)));
-+ max_x4 = _mm_max_ss(max_x4, _mm_shuffle_ps(max_x4, max_x4, 2));
-+
-+ std::pair<float, float> result(ewma, EXTRACT_FLOAT(max_x4, 0));
-+
-+ // Handle remaining values at the end of |src|.
-+ for (; i < len; ++i) {
-+ result.first *= weight_prev;
-+ const float sample = src[i];
-+ const float sample_squared = sample * sample;
-+ result.first += sample_squared * smoothing_factor;
-+ result.second = std::max(result.second, sample_squared);
-+ }
-+
-+ return result;
-+}
-+
-+} // namespace vector_math
-+} // namespace media
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/media/base/vector_math_testing.h qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/media/base/vector_math_testing.h
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/media/base/vector_math_testing.h 2018-02-02 11:39:52.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/media/base/vector_math_testing.h 2018-02-18 19:00:53.302431313 +0100
-@@ -19,7 +19,7 @@
- MEDIA_EXPORT std::pair<float, float> EWMAAndMaxPower_C(
- float initial_value, const float src[], int len, float smoothing_factor);
-
--#if defined(ARCH_CPU_X86_FAMILY) && !defined(OS_NACL)
-+#if defined(ARCH_CPU_X86_FAMILY)
- MEDIA_EXPORT void FMAC_SSE(const float src[], float scale, int len,
- float dest[]);
- MEDIA_EXPORT void FMUL_SSE(const float src[], float scale, int len,
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/media/base/vector_math_unittest.cc qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/media/base/vector_math_unittest.cc
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/media/base/vector_math_unittest.cc 2018-02-02 11:39:52.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/media/base/vector_math_unittest.cc 2018-02-18 19:00:53.302431313 +0100
-@@ -9,6 +9,7 @@
- #include <memory>
-
- #include "base/macros.h"
-+#include "base/cpu.h"
- #include "base/memory/aligned_memory.h"
- #include "base/strings/string_number_conversions.h"
- #include "base/strings/stringize_macros.h"
-@@ -78,6 +79,7 @@
-
- #if defined(ARCH_CPU_X86_FAMILY)
- {
-+ ASSERT_TRUE(base::CPU().has_sse());
- SCOPED_TRACE("FMAC_SSE");
- FillTestVectors(kInputFillValue, kOutputFillValue);
- vector_math::FMAC_SSE(
-@@ -119,6 +121,7 @@
-
- #if defined(ARCH_CPU_X86_FAMILY)
- {
-+ ASSERT_TRUE(base::CPU().has_sse());
- SCOPED_TRACE("FMUL_SSE");
- FillTestVectors(kInputFillValue, kOutputFillValue);
- vector_math::FMUL_SSE(
-@@ -227,6 +230,7 @@
-
- #if defined(ARCH_CPU_X86_FAMILY)
- {
-+ ASSERT_TRUE(base::CPU().has_sse());
- SCOPED_TRACE("EWMAAndMaxPower_SSE");
- const std::pair<float, float>& result = vector_math::EWMAAndMaxPower_SSE(
- initial_value_, data_.get(), data_len_, smoothing_factor_);
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/media/BUILD.gn qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/media/BUILD.gn
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/media/BUILD.gn 2018-02-02 11:39:52.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/media/BUILD.gn 2018-02-18 19:00:53.303431298 +0100
-@@ -534,6 +534,26 @@
- "//base",
- "//ui/gfx/geometry",
- ]
-+ if (current_cpu == "x86" || current_cpu == "x64") {
-+ deps += [
-+ ":shared_memory_support_sse",
-+ ]
-+ }
-+}
-+
-+if (current_cpu == "x86" || current_cpu == "x64") {
-+ source_set("shared_memory_support_sse") {
-+ sources = [
-+ "base/vector_math_sse.cc",
-+ ]
-+ configs += [
-+ "//media:media_config",
-+ "//media:media_implementation",
-+ ]
-+ if (!is_win) {
-+ cflags = [ "-msse" ]
-+ }
-+ }
- }
-
- # TODO(watk): Refactor tests that could be made to run on Android. See
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/media/filters/wsola_internals.cc qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/media/filters/wsola_internals.cc
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/media/filters/wsola_internals.cc 2018-02-02 11:39:52.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/media/filters/wsola_internals.cc 2018-02-18 19:00:53.372430283 +0100
-@@ -15,7 +15,7 @@
- #include "base/logging.h"
- #include "media/base/audio_bus.h"
-
--#if defined(ARCH_CPU_X86_FAMILY)
-+#if defined(ARCH_CPU_X86_FAMILY) && defined(__SSE__)
- #define USE_SIMD 1
- #include <xmmintrin.h>
- #elif defined(ARCH_CPU_ARM_FAMILY) && defined(USE_NEON)
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/skia/BUILD.gn qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/skia/BUILD.gn
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/skia/BUILD.gn 2018-02-02 11:39:52.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/skia/BUILD.gn 2018-02-18 19:00:53.444429225 +0100
-@@ -257,17 +257,6 @@
- "ext/platform_canvas.h",
- ]
- }
-- if (!is_ios && (current_cpu == "x86" || current_cpu == "x64")) {
-- sources += [
-- "ext/convolver_SSE2.cc",
-- "ext/convolver_SSE2.h",
-- ]
-- } else if (current_cpu == "mipsel" && mips_dsp_rev >= 2) {
-- sources += [
-- "ext/convolver_mips_dspr2.cc",
-- "ext/convolver_mips_dspr2.h",
-- ]
-- }
-
- if (!is_fuchsia) {
- sources -= [
-@@ -522,6 +511,31 @@
- }
- }
- if (current_cpu == "x86" || current_cpu == "x64") {
-+ source_set("skia_opts_sse2") {
-+ sources = skia_opts.sse2_sources +
-+ [
-+ # Chrome-specific.
-+ "ext/convolver_SSE2.cc",
-+ "ext/convolver_SSE2.h",
-+ ]
-+ sources -= [
-+ # Detection code must not be built with -msse2
-+ "//third_party/skia/src/opts/opts_check_x86.cpp",
-+ ]
-+ if (!is_win || is_clang) {
-+ cflags = [ "-msse2" ]
-+ }
-+ if (is_win) {
-+ defines = [ "SK_CPU_SSE_LEVEL=20" ]
-+ }
-+ visibility = [ ":skia_opts" ]
-+ configs -= [ "//build/config/compiler:chromium_code" ]
-+ configs += [
-+ ":skia_config",
-+ ":skia_library_config",
-+ "//build/config/compiler:no_chromium_code",
-+ ]
-+ }
- source_set("skia_opts_sse3") {
- sources = skia_opts.ssse3_sources
- if (!is_win || is_clang) {
-@@ -626,10 +640,13 @@
- ]
-
- if (current_cpu == "x86" || current_cpu == "x64") {
-- sources = skia_opts.sse2_sources
-+ sources = [
-+ "//third_party/skia/src/opts/opts_check_x86.cpp",
-+ ]
- deps += [
- ":skia_opts_avx",
- ":skia_opts_hsw",
-+ ":skia_opts_sse2",
- ":skia_opts_sse3",
- ":skia_opts_sse41",
- ":skia_opts_sse42",
-@@ -664,6 +681,13 @@
-
- if (mips_dsp_rev >= 1) {
- sources = skia_opts.mips_dsp_sources
-+ if (mips_dsp_rev >= 2) {
-+ sources += [
-+ # Chrome-specific.
-+ "ext/convolver_mips_dspr2.cc",
-+ "ext/convolver_mips_dspr2.h",
-+ ]
-+ }
- } else {
- sources = skia_opts.none_sources
- }
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/skia/ext/convolver.cc qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/skia/ext/convolver.cc
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/skia/ext/convolver.cc 2018-02-02 11:39:52.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/skia/ext/convolver.cc 2018-02-18 19:00:53.510428253 +0100
-@@ -362,10 +362,13 @@
-
- void SetupSIMD(ConvolveProcs *procs) {
- #ifdef SIMD_SSE2
-- procs->extra_horizontal_reads = 3;
-- procs->convolve_vertically = &ConvolveVertically_SSE2;
-- procs->convolve_4rows_horizontally = &Convolve4RowsHorizontally_SSE2;
-- procs->convolve_horizontally = &ConvolveHorizontally_SSE2;
-+ base::CPU cpu;
-+ if (cpu.has_sse2()) {
-+ procs->extra_horizontal_reads = 3;
-+ procs->convolve_vertically = &ConvolveVertically_SSE2;
-+ procs->convolve_4rows_horizontally = &Convolve4RowsHorizontally_SSE2;
-+ procs->convolve_horizontally = &ConvolveHorizontally_SSE2;
-+ }
- #elif defined SIMD_MIPS_DSPR2
- procs->extra_horizontal_reads = 3;
- procs->convolve_vertically = &ConvolveVertically_mips_dspr2;
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/skia/ext/convolver.h qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/skia/ext/convolver.h
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/skia/ext/convolver.h 2018-02-02 11:39:52.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/skia/ext/convolver.h 2018-02-18 19:00:53.511428239 +0100
-@@ -11,6 +11,7 @@
- #include <vector>
-
- #include "build/build_config.h"
-+#include "base/cpu.h"
- #include "third_party/skia/include/core/SkSize.h"
- #include "third_party/skia/include/core/SkTypes.h"
-
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/third_party/angle/BUILD.gn qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/third_party/angle/BUILD.gn
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/third_party/angle/BUILD.gn 2018-02-02 11:39:52.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/third_party/angle/BUILD.gn 2018-02-18 19:00:53.511428239 +0100
-@@ -192,6 +192,26 @@
- public_deps = [
- ":angle_common",
- ]
-+
-+ if (current_cpu == "x86") {
-+ deps = [
-+ ":angle_image_util_x86_sse2",
-+ ]
-+ }
-+}
-+
-+source_set("angle_image_util_x86_sse2") {
-+ configs -= angle_undefine_configs
-+ configs += [ ":internal_config" ]
-+
-+ deps = [
-+ ":angle_common",
-+ ]
-+
-+ sources = [
-+ "src/image_util/loadimage_SSE2.cpp",
-+ ]
-+ cflags = [ "-msse2", "-mfpmath=sse" ]
- }
-
- config("angle_gpu_info_util_config") {
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/third_party/angle/src/common/mathutil.h qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/third_party/angle/src/common/mathutil.h
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/third_party/angle/src/common/mathutil.h 2018-02-02 11:39:52.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/third_party/angle/src/common/mathutil.h 2018-02-18 19:00:53.566427430 +0100
-@@ -124,9 +124,42 @@
- }
- }
-
--inline bool supportsSSE2()
-+#if defined(ANGLE_USE_SSE) && !defined(__x86_64__) && !defined(__SSE2__) && !defined(_MSC_VER)
-+
-+// From the base/cpu.cc in Chromium, to avoid depending on Chromium headers
-+
-+#if defined(__pic__) && defined(__i386__)
-+
-+static inline void __cpuid(int cpu_info[4], int info_type) {
-+ __asm__ volatile (
-+ "mov %%ebx, %%edi\n"
-+ "cpuid\n"
-+ "xchg %%edi, %%ebx\n"
-+ : "=a"(cpu_info[0]), "=D"(cpu_info[1]),
"=c"(cpu_info[2]), "=d"(cpu_info[3])
-+ : "a"(info_type)
-+ );
-+}
-+
-+#else
-+
-+static inline void __cpuid(int cpu_info[4], int info_type) {
-+ __asm__ volatile (
-+ "cpuid\n"
-+ : "=a"(cpu_info[0]), "=b"(cpu_info[1]),
"=c"(cpu_info[2]), "=d"(cpu_info[3])
-+ : "a"(info_type)
-+ );
-+}
-+
-+#endif
-+
-+#endif
-+
-+static inline bool supportsSSE2()
- {
- #if defined(ANGLE_USE_SSE)
-+#if defined(__x86_64__) || defined(__SSE2__)
-+ return true;
-+#else
- static bool checked = false;
- static bool supports = false;
-
-@@ -135,7 +168,6 @@
- return supports;
- }
-
--#if defined(ANGLE_PLATFORM_WINDOWS) && !defined(_M_ARM)
- {
- int info[4];
- __cpuid(info, 0);
-@@ -147,9 +179,9 @@
- supports = (info[3] >> 26) & 1;
- }
- }
--#endif // defined(ANGLE_PLATFORM_WINDOWS) && !defined(_M_ARM)
- checked = true;
- return supports;
-+#endif // defined(x86_64) || defined(__SSE2__)
- #else // defined(ANGLE_USE_SSE)
- return false;
- #endif
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/third_party/angle/src/common/platform.h qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/third_party/angle/src/common/platform.h
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/third_party/angle/src/common/platform.h 2018-02-02 11:39:52.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/third_party/angle/src/common/platform.h 2018-02-18 19:00:53.566427430 +0100
-@@ -87,7 +87,9 @@
- #include <intrin.h>
- #define ANGLE_USE_SSE
- #elif defined(__GNUC__) && (defined(__x86_64__) || defined(__i386__))
-+#if defined(__x86_64__) || defined(__SSE2__)
- #include <x86intrin.h>
-+#endif
- #define ANGLE_USE_SSE
- #endif
-
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/third_party/angle/src/image_util/loadimage.cpp qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/third_party/angle/src/image_util/loadimage.cpp
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/third_party/angle/src/image_util/loadimage.cpp 2018-02-02 11:39:52.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/third_party/angle/src/image_util/loadimage.cpp 2018-02-18 19:00:53.567427415 +0100
-@@ -12,9 +12,17 @@
- #include "common/platform.h"
- #include "image_util/imageformats.h"
-
-+#if defined(BUILD_ONLY_THE_SSE2_PARTS) && !defined(__SSE2__)
-+#error SSE2 parts must be built with -msse2
-+#endif
-+
- namespace angle
- {
-
-+#ifdef BUILD_ONLY_THE_SSE2_PARTS
-+namespace SSE2 {
-+#endif
-+
- void LoadA8ToRGBA8(size_t width,
- size_t height,
- size_t depth,
-@@ -28,6 +36,11 @@
- #if defined(ANGLE_USE_SSE)
- if (gl::supportsSSE2())
- {
-+#if !defined(__x86_64__) && !defined(__SSE2__)
-+ angle::SSE2::LoadA8ToRGBA8(width, height, depth, input, inputRowPitch,
-+ inputDepthPitch, output, outputRowPitch,
-+ outputDepthPitch);
-+#else
- __m128i zeroWide = _mm_setzero_si128();
-
- for (size_t z = 0; z < depth; z++)
-@@ -68,6 +81,7 @@
- }
- }
- }
-+#endif
-
- return;
- }
-@@ -89,6 +103,8 @@
- }
- }
-
-+#ifndef BUILD_ONLY_THE_SSE2_PARTS
-+
- void LoadA8ToBGRA8(size_t width,
- size_t height,
- size_t depth,
-@@ -584,6 +600,8 @@
- }
- }
-
-+#endif
-+
- void LoadRGBA8ToBGRA8(size_t width,
- size_t height,
- size_t depth,
-@@ -597,6 +615,11 @@
- #if defined(ANGLE_USE_SSE)
- if (gl::supportsSSE2())
- {
-+#if !defined(__x86_64__) && !defined(__SSE2__)
-+ angle::SSE2::LoadRGBA8ToBGRA8(width, height, depth, input,
-+ inputRowPitch, inputDepthPitch, output,
-+ outputRowPitch, outputDepthPitch);
-+#else
- __m128i brMask = _mm_set1_epi32(0x00ff00ff);
-
- for (size_t z = 0; z < depth; z++)
-@@ -641,6 +664,7 @@
- }
- }
- }
-+#endif
-
- return;
- }
-@@ -663,6 +687,8 @@
- }
- }
-
-+#ifndef BUILD_ONLY_THE_SSE2_PARTS
-+
- void LoadRGBA8ToBGRA4(size_t width,
- size_t height,
- size_t depth,
-@@ -1320,4 +1346,10 @@
- }
- }
-
-+#endif
-+
-+#ifdef BUILD_ONLY_THE_SSE2_PARTS
-+} // namespace SSE2
-+#endif
-+
- } // namespace angle
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/third_party/angle/src/image_util/loadimage.h qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/third_party/angle/src/image_util/loadimage.h
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/third_party/angle/src/image_util/loadimage.h 2018-02-02 11:39:52.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/third_party/angle/src/image_util/loadimage.h 2018-02-18 19:00:53.567427415 +0100
-@@ -651,6 +651,32 @@
- size_t outputRowPitch,
- size_t outputDepthPitch);
-
-+#if defined(__i386__)
-+namespace SSE2 {
-+
-+void LoadA8ToRGBA8(size_t width,
-+ size_t height,
-+ size_t depth,
-+ const uint8_t *input,
-+ size_t inputRowPitch,
-+ size_t inputDepthPitch,
-+ uint8_t *output,
-+ size_t outputRowPitch,
-+ size_t outputDepthPitch);
-+
-+void LoadRGBA8ToBGRA8(size_t width,
-+ size_t height,
-+ size_t depth,
-+ const uint8_t *input,
-+ size_t inputRowPitch,
-+ size_t inputDepthPitch,
-+ uint8_t *output,
-+ size_t outputRowPitch,
-+ size_t outputDepthPitch);
-+
-+}
-+#endif // defined(__i386__)
-+
- } // namespace angle
-
- #include "loadimage.inl"
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/third_party/angle/src/image_util/loadimage_SSE2.cpp qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/third_party/angle/src/image_util/loadimage_SSE2.cpp
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/third_party/angle/src/image_util/loadimage_SSE2.cpp 1970-01-01 01:00:00.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/third_party/angle/src/image_util/loadimage_SSE2.cpp 2018-02-18 19:00:53.567427415 +0100
-@@ -0,0 +1,2 @@
-+#define BUILD_ONLY_THE_SSE2_PARTS
-+#include "loadimage.cpp"
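This two-line file compiles `loadimage.cpp` a second time, with `-msse2` and with the guard macro defined, so the SSE2 bodies land in `namespace SSE2` while the baseline build keeps the dispatching copies. A condensed, self-contained analog of that macro/namespace trick (the function and value here are purely illustrative):

```cpp
// Condensed analog of the BUILD_ONLY_THE_SSE2_PARTS trick: the same source
// can be compiled into a nested namespace when the macro is defined, giving
// the runtime dispatcher a second, differently-compiled copy to call.
#define BUILD_ONLY_THE_SSE2_PARTS  // normally defined only in the *_SSE2.cpp TU

#ifdef BUILD_ONLY_THE_SSE2_PARTS
namespace SSE2 {
#endif

// Stand-in for LoadA8ToRGBA8() and friends.
int LoadPixels() { return 42; }

#ifdef BUILD_ONLY_THE_SSE2_PARTS
}  // namespace SSE2
#endif
```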
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/third_party/qcms/BUILD.gn qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/third_party/qcms/BUILD.gn
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/third_party/qcms/BUILD.gn 2018-02-02 11:39:52.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/third_party/qcms/BUILD.gn 2018-02-18 19:00:53.568427401 +0100
-@@ -34,8 +34,8 @@
- defines = []
-
- if (current_cpu == "x86" || current_cpu == "x64") {
-- defines += [ "SSE2_ENABLE" ]
-- sources += [ "src/transform-sse2.c" ]
-+ defines += [ "SSE2_ENABLE" ] # runtime detection
-+ deps = [ ":qcms_sse2" ]
- }
-
- if (use_libfuzzer) {
-@@ -99,3 +99,15 @@
- public_configs = [ ":qcms_config" ]
- }
- }
-+
-+source_set("qcms_sse2") {
-+ configs -= [ "//build/config/compiler:chromium_code" ]
-+ configs += [ "//build/config/compiler:no_chromium_code" ]
-+ public_configs = [ ":qcms_config" ]
-+
-+ if (current_cpu == "x86" || current_cpu == "x64") {
-+ defines = [ "SSE2_ENABLE" ]
-+ sources = [ "src/transform-sse2.c" ]
-+ cflags = [ "-msse2" ]
-+ }
-+}
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/third_party/WebKit/Source/modules/webaudio/AudioParamTimeline.cpp qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/third_party/WebKit/Source/modules/webaudio/AudioParamTimeline.cpp
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/third_party/WebKit/Source/modules/webaudio/AudioParamTimeline.cpp 2018-02-02 11:39:52.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/third_party/WebKit/Source/modules/webaudio/AudioParamTimeline.cpp 2018-02-18 19:00:53.599426945 +0100
-@@ -35,7 +35,7 @@
- #include "platform/wtf/MathExtras.h"
- #include "platform/wtf/PtrUtil.h"
-
--#if defined(ARCH_CPU_X86_FAMILY)
-+#if (defined(ARCH_CPU_X86) && defined(__SSE2__)) || defined(ARCH_CPU_X86_64)
- #include <emmintrin.h>
- #endif
-
-@@ -1290,7 +1290,7 @@
- size_t current_frame,
- float value,
- unsigned write_index) {
--#if defined(ARCH_CPU_X86_FAMILY)
-+#if (defined(ARCH_CPU_X86) && defined(__SSE2__)) || defined(ARCH_CPU_X86_64)
- auto number_of_values = current_state.number_of_values;
- #endif
- auto fill_to_frame = current_state.fill_to_frame;
-@@ -1303,7 +1303,7 @@
- double delta_time = time2 - time1;
- float k = delta_time > 0 ? 1 / delta_time : 0;
- const float value_delta = value2 - value1;
--#if defined(ARCH_CPU_X86_FAMILY)
-+#if (defined(ARCH_CPU_X86) && defined(__SSE2__)) || defined(ARCH_CPU_X86_64)
- if (fill_to_frame > write_index) {
- // Minimize in-loop operations. Calculate starting value and increment.
- // Next step: value += inc.
-@@ -1431,7 +1431,7 @@
- size_t current_frame,
- float value,
- unsigned write_index) {
--#if defined(ARCH_CPU_X86_FAMILY)
-+#if (defined(ARCH_CPU_X86) && defined(__SSE2__)) || defined(ARCH_CPU_X86_64)
- auto number_of_values = current_state.number_of_values;
- #endif
- auto fill_to_frame = current_state.fill_to_frame;
-@@ -1482,7 +1482,7 @@
- for (; write_index < fill_to_frame; ++write_index)
- values[write_index] = target;
- } else {
--#if defined(ARCH_CPU_X86_FAMILY)
-+#if (defined(ARCH_CPU_X86) && defined(__SSE2__)) || defined(ARCH_CPU_X86_64)
- if (fill_to_frame > write_index) {
- // Resolve recursion by expanding constants to achieve a 4-step
- // loop unrolling.
-@@ -1616,7 +1616,7 @@
- // Oversampled curve data can be provided if sharp discontinuities are
- // desired.
- unsigned k = 0;
--#if defined(ARCH_CPU_X86_FAMILY)
-+#if (defined(ARCH_CPU_X86) && defined(__SSE2__)) || defined(ARCH_CPU_X86_64)
- if (fill_to_frame > write_index) {
- const __m128 v_curve_virtual_index = _mm_set_ps1(curve_virtual_index);
- const __m128 v_curve_points_per_frame = _mm_set_ps1(curve_points_per_frame);
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/third_party/WebKit/Source/platform/audio/DirectConvolver.cpp qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/third_party/WebKit/Source/platform/audio/DirectConvolver.cpp
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/third_party/WebKit/Source/platform/audio/DirectConvolver.cpp 2018-02-02 11:39:52.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/third_party/WebKit/Source/platform/audio/DirectConvolver.cpp 2018-02-18 19:00:53.600426930 +0100
-@@ -26,6 +26,9 @@
- * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- */
-
-+// include this first to get it before the CPU() function-like macro
-+#include "base/cpu.h"
-+
- #include "platform/audio/DirectConvolver.h"
-
- #include "build/build_config.h"
-@@ -35,21 +38,48 @@
- #include <Accelerate/Accelerate.h>
- #endif
-
--#if defined(ARCH_CPU_X86_FAMILY) && !defined(OS_MACOSX)
-+#if ((defined(ARCH_CPU_X86) && defined(__SSE2__)) || defined(ARCH_CPU_X86_64)) \
-+ && !defined(OS_MACOSX)
- #include <emmintrin.h>
- #endif
-
-+#if defined(BUILD_ONLY_THE_SSE2_PARTS) && !defined(__SSE2__)
-+#error SSE2 parts must be built with -msse2
-+#endif
-+
- namespace blink {
-
- using namespace VectorMath;
-
-+#ifndef BUILD_ONLY_THE_SSE2_PARTS
-+
- DirectConvolver::DirectConvolver(size_t input_block_size)
-- : input_block_size_(input_block_size), buffer_(input_block_size * 2) {}
-+ : input_block_size_(input_block_size), buffer_(input_block_size * 2) {
-+#ifdef ARCH_CPU_X86
-+ base::CPU cpu;
-+ m_haveSSE2 = cpu.has_sse2();
-+#endif
-+}
-+
-+#endif
-
-+#ifdef BUILD_ONLY_THE_SSE2_PARTS
-+void DirectConvolver::m_ProcessSSE2(AudioFloatArray* convolution_kernel,
-+ const float* source_p,
-+ float* dest_p,
-+ size_t frames_to_process) {
-+#else
- void DirectConvolver::Process(AudioFloatArray* convolution_kernel,
- const float* source_p,
- float* dest_p,
- size_t frames_to_process) {
-+#endif
-+#if defined(ARCH_CPU_X86) && !defined(__SSE2__)
-+ if (m_haveSSE2) {
-+ m_ProcessSSE2(convolution_kernel, source_p, dest_p, frames_to_process);
-+ return;
-+ }
-+#endif
- DCHECK_EQ(frames_to_process, input_block_size_);
- if (frames_to_process != input_block_size_)
- return;
-@@ -83,7 +113,7 @@
- #endif // ARCH_CPU_X86
- #else
- size_t i = 0;
--#if defined(ARCH_CPU_X86_FAMILY)
-+#if (defined(ARCH_CPU_X86) && defined(__SSE2__)) || defined(ARCH_CPU_X86_64)
- // Convolution using SSE2. Currently only do this if both |kernelSize| and
- // |framesToProcess| are multiples of 4. If not, use the straightforward loop
- // below.
-@@ -397,7 +427,7 @@
- }
- dest_p[i++] = sum;
- }
--#if defined(ARCH_CPU_X86_FAMILY)
-+#if (defined(ARCH_CPU_X86) && defined(__SSE2__)) || defined(ARCH_CPU_X86_64)
- }
- #endif
- #endif // OS_MACOSX
-@@ -406,8 +436,12 @@
- memcpy(buffer_.Data(), input_p, sizeof(float) * frames_to_process);
- }
-
-+#ifndef BUILD_ONLY_THE_SSE2_PARTS
-+
- void DirectConvolver::Reset() {
- buffer_.Zero();
- }
-
-+#endif
-+
- } // namespace blink
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/third_party/WebKit/Source/platform/audio/DirectConvolver.h qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/third_party/WebKit/Source/platform/audio/DirectConvolver.h
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/third_party/WebKit/Source/platform/audio/DirectConvolver.h 2018-02-02 11:39:52.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/third_party/WebKit/Source/platform/audio/DirectConvolver.h 2018-02-18 19:00:53.600426930 +0100
-@@ -29,6 +29,7 @@
- #ifndef DirectConvolver_h
- #define DirectConvolver_h
-
-+#include "build/build_config.h"
- #include "platform/PlatformExport.h"
- #include "platform/audio/AudioArray.h"
- #include "platform/wtf/Allocator.h"
-@@ -54,6 +55,14 @@
- size_t input_block_size_;
-
- AudioFloatArray buffer_;
-+
-+#ifdef ARCH_CPU_X86
-+ bool m_haveSSE2;
-+ void m_ProcessSSE2(AudioFloatArray* convolution_kernel,
-+ const float* source_p,
-+ float* dest_p,
-+ size_t frames_to_process);
-+#endif
- };
-
- } // namespace blink
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/third_party/WebKit/Source/platform/audio/DirectConvolverSSE2.cpp qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/third_party/WebKit/Source/platform/audio/DirectConvolverSSE2.cpp
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/third_party/WebKit/Source/platform/audio/DirectConvolverSSE2.cpp 1970-01-01 01:00:00.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/third_party/WebKit/Source/platform/audio/DirectConvolverSSE2.cpp 2018-02-18 19:00:53.600426930 +0100
-@@ -0,0 +1,2 @@
-+#define BUILD_ONLY_THE_SSE2_PARTS
-+#include "DirectConvolver.cpp"
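[Editor's sketch, not part of the patch.] The two-line wrapper above recompiles DirectConvolver.cpp with -msse2, and the generic build dispatches to it after a runtime CPU check. A minimal, self-contained illustration of that dispatch pattern (names `Process`/`ProcessSSE2` are simplified stand-ins; the real variant lives in the separately compiled object):

```cpp
#include <cstddef>

// Stand-in for the -msse2-compiled variant (DirectConvolverSSE2.cpp in the
// real patch). Here it is just a second scalar function.
static void ProcessSSE2(const float* src, float* dst, std::size_t n) {
  for (std::size_t i = 0; i < n; ++i)
    dst[i] = src[i] * 0.5f;  // stand-in for the vectorized kernel
}

static void ProcessGeneric(const float* src, float* dst, std::size_t n) {
  for (std::size_t i = 0; i < n; ++i)
    dst[i] = src[i] * 0.5f;
}

void Process(const float* src, float* dst, std::size_t n) {
#if defined(__i386__) && !defined(__SSE2__)
  // Only the non-SSE2 x86 build pays for the runtime check; x86-64
  // guarantees SSE2, so it takes the vectorized path unconditionally.
  if (__builtin_cpu_supports("sse2")) {
    ProcessSSE2(src, dst, n);
    return;
  }
#endif
  ProcessGeneric(src, dst, n);
}
```

The design point is that the SSE2 object file is never executed on CPUs lacking SSE2, so the package can keep SSE2 speed-ups while still running on baseline i686.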
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/third_party/WebKit/Source/platform/audio/SincResampler.cpp qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/third_party/WebKit/Source/platform/audio/SincResampler.cpp
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/third_party/WebKit/Source/platform/audio/SincResampler.cpp 2018-02-02 11:39:52.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/third_party/WebKit/Source/platform/audio/SincResampler.cpp 2018-02-18 19:00:53.601426915 +0100
-@@ -26,16 +26,23 @@
- * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- */
-
-+// include this first to get it before the CPU() function-like macro
-+#include "base/cpu.h"
-+
- #include "platform/audio/SincResampler.h"
-
- #include "build/build_config.h"
- #include "platform/audio/AudioBus.h"
- #include "platform/wtf/MathExtras.h"
-
--#if defined(ARCH_CPU_X86_FAMILY)
-+#if (defined(ARCH_CPU_X86) && defined(__SSE2__)) || defined(ARCH_CPU_X86_64)
- #include <emmintrin.h>
- #endif
-
-+#if defined(BUILD_ONLY_THE_SSE2_PARTS) && !defined(__SSE2__)
-+#error SSE2 parts must be built with -msse2
-+#endif
-+
- // Input buffer layout, dividing the total buffer into regions (r0 - r5):
- //
- // |----------------|-----------------------------------------|----------------|
-@@ -67,6 +74,8 @@
-
- namespace blink {
-
-+#ifndef BUILD_ONLY_THE_SSE2_PARTS
-+
- SincResampler::SincResampler(double scale_factor,
- unsigned kernel_size,
- unsigned number_of_kernel_offsets)
-@@ -82,6 +91,10 @@
- source_frames_available_(0),
- source_provider_(nullptr),
- is_buffer_primed_(false) {
-+#ifdef ARCH_CPU_X86
-+ base::CPU cpu;
-+ m_haveSSE2 = cpu.has_sse2();
-+#endif
- InitializeKernel();
- }
-
-@@ -205,9 +218,23 @@
- }
- }
-
-+#endif
-+
-+#ifdef BUILD_ONLY_THE_SSE2_PARTS
-+void SincResampler::m_ProcessSSE2(AudioSourceProvider* source_provider,
-+ float* destination,
-+ size_t frames_to_process) {
-+#else
- void SincResampler::Process(AudioSourceProvider* source_provider,
- float* destination,
- size_t frames_to_process) {
-+#endif
-+#if defined(ARCH_CPU_X86) && !defined(__SSE2__)
-+ if (m_haveSSE2) {
-+ m_ProcessSSE2(source_provider, destination, frames_to_process);
-+ return;
-+ }
-+#endif
- bool is_good = source_provider && block_size_ > kernel_size_ &&
- input_buffer_.size() >= block_size_ + kernel_size_ &&
- !(kernel_size_ % 2);
-@@ -276,7 +303,7 @@
- {
- float input;
-
--#if defined(ARCH_CPU_X86_FAMILY)
-+#if (defined(ARCH_CPU_X86) && defined(__SSE2__)) || defined(ARCH_CPU_X86_64)
- // If the sourceP address is not 16-byte aligned, the first several
- // frames (at most three) should be processed seperately.
- while ((reinterpret_cast<uintptr_t>(input_p) & 0x0F) && n) {
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/third_party/WebKit/Source/platform/audio/SincResampler.h qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/third_party/WebKit/Source/platform/audio/SincResampler.h
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/third_party/WebKit/Source/platform/audio/SincResampler.h 2018-02-02 11:39:52.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/third_party/WebKit/Source/platform/audio/SincResampler.h 2018-02-18 19:00:53.601426915 +0100
-@@ -29,6 +29,7 @@
- #ifndef SincResampler_h
- #define SincResampler_h
-
-+#include "build/build_config.h"
- #include "platform/PlatformExport.h"
- #include "platform/audio/AudioArray.h"
- #include "platform/audio/AudioSourceProvider.h"
-@@ -96,6 +97,14 @@
-
- // The buffer is primed once at the very beginning of processing.
- bool is_buffer_primed_;
-+
-+#ifdef ARCH_CPU_X86
-+ private:
-+ bool m_haveSSE2;
-+ void m_ProcessSSE2(AudioSourceProvider*,
-+ float* destination,
-+ size_t frames_to_process);
-+#endif
- };
-
- } // namespace blink
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/third_party/WebKit/Source/platform/audio/SincResamplerSSE2.cpp qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/third_party/WebKit/Source/platform/audio/SincResamplerSSE2.cpp
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/third_party/WebKit/Source/platform/audio/SincResamplerSSE2.cpp 1970-01-01 01:00:00.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/third_party/WebKit/Source/platform/audio/SincResamplerSSE2.cpp 2018-02-18 19:00:53.601426915 +0100
-@@ -0,0 +1,2 @@
-+#define BUILD_ONLY_THE_SSE2_PARTS
-+#include "SincResampler.cpp"
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/third_party/WebKit/Source/platform/audio/VectorMath.cpp qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/third_party/WebKit/Source/platform/audio/VectorMath.cpp
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/third_party/WebKit/Source/platform/audio/VectorMath.cpp 2018-02-02 11:39:52.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/third_party/WebKit/Source/platform/audio/VectorMath.cpp 2018-02-18 19:00:53.602426901 +0100
-@@ -23,6 +23,9 @@
- * DAMAGE.
- */
-
-+// include this first to get it before the CPU() function-like macro
-+#include "base/cpu.h"
-+
- #include "platform/audio/VectorMath.h"
-
- #include <stdint.h>
-@@ -35,10 +38,14 @@
- #include <Accelerate/Accelerate.h>
- #endif
-
--#if defined(ARCH_CPU_X86_FAMILY)
-+#if (defined(ARCH_CPU_X86) && defined(__SSE2__)) || defined(ARCH_CPU_X86_64)
- #include <emmintrin.h>
- #endif
-
-+#if defined(BUILD_ONLY_THE_SSE2_PARTS) && !defined(__SSE2__)
-+#error SSE2 parts must be built with -msse2
-+#endif
-+
- #if WTF_CPU_ARM_NEON
- #include <arm_neon.h>
- #endif
-@@ -170,15 +177,30 @@
- }
- #else
-
-+#ifdef BUILD_ONLY_THE_SSE2_PARTS
-+namespace SSE2 {
-+#endif
-+
-+#if defined(ARCH_CPU_X86) && !defined(__SSE2__)
-+static base::CPU cpu;
-+#endif
-+
- void Vsma(const float* source_p,
- int source_stride,
- const float* scale,
- float* dest_p,
- int dest_stride,
- size_t frames_to_process) {
-+#if defined(ARCH_CPU_X86) && !defined(__SSE2__)
-+ if (cpu.has_sse2()) {
-+ blink::VectorMath::SSE2::Vsma(source_p, source_stride, scale, dest_p,
-+ dest_stride, frames_to_process);
-+ return;
-+ }
-+#endif
- int n = frames_to_process;
-
--#if defined(ARCH_CPU_X86_FAMILY)
-+#if (defined(ARCH_CPU_X86) && defined(__SSE2__)) || defined(ARCH_CPU_X86_64)
- if ((source_stride == 1) && (dest_stride == 1)) {
- float k = *scale;
-
-@@ -274,9 +296,16 @@
- float* dest_p,
- int dest_stride,
- size_t frames_to_process) {
-+#if defined(ARCH_CPU_X86) && !defined(__SSE2__)
-+ if (cpu.has_sse2()) {
-+ blink::VectorMath::SSE2::Vsmul(source_p, source_stride, scale, dest_p,
-+ dest_stride, frames_to_process);
-+ return;
-+ }
-+#endif
- int n = frames_to_process;
-
--#if defined(ARCH_CPU_X86_FAMILY)
-+#if (defined(ARCH_CPU_X86) && defined(__SSE2__)) || defined(ARCH_CPU_X86_64)
- if ((source_stride == 1) && (dest_stride == 1)) {
- float k = *scale;
-
-@@ -365,7 +394,7 @@
- source_p += source_stride;
- dest_p += dest_stride;
- }
--#if defined(ARCH_CPU_X86_FAMILY)
-+#if (defined(ARCH_CPU_X86) && defined(__SSE2__)) || defined(ARCH_CPU_X86_64)
- }
- #endif
- }
-@@ -377,9 +406,17 @@
- float* dest_p,
- int dest_stride,
- size_t frames_to_process) {
-+#if defined(ARCH_CPU_X86) && !defined(__SSE2__)
-+ if (cpu.has_sse2()) {
-+ blink::VectorMath::SSE2::Vadd(source1p, source_stride1, source2p,
-+ source_stride2, dest_p, dest_stride,
-+ frames_to_process);
-+ return;
-+ }
-+#endif
- int n = frames_to_process;
-
--#if defined(ARCH_CPU_X86_FAMILY)
-+#if (defined(ARCH_CPU_X86) && defined(__SSE2__)) || defined(ARCH_CPU_X86_64)
- if ((source_stride1 == 1) && (source_stride2 == 1) && (dest_stride == 1)) {
- // If the sourceP address is not 16-byte aligned, the first several frames
- // (at most three) should be processed separately.
-@@ -506,7 +543,7 @@
- source2p += source_stride2;
- dest_p += dest_stride;
- }
--#if defined(ARCH_CPU_X86_FAMILY)
-+#if (defined(ARCH_CPU_X86) && defined(__SSE2__)) || defined(ARCH_CPU_X86_64)
- }
- #endif
- }
-@@ -518,9 +555,17 @@
- float* dest_p,
- int dest_stride,
- size_t frames_to_process) {
-+#if defined(ARCH_CPU_X86) && !defined(__SSE2__)
-+ if (cpu.has_sse2()) {
-+ blink::VectorMath::SSE2::Vmul(source1p, source_stride1, source2p,
-+ source_stride2, dest_p, dest_stride,
-+ frames_to_process);
-+ return;
-+ }
-+#endif
- int n = frames_to_process;
-
--#if defined(ARCH_CPU_X86_FAMILY)
-+#if (defined(ARCH_CPU_X86) && defined(__SSE2__)) || defined(ARCH_CPU_X86_64)
- if ((source_stride1 == 1) && (source_stride2 == 1) && (dest_stride == 1)) {
- // If the source1P address is not 16-byte aligned, the first several frames
- // (at most three) should be processed separately.
-@@ -619,8 +664,15 @@
- float* real_dest_p,
- float* imag_dest_p,
- size_t frames_to_process) {
-+#if defined(ARCH_CPU_X86) && !defined(__SSE2__)
-+ if (cpu.has_sse2()) {
-+ blink::VectorMath::SSE2::Zvmul(real1p, imag1p, real2p, imag2p, real_dest_p,
-+ imag_dest_p, frames_to_process);
-+ return;
-+ }
-+#endif
- unsigned i = 0;
--#if defined(ARCH_CPU_X86_FAMILY)
-+#if (defined(ARCH_CPU_X86) && defined(__SSE2__)) || defined(ARCH_CPU_X86_64)
- // Only use the SSE optimization in the very common case that all addresses
- // are 16-byte aligned. Otherwise, fall through to the scalar code below.
- if (!(reinterpret_cast<uintptr_t>(real1p) & 0x0F) &&
-@@ -676,10 +728,17 @@
- int source_stride,
- float* sum_p,
- size_t frames_to_process) {
-+#if defined(ARCH_CPU_X86) && !defined(__SSE2__)
-+ if (cpu.has_sse2()) {
-+ blink::VectorMath::SSE2::Vsvesq(source_p, source_stride, sum_p,
-+ frames_to_process);
-+ return;
-+ }
-+#endif
- int n = frames_to_process;
- float sum = 0;
-
--#if defined(ARCH_CPU_X86_FAMILY)
-+#if (defined(ARCH_CPU_X86) && defined(__SSE2__)) || defined(ARCH_CPU_X86_64)
- if (source_stride == 1) {
- // If the sourceP address is not 16-byte aligned, the first several frames
- // (at most three) should be processed separately.
-@@ -745,10 +804,17 @@
- int source_stride,
- float* max_p,
- size_t frames_to_process) {
-+#if defined(ARCH_CPU_X86) && !defined(__SSE2__)
-+ if (cpu.has_sse2()) {
-+ blink::VectorMath::SSE2::Vmaxmgv(source_p, source_stride, max_p,
-+ frames_to_process);
-+ return;
-+ }
-+#endif
- int n = frames_to_process;
- float max = 0;
-
--#if defined(ARCH_CPU_X86_FAMILY)
-+#if (defined(ARCH_CPU_X86) && defined(__SSE2__)) || defined(ARCH_CPU_X86_64)
- if (source_stride == 1) {
- // If the sourceP address is not 16-byte aligned, the first several frames
- // (at most three) should be processed separately.
-@@ -837,6 +903,8 @@
- *max_p = max;
- }
-
-+#ifndef BUILD_ONLY_THE_SSE2_PARTS
-+
- void Vclip(const float* source_p,
- int source_stride,
- const float* low_threshold_p,
-@@ -894,6 +962,12 @@
- }
- }
-
-+#endif
-+
-+#ifdef BUILD_ONLY_THE_SSE2_PARTS
-+} // namespace SSE2
-+#endif
-+
- #endif // defined(OS_MACOSX)
-
- } // namespace VectorMath
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/third_party/WebKit/Source/platform/audio/VectorMath.h qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/third_party/WebKit/Source/platform/audio/VectorMath.h
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/third_party/WebKit/Source/platform/audio/VectorMath.h 2018-02-02 11:39:52.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/third_party/WebKit/Source/platform/audio/VectorMath.h 2018-02-18 19:00:53.602426901 +0100
-@@ -27,6 +27,7 @@
- #define VectorMath_h
-
- #include <cstddef>
-+#include "build/build_config.h"
- #include "platform/PlatformExport.h"
- #include "platform/wtf/build_config.h"
-
-@@ -97,6 +98,62 @@
- int dest_stride,
- size_t frames_to_process);
-
-+#ifdef ARCH_CPU_X86
-+namespace SSE2 {
-+// Vector scalar multiply and then add.
-+PLATFORM_EXPORT void Vsma(const float* source_p,
-+ int source_stride,
-+ const float* scale,
-+ float* dest_p,
-+ int dest_stride,
-+ size_t frames_to_process);
-+
-+PLATFORM_EXPORT void Vsmul(const float* source_p,
-+ int source_stride,
-+ const float* scale,
-+ float* dest_p,
-+ int dest_stride,
-+ size_t frames_to_process);
-+PLATFORM_EXPORT void Vadd(const float* source1p,
-+ int source_stride1,
-+ const float* source2p,
-+ int source_stride2,
-+ float* dest_p,
-+ int dest_stride,
-+ size_t frames_to_process);
-+
-+// Finds the maximum magnitude of a float vector.
-+PLATFORM_EXPORT void Vmaxmgv(const float* source_p,
-+ int source_stride,
-+ float* max_p,
-+ size_t frames_to_process);
-+
-+// Sums the squares of a float vector's elements.
-+PLATFORM_EXPORT void Vsvesq(const float* source_p,
-+ int source_stride,
-+ float* sum_p,
-+ size_t frames_to_process);
-+
-+// For an element-by-element multiply of two float vectors.
-+PLATFORM_EXPORT void Vmul(const float* source1p,
-+ int source_stride1,
-+ const float* source2p,
-+ int source_stride2,
-+ float* dest_p,
-+ int dest_stride,
-+ size_t frames_to_process);
-+
-+// Multiplies two complex vectors.
-+PLATFORM_EXPORT void Zvmul(const float* real1p,
-+ const float* imag1p,
-+ const float* real2p,
-+ const float* imag2p,
-+ float* real_dest_p,
-+ float* imag_dest_p,
-+ size_t frames_to_process);
-+}
-+#endif
-+
- } // namespace VectorMath
- } // namespace blink
-
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/third_party/WebKit/Source/platform/audio/VectorMathSSE2.cpp qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/third_party/WebKit/Source/platform/audio/VectorMathSSE2.cpp
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/third_party/WebKit/Source/platform/audio/VectorMathSSE2.cpp 1970-01-01 01:00:00.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/third_party/WebKit/Source/platform/audio/VectorMathSSE2.cpp 2018-02-18 19:00:53.602426901 +0100
-@@ -0,0 +1,2 @@
-+#define BUILD_ONLY_THE_SSE2_PARTS
-+#include "VectorMath.cpp"
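[Editor's sketch, not part of the patch.] Several VectorMath hunks above carry the comment that "the first several frames (at most three) should be processed separately" when the source pointer is not 16-byte aligned. A hedged, portable illustration of that prologue/main-loop/epilogue structure (plain scalar code stands in for the `_mm_load_ps`/`_mm_mul_ps` body; `Scale` is an invented name):

```cpp
#include <cstddef>
#include <cstdint>

void Scale(const float* src, float* dst, float k, std::size_t n) {
  std::size_t i = 0;
  // Prologue: a float pointer needs at most 3 scalar steps to reach
  // 16-byte alignment, mirroring the "(at most three)" comment.
  while ((reinterpret_cast<std::uintptr_t>(src + i) & 0x0F) && i < n) {
    dst[i] = src[i] * k;
    ++i;
  }
  // Main loop: groups of 4 floats, one 128-bit SSE2 register's worth.
  for (; i + 4 <= n; i += 4)
    for (std::size_t j = 0; j < 4; ++j)
      dst[i + j] = src[i + j] * k;
  // Epilogue: remaining tail frames.
  for (; i < n; ++i)
    dst[i] = src[i] * k;
}
```

Aligned loads are faster than unaligned ones on the CPUs this patch targets, which is why the real code pays for the short scalar prologue.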
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/third_party/WebKit/Source/platform/BUILD.gn qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/third_party/WebKit/Source/platform/BUILD.gn
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/third_party/WebKit/Source/platform/BUILD.gn 2018-02-02 11:39:52.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/third_party/WebKit/Source/platform/BUILD.gn 2018-02-18 19:00:53.603426886 +0100
-@@ -1695,6 +1695,10 @@
- deps += [ ":blink_x86_sse" ]
- }
-
-+ if (current_cpu == "x86") {
-+ deps += [ ":blink_x86_sse2" ]
-+ }
-+
- if (use_webaudio_ffmpeg) {
- include_dirs += [ "//third_party/ffmpeg" ]
- deps += [ "//third_party/ffmpeg" ]
-@@ -2142,6 +2146,23 @@
- }
- }
-
-+if (current_cpu == "x86") {
-+ source_set("blink_x86_sse2") {
-+ sources = [
-+ "audio/DirectConvolverSSE2.cpp",
-+ "audio/SincResamplerSSE2.cpp",
-+ "audio/VectorMathSSE2.cpp",
-+ ]
-+ cflags = [ "-msse2", "-mfpmath=sse" ]
-+ configs += [
-+ # TODO(jschuh): crbug.com/167187 fix size_t to int truncations.
-+ "//build/config/compiler:no_size_t_to_int_warning",
-+ "//third_party/WebKit/Source:config",
-+ "//third_party/WebKit/Source:non_test_config",
-+ ]
-+ }
-+}
-+
- # This source set is used for fuzzers that need an environment similar to unit
- # tests.
- source_set("blink_fuzzer_test_support") {
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/third_party/WebKit/Source/platform/graphics/cpu/x86/WebGLImageConversionSSE.h qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/third_party/WebKit/Source/platform/graphics/cpu/x86/WebGLImageConversionSSE.h
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/third_party/WebKit/Source/platform/graphics/cpu/x86/WebGLImageConversionSSE.h 2018-02-02 11:39:52.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/third_party/WebKit/Source/platform/graphics/cpu/x86/WebGLImageConversionSSE.h 2018-02-18 19:00:53.603426886 +0100
-@@ -7,7 +7,7 @@
-
- #include "build/build_config.h"
-
--#if defined(ARCH_CPU_X86_FAMILY)
-+#if (defined(ARCH_CPU_X86) && defined(__SSE2__)) || defined(ARCH_CPU_X86_64)
- #include <emmintrin.h>
-
- namespace blink {
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/third_party/WebKit/Source/platform/graphics/gpu/WebGLImageConversion.cpp qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/third_party/WebKit/Source/platform/graphics/gpu/WebGLImageConversion.cpp
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/third_party/WebKit/Source/platform/graphics/gpu/WebGLImageConversion.cpp 2018-02-02 11:39:52.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/third_party/WebKit/Source/platform/graphics/gpu/WebGLImageConversion.cpp 2018-02-18 19:00:53.604426871 +0100
-@@ -444,7 +444,7 @@
- const uint32_t* source32 = reinterpret_cast_ptr<const uint32_t*>(source);
- uint32_t* destination32 = reinterpret_cast_ptr<uint32_t*>(destination);
-
--#if defined(ARCH_CPU_X86_FAMILY)
-+#if (defined(ARCH_CPU_X86) && defined(__SSE2__)) || defined(ARCH_CPU_X86_64)
- SIMD::UnpackOneRowOfBGRA8LittleToRGBA8(source32, destination32,
- pixels_per_row);
- #endif
-@@ -472,7 +472,7 @@
- const uint16_t* source,
- uint8_t* destination,
- unsigned pixels_per_row) {
--#if defined(ARCH_CPU_X86_FAMILY)
-+#if (defined(ARCH_CPU_X86) && defined(__SSE2__)) || defined(ARCH_CPU_X86_64)
- SIMD::UnpackOneRowOfRGBA5551LittleToRGBA8(source, destination,
- pixels_per_row);
- #endif
-@@ -502,7 +502,7 @@
- const uint16_t* source,
- uint8_t* destination,
- unsigned pixels_per_row) {
--#if defined(ARCH_CPU_X86_FAMILY)
-+#if (defined(ARCH_CPU_X86) && defined(__SSE2__)) || defined(ARCH_CPU_X86_64)
- SIMD::UnpackOneRowOfRGBA4444LittleToRGBA8(source, destination,
- pixels_per_row);
- #endif
-@@ -718,7 +718,7 @@
- uint8_t>(const uint8_t* source,
- uint8_t* destination,
- unsigned pixels_per_row) {
--#if defined(ARCH_CPU_X86_FAMILY)
-+#if (defined(ARCH_CPU_X86) && defined(__SSE2__)) || defined(ARCH_CPU_X86_64)
- SIMD::PackOneRowOfRGBA8LittleToR8(source, destination, pixels_per_row);
- #endif
- #if HAVE(MIPS_MSA_INTRINSICS)
-@@ -775,7 +775,7 @@
- uint8_t>(const uint8_t* source,
- uint8_t* destination,
- unsigned pixels_per_row) {
--#if defined(ARCH_CPU_X86_FAMILY)
-+#if (defined(ARCH_CPU_X86) && defined(__SSE2__)) || defined(ARCH_CPU_X86_64)
- SIMD::PackOneRowOfRGBA8LittleToRA8(source, destination, pixels_per_row);
- #endif
- #if HAVE(MIPS_MSA_INTRINSICS)
-@@ -887,7 +887,7 @@
- uint8_t>(const uint8_t* source,
- uint8_t* destination,
- unsigned pixels_per_row) {
--#if defined(ARCH_CPU_X86_FAMILY)
-+#if (defined(ARCH_CPU_X86) && defined(__SSE2__)) || defined(ARCH_CPU_X86_64)
- SIMD::PackOneRowOfRGBA8LittleToRGBA8(source, destination, pixels_per_row);
- #endif
- #if HAVE(MIPS_MSA_INTRINSICS)
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/third_party/webrtc/common_audio/real_fourier.cc qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/third_party/webrtc/common_audio/real_fourier.cc
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/third_party/webrtc/common_audio/real_fourier.cc 2018-02-02 11:39:52.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/third_party/webrtc/common_audio/real_fourier.cc 2018-02-18 19:00:53.605426856 +0100
-@@ -14,6 +14,7 @@
- #include "webrtc/common_audio/real_fourier_openmax.h"
- #include "webrtc/common_audio/signal_processing/include/signal_processing_library.h"
- #include "webrtc/rtc_base/checks.h"
-+#include "webrtc/system_wrappers/include/cpu_features_wrapper.h"
-
- namespace webrtc {
-
-@@ -23,7 +24,15 @@
-
- std::unique_ptr<RealFourier> RealFourier::Create(int fft_order) {
- #if defined(RTC_USE_OPENMAX_DL)
-+#if defined(WEBRTC_ARCH_X86_FAMILY) && !defined(__SSE2__)
-+ // x86 CPU detection required.
-+ if (WebRtc_GetCPUInfo(kSSE2))
-+ return std::unique_ptr<RealFourier>(new RealFourierOpenmax(fft_order));
-+ else
-+ return std::unique_ptr<RealFourier>(new RealFourierOoura(fft_order));
-+#else
- return std::unique_ptr<RealFourier>(new RealFourierOpenmax(fft_order));
-+#endif
- #else
- return std::unique_ptr<RealFourier>(new RealFourierOoura(fft_order));
- #endif
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/third_party/webrtc/modules/audio_processing/aec3/adaptive_fir_filter.cc qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/third_party/webrtc/modules/audio_processing/aec3/adaptive_fir_filter.cc
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/third_party/webrtc/modules/audio_processing/aec3/adaptive_fir_filter.cc 2018-02-02 11:39:52.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/third_party/webrtc/modules/audio_processing/aec3/adaptive_fir_filter.cc 2018-02-18 19:00:53.644426283 +0100
-@@ -14,7 +14,7 @@
- #include <arm_neon.h>
- #endif
- #include "webrtc/typedefs.h"
--#if defined(WEBRTC_ARCH_X86_FAMILY)
-+#if defined(WEBRTC_ARCH_X86_64) || (defined(WEBRTC_ARCH_X86) && defined(__SSE2__))
- #include <emmintrin.h>
- #endif
- #include <algorithm>
-@@ -59,7 +59,7 @@
- }
- #endif
-
--#if defined(WEBRTC_ARCH_X86_FAMILY)
-+#if defined(WEBRTC_ARCH_X86_64) || (defined(WEBRTC_ARCH_X86) && defined(__SSE2__))
- // Computes and stores the frequency response of the filter.
- void UpdateFrequencyResponse_SSE2(
- rtc::ArrayView<const FftData> H,
-@@ -111,7 +111,7 @@
- }
- #endif
-
--#if defined(WEBRTC_ARCH_X86_FAMILY)
-+#if defined(WEBRTC_ARCH_X86_64) || (defined(WEBRTC_ARCH_X86) && defined(__SSE2__))
- // Computes and stores the echo return loss estimate of the filter, which is the
- // sum of the partition frequency responses.
- void UpdateErlEstimator_SSE2(
-@@ -204,7 +204,7 @@
- }
- #endif
-
--#if defined(WEBRTC_ARCH_X86_FAMILY)
-+#if defined(WEBRTC_ARCH_X86_64) || (defined(WEBRTC_ARCH_X86) && defined(__SSE2__))
- // Adapts the filter partitions. (SSE2 variant)
- void AdaptPartitions_SSE2(const RenderBuffer& render_buffer,
- const FftData& G,
-@@ -345,7 +345,7 @@
- }
- #endif
-
--#if defined(WEBRTC_ARCH_X86_FAMILY)
-+#if defined(WEBRTC_ARCH_X86_64) || (defined(WEBRTC_ARCH_X86) && defined(__SSE2__))
- // Produces the filter output (SSE2 variant).
- void ApplyFilter_SSE2(const RenderBuffer& render_buffer,
- rtc::ArrayView<const FftData> H,
-@@ -445,7 +445,7 @@
- FftData* S) const {
- RTC_DCHECK(S);
- switch (optimization_) {
--#if defined(WEBRTC_ARCH_X86_FAMILY)
-+#if defined(WEBRTC_ARCH_X86_64) || (defined(WEBRTC_ARCH_X86) && defined(__SSE2__))
- case Aec3Optimization::kSse2:
- aec3::ApplyFilter_SSE2(render_buffer, H_, S);
- break;
-@@ -464,7 +464,7 @@
- const FftData& G) {
- // Adapt the filter.
- switch (optimization_) {
--#if defined(WEBRTC_ARCH_X86_FAMILY)
-+#if defined(WEBRTC_ARCH_X86_64) || (defined(WEBRTC_ARCH_X86) && defined(__SSE2__))
- case Aec3Optimization::kSse2:
- aec3::AdaptPartitions_SSE2(render_buffer, G, H_);
- break;
-@@ -483,7 +483,7 @@
-
- // Update the frequency response and echo return loss for the filter.
- switch (optimization_) {
--#if defined(WEBRTC_ARCH_X86_FAMILY)
-+#if defined(WEBRTC_ARCH_X86_64) || (defined(WEBRTC_ARCH_X86) && defined(__SSE2__))
- case Aec3Optimization::kSse2:
- aec3::UpdateFrequencyResponse_SSE2(H_, &H2_);
- aec3::UpdateErlEstimator_SSE2(H2_, &erl_);
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/third_party/webrtc/modules/audio_processing/aec3/adaptive_fir_filter.h qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/third_party/webrtc/modules/audio_processing/aec3/adaptive_fir_filter.h
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/third_party/webrtc/modules/audio_processing/aec3/adaptive_fir_filter.h 2018-02-02 11:39:52.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/third_party/webrtc/modules/audio_processing/aec3/adaptive_fir_filter.h 2018-02-18 19:00:53.644426283 +0100
-@@ -34,7 +34,7 @@
- rtc::ArrayView<const FftData> H,
- std::vector<std::array<float, kFftLengthBy2Plus1>>* H2);
- #endif
--#if defined(WEBRTC_ARCH_X86_FAMILY)
-+#if defined(WEBRTC_ARCH_X86_64) || (defined(WEBRTC_ARCH_X86) && defined(__SSE2__))
- void UpdateFrequencyResponse_SSE2(
- rtc::ArrayView<const FftData> H,
- std::vector<std::array<float, kFftLengthBy2Plus1>>* H2);
-@@ -50,7 +50,7 @@
- const std::vector<std::array<float, kFftLengthBy2Plus1>>& H2,
- std::array<float, kFftLengthBy2Plus1>* erl);
- #endif
--#if defined(WEBRTC_ARCH_X86_FAMILY)
-+#if defined(WEBRTC_ARCH_X86_64) || (defined(WEBRTC_ARCH_X86) && defined(__SSE2__))
- void UpdateErlEstimator_SSE2(
- const std::vector<std::array<float, kFftLengthBy2Plus1>>& H2,
- std::array<float, kFftLengthBy2Plus1>* erl);
-@@ -65,7 +65,7 @@
- const FftData& G,
- rtc::ArrayView<FftData> H);
- #endif
--#if defined(WEBRTC_ARCH_X86_FAMILY)
-+#if defined(WEBRTC_ARCH_X86_64) || (defined(WEBRTC_ARCH_X86) && defined(__SSE2__))
- void AdaptPartitions_SSE2(const RenderBuffer& render_buffer,
- const FftData& G,
- rtc::ArrayView<FftData> H);
-@@ -80,7 +80,7 @@
- rtc::ArrayView<const FftData> H,
- FftData* S);
- #endif
--#if defined(WEBRTC_ARCH_X86_FAMILY)
-+#if defined(WEBRTC_ARCH_X86_64) || (defined(WEBRTC_ARCH_X86) && defined(__SSE2__))
- void ApplyFilter_SSE2(const RenderBuffer& render_buffer,
- rtc::ArrayView<const FftData> H,
- FftData* S);
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/third_party/webrtc/modules/audio_processing/aec3/adaptive_fir_filter_unittest.cc qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/third_party/webrtc/modules/audio_processing/aec3/adaptive_fir_filter_unittest.cc
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/third_party/webrtc/modules/audio_processing/aec3/adaptive_fir_filter_unittest.cc 2018-02-02 11:39:52.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/third_party/webrtc/modules/audio_processing/aec3/adaptive_fir_filter_unittest.cc 2018-02-18 19:00:53.644426283 +0100
-@@ -15,7 +15,7 @@
- #include <numeric>
- #include <string>
- #include "webrtc/typedefs.h"
--#if defined(WEBRTC_ARCH_X86_FAMILY)
-+#if defined(WEBRTC_ARCH_X86_64) || (defined(WEBRTC_ARCH_X86) &&
defined(__SSE2__))
- #include <emmintrin.h>
- #endif
- #include "webrtc/modules/audio_processing/aec3/aec3_fft.h"
-@@ -147,7 +147,7 @@
-
- #endif
-
--#if defined(WEBRTC_ARCH_X86_FAMILY)
-+#if defined(WEBRTC_ARCH_X86_64) || (defined(WEBRTC_ARCH_X86) && defined(__SSE2__))
- // Verifies that the optimized methods for filter adaptation are bitexact to
- // their reference counterparts.
- TEST(AdaptiveFirFilter, FilterAdaptationSse2Optimizations) {
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/third_party/webrtc/modules/audio_processing/aec3/aec3_common.cc qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/third_party/webrtc/modules/audio_processing/aec3/aec3_common.cc
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/third_party/webrtc/modules/audio_processing/aec3/aec3_common.cc 2018-02-02 11:39:52.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/third_party/webrtc/modules/audio_processing/aec3/aec3_common.cc 2018-02-18 19:00:53.645426268 +0100
-@@ -16,10 +16,8 @@
- namespace webrtc {
-
- Aec3Optimization DetectOptimization() {
--#if defined(WEBRTC_ARCH_X86_FAMILY)
-- if (WebRtc_GetCPUInfo(kSSE2) != 0) {
-- return Aec3Optimization::kSse2;
-- }
-+#if defined(WEBRTC_ARCH_X86_64) || (defined(WEBRTC_ARCH_X86) && defined(__SSE2__))
-+ return Aec3Optimization::kSse2;
- #endif
-
- #if defined(WEBRTC_HAS_NEON)
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/third_party/webrtc/modules/audio_processing/aec3/comfort_noise_generator.cc qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/third_party/webrtc/modules/audio_processing/aec3/comfort_noise_generator.cc
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/third_party/webrtc/modules/audio_processing/aec3/comfort_noise_generator.cc 2018-02-02 11:39:52.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/third_party/webrtc/modules/audio_processing/aec3/comfort_noise_generator.cc 2018-02-18 19:00:53.645426268 +0100
-@@ -11,7 +11,7 @@
- #include "webrtc/modules/audio_processing/aec3/comfort_noise_generator.h"
-
- #include "webrtc/typedefs.h"
--#if defined(WEBRTC_ARCH_X86_FAMILY)
-+#if defined(WEBRTC_ARCH_X86_64) || (defined(WEBRTC_ARCH_X86) && defined(__SSE2__))
- #include <emmintrin.h>
- #endif
- #include <math.h>
-@@ -38,7 +38,7 @@
-
- namespace aec3 {
-
--#if defined(WEBRTC_ARCH_X86_FAMILY)
-+#if defined(WEBRTC_ARCH_X86_64) || (defined(WEBRTC_ARCH_X86) && defined(__SSE2__))
-
- void EstimateComfortNoise_SSE2(const std::array<float, kFftLengthBy2Plus1>& N2,
- uint32_t* seed,
-@@ -204,7 +204,7 @@
- N2_initial_ ? *N2_initial_ : N2_;
-
- switch (optimization_) {
--#if defined(WEBRTC_ARCH_X86_FAMILY)
-+#if defined(WEBRTC_ARCH_X86_64) || (defined(WEBRTC_ARCH_X86) && defined(__SSE2__))
- case Aec3Optimization::kSse2:
- aec3::EstimateComfortNoise_SSE2(N2, &seed_, lower_band_noise,
- upper_band_noise);
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/third_party/webrtc/modules/audio_processing/aec3/comfort_noise_generator.h qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/third_party/webrtc/modules/audio_processing/aec3/comfort_noise_generator.h
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/third_party/webrtc/modules/audio_processing/aec3/comfort_noise_generator.h 2018-02-02 11:39:52.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/third_party/webrtc/modules/audio_processing/aec3/comfort_noise_generator.h 2018-02-18 19:00:53.645426268 +0100
-@@ -21,7 +21,7 @@
-
- namespace webrtc {
- namespace aec3 {
--#if defined(WEBRTC_ARCH_X86_FAMILY)
-+#if defined(WEBRTC_ARCH_X86_64) || (defined(WEBRTC_ARCH_X86) && defined(__SSE2__))
-
- void EstimateComfortNoise_SSE2(const std::array<float, kFftLengthBy2Plus1>& N2,
- uint32_t* seed,
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/third_party/webrtc/modules/audio_processing/aec3/comfort_noise_generator_unittest.cc qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/third_party/webrtc/modules/audio_processing/aec3/comfort_noise_generator_unittest.cc
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/third_party/webrtc/modules/audio_processing/aec3/comfort_noise_generator_unittest.cc 2018-02-02 11:39:52.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/third_party/webrtc/modules/audio_processing/aec3/comfort_noise_generator_unittest.cc 2018-02-18 19:00:53.646426253 +0100
-@@ -50,7 +50,7 @@
-
- #endif
-
--#if defined(WEBRTC_ARCH_X86_FAMILY)
-+#if defined(WEBRTC_ARCH_X86_64) || (defined(WEBRTC_ARCH_X86) && defined(__SSE2__))
- // Verifies that the optimized methods are bitexact to their reference
- // counterparts.
- TEST(ComfortNoiseGenerator, TestOptimizations) {
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/third_party/webrtc/modules/audio_processing/aec3/fft_data.h qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/third_party/webrtc/modules/audio_processing/aec3/fft_data.h
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/third_party/webrtc/modules/audio_processing/aec3/fft_data.h 2018-02-02 11:39:52.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/third_party/webrtc/modules/audio_processing/aec3/fft_data.h 2018-02-18 19:00:53.646426253 +0100
-@@ -12,7 +12,7 @@
- #define WEBRTC_MODULES_AUDIO_PROCESSING_AEC3_FFT_DATA_H_
-
- #include "webrtc/typedefs.h"
--#if defined(WEBRTC_ARCH_X86_FAMILY)
-+#if defined(WEBRTC_ARCH_X86_64) || (defined(WEBRTC_ARCH_X86) && defined(__SSE2__))
- #include <emmintrin.h>
- #endif
- #include <algorithm>
-@@ -43,7 +43,7 @@
- std::array<float, kFftLengthBy2Plus1>* power_spectrum) const {
- RTC_DCHECK(power_spectrum);
- switch (optimization) {
--#if defined(WEBRTC_ARCH_X86_FAMILY)
-+#if defined(WEBRTC_ARCH_X86_64) || (defined(WEBRTC_ARCH_X86) && defined(__SSE2__))
- case Aec3Optimization::kSse2: {
- constexpr int kNumFourBinBands = kFftLengthBy2 / 4;
- constexpr int kLimit = kNumFourBinBands * 4;
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/third_party/webrtc/modules/audio_processing/aec3/fft_data_unittest.cc qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/third_party/webrtc/modules/audio_processing/aec3/fft_data_unittest.cc
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/third_party/webrtc/modules/audio_processing/aec3/fft_data_unittest.cc 2018-02-02 11:39:52.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/third_party/webrtc/modules/audio_processing/aec3/fft_data_unittest.cc 2018-02-18 19:00:53.646426253 +0100
-@@ -16,7 +16,7 @@
-
- namespace webrtc {
-
--#if defined(WEBRTC_ARCH_X86_FAMILY)
-+#if defined(WEBRTC_ARCH_X86_64) || (defined(WEBRTC_ARCH_X86) && defined(__SSE2__))
- // Verifies that the optimized methods are bitexact to their reference
- // counterparts.
- TEST(FftData, TestOptimizations) {
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/third_party/webrtc/modules/audio_processing/aec3/matched_filter.cc qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/third_party/webrtc/modules/audio_processing/aec3/matched_filter.cc
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/third_party/webrtc/modules/audio_processing/aec3/matched_filter.cc 2018-02-02 11:39:52.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/third_party/webrtc/modules/audio_processing/aec3/matched_filter.cc 2018-02-18 19:00:53.647426239 +0100
-@@ -13,7 +13,7 @@
- #include <arm_neon.h>
- #endif
- #include "webrtc/typedefs.h"
--#if defined(WEBRTC_ARCH_X86_FAMILY)
-+#if defined(WEBRTC_ARCH_X86_64) || (defined(WEBRTC_ARCH_X86) && defined(__SSE2__))
- #include <emmintrin.h>
- #endif
- #include <algorithm>
-@@ -133,7 +133,7 @@
-
- #endif
-
--#if defined(WEBRTC_ARCH_X86_FAMILY)
-+#if defined(WEBRTC_ARCH_X86_64) || (defined(WEBRTC_ARCH_X86) && defined(__SSE2__))
-
- void MatchedFilterCore_SSE2(size_t x_start_index,
- float x2_sum_threshold,
-@@ -331,7 +331,7 @@
- render_buffer.buffer.size();
-
- switch (optimization_) {
--#if defined(WEBRTC_ARCH_X86_FAMILY)
-+#if defined(WEBRTC_ARCH_X86_64) || (defined(WEBRTC_ARCH_X86) && defined(__SSE2__))
- case Aec3Optimization::kSse2:
- aec3::MatchedFilterCore_SSE2(x_start_index, x2_sum_threshold,
- render_buffer.buffer, y, filters_[n],
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/third_party/webrtc/modules/audio_processing/aec3/matched_filter.h qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/third_party/webrtc/modules/audio_processing/aec3/matched_filter.h
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/third_party/webrtc/modules/audio_processing/aec3/matched_filter.h 2018-02-02 11:39:52.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/third_party/webrtc/modules/audio_processing/aec3/matched_filter.h 2018-02-18 19:00:53.647426239 +0100
-@@ -36,7 +36,7 @@
-
- #endif
-
--#if defined(WEBRTC_ARCH_X86_FAMILY)
-+#if defined(WEBRTC_ARCH_X86_64) || (defined(WEBRTC_ARCH_X86) && defined(__SSE2__))
-
- // Filter core for the matched filter that is optimized for SSE2.
- void MatchedFilterCore_SSE2(size_t x_start_index,
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/third_party/webrtc/modules/audio_processing/aec3/matched_filter_unittest.cc qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/third_party/webrtc/modules/audio_processing/aec3/matched_filter_unittest.cc
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/third_party/webrtc/modules/audio_processing/aec3/matched_filter_unittest.cc 2018-02-02 11:39:52.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/third_party/webrtc/modules/audio_processing/aec3/matched_filter_unittest.cc 2018-02-18 19:00:53.647426239 +0100
-@@ -11,7 +11,7 @@
- #include "webrtc/modules/audio_processing/aec3/matched_filter.h"
-
- #include "webrtc/typedefs.h"
--#if defined(WEBRTC_ARCH_X86_FAMILY)
-+#if defined(WEBRTC_ARCH_X86_64) || (defined(WEBRTC_ARCH_X86) && defined(__SSE2__))
- #include <emmintrin.h>
- #endif
- #include <algorithm>
-@@ -80,7 +80,7 @@
- }
- #endif
-
--#if defined(WEBRTC_ARCH_X86_FAMILY)
-+#if defined(WEBRTC_ARCH_X86_64) || (defined(WEBRTC_ARCH_X86) && defined(__SSE2__))
- // Verifies that the optimized methods for SSE2 are bitexact to their reference
- // counterparts.
- TEST(MatchedFilter, TestSse2Optimizations) {
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/third_party/webrtc/modules/audio_processing/aec3/suppression_gain.cc qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/third_party/webrtc/modules/audio_processing/aec3/suppression_gain.cc
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/third_party/webrtc/modules/audio_processing/aec3/suppression_gain.cc 2018-02-02 11:39:52.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/third_party/webrtc/modules/audio_processing/aec3/suppression_gain.cc 2018-02-18 19:00:53.648426224 +0100
-@@ -11,7 +11,7 @@
- #include "webrtc/modules/audio_processing/aec3/suppression_gain.h"
-
- #include "webrtc/typedefs.h"
--#if defined(WEBRTC_ARCH_X86_FAMILY)
-+#if defined(WEBRTC_ARCH_X86_64) || (defined(WEBRTC_ARCH_X86) && defined(__SSE2__))
- #include <emmintrin.h>
- #endif
- #include <math.h>
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/third_party/webrtc/modules/audio_processing/aec3/vector_math.h qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/third_party/webrtc/modules/audio_processing/aec3/vector_math.h
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/third_party/webrtc/modules/audio_processing/aec3/vector_math.h 2018-02-02 11:39:52.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/third_party/webrtc/modules/audio_processing/aec3/vector_math.h 2018-02-18 19:00:53.648426224 +0100
-@@ -15,7 +15,7 @@
- #if defined(WEBRTC_HAS_NEON)
- #include <arm_neon.h>
- #endif
--#if defined(WEBRTC_ARCH_X86_FAMILY)
-+#if defined(WEBRTC_ARCH_X86_64) || (defined(WEBRTC_ARCH_X86) && defined(__SSE2__))
- #include <emmintrin.h>
- #endif
- #include <math.h>
-@@ -39,7 +39,7 @@
- // Elementwise square root.
- void Sqrt(rtc::ArrayView<float> x) {
- switch (optimization_) {
--#if defined(WEBRTC_ARCH_X86_FAMILY)
-+#if defined(WEBRTC_ARCH_X86_64) || (defined(WEBRTC_ARCH_X86) && defined(__SSE2__))
- case Aec3Optimization::kSse2: {
- const int x_size = static_cast<int>(x.size());
- const int vector_limit = x_size >> 2;
-@@ -113,7 +113,7 @@
- RTC_DCHECK_EQ(z.size(), x.size());
- RTC_DCHECK_EQ(z.size(), y.size());
- switch (optimization_) {
--#if defined(WEBRTC_ARCH_X86_FAMILY)
-+#if defined(WEBRTC_ARCH_X86_64) || (defined(WEBRTC_ARCH_X86) && defined(__SSE2__))
- case Aec3Optimization::kSse2: {
- const int x_size = static_cast<int>(x.size());
- const int vector_limit = x_size >> 2;
-@@ -159,7 +159,7 @@
- void Accumulate(rtc::ArrayView<const float> x, rtc::ArrayView<float> z) {
- RTC_DCHECK_EQ(z.size(), x.size());
- switch (optimization_) {
--#if defined(WEBRTC_ARCH_X86_FAMILY)
-+#if defined(WEBRTC_ARCH_X86_64) || (defined(WEBRTC_ARCH_X86) && defined(__SSE2__))
- case Aec3Optimization::kSse2: {
- const int x_size = static_cast<int>(x.size());
- const int vector_limit = x_size >> 2;
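[Editorial note, not part of the patch.] The `VectorMath` hunks above all share one loop shape: process four floats per iteration with SSE2, then finish the tail with scalar code. A self-contained sketch of that shape, guarded the same way the patch guards it (the `SqrtInPlace`/`DEMO_HAVE_SSE2` names are invented for illustration):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

#if defined(__x86_64__) || (defined(__i386__) && defined(__SSE2__))
#include <emmintrin.h>
#define DEMO_HAVE_SSE2 1
#endif

// Elementwise square root, mirroring the loop structure in vector_math.h:
// handle floor(n/4)*4 elements four at a time with SSE2 when available,
// then finish the remainder with scalar sqrtf.
void SqrtInPlace(std::vector<float>& x) {
  size_t i = 0;
#if defined(DEMO_HAVE_SSE2)
  const size_t vector_limit = (x.size() >> 2) << 2;
  for (; i < vector_limit; i += 4) {
    __m128 v = _mm_loadu_ps(&x[i]);      // load 4 floats (unaligned ok)
    _mm_storeu_ps(&x[i], _mm_sqrt_ps(v)); // 4 square roots in one op
  }
#endif
  for (; i < x.size(); ++i) x[i] = std::sqrt(x[i]);  // scalar tail
}
```

Because the scalar tail loop also covers the whole range when the guard is false, builds without SSE2 stay correct, which is exactly what the patch relies on for i686 without `-msse2`.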
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/third_party/webrtc/modules/audio_processing/aec3/vector_math_unittest.cc qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/third_party/webrtc/modules/audio_processing/aec3/vector_math_unittest.cc
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/third_party/webrtc/modules/audio_processing/aec3/vector_math_unittest.cc 2018-02-02 11:39:52.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/third_party/webrtc/modules/audio_processing/aec3/vector_math_unittest.cc 2018-02-18 19:00:53.648426224 +0100
-@@ -77,7 +77,7 @@
- }
- #endif
-
--#if defined(WEBRTC_ARCH_X86_FAMILY)
-+#if defined(WEBRTC_ARCH_X86_64) || (defined(WEBRTC_ARCH_X86) && defined(__SSE2__))
-
- TEST(VectorMath, Sqrt) {
- if (WebRtc_GetCPUInfo(kSSE2) != 0) {
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/BUILD.gn qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/BUILD.gn
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/BUILD.gn 2018-02-02 11:39:52.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/BUILD.gn 2018-02-18 19:00:53.649426209 +0100
-@@ -116,9 +116,9 @@
- v8_experimental_extra_library_files =
- [ "//test/cctest/test-experimental-extra.js" ]
-
--    v8_enable_gdbjit =
--        ((v8_current_cpu == "x86" || v8_current_cpu == "x64") &&
--         (is_linux || is_mac)) || (v8_current_cpu == "ppc64" && is_linux)
-+    v8_enable_gdbjit = ((v8_current_cpu == "x86" || v8_current_cpu == "x64" ||
-+                         v8_current_cpu == "x87") && (is_linux || is_mac)) ||
-+                       (v8_current_cpu == "ppc64" && is_linux)
-
- # Temporary flag to allow embedders to update their microtasks scopes
- # while rolling in a new version of V8.
-@@ -161,7 +161,7 @@
-
- include_dirs = [ "." ]
-
-- if (is_component_build) {
-+ if (is_component_build || v8_build_shared) {
- defines = [ "BUILDING_V8_SHARED" ]
- }
- }
-@@ -175,14 +175,14 @@
- # This config should be applied to code using the libplatform.
- config("libplatform_config") {
- include_dirs = [ "include" ]
-- if (is_component_build) {
-+ if (is_component_build || v8_build_shared) {
- defines = [ "USING_V8_PLATFORM_SHARED" ]
- }
- }
-
- # This config should be applied to code using the libbase.
- config("libbase_config") {
-- if (is_component_build) {
-+ if (is_component_build || v8_build_shared) {
- defines = [ "USING_V8_BASE_SHARED" ]
- }
- libs = []
-@@ -199,7 +199,7 @@
- # This config should only be applied to code using V8 and not any V8 code
- # itself.
- config("external_config") {
-- if (is_component_build) {
-+ if (is_component_build || v8_build_shared) {
- defines = [ "USING_V8_SHARED" ]
- }
- include_dirs = [
-@@ -434,6 +434,9 @@
- cflags += [ "/arch:SSE2" ]
- }
- }
-+ if (v8_current_cpu == "x87") {
-+ defines += [ "V8_TARGET_ARCH_X87" ]
-+ }
- if (v8_current_cpu == "x64") {
- defines += [ "V8_TARGET_ARCH_X64" ]
- if (is_win) {
-@@ -443,6 +446,9 @@
- ldflags += [ "/STACK:2097152" ]
- }
- }
-+ if (v8_current_cpu == "x87") {
-+ defines += [ "V8_TARGET_ARCH_X87" ]
-+ }
- if (is_android && v8_android_log_stdout) {
- defines += [ "V8_ANDROID_LOG_STDOUT" ]
- }
-@@ -1040,6 +1046,11 @@
- ### gcmole(arch:s390) ###
- "src/builtins/s390/builtins-s390.cc",
- ]
-+ } else if (v8_current_cpu == "x87") {
-+ sources += [
-+ ### gcmole(arch:x87) ###
-+ "src/builtins/x87/builtins-x87.cc",
-+ ]
- }
-
- if (!v8_enable_i18n_support) {
-@@ -2309,6 +2320,37 @@
- "src/s390/simulator-s390.cc",
- "src/s390/simulator-s390.h",
- ]
-+ } else if (v8_current_cpu == "x87") {
-+ sources += [ ### gcmole(arch:x87) ###
-+ "src/compiler/x87/code-generator-x87.cc",
-+ "src/compiler/x87/instruction-codes-x87.h",
-+ "src/compiler/x87/instruction-scheduler-x87.cc",
-+ "src/compiler/x87/instruction-selector-x87.cc",
-+ "src/debug/x87/debug-x87.cc",
-+ "src/full-codegen/x87/full-codegen-x87.cc",
-+ "src/ic/x87/access-compiler-x87.cc",
-+ "src/ic/x87/handler-compiler-x87.cc",
-+ "src/ic/x87/ic-x87.cc",
-+ "src/regexp/x87/regexp-macro-assembler-x87.cc",
-+ "src/regexp/x87/regexp-macro-assembler-x87.h",
-+ "src/x87/assembler-x87-inl.h",
-+ "src/x87/assembler-x87.cc",
-+ "src/x87/assembler-x87.h",
-+ "src/x87/code-stubs-x87.cc",
-+ "src/x87/code-stubs-x87.h",
-+ "src/x87/codegen-x87.cc",
-+ "src/x87/codegen-x87.h",
-+ "src/x87/cpu-x87.cc",
-+ "src/x87/deoptimizer-x87.cc",
-+ "src/x87/disasm-x87.cc",
-+ "src/x87/frames-x87.cc",
-+ "src/x87/frames-x87.h",
-+ "src/x87/interface-descriptors-x87.cc",
-+ "src/x87/macro-assembler-x87.cc",
-+ "src/x87/macro-assembler-x87.h",
-+ "src/x87/simulator-x87.cc",
-+ "src/x87/simulator-x87.h",
-+ ]
- }
-
- configs = [ ":internal_config" ]
-@@ -2420,7 +2462,7 @@
-
- defines = []
-
-- if (is_component_build) {
-+ if (is_component_build || v8_build_shared) {
- defines = [ "BUILDING_V8_BASE_SHARED" ]
- }
-
-@@ -2530,7 +2572,7 @@
-
- configs = [ ":internal_config_base" ]
-
-- if (is_component_build) {
-+ if (is_component_build || v8_build_shared) {
- defines = [ "BUILDING_V8_PLATFORM_SHARED" ]
- }
-
-@@ -2676,7 +2718,37 @@
- ]
- }
-
--if (is_component_build) {
-+if (v8_build_shared) {
-+ shared_library("v8") {
-+ sources = [
-+ "src/v8dll-main.cc",
-+ ]
-+
-+ public_deps = [
-+ ":v8_base",
-+ ":v8_maybe_snapshot",
-+ ]
-+
-+ configs += [ ":internal_config" ]
-+
-+ public_configs = [ ":external_config" ]
-+ }
-+
-+ group("v8_for_testing") {
-+ testonly = true
-+
-+ public_deps = [
-+ ":v8_base",
-+ ":v8_maybe_snapshot",
-+ ]
-+
-+ if (v8_use_snapshot) {
-+ public_deps += [ ":v8_builtins_generators" ]
-+ }
-+
-+ public_configs = [ ":external_config" ]
-+ }
-+} else if (is_component_build) {
- v8_component("v8") {
- sources = [
- "src/v8dll-main.cc",
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/gni/v8.gni qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/gni/v8.gni
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/gni/v8.gni 2018-02-02 11:39:52.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/gni/v8.gni 2018-02-18 19:00:53.705425385 +0100
-@@ -42,6 +42,9 @@
- # add a dependency on the ICU library.
- v8_enable_i18n_support = true
-
-+ # Whether to build V8 as a shared library
-+ v8_build_shared = false
-+
- # Use static libraries instead of source_sets.
- v8_static_library = false
- }
-@@ -56,6 +59,11 @@
- v8_enable_backtrace = is_debug && !v8_optimized_debug
- }
-
-+if (v8_current_cpu == "x86" || v8_current_cpu == "x87") {
-+ # build V8 shared on x86 so we can swap x87 vs. SSE2 builds
-+ v8_build_shared = true
-+}
-+
- # Points to // in v8 stand-alone or to //v8/ in chromium. We need absolute
- # paths for all configs in templates as they are shared in different
- # subdirectories.
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/gypfiles/standalone.gypi qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/gypfiles/standalone.gypi
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/gypfiles/standalone.gypi 2018-02-02 11:39:52.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/gypfiles/standalone.gypi 2018-02-18 19:00:53.777424326 +0100
-@@ -262,14 +262,14 @@
- # goma doesn't support PDB yet.
- 'fastbuild%': 1,
- }],
--            ['((v8_target_arch=="ia32" or v8_target_arch=="x64") and \
-+            ['((v8_target_arch=="ia32" or v8_target_arch=="x64" or v8_target_arch=="x87") and \
-                (OS=="linux" or OS=="mac")) or (v8_target_arch=="ppc64" and OS=="linux")', {
- 'v8_enable_gdbjit%': 1,
- }, {
- 'v8_enable_gdbjit%': 0,
- }],
-            ['(OS=="linux" or OS=="mac") and (target_arch=="ia32" or target_arch=="x64") and \
-- v8_target_arch!="x32"', {
-+ (v8_target_arch!="x87" and v8_target_arch!="x32")', {
- 'clang%': 1,
- }, {
- 'clang%': 0,
-@@ -1207,7 +1207,7 @@
- '-L<(android_libcpp_libs)/arm64-v8a',
- ],
- }],
-- ['target_arch=="ia32"', {
-+            ['target_arch=="ia32" or target_arch=="x87"', {
- # The x86 toolchain currently has problems with stack-protector.
- 'cflags!': [
- '-fstack-protector',
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/gypfiles/toolchain.gypi qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/gypfiles/toolchain.gypi
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/gypfiles/toolchain.gypi 2018-02-02 11:39:52.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/gypfiles/toolchain.gypi 2018-02-18 19:00:53.778424312 +0100
-@@ -144,7 +144,7 @@
- 'host_cxx_is_biarch%': 0,
- },
- }],
-- ['target_arch=="ia32" or target_arch=="x64" or \
-+    ['target_arch=="ia32" or target_arch=="x64" or target_arch=="x87" or \
-       target_arch=="ppc" or target_arch=="ppc64" or target_arch=="s390" or \
- target_arch=="s390x" or clang==1', {
- 'variables': {
-@@ -342,6 +342,12 @@
- 'V8_TARGET_ARCH_IA32',
- ],
- }], # v8_target_arch=="ia32"
-+ ['v8_target_arch=="x87"', {
-+ 'defines': [
-+ 'V8_TARGET_ARCH_X87',
-+ ],
-+ 'cflags': ['-march=i586'],
-+ }], # v8_target_arch=="x87"
- ['v8_target_arch=="mips" or v8_target_arch=="mipsel" \
-          or v8_target_arch=="mips64" or v8_target_arch=="mips64el"', {
- 'target_conditions': [
-@@ -1000,8 +1006,9 @@
-        ['(OS=="linux" or OS=="freebsd" or OS=="openbsd" or OS=="solaris" \
-           or OS=="netbsd" or OS=="mac" or OS=="android" or OS=="qnx") and \
- (v8_target_arch=="arm" or v8_target_arch=="ia32" or \
-- v8_target_arch=="mips" or v8_target_arch=="mipsel" or \
-- v8_target_arch=="ppc" or v8_target_arch=="s390")', {
-+ v8_target_arch=="x87" or v8_target_arch=="mips" or \
-+ v8_target_arch=="mipsel" or v8_target_arch=="ppc" or \
-+ v8_target_arch=="s390")', {
- 'target_conditions': [
- ['_toolset=="host"', {
- 'conditions': [
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/Makefile qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/Makefile
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/Makefile 2018-02-02 11:39:52.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/Makefile 2018-02-18 19:00:53.778424312 +0100
-@@ -255,13 +255,14 @@
-
- # Architectures and modes to be compiled. Consider these to be internal
- # variables, don't override them (use the targets instead).
--ARCHES = ia32 x64 arm arm64 mips mipsel mips64 mips64el ppc ppc64 s390 s390x
--ARCHES32 = ia32 arm mips mipsel ppc s390
-+ARCHES = ia32 x64 arm arm64 mips mipsel mips64 mips64el x87 ppc ppc64 s390 \
-+ s390x
-+ARCHES32 = ia32 arm mips mipsel x87 ppc s390
- DEFAULT_ARCHES = ia32 x64 arm
- MODES = release debug optdebug
- DEFAULT_MODES = release debug
- ANDROID_ARCHES = android_ia32 android_x64 android_arm android_arm64 \
-- android_mipsel
-+ android_mipsel android_x87
-
- # List of files that trigger Makefile regeneration:
- GYPFILES = third_party/icu/icu.gypi third_party/icu/icu.gyp \
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/assembler.cc qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/assembler.cc
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/assembler.cc 2018-02-02 11:39:52.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/assembler.cc 2018-02-18 19:00:53.779424297 +0100
-@@ -85,6 +85,8 @@
- #include "src/regexp/mips64/regexp-macro-assembler-mips64.h" // NOLINT
- #elif V8_TARGET_ARCH_S390
- #include "src/regexp/s390/regexp-macro-assembler-s390.h" // NOLINT
-+#elif V8_TARGET_ARCH_X87
-+#include "src/regexp/x87/regexp-macro-assembler-x87.h" // NOLINT
- #else // Unknown architecture.
- #error "Unknown architecture."
- #endif // Target architecture.
-@@ -1318,6 +1320,8 @@
- function = FUNCTION_ADDR(RegExpMacroAssemblerMIPS::CheckStackGuardState);
- #elif V8_TARGET_ARCH_S390
- function = FUNCTION_ADDR(RegExpMacroAssemblerS390::CheckStackGuardState);
-+#elif V8_TARGET_ARCH_X87
-+ function = FUNCTION_ADDR(RegExpMacroAssemblerX87::CheckStackGuardState);
- #else
- UNREACHABLE();
- #endif
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/assembler-inl.h qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/assembler-inl.h
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/assembler-inl.h 2018-02-02 11:39:52.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/assembler-inl.h 2018-02-18 19:00:53.864423047 +0100
-@@ -23,6 +23,8 @@
- #include "src/mips64/assembler-mips64-inl.h"
- #elif V8_TARGET_ARCH_S390
- #include "src/s390/assembler-s390-inl.h"
-+#elif V8_TARGET_ARCH_X87
-+#include "src/x87/assembler-x87-inl.h"
- #else
- #error Unknown architecture.
- #endif
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/base/build_config.h qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/base/build_config.h
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/base/build_config.h 2018-02-02 11:39:52.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/base/build_config.h 2018-02-18 19:00:53.864423047 +0100
-@@ -76,9 +76,9 @@
- // Target architecture detection. This may be set externally. If not, detect
- // in the same way as the host architecture, that is, target the native
- // environment as presented by the compiler.
--#if !V8_TARGET_ARCH_X64 && !V8_TARGET_ARCH_IA32 && !V8_TARGET_ARCH_ARM && \
--    !V8_TARGET_ARCH_ARM64 && !V8_TARGET_ARCH_MIPS && !V8_TARGET_ARCH_MIPS64 && \
--    !V8_TARGET_ARCH_PPC && !V8_TARGET_ARCH_S390
-+#if !V8_TARGET_ARCH_X64 && !V8_TARGET_ARCH_IA32 && !V8_TARGET_ARCH_X87 && \
-+    !V8_TARGET_ARCH_ARM && !V8_TARGET_ARCH_ARM64 && !V8_TARGET_ARCH_MIPS && \
-+    !V8_TARGET_ARCH_MIPS64 && !V8_TARGET_ARCH_PPC && !V8_TARGET_ARCH_S390
- #if defined(_M_X64) || defined(__x86_64__)
- #define V8_TARGET_ARCH_X64 1
- #elif defined(_M_IX86) || defined(__i386__)
-@@ -129,6 +129,8 @@
- #else
- #define V8_TARGET_ARCH_32_BIT 1
- #endif
-+#elif V8_TARGET_ARCH_X87
-+#define V8_TARGET_ARCH_32_BIT 1
- #else
- #error Unknown target architecture pointer size
- #endif
-@@ -179,6 +181,8 @@
- #else
- #define V8_TARGET_LITTLE_ENDIAN 1
- #endif
-+#elif V8_TARGET_ARCH_X87
-+#define V8_TARGET_LITTLE_ENDIAN 1
- #elif __BIG_ENDIAN__ // FOR PPCGR on AIX
- #define V8_TARGET_BIG_ENDIAN 1
- #elif V8_TARGET_ARCH_PPC_LE
-@@ -195,7 +199,8 @@
- #error Unknown target architecture endianness
- #endif
-
--#if defined(V8_TARGET_ARCH_IA32) || defined(V8_TARGET_ARCH_X64)
-+#if defined(V8_TARGET_ARCH_IA32) || defined(V8_TARGET_ARCH_X64) || \
-+ defined(V8_TARGET_ARCH_X87)
- #define V8_TARGET_ARCH_STORES_RETURN_ADDRESS_ON_STACK 1
- #else
- #define V8_TARGET_ARCH_STORES_RETURN_ADDRESS_ON_STACK 0
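[Editorial note, not part of the patch.] The build_config.h hunks extend a cascade that first picks a target macro from compiler predefines, then derives properties (pointer width, endianness) from it, and hard-errors on anything unknown — which is why the patch must thread `V8_TARGET_ARCH_X87` through every branch. A reduced, runnable sketch of that idiom (all `DEMO_*` names are invented for illustration):

```cpp
// Reduced version of V8's detection cascade: choose the target from
// compiler-predefined macros, derive properties from the choice, and
// fail the build on unknown targets so no branch is silently missed.
#if defined(_M_X64) || defined(__x86_64__)
#define DEMO_TARGET_ARCH_X64 1
#define DEMO_TARGET_64_BIT 1
#elif defined(_M_IX86) || defined(__i386__)
#define DEMO_TARGET_ARCH_IA32 1
#define DEMO_TARGET_32_BIT 1
#elif defined(__aarch64__)
#define DEMO_TARGET_ARCH_ARM64 1
#define DEMO_TARGET_64_BIT 1
#elif defined(__arm__)
#define DEMO_TARGET_ARCH_ARM 1
#define DEMO_TARGET_32_BIT 1
#else
#error Unknown target architecture
#endif

// A derived property, analogous to V8_TARGET_ARCH_32_BIT / _64_BIT.
#if defined(DEMO_TARGET_64_BIT)
constexpr int kDemoPointerBits = 64;
#else
constexpr int kDemoPointerBits = 32;
#endif
```

Adding a new target (as the patch does for x87) means touching each such cascade; the trailing `#error` is what turns a forgotten branch into a compile failure instead of a miscompiled build.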
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/builtins/x87/builtins-x87.cc qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/builtins/x87/builtins-x87.cc
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/builtins/x87/builtins-x87.cc 1970-01-01 01:00:00.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/builtins/x87/builtins-x87.cc 2018-02-18 19:00:53.934422017 +0100
-@@ -0,0 +1,3008 @@
-+// Copyright 2012 the V8 project authors. All rights reserved.
-+// Use of this source code is governed by a BSD-style license that can be
-+// found in the LICENSE file.
-+
-+#if V8_TARGET_ARCH_X87
-+
-+#include "src/code-factory.h"
-+#include "src/codegen.h"
-+#include "src/deoptimizer.h"
-+#include "src/full-codegen/full-codegen.h"
-+#include "src/x87/frames-x87.h"
-+
-+namespace v8 {
-+namespace internal {
-+
-+#define __ ACCESS_MASM(masm)
-+
-+void Builtins::Generate_Adaptor(MacroAssembler* masm, Address address,
-+ ExitFrameType exit_frame_type) {
-+ // ----------- S t a t e -------------
-+ // -- eax : number of arguments excluding receiver
-+ // -- edi : target
-+ // -- edx : new.target
-+ // -- esp[0] : return address
-+ // -- esp[4] : last argument
-+ // -- ...
-+ // -- esp[4 * argc] : first argument
-+ // -- esp[4 * (argc +1)] : receiver
-+ // -----------------------------------
-+ __ AssertFunction(edi);
-+
-+ // Make sure we operate in the context of the called function (for example
-+ // ConstructStubs implemented in C++ will be run in the context of the caller
-+ // instead of the callee, due to the way that [[Construct]] is defined for
-+ // ordinary functions).
-+ __ mov(esi, FieldOperand(edi, JSFunction::kContextOffset));
-+
-+ // JumpToExternalReference expects eax to contain the number of arguments
-+ // including the receiver and the extra arguments.
-+ const int num_extra_args = 3;
-+ __ add(eax, Immediate(num_extra_args + 1));
-+
-+ // Insert extra arguments.
-+ __ PopReturnAddressTo(ecx);
-+ __ SmiTag(eax);
-+ __ Push(eax);
-+ __ SmiUntag(eax);
-+ __ Push(edi);
-+ __ Push(edx);
-+ __ PushReturnAddressFrom(ecx);
-+
-+ __ JumpToExternalReference(ExternalReference(address, masm->isolate()),
-+ exit_frame_type == BUILTIN_EXIT);
-+}
-+
-+static void GenerateTailCallToReturnedCode(MacroAssembler* masm,
-+ Runtime::FunctionId function_id) {
-+ // ----------- S t a t e -------------
-+ // -- eax : argument count (preserved for callee)
-+ // -- edx : new target (preserved for callee)
-+ // -- edi : target function (preserved for callee)
-+ // -----------------------------------
-+ {
-+ FrameScope scope(masm, StackFrame::INTERNAL);
-+ // Push the number of arguments to the callee.
-+ __ SmiTag(eax);
-+ __ push(eax);
-+ // Push a copy of the target function and the new target.
-+ __ push(edi);
-+ __ push(edx);
-+ // Function is also the parameter to the runtime call.
-+ __ push(edi);
-+
-+ __ CallRuntime(function_id, 1);
-+ __ mov(ebx, eax);
-+
-+ // Restore target function and new target.
-+ __ pop(edx);
-+ __ pop(edi);
-+ __ pop(eax);
-+ __ SmiUntag(eax);
-+ }
-+
-+ __ lea(ebx, FieldOperand(ebx, Code::kHeaderSize));
-+ __ jmp(ebx);
-+}
-+
-+static void GenerateTailCallToSharedCode(MacroAssembler* masm) {
-+ __ mov(ebx, FieldOperand(edi, JSFunction::kSharedFunctionInfoOffset));
-+ __ mov(ebx, FieldOperand(ebx, SharedFunctionInfo::kCodeOffset));
-+ __ lea(ebx, FieldOperand(ebx, Code::kHeaderSize));
-+ __ jmp(ebx);
-+}
-+
-+namespace {
-+
-+void Generate_JSBuiltinsConstructStubHelper(MacroAssembler* masm) {
-+ // ----------- S t a t e -------------
-+ // -- eax: number of arguments
-+ // -- edi: constructor function
-+ // -- edx: new target
-+ // -- esi: context
-+ // -----------------------------------
-+
-+ // Enter a construct frame.
-+ {
-+ FrameScope scope(masm, StackFrame::CONSTRUCT);
-+
-+ // Preserve the incoming parameters on the stack.
-+ __ SmiTag(eax);
-+ __ push(esi);
-+ __ push(eax);
-+ __ SmiUntag(eax);
-+
-+ // The receiver for the builtin/api call.
-+ __ PushRoot(Heap::kTheHoleValueRootIndex);
-+
-+ // Set up pointer to last argument.
-+ __ lea(ebx, Operand(ebp, StandardFrameConstants::kCallerSPOffset));
-+
-+ // Copy arguments and receiver to the expression stack.
-+ Label loop, entry;
-+ __ mov(ecx, eax);
-+ // ----------- S t a t e -------------
-+ // -- eax: number of arguments (untagged)
-+ // -- edi: constructor function
-+ // -- edx: new target
-+ // -- ebx: pointer to last argument
-+ // -- ecx: counter
-+ // -- sp[0*kPointerSize]: the hole (receiver)
-+ // -- sp[1*kPointerSize]: number of arguments (tagged)
-+ // -- sp[2*kPointerSize]: context
-+ // -----------------------------------
-+ __ jmp(&entry);
-+ __ bind(&loop);
-+ __ push(Operand(ebx, ecx, times_4, 0));
-+ __ bind(&entry);
-+ __ dec(ecx);
-+ __ j(greater_equal, &loop);
-+
-+ // Call the function.
-+ // eax: number of arguments (untagged)
-+ // edi: constructor function
-+ // edx: new target
-+ ParameterCount actual(eax);
-+ __ InvokeFunction(edi, edx, actual, CALL_FUNCTION,
-+ CheckDebugStepCallWrapper());
-+
-+ // Restore context from the frame.
-+ __ mov(esi, Operand(ebp, ConstructFrameConstants::kContextOffset));
-+ // Restore smi-tagged arguments count from the frame.
-+ __ mov(ebx, Operand(ebp, ConstructFrameConstants::kLengthOffset));
-+ // Leave construct frame.
-+ }
-+
-+ // Remove caller arguments from the stack and return.
-+ STATIC_ASSERT(kSmiTagSize == 1 && kSmiTag == 0);
-+ __ pop(ecx);
-+ __ lea(esp, Operand(esp, ebx, times_2, 1 * kPointerSize)); // 1 ~ receiver
-+ __ push(ecx);
-+ __ ret(0);
-+}
-+
-+// The construct stub for ES5 constructor functions and ES6 class constructors.
-+void Generate_JSConstructStubGeneric(MacroAssembler* masm,
-+ bool restrict_constructor_return) {
-+ // ----------- S t a t e -------------
-+ // -- eax: number of arguments (untagged)
-+ // -- edi: constructor function
-+ // -- edx: new target
-+ // -- esi: context
-+ // -- sp[...]: constructor arguments
-+ // -----------------------------------
-+
-+ // Enter a construct frame.
-+ {
-+ FrameScope scope(masm, StackFrame::CONSTRUCT);
-+ Label post_instantiation_deopt_entry, not_create_implicit_receiver;
-+
-+ // Preserve the incoming parameters on the stack.
-+ __ mov(ecx, eax);
-+ __ SmiTag(ecx);
-+ __ Push(esi);
-+ __ Push(ecx);
-+ __ Push(edi);
-+ __ Push(edx);
-+
-+ // ----------- S t a t e -------------
-+ // -- sp[0*kPointerSize]: new target
-+ // -- edi and sp[1*kPointerSize]: constructor function
-+ // -- sp[2*kPointerSize]: argument count
-+ // -- sp[3*kPointerSize]: context
-+ // -----------------------------------
-+
-+ __ mov(ebx, FieldOperand(edi, JSFunction::kSharedFunctionInfoOffset));
-+ __ test(FieldOperand(ebx, SharedFunctionInfo::kCompilerHintsOffset),
-+ Immediate(SharedFunctionInfo::kDerivedConstructorMask));
-+    __ j(not_zero, &not_create_implicit_receiver);
-+
-+ // If not derived class constructor: Allocate the new receiver object.
-+    __ IncrementCounter(masm->isolate()->counters()->constructed_objects(), 1);
-+ __ Call(masm->isolate()->builtins()->FastNewObject(),
-+ RelocInfo::CODE_TARGET);
-+ __ jmp(&post_instantiation_deopt_entry, Label::kNear);
-+
-+ // Else: use TheHoleValue as receiver for constructor call
-+ __ bind(&not_create_implicit_receiver);
-+ __ LoadRoot(eax, Heap::kTheHoleValueRootIndex);
-+
-+ // ----------- S t a t e -------------
-+ // -- eax: implicit receiver
-+ // -- Slot 3 / sp[0*kPointerSize]: new target
-+ // -- Slot 2 / sp[1*kPointerSize]: constructor function
-+ // -- Slot 1 / sp[2*kPointerSize]: number of arguments (tagged)
-+ // -- Slot 0 / sp[3*kPointerSize]: context
-+ // -----------------------------------
-+ // Deoptimizer enters here.
-+ masm->isolate()->heap()->SetConstructStubCreateDeoptPCOffset(
-+ masm->pc_offset());
-+ __ bind(&post_instantiation_deopt_entry);
-+
-+ // Restore new target.
-+ __ Pop(edx);
-+
-+ // Push the allocated receiver to the stack. We need two copies
-+ // because we may have to return the original one and the calling
-+ // conventions dictate that the called function pops the receiver.
-+ __ Push(eax);
-+ __ Push(eax);
-+
-+ // ----------- S t a t e -------------
-+ // -- edx: new target
-+ // -- sp[0*kPointerSize]: implicit receiver
-+ // -- sp[1*kPointerSize]: implicit receiver
-+ // -- sp[2*kPointerSize]: constructor function
-+ // -- sp[3*kPointerSize]: number of arguments (tagged)
-+ // -- sp[4*kPointerSize]: context
-+ // -----------------------------------
-+
-+ // Restore constructor function and argument count.
-+ __ mov(edi, Operand(ebp, ConstructFrameConstants::kConstructorOffset));
-+ __ mov(eax, Operand(ebp, ConstructFrameConstants::kLengthOffset));
-+ __ SmiUntag(eax);
-+
-+ // Set up pointer to last argument.
-+ __ lea(ebx, Operand(ebp, StandardFrameConstants::kCallerSPOffset));
-+
-+ // Copy arguments and receiver to the expression stack.
-+ Label loop, entry;
-+ __ mov(ecx, eax);
-+ // ----------- S t a t e -------------
-+ // -- eax: number of arguments (untagged)
-+ // -- edx: new target
-+ // -- ebx: pointer to last argument
-+ // -- ecx: counter (tagged)
-+ // -- sp[0*kPointerSize]: implicit receiver
-+ // -- sp[1*kPointerSize]: implicit receiver
-+ // -- edi and sp[2*kPointerSize]: constructor function
-+ // -- sp[3*kPointerSize]: number of arguments (tagged)
-+ // -- sp[4*kPointerSize]: context
-+ // -----------------------------------
-+ __ jmp(&entry, Label::kNear);
-+ __ bind(&loop);
-+ __ Push(Operand(ebx, ecx, times_pointer_size, 0));
-+ __ bind(&entry);
-+ __ dec(ecx);
-+ __ j(greater_equal, &loop);
-+
-+ // Call the function.
-+ ParameterCount actual(eax);
-+ __ InvokeFunction(edi, edx, actual, CALL_FUNCTION,
-+ CheckDebugStepCallWrapper());
-+
-+ // ----------- S t a t e -------------
-+ // -- eax: constructor result
-+ // -- sp[0*kPointerSize]: implicit receiver
-+ // -- sp[1*kPointerSize]: constructor function
-+ // -- sp[2*kPointerSize]: number of arguments
-+ // -- sp[3*kPointerSize]: context
-+ // -----------------------------------
-+
-+ // Store offset of return address for deoptimizer.
-+ masm->isolate()->heap()->SetConstructStubInvokeDeoptPCOffset(
-+ masm->pc_offset());
-+
-+ // Restore context from the frame.
-+ __ mov(esi, Operand(ebp, ConstructFrameConstants::kContextOffset));
-+
-+ // If the result is an object (in the ECMA sense), we should get rid
-+ // of the receiver and use the result; see ECMA-262 section 13.2.2-7
-+ // on page 74.
-+ Label use_receiver, do_throw, other_result, leave_frame;
-+
-+ // If the result is undefined, we jump out to using the implicit receiver.
-+ __ JumpIfRoot(eax, Heap::kUndefinedValueRootIndex, &use_receiver,
-+ Label::kNear);
-+
-+ // Otherwise we do a smi check and fall through to check if the return value
-+ // is a valid receiver.
-+
-+ // If the result is a smi, it is *not* an object in the ECMA sense.
-+ __ JumpIfSmi(eax, &other_result, Label::kNear);
-+
-+ // If the type of the result (stored in its map) is less than
-+ // FIRST_JS_RECEIVER_TYPE, it is not an object in the ECMA sense.
-+ STATIC_ASSERT(LAST_JS_RECEIVER_TYPE == LAST_TYPE);
-+ __ CmpObjectType(eax, FIRST_JS_RECEIVER_TYPE, ecx);
-+ __ j(above_equal, &leave_frame, Label::kNear);
-+
-+ // The result is now neither undefined nor an object.
-+ __ bind(&other_result);
-+ __ mov(ebx, Operand(ebp, ConstructFrameConstants::kConstructorOffset));
-+ __ mov(ebx, FieldOperand(ebx, JSFunction::kSharedFunctionInfoOffset));
-+ __ test(FieldOperand(ebx, SharedFunctionInfo::kCompilerHintsOffset),
-+ Immediate(SharedFunctionInfo::kClassConstructorMask));
-+
-+ if (restrict_constructor_return) {
-+ // Throw if constructor function is a class constructor
-+ __ j(Condition::zero, &use_receiver, Label::kNear);
-+ } else {
-+ __ j(not_zero, &use_receiver, Label::kNear);
-+ __ CallRuntime(
-+ Runtime::kIncrementUseCounterConstructorReturnNonUndefinedPrimitive);
-+ __ jmp(&use_receiver, Label::kNear);
-+ }
-+
-+ __ bind(&do_throw);
-+ __ CallRuntime(Runtime::kThrowConstructorReturnedNonObject);
-+
-+ // Throw away the result of the constructor invocation and use the
-+ // on-stack receiver as the result.
-+ __ bind(&use_receiver);
-+ __ mov(eax, Operand(esp, 0 * kPointerSize));
-+ __ JumpIfRoot(eax, Heap::kTheHoleValueRootIndex, &do_throw);
-+
-+ __ bind(&leave_frame);
-+ // Restore smi-tagged arguments count from the frame.
-+ __ mov(ebx, Operand(ebp, ConstructFrameConstants::kLengthOffset));
-+ // Leave construct frame.
-+ }
-+ // Remove caller arguments from the stack and return.
-+ STATIC_ASSERT(kSmiTagSize == 1 && kSmiTag == 0);
-+ __ pop(ecx);
-+ __ lea(esp, Operand(esp, ebx, times_2, 1 * kPointerSize)); // 1 ~ receiver
-+ __ push(ecx);
-+ __ ret(0);
-+}
-+} // namespace
-+
-+void Builtins::Generate_JSConstructStubGenericRestrictedReturn(
-+ MacroAssembler* masm) {
-+ return Generate_JSConstructStubGeneric(masm, true);
-+}
-+void Builtins::Generate_JSConstructStubGenericUnrestrictedReturn(
-+ MacroAssembler* masm) {
-+ return Generate_JSConstructStubGeneric(masm, false);
-+}
-+void Builtins::Generate_JSConstructStubApi(MacroAssembler* masm) {
-+ Generate_JSBuiltinsConstructStubHelper(masm);
-+}
-+void Builtins::Generate_JSBuiltinsConstructStub(MacroAssembler* masm) {
-+ Generate_JSBuiltinsConstructStubHelper(masm);
-+}
-+
-+void Builtins::Generate_ConstructedNonConstructable(MacroAssembler* masm) {
-+ FrameScope scope(masm, StackFrame::INTERNAL);
-+ __ push(edi);
-+ __ CallRuntime(Runtime::kThrowConstructedNonConstructable);
-+}
-+
-+enum IsTagged { kEaxIsSmiTagged, kEaxIsUntaggedInt };
-+
-+// Clobbers ecx, edx, edi; preserves all other registers.
-+static void Generate_CheckStackOverflow(MacroAssembler* masm,
-+ IsTagged eax_is_tagged) {
-+ // eax : the number of items to be pushed to the stack
-+ //
-+ // Check the stack for overflow. We are not trying to catch
-+ // interruptions (e.g. debug break and preemption) here, so the "real stack
-+ // limit" is checked.
-+ Label okay;
-+ ExternalReference real_stack_limit =
-+ ExternalReference::address_of_real_stack_limit(masm->isolate());
-+ __ mov(edi, Operand::StaticVariable(real_stack_limit));
-+ // Make ecx the space we have left. The stack might already be overflowed
-+ // here which will cause ecx to become negative.
-+ __ mov(ecx, esp);
-+ __ sub(ecx, edi);
-+ // Make edx the space we need for the array when it is unrolled onto the
-+ // stack.
-+ __ mov(edx, eax);
-+ int smi_tag = eax_is_tagged == kEaxIsSmiTagged ? kSmiTagSize : 0;
-+ __ shl(edx, kPointerSizeLog2 - smi_tag);
-+ // Check if the arguments will overflow the stack.
-+ __ cmp(ecx, edx);
-+ __ j(greater, &okay); // Signed comparison.
-+
-+ // Out of stack space.
-+ __ CallRuntime(Runtime::kThrowStackOverflow);
-+
-+ __ bind(&okay);
-+}
-+
-+static void Generate_JSEntryTrampolineHelper(MacroAssembler* masm,
-+ bool is_construct) {
-+ ProfileEntryHookStub::MaybeCallEntryHook(masm);
-+
-+ {
-+ FrameScope scope(masm, StackFrame::INTERNAL);
-+
-+ // Setup the context (we need to use the caller context from the isolate).
-+ ExternalReference context_address(IsolateAddressId::kContextAddress,
-+ masm->isolate());
-+ __ mov(esi, Operand::StaticVariable(context_address));
-+
-+ // Load the previous frame pointer (ebx) to access C arguments
-+ __ mov(ebx, Operand(ebp, 0));
-+
-+ // Push the function and the receiver onto the stack.
-+ __ push(Operand(ebx, EntryFrameConstants::kFunctionArgOffset));
-+ __ push(Operand(ebx, EntryFrameConstants::kReceiverArgOffset));
-+
-+ // Load the number of arguments and setup pointer to the arguments.
-+ __ mov(eax, Operand(ebx, EntryFrameConstants::kArgcOffset));
-+ __ mov(ebx, Operand(ebx, EntryFrameConstants::kArgvOffset));
-+
-+ // Check if we have enough stack space to push all arguments.
-+ // Expects argument count in eax. Clobbers ecx, edx, edi.
-+ Generate_CheckStackOverflow(masm, kEaxIsUntaggedInt);
-+
-+ // Copy arguments to the stack in a loop.
-+ Label loop, entry;
-+ __ Move(ecx, Immediate(0));
-+ __ jmp(&entry, Label::kNear);
-+ __ bind(&loop);
-+ __ mov(edx, Operand(ebx, ecx, times_4, 0)); // push parameter from argv
-+ __ push(Operand(edx, 0)); // dereference handle
-+ __ inc(ecx);
-+ __ bind(&entry);
-+ __ cmp(ecx, eax);
-+ __ j(not_equal, &loop);
-+
-+ // Load the previous frame pointer (ebx) to access C arguments
-+ __ mov(ebx, Operand(ebp, 0));
-+
-+ // Get the new.target and function from the frame.
-+ __ mov(edx, Operand(ebx, EntryFrameConstants::kNewTargetArgOffset));
-+ __ mov(edi, Operand(ebx, EntryFrameConstants::kFunctionArgOffset));
-+
-+ // Invoke the code.
-+ Handle<Code> builtin = is_construct
-+ ? masm->isolate()->builtins()->Construct()
-+ : masm->isolate()->builtins()->Call();
-+ __ Call(builtin, RelocInfo::CODE_TARGET);
-+
-+ // Exit the internal frame. Notice that this also removes the empty
-+ // context and the function left on the stack by the code
-+ // invocation.
-+ }
-+ __ ret(kPointerSize); // Remove receiver.
-+}
-+
-+void Builtins::Generate_JSEntryTrampoline(MacroAssembler* masm) {
-+ Generate_JSEntryTrampolineHelper(masm, false);
-+}
-+
-+void Builtins::Generate_JSConstructEntryTrampoline(MacroAssembler* masm) {
-+ Generate_JSEntryTrampolineHelper(masm, true);
-+}
-+
-+// static
-+void Builtins::Generate_ResumeGeneratorTrampoline(MacroAssembler* masm) {
-+ // ----------- S t a t e -------------
-+ // -- eax : the value to pass to the generator
-+ // -- ebx : the JSGeneratorObject to resume
-+ // -- edx : the resume mode (tagged)
-+ // -- esp[0] : return address
-+ // -----------------------------------
-+ __ AssertGeneratorObject(ebx);
-+
-+ // Store input value into generator object.
-+ __ mov(FieldOperand(ebx, JSGeneratorObject::kInputOrDebugPosOffset), eax);
-+ __ RecordWriteField(ebx, JSGeneratorObject::kInputOrDebugPosOffset, eax, ecx,
-+ kDontSaveFPRegs);
-+
-+ // Store resume mode into generator object.
-+ __ mov(FieldOperand(ebx, JSGeneratorObject::kResumeModeOffset), edx);
-+
-+ // Load suspended function and context.
-+ __ mov(edi, FieldOperand(ebx, JSGeneratorObject::kFunctionOffset));
-+ __ mov(esi, FieldOperand(edi, JSFunction::kContextOffset));
-+
-+ // Flood function if we are stepping.
-+ Label prepare_step_in_if_stepping, prepare_step_in_suspended_generator;
-+ Label stepping_prepared;
-+ ExternalReference debug_hook =
-+ ExternalReference::debug_hook_on_function_call_address(masm->isolate());
-+ __ cmpb(Operand::StaticVariable(debug_hook), Immediate(0));
-+ __ j(not_equal, &prepare_step_in_if_stepping);
-+
-+ // Flood function if we need to continue stepping in the suspended generator.
-+ ExternalReference debug_suspended_generator =
-+ ExternalReference::debug_suspended_generator_address(masm->isolate());
-+ __ cmp(ebx, Operand::StaticVariable(debug_suspended_generator));
-+ __ j(equal, &prepare_step_in_suspended_generator);
-+ __ bind(&stepping_prepared);
-+
-+ // Pop return address.
-+ __ PopReturnAddressTo(eax);
-+
-+ // Push receiver.
-+ __ Push(FieldOperand(ebx, JSGeneratorObject::kReceiverOffset));
-+
-+ // ----------- S t a t e -------------
-+ // -- eax : return address
-+ // -- ebx : the JSGeneratorObject to resume
-+ // -- edx : the resume mode (tagged)
-+ // -- edi : generator function
-+ // -- esi : generator context
-+ // -- esp[0] : generator receiver
-+ // -----------------------------------
-+
-+ // Push holes for arguments to generator function. Since the parser forced
-+ // context allocation for any variables in generators, the actual argument
-+ // values have already been copied into the context and these dummy values
-+ // will never be used.
-+ __ mov(ecx, FieldOperand(edi, JSFunction::kSharedFunctionInfoOffset));
-+ __ mov(ecx,
-+ FieldOperand(ecx, SharedFunctionInfo::kFormalParameterCountOffset));
-+ {
-+ Label done_loop, loop;
-+ __ bind(&loop);
-+ __ sub(ecx, Immediate(1));
-+ __ j(carry, &done_loop, Label::kNear);
-+ __ PushRoot(Heap::kTheHoleValueRootIndex);
-+ __ jmp(&loop);
-+ __ bind(&done_loop);
-+ }
-+
-+ // Underlying function needs to have bytecode available.
-+ if (FLAG_debug_code) {
-+ __ mov(ecx, FieldOperand(edi, JSFunction::kSharedFunctionInfoOffset));
-+ __ mov(ecx, FieldOperand(ecx, SharedFunctionInfo::kFunctionDataOffset));
-+ __ CmpObjectType(ecx, BYTECODE_ARRAY_TYPE, ecx);
-+ __ Assert(equal, kMissingBytecodeArray);
-+ }
-+
-+ // Resume (Ignition/TurboFan) generator object.
-+ {
-+ __ PushReturnAddressFrom(eax);
-+ __ mov(eax, FieldOperand(edi, JSFunction::kSharedFunctionInfoOffset));
-+ __ mov(eax,
-+ FieldOperand(eax, SharedFunctionInfo::kFormalParameterCountOffset));
-+ // We abuse new.target both to indicate that this is a resume call and to
-+ // pass in the generator object. In ordinary calls, new.target is always
-+ // undefined because generator functions are non-constructable.
-+ __ mov(edx, ebx);
-+ __ jmp(FieldOperand(edi, JSFunction::kCodeEntryOffset));
-+ }
-+
-+ __ bind(&prepare_step_in_if_stepping);
-+ {
-+ FrameScope scope(masm, StackFrame::INTERNAL);
-+ __ Push(ebx);
-+ __ Push(edx);
-+ __ Push(edi);
-+ __ CallRuntime(Runtime::kDebugOnFunctionCall);
-+ __ Pop(edx);
-+ __ Pop(ebx);
-+ __ mov(edi, FieldOperand(ebx, JSGeneratorObject::kFunctionOffset));
-+ }
-+ __ jmp(&stepping_prepared);
-+
-+ __ bind(&prepare_step_in_suspended_generator);
-+ {
-+ FrameScope scope(masm, StackFrame::INTERNAL);
-+ __ Push(ebx);
-+ __ Push(edx);
-+ __ CallRuntime(Runtime::kDebugPrepareStepInSuspendedGenerator);
-+ __ Pop(edx);
-+ __ Pop(ebx);
-+ __ mov(edi, FieldOperand(ebx, JSGeneratorObject::kFunctionOffset));
-+ }
-+ __ jmp(&stepping_prepared);
-+}
-+
-+static void ReplaceClosureEntryWithOptimizedCode(
-+ MacroAssembler* masm, Register optimized_code_entry, Register closure,
-+ Register scratch1, Register scratch2, Register scratch3) {
-+ Register native_context = scratch1;
-+
-+ // Store the optimized code in the closure.
-+ __ lea(optimized_code_entry,
-+ FieldOperand(optimized_code_entry, Code::kHeaderSize));
-+ __ mov(FieldOperand(closure, JSFunction::kCodeEntryOffset),
-+ optimized_code_entry);
-+ __ RecordWriteCodeEntryField(closure, optimized_code_entry, scratch2);
-+
-+ // Link the closure into the optimized function list.
-+ __ mov(native_context, NativeContextOperand());
-+ __ mov(scratch3,
-+ ContextOperand(native_context, Context::OPTIMIZED_FUNCTIONS_LIST));
-+ __ mov(FieldOperand(closure, JSFunction::kNextFunctionLinkOffset), scratch3);
-+ __ RecordWriteField(closure, JSFunction::kNextFunctionLinkOffset, scratch3,
-+ scratch2, kDontSaveFPRegs, EMIT_REMEMBERED_SET,
-+ OMIT_SMI_CHECK);
-+ const int function_list_offset =
-+ Context::SlotOffset(Context::OPTIMIZED_FUNCTIONS_LIST);
-+ __ mov(ContextOperand(native_context, Context::OPTIMIZED_FUNCTIONS_LIST),
-+ closure);
-+ // Save closure before the write barrier.
-+ __ mov(scratch3, closure);
-+ __ RecordWriteContextSlot(native_context, function_list_offset, closure,
-+ scratch2, kDontSaveFPRegs);
-+ __ mov(closure, scratch3);
-+}
-+
-+static void LeaveInterpreterFrame(MacroAssembler* masm, Register scratch1,
-+ Register scratch2) {
-+ Register args_count = scratch1;
-+ Register return_pc = scratch2;
-+
-+ // Get the arguments + receiver count.
-+ __ mov(args_count,
-+ Operand(ebp, InterpreterFrameConstants::kBytecodeArrayFromFp));
-+ __ mov(args_count,
-+ FieldOperand(args_count, BytecodeArray::kParameterSizeOffset));
-+
-+ // Leave the frame (also dropping the register file).
-+ __ leave();
-+
-+ // Drop receiver + arguments.
-+ __ pop(return_pc);
-+ __ add(esp, args_count);
-+ __ push(return_pc);
-+}
-+
-+// Tail-call |function_id| if |smi_entry| == |marker|
-+static void TailCallRuntimeIfMarkerEquals(MacroAssembler* masm,
-+ Register smi_entry,
-+ OptimizationMarker marker,
-+ Runtime::FunctionId function_id) {
-+ Label no_match;
-+ __ cmp(smi_entry, Immediate(Smi::FromEnum(marker)));
-+ __ j(not_equal, &no_match, Label::kNear);
-+ GenerateTailCallToReturnedCode(masm, function_id);
-+ __ bind(&no_match);
-+}
-+
-+static void MaybeTailCallOptimizedCodeSlot(MacroAssembler* masm,
-+ Register feedback_vector,
-+ Register scratch) {
-+ // ----------- S t a t e -------------
-+ // -- eax : argument count (preserved for callee if needed, and caller)
-+ // -- edx : new target (preserved for callee if needed, and caller)
-+ // -- edi : target function (preserved for callee if needed, and caller)
-+ // -- feedback vector (preserved for caller if needed)
-+ // -----------------------------------
-+ DCHECK(!AreAliased(feedback_vector, eax, edx, edi, scratch));
-+
-+ Label optimized_code_slot_is_cell, fallthrough;
-+
-+ Register closure = edi;
-+ Register optimized_code_entry = scratch;
-+
-+ const int kOptimizedCodeCellOffset =
-+ FeedbackVector::kOptimizedCodeIndex * kPointerSize +
-+ FeedbackVector::kHeaderSize;
-+ __ mov(optimized_code_entry,
-+ FieldOperand(feedback_vector, kOptimizedCodeCellOffset));
-+
-+ // Check if the code entry is a Smi. If yes, we interpret it as an
-+ // optimisation marker. Otherwise, interpret it as a weak cell to a code
-+ // object.
-+ __ JumpIfNotSmi(optimized_code_entry, &optimized_code_slot_is_cell);
-+
-+ {
-+ // Optimized code slot is an optimization marker.
-+
-+ // Fall through if no optimization trigger.
-+ __ cmp(optimized_code_entry,
-+ Immediate(Smi::FromEnum(OptimizationMarker::kNone)));
-+ __ j(equal, &fallthrough);
-+
-+ TailCallRuntimeIfMarkerEquals(masm, optimized_code_entry,
-+ OptimizationMarker::kCompileOptimized,
-+ Runtime::kCompileOptimized_NotConcurrent);
-+ TailCallRuntimeIfMarkerEquals(
-+ masm, optimized_code_entry,
-+ OptimizationMarker::kCompileOptimizedConcurrent,
-+ Runtime::kCompileOptimized_Concurrent);
-+
-+ {
-+ // Otherwise, the marker is InOptimizationQueue.
-+ if (FLAG_debug_code) {
-+ __ cmp(
-+ optimized_code_entry,
-+ Immediate(Smi::FromEnum(OptimizationMarker::kInOptimizationQueue)));
-+ __ Assert(equal, kExpectedOptimizationSentinel);
-+ }
-+
-+ // Checking whether the queued function is ready for install is optional,
-+ // since we come across interrupts and stack checks elsewhere. However,
-+ // not checking may delay installing ready functions, and always checking
-+ // would be quite expensive. A good compromise is to first check against
-+ // stack limit as a cue for an interrupt signal.
-+ ExternalReference stack_limit =
-+ ExternalReference::address_of_stack_limit(masm->isolate());
-+ __ cmp(esp, Operand::StaticVariable(stack_limit));
-+ __ j(above_equal, &fallthrough);
-+ GenerateTailCallToReturnedCode(masm, Runtime::kTryInstallOptimizedCode);
-+ }
-+ }
-+
-+ {
-+ // Optimized code slot is a WeakCell.
-+ __ bind(&optimized_code_slot_is_cell);
-+
-+ __ mov(optimized_code_entry,
-+ FieldOperand(optimized_code_entry, WeakCell::kValueOffset));
-+ __ JumpIfSmi(optimized_code_entry, &fallthrough);
-+
-+ // Check if the optimized code is marked for deopt. If it is, bailout to a
-+ // given label.
-+ Label found_deoptimized_code;
-+ __ test(FieldOperand(optimized_code_entry, Code::kKindSpecificFlags1Offset),
-+ Immediate(1 << Code::kMarkedForDeoptimizationBit));
-+ __ j(not_zero, &found_deoptimized_code);
-+
-+ // Optimized code is good, get it into the closure and link the closure into
-+ // the optimized functions list, then tail call the optimized code.
-+ __ push(eax);
-+ __ push(edx);
-+ // The feedback vector is no longer used, so re-use it as a scratch
-+ // register.
-+ ReplaceClosureEntryWithOptimizedCode(masm, optimized_code_entry, closure,
-+ edx, eax, feedback_vector);
-+ __ pop(edx);
-+ __ pop(eax);
-+ __ jmp(optimized_code_entry);
-+
-+ // Optimized code slot contains deoptimized code, evict it and re-enter the
-+ // closure's code.
-+ __ bind(&found_deoptimized_code);
-+ GenerateTailCallToReturnedCode(masm, Runtime::kEvictOptimizedCodeSlot);
-+ }
-+
-+ // Fall-through if the optimized code cell is clear and there is no
-+ // optimization marker.
-+ __ bind(&fallthrough);
-+}
-+
-+// Generate code for entering a JS function with the interpreter.
-+// On entry to the function the receiver and arguments have been pushed on the
-+// stack left to right. The actual argument count matches the formal parameter
-+// count expected by the function.
-+//
-+// The live registers are:
-+// o edi: the JS function object being called
-+// o edx: the new target
-+// o esi: our context
-+// o ebp: the caller's frame pointer
-+// o esp: stack pointer (pointing to return address)
-+//
-+// The function builds an interpreter frame. See InterpreterFrameConstants in
-+// frames.h for its layout.
-+void Builtins::Generate_InterpreterEntryTrampoline(MacroAssembler* masm) {
-+ ProfileEntryHookStub::MaybeCallEntryHook(masm);
-+
-+ Register closure = edi;
-+ Register feedback_vector = ebx;
-+
-+ // Load the feedback vector from the closure.
-+ __ mov(feedback_vector,
-+ FieldOperand(closure, JSFunction::kFeedbackVectorOffset));
-+ __ mov(feedback_vector, FieldOperand(feedback_vector, Cell::kValueOffset));
-+ // Read off the optimized code slot in the feedback vector, and if there
-+ // is optimized code or an optimization marker, call that instead.
-+ MaybeTailCallOptimizedCodeSlot(masm, feedback_vector, ecx);
-+
-+ // Open a frame scope to indicate that there is a frame on the stack. The
-+ // MANUAL indicates that the scope shouldn't actually generate code to set
-+ // up the frame (that is done below).
-+ FrameScope frame_scope(masm, StackFrame::MANUAL);
-+ __ push(ebp); // Caller's frame pointer.
-+ __ mov(ebp, esp);
-+ __ push(esi); // Callee's context.
-+ __ push(edi); // Callee's JS function.
-+ __ push(edx); // Callee's new target.
-+
-+ // Get the bytecode array from the function object (or from the DebugInfo if
-+ // it is present) and load it into kInterpreterBytecodeArrayRegister.
-+ Label maybe_load_debug_bytecode_array, bytecode_array_loaded;
-+ __ mov(eax, FieldOperand(edi, JSFunction::kSharedFunctionInfoOffset));
-+ __ mov(kInterpreterBytecodeArrayRegister,
-+ FieldOperand(eax, SharedFunctionInfo::kFunctionDataOffset));
-+ __ JumpIfNotSmi(FieldOperand(eax, SharedFunctionInfo::kDebugInfoOffset),
-+ &maybe_load_debug_bytecode_array);
-+ __ bind(&bytecode_array_loaded);
-+
-+ // Check whether we should continue to use the interpreter.
-+ // TODO(rmcilroy) Remove self healing once liveedit only has to deal with
-+ // Ignition bytecode.
-+ Label switch_to_different_code_kind;
-+ __ Move(ecx, masm->CodeObject()); // Self-reference to this code.
-+ __ cmp(ecx, FieldOperand(eax, SharedFunctionInfo::kCodeOffset));
-+ __ j(not_equal, &switch_to_different_code_kind);
-+
-+ // Increment invocation count for the function.
-+ __ add(FieldOperand(feedback_vector,
-+ FeedbackVector::kInvocationCountIndex * kPointerSize +
-+ FeedbackVector::kHeaderSize),
-+ Immediate(Smi::FromInt(1)));
-+
-+ // Check function data field is actually a BytecodeArray object.
-+ if (FLAG_debug_code) {
-+ __ AssertNotSmi(kInterpreterBytecodeArrayRegister);
-+ __ CmpObjectType(kInterpreterBytecodeArrayRegister, BYTECODE_ARRAY_TYPE,
-+ eax);
-+ __ Assert(equal, kFunctionDataShouldBeBytecodeArrayOnInterpreterEntry);
-+ }
-+
-+ // Reset code age.
-+ __ mov_b(FieldOperand(kInterpreterBytecodeArrayRegister,
-+ BytecodeArray::kBytecodeAgeOffset),
-+ Immediate(BytecodeArray::kNoAgeBytecodeAge));
-+
-+ // Push bytecode array.
-+ __ push(kInterpreterBytecodeArrayRegister);
-+ // Push Smi tagged initial bytecode array offset.
-+ __ push(Immediate(Smi::FromInt(BytecodeArray::kHeaderSize - kHeapObjectTag)));
-+
-+ // Allocate the local and temporary register file on the stack.
-+ {
-+ // Load frame size from the BytecodeArray object.
-+ __ mov(ebx, FieldOperand(kInterpreterBytecodeArrayRegister,
-+ BytecodeArray::kFrameSizeOffset));
-+
-+ // Do a stack check to ensure we don't go over the limit.
-+ Label ok;
-+ __ mov(ecx, esp);
-+ __ sub(ecx, ebx);
-+ ExternalReference stack_limit =
-+ ExternalReference::address_of_real_stack_limit(masm->isolate());
-+ __ cmp(ecx, Operand::StaticVariable(stack_limit));
-+ __ j(above_equal, &ok);
-+ __ CallRuntime(Runtime::kThrowStackOverflow);
-+ __ bind(&ok);
-+
-+ // If ok, push undefined as the initial value for all register file entries.
-+ Label loop_header;
-+ Label loop_check;
-+ __ mov(eax, Immediate(masm->isolate()->factory()->undefined_value()));
-+ __ jmp(&loop_check);
-+ __ bind(&loop_header);
-+ // TODO(rmcilroy): Consider doing more than one push per loop iteration.
-+ __ push(eax);
-+ // Continue loop if not done.
-+ __ bind(&loop_check);
-+ __ sub(ebx, Immediate(kPointerSize));
-+ __ j(greater_equal, &loop_header);
-+ }
-+
-+ // Load accumulator, bytecode offset and dispatch table into registers.
-+ __ LoadRoot(kInterpreterAccumulatorRegister, Heap::kUndefinedValueRootIndex);
-+ __ mov(kInterpreterBytecodeOffsetRegister,
-+ Immediate(BytecodeArray::kHeaderSize - kHeapObjectTag));
-+ __ mov(kInterpreterDispatchTableRegister,
-+ Immediate(ExternalReference::interpreter_dispatch_table_address(
-+ masm->isolate())));
-+
-+ // Dispatch to the first bytecode handler for the function.
-+ __ movzx_b(ebx, Operand(kInterpreterBytecodeArrayRegister,
-+ kInterpreterBytecodeOffsetRegister, times_1, 0));
-+ __ mov(ebx, Operand(kInterpreterDispatchTableRegister, ebx,
-+ times_pointer_size, 0));
-+ __ call(ebx);
-+
-+ masm->isolate()->heap()->SetInterpreterEntryReturnPCOffset(masm->pc_offset());
-+
-+ // The return value is in eax.
-+ LeaveInterpreterFrame(masm, ebx, ecx);
-+ __ ret(0);
-+
-+ // Load debug copy of the bytecode array if it exists.
-+ // kInterpreterBytecodeArrayRegister is already loaded with
-+ // SharedFunctionInfo::kFunctionDataOffset.
-+ __ bind(&maybe_load_debug_bytecode_array);
-+ __ push(ebx); // feedback_vector == ebx, so save it.
-+ __ mov(ecx, FieldOperand(eax, SharedFunctionInfo::kDebugInfoOffset));
-+ __ mov(ebx, FieldOperand(ecx, DebugInfo::kFlagsOffset));
-+ __ SmiUntag(ebx);
-+ __ test(ebx, Immediate(DebugInfo::kHasBreakInfo));
-+ __ pop(ebx);
-+ __ j(zero, &bytecode_array_loaded);
-+ __ mov(kInterpreterBytecodeArrayRegister,
-+ FieldOperand(ecx, DebugInfo::kDebugBytecodeArrayOffset));
-+ __ jmp(&bytecode_array_loaded);
-+
-+ // If the shared code is no longer this entry trampoline, then the underlying
-+ // function has been switched to a different kind of code and we heal the
-+ // closure by switching the code entry field over to the new code as well.
-+ __ bind(&switch_to_different_code_kind);
-+ __ pop(edx); // Callee's new target.
-+ __ pop(edi); // Callee's JS function.
-+ __ pop(esi); // Callee's context.
-+ __ leave(); // Leave the frame so we can tail call.
-+ __ mov(ecx, FieldOperand(edi, JSFunction::kSharedFunctionInfoOffset));
-+ __ mov(ecx, FieldOperand(ecx, SharedFunctionInfo::kCodeOffset));
-+ __ lea(ecx, FieldOperand(ecx, Code::kHeaderSize));
-+ __ mov(FieldOperand(edi, JSFunction::kCodeEntryOffset), ecx);
-+ __ RecordWriteCodeEntryField(edi, ecx, ebx);
-+ __ jmp(ecx);
-+}
-+
-+static void Generate_StackOverflowCheck(MacroAssembler* masm, Register num_args,
-+ Register scratch1, Register scratch2,
-+ Label* stack_overflow,
-+ bool include_receiver = false) {
-+ // Check the stack for overflow. We are not trying to catch
-+ // interruptions (e.g. debug break and preemption) here, so the "real stack
-+ // limit" is checked.
-+ ExternalReference real_stack_limit =
-+ ExternalReference::address_of_real_stack_limit(masm->isolate());
-+ __ mov(scratch1, Operand::StaticVariable(real_stack_limit));
-+ // Make scratch2 the space we have left. The stack might already be overflowed
-+ // here which will cause scratch2 to become negative.
-+ __ mov(scratch2, esp);
-+ __ sub(scratch2, scratch1);
-+ // Make scratch1 the space we need for the array when it is unrolled onto the
-+ // stack.
-+ __ mov(scratch1, num_args);
-+ if (include_receiver) {
-+ __ add(scratch1, Immediate(1));
-+ }
-+ __ shl(scratch1, kPointerSizeLog2);
-+ // Check if the arguments will overflow the stack.
-+ __ cmp(scratch2, scratch1);
-+ __ j(less_equal, stack_overflow); // Signed comparison.
-+}
-+
-+static void Generate_InterpreterPushArgs(MacroAssembler* masm,
-+ Register array_limit,
-+ Register start_address) {
-+ // ----------- S t a t e -------------
-+ // -- start_address : Pointer to the last argument in the args array.
-+ // -- array_limit : Pointer to one before the first argument in the
-+ // args array.
-+ // -----------------------------------
-+ Label loop_header, loop_check;
-+ __ jmp(&loop_check);
-+ __ bind(&loop_header);
-+ __ Push(Operand(start_address, 0));
-+ __ sub(start_address, Immediate(kPointerSize));
-+ __ bind(&loop_check);
-+ __ cmp(start_address, array_limit);
-+ __ j(greater, &loop_header, Label::kNear);
-+}
-+
-+// static
-+void Builtins::Generate_InterpreterPushArgsThenCallImpl(
-+ MacroAssembler* masm, ConvertReceiverMode receiver_mode,
-+ InterpreterPushArgsMode mode) {
-+ // ----------- S t a t e -------------
-+ // -- eax : the number of arguments (not including the receiver)
-+ // -- ebx : the address of the first argument to be pushed. Subsequent
-+ // arguments should be consecutive above this, in the same order as
-+ // they are to be pushed onto the stack.
-+ // -- edi : the target to call (can be any Object).
-+ // -----------------------------------
-+ Label stack_overflow;
-+ // Compute the expected number of arguments.
-+ __ mov(ecx, eax);
-+ __ add(ecx, Immediate(1)); // Add one for receiver.
-+
-+ // Add a stack check before pushing the arguments. We need an extra register
-+ // to perform a stack check. So push it onto the stack temporarily. This
-+ // might cause stack overflow, but it will be detected by the check.
-+ __ Push(edi);
-+ Generate_StackOverflowCheck(masm, ecx, edx, edi, &stack_overflow);
-+ __ Pop(edi);
-+
-+ // Pop return address to allow tail-call after pushing arguments.
-+ __ Pop(edx);
-+
-+ // Push "undefined" as the receiver arg if we need to.
-+ if (receiver_mode == ConvertReceiverMode::kNullOrUndefined) {
-+ __ PushRoot(Heap::kUndefinedValueRootIndex);
-+ __ sub(ecx, Immediate(1)); // Subtract one for receiver.
-+ }
-+
-+ // Find the address of the last argument.
-+ __ shl(ecx, kPointerSizeLog2);
-+ __ neg(ecx);
-+ __ add(ecx, ebx);
-+ Generate_InterpreterPushArgs(masm, ecx, ebx);
-+
-+ if (mode == InterpreterPushArgsMode::kWithFinalSpread) {
-+ __ Pop(ebx); // Pass the spread in a register
-+ __ sub(eax, Immediate(1)); // Subtract one for spread
-+ }
-+
-+ // Call the target.
-+ __ Push(edx); // Re-push return address.
-+
-+ if (mode == InterpreterPushArgsMode::kJSFunction) {
-+ __ Jump(
-+ masm->isolate()->builtins()->CallFunction(ConvertReceiverMode::kAny),
-+ RelocInfo::CODE_TARGET);
-+ } else if (mode == InterpreterPushArgsMode::kWithFinalSpread) {
-+ __ Jump(masm->isolate()->builtins()->CallWithSpread(),
-+ RelocInfo::CODE_TARGET);
-+ } else {
-+ __ Jump(masm->isolate()->builtins()->Call(ConvertReceiverMode::kAny),
-+ RelocInfo::CODE_TARGET);
-+ }
-+
-+ __ bind(&stack_overflow);
-+ {
-+ // Pop the temporary registers, so that return address is on top of stack.
-+ __ Pop(edi);
-+
-+ __ TailCallRuntime(Runtime::kThrowStackOverflow);
-+
-+ // This should be unreachable.
-+ __ int3();
-+ }
-+}
-+
-+namespace {
-+
-+ // This function modifies start_addr, and only reads the contents of num_args
-+ // register. scratch1 and scratch2 are used as temporary registers. Their
-+ // original values are restored after the use.
-+void Generate_InterpreterPushZeroAndArgsAndReturnAddress(
-+ MacroAssembler* masm, Register num_args, Register start_addr,
-+ Register scratch1, Register scratch2, int num_slots_above_ret_addr,
-+ Label* stack_overflow) {
-+ // We have to move return address and the temporary registers above it
-+ // before we can copy arguments onto the stack. To achieve this:
-+ // Step 1: Increment the stack pointer by num_args + 1 (for receiver).
-+ // Step 2: Move the return address and values above it to the top of stack.
-+ // Step 3: Copy the arguments into the correct locations.
-+ // current stack =====> required stack layout
-+ // | | | scratch1 | (2) <-- esp(1)
-+ // | | | .... | (2)
-+ // | | | scratch-n | (2)
-+ // | | | return addr | (2)
-+ // | | | arg N | (3)
-+ // | scratch1 | <-- esp | .... |
-+ // | .... | | arg 1 |
-+ // | scratch-n | | arg 0 |
-+ // | return addr | | receiver slot |
-+
-+ // Check for stack overflow before we increment the stack pointer.
-+ Generate_StackOverflowCheck(masm, num_args, scratch1, scratch2,
-+ stack_overflow, true);
-+
-+// Step 1 - Update the stack pointer. scratch1 already contains the required
-+// increment to the stack. i.e. num_args + 1 stack slots. This is computed in
-+// the Generate_StackOverflowCheck.
-+
-+#ifdef _MSC_VER
-+ // TODO(mythria): Move it to macro assembler.
-+ // In windows, we cannot increment the stack size by more than one page
-+ // (minimum page size is 4KB) without accessing at least one byte on the
-+ // page. Check this:
-+ // https://msdn.microsoft.com/en-us/library/aa227153(v=vs.60).aspx.
-+ const int page_size = 4 * 1024;
-+ Label check_offset, update_stack_pointer;
-+ __ bind(&check_offset);
-+ __ cmp(scratch1, page_size);
-+ __ j(less, &update_stack_pointer);
-+ __ sub(esp, Immediate(page_size));
-+ // Just to touch the page, before we increment further.
-+ __ mov(Operand(esp, 0), Immediate(0));
-+ __ sub(scratch1, Immediate(page_size));
-+ __ jmp(&check_offset);
-+ __ bind(&update_stack_pointer);
-+#endif
-+
-+ __ sub(esp, scratch1);
-+
-+ // Step 2 move return_address and slots above it to the correct locations.
-+ // Move from top to bottom, otherwise we may overwrite when num_args = 0 or 1,
-+ // basically when the source and destination overlap. We at least need one
-+ // extra slot for receiver, so no extra checks are required to avoid copy.
-+ for (int i = 0; i < num_slots_above_ret_addr + 1; i++) {
-+ __ mov(scratch1,
-+ Operand(esp, num_args, times_pointer_size, (i + 1) * kPointerSize));
-+ __ mov(Operand(esp, i * kPointerSize), scratch1);
-+ }
-+
-+ // Step 3 copy arguments to correct locations.
-+ // Slot meant for receiver contains return address. Reset it so that
-+ // we will not incorrectly interpret return address as an object.
-+ __ mov(Operand(esp, num_args, times_pointer_size,
-+ (num_slots_above_ret_addr + 1) * kPointerSize),
-+ Immediate(0));
-+ __ mov(scratch1, num_args);
-+
-+ Label loop_header, loop_check;
-+ __ jmp(&loop_check);
-+ __ bind(&loop_header);
-+ __ mov(scratch2, Operand(start_addr, 0));
-+ __ mov(Operand(esp, scratch1, times_pointer_size,
-+ num_slots_above_ret_addr * kPointerSize),
-+ scratch2);
-+ __ sub(start_addr, Immediate(kPointerSize));
-+ __ sub(scratch1, Immediate(1));
-+ __ bind(&loop_check);
-+ __ cmp(scratch1, Immediate(0));
-+ __ j(greater, &loop_header, Label::kNear);
-+}
-+
-+} // end anonymous namespace
-+
-+// static
-+void Builtins::Generate_InterpreterPushArgsThenConstructImpl(
-+ MacroAssembler* masm, InterpreterPushArgsMode mode) {
-+ // ----------- S t a t e -------------
-+ // -- eax : the number of arguments (not including the receiver)
-+ // -- edx : the new target
-+ // -- edi : the constructor
-+ // -- ebx : allocation site feedback (if available or undefined)
-+ // -- ecx : the address of the first argument to be pushed. Subsequent
-+ // arguments should be consecutive above this, in the same order as
-+ // they are to be pushed onto the stack.
-+ // -----------------------------------
-+ Label stack_overflow;
-+ // We need two scratch registers. Push edi and edx onto stack.
-+ __ Push(edi);
-+ __ Push(edx);
-+
-+ // Push arguments and move return address to the top of stack.
-+ // The eax register is readonly. The ecx register will be modified. The edx
-+ // and edi registers will be modified but restored to their original values.
-+ Generate_InterpreterPushZeroAndArgsAndReturnAddress(masm, eax, ecx, edx, edi,
-+ 2, &stack_overflow);
-+
-+ // Restore edi and edx
-+ __ Pop(edx);
-+ __ Pop(edi);
-+
-+ if (mode == InterpreterPushArgsMode::kWithFinalSpread) {
-+ __ PopReturnAddressTo(ecx);
-+ __ Pop(ebx); // Pass the spread in a register
-+ __ PushReturnAddressFrom(ecx);
-+ __ sub(eax, Immediate(1)); // Subtract one for spread
-+ } else {
-+ __ AssertUndefinedOrAllocationSite(ebx);
-+ }
-+
-+ if (mode == InterpreterPushArgsMode::kJSFunction) {
-+ // Tail call to the function-specific construct stub (still in the caller
-+ // context at this point).
-+ __ AssertFunction(edi);
-+
-+ __ mov(ecx, FieldOperand(edi, JSFunction::kSharedFunctionInfoOffset));
-+ __ mov(ecx, FieldOperand(ecx, SharedFunctionInfo::kConstructStubOffset));
-+ __ lea(ecx, FieldOperand(ecx, Code::kHeaderSize));
-+ __ jmp(ecx);
-+ } else if (mode == InterpreterPushArgsMode::kWithFinalSpread) {
-+ // Call the constructor with unmodified eax, edi, edx values.
-+ __ Jump(masm->isolate()->builtins()->ConstructWithSpread(),
-+ RelocInfo::CODE_TARGET);
-+ } else {
-+ DCHECK_EQ(InterpreterPushArgsMode::kOther, mode);
-+ // Call the constructor with unmodified eax, edi, edx values.
-+ __ Jump(masm->isolate()->builtins()->Construct(), RelocInfo::CODE_TARGET);
-+ }
-+
-+ __ bind(&stack_overflow);
-+ {
-+ // Pop the temporary registers, so that return address is on top of stack.
-+ __ Pop(edx);
-+ __ Pop(edi);
-+
-+ __ TailCallRuntime(Runtime::kThrowStackOverflow);
-+
-+ // This should be unreachable.
-+ __ int3();
-+ }
-+}
-+
-+// static
-+void Builtins::Generate_InterpreterPushArgsThenConstructArray(
-+ MacroAssembler* masm) {
-+ // ----------- S t a t e -------------
-+ // -- eax : the number of arguments (not including the receiver)
-+ // -- edx : the target to call checked to be Array function.
-+ // -- ebx : the allocation site feedback
-+ // -- ecx : the address of the first argument to be pushed. Subsequent
-+ // arguments should be consecutive above this, in the same order as
-+ // they are to be pushed onto the stack.
-+ // -----------------------------------
-+ Label stack_overflow;
-+ // We need two scratch registers. Register edi is available, push edx onto
-+ // stack.
-+ __ Push(edx);
-+
-+ // Push arguments and move return address to the top of stack.
-+ // The eax register is readonly. The ecx register will be modified. The edx
-+ // and edi registers will be modified but restored to their original values.
-+ Generate_InterpreterPushZeroAndArgsAndReturnAddress(masm, eax, ecx, edx, edi,
-+ 1, &stack_overflow);
-+
-+ // Restore edx.
-+ __ Pop(edx);
-+
-+ // Array constructor expects constructor in edi. It is the same as edx here.
-+ __ Move(edi, edx);
-+
-+ ArrayConstructorStub stub(masm->isolate());
-+ __ TailCallStub(&stub);
-+
-+ __ bind(&stack_overflow);
-+ {
-+ // Pop the temporary registers, so that return address is on top of stack.
-+ __ Pop(edx);
-+
-+ __ TailCallRuntime(Runtime::kThrowStackOverflow);
-+
-+ // This should be unreachable.
-+ __ int3();
-+ }
-+}
-+
-+static void Generate_InterpreterEnterBytecode(MacroAssembler* masm) {
-+ // Set the return address to the correct point in the interpreter entry
-+ // trampoline.
-+ Smi* interpreter_entry_return_pc_offset(
-+ masm->isolate()->heap()->interpreter_entry_return_pc_offset());
-+ DCHECK_NE(interpreter_entry_return_pc_offset, Smi::kZero);
-+ __ Move(ebx, masm->isolate()->builtins()->InterpreterEntryTrampoline());
-+ __ add(ebx, Immediate(interpreter_entry_return_pc_offset->value() +
-+ Code::kHeaderSize - kHeapObjectTag));
-+ __ push(ebx);
-+
-+ // Initialize the dispatch table register.
-+ __ mov(kInterpreterDispatchTableRegister,
-+ Immediate(ExternalReference::interpreter_dispatch_table_address(
-+ masm->isolate())));
-+
-+ // Get the bytecode array pointer from the frame.
-+ __ mov(kInterpreterBytecodeArrayRegister,
-+ Operand(ebp, InterpreterFrameConstants::kBytecodeArrayFromFp));
-+
-+ if (FLAG_debug_code) {
-+ // Check function data field is actually a BytecodeArray object.
-+ __ AssertNotSmi(kInterpreterBytecodeArrayRegister);
-+ __ CmpObjectType(kInterpreterBytecodeArrayRegister, BYTECODE_ARRAY_TYPE,
-+ ebx);
-+ __ Assert(equal, kFunctionDataShouldBeBytecodeArrayOnInterpreterEntry);
-+ }
-+
-+ // Get the target bytecode offset from the frame.
-+ __ mov(kInterpreterBytecodeOffsetRegister,
-+ Operand(ebp, InterpreterFrameConstants::kBytecodeOffsetFromFp));
-+ __ SmiUntag(kInterpreterBytecodeOffsetRegister);
-+
-+ // Dispatch to the target bytecode.
-+ __ movzx_b(ebx, Operand(kInterpreterBytecodeArrayRegister,
-+ kInterpreterBytecodeOffsetRegister, times_1, 0));
-+ __ mov(ebx, Operand(kInterpreterDispatchTableRegister, ebx,
-+ times_pointer_size, 0));
-+ __ jmp(ebx);
-+}
-+
-+void Builtins::Generate_InterpreterEnterBytecodeAdvance(MacroAssembler* masm) {
-+ // Advance the current bytecode offset stored within the given interpreter
-+ // stack frame. This simulates what all bytecode handlers do upon completion
-+ // of the underlying operation.
-+ __ mov(ebx, Operand(ebp, InterpreterFrameConstants::kBytecodeArrayFromFp));
-+ __ mov(edx, Operand(ebp, InterpreterFrameConstants::kBytecodeOffsetFromFp));
-+ __ mov(esi, Operand(ebp, StandardFrameConstants::kContextOffset));
-+ {
-+ FrameScope scope(masm, StackFrame::INTERNAL);
-+ __ Push(kInterpreterAccumulatorRegister);
-+ __ Push(ebx); // First argument is the bytecode array.
-+ __ Push(edx); // Second argument is the bytecode offset.
-+ __ CallRuntime(Runtime::kInterpreterAdvanceBytecodeOffset);
-+ __ Move(edx, eax); // Result is the new bytecode offset.
-+ __ Pop(kInterpreterAccumulatorRegister);
-+ }
-+ __ mov(Operand(ebp, InterpreterFrameConstants::kBytecodeOffsetFromFp), edx);
-+
-+ Generate_InterpreterEnterBytecode(masm);
-+}
-+
-+void Builtins::Generate_InterpreterEnterBytecodeDispatch(MacroAssembler* masm) {
-+ Generate_InterpreterEnterBytecode(masm);
-+}
-+
-+void Builtins::Generate_CheckOptimizationMarker(MacroAssembler* masm) {
-+ // ----------- S t a t e -------------
-+ // -- eax : argument count (preserved for callee)
-+ // -- edx : new target (preserved for callee)
-+ // -- edi : target function (preserved for callee)
-+ // -----------------------------------
-+ Register closure = edi;
-+
-+ // Get the feedback vector.
-+ Register feedback_vector = ebx;
-+ __ mov(feedback_vector,
-+ FieldOperand(closure, JSFunction::kFeedbackVectorOffset));
-+ __ mov(feedback_vector, FieldOperand(feedback_vector, Cell::kValueOffset));
-+
-+ // The feedback vector must be defined.
-+ if (FLAG_debug_code) {
-+ __ CompareRoot(feedback_vector, Heap::kUndefinedValueRootIndex);
-+ __ Assert(not_equal, BailoutReason::kExpectedFeedbackVector);
-+ }
-+
-+ // Is there an optimization marker or optimized code in the feedback vector?
-+ MaybeTailCallOptimizedCodeSlot(masm, feedback_vector, ecx);
-+
-+ // Otherwise, tail call the SFI code.
-+ GenerateTailCallToSharedCode(masm);
-+}
-+
-+void Builtins::Generate_CompileLazy(MacroAssembler* masm) {
-+ // ----------- S t a t e -------------
-+ // -- eax : argument count (preserved for callee)
-+ // -- edx : new target (preserved for callee)
-+ // -- edi : target function (preserved for callee)
-+ // -----------------------------------
-+ // First lookup code, maybe we don't need to compile!
-+ Label gotta_call_runtime;
-+
-+ Register closure = edi;
-+ Register feedback_vector = ebx;
-+
-+ // Do we have a valid feedback vector?
-+ __ mov(feedback_vector,
-+ FieldOperand(closure, JSFunction::kFeedbackVectorOffset));
-+ __ mov(feedback_vector, FieldOperand(feedback_vector, Cell::kValueOffset));
-+ __ JumpIfRoot(feedback_vector, Heap::kUndefinedValueRootIndex,
-+ &gotta_call_runtime);
-+
-+ // Is there an optimization marker or optimized code in the feedback vector?
-+ MaybeTailCallOptimizedCodeSlot(masm, feedback_vector, ecx);
-+
-+ // We found no optimized code.
-+ Register entry = ecx;
-+ __ mov(entry, FieldOperand(closure, JSFunction::kSharedFunctionInfoOffset));
-+
-+ // If SFI points to anything other than CompileLazy, install that.
-+ __ mov(entry, FieldOperand(entry, SharedFunctionInfo::kCodeOffset));
-+ __ Move(ebx, masm->CodeObject());
-+ __ cmp(entry, ebx);
-+ __ j(equal, &gotta_call_runtime);
-+
-+ // Install the SFI's code entry.
-+ __ lea(entry, FieldOperand(entry, Code::kHeaderSize));
-+ __ mov(FieldOperand(closure, JSFunction::kCodeEntryOffset), entry);
-+ __ RecordWriteCodeEntryField(closure, entry, ebx);
-+ __ jmp(entry);
-+
-+ __ bind(&gotta_call_runtime);
-+ GenerateTailCallToReturnedCode(masm, Runtime::kCompileLazy);
-+}
-+
-+void Builtins::Generate_InstantiateAsmJs(MacroAssembler* masm) {
-+ // ----------- S t a t e -------------
-+ // -- eax : argument count (preserved for callee)
-+ // -- edx : new target (preserved for callee)
-+ // -- edi : target function (preserved for callee)
-+ // -----------------------------------
-+ Label failed;
-+ {
-+ FrameScope scope(masm, StackFrame::INTERNAL);
-+ // Preserve argument count for later compare.
-+ __ mov(ecx, eax);
-+ // Push the number of arguments to the callee.
-+ __ SmiTag(eax);
-+ __ push(eax);
-+ // Push a copy of the target function and the new target.
-+ __ push(edi);
-+ __ push(edx);
-+
-+ // The function.
-+ __ push(edi);
-+ // Copy arguments from caller (stdlib, foreign, heap).
-+ Label args_done;
-+ for (int j = 0; j < 4; ++j) {
-+ Label over;
-+ if (j < 3) {
-+ __ cmp(ecx, Immediate(j));
-+ __ j(not_equal, &over, Label::kNear);
-+ }
-+ for (int i = j - 1; i >= 0; --i) {
-+ __ Push(Operand(
-+ ebp, StandardFrameConstants::kCallerSPOffset + i * kPointerSize));
-+ }
-+ for (int i = 0; i < 3 - j; ++i) {
-+ __ PushRoot(Heap::kUndefinedValueRootIndex);
-+ }
-+ if (j < 3) {
-+ __ jmp(&args_done, Label::kNear);
-+ __ bind(&over);
-+ }
-+ }
-+ __ bind(&args_done);
-+
-+ // Call runtime, on success unwind frame, and parent frame.
-+ __ CallRuntime(Runtime::kInstantiateAsmJs, 4);
-+ // A smi 0 is returned on failure, an object on success.
-+ __ JumpIfSmi(eax, &failed, Label::kNear);
-+
-+ __ Drop(2);
-+ __ Pop(ecx);
-+ __ SmiUntag(ecx);
-+ scope.GenerateLeaveFrame();
-+
-+ __ PopReturnAddressTo(ebx);
-+ __ inc(ecx);
-+ __ lea(esp, Operand(esp, ecx, times_pointer_size, 0));
-+ __ PushReturnAddressFrom(ebx);
-+ __ ret(0);
-+
-+ __ bind(&failed);
-+ // Restore target function and new target.
-+ __ pop(edx);
-+ __ pop(edi);
-+ __ pop(eax);
-+ __ SmiUntag(eax);
-+ }
-+ // On failure, tail call back to regular js.
-+ GenerateTailCallToReturnedCode(masm, Runtime::kCompileLazy);
-+}
-+
-+static void GenerateMakeCodeYoungAgainCommon(MacroAssembler* masm) {
-+ // For now, we are relying on the fact that make_code_young doesn't do any
-+ // garbage collection which allows us to save/restore the registers without
-+ // worrying about which of them contain pointers. We also don't build an
-+ // internal frame to make the code faster, since we shouldn't have to do stack
-+ // crawls in MakeCodeYoung. This seems a bit fragile.
-+
-+ // Re-execute the code that was patched back to the young age when
-+ // the stub returns.
-+ __ sub(Operand(esp, 0), Immediate(5));
-+ __ pushad();
-+ __ mov(eax, Operand(esp, 8 * kPointerSize));
-+ {
-+ FrameScope scope(masm, StackFrame::MANUAL);
-+ __ PrepareCallCFunction(2, ebx);
-+ __ mov(Operand(esp, 1 * kPointerSize),
-+ Immediate(ExternalReference::isolate_address(masm->isolate())));
-+ __ mov(Operand(esp, 0), eax);
-+ __ CallCFunction(
-+ ExternalReference::get_make_code_young_function(masm->isolate()), 2);
-+ }
-+ __ popad();
-+ __ ret(0);
-+}
-+
-+#define DEFINE_CODE_AGE_BUILTIN_GENERATOR(C) \
-+ void Builtins::Generate_Make##C##CodeYoungAgain(MacroAssembler* masm) { \
-+ GenerateMakeCodeYoungAgainCommon(masm); \
-+ }
-+CODE_AGE_LIST(DEFINE_CODE_AGE_BUILTIN_GENERATOR)
-+#undef DEFINE_CODE_AGE_BUILTIN_GENERATOR
-+
-+void Builtins::Generate_MarkCodeAsExecutedOnce(MacroAssembler* masm) {
-+ // For now, as in GenerateMakeCodeYoungAgainCommon, we are relying on the fact
-+ // that make_code_young doesn't do any garbage collection which allows us to
-+ // save/restore the registers without worrying about which of them contain
-+ // pointers.
-+ __ pushad();
-+ __ mov(eax, Operand(esp, 8 * kPointerSize));
-+ __ sub(eax, Immediate(Assembler::kCallInstructionLength));
-+ { // NOLINT
-+ FrameScope scope(masm, StackFrame::MANUAL);
-+ __ PrepareCallCFunction(2, ebx);
-+ __ mov(Operand(esp, 1 * kPointerSize),
-+ Immediate(ExternalReference::isolate_address(masm->isolate())));
-+ __ mov(Operand(esp, 0), eax);
-+ __ CallCFunction(
-+ ExternalReference::get_mark_code_as_executed_function(masm->isolate()),
-+ 2);
-+ }
-+ __ popad();
-+
-+ // Perform prologue operations usually performed by the young code stub.
-+ __ pop(eax); // Pop return address into scratch register.
-+ __ push(ebp); // Caller's frame pointer.
-+ __ mov(ebp, esp);
-+ __ push(esi); // Callee's context.
-+ __ push(edi); // Callee's JS Function.
-+ __ push(eax); // Push return address after frame prologue.
-+
-+ // Jump to point after the code-age stub.
-+ __ ret(0);
-+}
-+
-+void Builtins::Generate_MarkCodeAsExecutedTwice(MacroAssembler* masm) {
-+ GenerateMakeCodeYoungAgainCommon(masm);
-+}
-+
-+void Builtins::Generate_MarkCodeAsToBeExecutedOnce(MacroAssembler* masm) {
-+ Generate_MarkCodeAsExecutedOnce(masm);
-+}
-+
-+void Builtins::Generate_NotifyBuiltinContinuation(MacroAssembler* masm) {
-+ // Enter an internal frame.
-+ {
-+ FrameScope scope(masm, StackFrame::INTERNAL);
-+ // Preserve possible return result from lazy deopt.
-+ __ push(eax);
-+ __ CallRuntime(Runtime::kNotifyStubFailure, false);
-+ __ pop(eax);
-+ // Tear down internal frame.
-+ }
-+
-+ __ pop(MemOperand(esp, 0)); // Ignore state offset
-+ __ ret(0); // Return to ContinueToBuiltin stub still on stack.
-+}
-+
-+namespace {
-+void Generate_ContinueToBuiltinHelper(MacroAssembler* masm,
-+ bool java_script_builtin,
-+ bool with_result) {
-+ const RegisterConfiguration* config(RegisterConfiguration::Turbofan());
-+ int allocatable_register_count = config->num_allocatable_general_registers();
-+ if (with_result) {
-+ // Overwrite the hole inserted by the deoptimizer with the return value from
-+ // the LAZY deopt point.
-+ __ mov(Operand(esp,
-+ config->num_allocatable_general_registers() * kPointerSize +
-+ BuiltinContinuationFrameConstants::kFixedFrameSize),
-+ eax);
-+ }
-+ for (int i = allocatable_register_count - 1; i >= 0; --i) {
-+ int code = config->GetAllocatableGeneralCode(i);
-+ __ pop(Register::from_code(code));
-+ if (java_script_builtin && code == kJavaScriptCallArgCountRegister.code()) {
-+ __ SmiUntag(Register::from_code(code));
-+ }
-+ }
-+ __ mov(
-+ ebp,
-+ Operand(esp, BuiltinContinuationFrameConstants::kFixedFrameSizeFromFp));
-+ const int offsetToPC =
-+ BuiltinContinuationFrameConstants::kFixedFrameSizeFromFp - kPointerSize;
-+ __ pop(Operand(esp, offsetToPC));
-+ __ Drop(offsetToPC / kPointerSize);
-+ __ add(Operand(esp, 0), Immediate(Code::kHeaderSize - kHeapObjectTag));
-+ __ ret(0);
-+}
-+} // namespace
-+
-+void Builtins::Generate_ContinueToCodeStubBuiltin(MacroAssembler* masm) {
-+ Generate_ContinueToBuiltinHelper(masm, false, false);
-+}
-+
-+void Builtins::Generate_ContinueToCodeStubBuiltinWithResult(
-+ MacroAssembler* masm) {
-+ Generate_ContinueToBuiltinHelper(masm, false, true);
-+}
-+
-+void Builtins::Generate_ContinueToJavaScriptBuiltin(MacroAssembler* masm) {
-+ Generate_ContinueToBuiltinHelper(masm, true, false);
-+}
-+
-+void Builtins::Generate_ContinueToJavaScriptBuiltinWithResult(
-+ MacroAssembler* masm) {
-+ Generate_ContinueToBuiltinHelper(masm, true, true);
-+}
-+
-+static void Generate_NotifyDeoptimizedHelper(MacroAssembler* masm,
-+ Deoptimizer::BailoutType type) {
-+ {
-+ FrameScope scope(masm, StackFrame::INTERNAL);
-+
-+ // Pass deoptimization type to the runtime system.
-+ __ push(Immediate(Smi::FromInt(static_cast<int>(type))));
-+ __ CallRuntime(Runtime::kNotifyDeoptimized);
-+
-+ // Tear down internal frame.
-+ }
-+
-+ // Get the full codegen state from the stack and untag it.
-+ __ mov(ecx, Operand(esp, 1 * kPointerSize));
-+ __ SmiUntag(ecx);
-+
-+ // Switch on the state.
-+ Label not_no_registers, not_tos_eax;
-+ __ cmp(ecx, static_cast<int>(Deoptimizer::BailoutState::NO_REGISTERS));
-+ __ j(not_equal, &not_no_registers, Label::kNear);
-+ __ ret(1 * kPointerSize); // Remove state.
-+
-+ __ bind(&not_no_registers);
-+ DCHECK_EQ(kInterpreterAccumulatorRegister.code(), eax.code());
-+ __ mov(eax, Operand(esp, 2 * kPointerSize));
-+ __ cmp(ecx, static_cast<int>(Deoptimizer::BailoutState::TOS_REGISTER));
-+ __ j(not_equal, &not_tos_eax, Label::kNear);
-+ __ ret(2 * kPointerSize); // Remove state, eax.
-+
-+ __ bind(&not_tos_eax);
-+ __ Abort(kNoCasesLeft);
-+}
-+
-+void Builtins::Generate_NotifyDeoptimized(MacroAssembler* masm) {
-+ Generate_NotifyDeoptimizedHelper(masm, Deoptimizer::EAGER);
-+}
-+
-+void Builtins::Generate_NotifySoftDeoptimized(MacroAssembler* masm) {
-+ Generate_NotifyDeoptimizedHelper(masm, Deoptimizer::SOFT);
-+}
-+
-+void Builtins::Generate_NotifyLazyDeoptimized(MacroAssembler* masm) {
-+ Generate_NotifyDeoptimizedHelper(masm, Deoptimizer::LAZY);
-+}
-+
-+// static
-+void Builtins::Generate_FunctionPrototypeApply(MacroAssembler* masm) {
-+ // ----------- S t a t e -------------
-+ // -- eax : argc
-+ // -- esp[0] : return address
-+ // -- esp[4] : argArray
-+ // -- esp[8] : thisArg
-+ // -- esp[12] : receiver
-+ // -----------------------------------
-+
-+ // 1. Load receiver into edi, argArray into ebx (if present), remove all
-+ // arguments from the stack (including the receiver), and push thisArg (if
-+ // present) instead.
-+ {
-+ Label no_arg_array, no_this_arg;
-+ __ LoadRoot(edx, Heap::kUndefinedValueRootIndex);
-+ __ mov(ebx, edx);
-+ __ mov(edi, Operand(esp, eax, times_pointer_size, kPointerSize));
-+ __ test(eax, eax);
-+ __ j(zero, &no_this_arg, Label::kNear);
-+ {
-+ __ mov(edx, Operand(esp, eax, times_pointer_size, 0));
-+ __ cmp(eax, Immediate(1));
-+ __ j(equal, &no_arg_array, Label::kNear);
-+ __ mov(ebx, Operand(esp, eax, times_pointer_size, -kPointerSize));
-+ __ bind(&no_arg_array);
-+ }
-+ __ bind(&no_this_arg);
-+ __ PopReturnAddressTo(ecx);
-+ __ lea(esp, Operand(esp, eax, times_pointer_size, kPointerSize));
-+ __ Push(edx);
-+ __ PushReturnAddressFrom(ecx);
-+ }
-+
-+ // ----------- S t a t e -------------
-+ // -- ebx : argArray
-+ // -- edi : receiver
-+ // -- esp[0] : return address
-+ // -- esp[4] : thisArg
-+ // -----------------------------------
-+
-+ // 2. We don't need to check explicitly for callable receiver here,
-+ // since that's the first thing the Call/CallWithArrayLike builtins
-+ // will do.
-+
-+ // 3. Tail call with no arguments if argArray is null or undefined.
-+ Label no_arguments;
-+ __ JumpIfRoot(ebx, Heap::kNullValueRootIndex, &no_arguments, Label::kNear);
-+ __ JumpIfRoot(ebx, Heap::kUndefinedValueRootIndex, &no_arguments,
-+ Label::kNear);
-+
-+ // 4a. Apply the receiver to the given argArray.
-+ __ Jump(masm->isolate()->builtins()->CallWithArrayLike(),
-+ RelocInfo::CODE_TARGET);
-+
-+ // 4b. The argArray is either null or undefined, so we tail call without any
-+ // arguments to the receiver.
-+ __ bind(&no_arguments);
-+ {
-+ __ Set(eax, 0);
-+ __ Jump(masm->isolate()->builtins()->Call(), RelocInfo::CODE_TARGET);
-+ }
-+}
-+
-+// static
-+void Builtins::Generate_FunctionPrototypeCall(MacroAssembler* masm) {
-+ // Stack Layout:
-+ // esp[0] : Return address
-+ // esp[8] : Argument n
-+ // esp[16] : Argument n-1
-+ // ...
-+ // esp[8 * n] : Argument 1
-+ // esp[8 * (n + 1)] : Receiver (callable to call)
-+ //
-+ // eax contains the number of arguments, n, not counting the receiver.
-+ //
-+ // 1. Make sure we have at least one argument.
-+ {
-+ Label done;
-+ __ test(eax, eax);
-+ __ j(not_zero, &done, Label::kNear);
-+ __ PopReturnAddressTo(ebx);
-+ __ PushRoot(Heap::kUndefinedValueRootIndex);
-+ __ PushReturnAddressFrom(ebx);
-+ __ inc(eax);
-+ __ bind(&done);
-+ }
-+
-+ // 2. Get the callable to call (passed as receiver) from the stack.
-+ __ mov(edi, Operand(esp, eax, times_pointer_size, kPointerSize));
-+
-+ // 3. Shift arguments and return address one slot down on the stack
-+ // (overwriting the original receiver). Adjust argument count to make
-+ // the original first argument the new receiver.
-+ {
-+ Label loop;
-+ __ mov(ecx, eax);
-+ __ bind(&loop);
-+ __ mov(ebx, Operand(esp, ecx, times_pointer_size, 0));
-+ __ mov(Operand(esp, ecx, times_pointer_size, kPointerSize), ebx);
-+ __ dec(ecx);
-+ __ j(not_sign, &loop); // While non-negative (to copy return address).
-+ __ pop(ebx); // Discard copy of return address.
-+ __ dec(eax); // One fewer argument (first argument is new receiver).
-+ }
-+
-+ // 4. Call the callable.
-+ __ Jump(masm->isolate()->builtins()->Call(), RelocInfo::CODE_TARGET);
-+}
-+
-+void Builtins::Generate_ReflectApply(MacroAssembler* masm) {
-+ // ----------- S t a t e -------------
-+ // -- eax : argc
-+ // -- esp[0] : return address
-+ // -- esp[4] : argumentsList
-+ // -- esp[8] : thisArgument
-+ // -- esp[12] : target
-+ // -- esp[16] : receiver
-+ // -----------------------------------
-+
-+ // 1. Load target into edi (if present), argumentsList into ebx (if present),
-+ // remove all arguments from the stack (including the receiver), and push
-+ // thisArgument (if present) instead.
-+ {
-+ Label done;
-+ __ LoadRoot(edi, Heap::kUndefinedValueRootIndex);
-+ __ mov(edx, edi);
-+ __ mov(ebx, edi);
-+ __ cmp(eax, Immediate(1));
-+ __ j(below, &done, Label::kNear);
-+ __ mov(edi, Operand(esp, eax, times_pointer_size, -0 * kPointerSize));
-+ __ j(equal, &done, Label::kNear);
-+ __ mov(edx, Operand(esp, eax, times_pointer_size, -1 * kPointerSize));
-+ __ cmp(eax, Immediate(3));
-+ __ j(below, &done, Label::kNear);
-+ __ mov(ebx, Operand(esp, eax, times_pointer_size, -2 * kPointerSize));
-+ __ bind(&done);
-+ __ PopReturnAddressTo(ecx);
-+ __ lea(esp, Operand(esp, eax, times_pointer_size, kPointerSize));
-+ __ Push(edx);
-+ __ PushReturnAddressFrom(ecx);
-+ }
-+
-+ // ----------- S t a t e -------------
-+ // -- ebx : argumentsList
-+ // -- edi : target
-+ // -- esp[0] : return address
-+ // -- esp[4] : thisArgument
-+ // -----------------------------------
-+
-+ // 2. We don't need to check explicitly for callable target here,
-+ // since that's the first thing the Call/CallWithArrayLike builtins
-+ // will do.
-+
-+ // 3. Apply the target to the given argumentsList.
-+ __ Jump(masm->isolate()->builtins()->CallWithArrayLike(),
-+ RelocInfo::CODE_TARGET);
-+}
-+
-+void Builtins::Generate_ReflectConstruct(MacroAssembler* masm) {
-+ // ----------- S t a t e -------------
-+ // -- eax : argc
-+ // -- esp[0] : return address
-+ // -- esp[4] : new.target (optional)
-+ // -- esp[8] : argumentsList
-+ // -- esp[12] : target
-+ // -- esp[16] : receiver
-+ // -----------------------------------
-+
-+ // 1. Load target into edi (if present), argumentsList into ebx (if present),
-+ // new.target into edx (if present, otherwise use target), remove all
-+ // arguments from the stack (including the receiver), and push thisArgument
-+ // (if present) instead.
-+ {
-+ Label done;
-+ __ LoadRoot(edi, Heap::kUndefinedValueRootIndex);
-+ __ mov(edx, edi);
-+ __ mov(ebx, edi);
-+ __ cmp(eax, Immediate(1));
-+ __ j(below, &done, Label::kNear);
-+ __ mov(edi, Operand(esp, eax, times_pointer_size, -0 * kPointerSize));
-+ __ mov(edx, edi);
-+ __ j(equal, &done, Label::kNear);
-+ __ mov(ebx, Operand(esp, eax, times_pointer_size, -1 * kPointerSize));
-+ __ cmp(eax, Immediate(3));
-+ __ j(below, &done, Label::kNear);
-+ __ mov(edx, Operand(esp, eax, times_pointer_size, -2 * kPointerSize));
-+ __ bind(&done);
-+ __ PopReturnAddressTo(ecx);
-+ __ lea(esp, Operand(esp, eax, times_pointer_size, kPointerSize));
-+ __ PushRoot(Heap::kUndefinedValueRootIndex);
-+ __ PushReturnAddressFrom(ecx);
-+ }
-+
-+ // ----------- S t a t e -------------
-+ // -- ebx : argumentsList
-+ // -- edx : new.target
-+ // -- edi : target
-+ // -- esp[0] : return address
-+ // -- esp[4] : receiver (undefined)
-+ // -----------------------------------
-+
-+ // 2. We don't need to check explicitly for constructor target here,
-+ // since that's the first thing the Construct/ConstructWithArrayLike
-+ // builtins will do.
-+
-+ // 3. We don't need to check explicitly for constructor new.target here,
-+ // since that's the second thing the Construct/ConstructWithArrayLike
-+ // builtins will do.
-+
-+ // 4. Construct the target with the given new.target and argumentsList.
-+ __ Jump(masm->isolate()->builtins()->ConstructWithArrayLike(),
-+ RelocInfo::CODE_TARGET);
-+}
-+
-+void Builtins::Generate_InternalArrayCode(MacroAssembler* masm) {
-+ // ----------- S t a t e -------------
-+ // -- eax : argc
-+ // -- esp[0] : return address
-+ // -- esp[4] : last argument
-+ // -----------------------------------
-+ Label generic_array_code;
-+
-+ // Get the InternalArray function.
-+ __ LoadGlobalFunction(Context::INTERNAL_ARRAY_FUNCTION_INDEX, edi);
-+
-+ if (FLAG_debug_code) {
-+ // Initial map for the builtin InternalArray function should be a map.
-+ __ mov(ebx, FieldOperand(edi, JSFunction::kPrototypeOrInitialMapOffset));
-+ // Will both indicate a NULL and a Smi.
-+ __ test(ebx, Immediate(kSmiTagMask));
-+ __ Assert(not_zero, kUnexpectedInitialMapForInternalArrayFunction);
-+ __ CmpObjectType(ebx, MAP_TYPE, ecx);
-+ __ Assert(equal, kUnexpectedInitialMapForInternalArrayFunction);
-+ }
-+
-+ // Run the native code for the InternalArray function called as a normal
-+ // function.
-+ // tail call a stub
-+ InternalArrayConstructorStub stub(masm->isolate());
-+ __ TailCallStub(&stub);
-+}
-+
-+void Builtins::Generate_ArrayCode(MacroAssembler* masm) {
-+ // ----------- S t a t e -------------
-+ // -- eax : argc
-+ // -- esp[0] : return address
-+ // -- esp[4] : last argument
-+ // -----------------------------------
-+ Label generic_array_code;
-+
-+ // Get the Array function.
-+ __ LoadGlobalFunction(Context::ARRAY_FUNCTION_INDEX, edi);
-+ __ mov(edx, edi);
-+
-+ if (FLAG_debug_code) {
-+ // Initial map for the builtin Array function should be a map.
-+ __ mov(ebx, FieldOperand(edi, JSFunction::kPrototypeOrInitialMapOffset));
-+ // Will both indicate a NULL and a Smi.
-+ __ test(ebx, Immediate(kSmiTagMask));
-+ __ Assert(not_zero, kUnexpectedInitialMapForArrayFunction);
-+ __ CmpObjectType(ebx, MAP_TYPE, ecx);
-+ __ Assert(equal, kUnexpectedInitialMapForArrayFunction);
-+ }
-+
-+ // Run the native code for the Array function called as a normal function.
-+ // tail call a stub
-+ __ mov(ebx, masm->isolate()->factory()->undefined_value());
-+ ArrayConstructorStub stub(masm->isolate());
-+ __ TailCallStub(&stub);
-+}
-+
-+// static
-+void Builtins::Generate_NumberConstructor(MacroAssembler* masm) {
-+ // ----------- S t a t e -------------
-+ // -- eax : number of arguments
-+ // -- edi : constructor function
-+ // -- esi : context
-+ // -- esp[0] : return address
-+ // -- esp[(argc - n) * 4] : arg[n] (zero-based)
-+ // -- esp[(argc + 1) * 4] : receiver
-+ // -----------------------------------
-+
-+ // 1. Load the first argument into ebx.
-+ Label no_arguments;
-+ {
-+ __ test(eax, eax);
-+ __ j(zero, &no_arguments, Label::kNear);
-+ __ mov(ebx, Operand(esp, eax, times_pointer_size, 0));
-+ }
-+
-+ // 2a. Convert the first argument to a number.
-+ {
-+ FrameScope scope(masm, StackFrame::MANUAL);
-+ __ SmiTag(eax);
-+ __ EnterBuiltinFrame(esi, edi, eax);
-+ __ mov(eax, ebx);
-+ __ Call(masm->isolate()->builtins()->ToNumber(), RelocInfo::CODE_TARGET);
-+ __ LeaveBuiltinFrame(esi, edi, ebx); // Argc popped to ebx.
-+ __ SmiUntag(ebx);
-+ }
-+
-+ {
-+ // Drop all arguments including the receiver.
-+ __ PopReturnAddressTo(ecx);
-+ __ lea(esp, Operand(esp, ebx, times_pointer_size, kPointerSize));
-+ __ PushReturnAddressFrom(ecx);
-+ __ Ret();
-+ }
-+
-+ // 2b. No arguments, return +0 (already in eax).
-+ __ bind(&no_arguments);
-+ __ ret(1 * kPointerSize);
-+}
-+
-+// static
-+void Builtins::Generate_NumberConstructor_ConstructStub(MacroAssembler* masm) {
-+ // ----------- S t a t e -------------
-+ // -- eax : number of arguments
-+ // -- edi : constructor function
-+ // -- edx : new target
-+ // -- esi : context
-+ // -- esp[0] : return address
-+ // -- esp[(argc - n) * 4] : arg[n] (zero-based)
-+ // -- esp[(argc + 1) * 4] : receiver
-+ // -----------------------------------
-+
-+ // 1. Make sure we operate in the context of the called function.
-+ __ mov(esi, FieldOperand(edi, JSFunction::kContextOffset));
-+
-+ // Store argc in r8.
-+ __ mov(ecx, eax);
-+ __ SmiTag(ecx);
-+
-+ // 2. Load the first argument into ebx.
-+ {
-+ Label no_arguments, done;
-+ __ test(eax, eax);
-+ __ j(zero, &no_arguments, Label::kNear);
-+ __ mov(ebx, Operand(esp, eax, times_pointer_size, 0));
-+ __ jmp(&done, Label::kNear);
-+ __ bind(&no_arguments);
-+ __ Move(ebx, Smi::kZero);
-+ __ bind(&done);
-+ }
-+
-+ // 3. Make sure ebx is a number.
-+ {
-+ Label done_convert;
-+ __ JumpIfSmi(ebx, &done_convert);
-+ __ CompareRoot(FieldOperand(ebx, HeapObject::kMapOffset),
-+ Heap::kHeapNumberMapRootIndex);
-+ __ j(equal, &done_convert);
-+ {
-+ FrameScope scope(masm, StackFrame::MANUAL);
-+ __ EnterBuiltinFrame(esi, edi, ecx);
-+ __ Push(edx);
-+ __ Move(eax, ebx);
-+ __ Call(masm->isolate()->builtins()->ToNumber(),
-+ RelocInfo::CODE_TARGET);
-+ __ Move(ebx, eax);
-+ __ Pop(edx);
-+ __ LeaveBuiltinFrame(esi, edi, ecx);
-+ }
-+ __ bind(&done_convert);
-+ }
-+
-+ // 4. Check if new target and constructor differ.
-+ Label drop_frame_and_ret, done_alloc, new_object;
-+ __ cmp(edx, edi);
-+ __ j(not_equal, &new_object);
-+
-+ // 5. Allocate a JSValue wrapper for the number.
-+ __ AllocateJSValue(eax, edi, ebx, esi, &done_alloc);
-+ __ jmp(&drop_frame_and_ret);
-+
-+ __ bind(&done_alloc);
-+ __ mov(esi, FieldOperand(edi, JSFunction::kContextOffset)); // Restore esi.
-+
-+ // 6. Fallback to the runtime to create new object.
-+ __ bind(&new_object);
-+ {
-+ FrameScope scope(masm, StackFrame::MANUAL);
-+ __ EnterBuiltinFrame(esi, edi, ecx);
-+ __ Push(ebx); // the first argument
-+ __ Call(masm->isolate()->builtins()->FastNewObject(),
-+ RelocInfo::CODE_TARGET);
-+ __ Pop(FieldOperand(eax, JSValue::kValueOffset));
-+ __ LeaveBuiltinFrame(esi, edi, ecx);
-+ }
-+
-+ __ bind(&drop_frame_and_ret);
-+ {
-+ // Drop all arguments including the receiver.
-+ __ PopReturnAddressTo(esi);
-+ __ SmiUntag(ecx);
-+ __ lea(esp, Operand(esp, ecx, times_pointer_size, kPointerSize));
-+ __ PushReturnAddressFrom(esi);
-+ __ Ret();
-+ }
-+}
-+
-+// static
-+void Builtins::Generate_StringConstructor(MacroAssembler* masm) {
-+ // ----------- S t a t e -------------
-+ // -- eax : number of arguments
-+ // -- edi : constructor function
-+ // -- esi : context
-+ // -- esp[0] : return address
-+ // -- esp[(argc - n) * 4] : arg[n] (zero-based)
-+ // -- esp[(argc + 1) * 4] : receiver
-+ // -----------------------------------
-+
-+ // 1. Load the first argument into eax.
-+ Label no_arguments;
-+ {
-+ __ mov(ebx, eax); // Store argc in ebx.
-+ __ test(eax, eax);
-+ __ j(zero, &no_arguments, Label::kNear);
-+ __ mov(eax, Operand(esp, eax, times_pointer_size, 0));
-+ }
-+
-+ // 2a. At least one argument, return eax if it's a string, otherwise
-+ // dispatch to appropriate conversion.
-+ Label drop_frame_and_ret, to_string, symbol_descriptive_string;
-+ {
-+ __ JumpIfSmi(eax, &to_string, Label::kNear);
-+ STATIC_ASSERT(FIRST_NONSTRING_TYPE == SYMBOL_TYPE);
-+ __ CmpObjectType(eax, FIRST_NONSTRING_TYPE, edx);
-+ __ j(above, &to_string, Label::kNear);
-+ __ j(equal, &symbol_descriptive_string, Label::kNear);
-+ __ jmp(&drop_frame_and_ret, Label::kNear);
-+ }
-+
-+ // 2b. No arguments, return the empty string (and pop the receiver).
-+ __ bind(&no_arguments);
-+ {
-+ __ LoadRoot(eax, Heap::kempty_stringRootIndex);
-+ __ ret(1 * kPointerSize);
-+ }
-+
-+ // 3a. Convert eax to a string.
-+ __ bind(&to_string);
-+ {
-+ FrameScope scope(masm, StackFrame::MANUAL);
-+ __ SmiTag(ebx);
-+ __ EnterBuiltinFrame(esi, edi, ebx);
-+ __ Call(masm->isolate()->builtins()->ToString(), RelocInfo::CODE_TARGET);
-+ __ LeaveBuiltinFrame(esi, edi, ebx);
-+ __ SmiUntag(ebx);
-+ }
-+ __ jmp(&drop_frame_and_ret, Label::kNear);
-+
-+ // 3b. Convert symbol in eax to a string.
-+ __ bind(&symbol_descriptive_string);
-+ {
-+ __ PopReturnAddressTo(ecx);
-+ __ lea(esp, Operand(esp, ebx, times_pointer_size, kPointerSize));
-+ __ Push(eax);
-+ __ PushReturnAddressFrom(ecx);
-+ __ TailCallRuntime(Runtime::kSymbolDescriptiveString);
-+ }
-+
-+ __ bind(&drop_frame_and_ret);
-+ {
-+ // Drop all arguments including the receiver.
-+ __ PopReturnAddressTo(ecx);
-+ __ lea(esp, Operand(esp, ebx, times_pointer_size, kPointerSize));
-+ __ PushReturnAddressFrom(ecx);
-+ __ Ret();
-+ }
-+}
-+
-+// static
-+void Builtins::Generate_StringConstructor_ConstructStub(MacroAssembler* masm) {
-+ // ----------- S t a t e -------------
-+ // -- eax : number of arguments
-+ // -- edi : constructor function
-+ // -- edx : new target
-+ // -- esi : context
-+ // -- esp[0] : return address
-+ // -- esp[(argc - n) * 4] : arg[n] (zero-based)
-+ // -- esp[(argc + 1) * 4] : receiver
-+ // -----------------------------------
-+
-+ // 1. Make sure we operate in the context of the called function.
-+ __ mov(esi, FieldOperand(edi, JSFunction::kContextOffset));
-+
-+ __ mov(ebx, eax);
-+
-+ // 2. Load the first argument into eax.
-+ {
-+ Label no_arguments, done;
-+ __ test(ebx, ebx);
-+ __ j(zero, &no_arguments, Label::kNear);
-+ __ mov(eax, Operand(esp, ebx, times_pointer_size, 0));
-+ __ jmp(&done, Label::kNear);
-+ __ bind(&no_arguments);
-+ __ LoadRoot(eax, Heap::kempty_stringRootIndex);
-+ __ bind(&done);
-+ }
-+
-+ // 3. Make sure eax is a string.
-+ {
-+ Label convert, done_convert;
-+ __ JumpIfSmi(eax, &convert, Label::kNear);
-+ __ CmpObjectType(eax, FIRST_NONSTRING_TYPE, ecx);
-+ __ j(below, &done_convert);
-+ __ bind(&convert);
-+ {
-+ FrameScope scope(masm, StackFrame::MANUAL);
-+ __ SmiTag(ebx);
-+ __ EnterBuiltinFrame(esi, edi, ebx);
-+ __ Push(edx);
-+ __ Call(masm->isolate()->builtins()->ToString(),
-+ RelocInfo::CODE_TARGET);
-+ __ Pop(edx);
-+ __ LeaveBuiltinFrame(esi, edi, ebx);
-+ __ SmiUntag(ebx);
-+ }
-+ __ bind(&done_convert);
-+ }
-+
-+ // 4. Check if new target and constructor differ.
-+ Label drop_frame_and_ret, done_alloc, new_object;
-+ __ cmp(edx, edi);
-+ __ j(not_equal, &new_object);
-+
-+ // 5. Allocate a JSValue wrapper for the string.
-+ // AllocateJSValue can't handle src == dst register. Reuse esi and restore it
-+ // as needed after the call.
-+ __ mov(esi, eax);
-+ __ AllocateJSValue(eax, edi, esi, ecx, &done_alloc);
-+ __ jmp(&drop_frame_and_ret);
-+
-+ __ bind(&done_alloc);
-+ {
-+ // Restore eax to the first argument and esi to the context.
-+ __ mov(eax, esi);
-+ __ mov(esi, FieldOperand(edi, JSFunction::kContextOffset));
-+ }
-+
-+ // 6. Fallback to the runtime to create new object.
-+ __ bind(&new_object);
-+ {
-+ FrameScope scope(masm, StackFrame::MANUAL);
-+ __ SmiTag(ebx);
-+ __ EnterBuiltinFrame(esi, edi, ebx);
-+ __ Push(eax); // the first argument
-+ __ Call(masm->isolate()->builtins()->FastNewObject(),
-+ RelocInfo::CODE_TARGET);
-+ __ Pop(FieldOperand(eax, JSValue::kValueOffset));
-+ __ LeaveBuiltinFrame(esi, edi, ebx);
-+ __ SmiUntag(ebx);
-+ }
-+
-+ __ bind(&drop_frame_and_ret);
-+ {
-+ // Drop all arguments including the receiver.
-+ __ PopReturnAddressTo(ecx);
-+ __ lea(esp, Operand(esp, ebx, times_pointer_size, kPointerSize));
-+ __ PushReturnAddressFrom(ecx);
-+ __ Ret();
-+ }
-+}
-+
-+static void EnterArgumentsAdaptorFrame(MacroAssembler* masm) {
-+ __ push(ebp);
-+ __ mov(ebp, esp);
-+
-+ // Store the arguments adaptor context sentinel.
-+ __ push(Immediate(StackFrame::TypeToMarker(StackFrame::ARGUMENTS_ADAPTOR)));
-+
-+ // Push the function on the stack.
-+ __ push(edi);
-+
-+ // Preserve the number of arguments on the stack. Must preserve eax,
-+ // ebx and ecx because these registers are used when copying the
-+ // arguments and the receiver.
-+ STATIC_ASSERT(kSmiTagSize == 1);
-+ __ lea(edi, Operand(eax, eax, times_1, kSmiTag));
-+ __ push(edi);
-+}
-+
-+static void LeaveArgumentsAdaptorFrame(MacroAssembler* masm) {
-+ // Retrieve the number of arguments from the stack.
-+ __ mov(ebx, Operand(ebp, ArgumentsAdaptorFrameConstants::kLengthOffset));
-+
-+ // Leave the frame.
-+ __ leave();
-+
-+ // Remove caller arguments from the stack.
-+ STATIC_ASSERT(kSmiTagSize == 1 && kSmiTag == 0);
-+ __ pop(ecx);
-+ __ lea(esp, Operand(esp, ebx, times_2, 1 * kPointerSize)); // 1 ~ receiver
-+ __ push(ecx);
-+}
-+
-+// static
-+void Builtins::Generate_CallOrConstructVarargs(MacroAssembler* masm,
-+ Handle<Code> code) {
-+ // ----------- S t a t e -------------
-+ // -- edi : target
-+ // -- eax : number of parameters on the stack (not including the receiver)
-+ // -- ebx : arguments list (a FixedArray)
-+ // -- ecx : len (number of elements to push from args)
-+ // -- edx : new.target (checked to be constructor or undefined)
-+ // -- esp[0] : return address.
-+ // -----------------------------------
-+ __ AssertFixedArray(ebx);
-+
-+ // Save edx/edi/eax to stX0/stX1/stX2.
-+ __ push(edx);
-+ __ push(edi);
-+ __ push(eax);
-+ __ fld_s(MemOperand(esp, 0));
-+ __ fld_s(MemOperand(esp, 4));
-+ __ fld_s(MemOperand(esp, 8));
-+ __ lea(esp, Operand(esp, 3 * kFloatSize));
-+
-+ // Check for stack overflow.
-+ {
-+ // Check the stack for overflow. We are not trying to catch interruptions
-+ // (i.e. debug break and preemption) here, so check the "real stack
-+ // limit".
-+ Label done;
-+ ExternalReference real_stack_limit =
-+ ExternalReference::address_of_real_stack_limit(masm->isolate());
-+ __ mov(edx, Operand::StaticVariable(real_stack_limit));
-+ // Make edx the space we have left. The stack might already be overflowed
-+ // here which will cause edx to become negative.
-+ __ neg(edx);
-+ __ add(edx, esp);
-+ __ sar(edx, kPointerSizeLog2);
-+ // Check if the arguments will overflow the stack.
-+ __ cmp(edx, ecx);
-+ __ j(greater, &done, Label::kNear); // Signed comparison.
-+ __ TailCallRuntime(Runtime::kThrowStackOverflow);
-+ __ bind(&done);
-+ }
-+
-+ // Push additional arguments onto the stack.
-+ {
-+ __ PopReturnAddressTo(edx);
-+ __ Move(eax, Immediate(0));
-+ Label done, push, loop;
-+ __ bind(&loop);
-+ __ cmp(eax, ecx);
-+ __ j(equal, &done, Label::kNear);
-+ // Turn the hole into undefined as we go.
-+ __ mov(edi,
-+ FieldOperand(ebx, eax, times_pointer_size, FixedArray::kHeaderSize));
-+ __ CompareRoot(edi, Heap::kTheHoleValueRootIndex);
-+ __ j(not_equal, &push, Label::kNear);
-+ __ LoadRoot(edi, Heap::kUndefinedValueRootIndex);
-+ __ bind(&push);
-+ __ Push(edi);
-+ __ inc(eax);
-+ __ jmp(&loop);
-+ __ bind(&done);
-+ __ PushReturnAddressFrom(edx);
-+ }
-+
-+ // Restore edx/edi/eax from stX0/stX1/stX2.
-+ __ lea(esp, Operand(esp, -3 * kFloatSize));
-+ __ fstp_s(MemOperand(esp, 0));
-+ __ fstp_s(MemOperand(esp, 4));
-+ __ fstp_s(MemOperand(esp, 8));
-+ __ pop(edx);
-+ __ pop(edi);
-+ __ pop(eax);
-+
-+ // Compute the actual parameter count.
-+ __ add(eax, ecx);
-+
-+ // Tail-call to the actual Call or Construct builtin.
-+ __ Jump(code, RelocInfo::CODE_TARGET);
-+}
-+
-+// static
-+void Builtins::Generate_CallOrConstructForwardVarargs(MacroAssembler* masm,
-+ Handle<Code> code) {
-+ // ----------- S t a t e -------------
-+ // -- eax : the number of arguments (not including the receiver)
-+ // -- edi : the target to call (can be any Object)
-+ // -- edx : the new target (for [[Construct]] calls)
-+ // -- ecx : start index (to support rest parameters)
-+ // -----------------------------------
-+
-+ // Preserve new.target (in case of [[Construct]]).
-+ __ push(edx);
-+ __ fld_s(MemOperand(esp, 0));
-+ __ lea(esp, Operand(esp, kFloatSize));
-+
-+ // Check if we have an arguments adaptor frame below the function frame.
-+ Label arguments_adaptor, arguments_done;
-+ __ mov(ebx, Operand(ebp, StandardFrameConstants::kCallerFPOffset));
-+ __ cmp(Operand(ebx, CommonFrameConstants::kContextOrFrameTypeOffset),
-+ Immediate(StackFrame::TypeToMarker(StackFrame::ARGUMENTS_ADAPTOR)));
-+ __ j(equal, &arguments_adaptor, Label::kNear);
-+ {
-+ __ mov(edx, Operand(ebp, JavaScriptFrameConstants::kFunctionOffset));
-+ __ mov(edx, FieldOperand(edx, JSFunction::kSharedFunctionInfoOffset));
-+ __ mov(edx,
-+ FieldOperand(edx, SharedFunctionInfo::kFormalParameterCountOffset));
-+ __ mov(ebx, ebp);
-+ }
-+ __ jmp(&arguments_done, Label::kNear);
-+ __ bind(&arguments_adaptor);
-+ {
-+ // Just load the length from the ArgumentsAdaptorFrame.
-+ __ mov(edx, Operand(ebx, ArgumentsAdaptorFrameConstants::kLengthOffset));
-+ __ SmiUntag(edx);
-+ }
-+ __ bind(&arguments_done);
-+
-+ Label stack_done;
-+ __ sub(edx, ecx);
-+ __ j(less_equal, &stack_done);
-+ {
-+ // Check for stack overflow.
-+ {
-+ // Check the stack for overflow. We are not trying to catch interruptions
-+ // (i.e. debug break and preemption) here, so check the "real stack
-+ // limit".
-+ Label done;
-+ __ LoadRoot(ecx, Heap::kRealStackLimitRootIndex);
-+ // Make ecx the space we have left. The stack might already be
-+ // overflowed here which will cause ecx to become negative.
-+ __ neg(ecx);
-+ __ add(ecx, esp);
-+ __ sar(ecx, kPointerSizeLog2);
-+ // Check if the arguments will overflow the stack.
-+ __ cmp(ecx, edx);
-+ __ j(greater, &done, Label::kNear); // Signed comparison.
-+ __ TailCallRuntime(Runtime::kThrowStackOverflow);
-+ __ bind(&done);
-+ }
-+
-+ // Forward the arguments from the caller frame.
-+ {
-+ Label loop;
-+ __ add(eax, edx);
-+ __ PopReturnAddressTo(ecx);
-+ __ bind(&loop);
-+ {
-+ __ Push(Operand(ebx, edx, times_pointer_size, 1 * kPointerSize));
-+ __ dec(edx);
-+ __ j(not_zero, &loop);
-+ }
-+ __ PushReturnAddressFrom(ecx);
-+ }
-+ }
-+ __ bind(&stack_done);
-+
-+ // Restore new.target (in case of [[Construct]]).
-+ __ lea(esp, Operand(esp, -kFloatSize));
-+ __ fstp_s(MemOperand(esp, 0));
-+ __ pop(edx);
-+
-+ // Tail-call to the {code} handler.
-+ __ Jump(code, RelocInfo::CODE_TARGET);
-+}
-+
-+// static
-+void Builtins::Generate_CallFunction(MacroAssembler* masm,
-+ ConvertReceiverMode mode) {
-+ // ----------- S t a t e -------------
-+ // -- eax : the number of arguments (not including the receiver)
-+ // -- edi : the function to call (checked to be a JSFunction)
-+ // -----------------------------------
-+ __ AssertFunction(edi);
-+
-+ // See ES6 section 9.2.1 [[Call]] ( thisArgument, argumentsList)
-+ // Check that the function is not a "classConstructor".
-+ Label class_constructor;
-+ __ mov(edx, FieldOperand(edi, JSFunction::kSharedFunctionInfoOffset));
-+ __ test(FieldOperand(edx, SharedFunctionInfo::kCompilerHintsOffset),
-+ Immediate(SharedFunctionInfo::kClassConstructorMask));
-+ __ j(not_zero, &class_constructor);
-+
-+ // Enter the context of the function; ToObject has to run in the function
-+ // context, and we also need to take the global proxy from the function
-+ // context in case of conversion.
-+ __ mov(esi, FieldOperand(edi, JSFunction::kContextOffset));
-+ // We need to convert the receiver for non-native sloppy mode functions.
-+ Label done_convert;
-+ __ test(FieldOperand(edx, SharedFunctionInfo::kCompilerHintsOffset),
-+ Immediate(SharedFunctionInfo::IsNativeBit::kMask |
-+ SharedFunctionInfo::IsStrictBit::kMask));
-+ __ j(not_zero, &done_convert);
-+ {
-+ // ----------- S t a t e -------------
-+ // -- eax : the number of arguments (not including the receiver)
-+ // -- edx : the shared function info.
-+ // -- edi : the function to call (checked to be a JSFunction)
-+ // -- esi : the function context.
-+ // -----------------------------------
-+
-+ if (mode == ConvertReceiverMode::kNullOrUndefined) {
-+ // Patch receiver to global proxy.
-+ __ LoadGlobalProxy(ecx);
-+ } else {
-+ Label convert_to_object, convert_receiver;
-+ __ mov(ecx, Operand(esp, eax, times_pointer_size, kPointerSize));
-+ __ JumpIfSmi(ecx, &convert_to_object, Label::kNear);
-+ STATIC_ASSERT(LAST_JS_RECEIVER_TYPE == LAST_TYPE);
-+ __ CmpObjectType(ecx, FIRST_JS_RECEIVER_TYPE, ebx);
-+ __ j(above_equal, &done_convert);
-+ if (mode != ConvertReceiverMode::kNotNullOrUndefined) {
-+ Label convert_global_proxy;
-+ __ JumpIfRoot(ecx, Heap::kUndefinedValueRootIndex,
-+ &convert_global_proxy, Label::kNear);
-+ __ JumpIfNotRoot(ecx, Heap::kNullValueRootIndex, &convert_to_object,
-+ Label::kNear);
-+ __ bind(&convert_global_proxy);
-+ {
-+ // Patch receiver to global proxy.
-+ __ LoadGlobalProxy(ecx);
-+ }
-+ __ jmp(&convert_receiver);
-+ }
-+ __ bind(&convert_to_object);
-+ {
-+ // Convert receiver using ToObject.
-+ // TODO(bmeurer): Inline the allocation here to avoid building the frame
-+ // in the fast case? (fall back to AllocateInNewSpace?)
-+ FrameScope scope(masm, StackFrame::INTERNAL);
-+ __ SmiTag(eax);
-+ __ Push(eax);
-+ __ Push(edi);
-+ __ mov(eax, ecx);
-+ __ Push(esi);
-+ __ Call(masm->isolate()->builtins()->ToObject(),
-+ RelocInfo::CODE_TARGET);
-+ __ Pop(esi);
-+ __ mov(ecx, eax);
-+ __ Pop(edi);
-+ __ Pop(eax);
-+ __ SmiUntag(eax);
-+ }
-+ __ mov(edx, FieldOperand(edi, JSFunction::kSharedFunctionInfoOffset));
-+ __ bind(&convert_receiver);
-+ }
-+ __ mov(Operand(esp, eax, times_pointer_size, kPointerSize), ecx);
-+ }
-+ __ bind(&done_convert);
-+
-+ // ----------- S t a t e -------------
-+ // -- eax : the number of arguments (not including the receiver)
-+ // -- edx : the shared function info.
-+ // -- edi : the function to call (checked to be a JSFunction)
-+ // -- esi : the function context.
-+ // -----------------------------------
-+
-+ __ mov(ebx,
-+ FieldOperand(edx, SharedFunctionInfo::kFormalParameterCountOffset));
-+ ParameterCount actual(eax);
-+ ParameterCount expected(ebx);
-+ __ InvokeFunctionCode(edi, no_reg, expected, actual, JUMP_FUNCTION,
-+ CheckDebugStepCallWrapper());
-+ // The function is a "classConstructor", need to raise an exception.
-+ __ bind(&class_constructor);
-+ {
-+ FrameScope frame(masm, StackFrame::INTERNAL);
-+ __ push(edi);
-+ __ CallRuntime(Runtime::kThrowConstructorNonCallableError);
-+ }
-+}
-+
-+namespace {
-+
-+void Generate_PushBoundArguments(MacroAssembler* masm) {
-+ // ----------- S t a t e -------------
-+ // -- eax : the number of arguments (not including the receiver)
-+ // -- edx : new.target (only in case of [[Construct]])
-+ // -- edi : target (checked to be a JSBoundFunction)
-+ // -----------------------------------
-+
-+ // Load [[BoundArguments]] into ecx and length of that into ebx.
-+ Label no_bound_arguments;
-+ __ mov(ecx, FieldOperand(edi, JSBoundFunction::kBoundArgumentsOffset));
-+ __ mov(ebx, FieldOperand(ecx, FixedArray::kLengthOffset));
-+ __ SmiUntag(ebx);
-+ __ test(ebx, ebx);
-+ __ j(zero, &no_bound_arguments);
-+ {
-+ // ----------- S t a t e -------------
-+ // -- eax : the number of arguments (not including the receiver)
-+ // -- edx : new.target (only in case of [[Construct]])
-+ // -- edi : target (checked to be a JSBoundFunction)
-+ // -- ecx : the [[BoundArguments]] (implemented as FixedArray)
-+ // -- ebx : the number of [[BoundArguments]]
-+ // -----------------------------------
-+
-+ // Reserve stack space for the [[BoundArguments]].
-+ {
-+ Label done;
-+ __ lea(ecx, Operand(ebx, times_pointer_size, 0));
-+ __ sub(esp, ecx);
-+ // Check the stack for overflow. We are not trying to catch interruptions
-+ // (i.e. debug break and preemption) here, so check the "real stack
-+ // limit".
-+ __ CompareRoot(esp, ecx, Heap::kRealStackLimitRootIndex);
-+ __ j(greater, &done, Label::kNear); // Signed comparison.
-+ // Restore the stack pointer.
-+ __ lea(esp, Operand(esp, ebx, times_pointer_size, 0));
-+ {
-+ FrameScope scope(masm, StackFrame::MANUAL);
-+ __ EnterFrame(StackFrame::INTERNAL);
-+ __ CallRuntime(Runtime::kThrowStackOverflow);
-+ }
-+ __ bind(&done);
-+ }
-+
-+ // Adjust effective number of arguments to include return address.
-+ __ inc(eax);
-+
-+ // Relocate arguments and return address down the stack.
-+ {
-+ Label loop;
-+ __ Set(ecx, 0);
-+ __ lea(ebx, Operand(esp, ebx, times_pointer_size, 0));
-+ __ bind(&loop);
-+ __ fld_s(Operand(ebx, ecx, times_pointer_size, 0));
-+ __ fstp_s(Operand(esp, ecx, times_pointer_size, 0));
-+ __ inc(ecx);
-+ __ cmp(ecx, eax);
-+ __ j(less, &loop);
-+ }
-+
-+ // Copy [[BoundArguments]] to the stack (below the arguments).
-+ {
-+ Label loop;
-+ __ mov(ecx, FieldOperand(edi, JSBoundFunction::kBoundArgumentsOffset));
-+ __ mov(ebx, FieldOperand(ecx, FixedArray::kLengthOffset));
-+ __ SmiUntag(ebx);
-+ __ bind(&loop);
-+ __ dec(ebx);
-+ __ fld_s(
-+ FieldOperand(ecx, ebx, times_pointer_size, FixedArray::kHeaderSize));
-+ __ fstp_s(Operand(esp, eax, times_pointer_size, 0));
-+ __ lea(eax, Operand(eax, 1));
-+ __ j(greater, &loop);
-+ }
-+
-+ // Adjust effective number of arguments (eax contains the number of
-+ // arguments from the call plus return address plus the number of
-+ // [[BoundArguments]]), so we need to subtract one for the return address.
-+ __ dec(eax);
-+ }
-+ __ bind(&no_bound_arguments);
-+}
-+
-+} // namespace
-+
-+// static
-+void Builtins::Generate_CallBoundFunctionImpl(MacroAssembler* masm) {
-+ // ----------- S t a t e -------------
-+ // -- eax : the number of arguments (not including the receiver)
-+ // -- edi : the function to call (checked to be a JSBoundFunction)
-+ // -----------------------------------
-+ __ AssertBoundFunction(edi);
-+
-+ // Patch the receiver to [[BoundThis]].
-+ __ mov(ebx, FieldOperand(edi, JSBoundFunction::kBoundThisOffset));
-+ __ mov(Operand(esp, eax, times_pointer_size, kPointerSize), ebx);
-+
-+ // Push the [[BoundArguments]] onto the stack.
-+ Generate_PushBoundArguments(masm);
-+
-+ // Call the [[BoundTargetFunction]] via the Call builtin.
-+ __ mov(edi, FieldOperand(edi, JSBoundFunction::kBoundTargetFunctionOffset));
-+ __ mov(ecx, Operand::StaticVariable(ExternalReference(
-+ Builtins::kCall_ReceiverIsAny, masm->isolate())));
-+ __ lea(ecx, FieldOperand(ecx, Code::kHeaderSize));
-+ __ jmp(ecx);
-+}
-+
-+// static
-+void Builtins::Generate_Call(MacroAssembler* masm, ConvertReceiverMode mode) {
-+ // ----------- S t a t e -------------
-+ // -- eax : the number of arguments (not including the receiver)
-+ // -- edi : the target to call (can be any Object).
-+ // -----------------------------------
-+
-+ Label non_callable, non_function, non_smi;
-+ __ JumpIfSmi(edi, &non_callable);
-+ __ bind(&non_smi);
-+ __ CmpObjectType(edi, JS_FUNCTION_TYPE, ecx);
-+ __ j(equal, masm->isolate()->builtins()->CallFunction(mode),
-+ RelocInfo::CODE_TARGET);
-+ __ CmpInstanceType(ecx, JS_BOUND_FUNCTION_TYPE);
-+ __ j(equal, masm->isolate()->builtins()->CallBoundFunction(),
-+ RelocInfo::CODE_TARGET);
-+
-+ // Check if target is a proxy and call CallProxy external builtin
-+ __ test_b(FieldOperand(ecx, Map::kBitFieldOffset),
-+ Immediate(1 << Map::kIsCallable));
-+ __ j(zero, &non_callable);
-+
-+ // Call CallProxy external builtin
-+ __ CmpInstanceType(ecx, JS_PROXY_TYPE);
-+ __ j(not_equal, &non_function);
-+
-+ __ mov(ecx, Operand::StaticVariable(
-+ ExternalReference(Builtins::kCallProxy, masm->isolate())));
-+ __ lea(ecx, FieldOperand(ecx, Code::kHeaderSize));
-+ __ jmp(ecx);
-+
-+ // 2. Call to something else, which might have a [[Call]] internal method (if
-+ // not we raise an exception).
-+ __ bind(&non_function);
-+ // Overwrite the original receiver with the (original) target.
-+ __ mov(Operand(esp, eax, times_pointer_size, kPointerSize), edi);
-+ // Let the "call_as_function_delegate" take care of the rest.
-+ __ LoadGlobalFunction(Context::CALL_AS_FUNCTION_DELEGATE_INDEX, edi);
-+ __ Jump(masm->isolate()->builtins()->CallFunction(
-+ ConvertReceiverMode::kNotNullOrUndefined),
-+ RelocInfo::CODE_TARGET);
-+
-+ // 3. Call to something that is not callable.
-+ __ bind(&non_callable);
-+ {
-+ FrameScope scope(masm, StackFrame::INTERNAL);
-+ __ Push(edi);
-+ __ CallRuntime(Runtime::kThrowCalledNonCallable);
-+ }
-+}
-+
-+// static
-+void Builtins::Generate_ConstructFunction(MacroAssembler* masm) {
-+ // ----------- S t a t e -------------
-+ // -- eax : the number of arguments (not including the receiver)
-+ // -- edx : the new target (checked to be a constructor)
-+ // -- edi : the constructor to call (checked to be a JSFunction)
-+ // -----------------------------------
-+ __ AssertFunction(edi);
-+
-+ // Calling convention for function specific ConstructStubs require
-+ // ebx to contain either an AllocationSite or undefined.
-+ __ LoadRoot(ebx, Heap::kUndefinedValueRootIndex);
-+
-+ // Tail call to the function-specific construct stub (still in the caller
-+ // context at this point).
-+ __ mov(ecx, FieldOperand(edi, JSFunction::kSharedFunctionInfoOffset));
-+ __ mov(ecx, FieldOperand(ecx, SharedFunctionInfo::kConstructStubOffset));
-+ __ lea(ecx, FieldOperand(ecx, Code::kHeaderSize));
-+ __ jmp(ecx);
-+}
-+
-+// static
-+void Builtins::Generate_ConstructBoundFunction(MacroAssembler* masm) {
-+ // ----------- S t a t e -------------
-+ // -- eax : the number of arguments (not including the receiver)
-+ // -- edx : the new target (checked to be a constructor)
-+ // -- edi : the constructor to call (checked to be a JSBoundFunction)
-+ // -----------------------------------
-+ __ AssertBoundFunction(edi);
-+
-+ // Push the [[BoundArguments]] onto the stack.
-+ Generate_PushBoundArguments(masm);
-+
-+ // Patch new.target to [[BoundTargetFunction]] if new.target equals target.
-+ {
-+ Label done;
-+ __ cmp(edi, edx);
-+ __ j(not_equal, &done, Label::kNear);
-+ __ mov(edx, FieldOperand(edi, JSBoundFunction::kBoundTargetFunctionOffset));
-+ __ bind(&done);
-+ }
-+
-+ // Construct the [[BoundTargetFunction]] via the Construct builtin.
-+ __ mov(edi, FieldOperand(edi, JSBoundFunction::kBoundTargetFunctionOffset));
-+ __ mov(ecx, Operand::StaticVariable(
-+ ExternalReference(Builtins::kConstruct, masm->isolate())));
-+ __ lea(ecx, FieldOperand(ecx, Code::kHeaderSize));
-+ __ jmp(ecx);
-+}
-+
-+// static
-+void Builtins::Generate_ConstructProxy(MacroAssembler* masm) {
-+ // ----------- S t a t e -------------
-+ // -- eax : the number of arguments (not including the receiver)
-+ // -- edi : the constructor to call (checked to be a JSProxy)
-+ // -- edx : the new target (either the same as the constructor or
-+ // the JSFunction on which new was invoked initially)
-+ // -----------------------------------
-+
-+ // Call into the Runtime for Proxy [[Construct]].
-+ __ PopReturnAddressTo(ecx);
-+ __ Push(edi);
-+ __ Push(edx);
-+ __ PushReturnAddressFrom(ecx);
-+ // Include the pushed new_target, constructor and the receiver.
-+ __ add(eax, Immediate(3));
-+ // Tail-call to the runtime.
-+ __ JumpToExternalReference(
-+ ExternalReference(Runtime::kJSProxyConstruct, masm->isolate()));
-+}
-+
-+// static
-+void Builtins::Generate_Construct(MacroAssembler* masm) {
-+ // ----------- S t a t e -------------
-+ // -- eax : the number of arguments (not including the receiver)
-+ // -- edx : the new target (either the same as the constructor or
-+ // the JSFunction on which new was invoked initially)
-+ // -- edi : the constructor to call (can be any Object)
-+ // -----------------------------------
-+
-+ // Check if target is a Smi.
-+ Label non_constructor;
-+ __ JumpIfSmi(edi, &non_constructor, Label::kNear);
-+
-+ // Dispatch based on instance type.
-+ __ CmpObjectType(edi, JS_FUNCTION_TYPE, ecx);
-+ __ j(equal, masm->isolate()->builtins()->ConstructFunction(),
-+ RelocInfo::CODE_TARGET);
-+
-+ // Check if target has a [[Construct]] internal method.
-+ __ test_b(FieldOperand(ecx, Map::kBitFieldOffset),
-+ Immediate(1 << Map::kIsConstructor));
-+ __ j(zero, &non_constructor, Label::kNear);
-+
-+ // Only dispatch to bound functions after checking whether they are
-+ // constructors.
-+ __ CmpInstanceType(ecx, JS_BOUND_FUNCTION_TYPE);
-+ __ j(equal, masm->isolate()->builtins()->ConstructBoundFunction(),
-+ RelocInfo::CODE_TARGET);
-+
-+ // Only dispatch to proxies after checking whether they are constructors.
-+ __ CmpInstanceType(ecx, JS_PROXY_TYPE);
-+ __ j(equal, masm->isolate()->builtins()->ConstructProxy(),
-+ RelocInfo::CODE_TARGET);
-+
-+ // Called Construct on an exotic Object with a [[Construct]] internal method.
-+ {
-+ // Overwrite the original receiver with the (original) target.
-+ __ mov(Operand(esp, eax, times_pointer_size, kPointerSize), edi);
-+ // Let the "call_as_constructor_delegate" take care of the rest.
-+ __ LoadGlobalFunction(Context::CALL_AS_CONSTRUCTOR_DELEGATE_INDEX, edi);
-+ __ Jump(masm->isolate()->builtins()->CallFunction(),
-+ RelocInfo::CODE_TARGET);
-+ }
-+
-+ // Called Construct on an Object that doesn't have a [[Construct]] internal
-+ // method.
-+ __ bind(&non_constructor);
-+ __ Jump(masm->isolate()->builtins()->ConstructedNonConstructable(),
-+ RelocInfo::CODE_TARGET);
-+}
-+
-+// static
-+void Builtins::Generate_AllocateInNewSpace(MacroAssembler* masm) {
-+ // ----------- S t a t e -------------
-+ // -- edx : requested object size (untagged)
-+ // -- esp[0] : return address
-+ // -----------------------------------
-+ __ SmiTag(edx);
-+ __ PopReturnAddressTo(ecx);
-+ __ Push(edx);
-+ __ PushReturnAddressFrom(ecx);
-+ __ Move(esi, Smi::kZero);
-+ __ TailCallRuntime(Runtime::kAllocateInNewSpace);
-+}
-+
-+// static
-+void Builtins::Generate_AllocateInOldSpace(MacroAssembler* masm) {
-+ // ----------- S t a t e -------------
-+ // -- edx : requested object size (untagged)
-+ // -- esp[0] : return address
-+ // -----------------------------------
-+ __ SmiTag(edx);
-+ __ PopReturnAddressTo(ecx);
-+ __ Push(edx);
-+ __ Push(Smi::FromInt(AllocateTargetSpace::encode(OLD_SPACE)));
-+ __ PushReturnAddressFrom(ecx);
-+ __ Move(esi, Smi::kZero);
-+ __ TailCallRuntime(Runtime::kAllocateInTargetSpace);
-+}
-+
-+// static
-+void Builtins::Generate_Abort(MacroAssembler* masm) {
-+ // ----------- S t a t e -------------
-+ // -- edx : message_id as Smi
-+ // -- esp[0] : return address
-+ // -----------------------------------
-+ __ PopReturnAddressTo(ecx);
-+ __ Push(edx);
-+ __ PushReturnAddressFrom(ecx);
-+ __ Move(esi, Smi::kZero);
-+ __ TailCallRuntime(Runtime::kAbort);
-+}
-+
-+void Builtins::Generate_ArgumentsAdaptorTrampoline(MacroAssembler* masm) {
-+ // ----------- S t a t e -------------
-+ // -- eax : actual number of arguments
-+ // -- ebx : expected number of arguments
-+ // -- edx : new target (passed through to callee)
-+ // -- edi : function (passed through to callee)
-+ // -----------------------------------
-+
-+ Label invoke, dont_adapt_arguments, stack_overflow;
-+ __ IncrementCounter(masm->isolate()->counters()->arguments_adaptors(), 1);
-+
-+ Label enough, too_few;
-+ __ cmp(eax, ebx);
-+ __ j(less, &too_few);
-+ __ cmp(ebx, SharedFunctionInfo::kDontAdaptArgumentsSentinel);
-+ __ j(equal, &dont_adapt_arguments);
-+
-+ { // Enough parameters: Actual >= expected.
-+ __ bind(&enough);
-+ EnterArgumentsAdaptorFrame(masm);
-+ // edi is used as a scratch register. It should be restored from the frame
-+ // when needed.
-+ Generate_StackOverflowCheck(masm, ebx, ecx, edi, &stack_overflow);
-+
-+ // Copy receiver and all expected arguments.
-+ const int offset = StandardFrameConstants::kCallerSPOffset;
-+ __ lea(edi, Operand(ebp, eax, times_4, offset));
-+ __ mov(eax, -1); // account for receiver
-+
-+ Label copy;
-+ __ bind(&copy);
-+ __ inc(eax);
-+ __ push(Operand(edi, 0));
-+ __ sub(edi, Immediate(kPointerSize));
-+ __ cmp(eax, ebx);
-+ __ j(less, &copy);
-+ // eax now contains the expected number of arguments.
-+ __ jmp(&invoke);
-+ }
-+
-+ { // Too few parameters: Actual < expected.
-+ __ bind(&too_few);
-+ EnterArgumentsAdaptorFrame(masm);
-+ // edi is used as a scratch register. It should be restored from the frame
-+ // when needed.
-+ Generate_StackOverflowCheck(masm, ebx, ecx, edi, &stack_overflow);
-+
-+ // Remember expected arguments in ecx.
-+ __ mov(ecx, ebx);
-+
-+ // Copy receiver and all actual arguments.
-+ const int offset = StandardFrameConstants::kCallerSPOffset;
-+ __ lea(edi, Operand(ebp, eax, times_4, offset));
-+ // ebx = expected - actual.
-+ __ sub(ebx, eax);
-+ // eax = -actual - 1
-+ __ neg(eax);
-+ __ sub(eax, Immediate(1));
-+
-+ Label copy;
-+ __ bind(&copy);
-+ __ inc(eax);
-+ __ push(Operand(edi, 0));
-+ __ sub(edi, Immediate(kPointerSize));
-+ __ test(eax, eax);
-+ __ j(not_zero, &copy);
-+
-+ // Fill remaining expected arguments with undefined values.
-+ Label fill;
-+ __ bind(&fill);
-+ __ inc(eax);
-+ __ push(Immediate(masm->isolate()->factory()->undefined_value()));
-+ __ cmp(eax, ebx);
-+ __ j(less, &fill);
-+
-+ // Restore expected arguments.
-+ __ mov(eax, ecx);
-+ }
-+
-+ // Call the entry point.
-+ __ bind(&invoke);
-+ // Restore function pointer.
-+ __ mov(edi, Operand(ebp, ArgumentsAdaptorFrameConstants::kFunctionOffset));
-+ // eax : expected number of arguments
-+ // edx : new target (passed through to callee)
-+ // edi : function (passed through to callee)
-+ __ mov(ecx, FieldOperand(edi, JSFunction::kCodeEntryOffset));
-+ __ call(ecx);
-+
-+ // Store offset of return address for deoptimizer.
-+ masm->isolate()->heap()->SetArgumentsAdaptorDeoptPCOffset(masm->pc_offset());
-+
-+ // Leave frame and return.
-+ LeaveArgumentsAdaptorFrame(masm);
-+ __ ret(0);
-+
-+ // -------------------------------------------
-+ // Don't adapt arguments.
-+ // -------------------------------------------
-+ __ bind(&dont_adapt_arguments);
-+ __ mov(ecx, FieldOperand(edi, JSFunction::kCodeEntryOffset));
-+ __ jmp(ecx);
-+
-+ __ bind(&stack_overflow);
-+ {
-+ FrameScope frame(masm, StackFrame::MANUAL);
-+ __ CallRuntime(Runtime::kThrowStackOverflow);
-+ __ int3();
-+ }
-+}
-+
-+static void Generate_OnStackReplacementHelper(MacroAssembler* masm,
-+ bool has_handler_frame) {
-+ // Lookup the function in the JavaScript frame.
-+ if (has_handler_frame) {
-+ __ mov(eax, Operand(ebp, StandardFrameConstants::kCallerFPOffset));
-+ __ mov(eax, Operand(eax, JavaScriptFrameConstants::kFunctionOffset));
-+ } else {
-+ __ mov(eax, Operand(ebp, JavaScriptFrameConstants::kFunctionOffset));
-+ }
-+
-+ {
-+ FrameScope scope(masm, StackFrame::INTERNAL);
-+ // Pass function as argument.
-+ __ push(eax);
-+ __ CallRuntime(Runtime::kCompileForOnStackReplacement);
-+ }
-+
-+ Label skip;
-+ // If the code object is null, just return to the caller.
-+ __ cmp(eax, Immediate(0));
-+ __ j(not_equal, &skip, Label::kNear);
-+ __ ret(0);
-+
-+ __ bind(&skip);
-+
-+ // Drop any potential handler frame that may be sitting on top of the actual
-+ // JavaScript frame. This is the case when OSR is triggered from bytecode.
-+ if (has_handler_frame) {
-+ __ leave();
-+ }
-+
-+ // Load deoptimization data from the code object.
-+ __ mov(ebx, Operand(eax, Code::kDeoptimizationDataOffset - kHeapObjectTag));
-+
-+ // Load the OSR entrypoint offset from the deoptimization data.
-+ __ mov(ebx, Operand(ebx, FixedArray::OffsetOfElementAt(
-+ DeoptimizationInputData::kOsrPcOffsetIndex) -
-+ kHeapObjectTag));
-+ __ SmiUntag(ebx);
-+
-+ // Compute the target address = code_obj + header_size + osr_offset
-+ __ lea(eax, Operand(eax, ebx, times_1, Code::kHeaderSize - kHeapObjectTag));
-+
-+ // Overwrite the return address on the stack.
-+ __ mov(Operand(esp, 0), eax);
-+
-+ // And "return" to the OSR entry point of the function.
-+ __ ret(0);
-+}
-+
-+void Builtins::Generate_OnStackReplacement(MacroAssembler* masm) {
-+ Generate_OnStackReplacementHelper(masm, false);
-+}
-+
-+void Builtins::Generate_InterpreterOnStackReplacement(MacroAssembler* masm) {
-+ Generate_OnStackReplacementHelper(masm, true);
-+}
-+
-+void Builtins::Generate_WasmCompileLazy(MacroAssembler* masm) {
-+ {
-+ FrameScope scope(masm, StackFrame::INTERNAL);
-+
-+ // Save all parameter registers (see wasm-linkage.cc). They might be
-+ // overwritten in the runtime call below. We don't have any callee-saved
-+ // registers in wasm, so no need to store anything else.
-+ constexpr Register gp_regs[]{eax, ebx, ecx, edx, esi};
-+
-+ for (auto reg : gp_regs) {
-+ __ Push(reg);
-+ }
-+
-+ // Initialize esi register with kZero, CEntryStub will use it to set the
-+ // current context on the isolate.
-+ __ Move(esi, Smi::kZero);
-+ __ CallRuntime(Runtime::kWasmCompileLazy);
-+ // Store returned instruction start in edi.
-+ __ lea(edi, FieldOperand(eax, Code::kHeaderSize));
-+
-+ // Restore registers.
-+ for (int i = arraysize(gp_regs) - 1; i >= 0; --i) {
-+ __ Pop(gp_regs[i]);
-+ }
-+ }
-+ // Now jump to the instructions of the returned code object.
-+ __ jmp(edi);
-+}
-+
-+#undef __
-+} // namespace internal
-+} // namespace v8
-+
-+#endif // V8_TARGET_ARCH_X87
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/builtins/x87/OWNERS qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/builtins/x87/OWNERS
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/builtins/x87/OWNERS 1970-01-01 01:00:00.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/builtins/x87/OWNERS 2018-02-18 19:00:53.934422017 +0100
-@@ -0,0 +1,2 @@
-+weiliang.lin(a)intel.com
-+chunyang.dai(a)intel.com
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/codegen.h qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/codegen.h
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/codegen.h 2018-02-02 11:39:52.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/codegen.h 2018-02-18 19:00:53.934422017 +0100
-@@ -59,6 +59,8 @@
- #include "src/mips64/codegen-mips64.h" // NOLINT
- #elif V8_TARGET_ARCH_S390
- #include "src/s390/codegen-s390.h" // NOLINT
-+#elif V8_TARGET_ARCH_X87
-+#include "src/x87/codegen-x87.h" // NOLINT
- #else
- #error Unsupported target architecture.
- #endif
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/code-stubs.h qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/code-stubs.h
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/code-stubs.h 2018-02-02 11:39:52.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/code-stubs.h 2018-02-18 19:00:53.935422002 +0100
-@@ -514,6 +514,8 @@
- #include "src/mips64/code-stubs-mips64.h"
- #elif V8_TARGET_ARCH_S390
- #include "src/s390/code-stubs-s390.h"
-+#elif V8_TARGET_ARCH_X87
-+#include "src/x87/code-stubs-x87.h"
- #else
- #error Unsupported target architecture.
- #endif
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/compiler/c-linkage.cc qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/compiler/c-linkage.cc
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/compiler/c-linkage.cc 2018-02-02 11:39:52.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/compiler/c-linkage.cc 2018-02-18 19:00:53.935422002 +0100
-@@ -50,6 +50,12 @@
- rbx.bit() | r12.bit() | r13.bit() | r14.bit() | r15.bit()
- #endif
-
-+#elif V8_TARGET_ARCH_X87
-+// ===========================================================================
-+// == x87 ====================================================================
-+// ===========================================================================
-+#define CALLEE_SAVE_REGISTERS esi.bit() | edi.bit() | ebx.bit()
-+
- #elif V8_TARGET_ARCH_ARM
- // ===========================================================================
- // == arm ====================================================================
-@@ -155,7 +161,7 @@
- msig->parameter_count());
- // Check the types of the signature.
- // Currently no floating point parameters or returns are allowed because
-- // on ia32, the FP top of stack is involved.
-+ // on x87 and ia32, the FP top of stack is involved.
- for (size_t i = 0; i < msig->return_count(); i++) {
- MachineRepresentation rep = msig->GetReturn(i).representation();
- CHECK_NE(MachineRepresentation::kFloat32, rep);
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/compiler/instruction-codes.h qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/compiler/instruction-codes.h
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/compiler/instruction-codes.h 2018-02-02 11:39:52.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/compiler/instruction-codes.h 2018-02-18 19:00:54.010420900 +0100
-@@ -23,6 +23,8 @@
- #include "src/compiler/ppc/instruction-codes-ppc.h"
- #elif V8_TARGET_ARCH_S390
- #include "src/compiler/s390/instruction-codes-s390.h"
-+#elif V8_TARGET_ARCH_X87
-+#include "src/compiler/x87/instruction-codes-x87.h"
- #else
- #define TARGET_ARCH_OPCODE_LIST(V)
- #define TARGET_ADDRESSING_MODE_LIST(V)
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/compiler/wasm-linkage.cc qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/compiler/wasm-linkage.cc
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/compiler/wasm-linkage.cc 2018-02-02 11:39:52.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/compiler/wasm-linkage.cc 2018-02-18 19:00:54.011420885 +0100
-@@ -69,6 +69,14 @@
- #define FP_PARAM_REGISTERS xmm1, xmm2, xmm3, xmm4, xmm5, xmm6
- #define FP_RETURN_REGISTERS xmm1, xmm2
-
-+#elif V8_TARGET_ARCH_X87
-+// ===========================================================================
-+// == x87 ====================================================================
-+// ===========================================================================
-+#define GP_PARAM_REGISTERS eax, edx, ecx, ebx, esi
-+#define GP_RETURN_REGISTERS eax, edx
-+#define FP_RETURN_REGISTERS stX_0
-+
- #elif V8_TARGET_ARCH_ARM
- // ===========================================================================
- // == arm ====================================================================
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/compiler/x87/code-generator-x87.cc qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/compiler/x87/code-generator-x87.cc
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/compiler/x87/code-generator-x87.cc 1970-01-01 01:00:00.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/compiler/x87/code-generator-x87.cc 2018-02-18 19:00:54.012420870 +0100
-@@ -0,0 +1,2878 @@
-+// Copyright 2013 the V8 project authors. All rights reserved.
-+// Use of this source code is governed by a BSD-style license that can be
-+// found in the LICENSE file.
-+
-+#include "src/compiler/code-generator.h"
-+
-+#include "src/compilation-info.h"
-+#include "src/compiler/code-generator-impl.h"
-+#include "src/compiler/gap-resolver.h"
-+#include "src/compiler/node-matchers.h"
-+#include "src/compiler/osr.h"
-+#include "src/frames.h"
-+#include "src/x87/assembler-x87.h"
-+#include "src/x87/frames-x87.h"
-+#include "src/x87/macro-assembler-x87.h"
-+
-+namespace v8 {
-+namespace internal {
-+namespace compiler {
-+
-+#define __ tasm()->
-+
-+
-+// Adds X87 specific methods for decoding operands.
-+class X87OperandConverter : public InstructionOperandConverter {
-+ public:
-+ X87OperandConverter(CodeGenerator* gen, Instruction* instr)
-+ : InstructionOperandConverter(gen, instr) {}
-+
-+ Operand InputOperand(size_t index, int extra = 0) {
-+ return ToOperand(instr_->InputAt(index), extra);
-+ }
-+
-+ Immediate InputImmediate(size_t index) {
-+ return ToImmediate(instr_->InputAt(index));
-+ }
-+
-+ Operand OutputOperand() { return ToOperand(instr_->Output()); }
-+
-+ Operand ToOperand(InstructionOperand* op, int extra = 0) {
-+ if (op->IsRegister()) {
-+ DCHECK(extra == 0);
-+ return Operand(ToRegister(op));
-+ }
-+ DCHECK(op->IsStackSlot() || op->IsFPStackSlot());
-+ return SlotToOperand(AllocatedOperand::cast(op)->index(), extra);
-+ }
-+
-+ Operand SlotToOperand(int slot, int extra = 0) {
-+ FrameOffset offset = frame_access_state()->GetFrameOffset(slot);
-+ return Operand(offset.from_stack_pointer() ? esp : ebp,
-+ offset.offset() + extra);
-+ }
-+
-+ Operand HighOperand(InstructionOperand* op) {
-+ DCHECK(op->IsFPStackSlot());
-+ return ToOperand(op, kPointerSize);
-+ }
-+
-+ Immediate ToImmediate(InstructionOperand* operand) {
-+ Constant constant = ToConstant(operand);
-+ if (constant.type() == Constant::kInt32 &&
-+ RelocInfo::IsWasmReference(constant.rmode())) {
-+ return Immediate(reinterpret_cast<Address>(constant.ToInt32()),
-+ constant.rmode());
-+ }
-+ switch (constant.type()) {
-+ case Constant::kInt32:
-+ return Immediate(constant.ToInt32());
-+ case Constant::kFloat32:
-+ return Immediate::EmbeddedNumber(constant.ToFloat32());
-+ case Constant::kFloat64:
-+ return Immediate::EmbeddedNumber(constant.ToFloat64().value());
-+ case Constant::kExternalReference:
-+ return Immediate(constant.ToExternalReference());
-+ case Constant::kHeapObject:
-+ return Immediate(constant.ToHeapObject());
-+ case Constant::kInt64:
-+ break;
-+ case Constant::kRpoNumber:
-+ return Immediate::CodeRelativeOffset(ToLabel(operand));
-+ }
-+ UNREACHABLE();
-+ }
-+
-+ static size_t NextOffset(size_t* offset) {
-+ size_t i = *offset;
-+ (*offset)++;
-+ return i;
-+ }
-+
-+ static ScaleFactor ScaleFor(AddressingMode one, AddressingMode mode) {
-+ STATIC_ASSERT(0 == static_cast<int>(times_1));
-+ STATIC_ASSERT(1 == static_cast<int>(times_2));
-+ STATIC_ASSERT(2 == static_cast<int>(times_4));
-+ STATIC_ASSERT(3 == static_cast<int>(times_8));
-+ int scale = static_cast<int>(mode - one);
-+ DCHECK(scale >= 0 && scale < 4);
-+ return static_cast<ScaleFactor>(scale);
-+ }
-+
-+ Operand MemoryOperand(size_t* offset) {
-+ AddressingMode mode = AddressingModeField::decode(instr_->opcode());
-+ switch (mode) {
-+ case kMode_MR: {
-+ Register base = InputRegister(NextOffset(offset));
-+ int32_t disp = 0;
-+ return Operand(base, disp);
-+ }
-+ case kMode_MRI: {
-+ Register base = InputRegister(NextOffset(offset));
-+ Constant ctant = ToConstant(instr_->InputAt(NextOffset(offset)));
-+ return Operand(base, ctant.ToInt32(), ctant.rmode());
-+ }
-+ case kMode_MR1:
-+ case kMode_MR2:
-+ case kMode_MR4:
-+ case kMode_MR8: {
-+ Register base = InputRegister(NextOffset(offset));
-+ Register index = InputRegister(NextOffset(offset));
-+ ScaleFactor scale = ScaleFor(kMode_MR1, mode);
-+ int32_t disp = 0;
-+ return Operand(base, index, scale, disp);
-+ }
-+ case kMode_MR1I:
-+ case kMode_MR2I:
-+ case kMode_MR4I:
-+ case kMode_MR8I: {
-+ Register base = InputRegister(NextOffset(offset));
-+ Register index = InputRegister(NextOffset(offset));
-+ ScaleFactor scale = ScaleFor(kMode_MR1I, mode);
-+ Constant ctant = ToConstant(instr_->InputAt(NextOffset(offset)));
-+ return Operand(base, index, scale, ctant.ToInt32(), ctant.rmode());
-+ }
-+ case kMode_M1:
-+ case kMode_M2:
-+ case kMode_M4:
-+ case kMode_M8: {
-+ Register index = InputRegister(NextOffset(offset));
-+ ScaleFactor scale = ScaleFor(kMode_M1, mode);
-+ int32_t disp = 0;
-+ return Operand(index, scale, disp);
-+ }
-+ case kMode_M1I:
-+ case kMode_M2I:
-+ case kMode_M4I:
-+ case kMode_M8I: {
-+ Register index = InputRegister(NextOffset(offset));
-+ ScaleFactor scale = ScaleFor(kMode_M1I, mode);
-+ Constant ctant = ToConstant(instr_->InputAt(NextOffset(offset)));
-+ return Operand(index, scale, ctant.ToInt32(), ctant.rmode());
-+ }
-+ case kMode_MI: {
-+ Constant ctant = ToConstant(instr_->InputAt(NextOffset(offset)));
-+ return Operand(ctant.ToInt32(), ctant.rmode());
-+ }
-+ case kMode_None:
-+ UNREACHABLE();
-+ }
-+ UNREACHABLE();
-+ }
-+
-+ Operand MemoryOperand(size_t first_input = 0) {
-+ return MemoryOperand(&first_input);
-+ }
-+};
-+
-+
-+namespace {
-+
-+bool HasImmediateInput(Instruction* instr, size_t index) {
-+ return instr->InputAt(index)->IsImmediate();
-+}
-+
-+
-+class OutOfLineLoadInteger final : public OutOfLineCode {
-+ public:
-+ OutOfLineLoadInteger(CodeGenerator* gen, Register result)
-+ : OutOfLineCode(gen), result_(result) {}
-+
-+ void Generate() final { __ xor_(result_, result_); }
-+
-+ private:
-+ Register const result_;
-+};
-+
-+class OutOfLineLoadFloat32NaN final : public OutOfLineCode {
-+ public:
-+ OutOfLineLoadFloat32NaN(CodeGenerator* gen, X87Register result)
-+ : OutOfLineCode(gen), result_(result) {}
-+
-+ void Generate() final {
-+ DCHECK(result_.code() == 0);
-+ USE(result_);
-+ __ fstp(0);
-+ __ push(Immediate(0xffc00000));
-+ __ fld_s(MemOperand(esp, 0));
-+ __ lea(esp, Operand(esp, kFloatSize));
-+ }
-+
-+ private:
-+ X87Register const result_;
-+};
-+
-+class OutOfLineLoadFloat64NaN final : public OutOfLineCode {
-+ public:
-+ OutOfLineLoadFloat64NaN(CodeGenerator* gen, X87Register result)
-+ : OutOfLineCode(gen), result_(result) {}
-+
-+ void Generate() final {
-+ DCHECK(result_.code() == 0);
-+ USE(result_);
-+ __ fstp(0);
-+ __ push(Immediate(0xfff80000));
-+ __ push(Immediate(0x00000000));
-+ __ fld_d(MemOperand(esp, 0));
-+ __ lea(esp, Operand(esp, kDoubleSize));
-+ }
-+
-+ private:
-+ X87Register const result_;
-+};
-+
-+class OutOfLineTruncateDoubleToI final : public OutOfLineCode {
-+ public:
-+ OutOfLineTruncateDoubleToI(CodeGenerator* gen, Register result,
-+ X87Register input)
-+ : OutOfLineCode(gen),
-+ result_(result),
-+ input_(input),
-+ zone_(gen->zone()) {}
-+
-+ void Generate() final {
-+ UNIMPLEMENTED();
-+ USE(result_);
-+ USE(input_);
-+ }
-+
-+ private:
-+ Register const result_;
-+ X87Register const input_;
-+ Zone* zone_;
-+};
-+
-+
-+class OutOfLineRecordWrite final : public OutOfLineCode {
-+ public:
-+ OutOfLineRecordWrite(CodeGenerator* gen, Register object, Operand operand,
-+ Register value, Register scratch0, Register scratch1,
-+ RecordWriteMode mode)
-+ : OutOfLineCode(gen),
-+ object_(object),
-+ operand_(operand),
-+ value_(value),
-+ scratch0_(scratch0),
-+ scratch1_(scratch1),
-+ mode_(mode),
-+ zone_(gen->zone()) {}
-+
-+ void Generate() final {
-+ if (mode_ > RecordWriteMode::kValueIsPointer) {
-+ __ JumpIfSmi(value_, exit());
-+ }
-+ __ CheckPageFlag(value_, scratch0_,
-+ MemoryChunk::kPointersToHereAreInterestingMask, zero,
-+ exit());
-+ RememberedSetAction const remembered_set_action =
-+ mode_ > RecordWriteMode::kValueIsMap ? EMIT_REMEMBERED_SET
-+ : OMIT_REMEMBERED_SET;
-+ SaveFPRegsMode const save_fp_mode =
-+ frame()->DidAllocateDoubleRegisters() ? kSaveFPRegs : kDontSaveFPRegs;
-+ __ lea(scratch1_, operand_);
-+ __ CallStubDelayed(
-+ new (zone_) RecordWriteStub(nullptr, object_, scratch0_, scratch1_,
-+ remembered_set_action, save_fp_mode));
-+ }
-+
-+ private:
-+ Register const object_;
-+ Operand const operand_;
-+ Register const value_;
-+ Register const scratch0_;
-+ Register const scratch1_;
-+ RecordWriteMode const mode_;
-+ Zone* zone_;
-+};
-+
-+} // namespace
-+
-+#define ASSEMBLE_CHECKED_LOAD_FLOAT(asm_instr, OutOfLineLoadNaN) \
-+ do { \
-+ auto result = i.OutputDoubleRegister(); \
-+ auto offset = i.InputRegister(0); \
-+ DCHECK(result.code() == 0); \
-+ if (instr->InputAt(1)->IsRegister()) { \
-+ __ cmp(offset, i.InputRegister(1)); \
-+ } else { \
-+ __ cmp(offset, i.InputImmediate(1)); \
-+ } \
-+ OutOfLineCode* ool = new (zone()) OutOfLineLoadNaN(this, result); \
-+ __ j(above_equal, ool->entry()); \
-+ __ fstp(0); \
-+ __ asm_instr(i.MemoryOperand(2)); \
-+ __ bind(ool->exit()); \
-+ } while (false)
-+
-+#define ASSEMBLE_CHECKED_LOAD_INTEGER(asm_instr) \
-+ do { \
-+ auto result = i.OutputRegister(); \
-+ auto offset = i.InputRegister(0); \
-+ if (instr->InputAt(1)->IsRegister()) { \
-+ __ cmp(offset, i.InputRegister(1)); \
-+ } else { \
-+ __ cmp(offset, i.InputImmediate(1)); \
-+ } \
-+ OutOfLineCode* ool = new (zone()) OutOfLineLoadInteger(this, result); \
-+ __ j(above_equal, ool->entry()); \
-+ __ asm_instr(result, i.MemoryOperand(2)); \
-+ __ bind(ool->exit()); \
-+ } while (false)
-+
-+
-+#define ASSEMBLE_CHECKED_STORE_FLOAT(asm_instr) \
-+ do { \
-+ auto offset = i.InputRegister(0); \
-+ if (instr->InputAt(1)->IsRegister()) { \
-+ __ cmp(offset, i.InputRegister(1)); \
-+ } else { \
-+ __ cmp(offset, i.InputImmediate(1)); \
-+ } \
-+ Label done; \
-+ DCHECK(i.InputDoubleRegister(2).code() == 0); \
-+ __ j(above_equal, &done, Label::kNear); \
-+ __ asm_instr(i.MemoryOperand(3)); \
-+ __ bind(&done); \
-+ } while (false)
-+
-+
-+#define ASSEMBLE_CHECKED_STORE_INTEGER(asm_instr) \
-+ do { \
-+ auto offset = i.InputRegister(0); \
-+ if (instr->InputAt(1)->IsRegister()) { \
-+ __ cmp(offset, i.InputRegister(1)); \
-+ } else { \
-+ __ cmp(offset, i.InputImmediate(1)); \
-+ } \
-+ Label done; \
-+ __ j(above_equal, &done, Label::kNear); \
-+ if (instr->InputAt(2)->IsRegister()) { \
-+ __ asm_instr(i.MemoryOperand(3), i.InputRegister(2)); \
-+ } else { \
-+ __ asm_instr(i.MemoryOperand(3), i.InputImmediate(2)); \
-+ } \
-+ __ bind(&done); \
-+ } while (false)
-+
-+#define ASSEMBLE_COMPARE(asm_instr) \
-+ do { \
-+ if (AddressingModeField::decode(instr->opcode()) != kMode_None) { \
-+ size_t index = 0; \
-+ Operand left = i.MemoryOperand(&index); \
-+ if (HasImmediateInput(instr, index)) { \
-+ __ asm_instr(left, i.InputImmediate(index)); \
-+ } else { \
-+ __ asm_instr(left, i.InputRegister(index)); \
-+ } \
-+ } else { \
-+ if (HasImmediateInput(instr, 1)) { \
-+ if (instr->InputAt(0)->IsRegister()) { \
-+ __ asm_instr(i.InputRegister(0), i.InputImmediate(1)); \
-+ } else { \
-+ __ asm_instr(i.InputOperand(0), i.InputImmediate(1)); \
-+ } \
-+ } else { \
-+ if (instr->InputAt(1)->IsRegister()) { \
-+ __ asm_instr(i.InputRegister(0), i.InputRegister(1)); \
-+ } else { \
-+ __ asm_instr(i.InputRegister(0), i.InputOperand(1)); \
-+ } \
-+ } \
-+ } \
-+ } while (0)
-+
-+#define ASSEMBLE_IEEE754_BINOP(name) \
-+ do { \
-+ /* Saves the esp into ebx */ \
-+ __ push(ebx); \
-+ __ mov(ebx, esp); \
-+ /* Pass one double as argument on the stack. */ \
-+ __ PrepareCallCFunction(4, eax); \
-+ __ fstp(0); \
-+ /* Load first operand from original stack */ \
-+ __ fld_d(MemOperand(ebx, 4 + kDoubleSize)); \
-+ /* Put first operand into stack for function call */ \
-+ __ fstp_d(Operand(esp, 0 * kDoubleSize)); \
-+ /* Load second operand from original stack */ \
-+ __ fld_d(MemOperand(ebx, 4)); \
-+ /* Put second operand into stack for function call */ \
-+ __ fstp_d(Operand(esp, 1 * kDoubleSize)); \
-+ __ CallCFunction( \
-+ ExternalReference::ieee754_##name##_function(__ isolate()), 4); \
-+ /* Restore the ebx */ \
-+ __ pop(ebx); \
-+ /* Return value is in st(0) on x87. */ \
-+ __ lea(esp, Operand(esp, 2 * kDoubleSize)); \
-+ } while (false)
-+
-+#define ASSEMBLE_IEEE754_UNOP(name) \
-+ do { \
-+ /* Saves the esp into ebx */ \
-+ __ push(ebx); \
-+ __ mov(ebx, esp); \
-+ /* Pass one double as argument on the stack. */ \
-+ __ PrepareCallCFunction(2, eax); \
-+ __ fstp(0); \
-+ /* Load operand from original stack */ \
-+ __ fld_d(MemOperand(ebx, 4)); \
-+ /* Put operand into stack for function call */ \
-+ __ fstp_d(Operand(esp, 0)); \
-+ __ CallCFunction( \
-+ ExternalReference::ieee754_##name##_function(__ isolate()), 2); \
-+ /* Restore the ebx */ \
-+ __ pop(ebx); \
-+ /* Return value is in st(0) on x87. */ \
-+ __ lea(esp, Operand(esp, kDoubleSize)); \
-+ } while (false)
-+
-+#define ASSEMBLE_ATOMIC_BINOP(bin_inst, mov_inst, cmpxchg_inst) \
-+ do { \
-+ Label binop; \
-+ __ bind(&binop); \
-+ __ mov_inst(eax, i.MemoryOperand(1)); \
-+ __ Move(i.TempRegister(0), eax); \
-+ __ bin_inst(i.TempRegister(0), i.InputRegister(0)); \
-+ __ lock(); \
-+ __ cmpxchg_inst(i.MemoryOperand(1), i.TempRegister(0)); \
-+ __ j(not_equal, &binop); \
-+ } while (false)
-+
-+void CodeGenerator::AssembleDeconstructFrame() {
-+ __ mov(esp, ebp);
-+ __ pop(ebp);
-+}
-+
-+void CodeGenerator::AssemblePrepareTailCall() {
-+ if (frame_access_state()->has_frame()) {
-+ __ mov(ebp, MemOperand(ebp, 0));
-+ }
-+ frame_access_state()->SetFrameAccessToSP();
-+}
-+
-+void CodeGenerator::AssemblePopArgumentsAdaptorFrame(Register args_reg,
-+ Register, Register,
-+ Register) {
-+ // There are not enough temp registers left on ia32 for a call instruction
-+ // so we pick some scratch registers and save/restore them manually here.
-+ int scratch_count = 3;
-+ Register scratch1 = ebx;
-+ Register scratch2 = ecx;
-+ Register scratch3 = edx;
-+ DCHECK(!AreAliased(args_reg, scratch1, scratch2, scratch3));
-+ Label done;
-+
-+ // Check if current frame is an arguments adaptor frame.
-+ __ cmp(Operand(ebp, StandardFrameConstants::kContextOffset),
-+ Immediate(Smi::FromInt(StackFrame::ARGUMENTS_ADAPTOR)));
-+ __ j(not_equal, &done, Label::kNear);
-+
-+ __ push(scratch1);
-+ __ push(scratch2);
-+ __ push(scratch3);
-+
-+ // Load arguments count from current arguments adaptor frame (note, it
-+ // does not include receiver).
-+ Register caller_args_count_reg = scratch1;
-+ __ mov(caller_args_count_reg,
-+ Operand(ebp, ArgumentsAdaptorFrameConstants::kLengthOffset));
-+ __ SmiUntag(caller_args_count_reg);
-+
-+ ParameterCount callee_args_count(args_reg);
-+ __ PrepareForTailCall(callee_args_count, caller_args_count_reg, scratch2,
-+ scratch3, ReturnAddressState::kOnStack, scratch_count);
-+ __ pop(scratch3);
-+ __ pop(scratch2);
-+ __ pop(scratch1);
-+
-+ __ bind(&done);
-+}
-+
-+namespace {
-+
-+void AdjustStackPointerForTailCall(TurboAssembler* tasm,
-+ FrameAccessState* state,
-+ int new_slot_above_sp,
-+ bool allow_shrinkage = true) {
-+ int current_sp_offset = state->GetSPToFPSlotCount() +
-+ StandardFrameConstants::kFixedSlotCountAboveFp;
-+ int stack_slot_delta = new_slot_above_sp - current_sp_offset;
-+ if (stack_slot_delta > 0) {
-+ tasm->sub(esp, Immediate(stack_slot_delta * kPointerSize));
-+ state->IncreaseSPDelta(stack_slot_delta);
-+ } else if (allow_shrinkage && stack_slot_delta < 0) {
-+ tasm->add(esp, Immediate(-stack_slot_delta * kPointerSize));
-+ state->IncreaseSPDelta(stack_slot_delta);
-+ }
-+}
-+
-+} // namespace
-+
-+void CodeGenerator::AssembleTailCallBeforeGap(Instruction* instr,
-+ int first_unused_stack_slot) {
-+ CodeGenerator::PushTypeFlags flags(kImmediatePush | kScalarPush);
-+ ZoneVector<MoveOperands*> pushes(zone());
-+ GetPushCompatibleMoves(instr, flags, &pushes);
-+
-+ if (!pushes.empty() &&
-+ (LocationOperand::cast(pushes.back()->destination()).index() + 1 ==
-+ first_unused_stack_slot)) {
-+ X87OperandConverter g(this, instr);
-+ for (auto move : pushes) {
-+ LocationOperand destination_location(
-+ LocationOperand::cast(move->destination()));
-+ InstructionOperand source(move->source());
-+ AdjustStackPointerForTailCall(tasm(), frame_access_state(),
-+ destination_location.index());
-+ if (source.IsStackSlot()) {
-+ LocationOperand source_location(LocationOperand::cast(source));
-+ __ push(g.SlotToOperand(source_location.index()));
-+ } else if (source.IsRegister()) {
-+ LocationOperand source_location(LocationOperand::cast(source));
-+ __ push(source_location.GetRegister());
-+ } else if (source.IsImmediate()) {
-+ __ push(Immediate(ImmediateOperand::cast(source).inline_value()));
-+ } else {
-+ // Pushes of non-scalar data types are not supported.
-+ UNIMPLEMENTED();
-+ }
-+ frame_access_state()->IncreaseSPDelta(1);
-+ move->Eliminate();
-+ }
-+ }
-+ AdjustStackPointerForTailCall(tasm(), frame_access_state(),
-+ first_unused_stack_slot, false);
-+}
-+
-+void CodeGenerator::AssembleTailCallAfterGap(Instruction* instr,
-+ int first_unused_stack_slot) {
-+ AdjustStackPointerForTailCall(tasm(), frame_access_state(),
-+ first_unused_stack_slot);
-+}
-+
-+// Assembles an instruction after register allocation, producing machine code.
-+CodeGenerator::CodeGenResult CodeGenerator::AssembleArchInstruction(
-+ Instruction* instr) {
-+ X87OperandConverter i(this, instr);
-+ InstructionCode opcode = instr->opcode();
-+ ArchOpcode arch_opcode = ArchOpcodeField::decode(opcode);
-+
-+ switch (arch_opcode) {
-+ case kArchCallCodeObject: {
-+ if (FLAG_debug_code && FLAG_enable_slow_asserts) {
-+ __ VerifyX87StackDepth(1);
-+ }
-+ __ fstp(0);
-+ EnsureSpaceForLazyDeopt();
-+ if (HasImmediateInput(instr, 0)) {
-+ Handle<Code> code = i.InputCode(0);
-+ __ call(code, RelocInfo::CODE_TARGET);
-+ } else {
-+ Register reg = i.InputRegister(0);
-+ __ add(reg, Immediate(Code::kHeaderSize - kHeapObjectTag));
-+ __ call(reg);
-+ }
-+ RecordCallPosition(instr);
-+ bool double_result =
-+ instr->HasOutput() && instr->Output()->IsFPRegister();
-+ if (double_result) {
-+ __ lea(esp, Operand(esp, -kDoubleSize));
-+ __ fstp_d(Operand(esp, 0));
-+ }
-+ __ fninit();
-+ if (double_result) {
-+ __ fld_d(Operand(esp, 0));
-+ __ lea(esp, Operand(esp, kDoubleSize));
-+ } else {
-+ __ fld1();
-+ }
-+ frame_access_state()->ClearSPDelta();
-+ break;
-+ }
-+ case kArchTailCallCodeObjectFromJSFunction:
-+ case kArchTailCallCodeObject: {
-+ if (FLAG_debug_code && FLAG_enable_slow_asserts) {
-+ __ VerifyX87StackDepth(1);
-+ }
-+ __ fstp(0);
-+ if (arch_opcode == kArchTailCallCodeObjectFromJSFunction) {
-+ AssemblePopArgumentsAdaptorFrame(kJavaScriptCallArgCountRegister,
-+ no_reg, no_reg, no_reg);
-+ }
-+ if (HasImmediateInput(instr, 0)) {
-+ Handle<Code> code = i.InputCode(0);
-+ __ jmp(code, RelocInfo::CODE_TARGET);
-+ } else {
-+ Register reg = i.InputRegister(0);
-+ __ add(reg, Immediate(Code::kHeaderSize - kHeapObjectTag));
-+ __ jmp(reg);
-+ }
-+ frame_access_state()->ClearSPDelta();
-+ frame_access_state()->SetFrameAccessToDefault();
-+ break;
-+ }
-+ case kArchTailCallAddress: {
-+ CHECK(!HasImmediateInput(instr, 0));
-+ Register reg = i.InputRegister(0);
-+ __ jmp(reg);
-+ frame_access_state()->ClearSPDelta();
-+ frame_access_state()->SetFrameAccessToDefault();
-+ break;
-+ }
-+ case kArchCallJSFunction: {
-+ EnsureSpaceForLazyDeopt();
-+ Register func = i.InputRegister(0);
-+ if (FLAG_debug_code) {
-+ // Check the function's context matches the context argument.
-+ __ cmp(esi, FieldOperand(func, JSFunction::kContextOffset));
-+ __ Assert(equal, kWrongFunctionContext);
-+ }
-+ if (FLAG_debug_code && FLAG_enable_slow_asserts) {
-+ __ VerifyX87StackDepth(1);
-+ }
-+ __ fstp(0);
-+ __ call(FieldOperand(func, JSFunction::kCodeEntryOffset));
-+ RecordCallPosition(instr);
-+ bool double_result =
-+ instr->HasOutput() && instr->Output()->IsFPRegister();
-+ if (double_result) {
-+ __ lea(esp, Operand(esp, -kDoubleSize));
-+ __ fstp_d(Operand(esp, 0));
-+ }
-+ __ fninit();
-+ if (double_result) {
-+ __ fld_d(Operand(esp, 0));
-+ __ lea(esp, Operand(esp, kDoubleSize));
-+ } else {
-+ __ fld1();
-+ }
-+ frame_access_state()->ClearSPDelta();
-+ break;
-+ }
-+ case kArchTailCallJSFunctionFromJSFunction: {
-+ Register func = i.InputRegister(0);
-+ if (FLAG_debug_code) {
-+ // Check the function's context matches the context argument.
-+ __ cmp(esi, FieldOperand(func, JSFunction::kContextOffset));
-+ __ Assert(equal, kWrongFunctionContext);
-+ }
-+ if (FLAG_debug_code && FLAG_enable_slow_asserts) {
-+ __ VerifyX87StackDepth(1);
-+ }
-+ __ fstp(0);
-+ AssemblePopArgumentsAdaptorFrame(kJavaScriptCallArgCountRegister, no_reg,
-+ no_reg, no_reg);
-+ __ jmp(FieldOperand(func, JSFunction::kCodeEntryOffset));
-+ frame_access_state()->ClearSPDelta();
-+ frame_access_state()->SetFrameAccessToDefault();
-+ break;
-+ }
-+ case kArchPrepareCallCFunction: {
-+ // Frame alignment requires using FP-relative frame addressing.
-+ frame_access_state()->SetFrameAccessToFP();
-+ int const num_parameters = MiscField::decode(instr->opcode());
-+ __ PrepareCallCFunction(num_parameters, i.TempRegister(0));
-+ break;
-+ }
-+ case kArchPrepareTailCall:
-+ AssemblePrepareTailCall();
-+ break;
-+ case kArchCallCFunction: {
-+ if (FLAG_debug_code && FLAG_enable_slow_asserts) {
-+ __ VerifyX87StackDepth(1);
-+ }
-+ __ fstp(0);
-+ int const num_parameters = MiscField::decode(instr->opcode());
-+ if (HasImmediateInput(instr, 0)) {
-+ ExternalReference ref = i.InputExternalReference(0);
-+ __ CallCFunction(ref, num_parameters);
-+ } else {
-+ Register func = i.InputRegister(0);
-+ __ CallCFunction(func, num_parameters);
-+ }
-+ bool double_result =
-+ instr->HasOutput() && instr->Output()->IsFPRegister();
-+ if (double_result) {
-+ __ lea(esp, Operand(esp, -kDoubleSize));
-+ __ fstp_d(Operand(esp, 0));
-+ }
-+ __ fninit();
-+ if (double_result) {
-+ __ fld_d(Operand(esp, 0));
-+ __ lea(esp, Operand(esp, kDoubleSize));
-+ } else {
-+ __ fld1();
-+ }
-+ frame_access_state()->SetFrameAccessToDefault();
-+ frame_access_state()->ClearSPDelta();
-+ break;
-+ }
-+ case kArchJmp:
-+ AssembleArchJump(i.InputRpo(0));
-+ break;
-+ case kArchLookupSwitch:
-+ AssembleArchLookupSwitch(instr);
-+ break;
-+ case kArchTableSwitch:
-+ AssembleArchTableSwitch(instr);
-+ break;
-+ case kArchComment: {
-+ Address comment_string = i.InputExternalReference(0).address();
-+ __ RecordComment(reinterpret_cast<const char*>(comment_string));
-+ break;
-+ }
-+ case kArchDebugBreak:
-+ __ int3();
-+ break;
-+ case kArchNop:
-+ case kArchThrowTerminator:
-+ // don't emit code for nops.
-+ break;
-+ case kArchDeoptimize: {
-+ int deopt_state_id =
-+ BuildTranslation(instr, -1, 0, OutputFrameStateCombine::Ignore());
-+ int double_register_param_count = 0;
-+ int x87_layout = 0;
-+ for (size_t i = 0; i < instr->InputCount(); i++) {
-+ if (instr->InputAt(i)->IsFPRegister()) {
-+ double_register_param_count++;
-+ }
-+ }
-+ // Currently we use only one X87 register. If double_register_param_count
-+ // is greater than 1, it means a duplicated double register was added to
-+ // the input of this instruction.
-+ if (double_register_param_count > 0) {
-+ x87_layout = (0 << 3) | 1;
-+ }
-+ // The layout of x87 register stack is loaded on the top of FPU register
-+ // stack for deoptimization.
-+ __ push(Immediate(x87_layout));
-+ __ fild_s(MemOperand(esp, 0));
-+ __ lea(esp, Operand(esp, kPointerSize));
-+
-+ CodeGenResult result =
-+ AssembleDeoptimizerCall(deopt_state_id, current_source_position_);
-+ if (result != kSuccess) return result;
-+ break;
-+ }
-+ case kArchRet:
-+ AssembleReturn(instr->InputAt(0));
-+ break;
-+ case kArchFramePointer:
-+ __ mov(i.OutputRegister(), ebp);
-+ break;
-+ case kArchStackPointer:
-+ __ mov(i.OutputRegister(), esp);
-+ break;
-+ case kArchParentFramePointer:
-+ if (frame_access_state()->has_frame()) {
-+ __ mov(i.OutputRegister(), Operand(ebp, 0));
-+ } else {
-+ __ mov(i.OutputRegister(), ebp);
-+ }
-+ break;
-+ case kArchTruncateDoubleToI: {
-+ if (!instr->InputAt(0)->IsFPRegister()) {
-+ __ fld_d(i.InputOperand(0));
-+ }
-+ __ TruncateX87TOSToI(zone(), i.OutputRegister());
-+ if (!instr->InputAt(0)->IsFPRegister()) {
-+ __ fstp(0);
-+ }
-+ break;
-+ }
-+ case kArchStoreWithWriteBarrier: {
-+ RecordWriteMode mode =
-+ static_cast<RecordWriteMode>(MiscField::decode(instr->opcode()));
-+ Register object = i.InputRegister(0);
-+ size_t index = 0;
-+ Operand operand = i.MemoryOperand(&index);
-+ Register value = i.InputRegister(index);
-+ Register scratch0 = i.TempRegister(0);
-+ Register scratch1 = i.TempRegister(1);
-+ auto ool = new (zone()) OutOfLineRecordWrite(this, object, operand, value,
-+ scratch0, scratch1, mode);
-+ __ mov(operand, value);
-+ __ CheckPageFlag(object, scratch0,
-+ MemoryChunk::kPointersFromHereAreInterestingMask,
-+ not_zero, ool->entry());
-+ __ bind(ool->exit());
-+ break;
-+ }
-+ case kArchStackSlot: {
-+ FrameOffset offset =
-+ frame_access_state()->GetFrameOffset(i.InputInt32(0));
-+ Register base;
-+ if (offset.from_stack_pointer()) {
-+ base = esp;
-+ } else {
-+ base = ebp;
-+ }
-+ __ lea(i.OutputRegister(), Operand(base, offset.offset()));
-+ break;
-+ }
-+ case kIeee754Float64Acos:
-+ ASSEMBLE_IEEE754_UNOP(acos);
-+ break;
-+ case kIeee754Float64Acosh:
-+ ASSEMBLE_IEEE754_UNOP(acosh);
-+ break;
-+ case kIeee754Float64Asin:
-+ ASSEMBLE_IEEE754_UNOP(asin);
-+ break;
-+ case kIeee754Float64Asinh:
-+ ASSEMBLE_IEEE754_UNOP(asinh);
-+ break;
-+ case kIeee754Float64Atan:
-+ ASSEMBLE_IEEE754_UNOP(atan);
-+ break;
-+ case kIeee754Float64Atanh:
-+ ASSEMBLE_IEEE754_UNOP(atanh);
-+ break;
-+ case kIeee754Float64Atan2:
-+ ASSEMBLE_IEEE754_BINOP(atan2);
-+ break;
-+ case kIeee754Float64Cbrt:
-+ ASSEMBLE_IEEE754_UNOP(cbrt);
-+ break;
-+ case kIeee754Float64Cos:
-+ __ X87SetFPUCW(0x027F);
-+ ASSEMBLE_IEEE754_UNOP(cos);
-+ __ X87SetFPUCW(0x037F);
-+ break;
-+ case kIeee754Float64Cosh:
-+ ASSEMBLE_IEEE754_UNOP(cosh);
-+ break;
-+ case kIeee754Float64Expm1:
-+ __ X87SetFPUCW(0x027F);
-+ ASSEMBLE_IEEE754_UNOP(expm1);
-+ __ X87SetFPUCW(0x037F);
-+ break;
-+ case kIeee754Float64Exp:
-+ ASSEMBLE_IEEE754_UNOP(exp);
-+ break;
-+ case kIeee754Float64Log:
-+ ASSEMBLE_IEEE754_UNOP(log);
-+ break;
-+ case kIeee754Float64Log1p:
-+ ASSEMBLE_IEEE754_UNOP(log1p);
-+ break;
-+ case kIeee754Float64Log2:
-+ ASSEMBLE_IEEE754_UNOP(log2);
-+ break;
-+ case kIeee754Float64Log10:
-+ ASSEMBLE_IEEE754_UNOP(log10);
-+ break;
-+ case kIeee754Float64Pow: {
-+ // Keep the x87 FPU stack empty before calling stub code
-+ __ fstp(0);
-+ // Call the MathStub and put the return value in st(0)
-+ __ CallStubDelayed(new (zone())
-+ MathPowStub(nullptr, MathPowStub::DOUBLE));
-+ /* Return value is in st(0) on x87. */
-+ __ lea(esp, Operand(esp, 2 * kDoubleSize));
-+ break;
-+ }
-+ case kIeee754Float64Sin:
-+ __ X87SetFPUCW(0x027F);
-+ ASSEMBLE_IEEE754_UNOP(sin);
-+ __ X87SetFPUCW(0x037F);
-+ break;
-+ case kIeee754Float64Sinh:
-+ ASSEMBLE_IEEE754_UNOP(sinh);
-+ break;
-+ case kIeee754Float64Tan:
-+ __ X87SetFPUCW(0x027F);
-+ ASSEMBLE_IEEE754_UNOP(tan);
-+ __ X87SetFPUCW(0x037F);
-+ break;
-+ case kIeee754Float64Tanh:
-+ ASSEMBLE_IEEE754_UNOP(tanh);
-+ break;
-+ case kX87Add:
-+ if (HasImmediateInput(instr, 1)) {
-+ __ add(i.InputOperand(0), i.InputImmediate(1));
-+ } else {
-+ __ add(i.InputRegister(0), i.InputOperand(1));
-+ }
-+ break;
-+ case kX87And:
-+ if (HasImmediateInput(instr, 1)) {
-+ __ and_(i.InputOperand(0), i.InputImmediate(1));
-+ } else {
-+ __ and_(i.InputRegister(0), i.InputOperand(1));
-+ }
-+ break;
-+ case kX87Cmp:
-+ ASSEMBLE_COMPARE(cmp);
-+ break;
-+ case kX87Cmp16:
-+ ASSEMBLE_COMPARE(cmpw);
-+ break;
-+ case kX87Cmp8:
-+ ASSEMBLE_COMPARE(cmpb);
-+ break;
-+ case kX87Test:
-+ ASSEMBLE_COMPARE(test);
-+ break;
-+ case kX87Test16:
-+ ASSEMBLE_COMPARE(test_w);
-+ break;
-+ case kX87Test8:
-+ ASSEMBLE_COMPARE(test_b);
-+ break;
-+ case kX87Imul:
-+ if (HasImmediateInput(instr, 1)) {
-+ __ imul(i.OutputRegister(), i.InputOperand(0), i.InputInt32(1));
-+ } else {
-+ __ imul(i.OutputRegister(), i.InputOperand(1));
-+ }
-+ break;
-+ case kX87ImulHigh:
-+ __ imul(i.InputRegister(1));
-+ break;
-+ case kX87UmulHigh:
-+ __ mul(i.InputRegister(1));
-+ break;
-+ case kX87Idiv:
-+ __ cdq();
-+ __ idiv(i.InputOperand(1));
-+ break;
-+ case kX87Udiv:
-+ __ Move(edx, Immediate(0));
-+ __ div(i.InputOperand(1));
-+ break;
-+ case kX87Not:
-+ __ not_(i.OutputOperand());
-+ break;
-+ case kX87Neg:
-+ __ neg(i.OutputOperand());
-+ break;
-+ case kX87Or:
-+ if (HasImmediateInput(instr, 1)) {
-+ __ or_(i.InputOperand(0), i.InputImmediate(1));
-+ } else {
-+ __ or_(i.InputRegister(0), i.InputOperand(1));
-+ }
-+ break;
-+ case kX87Xor:
-+ if (HasImmediateInput(instr, 1)) {
-+ __ xor_(i.InputOperand(0), i.InputImmediate(1));
-+ } else {
-+ __ xor_(i.InputRegister(0), i.InputOperand(1));
-+ }
-+ break;
-+ case kX87Sub:
-+ if (HasImmediateInput(instr, 1)) {
-+ __ sub(i.InputOperand(0), i.InputImmediate(1));
-+ } else {
-+ __ sub(i.InputRegister(0), i.InputOperand(1));
-+ }
-+ break;
-+ case kX87Shl:
-+ if (HasImmediateInput(instr, 1)) {
-+ __ shl(i.OutputOperand(), i.InputInt5(1));
-+ } else {
-+ __ shl_cl(i.OutputOperand());
-+ }
-+ break;
-+ case kX87Shr:
-+ if (HasImmediateInput(instr, 1)) {
-+ __ shr(i.OutputOperand(), i.InputInt5(1));
-+ } else {
-+ __ shr_cl(i.OutputOperand());
-+ }
-+ break;
-+ case kX87Sar:
-+ if (HasImmediateInput(instr, 1)) {
-+ __ sar(i.OutputOperand(), i.InputInt5(1));
-+ } else {
-+ __ sar_cl(i.OutputOperand());
-+ }
-+ break;
-+ case kX87AddPair: {
-+ // i.OutputRegister(0) == i.InputRegister(0) ... left low word.
-+ // i.InputRegister(1) ... left high word.
-+ // i.InputRegister(2) ... right low word.
-+ // i.InputRegister(3) ... right high word.
-+ bool use_temp = false;
-+ if (i.OutputRegister(0).code() == i.InputRegister(1).code() ||
-+ i.OutputRegister(0).code() == i.InputRegister(3).code()) {
-+ // We cannot write to the output register directly, because it would
-+ // overwrite an input for adc. We have to use the temp register.
-+ use_temp = true;
-+ __ Move(i.TempRegister(0), i.InputRegister(0));
-+ __ add(i.TempRegister(0), i.InputRegister(2));
-+ } else {
-+ __ add(i.OutputRegister(0), i.InputRegister(2));
-+ }
-+ if (i.OutputRegister(1).code() != i.InputRegister(1).code()) {
-+ __ Move(i.OutputRegister(1), i.InputRegister(1));
-+ }
-+ __ adc(i.OutputRegister(1), Operand(i.InputRegister(3)));
-+ if (use_temp) {
-+ __ Move(i.OutputRegister(0), i.TempRegister(0));
-+ }
-+ break;
-+ }
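The add/adc pairing above builds a 64-bit add out of 32-bit halves: the low-word `add` produces a carry that the high-word `adc` consumes. A minimal Python sketch of that arithmetic (helper name is illustrative, not from the patch):

```python
MASK32 = 0xFFFFFFFF

def add_pair(low_l, high_l, low_r, high_r):
    # add: low words first, keeping the carry-out
    low = low_l + low_r
    carry = low >> 32
    low &= MASK32
    # adc: high words plus the carry from the low add
    high = (high_l + high_r + carry) & MASK32
    return low, high
```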
-+ case kX87SubPair: {
-+ // i.OutputRegister(0) == i.InputRegister(0) ... left low word.
-+ // i.InputRegister(1) ... left high word.
-+ // i.InputRegister(2) ... right low word.
-+ // i.InputRegister(3) ... right high word.
-+ bool use_temp = false;
-+ if (i.OutputRegister(0).code() == i.InputRegister(1).code() ||
-+ i.OutputRegister(0).code() == i.InputRegister(3).code()) {
-+ // We cannot write to the output register directly, because it would
-+ // overwrite an input for sbb. We have to use the temp register.
-+ use_temp = true;
-+ __ Move(i.TempRegister(0), i.InputRegister(0));
-+ __ sub(i.TempRegister(0), i.InputRegister(2));
-+ } else {
-+ __ sub(i.OutputRegister(0), i.InputRegister(2));
-+ }
-+ if (i.OutputRegister(1).code() != i.InputRegister(1).code()) {
-+ __ Move(i.OutputRegister(1), i.InputRegister(1));
-+ }
-+ __ sbb(i.OutputRegister(1), Operand(i.InputRegister(3)));
-+ if (use_temp) {
-+ __ Move(i.OutputRegister(0), i.TempRegister(0));
-+ }
-+ break;
-+ }
-+ case kX87MulPair: {
-+ __ imul(i.OutputRegister(1), i.InputOperand(0));
-+ __ mov(i.TempRegister(0), i.InputOperand(1));
-+ __ imul(i.TempRegister(0), i.InputOperand(2));
-+ __ add(i.OutputRegister(1), i.TempRegister(0));
-+ __ mov(i.OutputRegister(0), i.InputOperand(0));
-+ // Multiply the low words; mul leaves the 64-bit product in edx:eax.
-+ __ mul(i.InputRegister(2));
-+ __ add(i.OutputRegister(1), i.TempRegister(0));
-+
-+ break;
-+ }
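kX87MulPair computes the low 64 bits of a 64-bit multiply from 32-bit halves: the two cross products only affect the high word, and the full low-word product supplies the low word plus a carry into the high word. Sketched in Python (helper name is mine):

```python
MASK32 = 0xFFFFFFFF

def mul_pair(low_l, high_l, low_r, high_r):
    # Cross products contribute only to the high word (mod 2**64).
    cross = (high_l * low_r + low_l * high_r) & MASK32
    # Full 32x32 -> 64 multiply of the low words (mul's edx:eax result).
    full = low_l * low_r
    low = full & MASK32
    high = (cross + (full >> 32)) & MASK32
    return low, high
```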
-+ case kX87ShlPair:
-+ if (HasImmediateInput(instr, 2)) {
-+ __ ShlPair(i.InputRegister(1), i.InputRegister(0), i.InputInt6(2));
-+ } else {
-+ // Shift has been loaded into CL by the register allocator.
-+ __ ShlPair_cl(i.InputRegister(1), i.InputRegister(0));
-+ }
-+ break;
-+ case kX87ShrPair:
-+ if (HasImmediateInput(instr, 2)) {
-+ __ ShrPair(i.InputRegister(1), i.InputRegister(0), i.InputInt6(2));
-+ } else {
-+ // Shift has been loaded into CL by the register allocator.
-+ __ ShrPair_cl(i.InputRegister(1), i.InputRegister(0));
-+ }
-+ break;
-+ case kX87SarPair:
-+ if (HasImmediateInput(instr, 2)) {
-+ __ SarPair(i.InputRegister(1), i.InputRegister(0), i.InputInt6(2));
-+ } else {
-+ // Shift has been loaded into CL by the register allocator.
-+ __ SarPair_cl(i.InputRegister(1), i.InputRegister(0));
-+ }
-+ break;
-+ case kX87Ror:
-+ if (HasImmediateInput(instr, 1)) {
-+ __ ror(i.OutputOperand(), i.InputInt5(1));
-+ } else {
-+ __ ror_cl(i.OutputOperand());
-+ }
-+ break;
-+ case kX87Lzcnt:
-+ __ Lzcnt(i.OutputRegister(), i.InputOperand(0));
-+ break;
-+ case kX87Popcnt:
-+ __ Popcnt(i.OutputRegister(), i.InputOperand(0));
-+ break;
-+ case kX87LoadFloat64Constant: {
-+ InstructionOperand* source = instr->InputAt(0);
-+ InstructionOperand* destination = instr->Output();
-+ DCHECK(source->IsConstant());
-+ X87OperandConverter g(this, nullptr);
-+ Constant src_constant = g.ToConstant(source);
-+
-+ DCHECK_EQ(Constant::kFloat64, src_constant.type());
-+ uint64_t src = src_constant.ToFloat64().AsUint64();
-+ uint32_t lower = static_cast<uint32_t>(src);
-+ uint32_t upper = static_cast<uint32_t>(src >> 32);
-+ if (destination->IsFPRegister()) {
-+ __ sub(esp, Immediate(kDoubleSize));
-+ __ mov(MemOperand(esp, 0), Immediate(lower));
-+ __ mov(MemOperand(esp, kInt32Size), Immediate(upper));
-+ __ fstp(0);
-+ __ fld_d(MemOperand(esp, 0));
-+ __ add(esp, Immediate(kDoubleSize));
-+ } else {
-+ UNREACHABLE();
-+ }
-+ break;
-+ }
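The constant-load case stores the double as two 32-bit immediates and reloads it with fld_d. The split is simply the little-endian halves of the IEEE-754 bit pattern, as this sketch shows:

```python
import struct

def double_words(v):
    # Little-endian float64 bit pattern, split as the code stores it:
    # lower word at [esp+0], upper word at [esp+kInt32Size].
    bits, = struct.unpack("<Q", struct.pack("<d", v))
    return bits & 0xFFFFFFFF, bits >> 32
```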
-+ case kX87Float32Cmp: {
-+ __ fld_s(MemOperand(esp, kFloatSize));
-+ __ fld_s(MemOperand(esp, 0));
-+ __ FCmp();
-+ __ lea(esp, Operand(esp, 2 * kFloatSize));
-+ break;
-+ }
-+ case kX87Float32Add: {
-+ if (FLAG_debug_code && FLAG_enable_slow_asserts) {
-+ __ VerifyX87StackDepth(1);
-+ }
-+ __ X87SetFPUCW(0x027F);
-+ __ fstp(0);
-+ __ fld_s(MemOperand(esp, 0));
-+ __ fld_s(MemOperand(esp, kFloatSize));
-+ __ faddp();
-+ // Clear stack.
-+ __ lea(esp, Operand(esp, 2 * kFloatSize));
-+ // Restore the default value of control word.
-+ __ X87SetFPUCW(0x037F);
-+ break;
-+ }
-+ case kX87Float32Sub: {
-+ if (FLAG_debug_code && FLAG_enable_slow_asserts) {
-+ __ VerifyX87StackDepth(1);
-+ }
-+ __ X87SetFPUCW(0x027F);
-+ __ fstp(0);
-+ __ fld_s(MemOperand(esp, kFloatSize));
-+ __ fld_s(MemOperand(esp, 0));
-+ __ fsubp();
-+ // Clear stack.
-+ __ lea(esp, Operand(esp, 2 * kFloatSize));
-+ // Restore the default value of control word.
-+ __ X87SetFPUCW(0x037F);
-+ break;
-+ }
-+ case kX87Float32Mul: {
-+ if (FLAG_debug_code && FLAG_enable_slow_asserts) {
-+ __ VerifyX87StackDepth(1);
-+ }
-+ __ X87SetFPUCW(0x027F);
-+ __ fstp(0);
-+ __ fld_s(MemOperand(esp, kFloatSize));
-+ __ fld_s(MemOperand(esp, 0));
-+ __ fmulp();
-+ // Clear stack.
-+ __ lea(esp, Operand(esp, 2 * kFloatSize));
-+ // Restore the default value of control word.
-+ __ X87SetFPUCW(0x037F);
-+ break;
-+ }
-+ case kX87Float32Div: {
-+ if (FLAG_debug_code && FLAG_enable_slow_asserts) {
-+ __ VerifyX87StackDepth(1);
-+ }
-+ __ X87SetFPUCW(0x027F);
-+ __ fstp(0);
-+ __ fld_s(MemOperand(esp, kFloatSize));
-+ __ fld_s(MemOperand(esp, 0));
-+ __ fdivp();
-+ // Clear stack.
-+ __ lea(esp, Operand(esp, 2 * kFloatSize));
-+ // Restore the default value of control word.
-+ __ X87SetFPUCW(0x037F);
-+ break;
-+ }
-+
-+ case kX87Float32Sqrt: {
-+ if (FLAG_debug_code && FLAG_enable_slow_asserts) {
-+ __ VerifyX87StackDepth(1);
-+ }
-+ __ fstp(0);
-+ __ fld_s(MemOperand(esp, 0));
-+ __ fsqrt();
-+ __ lea(esp, Operand(esp, kFloatSize));
-+ break;
-+ }
-+ case kX87Float32Abs: {
-+ if (FLAG_debug_code && FLAG_enable_slow_asserts) {
-+ __ VerifyX87StackDepth(1);
-+ }
-+ __ fstp(0);
-+ __ fld_s(MemOperand(esp, 0));
-+ __ fabs();
-+ __ lea(esp, Operand(esp, kFloatSize));
-+ break;
-+ }
-+ case kX87Float32Neg: {
-+ if (FLAG_debug_code && FLAG_enable_slow_asserts) {
-+ __ VerifyX87StackDepth(1);
-+ }
-+ __ fstp(0);
-+ __ fld_s(MemOperand(esp, 0));
-+ __ fchs();
-+ __ lea(esp, Operand(esp, kFloatSize));
-+ break;
-+ }
-+ case kX87Float32Round: {
-+ RoundingMode mode =
-+ static_cast<RoundingMode>(MiscField::decode(instr->opcode()));
-+ // Set the correct rounding mode in the x87 control register
-+ __ X87SetRC((mode << 10));
-+
-+ if (!instr->InputAt(0)->IsFPRegister()) {
-+ InstructionOperand* input = instr->InputAt(0);
-+ USE(input);
-+ DCHECK(input->IsFPStackSlot());
-+ if (FLAG_debug_code && FLAG_enable_slow_asserts) {
-+ __ VerifyX87StackDepth(1);
-+ }
-+ __ fstp(0);
-+ __ fld_s(i.InputOperand(0));
-+ }
-+ __ frndint();
-+ __ X87SetRC(0x0000);
-+ break;
-+ }
-+ case kX87Float64Add: {
-+ if (FLAG_debug_code && FLAG_enable_slow_asserts) {
-+ __ VerifyX87StackDepth(1);
-+ }
-+ __ X87SetFPUCW(0x027F);
-+ __ fstp(0);
-+ __ fld_d(MemOperand(esp, 0));
-+ __ fld_d(MemOperand(esp, kDoubleSize));
-+ __ faddp();
-+ // Clear stack.
-+ __ lea(esp, Operand(esp, 2 * kDoubleSize));
-+ // Restore the default value of control word.
-+ __ X87SetFPUCW(0x037F);
-+ break;
-+ }
-+ case kX87Float64Sub: {
-+ if (FLAG_debug_code && FLAG_enable_slow_asserts) {
-+ __ VerifyX87StackDepth(1);
-+ }
-+ __ X87SetFPUCW(0x027F);
-+ __ fstp(0);
-+ __ fld_d(MemOperand(esp, kDoubleSize));
-+ __ fsub_d(MemOperand(esp, 0));
-+ // Clear stack.
-+ __ lea(esp, Operand(esp, 2 * kDoubleSize));
-+ // Restore the default value of control word.
-+ __ X87SetFPUCW(0x037F);
-+ break;
-+ }
-+ case kX87Float64Mul: {
-+ if (FLAG_debug_code && FLAG_enable_slow_asserts) {
-+ __ VerifyX87StackDepth(1);
-+ }
-+ __ X87SetFPUCW(0x027F);
-+ __ fstp(0);
-+ __ fld_d(MemOperand(esp, kDoubleSize));
-+ __ fmul_d(MemOperand(esp, 0));
-+ // Clear stack.
-+ __ lea(esp, Operand(esp, 2 * kDoubleSize));
-+ // Restore the default value of control word.
-+ __ X87SetFPUCW(0x037F);
-+ break;
-+ }
-+ case kX87Float64Div: {
-+ if (FLAG_debug_code && FLAG_enable_slow_asserts) {
-+ __ VerifyX87StackDepth(1);
-+ }
-+ __ X87SetFPUCW(0x027F);
-+ __ fstp(0);
-+ __ fld_d(MemOperand(esp, kDoubleSize));
-+ __ fdiv_d(MemOperand(esp, 0));
-+ // Clear stack.
-+ __ lea(esp, Operand(esp, 2 * kDoubleSize));
-+ // Restore the default value of control word.
-+ __ X87SetFPUCW(0x037F);
-+ break;
-+ }
-+ case kX87Float64Mod: {
-+ FrameScope frame_scope(tasm(), StackFrame::MANUAL);
-+ if (FLAG_debug_code && FLAG_enable_slow_asserts) {
-+ __ VerifyX87StackDepth(1);
-+ }
-+ __ mov(eax, esp);
-+ __ PrepareCallCFunction(4, eax);
-+ __ fstp(0);
-+ __ fld_d(MemOperand(eax, 0));
-+ __ fstp_d(Operand(esp, 1 * kDoubleSize));
-+ __ fld_d(MemOperand(eax, kDoubleSize));
-+ __ fstp_d(Operand(esp, 0));
-+ __ CallCFunction(ExternalReference::mod_two_doubles_operation(isolate()),
-+ 4);
-+ __ lea(esp, Operand(esp, 2 * kDoubleSize));
-+ break;
-+ }
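kX87Float64Mod spills both operands and calls the runtime's two-double modulus, i.e. C fmod semantics, where the result takes the sign of the dividend. In Python that corresponds to math.fmod, not the flooring % operator:

```python
import math

def float64_mod(a, b):
    # C fmod truncates the quotient toward zero, so the result
    # carries the sign of the dividend a.
    return math.fmod(a, b)
```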
-+ case kX87Float32Max: {
-+ Label compare_swap, done_compare;
-+ if (FLAG_debug_code && FLAG_enable_slow_asserts) {
-+ __ VerifyX87StackDepth(1);
-+ }
-+ __ fstp(0);
-+ __ fld_s(MemOperand(esp, kFloatSize));
-+ __ fld_s(MemOperand(esp, 0));
-+ __ fld(1);
-+ __ fld(1);
-+ __ FCmp();
-+
-+ auto ool =
-+ new (zone()) OutOfLineLoadFloat32NaN(this, i.OutputDoubleRegister());
-+ __ j(parity_even, ool->entry());
-+ __ j(below, &done_compare, Label::kNear);
-+ __ j(above, &compare_swap, Label::kNear);
-+ __ push(eax);
-+ __ lea(esp, Operand(esp, -kFloatSize));
-+ __ fld(1);
-+ __ fstp_s(Operand(esp, 0));
-+ __ mov(eax, MemOperand(esp, 0));
-+ __ and_(eax, Immediate(0x80000000));
-+ __ lea(esp, Operand(esp, kFloatSize));
-+ __ pop(eax);
-+ __ j(zero, &done_compare, Label::kNear);
-+
-+ __ bind(&compare_swap);
-+ __ bind(ool->exit());
-+ __ fxch(1);
-+
-+ __ bind(&done_compare);
-+ __ fstp(0);
-+ __ lea(esp, Operand(esp, 2 * kFloatSize));
-+ break;
-+ }
-+ case kX87Float64Max: {
-+ Label compare_swap, done_compare;
-+ if (FLAG_debug_code && FLAG_enable_slow_asserts) {
-+ __ VerifyX87StackDepth(1);
-+ }
-+ __ fstp(0);
-+ __ fld_d(MemOperand(esp, kDoubleSize));
-+ __ fld_d(MemOperand(esp, 0));
-+ __ fld(1);
-+ __ fld(1);
-+ __ FCmp();
-+
-+ auto ool =
-+ new (zone()) OutOfLineLoadFloat64NaN(this, i.OutputDoubleRegister());
-+ __ j(parity_even, ool->entry());
-+ __ j(below, &done_compare, Label::kNear);
-+ __ j(above, &compare_swap, Label::kNear);
-+ __ push(eax);
-+ __ lea(esp, Operand(esp, -kDoubleSize));
-+ __ fld(1);
-+ __ fstp_d(Operand(esp, 0));
-+ __ mov(eax, MemOperand(esp, 4));
-+ __ and_(eax, Immediate(0x80000000));
-+ __ lea(esp, Operand(esp, kDoubleSize));
-+ __ pop(eax);
-+ __ j(zero, &done_compare, Label::kNear);
-+
-+ __ bind(&compare_swap);
-+ __ bind(ool->exit());
-+ __ fxch(1);
-+
-+ __ bind(&done_compare);
-+ __ fstp(0);
-+ __ lea(esp, Operand(esp, 2 * kDoubleSize));
-+ break;
-+ }
-+ case kX87Float32Min: {
-+ Label compare_swap, done_compare;
-+ if (FLAG_debug_code && FLAG_enable_slow_asserts) {
-+ __ VerifyX87StackDepth(1);
-+ }
-+ __ fstp(0);
-+ __ fld_s(MemOperand(esp, kFloatSize));
-+ __ fld_s(MemOperand(esp, 0));
-+ __ fld(1);
-+ __ fld(1);
-+ __ FCmp();
-+
-+ auto ool =
-+ new (zone()) OutOfLineLoadFloat32NaN(this, i.OutputDoubleRegister());
-+ __ j(parity_even, ool->entry());
-+ __ j(above, &done_compare, Label::kNear);
-+ __ j(below, &compare_swap, Label::kNear);
-+ __ push(eax);
-+ __ lea(esp, Operand(esp, -kFloatSize));
-+ __ fld(0);
-+ __ fstp_s(Operand(esp, 0));
-+ __ mov(eax, MemOperand(esp, 0));
-+ __ and_(eax, Immediate(0x80000000));
-+ __ lea(esp, Operand(esp, kFloatSize));
-+ __ pop(eax);
-+ __ j(zero, &done_compare, Label::kNear);
-+
-+ __ bind(&compare_swap);
-+ __ bind(ool->exit());
-+ __ fxch(1);
-+
-+ __ bind(&done_compare);
-+ __ fstp(0);
-+ __ lea(esp, Operand(esp, 2 * kFloatSize));
-+ break;
-+ }
-+ case kX87Float64Min: {
-+ Label compare_swap, done_compare;
-+ if (FLAG_debug_code && FLAG_enable_slow_asserts) {
-+ __ VerifyX87StackDepth(1);
-+ }
-+ __ fstp(0);
-+ __ fld_d(MemOperand(esp, kDoubleSize));
-+ __ fld_d(MemOperand(esp, 0));
-+ __ fld(1);
-+ __ fld(1);
-+ __ FCmp();
-+
-+ auto ool =
-+ new (zone()) OutOfLineLoadFloat64NaN(this, i.OutputDoubleRegister());
-+ __ j(parity_even, ool->entry());
-+ __ j(above, &done_compare, Label::kNear);
-+ __ j(below, &compare_swap, Label::kNear);
-+ __ push(eax);
-+ __ lea(esp, Operand(esp, -kDoubleSize));
-+ __ fld(0);
-+ __ fstp_d(Operand(esp, 0));
-+ __ mov(eax, MemOperand(esp, 4));
-+ __ and_(eax, Immediate(0x80000000));
-+ __ lea(esp, Operand(esp, kDoubleSize));
-+ __ pop(eax);
-+ __ j(zero, &done_compare, Label::kNear);
-+
-+ __ bind(&compare_swap);
-+ __ bind(ool->exit());
-+ __ fxch(1);
-+
-+ __ bind(&done_compare);
-+ __ fstp(0);
-+ __ lea(esp, Operand(esp, 2 * kDoubleSize));
-+ break;
-+ }
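The Min/Max cases branch three ways: unordered (NaN) jumps to an out-of-line NaN load, a strict ordering decides directly, and the equal case inspects a sign bit to tell -0.0 from +0.0. A behavioral sketch of the Min flow (my helper, not V8's):

```python
import math

def float64_min(a, b):
    # parity_even path: any NaN operand produces NaN
    if math.isnan(a) or math.isnan(b):
        return float("nan")
    if a < b:
        return a
    if b < a:
        return b
    # Equal compare: only +/-0.0 needs the sign-bit check;
    # min(-0.0, +0.0) must be -0.0.
    return a if math.copysign(1.0, a) < 0 else b
```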
-+ case kX87Float64Abs: {
-+ if (FLAG_debug_code && FLAG_enable_slow_asserts) {
-+ __ VerifyX87StackDepth(1);
-+ }
-+ __ fstp(0);
-+ __ fld_d(MemOperand(esp, 0));
-+ __ fabs();
-+ __ lea(esp, Operand(esp, kDoubleSize));
-+ break;
-+ }
-+ case kX87Float64Neg: {
-+ if (FLAG_debug_code && FLAG_enable_slow_asserts) {
-+ __ VerifyX87StackDepth(1);
-+ }
-+ __ fstp(0);
-+ __ fld_d(MemOperand(esp, 0));
-+ __ fchs();
-+ __ lea(esp, Operand(esp, kDoubleSize));
-+ break;
-+ }
-+ case kX87Int32ToFloat32: {
-+ InstructionOperand* input = instr->InputAt(0);
-+ DCHECK(input->IsRegister() || input->IsStackSlot());
-+ if (FLAG_debug_code && FLAG_enable_slow_asserts) {
-+ __ VerifyX87StackDepth(1);
-+ }
-+ __ fstp(0);
-+ if (input->IsRegister()) {
-+ Register input_reg = i.InputRegister(0);
-+ __ push(input_reg);
-+ __ fild_s(Operand(esp, 0));
-+ __ pop(input_reg);
-+ } else {
-+ __ fild_s(i.InputOperand(0));
-+ }
-+ break;
-+ }
-+ case kX87Uint32ToFloat32: {
-+ InstructionOperand* input = instr->InputAt(0);
-+ DCHECK(input->IsRegister() || input->IsStackSlot());
-+ if (FLAG_debug_code && FLAG_enable_slow_asserts) {
-+ __ VerifyX87StackDepth(1);
-+ }
-+ __ fstp(0);
-+ Label msb_set_src;
-+ Label jmp_return;
-+ // Put the input integer into eax (temporarily)
-+ __ push(eax);
-+ if (input->IsRegister())
-+ __ mov(eax, i.InputRegister(0));
-+ else
-+ __ mov(eax, i.InputOperand(0));
-+
-+ __ test(eax, eax);
-+ __ j(sign, &msb_set_src, Label::kNear);
-+ __ push(eax);
-+ __ fild_s(Operand(esp, 0));
-+ __ pop(eax);
-+
-+ __ jmp(&jmp_return, Label::kNear);
-+ __ bind(&msb_set_src);
-+ // Need another temp reg
-+ __ push(ebx);
-+ __ mov(ebx, eax);
-+ __ shr(eax, 1);
-+ // Recover the least significant bit to avoid rounding errors.
-+ __ and_(ebx, Immediate(1));
-+ __ or_(eax, ebx);
-+ __ push(eax);
-+ __ fild_s(Operand(esp, 0));
-+ __ pop(eax);
-+ __ fld(0);
-+ __ faddp();
-+ // Restore ebx
-+ __ pop(ebx);
-+ __ bind(&jmp_return);
-+ // Restore eax
-+ __ pop(eax);
-+ break;
-+ }
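fild_s only understands signed 32-bit integers, so for inputs with the MSB set the sequence above halves the value, ORs the dropped bit back in as a sticky bit (so the final float32 rounding still comes out right), converts, and doubles. The arithmetic, sketched in Python with f32 simulating the final float32 rounding:

```python
import struct

def f32(v):
    # Round a float to IEEE-754 binary32, round-to-nearest-even.
    return struct.unpack("<f", struct.pack("<f", v))[0]

def uint32_to_float32(x):
    if x < 1 << 31:
        return f32(float(x))          # fild_s handles it directly
    half = (x >> 1) | (x & 1)         # shr; keep the dropped bit as sticky
    return f32(float(half) * 2.0)     # fild_s ; fld(0) ; faddp
```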
-+ case kX87Int32ToFloat64: {
-+ InstructionOperand* input = instr->InputAt(0);
-+ DCHECK(input->IsRegister() || input->IsStackSlot());
-+ if (FLAG_debug_code && FLAG_enable_slow_asserts) {
-+ __ VerifyX87StackDepth(1);
-+ }
-+ __ fstp(0);
-+ if (input->IsRegister()) {
-+ Register input_reg = i.InputRegister(0);
-+ __ push(input_reg);
-+ __ fild_s(Operand(esp, 0));
-+ __ pop(input_reg);
-+ } else {
-+ __ fild_s(i.InputOperand(0));
-+ }
-+ break;
-+ }
-+ case kX87Float32ToFloat64: {
-+ InstructionOperand* input = instr->InputAt(0);
-+ if (input->IsFPRegister()) {
-+ __ sub(esp, Immediate(kDoubleSize));
-+ __ fstp_s(MemOperand(esp, 0));
-+ __ fld_s(MemOperand(esp, 0));
-+ __ add(esp, Immediate(kDoubleSize));
-+ } else {
-+ DCHECK(input->IsFPStackSlot());
-+ if (FLAG_debug_code && FLAG_enable_slow_asserts) {
-+ __ VerifyX87StackDepth(1);
-+ }
-+ __ fstp(0);
-+ __ fld_s(i.InputOperand(0));
-+ }
-+ break;
-+ }
-+ case kX87Uint32ToFloat64: {
-+ if (FLAG_debug_code && FLAG_enable_slow_asserts) {
-+ __ VerifyX87StackDepth(1);
-+ }
-+ __ fstp(0);
-+ __ LoadUint32NoSSE2(i.InputRegister(0));
-+ break;
-+ }
-+ case kX87Float32ToInt32: {
-+ if (!instr->InputAt(0)->IsFPRegister()) {
-+ __ fld_s(i.InputOperand(0));
-+ }
-+ __ TruncateX87TOSToI(zone(), i.OutputRegister(0));
-+ if (!instr->InputAt(0)->IsFPRegister()) {
-+ __ fstp(0);
-+ }
-+ break;
-+ }
-+ case kX87Float32ToUint32: {
-+ if (!instr->InputAt(0)->IsFPRegister()) {
-+ __ fld_s(i.InputOperand(0));
-+ }
-+ Label success;
-+ __ TruncateX87TOSToI(zone(), i.OutputRegister(0));
-+ __ test(i.OutputRegister(0), i.OutputRegister(0));
-+ __ j(positive, &success);
-+ // Need to preserve the input float32 data.
-+ __ fld(0);
-+ __ push(Immediate(INT32_MIN));
-+ __ fild_s(Operand(esp, 0));
-+ __ lea(esp, Operand(esp, kPointerSize));
-+ __ faddp();
-+ __ TruncateX87TOSToI(zone(), i.OutputRegister(0));
-+ __ or_(i.OutputRegister(0), Immediate(0x80000000));
-+ // Keep only the input float32 data on the x87 stack on return.
-+ __ fstp(0);
-+ __ bind(&success);
-+ if (!instr->InputAt(0)->IsFPRegister()) {
-+ __ fstp(0);
-+ }
-+ break;
-+ }
-+ case kX87Float64ToInt32: {
-+ if (!instr->InputAt(0)->IsFPRegister()) {
-+ __ fld_d(i.InputOperand(0));
-+ }
-+ __ TruncateX87TOSToI(zone(), i.OutputRegister(0));
-+ if (!instr->InputAt(0)->IsFPRegister()) {
-+ __ fstp(0);
-+ }
-+ break;
-+ }
-+ case kX87Float64ToFloat32: {
-+ InstructionOperand* input = instr->InputAt(0);
-+ if (input->IsFPRegister()) {
-+ __ sub(esp, Immediate(kDoubleSize));
-+ __ fstp_s(MemOperand(esp, 0));
-+ __ fld_s(MemOperand(esp, 0));
-+ __ add(esp, Immediate(kDoubleSize));
-+ } else {
-+ DCHECK(input->IsFPStackSlot());
-+ if (FLAG_debug_code && FLAG_enable_slow_asserts) {
-+ __ VerifyX87StackDepth(1);
-+ }
-+ __ fstp(0);
-+ __ fld_d(i.InputOperand(0));
-+ __ sub(esp, Immediate(kDoubleSize));
-+ __ fstp_s(MemOperand(esp, 0));
-+ __ fld_s(MemOperand(esp, 0));
-+ __ add(esp, Immediate(kDoubleSize));
-+ }
-+ break;
-+ }
-+ case kX87Float64ToUint32: {
-+ __ push_imm32(-2147483648);
-+ if (!instr->InputAt(0)->IsFPRegister()) {
-+ __ fld_d(i.InputOperand(0));
-+ }
-+ __ fild_s(Operand(esp, 0));
-+ __ fld(1);
-+ __ faddp();
-+ __ TruncateX87TOSToI(zone(), i.OutputRegister(0));
-+ __ add(esp, Immediate(kInt32Size));
-+ __ add(i.OutputRegister(), Immediate(0x80000000));
-+ __ fstp(0);
-+ if (!instr->InputAt(0)->IsFPRegister()) {
-+ __ fstp(0);
-+ }
-+ break;
-+ }
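The Float64ToUint32 case works around the signed-only fist truncation by biasing: it pushes -2^31, adds it to the input so the value lands in signed range, truncates, then adds 0x80000000 back in integer arithmetic. A sketch for integral inputs in [0, 2^32), the range this path assumes:

```python
def float64_to_uint32(x):
    # x is assumed integral and in [0, 2**32).
    biased = x + float(-(2 ** 31))   # fild_s of pushed INT32_MIN; faddp
    t = int(biased)                  # TruncateX87TOSToI (toward zero)
    return (t + 0x80000000) & 0xFFFFFFFF
```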
-+ case kX87Float64ExtractHighWord32: {
-+ if (instr->InputAt(0)->IsFPRegister()) {
-+ __ sub(esp, Immediate(kDoubleSize));
-+ __ fst_d(MemOperand(esp, 0));
-+ __ mov(i.OutputRegister(), MemOperand(esp, kDoubleSize / 2));
-+ __ add(esp, Immediate(kDoubleSize));
-+ } else {
-+ InstructionOperand* input = instr->InputAt(0);
-+ USE(input);
-+ DCHECK(input->IsFPStackSlot());
-+ __ mov(i.OutputRegister(), i.InputOperand(0, kDoubleSize / 2));
-+ }
-+ break;
-+ }
-+ case kX87Float64ExtractLowWord32: {
-+ if (instr->InputAt(0)->IsFPRegister()) {
-+ __ sub(esp, Immediate(kDoubleSize));
-+ __ fst_d(MemOperand(esp, 0));
-+ __ mov(i.OutputRegister(), MemOperand(esp, 0));
-+ __ add(esp, Immediate(kDoubleSize));
-+ } else {
-+ InstructionOperand* input = instr->InputAt(0);
-+ USE(input);
-+ DCHECK(input->IsFPStackSlot());
-+ __ mov(i.OutputRegister(), i.InputOperand(0));
-+ }
-+ break;
-+ }
-+ case kX87Float64InsertHighWord32: {
-+ __ sub(esp, Immediate(kDoubleSize));
-+ __ fstp_d(MemOperand(esp, 0));
-+ __ mov(MemOperand(esp, kDoubleSize / 2), i.InputRegister(1));
-+ __ fld_d(MemOperand(esp, 0));
-+ __ add(esp, Immediate(kDoubleSize));
-+ break;
-+ }
-+ case kX87Float64InsertLowWord32: {
-+ __ sub(esp, Immediate(kDoubleSize));
-+ __ fstp_d(MemOperand(esp, 0));
-+ __ mov(MemOperand(esp, 0), i.InputRegister(1));
-+ __ fld_d(MemOperand(esp, 0));
-+ __ add(esp, Immediate(kDoubleSize));
-+ break;
-+ }
-+ case kX87Float64Sqrt: {
-+ if (FLAG_debug_code && FLAG_enable_slow_asserts) {
-+ __ VerifyX87StackDepth(1);
-+ }
-+ __ X87SetFPUCW(0x027F);
-+ __ fstp(0);
-+ __ fld_d(MemOperand(esp, 0));
-+ __ fsqrt();
-+ __ lea(esp, Operand(esp, kDoubleSize));
-+ __ X87SetFPUCW(0x037F);
-+ break;
-+ }
-+ case kX87Float64Round: {
-+ RoundingMode mode =
-+ static_cast<RoundingMode>(MiscField::decode(instr->opcode()));
-+ // Set the correct rounding mode in the x87 control register
-+ __ X87SetRC((mode << 10));
-+
-+ if (!instr->InputAt(0)->IsFPRegister()) {
-+ InstructionOperand* input = instr->InputAt(0);
-+ USE(input);
-+ DCHECK(input->IsFPStackSlot());
-+ if (FLAG_debug_code && FLAG_enable_slow_asserts) {
-+ __ VerifyX87StackDepth(1);
-+ }
-+ __ fstp(0);
-+ __ fld_d(i.InputOperand(0));
-+ }
-+ __ frndint();
-+ __ X87SetRC(0x0000);
-+ break;
-+ }
-+ case kX87Float64Cmp: {
-+ __ fld_d(MemOperand(esp, kDoubleSize));
-+ __ fld_d(MemOperand(esp, 0));
-+ __ FCmp();
-+ __ lea(esp, Operand(esp, 2 * kDoubleSize));
-+ break;
-+ }
-+ case kX87Float64SilenceNaN: {
-+ Label end, return_qnan;
-+ __ fstp(0);
-+ __ push(ebx);
-+ // Load the high word of HoleNaN (SNaN) into ebx
-+ __ mov(ebx, MemOperand(esp, 2 * kInt32Size));
-+ __ cmp(ebx, Immediate(kHoleNanUpper32));
-+ // Check whether the input is HoleNaN (SNaN).
-+ __ j(equal, &return_qnan, Label::kNear);
-+ // If the input isn't HoleNaN (SNaN), just load it and return.
-+ __ fld_d(MemOperand(esp, 1 * kInt32Size));
-+ __ jmp(&end);
-+ __ bind(&return_qnan);
-+ // If the input is HoleNaN (SNaN), return QNaN.
-+ __ push(Immediate(0xffffffff));
-+ __ push(Immediate(0xfff7ffff));
-+ __ fld_d(MemOperand(esp, 0));
-+ __ lea(esp, Operand(esp, kDoubleSize));
-+ __ bind(&end);
-+ __ pop(ebx);
-+ // Clear stack.
-+ __ lea(esp, Operand(esp, 1 * kDoubleSize));
-+ break;
-+ }
-+ case kX87Movsxbl:
-+ __ movsx_b(i.OutputRegister(), i.MemoryOperand());
-+ break;
-+ case kX87Movzxbl:
-+ __ movzx_b(i.OutputRegister(), i.MemoryOperand());
-+ break;
-+ case kX87Movb: {
-+ size_t index = 0;
-+ Operand operand = i.MemoryOperand(&index);
-+ if (HasImmediateInput(instr, index)) {
-+ __ mov_b(operand, i.InputInt8(index));
-+ } else {
-+ __ mov_b(operand, i.InputRegister(index));
-+ }
-+ break;
-+ }
-+ case kX87Movsxwl:
-+ __ movsx_w(i.OutputRegister(), i.MemoryOperand());
-+ break;
-+ case kX87Movzxwl:
-+ __ movzx_w(i.OutputRegister(), i.MemoryOperand());
-+ break;
-+ case kX87Movw: {
-+ size_t index = 0;
-+ Operand operand = i.MemoryOperand(&index);
-+ if (HasImmediateInput(instr, index)) {
-+ __ mov_w(operand, i.InputInt16(index));
-+ } else {
-+ __ mov_w(operand, i.InputRegister(index));
-+ }
-+ break;
-+ }
-+ case kX87Movl:
-+ if (instr->HasOutput()) {
-+ __ mov(i.OutputRegister(), i.MemoryOperand());
-+ } else {
-+ size_t index = 0;
-+ Operand operand = i.MemoryOperand(&index);
-+ if (HasImmediateInput(instr, index)) {
-+ __ mov(operand, i.InputImmediate(index));
-+ } else {
-+ __ mov(operand, i.InputRegister(index));
-+ }
-+ }
-+ break;
-+ case kX87Movsd: {
-+ if (instr->HasOutput()) {
-+ X87Register output = i.OutputDoubleRegister();
-+ USE(output);
-+ DCHECK(output.code() == 0);
-+ if (FLAG_debug_code && FLAG_enable_slow_asserts) {
-+ __ VerifyX87StackDepth(1);
-+ }
-+ __ fstp(0);
-+ __ fld_d(i.MemoryOperand());
-+ } else {
-+ size_t index = 0;
-+ Operand operand = i.MemoryOperand(&index);
-+ __ fst_d(operand);
-+ }
-+ break;
-+ }
-+ case kX87Movss: {
-+ if (instr->HasOutput()) {
-+ X87Register output = i.OutputDoubleRegister();
-+ USE(output);
-+ DCHECK(output.code() == 0);
-+ if (FLAG_debug_code && FLAG_enable_slow_asserts) {
-+ __ VerifyX87StackDepth(1);
-+ }
-+ __ fstp(0);
-+ __ fld_s(i.MemoryOperand());
-+ } else {
-+ size_t index = 0;
-+ Operand operand = i.MemoryOperand(&index);
-+ __ fst_s(operand);
-+ }
-+ break;
-+ }
-+ case kX87BitcastFI: {
-+ __ mov(i.OutputRegister(), MemOperand(esp, 0));
-+ __ lea(esp, Operand(esp, kFloatSize));
-+ break;
-+ }
-+ case kX87BitcastIF: {
-+ if (FLAG_debug_code && FLAG_enable_slow_asserts) {
-+ __ VerifyX87StackDepth(1);
-+ }
-+ __ fstp(0);
-+ if (instr->InputAt(0)->IsRegister()) {
-+ __ lea(esp, Operand(esp, -kFloatSize));
-+ __ mov(MemOperand(esp, 0), i.InputRegister(0));
-+ __ fld_s(MemOperand(esp, 0));
-+ __ lea(esp, Operand(esp, kFloatSize));
-+ } else {
-+ __ fld_s(i.InputOperand(0));
-+ }
-+ break;
-+ }
-+ case kX87Lea: {
-+ AddressingMode mode = AddressingModeField::decode(instr->opcode());
-+      // Shorten "leal" to "addl", "subl" or "shll" if the register allocation
-+      // and addressing mode just happens to work out. The "addl"/"subl" forms
-+      // in these cases are faster based on measurements.
-+ if (mode == kMode_MI) {
-+ __ Move(i.OutputRegister(), Immediate(i.InputInt32(0)));
-+ } else if (i.InputRegister(0).is(i.OutputRegister())) {
-+ if (mode == kMode_MRI) {
-+ int32_t constant_summand = i.InputInt32(1);
-+ if (constant_summand > 0) {
-+ __ add(i.OutputRegister(), Immediate(constant_summand));
-+ } else if (constant_summand < 0) {
-+ __ sub(i.OutputRegister(), Immediate(-constant_summand));
-+ }
-+ } else if (mode == kMode_MR1) {
-+ if (i.InputRegister(1).is(i.OutputRegister())) {
-+ __ shl(i.OutputRegister(), 1);
-+ } else {
-+ __ add(i.OutputRegister(), i.InputRegister(1));
-+ }
-+ } else if (mode == kMode_M2) {
-+ __ shl(i.OutputRegister(), 1);
-+ } else if (mode == kMode_M4) {
-+ __ shl(i.OutputRegister(), 2);
-+ } else if (mode == kMode_M8) {
-+ __ shl(i.OutputRegister(), 3);
-+ } else {
-+ __ lea(i.OutputRegister(), i.MemoryOperand());
-+ }
-+ } else if (mode == kMode_MR1 &&
-+ i.InputRegister(1).is(i.OutputRegister())) {
-+ __ add(i.OutputRegister(), i.InputRegister(0));
-+ } else {
-+ __ lea(i.OutputRegister(), i.MemoryOperand());
-+ }
-+ break;
-+ }
-+ case kX87Push:
-+ if (instr->InputAt(0)->IsFPRegister()) {
-+ auto allocated = AllocatedOperand::cast(*instr->InputAt(0));
-+ if (allocated.representation() == MachineRepresentation::kFloat32) {
-+ __ sub(esp, Immediate(kFloatSize));
-+ __ fst_s(Operand(esp, 0));
-+ frame_access_state()->IncreaseSPDelta(kFloatSize / kPointerSize);
-+ } else {
-+ DCHECK(allocated.representation() == MachineRepresentation::kFloat64);
-+ __ sub(esp, Immediate(kDoubleSize));
-+ __ fst_d(Operand(esp, 0));
-+ frame_access_state()->IncreaseSPDelta(kDoubleSize / kPointerSize);
-+ }
-+ } else if (instr->InputAt(0)->IsFPStackSlot()) {
-+ auto allocated = AllocatedOperand::cast(*instr->InputAt(0));
-+ if (allocated.representation() == MachineRepresentation::kFloat32) {
-+ __ sub(esp, Immediate(kFloatSize));
-+ __ fld_s(i.InputOperand(0));
-+ __ fstp_s(MemOperand(esp, 0));
-+ frame_access_state()->IncreaseSPDelta(kFloatSize / kPointerSize);
-+ } else {
-+ DCHECK(allocated.representation() == MachineRepresentation::kFloat64);
-+ __ sub(esp, Immediate(kDoubleSize));
-+ __ fld_d(i.InputOperand(0));
-+ __ fstp_d(MemOperand(esp, 0));
-+ frame_access_state()->IncreaseSPDelta(kDoubleSize / kPointerSize);
-+ }
-+ } else if (HasImmediateInput(instr, 0)) {
-+ __ push(i.InputImmediate(0));
-+ frame_access_state()->IncreaseSPDelta(1);
-+ } else {
-+ __ push(i.InputOperand(0));
-+ frame_access_state()->IncreaseSPDelta(1);
-+ }
-+ break;
-+ case kX87Poke: {
-+ int const slot = MiscField::decode(instr->opcode());
-+ if (HasImmediateInput(instr, 0)) {
-+ __ mov(Operand(esp, slot * kPointerSize), i.InputImmediate(0));
-+ } else {
-+ __ mov(Operand(esp, slot * kPointerSize), i.InputRegister(0));
-+ }
-+ break;
-+ }
-+ case kX87PushFloat32:
-+ __ lea(esp, Operand(esp, -kFloatSize));
-+ if (instr->InputAt(0)->IsFPStackSlot()) {
-+ __ fld_s(i.InputOperand(0));
-+ __ fstp_s(MemOperand(esp, 0));
-+ } else if (instr->InputAt(0)->IsFPRegister()) {
-+ __ fst_s(MemOperand(esp, 0));
-+ } else {
-+ UNREACHABLE();
-+ }
-+ break;
-+ case kX87PushFloat64:
-+ __ lea(esp, Operand(esp, -kDoubleSize));
-+ if (instr->InputAt(0)->IsFPStackSlot()) {
-+ __ fld_d(i.InputOperand(0));
-+ __ fstp_d(MemOperand(esp, 0));
-+ } else if (instr->InputAt(0)->IsFPRegister()) {
-+ __ fst_d(MemOperand(esp, 0));
-+ } else {
-+ UNREACHABLE();
-+ }
-+ break;
-+ case kCheckedLoadInt8:
-+ ASSEMBLE_CHECKED_LOAD_INTEGER(movsx_b);
-+ break;
-+ case kCheckedLoadUint8:
-+ ASSEMBLE_CHECKED_LOAD_INTEGER(movzx_b);
-+ break;
-+ case kCheckedLoadInt16:
-+ ASSEMBLE_CHECKED_LOAD_INTEGER(movsx_w);
-+ break;
-+ case kCheckedLoadUint16:
-+ ASSEMBLE_CHECKED_LOAD_INTEGER(movzx_w);
-+ break;
-+ case kCheckedLoadWord32:
-+ ASSEMBLE_CHECKED_LOAD_INTEGER(mov);
-+ break;
-+ case kCheckedLoadFloat32:
-+ ASSEMBLE_CHECKED_LOAD_FLOAT(fld_s, OutOfLineLoadFloat32NaN);
-+ break;
-+ case kCheckedLoadFloat64:
-+ ASSEMBLE_CHECKED_LOAD_FLOAT(fld_d, OutOfLineLoadFloat64NaN);
-+ break;
-+ case kCheckedStoreWord8:
-+ ASSEMBLE_CHECKED_STORE_INTEGER(mov_b);
-+ break;
-+ case kCheckedStoreWord16:
-+ ASSEMBLE_CHECKED_STORE_INTEGER(mov_w);
-+ break;
-+ case kCheckedStoreWord32:
-+ ASSEMBLE_CHECKED_STORE_INTEGER(mov);
-+ break;
-+ case kCheckedStoreFloat32:
-+ ASSEMBLE_CHECKED_STORE_FLOAT(fst_s);
-+ break;
-+ case kCheckedStoreFloat64:
-+ ASSEMBLE_CHECKED_STORE_FLOAT(fst_d);
-+ break;
-+ case kX87StackCheck: {
-+ ExternalReference const stack_limit =
-+ ExternalReference::address_of_stack_limit(isolate());
-+ __ cmp(esp, Operand::StaticVariable(stack_limit));
-+ break;
-+ }
-+ case kCheckedLoadWord64:
-+ case kCheckedStoreWord64:
-+ UNREACHABLE(); // currently unsupported checked int64 load/store.
-+ break;
-+ case kAtomicExchangeInt8: {
-+ __ xchg_b(i.InputRegister(0), i.MemoryOperand(1));
-+ __ movsx_b(i.InputRegister(0), i.InputRegister(0));
-+ break;
-+ }
-+ case kAtomicExchangeUint8: {
-+ __ xchg_b(i.InputRegister(0), i.MemoryOperand(1));
-+ __ movzx_b(i.InputRegister(0), i.InputRegister(0));
-+ break;
-+ }
-+ case kAtomicExchangeInt16: {
-+ __ xchg_w(i.InputRegister(0), i.MemoryOperand(1));
-+ __ movsx_w(i.InputRegister(0), i.InputRegister(0));
-+ break;
-+ }
-+ case kAtomicExchangeUint16: {
-+ __ xchg_w(i.InputRegister(0), i.MemoryOperand(1));
-+ __ movzx_w(i.InputRegister(0), i.InputRegister(0));
-+ break;
-+ }
-+ case kAtomicExchangeWord32: {
-+ __ xchg(i.InputRegister(0), i.MemoryOperand(1));
-+ break;
-+ }
-+ case kAtomicCompareExchangeInt8: {
-+ __ lock();
-+ __ cmpxchg_b(i.MemoryOperand(2), i.InputRegister(1));
-+ __ movsx_b(eax, eax);
-+ break;
-+ }
-+ case kAtomicCompareExchangeUint8: {
-+ __ lock();
-+ __ cmpxchg_b(i.MemoryOperand(2), i.InputRegister(1));
-+ __ movzx_b(eax, eax);
-+ break;
-+ }
-+ case kAtomicCompareExchangeInt16: {
-+ __ lock();
-+ __ cmpxchg_w(i.MemoryOperand(2), i.InputRegister(1));
-+ __ movsx_w(eax, eax);
-+ break;
-+ }
-+ case kAtomicCompareExchangeUint16: {
-+ __ lock();
-+ __ cmpxchg_w(i.MemoryOperand(2), i.InputRegister(1));
-+ __ movzx_w(eax, eax);
-+ break;
-+ }
-+ case kAtomicCompareExchangeWord32: {
-+ __ lock();
-+ __ cmpxchg(i.MemoryOperand(2), i.InputRegister(1));
-+ break;
-+ }
-+#define ATOMIC_BINOP_CASE(op, inst) \
-+ case kAtomic##op##Int8: { \
-+ ASSEMBLE_ATOMIC_BINOP(inst, mov_b, cmpxchg_b); \
-+ __ movsx_b(eax, eax); \
-+ break; \
-+ } \
-+ case kAtomic##op##Uint8: { \
-+ ASSEMBLE_ATOMIC_BINOP(inst, mov_b, cmpxchg_b); \
-+ __ movzx_b(eax, eax); \
-+ break; \
-+ } \
-+ case kAtomic##op##Int16: { \
-+ ASSEMBLE_ATOMIC_BINOP(inst, mov_w, cmpxchg_w); \
-+ __ movsx_w(eax, eax); \
-+ break; \
-+ } \
-+ case kAtomic##op##Uint16: { \
-+ ASSEMBLE_ATOMIC_BINOP(inst, mov_w, cmpxchg_w); \
-+ __ movzx_w(eax, eax); \
-+ break; \
-+ } \
-+ case kAtomic##op##Word32: { \
-+ ASSEMBLE_ATOMIC_BINOP(inst, mov, cmpxchg); \
-+ break; \
-+ }
-+ ATOMIC_BINOP_CASE(Add, add)
-+ ATOMIC_BINOP_CASE(Sub, sub)
-+ ATOMIC_BINOP_CASE(And, and_)
-+ ATOMIC_BINOP_CASE(Or, or_)
-+ ATOMIC_BINOP_CASE(Xor, xor_)
-+#undef ATOMIC_BINOP_CASE
-+ case kAtomicLoadInt8:
-+ case kAtomicLoadUint8:
-+ case kAtomicLoadInt16:
-+ case kAtomicLoadUint16:
-+ case kAtomicLoadWord32:
-+ case kAtomicStoreWord8:
-+ case kAtomicStoreWord16:
-+ case kAtomicStoreWord32:
-+ UNREACHABLE(); // Won't be generated by instruction selector.
-+ break;
-+ }
-+ return kSuccess;
-+} // NOLINT(readability/fn_size)
-+
-+static Condition FlagsConditionToCondition(FlagsCondition condition) {
-+ switch (condition) {
-+ case kUnorderedEqual:
-+ case kEqual:
-+ return equal;
-+ break;
-+ case kUnorderedNotEqual:
-+ case kNotEqual:
-+ return not_equal;
-+ break;
-+ case kSignedLessThan:
-+ return less;
-+ break;
-+ case kSignedGreaterThanOrEqual:
-+ return greater_equal;
-+ break;
-+ case kSignedLessThanOrEqual:
-+ return less_equal;
-+ break;
-+ case kSignedGreaterThan:
-+ return greater;
-+ break;
-+ case kUnsignedLessThan:
-+ return below;
-+ break;
-+ case kUnsignedGreaterThanOrEqual:
-+ return above_equal;
-+ break;
-+ case kUnsignedLessThanOrEqual:
-+ return below_equal;
-+ break;
-+ case kUnsignedGreaterThan:
-+ return above;
-+ break;
-+ case kOverflow:
-+ return overflow;
-+ break;
-+ case kNotOverflow:
-+ return no_overflow;
-+ break;
-+ default:
-+ UNREACHABLE();
-+ break;
-+ }
-+}
-+
-+// Assembles a branch after an instruction.
-+void CodeGenerator::AssembleArchBranch(Instruction* instr, BranchInfo* branch) {
-+ Label::Distance flabel_distance =
-+ branch->fallthru ? Label::kNear : Label::kFar;
-+
-+ Label done;
-+ Label tlabel_tmp;
-+ Label flabel_tmp;
-+ Label* tlabel = &tlabel_tmp;
-+ Label* flabel = &flabel_tmp;
-+
-+ Label* tlabel_dst = branch->true_label;
-+ Label* flabel_dst = branch->false_label;
-+
-+ if (branch->condition == kUnorderedEqual) {
-+ __ j(parity_even, flabel, flabel_distance);
-+ } else if (branch->condition == kUnorderedNotEqual) {
-+ __ j(parity_even, tlabel);
-+ }
-+ __ j(FlagsConditionToCondition(branch->condition), tlabel);
-+
-+ // Add a jump if not falling through to the next block.
-+ if (!branch->fallthru) __ jmp(flabel);
-+
-+ __ jmp(&done);
-+ __ bind(&tlabel_tmp);
-+ FlagsMode mode = FlagsModeField::decode(instr->opcode());
-+ if (mode == kFlags_deoptimize) {
-+ int double_register_param_count = 0;
-+ int x87_layout = 0;
-+ for (size_t i = 0; i < instr->InputCount(); i++) {
-+ if (instr->InputAt(i)->IsFPRegister()) {
-+ double_register_param_count++;
-+ }
-+ }
-+ // Currently we use only one X87 register. If double_register_param_count
-+ // is bigger than 1, it means duplicated double register is added to input
-+ // of this instruction.
-+ if (double_register_param_count > 0) {
-+ x87_layout = (0 << 3) | 1;
-+ }
-+ // The layout of x87 register stack is loaded on the top of FPU register
-+ // stack for deoptimization.
-+ __ push(Immediate(x87_layout));
-+ __ fild_s(MemOperand(esp, 0));
-+ __ lea(esp, Operand(esp, kPointerSize));
-+ }
-+ __ jmp(tlabel_dst);
-+ __ bind(&flabel_tmp);
-+ __ jmp(flabel_dst);
-+ __ bind(&done);
-+}
-+
-+
-+void CodeGenerator::AssembleArchJump(RpoNumber target) {
-+ if (!IsNextInAssemblyOrder(target)) __ jmp(GetLabel(target));
-+}
-+
-+void CodeGenerator::AssembleArchTrap(Instruction* instr,
-+ FlagsCondition condition) {
-+ class OutOfLineTrap final : public OutOfLineCode {
-+ public:
-+ OutOfLineTrap(CodeGenerator* gen, bool frame_elided, Instruction* instr)
-+ : OutOfLineCode(gen),
-+ frame_elided_(frame_elided),
-+ instr_(instr),
-+ gen_(gen) {}
-+
-+ void Generate() final {
-+ X87OperandConverter i(gen_, instr_);
-+
-+ Builtins::Name trap_id =
-+ static_cast<Builtins::Name>(i.InputInt32(instr_->InputCount() - 1));
-+ bool old_has_frame = __ has_frame();
-+ if (frame_elided_) {
-+ __ set_has_frame(true);
-+ __ EnterFrame(StackFrame::WASM_COMPILED);
-+ }
-+ GenerateCallToTrap(trap_id);
-+ if (frame_elided_) {
-+ __ set_has_frame(old_has_frame);
-+ }
-+ }
-+
-+ private:
-+ void GenerateCallToTrap(Builtins::Name trap_id) {
-+ if (trap_id == Builtins::builtin_count) {
-+ // We cannot test calls to the runtime in cctest/test-run-wasm.
-+ // Therefore we emit a call to C here instead of a call to the runtime.
-+ __ PrepareCallCFunction(0, esi);
-+ __ CallCFunction(ExternalReference::wasm_call_trap_callback_for_testing(
-+ __ isolate()),
-+ 0);
-+ __ LeaveFrame(StackFrame::WASM_COMPILED);
-+ __ Ret();
-+ } else {
-+ gen_->AssembleSourcePosition(instr_);
-+ __ Call(__ isolate()->builtins()->builtin_handle(trap_id),
-+ RelocInfo::CODE_TARGET);
-+ ReferenceMap* reference_map =
-+ new (gen_->zone()) ReferenceMap(gen_->zone());
-+ gen_->RecordSafepoint(reference_map, Safepoint::kSimple, 0,
-+ Safepoint::kNoLazyDeopt);
-+ __ AssertUnreachable(kUnexpectedReturnFromWasmTrap);
-+ }
-+ }
-+
-+ bool frame_elided_;
-+ Instruction* instr_;
-+ CodeGenerator* gen_;
-+ };
-+ bool frame_elided = !frame_access_state()->has_frame();
-+ auto ool = new (zone()) OutOfLineTrap(this, frame_elided, instr);
-+ Label* tlabel = ool->entry();
-+ Label end;
-+ if (condition == kUnorderedEqual) {
-+ __ j(parity_even, &end);
-+ } else if (condition == kUnorderedNotEqual) {
-+ __ j(parity_even, tlabel);
-+ }
-+ __ j(FlagsConditionToCondition(condition), tlabel);
-+ __ bind(&end);
-+}
-+
-+// Assembles boolean materializations after an instruction.
-+void CodeGenerator::AssembleArchBoolean(Instruction* instr,
-+ FlagsCondition condition) {
-+ X87OperandConverter i(this, instr);
-+ Label done;
-+
-+ // Materialize a full 32-bit 1 or 0 value. The result register is always the
-+ // last output of the instruction.
-+ Label check;
-+ DCHECK_NE(0u, instr->OutputCount());
-+ Register reg = i.OutputRegister(instr->OutputCount() - 1);
-+ if (condition == kUnorderedEqual) {
-+ __ j(parity_odd, &check, Label::kNear);
-+ __ Move(reg, Immediate(0));
-+ __ jmp(&done, Label::kNear);
-+ } else if (condition == kUnorderedNotEqual) {
-+ __ j(parity_odd, &check, Label::kNear);
-+ __ mov(reg, Immediate(1));
-+ __ jmp(&done, Label::kNear);
-+ }
-+ Condition cc = FlagsConditionToCondition(condition);
-+
-+ __ bind(&check);
-+ if (reg.is_byte_register()) {
-+ // setcc for byte registers (al, bl, cl, dl).
-+ __ setcc(cc, reg);
-+ __ movzx_b(reg, reg);
-+ } else {
-+ // Emit a branch to set a register to either 1 or 0.
-+ Label set;
-+ __ j(cc, &set, Label::kNear);
-+ __ Move(reg, Immediate(0));
-+ __ jmp(&done, Label::kNear);
-+ __ bind(&set);
-+ __ mov(reg, Immediate(1));
-+ }
-+ __ bind(&done);
-+}
-+
-+
-+void CodeGenerator::AssembleArchLookupSwitch(Instruction* instr) {
-+ X87OperandConverter i(this, instr);
-+ Register input = i.InputRegister(0);
-+ for (size_t index = 2; index < instr->InputCount(); index += 2) {
-+ __ cmp(input, Immediate(i.InputInt32(index + 0)));
-+ __ j(equal, GetLabel(i.InputRpo(index + 1)));
-+ }
-+ AssembleArchJump(i.InputRpo(1));
-+}
-+
-+
-+void CodeGenerator::AssembleArchTableSwitch(Instruction* instr) {
-+ X87OperandConverter i(this, instr);
-+ Register input = i.InputRegister(0);
-+ size_t const case_count = instr->InputCount() - 2;
-+ Label** cases = zone()->NewArray<Label*>(case_count);
-+ for (size_t index = 0; index < case_count; ++index) {
-+ cases[index] = GetLabel(i.InputRpo(index + 2));
-+ }
-+ Label* const table = AddJumpTable(cases, case_count);
-+ __ cmp(input, Immediate(case_count));
-+ __ j(above_equal, GetLabel(i.InputRpo(1)));
-+ __ jmp(Operand::JumpTable(input, times_4, table));
-+}
-+
-+CodeGenerator::CodeGenResult CodeGenerator::AssembleDeoptimizerCall(
-+ int deoptimization_id, SourcePosition pos) {
-+ DeoptimizeKind deoptimization_kind = GetDeoptimizationKind(deoptimization_id);
-+ DeoptimizeReason deoptimization_reason =
-+ GetDeoptimizationReason(deoptimization_id);
-+ Deoptimizer::BailoutType bailout_type =
-+ deoptimization_kind == DeoptimizeKind::kSoft ? Deoptimizer::SOFT
-+ : Deoptimizer::EAGER;
-+ Address deopt_entry = Deoptimizer::GetDeoptimizationEntry(
-+ isolate(), deoptimization_id, bailout_type);
-+ if (deopt_entry == nullptr) return kTooManyDeoptimizationBailouts;
-+ __ RecordDeoptReason(deoptimization_reason, pos, deoptimization_id);
-+ __ call(deopt_entry, RelocInfo::RUNTIME_ENTRY);
-+ return kSuccess;
-+}
-+
-+
-+// The calling convention for JSFunctions on X87 passes arguments on the
-+// stack and the JSFunction and context in EDI and ESI, respectively, thus
-+// the steps of the call look as follows:
-+
-+// --{ before the call instruction }--------------------------------------------
-+// | caller frame |
-+// ^ esp ^ ebp
-+
-+// --{ push arguments and setup ESI, EDI }--------------------------------------
-+// | args + receiver | caller frame |
-+// ^ esp ^ ebp
-+// [edi = JSFunction, esi = context]
-+
-+// --{ call [edi + kCodeEntryOffset] }------------------------------------------
-+// | RET | args + receiver | caller frame |
-+// ^ esp ^ ebp
-+
-+// =={ prologue of called function }============================================
-+// --{ push ebp }---------------------------------------------------------------
-+// | FP | RET | args + receiver | caller frame |
-+// ^ esp ^ ebp
-+
-+// --{ mov ebp, esp }-----------------------------------------------------------
-+// | FP | RET | args + receiver | caller frame |
-+// ^ ebp,esp
-+
-+// --{ push esi }---------------------------------------------------------------
-+// | CTX | FP | RET | args + receiver | caller frame |
-+// ^esp ^ ebp
-+
-+// --{ push edi }---------------------------------------------------------------
-+// | FNC | CTX | FP | RET | args + receiver | caller frame |
-+// ^esp ^ ebp
-+
-+// --{ subi esp, #N }-----------------------------------------------------------
-+// | callee frame | FNC | CTX | FP | RET | args + receiver | caller frame |
-+// ^esp ^ ebp
-+
-+// =={ body of called function }================================================
-+
-+// =={ epilogue of called function }============================================
-+// --{ mov esp, ebp }-----------------------------------------------------------
-+// | FP | RET | args + receiver | caller frame |
-+// ^ esp,ebp
-+
-+// --{ pop ebp }-----------------------------------------------------------
-+// | | RET | args + receiver | caller frame |
-+// ^ esp ^ ebp
-+
-+// --{ ret #A+1 }-----------------------------------------------------------
-+// | | caller frame |
-+// ^ esp ^ ebp
-+
-+
-+// Runtime function calls are accomplished by doing a stub call to the
-+// CEntryStub (a real code object). On X87 passes arguments on the
-+// stack, the number of arguments in EAX, the address of the runtime function
-+// in EBX, and the context in ESI.
-+
-+// --{ before the call instruction }--------------------------------------------
-+// | caller frame |
-+// ^ esp ^ ebp
-+
-+// --{ push arguments and setup EAX, EBX, and ESI }-----------------------------
-+// | args + receiver | caller frame |
-+// ^ esp ^ ebp
-+// [eax = #args, ebx = runtime function, esi = context]
-+
-+// --{ call #CEntryStub }-------------------------------------------------------
-+// | RET | args + receiver | caller frame |
-+// ^ esp ^ ebp
-+
-+// =={ body of runtime function }===============================================
-+
-+// --{ runtime returns }--------------------------------------------------------
-+// | caller frame |
-+// ^ esp ^ ebp
-+
-+// Other custom linkages (e.g. for calling directly into and out of C++) may
-+// need to save callee-saved registers on the stack, which is done in the
-+// function prologue of generated code.
-+
-+// --{ before the call instruction }--------------------------------------------
-+// | caller frame |
-+// ^ esp ^ ebp
-+
-+// --{ set up arguments in registers on stack }---------------------------------
-+// | args | caller frame |
-+// ^ esp ^ ebp
-+// [r0 = arg0, r1 = arg1, ...]
-+
-+// --{ call code }--------------------------------------------------------------
-+// | RET | args | caller frame |
-+// ^ esp ^ ebp
-+
-+// =={ prologue of called function }============================================
-+// --{ push ebp }---------------------------------------------------------------
-+// | FP | RET | args | caller frame |
-+// ^ esp ^ ebp
-+
-+// --{ mov ebp, esp }-----------------------------------------------------------
-+// | FP | RET | args | caller frame |
-+// ^ ebp,esp
-+
-+// --{ save registers }---------------------------------------------------------
-+// | regs | FP | RET | args | caller frame |
-+// ^ esp ^ ebp
-+
-+// --{ subi esp, #N }-----------------------------------------------------------
-+// | callee frame | regs | FP | RET | args | caller frame |
-+// ^esp ^ ebp
-+
-+// =={ body of called function }================================================
-+
-+// =={ epilogue of called function }============================================
-+// --{ restore registers }------------------------------------------------------
-+// | regs | FP | RET | args | caller frame |
-+// ^ esp ^ ebp
-+
-+// --{ mov esp, ebp }-----------------------------------------------------------
-+// | FP | RET | args | caller frame |
-+// ^ esp,ebp
-+
-+// --{ pop ebp }----------------------------------------------------------------
-+// | RET | args | caller frame |
-+// ^ esp ^ ebp
-+
-+void CodeGenerator::FinishFrame(Frame* frame) {
-+ CallDescriptor* descriptor = linkage()->GetIncomingDescriptor();
-+ const RegList saves = descriptor->CalleeSavedRegisters();
-+ if (saves != 0) { // Save callee-saved registers.
-+ DCHECK(!info()->is_osr());
-+ int pushed = 0;
-+ for (int i = Register::kNumRegisters - 1; i >= 0; i--) {
-+ if (!((1 << i) & saves)) continue;
-+ ++pushed;
-+ }
-+ frame->AllocateSavedCalleeRegisterSlots(pushed);
-+ }
-+
-+ // Initailize FPU state.
-+ __ fninit();
-+ __ fld1();
-+}
-+
-+void CodeGenerator::AssembleConstructFrame() {
-+ CallDescriptor* descriptor = linkage()->GetIncomingDescriptor();
-+ if (frame_access_state()->has_frame()) {
-+ if (descriptor->IsCFunctionCall()) {
-+ __ push(ebp);
-+ __ mov(ebp, esp);
-+ } else if (descriptor->IsJSFunctionCall()) {
-+ __ Prologue(this->info()->GeneratePreagedPrologue());
-+ if (descriptor->PushArgumentCount()) {
-+ __ push(kJavaScriptCallArgCountRegister);
-+ }
-+ } else {
-+ __ StubPrologue(info()->GetOutputStackFrameType());
-+ }
-+ }
-+
-+ int shrink_slots =
-+ frame()->GetTotalFrameSlotCount() - descriptor->CalculateFixedFrameSize();
-+
-+ if (info()->is_osr()) {
-+ // TurboFan OSR-compiled functions cannot be entered directly.
-+ __ Abort(kShouldNotDirectlyEnterOsrFunction);
-+
-+ // Unoptimized code jumps directly to this entrypoint while the unoptimized
-+ // frame is still on the stack. Optimized code uses OSR values directly from
-+ // the unoptimized frame. Thus, all that needs to be done is to allocate the
-+ // remaining stack slots.
-+ if (FLAG_code_comments) __ RecordComment("-- OSR entrypoint --");
-+ osr_pc_offset_ = __ pc_offset();
-+ shrink_slots -= osr_helper()->UnoptimizedFrameSlots();
-+
-+ // Initailize FPU state.
-+ __ fninit();
-+ __ fld1();
-+ }
-+
-+ const RegList saves = descriptor->CalleeSavedRegisters();
-+ if (shrink_slots > 0) {
-+ if (info()->IsWasm() && shrink_slots > 128) {
-+ // For WebAssembly functions with big frames we have to do the stack
-+ // overflow check before we construct the frame. Otherwise we may not
-+ // have enough space on the stack to call the runtime for the stack
-+ // overflow.
-+ Label done;
-+
-+ // If the frame is bigger than the stack, we throw the stack overflow
-+ // exception unconditionally. Thereby we can avoid the integer overflow
-+ // check in the condition code.
-+ if (shrink_slots * kPointerSize < FLAG_stack_size * 1024) {
-+ Register scratch = esi;
-+ __ push(scratch);
-+ __ mov(scratch,
-+ Immediate(ExternalReference::address_of_real_stack_limit(
-+ __ isolate())));
-+ __ mov(scratch, Operand(scratch, 0));
-+ __ add(scratch, Immediate(shrink_slots * kPointerSize));
-+ __ cmp(esp, scratch);
-+ __ pop(scratch);
-+ __ j(above_equal, &done);
-+ }
-+ if (!frame_access_state()->has_frame()) {
-+ __ set_has_frame(true);
-+ __ EnterFrame(StackFrame::WASM_COMPILED);
-+ }
-+ __ Move(esi, Smi::kZero);
-+ __ CallRuntimeDelayed(zone(), Runtime::kThrowWasmStackOverflow);
-+ ReferenceMap* reference_map = new (zone()) ReferenceMap(zone());
-+ RecordSafepoint(reference_map, Safepoint::kSimple, 0,
-+ Safepoint::kNoLazyDeopt);
-+ __ AssertUnreachable(kUnexpectedReturnFromWasmTrap);
-+ __ bind(&done);
-+ }
-+ __ sub(esp, Immediate(shrink_slots * kPointerSize));
-+ }
-+
-+ if (saves != 0) { // Save callee-saved registers.
-+ DCHECK(!info()->is_osr());
-+ int pushed = 0;
-+ for (int i = Register::kNumRegisters - 1; i >= 0; i--) {
-+ if (!((1 << i) & saves)) continue;
-+ __ push(Register::from_code(i));
-+ ++pushed;
-+ }
-+ }
-+}
-+
-+void CodeGenerator::AssembleReturn(InstructionOperand* pop) {
-+ CallDescriptor* descriptor = linkage()->GetIncomingDescriptor();
-+
-+ // Clear the FPU stack only if there is no return value in the stack.
-+ if (FLAG_debug_code && FLAG_enable_slow_asserts) {
-+ __ VerifyX87StackDepth(1);
-+ }
-+ bool clear_stack = true;
-+ for (size_t i = 0; i < descriptor->ReturnCount(); i++) {
-+ MachineRepresentation rep = descriptor->GetReturnType(i).representation();
-+ LinkageLocation loc = descriptor->GetReturnLocation(i);
-+ if (IsFloatingPoint(rep) && loc == LinkageLocation::ForRegister(0)) {
-+ clear_stack = false;
-+ break;
-+ }
-+ }
-+ if (clear_stack) __ fstp(0);
-+
-+ const RegList saves = descriptor->CalleeSavedRegisters();
-+ // Restore registers.
-+ if (saves != 0) {
-+ for (int i = 0; i < Register::kNumRegisters; i++) {
-+ if (!((1 << i) & saves)) continue;
-+ __ pop(Register::from_code(i));
-+ }
-+ }
-+
-+ // Might need ecx for scratch if pop_size is too big or if there is a variable
-+ // pop count.
-+ DCHECK_EQ(0u, descriptor->CalleeSavedRegisters() & ecx.bit());
-+ size_t pop_size = descriptor->StackParameterCount() * kPointerSize;
-+ X87OperandConverter g(this, nullptr);
-+ if (descriptor->IsCFunctionCall()) {
-+ AssembleDeconstructFrame();
-+ } else if (frame_access_state()->has_frame()) {
-+ // Canonicalize JSFunction return sites for now if they always have the same
-+ // number of return args.
-+ if (pop->IsImmediate() && g.ToConstant(pop).ToInt32() == 0) {
-+ if (return_label_.is_bound()) {
-+ __ jmp(&return_label_);
-+ return;
-+ } else {
-+ __ bind(&return_label_);
-+ AssembleDeconstructFrame();
-+ }
-+ } else {
-+ AssembleDeconstructFrame();
-+ }
-+ }
-+ DCHECK_EQ(0u, descriptor->CalleeSavedRegisters() & edx.bit());
-+ DCHECK_EQ(0u, descriptor->CalleeSavedRegisters() & ecx.bit());
-+ if (pop->IsImmediate()) {
-+ DCHECK_EQ(Constant::kInt32, g.ToConstant(pop).type());
-+ pop_size += g.ToConstant(pop).ToInt32() * kPointerSize;
-+ __ Ret(static_cast<int>(pop_size), ecx);
-+ } else {
-+ Register pop_reg = g.ToRegister(pop);
-+ Register scratch_reg = pop_reg.is(ecx) ? edx : ecx;
-+ __ pop(scratch_reg);
-+ __ lea(esp, Operand(esp, pop_reg, times_4, static_cast<int>(pop_size)));
-+ __ jmp(scratch_reg);
-+ }
-+}
-+
-+void CodeGenerator::FinishCode() {}
-+
-+void CodeGenerator::AssembleMove(InstructionOperand* source,
-+ InstructionOperand* destination) {
-+ X87OperandConverter g(this, nullptr);
-+ // Dispatch on the source and destination operand kinds. Not all
-+ // combinations are possible.
-+ if (source->IsRegister()) {
-+ DCHECK(destination->IsRegister() || destination->IsStackSlot());
-+ Register src = g.ToRegister(source);
-+ Operand dst = g.ToOperand(destination);
-+ __ mov(dst, src);
-+ } else if (source->IsStackSlot()) {
-+ DCHECK(destination->IsRegister() || destination->IsStackSlot());
-+ Operand src = g.ToOperand(source);
-+ if (destination->IsRegister()) {
-+ Register dst = g.ToRegister(destination);
-+ __ mov(dst, src);
-+ } else {
-+ Operand dst = g.ToOperand(destination);
-+ __ push(src);
-+ __ pop(dst);
-+ }
-+ } else if (source->IsConstant()) {
-+ Constant src_constant = g.ToConstant(source);
-+ if (src_constant.type() == Constant::kHeapObject) {
-+ Handle<HeapObject> src = src_constant.ToHeapObject();
-+ if (destination->IsRegister()) {
-+ Register dst = g.ToRegister(destination);
-+ __ Move(dst, src);
-+ } else {
-+ DCHECK(destination->IsStackSlot());
-+ Operand dst = g.ToOperand(destination);
-+ __ mov(dst, src);
-+ }
-+ } else if (destination->IsRegister()) {
-+ Register dst = g.ToRegister(destination);
-+ __ Move(dst, g.ToImmediate(source));
-+ } else if (destination->IsStackSlot()) {
-+ Operand dst = g.ToOperand(destination);
-+ __ Move(dst, g.ToImmediate(source));
-+ } else if (src_constant.type() == Constant::kFloat32) {
-+ // TODO(turbofan): Can we do better here?
-+ uint32_t src = src_constant.ToFloat32AsInt();
-+ if (destination->IsFPRegister()) {
-+ __ sub(esp, Immediate(kInt32Size));
-+ __ mov(MemOperand(esp, 0), Immediate(src));
-+ // always only push one value into the x87 stack.
-+ __ fstp(0);
-+ __ fld_s(MemOperand(esp, 0));
-+ __ add(esp, Immediate(kInt32Size));
-+ } else {
-+ DCHECK(destination->IsFPStackSlot());
-+ Operand dst = g.ToOperand(destination);
-+ __ Move(dst, Immediate(src));
-+ }
-+ } else {
-+ DCHECK_EQ(Constant::kFloat64, src_constant.type());
-+ uint64_t src = src_constant.ToFloat64().AsUint64();
-+ uint32_t lower = static_cast<uint32_t>(src);
-+ uint32_t upper = static_cast<uint32_t>(src >> 32);
-+ if (destination->IsFPRegister()) {
-+ __ sub(esp, Immediate(kDoubleSize));
-+ __ mov(MemOperand(esp, 0), Immediate(lower));
-+ __ mov(MemOperand(esp, kInt32Size), Immediate(upper));
-+ // always only push one value into the x87 stack.
-+ __ fstp(0);
-+ __ fld_d(MemOperand(esp, 0));
-+ __ add(esp, Immediate(kDoubleSize));
-+ } else {
-+ DCHECK(destination->IsFPStackSlot());
-+ Operand dst0 = g.ToOperand(destination);
-+ Operand dst1 = g.HighOperand(destination);
-+ __ Move(dst0, Immediate(lower));
-+ __ Move(dst1, Immediate(upper));
-+ }
-+ }
-+ } else if (source->IsFPRegister()) {
-+ DCHECK(destination->IsFPStackSlot());
-+ Operand dst = g.ToOperand(destination);
-+ auto allocated = AllocatedOperand::cast(*source);
-+ switch (allocated.representation()) {
-+ case MachineRepresentation::kFloat32:
-+ __ fst_s(dst);
-+ break;
-+ case MachineRepresentation::kFloat64:
-+ __ fst_d(dst);
-+ break;
-+ default:
-+ UNREACHABLE();
-+ }
-+ } else if (source->IsFPStackSlot()) {
-+ DCHECK(destination->IsFPRegister() || destination->IsFPStackSlot());
-+ Operand src = g.ToOperand(source);
-+ auto allocated = AllocatedOperand::cast(*source);
-+ if (destination->IsFPRegister()) {
-+ // always only push one value into the x87 stack.
-+ __ fstp(0);
-+ switch (allocated.representation()) {
-+ case MachineRepresentation::kFloat32:
-+ __ fld_s(src);
-+ break;
-+ case MachineRepresentation::kFloat64:
-+ __ fld_d(src);
-+ break;
-+ default:
-+ UNREACHABLE();
-+ }
-+ } else {
-+ Operand dst = g.ToOperand(destination);
-+ switch (allocated.representation()) {
-+ case MachineRepresentation::kFloat32:
-+ __ fld_s(src);
-+ __ fstp_s(dst);
-+ break;
-+ case MachineRepresentation::kFloat64:
-+ __ fld_d(src);
-+ __ fstp_d(dst);
-+ break;
-+ default:
-+ UNREACHABLE();
-+ }
-+ }
-+ } else {
-+ UNREACHABLE();
-+ }
-+}
-+
-+
-+void CodeGenerator::AssembleSwap(InstructionOperand* source,
-+ InstructionOperand* destination) {
-+ X87OperandConverter g(this, nullptr);
-+ // Dispatch on the source and destination operand kinds. Not all
-+ // combinations are possible.
-+ if (source->IsRegister() && destination->IsRegister()) {
-+ // Register-register.
-+ Register src = g.ToRegister(source);
-+ Register dst = g.ToRegister(destination);
-+ __ xchg(dst, src);
-+ } else if (source->IsRegister() && destination->IsStackSlot()) {
-+ // Register-memory.
-+ __ xchg(g.ToRegister(source), g.ToOperand(destination));
-+ } else if (source->IsStackSlot() && destination->IsStackSlot()) {
-+ // Memory-memory.
-+ Operand dst1 = g.ToOperand(destination);
-+ __ push(dst1);
-+ frame_access_state()->IncreaseSPDelta(1);
-+ Operand src1 = g.ToOperand(source);
-+ __ push(src1);
-+ Operand dst2 = g.ToOperand(destination);
-+ __ pop(dst2);
-+ frame_access_state()->IncreaseSPDelta(-1);
-+ Operand src2 = g.ToOperand(source);
-+ __ pop(src2);
-+ } else if (source->IsFPRegister() && destination->IsFPRegister()) {
-+ UNREACHABLE();
-+ } else if (source->IsFPRegister() && destination->IsFPStackSlot()) {
-+ auto allocated = AllocatedOperand::cast(*source);
-+ switch (allocated.representation()) {
-+ case MachineRepresentation::kFloat32:
-+ __ fld_s(g.ToOperand(destination));
-+ __ fxch();
-+ __ fstp_s(g.ToOperand(destination));
-+ break;
-+ case MachineRepresentation::kFloat64:
-+ __ fld_d(g.ToOperand(destination));
-+ __ fxch();
-+ __ fstp_d(g.ToOperand(destination));
-+ break;
-+ default:
-+ UNREACHABLE();
-+ }
-+ } else if (source->IsFPStackSlot() && destination->IsFPStackSlot()) {
-+ auto allocated = AllocatedOperand::cast(*source);
-+ switch (allocated.representation()) {
-+ case MachineRepresentation::kFloat32:
-+ __ fld_s(g.ToOperand(source));
-+ __ fld_s(g.ToOperand(destination));
-+ __ fstp_s(g.ToOperand(source));
-+ __ fstp_s(g.ToOperand(destination));
-+ break;
-+ case MachineRepresentation::kFloat64:
-+ __ fld_d(g.ToOperand(source));
-+ __ fld_d(g.ToOperand(destination));
-+ __ fstp_d(g.ToOperand(source));
-+ __ fstp_d(g.ToOperand(destination));
-+ break;
-+ default:
-+ UNREACHABLE();
-+ }
-+ } else {
-+ // No other combinations are possible.
-+ UNREACHABLE();
-+ }
-+}
-+
-+
-+void CodeGenerator::AssembleJumpTable(Label** targets, size_t target_count) {
-+ for (size_t index = 0; index < target_count; ++index) {
-+ __ dd(targets[index]);
-+ }
-+}
-+
-+
-+void CodeGenerator::EnsureSpaceForLazyDeopt() {
-+ if (!info()->ShouldEnsureSpaceForLazyDeopt()) {
-+ return;
-+ }
-+
-+ int space_needed = Deoptimizer::patch_size();
-+ // Ensure that we have enough space after the previous lazy-bailout
-+ // instruction for patching the code here.
-+ int current_pc = tasm()->pc_offset();
-+ if (current_pc < last_lazy_deopt_pc_ + space_needed) {
-+ int padding_size = last_lazy_deopt_pc_ + space_needed - current_pc;
-+ __ Nop(padding_size);
-+ }
-+}
-+
-+#undef __
-+
-+} // namespace compiler
-+} // namespace internal
-+} // namespace v8
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/compiler/x87/instruction-codes-x87.h qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/compiler/x87/instruction-codes-x87.h
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/compiler/x87/instruction-codes-x87.h	1970-01-01 01:00:00.000000000 +0100
--+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/compiler/x87/instruction-codes-x87.h	2018-02-18 19:00:54.013420855 +0100
-@@ -0,0 +1,141 @@
-+// Copyright 2014 the V8 project authors. All rights reserved.
-+// Use of this source code is governed by a BSD-style license that can be
-+// found in the LICENSE file.
-+
-+#ifndef V8_COMPILER_X87_INSTRUCTION_CODES_X87_H_
-+#define V8_COMPILER_X87_INSTRUCTION_CODES_X87_H_
-+
-+#include "src/compiler/instruction.h"
-+#include "src/compiler/instruction-codes.h"
-+namespace v8 {
-+namespace internal {
-+namespace compiler {
-+
-+// X87-specific opcodes that specify which assembly sequence to emit.
-+// Most opcodes specify a single instruction.
-+#define TARGET_ARCH_OPCODE_LIST(V) \
-+ V(X87Add) \
-+ V(X87And) \
-+ V(X87Cmp) \
-+ V(X87Cmp16) \
-+ V(X87Cmp8) \
-+ V(X87Test) \
-+ V(X87Test16) \
-+ V(X87Test8) \
-+ V(X87Or) \
-+ V(X87Xor) \
-+ V(X87Sub) \
-+ V(X87Imul) \
-+ V(X87ImulHigh) \
-+ V(X87UmulHigh) \
-+ V(X87Idiv) \
-+ V(X87Udiv) \
-+ V(X87Not) \
-+ V(X87Neg) \
-+ V(X87Shl) \
-+ V(X87Shr) \
-+ V(X87Sar) \
-+ V(X87AddPair) \
-+ V(X87SubPair) \
-+ V(X87MulPair) \
-+ V(X87ShlPair) \
-+ V(X87ShrPair) \
-+ V(X87SarPair) \
-+ V(X87Ror) \
-+ V(X87Lzcnt) \
-+ V(X87Popcnt) \
-+ V(X87Float32Cmp) \
-+ V(X87Float32Add) \
-+ V(X87Float32Sub) \
-+ V(X87Float32Mul) \
-+ V(X87Float32Div) \
-+ V(X87Float32Abs) \
-+ V(X87Float32Neg) \
-+ V(X87Float32Sqrt) \
-+ V(X87Float32Round) \
-+ V(X87LoadFloat64Constant) \
-+ V(X87Float64Add) \
-+ V(X87Float64Sub) \
-+ V(X87Float64Mul) \
-+ V(X87Float64Div) \
-+ V(X87Float64Mod) \
-+ V(X87Float32Max) \
-+ V(X87Float64Max) \
-+ V(X87Float32Min) \
-+ V(X87Float64Min) \
-+ V(X87Float64Abs) \
-+ V(X87Float64Neg) \
-+ V(X87Int32ToFloat32) \
-+ V(X87Uint32ToFloat32) \
-+ V(X87Int32ToFloat64) \
-+ V(X87Float32ToFloat64) \
-+ V(X87Uint32ToFloat64) \
-+ V(X87Float64ToInt32) \
-+ V(X87Float32ToInt32) \
-+ V(X87Float32ToUint32) \
-+ V(X87Float64ToFloat32) \
-+ V(X87Float64ToUint32) \
-+ V(X87Float64ExtractHighWord32) \
-+ V(X87Float64ExtractLowWord32) \
-+ V(X87Float64InsertHighWord32) \
-+ V(X87Float64InsertLowWord32) \
-+ V(X87Float64Sqrt) \
-+ V(X87Float64Round) \
-+ V(X87Float64Cmp) \
-+ V(X87Float64SilenceNaN) \
-+ V(X87Movsxbl) \
-+ V(X87Movzxbl) \
-+ V(X87Movb) \
-+ V(X87Movsxwl) \
-+ V(X87Movzxwl) \
-+ V(X87Movw) \
-+ V(X87Movl) \
-+ V(X87Movss) \
-+ V(X87Movsd) \
-+ V(X87Lea) \
-+ V(X87BitcastFI) \
-+ V(X87BitcastIF) \
-+ V(X87Push) \
-+ V(X87PushFloat64) \
-+ V(X87PushFloat32) \
-+ V(X87Poke) \
-+ V(X87StackCheck)
-+
-+// Addressing modes represent the "shape" of inputs to an instruction.
-+// Many instructions support multiple addressing modes. Addressing modes
-+// are encoded into the InstructionCode of the instruction and tell the
-+// code generator after register allocation which assembler method to call.
-+//
-+// We use the following local notation for addressing modes:
-+//
-+// M = memory operand
-+// R = base register
-+// N = index register * N for N in {1, 2, 4, 8}
-+// I = immediate displacement (int32_t)
-+
-+#define TARGET_ADDRESSING_MODE_LIST(V) \
-+ V(MR) /* [%r1 ] */ \
-+ V(MRI) /* [%r1 + K] */ \
-+ V(MR1) /* [%r1 + %r2*1 ] */ \
-+ V(MR2) /* [%r1 + %r2*2 ] */ \
-+ V(MR4) /* [%r1 + %r2*4 ] */ \
-+ V(MR8) /* [%r1 + %r2*8 ] */ \
-+ V(MR1I) /* [%r1 + %r2*1 + K] */ \
-+ V(MR2I) /* [%r1 + %r2*2 + K] */ \
-+ V(MR4I) /* [%r1 + %r2*4 + K] */ \
-+ V(MR8I) /* [%r1 + %r2*8 + K] */ \
-+ V(M1) /* [ %r2*1 ] */ \
-+ V(M2) /* [ %r2*2 ] */ \
-+ V(M4) /* [ %r2*4 ] */ \
-+ V(M8) /* [ %r2*8 ] */ \
-+ V(M1I) /* [ %r2*1 + K] */ \
-+ V(M2I) /* [ %r2*2 + K] */ \
-+ V(M4I) /* [ %r2*4 + K] */ \
-+ V(M8I) /* [ %r2*8 + K] */ \
-+ V(MI) /* [ K] */
-+
-+} // namespace compiler
-+} // namespace internal
-+} // namespace v8
-+
-+#endif // V8_COMPILER_X87_INSTRUCTION_CODES_X87_H_
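[Editor's aside, not part of the quoted patch: the `TARGET_ADDRESSING_MODE_LIST` above enumerates which subset of the general ia32 operand shape `[base + index*scale + displacement]` each mode uses. A small illustrative sketch of that effective-address computation, with hypothetical names:]

```python
def effective_address(base=0, index=0, scale=1, disp=0):
    """Compute [base + index*scale + disp], the general memory-operand
    shape that modes like kMode_MR4I specialize."""
    # ia32 SIB encoding only allows these scale factors.
    assert scale in (1, 2, 4, 8), "unsupported scale factor"
    return base + index * scale + disp

# kMode_MR4I corresponds to [%r1 + %r2*4 + K]:
print(hex(effective_address(base=0x1000, index=3, scale=4, disp=8)))
# kMode_MI corresponds to [K] alone:
print(effective_address(disp=42))
```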
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/compiler/x87/instruction-scheduler-x87.cc qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/compiler/x87/instruction-scheduler-x87.cc
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/compiler/x87/instruction-scheduler-x87.cc	1970-01-01 01:00:00.000000000 +0100
--+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/compiler/x87/instruction-scheduler-x87.cc	2018-02-18 19:00:54.013420855 +0100
-@@ -0,0 +1,26 @@
-+// Copyright 2015 the V8 project authors. All rights reserved.
-+// Use of this source code is governed by a BSD-style license that can be
-+// found in the LICENSE file.
-+
-+#include "src/compiler/instruction-scheduler.h"
-+
-+namespace v8 {
-+namespace internal {
-+namespace compiler {
-+
-+bool InstructionScheduler::SchedulerSupported() { return false; }
-+
-+
-+int InstructionScheduler::GetTargetInstructionFlags(
-+ const Instruction* instr) const {
-+ UNIMPLEMENTED();
-+}
-+
-+
-+int InstructionScheduler::GetInstructionLatency(const Instruction* instr) {
-+ UNIMPLEMENTED();
-+}
-+
-+} // namespace compiler
-+} // namespace internal
-+} // namespace v8
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/compiler/x87/instruction-selector-x87.cc qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/compiler/x87/instruction-selector-x87.cc
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/compiler/x87/instruction-selector-x87.cc	1970-01-01 01:00:00.000000000 +0100
--+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/compiler/x87/instruction-selector-x87.cc	2018-02-18 19:00:54.014420841 +0100
-@@ -0,0 +1,2031 @@
-+// Copyright 2014 the V8 project authors. All rights reserved.
-+// Use of this source code is governed by a BSD-style license that can be
-+// found in the LICENSE file.
-+
-+#include "src/base/adapters.h"
-+#include "src/compiler/instruction-selector-impl.h"
-+#include "src/compiler/node-matchers.h"
-+#include "src/compiler/node-properties.h"
-+
-+namespace v8 {
-+namespace internal {
-+namespace compiler {
-+
-+// Adds X87-specific methods for generating operands.
-+class X87OperandGenerator final : public OperandGenerator {
-+ public:
-+ explicit X87OperandGenerator(InstructionSelector* selector)
-+ : OperandGenerator(selector) {}
-+
-+ InstructionOperand UseByteRegister(Node* node) {
-+ // TODO(titzer): encode byte register use constraints.
-+ return UseFixed(node, edx);
-+ }
-+
-+ InstructionOperand DefineAsByteRegister(Node* node) {
-+ // TODO(titzer): encode byte register def constraints.
-+ return DefineAsRegister(node);
-+ }
-+
-+ bool CanBeMemoryOperand(InstructionCode opcode, Node* node, Node* input,
-+ int effect_level) {
-+ if (input->opcode() != IrOpcode::kLoad ||
-+ !selector()->CanCover(node, input)) {
-+ return false;
-+ }
-+ if (effect_level != selector()->GetEffectLevel(input)) {
-+ return false;
-+ }
-+ MachineRepresentation rep =
-+ LoadRepresentationOf(input->op()).representation();
-+ switch (opcode) {
-+ case kX87Cmp:
-+ case kX87Test:
-+ return rep == MachineRepresentation::kWord32 ||
-+ rep == MachineRepresentation::kTagged;
-+ case kX87Cmp16:
-+ case kX87Test16:
-+ return rep == MachineRepresentation::kWord16;
-+ case kX87Cmp8:
-+ case kX87Test8:
-+ return rep == MachineRepresentation::kWord8;
-+ default:
-+ break;
-+ }
-+ return false;
-+ }
-+
-+ InstructionOperand CreateImmediate(int imm) {
-+ return sequence()->AddImmediate(Constant(imm));
-+ }
-+
-+ bool CanBeImmediate(Node* node) {
-+ switch (node->opcode()) {
-+ case IrOpcode::kInt32Constant:
-+ case IrOpcode::kNumberConstant:
-+ case IrOpcode::kExternalConstant:
-+ case IrOpcode::kRelocatableInt32Constant:
-+ case IrOpcode::kRelocatableInt64Constant:
-+ return true;
-+ case IrOpcode::kHeapConstant: {
-+// TODO(bmeurer): We must not dereference handles concurrently. If we
-+// really have to this here, then we need to find a way to put this
-+// information on the HeapConstant node already.
-+#if 0
-+ // Constants in new space cannot be used as immediates in V8 because
-+ // the GC does not scan code objects when collecting the new generation.
-+ Handle<HeapObject> value = OpParameter<Handle<HeapObject>>(node);
-+ Isolate* isolate = value->GetIsolate();
-+ return !isolate->heap()->InNewSpace(*value);
-+#endif
-+ }
-+ default:
-+ return false;
-+ }
-+ }
-+
-+ AddressingMode GenerateMemoryOperandInputs(Node* index, int scale, Node* base,
-+ Node* displacement_node,
-+ DisplacementMode displacement_mode,
-+ InstructionOperand inputs[],
-+ size_t* input_count) {
-+ AddressingMode mode = kMode_MRI;
-+ int32_t displacement = (displacement_node == nullptr)
-+ ? 0
-+ : OpParameter<int32_t>(displacement_node);
-+ if (displacement_mode == kNegativeDisplacement) {
-+ displacement = -displacement;
-+ }
-+ if (base != nullptr) {
-+ if (base->opcode() == IrOpcode::kInt32Constant) {
-+ displacement += OpParameter<int32_t>(base);
-+ base = nullptr;
-+ }
-+ }
-+ if (base != nullptr) {
-+ inputs[(*input_count)++] = UseRegister(base);
-+ if (index != nullptr) {
-+ DCHECK(scale >= 0 && scale <= 3);
-+ inputs[(*input_count)++] = UseRegister(index);
-+ if (displacement != 0) {
-+ inputs[(*input_count)++] = TempImmediate(displacement);
-+ static const AddressingMode kMRnI_modes[] = {kMode_MR1I, kMode_MR2I,
-+ kMode_MR4I, kMode_MR8I};
-+ mode = kMRnI_modes[scale];
-+ } else {
-+ static const AddressingMode kMRn_modes[] = {kMode_MR1, kMode_MR2,
-+ kMode_MR4, kMode_MR8};
-+ mode = kMRn_modes[scale];
-+ }
-+ } else {
-+ if (displacement == 0) {
-+ mode = kMode_MR;
-+ } else {
-+ inputs[(*input_count)++] = TempImmediate(displacement);
-+ mode = kMode_MRI;
-+ }
-+ }
-+ } else {
-+ DCHECK(scale >= 0 && scale <= 3);
-+ if (index != nullptr) {
-+ inputs[(*input_count)++] = UseRegister(index);
-+ if (displacement != 0) {
-+ inputs[(*input_count)++] = TempImmediate(displacement);
-+ static const AddressingMode kMnI_modes[] = {kMode_MRI, kMode_M2I,
-+ kMode_M4I, kMode_M8I};
-+ mode = kMnI_modes[scale];
-+ } else {
-+ static const AddressingMode kMn_modes[] = {kMode_MR, kMode_M2,
-+ kMode_M4, kMode_M8};
-+ mode = kMn_modes[scale];
-+ }
-+ } else {
-+ inputs[(*input_count)++] = TempImmediate(displacement);
-+ return kMode_MI;
-+ }
-+ }
-+ return mode;
-+ }
-+
-+ AddressingMode GetEffectiveAddressMemoryOperand(Node* node,
-+ InstructionOperand inputs[],
-+ size_t* input_count) {
-+ BaseWithIndexAndDisplacement32Matcher m(node, AddressOption::kAllowAll);
-+ DCHECK(m.matches());
-+ if ((m.displacement() == nullptr || CanBeImmediate(m.displacement()))) {
-+ return GenerateMemoryOperandInputs(
-+ m.index(), m.scale(), m.base(), m.displacement(),
-+ m.displacement_mode(), inputs, input_count);
-+ } else {
-+ inputs[(*input_count)++] = UseRegister(node->InputAt(0));
-+ inputs[(*input_count)++] = UseRegister(node->InputAt(1));
-+ return kMode_MR1;
-+ }
-+ }
-+
-+ bool CanBeBetterLeftOperand(Node* node) const {
-+ return !selector()->IsLive(node);
-+ }
-+};
-+
-+void InstructionSelector::VisitStackSlot(Node* node) {
-+ StackSlotRepresentation rep = StackSlotRepresentationOf(node->op());
-+ int slot = frame_->AllocateSpillSlot(rep.size());
-+ OperandGenerator g(this);
-+
-+ Emit(kArchStackSlot, g.DefineAsRegister(node),
-+ sequence()->AddImmediate(Constant(slot)), 0, nullptr);
-+}
-+
-+void InstructionSelector::VisitLoad(Node* node) {
-+ LoadRepresentation load_rep = LoadRepresentationOf(node->op());
-+
-+ ArchOpcode opcode = kArchNop;
-+ switch (load_rep.representation()) {
-+ case MachineRepresentation::kFloat32:
-+ opcode = kX87Movss;
-+ break;
-+ case MachineRepresentation::kFloat64:
-+ opcode = kX87Movsd;
-+ break;
-+ case MachineRepresentation::kBit: // Fall through.
-+ case MachineRepresentation::kWord8:
-+ opcode = load_rep.IsSigned() ? kX87Movsxbl : kX87Movzxbl;
-+ break;
-+ case MachineRepresentation::kWord16:
-+ opcode = load_rep.IsSigned() ? kX87Movsxwl : kX87Movzxwl;
-+ break;
-+ case MachineRepresentation::kTaggedSigned: // Fall through.
-+ case MachineRepresentation::kTaggedPointer: // Fall through.
-+ case MachineRepresentation::kTagged: // Fall through.
-+ case MachineRepresentation::kWord32:
-+ opcode = kX87Movl;
-+ break;
-+ case MachineRepresentation::kWord64: // Fall through.
-+ case MachineRepresentation::kSimd128: // Fall through.
-+ case MachineRepresentation::kNone:
-+ UNREACHABLE();
-+ return;
-+ }
-+
-+ X87OperandGenerator g(this);
-+ InstructionOperand outputs[1];
-+ outputs[0] = g.DefineAsRegister(node);
-+ InstructionOperand inputs[3];
-+ size_t input_count = 0;
-+ AddressingMode mode =
-+ g.GetEffectiveAddressMemoryOperand(node, inputs, &input_count);
-+ InstructionCode code = opcode | AddressingModeField::encode(mode);
-+ Emit(code, 1, outputs, input_count, inputs);
-+}
-+
-+void InstructionSelector::VisitProtectedLoad(Node* node) {
-+ // TODO(eholk)
-+ UNIMPLEMENTED();
-+}
-+
-+void InstructionSelector::VisitStore(Node* node) {
-+ X87OperandGenerator g(this);
-+ Node* base = node->InputAt(0);
-+ Node* index = node->InputAt(1);
-+ Node* value = node->InputAt(2);
-+
-+ StoreRepresentation store_rep = StoreRepresentationOf(node->op());
-+ WriteBarrierKind write_barrier_kind = store_rep.write_barrier_kind();
-+ MachineRepresentation rep = store_rep.representation();
-+
-+ if (write_barrier_kind != kNoWriteBarrier) {
-+ DCHECK(CanBeTaggedPointer(rep));
-+ AddressingMode addressing_mode;
-+ InstructionOperand inputs[3];
-+ size_t input_count = 0;
-+ inputs[input_count++] = g.UseUniqueRegister(base);
-+ if (g.CanBeImmediate(index)) {
-+ inputs[input_count++] = g.UseImmediate(index);
-+ addressing_mode = kMode_MRI;
-+ } else {
-+ inputs[input_count++] = g.UseUniqueRegister(index);
-+ addressing_mode = kMode_MR1;
-+ }
-+ inputs[input_count++] = g.UseUniqueRegister(value);
-+ RecordWriteMode record_write_mode = RecordWriteMode::kValueIsAny;
-+ switch (write_barrier_kind) {
-+ case kNoWriteBarrier:
-+ UNREACHABLE();
-+ break;
-+ case kMapWriteBarrier:
-+ record_write_mode = RecordWriteMode::kValueIsMap;
-+ break;
-+ case kPointerWriteBarrier:
-+ record_write_mode = RecordWriteMode::kValueIsPointer;
-+ break;
-+ case kFullWriteBarrier:
-+ record_write_mode = RecordWriteMode::kValueIsAny;
-+ break;
-+ }
-+ InstructionOperand temps[] = {g.TempRegister(), g.TempRegister()};
-+ size_t const temp_count = arraysize(temps);
-+ InstructionCode code = kArchStoreWithWriteBarrier;
-+ code |= AddressingModeField::encode(addressing_mode);
-+ code |= MiscField::encode(static_cast<int>(record_write_mode));
-+ Emit(code, 0, nullptr, input_count, inputs, temp_count, temps);
-+ } else {
-+ ArchOpcode opcode = kArchNop;
-+ switch (rep) {
-+ case MachineRepresentation::kFloat32:
-+ opcode = kX87Movss;
-+ break;
-+ case MachineRepresentation::kFloat64:
-+ opcode = kX87Movsd;
-+ break;
-+ case MachineRepresentation::kBit: // Fall through.
-+ case MachineRepresentation::kWord8:
-+ opcode = kX87Movb;
-+ break;
-+ case MachineRepresentation::kWord16:
-+ opcode = kX87Movw;
-+ break;
-+ case MachineRepresentation::kTaggedSigned: // Fall through.
-+ case MachineRepresentation::kTaggedPointer: // Fall through.
-+ case MachineRepresentation::kTagged: // Fall through.
-+ case MachineRepresentation::kWord32:
-+ opcode = kX87Movl;
-+ break;
-+ case MachineRepresentation::kWord64: // Fall through.
-+ case MachineRepresentation::kSimd128: // Fall through.
-+ case MachineRepresentation::kNone:
-+ UNREACHABLE();
-+ return;
-+ }
-+
-+ InstructionOperand val;
-+ if (g.CanBeImmediate(value)) {
-+ val = g.UseImmediate(value);
-+ } else if (rep == MachineRepresentation::kWord8 ||
-+ rep == MachineRepresentation::kBit) {
-+ val = g.UseByteRegister(value);
-+ } else {
-+ val = g.UseRegister(value);
-+ }
-+
-+ InstructionOperand inputs[4];
-+ size_t input_count = 0;
-+ AddressingMode addressing_mode =
-+ g.GetEffectiveAddressMemoryOperand(node, inputs, &input_count);
-+ InstructionCode code =
-+ opcode | AddressingModeField::encode(addressing_mode);
-+ inputs[input_count++] = val;
-+ Emit(code, 0, static_cast<InstructionOperand*>(nullptr), input_count,
-+ inputs);
-+ }
-+}
-+
-+void InstructionSelector::VisitProtectedStore(Node* node) {
-+ // TODO(eholk)
-+ UNIMPLEMENTED();
-+}
-+
-+// Architecture supports unaligned access, therefore VisitLoad is used instead
-+void InstructionSelector::VisitUnalignedLoad(Node* node) { UNREACHABLE(); }
-+
-+// Architecture supports unaligned access, therefore VisitStore is used instead
-+void InstructionSelector::VisitUnalignedStore(Node* node) { UNREACHABLE(); }
-+
-+void InstructionSelector::VisitCheckedLoad(Node* node) {
-+ CheckedLoadRepresentation load_rep = CheckedLoadRepresentationOf(node->op());
-+ X87OperandGenerator g(this);
-+ Node* const buffer = node->InputAt(0);
-+ Node* const offset = node->InputAt(1);
-+ Node* const length = node->InputAt(2);
-+ ArchOpcode opcode = kArchNop;
-+ switch (load_rep.representation()) {
-+ case MachineRepresentation::kWord8:
-+ opcode = load_rep.IsSigned() ? kCheckedLoadInt8 : kCheckedLoadUint8;
-+ break;
-+ case MachineRepresentation::kWord16:
-+ opcode = load_rep.IsSigned() ? kCheckedLoadInt16 : kCheckedLoadUint16;
-+ break;
-+ case MachineRepresentation::kWord32:
-+ opcode = kCheckedLoadWord32;
-+ break;
-+ case MachineRepresentation::kFloat32:
-+ opcode = kCheckedLoadFloat32;
-+ break;
-+ case MachineRepresentation::kFloat64:
-+ opcode = kCheckedLoadFloat64;
-+ break;
-+ case MachineRepresentation::kBit: // Fall through.
-+ case MachineRepresentation::kTaggedSigned: // Fall through.
-+ case MachineRepresentation::kTaggedPointer: // Fall through.
-+ case MachineRepresentation::kTagged: // Fall through.
-+ case MachineRepresentation::kWord64: // Fall through.
-+ case MachineRepresentation::kSimd128: // Fall through.
-+ case MachineRepresentation::kNone:
-+ UNREACHABLE();
-+ return;
-+ }
-+ InstructionOperand offset_operand = g.UseRegister(offset);
-+ InstructionOperand length_operand =
-+ g.CanBeImmediate(length) ? g.UseImmediate(length) : g.UseRegister(length);
-+ if (g.CanBeImmediate(buffer)) {
-+ Emit(opcode | AddressingModeField::encode(kMode_MRI),
-+ g.DefineAsRegister(node), offset_operand, length_operand,
-+ offset_operand, g.UseImmediate(buffer));
-+ } else {
-+ Emit(opcode | AddressingModeField::encode(kMode_MR1),
-+ g.DefineAsRegister(node), offset_operand, length_operand,
-+ g.UseRegister(buffer), offset_operand);
-+ }
-+}
-+
-+
-+void InstructionSelector::VisitCheckedStore(Node* node) {
-+ MachineRepresentation rep = CheckedStoreRepresentationOf(node->op());
-+ X87OperandGenerator g(this);
-+ Node* const buffer = node->InputAt(0);
-+ Node* const offset = node->InputAt(1);
-+ Node* const length = node->InputAt(2);
-+ Node* const value = node->InputAt(3);
-+ ArchOpcode opcode = kArchNop;
-+ switch (rep) {
-+ case MachineRepresentation::kWord8:
-+ opcode = kCheckedStoreWord8;
-+ break;
-+ case MachineRepresentation::kWord16:
-+ opcode = kCheckedStoreWord16;
-+ break;
-+ case MachineRepresentation::kWord32:
-+ opcode = kCheckedStoreWord32;
-+ break;
-+ case MachineRepresentation::kFloat32:
-+ opcode = kCheckedStoreFloat32;
-+ break;
-+ case MachineRepresentation::kFloat64:
-+ opcode = kCheckedStoreFloat64;
-+ break;
-+ case MachineRepresentation::kBit: // Fall through.
-+ case MachineRepresentation::kTaggedSigned: // Fall through.
-+ case MachineRepresentation::kTaggedPointer: // Fall through.
-+ case MachineRepresentation::kTagged: // Fall through.
-+ case MachineRepresentation::kWord64: // Fall through.
-+ case MachineRepresentation::kSimd128: // Fall through.
-+ case MachineRepresentation::kNone:
-+ UNREACHABLE();
-+ return;
-+ }
-+ InstructionOperand value_operand =
-+ g.CanBeImmediate(value) ? g.UseImmediate(value)
-+ : ((rep == MachineRepresentation::kWord8 ||
-+ rep == MachineRepresentation::kBit)
-+ ? g.UseByteRegister(value)
-+ : g.UseRegister(value));
-+ InstructionOperand offset_operand = g.UseRegister(offset);
-+ InstructionOperand length_operand =
-+ g.CanBeImmediate(length) ? g.UseImmediate(length) : g.UseRegister(length);
-+ if (g.CanBeImmediate(buffer)) {
-+ Emit(opcode | AddressingModeField::encode(kMode_MRI), g.NoOutput(),
-+ offset_operand, length_operand, value_operand, offset_operand,
-+ g.UseImmediate(buffer));
-+ } else {
-+ Emit(opcode | AddressingModeField::encode(kMode_MR1), g.NoOutput(),
-+ offset_operand, length_operand, value_operand, g.UseRegister(buffer),
-+ offset_operand);
-+ }
-+}
-+
-+namespace {
-+
-+// Shared routine for multiple binary operations.
-+void VisitBinop(InstructionSelector* selector, Node* node,
-+ InstructionCode opcode, FlagsContinuation* cont) {
-+ X87OperandGenerator g(selector);
-+ Int32BinopMatcher m(node);
-+ Node* left = m.left().node();
-+ Node* right = m.right().node();
-+ InstructionOperand inputs[4];
-+ size_t input_count = 0;
-+ InstructionOperand outputs[2];
-+ size_t output_count = 0;
-+
-+ // TODO(turbofan): match complex addressing modes.
-+ if (left == right) {
-+ // If both inputs refer to the same operand, enforce allocating a register
-+ // for both of them to ensure that we don't end up generating code like
-+ // this:
-+ //
-+ // mov eax, [ebp-0x10]
-+ // add eax, [ebp-0x10]
-+ // jo label
-+ InstructionOperand const input = g.UseRegister(left);
-+ inputs[input_count++] = input;
-+ inputs[input_count++] = input;
-+ } else if (g.CanBeImmediate(right)) {
-+ inputs[input_count++] = g.UseRegister(left);
-+ inputs[input_count++] = g.UseImmediate(right);
-+ } else {
-+ if (node->op()->HasProperty(Operator::kCommutative) &&
-+ g.CanBeBetterLeftOperand(right)) {
-+ std::swap(left, right);
-+ }
-+ inputs[input_count++] = g.UseRegister(left);
-+ inputs[input_count++] = g.Use(right);
-+ }
-+
-+ if (cont->IsBranch()) {
-+ inputs[input_count++] = g.Label(cont->true_block());
-+ inputs[input_count++] = g.Label(cont->false_block());
-+ }
-+
-+ outputs[output_count++] = g.DefineSameAsFirst(node);
-+ if (cont->IsSet()) {
-+ outputs[output_count++] = g.DefineAsRegister(cont->result());
-+ }
-+
-+ DCHECK_NE(0u, input_count);
-+ DCHECK_NE(0u, output_count);
-+ DCHECK_GE(arraysize(inputs), input_count);
-+ DCHECK_GE(arraysize(outputs), output_count);
-+
-+ opcode = cont->Encode(opcode);
-+ if (cont->IsDeoptimize()) {
-+ selector->EmitDeoptimize(opcode, output_count, outputs, input_count, inputs,
-+ cont->kind(), cont->reason(), cont->frame_state());
-+ } else {
-+ selector->Emit(opcode, output_count, outputs, input_count, inputs);
-+ }
-+}
-+
-+
-+// Shared routine for multiple binary operations.
-+void VisitBinop(InstructionSelector* selector, Node* node,
-+ InstructionCode opcode) {
-+ FlagsContinuation cont;
-+ VisitBinop(selector, node, opcode, &cont);
-+}
-+
-+} // namespace
-+
-+void InstructionSelector::VisitWord32And(Node* node) {
-+ VisitBinop(this, node, kX87And);
-+}
-+
-+
-+void InstructionSelector::VisitWord32Or(Node* node) {
-+ VisitBinop(this, node, kX87Or);
-+}
-+
-+
-+void InstructionSelector::VisitWord32Xor(Node* node) {
-+ X87OperandGenerator g(this);
-+ Int32BinopMatcher m(node);
-+ if (m.right().Is(-1)) {
-+ Emit(kX87Not, g.DefineSameAsFirst(node), g.UseRegister(m.left().node()));
-+ } else {
-+ VisitBinop(this, node, kX87Xor);
-+ }
-+}
-+
-+
-+// Shared routine for multiple shift operations.
-+static inline void VisitShift(InstructionSelector* selector, Node* node,
-+ ArchOpcode opcode) {
-+ X87OperandGenerator g(selector);
-+ Node* left = node->InputAt(0);
-+ Node* right = node->InputAt(1);
-+
-+ if (g.CanBeImmediate(right)) {
-+ selector->Emit(opcode, g.DefineSameAsFirst(node), g.UseRegister(left),
-+ g.UseImmediate(right));
-+ } else {
-+ selector->Emit(opcode, g.DefineSameAsFirst(node), g.UseRegister(left),
-+ g.UseFixed(right, ecx));
-+ }
-+}
-+
-+
-+namespace {
-+
-+void VisitMulHigh(InstructionSelector* selector, Node* node,
-+ ArchOpcode opcode) {
-+ X87OperandGenerator g(selector);
-+ InstructionOperand temps[] = {g.TempRegister(eax)};
-+ selector->Emit(
-+ opcode, g.DefineAsFixed(node, edx), g.UseFixed(node->InputAt(0), eax),
-+ g.UseUniqueRegister(node->InputAt(1)), arraysize(temps), temps);
-+}
-+
-+
-+void VisitDiv(InstructionSelector* selector, Node* node, ArchOpcode opcode) {
-+ X87OperandGenerator g(selector);
-+ InstructionOperand temps[] = {g.TempRegister(edx)};
-+ selector->Emit(opcode, g.DefineAsFixed(node, eax),
-+ g.UseFixed(node->InputAt(0), eax),
-+ g.UseUnique(node->InputAt(1)), arraysize(temps), temps);
-+}
-+
-+
-+void VisitMod(InstructionSelector* selector, Node* node, ArchOpcode opcode) {
-+ X87OperandGenerator g(selector);
-+ InstructionOperand temps[] = {g.TempRegister(eax)};
-+ selector->Emit(opcode, g.DefineAsFixed(node, edx),
-+ g.UseFixed(node->InputAt(0), eax),
-+ g.UseUnique(node->InputAt(1)), arraysize(temps), temps);
-+}
-+
-+void EmitLea(InstructionSelector* selector, Node* result, Node* index,
-+ int scale, Node* base, Node* displacement,
-+ DisplacementMode displacement_mode) {
-+ X87OperandGenerator g(selector);
-+ InstructionOperand inputs[4];
-+ size_t input_count = 0;
-+ AddressingMode mode =
-+ g.GenerateMemoryOperandInputs(index, scale, base, displacement,
-+ displacement_mode, inputs, &input_count);
-+
-+ DCHECK_NE(0u, input_count);
-+ DCHECK_GE(arraysize(inputs), input_count);
-+
-+ InstructionOperand outputs[1];
-+ outputs[0] = g.DefineAsRegister(result);
-+
-+ InstructionCode opcode = AddressingModeField::encode(mode) | kX87Lea;
-+
-+ selector->Emit(opcode, 1, outputs, input_count, inputs);
-+}
-+
-+} // namespace
-+
-+
-+void InstructionSelector::VisitWord32Shl(Node* node) {
-+ Int32ScaleMatcher m(node, true);
-+ if (m.matches()) {
-+ Node* index = node->InputAt(0);
-+ Node* base = m.power_of_two_plus_one() ? index : nullptr;
-+ EmitLea(this, node, index, m.scale(), base, nullptr, kPositiveDisplacement);
-+ return;
-+ }
-+ VisitShift(this, node, kX87Shl);
-+}
-+
-+
-+void InstructionSelector::VisitWord32Shr(Node* node) {
-+ VisitShift(this, node, kX87Shr);
-+}
-+
-+
-+void InstructionSelector::VisitWord32Sar(Node* node) {
-+ VisitShift(this, node, kX87Sar);
-+}
-+
-+void InstructionSelector::VisitInt32PairAdd(Node* node) {
-+ X87OperandGenerator g(this);
-+
-+ Node* projection1 = NodeProperties::FindProjection(node, 1);
-+ if (projection1) {
-+ // We use UseUniqueRegister here to avoid register sharing with the temp
-+ // register.
-+ InstructionOperand inputs[] = {
-+ g.UseRegister(node->InputAt(0)), g.UseUniqueRegister(node->InputAt(1)),
-+ g.UseRegister(node->InputAt(2)), g.UseUniqueRegister(node->InputAt(3))};
-+
-+ InstructionOperand outputs[] = {g.DefineSameAsFirst(node),
-+ g.DefineAsRegister(projection1)};
-+
-+ InstructionOperand temps[] = {g.TempRegister()};
-+
-+ Emit(kX87AddPair, 2, outputs, 4, inputs, 1, temps);
-+ } else {
-+ // The high word of the result is not used, so we emit the standard 32 bit
-+ // instruction.
-+ Emit(kX87Add, g.DefineSameAsFirst(node), g.UseRegister(node->InputAt(0)),
-+ g.Use(node->InputAt(2)));
-+ }
-+}
-+
-+void InstructionSelector::VisitInt32PairSub(Node* node) {
-+ X87OperandGenerator g(this);
-+
-+ Node* projection1 = NodeProperties::FindProjection(node, 1);
-+ if (projection1) {
-+ // We use UseUniqueRegister here to avoid register sharing with the temp
-+ // register.
-+ InstructionOperand inputs[] = {
-+ g.UseRegister(node->InputAt(0)), g.UseUniqueRegister(node->InputAt(1)),
-+ g.UseRegister(node->InputAt(2)), g.UseUniqueRegister(node->InputAt(3))};
-+
-+ InstructionOperand outputs[] = {g.DefineSameAsFirst(node),
-+ g.DefineAsRegister(projection1)};
-+
-+ InstructionOperand temps[] = {g.TempRegister()};
-+
-+ Emit(kX87SubPair, 2, outputs, 4, inputs, 1, temps);
-+ } else {
-+ // The high word of the result is not used, so we emit the standard 32 bit
-+ // instruction.
-+ Emit(kX87Sub, g.DefineSameAsFirst(node), g.UseRegister(node->InputAt(0)),
-+ g.Use(node->InputAt(2)));
-+ }
-+}
-+
-+void InstructionSelector::VisitInt32PairMul(Node* node) {
-+ X87OperandGenerator g(this);
-+
-+ Node* projection1 = NodeProperties::FindProjection(node, 1);
-+ if (projection1) {
-+ // InputAt(3) explicitly shares ecx with OutputRegister(1) to save one
-+ // register and one mov instruction.
-+ InstructionOperand inputs[] = {g.UseUnique(node->InputAt(0)),
-+ g.UseUnique(node->InputAt(1)),
-+ g.UseUniqueRegister(node->InputAt(2)),
-+ g.UseFixed(node->InputAt(3), ecx)};
-+
-+ InstructionOperand outputs[] = {
-+ g.DefineAsFixed(node, eax),
-+ g.DefineAsFixed(NodeProperties::FindProjection(node, 1), ecx)};
-+
-+ InstructionOperand temps[] = {g.TempRegister(edx)};
-+
-+ Emit(kX87MulPair, 2, outputs, 4, inputs, 1, temps);
-+ } else {
-+ // The high word of the result is not used, so we emit the standard 32 bit
-+ // instruction.
-+ Emit(kX87Imul, g.DefineSameAsFirst(node), g.UseRegister(node->InputAt(0)),
-+ g.Use(node->InputAt(2)));
-+ }
-+}
-+
-+void VisitWord32PairShift(InstructionSelector* selector, InstructionCode opcode,
-+ Node* node) {
-+ X87OperandGenerator g(selector);
-+
-+ Node* shift = node->InputAt(2);
-+ InstructionOperand shift_operand;
-+ if (g.CanBeImmediate(shift)) {
-+ shift_operand = g.UseImmediate(shift);
-+ } else {
-+ shift_operand = g.UseFixed(shift, ecx);
-+ }
-+ InstructionOperand inputs[] = {g.UseFixed(node->InputAt(0), eax),
-+ g.UseFixed(node->InputAt(1), edx),
-+ shift_operand};
-+
-+ InstructionOperand outputs[2];
-+ InstructionOperand temps[1];
-+ int32_t output_count = 0;
-+ int32_t temp_count = 0;
-+ outputs[output_count++] = g.DefineAsFixed(node, eax);
-+ Node* projection1 = NodeProperties::FindProjection(node, 1);
-+ if (projection1) {
-+ outputs[output_count++] = g.DefineAsFixed(projection1, edx);
-+ } else {
-+ temps[temp_count++] = g.TempRegister(edx);
-+ }
-+
-+ selector->Emit(opcode, output_count, outputs, 3, inputs, temp_count, temps);
-+}
-+
-+void InstructionSelector::VisitWord32PairShl(Node* node) {
-+ VisitWord32PairShift(this, kX87ShlPair, node);
-+}
-+
-+void InstructionSelector::VisitWord32PairShr(Node* node) {
-+ VisitWord32PairShift(this, kX87ShrPair, node);
-+}
-+
-+void InstructionSelector::VisitWord32PairSar(Node* node) {
-+ VisitWord32PairShift(this, kX87SarPair, node);
-+}
-+
-+void InstructionSelector::VisitWord32Ror(Node* node) {
-+ VisitShift(this, node, kX87Ror);
-+}
-+
-+
-+void InstructionSelector::VisitWord32Clz(Node* node) {
-+ X87OperandGenerator g(this);
-+ Emit(kX87Lzcnt, g.DefineAsRegister(node), g.Use(node->InputAt(0)));
-+}
-+
-+
-+void InstructionSelector::VisitWord32Ctz(Node* node) { UNREACHABLE(); }
-+
-+
-+void InstructionSelector::VisitWord32ReverseBits(Node* node) { UNREACHABLE(); }
-+
-+void InstructionSelector::VisitWord64ReverseBytes(Node* node) { UNREACHABLE(); }
-+
-+void InstructionSelector::VisitWord32ReverseBytes(Node* node) { UNREACHABLE(); }
-+
-+void InstructionSelector::VisitWord32Popcnt(Node* node) {
-+ X87OperandGenerator g(this);
-+ Emit(kX87Popcnt, g.DefineAsRegister(node), g.Use(node->InputAt(0)));
-+}
-+
-+
-+void InstructionSelector::VisitInt32Add(Node* node) {
-+ X87OperandGenerator g(this);
-+
-+ // Try to match the Add to a lea pattern
-+ BaseWithIndexAndDisplacement32Matcher m(node);
-+ if (m.matches() &&
-+ (m.displacement() == nullptr || g.CanBeImmediate(m.displacement()))) {
-+ InstructionOperand inputs[4];
-+ size_t input_count = 0;
-+ AddressingMode mode = g.GenerateMemoryOperandInputs(
-+ m.index(), m.scale(), m.base(), m.displacement(), m.displacement_mode(),
-+ inputs, &input_count);
-+
-+ DCHECK_NE(0u, input_count);
-+ DCHECK_GE(arraysize(inputs), input_count);
-+
-+ InstructionOperand outputs[1];
-+ outputs[0] = g.DefineAsRegister(node);
-+
-+ InstructionCode opcode = AddressingModeField::encode(mode) | kX87Lea;
-+ Emit(opcode, 1, outputs, input_count, inputs);
-+ return;
-+ }
-+
-+ // No lea pattern match, use add
-+ VisitBinop(this, node, kX87Add);
-+}
-+
-+
-+void InstructionSelector::VisitInt32Sub(Node* node) {
-+ X87OperandGenerator g(this);
-+ Int32BinopMatcher m(node);
-+ if (m.left().Is(0)) {
-+ Emit(kX87Neg, g.DefineSameAsFirst(node), g.Use(m.right().node()));
-+ } else {
-+ VisitBinop(this, node, kX87Sub);
-+ }
-+}
-+
-+
-+void InstructionSelector::VisitInt32Mul(Node* node) {
-+ Int32ScaleMatcher m(node, true);
-+ if (m.matches()) {
-+ Node* index = node->InputAt(0);
-+ Node* base = m.power_of_two_plus_one() ? index : nullptr;
-+ EmitLea(this, node, index, m.scale(), base, nullptr, kPositiveDisplacement);
-+ return;
-+ }
-+ X87OperandGenerator g(this);
-+ Node* left = node->InputAt(0);
-+ Node* right = node->InputAt(1);
-+ if (g.CanBeImmediate(right)) {
-+ Emit(kX87Imul, g.DefineAsRegister(node), g.Use(left),
-+ g.UseImmediate(right));
-+ } else {
-+ if (g.CanBeBetterLeftOperand(right)) {
-+ std::swap(left, right);
-+ }
-+ Emit(kX87Imul, g.DefineSameAsFirst(node), g.UseRegister(left),
-+ g.Use(right));
-+ }
-+}
-+
-+
-+void InstructionSelector::VisitInt32MulHigh(Node* node) {
-+ VisitMulHigh(this, node, kX87ImulHigh);
-+}
-+
-+
-+void InstructionSelector::VisitUint32MulHigh(Node* node) {
-+ VisitMulHigh(this, node, kX87UmulHigh);
-+}
-+
-+
-+void InstructionSelector::VisitInt32Div(Node* node) {
-+ VisitDiv(this, node, kX87Idiv);
-+}
-+
-+
-+void InstructionSelector::VisitUint32Div(Node* node) {
-+ VisitDiv(this, node, kX87Udiv);
-+}
-+
-+
-+void InstructionSelector::VisitInt32Mod(Node* node) {
-+ VisitMod(this, node, kX87Idiv);
-+}
-+
-+
-+void InstructionSelector::VisitUint32Mod(Node* node) {
-+ VisitMod(this, node, kX87Udiv);
-+}
-+
-+
-+void InstructionSelector::VisitChangeFloat32ToFloat64(Node* node) {
-+ X87OperandGenerator g(this);
-+ Emit(kX87Float32ToFloat64, g.DefineAsFixed(node, stX_0),
-+ g.Use(node->InputAt(0)));
-+}
-+
-+
-+void InstructionSelector::VisitRoundInt32ToFloat32(Node* node) {
-+ X87OperandGenerator g(this);
-+ Emit(kX87Int32ToFloat32, g.DefineAsFixed(node, stX_0),
-+ g.Use(node->InputAt(0)));
-+}
-+
-+
-+void InstructionSelector::VisitRoundUint32ToFloat32(Node* node) {
-+ X87OperandGenerator g(this);
-+ Emit(kX87Uint32ToFloat32, g.DefineAsFixed(node, stX_0),
-+ g.Use(node->InputAt(0)));
-+}
-+
-+
-+void InstructionSelector::VisitChangeInt32ToFloat64(Node* node) {
-+ X87OperandGenerator g(this);
-+ Emit(kX87Int32ToFloat64, g.DefineAsFixed(node, stX_0),
-+ g.Use(node->InputAt(0)));
-+}
-+
-+
-+void InstructionSelector::VisitChangeUint32ToFloat64(Node* node) {
-+ X87OperandGenerator g(this);
-+ Emit(kX87Uint32ToFloat64, g.DefineAsFixed(node, stX_0),
-+ g.UseRegister(node->InputAt(0)));
-+}
-+
-+
-+void InstructionSelector::VisitTruncateFloat32ToInt32(Node* node) {
-+ X87OperandGenerator g(this);
-+ Emit(kX87Float32ToInt32, g.DefineAsRegister(node), g.Use(node->InputAt(0)));
-+}
-+
-+
-+void InstructionSelector::VisitTruncateFloat32ToUint32(Node* node) {
-+ X87OperandGenerator g(this);
-+ Emit(kX87Float32ToUint32, g.DefineAsRegister(node), g.Use(node->InputAt(0)));
-+}
-+
-+
-+void InstructionSelector::VisitChangeFloat64ToInt32(Node* node) {
-+ X87OperandGenerator g(this);
-+ Emit(kX87Float64ToInt32, g.DefineAsRegister(node), g.Use(node->InputAt(0)));
-+}
-+
-+
-+void InstructionSelector::VisitChangeFloat64ToUint32(Node* node) {
-+ X87OperandGenerator g(this);
-+ Emit(kX87Float64ToUint32, g.DefineAsRegister(node), g.Use(node->InputAt(0)));
-+}
-+
-+void InstructionSelector::VisitTruncateFloat64ToUint32(Node* node) {
-+ X87OperandGenerator g(this);
-+ Emit(kX87Float64ToUint32, g.DefineAsRegister(node), g.Use(node->InputAt(0)));
-+}
-+
-+void InstructionSelector::VisitTruncateFloat64ToFloat32(Node* node) {
-+ X87OperandGenerator g(this);
-+ Emit(kX87Float64ToFloat32, g.DefineAsFixed(node, stX_0),
-+ g.Use(node->InputAt(0)));
-+}
-+
-+void InstructionSelector::VisitTruncateFloat64ToWord32(Node* node) {
-+ X87OperandGenerator g(this);
-+ Emit(kArchTruncateDoubleToI, g.DefineAsRegister(node),
-+ g.Use(node->InputAt(0)));
-+}
-+
-+void InstructionSelector::VisitRoundFloat64ToInt32(Node* node) {
-+ X87OperandGenerator g(this);
-+ Emit(kX87Float64ToInt32, g.DefineAsRegister(node), g.Use(node->InputAt(0)));
-+}
-+
-+
-+void InstructionSelector::VisitBitcastFloat32ToInt32(Node* node) {
-+ X87OperandGenerator g(this);
-+ Emit(kX87PushFloat32, g.NoOutput(), g.Use(node->InputAt(0)));
-+ Emit(kX87BitcastFI, g.DefineAsRegister(node), 0, nullptr);
-+}
-+
-+
-+void InstructionSelector::VisitBitcastInt32ToFloat32(Node* node) {
-+ X87OperandGenerator g(this);
-+ Emit(kX87BitcastIF, g.DefineAsFixed(node, stX_0), g.Use(node->InputAt(0)));
-+}
-+
-+
-+void InstructionSelector::VisitFloat32Add(Node* node) {
-+ X87OperandGenerator g(this);
-+ Emit(kX87PushFloat32, g.NoOutput(), g.Use(node->InputAt(0)));
-+ Emit(kX87PushFloat32, g.NoOutput(), g.Use(node->InputAt(1)));
-+ Emit(kX87Float32Add, g.DefineAsFixed(node, stX_0), 0, nullptr);
-+}
-+
-+
-+void InstructionSelector::VisitFloat64Add(Node* node) {
-+ X87OperandGenerator g(this);
-+ Emit(kX87PushFloat64, g.NoOutput(), g.Use(node->InputAt(0)));
-+ Emit(kX87PushFloat64, g.NoOutput(), g.Use(node->InputAt(1)));
-+ Emit(kX87Float64Add, g.DefineAsFixed(node, stX_0), 0, nullptr);
-+}
-+
-+
-+void InstructionSelector::VisitFloat32Sub(Node* node) {
-+ X87OperandGenerator g(this);
-+ Emit(kX87PushFloat32, g.NoOutput(), g.Use(node->InputAt(0)));
-+ Emit(kX87PushFloat32, g.NoOutput(), g.Use(node->InputAt(1)));
-+ Emit(kX87Float32Sub, g.DefineAsFixed(node, stX_0), 0, nullptr);
-+}
-+
-+void InstructionSelector::VisitFloat64Sub(Node* node) {
-+ X87OperandGenerator g(this);
-+ Emit(kX87PushFloat64, g.NoOutput(), g.Use(node->InputAt(0)));
-+ Emit(kX87PushFloat64, g.NoOutput(), g.Use(node->InputAt(1)));
-+ Emit(kX87Float64Sub, g.DefineAsFixed(node, stX_0), 0, nullptr);
-+}
-+
-+void InstructionSelector::VisitFloat32Mul(Node* node) {
-+ X87OperandGenerator g(this);
-+ Emit(kX87PushFloat32, g.NoOutput(), g.Use(node->InputAt(0)));
-+ Emit(kX87PushFloat32, g.NoOutput(), g.Use(node->InputAt(1)));
-+ Emit(kX87Float32Mul, g.DefineAsFixed(node, stX_0), 0, nullptr);
-+}
-+
-+
-+void InstructionSelector::VisitFloat64Mul(Node* node) {
-+ X87OperandGenerator g(this);
-+ Emit(kX87PushFloat64, g.NoOutput(), g.Use(node->InputAt(0)));
-+ Emit(kX87PushFloat64, g.NoOutput(), g.Use(node->InputAt(1)));
-+ Emit(kX87Float64Mul, g.DefineAsFixed(node, stX_0), 0, nullptr);
-+}
-+
-+
-+void InstructionSelector::VisitFloat32Div(Node* node) {
-+ X87OperandGenerator g(this);
-+ Emit(kX87PushFloat32, g.NoOutput(), g.Use(node->InputAt(0)));
-+ Emit(kX87PushFloat32, g.NoOutput(), g.Use(node->InputAt(1)));
-+ Emit(kX87Float32Div, g.DefineAsFixed(node, stX_0), 0, nullptr);
-+}
-+
-+
-+void InstructionSelector::VisitFloat64Div(Node* node) {
-+ X87OperandGenerator g(this);
-+ Emit(kX87PushFloat64, g.NoOutput(), g.Use(node->InputAt(0)));
-+ Emit(kX87PushFloat64, g.NoOutput(), g.Use(node->InputAt(1)));
-+ Emit(kX87Float64Div, g.DefineAsFixed(node, stX_0), 0, nullptr);
-+}
-+
-+
-+void InstructionSelector::VisitFloat64Mod(Node* node) {
-+ X87OperandGenerator g(this);
-+ InstructionOperand temps[] = {g.TempRegister(eax)};
-+ Emit(kX87PushFloat64, g.NoOutput(), g.Use(node->InputAt(0)));
-+ Emit(kX87PushFloat64, g.NoOutput(), g.Use(node->InputAt(1)));
-+ Emit(kX87Float64Mod, g.DefineAsFixed(node, stX_0), 1, temps)->MarkAsCall();
-+}
-+
-+void InstructionSelector::VisitFloat32Max(Node* node) {
-+ X87OperandGenerator g(this);
-+ Emit(kX87PushFloat32, g.NoOutput(), g.Use(node->InputAt(0)));
-+ Emit(kX87PushFloat32, g.NoOutput(), g.Use(node->InputAt(1)));
-+ Emit(kX87Float32Max, g.DefineAsFixed(node, stX_0), 0, nullptr);
-+}
-+
-+void InstructionSelector::VisitFloat64Max(Node* node) {
-+ X87OperandGenerator g(this);
-+ Emit(kX87PushFloat64, g.NoOutput(), g.Use(node->InputAt(0)));
-+ Emit(kX87PushFloat64, g.NoOutput(), g.Use(node->InputAt(1)));
-+ Emit(kX87Float64Max, g.DefineAsFixed(node, stX_0), 0, nullptr);
-+}
-+
-+void InstructionSelector::VisitFloat32Min(Node* node) {
-+ X87OperandGenerator g(this);
-+ Emit(kX87PushFloat32, g.NoOutput(), g.Use(node->InputAt(0)));
-+ Emit(kX87PushFloat32, g.NoOutput(), g.Use(node->InputAt(1)));
-+ Emit(kX87Float32Min, g.DefineAsFixed(node, stX_0), 0, nullptr);
-+}
-+
-+void InstructionSelector::VisitFloat64Min(Node* node) {
-+ X87OperandGenerator g(this);
-+ Emit(kX87PushFloat64, g.NoOutput(), g.Use(node->InputAt(0)));
-+ Emit(kX87PushFloat64, g.NoOutput(), g.Use(node->InputAt(1)));
-+ Emit(kX87Float64Min, g.DefineAsFixed(node, stX_0), 0, nullptr);
-+}
-+
-+
-+void InstructionSelector::VisitFloat32Abs(Node* node) {
-+ X87OperandGenerator g(this);
-+ Emit(kX87PushFloat32, g.NoOutput(), g.Use(node->InputAt(0)));
-+ Emit(kX87Float32Abs, g.DefineAsFixed(node, stX_0), 0, nullptr);
-+}
-+
-+
-+void InstructionSelector::VisitFloat64Abs(Node* node) {
-+ X87OperandGenerator g(this);
-+ Emit(kX87PushFloat64, g.NoOutput(), g.Use(node->InputAt(0)));
-+ Emit(kX87Float64Abs, g.DefineAsFixed(node, stX_0), 0, nullptr);
-+}
-+
-+void InstructionSelector::VisitFloat32Sqrt(Node* node) {
-+ X87OperandGenerator g(this);
-+ Emit(kX87PushFloat32, g.NoOutput(), g.Use(node->InputAt(0)));
-+ Emit(kX87Float32Sqrt, g.DefineAsFixed(node, stX_0), 0, nullptr);
-+}
-+
-+
-+void InstructionSelector::VisitFloat64Sqrt(Node* node) {
-+ X87OperandGenerator g(this);
-+ Emit(kX87PushFloat64, g.NoOutput(), g.Use(node->InputAt(0)));
-+ Emit(kX87Float64Sqrt, g.DefineAsFixed(node, stX_0), 0, nullptr);
-+}
-+
-+
-+void InstructionSelector::VisitFloat32RoundDown(Node* node) {
-+ X87OperandGenerator g(this);
-+ Emit(kX87Float32Round | MiscField::encode(kRoundDown),
-+ g.UseFixed(node, stX_0), g.Use(node->InputAt(0)));
-+}
-+
-+
-+void InstructionSelector::VisitFloat64RoundDown(Node* node) {
-+ X87OperandGenerator g(this);
-+ Emit(kX87Float64Round | MiscField::encode(kRoundDown),
-+ g.UseFixed(node, stX_0), g.Use(node->InputAt(0)));
-+}
-+
-+
-+void InstructionSelector::VisitFloat32RoundUp(Node* node) {
-+ X87OperandGenerator g(this);
-+ Emit(kX87Float32Round | MiscField::encode(kRoundUp), g.UseFixed(node, stX_0),
-+ g.Use(node->InputAt(0)));
-+}
-+
-+
-+void InstructionSelector::VisitFloat64RoundUp(Node* node) {
-+ X87OperandGenerator g(this);
-+ Emit(kX87Float64Round | MiscField::encode(kRoundUp), g.UseFixed(node, stX_0),
-+ g.Use(node->InputAt(0)));
-+}
-+
-+
-+void InstructionSelector::VisitFloat32RoundTruncate(Node* node) {
-+ X87OperandGenerator g(this);
-+ Emit(kX87Float32Round | MiscField::encode(kRoundToZero),
-+ g.UseFixed(node, stX_0), g.Use(node->InputAt(0)));
-+}
-+
-+
-+void InstructionSelector::VisitFloat64RoundTruncate(Node* node) {
-+ X87OperandGenerator g(this);
-+ Emit(kX87Float64Round | MiscField::encode(kRoundToZero),
-+ g.UseFixed(node, stX_0), g.Use(node->InputAt(0)));
-+}
-+
-+
-+void InstructionSelector::VisitFloat64RoundTiesAway(Node* node) {
-+ UNREACHABLE();
-+}
-+
-+
-+void InstructionSelector::VisitFloat32RoundTiesEven(Node* node) {
-+ X87OperandGenerator g(this);
-+ Emit(kX87Float32Round | MiscField::encode(kRoundToNearest),
-+ g.UseFixed(node, stX_0), g.Use(node->InputAt(0)));
-+}
-+
-+
-+void InstructionSelector::VisitFloat64RoundTiesEven(Node* node) {
-+ X87OperandGenerator g(this);
-+ Emit(kX87Float64Round | MiscField::encode(kRoundToNearest),
-+ g.UseFixed(node, stX_0), g.Use(node->InputAt(0)));
-+}
-+
-+void InstructionSelector::VisitFloat32Neg(Node* node) {
-+ X87OperandGenerator g(this);
-+ Emit(kX87PushFloat32, g.NoOutput(), g.Use(node->InputAt(0)));
-+ Emit(kX87Float32Neg, g.DefineAsFixed(node, stX_0), 0, nullptr);
-+}
-+
-+void InstructionSelector::VisitFloat64Neg(Node* node) {
-+ X87OperandGenerator g(this);
-+ Emit(kX87PushFloat64, g.NoOutput(), g.Use(node->InputAt(0)));
-+ Emit(kX87Float64Neg, g.DefineAsFixed(node, stX_0), 0, nullptr);
-+}
-+
-+void InstructionSelector::VisitFloat64Ieee754Binop(Node* node,
-+ InstructionCode opcode) {
-+ X87OperandGenerator g(this);
-+ Emit(kX87PushFloat64, g.NoOutput(), g.Use(node->InputAt(0)));
-+ Emit(kX87PushFloat64, g.NoOutput(), g.Use(node->InputAt(1)));
-+ Emit(opcode, g.DefineAsFixed(node, stX_0), 0, nullptr)->MarkAsCall();
-+}
-+
-+void InstructionSelector::VisitFloat64Ieee754Unop(Node* node,
-+ InstructionCode opcode) {
-+ X87OperandGenerator g(this);
-+ Emit(kX87PushFloat64, g.NoOutput(), g.Use(node->InputAt(0)));
-+ Emit(opcode, g.DefineAsFixed(node, stX_0), 0, nullptr)->MarkAsCall();
-+}
-+
-+void InstructionSelector::EmitPrepareArguments(
-+ ZoneVector<PushParameter>* arguments, const CallDescriptor* descriptor,
-+ Node* node) {
-+ X87OperandGenerator g(this);
-+
-+ // Prepare for C function call.
-+ if (descriptor->IsCFunctionCall()) {
-+ InstructionOperand temps[] = {g.TempRegister()};
-+ size_t const temp_count = arraysize(temps);
-+ Emit(kArchPrepareCallCFunction |
-+          MiscField::encode(static_cast<int>(descriptor->ParameterCount())),
-+ 0, nullptr, 0, nullptr, temp_count, temps);
-+
-+ // Poke any stack arguments.
-+ for (size_t n = 0; n < arguments->size(); ++n) {
-+ PushParameter input = (*arguments)[n];
-+ if (input.node()) {
-+ int const slot = static_cast<int>(n);
-+ InstructionOperand value = g.CanBeImmediate(input.node())
-+ ? g.UseImmediate(input.node())
-+ : g.UseRegister(input.node());
-+ Emit(kX87Poke | MiscField::encode(slot), g.NoOutput(), value);
-+ }
-+ }
-+ } else {
-+ // Push any stack arguments.
-+ for (PushParameter input : base::Reversed(*arguments)) {
-+ // TODO(titzer): handle pushing double parameters.
-+ if (input.node() == nullptr) continue;
-+ InstructionOperand value =
-+ g.CanBeImmediate(input.node())
-+ ? g.UseImmediate(input.node())
-+ : IsSupported(ATOM) ||
-+ sequence()->IsFP(GetVirtualRegister(input.node()))
-+ ? g.UseRegister(input.node())
-+ : g.Use(input.node());
-+ Emit(kX87Push, g.NoOutput(), value);
-+ }
-+ }
-+}
-+
-+
-+bool InstructionSelector::IsTailCallAddressImmediate() { return true; }
-+
-+int InstructionSelector::GetTempsCountForTailCallFromJSFunction() { return 0; }
-+
-+namespace {
-+
-+void VisitCompareWithMemoryOperand(InstructionSelector* selector,
-+ InstructionCode opcode, Node* left,
-+ InstructionOperand right,
-+ FlagsContinuation* cont) {
-+ DCHECK(left->opcode() == IrOpcode::kLoad);
-+ X87OperandGenerator g(selector);
-+ size_t input_count = 0;
-+ InstructionOperand inputs[6];
-+ AddressingMode addressing_mode =
-+ g.GetEffectiveAddressMemoryOperand(left, inputs, &input_count);
-+ opcode |= AddressingModeField::encode(addressing_mode);
-+ opcode = cont->Encode(opcode);
-+ inputs[input_count++] = right;
-+
-+ if (cont->IsBranch()) {
-+ inputs[input_count++] = g.Label(cont->true_block());
-+ inputs[input_count++] = g.Label(cont->false_block());
-+ selector->Emit(opcode, 0, nullptr, input_count, inputs);
-+ } else if (cont->IsDeoptimize()) {
-+ selector->EmitDeoptimize(opcode, 0, nullptr, input_count, inputs,
-+ cont->kind(), cont->reason(), cont->frame_state());
-+ } else if (cont->IsSet()) {
-+ InstructionOperand output = g.DefineAsRegister(cont->result());
-+ selector->Emit(opcode, 1, &output, input_count, inputs);
-+ } else {
-+ DCHECK(cont->IsTrap());
-+ inputs[input_count++] = g.UseImmediate(cont->trap_id());
-+ selector->Emit(opcode, 0, nullptr, input_count, inputs);
-+ }
-+}
-+
-+// Shared routine for multiple compare operations.
-+void VisitCompare(InstructionSelector* selector, InstructionCode opcode,
-+ InstructionOperand left, InstructionOperand right,
-+ FlagsContinuation* cont) {
-+ X87OperandGenerator g(selector);
-+ opcode = cont->Encode(opcode);
-+ if (cont->IsBranch()) {
-+ selector->Emit(opcode, g.NoOutput(), left, right,
-+ g.Label(cont->true_block()), g.Label(cont->false_block()));
-+ } else if (cont->IsDeoptimize()) {
-+ selector->EmitDeoptimize(opcode, g.NoOutput(), left, right, cont->kind(),
-+ cont->reason(), cont->frame_state());
-+ } else if (cont->IsSet()) {
-+ selector->Emit(opcode, g.DefineAsByteRegister(cont->result()), left, right);
-+ } else {
-+ DCHECK(cont->IsTrap());
-+ selector->Emit(opcode, g.NoOutput(), left, right,
-+ g.UseImmediate(cont->trap_id()));
-+ }
-+}
-+
-+
-+// Shared routine for multiple compare operations.
-+void VisitCompare(InstructionSelector* selector, InstructionCode opcode,
-+ Node* left, Node* right, FlagsContinuation* cont,
-+ bool commutative) {
-+ X87OperandGenerator g(selector);
-+ if (commutative && g.CanBeBetterLeftOperand(right)) {
-+ std::swap(left, right);
-+ }
-+ VisitCompare(selector, opcode, g.UseRegister(left), g.Use(right), cont);
-+}
-+
-+MachineType MachineTypeForNarrow(Node* node, Node* hint_node) {
-+ if (hint_node->opcode() == IrOpcode::kLoad) {
-+ MachineType hint = LoadRepresentationOf(hint_node->op());
-+ if (node->opcode() == IrOpcode::kInt32Constant ||
-+ node->opcode() == IrOpcode::kInt64Constant) {
-+ int64_t constant = node->opcode() == IrOpcode::kInt32Constant
-+ ? OpParameter<int32_t>(node)
-+ : OpParameter<int64_t>(node);
-+ if (hint == MachineType::Int8()) {
-+ if (constant >= std::numeric_limits<int8_t>::min() &&
-+ constant <= std::numeric_limits<int8_t>::max()) {
-+ return hint;
-+ }
-+ } else if (hint == MachineType::Uint8()) {
-+ if (constant >= std::numeric_limits<uint8_t>::min() &&
-+ constant <= std::numeric_limits<uint8_t>::max()) {
-+ return hint;
-+ }
-+ } else if (hint == MachineType::Int16()) {
-+ if (constant >= std::numeric_limits<int16_t>::min() &&
-+ constant <= std::numeric_limits<int16_t>::max()) {
-+ return hint;
-+ }
-+ } else if (hint == MachineType::Uint16()) {
-+ if (constant >= std::numeric_limits<uint16_t>::min() &&
-+ constant <= std::numeric_limits<uint16_t>::max()) {
-+ return hint;
-+ }
-+ } else if (hint == MachineType::Int32()) {
-+ return hint;
-+ } else if (hint == MachineType::Uint32()) {
-+ if (constant >= 0) return hint;
-+ }
-+ }
-+ }
-+ return node->opcode() == IrOpcode::kLoad ? LoadRepresentationOf(node->op())
-+ : MachineType::None();
-+}
-+
-+// Tries to match the size of the given opcode to that of the operands, if
-+// possible.
-+InstructionCode TryNarrowOpcodeSize(InstructionCode opcode, Node* left,
-+ Node* right, FlagsContinuation* cont) {
-+ // TODO(epertoso): we can probably get some size information out of phi nodes.
-+ // If the load representations don't match, both operands will be
-+ // zero/sign-extended to 32bit.
-+ MachineType left_type = MachineTypeForNarrow(left, right);
-+ MachineType right_type = MachineTypeForNarrow(right, left);
-+ if (left_type == right_type) {
-+ switch (left_type.representation()) {
-+ case MachineRepresentation::kBit:
-+ case MachineRepresentation::kWord8: {
-+ if (opcode == kX87Test) return kX87Test8;
-+ if (opcode == kX87Cmp) {
-+ if (left_type.semantic() == MachineSemantic::kUint32) {
-+ cont->OverwriteUnsignedIfSigned();
-+ } else {
-+ CHECK_EQ(MachineSemantic::kInt32, left_type.semantic());
-+ }
-+ return kX87Cmp8;
-+ }
-+ break;
-+ }
-+ case MachineRepresentation::kWord16:
-+ if (opcode == kX87Test) return kX87Test16;
-+ if (opcode == kX87Cmp) {
-+ if (left_type.semantic() == MachineSemantic::kUint32) {
-+ cont->OverwriteUnsignedIfSigned();
-+ } else {
-+ CHECK_EQ(MachineSemantic::kInt32, left_type.semantic());
-+ }
-+ return kX87Cmp16;
-+ }
-+ break;
-+ default:
-+ break;
-+ }
-+ }
-+ return opcode;
-+}
-+
-+// Shared routine for multiple float32 compare operations (inputs commuted).
-+void VisitFloat32Compare(InstructionSelector* selector, Node* node,
-+ FlagsContinuation* cont) {
-+ X87OperandGenerator g(selector);
-+ selector->Emit(kX87PushFloat32, g.NoOutput(), g.Use(node->InputAt(0)));
-+ selector->Emit(kX87PushFloat32, g.NoOutput(), g.Use(node->InputAt(1)));
-+ if (cont->IsBranch()) {
-+ selector->Emit(cont->Encode(kX87Float32Cmp), g.NoOutput(),
-+ g.Label(cont->true_block()), g.Label(cont->false_block()));
-+ } else if (cont->IsDeoptimize()) {
-+ selector->EmitDeoptimize(cont->Encode(kX87Float32Cmp), g.NoOutput(),
-+ g.Use(node->InputAt(0)), g.Use(node->InputAt(1)),
-+ cont->kind(), cont->reason(), cont->frame_state());
-+ } else if (cont->IsSet()) {
-+ selector->Emit(cont->Encode(kX87Float32Cmp),
-+ g.DefineAsByteRegister(cont->result()));
-+ } else {
-+ DCHECK(cont->IsTrap());
-+ selector->Emit(cont->Encode(kX87Float32Cmp), g.NoOutput(),
-+ g.UseImmediate(cont->trap_id()));
-+ }
-+}
-+
-+
-+// Shared routine for multiple float64 compare operations (inputs commuted).
-+void VisitFloat64Compare(InstructionSelector* selector, Node* node,
-+ FlagsContinuation* cont) {
-+ X87OperandGenerator g(selector);
-+ selector->Emit(kX87PushFloat64, g.NoOutput(), g.Use(node->InputAt(0)));
-+ selector->Emit(kX87PushFloat64, g.NoOutput(), g.Use(node->InputAt(1)));
-+ if (cont->IsBranch()) {
-+ selector->Emit(cont->Encode(kX87Float64Cmp), g.NoOutput(),
-+ g.Label(cont->true_block()), g.Label(cont->false_block()));
-+ } else if (cont->IsDeoptimize()) {
-+ selector->EmitDeoptimize(cont->Encode(kX87Float64Cmp), g.NoOutput(),
-+ g.Use(node->InputAt(0)), g.Use(node->InputAt(1)),
-+ cont->kind(), cont->reason(), cont->frame_state());
-+ } else if (cont->IsSet()) {
-+ selector->Emit(cont->Encode(kX87Float64Cmp),
-+ g.DefineAsByteRegister(cont->result()));
-+ } else {
-+ DCHECK(cont->IsTrap());
-+ selector->Emit(cont->Encode(kX87Float64Cmp), g.NoOutput(),
-+ g.UseImmediate(cont->trap_id()));
-+ }
-+}
-+
-+// Shared routine for multiple word compare operations.
-+void VisitWordCompare(InstructionSelector* selector, Node* node,
-+ InstructionCode opcode, FlagsContinuation* cont) {
-+ X87OperandGenerator g(selector);
-+ Node* left = node->InputAt(0);
-+ Node* right = node->InputAt(1);
-+
-+ InstructionCode narrowed_opcode =
-+ TryNarrowOpcodeSize(opcode, left, right, cont);
-+
-+ int effect_level = selector->GetEffectLevel(node);
-+ if (cont->IsBranch()) {
-+ effect_level = selector->GetEffectLevel(
-+ cont->true_block()->PredecessorAt(0)->control_input());
-+ }
-+
-+ // If one of the two inputs is an immediate, make sure it's on the right, or
-+ // if one of the two inputs is a memory operand, make sure it's on the left.
-+ if ((!g.CanBeImmediate(right) && g.CanBeImmediate(left)) ||
-+ (g.CanBeMemoryOperand(narrowed_opcode, node, right, effect_level) &&
-+ !g.CanBeMemoryOperand(narrowed_opcode, node, left, effect_level))) {
-+ if (!node->op()->HasProperty(Operator::kCommutative)) cont->Commute();
-+ std::swap(left, right);
-+ }
-+
-+ // Match immediates on right side of comparison.
-+ if (g.CanBeImmediate(right)) {
-+ if (g.CanBeMemoryOperand(narrowed_opcode, node, left, effect_level)) {
-+ return VisitCompareWithMemoryOperand(selector, narrowed_opcode, left,
-+ g.UseImmediate(right), cont);
-+ }
-+ return VisitCompare(selector, opcode, g.Use(left), g.UseImmediate(right),
-+ cont);
-+ }
-+
-+ // Match memory operands on left side of comparison.
-+ if (g.CanBeMemoryOperand(narrowed_opcode, node, left, effect_level)) {
-+ bool needs_byte_register =
-+ narrowed_opcode == kX87Test8 || narrowed_opcode == kX87Cmp8;
-+ return VisitCompareWithMemoryOperand(
-+ selector, narrowed_opcode, left,
-+ needs_byte_register ? g.UseByteRegister(right) : g.UseRegister(right),
-+ cont);
-+ }
-+
-+ if (g.CanBeBetterLeftOperand(right)) {
-+ if (!node->op()->HasProperty(Operator::kCommutative)) cont->Commute();
-+ std::swap(left, right);
-+ }
-+
-+ return VisitCompare(selector, opcode, left, right, cont,
-+ node->op()->HasProperty(Operator::kCommutative));
-+}
-+
-+void VisitWordCompare(InstructionSelector* selector, Node* node,
-+ FlagsContinuation* cont) {
-+ X87OperandGenerator g(selector);
-+ Int32BinopMatcher m(node);
-+ if (m.left().IsLoad() && m.right().IsLoadStackPointer()) {
-+ LoadMatcher<ExternalReferenceMatcher> mleft(m.left().node());
-+ ExternalReference js_stack_limit =
-+ ExternalReference::address_of_stack_limit(selector->isolate());
-+ if (mleft.object().Is(js_stack_limit) && mleft.index().Is(0)) {
-+ // Compare(Load(js_stack_limit), LoadStackPointer)
-+ if (!node->op()->HasProperty(Operator::kCommutative)) cont->Commute();
-+ InstructionCode opcode = cont->Encode(kX87StackCheck);
-+ if (cont->IsBranch()) {
-+ selector->Emit(opcode, g.NoOutput(), g.Label(cont->true_block()),
-+ g.Label(cont->false_block()));
-+ } else if (cont->IsDeoptimize()) {
-+ selector->EmitDeoptimize(opcode, 0, nullptr, 0, nullptr, cont->kind(),
-+ cont->reason(), cont->frame_state());
-+ } else {
-+ DCHECK(cont->IsSet());
-+ selector->Emit(opcode, g.DefineAsRegister(cont->result()));
-+ }
-+ return;
-+ }
-+ }
-+ VisitWordCompare(selector, node, kX87Cmp, cont);
-+}
-+
-+
-+// Shared routine for word comparison with zero.
-+void VisitWordCompareZero(InstructionSelector* selector, Node* user,
-+ Node* value, FlagsContinuation* cont) {
-+ // Try to combine with comparisons against 0 by simply inverting the branch.
-+ while (value->opcode() == IrOpcode::kWord32Equal &&
-+ selector->CanCover(user, value)) {
-+ Int32BinopMatcher m(value);
-+ if (!m.right().Is(0)) break;
-+
-+ user = value;
-+ value = m.left().node();
-+ cont->Negate();
-+ }
-+
-+ if (selector->CanCover(user, value)) {
-+ switch (value->opcode()) {
-+ case IrOpcode::kWord32Equal:
-+ cont->OverwriteAndNegateIfEqual(kEqual);
-+ return VisitWordCompare(selector, value, cont);
-+ case IrOpcode::kInt32LessThan:
-+ cont->OverwriteAndNegateIfEqual(kSignedLessThan);
-+ return VisitWordCompare(selector, value, cont);
-+ case IrOpcode::kInt32LessThanOrEqual:
-+ cont->OverwriteAndNegateIfEqual(kSignedLessThanOrEqual);
-+ return VisitWordCompare(selector, value, cont);
-+ case IrOpcode::kUint32LessThan:
-+ cont->OverwriteAndNegateIfEqual(kUnsignedLessThan);
-+ return VisitWordCompare(selector, value, cont);
-+ case IrOpcode::kUint32LessThanOrEqual:
-+ cont->OverwriteAndNegateIfEqual(kUnsignedLessThanOrEqual);
-+ return VisitWordCompare(selector, value, cont);
-+ case IrOpcode::kFloat32Equal:
-+ cont->OverwriteAndNegateIfEqual(kUnorderedEqual);
-+ return VisitFloat32Compare(selector, value, cont);
-+ case IrOpcode::kFloat32LessThan:
-+ cont->OverwriteAndNegateIfEqual(kUnsignedGreaterThan);
-+ return VisitFloat32Compare(selector, value, cont);
-+ case IrOpcode::kFloat32LessThanOrEqual:
-+ cont->OverwriteAndNegateIfEqual(kUnsignedGreaterThanOrEqual);
-+ return VisitFloat32Compare(selector, value, cont);
-+ case IrOpcode::kFloat64Equal:
-+ cont->OverwriteAndNegateIfEqual(kUnorderedEqual);
-+ return VisitFloat64Compare(selector, value, cont);
-+ case IrOpcode::kFloat64LessThan:
-+ cont->OverwriteAndNegateIfEqual(kUnsignedGreaterThan);
-+ return VisitFloat64Compare(selector, value, cont);
-+ case IrOpcode::kFloat64LessThanOrEqual:
-+ cont->OverwriteAndNegateIfEqual(kUnsignedGreaterThanOrEqual);
-+ return VisitFloat64Compare(selector, value, cont);
-+ case IrOpcode::kProjection:
-+ // Check if this is the overflow output projection of an
-+ // <Operation>WithOverflow node.
-+ if (ProjectionIndexOf(value->op()) == 1u) {
-+ // We cannot combine the <Operation>WithOverflow with this branch
-+ // unless the 0th projection (the use of the actual value of the
-+ // <Operation> is either nullptr, which means there's no use of the
-+ // actual value, or was already defined, which means it is scheduled
-+ // *AFTER* this branch).
-+ Node* const node = value->InputAt(0);
-+ Node* const result = NodeProperties::FindProjection(node, 0);
-+ if (result == nullptr || selector->IsDefined(result)) {
-+ switch (node->opcode()) {
-+ case IrOpcode::kInt32AddWithOverflow:
-+ cont->OverwriteAndNegateIfEqual(kOverflow);
-+ return VisitBinop(selector, node, kX87Add, cont);
-+ case IrOpcode::kInt32SubWithOverflow:
-+ cont->OverwriteAndNegateIfEqual(kOverflow);
-+ return VisitBinop(selector, node, kX87Sub, cont);
-+ case IrOpcode::kInt32MulWithOverflow:
-+ cont->OverwriteAndNegateIfEqual(kOverflow);
-+ return VisitBinop(selector, node, kX87Imul, cont);
-+ default:
-+ break;
-+ }
-+ }
-+ }
-+ break;
-+ case IrOpcode::kInt32Sub:
-+ return VisitWordCompare(selector, value, cont);
-+ case IrOpcode::kWord32And:
-+ return VisitWordCompare(selector, value, kX87Test, cont);
-+ default:
-+ break;
-+ }
-+ }
-+
-+ // Continuation could not be combined with a compare, emit compare against 0.
-+ X87OperandGenerator g(selector);
-+ VisitCompare(selector, kX87Cmp, g.Use(value), g.TempImmediate(0), cont);
-+}
-+
-+} // namespace
-+
-+
-+void InstructionSelector::VisitBranch(Node* branch, BasicBlock* tbranch,
-+ BasicBlock* fbranch) {
-+ FlagsContinuation cont(kNotEqual, tbranch, fbranch);
-+ VisitWordCompareZero(this, branch, branch->InputAt(0), &cont);
-+}
-+
-+void InstructionSelector::VisitDeoptimizeIf(Node* node) {
-+ DeoptimizeParameters p = DeoptimizeParametersOf(node->op());
-+ FlagsContinuation cont = FlagsContinuation::ForDeoptimize(
-+ kNotEqual, p.kind(), p.reason(), node->InputAt(1));
-+ VisitWordCompareZero(this, node, node->InputAt(0), &cont);
-+}
-+
-+void InstructionSelector::VisitDeoptimizeUnless(Node* node) {
-+ DeoptimizeParameters p = DeoptimizeParametersOf(node->op());
-+ FlagsContinuation cont = FlagsContinuation::ForDeoptimize(
-+ kEqual, p.kind(), p.reason(), node->InputAt(1));
-+ VisitWordCompareZero(this, node, node->InputAt(0), &cont);
-+}
-+
-+void InstructionSelector::VisitTrapIf(Node* node, Runtime::FunctionId func_id) {
-+ FlagsContinuation cont =
-+ FlagsContinuation::ForTrap(kNotEqual, func_id, node->InputAt(1));
-+ VisitWordCompareZero(this, node, node->InputAt(0), &cont);
-+}
-+
-+void InstructionSelector::VisitTrapUnless(Node* node,
-+ Runtime::FunctionId func_id) {
-+ FlagsContinuation cont =
-+ FlagsContinuation::ForTrap(kEqual, func_id, node->InputAt(1));
-+ VisitWordCompareZero(this, node, node->InputAt(0), &cont);
-+}
-+
-+void InstructionSelector::VisitSwitch(Node* node, const SwitchInfo& sw) {
-+ X87OperandGenerator g(this);
-+ InstructionOperand value_operand = g.UseRegister(node->InputAt(0));
-+
-+ // Emit either ArchTableSwitch or ArchLookupSwitch.
-+ static const size_t kMaxTableSwitchValueRange = 2 << 16;
-+ size_t table_space_cost = 4 + sw.value_range;
-+ size_t table_time_cost = 3;
-+ size_t lookup_space_cost = 3 + 2 * sw.case_count;
-+ size_t lookup_time_cost = sw.case_count;
-+ if (sw.case_count > 4 &&
-+ table_space_cost + 3 * table_time_cost <=
-+ lookup_space_cost + 3 * lookup_time_cost &&
-+ sw.min_value > std::numeric_limits<int32_t>::min() &&
-+ sw.value_range <= kMaxTableSwitchValueRange) {
-+ InstructionOperand index_operand = value_operand;
-+ if (sw.min_value) {
-+ index_operand = g.TempRegister();
-+ Emit(kX87Lea | AddressingModeField::encode(kMode_MRI), index_operand,
-+ value_operand, g.TempImmediate(-sw.min_value));
-+ }
-+ // Generate a table lookup.
-+ return EmitTableSwitch(sw, index_operand);
-+ }
-+
-+ // Generate a sequence of conditional jumps.
-+ return EmitLookupSwitch(sw, value_operand);
-+}
-+
-+
-+void InstructionSelector::VisitWord32Equal(Node* const node) {
-+ FlagsContinuation cont = FlagsContinuation::ForSet(kEqual, node);
-+ Int32BinopMatcher m(node);
-+ if (m.right().Is(0)) {
-+ return VisitWordCompareZero(this, m.node(), m.left().node(), &cont);
-+ }
-+ VisitWordCompare(this, node, &cont);
-+}
-+
-+
-+void InstructionSelector::VisitInt32LessThan(Node* node) {
-+ FlagsContinuation cont = FlagsContinuation::ForSet(kSignedLessThan, node);
-+ VisitWordCompare(this, node, &cont);
-+}
-+
-+
-+void InstructionSelector::VisitInt32LessThanOrEqual(Node* node) {
-+ FlagsContinuation cont =
-+ FlagsContinuation::ForSet(kSignedLessThanOrEqual, node);
-+ VisitWordCompare(this, node, &cont);
-+}
-+
-+
-+void InstructionSelector::VisitUint32LessThan(Node* node) {
-+ FlagsContinuation cont = FlagsContinuation::ForSet(kUnsignedLessThan, node);
-+ VisitWordCompare(this, node, &cont);
-+}
-+
-+
-+void InstructionSelector::VisitUint32LessThanOrEqual(Node* node) {
-+ FlagsContinuation cont =
-+ FlagsContinuation::ForSet(kUnsignedLessThanOrEqual, node);
-+ VisitWordCompare(this, node, &cont);
-+}
-+
-+
-+void InstructionSelector::VisitInt32AddWithOverflow(Node* node) {
-+ if (Node* ovf = NodeProperties::FindProjection(node, 1)) {
-+ FlagsContinuation cont = FlagsContinuation::ForSet(kOverflow, ovf);
-+ return VisitBinop(this, node, kX87Add, &cont);
-+ }
-+ FlagsContinuation cont;
-+ VisitBinop(this, node, kX87Add, &cont);
-+}
-+
-+
-+void InstructionSelector::VisitInt32SubWithOverflow(Node* node) {
-+ if (Node* ovf = NodeProperties::FindProjection(node, 1)) {
-+ FlagsContinuation cont = FlagsContinuation::ForSet(kOverflow, ovf);
-+ return VisitBinop(this, node, kX87Sub, &cont);
-+ }
-+ FlagsContinuation cont;
-+ VisitBinop(this, node, kX87Sub, &cont);
-+}
-+
-+void InstructionSelector::VisitInt32MulWithOverflow(Node* node) {
-+ if (Node* ovf = NodeProperties::FindProjection(node, 1)) {
-+ FlagsContinuation cont = FlagsContinuation::ForSet(kOverflow, ovf);
-+ return VisitBinop(this, node, kX87Imul, &cont);
-+ }
-+ FlagsContinuation cont;
-+ VisitBinop(this, node, kX87Imul, &cont);
-+}
-+
-+void InstructionSelector::VisitFloat32Equal(Node* node) {
-+ FlagsContinuation cont = FlagsContinuation::ForSet(kUnorderedEqual, node);
-+ VisitFloat32Compare(this, node, &cont);
-+}
-+
-+
-+void InstructionSelector::VisitFloat32LessThan(Node* node) {
-+ FlagsContinuation cont =
-+ FlagsContinuation::ForSet(kUnsignedGreaterThan, node);
-+ VisitFloat32Compare(this, node, &cont);
-+}
-+
-+
-+void InstructionSelector::VisitFloat32LessThanOrEqual(Node* node) {
-+ FlagsContinuation cont =
-+ FlagsContinuation::ForSet(kUnsignedGreaterThanOrEqual, node);
-+ VisitFloat32Compare(this, node, &cont);
-+}
-+
-+
-+void InstructionSelector::VisitFloat64Equal(Node* node) {
-+ FlagsContinuation cont = FlagsContinuation::ForSet(kUnorderedEqual, node);
-+ VisitFloat64Compare(this, node, &cont);
-+}
-+
-+
-+void InstructionSelector::VisitFloat64LessThan(Node* node) {
-+ FlagsContinuation cont =
-+ FlagsContinuation::ForSet(kUnsignedGreaterThan, node);
-+ VisitFloat64Compare(this, node, &cont);
-+}
-+
-+
-+void InstructionSelector::VisitFloat64LessThanOrEqual(Node* node) {
-+ FlagsContinuation cont =
-+ FlagsContinuation::ForSet(kUnsignedGreaterThanOrEqual, node);
-+ VisitFloat64Compare(this, node, &cont);
-+}
-+
-+
-+void InstructionSelector::VisitFloat64ExtractLowWord32(Node* node) {
-+ X87OperandGenerator g(this);
-+ Emit(kX87Float64ExtractLowWord32, g.DefineAsRegister(node),
-+ g.Use(node->InputAt(0)));
-+}
-+
-+
-+void InstructionSelector::VisitFloat64ExtractHighWord32(Node* node) {
-+ X87OperandGenerator g(this);
-+ Emit(kX87Float64ExtractHighWord32, g.DefineAsRegister(node),
-+ g.Use(node->InputAt(0)));
-+}
-+
-+
-+void InstructionSelector::VisitFloat64InsertLowWord32(Node* node) {
-+ X87OperandGenerator g(this);
-+ Node* left = node->InputAt(0);
-+ Node* right = node->InputAt(1);
-+ Emit(kX87Float64InsertLowWord32, g.UseFixed(node, stX_0), g.UseRegister(left),
-+ g.UseRegister(right));
-+}
-+
-+
-+void InstructionSelector::VisitFloat64InsertHighWord32(Node* node) {
-+ X87OperandGenerator g(this);
-+ Node* left = node->InputAt(0);
-+ Node* right = node->InputAt(1);
-+ Emit(kX87Float64InsertHighWord32, g.UseFixed(node, stX_0),
-+ g.UseRegister(left), g.UseRegister(right));
-+}
-+
-+void InstructionSelector::VisitFloat64SilenceNaN(Node* node) {
-+ X87OperandGenerator g(this);
-+ Emit(kX87PushFloat64, g.NoOutput(), g.Use(node->InputAt(0)));
-+ Emit(kX87Float64SilenceNaN, g.DefineAsFixed(node, stX_0), 0, nullptr);
-+}
-+
-+void InstructionSelector::VisitAtomicLoad(Node* node) {
-+ LoadRepresentation load_rep = LoadRepresentationOf(node->op());
-+ DCHECK(load_rep.representation() == MachineRepresentation::kWord8 ||
-+ load_rep.representation() == MachineRepresentation::kWord16 ||
-+ load_rep.representation() == MachineRepresentation::kWord32);
-+ USE(load_rep);
-+ VisitLoad(node);
-+}
-+
-+void InstructionSelector::VisitAtomicStore(Node* node) {
-+ X87OperandGenerator g(this);
-+ Node* base = node->InputAt(0);
-+ Node* index = node->InputAt(1);
-+ Node* value = node->InputAt(2);
-+
-+ MachineRepresentation rep = AtomicStoreRepresentationOf(node->op());
-+ ArchOpcode opcode = kArchNop;
-+ switch (rep) {
-+ case MachineRepresentation::kWord8:
-+ opcode = kAtomicExchangeInt8;
-+ break;
-+ case MachineRepresentation::kWord16:
-+ opcode = kAtomicExchangeInt16;
-+ break;
-+ case MachineRepresentation::kWord32:
-+ opcode = kAtomicExchangeWord32;
-+ break;
-+ default:
-+ UNREACHABLE();
-+ break;
-+ }
-+ AddressingMode addressing_mode;
-+ InstructionOperand inputs[4];
-+ size_t input_count = 0;
-+ if (rep == MachineRepresentation::kWord8) {
-+ inputs[input_count++] = g.UseByteRegister(value);
-+ } else {
-+ inputs[input_count++] = g.UseUniqueRegister(value);
-+ }
-+ inputs[input_count++] = g.UseUniqueRegister(base);
-+ if (g.CanBeImmediate(index)) {
-+ inputs[input_count++] = g.UseImmediate(index);
-+ addressing_mode = kMode_MRI;
-+ } else {
-+ inputs[input_count++] = g.UseUniqueRegister(index);
-+ addressing_mode = kMode_MR1;
-+ }
-+ InstructionCode code = opcode | AddressingModeField::encode(addressing_mode);
-+ Emit(code, 0, nullptr, input_count, inputs);
-+}
-+
-+void InstructionSelector::VisitAtomicExchange(Node* node) {
-+ X87OperandGenerator g(this);
-+ Node* base = node->InputAt(0);
-+ Node* index = node->InputAt(1);
-+ Node* value = node->InputAt(2);
-+
-+ MachineType type = AtomicOpRepresentationOf(node->op());
-+ ArchOpcode opcode = kArchNop;
-+ if (type == MachineType::Int8()) {
-+ opcode = kAtomicExchangeInt8;
-+ } else if (type == MachineType::Uint8()) {
-+ opcode = kAtomicExchangeUint8;
-+ } else if (type == MachineType::Int16()) {
-+ opcode = kAtomicExchangeInt16;
-+ } else if (type == MachineType::Uint16()) {
-+ opcode = kAtomicExchangeUint16;
-+ } else if (type == MachineType::Int32() || type == MachineType::Uint32()) {
-+ opcode = kAtomicExchangeWord32;
-+ } else {
-+ UNREACHABLE();
-+ return;
-+ }
-+ InstructionOperand outputs[1];
-+ AddressingMode addressing_mode;
-+ InstructionOperand inputs[3];
-+ size_t input_count = 0;
-+ if (type == MachineType::Int8() || type == MachineType::Uint8()) {
-+ inputs[input_count++] = g.UseFixed(value, edx);
-+ } else {
-+ inputs[input_count++] = g.UseUniqueRegister(value);
-+ }
-+ inputs[input_count++] = g.UseUniqueRegister(base);
-+ if (g.CanBeImmediate(index)) {
-+ inputs[input_count++] = g.UseImmediate(index);
-+ addressing_mode = kMode_MRI;
-+ } else {
-+ inputs[input_count++] = g.UseUniqueRegister(index);
-+ addressing_mode = kMode_MR1;
-+ }
-+ if (type == MachineType::Int8() || type == MachineType::Uint8()) {
-+ // Using DefineSameAsFirst requires the register to be unallocated.
-+ outputs[0] = g.DefineAsFixed(node, edx);
-+ } else {
-+ outputs[0] = g.DefineSameAsFirst(node);
-+ }
-+ InstructionCode code = opcode | AddressingModeField::encode(addressing_mode);
-+ Emit(code, 1, outputs, input_count, inputs);
-+}
-+
-+void InstructionSelector::VisitAtomicCompareExchange(Node* node) {
-+ X87OperandGenerator g(this);
-+ Node* base = node->InputAt(0);
-+ Node* index = node->InputAt(1);
-+ Node* old_value = node->InputAt(2);
-+ Node* new_value = node->InputAt(3);
-+
-+ MachineType type = AtomicOpRepresentationOf(node->op());
-+ ArchOpcode opcode = kArchNop;
-+ if (type == MachineType::Int8()) {
-+ opcode = kAtomicCompareExchangeInt8;
-+ } else if (type == MachineType::Uint8()) {
-+ opcode = kAtomicCompareExchangeUint8;
-+ } else if (type == MachineType::Int16()) {
-+ opcode = kAtomicCompareExchangeInt16;
-+ } else if (type == MachineType::Uint16()) {
-+ opcode = kAtomicCompareExchangeUint16;
-+ } else if (type == MachineType::Int32() || type == MachineType::Uint32()) {
-+ opcode = kAtomicCompareExchangeWord32;
-+ } else {
-+ UNREACHABLE();
-+ return;
-+ }
-+ InstructionOperand outputs[1];
-+ AddressingMode addressing_mode;
-+ InstructionOperand inputs[4];
-+ size_t input_count = 0;
-+ inputs[input_count++] = g.UseFixed(old_value, eax);
-+ if (type == MachineType::Int8() || type == MachineType::Uint8()) {
-+ inputs[input_count++] = g.UseByteRegister(new_value);
-+ } else {
-+ inputs[input_count++] = g.UseUniqueRegister(new_value);
-+ }
-+ inputs[input_count++] = g.UseUniqueRegister(base);
-+ if (g.CanBeImmediate(index)) {
-+ inputs[input_count++] = g.UseImmediate(index);
-+ addressing_mode = kMode_MRI;
-+ } else {
-+ inputs[input_count++] = g.UseUniqueRegister(index);
-+ addressing_mode = kMode_MR1;
-+ }
-+ outputs[0] = g.DefineAsFixed(node, eax);
-+ InstructionCode code = opcode | AddressingModeField::encode(addressing_mode);
-+ Emit(code, 1, outputs, input_count, inputs);
-+}
-+
-+void InstructionSelector::VisitAtomicBinaryOperation(
-+ Node* node, ArchOpcode int8_op, ArchOpcode uint8_op, ArchOpcode int16_op,
-+ ArchOpcode uint16_op, ArchOpcode word32_op) {
-+ X87OperandGenerator g(this);
-+ Node* base = node->InputAt(0);
-+ Node* index = node->InputAt(1);
-+ Node* value = node->InputAt(2);
-+
-+ MachineType type = AtomicOpRepresentationOf(node->op());
-+ ArchOpcode opcode = kArchNop;
-+ if (type == MachineType::Int8()) {
-+ opcode = int8_op;
-+ } else if (type == MachineType::Uint8()) {
-+ opcode = uint8_op;
-+ } else if (type == MachineType::Int16()) {
-+ opcode = int16_op;
-+ } else if (type == MachineType::Uint16()) {
-+ opcode = uint16_op;
-+ } else if (type == MachineType::Int32() || type == MachineType::Uint32()) {
-+ opcode = word32_op;
-+ } else {
-+ UNREACHABLE();
-+ return;
-+ }
-+ InstructionOperand outputs[1];
-+ AddressingMode addressing_mode;
-+ InstructionOperand inputs[3];
-+ size_t input_count = 0;
-+ inputs[input_count++] = g.UseUniqueRegister(value);
-+ inputs[input_count++] = g.UseUniqueRegister(base);
-+ if (g.CanBeImmediate(index)) {
-+ inputs[input_count++] = g.UseImmediate(index);
-+ addressing_mode = kMode_MRI;
-+ } else {
-+ inputs[input_count++] = g.UseUniqueRegister(index);
-+ addressing_mode = kMode_MR1;
-+ }
-+ outputs[0] = g.DefineAsFixed(node, eax);
-+ InstructionOperand temp[1];
-+ if (type == MachineType::Int8() || type == MachineType::Uint8()) {
-+ temp[0] = g.UseByteRegister(node);
-+ } else {
-+ temp[0] = g.TempRegister();
-+ }
-+ InstructionCode code = opcode | AddressingModeField::encode(addressing_mode);
-+ Emit(code, 1, outputs, input_count, inputs, 1, temp);
-+}
-+
-+#define VISIT_ATOMIC_BINOP(op) \
-+ void InstructionSelector::VisitAtomic##op(Node* node) { \
-+ VisitAtomicBinaryOperation(node, kAtomic##op##Int8, kAtomic##op##Uint8, \
-+ kAtomic##op##Int16, kAtomic##op##Uint16, \
-+ kAtomic##op##Word32); \
-+ }
-+VISIT_ATOMIC_BINOP(Add)
-+VISIT_ATOMIC_BINOP(Sub)
-+VISIT_ATOMIC_BINOP(And)
-+VISIT_ATOMIC_BINOP(Or)
-+VISIT_ATOMIC_BINOP(Xor)
-+#undef VISIT_ATOMIC_BINOP
-+
-+void InstructionSelector::VisitInt32AbsWithOverflow(Node* node) {
-+ UNREACHABLE();
-+}
-+
-+void InstructionSelector::VisitInt64AbsWithOverflow(Node* node) {
-+ UNREACHABLE();
-+}
-+
-+// static
-+MachineOperatorBuilder::Flags
-+InstructionSelector::SupportedMachineOperatorFlags() {
-+ MachineOperatorBuilder::Flags flags =
-+ MachineOperatorBuilder::kWord32ShiftIsSafe;
-+ if (CpuFeatures::IsSupported(POPCNT)) {
-+ flags |= MachineOperatorBuilder::kWord32Popcnt;
-+ }
-+
-+ flags |= MachineOperatorBuilder::kFloat32RoundDown |
-+ MachineOperatorBuilder::kFloat64RoundDown |
-+ MachineOperatorBuilder::kFloat32RoundUp |
-+ MachineOperatorBuilder::kFloat64RoundUp |
-+ MachineOperatorBuilder::kFloat32RoundTruncate |
-+ MachineOperatorBuilder::kFloat64RoundTruncate |
-+ MachineOperatorBuilder::kFloat32RoundTiesEven |
-+ MachineOperatorBuilder::kFloat64RoundTiesEven;
-+ return flags;
-+}
-+
-+// static
-+MachineOperatorBuilder::AlignmentRequirements
-+InstructionSelector::AlignmentRequirements() {
-+ return MachineOperatorBuilder::AlignmentRequirements::
-+ FullUnalignedAccessSupport();
-+}
-+
-+} // namespace compiler
-+} // namespace internal
-+} // namespace v8
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/compiler/x87/OWNERS qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/compiler/x87/OWNERS
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/compiler/x87/OWNERS 1970-01-01 01:00:00.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/compiler/x87/OWNERS 2018-02-18 19:00:54.014420841 +0100
-@@ -0,0 +1,2 @@
-+weiliang.lin(a)intel.com
-+chunyang.dai(a)intel.com
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/debug/x87/debug-x87.cc qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/debug/x87/debug-x87.cc
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/debug/x87/debug-x87.cc 1970-01-01 01:00:00.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/debug/x87/debug-x87.cc 2018-02-18 19:00:54.014420841 +0100
-@@ -0,0 +1,141 @@
-+// Copyright 2012 the V8 project authors. All rights reserved.
-+// Use of this source code is governed by a BSD-style license that can be
-+// found in the LICENSE file.
-+
-+#if V8_TARGET_ARCH_X87
-+
-+#include "src/debug/debug.h"
-+
-+#include "src/codegen.h"
-+#include "src/debug/liveedit.h"
-+#include "src/x87/frames-x87.h"
-+
-+namespace v8 {
-+namespace internal {
-+
-+#define __ ACCESS_MASM(masm)
-+
-+
-+void EmitDebugBreakSlot(MacroAssembler* masm) {
-+ Label check_codesize;
-+ __ bind(&check_codesize);
-+ __ Nop(Assembler::kDebugBreakSlotLength);
-+ DCHECK_EQ(Assembler::kDebugBreakSlotLength,
-+ masm->SizeOfCodeGeneratedSince(&check_codesize));
-+}
-+
-+
-+void DebugCodegen::GenerateSlot(MacroAssembler* masm, RelocInfo::Mode mode) {
-+ // Generate enough nop's to make space for a call instruction.
-+ masm->RecordDebugBreakSlot(mode);
-+ EmitDebugBreakSlot(masm);
-+}
-+
-+
-+void DebugCodegen::ClearDebugBreakSlot(Isolate* isolate, Address pc) {
-+ CodePatcher patcher(isolate, pc, Assembler::kDebugBreakSlotLength);
-+ EmitDebugBreakSlot(patcher.masm());
-+}
-+
-+
-+void DebugCodegen::PatchDebugBreakSlot(Isolate* isolate, Address pc,
-+ Handle<Code> code) {
-+ DCHECK(code->is_debug_stub());
-+ static const int kSize = Assembler::kDebugBreakSlotLength;
-+ CodePatcher patcher(isolate, pc, kSize);
-+
-+ // Add a label for checking the size of the code used for returning.
-+ Label check_codesize;
-+ patcher.masm()->bind(&check_codesize);
-+ patcher.masm()->call(code->entry(), RelocInfo::NONE32);
-+ // Check that the size of the code generated is as expected.
-+ DCHECK_EQ(kSize, patcher.masm()->SizeOfCodeGeneratedSince(&check_codesize));
-+}
-+
-+bool DebugCodegen::DebugBreakSlotIsPatched(Address pc) {
-+ return !Assembler::IsNop(pc);
-+}
-+
-+void DebugCodegen::GenerateDebugBreakStub(MacroAssembler* masm,
-+ DebugBreakCallHelperMode mode) {
-+ __ RecordComment("Debug break");
-+
-+ // Enter an internal frame.
-+ {
-+ FrameScope scope(masm, StackFrame::INTERNAL);
-+
-+ // Push arguments for DebugBreak call.
-+ if (mode == SAVE_RESULT_REGISTER) {
-+ // Break on return.
-+ __ push(eax);
-+ } else {
-+ // Non-return breaks.
-+ __ Push(masm->isolate()->factory()->the_hole_value());
-+ }
-+ __ Move(eax, Immediate(1));
-+ __ mov(ebx,
-+ Immediate(ExternalReference(
-+ Runtime::FunctionForId(Runtime::kDebugBreak), masm->isolate())));
-+
-+ CEntryStub ceb(masm->isolate(), 1);
-+ __ CallStub(&ceb);
-+
-+ if (FLAG_debug_code) {
-+ for (int i = 0; i < kNumJSCallerSaved; ++i) {
-+ Register reg = {JSCallerSavedCode(i)};
-+ // Do not clobber eax if mode is SAVE_RESULT_REGISTER. It will
-+ // contain return value of the function.
-+ if (!(reg.is(eax) && (mode == SAVE_RESULT_REGISTER))) {
-+ __ Move(reg, Immediate(kDebugZapValue));
-+ }
-+ }
-+ }
-+
-+ // Get rid of the internal frame.
-+ }
-+
-+ __ MaybeDropFrames();
-+
-+ // Return to caller.
-+ __ ret(0);
-+}
-+
-+void DebugCodegen::GenerateHandleDebuggerStatement(MacroAssembler* masm) {
-+ {
-+ FrameScope scope(masm, StackFrame::INTERNAL);
-+ __ CallRuntime(Runtime::kHandleDebuggerStatement, 0);
-+ }
-+ __ MaybeDropFrames();
-+
-+ // Return to caller.
-+ __ ret(0);
-+}
-+
-+void DebugCodegen::GenerateFrameDropperTrampoline(MacroAssembler* masm) {
-+ // Frame is being dropped:
-+ // - Drop to the target frame specified by ebx.
-+ // - Look up current function on the frame.
-+ // - Leave the frame.
-+ // - Restart the frame by calling the function.
-+ __ mov(ebp, ebx);
-+ __ mov(edi, Operand(ebp, JavaScriptFrameConstants::kFunctionOffset));
-+ __ leave();
-+
-+ __ mov(ebx, FieldOperand(edi, JSFunction::kSharedFunctionInfoOffset));
-+ __ mov(ebx,
-+ FieldOperand(ebx, SharedFunctionInfo::kFormalParameterCountOffset));
-+
-+ ParameterCount dummy(ebx);
-+ __ InvokeFunction(edi, dummy, dummy, JUMP_FUNCTION,
-+ CheckDebugStepCallWrapper());
-+}
-+
-+
-+const bool LiveEdit::kFrameDropperSupported = true;
-+
-+#undef __
-+
-+} // namespace internal
-+} // namespace v8
-+
-+#endif // V8_TARGET_ARCH_X87
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/debug/x87/OWNERS qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/debug/x87/OWNERS
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/debug/x87/OWNERS 1970-01-01 01:00:00.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/debug/x87/OWNERS 2018-02-18 19:00:54.015420826 +0100
-@@ -0,0 +1,2 @@
-+weiliang.lin(a)intel.com
-+chunyang.dai(a)intel.com
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/frames-inl.h qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/frames-inl.h
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/frames-inl.h 2018-02-02 11:39:52.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/frames-inl.h 2018-02-18 19:00:54.015420826 +0100
-@@ -26,6 +26,8 @@
- #include "src/mips64/frames-mips64.h" // NOLINT
- #elif V8_TARGET_ARCH_S390
- #include "src/s390/frames-s390.h" // NOLINT
-+#elif V8_TARGET_ARCH_X87
-+#include "src/x87/frames-x87.h" // NOLINT
- #else
- #error Unsupported target architecture.
- #endif
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/full-codegen/full-codegen.h qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/full-codegen/full-codegen.h
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/full-codegen/full-codegen.h 2018-02-02 11:39:52.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/full-codegen/full-codegen.h 2018-02-18 19:00:54.015420826 +0100
-@@ -45,7 +45,7 @@
- static const int kMaxBackEdgeWeight = 127;
-
- // Platform-specific code size multiplier.
--#if V8_TARGET_ARCH_IA32
-+#if V8_TARGET_ARCH_IA32 || V8_TARGET_ARCH_X87
- static const int kCodeSizeMultiplier = 105;
- #elif V8_TARGET_ARCH_X64
- static const int kCodeSizeMultiplier = 165;
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/full-codegen/x87/full-codegen-x87.cc qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/full-codegen/x87/full-codegen-x87.cc
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/full-codegen/x87/full-codegen-x87.cc 1970-01-01 01:00:00.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/full-codegen/x87/full-codegen-x87.cc 2018-02-18 19:00:54.100419575 +0100
-@@ -0,0 +1,2410 @@
-+// Copyright 2012 the V8 project authors. All rights reserved.
-+// Use of this source code is governed by a BSD-style license that can be
-+// found in the LICENSE file.
-+
-+#if V8_TARGET_ARCH_X87
-+
-+#include "src/ast/compile-time-value.h"
-+#include "src/ast/scopes.h"
-+#include "src/builtins/builtins-constructor.h"
-+#include "src/code-factory.h"
-+#include "src/code-stubs.h"
-+#include "src/codegen.h"
-+#include "src/compilation-info.h"
-+#include "src/compiler.h"
-+#include "src/debug/debug.h"
-+#include "src/full-codegen/full-codegen.h"
-+#include "src/ic/ic.h"
-+#include "src/x87/frames-x87.h"
-+
-+namespace v8 {
-+namespace internal {
-+
-+#define __ ACCESS_MASM(masm())
-+
-+class JumpPatchSite BASE_EMBEDDED {
-+ public:
-+ explicit JumpPatchSite(MacroAssembler* masm) : masm_(masm) {
-+#ifdef DEBUG
-+ info_emitted_ = false;
-+#endif
-+ }
-+
-+ ~JumpPatchSite() {
-+ DCHECK(patch_site_.is_bound() == info_emitted_);
-+ }
-+
-+ void EmitJumpIfNotSmi(Register reg,
-+ Label* target,
-+ Label::Distance distance = Label::kFar) {
-+ __ test(reg, Immediate(kSmiTagMask));
-+ EmitJump(not_carry, target, distance); // Always taken before patched.
-+ }
-+
-+ void EmitJumpIfSmi(Register reg,
-+ Label* target,
-+ Label::Distance distance = Label::kFar) {
-+ __ test(reg, Immediate(kSmiTagMask));
-+ EmitJump(carry, target, distance); // Never taken before patched.
-+ }
-+
-+ void EmitPatchInfo() {
-+ if (patch_site_.is_bound()) {
-+ int delta_to_patch_site = masm_->SizeOfCodeGeneratedSince(&patch_site_);
-+ DCHECK(is_uint8(delta_to_patch_site));
-+ __ test(eax, Immediate(delta_to_patch_site));
-+#ifdef DEBUG
-+ info_emitted_ = true;
-+#endif
-+ } else {
-+ __ nop(); // Signals no inlined code.
-+ }
-+ }
-+
-+ private:
-+ // jc will be patched with jz, jnc will become jnz.
-+ void EmitJump(Condition cc, Label* target, Label::Distance distance) {
-+ DCHECK(!patch_site_.is_bound() && !info_emitted_);
-+ DCHECK(cc == carry || cc == not_carry);
-+ __ bind(&patch_site_);
-+ __ j(cc, target, distance);
-+ }
-+
-+ MacroAssembler* masm() { return masm_; }
-+ MacroAssembler* masm_;
-+ Label patch_site_;
-+#ifdef DEBUG
-+ bool info_emitted_;
-+#endif
-+};
-+
-+
-+// Generate code for a JS function. On entry to the function the receiver
-+// and arguments have been pushed on the stack left to right, with the
-+// return address on top of them. The actual argument count matches the
-+// formal parameter count expected by the function.
-+//
-+// The live registers are:
-+// o edi: the JS function object being called (i.e. ourselves)
-+// o edx: the new target value
-+// o esi: our context
-+// o ebp: our caller's frame pointer
-+// o esp: stack pointer (pointing to return address)
-+//
-+// The function builds a JS frame. Please see JavaScriptFrameConstants in
-+// frames-x87.h for its layout.
-+void FullCodeGenerator::Generate() {
-+ CompilationInfo* info = info_;
-+ profiling_counter_ = isolate()->factory()->NewCell(
-+ Handle<Smi>(Smi::FromInt(FLAG_interrupt_budget), isolate()));
-+ SetFunctionPosition(literal());
-+ Comment cmnt(masm_, "[ function compiled by full code generator");
-+
-+ ProfileEntryHookStub::MaybeCallEntryHook(masm_);
-+
-+ if (FLAG_debug_code && info->ExpectsJSReceiverAsReceiver()) {
-+ int receiver_offset = (info->scope()->num_parameters() + 1) * kPointerSize;
-+ __ mov(ecx, Operand(esp, receiver_offset));
-+ __ AssertNotSmi(ecx);
-+ __ CmpObjectType(ecx, FIRST_JS_RECEIVER_TYPE, ecx);
-+ __ Assert(above_equal, kSloppyFunctionExpectsJSReceiverReceiver);
-+ }
-+
-+ // Open a frame scope to indicate that there is a frame on the stack. The
-+ // MANUAL indicates that the scope shouldn't actually generate code to set up
-+ // the frame (that is done below).
-+ FrameScope frame_scope(masm_, StackFrame::MANUAL);
-+
-+ info->set_prologue_offset(masm_->pc_offset());
-+ __ Prologue(info->GeneratePreagedPrologue());
-+
-+ // Increment invocation count for the function.
-+ {
-+ Comment cmnt(masm_, "[ Increment invocation count");
-+ __ mov(ecx, FieldOperand(edi, JSFunction::kFeedbackVectorOffset));
-+ __ mov(ecx, FieldOperand(ecx, Cell::kValueOffset));
-+ __ add(
-+ FieldOperand(ecx, FeedbackVector::kInvocationCountIndex * kPointerSize +
-+ FeedbackVector::kHeaderSize),
-+ Immediate(Smi::FromInt(1)));
-+ }
-+
-+ { Comment cmnt(masm_, "[ Allocate locals");
-+ int locals_count = info->scope()->num_stack_slots();
-+ OperandStackDepthIncrement(locals_count);
-+ if (locals_count == 1) {
-+ __ push(Immediate(isolate()->factory()->undefined_value()));
-+ } else if (locals_count > 1) {
-+ if (locals_count >= 128) {
-+ Label ok;
-+ __ mov(ecx, esp);
-+ __ sub(ecx, Immediate(locals_count * kPointerSize));
-+ ExternalReference stack_limit =
-+ ExternalReference::address_of_real_stack_limit(isolate());
-+ __ cmp(ecx, Operand::StaticVariable(stack_limit));
-+ __ j(above_equal, &ok, Label::kNear);
-+ __ CallRuntime(Runtime::kThrowStackOverflow);
-+ __ bind(&ok);
-+ }
-+ __ mov(eax, Immediate(isolate()->factory()->undefined_value()));
-+ const int kMaxPushes = 32;
-+ if (locals_count >= kMaxPushes) {
-+ int loop_iterations = locals_count / kMaxPushes;
-+ __ mov(ecx, loop_iterations);
-+ Label loop_header;
-+ __ bind(&loop_header);
-+ // Do pushes.
-+ for (int i = 0; i < kMaxPushes; i++) {
-+ __ push(eax);
-+ }
-+ __ dec(ecx);
-+ __ j(not_zero, &loop_header, Label::kNear);
-+ }
-+ int remaining = locals_count % kMaxPushes;
-+ // Emit the remaining pushes.
-+ for (int i = 0; i < remaining; i++) {
-+ __ push(eax);
-+ }
-+ }
-+ }
-+
-+ bool function_in_register = true;
-+
-+ // Possibly allocate a local context.
-+ if (info->scope()->NeedsContext()) {
-+ Comment cmnt(masm_, "[ Allocate context");
-+ bool need_write_barrier = true;
-+ int slots = info->scope()->num_heap_slots() - Context::MIN_CONTEXT_SLOTS;
-+ // Argument to NewContext is the function, which is still in edi.
-+ if (info->scope()->is_script_scope()) {
-+ __ push(edi);
-+ __ Push(info->scope()->scope_info());
-+ __ CallRuntime(Runtime::kNewScriptContext);
-+ // The new target value is not used, clobbering is safe.
-+ DCHECK_NULL(info->scope()->new_target_var());
-+ } else {
-+ if (info->scope()->new_target_var() != nullptr) {
-+ __ push(edx); // Preserve new target.
-+ }
-+ if (slots <= ConstructorBuiltins::MaximumFunctionContextSlots()) {
-+ Callable callable = CodeFactory::FastNewFunctionContext(
-+ isolate(), info->scope()->scope_type());
-+ __ mov(FastNewFunctionContextDescriptor::SlotsRegister(),
-+ Immediate(slots));
-+ __ Call(callable.code(), RelocInfo::CODE_TARGET);
-+ // Result of the FastNewFunctionContext builtin is always in new space.
-+ need_write_barrier = false;
-+ } else {
-+ __ push(edi);
-+ __ Push(Smi::FromInt(info->scope()->scope_type()));
-+ __ CallRuntime(Runtime::kNewFunctionContext);
-+ }
-+ if (info->scope()->new_target_var() != nullptr) {
-+ __ pop(edx); // Restore new target.
-+ }
-+ }
-+ function_in_register = false;
-+ // Context is returned in eax. It replaces the context passed to us.
-+ // It's saved in the stack and kept live in esi.
-+ __ mov(esi, eax);
-+ __ mov(Operand(ebp, StandardFrameConstants::kContextOffset), eax);
-+
-+ // Copy parameters into context if necessary.
-+ int num_parameters = info->scope()->num_parameters();
-+ int first_parameter = info->scope()->has_this_declaration() ? -1 : 0;
-+ for (int i = first_parameter; i < num_parameters; i++) {
-+ Variable* var =
-+ (i == -1) ? info->scope()->receiver() : info->scope()->parameter(i);
-+ if (var->IsContextSlot()) {
-+ int parameter_offset = StandardFrameConstants::kCallerSPOffset +
-+ (num_parameters - 1 - i) * kPointerSize;
-+ // Load parameter from stack.
-+ __ mov(eax, Operand(ebp, parameter_offset));
-+ // Store it in the context.
-+ int context_offset = Context::SlotOffset(var->index());
-+ __ mov(Operand(esi, context_offset), eax);
-+ // Update the write barrier. This clobbers eax and ebx.
-+ if (need_write_barrier) {
-+ __ RecordWriteContextSlot(esi, context_offset, eax, ebx,
-+ kDontSaveFPRegs);
-+ } else if (FLAG_debug_code) {
-+ Label done;
-+ __ JumpIfInNewSpace(esi, eax, &done, Label::kNear);
-+ __ Abort(kExpectedNewSpaceObject);
-+ __ bind(&done);
-+ }
-+ }
-+ }
-+ }
-+
-+ // We don't support new.target and rest parameters here.
-+ DCHECK_NULL(info->scope()->new_target_var());
-+ DCHECK_NULL(info->scope()->rest_parameter());
-+ DCHECK_NULL(info->scope()->this_function_var());
-+
-+ Variable* arguments = info->scope()->arguments();
-+ if (arguments != NULL) {
-+ // Arguments object must be allocated after the context object, in
-+ // case the "arguments" or ".arguments" variables are in the context.
-+ Comment cmnt(masm_, "[ Allocate arguments object");
-+ if (!function_in_register) {
-+ __ mov(edi, Operand(ebp, JavaScriptFrameConstants::kFunctionOffset));
-+ }
-+ if (is_strict(language_mode()) || !has_simple_parameters()) {
-+ __ call(isolate()->builtins()->FastNewStrictArguments(),
-+ RelocInfo::CODE_TARGET);
-+ RestoreContext();
-+ } else if (literal()->has_duplicate_parameters()) {
-+ __ Push(edi);
-+ __ CallRuntime(Runtime::kNewSloppyArguments_Generic);
-+ } else {
-+ __ call(isolate()->builtins()->FastNewSloppyArguments(),
-+ RelocInfo::CODE_TARGET);
-+ RestoreContext();
-+ }
-+
-+ SetVar(arguments, eax, ebx, edx);
-+ }
-+
-+ if (FLAG_trace) {
-+ __ CallRuntime(Runtime::kTraceEnter);
-+ }
-+
-+ // Visit the declarations and body.
-+ {
-+ Comment cmnt(masm_, "[ Declarations");
-+ VisitDeclarations(scope()->declarations());
-+ }
-+
-+ // Assert that the declarations do not use ICs. Otherwise the debugger
-+ // won't be able to redirect a PC at an IC to the correct IC in newly
-+ // recompiled code.
-+ DCHECK_EQ(0, ic_total_count_);
-+
-+ {
-+ Comment cmnt(masm_, "[ Stack check");
-+ Label ok;
-+ ExternalReference stack_limit =
-+ ExternalReference::address_of_stack_limit(isolate());
-+ __ cmp(esp, Operand::StaticVariable(stack_limit));
-+ __ j(above_equal, &ok, Label::kNear);
-+ __ call(isolate()->builtins()->StackCheck(), RelocInfo::CODE_TARGET);
-+ __ bind(&ok);
-+ }
-+
-+ {
-+ Comment cmnt(masm_, "[ Body");
-+ DCHECK(loop_depth() == 0);
-+ VisitStatements(literal()->body());
-+ DCHECK(loop_depth() == 0);
-+ }
-+
-+ // Always emit a 'return undefined' in case control fell off the end of
-+ // the body.
-+ { Comment cmnt(masm_, "[ return <undefined>;");
-+ __ mov(eax, isolate()->factory()->undefined_value());
-+ EmitReturnSequence();
-+ }
-+}
-+
-+
-+void FullCodeGenerator::ClearAccumulator() {
-+ __ Move(eax, Immediate(Smi::kZero));
-+}
-+
-+
-+void FullCodeGenerator::EmitProfilingCounterDecrement(int delta) {
-+ __ mov(ebx, Immediate(profiling_counter_));
-+ __ sub(FieldOperand(ebx, Cell::kValueOffset),
-+ Immediate(Smi::FromInt(delta)));
-+}
-+
-+
-+void FullCodeGenerator::EmitProfilingCounterReset() {
-+ int reset_value = FLAG_interrupt_budget;
-+ __ mov(ebx, Immediate(profiling_counter_));
-+ __ mov(FieldOperand(ebx, Cell::kValueOffset),
-+ Immediate(Smi::FromInt(reset_value)));
-+}
-+
-+
-+void FullCodeGenerator::EmitBackEdgeBookkeeping(IterationStatement* stmt,
-+ Label* back_edge_target) {
-+ Comment cmnt(masm_, "[ Back edge bookkeeping");
-+ Label ok;
-+
-+ DCHECK(back_edge_target->is_bound());
-+ int distance = masm_->SizeOfCodeGeneratedSince(back_edge_target);
-+ int weight = Min(kMaxBackEdgeWeight,
-+ Max(1, distance / kCodeSizeMultiplier));
-+ EmitProfilingCounterDecrement(weight);
-+ __ j(positive, &ok, Label::kNear);
-+ __ call(isolate()->builtins()->InterruptCheck(), RelocInfo::CODE_TARGET);
-+
-+ // Record a mapping of this PC offset to the OSR id. This is used to find
-+ // the AST id from the unoptimized code in order to use it as a key into
-+ // the deoptimization input data found in the optimized code.
-+ RecordBackEdge(stmt->OsrEntryId());
-+
-+ EmitProfilingCounterReset();
-+
-+ __ bind(&ok);
-+}
-+
-+void FullCodeGenerator::EmitProfilingCounterHandlingForReturnSequence(
-+ bool is_tail_call) {
-+ // Pretend that the exit is a backwards jump to the entry.
-+ int weight = 1;
-+ if (info_->ShouldSelfOptimize()) {
-+ weight = FLAG_interrupt_budget / FLAG_self_opt_count;
-+ } else {
-+ int distance = masm_->pc_offset();
-+ weight = Min(kMaxBackEdgeWeight, Max(1, distance / kCodeSizeMultiplier));
-+ }
-+ EmitProfilingCounterDecrement(weight);
-+ Label ok;
-+ __ j(positive, &ok, Label::kNear);
-+ // Don't need to save result register if we are going to do a tail call.
-+ if (!is_tail_call) {
-+ __ push(eax);
-+ }
-+ __ call(isolate()->builtins()->InterruptCheck(), RelocInfo::CODE_TARGET);
-+ if (!is_tail_call) {
-+ __ pop(eax);
-+ }
-+ EmitProfilingCounterReset();
-+ __ bind(&ok);
-+}
-+
-+void FullCodeGenerator::EmitReturnSequence() {
-+ Comment cmnt(masm_, "[ Return sequence");
-+ if (return_label_.is_bound()) {
-+ __ jmp(&return_label_);
-+ } else {
-+ // Common return label
-+ __ bind(&return_label_);
-+ if (FLAG_trace) {
-+ __ push(eax);
-+ __ CallRuntime(Runtime::kTraceExit);
-+ }
-+ EmitProfilingCounterHandlingForReturnSequence(false);
-+
-+ SetReturnPosition(literal());
-+ __ leave();
-+
-+ int arg_count = info_->scope()->num_parameters() + 1;
-+ int arguments_bytes = arg_count * kPointerSize;
-+ __ Ret(arguments_bytes, ecx);
-+ }
-+}
-+
-+void FullCodeGenerator::RestoreContext() {
-+ __ mov(esi, Operand(ebp, StandardFrameConstants::kContextOffset));
-+}
-+
-+void FullCodeGenerator::StackValueContext::Plug(Variable* var) const {
-+ DCHECK(var->IsStackAllocated() || var->IsContextSlot());
-+ MemOperand operand = codegen()->VarOperand(var, result_register());
-+ // Memory operands can be pushed directly.
-+ codegen()->PushOperand(operand);
-+}
-+
-+
-+void FullCodeGenerator::EffectContext::Plug(Heap::RootListIndex index) const {
-+ UNREACHABLE(); // Not used on X87.
-+}
-+
-+
-+void FullCodeGenerator::AccumulatorValueContext::Plug(
-+ Heap::RootListIndex index) const {
-+ UNREACHABLE(); // Not used on X87.
-+}
-+
-+
-+void FullCodeGenerator::StackValueContext::Plug(
-+ Heap::RootListIndex index) const {
-+ UNREACHABLE(); // Not used on X87.
-+}
-+
-+
-+void FullCodeGenerator::TestContext::Plug(Heap::RootListIndex index) const {
-+ UNREACHABLE(); // Not used on X87.
-+}
-+
-+
-+void FullCodeGenerator::EffectContext::Plug(Handle<Object> lit) const {
-+}
-+
-+
-+void FullCodeGenerator::AccumulatorValueContext::Plug(
-+ Handle<Object> lit) const {
-+ if (lit->IsSmi()) {
-+ __ SafeMove(result_register(), Immediate(Smi::cast(*lit)));
-+ } else {
-+ __ Move(result_register(), Immediate(Handle<HeapObject>::cast(lit)));
-+ }
-+}
-+
-+
-+void FullCodeGenerator::StackValueContext::Plug(Handle<Object> lit) const {
-+ codegen()->OperandStackDepthIncrement(1);
-+ if (lit->IsSmi()) {
-+ __ SafePush(Immediate(Smi::cast(*lit)));
-+ } else {
-+ __ push(Immediate(Handle<HeapObject>::cast(lit)));
-+ }
-+}
-+
-+
-+void FullCodeGenerator::TestContext::Plug(Handle<Object> lit) const {
-+ DCHECK(lit->IsNullOrUndefined(isolate()) || !lit->IsUndetectable());
-+ if (lit->IsNullOrUndefined(isolate()) || lit->IsFalse(isolate())) {
-+ if (false_label_ != fall_through_) __ jmp(false_label_);
-+ } else if (lit->IsTrue(isolate()) || lit->IsJSObject()) {
-+ if (true_label_ != fall_through_) __ jmp(true_label_);
-+ } else if (lit->IsString()) {
-+ if (String::cast(*lit)->length() == 0) {
-+ if (false_label_ != fall_through_) __ jmp(false_label_);
-+ } else {
-+ if (true_label_ != fall_through_) __ jmp(true_label_);
-+ }
-+ } else if (lit->IsSmi()) {
-+ if (Smi::ToInt(*lit) == 0) {
-+ if (false_label_ != fall_through_) __ jmp(false_label_);
-+ } else {
-+ if (true_label_ != fall_through_) __ jmp(true_label_);
-+ }
-+ } else {
-+ // For simplicity we always test the accumulator register.
-+ __ mov(result_register(), Handle<HeapObject>::cast(lit));
-+ codegen()->DoTest(this);
-+ }
-+}
-+
-+
-+void FullCodeGenerator::StackValueContext::DropAndPlug(int count,
-+ Register reg) const {
-+ DCHECK(count > 0);
-+ if (count > 1) codegen()->DropOperands(count - 1);
-+ __ mov(Operand(esp, 0), reg);
-+}
-+
-+
-+void FullCodeGenerator::EffectContext::Plug(Label* materialize_true,
-+ Label* materialize_false) const {
-+ DCHECK(materialize_true == materialize_false);
-+ __ bind(materialize_true);
-+}
-+
-+
-+void FullCodeGenerator::AccumulatorValueContext::Plug(
-+ Label* materialize_true,
-+ Label* materialize_false) const {
-+ Label done;
-+ __ bind(materialize_true);
-+ __ mov(result_register(), isolate()->factory()->true_value());
-+ __ jmp(&done, Label::kNear);
-+ __ bind(materialize_false);
-+ __ mov(result_register(), isolate()->factory()->false_value());
-+ __ bind(&done);
-+}
-+
-+
-+void FullCodeGenerator::StackValueContext::Plug(
-+ Label* materialize_true,
-+ Label* materialize_false) const {
-+ codegen()->OperandStackDepthIncrement(1);
-+ Label done;
-+ __ bind(materialize_true);
-+ __ push(Immediate(isolate()->factory()->true_value()));
-+ __ jmp(&done, Label::kNear);
-+ __ bind(materialize_false);
-+ __ push(Immediate(isolate()->factory()->false_value()));
-+ __ bind(&done);
-+}
-+
-+
-+void FullCodeGenerator::TestContext::Plug(Label* materialize_true,
-+ Label* materialize_false) const {
-+ DCHECK(materialize_true == true_label_);
-+ DCHECK(materialize_false == false_label_);
-+}
-+
-+
-+void FullCodeGenerator::AccumulatorValueContext::Plug(bool flag) const {
-+ Handle<HeapObject> value = flag ? isolate()->factory()->true_value()
-+ : isolate()->factory()->false_value();
-+ __ mov(result_register(), value);
-+}
-+
-+
-+void FullCodeGenerator::StackValueContext::Plug(bool flag) const {
-+ codegen()->OperandStackDepthIncrement(1);
-+ Handle<HeapObject> value = flag ? isolate()->factory()->true_value()
-+ : isolate()->factory()->false_value();
-+ __ push(Immediate(value));
-+}
-+
-+
-+void FullCodeGenerator::TestContext::Plug(bool flag) const {
-+ if (flag) {
-+ if (true_label_ != fall_through_) __ jmp(true_label_);
-+ } else {
-+ if (false_label_ != fall_through_) __ jmp(false_label_);
-+ }
-+}
-+
-+
-+void FullCodeGenerator::DoTest(Expression* condition,
-+ Label* if_true,
-+ Label* if_false,
-+ Label* fall_through) {
-+ Callable callable = Builtins::CallableFor(isolate(), Builtins::kToBoolean);
-+ __ Call(callable.code(), RelocInfo::CODE_TARGET);
-+ RestoreContext();
-+ __ CompareRoot(result_register(), Heap::kTrueValueRootIndex);
-+ Split(equal, if_true, if_false, fall_through);
-+}
-+
-+
-+void FullCodeGenerator::Split(Condition cc,
-+ Label* if_true,
-+ Label* if_false,
-+ Label* fall_through) {
-+ if (if_false == fall_through) {
-+ __ j(cc, if_true);
-+ } else if (if_true == fall_through) {
-+ __ j(NegateCondition(cc), if_false);
-+ } else {
-+ __ j(cc, if_true);
-+ __ jmp(if_false);
-+ }
-+}
-+
-+
-+MemOperand FullCodeGenerator::StackOperand(Variable* var) {
-+ DCHECK(var->IsStackAllocated());
-+ // Offset is negative because higher indexes are at lower addresses.
-+ int offset = -var->index() * kPointerSize;
-+ // Adjust by a (parameter or local) base offset.
-+ if (var->IsParameter()) {
-+ offset += (info_->scope()->num_parameters() + 1) * kPointerSize;
-+ } else {
-+ offset += JavaScriptFrameConstants::kLocal0Offset;
-+ }
-+ return Operand(ebp, offset);
-+}
-+
-+
-+MemOperand FullCodeGenerator::VarOperand(Variable* var, Register scratch) {
-+ DCHECK(var->IsContextSlot() || var->IsStackAllocated());
-+ if (var->IsContextSlot()) {
-+ int context_chain_length = scope()->ContextChainLength(var->scope());
-+ __ LoadContext(scratch, context_chain_length);
-+ return ContextOperand(scratch, var->index());
-+ } else {
-+ return StackOperand(var);
-+ }
-+}
-+
-+
-+void FullCodeGenerator::GetVar(Register dest, Variable* var) {
-+ DCHECK(var->IsContextSlot() || var->IsStackAllocated());
-+ MemOperand location = VarOperand(var, dest);
-+ __ mov(dest, location);
-+}
-+
-+
-+void FullCodeGenerator::SetVar(Variable* var,
-+ Register src,
-+ Register scratch0,
-+ Register scratch1) {
-+ DCHECK(var->IsContextSlot() || var->IsStackAllocated());
-+ DCHECK(!scratch0.is(src));
-+ DCHECK(!scratch0.is(scratch1));
-+ DCHECK(!scratch1.is(src));
-+ MemOperand location = VarOperand(var, scratch0);
-+ __ mov(location, src);
-+
-+ // Emit the write barrier code if the location is in the heap.
-+ if (var->IsContextSlot()) {
-+ int offset = Context::SlotOffset(var->index());
-+ DCHECK(!scratch0.is(esi) && !src.is(esi) && !scratch1.is(esi));
-+ __ RecordWriteContextSlot(scratch0, offset, src, scratch1, kDontSaveFPRegs);
-+ }
-+}
-+
-+
-+void FullCodeGenerator::EmitDebugCheckDeclarationContext(Variable* variable) {
-+ // The variable in the declaration always resides in the current context.
-+ DCHECK_EQ(0, scope()->ContextChainLength(variable->scope()));
-+ if (FLAG_debug_code) {
-+ // Check that we're not inside a with or catch context.
-+ __ mov(ebx, FieldOperand(esi, HeapObject::kMapOffset));
-+ __ cmp(ebx, isolate()->factory()->with_context_map());
-+ __ Check(not_equal, kDeclarationInWithContext);
-+ __ cmp(ebx, isolate()->factory()->catch_context_map());
-+ __ Check(not_equal, kDeclarationInCatchContext);
-+ }
-+}
-+
-+
-+void FullCodeGenerator::VisitVariableDeclaration(
-+ VariableDeclaration* declaration) {
-+ VariableProxy* proxy = declaration->proxy();
-+ Variable* variable = proxy->var();
-+ switch (variable->location()) {
-+ case VariableLocation::UNALLOCATED: {
-+ DCHECK(!variable->binding_needs_init());
-+ globals_->Add(variable->name(), zone());
-+ FeedbackSlot slot = proxy->VariableFeedbackSlot();
-+ DCHECK(!slot.IsInvalid());
-+ globals_->Add(handle(Smi::FromInt(slot.ToInt()), isolate()), zone());
-+ globals_->Add(isolate()->factory()->undefined_value(), zone());
-+ globals_->Add(isolate()->factory()->undefined_value(), zone());
-+ break;
-+ }
-+ case VariableLocation::PARAMETER:
-+ case VariableLocation::LOCAL:
-+ if (variable->binding_needs_init()) {
-+ Comment cmnt(masm_, "[ VariableDeclaration");
-+ __ mov(StackOperand(variable),
-+ Immediate(isolate()->factory()->the_hole_value()));
-+ }
-+ break;
-+
-+ case VariableLocation::CONTEXT:
-+ if (variable->binding_needs_init()) {
-+ Comment cmnt(masm_, "[ VariableDeclaration");
-+ EmitDebugCheckDeclarationContext(variable);
-+ __ mov(ContextOperand(esi, variable->index()),
-+ Immediate(isolate()->factory()->the_hole_value()));
-+ // No write barrier since the hole value is in old space.
-+ }
-+ break;
-+
-+ case VariableLocation::LOOKUP:
-+ case VariableLocation::MODULE:
-+ UNREACHABLE();
-+ }
-+}
-+
-+void FullCodeGenerator::VisitFunctionDeclaration(
-+ FunctionDeclaration* declaration) {
-+ VariableProxy* proxy = declaration->proxy();
-+ Variable* variable = proxy->var();
-+ switch (variable->location()) {
-+ case VariableLocation::UNALLOCATED: {
-+ globals_->Add(variable->name(), zone());
-+ FeedbackSlot slot = proxy->VariableFeedbackSlot();
-+ DCHECK(!slot.IsInvalid());
-+ globals_->Add(handle(Smi::FromInt(slot.ToInt()), isolate()), zone());
-+
-+ // We need the slot where the literals array lives, too.
-+ slot = declaration->fun()->LiteralFeedbackSlot();
-+ DCHECK(!slot.IsInvalid());
-+ globals_->Add(handle(Smi::FromInt(slot.ToInt()), isolate()), zone());
-+
-+ Handle<SharedFunctionInfo> function =
-+ Compiler::GetSharedFunctionInfo(declaration->fun(), script(), info_);
-+ // Check for stack-overflow exception.
-+ if (function.is_null()) return SetStackOverflow();
-+ globals_->Add(function, zone());
-+ break;
-+ }
-+
-+ case VariableLocation::PARAMETER:
-+ case VariableLocation::LOCAL: {
-+ Comment cmnt(masm_, "[ FunctionDeclaration");
-+ VisitForAccumulatorValue(declaration->fun());
-+ __ mov(StackOperand(variable), result_register());
-+ break;
-+ }
-+
-+ case VariableLocation::CONTEXT: {
-+ Comment cmnt(masm_, "[ FunctionDeclaration");
-+ EmitDebugCheckDeclarationContext(variable);
-+ VisitForAccumulatorValue(declaration->fun());
-+ __ mov(ContextOperand(esi, variable->index()), result_register());
-+ // We know that we have written a function, which is not a smi.
-+ __ RecordWriteContextSlot(esi, Context::SlotOffset(variable->index()),
-+ result_register(), ecx, kDontSaveFPRegs,
-+ EMIT_REMEMBERED_SET, OMIT_SMI_CHECK);
-+ break;
-+ }
-+
-+ case VariableLocation::LOOKUP:
-+ case VariableLocation::MODULE:
-+ UNREACHABLE();
-+ }
-+}
-+
-+
-+void FullCodeGenerator::DeclareGlobals(Handle<FixedArray> pairs) {
-+ // Call the runtime to declare the globals.
-+ __ Push(pairs);
-+ __ Push(Smi::FromInt(DeclareGlobalsFlags()));
-+ __ EmitLoadFeedbackVector(eax);
-+ __ Push(eax);
-+ __ CallRuntime(Runtime::kDeclareGlobals);
-+ // Return value is ignored.
-+}
-+
-+
-+void FullCodeGenerator::VisitSwitchStatement(SwitchStatement* stmt) {
-+ Comment cmnt(masm_, "[ SwitchStatement");
-+ Breakable nested_statement(this, stmt);
-+ SetStatementPosition(stmt);
-+
-+ // Keep the switch value on the stack until a case matches.
-+ VisitForStackValue(stmt->tag());
-+
-+ ZoneList<CaseClause*>* clauses = stmt->cases();
-+ CaseClause* default_clause = NULL; // Can occur anywhere in the list.
-+
-+ Label next_test; // Recycled for each test.
-+ // Compile all the tests with branches to their bodies.
-+ for (int i = 0; i < clauses->length(); i++) {
-+ CaseClause* clause = clauses->at(i);
-+ clause->body_target()->Unuse();
-+
-+ // The default is not a test, but remember it as final fall through.
-+ if (clause->is_default()) {
-+ default_clause = clause;
-+ continue;
-+ }
-+
-+ Comment cmnt(masm_, "[ Case comparison");
-+ __ bind(&next_test);
-+ next_test.Unuse();
-+
-+ // Compile the label expression.
-+ VisitForAccumulatorValue(clause->label());
-+
-+ // Perform the comparison as if via '==='.
-+ __ mov(edx, Operand(esp, 0)); // Switch value.
-+ bool inline_smi_code = ShouldInlineSmiCase(Token::EQ_STRICT);
-+ JumpPatchSite patch_site(masm_);
-+ if (inline_smi_code) {
-+ Label slow_case;
-+ __ mov(ecx, edx);
-+ __ or_(ecx, eax);
-+ patch_site.EmitJumpIfNotSmi(ecx, &slow_case, Label::kNear);
-+
-+ __ cmp(edx, eax);
-+ __ j(not_equal, &next_test);
-+ __ Drop(1); // Switch value is no longer needed.
-+ __ jmp(clause->body_target());
-+ __ bind(&slow_case);
-+ }
-+
-+ SetExpressionPosition(clause);
-+ Handle<Code> ic =
-+ CodeFactory::CompareIC(isolate(), Token::EQ_STRICT).code();
-+ CallIC(ic);
-+ patch_site.EmitPatchInfo();
-+
-+ Label skip;
-+ __ jmp(&skip, Label::kNear);
-+ __ cmp(eax, isolate()->factory()->true_value());
-+ __ j(not_equal, &next_test);
-+ __ Drop(1);
-+ __ jmp(clause->body_target());
-+ __ bind(&skip);
-+
-+ __ test(eax, eax);
-+ __ j(not_equal, &next_test);
-+ __ Drop(1); // Switch value is no longer needed.
-+ __ jmp(clause->body_target());
-+ }
-+
-+ // Discard the test value and jump to the default if present, otherwise to
-+ // the end of the statement.
-+ __ bind(&next_test);
-+ DropOperands(1); // Switch value is no longer needed.
-+ if (default_clause == NULL) {
-+ __ jmp(nested_statement.break_label());
-+ } else {
-+ __ jmp(default_clause->body_target());
-+ }
-+
-+ // Compile all the case bodies.
-+ for (int i = 0; i < clauses->length(); i++) {
-+ Comment cmnt(masm_, "[ Case body");
-+ CaseClause* clause = clauses->at(i);
-+ __ bind(clause->body_target());
-+ VisitStatements(clause->statements());
-+ }
-+
-+ __ bind(nested_statement.break_label());
-+}
-+
-+
-+void FullCodeGenerator::VisitForInStatement(ForInStatement* stmt) {
-+ Comment cmnt(masm_, "[ ForInStatement");
-+ SetStatementPosition(stmt, SKIP_BREAK);
-+
-+ FeedbackSlot slot = stmt->ForInFeedbackSlot();
-+
-+ // Get the object to enumerate over.
-+ SetExpressionAsStatementPosition(stmt->enumerable());
-+ VisitForAccumulatorValue(stmt->enumerable());
-+ OperandStackDepthIncrement(5);
-+
-+ Label loop, exit;
-+ Iteration loop_statement(this, stmt);
-+ increment_loop_depth();
-+
-+ // If the object is null or undefined, skip over the loop, otherwise convert
-+ // it to a JS receiver. See ECMA-262 version 5, section 12.6.4.
-+ Label convert, done_convert;
-+ __ JumpIfSmi(eax, &convert, Label::kNear);
-+ __ CmpObjectType(eax, FIRST_JS_RECEIVER_TYPE, ecx);
-+ __ j(above_equal, &done_convert, Label::kNear);
-+ __ cmp(eax, isolate()->factory()->undefined_value());
-+ __ j(equal, &exit);
-+ __ cmp(eax, isolate()->factory()->null_value());
-+ __ j(equal, &exit);
-+ __ bind(&convert);
-+ __ Call(isolate()->builtins()->ToObject(), RelocInfo::CODE_TARGET);
-+ RestoreContext();
-+ __ bind(&done_convert);
-+ __ push(eax);
-+
-+ // Check cache validity in generated code. If we cannot guarantee cache
-+ // validity, call the runtime system to check cache validity or get the
-+ // property names in a fixed array. Note: Proxies never have an enum cache,
-+ // so will always take the slow path.
-+ Label call_runtime, use_cache, fixed_array;
-+ __ CheckEnumCache(&call_runtime);
-+
-+ __ mov(eax, FieldOperand(eax, HeapObject::kMapOffset));
-+ __ jmp(&use_cache, Label::kNear);
-+
-+ // Get the set of properties to enumerate.
-+ __ bind(&call_runtime);
-+ __ push(eax);
-+ __ CallRuntime(Runtime::kForInEnumerate);
-+ __ cmp(FieldOperand(eax, HeapObject::kMapOffset),
-+ isolate()->factory()->meta_map());
-+ __ j(not_equal, &fixed_array);
-+
-+
-+ // We got a map in register eax. Get the enumeration cache from it.
-+ Label no_descriptors;
-+ __ bind(&use_cache);
-+
-+ __ EnumLength(edx, eax);
-+ __ cmp(edx, Immediate(Smi::kZero));
-+ __ j(equal, &no_descriptors);
-+
-+ __ LoadInstanceDescriptors(eax, ecx);
-+ __ mov(ecx, FieldOperand(ecx, DescriptorArray::kEnumCacheBridgeOffset));
-+ __ mov(ecx, FieldOperand(ecx, DescriptorArray::kEnumCacheBridgeCacheOffset));
-+
-+ // Set up the four remaining stack slots.
-+ __ push(eax); // Map.
-+ __ push(ecx); // Enumeration cache.
-+ __ push(edx); // Number of valid entries for the map in the enum cache.
-+ __ push(Immediate(Smi::kZero)); // Initial index.
-+ __ jmp(&loop);
-+
-+ __ bind(&no_descriptors);
-+ __ add(esp, Immediate(kPointerSize));
-+ __ jmp(&exit);
-+
-+ // We got a fixed array in register eax. Iterate through that.
-+ __ bind(&fixed_array);
-+
-+ __ push(Immediate(Smi::FromInt(1))); // Smi(1) indicates slow check
-+ __ push(eax); // Array
-+ __ mov(eax, FieldOperand(eax, FixedArray::kLengthOffset));
-+ __ push(eax); // Fixed array length (as smi).
-+ __ push(Immediate(Smi::kZero)); // Initial index.
-+
-+ // Generate code for doing the condition check.
-+ __ bind(&loop);
-+ SetExpressionAsStatementPosition(stmt->each());
-+
-+ __ mov(eax, Operand(esp, 0 * kPointerSize)); // Get the current index.
-+ __ cmp(eax, Operand(esp, 1 * kPointerSize)); // Compare to the array length.
-+ __ j(above_equal, loop_statement.break_label());
-+
-+ // Get the current entry of the array into register eax.
-+ __ mov(ebx, Operand(esp, 2 * kPointerSize));
-+ __ mov(eax, FieldOperand(ebx, eax, times_2, FixedArray::kHeaderSize));
-+
-+ // Get the expected map from the stack or a smi in the
-+ // permanent slow case into register edx.
-+ __ mov(edx, Operand(esp, 3 * kPointerSize));
-+
-+ // Check if the expected map still matches that of the enumerable.
-+ // If not, we may have to filter the key.
-+ Label update_each;
-+ __ mov(ebx, Operand(esp, 4 * kPointerSize));
-+ __ cmp(edx, FieldOperand(ebx, HeapObject::kMapOffset));
-+ __ j(equal, &update_each, Label::kNear);
-+
-+ // We need to filter the key, record slow-path here.
-+ int const vector_index = SmiFromSlot(slot)->value();
-+ __ EmitLoadFeedbackVector(edx);
-+ __ mov(FieldOperand(edx, FixedArray::OffsetOfElementAt(vector_index)),
-+ Immediate(FeedbackVector::MegamorphicSentinel(isolate())));
-+
-+ // eax contains the key. The receiver in ebx is the second argument to the
-+ // ForInFilter. ForInFilter returns undefined if the receiver doesn't
-+ // have the key or returns the name-converted key.
-+ __ Call(isolate()->builtins()->ForInFilter(), RelocInfo::CODE_TARGET);
-+ RestoreContext();
-+ __ JumpIfRoot(result_register(), Heap::kUndefinedValueRootIndex,
-+ loop_statement.continue_label());
-+
-+ // Update the 'each' property or variable from the possibly filtered
-+ // entry in register eax.
-+ __ bind(&update_each);
-+ // Perform the assignment as if via '='.
-+ { EffectContext context(this);
-+ EmitAssignment(stmt->each(), stmt->EachFeedbackSlot());
-+ }
-+
-+ // Generate code for the body of the loop.
-+ Visit(stmt->body());
-+
-+ // Generate code for going to the next element by incrementing the
-+ // index (smi) stored on top of the stack.
-+ __ bind(loop_statement.continue_label());
-+ __ add(Operand(esp, 0 * kPointerSize), Immediate(Smi::FromInt(1)));
-+
-+ EmitBackEdgeBookkeeping(stmt, &loop);
-+ __ jmp(&loop);
-+
-+ // Remove the pointers stored on the stack.
-+ __ bind(loop_statement.break_label());
-+ DropOperands(5);
-+
-+ // Exit and decrement the loop depth.
-+ __ bind(&exit);
-+ decrement_loop_depth();
-+}
-+
-+void FullCodeGenerator::EmitSetHomeObject(Expression* initializer, int offset,
-+ FeedbackSlot slot) {
-+ DCHECK(NeedsHomeObject(initializer));
-+ __ mov(StoreDescriptor::ReceiverRegister(), Operand(esp, 0));
-+ __ mov(StoreDescriptor::ValueRegister(), Operand(esp, offset * kPointerSize));
-+ CallStoreIC(slot, isolate()->factory()->home_object_symbol());
-+}
-+
-+void FullCodeGenerator::EmitSetHomeObjectAccumulator(Expression* initializer,
-+ int offset,
-+ FeedbackSlot slot) {
-+ DCHECK(NeedsHomeObject(initializer));
-+ __ mov(StoreDescriptor::ReceiverRegister(), eax);
-+ __ mov(StoreDescriptor::ValueRegister(), Operand(esp, offset * kPointerSize));
-+ CallStoreIC(slot, isolate()->factory()->home_object_symbol());
-+}
-+
-+void FullCodeGenerator::EmitVariableLoad(VariableProxy* proxy,
-+ TypeofMode typeof_mode) {
-+ SetExpressionPosition(proxy);
-+ Variable* var = proxy->var();
-+
-+ // Two cases: global variables and all other types of variables.
-+ switch (var->location()) {
-+ case VariableLocation::UNALLOCATED: {
-+ Comment cmnt(masm_, "[ Global variable");
-+ EmitGlobalVariableLoad(proxy, typeof_mode);
-+ context()->Plug(eax);
-+ break;
-+ }
-+
-+ case VariableLocation::PARAMETER:
-+ case VariableLocation::LOCAL:
-+ case VariableLocation::CONTEXT: {
-+ DCHECK_EQ(NOT_INSIDE_TYPEOF, typeof_mode);
-+ Comment cmnt(masm_, var->IsContextSlot() ? "[ Context variable"
-+ : "[ Stack variable");
-+
-+ if (proxy->hole_check_mode() == HoleCheckMode::kRequired) {
-+ // Throw a reference error when using an uninitialized let/const
-+ // binding in harmony mode.
-+ Label done;
-+ GetVar(eax, var);
-+ __ cmp(eax, isolate()->factory()->the_hole_value());
-+ __ j(not_equal, &done, Label::kNear);
-+ __ push(Immediate(var->name()));
-+ __ CallRuntime(Runtime::kThrowReferenceError);
-+ __ bind(&done);
-+ context()->Plug(eax);
-+ break;
-+ }
-+ context()->Plug(var);
-+ break;
-+ }
-+
-+ case VariableLocation::LOOKUP:
-+ case VariableLocation::MODULE:
-+ UNREACHABLE();
-+ }
-+}
-+
-+
-+void FullCodeGenerator::EmitAccessor(ObjectLiteralProperty* property) {
-+ Expression* expression = (property == NULL) ? NULL : property->value();
-+ if (expression == NULL) {
-+ PushOperand(isolate()->factory()->null_value());
-+ } else {
-+ VisitForStackValue(expression);
-+ if (NeedsHomeObject(expression)) {
-+ DCHECK(property->kind() == ObjectLiteral::Property::GETTER ||
-+ property->kind() == ObjectLiteral::Property::SETTER);
-+ int offset = property->kind() == ObjectLiteral::Property::GETTER ? 2 : 3;
-+ EmitSetHomeObject(expression, offset, property->GetSlot());
-+ }
-+ }
-+}
-+
-+
-+void FullCodeGenerator::VisitObjectLiteral(ObjectLiteral* expr) {
-+ Comment cmnt(masm_, "[ ObjectLiteral");
-+
-+ Handle<BoilerplateDescription> constant_properties =
-+ expr->GetOrBuildConstantProperties(isolate());
-+ int flags = expr->ComputeFlags();
-+ // If any of the keys would store to the elements array, then we shouldn't
-+ // allow it.
-+ if (MustCreateObjectLiteralWithRuntime(expr)) {
-+ __ push(Operand(ebp, JavaScriptFrameConstants::kFunctionOffset));
-+ __ push(Immediate(SmiFromSlot(expr->literal_slot())));
-+ __ push(Immediate(constant_properties));
-+ __ push(Immediate(Smi::FromInt(flags)));
-+ __ CallRuntime(Runtime::kCreateObjectLiteral);
-+ } else {
-+ __ mov(eax, Operand(ebp, JavaScriptFrameConstants::kFunctionOffset));
-+ __ mov(ebx, Immediate(SmiFromSlot(expr->literal_slot())));
-+ __ mov(ecx, Immediate(constant_properties));
-+ __ mov(edx, Immediate(Smi::FromInt(flags)));
-+ Callable callable =
-+ Builtins::CallableFor(isolate(), Builtins::kFastCloneShallowObject);
-+ __ Call(callable.code(), RelocInfo::CODE_TARGET);
-+ RestoreContext();
-+ }
-+
-+ // If result_saved is true the result is on top of the stack. If
-+ // result_saved is false the result is in eax.
-+ bool result_saved = false;
-+
-+ AccessorTable accessor_table(zone());
-+ for (int i = 0; i < expr->properties()->length(); i++) {
-+ ObjectLiteral::Property* property = expr->properties()->at(i);
-+ DCHECK(!property->is_computed_name());
-+ if (property->IsCompileTimeValue()) continue;
-+
-+ Literal* key = property->key()->AsLiteral();
-+ Expression* value = property->value();
-+ if (!result_saved) {
-+ PushOperand(eax); // Save result on the stack
-+ result_saved = true;
-+ }
-+ switch (property->kind()) {
-+ case ObjectLiteral::Property::SPREAD:
-+ case ObjectLiteral::Property::CONSTANT:
-+ UNREACHABLE();
-+ case ObjectLiteral::Property::MATERIALIZED_LITERAL:
-+ DCHECK(!CompileTimeValue::IsCompileTimeValue(value));
-+ // Fall through.
-+ case ObjectLiteral::Property::COMPUTED:
-+ // It is safe to use [[Put]] here because the boilerplate already
-+ // contains computed properties with an uninitialized value.
-+ if (key->IsStringLiteral()) {
-+ DCHECK(key->IsPropertyName());
-+ if (property->emit_store()) {
-+ VisitForAccumulatorValue(value);
-+ DCHECK(StoreDescriptor::ValueRegister().is(eax));
-+ __ mov(StoreDescriptor::ReceiverRegister(), Operand(esp, 0));
-+ CallStoreIC(property->GetSlot(0), key->value(), kStoreOwn);
-+ if (NeedsHomeObject(value)) {
-+ EmitSetHomeObjectAccumulator(value, 0, property->GetSlot(1));
-+ }
-+ } else {
-+ VisitForEffect(value);
-+ }
-+ break;
-+ }
-+ PushOperand(Operand(esp, 0)); // Duplicate receiver.
-+ VisitForStackValue(key);
-+ VisitForStackValue(value);
-+ if (property->emit_store()) {
-+ if (NeedsHomeObject(value)) {
-+ EmitSetHomeObject(value, 2, property->GetSlot());
-+ }
-+ PushOperand(Smi::FromInt(SLOPPY)); // Language mode
-+ CallRuntimeWithOperands(Runtime::kSetProperty);
-+ } else {
-+ DropOperands(3);
-+ }
-+ break;
-+ case ObjectLiteral::Property::PROTOTYPE:
-+ PushOperand(Operand(esp, 0)); // Duplicate receiver.
-+ VisitForStackValue(value);
-+ DCHECK(property->emit_store());
-+ CallRuntimeWithOperands(Runtime::kInternalSetPrototype);
-+ break;
-+ case ObjectLiteral::Property::GETTER:
-+ if (property->emit_store()) {
-+ AccessorTable::Iterator it = accessor_table.lookup(key);
-+ it->second->getter = property;
-+ }
-+ break;
-+ case ObjectLiteral::Property::SETTER:
-+ if (property->emit_store()) {
-+ AccessorTable::Iterator it = accessor_table.lookup(key);
-+ it->second->setter = property;
-+ }
-+ break;
-+ }
-+ }
-+
-+ // Emit code to define accessors, using only a single call to the runtime for
-+ // each pair of corresponding getters and setters.
-+ for (AccessorTable::Iterator it = accessor_table.begin();
-+ it != accessor_table.end();
-+ ++it) {
-+ PushOperand(Operand(esp, 0)); // Duplicate receiver.
-+ VisitForStackValue(it->first);
-+
-+ EmitAccessor(it->second->getter);
-+ EmitAccessor(it->second->setter);
-+
-+ PushOperand(Smi::FromInt(NONE));
-+ CallRuntimeWithOperands(Runtime::kDefineAccessorPropertyUnchecked);
-+ }
-+
-+ if (result_saved) {
-+ context()->PlugTOS();
-+ } else {
-+ context()->Plug(eax);
-+ }
-+}
-+
-+
-+void FullCodeGenerator::VisitArrayLiteral(ArrayLiteral* expr) {
-+ Comment cmnt(masm_, "[ ArrayLiteral");
-+
-+ Handle<ConstantElementsPair> constant_elements =
-+ expr->GetOrBuildConstantElements(isolate());
-+
-+ if (MustCreateArrayLiteralWithRuntime(expr)) {
-+ __ push(Operand(ebp, JavaScriptFrameConstants::kFunctionOffset));
-+ __ push(Immediate(SmiFromSlot(expr->literal_slot())));
-+ __ push(Immediate(constant_elements));
-+ __ push(Immediate(Smi::FromInt(expr->ComputeFlags())));
-+ __ CallRuntime(Runtime::kCreateArrayLiteral);
-+ } else {
-+ __ mov(eax, Operand(ebp, JavaScriptFrameConstants::kFunctionOffset));
-+ __ mov(ebx, Immediate(SmiFromSlot(expr->literal_slot())));
-+ __ mov(ecx, Immediate(constant_elements));
-+ Callable callable =
-+ CodeFactory::FastCloneShallowArray(isolate(), TRACK_ALLOCATION_SITE);
-+ __ Call(callable.code(), RelocInfo::CODE_TARGET);
-+ RestoreContext();
-+ }
-+
-+ bool result_saved = false; // Is the result saved to the stack?
-+ ZoneList<Expression*>* subexprs = expr->values();
-+ int length = subexprs->length();
-+
-+ // Emit code to evaluate all the non-constant subexpressions and to store
-+ // them into the newly cloned array.
-+ for (int array_index = 0; array_index < length; array_index++) {
-+ Expression* subexpr = subexprs->at(array_index);
-+ DCHECK(!subexpr->IsSpread());
-+
-+ // If the subexpression is a literal or a simple materialized literal it
-+ // is already set in the cloned array.
-+ if (CompileTimeValue::IsCompileTimeValue(subexpr)) continue;
-+
-+ if (!result_saved) {
-+ PushOperand(eax); // array literal.
-+ result_saved = true;
-+ }
-+ VisitForAccumulatorValue(subexpr);
-+
-+ __ mov(StoreDescriptor::NameRegister(),
-+ Immediate(Smi::FromInt(array_index)));
-+ __ mov(StoreDescriptor::ReceiverRegister(), Operand(esp, 0));
-+ CallKeyedStoreIC(expr->LiteralFeedbackSlot());
-+ }
-+
-+ if (result_saved) {
-+ context()->PlugTOS();
-+ } else {
-+ context()->Plug(eax);
-+ }
-+}
-+
-+
-+void FullCodeGenerator::VisitAssignment(Assignment* expr) {
-+ DCHECK(expr->target()->IsValidReferenceExpressionOrThis());
-+
-+ Comment cmnt(masm_, "[ Assignment");
-+
-+ Property* property = expr->target()->AsProperty();
-+ LhsKind assign_type = Property::GetAssignType(property);
-+
-+ // Evaluate LHS expression.
-+ switch (assign_type) {
-+ case VARIABLE:
-+ // Nothing to do here.
-+ break;
-+ case NAMED_PROPERTY:
-+ if (expr->is_compound()) {
-+ // We need the receiver both on the stack and in the register.
-+ VisitForStackValue(property->obj());
-+ __ mov(LoadDescriptor::ReceiverRegister(), Operand(esp, 0));
-+ } else {
-+ VisitForStackValue(property->obj());
-+ }
-+ break;
-+ case KEYED_PROPERTY: {
-+ if (expr->is_compound()) {
-+ VisitForStackValue(property->obj());
-+ VisitForStackValue(property->key());
-+ __ mov(LoadDescriptor::ReceiverRegister(), Operand(esp, kPointerSize));
-+ __ mov(LoadDescriptor::NameRegister(), Operand(esp, 0));
-+ } else {
-+ VisitForStackValue(property->obj());
-+ VisitForStackValue(property->key());
-+ }
-+ break;
-+ }
-+ case NAMED_SUPER_PROPERTY:
-+ case KEYED_SUPER_PROPERTY:
-+ UNREACHABLE();
-+ break;
-+ }
-+
-+ // For compound assignments we need another deoptimization point after the
-+ // variable/property load.
-+ if (expr->is_compound()) {
-+ AccumulatorValueContext result_context(this);
-+ { AccumulatorValueContext left_operand_context(this);
-+ switch (assign_type) {
-+ case VARIABLE:
-+ EmitVariableLoad(expr->target()->AsVariableProxy());
-+ break;
-+ case NAMED_PROPERTY:
-+ EmitNamedPropertyLoad(property);
-+ break;
-+ case KEYED_PROPERTY:
-+ EmitKeyedPropertyLoad(property);
-+ break;
-+ case NAMED_SUPER_PROPERTY:
-+ case KEYED_SUPER_PROPERTY:
-+ UNREACHABLE();
-+ break;
-+ }
-+ }
-+
-+ Token::Value op = expr->binary_op();
-+ PushOperand(eax); // Left operand goes on the stack.
-+ VisitForAccumulatorValue(expr->value());
-+
-+ EmitBinaryOp(expr->binary_operation(), op);
-+ } else {
-+ VisitForAccumulatorValue(expr->value());
-+ }
-+
-+ SetExpressionPosition(expr);
-+
-+ // Store the value.
-+ switch (assign_type) {
-+ case VARIABLE: {
-+ VariableProxy* proxy = expr->target()->AsVariableProxy();
-+ EmitVariableAssignment(proxy->var(), expr->op(), expr->AssignmentSlot(),
-+ proxy->hole_check_mode());
-+ context()->Plug(eax);
-+ break;
-+ }
-+ case NAMED_PROPERTY:
-+ EmitNamedPropertyAssignment(expr);
-+ break;
-+ case KEYED_PROPERTY:
-+ EmitKeyedPropertyAssignment(expr);
-+ break;
-+ case NAMED_SUPER_PROPERTY:
-+ case KEYED_SUPER_PROPERTY:
-+ UNREACHABLE();
-+ break;
-+ }
-+}
-+
-+void FullCodeGenerator::PushOperand(MemOperand operand) {
-+ OperandStackDepthIncrement(1);
-+ __ Push(operand);
-+}
-+
-+void FullCodeGenerator::EmitOperandStackDepthCheck() {
-+ if (FLAG_debug_code) {
-+ int expected_diff = StandardFrameConstants::kFixedFrameSizeFromFp +
-+ operand_stack_depth_ * kPointerSize;
-+ __ mov(eax, ebp);
-+ __ sub(eax, esp);
-+ __ cmp(eax, Immediate(expected_diff));
-+ __ Assert(equal, kUnexpectedStackDepth);
-+ }
-+}
-+
-+
-+void FullCodeGenerator::EmitBinaryOp(BinaryOperation* expr, Token::Value op) {
-+ PopOperand(edx);
-+ Handle<Code> code = CodeFactory::BinaryOperation(isolate(), op).code();
-+ __ Call(code, RelocInfo::CODE_TARGET);
-+ RestoreContext();
-+ context()->Plug(eax);
-+}
-+
-+void FullCodeGenerator::EmitAssignment(Expression* expr, FeedbackSlot slot) {
-+ DCHECK(expr->IsValidReferenceExpressionOrThis());
-+
-+ Property* prop = expr->AsProperty();
-+ LhsKind assign_type = Property::GetAssignType(prop);
-+
-+ switch (assign_type) {
-+ case VARIABLE: {
-+ VariableProxy* proxy = expr->AsVariableProxy();
-+ EffectContext context(this);
-+ EmitVariableAssignment(proxy->var(), Token::ASSIGN, slot,
-+ proxy->hole_check_mode());
-+ break;
-+ }
-+ case NAMED_PROPERTY: {
-+ PushOperand(eax); // Preserve value.
-+ VisitForAccumulatorValue(prop->obj());
-+ __ Move(StoreDescriptor::ReceiverRegister(), eax);
-+ PopOperand(StoreDescriptor::ValueRegister()); // Restore value.
-+ CallStoreIC(slot, prop->key()->AsLiteral()->value());
-+ break;
-+ }
-+ case KEYED_PROPERTY: {
-+ PushOperand(eax); // Preserve value.
-+ VisitForStackValue(prop->obj());
-+ VisitForAccumulatorValue(prop->key());
-+ __ Move(StoreDescriptor::NameRegister(), eax);
-+ PopOperand(StoreDescriptor::ReceiverRegister()); // Receiver.
-+ PopOperand(StoreDescriptor::ValueRegister()); // Restore value.
-+ CallKeyedStoreIC(slot);
-+ break;
-+ }
-+ case NAMED_SUPER_PROPERTY:
-+ case KEYED_SUPER_PROPERTY:
-+ UNREACHABLE();
-+ break;
-+ }
-+ context()->Plug(eax);
-+}
-+
-+
-+void FullCodeGenerator::EmitStoreToStackLocalOrContextSlot(
-+ Variable* var, MemOperand location) {
-+ __ mov(location, eax);
-+ if (var->IsContextSlot()) {
-+ __ mov(edx, eax);
-+ int offset = Context::SlotOffset(var->index());
-+ __ RecordWriteContextSlot(ecx, offset, edx, ebx, kDontSaveFPRegs);
-+ }
-+}
-+
-+void FullCodeGenerator::EmitVariableAssignment(Variable* var, Token::Value op,
-+ FeedbackSlot slot,
-+ HoleCheckMode hole_check_mode) {
-+ if (var->IsUnallocated()) {
-+ // Global var, const, or let.
-+ __ mov(StoreDescriptor::ReceiverRegister(), NativeContextOperand());
-+ __ mov(StoreDescriptor::ReceiverRegister(),
-+ ContextOperand(StoreDescriptor::ReceiverRegister(),
-+ Context::EXTENSION_INDEX));
-+ CallStoreIC(slot, var->name(), kStoreGlobal);
-+
-+ } else if (IsLexicalVariableMode(var->mode()) && op != Token::INIT) {
-+ DCHECK(!var->IsLookupSlot());
-+ DCHECK(var->IsStackAllocated() || var->IsContextSlot());
-+ MemOperand location = VarOperand(var, ecx);
-+ // Perform an initialization check for lexically declared variables.
-+ if (hole_check_mode == HoleCheckMode::kRequired) {
-+ Label assign;
-+ __ mov(edx, location);
-+ __ cmp(edx, isolate()->factory()->the_hole_value());
-+ __ j(not_equal, &assign, Label::kNear);
-+ __ push(Immediate(var->name()));
-+ __ CallRuntime(Runtime::kThrowReferenceError);
-+ __ bind(&assign);
-+ }
-+ if (var->mode() != CONST) {
-+ EmitStoreToStackLocalOrContextSlot(var, location);
-+ } else if (var->throw_on_const_assignment(language_mode())) {
-+ __ CallRuntime(Runtime::kThrowConstAssignError);
-+ }
-+ } else if (var->is_this() && var->mode() == CONST && op == Token::INIT) {
-+ // Initializing assignment to const {this} needs a write barrier.
-+ DCHECK(var->IsStackAllocated() || var->IsContextSlot());
-+ Label uninitialized_this;
-+ MemOperand location = VarOperand(var, ecx);
-+ __ mov(edx, location);
-+ __ cmp(edx, isolate()->factory()->the_hole_value());
-+ __ j(equal, &uninitialized_this);
-+ __ push(Immediate(var->name()));
-+ __ CallRuntime(Runtime::kThrowReferenceError);
-+ __ bind(&uninitialized_this);
-+ EmitStoreToStackLocalOrContextSlot(var, location);
-+
-+ } else {
-+ DCHECK(var->mode() != CONST || op == Token::INIT);
-+ DCHECK(var->IsStackAllocated() || var->IsContextSlot());
-+ DCHECK(!var->IsLookupSlot());
-+ // Assignment to var or initializing assignment to let/const in harmony
-+ // mode.
-+ MemOperand location = VarOperand(var, ecx);
-+ EmitStoreToStackLocalOrContextSlot(var, location);
-+ }
-+}
-+
-+
-+void FullCodeGenerator::EmitNamedPropertyAssignment(Assignment* expr) {
-+ // Assignment to a property, using a named store IC.
-+ // eax : value
-+ // esp[0] : receiver
-+ Property* prop = expr->target()->AsProperty();
-+ DCHECK(prop != NULL);
-+ DCHECK(prop->key()->IsLiteral());
-+
-+ PopOperand(StoreDescriptor::ReceiverRegister());
-+ CallStoreIC(expr->AssignmentSlot(), prop->key()->AsLiteral()->value());
-+ context()->Plug(eax);
-+}
-+
-+
-+void FullCodeGenerator::EmitKeyedPropertyAssignment(Assignment* expr) {
-+ // Assignment to a property, using a keyed store IC.
-+ // eax : value
-+ // esp[0] : key
-+ // esp[kPointerSize] : receiver
-+
-+ PopOperand(StoreDescriptor::NameRegister()); // Key.
-+ PopOperand(StoreDescriptor::ReceiverRegister());
-+ DCHECK(StoreDescriptor::ValueRegister().is(eax));
-+ CallKeyedStoreIC(expr->AssignmentSlot());
-+ context()->Plug(eax);
-+}
-+
-+// Code common for calls using the IC.
-+void FullCodeGenerator::EmitCallWithLoadIC(Call* expr) {
-+ Expression* callee = expr->expression();
-+
-+ // Get the target function.
-+ ConvertReceiverMode convert_mode;
-+ if (callee->IsVariableProxy()) {
-+ { StackValueContext context(this);
-+ EmitVariableLoad(callee->AsVariableProxy());
-+ }
-+ // Push undefined as receiver. This is patched in the method prologue if it
-+ // is a sloppy mode method.
-+ PushOperand(isolate()->factory()->undefined_value());
-+ convert_mode = ConvertReceiverMode::kNullOrUndefined;
-+ } else {
-+ // Load the function from the receiver.
-+ DCHECK(callee->IsProperty());
-+ DCHECK(!callee->AsProperty()->IsSuperAccess());
-+ __ mov(LoadDescriptor::ReceiverRegister(), Operand(esp, 0));
-+ EmitNamedPropertyLoad(callee->AsProperty());
-+ // Push the target function under the receiver.
-+ PushOperand(Operand(esp, 0));
-+ __ mov(Operand(esp, kPointerSize), eax);
-+ convert_mode = ConvertReceiverMode::kNotNullOrUndefined;
-+ }
-+
-+ EmitCall(expr, convert_mode);
-+}
-+
-+
-+// Code common for calls using the IC.
-+void FullCodeGenerator::EmitKeyedCallWithLoadIC(Call* expr,
-+ Expression* key) {
-+ // Load the key.
-+ VisitForAccumulatorValue(key);
-+
-+ Expression* callee = expr->expression();
-+
-+ // Load the function from the receiver.
-+ DCHECK(callee->IsProperty());
-+ __ mov(LoadDescriptor::ReceiverRegister(), Operand(esp, 0));
-+ __ mov(LoadDescriptor::NameRegister(), eax);
-+ EmitKeyedPropertyLoad(callee->AsProperty());
-+
-+ // Push the target function under the receiver.
-+ PushOperand(Operand(esp, 0));
-+ __ mov(Operand(esp, kPointerSize), eax);
-+
-+ EmitCall(expr, ConvertReceiverMode::kNotNullOrUndefined);
-+}
-+
-+
-+void FullCodeGenerator::EmitCall(Call* expr, ConvertReceiverMode mode) {
-+ // Load the arguments.
-+ ZoneList<Expression*>* args = expr->arguments();
-+ int arg_count = args->length();
-+ for (int i = 0; i < arg_count; i++) {
-+ VisitForStackValue(args->at(i));
-+ }
-+
-+ SetCallPosition(expr);
-+ Handle<Code> code = CodeFactory::CallICTrampoline(isolate(), mode).code();
-+ __ Move(edx, Immediate(SmiFromSlot(expr->CallFeedbackICSlot())));
-+ __ mov(edi, Operand(esp, (arg_count + 1) * kPointerSize));
-+ __ Move(eax, Immediate(arg_count));
-+ CallIC(code);
-+ OperandStackDepthDecrement(arg_count + 1);
-+
-+ RestoreContext();
-+ context()->DropAndPlug(1, eax);
-+}
-+
-+void FullCodeGenerator::VisitCallNew(CallNew* expr) {
-+ Comment cmnt(masm_, "[ CallNew");
-+ // According to ECMA-262, section 11.2.2, page 44, the function
-+ // expression in new calls must be evaluated before the
-+ // arguments.
-+
-+ // Push constructor on the stack. If it's not a function it's used as
-+ // receiver for CALL_NON_FUNCTION, otherwise the value on the stack is
-+ // ignored.
-+ DCHECK(!expr->expression()->IsSuperPropertyReference());
-+ VisitForStackValue(expr->expression());
-+
-+ // Push the arguments ("left-to-right") on the stack.
-+ ZoneList<Expression*>* args = expr->arguments();
-+ int arg_count = args->length();
-+ for (int i = 0; i < arg_count; i++) {
-+ VisitForStackValue(args->at(i));
-+ }
-+
-+ // Call the construct call builtin that handles allocation and
-+ // constructor invocation.
-+ SetConstructCallPosition(expr);
-+
-+ // Load function and argument count into edi and eax.
-+ __ Move(eax, Immediate(arg_count));
-+ __ mov(edi, Operand(esp, arg_count * kPointerSize));
-+
-+ // Record call targets in unoptimized code.
-+ __ EmitLoadFeedbackVector(ebx);
-+ __ mov(edx, Immediate(SmiFromSlot(expr->CallNewFeedbackSlot())));
-+
-+ CallConstructStub stub(isolate());
-+ CallIC(stub.GetCode());
-+ OperandStackDepthDecrement(arg_count + 1);
-+ RestoreContext();
-+ context()->Plug(eax);
-+}
-+
-+
-+void FullCodeGenerator::EmitIsSmi(CallRuntime* expr) {
-+ ZoneList<Expression*>* args = expr->arguments();
-+ DCHECK(args->length() == 1);
-+
-+ VisitForAccumulatorValue(args->at(0));
-+
-+ Label materialize_true, materialize_false;
-+ Label* if_true = NULL;
-+ Label* if_false = NULL;
-+ Label* fall_through = NULL;
-+ context()->PrepareTest(&materialize_true, &materialize_false,
-+ &if_true, &if_false, &fall_through);
-+
-+ __ test(eax, Immediate(kSmiTagMask));
-+ Split(zero, if_true, if_false, fall_through);
-+
-+ context()->Plug(if_true, if_false);
-+}
-+
-+
-+void FullCodeGenerator::EmitIsJSReceiver(CallRuntime* expr) {
-+ ZoneList<Expression*>* args = expr->arguments();
-+ DCHECK(args->length() == 1);
-+
-+ VisitForAccumulatorValue(args->at(0));
-+
-+ Label materialize_true, materialize_false;
-+ Label* if_true = NULL;
-+ Label* if_false = NULL;
-+ Label* fall_through = NULL;
-+ context()->PrepareTest(&materialize_true, &materialize_false,
-+ &if_true, &if_false, &fall_through);
-+
-+ __ JumpIfSmi(eax, if_false);
-+ __ CmpObjectType(eax, FIRST_JS_RECEIVER_TYPE, ebx);
-+ Split(above_equal, if_true, if_false, fall_through);
-+
-+ context()->Plug(if_true, if_false);
-+}
-+
-+
-+void FullCodeGenerator::EmitIsArray(CallRuntime* expr) {
-+ ZoneList<Expression*>* args = expr->arguments();
-+ DCHECK(args->length() == 1);
-+
-+ VisitForAccumulatorValue(args->at(0));
-+
-+ Label materialize_true, materialize_false;
-+ Label* if_true = NULL;
-+ Label* if_false = NULL;
-+ Label* fall_through = NULL;
-+ context()->PrepareTest(&materialize_true, &materialize_false,
-+ &if_true, &if_false, &fall_through);
-+
-+ __ JumpIfSmi(eax, if_false);
-+ __ CmpObjectType(eax, JS_ARRAY_TYPE, ebx);
-+ Split(equal, if_true, if_false, fall_through);
-+
-+ context()->Plug(if_true, if_false);
-+}
-+
-+
-+void FullCodeGenerator::EmitIsTypedArray(CallRuntime* expr) {
-+ ZoneList<Expression*>* args = expr->arguments();
-+ DCHECK(args->length() == 1);
-+
-+ VisitForAccumulatorValue(args->at(0));
-+
-+ Label materialize_true, materialize_false;
-+ Label* if_true = NULL;
-+ Label* if_false = NULL;
-+ Label* fall_through = NULL;
-+ context()->PrepareTest(&materialize_true, &materialize_false, &if_true,
-+ &if_false, &fall_through);
-+
-+ __ JumpIfSmi(eax, if_false);
-+ __ CmpObjectType(eax, JS_TYPED_ARRAY_TYPE, ebx);
-+ Split(equal, if_true, if_false, fall_through);
-+
-+ context()->Plug(if_true, if_false);
-+}
-+
-+
-+void FullCodeGenerator::EmitIsJSProxy(CallRuntime* expr) {
-+ ZoneList<Expression*>* args = expr->arguments();
-+ DCHECK(args->length() == 1);
-+
-+ VisitForAccumulatorValue(args->at(0));
-+
-+ Label materialize_true, materialize_false;
-+ Label* if_true = NULL;
-+ Label* if_false = NULL;
-+ Label* fall_through = NULL;
-+ context()->PrepareTest(&materialize_true, &materialize_false, &if_true,
-+ &if_false, &fall_through);
-+
-+ __ JumpIfSmi(eax, if_false);
-+ __ CmpObjectType(eax, JS_PROXY_TYPE, ebx);
-+ Split(equal, if_true, if_false, fall_through);
-+
-+ context()->Plug(if_true, if_false);
-+}
-+
-+void FullCodeGenerator::EmitClassOf(CallRuntime* expr) {
-+ ZoneList<Expression*>* args = expr->arguments();
-+ DCHECK(args->length() == 1);
-+ Label done, null, function, non_function_constructor;
-+
-+ VisitForAccumulatorValue(args->at(0));
-+
-+ // If the object is not a JSReceiver, we return null.
-+ __ JumpIfSmi(eax, &null, Label::kNear);
-+ STATIC_ASSERT(LAST_JS_RECEIVER_TYPE == LAST_TYPE);
-+ __ CmpObjectType(eax, FIRST_JS_RECEIVER_TYPE, eax);
-+ __ j(below, &null, Label::kNear);
-+
-+ // Return 'Function' for JSFunction and JSBoundFunction objects.
-+ __ CmpInstanceType(eax, FIRST_FUNCTION_TYPE);
-+ STATIC_ASSERT(LAST_FUNCTION_TYPE == LAST_TYPE);
-+ __ j(above_equal, &function, Label::kNear);
-+
-+ // Check if the constructor in the map is a JS function.
-+ __ GetMapConstructor(eax, eax, ebx);
-+ __ CmpInstanceType(ebx, JS_FUNCTION_TYPE);
-+ __ j(not_equal, &non_function_constructor, Label::kNear);
-+
-+ // eax now contains the constructor function. Grab the
-+ // instance class name from there.
-+ __ mov(eax, FieldOperand(eax, JSFunction::kSharedFunctionInfoOffset));
-+ __ mov(eax, FieldOperand(eax, SharedFunctionInfo::kInstanceClassNameOffset));
-+ __ jmp(&done, Label::kNear);
-+
-+ // Non-JS objects have class null.
-+ __ bind(&null);
-+ __ mov(eax, isolate()->factory()->null_value());
-+ __ jmp(&done, Label::kNear);
-+
-+ // Functions have class 'Function'.
-+ __ bind(&function);
-+ __ mov(eax, isolate()->factory()->Function_string());
-+ __ jmp(&done, Label::kNear);
-+
-+ // Objects with a non-function constructor have class 'Object'.
-+ __ bind(&non_function_constructor);
-+ __ mov(eax, isolate()->factory()->Object_string());
-+
-+ // All done.
-+ __ bind(&done);
-+
-+ context()->Plug(eax);
-+}
-+
-+void FullCodeGenerator::EmitStringCharCodeAt(CallRuntime* expr) {
-+ ZoneList<Expression*>* args = expr->arguments();
-+ DCHECK(args->length() == 2);
-+
-+ VisitForStackValue(args->at(0));
-+ VisitForAccumulatorValue(args->at(1));
-+
-+ Register object = ebx;
-+ Register index = eax;
-+ Register result = edx;
-+
-+ PopOperand(object);
-+
-+ Label need_conversion;
-+ Label index_out_of_range;
-+ Label done;
-+ StringCharCodeAtGenerator generator(object, index, result, &need_conversion,
-+ &need_conversion, &index_out_of_range);
-+ generator.GenerateFast(masm_);
-+ __ jmp(&done);
-+
-+ __ bind(&index_out_of_range);
-+ // When the index is out of range, the spec requires us to return
-+ // NaN.
-+ __ Move(result, Immediate(isolate()->factory()->nan_value()));
-+ __ jmp(&done);
-+
-+ __ bind(&need_conversion);
-+ // Move the undefined value into the result register, which will
-+ // trigger conversion.
-+ __ Move(result, Immediate(isolate()->factory()->undefined_value()));
-+ __ jmp(&done);
-+
-+ NopRuntimeCallHelper call_helper;
-+ generator.GenerateSlow(masm_, NOT_PART_OF_IC_HANDLER, call_helper);
-+
-+ __ bind(&done);
-+ context()->Plug(result);
-+}
-+
-+
-+void FullCodeGenerator::EmitCall(CallRuntime* expr) {
-+ ZoneList<Expression*>* args = expr->arguments();
-+ DCHECK_LE(2, args->length());
-+ // Push target, receiver and arguments onto the stack.
-+ for (Expression* const arg : *args) {
-+ VisitForStackValue(arg);
-+ }
-+ // Move target to edi.
-+ int const argc = args->length() - 2;
-+ __ mov(edi, Operand(esp, (argc + 1) * kPointerSize));
-+ // Call the target.
-+ __ mov(eax, Immediate(argc));
-+ __ Call(isolate()->builtins()->Call(), RelocInfo::CODE_TARGET);
-+ OperandStackDepthDecrement(argc + 1);
-+ RestoreContext();
-+ // Discard the function left on TOS.
-+ context()->DropAndPlug(1, eax);
-+}
-+
-+void FullCodeGenerator::EmitGetSuperConstructor(CallRuntime* expr) {
-+ ZoneList<Expression*>* args = expr->arguments();
-+ DCHECK_EQ(1, args->length());
-+ VisitForAccumulatorValue(args->at(0));
-+ __ AssertFunction(eax);
-+ __ mov(eax, FieldOperand(eax, HeapObject::kMapOffset));
-+ __ mov(eax, FieldOperand(eax, Map::kPrototypeOffset));
-+ context()->Plug(eax);
-+}
-+
-+void FullCodeGenerator::EmitDebugIsActive(CallRuntime* expr) {
-+ DCHECK(expr->arguments()->length() == 0);
-+ ExternalReference debug_is_active =
-+ ExternalReference::debug_is_active_address(isolate());
-+ __ movzx_b(eax, Operand::StaticVariable(debug_is_active));
-+ __ SmiTag(eax);
-+ context()->Plug(eax);
-+}
-+
-+
-+void FullCodeGenerator::EmitLoadJSRuntimeFunction(CallRuntime* expr) {
-+ // Push function.
-+ __ LoadGlobalFunction(expr->context_index(), eax);
-+ PushOperand(eax);
-+
-+ // Push undefined as receiver.
-+ PushOperand(isolate()->factory()->undefined_value());
-+}
-+
-+
-+void FullCodeGenerator::EmitCallJSRuntimeFunction(CallRuntime* expr) {
-+ ZoneList<Expression*>* args = expr->arguments();
-+ int arg_count = args->length();
-+
-+ SetCallPosition(expr);
-+ __ mov(edi, Operand(esp, (arg_count + 1) * kPointerSize));
-+ __ Set(eax, arg_count);
-+ __ Call(isolate()->builtins()->Call(ConvertReceiverMode::kNullOrUndefined),
-+ RelocInfo::CODE_TARGET);
-+ OperandStackDepthDecrement(arg_count + 1);
-+ RestoreContext();
-+}
-+
-+
-+void FullCodeGenerator::VisitUnaryOperation(UnaryOperation* expr) {
-+ switch (expr->op()) {
-+ case Token::DELETE: {
-+ Comment cmnt(masm_, "[ UnaryOperation (DELETE)");
-+ Property* property = expr->expression()->AsProperty();
-+ VariableProxy* proxy = expr->expression()->AsVariableProxy();
-+
-+ if (property != NULL) {
-+ VisitForStackValue(property->obj());
-+ VisitForStackValue(property->key());
-+ PushOperand(Smi::FromInt(language_mode()));
-+ CallRuntimeWithOperands(Runtime::kDeleteProperty);
-+ context()->Plug(eax);
-+ } else if (proxy != NULL) {
-+ Variable* var = proxy->var();
-+ // Delete of an unqualified identifier is disallowed in strict mode but
-+ // "delete this" is allowed.
-+ bool is_this = var->is_this();
-+ DCHECK(is_sloppy(language_mode()) || is_this);
-+ if (var->IsUnallocated()) {
-+ __ mov(eax, NativeContextOperand());
-+ __ push(ContextOperand(eax, Context::EXTENSION_INDEX));
-+ __ push(Immediate(var->name()));
-+ __ Push(Smi::FromInt(SLOPPY));
-+ __ CallRuntime(Runtime::kDeleteProperty);
-+ context()->Plug(eax);
-+ } else {
-+ DCHECK(!var->IsLookupSlot());
-+ DCHECK(var->IsStackAllocated() || var->IsContextSlot());
-+ // Result of deleting non-global variables is false. 'this' is
-+ // not really a variable, though we implement it as one. The
-+ // subexpression does not have side effects.
-+ context()->Plug(is_this);
-+ }
-+ } else {
-+ // Result of deleting non-property, non-variable reference is true.
-+ // The subexpression may have side effects.
-+ VisitForEffect(expr->expression());
-+ context()->Plug(true);
-+ }
-+ break;
-+ }
-+
-+ case Token::VOID: {
-+ Comment cmnt(masm_, "[ UnaryOperation (VOID)");
-+ VisitForEffect(expr->expression());
-+ context()->Plug(isolate()->factory()->undefined_value());
-+ break;
-+ }
-+
-+ case Token::NOT: {
-+ Comment cmnt(masm_, "[ UnaryOperation (NOT)");
-+ if (context()->IsEffect()) {
-+ // Unary NOT has no side effects so it's only necessary to visit the
-+ // subexpression. Match the optimizing compiler by not branching.
-+ VisitForEffect(expr->expression());
-+ } else if (context()->IsTest()) {
-+ const TestContext* test = TestContext::cast(context());
-+ // The labels are swapped for the recursive call.
-+ VisitForControl(expr->expression(),
-+ test->false_label(),
-+ test->true_label(),
-+ test->fall_through());
-+ context()->Plug(test->true_label(), test->false_label());
-+ } else {
-+ // We handle value contexts explicitly rather than simply visiting
-+ // for control and plugging the control flow into the context,
-+ // because we need to prepare a pair of extra administrative AST ids
-+ // for the optimizing compiler.
-+ DCHECK(context()->IsAccumulatorValue() || context()->IsStackValue());
-+ Label materialize_true, materialize_false, done;
-+ VisitForControl(expr->expression(),
-+ &materialize_false,
-+ &materialize_true,
-+ &materialize_true);
-+ if (!context()->IsAccumulatorValue()) OperandStackDepthIncrement(1);
-+ __ bind(&materialize_true);
-+ if (context()->IsAccumulatorValue()) {
-+ __ mov(eax, isolate()->factory()->true_value());
-+ } else {
-+ __ Push(isolate()->factory()->true_value());
-+ }
-+ __ jmp(&done, Label::kNear);
-+ __ bind(&materialize_false);
-+ if (context()->IsAccumulatorValue()) {
-+ __ mov(eax, isolate()->factory()->false_value());
-+ } else {
-+ __ Push(isolate()->factory()->false_value());
-+ }
-+ __ bind(&done);
-+ }
-+ break;
-+ }
-+
-+ case Token::TYPEOF: {
-+ Comment cmnt(masm_, "[ UnaryOperation (TYPEOF)");
-+ {
-+ AccumulatorValueContext context(this);
-+ VisitForTypeofValue(expr->expression());
-+ }
-+ __ mov(ebx, eax);
-+ __ Call(isolate()->builtins()->Typeof(), RelocInfo::CODE_TARGET);
-+ context()->Plug(eax);
-+ break;
-+ }
-+
-+ default:
-+ UNREACHABLE();
-+ }
-+}
-+
-+
-+void FullCodeGenerator::VisitCountOperation(CountOperation* expr) {
-+ DCHECK(expr->expression()->IsValidReferenceExpressionOrThis());
-+
-+ Comment cmnt(masm_, "[ CountOperation");
-+
-+ Property* prop = expr->expression()->AsProperty();
-+ LhsKind assign_type = Property::GetAssignType(prop);
-+
-+ // Evaluate expression and get value.
-+ if (assign_type == VARIABLE) {
-+ DCHECK(expr->expression()->AsVariableProxy()->var() != NULL);
-+ AccumulatorValueContext context(this);
-+ EmitVariableLoad(expr->expression()->AsVariableProxy());
-+ } else {
-+ // Reserve space for result of postfix operation.
-+ if (expr->is_postfix() && !context()->IsEffect()) {
-+ PushOperand(Smi::kZero);
-+ }
-+ switch (assign_type) {
-+ case NAMED_PROPERTY: {
-+ // Put the object both on the stack and in the register.
-+ VisitForStackValue(prop->obj());
-+ __ mov(LoadDescriptor::ReceiverRegister(), Operand(esp, 0));
-+ EmitNamedPropertyLoad(prop);
-+ break;
-+ }
-+
-+ case KEYED_PROPERTY: {
-+ VisitForStackValue(prop->obj());
-+ VisitForStackValue(prop->key());
-+ __ mov(LoadDescriptor::ReceiverRegister(),
-+ Operand(esp, kPointerSize)); // Object.
-+ __ mov(LoadDescriptor::NameRegister(), Operand(esp, 0)); // Key.
-+ EmitKeyedPropertyLoad(prop);
-+ break;
-+ }
-+
-+ case NAMED_SUPER_PROPERTY:
-+ case KEYED_SUPER_PROPERTY:
-+ case VARIABLE:
-+ UNREACHABLE();
-+ }
-+ }
-+
-+ // Convert old value into a number.
-+ __ Call(isolate()->builtins()->ToNumber(), RelocInfo::CODE_TARGET);
-+ RestoreContext();
-+
-+ // Save result for postfix expressions.
-+ if (expr->is_postfix()) {
-+ if (!context()->IsEffect()) {
-+ // Save the result on the stack. If we have a named or keyed property
-+ // we store the result under the receiver that is currently on top
-+ // of the stack.
-+ switch (assign_type) {
-+ case VARIABLE:
-+ PushOperand(eax);
-+ break;
-+ case NAMED_PROPERTY:
-+ __ mov(Operand(esp, kPointerSize), eax);
-+ break;
-+ case KEYED_PROPERTY:
-+ __ mov(Operand(esp, 2 * kPointerSize), eax);
-+ break;
-+ case NAMED_SUPER_PROPERTY:
-+ case KEYED_SUPER_PROPERTY:
-+ UNREACHABLE();
-+ break;
-+ }
-+ }
-+ }
-+
-+ SetExpressionPosition(expr);
-+
-+ // Call stub for +1/-1.
-+ __ mov(edx, eax);
-+ __ mov(eax, Immediate(Smi::FromInt(1)));
-+ Handle<Code> code =
-+ CodeFactory::BinaryOperation(isolate(), expr->binary_op()).code();
-+ __ Call(code, RelocInfo::CODE_TARGET);
-+ RestoreContext();
-+
-+ // Store the value returned in eax.
-+ switch (assign_type) {
-+ case VARIABLE: {
-+ VariableProxy* proxy = expr->expression()->AsVariableProxy();
-+ if (expr->is_postfix()) {
-+ // Perform the assignment as if via '='.
-+ { EffectContext context(this);
-+ EmitVariableAssignment(proxy->var(), Token::ASSIGN, expr->CountSlot(),
-+ proxy->hole_check_mode());
-+ context.Plug(eax);
-+ }
-+ // For all contexts except EffectContext We have the result on
-+ // top of the stack.
-+ if (!context()->IsEffect()) {
-+ context()->PlugTOS();
-+ }
-+ } else {
-+ // Perform the assignment as if via '='.
-+ EmitVariableAssignment(proxy->var(), Token::ASSIGN, expr->CountSlot(),
-+ proxy->hole_check_mode());
-+ context()->Plug(eax);
-+ }
-+ break;
-+ }
-+ case NAMED_PROPERTY: {
-+ PopOperand(StoreDescriptor::ReceiverRegister());
-+ CallStoreIC(expr->CountSlot(), prop->key()->AsLiteral()->value());
-+ if (expr->is_postfix()) {
-+ if (!context()->IsEffect()) {
-+ context()->PlugTOS();
-+ }
-+ } else {
-+ context()->Plug(eax);
-+ }
-+ break;
-+ }
-+ case KEYED_PROPERTY: {
-+ PopOperand(StoreDescriptor::NameRegister());
-+ PopOperand(StoreDescriptor::ReceiverRegister());
-+ CallKeyedStoreIC(expr->CountSlot());
-+ if (expr->is_postfix()) {
-+ // Result is on the stack
-+ if (!context()->IsEffect()) {
-+ context()->PlugTOS();
-+ }
-+ } else {
-+ context()->Plug(eax);
-+ }
-+ break;
-+ }
-+ case NAMED_SUPER_PROPERTY:
-+ case KEYED_SUPER_PROPERTY:
-+ UNREACHABLE();
-+ break;
-+ }
-+}
-+
-+
-+void FullCodeGenerator::EmitLiteralCompareTypeof(Expression* expr,
-+ Expression* sub_expr,
-+ Handle<String> check) {
-+ Label materialize_true, materialize_false;
-+ Label* if_true = NULL;
-+ Label* if_false = NULL;
-+ Label* fall_through = NULL;
-+ context()->PrepareTest(&materialize_true, &materialize_false,
-+ &if_true, &if_false, &fall_through);
-+
-+ { AccumulatorValueContext context(this);
-+ VisitForTypeofValue(sub_expr);
-+ }
-+
-+ Factory* factory = isolate()->factory();
-+ if (String::Equals(check, factory->number_string())) {
-+ __ JumpIfSmi(eax, if_true);
-+ __ cmp(FieldOperand(eax, HeapObject::kMapOffset),
-+ isolate()->factory()->heap_number_map());
-+ Split(equal, if_true, if_false, fall_through);
-+ } else if (String::Equals(check, factory->string_string())) {
-+ __ JumpIfSmi(eax, if_false);
-+ __ CmpObjectType(eax, FIRST_NONSTRING_TYPE, edx);
-+ Split(below, if_true, if_false, fall_through);
-+ } else if (String::Equals(check, factory->symbol_string())) {
-+ __ JumpIfSmi(eax, if_false);
-+ __ CmpObjectType(eax, SYMBOL_TYPE, edx);
-+ Split(equal, if_true, if_false, fall_through);
-+ } else if (String::Equals(check, factory->boolean_string())) {
-+ __ cmp(eax, isolate()->factory()->true_value());
-+ __ j(equal, if_true);
-+ __ cmp(eax, isolate()->factory()->false_value());
-+ Split(equal, if_true, if_false, fall_through);
-+ } else if (String::Equals(check, factory->undefined_string())) {
-+ __ cmp(eax, isolate()->factory()->null_value());
-+ __ j(equal, if_false);
-+ __ JumpIfSmi(eax, if_false);
-+ // Check for undetectable objects => true.
-+ __ mov(edx, FieldOperand(eax, HeapObject::kMapOffset));
-+ __ test_b(FieldOperand(edx, Map::kBitFieldOffset),
-+ Immediate(1 << Map::kIsUndetectable));
-+ Split(not_zero, if_true, if_false, fall_through);
-+ } else if (String::Equals(check, factory->function_string())) {
-+ __ JumpIfSmi(eax, if_false);
-+ // Check for callable and not undetectable objects => true.
-+ __ mov(edx, FieldOperand(eax, HeapObject::kMapOffset));
-+ __ movzx_b(ecx, FieldOperand(edx, Map::kBitFieldOffset));
-+ __ and_(ecx, (1 << Map::kIsCallable) | (1 << Map::kIsUndetectable));
-+ __ cmp(ecx, 1 << Map::kIsCallable);
-+ Split(equal, if_true, if_false, fall_through);
-+ } else if (String::Equals(check, factory->object_string())) {
-+ __ JumpIfSmi(eax, if_false);
-+ __ cmp(eax, isolate()->factory()->null_value());
-+ __ j(equal, if_true);
-+ STATIC_ASSERT(LAST_JS_RECEIVER_TYPE == LAST_TYPE);
-+ __ CmpObjectType(eax, FIRST_JS_RECEIVER_TYPE, edx);
-+ __ j(below, if_false);
-+ // Check for callable or undetectable objects => false.
-+ __ test_b(FieldOperand(edx, Map::kBitFieldOffset),
-+ Immediate((1 << Map::kIsCallable) | (1 << Map::kIsUndetectable)));
-+ Split(zero, if_true, if_false, fall_through);
-+ } else {
-+ if (if_false != fall_through) __ jmp(if_false);
-+ }
-+ context()->Plug(if_true, if_false);
-+}
-+
-+
-+void FullCodeGenerator::VisitCompareOperation(CompareOperation* expr) {
-+ Comment cmnt(masm_, "[ CompareOperation");
-+
-+ // First we try a fast inlined version of the compare when one of
-+ // the operands is a literal.
-+ if (TryLiteralCompare(expr)) return;
-+
-+ // Always perform the comparison for its control flow. Pack the result
-+ // into the expression's context after the comparison is performed.
-+ Label materialize_true, materialize_false;
-+ Label* if_true = NULL;
-+ Label* if_false = NULL;
-+ Label* fall_through = NULL;
-+ context()->PrepareTest(&materialize_true, &materialize_false,
-+ &if_true, &if_false, &fall_through);
-+
-+ Token::Value op = expr->op();
-+ VisitForStackValue(expr->left());
-+ switch (op) {
-+ case Token::IN:
-+ VisitForStackValue(expr->right());
-+ SetExpressionPosition(expr);
-+ EmitHasProperty();
-+ __ cmp(eax, isolate()->factory()->true_value());
-+ Split(equal, if_true, if_false, fall_through);
-+ break;
-+
-+ case Token::INSTANCEOF: {
-+ VisitForAccumulatorValue(expr->right());
-+ SetExpressionPosition(expr);
-+ PopOperand(edx);
-+ __ Call(isolate()->builtins()->InstanceOf(), RelocInfo::CODE_TARGET);
-+ RestoreContext();
-+ __ cmp(eax, isolate()->factory()->true_value());
-+ Split(equal, if_true, if_false, fall_through);
-+ break;
-+ }
-+
-+ default: {
-+ VisitForAccumulatorValue(expr->right());
-+ SetExpressionPosition(expr);
-+ Condition cc = CompareIC::ComputeCondition(op);
-+ PopOperand(edx);
-+
-+ bool inline_smi_code = ShouldInlineSmiCase(op);
-+ JumpPatchSite patch_site(masm_);
-+ if (inline_smi_code) {
-+ Label slow_case;
-+ __ mov(ecx, edx);
-+ __ or_(ecx, eax);
-+ patch_site.EmitJumpIfNotSmi(ecx, &slow_case, Label::kNear);
-+ __ cmp(edx, eax);
-+ Split(cc, if_true, if_false, NULL);
-+ __ bind(&slow_case);
-+ }
-+
-+ Handle<Code> ic = CodeFactory::CompareIC(isolate(), op).code();
-+ CallIC(ic);
-+ patch_site.EmitPatchInfo();
-+
-+ __ test(eax, eax);
-+ Split(cc, if_true, if_false, fall_through);
-+ }
-+ }
-+
-+ // Convert the result of the comparison into one expected for this
-+ // expression's context.
-+ context()->Plug(if_true, if_false);
-+}
-+
-+
-+void FullCodeGenerator::EmitLiteralCompareNil(CompareOperation* expr,
-+ Expression* sub_expr,
-+ NilValue nil) {
-+ Label materialize_true, materialize_false;
-+ Label* if_true = NULL;
-+ Label* if_false = NULL;
-+ Label* fall_through = NULL;
-+ context()->PrepareTest(&materialize_true, &materialize_false,
-+ &if_true, &if_false, &fall_through);
-+
-+ VisitForAccumulatorValue(sub_expr);
-+
-+ Handle<HeapObject> nil_value = nil == kNullValue
-+ ? isolate()->factory()->null_value()
-+ : isolate()->factory()->undefined_value();
-+ if (expr->op() == Token::EQ_STRICT) {
-+ __ cmp(eax, nil_value);
-+ Split(equal, if_true, if_false, fall_through);
-+ } else {
-+ __ JumpIfSmi(eax, if_false);
-+ __ mov(eax, FieldOperand(eax, HeapObject::kMapOffset));
-+ __ test_b(FieldOperand(eax, Map::kBitFieldOffset),
-+ Immediate(1 << Map::kIsUndetectable));
-+ Split(not_zero, if_true, if_false, fall_through);
-+ }
-+ context()->Plug(if_true, if_false);
-+}
-+
-+
-+Register FullCodeGenerator::result_register() {
-+ return eax;
-+}
-+
-+
-+Register FullCodeGenerator::context_register() {
-+ return esi;
-+}
-+
-+void FullCodeGenerator::LoadFromFrameField(int frame_offset, Register value) {
-+ DCHECK_EQ(POINTER_SIZE_ALIGN(frame_offset), frame_offset);
-+ __ mov(value, Operand(ebp, frame_offset));
-+}
-+
-+void FullCodeGenerator::StoreToFrameField(int frame_offset, Register value) {
-+ DCHECK_EQ(POINTER_SIZE_ALIGN(frame_offset), frame_offset);
-+ __ mov(Operand(ebp, frame_offset), value);
-+}
-+
-+
-+void FullCodeGenerator::LoadContextField(Register dst, int context_index) {
-+ __ mov(dst, ContextOperand(esi, context_index));
-+}
-+
-+
-+void FullCodeGenerator::PushFunctionArgumentForContextAllocation() {
-+ DeclarationScope* closure_scope = scope()->GetClosureScope();
-+ if (closure_scope->is_script_scope() ||
-+ closure_scope->is_module_scope()) {
-+ // Contexts nested in the native context have a canonical empty function
-+ // as their closure, not the anonymous closure containing the global
-+ // code.
-+ __ mov(eax, NativeContextOperand());
-+ PushOperand(ContextOperand(eax, Context::CLOSURE_INDEX));
-+ } else if (closure_scope->is_eval_scope()) {
-+ // Contexts nested inside eval code have the same closure as the context
-+ // calling eval, not the anonymous closure containing the eval code.
-+ // Fetch it from the context.
-+ PushOperand(ContextOperand(esi, Context::CLOSURE_INDEX));
-+ } else {
-+ DCHECK(closure_scope->is_function_scope());
-+ PushOperand(Operand(ebp, JavaScriptFrameConstants::kFunctionOffset));
-+ }
-+}
-+
-+
-+#undef __
-+
-+
-+static const byte kJnsInstruction = 0x79;
-+static const byte kJnsOffset = 0x11;
-+static const byte kNopByteOne = 0x66;
-+static const byte kNopByteTwo = 0x90;
-+#ifdef DEBUG
-+static const byte kCallInstruction = 0xe8;
-+#endif
-+
-+
-+void BackEdgeTable::PatchAt(Code* unoptimized_code,
-+ Address pc,
-+ BackEdgeState target_state,
-+ Code* replacement_code) {
-+ Address call_target_address = pc - kIntSize;
-+ Address jns_instr_address = call_target_address - 3;
-+ Address jns_offset_address = call_target_address - 2;
-+
-+ switch (target_state) {
-+ case INTERRUPT:
-+ // sub <profiling_counter>, <delta> ;; Not changed
-+ // jns ok
-+ // call <interrupt stub>
-+ // ok:
-+ *jns_instr_address = kJnsInstruction;
-+ *jns_offset_address = kJnsOffset;
-+ break;
-+ case ON_STACK_REPLACEMENT:
-+ // sub <profiling_counter>, <delta> ;; Not changed
-+ // nop
-+ // nop
-+ // call <on-stack replacment>
-+ // ok:
-+ *jns_instr_address = kNopByteOne;
-+ *jns_offset_address = kNopByteTwo;
-+ break;
-+ }
-+
-+ Assembler::set_target_address_at(unoptimized_code->GetIsolate(),
-+ call_target_address, unoptimized_code,
-+ replacement_code->entry());
-+ unoptimized_code->GetHeap()->incremental_marking()->RecordCodeTargetPatch(
-+ unoptimized_code, call_target_address, replacement_code);
-+}
-+
-+
-+BackEdgeTable::BackEdgeState BackEdgeTable::GetBackEdgeState(
-+ Isolate* isolate,
-+ Code* unoptimized_code,
-+ Address pc) {
-+ Address call_target_address = pc - kIntSize;
-+ Address jns_instr_address = call_target_address - 3;
-+ DCHECK_EQ(kCallInstruction, *(call_target_address - 1));
-+
-+ if (*jns_instr_address == kJnsInstruction) {
-+ DCHECK_EQ(kJnsOffset, *(call_target_address - 2));
-+ DCHECK_EQ(isolate->builtins()->InterruptCheck()->entry(),
-+ Assembler::target_address_at(call_target_address,
-+ unoptimized_code));
-+ return INTERRUPT;
-+ }
-+
-+ DCHECK_EQ(kNopByteOne, *jns_instr_address);
-+ DCHECK_EQ(kNopByteTwo, *(call_target_address - 2));
-+
-+ DCHECK_EQ(
-+ isolate->builtins()->OnStackReplacement()->entry(),
-+ Assembler::target_address_at(call_target_address, unoptimized_code));
-+ return ON_STACK_REPLACEMENT;
-+}
-+
-+
-+} // namespace internal
-+} // namespace v8
-+
-+#endif // V8_TARGET_ARCH_X87
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/full-codegen/x87/OWNERS qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/full-codegen/x87/OWNERS
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/full-codegen/x87/OWNERS 1970-01-01 01:00:00.000000000 +0100
--+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/full-codegen/x87/OWNERS 2018-02-18 19:00:54.100419575 +0100
-@@ -0,0 +1,2 @@
-+weiliang.lin(a)intel.com
-+chunyang.dai(a)intel.com
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/gdb-jit.cc qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/gdb-jit.cc
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/gdb-jit.cc 2018-02-02 11:39:52.000000000 +0100
--+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/gdb-jit.cc 2018-02-18 19:00:54.101419561 +0100
-@@ -199,7 +199,7 @@
- struct MachOSectionHeader {
- char sectname[16];
- char segname[16];
--#if V8_TARGET_ARCH_IA32
-+#if V8_TARGET_ARCH_IA32 || V8_TARGET_ARCH_X87
- uint32_t addr;
- uint32_t size;
- #else
-@@ -507,7 +507,7 @@
- uint32_t cmd;
- uint32_t cmdsize;
- char segname[16];
--#if V8_TARGET_ARCH_IA32
-+#if V8_TARGET_ARCH_IA32 || V8_TARGET_ARCH_X87
- uint32_t vmaddr;
- uint32_t vmsize;
- uint32_t fileoff;
-@@ -533,7 +533,7 @@
- Writer::Slot<MachOHeader> WriteHeader(Writer* w) {
- DCHECK(w->position() == 0);
- Writer::Slot<MachOHeader> header = w->CreateSlotHere<MachOHeader>();
--#if V8_TARGET_ARCH_IA32
-+#if V8_TARGET_ARCH_IA32 || V8_TARGET_ARCH_X87
- header->magic = 0xFEEDFACEu;
- header->cputype = 7; // i386
- header->cpusubtype = 3; // CPU_SUBTYPE_I386_ALL
-@@ -558,7 +558,7 @@
- uintptr_t code_size) {
- Writer::Slot<MachOSegmentCommand> cmd =
- w->CreateSlotHere<MachOSegmentCommand>();
--#if V8_TARGET_ARCH_IA32
-+#if V8_TARGET_ARCH_IA32 || V8_TARGET_ARCH_X87
- cmd->cmd = LC_SEGMENT_32;
- #else
- cmd->cmd = LC_SEGMENT_64;
-@@ -646,7 +646,7 @@
- void WriteHeader(Writer* w) {
- DCHECK(w->position() == 0);
- Writer::Slot<ELFHeader> header = w->CreateSlotHere<ELFHeader>();
--#if (V8_TARGET_ARCH_IA32 || V8_TARGET_ARCH_ARM || \
-+#if (V8_TARGET_ARCH_IA32 || V8_TARGET_ARCH_ARM || V8_TARGET_ARCH_X87 || \
- (V8_TARGET_ARCH_X64 && V8_TARGET_ARCH_32_BIT))
- const uint8_t ident[16] =
- { 0x7f, 'E', 'L', 'F', 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0};
-@@ -668,7 +668,7 @@
- #endif
- memcpy(header->ident, ident, 16);
- header->type = 1;
--#if V8_TARGET_ARCH_IA32
-+#if V8_TARGET_ARCH_IA32 || V8_TARGET_ARCH_X87
- header->machine = 3;
- #elif V8_TARGET_ARCH_X64
- // Processor identification value for x64 is 62 as defined in
-@@ -783,8 +783,8 @@
- Binding binding() const {
- return static_cast<Binding>(info >> 4);
- }
--#if (V8_TARGET_ARCH_IA32 || V8_TARGET_ARCH_ARM || \
-- (V8_TARGET_ARCH_X64 && V8_TARGET_ARCH_32_BIT) || \
-+#if (V8_TARGET_ARCH_IA32 || V8_TARGET_ARCH_ARM || V8_TARGET_ARCH_X87 || \
-+ (V8_TARGET_ARCH_X64 && V8_TARGET_ARCH_32_BIT) || \
- (V8_TARGET_ARCH_S390 && V8_TARGET_ARCH_32_BIT))
- struct SerializedLayout {
- SerializedLayout(uint32_t name,
-@@ -1146,7 +1146,7 @@
- w->Write<intptr_t>(desc_->CodeStart() + desc_->CodeSize());
- Writer::Slot<uint32_t> fb_block_size = w->CreateSlotHere<uint32_t>();
- uintptr_t fb_block_start = w->position();
--#if V8_TARGET_ARCH_IA32
-+#if V8_TARGET_ARCH_IA32 || V8_TARGET_ARCH_X87
- w->Write<uint8_t>(DW_OP_reg5); // The frame pointer's here on ia32
- #elif V8_TARGET_ARCH_X64
- w->Write<uint8_t>(DW_OP_reg6); // and here on x64.
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/globals.h qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/globals.h
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/globals.h 2018-02-02 11:39:52.000000000 +0100
--+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/globals.h 2018-02-18 19:00:54.102419546 +0100
-@@ -167,7 +167,7 @@
- const int kPCOnStackSize = kRegisterSize;
- const int kFPOnStackSize = kRegisterSize;
-
--#if V8_TARGET_ARCH_X64 || V8_TARGET_ARCH_IA32
-+#if V8_TARGET_ARCH_X64 || V8_TARGET_ARCH_IA32 || V8_TARGET_ARCH_X87
- const int kElidedFrameSlots = kPCOnStackSize / kPointerSize;
- #else
- const int kElidedFrameSlots = 0;
-@@ -912,10 +912,16 @@
- };
-
- // The mips architecture prior to revision 5 has inverted encoding for sNaN.
-+// The x87 FPU convert the sNaN to qNaN automatically when loading sNaN from
-+// memmory.
-+// Use mips sNaN which is a not used qNaN in x87 port as sNaN to workaround this
-+// issue
-+// for some test cases.
-#if (V8_TARGET_ARCH_MIPS && !defined(_MIPS_ARCH_MIPS32R6) && \
- (!defined(USE_SIMULATOR) || !defined(_MIPS_TARGET_SIMULATOR))) || \
- (V8_TARGET_ARCH_MIPS64 && !defined(_MIPS_ARCH_MIPS64R6) && \
-- (!defined(USE_SIMULATOR) || !defined(_MIPS_TARGET_SIMULATOR)))
-+ (!defined(USE_SIMULATOR) || !defined(_MIPS_TARGET_SIMULATOR))) || \
-+ (V8_TARGET_ARCH_X87)
- const uint32_t kHoleNanUpper32 = 0xFFFF7FFF;
- const uint32_t kHoleNanLower32 = 0xFFFF7FFF;
- #else
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/ic/x87/access-compiler-x87.cc qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/ic/x87/access-compiler-x87.cc
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/ic/x87/access-compiler-x87.cc 1970-01-01 01:00:00.000000000 +0100
--+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/ic/x87/access-compiler-x87.cc 2018-02-18 19:00:54.102419546 +0100
-@@ -0,0 +1,40 @@
-+// Copyright 2014 the V8 project authors. All rights reserved.
-+// Use of this source code is governed by a BSD-style license that can be
-+// found in the LICENSE file.
-+
-+#if V8_TARGET_ARCH_X87
-+
-+#include "src/ic/access-compiler.h"
-+
-+namespace v8 {
-+namespace internal {
-+
-+#define __ ACCESS_MASM(masm)
-+
-+void PropertyAccessCompiler::GenerateTailCall(MacroAssembler* masm,
-+ Handle<Code> code) {
-+ __ jmp(code, RelocInfo::CODE_TARGET);
-+}
-+
-+void PropertyAccessCompiler::InitializePlatformSpecific(
-+ AccessCompilerData* data) {
-+ Register receiver = LoadDescriptor::ReceiverRegister();
-+ Register name = LoadDescriptor::NameRegister();
-+
-+ // Load calling convention.
-+ // receiver, name, scratch1, scratch2, scratch3.
-+ Register load_registers[] = {receiver, name, ebx, eax, edi};
-+
-+ // Store calling convention.
-+ // receiver, name, scratch1, scratch2.
-+ Register store_registers[] = {receiver, name, ebx, edi};
-+
-+ data->Initialize(arraysize(load_registers), load_registers,
-+ arraysize(store_registers), store_registers);
-+}
-+
-+#undef __
-+} // namespace internal
-+} // namespace v8
-+
-+#endif // V8_TARGET_ARCH_X87
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/ic/x87/handler-compiler-x87.cc qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/ic/x87/handler-compiler-x87.cc
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/ic/x87/handler-compiler-x87.cc 1970-01-01 01:00:00.000000000 +0100
--+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/ic/x87/handler-compiler-x87.cc 2018-02-18 19:00:54.102419546 +0100
-@@ -0,0 +1,447 @@
-+// Copyright 2012 the V8 project authors. All rights reserved.
-+// Use of this source code is governed by a BSD-style license that can be
-+// found in the LICENSE file.
-+
-+#if V8_TARGET_ARCH_X87
-+
-+#include "src/ic/handler-compiler.h"
-+
-+#include "src/api-arguments.h"
-+#include "src/field-type.h"
-+#include "src/ic/call-optimization.h"
-+#include "src/ic/ic.h"
-+#include "src/isolate-inl.h"
-+
-+namespace v8 {
-+namespace internal {
-+
-+#define __ ACCESS_MASM(masm)
-+
-+void NamedLoadHandlerCompiler::GenerateLoadViaGetterForDeopt(
-+ MacroAssembler* masm) {
-+ {
-+ FrameScope scope(masm, StackFrame::INTERNAL);
-+ // If we generate a global code snippet for deoptimization only, remember
-+ // the place to continue after deoptimization.
-+ masm->isolate()->heap()->SetGetterStubDeoptPCOffset(masm->pc_offset());
-+ // Restore context register.
-+ __ pop(esi);
-+ }
-+ __ ret(0);
-+}
-+
-+
-+void PropertyHandlerCompiler::PushVectorAndSlot(Register vector,
-+ Register slot) {
-+ MacroAssembler* masm = this->masm();
-+ STATIC_ASSERT(LoadWithVectorDescriptor::kSlot <
-+ LoadWithVectorDescriptor::kVector);
-+ STATIC_ASSERT(StoreWithVectorDescriptor::kSlot <
-+ StoreWithVectorDescriptor::kVector);
-+ STATIC_ASSERT(StoreTransitionDescriptor::kSlot <
-+ StoreTransitionDescriptor::kVector);
-+ __ push(slot);
-+ __ push(vector);
-+}
-+
-+
-+void PropertyHandlerCompiler::PopVectorAndSlot(Register vector, Register slot) {
-+ MacroAssembler* masm = this->masm();
-+ __ pop(vector);
-+ __ pop(slot);
-+}
-+
-+
-+void PropertyHandlerCompiler::DiscardVectorAndSlot() {
-+ MacroAssembler* masm = this->masm();
-+ // Remove vector and slot.
-+ __ add(esp, Immediate(2 * kPointerSize));
-+}
-+
-+void PropertyHandlerCompiler::GenerateDictionaryNegativeLookup(
-+ MacroAssembler* masm, Label* miss_label, Register receiver,
-+ Handle<Name> name, Register scratch0, Register scratch1) {
-+ DCHECK(name->IsUniqueName());
-+ DCHECK(!receiver.is(scratch0));
-+ Counters* counters = masm->isolate()->counters();
-+ __ IncrementCounter(counters->negative_lookups(), 1);
-+ __ IncrementCounter(counters->negative_lookups_miss(), 1);
-+
-+ __ mov(scratch0, FieldOperand(receiver, HeapObject::kMapOffset));
-+
-+ const int kInterceptorOrAccessCheckNeededMask =
-+ (1 << Map::kHasNamedInterceptor) | (1 << Map::kIsAccessCheckNeeded);
-+
-+ // Bail out if the receiver has a named interceptor or requires access checks.
-+ __ test_b(FieldOperand(scratch0, Map::kBitFieldOffset),
-+ Immediate(kInterceptorOrAccessCheckNeededMask));
-+ __ j(not_zero, miss_label);
-+
-+ // Check that receiver is a JSObject.
-+ __ CmpInstanceType(scratch0, FIRST_JS_RECEIVER_TYPE);
-+ __ j(below, miss_label);
-+
-+ // Load properties array.
-+ Register properties = scratch0;
-+ __ mov(properties, FieldOperand(receiver, JSObject::kPropertiesOrHashOffset));
-+
-+ // Check that the properties array is a dictionary.
-+ __ cmp(FieldOperand(properties, HeapObject::kMapOffset),
-+ Immediate(masm->isolate()->factory()->hash_table_map()));
-+ __ j(not_equal, miss_label);
-+
-+ Label done;
-+ NameDictionaryLookupStub::GenerateNegativeLookup(masm, miss_label, &done,
-+ properties, name, scratch1);
-+ __ bind(&done);
-+ __ DecrementCounter(counters->negative_lookups_miss(), 1);
-+}
-+
-+// Generate call to api function.
-+// This function uses push() to generate smaller, faster code than
-+// the version above. It is an optimization that should will be removed
-+// when api call ICs are generated in hydrogen.
-+void PropertyHandlerCompiler::GenerateApiAccessorCall(
-+ MacroAssembler* masm, const CallOptimization& optimization,
-+ Handle<Map> receiver_map, Register receiver, Register scratch,
-+ bool is_store, Register store_parameter, Register accessor_holder,
-+ int accessor_index) {
-+ DCHECK(!accessor_holder.is(scratch));
-+ // Copy return value.
-+ __ pop(scratch);
-+
-+ if (is_store) {
-+ // Discard stack arguments.
-+ __ add(esp, Immediate(StoreWithVectorDescriptor::kStackArgumentsCount *
-+ kPointerSize));
-+ }
-+ // Write the receiver and arguments to stack frame.
-+ __ push(receiver);
-+ if (is_store) {
-+ DCHECK(!AreAliased(receiver, scratch, store_parameter));
-+ __ push(store_parameter);
-+ }
-+ __ push(scratch);
-+ // Stack now matches JSFunction abi.
-+ DCHECK(optimization.is_simple_api_call());
-+
-+ // Abi for CallApiCallbackStub.
-+ Register callee = edi;
-+ Register data = ebx;
-+ Register holder = ecx;
-+ Register api_function_address = edx;
-+ scratch = no_reg;
-+
-+ // Put callee in place.
-+ __ LoadAccessor(callee, accessor_holder, accessor_index,
-+ is_store ? ACCESSOR_SETTER : ACCESSOR_GETTER);
-+
-+ // Put holder in place.
-+ CallOptimization::HolderLookup holder_lookup;
-+ optimization.LookupHolderOfExpectedType(receiver_map, &holder_lookup);
-+ switch (holder_lookup) {
-+ case CallOptimization::kHolderIsReceiver:
-+ __ Move(holder, receiver);
-+ break;
-+ case CallOptimization::kHolderFound:
-+ __ mov(holder, FieldOperand(receiver, HeapObject::kMapOffset));
-+ __ mov(holder, FieldOperand(holder, Map::kPrototypeOffset));
-+ break;
-+ case CallOptimization::kHolderNotFound:
-+ UNREACHABLE();
-+ break;
-+ }
-+
-+ Isolate* isolate = masm->isolate();
-+ Handle<CallHandlerInfo> api_call_info = optimization.api_call_info();
-+ // Put call data in place.
-+ if (api_call_info->data()->IsUndefined(isolate)) {
-+ __ mov(data, Immediate(isolate->factory()->undefined_value()));
-+ } else {
-+ if (optimization.is_constant_call()) {
-+ __ mov(data, FieldOperand(callee, JSFunction::kSharedFunctionInfoOffset));
-+ __ mov(data, FieldOperand(data, SharedFunctionInfo::kFunctionDataOffset));
-+ __ mov(data, FieldOperand(data, FunctionTemplateInfo::kCallCodeOffset));
-+ } else {
-+ __ mov(data, FieldOperand(callee, FunctionTemplateInfo::kCallCodeOffset));
-+ }
-+ __ mov(data, FieldOperand(data, CallHandlerInfo::kDataOffset));
-+ }
-+
-+ // Put api_function_address in place.
-+ Address function_address = v8::ToCData<Address>(api_call_info->callback());
-+ __ mov(api_function_address, Immediate(function_address));
-+
-+ // Jump to stub.
-+ CallApiCallbackStub stub(isolate, is_store, !optimization.is_constant_call());
-+ __ TailCallStub(&stub);
-+}
-+
-+
-+// Generate code to check that a global property cell is empty. Create
-+// the property cell at compilation time if no cell exists for the
-+// property.
-+void PropertyHandlerCompiler::GenerateCheckPropertyCell(
-+ MacroAssembler* masm, Handle<JSGlobalObject> global, Handle<Name> name,
-+ Register scratch, Label* miss) {
-+ Handle<PropertyCell> cell = JSGlobalObject::EnsureEmptyPropertyCell(
-+ global, name, PropertyCellType::kInvalidated);
-+ Isolate* isolate = masm->isolate();
-+ DCHECK(cell->value()->IsTheHole(isolate));
-+ Handle<WeakCell> weak_cell = isolate->factory()->NewWeakCell(cell);
-+ __ LoadWeakValue(scratch, weak_cell, miss);
-+ __ cmp(FieldOperand(scratch, PropertyCell::kValueOffset),
-+ Immediate(isolate->factory()->the_hole_value()));
-+ __ j(not_equal, miss);
-+}
-+
-+
-+void NamedStoreHandlerCompiler::GenerateStoreViaSetter(
-+ MacroAssembler* masm, Handle<Map> map, Register receiver, Register holder,
-+ int accessor_index, int expected_arguments, Register scratch) {
-+ // ----------- S t a t e -------------
-+ // -- esp[12] : value
-+ // -- esp[8] : slot
-+ // -- esp[4] : vector
-+ // -- esp[0] : return address
-+ // -----------------------------------
-+ __ LoadParameterFromStack<Descriptor>(value(), Descriptor::kValue);
-+
-+ {
-+ FrameScope scope(masm, StackFrame::INTERNAL);
-+
-+ // Save context register
-+ __ push(esi);
-+ // Save value register, so we can restore it later.
-+ __ push(value());
-+
-+ if (accessor_index >= 0) {
-+ DCHECK(!holder.is(scratch));
-+ DCHECK(!receiver.is(scratch));
-+ DCHECK(!value().is(scratch));
-+ // Call the JavaScript setter with receiver and value on the stack.
-+ if (map->IsJSGlobalObjectMap()) {
-+ __ mov(scratch,
-+ FieldOperand(receiver, JSGlobalObject::kGlobalProxyOffset));
-+ receiver = scratch;
-+ }
-+ __ push(receiver);
-+ __ push(value());
-+ __ LoadAccessor(edi, holder, accessor_index, ACCESSOR_SETTER);
-+ __ Set(eax, 1);
-+ __ Call(masm->isolate()->builtins()->CallFunction(
-+ ConvertReceiverMode::kNotNullOrUndefined),
-+ RelocInfo::CODE_TARGET);
-+ } else {
-+ // If we generate a global code snippet for deoptimization only, remember
-+ // the place to continue after deoptimization.
-+ masm->isolate()->heap()->SetSetterStubDeoptPCOffset(masm->pc_offset());
-+ }
-+
-+ // We have to return the passed value, not the return value of the setter.
-+ __ pop(eax);
-+ // Restore context register.
-+ __ pop(esi);
-+ }
-+ if (accessor_index >= 0) {
-+ __ ret(StoreWithVectorDescriptor::kStackArgumentsCount * kPointerSize);
-+ } else {
-+ // If we generate a global code snippet for deoptimization only, don't try
-+ // to drop stack arguments for the StoreIC because they are not a part of
-+ // expression stack and deoptimizer does not reconstruct them.
-+ __ ret(0);
-+ }
-+}
-+
-+#undef __
-+#define __ ACCESS_MASM(masm())
-+
-+
-+void NamedStoreHandlerCompiler::GenerateRestoreName(Label* label,
-+ Handle<Name> name) {
-+ if (!label->is_unused()) {
-+ __ bind(label);
-+ __ mov(this->name(), Immediate(name));
-+ }
-+}
-+
-+void PropertyHandlerCompiler::GenerateAccessCheck(
-+ Handle<WeakCell> native_context_cell, Register scratch1, Register scratch2,
-+ Label* miss, bool compare_native_contexts_only) {
-+ Label done;
-+ // Load current native context.
-+ __ mov(scratch1, NativeContextOperand());
-+ // Load expected native context.
-+ __ LoadWeakValue(scratch2, native_context_cell, miss);
-+ __ cmp(scratch1, scratch2);
-+
-+ if (!compare_native_contexts_only) {
-+ __ j(equal, &done);
-+
-+ // Compare security tokens of current and expected native contexts.
-+ __ mov(scratch1, ContextOperand(scratch1, Context::SECURITY_TOKEN_INDEX));
-+ __ mov(scratch2, ContextOperand(scratch2, Context::SECURITY_TOKEN_INDEX));
-+ __ cmp(scratch1, scratch2);
-+ }
-+ __ j(not_equal, miss);
-+
-+ __ bind(&done);
-+}
-+
-+Register PropertyHandlerCompiler::CheckPrototypes(
-+ Register object_reg, Register holder_reg, Register scratch1,
-+ Register scratch2, Handle<Name> name, Label* miss) {
-+ Handle<Map> receiver_map = map();
-+
-+ // Make sure there's no overlap between holder and object registers.
-+ DCHECK(!scratch1.is(object_reg) && !scratch1.is(holder_reg));
-+ DCHECK(!scratch2.is(object_reg) && !scratch2.is(holder_reg) &&
-+ !scratch2.is(scratch1));
-+
-+ Handle<Cell> validity_cell =
-+ Map::GetOrCreatePrototypeChainValidityCell(receiver_map, isolate());
-+ if (!validity_cell.is_null()) {
-+ DCHECK_EQ(Smi::FromInt(Map::kPrototypeChainValid), validity_cell->value());
-+ // Operand::ForCell(...) points to the cell's payload!
-+ __ cmp(Operand::ForCell(validity_cell),
-+ Immediate(Smi::FromInt(Map::kPrototypeChainValid)));
-+ __ j(not_equal, miss);
-+ }
-+
-+ // Keep track of the current object in register reg.
-+ Register reg = object_reg;
-+ int depth = 0;
-+
-+ Handle<JSObject> current = Handle<JSObject>::null();
-+ if (receiver_map->IsJSGlobalObjectMap()) {
-+ current = isolate()->global_object();
-+ }
-+
-+ Handle<Map> current_map(receiver_map->GetPrototypeChainRootMap(isolate()),
-+ isolate());
-+ Handle<Map> holder_map(holder()->map());
-+ // Traverse the prototype chain and check the maps in the prototype chain for
-+ // fast and global objects or do negative lookup for normal objects.
-+ while (!current_map.is_identical_to(holder_map)) {
-+ ++depth;
-+
-+ if (current_map->IsJSGlobalObjectMap()) {
-+ GenerateCheckPropertyCell(masm(), Handle<JSGlobalObject>::cast(current),
-+ name, scratch2, miss);
-+ } else if (current_map->is_dictionary_map()) {
-+ DCHECK(!current_map->IsJSGlobalProxyMap()); // Proxy maps are fast.
-+ DCHECK(name->IsUniqueName());
-+ DCHECK(current.is_null() ||
-+ current->property_dictionary()->FindEntry(name) ==
-+ NameDictionary::kNotFound);
-+
-+ if (depth > 1) {
-+ Handle<WeakCell> weak_cell =
-+ Map::GetOrCreatePrototypeWeakCell(current, isolate());
-+ __ LoadWeakValue(reg, weak_cell, miss);
-+ }
-+ GenerateDictionaryNegativeLookup(masm(), miss, reg, name, scratch1,
-+ scratch2);
-+ }
-+
-+ reg = holder_reg; // From now on the object will be in holder_reg.
-+ // Go to the next object in the prototype chain.
-+ current = handle(JSObject::cast(current_map->prototype()));
-+ current_map = handle(current->map());
-+ }
-+
-+ DCHECK(!current_map->IsJSGlobalProxyMap());
-+
-+ // Log the check depth.
-+ LOG(isolate(), IntEvent("check-maps-depth", depth + 1));
-+
-+ if (depth != 0) {
-+ Handle<WeakCell> weak_cell =
-+ Map::GetOrCreatePrototypeWeakCell(current, isolate());
-+ __ LoadWeakValue(reg, weak_cell, miss);
-+ }
-+
-+ // Return the register containing the holder.
-+ return reg;
-+}
-+
-+
-+void NamedLoadHandlerCompiler::FrontendFooter(Handle<Name> name, Label* miss) {
-+ if (!miss->is_unused()) {
-+ Label success;
-+ __ jmp(&success);
-+ __ bind(miss);
-+ if (IC::ShouldPushPopSlotAndVector(kind())) {
-+ DCHECK(kind() == Code::LOAD_IC);
-+ PopVectorAndSlot();
-+ }
-+ TailCallBuiltin(masm(), MissBuiltin(kind()));
-+ __ bind(&success);
-+ }
-+}
-+
-+
-+void NamedStoreHandlerCompiler::FrontendFooter(Handle<Name> name, Label* miss) {
-+ if (!miss->is_unused()) {
-+ Label success;
-+ __ jmp(&success);
-+ GenerateRestoreName(miss, name);
-+ DCHECK(!IC::ShouldPushPopSlotAndVector(kind()));
-+ TailCallBuiltin(masm(), MissBuiltin(kind()));
-+ __ bind(&success);
-+ }
-+}
-+
-+void NamedStoreHandlerCompiler::ZapStackArgumentsRegisterAliases() {
-+ // Zap register aliases of the arguments passed on the stack to ensure they
-+ // are properly loaded by the handler (debug-only).
-+ STATIC_ASSERT(Descriptor::kPassLastArgsOnStack);
-+ STATIC_ASSERT(Descriptor::kStackArgumentsCount == 3);
-+ __ mov(Descriptor::ValueRegister(), Immediate(kDebugZapValue));
-+ __ mov(Descriptor::SlotRegister(), Immediate(kDebugZapValue));
-+ __ mov(Descriptor::VectorRegister(), Immediate(kDebugZapValue));
-+}
-+
-+Handle<Code> NamedStoreHandlerCompiler::CompileStoreCallback(
-+ Handle<JSObject> object, Handle<Name> name, Handle<AccessorInfo> callback,
-+ LanguageMode language_mode) {
-+ Register holder_reg = Frontend(name);
-+ __ LoadParameterFromStack<Descriptor>(value(), Descriptor::kValue);
-+
-+ __ pop(scratch1()); // remove the return address
-+ // Discard stack arguments.
-+ __ add(esp, Immediate(StoreWithVectorDescriptor::kStackArgumentsCount *
-+ kPointerSize));
-+ __ push(receiver());
-+ __ push(holder_reg);
-+ // If the callback cannot leak, then push the callback directly,
-+ // otherwise wrap it in a weak cell.
-+ if (callback->data()->IsUndefined(isolate()) || callback->data()->IsSmi()) {
-+ __ Push(callback);
-+ } else {
-+ Handle<WeakCell> cell = isolate()->factory()->NewWeakCell(callback);
-+ __ Push(cell);
-+ }
-+ __ Push(name);
-+ __ push(value());
-+ __ push(Immediate(Smi::FromInt(language_mode)));
-+ __ push(scratch1()); // restore return address
-+
-+ // Do tail-call to the runtime system.
-+ __ TailCallRuntime(Runtime::kStoreCallbackProperty);
-+
-+ // Return the generated code.
-+ return GetCode(kind(), name);
-+}
-+
-+
-+Register NamedStoreHandlerCompiler::value() {
-+ return StoreDescriptor::ValueRegister();
-+}
-+
-+
-+#undef __
-+} // namespace internal
-+} // namespace v8
-+
-+#endif // V8_TARGET_ARCH_X87
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/ic/x87/ic-x87.cc qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/ic/x87/ic-x87.cc
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/ic/x87/ic-x87.cc 1970-01-01 01:00:00.000000000 +0100
--+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/ic/x87/ic-x87.cc 2018-02-18 19:00:54.103419531 +0100
-@@ -0,0 +1,84 @@
-+// Copyright 2012 the V8 project authors. All rights reserved.
-+// Use of this source code is governed by a BSD-style license that can be
-+// found in the LICENSE file.
-+
-+#if V8_TARGET_ARCH_X87
-+
-+#include "src/codegen.h"
-+#include "src/ic/ic.h"
-+#include "src/ic/stub-cache.h"
-+
-+namespace v8 {
-+namespace internal {
-+
-+
-+Condition CompareIC::ComputeCondition(Token::Value op) {
-+ switch (op) {
-+ case Token::EQ_STRICT:
-+ case Token::EQ:
-+ return equal;
-+ case Token::LT:
-+ return less;
-+ case Token::GT:
-+ return greater;
-+ case Token::LTE:
-+ return less_equal;
-+ case Token::GTE:
-+ return greater_equal;
-+ default:
-+ UNREACHABLE();
-+ }
-+}
-+
-+
-+bool CompareIC::HasInlinedSmiCode(Address address) {
-+ // The address of the instruction following the call.
-+ Address test_instruction_address =
-+ address + Assembler::kCallTargetAddressOffset;
-+
-+ // If the instruction following the call is not a test al, nothing
-+ // was inlined.
-+ return *test_instruction_address == Assembler::kTestAlByte;
-+}
-+
-+
-+void PatchInlinedSmiCode(Isolate* isolate, Address address,
-+ InlinedSmiCheck check) {
-+ // The address of the instruction following the call.
-+ Address test_instruction_address =
-+ address + Assembler::kCallTargetAddressOffset;
-+
-+ // If the instruction following the call is not a test al, nothing
-+ // was inlined.
-+ if (*test_instruction_address != Assembler::kTestAlByte) {
-+ DCHECK(*test_instruction_address == Assembler::kNopByte);
-+ return;
-+ }
-+
-+ Address delta_address = test_instruction_address + 1;
-+ // The delta to the start of the map check instruction and the
-+ // condition code uses at the patched jump.
-+ uint8_t delta = *reinterpret_cast<uint8_t*>(delta_address);
-+ if (FLAG_trace_ic) {
-+ LOG(isolate, PatchIC(address, test_instruction_address, delta));
-+ }
-+
-+ // Patch with a short conditional jump. Enabling means switching from a short
-+ // jump-if-carry/not-carry to jump-if-zero/not-zero, whereas disabling is the
-+ // reverse operation of that.
-+ Address jmp_address = test_instruction_address - delta;
-+ DCHECK((check == ENABLE_INLINED_SMI_CHECK)
-+ ? (*jmp_address == Assembler::kJncShortOpcode ||
-+ *jmp_address == Assembler::kJcShortOpcode)
-+ : (*jmp_address == Assembler::kJnzShortOpcode ||
-+ *jmp_address == Assembler::kJzShortOpcode));
-+ Condition cc =
-+ (check == ENABLE_INLINED_SMI_CHECK)
-+ ? (*jmp_address == Assembler::kJncShortOpcode ? not_zero : zero)
-+ : (*jmp_address == Assembler::kJnzShortOpcode ? not_carry : carry);
-+ *jmp_address = static_cast<byte>(Assembler::kJccShortPrefix | cc);
-+}
-+} // namespace internal
-+} // namespace v8
-+
-+#endif // V8_TARGET_ARCH_X87
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/ic/x87/OWNERS qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/ic/x87/OWNERS
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/ic/x87/OWNERS 1970-01-01 01:00:00.000000000 +0100
--+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/ic/x87/OWNERS 2018-02-18 19:00:54.103419531 +0100
-@@ -0,0 +1,2 @@
-+weiliang.lin(a)intel.com
-+chunyang.dai(a)intel.com
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/inspector/BUILD.gn qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/inspector/BUILD.gn
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/inspector/BUILD.gn 2018-02-02 11:39:52.000000000 +0100
--+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/inspector/BUILD.gn 2018-02-18 19:00:54.103419531 +0100
-@@ -106,7 +106,7 @@
- "/wd4996", # Deprecated function call.
- ]
- }
-- if (is_component_build) {
-+ if (is_component_build || v8_build_shared) {
- defines = [ "BUILDING_V8_SHARED" ]
- }
- }
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/interface-descriptors.h qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/interface-descriptors.h
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/interface-descriptors.h 2018-02-02 11:39:52.000000000 +0100
--+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/interface-descriptors.h 2018-02-18 19:00:54.103419531 +0100
-@@ -392,7 +392,7 @@
- static const Register ValueRegister();
- static const Register SlotRegister();
-
--#if V8_TARGET_ARCH_IA32
-+#if V8_TARGET_ARCH_IA32 || V8_TARGET_ARCH_X87
- static const bool kPassLastArgsOnStack = true;
- #else
- static const bool kPassLastArgsOnStack = false;
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/interpreter/interpreter-assembler.cc qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/interpreter/interpreter-assembler.cc
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/interpreter/interpreter-assembler.cc 2018-02-02 11:39:52.000000000 +0100
--+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/interpreter/interpreter-assembler.cc 2018-02-18 19:00:54.104419517 +0100
-@@ -1367,8 +1367,9 @@
- bool InterpreterAssembler::TargetSupportsUnalignedAccess() {
- #if V8_TARGET_ARCH_MIPS || V8_TARGET_ARCH_MIPS64
- return false;
--#elif V8_TARGET_ARCH_IA32 || V8_TARGET_ARCH_X64 || V8_TARGET_ARCH_S390 || \
-- V8_TARGET_ARCH_ARM || V8_TARGET_ARCH_ARM64 || V8_TARGET_ARCH_PPC
-+#elif V8_TARGET_ARCH_IA32 || V8_TARGET_ARCH_X64 || V8_TARGET_ARCH_X87 || \
-+ V8_TARGET_ARCH_S390 || V8_TARGET_ARCH_ARM || V8_TARGET_ARCH_ARM64 || \
-+ V8_TARGET_ARCH_PPC
- return true;
- #else
- #error "Unknown Architecture"
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/log.cc qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/log.cc
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/log.cc 2018-02-02 11:39:52.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/log.cc 2018-02-18 19:00:54.104419517 +0100
-@@ -370,6 +370,8 @@
- const char arch[] = "ppc";
- #elif V8_TARGET_ARCH_MIPS
- const char arch[] = "mips";
-+#elif V8_TARGET_ARCH_X87
-+ const char arch[] = "x87";
- #elif V8_TARGET_ARCH_ARM64
- const char arch[] = "arm64";
- #elif V8_TARGET_ARCH_S390
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/macro-assembler.h qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/macro-assembler.h
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/macro-assembler.h 2018-02-02 11:39:52.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/macro-assembler.h 2018-02-18 19:00:54.105419502 +0100
-@@ -52,6 +52,8 @@
- #elif V8_TARGET_ARCH_S390
- #include "src/s390/constants-s390.h"
- #include "src/s390/macro-assembler-s390.h"
-+#elif V8_TARGET_ARCH_X87
-+#include "src/x87/macro-assembler-x87.h"
- #else
- #error Unsupported target architecture.
- #endif
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/regexp/jsregexp.cc qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/regexp/jsregexp.cc
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/regexp/jsregexp.cc 2018-02-02 11:39:52.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/regexp/jsregexp.cc 2018-02-18 19:00:54.106419487 +0100
-@@ -48,6 +48,8 @@
- #include "src/regexp/mips/regexp-macro-assembler-mips.h"
- #elif V8_TARGET_ARCH_MIPS64
- #include "src/regexp/mips64/regexp-macro-assembler-mips64.h"
-+#elif V8_TARGET_ARCH_X87
-+#include "src/regexp/x87/regexp-macro-assembler-x87.h"
- #else
- #error Unsupported target architecture.
- #endif
-@@ -6760,6 +6762,9 @@
- #elif V8_TARGET_ARCH_MIPS64
- RegExpMacroAssemblerMIPS macro_assembler(isolate, zone, mode,
- (data->capture_count + 1) * 2);
-+#elif V8_TARGET_ARCH_X87
-+ RegExpMacroAssemblerX87 macro_assembler(isolate, zone, mode,
-+ (data->capture_count + 1) * 2);
- #else
- #error "Unsupported architecture"
- #endif
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/regexp/x87/OWNERS qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/regexp/x87/OWNERS
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/regexp/x87/OWNERS 1970-01-01 01:00:00.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/regexp/x87/OWNERS 2018-02-18 19:00:54.189418266 +0100
-@@ -0,0 +1,2 @@
-+weiliang.lin@intel.com
-+chunyang.dai@intel.com
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/regexp/x87/regexp-macro-assembler-x87.cc qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/regexp/x87/regexp-macro-assembler-x87.cc
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/regexp/x87/regexp-macro-assembler-x87.cc 1970-01-01 01:00:00.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/regexp/x87/regexp-macro-assembler-x87.cc 2018-02-18 19:00:54.190418252 +0100
-@@ -0,0 +1,1273 @@
-+// Copyright 2012 the V8 project authors. All rights reserved.
-+// Use of this source code is governed by a BSD-style license that can be
-+// found in the LICENSE file.
-+
-+#if V8_TARGET_ARCH_X87
-+
-+#include "src/regexp/x87/regexp-macro-assembler-x87.h"
-+
-+#include "src/log.h"
-+#include "src/macro-assembler.h"
-+#include "src/regexp/regexp-macro-assembler.h"
-+#include "src/regexp/regexp-stack.h"
-+#include "src/unicode.h"
-+
-+namespace v8 {
-+namespace internal {
-+
-+#ifndef V8_INTERPRETED_REGEXP
-+/*
-+ * This assembler uses the following register assignment convention
-+ * - edx : Current character. Must be loaded using LoadCurrentCharacter
-+ * before using any of the dispatch methods. Temporarily stores the
-+ * index of capture start after a matching pass for a global regexp.
-+ * - edi : Current position in input, as negative offset from end of string.
-+ * Please notice that this is the byte offset, not the character offset!
-+ * - esi : end of input (points to byte after last character in input).
-+ * - ebp : Frame pointer. Used to access arguments, local variables and
-+ * RegExp registers.
-+ * - esp : Points to tip of C stack.
-+ * - ecx : Points to tip of backtrack stack
-+ *
-+ * The registers eax and ebx are free to use for computations.
-+ *
-+ * Each call to a public method should retain this convention.
-+ * The stack will have the following structure:
-+ * - Isolate* isolate (address of the current isolate)
-+ * - direct_call (if 1, direct call from JavaScript code, if 0
-+ * call through the runtime system)
-+ * - stack_area_base (high end of the memory area to use as
-+ * backtracking stack)
-+ * - capture array size (may fit multiple sets of matches)
-+ * - int* capture_array (int[num_saved_registers_], for output).
-+ * - end of input (address of end of string)
-+ * - start of input (address of first character in string)
-+ * - start index (character index of start)
-+ * - String* input_string (location of a handle containing the string)
-+ * --- frame alignment (if applicable) ---
-+ * - return address
-+ * ebp-> - old ebp
-+ * - backup of caller esi
-+ * - backup of caller edi
-+ * - backup of caller ebx
-+ * - success counter (only for global regexps to count matches).
-+ * - Offset of location before start of input (effectively character
-+ * string start - 1). Used to initialize capture registers to a
-+ * non-position.
-+ * - register 0 ebp[-4] (only positions must be stored in the first
-+ * - register 1 ebp[-8] num_saved_registers_ registers)
-+ * - ...
-+ *
-+ * The first num_saved_registers_ registers are initialized to point to
-+ * "character -1" in the string (i.e., char_size() bytes before the first
-+ * character of the string). The remaining registers starts out as garbage.
-+ *
-+ * The data up to the return address must be placed there by the calling
-+ * code, by calling the code entry as cast to a function with the signature:
-+ * int (*match)(String* input_string,
-+ * int start_index,
-+ * Address start,
-+ * Address end,
-+ * int* capture_output_array,
-+ * int num_capture_registers,
-+ * byte* stack_area_base,
-+ * bool direct_call = false,
-+ * Isolate* isolate);
-+ */
-+
-+#define __ ACCESS_MASM(masm_)
-+
-+RegExpMacroAssemblerX87::RegExpMacroAssemblerX87(Isolate* isolate, Zone* zone,
-+ Mode mode,
-+ int registers_to_save)
-+ : NativeRegExpMacroAssembler(isolate, zone),
-+ masm_(new MacroAssembler(isolate, NULL, kRegExpCodeSize,
-+ CodeObjectRequired::kYes)),
-+ mode_(mode),
-+ num_registers_(registers_to_save),
-+ num_saved_registers_(registers_to_save),
-+ entry_label_(),
-+ start_label_(),
-+ success_label_(),
-+ backtrack_label_(),
-+ exit_label_() {
-+ DCHECK_EQ(0, registers_to_save % 2);
-+ __ jmp(&entry_label_); // We'll write the entry code later.
-+ __ bind(&start_label_); // And then continue from here.
-+}
-+
-+
-+RegExpMacroAssemblerX87::~RegExpMacroAssemblerX87() {
-+ delete masm_;
-+ // Unuse labels in case we throw away the assembler without calling GetCode.
-+ entry_label_.Unuse();
-+ start_label_.Unuse();
-+ success_label_.Unuse();
-+ backtrack_label_.Unuse();
-+ exit_label_.Unuse();
-+ check_preempt_label_.Unuse();
-+ stack_overflow_label_.Unuse();
-+}
-+
-+
-+int RegExpMacroAssemblerX87::stack_limit_slack() {
-+ return RegExpStack::kStackLimitSlack;
-+}
-+
-+
-+void RegExpMacroAssemblerX87::AdvanceCurrentPosition(int by) {
-+ if (by != 0) {
-+ __ add(edi, Immediate(by * char_size()));
-+ }
-+}
-+
-+
-+void RegExpMacroAssemblerX87::AdvanceRegister(int reg, int by) {
-+ DCHECK(reg >= 0);
-+ DCHECK(reg < num_registers_);
-+ if (by != 0) {
-+ __ add(register_location(reg), Immediate(by));
-+ }
-+}
-+
-+
-+void RegExpMacroAssemblerX87::Backtrack() {
-+ CheckPreemption();
-+ // Pop Code* offset from backtrack stack, add Code* and jump to location.
-+ Pop(ebx);
-+ __ add(ebx, Immediate(masm_->CodeObject()));
-+ __ jmp(ebx);
-+}
-+
-+
-+void RegExpMacroAssemblerX87::Bind(Label* label) {
-+ __ bind(label);
-+}
-+
-+
-+void RegExpMacroAssemblerX87::CheckCharacter(uint32_t c, Label* on_equal) {
-+ __ cmp(current_character(), c);
-+ BranchOrBacktrack(equal, on_equal);
-+}
-+
-+
-+void RegExpMacroAssemblerX87::CheckCharacterGT(uc16 limit, Label* on_greater) {
-+ __ cmp(current_character(), limit);
-+ BranchOrBacktrack(greater, on_greater);
-+}
-+
-+
-+void RegExpMacroAssemblerX87::CheckAtStart(Label* on_at_start) {
-+ __ lea(eax, Operand(edi, -char_size()));
-+ __ cmp(eax, Operand(ebp, kStringStartMinusOne));
-+ BranchOrBacktrack(equal, on_at_start);
-+}
-+
-+
-+void RegExpMacroAssemblerX87::CheckNotAtStart(int cp_offset,
-+ Label* on_not_at_start) {
-+ __ lea(eax, Operand(edi, -char_size() + cp_offset * char_size()));
-+ __ cmp(eax, Operand(ebp, kStringStartMinusOne));
-+ BranchOrBacktrack(not_equal, on_not_at_start);
-+}
-+
-+
-+void RegExpMacroAssemblerX87::CheckCharacterLT(uc16 limit, Label* on_less) {
-+ __ cmp(current_character(), limit);
-+ BranchOrBacktrack(less, on_less);
-+}
-+
-+
-+void RegExpMacroAssemblerX87::CheckGreedyLoop(Label* on_equal) {
-+ Label fallthrough;
-+ __ cmp(edi, Operand(backtrack_stackpointer(), 0));
-+ __ j(not_equal, &fallthrough);
-+ __ add(backtrack_stackpointer(), Immediate(kPointerSize)); // Pop.
-+ BranchOrBacktrack(no_condition, on_equal);
-+ __ bind(&fallthrough);
-+}
-+
-+void RegExpMacroAssemblerX87::CheckNotBackReferenceIgnoreCase(
-+ int start_reg, bool read_backward, bool unicode, Label* on_no_match) {
-+ Label fallthrough;
-+ __ mov(edx, register_location(start_reg)); // Index of start of capture
-+ __ mov(ebx, register_location(start_reg + 1)); // Index of end of capture
-+ __ sub(ebx, edx); // Length of capture.
-+
-+ // At this point, the capture registers are either both set or both cleared.
-+ // If the capture length is zero, then the capture is either empty or cleared.
-+ // Fall through in both cases.
-+ __ j(equal, &fallthrough);
-+
-+ // Check that there are sufficient characters left in the input.
-+ if (read_backward) {
-+ __ mov(eax, Operand(ebp, kStringStartMinusOne));
-+ __ add(eax, ebx);
-+ __ cmp(edi, eax);
-+ BranchOrBacktrack(less_equal, on_no_match);
-+ } else {
-+ __ mov(eax, edi);
-+ __ add(eax, ebx);
-+ BranchOrBacktrack(greater, on_no_match);
-+ }
-+
-+ if (mode_ == LATIN1) {
-+ Label success;
-+ Label fail;
-+ Label loop_increment;
-+ // Save register contents to make the registers available below.
-+ __ push(edi);
-+ __ push(backtrack_stackpointer());
-+ // After this, the eax, ecx, and edi registers are available.
-+
-+ __ add(edx, esi); // Start of capture
-+ __ add(edi, esi); // Start of text to match against capture.
-+ if (read_backward) {
-+ __ sub(edi, ebx); // Offset by length when matching backwards.
-+ }
-+ __ add(ebx, edi); // End of text to match against capture.
-+
-+ Label loop;
-+ __ bind(&loop);
-+ __ movzx_b(eax, Operand(edi, 0));
-+ __ cmpb_al(Operand(edx, 0));
-+ __ j(equal, &loop_increment);
-+
-+ // Mismatch, try case-insensitive match (converting letters to lower-case).
-+ __ or_(eax, 0x20); // Convert match character to lower-case.
-+ __ lea(ecx, Operand(eax, -'a'));
-+ __ cmp(ecx, static_cast<int32_t>('z' - 'a')); // Is eax a lowercase letter?
-+ Label convert_capture;
-+ __ j(below_equal, &convert_capture); // In range 'a'-'z'.
-+ // Latin-1: Check for values in range [224,254] but not 247.
-+ __ sub(ecx, Immediate(224 - 'a'));
-+ __ cmp(ecx, Immediate(254 - 224));
-+ __ j(above, &fail); // Weren't Latin-1 letters.
-+ __ cmp(ecx, Immediate(247 - 224)); // Check for 247.
-+ __ j(equal, &fail);
-+ __ bind(&convert_capture);
-+ // Also convert capture character.
-+ __ movzx_b(ecx, Operand(edx, 0));
-+ __ or_(ecx, 0x20);
-+
-+ __ cmp(eax, ecx);
-+ __ j(not_equal, &fail);
-+
-+ __ bind(&loop_increment);
-+ // Increment pointers into match and capture strings.
-+ __ add(edx, Immediate(1));
-+ __ add(edi, Immediate(1));
-+ // Compare to end of match, and loop if not done.
-+ __ cmp(edi, ebx);
-+ __ j(below, &loop);
-+ __ jmp(&success);
-+
-+ __ bind(&fail);
-+ // Restore original values before failing.
-+ __ pop(backtrack_stackpointer());
-+ __ pop(edi);
-+ BranchOrBacktrack(no_condition, on_no_match);
-+
-+ __ bind(&success);
-+ // Restore original value before continuing.
-+ __ pop(backtrack_stackpointer());
-+ // Drop original value of character position.
-+ __ add(esp, Immediate(kPointerSize));
-+ // Compute new value of character position after the matched part.
-+ __ sub(edi, esi);
-+ if (read_backward) {
-+ // Subtract match length if we matched backward.
-+ __ add(edi, register_location(start_reg));
-+ __ sub(edi, register_location(start_reg + 1));
-+ }
-+ } else {
-+ DCHECK(mode_ == UC16);
-+ // Save registers before calling C function.
-+ __ push(esi);
-+ __ push(edi);
-+ __ push(backtrack_stackpointer());
-+ __ push(ebx);
-+
-+ static const int argument_count = 4;
-+ __ PrepareCallCFunction(argument_count, ecx);
-+ // Put arguments into allocated stack area, last argument highest on stack.
-+ // Parameters are
-+ // Address byte_offset1 - Address captured substring's start.
-+ // Address byte_offset2 - Address of current character position.
-+ // size_t byte_length - length of capture in bytes(!)
-+ // Isolate* isolate or 0 if unicode flag.
-+
-+ // Set isolate.
-+#ifdef V8_INTL_SUPPORT
-+ if (unicode) {
-+ __ mov(Operand(esp, 3 * kPointerSize), Immediate(0));
-+ } else // NOLINT
-+#endif // V8_INTL_SUPPORT
-+ {
-+ __ mov(Operand(esp, 3 * kPointerSize),
-+ Immediate(ExternalReference::isolate_address(isolate())));
-+ }
-+ // Set byte_length.
-+ __ mov(Operand(esp, 2 * kPointerSize), ebx);
-+ // Set byte_offset2.
-+ // Found by adding negative string-end offset of current position (edi)
-+ // to end of string.
-+ __ add(edi, esi);
-+ if (read_backward) {
-+ __ sub(edi, ebx); // Offset by length when matching backwards.
-+ }
-+ __ mov(Operand(esp, 1 * kPointerSize), edi);
-+ // Set byte_offset1.
-+ // Start of capture, where edx already holds string-end negative offset.
-+ __ add(edx, esi);
-+ __ mov(Operand(esp, 0 * kPointerSize), edx);
-+
-+ {
-+ AllowExternalCallThatCantCauseGC scope(masm_);
-+ ExternalReference compare =
-+ ExternalReference::re_case_insensitive_compare_uc16(isolate());
-+ __ CallCFunction(compare, argument_count);
-+ }
-+ // Pop original values before reacting on result value.
-+ __ pop(ebx);
-+ __ pop(backtrack_stackpointer());
-+ __ pop(edi);
-+ __ pop(esi);
-+
-+ // Check if function returned non-zero for success or zero for failure.
-+ __ or_(eax, eax);
-+ BranchOrBacktrack(zero, on_no_match);
-+ // On success, advance position by length of capture.
-+ if (read_backward) {
-+ __ sub(edi, ebx);
-+ } else {
-+ __ add(edi, ebx);
-+ }
-+ }
-+ __ bind(&fallthrough);
-+}
-+
-+
-+void RegExpMacroAssemblerX87::CheckNotBackReference(int start_reg,
-+ bool read_backward,
-+ Label* on_no_match) {
-+ Label fallthrough;
-+ Label success;
-+ Label fail;
-+
-+ // Find length of back-referenced capture.
-+ __ mov(edx, register_location(start_reg));
-+ __ mov(eax, register_location(start_reg + 1));
-+ __ sub(eax, edx); // Length to check.
-+
-+ // At this point, the capture registers are either both set or both cleared.
-+ // If the capture length is zero, then the capture is either empty or cleared.
-+ // Fall through in both cases.
-+ __ j(equal, &fallthrough);
-+
-+ // Check that there are sufficient characters left in the input.
-+ if (read_backward) {
-+ __ mov(ebx, Operand(ebp, kStringStartMinusOne));
-+ __ add(ebx, eax);
-+ __ cmp(edi, ebx);
-+ BranchOrBacktrack(less_equal, on_no_match);
-+ } else {
-+ __ mov(ebx, edi);
-+ __ add(ebx, eax);
-+ BranchOrBacktrack(greater, on_no_match);
-+ }
-+
-+ // Save register to make it available below.
-+ __ push(backtrack_stackpointer());
-+
-+ // Compute pointers to match string and capture string
-+ __ add(edx, esi); // Start of capture.
-+ __ lea(ebx, Operand(esi, edi, times_1, 0)); // Start of match.
-+ if (read_backward) {
-+ __ sub(ebx, eax); // Offset by length when matching backwards.
-+ }
-+ __ lea(ecx, Operand(eax, ebx, times_1, 0)); // End of match
-+
-+ Label loop;
-+ __ bind(&loop);
-+ if (mode_ == LATIN1) {
-+ __ movzx_b(eax, Operand(edx, 0));
-+ __ cmpb_al(Operand(ebx, 0));
-+ } else {
-+ DCHECK(mode_ == UC16);
-+ __ movzx_w(eax, Operand(edx, 0));
-+ __ cmpw_ax(Operand(ebx, 0));
-+ }
-+ __ j(not_equal, &fail);
-+ // Increment pointers into capture and match string.
-+ __ add(edx, Immediate(char_size()));
-+ __ add(ebx, Immediate(char_size()));
-+ // Check if we have reached end of match area.
-+ __ cmp(ebx, ecx);
-+ __ j(below, &loop);
-+ __ jmp(&success);
-+
-+ __ bind(&fail);
-+ // Restore backtrack stackpointer.
-+ __ pop(backtrack_stackpointer());
-+ BranchOrBacktrack(no_condition, on_no_match);
-+
-+ __ bind(&success);
-+ // Move current character position to position after match.
-+ __ mov(edi, ecx);
-+ __ sub(edi, esi);
-+ if (read_backward) {
-+ // Subtract match length if we matched backward.
-+ __ add(edi, register_location(start_reg));
-+ __ sub(edi, register_location(start_reg + 1));
-+ }
-+ // Restore backtrack stackpointer.
-+ __ pop(backtrack_stackpointer());
-+
-+ __ bind(&fallthrough);
-+}
-+
-+
-+void RegExpMacroAssemblerX87::CheckNotCharacter(uint32_t c,
-+ Label* on_not_equal) {
-+ __ cmp(current_character(), c);
-+ BranchOrBacktrack(not_equal, on_not_equal);
-+}
-+
-+
-+void RegExpMacroAssemblerX87::CheckCharacterAfterAnd(uint32_t c,
-+ uint32_t mask,
-+ Label* on_equal) {
-+ if (c == 0) {
-+ __ test(current_character(), Immediate(mask));
-+ } else {
-+ __ mov(eax, mask);
-+ __ and_(eax, current_character());
-+ __ cmp(eax, c);
-+ }
-+ BranchOrBacktrack(equal, on_equal);
-+}
-+
-+
-+void RegExpMacroAssemblerX87::CheckNotCharacterAfterAnd(uint32_t c,
-+ uint32_t mask,
-+ Label* on_not_equal) {
-+ if (c == 0) {
-+ __ test(current_character(), Immediate(mask));
-+ } else {
-+ __ mov(eax, mask);
-+ __ and_(eax, current_character());
-+ __ cmp(eax, c);
-+ }
-+ BranchOrBacktrack(not_equal, on_not_equal);
-+}
-+
-+
-+void RegExpMacroAssemblerX87::CheckNotCharacterAfterMinusAnd(
-+ uc16 c,
-+ uc16 minus,
-+ uc16 mask,
-+ Label* on_not_equal) {
-+ DCHECK(minus < String::kMaxUtf16CodeUnit);
-+ __ lea(eax, Operand(current_character(), -minus));
-+ if (c == 0) {
-+ __ test(eax, Immediate(mask));
-+ } else {
-+ __ and_(eax, mask);
-+ __ cmp(eax, c);
-+ }
-+ BranchOrBacktrack(not_equal, on_not_equal);
-+}
-+
-+
-+void RegExpMacroAssemblerX87::CheckCharacterInRange(
-+ uc16 from,
-+ uc16 to,
-+ Label* on_in_range) {
-+ __ lea(eax, Operand(current_character(), -from));
-+ __ cmp(eax, to - from);
-+ BranchOrBacktrack(below_equal, on_in_range);
-+}
-+
-+
-+void RegExpMacroAssemblerX87::CheckCharacterNotInRange(
-+ uc16 from,
-+ uc16 to,
-+ Label* on_not_in_range) {
-+ __ lea(eax, Operand(current_character(), -from));
-+ __ cmp(eax, to - from);
-+ BranchOrBacktrack(above, on_not_in_range);
-+}
-+
-+
-+void RegExpMacroAssemblerX87::CheckBitInTable(
-+ Handle<ByteArray> table,
-+ Label* on_bit_set) {
-+ __ mov(eax, Immediate(table));
-+ Register index = current_character();
-+ if (mode_ != LATIN1 || kTableMask != String::kMaxOneByteCharCode) {
-+ __ mov(ebx, kTableSize - 1);
-+ __ and_(ebx, current_character());
-+ index = ebx;
-+ }
-+ __ cmpb(FieldOperand(eax, index, times_1, ByteArray::kHeaderSize),
-+ Immediate(0));
-+ BranchOrBacktrack(not_equal, on_bit_set);
-+}
-+
-+
-+bool RegExpMacroAssemblerX87::CheckSpecialCharacterClass(uc16 type,
-+ Label* on_no_match) {
-+ // Range checks (c in min..max) are generally implemented by an unsigned
-+ // (c - min) <= (max - min) check
-+ switch (type) {
-+ case 's':
-+ // Match space-characters
-+ if (mode_ == LATIN1) {
-+ // One byte space characters are '\t'..'\r', ' ' and \u00a0.
-+ Label success;
-+ __ cmp(current_character(), ' ');
-+ __ j(equal, &success, Label::kNear);
-+ // Check range 0x09..0x0d
-+ __ lea(eax, Operand(current_character(), -'\t'));
-+ __ cmp(eax, '\r' - '\t');
-+ __ j(below_equal, &success, Label::kNear);
-+ // \u00a0 (NBSP).
-+ __ cmp(eax, 0x00a0 - '\t');
-+ BranchOrBacktrack(not_equal, on_no_match);
-+ __ bind(&success);
-+ return true;
-+ }
-+ return false;
-+ case 'S':
-+ // The emitted code for generic character classes is good enough.
-+ return false;
-+ case 'd':
-+ // Match ASCII digits ('0'..'9')
-+ __ lea(eax, Operand(current_character(), -'0'));
-+ __ cmp(eax, '9' - '0');
-+ BranchOrBacktrack(above, on_no_match);
-+ return true;
-+ case 'D':
-+ // Match non ASCII-digits
-+ __ lea(eax, Operand(current_character(), -'0'));
-+ __ cmp(eax, '9' - '0');
-+ BranchOrBacktrack(below_equal, on_no_match);
-+ return true;
-+ case '.': {
-+ // Match non-newlines (not 0x0a('\n'), 0x0d('\r'), 0x2028 and 0x2029)
-+ __ mov(eax, current_character());
-+ __ xor_(eax, Immediate(0x01));
-+ // See if current character is '\n'^1 or '\r'^1, i.e., 0x0b or 0x0c
-+ __ sub(eax, Immediate(0x0b));
-+ __ cmp(eax, 0x0c - 0x0b);
-+ BranchOrBacktrack(below_equal, on_no_match);
-+ if (mode_ == UC16) {
-+ // Compare original value to 0x2028 and 0x2029, using the already
-+ // computed (current_char ^ 0x01 - 0x0b). I.e., check for
-+ // 0x201d (0x2028 - 0x0b) or 0x201e.
-+ __ sub(eax, Immediate(0x2028 - 0x0b));
-+ __ cmp(eax, 0x2029 - 0x2028);
-+ BranchOrBacktrack(below_equal, on_no_match);
-+ }
-+ return true;
-+ }
-+ case 'w': {
-+ if (mode_ != LATIN1) {
-+ // Table is 256 entries, so all Latin1 characters can be tested.
-+ __ cmp(current_character(), Immediate('z'));
-+ BranchOrBacktrack(above, on_no_match);
-+ }
-+ DCHECK_EQ(0, word_character_map[0]); // Character '\0' is not a word char.
-+ ExternalReference word_map = ExternalReference::re_word_character_map();
-+ __ test_b(current_character(),
-+ Operand::StaticArray(current_character(), times_1, word_map));
-+ BranchOrBacktrack(zero, on_no_match);
-+ return true;
-+ }
-+ case 'W': {
-+ Label done;
-+ if (mode_ != LATIN1) {
-+ // Table is 256 entries, so all Latin1 characters can be tested.
-+ __ cmp(current_character(), Immediate('z'));
-+ __ j(above, &done);
-+ }
-+ DCHECK_EQ(0, word_character_map[0]); // Character '\0' is not a word char.
-+ ExternalReference word_map = ExternalReference::re_word_character_map();
-+ __ test_b(current_character(),
-+ Operand::StaticArray(current_character(), times_1, word_map));
-+ BranchOrBacktrack(not_zero, on_no_match);
-+ if (mode_ != LATIN1) {
-+ __ bind(&done);
-+ }
-+ return true;
-+ }
-+ // Non-standard classes (with no syntactic shorthand) used internally.
-+ case '*':
-+ // Match any character.
-+ return true;
-+ case 'n': {
-+ // Match newlines (0x0a('\n'), 0x0d('\r'), 0x2028 or 0x2029).
-+ // The opposite of '.'.
-+ __ mov(eax, current_character());
-+ __ xor_(eax, Immediate(0x01));
-+ // See if current character is '\n'^1 or '\r'^1, i.e., 0x0b or 0x0c
-+ __ sub(eax, Immediate(0x0b));
-+ __ cmp(eax, 0x0c - 0x0b);
-+ if (mode_ == LATIN1) {
-+ BranchOrBacktrack(above, on_no_match);
-+ } else {
-+ Label done;
-+ BranchOrBacktrack(below_equal, &done);
-+ DCHECK_EQ(UC16, mode_);
-+ // Compare original value to 0x2028 and 0x2029, using the already
-+ // computed (current_char ^ 0x01 - 0x0b). I.e., check for
-+ // 0x201d (0x2028 - 0x0b) or 0x201e.
-+ __ sub(eax, Immediate(0x2028 - 0x0b));
-+ __ cmp(eax, 1);
-+ BranchOrBacktrack(above, on_no_match);
-+ __ bind(&done);
-+ }
-+ return true;
-+ }
-+ // No custom implementation (yet): s(UC16), S(UC16).
-+ default:
-+ return false;
-+ }
-+}
-+
-+
-+void RegExpMacroAssemblerX87::Fail() {
-+ STATIC_ASSERT(FAILURE == 0); // Return value for failure is zero.
-+ if (!global()) {
-+ __ Move(eax, Immediate(FAILURE));
-+ }
-+ __ jmp(&exit_label_);
-+}
-+
-+
-+Handle<HeapObject> RegExpMacroAssemblerX87::GetCode(Handle<String> source) {
-+ Label return_eax;
-+ // Finalize code - write the entry point code now we know how many
-+ // registers we need.
-+
-+ // Entry code:
-+ __ bind(&entry_label_);
-+
-+ // Tell the system that we have a stack frame. Because the type is MANUAL, no
-+ // code is generated.
-+ FrameScope scope(masm_, StackFrame::MANUAL);
-+
-+ // Actually emit code to start a new stack frame.
-+ __ push(ebp);
-+ __ mov(ebp, esp);
-+ // Save callee-save registers. Order here should correspond to order of
-+ // kBackup_ebx etc.
-+ __ push(esi);
-+ __ push(edi);
-+ __ push(ebx); // Callee-save on MacOS.
-+ __ push(Immediate(0)); // Number of successful matches in a global regexp.
-+ __ push(Immediate(0)); // Make room for "string start - 1" constant.
-+
-+ // Check if we have space on the stack for registers.
-+ Label stack_limit_hit;
-+ Label stack_ok;
-+
-+ ExternalReference stack_limit =
-+ ExternalReference::address_of_stack_limit(isolate());
-+ __ mov(ecx, esp);
-+ __ sub(ecx, Operand::StaticVariable(stack_limit));
-+ // Handle it if the stack pointer is already below the stack limit.
-+ __ j(below_equal, &stack_limit_hit);
-+ // Check if there is room for the variable number of registers above
-+ // the stack limit.
-+ __ cmp(ecx, num_registers_ * kPointerSize);
-+ __ j(above_equal, &stack_ok);
-+ // Exit with OutOfMemory exception. There is not enough space on the stack
-+ // for our working registers.
-+ __ mov(eax, EXCEPTION);
-+ __ jmp(&return_eax);
-+
-+ __ bind(&stack_limit_hit);
-+ CallCheckStackGuardState(ebx);
-+ __ or_(eax, eax);
-+ // If returned value is non-zero, we exit with the returned value as result.
-+ __ j(not_zero, &return_eax);
-+
-+ __ bind(&stack_ok);
-+ // Load start index for later use.
-+ __ mov(ebx, Operand(ebp, kStartIndex));
-+
-+ // Allocate space on stack for registers.
-+ __ sub(esp, Immediate(num_registers_ * kPointerSize));
-+ // Load string length.
-+ __ mov(esi, Operand(ebp, kInputEnd));
-+ // Load input position.
-+ __ mov(edi, Operand(ebp, kInputStart));
-+ // Set up edi to be negative offset from string end.
-+ __ sub(edi, esi);
-+
-+ // Set eax to address of char before start of the string.
-+ // (effectively string position -1).
-+ __ neg(ebx);
-+ if (mode_ == UC16) {
-+ __ lea(eax, Operand(edi, ebx, times_2, -char_size()));
-+ } else {
-+ __ lea(eax, Operand(edi, ebx, times_1, -char_size()));
-+ }
-+ // Store this value in a local variable, for use when clearing
-+ // position registers.
-+ __ mov(Operand(ebp, kStringStartMinusOne), eax);
-+
-+#if V8_OS_WIN
-+ // Ensure that we write to each stack page, in order. Skipping a page
-+ // on Windows can cause segmentation faults. Assuming page size is 4k.
-+ const int kPageSize = 4096;
-+ const int kRegistersPerPage = kPageSize / kPointerSize;
-+ for (int i = num_saved_registers_ + kRegistersPerPage - 1;
-+ i < num_registers_;
-+ i += kRegistersPerPage) {
-+ __ mov(register_location(i), eax); // One write every page.
-+ }
-+#endif // V8_OS_WIN
-+
-+ Label load_char_start_regexp, start_regexp;
-+ // Load newline if index is at start, previous character otherwise.
-+ __ cmp(Operand(ebp, kStartIndex), Immediate(0));
-+ __ j(not_equal, &load_char_start_regexp, Label::kNear);
-+ __ mov(current_character(), '\n');
-+ __ jmp(&start_regexp, Label::kNear);
-+
-+ // Global regexp restarts matching here.
-+ __ bind(&load_char_start_regexp);
-+ // Load previous char as initial value of current character register.
-+ LoadCurrentCharacterUnchecked(-1, 1);
-+ __ bind(&start_regexp);
-+
-+ // Initialize on-stack registers.
-+ if (num_saved_registers_ > 0) { // Always is, if generated from a regexp.
-+ // Fill saved registers with initial value = start offset - 1
-+ // Fill in stack push order, to avoid accessing across an unwritten
-+ // page (a problem on Windows).
-+ if (num_saved_registers_ > 8) {
-+ __ mov(ecx, kRegisterZero);
-+ Label init_loop;
-+ __ bind(&init_loop);
-+ __ mov(Operand(ebp, ecx, times_1, 0), eax);
-+ __ sub(ecx, Immediate(kPointerSize));
-+ __ cmp(ecx, kRegisterZero - num_saved_registers_ * kPointerSize);
-+ __ j(greater, &init_loop);
-+ } else { // Unroll the loop.
-+ for (int i = 0; i < num_saved_registers_; i++) {
-+ __ mov(register_location(i), eax);
-+ }
-+ }
-+ }
-+
-+ // Initialize backtrack stack pointer.
-+ __ mov(backtrack_stackpointer(), Operand(ebp, kStackHighEnd));
-+
-+ __ jmp(&start_label_);
-+
-+ // Exit code:
-+ if (success_label_.is_linked()) {
-+ // Save captures when successful.
-+ __ bind(&success_label_);
-+ if (num_saved_registers_ > 0) {
-+ // copy captures to output
-+ __ mov(ebx, Operand(ebp, kRegisterOutput));
-+ __ mov(ecx, Operand(ebp, kInputEnd));
-+ __ mov(edx, Operand(ebp, kStartIndex));
-+ __ sub(ecx, Operand(ebp, kInputStart));
-+ if (mode_ == UC16) {
-+ __ lea(ecx, Operand(ecx, edx, times_2, 0));
-+ } else {
-+ __ add(ecx, edx);
-+ }
-+ for (int i = 0; i < num_saved_registers_; i++) {
-+ __ mov(eax, register_location(i));
-+ if (i == 0 && global_with_zero_length_check()) {
-+ // Keep capture start in edx for the zero-length check later.
-+ __ mov(edx, eax);
-+ }
-+ // Convert to index from start of string, not end.
-+ __ add(eax, ecx);
-+ if (mode_ == UC16) {
-+ __ sar(eax, 1); // Convert byte index to character index.
-+ }
-+ __ mov(Operand(ebx, i * kPointerSize), eax);
-+ }
-+ }
-+
-+ if (global()) {
-+ // Restart matching if the regular expression is flagged as global.
-+ // Increment success counter.
-+ __ inc(Operand(ebp, kSuccessfulCaptures));
-+ // Capture results have been stored, so the number of remaining global
-+ // output registers is reduced by the number of stored captures.
-+ __ mov(ecx, Operand(ebp, kNumOutputRegisters));
-+ __ sub(ecx, Immediate(num_saved_registers_));
-+ // Check whether we have enough room for another set of capture results.
-+ __ cmp(ecx, Immediate(num_saved_registers_));
-+ __ j(less, &exit_label_);
-+
-+ __ mov(Operand(ebp, kNumOutputRegisters), ecx);
-+ // Advance the location for output.
-+ __ add(Operand(ebp, kRegisterOutput),
-+ Immediate(num_saved_registers_ * kPointerSize));
-+
-+ // Prepare eax to initialize registers with its value in the next run.
-+ __ mov(eax, Operand(ebp, kStringStartMinusOne));
-+
-+ if (global_with_zero_length_check()) {
-+ // Special case for zero-length matches.
-+ // edx: capture start index
-+ __ cmp(edi, edx);
-+ // Not a zero-length match, restart.
-+ __ j(not_equal, &load_char_start_regexp);
-+ // edi (offset from the end) is zero if we already reached the end.
-+ __ test(edi, edi);
-+ __ j(zero, &exit_label_, Label::kNear);
-+ // Advance current position after a zero-length match.
-+ Label advance;
-+ __ bind(&advance);
-+ if (mode_ == UC16) {
-+ __ add(edi, Immediate(2));
-+ } else {
-+ __ inc(edi);
-+ }
-+ if (global_unicode()) CheckNotInSurrogatePair(0, &advance);
-+ }
-+ __ jmp(&load_char_start_regexp);
-+ } else {
-+ __ mov(eax, Immediate(SUCCESS));
-+ }
-+ }
-+
-+ __ bind(&exit_label_);
-+ if (global()) {
-+ // Return the number of successful captures.
-+ __ mov(eax, Operand(ebp, kSuccessfulCaptures));
-+ }
-+
-+ __ bind(&return_eax);
-+ // Skip esp past regexp registers.
-+ __ lea(esp, Operand(ebp, kBackup_ebx));
-+ // Restore callee-save registers.
-+ __ pop(ebx);
-+ __ pop(edi);
-+ __ pop(esi);
-+ // Exit function frame, restore previous one.
-+ __ pop(ebp);
-+ __ ret(0);
-+
-+ // Backtrack code (branch target for conditional backtracks).
-+ if (backtrack_label_.is_linked()) {
-+ __ bind(&backtrack_label_);
-+ Backtrack();
-+ }
-+
-+ Label exit_with_exception;
-+
-+ // Preempt-code
-+ if (check_preempt_label_.is_linked()) {
-+ SafeCallTarget(&check_preempt_label_);
-+
-+ __ push(backtrack_stackpointer());
-+ __ push(edi);
-+
-+ CallCheckStackGuardState(ebx);
-+ __ or_(eax, eax);
-+ // If returning non-zero, we should end execution with the given
-+ // result as return value.
-+ __ j(not_zero, &return_eax);
-+
-+ __ pop(edi);
-+ __ pop(backtrack_stackpointer());
-+ // String might have moved: Reload esi from frame.
-+ __ mov(esi, Operand(ebp, kInputEnd));
-+ SafeReturn();
-+ }
-+
-+ // Backtrack stack overflow code.
-+ if (stack_overflow_label_.is_linked()) {
-+ SafeCallTarget(&stack_overflow_label_);
-+ // Reached if the backtrack-stack limit has been hit.
-+
-+ Label grow_failed;
-+ // Save registers before calling C function
-+ __ push(esi);
-+ __ push(edi);
-+
-+ // Call GrowStack(backtrack_stackpointer())
-+ static const int num_arguments = 3;
-+ __ PrepareCallCFunction(num_arguments, ebx);
-+ __ mov(Operand(esp, 2 * kPointerSize),
-+ Immediate(ExternalReference::isolate_address(isolate())));
-+ __ lea(eax, Operand(ebp, kStackHighEnd));
-+ __ mov(Operand(esp, 1 * kPointerSize), eax);
-+ __ mov(Operand(esp, 0 * kPointerSize), backtrack_stackpointer());
-+ ExternalReference grow_stack =
-+ ExternalReference::re_grow_stack(isolate());
-+ __ CallCFunction(grow_stack, num_arguments);
-+ // If return NULL, we have failed to grow the stack, and
-+ // must exit with a stack-overflow exception.
-+ __ or_(eax, eax);
-+ __ j(equal, &exit_with_exception);
-+ // Otherwise use return value as new stack pointer.
-+ __ mov(backtrack_stackpointer(), eax);
-+ // Restore saved registers and continue.
-+ __ pop(edi);
-+ __ pop(esi);
-+ SafeReturn();
-+ }
-+
-+ if (exit_with_exception.is_linked()) {
-+ // If any of the code above needed to exit with an exception.
-+ __ bind(&exit_with_exception);
-+ // Exit with Result EXCEPTION(-1) to signal thrown exception.
-+ __ mov(eax, EXCEPTION);
-+ __ jmp(&return_eax);
-+ }
-+
-+ CodeDesc code_desc;
-+ masm_->GetCode(masm_->isolate(), &code_desc);
-+ Handle<Code> code =
-+ isolate()->factory()->NewCode(code_desc,
-+ Code::ComputeFlags(Code::REGEXP),
-+ masm_->CodeObject());
-+ PROFILE(masm_->isolate(),
-+ RegExpCodeCreateEvent(AbstractCode::cast(*code), *source));
-+ return Handle<HeapObject>::cast(code);
-+}
-+
-+
-+void RegExpMacroAssemblerX87::GoTo(Label* to) {
-+ BranchOrBacktrack(no_condition, to);
-+}
-+
-+
-+void RegExpMacroAssemblerX87::IfRegisterGE(int reg,
-+ int comparand,
-+ Label* if_ge) {
-+ __ cmp(register_location(reg), Immediate(comparand));
-+ BranchOrBacktrack(greater_equal, if_ge);
-+}
-+
-+
-+void RegExpMacroAssemblerX87::IfRegisterLT(int reg,
-+ int comparand,
-+ Label* if_lt) {
-+ __ cmp(register_location(reg), Immediate(comparand));
-+ BranchOrBacktrack(less, if_lt);
-+}
-+
-+
-+void RegExpMacroAssemblerX87::IfRegisterEqPos(int reg,
-+ Label* if_eq) {
-+ __ cmp(edi, register_location(reg));
-+ BranchOrBacktrack(equal, if_eq);
-+}
-+
-+
-+RegExpMacroAssembler::IrregexpImplementation
-+ RegExpMacroAssemblerX87::Implementation() {
-+ return kX87Implementation;
-+}
-+
-+
-+void RegExpMacroAssemblerX87::LoadCurrentCharacter(int cp_offset,
-+ Label* on_end_of_input,
-+ bool check_bounds,
-+ int characters) {
-+ DCHECK(cp_offset < (1<<30)); // Be sane! (And ensure negation works)
-+ if (check_bounds) {
-+ if (cp_offset >= 0) {
-+ CheckPosition(cp_offset + characters - 1, on_end_of_input);
-+ } else {
-+ CheckPosition(cp_offset, on_end_of_input);
-+ }
-+ }
-+ LoadCurrentCharacterUnchecked(cp_offset, characters);
-+}
-+
-+
-+void RegExpMacroAssemblerX87::PopCurrentPosition() {
-+ Pop(edi);
-+}
-+
-+
-+void RegExpMacroAssemblerX87::PopRegister(int register_index) {
-+ Pop(eax);
-+ __ mov(register_location(register_index), eax);
-+}
-+
-+
-+void RegExpMacroAssemblerX87::PushBacktrack(Label* label) {
-+ Push(Immediate::CodeRelativeOffset(label));
-+ CheckStackLimit();
-+}
-+
-+
-+void RegExpMacroAssemblerX87::PushCurrentPosition() {
-+ Push(edi);
-+}
-+
-+
-+void RegExpMacroAssemblerX87::PushRegister(int register_index,
-+ StackCheckFlag check_stack_limit) {
-+ __ mov(eax, register_location(register_index));
-+ Push(eax);
-+ if (check_stack_limit) CheckStackLimit();
-+}
-+
-+
-+void RegExpMacroAssemblerX87::ReadCurrentPositionFromRegister(int reg) {
-+ __ mov(edi, register_location(reg));
-+}
-+
-+
-+void RegExpMacroAssemblerX87::ReadStackPointerFromRegister(int reg) {
-+ __ mov(backtrack_stackpointer(), register_location(reg));
-+ __ add(backtrack_stackpointer(), Operand(ebp, kStackHighEnd));
-+}
-+
-+void RegExpMacroAssemblerX87::SetCurrentPositionFromEnd(int by) {
-+ Label after_position;
-+ __ cmp(edi, -by * char_size());
-+ __ j(greater_equal, &after_position, Label::kNear);
-+ __ mov(edi, -by * char_size());
-+ // On RegExp code entry (where this operation is used), the character before
-+ // the current position is expected to be already loaded.
-+ // We have advanced the position, so it's safe to read backwards.
-+ LoadCurrentCharacterUnchecked(-1, 1);
-+ __ bind(&after_position);
-+}
-+
-+
-+void RegExpMacroAssemblerX87::SetRegister(int register_index, int to) {
-+ DCHECK(register_index >= num_saved_registers_); // Reserved for positions!
-+ __ mov(register_location(register_index), Immediate(to));
-+}
-+
-+
-+bool RegExpMacroAssemblerX87::Succeed() {
-+ __ jmp(&success_label_);
-+ return global();
-+}
-+
-+
-+void RegExpMacroAssemblerX87::WriteCurrentPositionToRegister(int reg,
-+ int cp_offset) {
-+ if (cp_offset == 0) {
-+ __ mov(register_location(reg), edi);
-+ } else {
-+ __ lea(eax, Operand(edi, cp_offset * char_size()));
-+ __ mov(register_location(reg), eax);
-+ }
-+}
-+
-+
-+void RegExpMacroAssemblerX87::ClearRegisters(int reg_from, int reg_to) {
-+ DCHECK(reg_from <= reg_to);
-+ __ mov(eax, Operand(ebp, kStringStartMinusOne));
-+ for (int reg = reg_from; reg <= reg_to; reg++) {
-+ __ mov(register_location(reg), eax);
-+ }
-+}
-+
-+
-+void RegExpMacroAssemblerX87::WriteStackPointerToRegister(int reg) {
-+ __ mov(eax, backtrack_stackpointer());
-+ __ sub(eax, Operand(ebp, kStackHighEnd));
-+ __ mov(register_location(reg), eax);
-+}
-+
-+
-+// Private methods:
-+
-+void RegExpMacroAssemblerX87::CallCheckStackGuardState(Register scratch) {
-+ static const int num_arguments = 3;
-+ __ PrepareCallCFunction(num_arguments, scratch);
-+ // RegExp code frame pointer.
-+ __ mov(Operand(esp, 2 * kPointerSize), ebp);
-+ // Code* of self.
-+ __ mov(Operand(esp, 1 * kPointerSize), Immediate(masm_->CodeObject()));
-+ // Next address on the stack (will be address of return address).
-+ __ lea(eax, Operand(esp, -kPointerSize));
-+ __ mov(Operand(esp, 0 * kPointerSize), eax);
-+ ExternalReference check_stack_guard =
-+ ExternalReference::re_check_stack_guard_state(isolate());
-+ __ CallCFunction(check_stack_guard, num_arguments);
-+}
-+
-+
-+// Helper function for reading a value out of a stack frame.
-+template <typename T>
-+static T& frame_entry(Address re_frame, int frame_offset) {
-+ return reinterpret_cast<T&>(Memory::int32_at(re_frame + frame_offset));
-+}
-+
-+
-+template <typename T>
-+static T* frame_entry_address(Address re_frame, int frame_offset) {
-+ return reinterpret_cast<T*>(re_frame + frame_offset);
-+}
-+
-+
-+int RegExpMacroAssemblerX87::CheckStackGuardState(Address* return_address,
-+ Code* re_code,
-+ Address re_frame) {
-+ return NativeRegExpMacroAssembler::CheckStackGuardState(
-+ frame_entry<Isolate*>(re_frame, kIsolate),
-+ frame_entry<int>(re_frame, kStartIndex),
-+ frame_entry<int>(re_frame, kDirectCall) == 1, return_address, re_code,
-+ frame_entry_address<String*>(re_frame, kInputString),
-+ frame_entry_address<const byte*>(re_frame, kInputStart),
-+ frame_entry_address<const byte*>(re_frame, kInputEnd));
-+}
-+
-+
-+Operand RegExpMacroAssemblerX87::register_location(int register_index) {
-+ DCHECK(register_index < (1<<30));
-+ if (num_registers_ <= register_index) {
-+ num_registers_ = register_index + 1;
-+ }
-+ return Operand(ebp, kRegisterZero - register_index * kPointerSize);
-+}
-+
-+
-+void RegExpMacroAssemblerX87::CheckPosition(int cp_offset,
-+ Label* on_outside_input) {
-+ if (cp_offset >= 0) {
-+ __ cmp(edi, -cp_offset * char_size());
-+ BranchOrBacktrack(greater_equal, on_outside_input);
-+ } else {
-+ __ lea(eax, Operand(edi, cp_offset * char_size()));
-+ __ cmp(eax, Operand(ebp, kStringStartMinusOne));
-+ BranchOrBacktrack(less_equal, on_outside_input);
-+ }
-+}
-+
-+
-+void RegExpMacroAssemblerX87::BranchOrBacktrack(Condition condition,
-+ Label* to) {
-+ if (condition < 0) { // No condition
-+ if (to == NULL) {
-+ Backtrack();
-+ return;
-+ }
-+ __ jmp(to);
-+ return;
-+ }
-+ if (to == NULL) {
-+ __ j(condition, &backtrack_label_);
-+ return;
-+ }
-+ __ j(condition, to);
-+}
-+
-+
-+void RegExpMacroAssemblerX87::SafeCall(Label* to) {
-+ Label return_to;
-+ __ push(Immediate::CodeRelativeOffset(&return_to));
-+ __ jmp(to);
-+ __ bind(&return_to);
-+}
-+
-+
-+void RegExpMacroAssemblerX87::SafeReturn() {
-+ __ pop(ebx);
-+ __ add(ebx, Immediate(masm_->CodeObject()));
-+ __ jmp(ebx);
-+}
-+
-+
-+void RegExpMacroAssemblerX87::SafeCallTarget(Label* name) {
-+ __ bind(name);
-+}
-+
-+
-+void RegExpMacroAssemblerX87::Push(Register source) {
-+ DCHECK(!source.is(backtrack_stackpointer()));
-+ // Notice: This updates flags, unlike normal Push.
-+ __ sub(backtrack_stackpointer(), Immediate(kPointerSize));
-+ __ mov(Operand(backtrack_stackpointer(), 0), source);
-+}
-+
-+
-+void RegExpMacroAssemblerX87::Push(Immediate value) {
-+ // Notice: This updates flags, unlike normal Push.
-+ __ sub(backtrack_stackpointer(), Immediate(kPointerSize));
-+ __ mov(Operand(backtrack_stackpointer(), 0), value);
-+}
-+
-+
-+void RegExpMacroAssemblerX87::Pop(Register target) {
-+ DCHECK(!target.is(backtrack_stackpointer()));
-+ __ mov(target, Operand(backtrack_stackpointer(), 0));
-+ // Notice: This updates flags, unlike normal Pop.
-+ __ add(backtrack_stackpointer(), Immediate(kPointerSize));
-+}
-+
-+
-+void RegExpMacroAssemblerX87::CheckPreemption() {
-+ // Check for preemption.
-+ Label no_preempt;
-+ ExternalReference stack_limit =
-+ ExternalReference::address_of_stack_limit(isolate());
-+ __ cmp(esp, Operand::StaticVariable(stack_limit));
-+ __ j(above, &no_preempt);
-+
-+ SafeCall(&check_preempt_label_);
-+
-+ __ bind(&no_preempt);
-+}
-+
-+
-+void RegExpMacroAssemblerX87::CheckStackLimit() {
-+ Label no_stack_overflow;
-+ ExternalReference stack_limit =
-+ ExternalReference::address_of_regexp_stack_limit(isolate());
-+ __ cmp(backtrack_stackpointer(), Operand::StaticVariable(stack_limit));
-+ __ j(above, &no_stack_overflow);
-+
-+ SafeCall(&stack_overflow_label_);
-+
-+ __ bind(&no_stack_overflow);
-+}
-+
-+
-+void RegExpMacroAssemblerX87::LoadCurrentCharacterUnchecked(int cp_offset,
-+ int characters) {
-+ if (mode_ == LATIN1) {
-+ if (characters == 4) {
-+ __ mov(current_character(), Operand(esi, edi, times_1, cp_offset));
-+ } else if (characters == 2) {
-+ __ movzx_w(current_character(), Operand(esi, edi, times_1, cp_offset));
-+ } else {
-+ DCHECK(characters == 1);
-+ __ movzx_b(current_character(), Operand(esi, edi, times_1, cp_offset));
-+ }
-+ } else {
-+ DCHECK(mode_ == UC16);
-+ if (characters == 2) {
-+ __ mov(current_character(),
-+ Operand(esi, edi, times_1, cp_offset * sizeof(uc16)));
-+ } else {
-+ DCHECK(characters == 1);
-+ __ movzx_w(current_character(),
-+ Operand(esi, edi, times_1, cp_offset * sizeof(uc16)));
-+ }
-+ }
-+}
-+
-+
-+#undef __
-+
-+#endif // V8_INTERPRETED_REGEXP
-+
-+} // namespace internal
-+} // namespace v8
-+
-+#endif // V8_TARGET_ARCH_X87
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/regexp/x87/regexp-macro-assembler-x87.h qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/regexp/x87/regexp-macro-assembler-x87.h
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/regexp/x87/regexp-macro-assembler-x87.h	1970-01-01 01:00:00.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/regexp/x87/regexp-macro-assembler-x87.h	2018-02-18 19:00:54.190418252 +0100
-@@ -0,0 +1,204 @@
-+// Copyright 2012 the V8 project authors. All rights reserved.
-+// Use of this source code is governed by a BSD-style license that can be
-+// found in the LICENSE file.
-+
-+#ifndef V8_REGEXP_X87_REGEXP_MACRO_ASSEMBLER_X87_H_
-+#define V8_REGEXP_X87_REGEXP_MACRO_ASSEMBLER_X87_H_
-+
-+#include "src/macro-assembler.h"
-+#include "src/regexp/regexp-macro-assembler.h"
-+#include "src/x87/assembler-x87.h"
-+
-+namespace v8 {
-+namespace internal {
-+
-+#ifndef V8_INTERPRETED_REGEXP
-+class RegExpMacroAssemblerX87: public NativeRegExpMacroAssembler {
-+ public:
-+ RegExpMacroAssemblerX87(Isolate* isolate, Zone* zone, Mode mode,
-+ int registers_to_save);
-+ virtual ~RegExpMacroAssemblerX87();
-+ virtual int stack_limit_slack();
-+ virtual void AdvanceCurrentPosition(int by);
-+ virtual void AdvanceRegister(int reg, int by);
-+ virtual void Backtrack();
-+ virtual void Bind(Label* label);
-+ virtual void CheckAtStart(Label* on_at_start);
-+ virtual void CheckCharacter(uint32_t c, Label* on_equal);
-+ virtual void CheckCharacterAfterAnd(uint32_t c,
-+ uint32_t mask,
-+ Label* on_equal);
-+ virtual void CheckCharacterGT(uc16 limit, Label* on_greater);
-+ virtual void CheckCharacterLT(uc16 limit, Label* on_less);
-+ // A "greedy loop" is a loop that is both greedy and with a simple
-+ // body. It has a particularly simple implementation.
-+ virtual void CheckGreedyLoop(Label* on_tos_equals_current_position);
-+ virtual void CheckNotAtStart(int cp_offset, Label* on_not_at_start);
-+ virtual void CheckNotBackReference(int start_reg, bool read_backward,
-+ Label* on_no_match);
-+ virtual void CheckNotBackReferenceIgnoreCase(int start_reg,
-+ bool read_backward, bool unicode,
-+ Label* on_no_match);
-+ virtual void CheckNotCharacter(uint32_t c, Label* on_not_equal);
-+ virtual void CheckNotCharacterAfterAnd(uint32_t c,
-+ uint32_t mask,
-+ Label* on_not_equal);
-+ virtual void CheckNotCharacterAfterMinusAnd(uc16 c,
-+ uc16 minus,
-+ uc16 mask,
-+ Label* on_not_equal);
-+ virtual void CheckCharacterInRange(uc16 from,
-+ uc16 to,
-+ Label* on_in_range);
-+ virtual void CheckCharacterNotInRange(uc16 from,
-+ uc16 to,
-+ Label* on_not_in_range);
-+ virtual void CheckBitInTable(Handle<ByteArray> table, Label* on_bit_set);
-+
-+ // Checks whether the given offset from the current position is before
-+ // the end of the string.
-+ virtual void CheckPosition(int cp_offset, Label* on_outside_input);
-+ virtual bool CheckSpecialCharacterClass(uc16 type, Label* on_no_match);
-+ virtual void Fail();
-+ virtual Handle<HeapObject> GetCode(Handle<String> source);
-+ virtual void GoTo(Label* label);
-+ virtual void IfRegisterGE(int reg, int comparand, Label* if_ge);
-+ virtual void IfRegisterLT(int reg, int comparand, Label* if_lt);
-+ virtual void IfRegisterEqPos(int reg, Label* if_eq);
-+ virtual IrregexpImplementation Implementation();
-+ virtual void LoadCurrentCharacter(int cp_offset,
-+ Label* on_end_of_input,
-+ bool check_bounds = true,
-+ int characters = 1);
-+ virtual void PopCurrentPosition();
-+ virtual void PopRegister(int register_index);
-+ virtual void PushBacktrack(Label* label);
-+ virtual void PushCurrentPosition();
-+ virtual void PushRegister(int register_index,
-+ StackCheckFlag check_stack_limit);
-+ virtual void ReadCurrentPositionFromRegister(int reg);
-+ virtual void ReadStackPointerFromRegister(int reg);
-+ virtual void SetCurrentPositionFromEnd(int by);
-+ virtual void SetRegister(int register_index, int to);
-+ virtual bool Succeed();
-+ virtual void WriteCurrentPositionToRegister(int reg, int cp_offset);
-+ virtual void ClearRegisters(int reg_from, int reg_to);
-+ virtual void WriteStackPointerToRegister(int reg);
-+
-+ // Called from RegExp if the stack-guard is triggered.
-+ // If the code object is relocated, the return address is fixed before
-+ // returning.
-+ static int CheckStackGuardState(Address* return_address,
-+ Code* re_code,
-+ Address re_frame);
-+
-+ private:
-+ // Offsets from ebp of function parameters and stored registers.
-+ static const int kFramePointer = 0;
-+ // Above the frame pointer - function parameters and return address.
-+ static const int kReturn_eip = kFramePointer + kPointerSize;
-+ static const int kFrameAlign = kReturn_eip + kPointerSize;
-+ // Parameters.
-+ static const int kInputString = kFrameAlign;
-+ static const int kStartIndex = kInputString + kPointerSize;
-+ static const int kInputStart = kStartIndex + kPointerSize;
-+ static const int kInputEnd = kInputStart + kPointerSize;
-+ static const int kRegisterOutput = kInputEnd + kPointerSize;
-+ // For the case of global regular expression, we have room to store at least
-+ // one set of capture results. For the case of non-global regexp, we ignore
-+ // this value.
-+ static const int kNumOutputRegisters = kRegisterOutput + kPointerSize;
-+ static const int kStackHighEnd = kNumOutputRegisters + kPointerSize;
-+ static const int kDirectCall = kStackHighEnd + kPointerSize;
-+ static const int kIsolate = kDirectCall + kPointerSize;
-+ // Below the frame pointer - local stack variables.
-+ // When adding local variables remember to push space for them in
-+ // the frame in GetCode.
-+ static const int kBackup_esi = kFramePointer - kPointerSize;
-+ static const int kBackup_edi = kBackup_esi - kPointerSize;
-+ static const int kBackup_ebx = kBackup_edi - kPointerSize;
-+ static const int kSuccessfulCaptures = kBackup_ebx - kPointerSize;
-+ static const int kStringStartMinusOne = kSuccessfulCaptures - kPointerSize;
-+ // First register address. Following registers are below it on the stack.
-+ static const int kRegisterZero = kStringStartMinusOne - kPointerSize;
-+
-+ // Initial size of code buffer.
-+ static const size_t kRegExpCodeSize = 1024;
-+
-+ // Load a number of characters at the given offset from the
-+ // current position, into the current-character register.
-+ void LoadCurrentCharacterUnchecked(int cp_offset, int character_count);
-+
-+ // Check whether preemption has been requested.
-+ void CheckPreemption();
-+
-+ // Check whether we are exceeding the stack limit on the backtrack stack.
-+ void CheckStackLimit();
-+
-+ // Generate a call to CheckStackGuardState.
-+ void CallCheckStackGuardState(Register scratch);
-+
-+ // The ebp-relative location of a regexp register.
-+ Operand register_location(int register_index);
-+
-+ // The register containing the current character after LoadCurrentCharacter.
-+ inline Register current_character() { return edx; }
-+
-+ // The register containing the backtrack stack top. Provides a meaningful
-+ // name to the register.
-+ inline Register backtrack_stackpointer() { return ecx; }
-+
-+ // Byte size of chars in the string to match (decided by the Mode argument)
-+ inline int char_size() { return static_cast<int>(mode_); }
-+
-+ // Equivalent to a conditional branch to the label, unless the label
-+ // is NULL, in which case it is a conditional Backtrack.
-+ void BranchOrBacktrack(Condition condition, Label* to);
-+
-+ // Call and return internally in the generated code in a way that
-+ // is GC-safe (i.e., doesn't leave absolute code addresses on the stack)
-+ inline void SafeCall(Label* to);
-+ inline void SafeReturn();
-+ inline void SafeCallTarget(Label* name);
-+
-+ // Pushes the value of a register on the backtrack stack. Decrements the
-+ // stack pointer (ecx) by a word size and stores the register's value there.
-+ inline void Push(Register source);
-+
-+ // Pushes a value on the backtrack stack. Decrements the stack pointer (ecx)
-+ // by a word size and stores the value there.
-+ inline void Push(Immediate value);
-+
-+ // Pops a value from the backtrack stack. Reads the word at the stack pointer
-+ // (ecx) and increments it by a word size.
-+ inline void Pop(Register target);
-+
-+ Isolate* isolate() const { return masm_->isolate(); }
-+
-+ MacroAssembler* masm_;
-+
-+ // Which mode to generate code for (LATIN1 or UC16).
-+ Mode mode_;
-+
-+ // One greater than maximal register index actually used.
-+ int num_registers_;
-+
-+ // Number of registers to output at the end (the saved registers
-+ // are always 0..num_saved_registers_-1)
-+ int num_saved_registers_;
-+
-+ // Labels used internally.
-+ Label entry_label_;
-+ Label start_label_;
-+ Label success_label_;
-+ Label backtrack_label_;
-+ Label exit_label_;
-+ Label check_preempt_label_;
-+ Label stack_overflow_label_;
-+};
-+#endif // V8_INTERPRETED_REGEXP
-+
-+} // namespace internal
-+} // namespace v8
-+
-+#endif // V8_REGEXP_X87_REGEXP_MACRO_ASSEMBLER_X87_H_
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/register-configuration.cc qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/register-configuration.cc
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/register-configuration.cc	2018-02-02 11:39:52.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/register-configuration.cc	2018-02-18 19:00:54.190418252 +0100
-@@ -74,6 +74,9 @@
- #if V8_TARGET_ARCH_IA32
- kMaxAllocatableGeneralRegisterCount,
- kMaxAllocatableDoubleRegisterCount,
-+#elif V8_TARGET_ARCH_X87
-+ kMaxAllocatableGeneralRegisterCount,
-+ compiler == TURBOFAN ? 1 : kMaxAllocatableDoubleRegisterCount,
- #elif V8_TARGET_ARCH_X64
- kMaxAllocatableGeneralRegisterCount,
- kMaxAllocatableDoubleRegisterCount,
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/register-configuration.h qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/register-configuration.h
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/register-configuration.h	2018-02-02 11:39:52.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/register-configuration.h	2018-02-18 19:00:54.190418252 +0100
-@@ -28,7 +28,8 @@
- static const int kMaxFPRegisters = 32;
-
- // Default RegisterConfigurations for the target architecture.
-- // TODO(mstarzinger): Crankshaft is gone.
-+ // TODO(X87): This distinction in RegisterConfigurations is temporary
-+ // until x87 TF supports all of the registers that Crankshaft does.
- static const RegisterConfiguration* Crankshaft();
- static const RegisterConfiguration* Turbofan();
-
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/simulator.h qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/simulator.h
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/simulator.h	2018-02-02 11:39:52.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/simulator.h	2018-02-18 19:00:54.191418237 +0100
-@@ -21,6 +21,8 @@
- #include "src/mips64/simulator-mips64.h"
- #elif V8_TARGET_ARCH_S390
- #include "src/s390/simulator-s390.h"
-+#elif V8_TARGET_ARCH_X87
-+#include "src/x87/simulator-x87.h"
- #else
- #error Unsupported target architecture.
- #endif
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/strtod.cc qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/strtod.cc
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/strtod.cc	2018-02-02 11:39:52.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/strtod.cc	2018-02-18 19:00:54.191418237 +0100
-@@ -154,7 +154,8 @@
- static bool DoubleStrtod(Vector<const char> trimmed,
- int exponent,
- double* result) {
--#if (V8_TARGET_ARCH_IA32 || defined(USE_SIMULATOR)) && !defined(_MSC_VER)
-+#if (V8_TARGET_ARCH_IA32 || V8_TARGET_ARCH_X87 || defined(USE_SIMULATOR)) && \
-+ !defined(_MSC_VER)
- // On x86 the floating-point stack can be 64 or 80 bits wide. If it is
- // 80 bits wide (as is the case on Linux) then double-rounding occurs and the
- // result is not accurate.
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/utils.cc qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/utils.cc
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/utils.cc	2018-02-02 11:39:52.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/utils.cc	2018-02-18 19:00:54.191418237 +0100
-@@ -356,7 +356,8 @@
- }
- }
-
--#if V8_TARGET_ARCH_IA32
-+
-+#if V8_TARGET_ARCH_IA32 || V8_TARGET_ARCH_X87
- static void MemMoveWrapper(void* dest, const void* src, size_t size) {
- memmove(dest, src, size);
- }
-@@ -410,7 +411,7 @@
- void init_memcopy_functions(Isolate* isolate) {
- if (g_memcopy_functions_initialized) return;
- g_memcopy_functions_initialized = true;
--#if V8_TARGET_ARCH_IA32
-+#if V8_TARGET_ARCH_IA32 || V8_TARGET_ARCH_X87
- MemMoveFunction generated_memmove = CreateMemMoveFunction(isolate);
- if (generated_memmove != NULL) {
- memmove_function = generated_memmove;
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/utils.h qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/utils.h
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/utils.h	2018-02-02 11:39:52.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/utils.h	2018-02-18 19:00:54.192418222 +0100
-@@ -431,7 +431,7 @@
- // Initializes the codegen support that depends on CPU features.
- void init_memcopy_functions(Isolate* isolate);
-
--#if defined(V8_TARGET_ARCH_IA32)
-+#if defined(V8_TARGET_ARCH_IA32) || defined(V8_TARGET_ARCH_X87)
- // Limit below which the extra overhead of the MemCopy function is likely
- // to outweigh the benefits of faster copying.
- const int kMinComplexMemCopy = 64;
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/v8.gyp qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/v8.gyp
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/v8.gyp	2018-02-02 11:39:52.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/v8.gyp	2018-02-18 19:00:54.193418208 +0100
-@@ -279,6 +279,11 @@
- 'builtins/s390/builtins-s390.cc',
- ],
- }],
-+ ['v8_target_arch=="x87"', {
-+ 'sources': [ ### gcmole(arch:x87) ###
-+ 'builtins/x87/builtins-x87.cc',
-+ ],
-+ }],
- ['v8_enable_i18n_support==0', {
- 'sources!': [
- 'builtins/builtins-intl-gen.cc',
-@@ -1587,6 +1592,38 @@
- 'regexp/ia32/regexp-macro-assembler-ia32.h',
- ],
- }],
-+ ['v8_target_arch=="x87"', {
-+ 'sources': [ ### gcmole(arch:x87) ###
-+ 'x87/assembler-x87-inl.h',
-+ 'x87/assembler-x87.cc',
-+ 'x87/assembler-x87.h',
-+ 'x87/code-stubs-x87.cc',
-+ 'x87/code-stubs-x87.h',
-+ 'x87/codegen-x87.cc',
-+ 'x87/codegen-x87.h',
-+ 'x87/cpu-x87.cc',
-+ 'x87/deoptimizer-x87.cc',
-+ 'x87/disasm-x87.cc',
-+ 'x87/frames-x87.cc',
-+ 'x87/frames-x87.h',
-+ 'x87/interface-descriptors-x87.cc',
-+ 'x87/macro-assembler-x87.cc',
-+ 'x87/macro-assembler-x87.h',
-+ 'x87/simulator-x87.cc',
-+ 'x87/simulator-x87.h',
-+ 'compiler/x87/code-generator-x87.cc',
-+ 'compiler/x87/instruction-codes-x87.h',
-+ 'compiler/x87/instruction-scheduler-x87.cc',
-+ 'compiler/x87/instruction-selector-x87.cc',
-+ 'debug/x87/debug-x87.cc',
-+ 'full-codegen/x87/full-codegen-x87.cc',
-+ 'ic/x87/access-compiler-x87.cc',
-+ 'ic/x87/handler-compiler-x87.cc',
-+ 'ic/x87/ic-x87.cc',
-+ 'regexp/x87/regexp-macro-assembler-x87.cc',
-+ 'regexp/x87/regexp-macro-assembler-x87.h',
-+ ],
-+ }],
- ['v8_target_arch=="mips" or v8_target_arch=="mipsel"', {
- 'sources': [ ### gcmole(arch:mipsel) ###
- 'mips/assembler-mips.cc',
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/x87/assembler-x87.cc qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/x87/assembler-x87.cc
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/x87/assembler-x87.cc	1970-01-01 01:00:00.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/x87/assembler-x87.cc	2018-02-18 19:00:54.194418193 +0100
-@@ -0,0 +1,2258 @@
-+// Copyright (c) 1994-2006 Sun Microsystems Inc.
-+// All Rights Reserved.
-+//
-+// Redistribution and use in source and binary forms, with or without
-+// modification, are permitted provided that the following conditions
-+// are met:
-+//
-+// - Redistributions of source code must retain the above copyright notice,
-+// this list of conditions and the following disclaimer.
-+//
-+// - Redistribution in binary form must reproduce the above copyright
-+// notice, this list of conditions and the following disclaimer in the
-+// documentation and/or other materials provided with the
-+// distribution.
-+//
-+// - Neither the name of Sun Microsystems or the names of contributors may
-+// be used to endorse or promote products derived from this software without
-+// specific prior written permission.
-+//
-+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-+// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-+// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
-+// FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
-+// COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
-+// INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
-+// (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
-+// SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
-+// HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
-+// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
-+// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
-+// OF THE POSSIBILITY OF SUCH DAMAGE.
-+
-+// The original source code covered by the above license above has been modified
-+// significantly by Google Inc.
-+// Copyright 2012 the V8 project authors. All rights reserved.
-+
-+#include "src/x87/assembler-x87.h"
-+
-+#if V8_TARGET_ARCH_X87
-+
-+#include "src/base/bits.h"
-+#include "src/base/cpu.h"
-+#include "src/code-stubs.h"
-+#include "src/disassembler.h"
-+#include "src/macro-assembler.h"
-+#include "src/v8.h"
-+
-+namespace v8 {
-+namespace internal {
-+
-+Immediate Immediate::EmbeddedNumber(double value) {
-+ int32_t smi;
-+ if (DoubleToSmiInteger(value, &smi)) return Immediate(Smi::FromInt(smi));
-+ Immediate result(0, RelocInfo::EMBEDDED_OBJECT);
-+ result.is_heap_object_request_ = true;
-+ result.value_.heap_object_request = HeapObjectRequest(value);
-+ return result;
-+}
-+
-+Immediate Immediate::EmbeddedCode(CodeStub* stub) {
-+ Immediate result(0, RelocInfo::CODE_TARGET);
-+ result.is_heap_object_request_ = true;
-+ result.value_.heap_object_request = HeapObjectRequest(stub);
-+ return result;
-+}
-+
-+// -----------------------------------------------------------------------------
-+// Implementation of CpuFeatures
-+
-+void CpuFeatures::ProbeImpl(bool cross_compile) {
-+ base::CPU cpu;
-+
-+ // Only use statically determined features for cross compile (snapshot).
-+ if (cross_compile) return;
-+}
-+
-+
-+void CpuFeatures::PrintTarget() { }
-+void CpuFeatures::PrintFeatures() { }
-+
-+
-+// -----------------------------------------------------------------------------
-+// Implementation of Displacement
-+
-+void Displacement::init(Label* L, Type type) {
-+ DCHECK(!L->is_bound());
-+ int next = 0;
-+ if (L->is_linked()) {
-+ next = L->pos();
-+ DCHECK(next > 0); // Displacements must be at positions > 0
-+ }
-+ // Ensure that we _never_ overflow the next field.
-+ DCHECK(NextField::is_valid(Assembler::kMaximalBufferSize));
-+ data_ = NextField::encode(next) | TypeField::encode(type);
-+}
-+
-+
-+// -----------------------------------------------------------------------------
-+// Implementation of RelocInfo
-+
-+
-+const int RelocInfo::kApplyMask =
-+ RelocInfo::kCodeTargetMask | 1 << RelocInfo::RUNTIME_ENTRY |
-+ 1 << RelocInfo::INTERNAL_REFERENCE | 1 << RelocInfo::CODE_AGE_SEQUENCE |
-+ RelocInfo::kDebugBreakSlotMask;
-+
-+
-+bool RelocInfo::IsCodedSpecially() {
-+ // The deserializer needs to know whether a pointer is specially coded. Being
-+ // specially coded on IA32 means that it is a relative address, as used by
-+ // branch instructions. These are also the ones that need changing when a
-+ // code object moves.
-+ return (1 << rmode_) & kApplyMask;
-+}
-+
-+
-+bool RelocInfo::IsInConstantPool() {
-+ return false;
-+}
-+
-+Address RelocInfo::wasm_memory_reference() {
-+ DCHECK(IsWasmMemoryReference(rmode_));
-+ return Memory::Address_at(pc_);
-+}
-+
-+Address RelocInfo::wasm_global_reference() {
-+ DCHECK(IsWasmGlobalReference(rmode_));
-+ return Memory::Address_at(pc_);
-+}
-+
-+uint32_t RelocInfo::wasm_memory_size_reference() {
-+ DCHECK(IsWasmMemorySizeReference(rmode_));
-+ return Memory::uint32_at(pc_);
-+}
-+
-+uint32_t RelocInfo::wasm_function_table_size_reference() {
-+ DCHECK(IsWasmFunctionTableSizeReference(rmode_));
-+ return Memory::uint32_at(pc_);
-+}
-+
-+void RelocInfo::unchecked_update_wasm_memory_reference(
-+ Isolate* isolate, Address address, ICacheFlushMode icache_flush_mode) {
-+ Memory::Address_at(pc_) = address;
-+ if (icache_flush_mode != SKIP_ICACHE_FLUSH) {
-+ Assembler::FlushICache(isolate, pc_, sizeof(Address));
-+ }
-+}
-+
-+void RelocInfo::unchecked_update_wasm_size(Isolate* isolate, uint32_t size,
-+ ICacheFlushMode icache_flush_mode) {
-+ Memory::uint32_at(pc_) = size;
-+ if (icache_flush_mode != SKIP_ICACHE_FLUSH) {
-+ Assembler::FlushICache(isolate, pc_, sizeof(uint32_t));
-+ }
-+}
-+
-+// -----------------------------------------------------------------------------
-+// Implementation of Operand
-+
-+Operand::Operand(Register base, int32_t disp, RelocInfo::Mode rmode) {
-+ // [base + disp/r]
-+ if (disp == 0 && RelocInfo::IsNone(rmode) && !base.is(ebp)) {
-+ // [base]
-+ set_modrm(0, base);
-+ if (base.is(esp)) set_sib(times_1, esp, base);
-+ } else if (is_int8(disp) && RelocInfo::IsNone(rmode)) {
-+ // [base + disp8]
-+ set_modrm(1, base);
-+ if (base.is(esp)) set_sib(times_1, esp, base);
-+ set_disp8(disp);
-+ } else {
-+ // [base + disp/r]
-+ set_modrm(2, base);
-+ if (base.is(esp)) set_sib(times_1, esp, base);
-+ set_dispr(disp, rmode);
-+ }
-+}
-+
-+
-+Operand::Operand(Register base,
-+ Register index,
-+ ScaleFactor scale,
-+ int32_t disp,
-+ RelocInfo::Mode rmode) {
-+ DCHECK(!index.is(esp)); // illegal addressing mode
-+ // [base + index*scale + disp/r]
-+ if (disp == 0 && RelocInfo::IsNone(rmode) && !base.is(ebp)) {
-+ // [base + index*scale]
-+ set_modrm(0, esp);
-+ set_sib(scale, index, base);
-+ } else if (is_int8(disp) && RelocInfo::IsNone(rmode)) {
-+ // [base + index*scale + disp8]
-+ set_modrm(1, esp);
-+ set_sib(scale, index, base);
-+ set_disp8(disp);
-+ } else {
-+ // [base + index*scale + disp/r]
-+ set_modrm(2, esp);
-+ set_sib(scale, index, base);
-+ set_dispr(disp, rmode);
-+ }
-+}
-+
-+
-+Operand::Operand(Register index,
-+ ScaleFactor scale,
-+ int32_t disp,
-+ RelocInfo::Mode rmode) {
-+ DCHECK(!index.is(esp)); // illegal addressing mode
-+ // [index*scale + disp/r]
-+ set_modrm(0, esp);
-+ set_sib(scale, index, ebp);
-+ set_dispr(disp, rmode);
-+}
-+
-+
-+bool Operand::is_reg(Register reg) const {
-+ return ((buf_[0] & 0xF8) == 0xC0) // addressing mode is register only.
-+ && ((buf_[0] & 0x07) == reg.code()); // register codes match.
-+}
-+
-+
-+bool Operand::is_reg_only() const {
-+ return (buf_[0] & 0xF8) == 0xC0; // Addressing mode is register only.
-+}
-+
-+
-+Register Operand::reg() const {
-+ DCHECK(is_reg_only());
-+ return Register::from_code(buf_[0] & 0x07);
-+}
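The three Operand constructors above all follow the same IA-32 ModR/M convention: mod 0 when there is no displacement, mod 1 for an 8-bit displacement, mod 2 for a 32-bit one, with an SIB byte whenever esp is the base. A minimal standalone sketch of that mode choice (a hypothetical helper, not part of the patch; `base_is_ebp` stands in for the `!base.is(ebp)` check):

```cpp
#include <cstdint>

// Pick the ModR/M "mod" field for [base + disp], mirroring the branches
// in Operand::Operand(Register base, int32_t disp, ...) above.
// ebp with mod=0 would mean [disp32], so ebp always gets an explicit
// displacement even when disp == 0.
int ModRMMode(int32_t disp, bool base_is_ebp) {
  if (disp == 0 && !base_is_ebp) return 0;    // [base]
  if (disp >= -128 && disp <= 127) return 1;  // [base + disp8]
  return 2;                                   // [base + disp32]
}
```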
-+
-+void Assembler::AllocateAndInstallRequestedHeapObjects(Isolate* isolate) {
-+ for (auto& request : heap_object_requests_) {
-+ Handle<HeapObject> object;
-+ switch (request.kind()) {
-+ case HeapObjectRequest::kHeapNumber:
-+ object = isolate->factory()->NewHeapNumber(request.heap_number(),
-+ IMMUTABLE, TENURED);
-+ break;
-+ case HeapObjectRequest::kCodeStub:
-+ request.code_stub()->set_isolate(isolate);
-+ object = request.code_stub()->GetCode();
-+ break;
-+ }
-+ Address pc = buffer_ + request.offset();
-+ Memory::Object_Handle_at(pc) = object;
-+ }
-+}
-+
-+
-+// -----------------------------------------------------------------------------
-+// Implementation of Assembler.
-+
-+// Emit a single byte. Must always be inlined.
-+#define EMIT(x) \
-+ *pc_++ = (x)
-+
-+Assembler::Assembler(IsolateData isolate_data, void* buffer, int buffer_size)
-+ : AssemblerBase(isolate_data, buffer, buffer_size) {
-+// Clear the buffer in debug mode unless it was provided by the
-+// caller in which case we can't be sure it's okay to overwrite
-+// existing code in it; see CodePatcher::CodePatcher(...).
-+#ifdef DEBUG
-+ if (own_buffer_) {
-+ memset(buffer_, 0xCC, buffer_size_); // int3
-+ }
-+#endif
-+
-+ reloc_info_writer.Reposition(buffer_ + buffer_size_, pc_);
-+}
-+
-+
-+void Assembler::GetCode(Isolate* isolate, CodeDesc* desc) {
-+ // Finalize code (at this point overflow() may be true, but the gap ensures
-+ // that we are still not overlapping instructions and relocation info).
-+ DCHECK(pc_ <= reloc_info_writer.pos()); // No overlap.
-+
-+ AllocateAndInstallRequestedHeapObjects(isolate);
-+
-+ // Set up code descriptor.
-+ desc->buffer = buffer_;
-+ desc->buffer_size = buffer_size_;
-+ desc->instr_size = pc_offset();
-+ desc->reloc_size = (buffer_ + buffer_size_) - reloc_info_writer.pos();
-+ desc->origin = this;
-+ desc->constant_pool_size = 0;
-+ desc->unwinding_info_size = 0;
-+ desc->unwinding_info = nullptr;
-+}
-+
-+
-+void Assembler::Align(int m) {
-+ DCHECK(base::bits::IsPowerOfTwo(m));
-+ int mask = m - 1;
-+ int addr = pc_offset();
-+ Nop((m - (addr & mask)) & mask);
-+}
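`Align(m)` pads with NOPs up to the next m-byte boundary; the count is `(m - (addr & mask)) & mask`, which is zero when the offset is already aligned. The arithmetic in isolation (illustrative helper, not from the patch):

```cpp
// Number of padding bytes Assembler::Align(m) inserts at offset addr,
// for m a power of two.
int AlignPadding(int addr, int m) {
  int mask = m - 1;
  return (m - (addr & mask)) & mask;
}
```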
-+
-+
-+bool Assembler::IsNop(Address addr) {
-+ Address a = addr;
-+ while (*a == 0x66) a++;
-+ if (*a == 0x90) return true;
-+ if (a[0] == 0xf && a[1] == 0x1f) return true;
-+ return false;
-+}
-+
-+
-+void Assembler::Nop(int bytes) {
-+ EnsureSpace ensure_space(this);
-+
-+ // Older CPUs that do not support SSE2 may not support multibyte NOP
-+ // instructions.
-+ for (; bytes > 0; bytes--) {
-+ EMIT(0x90);
-+ }
-+ return;
-+}
-+
-+
-+void Assembler::CodeTargetAlign() {
-+ Align(16); // Preferred alignment of jump targets on ia32.
-+}
-+
-+
-+void Assembler::cpuid() {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0x0F);
-+ EMIT(0xA2);
-+}
-+
-+
-+void Assembler::pushad() {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0x60);
-+}
-+
-+
-+void Assembler::popad() {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0x61);
-+}
-+
-+
-+void Assembler::pushfd() {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0x9C);
-+}
-+
-+
-+void Assembler::popfd() {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0x9D);
-+}
-+
-+
-+void Assembler::push(const Immediate& x) {
-+ EnsureSpace ensure_space(this);
-+ if (x.is_int8()) {
-+ EMIT(0x6a);
-+ EMIT(x.immediate());
-+ } else {
-+ EMIT(0x68);
-+ emit(x);
-+ }
-+}
-+
-+
-+void Assembler::push_imm32(int32_t imm32) {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0x68);
-+ emit(imm32);
-+}
-+
-+
-+void Assembler::push(Register src) {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0x50 | src.code());
-+}
-+
-+
-+void Assembler::push(const Operand& src) {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0xFF);
-+ emit_operand(esi, src);
-+}
-+
-+
-+void Assembler::pop(Register dst) {
-+ DCHECK(reloc_info_writer.last_pc() != NULL);
-+ EnsureSpace ensure_space(this);
-+ EMIT(0x58 | dst.code());
-+}
-+
-+
-+void Assembler::pop(const Operand& dst) {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0x8F);
-+ emit_operand(eax, dst);
-+}
-+
-+
-+void Assembler::enter(const Immediate& size) {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0xC8);
-+ emit_w(size);
-+ EMIT(0);
-+}
-+
-+
-+void Assembler::leave() {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0xC9);
-+}
-+
-+
-+void Assembler::mov_b(Register dst, const Operand& src) {
-+ CHECK(dst.is_byte_register());
-+ EnsureSpace ensure_space(this);
-+ EMIT(0x8A);
-+ emit_operand(dst, src);
-+}
-+
-+
-+void Assembler::mov_b(const Operand& dst, const Immediate& src) {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0xC6);
-+ emit_operand(eax, dst);
-+ EMIT(static_cast<int8_t>(src.immediate()));
-+}
-+
-+
-+void Assembler::mov_b(const Operand& dst, int8_t imm8) {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0xC6);
-+ emit_operand(eax, dst);
-+ EMIT(imm8);
-+}
-+
-+
-+void Assembler::mov_b(const Operand& dst, Register src) {
-+ CHECK(src.is_byte_register());
-+ EnsureSpace ensure_space(this);
-+ EMIT(0x88);
-+ emit_operand(src, dst);
-+}
-+
-+
-+void Assembler::mov_w(Register dst, const Operand& src) {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0x66);
-+ EMIT(0x8B);
-+ emit_operand(dst, src);
-+}
-+
-+
-+void Assembler::mov_w(const Operand& dst, Register src) {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0x66);
-+ EMIT(0x89);
-+ emit_operand(src, dst);
-+}
-+
-+
-+void Assembler::mov_w(const Operand& dst, int16_t imm16) {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0x66);
-+ EMIT(0xC7);
-+ emit_operand(eax, dst);
-+ EMIT(static_cast<int8_t>(imm16 & 0xff));
-+ EMIT(static_cast<int8_t>(imm16 >> 8));
-+}
-+
-+
-+void Assembler::mov_w(const Operand& dst, const Immediate& src) {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0x66);
-+ EMIT(0xC7);
-+ emit_operand(eax, dst);
-+ EMIT(static_cast<int8_t>(src.immediate() & 0xff));
-+ EMIT(static_cast<int8_t>(src.immediate() >> 8));
-+}
-+
-+
-+void Assembler::mov(Register dst, int32_t imm32) {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0xB8 | dst.code());
-+ emit(imm32);
-+}
-+
-+
-+void Assembler::mov(Register dst, const Immediate& x) {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0xB8 | dst.code());
-+ emit(x);
-+}
-+
-+
-+void Assembler::mov(Register dst, Handle<HeapObject> handle) {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0xB8 | dst.code());
-+ emit(handle);
-+}
-+
-+
-+void Assembler::mov(Register dst, const Operand& src) {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0x8B);
-+ emit_operand(dst, src);
-+}
-+
-+
-+void Assembler::mov(Register dst, Register src) {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0x89);
-+ EMIT(0xC0 | src.code() << 3 | dst.code());
-+}
-+
-+
-+void Assembler::mov(const Operand& dst, const Immediate& x) {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0xC7);
-+ emit_operand(eax, dst);
-+ emit(x);
-+}
-+
-+
-+void Assembler::mov(const Operand& dst, Handle<HeapObject> handle) {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0xC7);
-+ emit_operand(eax, dst);
-+ emit(handle);
-+}
-+
-+
-+void Assembler::mov(const Operand& dst, Register src) {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0x89);
-+ emit_operand(src, dst);
-+}
-+
-+
-+void Assembler::movsx_b(Register dst, const Operand& src) {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0x0F);
-+ EMIT(0xBE);
-+ emit_operand(dst, src);
-+}
-+
-+
-+void Assembler::movsx_w(Register dst, const Operand& src) {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0x0F);
-+ EMIT(0xBF);
-+ emit_operand(dst, src);
-+}
-+
-+
-+void Assembler::movzx_b(Register dst, const Operand& src) {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0x0F);
-+ EMIT(0xB6);
-+ emit_operand(dst, src);
-+}
-+
-+
-+void Assembler::movzx_w(Register dst, const Operand& src) {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0x0F);
-+ EMIT(0xB7);
-+ emit_operand(dst, src);
-+}
-+
-+
-+void Assembler::cld() {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0xFC);
-+}
-+
-+
-+void Assembler::rep_movs() {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0xF3);
-+ EMIT(0xA5);
-+}
-+
-+
-+void Assembler::rep_stos() {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0xF3);
-+ EMIT(0xAB);
-+}
-+
-+
-+void Assembler::stos() {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0xAB);
-+}
-+
-+
-+void Assembler::xchg(Register dst, Register src) {
-+ EnsureSpace ensure_space(this);
-+ if (src.is(eax) || dst.is(eax)) { // Single-byte encoding.
-+ EMIT(0x90 | (src.is(eax) ? dst.code() : src.code()));
-+ } else {
-+ EMIT(0x87);
-+ EMIT(0xC0 | src.code() << 3 | dst.code());
-+ }
-+}
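`xchg` with eax gets the one-byte `0x90 | reg` form (which is also why 0x90, i.e. `xchg eax, eax`, is the canonical NOP); any other register pair takes the two-byte `0x87 /r` form. A sketch of the byte selection, assuming the usual IA-32 register codes (eax=0, ecx=1, edx=2, ebx=3):

```cpp
#include <cstdint>
#include <vector>

// Encode xchg dst, src the way Assembler::xchg above does,
// with dst and src given as IA-32 register codes.
std::vector<uint8_t> EncodeXchg(int dst, int src) {
  if (src == 0 || dst == 0) {  // one operand is eax: single-byte form
    return { static_cast<uint8_t>(0x90 | (src == 0 ? dst : src)) };
  }
  return { 0x87, static_cast<uint8_t>(0xC0 | src << 3 | dst) };
}
```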
-+
-+
-+void Assembler::xchg(Register dst, const Operand& src) {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0x87);
-+ emit_operand(dst, src);
-+}
-+
-+void Assembler::xchg_b(Register reg, const Operand& op) {
-+ DCHECK(reg.is_byte_register());
-+ EnsureSpace ensure_space(this);
-+ EMIT(0x86);
-+ emit_operand(reg, op);
-+}
-+
-+void Assembler::xchg_w(Register reg, const Operand& op) {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0x66);
-+ EMIT(0x87);
-+ emit_operand(reg, op);
-+}
-+
-+void Assembler::lock() {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0xF0);
-+}
-+
-+void Assembler::cmpxchg(const Operand& dst, Register src) {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0x0F);
-+ EMIT(0xB1);
-+ emit_operand(src, dst);
-+}
-+
-+void Assembler::cmpxchg_b(const Operand& dst, Register src) {
-+ DCHECK(src.is_byte_register());
-+ EnsureSpace ensure_space(this);
-+ EMIT(0x0F);
-+ EMIT(0xB0);
-+ emit_operand(src, dst);
-+}
-+
-+void Assembler::cmpxchg_w(const Operand& dst, Register src) {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0x66);
-+ EMIT(0x0F);
-+ EMIT(0xB1);
-+ emit_operand(src, dst);
-+}
-+
-+void Assembler::adc(Register dst, int32_t imm32) {
-+ EnsureSpace ensure_space(this);
-+ emit_arith(2, Operand(dst), Immediate(imm32));
-+}
-+
-+
-+void Assembler::adc(Register dst, const Operand& src) {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0x13);
-+ emit_operand(dst, src);
-+}
-+
-+
-+void Assembler::add(Register dst, const Operand& src) {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0x03);
-+ emit_operand(dst, src);
-+}
-+
-+
-+void Assembler::add(const Operand& dst, Register src) {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0x01);
-+ emit_operand(src, dst);
-+}
-+
-+
-+void Assembler::add(const Operand& dst, const Immediate& x) {
-+ DCHECK(reloc_info_writer.last_pc() != NULL);
-+ EnsureSpace ensure_space(this);
-+ emit_arith(0, dst, x);
-+}
-+
-+
-+void Assembler::and_(Register dst, int32_t imm32) {
-+ and_(dst, Immediate(imm32));
-+}
-+
-+
-+void Assembler::and_(Register dst, const Immediate& x) {
-+ EnsureSpace ensure_space(this);
-+ emit_arith(4, Operand(dst), x);
-+}
-+
-+
-+void Assembler::and_(Register dst, const Operand& src) {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0x23);
-+ emit_operand(dst, src);
-+}
-+
-+
-+void Assembler::and_(const Operand& dst, const Immediate& x) {
-+ EnsureSpace ensure_space(this);
-+ emit_arith(4, dst, x);
-+}
-+
-+
-+void Assembler::and_(const Operand& dst, Register src) {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0x21);
-+ emit_operand(src, dst);
-+}
-+
-+void Assembler::cmpb(const Operand& op, Immediate imm8) {
-+ DCHECK(imm8.is_int8() || imm8.is_uint8());
-+ EnsureSpace ensure_space(this);
-+ if (op.is_reg(eax)) {
-+ EMIT(0x3C);
-+ } else {
-+ EMIT(0x80);
-+ emit_operand(edi, op); // edi == 7
-+ }
-+ emit_b(imm8);
-+}
-+
-+
-+void Assembler::cmpb(const Operand& op, Register reg) {
-+ CHECK(reg.is_byte_register());
-+ EnsureSpace ensure_space(this);
-+ EMIT(0x38);
-+ emit_operand(reg, op);
-+}
-+
-+
-+void Assembler::cmpb(Register reg, const Operand& op) {
-+ CHECK(reg.is_byte_register());
-+ EnsureSpace ensure_space(this);
-+ EMIT(0x3A);
-+ emit_operand(reg, op);
-+}
-+
-+
-+void Assembler::cmpw(const Operand& op, Immediate imm16) {
-+ DCHECK(imm16.is_int16());
-+ EnsureSpace ensure_space(this);
-+ EMIT(0x66);
-+ EMIT(0x81);
-+ emit_operand(edi, op);
-+ emit_w(imm16);
-+}
-+
-+void Assembler::cmpw(Register reg, const Operand& op) {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0x66);
-+ EMIT(0x3B);
-+ emit_operand(reg, op);
-+}
-+
-+void Assembler::cmpw(const Operand& op, Register reg) {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0x66);
-+ EMIT(0x39);
-+ emit_operand(reg, op);
-+}
-+
-+void Assembler::cmp(Register reg, int32_t imm32) {
-+ EnsureSpace ensure_space(this);
-+ emit_arith(7, Operand(reg), Immediate(imm32));
-+}
-+
-+
-+void Assembler::cmp(Register reg, Handle<HeapObject> handle) {
-+ EnsureSpace ensure_space(this);
-+ emit_arith(7, Operand(reg), Immediate(handle));
-+}
-+
-+
-+void Assembler::cmp(Register reg, const Operand& op) {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0x3B);
-+ emit_operand(reg, op);
-+}
-+
-+void Assembler::cmp(const Operand& op, Register reg) {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0x39);
-+ emit_operand(reg, op);
-+}
-+
-+void Assembler::cmp(const Operand& op, const Immediate& imm) {
-+ EnsureSpace ensure_space(this);
-+ emit_arith(7, op, imm);
-+}
-+
-+
-+void Assembler::cmp(const Operand& op, Handle<HeapObject> handle) {
-+ EnsureSpace ensure_space(this);
-+ emit_arith(7, op, Immediate(handle));
-+}
-+
-+
-+void Assembler::cmpb_al(const Operand& op) {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0x38); // CMP r/m8, r8
-+ emit_operand(eax, op); // eax has same code as register al.
-+}
-+
-+
-+void Assembler::cmpw_ax(const Operand& op) {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0x66);
-+ EMIT(0x39); // CMP r/m16, r16
-+ emit_operand(eax, op); // eax has same code as register ax.
-+}
-+
-+
-+void Assembler::dec_b(Register dst) {
-+ CHECK(dst.is_byte_register());
-+ EnsureSpace ensure_space(this);
-+ EMIT(0xFE);
-+ EMIT(0xC8 | dst.code());
-+}
-+
-+
-+void Assembler::dec_b(const Operand& dst) {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0xFE);
-+ emit_operand(ecx, dst);
-+}
-+
-+
-+void Assembler::dec(Register dst) {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0x48 | dst.code());
-+}
-+
-+
-+void Assembler::dec(const Operand& dst) {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0xFF);
-+ emit_operand(ecx, dst);
-+}
-+
-+
-+void Assembler::cdq() {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0x99);
-+}
-+
-+
-+void Assembler::idiv(const Operand& src) {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0xF7);
-+ emit_operand(edi, src);
-+}
-+
-+
-+void Assembler::div(const Operand& src) {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0xF7);
-+ emit_operand(esi, src);
-+}
-+
-+
-+void Assembler::imul(Register reg) {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0xF7);
-+ EMIT(0xE8 | reg.code());
-+}
-+
-+
-+void Assembler::imul(Register dst, const Operand& src) {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0x0F);
-+ EMIT(0xAF);
-+ emit_operand(dst, src);
-+}
-+
-+
-+void Assembler::imul(Register dst, Register src, int32_t imm32) {
-+ imul(dst, Operand(src), imm32);
-+}
-+
-+
-+void Assembler::imul(Register dst, const Operand& src, int32_t imm32) {
-+ EnsureSpace ensure_space(this);
-+ if (is_int8(imm32)) {
-+ EMIT(0x6B);
-+ emit_operand(dst, src);
-+ EMIT(imm32);
-+ } else {
-+ EMIT(0x69);
-+ emit_operand(dst, src);
-+ emit(imm32);
-+ }
-+}
-+
-+
-+void Assembler::inc(Register dst) {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0x40 | dst.code());
-+}
-+
-+
-+void Assembler::inc(const Operand& dst) {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0xFF);
-+ emit_operand(eax, dst);
-+}
-+
-+
-+void Assembler::lea(Register dst, const Operand& src) {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0x8D);
-+ emit_operand(dst, src);
-+}
-+
-+
-+void Assembler::mul(Register src) {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0xF7);
-+ EMIT(0xE0 | src.code());
-+}
-+
-+
-+void Assembler::neg(Register dst) {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0xF7);
-+ EMIT(0xD8 | dst.code());
-+}
-+
-+
-+void Assembler::neg(const Operand& dst) {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0xF7);
-+ emit_operand(ebx, dst);
-+}
-+
-+
-+void Assembler::not_(Register dst) {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0xF7);
-+ EMIT(0xD0 | dst.code());
-+}
-+
-+
-+void Assembler::not_(const Operand& dst) {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0xF7);
-+ emit_operand(edx, dst);
-+}
-+
-+
-+void Assembler::or_(Register dst, int32_t imm32) {
-+ EnsureSpace ensure_space(this);
-+ emit_arith(1, Operand(dst), Immediate(imm32));
-+}
-+
-+
-+void Assembler::or_(Register dst, const Operand& src) {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0x0B);
-+ emit_operand(dst, src);
-+}
-+
-+
-+void Assembler::or_(const Operand& dst, const Immediate& x) {
-+ EnsureSpace ensure_space(this);
-+ emit_arith(1, dst, x);
-+}
-+
-+
-+void Assembler::or_(const Operand& dst, Register src) {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0x09);
-+ emit_operand(src, dst);
-+}
-+
-+
-+void Assembler::rcl(Register dst, uint8_t imm8) {
-+ EnsureSpace ensure_space(this);
-+ DCHECK(is_uint5(imm8)); // illegal shift count
-+ if (imm8 == 1) {
-+ EMIT(0xD1);
-+ EMIT(0xD0 | dst.code());
-+ } else {
-+ EMIT(0xC1);
-+ EMIT(0xD0 | dst.code());
-+ EMIT(imm8);
-+ }
-+}
-+
-+
-+void Assembler::rcr(Register dst, uint8_t imm8) {
-+ EnsureSpace ensure_space(this);
-+ DCHECK(is_uint5(imm8)); // illegal shift count
-+ if (imm8 == 1) {
-+ EMIT(0xD1);
-+ EMIT(0xD8 | dst.code());
-+ } else {
-+ EMIT(0xC1);
-+ EMIT(0xD8 | dst.code());
-+ EMIT(imm8);
-+ }
-+}
-+
-+
-+void Assembler::ror(const Operand& dst, uint8_t imm8) {
-+ EnsureSpace ensure_space(this);
-+ DCHECK(is_uint5(imm8)); // illegal shift count
-+ if (imm8 == 1) {
-+ EMIT(0xD1);
-+ emit_operand(ecx, dst);
-+ } else {
-+ EMIT(0xC1);
-+ emit_operand(ecx, dst);
-+ EMIT(imm8);
-+ }
-+}
-+
-+
-+void Assembler::ror_cl(const Operand& dst) {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0xD3);
-+ emit_operand(ecx, dst);
-+}
-+
-+
-+void Assembler::sar(const Operand& dst, uint8_t imm8) {
-+ EnsureSpace ensure_space(this);
-+ DCHECK(is_uint5(imm8)); // illegal shift count
-+ if (imm8 == 1) {
-+ EMIT(0xD1);
-+ emit_operand(edi, dst);
-+ } else {
-+ EMIT(0xC1);
-+ emit_operand(edi, dst);
-+ EMIT(imm8);
-+ }
-+}
-+
-+
-+void Assembler::sar_cl(const Operand& dst) {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0xD3);
-+ emit_operand(edi, dst);
-+}
-+
-+void Assembler::sbb(Register dst, const Operand& src) {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0x1B);
-+ emit_operand(dst, src);
-+}
-+
-+void Assembler::shld(Register dst, Register src, uint8_t shift) {
-+ DCHECK(is_uint5(shift));
-+ EnsureSpace ensure_space(this);
-+ EMIT(0x0F);
-+ EMIT(0xA4);
-+ emit_operand(src, Operand(dst));
-+ EMIT(shift);
-+}
-+
-+void Assembler::shld_cl(Register dst, Register src) {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0x0F);
-+ EMIT(0xA5);
-+ emit_operand(src, Operand(dst));
-+}
-+
-+
-+void Assembler::shl(const Operand& dst, uint8_t imm8) {
-+ EnsureSpace ensure_space(this);
-+ DCHECK(is_uint5(imm8)); // illegal shift count
-+ if (imm8 == 1) {
-+ EMIT(0xD1);
-+ emit_operand(esp, dst);
-+ } else {
-+ EMIT(0xC1);
-+ emit_operand(esp, dst);
-+ EMIT(imm8);
-+ }
-+}
-+
-+
-+void Assembler::shl_cl(const Operand& dst) {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0xD3);
-+ emit_operand(esp, dst);
-+}
-+
-+void Assembler::shr(const Operand& dst, uint8_t imm8) {
-+ EnsureSpace ensure_space(this);
-+ DCHECK(is_uint5(imm8)); // illegal shift count
-+ if (imm8 == 1) {
-+ EMIT(0xD1);
-+ emit_operand(ebp, dst);
-+ } else {
-+ EMIT(0xC1);
-+ emit_operand(ebp, dst);
-+ EMIT(imm8);
-+ }
-+}
-+
-+
-+void Assembler::shr_cl(const Operand& dst) {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0xD3);
-+ emit_operand(ebp, dst);
-+}
-+
-+void Assembler::shrd(Register dst, Register src, uint8_t shift) {
-+ DCHECK(is_uint5(shift));
-+ EnsureSpace ensure_space(this);
-+ EMIT(0x0F);
-+ EMIT(0xAC);
-+ emit_operand(dst, Operand(src));
-+ EMIT(shift);
-+}
-+
-+void Assembler::shrd_cl(const Operand& dst, Register src) {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0x0F);
-+ EMIT(0xAD);
-+ emit_operand(src, dst);
-+}
-+
-+void Assembler::sub(const Operand& dst, const Immediate& x) {
-+ EnsureSpace ensure_space(this);
-+ emit_arith(5, dst, x);
-+}
-+
-+
-+void Assembler::sub(Register dst, const Operand& src) {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0x2B);
-+ emit_operand(dst, src);
-+}
-+
-+
-+void Assembler::sub(const Operand& dst, Register src) {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0x29);
-+ emit_operand(src, dst);
-+}
-+
-+
-+void Assembler::test(Register reg, const Immediate& imm) {
-+ if (imm.is_uint8()) {
-+ test_b(reg, imm);
-+ return;
-+ }
-+
-+ EnsureSpace ensure_space(this);
-+ // This is not using emit_arith because test doesn't support
-+ // sign-extension of 8-bit operands.
-+ if (reg.is(eax)) {
-+ EMIT(0xA9);
-+ } else {
-+ EMIT(0xF7);
-+ EMIT(0xC0 | reg.code());
-+ }
-+ emit(imm);
-+}
-+
-+
-+void Assembler::test(Register reg, const Operand& op) {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0x85);
-+ emit_operand(reg, op);
-+}
-+
-+
-+void Assembler::test_b(Register reg, const Operand& op) {
-+ CHECK(reg.is_byte_register());
-+ EnsureSpace ensure_space(this);
-+ EMIT(0x84);
-+ emit_operand(reg, op);
-+}
-+
-+
-+void Assembler::test(const Operand& op, const Immediate& imm) {
-+ if (op.is_reg_only()) {
-+ test(op.reg(), imm);
-+ return;
-+ }
-+ if (imm.is_uint8()) {
-+ return test_b(op, imm);
-+ }
-+ EnsureSpace ensure_space(this);
-+ EMIT(0xF7);
-+ emit_operand(eax, op);
-+ emit(imm);
-+}
-+
-+void Assembler::test_b(Register reg, Immediate imm8) {
-+ DCHECK(imm8.is_uint8());
-+ EnsureSpace ensure_space(this);
-+ // Only use test against byte for registers that have a byte
-+ // variant: eax, ebx, ecx, and edx.
-+ if (reg.is(eax)) {
-+ EMIT(0xA8);
-+ emit_b(imm8);
-+ } else if (reg.is_byte_register()) {
-+ emit_arith_b(0xF6, 0xC0, reg, static_cast<uint8_t>(imm8.immediate()));
-+ } else {
-+ EMIT(0x66);
-+ EMIT(0xF7);
-+ EMIT(0xC0 | reg.code());
-+ emit_w(imm8);
-+ }
-+}
-+
-+void Assembler::test_b(const Operand& op, Immediate imm8) {
-+ if (op.is_reg_only()) {
-+ test_b(op.reg(), imm8);
-+ return;
-+ }
-+ EnsureSpace ensure_space(this);
-+ EMIT(0xF6);
-+ emit_operand(eax, op);
-+ emit_b(imm8);
-+}
-+
-+void Assembler::test_w(Register reg, Immediate imm16) {
-+ DCHECK(imm16.is_int16() || imm16.is_uint16());
-+ EnsureSpace ensure_space(this);
-+ if (reg.is(eax)) {
-+ EMIT(0xA9);
-+ emit_w(imm16);
-+ } else {
-+ EMIT(0x66);
-+ EMIT(0xF7);
-+ EMIT(0xc0 | reg.code());
-+ emit_w(imm16);
-+ }
-+}
-+
-+void Assembler::test_w(Register reg, const Operand& op) {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0x66);
-+ EMIT(0x85);
-+ emit_operand(reg, op);
-+}
-+
-+void Assembler::test_w(const Operand& op, Immediate imm16) {
-+ DCHECK(imm16.is_int16() || imm16.is_uint16());
-+ if (op.is_reg_only()) {
-+ test_w(op.reg(), imm16);
-+ return;
-+ }
-+ EnsureSpace ensure_space(this);
-+ EMIT(0x66);
-+ EMIT(0xF7);
-+ emit_operand(eax, op);
-+ emit_w(imm16);
-+}
-+
-+void Assembler::xor_(Register dst, int32_t imm32) {
-+ EnsureSpace ensure_space(this);
-+ emit_arith(6, Operand(dst), Immediate(imm32));
-+}
-+
-+
-+void Assembler::xor_(Register dst, const Operand& src) {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0x33);
-+ emit_operand(dst, src);
-+}
-+
-+
-+void Assembler::xor_(const Operand& dst, Register src) {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0x31);
-+ emit_operand(src, dst);
-+}
-+
-+
-+void Assembler::xor_(const Operand& dst, const Immediate& x) {
-+ EnsureSpace ensure_space(this);
-+ emit_arith(6, dst, x);
-+}
-+
-+
-+void Assembler::bt(const Operand& dst, Register src) {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0x0F);
-+ EMIT(0xA3);
-+ emit_operand(src, dst);
-+}
-+
-+
-+void Assembler::bts(const Operand& dst, Register src) {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0x0F);
-+ EMIT(0xAB);
-+ emit_operand(src, dst);
-+}
-+
-+
-+void Assembler::bsr(Register dst, const Operand& src) {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0x0F);
-+ EMIT(0xBD);
-+ emit_operand(dst, src);
-+}
-+
-+
-+void Assembler::bsf(Register dst, const Operand& src) {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0x0F);
-+ EMIT(0xBC);
-+ emit_operand(dst, src);
-+}
-+
-+
-+void Assembler::hlt() {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0xF4);
-+}
-+
-+
-+void Assembler::int3() {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0xCC);
-+}
-+
-+
-+void Assembler::nop() {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0x90);
-+}
-+
-+
-+void Assembler::ret(int imm16) {
-+ EnsureSpace ensure_space(this);
-+ DCHECK(is_uint16(imm16));
-+ if (imm16 == 0) {
-+ EMIT(0xC3);
-+ } else {
-+ EMIT(0xC2);
-+ EMIT(imm16 & 0xFF);
-+ EMIT((imm16 >> 8) & 0xFF);
-+ }
-+}
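`ret` with a nonzero pop count emits 0xC2 followed by the 16-bit immediate in little-endian byte order; the plain 0xC3 form covers the common zero case. The byte split can be sketched as:

```cpp
#include <cstdint>
#include <vector>

// Byte sequence for `ret imm16`, matching Assembler::ret above.
std::vector<uint8_t> EncodeRet(int imm16) {
  if (imm16 == 0) return { 0xC3 };
  return { 0xC2,
           static_cast<uint8_t>(imm16 & 0xFF),           // low byte first
           static_cast<uint8_t>((imm16 >> 8) & 0xFF) };  // then high byte
}
```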
-+
-+
-+void Assembler::ud2() {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0x0F);
-+ EMIT(0x0B);
-+}
-+
-+
-+// Labels refer to positions in the (to be) generated code.
-+// There are bound, linked, and unused labels.
-+//
-+// Bound labels refer to known positions in the already
-+// generated code. pos() is the position the label refers to.
-+//
-+// Linked labels refer to unknown positions in the code
-+// to be generated; pos() is the position of the 32bit
-+// Displacement of the last instruction using the label.
-+
-+
-+void Assembler::print(Label* L) {
-+ if (L->is_unused()) {
-+ PrintF("unused label\n");
-+ } else if (L->is_bound()) {
-+ PrintF("bound label to %d\n", L->pos());
-+ } else if (L->is_linked()) {
-+ Label l = *L;
-+ PrintF("unbound label");
-+ while (l.is_linked()) {
-+ Displacement disp = disp_at(&l);
-+ PrintF("@ %d ", l.pos());
-+ disp.print();
-+ PrintF("\n");
-+ disp.next(&l);
-+ }
-+ } else {
-+ PrintF("label in inconsistent state (pos = %d)\n", L->pos_);
-+ }
-+}
-+
-+
-+void Assembler::bind_to(Label* L, int pos) {
-+ EnsureSpace ensure_space(this);
-+ DCHECK(0 <= pos && pos <= pc_offset()); // must have a valid binding
-+ // position
-+ while (L->is_linked()) {
-+ Displacement disp = disp_at(L);
-+ int fixup_pos = L->pos();
-+ if (disp.type() == Displacement::CODE_ABSOLUTE) {
-+ long_at_put(fixup_pos, reinterpret_cast<int>(buffer_ + pos));
-+ internal_reference_positions_.push_back(fixup_pos);
-+ } else if (disp.type() == Displacement::CODE_RELATIVE) {
-+ // Relative to Code* heap object pointer.
-+ long_at_put(fixup_pos, pos + Code::kHeaderSize - kHeapObjectTag);
-+ } else {
-+ if (disp.type() == Displacement::UNCONDITIONAL_JUMP) {
-+ DCHECK(byte_at(fixup_pos - 1) == 0xE9); // jmp expected
-+ }
-+ // Relative address, relative to point after address.
-+ int imm32 = pos - (fixup_pos + sizeof(int32_t));
-+ long_at_put(fixup_pos, imm32);
-+ }
-+ disp.next(L);
-+ }
-+ while (L->is_near_linked()) {
-+ int fixup_pos = L->near_link_pos();
-+ int offset_to_next =
-+ static_cast<int>(*reinterpret_cast<int8_t*>(addr_at(fixup_pos)));
-+ DCHECK(offset_to_next <= 0);
-+ // Relative address, relative to point after address.
-+ int disp = pos - fixup_pos - sizeof(int8_t);
-+ CHECK(0 <= disp && disp <= 127);
-+ set_byte_at(fixup_pos, disp);
-+ if (offset_to_next < 0) {
-+ L->link_to(fixup_pos + offset_to_next, Label::kNear);
-+ } else {
-+ L->UnuseNear();
-+ }
-+ }
-+ L->bind_to(pos);
-+}
-+
-+
-+void Assembler::bind(Label* L) {
-+ EnsureSpace ensure_space(this);
-+ DCHECK(!L->is_bound()); // label can only be bound once
-+ bind_to(L, pc_offset());
-+}
-+
-+
-+void Assembler::call(Label* L) {
-+ EnsureSpace ensure_space(this);
-+ if (L->is_bound()) {
-+ const int long_size = 5;
-+ int offs = L->pos() - pc_offset();
-+ DCHECK(offs <= 0);
-+ // 1110 1000 #32-bit disp.
-+ EMIT(0xE8);
-+ emit(offs - long_size);
-+ } else {
-+ // 1110 1000 #32-bit disp.
-+ EMIT(0xE8);
-+ emit_disp(L, Displacement::OTHER);
-+ }
-+}
-+
-+
-+void Assembler::call(byte* entry, RelocInfo::Mode rmode) {
-+ EnsureSpace ensure_space(this);
-+ DCHECK(!RelocInfo::IsCodeTarget(rmode));
-+ EMIT(0xE8);
-+ if (RelocInfo::IsRuntimeEntry(rmode)) {
-+ emit(reinterpret_cast<uint32_t>(entry), rmode);
-+ } else {
-+ emit(entry - (pc_ + sizeof(int32_t)), rmode);
-+ }
-+}
-+
-+
-+int Assembler::CallSize(const Operand& adr) {
-+ // Call size is 1 (opcode) + adr.len_ (operand).
-+ return 1 + adr.len_;
-+}
-+
-+
-+void Assembler::call(const Operand& adr) {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0xFF);
-+ emit_operand(edx, adr);
-+}
-+
-+
-+int Assembler::CallSize(Handle<Code> code, RelocInfo::Mode rmode) {
-+ return 1 /* EMIT */ + sizeof(uint32_t) /* emit */;
-+}
-+
-+
-+void Assembler::call(Handle<Code> code, RelocInfo::Mode rmode) {
-+ EnsureSpace ensure_space(this);
-+ DCHECK(RelocInfo::IsCodeTarget(rmode)
-+ || rmode == RelocInfo::CODE_AGE_SEQUENCE);
-+ EMIT(0xE8);
-+ emit(code, rmode);
-+}
-+
-+void Assembler::call(CodeStub* stub) {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0xE8);
-+ emit(Immediate::EmbeddedCode(stub));
-+}
-+
-+void Assembler::jmp(Label* L, Label::Distance distance) {
-+ EnsureSpace ensure_space(this);
-+ if (L->is_bound()) {
-+ const int short_size = 2;
-+ const int long_size = 5;
-+ int offs = L->pos() - pc_offset();
-+ DCHECK(offs <= 0);
-+ if (is_int8(offs - short_size)) {
-+ // 1110 1011 #8-bit disp.
-+ EMIT(0xEB);
-+ EMIT((offs - short_size) & 0xFF);
-+ } else {
-+ // 1110 1001 #32-bit disp.
-+ EMIT(0xE9);
-+ emit(offs - long_size);
-+ }
-+ } else if (distance == Label::kNear) {
-+ EMIT(0xEB);
-+ emit_near_disp(L);
-+ } else {
-+ // 1110 1001 #32-bit disp.
-+ EMIT(0xE9);
-+ emit_disp(L, Displacement::UNCONDITIONAL_JUMP);
-+ }
-+}
-+
-+
-+void Assembler::jmp(byte* entry, RelocInfo::Mode rmode) {
-+ EnsureSpace ensure_space(this);
-+ DCHECK(!RelocInfo::IsCodeTarget(rmode));
-+ EMIT(0xE9);
-+ if (RelocInfo::IsRuntimeEntry(rmode)) {
-+ emit(reinterpret_cast<uint32_t>(entry), rmode);
-+ } else {
-+ emit(entry - (pc_ + sizeof(int32_t)), rmode);
-+ }
-+}
-+
-+
-+void Assembler::jmp(const Operand& adr) {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0xFF);
-+ emit_operand(esp, adr);
-+}
-+
-+
-+void Assembler::jmp(Handle<Code> code, RelocInfo::Mode rmode) {
-+ EnsureSpace ensure_space(this);
-+ DCHECK(RelocInfo::IsCodeTarget(rmode));
-+ EMIT(0xE9);
-+ emit(code, rmode);
-+}
-+
-+
-+void Assembler::j(Condition cc, Label* L, Label::Distance distance) {
-+ EnsureSpace ensure_space(this);
-+ DCHECK(0 <= cc && static_cast<int>(cc) < 16);
-+ if (L->is_bound()) {
-+ const int short_size = 2;
-+ const int long_size = 6;
-+ int offs = L->pos() - pc_offset();
-+ DCHECK(offs <= 0);
-+ if (is_int8(offs - short_size)) {
-+ // 0111 tttn #8-bit disp
-+ EMIT(0x70 | cc);
-+ EMIT((offs - short_size) & 0xFF);
-+ } else {
-+ // 0000 1111 1000 tttn #32-bit disp
-+ EMIT(0x0F);
-+ EMIT(0x80 | cc);
-+ emit(offs - long_size);
-+ }
-+ } else if (distance == Label::kNear) {
-+ EMIT(0x70 | cc);
-+ emit_near_disp(L);
-+ } else {
-+ // 0000 1111 1000 tttn #32-bit disp
-+ // Note: could eliminate cond. jumps to this jump if condition
-+ // is the same however, seems to be rather unlikely case.
-+ EMIT(0x0F);
-+ EMIT(0x80 | cc);
-+ emit_disp(L, Displacement::OTHER);
-+ }
-+}
-+
-+
-+void Assembler::j(Condition cc, byte* entry, RelocInfo::Mode rmode) {
-+ EnsureSpace ensure_space(this);
-+ DCHECK((0 <= cc) && (static_cast<int>(cc) < 16));
-+ // 0000 1111 1000 tttn #32-bit disp.
-+ EMIT(0x0F);
-+ EMIT(0x80 | cc);
-+ if (RelocInfo::IsRuntimeEntry(rmode)) {
-+ emit(reinterpret_cast<uint32_t>(entry), rmode);
-+ } else {
-+ emit(entry - (pc_ + sizeof(int32_t)), rmode);
-+ }
-+}
-+
-+
-+void Assembler::j(Condition cc, Handle<Code> code, RelocInfo::Mode rmode) {
-+ EnsureSpace ensure_space(this);
-+ // 0000 1111 1000 tttn #32-bit disp
-+ EMIT(0x0F);
-+ EMIT(0x80 | cc);
-+ emit(code, rmode);
-+}
-+
-+
-+// FPU instructions.
-+
-+void Assembler::fld(int i) {
-+ EnsureSpace ensure_space(this);
-+ emit_farith(0xD9, 0xC0, i);
-+}
-+
-+
-+void Assembler::fstp(int i) {
-+ EnsureSpace ensure_space(this);
-+ emit_farith(0xDD, 0xD8, i);
-+}
-+
-+
-+void Assembler::fld1() {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0xD9);
-+ EMIT(0xE8);
-+}
-+
-+
-+void Assembler::fldpi() {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0xD9);
-+ EMIT(0xEB);
-+}
-+
-+
-+void Assembler::fldz() {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0xD9);
-+ EMIT(0xEE);
-+}
-+
-+
-+void Assembler::fldln2() {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0xD9);
-+ EMIT(0xED);
-+}
-+
-+
-+void Assembler::fld_s(const Operand& adr) {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0xD9);
-+ emit_operand(eax, adr);
-+}
-+
-+
-+void Assembler::fld_d(const Operand& adr) {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0xDD);
-+ emit_operand(eax, adr);
-+}
-+
-+
-+void Assembler::fstp_s(const Operand& adr) {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0xD9);
-+ emit_operand(ebx, adr);
-+}
-+
-+
-+void Assembler::fst_s(const Operand& adr) {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0xD9);
-+ emit_operand(edx, adr);
-+}
-+
-+
-+void Assembler::fldcw(const Operand& adr) {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0xD9);
-+ emit_operand(ebp, adr);
-+}
-+
-+
-+void Assembler::fnstcw(const Operand& adr) {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0xD9);
-+ emit_operand(edi, adr);
-+}
-+
-+
-+void Assembler::fstp_d(const Operand& adr) {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0xDD);
-+ emit_operand(ebx, adr);
-+}
-+
-+
-+void Assembler::fst_d(const Operand& adr) {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0xDD);
-+ emit_operand(edx, adr);
-+}
-+
-+
-+void Assembler::fild_s(const Operand& adr) {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0xDB);
-+ emit_operand(eax, adr);
-+}
-+
-+
-+void Assembler::fild_d(const Operand& adr) {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0xDF);
-+ emit_operand(ebp, adr);
-+}
-+
-+
-+void Assembler::fistp_s(const Operand& adr) {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0xDB);
-+ emit_operand(ebx, adr);
-+}
-+
-+
-+void Assembler::fisttp_s(const Operand& adr) {
-+ DCHECK(IsEnabled(SSE3));
-+ EnsureSpace ensure_space(this);
-+ EMIT(0xDB);
-+ emit_operand(ecx, adr);
-+}
-+
-+
-+void Assembler::fisttp_d(const Operand& adr) {
-+ DCHECK(IsEnabled(SSE3));
-+ EnsureSpace ensure_space(this);
-+ EMIT(0xDD);
-+ emit_operand(ecx, adr);
-+}
-+
-+
-+void Assembler::fist_s(const Operand& adr) {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0xDB);
-+ emit_operand(edx, adr);
-+}
-+
-+
-+void Assembler::fistp_d(const Operand& adr) {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0xDF);
-+ emit_operand(edi, adr);
-+}
-+
-+
-+void Assembler::fabs() {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0xD9);
-+ EMIT(0xE1);
-+}
-+
-+
-+void Assembler::fchs() {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0xD9);
-+ EMIT(0xE0);
-+}
-+
-+
-+void Assembler::fsqrt() {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0xD9);
-+ EMIT(0xFA);
-+}
-+
-+
-+void Assembler::fcos() {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0xD9);
-+ EMIT(0xFF);
-+}
-+
-+
-+void Assembler::fsin() {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0xD9);
-+ EMIT(0xFE);
-+}
-+
-+
-+void Assembler::fptan() {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0xD9);
-+ EMIT(0xF2);
-+}
-+
-+
-+void Assembler::fyl2x() {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0xD9);
-+ EMIT(0xF1);
-+}
-+
-+
-+void Assembler::f2xm1() {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0xD9);
-+ EMIT(0xF0);
-+}
-+
-+
-+void Assembler::fscale() {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0xD9);
-+ EMIT(0xFD);
-+}
-+
-+
-+void Assembler::fninit() {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0xDB);
-+ EMIT(0xE3);
-+}
-+
-+
-+void Assembler::fadd(int i) {
-+ EnsureSpace ensure_space(this);
-+ emit_farith(0xDC, 0xC0, i);
-+}
-+
-+
-+void Assembler::fadd_i(int i) {
-+ EnsureSpace ensure_space(this);
-+ emit_farith(0xD8, 0xC0, i);
-+}
-+
-+
-+void Assembler::fadd_d(const Operand& adr) {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0xDC);
-+ emit_operand(eax, adr);
-+}
-+
-+
-+void Assembler::fsub(int i) {
-+ EnsureSpace ensure_space(this);
-+ emit_farith(0xDC, 0xE8, i);
-+}
-+
-+
-+void Assembler::fsub_i(int i) {
-+ EnsureSpace ensure_space(this);
-+ emit_farith(0xD8, 0xE0, i);
-+}
-+
-+
-+void Assembler::fsubr_d(const Operand& adr) {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0xDC);
-+ emit_operand(ebp, adr);
-+}
-+
-+
-+void Assembler::fsub_d(const Operand& adr) {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0xDC);
-+ emit_operand(esp, adr);
-+}
-+
-+
-+void Assembler::fisub_s(const Operand& adr) {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0xDA);
-+ emit_operand(esp, adr);
-+}
-+
-+
-+void Assembler::fmul_i(int i) {
-+ EnsureSpace ensure_space(this);
-+ emit_farith(0xD8, 0xC8, i);
-+}
-+
-+
-+void Assembler::fmul(int i) {
-+ EnsureSpace ensure_space(this);
-+ emit_farith(0xDC, 0xC8, i);
-+}
-+
-+
-+void Assembler::fmul_d(const Operand& adr) {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0xDC);
-+ emit_operand(ecx, adr);
-+}
-+
-+
-+void Assembler::fdiv(int i) {
-+ EnsureSpace ensure_space(this);
-+ emit_farith(0xDC, 0xF8, i);
-+}
-+
-+
-+void Assembler::fdiv_d(const Operand& adr) {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0xDC);
-+ emit_operand(esi, adr);
-+}
-+
-+
-+void Assembler::fdivr_d(const Operand& adr) {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0xDC);
-+ emit_operand(edi, adr);
-+}
-+
-+
-+void Assembler::fdiv_i(int i) {
-+ EnsureSpace ensure_space(this);
-+ emit_farith(0xD8, 0xF0, i);
-+}
-+
-+
-+void Assembler::faddp(int i) {
-+ EnsureSpace ensure_space(this);
-+ emit_farith(0xDE, 0xC0, i);
-+}
-+
-+
-+void Assembler::fsubp(int i) {
-+ EnsureSpace ensure_space(this);
-+ emit_farith(0xDE, 0xE8, i);
-+}
-+
-+
-+void Assembler::fsubrp(int i) {
-+ EnsureSpace ensure_space(this);
-+ emit_farith(0xDE, 0xE0, i);
-+}
-+
-+
-+void Assembler::fmulp(int i) {
-+ EnsureSpace ensure_space(this);
-+ emit_farith(0xDE, 0xC8, i);
-+}
-+
-+
-+void Assembler::fdivp(int i) {
-+ EnsureSpace ensure_space(this);
-+ emit_farith(0xDE, 0xF8, i);
-+}
-+
-+
-+void Assembler::fprem() {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0xD9);
-+ EMIT(0xF8);
-+}
-+
-+
-+void Assembler::fprem1() {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0xD9);
-+ EMIT(0xF5);
-+}
-+
-+
-+void Assembler::fxch(int i) {
-+ EnsureSpace ensure_space(this);
-+ emit_farith(0xD9, 0xC8, i);
-+}
-+
-+
-+void Assembler::fincstp() {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0xD9);
-+ EMIT(0xF7);
-+}
-+
-+
-+void Assembler::ffree(int i) {
-+ EnsureSpace ensure_space(this);
-+ emit_farith(0xDD, 0xC0, i);
-+}
-+
-+
-+void Assembler::ftst() {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0xD9);
-+ EMIT(0xE4);
-+}
-+
-+
-+void Assembler::fxam() {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0xD9);
-+ EMIT(0xE5);
-+}
-+
-+
-+void Assembler::fucomp(int i) {
-+ EnsureSpace ensure_space(this);
-+ emit_farith(0xDD, 0xE8, i);
-+}
-+
-+
-+void Assembler::fucompp() {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0xDA);
-+ EMIT(0xE9);
-+}
-+
-+
-+void Assembler::fucomi(int i) {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0xDB);
-+ EMIT(0xE8 + i);
-+}
-+
-+
-+void Assembler::fucomip() {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0xDF);
-+ EMIT(0xE9);
-+}
-+
-+
-+void Assembler::fcompp() {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0xDE);
-+ EMIT(0xD9);
-+}
-+
-+
-+void Assembler::fnstsw_ax() {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0xDF);
-+ EMIT(0xE0);
-+}
-+
-+
-+void Assembler::fwait() {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0x9B);
-+}
-+
-+
-+void Assembler::frndint() {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0xD9);
-+ EMIT(0xFC);
-+}
-+
-+
-+void Assembler::fnclex() {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0xDB);
-+ EMIT(0xE2);
-+}
-+
-+
-+void Assembler::fnsave(const Operand& adr) {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0xDD);
-+ emit_operand(esi, adr);
-+}
-+
-+
-+void Assembler::frstor(const Operand& adr) {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0xDD);
-+ emit_operand(esp, adr);
-+}
-+
-+
-+void Assembler::sahf() {
-+ EnsureSpace ensure_space(this);
-+ EMIT(0x9E);
-+}
-+
-+
-+void Assembler::setcc(Condition cc, Register reg) {
-+ DCHECK(reg.is_byte_register());
-+ EnsureSpace ensure_space(this);
-+ EMIT(0x0F);
-+ EMIT(0x90 | cc);
-+ EMIT(0xC0 | reg.code());
-+}
-+
-+
-+void Assembler::GrowBuffer() {
-+ DCHECK(buffer_overflow());
-+ if (!own_buffer_) FATAL("external code buffer is too small");
-+
-+ // Compute new buffer size.
-+ CodeDesc desc; // the new buffer
-+ desc.buffer_size = 2 * buffer_size_;
-+
-+ // Some internal data structures overflow for very large buffers,
-+ // they must ensure that kMaximalBufferSize is not too large.
-+ if (desc.buffer_size > kMaximalBufferSize) {
-+ V8::FatalProcessOutOfMemory("Assembler::GrowBuffer");
-+ }
-+
-+ // Set up new buffer.
-+ desc.buffer = NewArray<byte>(desc.buffer_size);
-+ desc.origin = this;
-+ desc.instr_size = pc_offset();
-+ desc.reloc_size = (buffer_ + buffer_size_) - (reloc_info_writer.pos());
-+
-+ // Clear the buffer in debug mode. Use 'int3' instructions to make
-+ // sure to get into problems if we ever run uninitialized code.
-+#ifdef DEBUG
-+ memset(desc.buffer, 0xCC, desc.buffer_size);
-+#endif
-+
-+ // Copy the data.
-+ int pc_delta = desc.buffer - buffer_;
-+ int rc_delta = (desc.buffer + desc.buffer_size) - (buffer_ + buffer_size_);
-+ MemMove(desc.buffer, buffer_, desc.instr_size);
-+ MemMove(rc_delta + reloc_info_writer.pos(), reloc_info_writer.pos(),
-+ desc.reloc_size);
-+
-+ DeleteArray(buffer_);
-+ buffer_ = desc.buffer;
-+ buffer_size_ = desc.buffer_size;
-+ pc_ += pc_delta;
-+ reloc_info_writer.Reposition(reloc_info_writer.pos() + rc_delta,
-+ reloc_info_writer.last_pc() + pc_delta);
-+
-+ // Relocate internal references.
-+ for (auto pos : internal_reference_positions_) {
-+ int32_t* p = reinterpret_cast<int32_t*>(buffer_ + pos);
-+ *p += pc_delta;
-+ }
-+
-+ DCHECK(!buffer_overflow());
-+}
-+
-+
-+void Assembler::emit_arith_b(int op1, int op2, Register dst, int imm8) {
-+ DCHECK(is_uint8(op1) && is_uint8(op2)); // wrong opcode
-+ DCHECK(is_uint8(imm8));
-+ DCHECK((op1 & 0x01) == 0); // should be 8bit operation
-+ EMIT(op1);
-+ EMIT(op2 | dst.code());
-+ EMIT(imm8);
-+}
-+
-+
-+void Assembler::emit_arith(int sel, Operand dst, const Immediate& x) {
-+ DCHECK((0 <= sel) && (sel <= 7));
-+ Register ireg = { sel };
-+ if (x.is_int8()) {
-+ EMIT(0x83); // using a sign-extended 8-bit immediate.
-+ emit_operand(ireg, dst);
-+ EMIT(x.immediate() & 0xFF);
-+ } else if (dst.is_reg(eax)) {
-+ EMIT((sel << 3) | 0x05); // short form if the destination is eax.
-+ emit(x);
-+ } else {
-+ EMIT(0x81); // using a literal 32-bit immediate.
-+ emit_operand(ireg, dst);
-+ emit(x);
-+ }
-+}
-+
-+
-+void Assembler::emit_operand(Register reg, const Operand& adr) {
-+ const unsigned length = adr.len_;
-+ DCHECK(length > 0);
-+
-+ // Emit updated ModRM byte containing the given register.
-+ pc_[0] = (adr.buf_[0] & ~0x38) | (reg.code() << 3);
-+
-+ // Emit the rest of the encoded operand.
-+ for (unsigned i = 1; i < length; i++) pc_[i] = adr.buf_[i];
-+ pc_ += length;
-+
-+ // Emit relocation information if necessary.
-+ if (length >= sizeof(int32_t) && !RelocInfo::IsNone(adr.rmode_)) {
-+ pc_ -= sizeof(int32_t); // pc_ must be *at* disp32
-+ RecordRelocInfo(adr.rmode_);
-+ if (adr.rmode_ == RelocInfo::INTERNAL_REFERENCE) { // Fixup for labels
-+ emit_label(*reinterpret_cast<Label**>(pc_));
-+ } else {
-+ pc_ += sizeof(int32_t);
-+ }
-+ }
-+}
-+
-+
-+void Assembler::emit_label(Label* label) {
-+ if (label->is_bound()) {
-+ internal_reference_positions_.push_back(pc_offset());
-+ emit(reinterpret_cast<uint32_t>(buffer_ + label->pos()));
-+ } else {
-+ emit_disp(label, Displacement::CODE_ABSOLUTE);
-+ }
-+}
-+
-+
-+void Assembler::emit_farith(int b1, int b2, int i) {
-+ DCHECK(is_uint8(b1) && is_uint8(b2)); // wrong opcode
-+ DCHECK(0 <= i && i < 8); // illegal stack offset
-+ EMIT(b1);
-+ EMIT(b2 + i);
-+}
-+
-+
-+void Assembler::db(uint8_t data) {
-+ EnsureSpace ensure_space(this);
-+ EMIT(data);
-+}
-+
-+
-+void Assembler::dd(uint32_t data) {
-+ EnsureSpace ensure_space(this);
-+ emit(data);
-+}
-+
-+
-+void Assembler::dq(uint64_t data) {
-+ EnsureSpace ensure_space(this);
-+ emit_q(data);
-+}
-+
-+
-+void Assembler::dd(Label* label) {
-+ EnsureSpace ensure_space(this);
-+ RecordRelocInfo(RelocInfo::INTERNAL_REFERENCE);
-+ emit_label(label);
-+}
-+
-+
-+void Assembler::RecordRelocInfo(RelocInfo::Mode rmode, intptr_t data) {
-+ DCHECK(!RelocInfo::IsNone(rmode));
-+ // Don't record external references unless the heap will be serialized.
-+ if (rmode == RelocInfo::EXTERNAL_REFERENCE &&
-+ !serializer_enabled() && !emit_debug_code()) {
-+ return;
-+ }
-+ RelocInfo rinfo(pc_, rmode, data, NULL);
-+ reloc_info_writer.Write(&rinfo);
-+}
-+
-+} // namespace internal
-+} // namespace v8
-+
-+#endif // V8_TARGET_ARCH_X87
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/x87/assembler-x87.h qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/x87/assembler-x87.h
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/x87/assembler-x87.h 1970-01-01 01:00:00.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/x87/assembler-x87.h 2018-02-18 19:00:54.195418178 +0100
-+// Copyright (c) 1994-2006 Sun Microsystems Inc.
-+// All Rights Reserved.
-+//
-+// Redistribution and use in source and binary forms, with or without
-+// modification, are permitted provided that the following conditions are
-+// met:
-+//
-+// - Redistributions of source code must retain the above copyright notice,
-+// this list of conditions and the following disclaimer.
-+//
-+// - Redistribution in binary form must reproduce the above copyright
-+// notice, this list of conditions and the following disclaimer in the
-+// documentation and/or other materials provided with the distribution.
-+//
-+// - Neither the name of Sun Microsystems or the names of contributors may
-+// be used to endorse or promote products derived from this software without
-+// specific prior written permission.
-+//
-+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
-+// IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
-+// THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
-+// PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
-+// CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
-+// EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
-+// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
-+// PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
-+// LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
-+// NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
-+// SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-+
-+// The original source code covered by the above license above has been
-+// modified significantly by Google Inc.
-+// Copyright 2011 the V8 project authors. All rights reserved.
-+
-+// A light-weight IA32 Assembler.
-+
-+#ifndef V8_X87_ASSEMBLER_X87_H_
-+#define V8_X87_ASSEMBLER_X87_H_
-+
-+#include <deque>
-+
-+#include "src/assembler.h"
-+#include "src/isolate.h"
-+#include "src/utils.h"
-+
-+namespace v8 {
-+namespace internal {
-+
-+#define GENERAL_REGISTERS(V) \
-+ V(eax) \
-+ V(ecx) \
-+ V(edx) \
-+ V(ebx) \
-+ V(esp) \
-+ V(ebp) \
-+ V(esi) \
-+ V(edi)
-+
-+#define ALLOCATABLE_GENERAL_REGISTERS(V) \
-+ V(eax) \
-+ V(ecx) \
-+ V(edx) \
-+ V(ebx) \
-+ V(esi) \
-+ V(edi)
-+
-+#define DOUBLE_REGISTERS(V) \
-+ V(stX_0) \
-+ V(stX_1) \
-+ V(stX_2) \
-+ V(stX_3) \
-+ V(stX_4) \
-+ V(stX_5) \
-+ V(stX_6) \
-+ V(stX_7)
-+
-+#define FLOAT_REGISTERS DOUBLE_REGISTERS
-+#define SIMD128_REGISTERS DOUBLE_REGISTERS
-+
-+#define ALLOCATABLE_DOUBLE_REGISTERS(V) \
-+ V(stX_0) \
-+ V(stX_1) \
-+ V(stX_2) \
-+ V(stX_3) \
-+ V(stX_4) \
-+ V(stX_5)
-+
-+// CPU Registers.
-+//
-+// 1) We would prefer to use an enum, but enum values are assignment-
-+// compatible with int, which has caused code-generation bugs.
-+//
-+// 2) We would prefer to use a class instead of a struct but we don't like
-+// the register initialization to depend on the particular initialization
-+// order (which appears to be different on OS X, Linux, and Windows for the
-+// installed versions of C++ we tried). Using a struct permits C-style
-+// "initialization". Also, the Register objects cannot be const as this
-+// forces initialization stubs in MSVC, making us dependent on initialization
-+// order.
-+//
-+// 3) By not using an enum, we are possibly preventing the compiler from
-+// doing certain constant folds, which may significantly reduce the
-+// code generated for some assembly instructions (because they boil down
-+// to a few constants). If this is a problem, we could change the code
-+// such that we use an enum in optimized mode, and the struct in debug
-+// mode. This way we get the compile-time error checking in debug mode
-+// and best performance in optimized code.
-+//
-+struct Register {
-+ enum Code {
-+#define REGISTER_CODE(R) kCode_##R,
-+ GENERAL_REGISTERS(REGISTER_CODE)
-+#undef REGISTER_CODE
-+ kAfterLast,
-+ kCode_no_reg = -1
-+ };
-+
-+ static const int kNumRegisters = Code::kAfterLast;
-+
-+ static Register from_code(int code) {
-+ DCHECK(code >= 0);
-+ DCHECK(code < kNumRegisters);
-+ Register r = {code};
-+ return r;
-+ }
-+ bool is_valid() const { return 0 <= reg_code && reg_code < kNumRegisters; }
-+ bool is(Register reg) const { return reg_code == reg.reg_code; }
-+ int code() const {
-+ DCHECK(is_valid());
-+ return reg_code;
-+ }
-+ int bit() const {
-+ DCHECK(is_valid());
-+ return 1 << reg_code;
-+ }
-+
-+ bool is_byte_register() const { return reg_code <= 3; }
-+
-+ // Unfortunately we can't make this private in a struct.
-+ int reg_code;
-+};
-+
-+
-+#define DECLARE_REGISTER(R) const Register R = {Register::kCode_##R};
-+GENERAL_REGISTERS(DECLARE_REGISTER)
-+#undef DECLARE_REGISTER
-+const Register no_reg = {Register::kCode_no_reg};
-+
-+static const bool kSimpleFPAliasing = true;
-+static const bool kSimdMaskRegisters = false;
-+
-+struct X87Register {
-+ enum Code {
-+#define REGISTER_CODE(R) kCode_##R,
-+ DOUBLE_REGISTERS(REGISTER_CODE)
-+#undef REGISTER_CODE
-+ kAfterLast,
-+ kCode_no_reg = -1
-+ };
-+
-+ static const int kMaxNumRegisters = Code::kAfterLast;
-+ static const int kMaxNumAllocatableRegisters = 6;
-+
-+ static X87Register from_code(int code) {
-+ X87Register result = {code};
-+ return result;
-+ }
-+
-+ bool is_valid() const { return 0 <= reg_code && reg_code < kMaxNumRegisters; }
-+
-+ int code() const {
-+ DCHECK(is_valid());
-+ return reg_code;
-+ }
-+
-+ bool is(X87Register reg) const { return reg_code == reg.reg_code; }
-+
-+ int reg_code;
-+};
-+
-+typedef X87Register FloatRegister;
-+
-+typedef X87Register DoubleRegister;
-+
-+// TODO(x87) Define SIMD registers.
-+typedef X87Register Simd128Register;
-+
-+#define DECLARE_REGISTER(R) \
-+ const DoubleRegister R = {DoubleRegister::kCode_##R};
-+DOUBLE_REGISTERS(DECLARE_REGISTER)
-+#undef DECLARE_REGISTER
-+const DoubleRegister no_double_reg = {DoubleRegister::kCode_no_reg};
-+
-+enum Condition {
-+ // any value < 0 is considered no_condition
-+ no_condition = -1,
-+
-+ overflow = 0,
-+ no_overflow = 1,
-+ below = 2,
-+ above_equal = 3,
-+ equal = 4,
-+ not_equal = 5,
-+ below_equal = 6,
-+ above = 7,
-+ negative = 8,
-+ positive = 9,
-+ parity_even = 10,
-+ parity_odd = 11,
-+ less = 12,
-+ greater_equal = 13,
-+ less_equal = 14,
-+ greater = 15,
-+
-+ // aliases
-+ carry = below,
-+ not_carry = above_equal,
-+ zero = equal,
-+ not_zero = not_equal,
-+ sign = negative,
-+ not_sign = positive
-+};
-+
-+
-+// Returns the equivalent of !cc.
-+// Negation of the default no_condition (-1) results in a non-default
-+// no_condition value (-2). As long as tests for no_condition check
-+// for condition < 0, this will work as expected.
-+inline Condition NegateCondition(Condition cc) {
-+ return static_cast<Condition>(cc ^ 1);
-+}
-+
-+
-+// Commute a condition such that {a cond b == b cond' a}.
-+inline Condition CommuteCondition(Condition cc) {
-+ switch (cc) {
-+ case below:
-+ return above;
-+ case above:
-+ return below;
-+ case above_equal:
-+ return below_equal;
-+ case below_equal:
-+ return above_equal;
-+ case less:
-+ return greater;
-+ case greater:
-+ return less;
-+ case greater_equal:
-+ return less_equal;
-+ case less_equal:
-+ return greater_equal;
-+ default:
-+ return cc;
-+ }
-+}
-+
-+
-+enum RoundingMode {
-+ kRoundToNearest = 0x0,
-+ kRoundDown = 0x1,
-+ kRoundUp = 0x2,
-+ kRoundToZero = 0x3
-+};
-+
-+
-+// -----------------------------------------------------------------------------
-+// Machine instruction Immediates
-+
-+class Immediate BASE_EMBEDDED {
-+ public:
-+ inline explicit Immediate(int x);
-+ inline explicit Immediate(const ExternalReference& ext);
-+ inline explicit Immediate(Handle<HeapObject> handle);
-+ inline explicit Immediate(Smi* value);
-+ inline explicit Immediate(Address addr);
-+ inline explicit Immediate(Address x, RelocInfo::Mode rmode);
-+
-+ static Immediate EmbeddedNumber(double number); // Smi or HeapNumber.
-+ static Immediate EmbeddedCode(CodeStub* code);
-+
-+ static Immediate CodeRelativeOffset(Label* label) {
-+ return Immediate(label);
-+ }
-+
-+ bool is_heap_object_request() const {
-+ DCHECK_IMPLIES(is_heap_object_request_,
-+ rmode_ == RelocInfo::EMBEDDED_OBJECT ||
-+ rmode_ == RelocInfo::CODE_TARGET);
-+ return is_heap_object_request_;
-+ }
-+
-+ HeapObjectRequest heap_object_request() const {
-+ DCHECK(is_heap_object_request());
-+ return value_.heap_object_request;
-+ }
-+
-+ int immediate() const {
-+ DCHECK(!is_heap_object_request());
-+ return value_.immediate;
-+ }
-+
-+ bool is_zero() const { return RelocInfo::IsNone(rmode_) && immediate() == 0; }
-+ bool is_int8() const {
-+ return RelocInfo::IsNone(rmode_) && i::is_int8(immediate());
-+ }
-+ bool is_uint8() const {
-+ return RelocInfo::IsNone(rmode_) && i::is_uint8(immediate());
-+ }
-+ bool is_int16() const {
-+ return RelocInfo::IsNone(rmode_) && i::is_int16(immediate());
-+ }
-+
-+ bool is_uint16() const {
-+ return RelocInfo::IsNone(rmode_) && i::is_uint16(immediate());
-+ }
-+
-+ RelocInfo::Mode rmode() const { return rmode_; }
-+
-+ private:
-+ inline explicit Immediate(Label* value);
-+
-+ union Value {
-+ Value() {}
-+ HeapObjectRequest heap_object_request;
-+ int immediate;
-+ } value_;
-+ bool is_heap_object_request_ = false;
-+ RelocInfo::Mode rmode_;
-+
-+ friend class Operand;
-+ friend class Assembler;
-+ friend class MacroAssembler;
-+};
-+
-+
-+// -----------------------------------------------------------------------------
-+// Machine instruction Operands
-+
-+enum ScaleFactor {
-+ times_1 = 0,
-+ times_2 = 1,
-+ times_4 = 2,
-+ times_8 = 3,
-+ times_int_size = times_4,
-+ times_half_pointer_size = times_2,
-+ times_pointer_size = times_4,
-+ times_twice_pointer_size = times_8
-+};
-+
-+
-+class Operand BASE_EMBEDDED {
-+ public:
-+ // reg
-+ INLINE(explicit Operand(Register reg));
-+
-+ // [disp/r]
-+ INLINE(explicit Operand(int32_t disp, RelocInfo::Mode rmode));
-+
-+ // [disp/r]
-+ INLINE(explicit Operand(Immediate imm));
-+
-+ // [base + disp/r]
-+ explicit Operand(Register base, int32_t disp,
-+ RelocInfo::Mode rmode = RelocInfo::NONE32);
-+
-+ // [base + index*scale + disp/r]
-+ explicit Operand(Register base,
-+ Register index,
-+ ScaleFactor scale,
-+ int32_t disp,
-+ RelocInfo::Mode rmode = RelocInfo::NONE32);
-+
-+ // [index*scale + disp/r]
-+ explicit Operand(Register index,
-+ ScaleFactor scale,
-+ int32_t disp,
-+ RelocInfo::Mode rmode = RelocInfo::NONE32);
-+
-+ static Operand JumpTable(Register index, ScaleFactor scale, Label* table) {
-+ return Operand(index, scale, reinterpret_cast<int32_t>(table),
-+ RelocInfo::INTERNAL_REFERENCE);
-+ }
-+
-+ static Operand StaticVariable(const ExternalReference& ext) {
-+ return Operand(reinterpret_cast<int32_t>(ext.address()),
-+ RelocInfo::EXTERNAL_REFERENCE);
-+ }
-+
-+ static Operand StaticArray(Register index,
-+ ScaleFactor scale,
-+ const ExternalReference& arr) {
-+ return Operand(index, scale, reinterpret_cast<int32_t>(arr.address()),
-+ RelocInfo::EXTERNAL_REFERENCE);
-+ }
-+
-+ static Operand ForCell(Handle<Cell> cell) {
-+ return Operand(reinterpret_cast<int32_t>(cell.address()), RelocInfo::CELL);
-+ }
-+
-+ static Operand ForRegisterPlusImmediate(Register base, Immediate imm) {
-+ return Operand(base, imm.value_.immediate, imm.rmode_);
-+ }
-+
-+ // Returns true if this Operand is a wrapper for the specified register.
-+ bool is_reg(Register reg) const;
-+
-+ // Returns true if this Operand is a wrapper for one register.
-+ bool is_reg_only() const;
-+
-+ // Asserts that this Operand is a wrapper for one register and returns the
-+ // register.
-+ Register reg() const;
-+
-+ private:
-+ // Set the ModRM byte without an encoded 'reg' register. The
-+ // register is encoded later as part of the emit_operand operation.
-+ inline void set_modrm(int mod, Register rm);
-+
-+ inline void set_sib(ScaleFactor scale, Register index, Register base);
-+ inline void set_disp8(int8_t disp);
-+ inline void set_dispr(int32_t disp, RelocInfo::Mode rmode);
-+
-+ byte buf_[6];
-+ // The number of bytes in buf_.
-+ unsigned int len_;
-+ // Only valid if len_ > 4.
-+ RelocInfo::Mode rmode_;
-+
-+ friend class Assembler;
-+};
-+
-+
-+// -----------------------------------------------------------------------------
-+// A Displacement describes the 32bit immediate field of an instruction which
-+// may be used together with a Label in order to refer to a yet unknown code
-+// position. Displacements stored in the instruction stream are used to describe
-+// the instruction and to chain a list of instructions using the same Label.
-+// A Displacement contains 2 different fields:
-+//
-+// next field: position of next displacement in the chain (0 = end of list)
-+// type field: instruction type
-+//
-+// A next value of null (0) indicates the end of a chain (note that there can
-+// be no displacement at position zero, because there is always at least one
-+// instruction byte before the displacement).
-+//
-+// Displacement _data field layout
-+//
-+// |31.....2|1......0|
-+// [ next | type |
-+
-+class Displacement BASE_EMBEDDED {
-+ public:
-+ enum Type { UNCONDITIONAL_JUMP, CODE_RELATIVE, OTHER, CODE_ABSOLUTE };
-+
-+ int data() const { return data_; }
-+ Type type() const { return TypeField::decode(data_); }
-+ void next(Label* L) const {
-+ int n = NextField::decode(data_);
-+ n > 0 ? L->link_to(n) : L->Unuse();
-+ }
-+ void link_to(Label* L) { init(L, type()); }
-+
-+ explicit Displacement(int data) { data_ = data; }
-+
-+ Displacement(Label* L, Type type) { init(L, type); }
-+
-+ void print() {
-+ PrintF("%s (%x) ", (type() == UNCONDITIONAL_JUMP ? "jmp" : "[other]"),
-+ NextField::decode(data_));
-+ }
-+
-+ private:
-+ int data_;
-+
-+ class TypeField: public BitField<Type, 0, 2> {};
-+ class NextField: public BitField<int, 2, 32-2> {};
-+
-+ void init(Label* L, Type type);
-+};
-+
-+
-+class Assembler : public AssemblerBase {
-+ private:
-+ // We check before assembling an instruction that there is sufficient
-+ // space to write an instruction and its relocation information.
-+ // The relocation writer's position must be kGap bytes above the end of
-+ // the generated instructions. This leaves enough space for the
-+ // longest possible ia32 instruction, 15 bytes, and the longest possible
-+ // relocation information encoding, RelocInfoWriter::kMaxLength == 16.
-+ // (There is a 15 byte limit on ia32 instruction length that rules out some
-+ // otherwise valid instructions.)
-+ // This allows for a single, fast space check per instruction.
-+ static const int kGap = 32;
-+
-+ public:
-+ // Create an assembler. Instructions and relocation information are emitted
-+ // into a buffer, with the instructions starting from the beginning and the
-+ // relocation information starting from the end of the buffer. See CodeDesc
-+ // for a detailed comment on the layout (globals.h).
-+ //
-+ // If the provided buffer is NULL, the assembler allocates and grows its own
-+ // buffer, and buffer_size determines the initial buffer size. The buffer is
-+ // owned by the assembler and deallocated upon destruction of the assembler.
-+ //
-+ // If the provided buffer is not NULL, the assembler uses the provided buffer
-+ // for code generation and assumes its size to be buffer_size. If the buffer
-+ // is too small, a fatal error occurs. No deallocation of the buffer is done
-+ // upon destruction of the assembler.
-+ Assembler(Isolate* isolate, void* buffer, int buffer_size)
-+ : Assembler(IsolateData(isolate), buffer, buffer_size) {}
-+ Assembler(IsolateData isolate_data, void* buffer, int buffer_size);
-+ virtual ~Assembler() {}
-+
-+ // GetCode emits any pending (non-emitted) code and fills the descriptor
-+ // desc. GetCode() is idempotent; it returns the same result if no other
-+ // Assembler functions are invoked in between GetCode() calls.
-+ void GetCode(Isolate* isolate, CodeDesc* desc);
-+
-+ // Read/Modify the code target in the branch/call instruction at pc.
-+ // The isolate argument is unused (and may be nullptr) when skipping flushing.
-+ inline static Address target_address_at(Address pc, Address constant_pool);
-+ inline static void set_target_address_at(
-+ Isolate* isolate, Address pc, Address constant_pool, Address target,
-+ ICacheFlushMode icache_flush_mode = FLUSH_ICACHE_IF_NEEDED);
-+ static inline Address target_address_at(Address pc, Code* code);
-+ static inline void set_target_address_at(
-+ Isolate* isolate, Address pc, Code* code, Address target,
-+ ICacheFlushMode icache_flush_mode = FLUSH_ICACHE_IF_NEEDED);
-+
-+ // Return the code target address at a call site from the return address
-+ // of that call in the instruction stream.
-+ inline static Address target_address_from_return_address(Address pc);
-+
-+ // This sets the branch destination (which is in the instruction on x86).
-+ // This is for calls and branches within generated code.
-+ inline static void deserialization_set_special_target_at(
-+ Isolate* isolate, Address instruction_payload, Code* code,
-+ Address target) {
-+ set_target_address_at(isolate, instruction_payload, code, target);
-+ }
-+
-+ // This sets the internal reference at the pc.
-+ inline static void deserialization_set_target_internal_reference_at(
-+ Isolate* isolate, Address pc, Address target,
-+ RelocInfo::Mode mode = RelocInfo::INTERNAL_REFERENCE);
-+
-+ static const int kSpecialTargetSize = kPointerSize;
-+
-+ // Distance between the address of the code target in the call instruction
-+ // and the return address
-+ static const int kCallTargetAddressOffset = kPointerSize;
-+
-+ static const int kCallInstructionLength = 5;
-+
-+ // The debug break slot must be able to contain a call instruction.
-+ static const int kDebugBreakSlotLength = kCallInstructionLength;
-+
-+ // Distance between start of patched debug break slot and the emitted address
-+ // to jump to.
-+ static const int kPatchDebugBreakSlotAddressOffset = 1; // JMP imm32.
-+
-+ // One byte opcode for test al, 0xXX.
-+ static const byte kTestAlByte = 0xA8;
-+ // One byte opcode for nop.
-+ static const byte kNopByte = 0x90;
-+
-+ // One byte opcode for a short unconditional jump.
-+ static const byte kJmpShortOpcode = 0xEB;
-+ // One byte prefix for a short conditional jump.
-+ static const byte kJccShortPrefix = 0x70;
-+ static const byte kJncShortOpcode = kJccShortPrefix | not_carry;
-+ static const byte kJcShortOpcode = kJccShortPrefix | carry;
-+ static const byte kJnzShortOpcode = kJccShortPrefix | not_zero;
-+ static const byte kJzShortOpcode = kJccShortPrefix | zero;
-+
-+
-+ // ---------------------------------------------------------------------------
-+ // Code generation
-+ //
-+ // - function names correspond one-to-one to ia32 instruction mnemonics
-+ // - unless specified otherwise, instructions operate on 32bit operands
-+ // - instructions on 8bit (byte) operands/registers have a trailing '_b'
-+ // - instructions on 16bit (word) operands/registers have a trailing '_w'
-+ // - naming conflicts with C++ keywords are resolved via a trailing '_'
-+
-+ // NOTE ON INTERFACE: Currently, the interface is not very consistent
-+ // in the sense that some operations (e.g. mov()) can be called in more
-+ // the one way to generate the same instruction: The Register argument
-+ // can in some cases be replaced with an Operand(Register) argument.
-+ // This should be cleaned up and made more orthogonal. The questions
-+ // is: should we always use Operands instead of Registers where an
-+ // Operand is possible, or should we have a Register (overloaded) form
-+ // instead? We must be careful to make sure that the selected instruction
-+ // is obvious from the parameters to avoid hard-to-find code generation
-+ // bugs.
-+
-+ // Insert the smallest number of nop instructions
-+ // possible to align the pc offset to a multiple
-+ // of m. m must be a power of 2.
-+ void Align(int m);
-+ // Insert the smallest number of zero bytes possible to align the pc offset
-+ // to a mulitple of m. m must be a power of 2 (>= 2).
-+ void DataAlign(int m);
-+ void Nop(int bytes = 1);
-+ // Aligns code to something that's optimal for a jump target for the platform.
-+ void CodeTargetAlign();
-+
-+ // Stack
-+ void pushad();
-+ void popad();
-+
-+ void pushfd();
-+ void popfd();
-+
-+ void push(const Immediate& x);
-+ void push_imm32(int32_t imm32);
-+ void push(Register src);
-+ void push(const Operand& src);
-+
-+ void pop(Register dst);
-+ void pop(const Operand& dst);
-+
-+ void enter(const Immediate& size);
-+ void leave();
-+
-+ // Moves
-+ void mov_b(Register dst, Register src) { mov_b(dst, Operand(src)); }
-+ void mov_b(Register dst, const Operand& src);
-+ void mov_b(Register dst, int8_t imm8) { mov_b(Operand(dst), imm8); }
-+ void mov_b(const Operand& dst, int8_t imm8);
-+ void mov_b(const Operand& dst, const Immediate& src);
-+ void mov_b(const Operand& dst, Register src);
-+
-+ void mov_w(Register dst, const Operand& src);
-+ void mov_w(const Operand& dst, Register src);
-+ void mov_w(const Operand& dst, int16_t imm16);
-+ void mov_w(const Operand& dst, const Immediate& src);
-+
-+
-+ void mov(Register dst, int32_t imm32);
-+ void mov(Register dst, const Immediate& x);
-+ void mov(Register dst, Handle<HeapObject> handle);
-+ void mov(Register dst, const Operand& src);
-+ void mov(Register dst, Register src);
-+ void mov(const Operand& dst, const Immediate& x);
-+ void mov(const Operand& dst, Handle<HeapObject> handle);
-+ void mov(const Operand& dst, Register src);
-+
-+ void movsx_b(Register dst, Register src) { movsx_b(dst, Operand(src)); }
-+ void movsx_b(Register dst, const Operand& src);
-+
-+ void movsx_w(Register dst, Register src) { movsx_w(dst, Operand(src)); }
-+ void movsx_w(Register dst, const Operand& src);
-+
-+ void movzx_b(Register dst, Register src) { movzx_b(dst, Operand(src)); }
-+ void movzx_b(Register dst, const Operand& src);
-+
-+ void movzx_w(Register dst, Register src) { movzx_w(dst, Operand(src)); }
-+ void movzx_w(Register dst, const Operand& src);
-+
-+ // Flag management.
-+ void cld();
-+
-+ // Repetitive string instructions.
-+ void rep_movs();
-+ void rep_stos();
-+ void stos();
-+
-+ // Exchange
-+ void xchg(Register dst, Register src);
-+ void xchg(Register dst, const Operand& src);
-+ void xchg_b(Register reg, const Operand& op);
-+ void xchg_w(Register reg, const Operand& op);
-+
-+ // Lock prefix
-+ void lock();
-+
-+ // CompareExchange
-+ void cmpxchg(const Operand& dst, Register src);
-+ void cmpxchg_b(const Operand& dst, Register src);
-+ void cmpxchg_w(const Operand& dst, Register src);
-+
-+ // Arithmetics
-+ void adc(Register dst, int32_t imm32);
-+ void adc(Register dst, const Operand& src);
-+
-+ void add(Register dst, Register src) { add(dst, Operand(src)); }
-+ void add(Register dst, const Operand& src);
-+ void add(const Operand& dst, Register src);
-+ void add(Register dst, const Immediate& imm) { add(Operand(dst), imm); }
-+ void add(const Operand& dst, const Immediate& x);
-+
-+ void and_(Register dst, int32_t imm32);
-+ void and_(Register dst, const Immediate& x);
-+ void and_(Register dst, Register src) { and_(dst, Operand(src)); }
-+ void and_(Register dst, const Operand& src);
-+ void and_(const Operand& dst, Register src);
-+ void and_(const Operand& dst, const Immediate& x);
-+
-+ void cmpb(Register reg, Immediate imm8) { cmpb(Operand(reg), imm8); }
-+ void cmpb(const Operand& op, Immediate imm8);
-+ void cmpb(Register reg, const Operand& op);
-+ void cmpb(const Operand& op, Register reg);
-+ void cmpb(Register dst, Register src) { cmpb(Operand(dst), src); }
-+ void cmpb_al(const Operand& op);
-+ void cmpw_ax(const Operand& op);
-+ void cmpw(const Operand& dst, Immediate src);
-+ void cmpw(Register dst, Immediate src) { cmpw(Operand(dst), src); }
-+ void cmpw(Register dst, const Operand& src);
-+ void cmpw(Register dst, Register src) { cmpw(Operand(dst), src); }
-+ void cmpw(const Operand& dst, Register src);
-+ void cmp(Register reg, int32_t imm32);
-+ void cmp(Register reg, Handle<HeapObject> handle);
-+ void cmp(Register reg0, Register reg1) { cmp(reg0, Operand(reg1)); }
-+ void cmp(Register reg, const Operand& op);
-+ void cmp(Register reg, const Immediate& imm) { cmp(Operand(reg), imm); }
-+ void cmp(const Operand& op, Register reg);
-+ void cmp(const Operand& op, const Immediate& imm);
-+ void cmp(const Operand& op, Handle<HeapObject> handle);
-+
-+ void dec_b(Register dst);
-+ void dec_b(const Operand& dst);
-+
-+ void dec(Register dst);
-+ void dec(const Operand& dst);
-+
-+ void cdq();
-+
-+ void idiv(Register src) { idiv(Operand(src)); }
-+ void idiv(const Operand& src);
-+ void div(Register src) { div(Operand(src)); }
-+ void div(const Operand& src);
-+
-+ // Signed multiply instructions.
-+ void imul(Register src); // edx:eax = eax * src.
-+ void imul(Register dst, Register src) { imul(dst, Operand(src)); }
-+ void imul(Register dst, const Operand& src); // dst = dst * src.
-+ void imul(Register dst, Register src, int32_t imm32); // dst = src * imm32.
-+ void imul(Register dst, const Operand& src, int32_t imm32);
-+
-+ void inc(Register dst);
-+ void inc(const Operand& dst);
-+
-+ void lea(Register dst, const Operand& src);
-+
-+ // Unsigned multiply instruction.
-+ void mul(Register src); // edx:eax = eax * reg.
-+
-+ void neg(Register dst);
-+ void neg(const Operand& dst);
-+
-+ void not_(Register dst);
-+ void not_(const Operand& dst);
-+
-+ void or_(Register dst, int32_t imm32);
-+ void or_(Register dst, Register src) { or_(dst, Operand(src)); }
-+ void or_(Register dst, const Operand& src);
-+ void or_(const Operand& dst, Register src);
-+ void or_(Register dst, const Immediate& imm) { or_(Operand(dst), imm); }
-+ void or_(const Operand& dst, const Immediate& x);
-+
-+ void rcl(Register dst, uint8_t imm8);
-+ void rcr(Register dst, uint8_t imm8);
-+
-+ void ror(Register dst, uint8_t imm8) { ror(Operand(dst), imm8); }
-+ void ror(const Operand& dst, uint8_t imm8);
-+ void ror_cl(Register dst) { ror_cl(Operand(dst)); }
-+ void ror_cl(const Operand& dst);
-+
-+ void sar(Register dst, uint8_t imm8) { sar(Operand(dst), imm8); }
-+ void sar(const Operand& dst, uint8_t imm8);
-+ void sar_cl(Register dst) { sar_cl(Operand(dst)); }
-+ void sar_cl(const Operand& dst);
-+
-+ void sbb(Register dst, const Operand& src);
-+
-+ void shl(Register dst, uint8_t imm8) { shl(Operand(dst), imm8); }
-+ void shl(const Operand& dst, uint8_t imm8);
-+ void shl_cl(Register dst) { shl_cl(Operand(dst)); }
-+ void shl_cl(const Operand& dst);
-+ void shld(Register dst, Register src, uint8_t shift);
-+ void shld_cl(Register dst, Register src);
-+
-+ void shr(Register dst, uint8_t imm8) { shr(Operand(dst), imm8); }
-+ void shr(const Operand& dst, uint8_t imm8);
-+ void shr_cl(Register dst) { shr_cl(Operand(dst)); }
-+ void shr_cl(const Operand& dst);
-+ void shrd(Register dst, Register src, uint8_t shift);
-+ void shrd_cl(Register dst, Register src) { shrd_cl(Operand(dst), src); }
-+ void shrd_cl(const Operand& dst, Register src);
-+
-+ void sub(Register dst, const Immediate& imm) { sub(Operand(dst), imm); }
-+ void sub(const Operand& dst, const Immediate& x);
-+ void sub(Register dst, Register src) { sub(dst, Operand(src)); }
-+ void sub(Register dst, const Operand& src);
-+ void sub(const Operand& dst, Register src);
-+
-+ void test(Register reg, const Immediate& imm);
-+ void test(Register reg0, Register reg1) { test(reg0, Operand(reg1)); }
-+ void test(Register reg, const Operand& op);
-+ void test(const Operand& op, const Immediate& imm);
-+ void test(const Operand& op, Register reg) { test(reg, op); }
-+ void test_b(Register reg, const Operand& op);
-+ void test_b(Register reg, Immediate imm8);
-+ void test_b(const Operand& op, Immediate imm8);
-+ void test_b(const Operand& op, Register reg) { test_b(reg, op); }
-+ void test_b(Register dst, Register src) { test_b(dst, Operand(src)); }
-+ void test_w(Register reg, const Operand& op);
-+ void test_w(Register reg, Immediate imm16);
-+ void test_w(const Operand& op, Immediate imm16);
-+ void test_w(const Operand& op, Register reg) { test_w(reg, op); }
-+ void test_w(Register dst, Register src) { test_w(dst, Operand(src)); }
-+
-+ void xor_(Register dst, int32_t imm32);
-+ void xor_(Register dst, Register src) { xor_(dst, Operand(src)); }
-+ void xor_(Register dst, const Operand& src);
-+ void xor_(const Operand& dst, Register src);
-+ void xor_(Register dst, const Immediate& imm) { xor_(Operand(dst), imm); }
-+ void xor_(const Operand& dst, const Immediate& x);
-+
-+ // Bit operations.
-+ void bt(const Operand& dst, Register src);
-+ void bts(Register dst, Register src) { bts(Operand(dst), src); }
-+ void bts(const Operand& dst, Register src);
-+ void bsr(Register dst, Register src) { bsr(dst, Operand(src)); }
-+ void bsr(Register dst, const Operand& src);
-+ void bsf(Register dst, Register src) { bsf(dst, Operand(src)); }
-+ void bsf(Register dst, const Operand& src);
-+
-+ // Miscellaneous
-+ void hlt();
-+ void int3();
-+ void nop();
-+ void ret(int imm16);
-+ void ud2();
-+
-+ // Label operations & relative jumps (PPUM Appendix D)
-+ //
-+ // Takes a branch opcode (cc) and a label (L) and generates
-+ // either a backward branch or a forward branch and links it
-+ // to the label fixup chain. Usage:
-+ //
-+ // Label L; // unbound label
-+ // j(cc, &L); // forward branch to unbound label
-+ // bind(&L); // bind label to the current pc
-+ // j(cc, &L); // backward branch to bound label
-+ // bind(&L); // illegal: a label may be bound only once
-+ //
-+ // Note: The same Label can be used for forward and backward branches
-+ // but it may be bound only once.
-+
-+ void bind(Label* L); // binds an unbound label L to the current code position
-+
-+ // Calls
-+ void call(Label* L);
-+ void call(byte* entry, RelocInfo::Mode rmode);
-+ int CallSize(const Operand& adr);
-+ void call(Register reg) { call(Operand(reg)); }
-+ void call(const Operand& adr);
-+ int CallSize(Handle<Code> code, RelocInfo::Mode mode);
-+ void call(Handle<Code> code, RelocInfo::Mode rmode);
-+ void call(CodeStub* stub);
-+
-+ // Jumps
-+ // unconditional jump to L
-+ void jmp(Label* L, Label::Distance distance = Label::kFar);
-+ void jmp(byte* entry, RelocInfo::Mode rmode);
-+ void jmp(Register reg) { jmp(Operand(reg)); }
-+ void jmp(const Operand& adr);
-+ void jmp(Handle<Code> code, RelocInfo::Mode rmode);
-+
-+ // Conditional jumps
-+ void j(Condition cc,
-+ Label* L,
-+ Label::Distance distance = Label::kFar);
-+ void j(Condition cc, byte* entry, RelocInfo::Mode rmode);
-+ void j(Condition cc, Handle<Code> code,
-+ RelocInfo::Mode rmode = RelocInfo::CODE_TARGET);
-+
-+ // Floating-point operations
-+ void fld(int i);
-+ void fstp(int i);
-+
-+ void fld1();
-+ void fldz();
-+ void fldpi();
-+ void fldln2();
-+
-+ void fld_s(const Operand& adr);
-+ void fld_d(const Operand& adr);
-+
-+ void fstp_s(const Operand& adr);
-+ void fst_s(const Operand& adr);
-+ void fstp_d(const Operand& adr);
-+ void fst_d(const Operand& adr);
-+
-+ void fild_s(const Operand& adr);
-+ void fild_d(const Operand& adr);
-+
-+ void fist_s(const Operand& adr);
-+
-+ void fistp_s(const Operand& adr);
-+ void fistp_d(const Operand& adr);
-+
-+ // The fisttp instructions require SSE3.
-+ void fisttp_s(const Operand& adr);
-+ void fisttp_d(const Operand& adr);
-+
-+ void fabs();
-+ void fchs();
-+ void fsqrt();
-+ void fcos();
-+ void fsin();
-+ void fptan();
-+ void fyl2x();
-+ void f2xm1();
-+ void fscale();
-+ void fninit();
-+
-+ void fadd(int i);
-+ void fadd_i(int i);
-+ void fadd_d(const Operand& adr);
-+ void fsub(int i);
-+ void fsub_i(int i);
-+ void fsub_d(const Operand& adr);
-+ void fsubr_d(const Operand& adr);
-+ void fmul(int i);
-+ void fmul_d(const Operand& adr);
-+ void fmul_i(int i);
-+ void fdiv(int i);
-+ void fdiv_d(const Operand& adr);
-+ void fdivr_d(const Operand& adr);
-+ void fdiv_i(int i);
-+
-+ void fisub_s(const Operand& adr);
-+
-+ void faddp(int i = 1);
-+ void fsubp(int i = 1);
-+ void fsubr(int i = 1);
-+ void fsubrp(int i = 1);
-+ void fmulp(int i = 1);
-+ void fdivp(int i = 1);
-+ void fprem();
-+ void fprem1();
-+
-+ void fxch(int i = 1);
-+ void fincstp();
-+ void ffree(int i = 0);
-+
-+ void ftst();
-+ void fxam();
-+ void fucomp(int i);
-+ void fucompp();
-+ void fucomi(int i);
-+ void fucomip();
-+ void fcompp();
-+ void fnstsw_ax();
-+ void fldcw(const Operand& adr);
-+ void fnstcw(const Operand& adr);
-+ void fwait();
-+ void fnclex();
-+ void fnsave(const Operand& adr);
-+ void frstor(const Operand& adr);
-+
-+ void frndint();
-+
-+ void sahf();
-+ void setcc(Condition cc, Register reg);
-+
-+ void cpuid();
-+
-+ // TODO(lrn): Need SFENCE for movnt?
-+
-+ // Check the code size generated from label to here.
-+ int SizeOfCodeGeneratedSince(Label* label) {
-+ return pc_offset() - label->pos();
-+ }
-+
-+ // Mark address of a debug break slot.
-+ void RecordDebugBreakSlot(RelocInfo::Mode mode);
-+
-+ // Record a comment relocation entry that can be used by a disassembler.
-+ // Use --code-comments to enable.
-+ void RecordComment(const char* msg);
-+
-+ // Record a deoptimization reason that can be used by a log or cpu profiler.
-+ // Use --trace-deopt to enable.
-+ void RecordDeoptReason(DeoptimizeReason reason, SourcePosition position,
-+ int id);
-+
-+ // Writes a single byte or word of data in the code stream. Used for
-+ // inline tables, e.g., jump-tables.
-+ void db(uint8_t data);
-+ void dd(uint32_t data);
-+ void dq(uint64_t data);
-+ void dp(uintptr_t data) { dd(data); }
-+ void dd(Label* label);
-+
-+ // Check if there is less than kGap bytes available in the buffer.
-+ // If this is the case, we need to grow the buffer before emitting
-+ // an instruction or relocation information.
-+ inline bool buffer_overflow() const {
-+ return pc_ >= reloc_info_writer.pos() - kGap;
-+ }
-+
-+ // Get the number of bytes available in the buffer.
-+ inline int available_space() const { return reloc_info_writer.pos() - pc_; }
-+
-+ static bool IsNop(Address addr);
-+
-+ int relocation_writer_size() {
-+ return (buffer_ + buffer_size_) - reloc_info_writer.pos();
-+ }
-+
-+ // Avoid overflows for displacements etc.
-+ static const int kMaximalBufferSize = 512*MB;
-+
-+ byte byte_at(int pos) { return buffer_[pos]; }
-+ void set_byte_at(int pos, byte value) { buffer_[pos] = value; }
-+
-+ void PatchConstantPoolAccessInstruction(int pc_offset, int offset,
-+ ConstantPoolEntry::Access access,
-+ ConstantPoolEntry::Type type) {
-+ // No embedded constant pool support.
-+ UNREACHABLE();
-+ }
-+
-+ protected:
-+ byte* addr_at(int pos) { return buffer_ + pos; }
-+
-+
-+ private:
-+ uint32_t long_at(int pos) {
-+ return *reinterpret_cast<uint32_t*>(addr_at(pos));
-+ }
-+ void long_at_put(int pos, uint32_t x) {
-+ *reinterpret_cast<uint32_t*>(addr_at(pos)) = x;
-+ }
-+
-+ // code emission
-+ void GrowBuffer();
-+ inline void emit(uint32_t x);
-+ inline void emit(Handle<HeapObject> handle);
-+ inline void emit(uint32_t x, RelocInfo::Mode rmode);
-+ inline void emit(Handle<Code> code, RelocInfo::Mode rmode);
-+ inline void emit(const Immediate& x);
-+ inline void emit_b(Immediate x);
-+ inline void emit_w(const Immediate& x);
-+ inline void emit_q(uint64_t x);
-+
-+ // Emit the code-object-relative offset of the label's position
-+ inline void emit_code_relative_offset(Label* label);
-+
-+ // instruction generation
-+ void emit_arith_b(int op1, int op2, Register dst, int imm8);
-+
-+ // Emit a basic arithmetic instruction (i.e. first byte of the family is 0x81)
-+ // with a given destination expression and an immediate operand. It attempts
-+ // to use the shortest encoding possible.
-+ // sel specifies the /n in the modrm byte (see the Intel PRM).
-+ void emit_arith(int sel, Operand dst, const Immediate& x);
-+
-+ void emit_operand(Register reg, const Operand& adr);
-+
-+ void emit_label(Label* label);
-+
-+ void emit_farith(int b1, int b2, int i);
-+
-+ // labels
-+ void print(Label* L);
-+ void bind_to(Label* L, int pos);
-+
-+ // displacements
-+ inline Displacement disp_at(Label* L);
-+ inline void disp_at_put(Label* L, Displacement disp);
-+ inline void emit_disp(Label* L, Displacement::Type type);
-+ inline void emit_near_disp(Label* L);
-+
-+ // record reloc info for current pc_
-+ void RecordRelocInfo(RelocInfo::Mode rmode, intptr_t data = 0);
-+
-+ friend class CodePatcher;
-+ friend class EnsureSpace;
-+
-+ // Internal reference positions, required for (potential) patching in
-+ // GrowBuffer(); contains only those internal references whose labels
-+ // are already bound.
-+ std::deque<int> internal_reference_positions_;
-+
-+ // code generation
-+ RelocInfoWriter reloc_info_writer;
-+
-+ // The following functions help with avoiding allocations of embedded heap
-+ // objects during the code assembly phase. {RequestHeapObject} records the
-+ // need for a future heap number allocation or code stub generation. After
-+ // code assembly, {AllocateAndInstallRequestedHeapObjects} will allocate these
-+ // objects and place them where they are expected (determined by the pc offset
-+ // associated with each request). That is, for each request, it will patch the
-+ // dummy heap object handle that we emitted during code assembly with the
-+ // actual heap object handle.
-+ void RequestHeapObject(HeapObjectRequest request);
-+ void AllocateAndInstallRequestedHeapObjects(Isolate* isolate);
-+
-+ std::forward_list<HeapObjectRequest> heap_object_requests_;
-+};
-+
-+
-+// Helper class that ensures that there is enough space for generating
-+// instructions and relocation information. The constructor makes
-+// sure that there is enough space and (in debug mode) the destructor
-+// checks that we did not generate too much.
-+class EnsureSpace BASE_EMBEDDED {
-+ public:
-+ explicit EnsureSpace(Assembler* assembler) : assembler_(assembler) {
-+ if (assembler_->buffer_overflow()) assembler_->GrowBuffer();
-+#ifdef DEBUG
-+ space_before_ = assembler_->available_space();
-+#endif
-+ }
-+
-+#ifdef DEBUG
-+ ~EnsureSpace() {
-+ int bytes_generated = space_before_ - assembler_->available_space();
-+ DCHECK(bytes_generated < assembler_->kGap);
-+ }
-+#endif
-+
-+ private:
-+ Assembler* assembler_;
-+#ifdef DEBUG
-+ int space_before_;
-+#endif
-+};
-+
-+} // namespace internal
-+} // namespace v8
-+
-+#endif // V8_X87_ASSEMBLER_X87_H_
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/x87/assembler-x87-inl.h qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/x87/assembler-x87-inl.h
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/x87/assembler-x87-inl.h 1970-01-01 01:00:00.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/x87/assembler-x87-inl.h 2018-02-18 19:00:54.195418178 +0100
-@@ -0,0 +1,528 @@
-+// Copyright (c) 1994-2006 Sun Microsystems Inc.
-+// All Rights Reserved.
-+//
-+// Redistribution and use in source and binary forms, with or without
-+// modification, are permitted provided that the following conditions are
-+// met:
-+//
-+// - Redistributions of source code must retain the above copyright notice,
-+// this list of conditions and the following disclaimer.
-+//
-+// - Redistribution in binary form must reproduce the above copyright
-+// notice, this list of conditions and the following disclaimer in the
-+// documentation and/or other materials provided with the distribution.
-+//
-+// - Neither the name of Sun Microsystems or the names of contributors may
-+// be used to endorse or promote products derived from this software without
-+// specific prior written permission.
-+//
-+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
-+// IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
-+// THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
-+// PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
-+// CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
-+// EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
-+// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
-+// PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
-+// LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
-+// NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
-+// SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-+
-+// The original source code covered by the above license above has been
-+// modified significantly by Google Inc.
-+// Copyright 2012 the V8 project authors. All rights reserved.
-+
-+// A light-weight IA32 Assembler.
-+
-+#ifndef V8_X87_ASSEMBLER_X87_INL_H_
-+#define V8_X87_ASSEMBLER_X87_INL_H_
-+
-+#include "src/x87/assembler-x87.h"
-+
-+#include "src/assembler.h"
-+#include "src/debug/debug.h"
-+#include "src/objects-inl.h"
-+
-+namespace v8 {
-+namespace internal {
-+
-+bool CpuFeatures::SupportsCrankshaft() { return true; }
-+
-+bool CpuFeatures::SupportsWasmSimd128() { return false; }
-+
-+static const byte kCallOpcode = 0xE8;
-+static const int kNoCodeAgeSequenceLength = 5;
-+
-+
-+// The modes possibly affected by apply must be in kApplyMask.
-+void RelocInfo::apply(intptr_t delta) {
-+ if (IsRuntimeEntry(rmode_) || IsCodeTarget(rmode_)) {
-+ int32_t* p = reinterpret_cast<int32_t*>(pc_);
-+ *p -= delta; // Relocate entry.
-+ } else if (IsCodeAgeSequence(rmode_)) {
-+ if (*pc_ == kCallOpcode) {
-+ int32_t* p = reinterpret_cast<int32_t*>(pc_ + 1);
-+ *p -= delta; // Relocate entry.
-+ }
-+ } else if (IsDebugBreakSlot(rmode_) && IsPatchedDebugBreakSlotSequence()) {
-+ // Special handling of a debug break slot when a break point is set (call
-+ // instruction has been inserted).
-+ int32_t* p = reinterpret_cast<int32_t*>(
-+ pc_ + Assembler::kPatchDebugBreakSlotAddressOffset);
-+ *p -= delta; // Relocate entry.
-+ } else if (IsInternalReference(rmode_)) {
-+ // absolute code pointer inside code object moves with the code object.
-+ int32_t* p = reinterpret_cast<int32_t*>(pc_);
-+ *p += delta; // Relocate entry.
-+ }
-+}
-+
-+
-+Address RelocInfo::target_address() {
-+ DCHECK(IsCodeTarget(rmode_) || IsRuntimeEntry(rmode_));
-+ return Assembler::target_address_at(pc_, host_);
-+}
-+
-+Address RelocInfo::target_address_address() {
-+ DCHECK(IsCodeTarget(rmode_) || IsRuntimeEntry(rmode_)
-+ || rmode_ == EMBEDDED_OBJECT
-+ || rmode_ == EXTERNAL_REFERENCE);
-+ return reinterpret_cast<Address>(pc_);
-+}
-+
-+
-+Address RelocInfo::constant_pool_entry_address() {
-+ UNREACHABLE();
-+}
-+
-+
-+int RelocInfo::target_address_size() {
-+ return Assembler::kSpecialTargetSize;
-+}
-+
-+HeapObject* RelocInfo::target_object() {
-+ DCHECK(IsCodeTarget(rmode_) || rmode_ == EMBEDDED_OBJECT);
-+ return HeapObject::cast(Memory::Object_at(pc_));
-+}
-+
-+Handle<HeapObject> RelocInfo::target_object_handle(Assembler* origin) {
-+ DCHECK(IsCodeTarget(rmode_) || rmode_ == EMBEDDED_OBJECT);
-+ return Handle<HeapObject>::cast(Memory::Object_Handle_at(pc_));
-+}
-+
-+void RelocInfo::set_target_object(HeapObject* target,
-+ WriteBarrierMode write_barrier_mode,
-+ ICacheFlushMode icache_flush_mode) {
-+ DCHECK(IsCodeTarget(rmode_) || rmode_ == EMBEDDED_OBJECT);
-+ Memory::Object_at(pc_) = target;
-+ if (icache_flush_mode != SKIP_ICACHE_FLUSH) {
-+ Assembler::FlushICache(target->GetIsolate(), pc_, sizeof(Address));
-+ }
-+ if (write_barrier_mode == UPDATE_WRITE_BARRIER && host() != nullptr) {
-+ host()->GetHeap()->RecordWriteIntoCode(host(), this, target);
-+ host()->GetHeap()->incremental_marking()->RecordWriteIntoCode(host(), this,
-+ target);
-+ }
-+}
-+
-+
-+Address RelocInfo::target_external_reference() {
-+ DCHECK(rmode_ == RelocInfo::EXTERNAL_REFERENCE);
-+ return Memory::Address_at(pc_);
-+}
-+
-+
-+Address RelocInfo::target_internal_reference() {
-+ DCHECK(rmode_ == INTERNAL_REFERENCE);
-+ return Memory::Address_at(pc_);
-+}
-+
-+
-+Address RelocInfo::target_internal_reference_address() {
-+ DCHECK(rmode_ == INTERNAL_REFERENCE);
-+ return reinterpret_cast<Address>(pc_);
-+}
-+
-+
-+Address RelocInfo::target_runtime_entry(Assembler* origin) {
-+ DCHECK(IsRuntimeEntry(rmode_));
-+ return reinterpret_cast<Address>(*reinterpret_cast<int32_t*>(pc_));
-+}
-+
-+void RelocInfo::set_target_runtime_entry(Isolate* isolate, Address target,
-+ WriteBarrierMode write_barrier_mode,
-+ ICacheFlushMode icache_flush_mode) {
-+ DCHECK(IsRuntimeEntry(rmode_));
-+ if (target_address() != target) {
-+ set_target_address(isolate, target, write_barrier_mode, icache_flush_mode);
-+ }
-+}
-+
-+
-+Handle<Cell> RelocInfo::target_cell_handle() {
-+ DCHECK(rmode_ == RelocInfo::CELL);
-+ Address address = Memory::Address_at(pc_);
-+ return Handle<Cell>(reinterpret_cast<Cell**>(address));
-+}
-+
-+
-+Cell* RelocInfo::target_cell() {
-+ DCHECK(rmode_ == RelocInfo::CELL);
-+ return Cell::FromValueAddress(Memory::Address_at(pc_));
-+}
-+
-+
-+void RelocInfo::set_target_cell(Cell* cell,
-+ WriteBarrierMode write_barrier_mode,
-+ ICacheFlushMode icache_flush_mode) {
-+ DCHECK(cell->IsCell());
-+ DCHECK(rmode_ == RelocInfo::CELL);
-+ Address address = cell->address() + Cell::kValueOffset;
-+ Memory::Address_at(pc_) = address;
-+ if (icache_flush_mode != SKIP_ICACHE_FLUSH) {
-+ Assembler::FlushICache(cell->GetIsolate(), pc_, sizeof(Address));
-+ }
-+ if (write_barrier_mode == UPDATE_WRITE_BARRIER && host() != NULL) {
-+ host()->GetHeap()->incremental_marking()->RecordWriteIntoCode(host(), this,
-+ cell);
-+ }
-+}
-+
-+Handle<Code> RelocInfo::code_age_stub_handle(Assembler* origin) {
-+ DCHECK(rmode_ == RelocInfo::CODE_AGE_SEQUENCE);
-+ DCHECK(*pc_ == kCallOpcode);
-+ return Handle<Code>::cast(Memory::Object_Handle_at(pc_ + 1));
-+}
-+
-+
-+Code* RelocInfo::code_age_stub() {
-+ DCHECK(rmode_ == RelocInfo::CODE_AGE_SEQUENCE);
-+ DCHECK(*pc_ == kCallOpcode);
-+ return Code::GetCodeFromTargetAddress(
-+ Assembler::target_address_at(pc_ + 1, host_));
-+}
-+
-+
-+void RelocInfo::set_code_age_stub(Code* stub,
-+ ICacheFlushMode icache_flush_mode) {
-+ DCHECK(*pc_ == kCallOpcode);
-+ DCHECK(rmode_ == RelocInfo::CODE_AGE_SEQUENCE);
-+ Assembler::set_target_address_at(stub->GetIsolate(), pc_ + 1, host_,
-+ stub->instruction_start(),
-+ icache_flush_mode);
-+}
-+
-+
-+Address RelocInfo::debug_call_address() {
-+ DCHECK(IsDebugBreakSlot(rmode()) && IsPatchedDebugBreakSlotSequence());
-+ Address location = pc_ + Assembler::kPatchDebugBreakSlotAddressOffset;
-+ return Assembler::target_address_at(location, host_);
-+}
-+
-+void RelocInfo::set_debug_call_address(Isolate* isolate, Address target) {
-+ DCHECK(IsDebugBreakSlot(rmode()) && IsPatchedDebugBreakSlotSequence());
-+ Address location = pc_ + Assembler::kPatchDebugBreakSlotAddressOffset;
-+ Assembler::set_target_address_at(isolate, location, host_, target);
-+ if (host() != NULL) {
-+ Code* target_code = Code::GetCodeFromTargetAddress(target);
-+ host()->GetHeap()->incremental_marking()->RecordWriteIntoCode(host(), this,
-+ target_code);
-+ }
-+}
-+
-+void RelocInfo::WipeOut(Isolate* isolate) {
-+ if (IsEmbeddedObject(rmode_) || IsExternalReference(rmode_) ||
-+ IsInternalReference(rmode_)) {
-+ Memory::Address_at(pc_) = NULL;
-+ } else if (IsCodeTarget(rmode_) || IsRuntimeEntry(rmode_)) {
-+ // Effectively write zero into the relocation.
-+ Assembler::set_target_address_at(isolate, pc_, host_,
-+ pc_ + sizeof(int32_t));
-+ } else {
-+ UNREACHABLE();
-+ }
-+}
-+
-+template <typename ObjectVisitor>
-+void RelocInfo::Visit(Isolate* isolate, ObjectVisitor* visitor) {
-+ RelocInfo::Mode mode = rmode();
-+ if (mode == RelocInfo::EMBEDDED_OBJECT) {
-+ visitor->VisitEmbeddedPointer(host(), this);
-+ Assembler::FlushICache(isolate, pc_, sizeof(Address));
-+ } else if (RelocInfo::IsCodeTarget(mode)) {
-+ visitor->VisitCodeTarget(host(), this);
-+ } else if (mode == RelocInfo::CELL) {
-+ visitor->VisitCellPointer(host(), this);
-+ } else if (mode == RelocInfo::EXTERNAL_REFERENCE) {
-+ visitor->VisitExternalReference(host(), this);
-+ } else if (mode == RelocInfo::INTERNAL_REFERENCE) {
-+ visitor->VisitInternalReference(host(), this);
-+ } else if (RelocInfo::IsCodeAgeSequence(mode)) {
-+ visitor->VisitCodeAgeSequence(host(), this);
-+ } else if (RelocInfo::IsDebugBreakSlot(mode) &&
-+ IsPatchedDebugBreakSlotSequence()) {
-+ visitor->VisitDebugTarget(host(), this);
-+ } else if (IsRuntimeEntry(mode)) {
-+ visitor->VisitRuntimeEntry(host(), this);
-+ }
-+}
-+
-+
-+template<typename StaticVisitor>
-+void RelocInfo::Visit(Heap* heap) {
-+ RelocInfo::Mode mode = rmode();
-+ if (mode == RelocInfo::EMBEDDED_OBJECT) {
-+ StaticVisitor::VisitEmbeddedPointer(heap, this);
-+ Assembler::FlushICache(heap->isolate(), pc_, sizeof(Address));
-+ } else if (RelocInfo::IsCodeTarget(mode)) {
-+ StaticVisitor::VisitCodeTarget(heap, this);
-+ } else if (mode == RelocInfo::CELL) {
-+ StaticVisitor::VisitCell(heap, this);
-+ } else if (mode == RelocInfo::EXTERNAL_REFERENCE) {
-+ StaticVisitor::VisitExternalReference(this);
-+ } else if (mode == RelocInfo::INTERNAL_REFERENCE) {
-+ StaticVisitor::VisitInternalReference(this);
-+ } else if (RelocInfo::IsCodeAgeSequence(mode)) {
-+ StaticVisitor::VisitCodeAgeSequence(heap, this);
-+ } else if (RelocInfo::IsDebugBreakSlot(mode) &&
-+ IsPatchedDebugBreakSlotSequence()) {
-+ StaticVisitor::VisitDebugTarget(heap, this);
-+ } else if (IsRuntimeEntry(mode)) {
-+ StaticVisitor::VisitRuntimeEntry(this);
-+ }
-+}
-+
-+
-+
-+Immediate::Immediate(int x) {
-+ value_.immediate = x;
-+ rmode_ = RelocInfo::NONE32;
-+}
-+
-+Immediate::Immediate(Address x, RelocInfo::Mode rmode) {
-+ value_.immediate = reinterpret_cast<int32_t>(x);
-+ rmode_ = rmode;
-+}
-+
-+Immediate::Immediate(const ExternalReference& ext) {
-+ value_.immediate = reinterpret_cast<int32_t>(ext.address());
-+ rmode_ = RelocInfo::EXTERNAL_REFERENCE;
-+}
-+
-+
-+Immediate::Immediate(Label* internal_offset) {
-+ value_.immediate = reinterpret_cast<int32_t>(internal_offset);
-+ rmode_ = RelocInfo::INTERNAL_REFERENCE;
-+}
-+
-+
-+Immediate::Immediate(Handle<HeapObject> handle) {
-+ value_.immediate = reinterpret_cast<intptr_t>(handle.address());
-+ rmode_ = RelocInfo::EMBEDDED_OBJECT;
-+}
-+
-+
-+Immediate::Immediate(Smi* value) {
-+ value_.immediate = reinterpret_cast<intptr_t>(value);
-+ rmode_ = RelocInfo::NONE32;
-+}
-+
-+
-+Immediate::Immediate(Address addr) {
-+ value_.immediate = reinterpret_cast<int32_t>(addr);
-+ rmode_ = RelocInfo::NONE32;
-+}
-+
-+
-+void Assembler::emit(uint32_t x) {
-+ *reinterpret_cast<uint32_t*>(pc_) = x;
-+ pc_ += sizeof(uint32_t);
-+}
-+
-+
-+void Assembler::emit_q(uint64_t x) {
-+ *reinterpret_cast<uint64_t*>(pc_) = x;
-+ pc_ += sizeof(uint64_t);
-+}
-+
-+
-+void Assembler::emit(Handle<HeapObject> handle) {
-+ emit(reinterpret_cast<intptr_t>(handle.address()),
-+ RelocInfo::EMBEDDED_OBJECT);
-+}
-+
-+
-+void Assembler::emit(uint32_t x, RelocInfo::Mode rmode) {
-+ if (!RelocInfo::IsNone(rmode) && rmode != RelocInfo::CODE_AGE_SEQUENCE) {
-+ RecordRelocInfo(rmode);
-+ }
-+ emit(x);
-+}
-+
-+
-+void Assembler::emit(Handle<Code> code, RelocInfo::Mode rmode) {
-+ emit(reinterpret_cast<intptr_t>(code.address()), rmode);
-+}
-+
-+
-+void Assembler::emit(const Immediate& x) {
-+ if (x.rmode_ == RelocInfo::INTERNAL_REFERENCE) {
-+ Label* label = reinterpret_cast<Label*>(x.immediate());
-+ emit_code_relative_offset(label);
-+ return;
-+ }
-+ if (!RelocInfo::IsNone(x.rmode_)) RecordRelocInfo(x.rmode_);
-+ if (x.is_heap_object_request()) {
-+ RequestHeapObject(x.heap_object_request());
-+ emit(0);
-+ } else {
-+ emit(x.immediate());
-+ }
-+}
-+
-+
-+void Assembler::emit_code_relative_offset(Label* label) {
-+ if (label->is_bound()) {
-+ int32_t pos;
-+ pos = label->pos() + Code::kHeaderSize - kHeapObjectTag;
-+ emit(pos);
-+ } else {
-+ emit_disp(label, Displacement::CODE_RELATIVE);
-+ }
-+}
-+
-+void Assembler::emit_b(Immediate x) {
-+ DCHECK(x.is_int8() || x.is_uint8());
-+ uint8_t value = static_cast<uint8_t>(x.immediate());
-+ *pc_++ = value;
-+}
-+
-+void Assembler::emit_w(const Immediate& x) {
-+ DCHECK(RelocInfo::IsNone(x.rmode_));
-+ uint16_t value = static_cast<uint16_t>(x.immediate());
-+ reinterpret_cast<uint16_t*>(pc_)[0] = value;
-+ pc_ += sizeof(uint16_t);
-+}
-+
-+
-+Address Assembler::target_address_at(Address pc, Address constant_pool) {
-+ return pc + sizeof(int32_t) + *reinterpret_cast<int32_t*>(pc);
-+}
-+
-+
-+void Assembler::set_target_address_at(Isolate* isolate, Address pc,
-+ Address constant_pool, Address target,
-+ ICacheFlushMode icache_flush_mode) {
-+ DCHECK_IMPLIES(isolate == nullptr, icache_flush_mode == SKIP_ICACHE_FLUSH);
-+ int32_t* p = reinterpret_cast<int32_t*>(pc);
-+ *p = target - (pc + sizeof(int32_t));
-+ if (icache_flush_mode != SKIP_ICACHE_FLUSH) {
-+ Assembler::FlushICache(isolate, p, sizeof(int32_t));
-+ }
-+}
-+
-+Address Assembler::target_address_at(Address pc, Code* code) {
-+ Address constant_pool = code ? code->constant_pool() : NULL;
-+ return target_address_at(pc, constant_pool);
-+}
-+
-+void Assembler::set_target_address_at(Isolate* isolate, Address pc, Code* code,
-+ Address target,
-+ ICacheFlushMode icache_flush_mode) {
-+ Address constant_pool = code ? code->constant_pool() : NULL;
-+ set_target_address_at(isolate, pc, constant_pool, target, icache_flush_mode);
-+}
-+
-+Address Assembler::target_address_from_return_address(Address pc) {
-+ return pc - kCallTargetAddressOffset;
-+}
-+
-+
-+Displacement Assembler::disp_at(Label* L) {
-+ return Displacement(long_at(L->pos()));
-+}
-+
-+
-+void Assembler::disp_at_put(Label* L, Displacement disp) {
-+ long_at_put(L->pos(), disp.data());
-+}
-+
-+
-+void Assembler::emit_disp(Label* L, Displacement::Type type) {
-+ Displacement disp(L, type);
-+ L->link_to(pc_offset());
-+ emit(static_cast<int>(disp.data()));
-+}
-+
-+
-+void Assembler::emit_near_disp(Label* L) {
-+ byte disp = 0x00;
-+ if (L->is_near_linked()) {
-+ int offset = L->near_link_pos() - pc_offset();
-+ DCHECK(is_int8(offset));
-+ disp = static_cast<byte>(offset & 0xFF);
-+ }
-+ L->link_to(pc_offset(), Label::kNear);
-+ *pc_++ = disp;
-+}
-+
-+
-+void Assembler::deserialization_set_target_internal_reference_at(
-+ Isolate* isolate, Address pc, Address target, RelocInfo::Mode mode) {
-+ Memory::Address_at(pc) = target;
-+}
-+
-+
-+void Operand::set_modrm(int mod, Register rm) {
-+ DCHECK((mod & -4) == 0);
-+ buf_[0] = mod << 6 | rm.code();
-+ len_ = 1;
-+}
-+
-+
-+void Operand::set_sib(ScaleFactor scale, Register index, Register base) {
-+ DCHECK(len_ == 1);
-+ DCHECK((scale & -4) == 0);
-+ // Use SIB with no index register only for base esp.
-+ DCHECK(!index.is(esp) || base.is(esp));
-+ buf_[1] = scale << 6 | index.code() << 3 | base.code();
-+ len_ = 2;
-+}
-+
-+
-+void Operand::set_disp8(int8_t disp) {
-+ DCHECK(len_ == 1 || len_ == 2);
-+ *reinterpret_cast<int8_t*>(&buf_[len_++]) = disp;
-+}
-+
-+
-+void Operand::set_dispr(int32_t disp, RelocInfo::Mode rmode) {
-+ DCHECK(len_ == 1 || len_ == 2);
-+ int32_t* p = reinterpret_cast<int32_t*>(&buf_[len_]);
-+ *p = disp;
-+ len_ += sizeof(int32_t);
-+ rmode_ = rmode;
-+}
-+
-+Operand::Operand(Register reg) {
-+ // reg
-+ set_modrm(3, reg);
-+}
-+
-+
-+Operand::Operand(int32_t disp, RelocInfo::Mode rmode) {
-+ // [disp/r]
-+ set_modrm(0, ebp);
-+ set_dispr(disp, rmode);
-+}
-+
-+
-+Operand::Operand(Immediate imm) {
-+ // [disp/r]
-+ set_modrm(0, ebp);
-+ set_dispr(imm.immediate(), imm.rmode_);
-+}
-+} // namespace internal
-+} // namespace v8
-+
-+#endif // V8_X87_ASSEMBLER_X87_INL_H_
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/x87/codegen-x87.cc qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/x87/codegen-x87.cc
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/x87/codegen-x87.cc	1970-01-01 01:00:00.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/x87/codegen-x87.cc	2018-02-18 19:00:54.195418178 +0100
-@@ -0,0 +1,381 @@
-+// Copyright 2012 the V8 project authors. All rights reserved.
-+// Use of this source code is governed by a BSD-style license that can be
-+// found in the LICENSE file.
-+
-+#include "src/x87/codegen-x87.h"
-+
-+#if V8_TARGET_ARCH_X87
-+
-+#include "src/codegen.h"
-+#include "src/heap/heap.h"
-+#include "src/macro-assembler.h"
-+
-+namespace v8 {
-+namespace internal {
-+
-+
-+// -------------------------------------------------------------------------
-+// Platform-specific RuntimeCallHelper functions.
-+
-+void StubRuntimeCallHelper::BeforeCall(MacroAssembler* masm) const {
-+ masm->EnterFrame(StackFrame::INTERNAL);
-+ DCHECK(!masm->has_frame());
-+ masm->set_has_frame(true);
-+}
-+
-+
-+void StubRuntimeCallHelper::AfterCall(MacroAssembler* masm) const {
-+ masm->LeaveFrame(StackFrame::INTERNAL);
-+ DCHECK(masm->has_frame());
-+ masm->set_has_frame(false);
-+}
-+
-+
-+#define __ masm.
-+
-+
-+UnaryMathFunctionWithIsolate CreateSqrtFunction(Isolate* isolate) {
-+ size_t actual_size;
-+ // Allocate buffer in executable space.
-+ byte* buffer =
-+ static_cast<byte*>(base::OS::Allocate(1 * KB, &actual_size, true));
-+ if (buffer == nullptr) return nullptr;
-+
-+ MacroAssembler masm(isolate, buffer, static_cast<int>(actual_size),
-+ CodeObjectRequired::kNo);
-+ // Load double input into registers.
-+ __ fld_d(MemOperand(esp, 4));
-+ __ X87SetFPUCW(0x027F);
-+ __ fsqrt();
-+ __ X87SetFPUCW(0x037F);
-+ __ Ret();
-+
-+ CodeDesc desc;
-+ masm.GetCode(isolate, &desc);
-+ DCHECK(!RelocInfo::RequiresRelocation(isolate, desc));
-+
-+ Assembler::FlushICache(isolate, buffer, actual_size);
-+ base::OS::ProtectCode(buffer, actual_size);
-+ return FUNCTION_CAST<UnaryMathFunctionWithIsolate>(buffer);
-+}
-+
-+
-+// Helper functions for CreateMemMoveFunction.
-+#undef __
-+#define __ ACCESS_MASM(masm)
-+
-+enum Direction { FORWARD, BACKWARD };
-+enum Alignment { MOVE_ALIGNED, MOVE_UNALIGNED };
-+
-+
-+void MemMoveEmitPopAndReturn(MacroAssembler* masm) {
-+ __ pop(esi);
-+ __ pop(edi);
-+ __ ret(0);
-+}
-+
-+
-+#undef __
-+#define __ masm.
-+
-+
-+class LabelConverter {
-+ public:
-+ explicit LabelConverter(byte* buffer) : buffer_(buffer) {}
-+ int32_t address(Label* l) const {
-+ return reinterpret_cast<int32_t>(buffer_) + l->pos();
-+ }
-+ private:
-+ byte* buffer_;
-+};
-+
-+
-+MemMoveFunction CreateMemMoveFunction(Isolate* isolate) {
-+ size_t actual_size;
-+ // Allocate buffer in executable space.
-+ byte* buffer =
-+ static_cast<byte*>(base::OS::Allocate(1 * KB, &actual_size, true));
-+ if (buffer == nullptr) return nullptr;
-+ MacroAssembler masm(isolate, buffer, static_cast<int>(actual_size),
-+ CodeObjectRequired::kNo);
-+ LabelConverter conv(buffer);
-+
-+ // Generated code is put into a fixed, unmovable buffer, and not into
-+ // the V8 heap. We can't, and don't, refer to any relocatable addresses
-+ // (e.g. the JavaScript nan-object).
-+
-+ // 32-bit C declaration function calls pass arguments on stack.
-+
-+ // Stack layout:
-+ // esp[12]: Third argument, size.
-+ // esp[8]: Second argument, source pointer.
-+ // esp[4]: First argument, destination pointer.
-+ // esp[0]: return address
-+
-+ const int kDestinationOffset = 1 * kPointerSize;
-+ const int kSourceOffset = 2 * kPointerSize;
-+ const int kSizeOffset = 3 * kPointerSize;
-+
-+ int stack_offset = 0; // Update if we change the stack height.
-+
-+ Label backward, backward_much_overlap;
-+ Label forward_much_overlap, small_size, medium_size, pop_and_return;
-+ __ push(edi);
-+ __ push(esi);
-+ stack_offset += 2 * kPointerSize;
-+ Register dst = edi;
-+ Register src = esi;
-+ Register count = ecx;
-+ __ mov(dst, Operand(esp, stack_offset + kDestinationOffset));
-+ __ mov(src, Operand(esp, stack_offset + kSourceOffset));
-+ __ mov(count, Operand(esp, stack_offset + kSizeOffset));
-+
-+ __ cmp(dst, src);
-+ __ j(equal, &pop_and_return);
-+
-+ // No SSE2.
-+ Label forward;
-+ __ cmp(count, 0);
-+ __ j(equal, &pop_and_return);
-+ __ cmp(dst, src);
-+ __ j(above, &backward);
-+ __ jmp(&forward);
-+ {
-+ // Simple forward copier.
-+ Label forward_loop_1byte, forward_loop_4byte;
-+ __ bind(&forward_loop_4byte);
-+ __ mov(eax, Operand(src, 0));
-+ __ sub(count, Immediate(4));
-+ __ add(src, Immediate(4));
-+ __ mov(Operand(dst, 0), eax);
-+ __ add(dst, Immediate(4));
-+ __ bind(&forward); // Entry point.
-+ __ cmp(count, 3);
-+ __ j(above, &forward_loop_4byte);
-+ __ bind(&forward_loop_1byte);
-+ __ cmp(count, 0);
-+ __ j(below_equal, &pop_and_return);
-+ __ mov_b(eax, Operand(src, 0));
-+ __ dec(count);
-+ __ inc(src);
-+ __ mov_b(Operand(dst, 0), eax);
-+ __ inc(dst);
-+ __ jmp(&forward_loop_1byte);
-+ }
-+ {
-+ // Simple backward copier.
-+ Label backward_loop_1byte, backward_loop_4byte, entry_shortcut;
-+ __ bind(&backward);
-+ __ add(src, count);
-+ __ add(dst, count);
-+ __ cmp(count, 3);
-+ __ j(below_equal, &entry_shortcut);
-+
-+ __ bind(&backward_loop_4byte);
-+ __ sub(src, Immediate(4));
-+ __ sub(count, Immediate(4));
-+ __ mov(eax, Operand(src, 0));
-+ __ sub(dst, Immediate(4));
-+ __ mov(Operand(dst, 0), eax);
-+ __ cmp(count, 3);
-+ __ j(above, &backward_loop_4byte);
-+ __ bind(&backward_loop_1byte);
-+ __ cmp(count, 0);
-+ __ j(below_equal, &pop_and_return);
-+ __ bind(&entry_shortcut);
-+ __ dec(src);
-+ __ dec(count);
-+ __ mov_b(eax, Operand(src, 0));
-+ __ dec(dst);
-+ __ mov_b(Operand(dst, 0), eax);
-+ __ jmp(&backward_loop_1byte);
-+ }
-+
-+ __ bind(&pop_and_return);
-+ MemMoveEmitPopAndReturn(&masm);
-+
-+ CodeDesc desc;
-+ masm.GetCode(isolate, &desc);
-+ DCHECK(!RelocInfo::RequiresRelocation(isolate, desc));
-+ Assembler::FlushICache(isolate, buffer, actual_size);
-+ base::OS::ProtectCode(buffer, actual_size);
-+ // TODO(jkummerow): It would be nice to register this code creation event
-+ // with the PROFILE / GDBJIT system.
-+ return FUNCTION_CAST<MemMoveFunction>(buffer);
-+}
-+
-+
-+#undef __
-+
-+// -------------------------------------------------------------------------
-+// Code generators
-+
-+#define __ ACCESS_MASM(masm)
-+
-+void StringCharLoadGenerator::Generate(MacroAssembler* masm,
-+ Factory* factory,
-+ Register string,
-+ Register index,
-+ Register result,
-+ Label* call_runtime) {
-+ Label indirect_string_loaded;
-+ __ bind(&indirect_string_loaded);
-+
-+ // Fetch the instance type of the receiver into result register.
-+ __ mov(result, FieldOperand(string, HeapObject::kMapOffset));
-+ __ movzx_b(result, FieldOperand(result, Map::kInstanceTypeOffset));
-+
-+ // We need special handling for indirect strings.
-+ Label check_sequential;
-+ __ test(result, Immediate(kIsIndirectStringMask));
-+ __ j(zero, &check_sequential, Label::kNear);
-+
-+ // Dispatch on the indirect string shape: slice or cons.
-+ Label cons_string, thin_string;
-+ __ and_(result, Immediate(kStringRepresentationMask));
-+ __ cmp(result, Immediate(kConsStringTag));
-+ __ j(equal, &cons_string, Label::kNear);
-+ __ cmp(result, Immediate(kThinStringTag));
-+ __ j(equal, &thin_string, Label::kNear);
-+
-+ // Handle slices.
-+ __ mov(result, FieldOperand(string, SlicedString::kOffsetOffset));
-+ __ SmiUntag(result);
-+ __ add(index, result);
-+ __ mov(string, FieldOperand(string, SlicedString::kParentOffset));
-+ __ jmp(&indirect_string_loaded);
-+
-+ // Handle thin strings.
-+ __ bind(&thin_string);
-+ __ mov(string, FieldOperand(string, ThinString::kActualOffset));
-+ __ jmp(&indirect_string_loaded);
-+
-+ // Handle cons strings.
-+ // Check whether the right hand side is the empty string (i.e. if
-+ // this is really a flat string in a cons string). If that is not
-+ // the case we would rather go to the runtime system now to flatten
-+ // the string.
-+ __ bind(&cons_string);
-+ __ cmp(FieldOperand(string, ConsString::kSecondOffset),
-+ Immediate(factory->empty_string()));
-+ __ j(not_equal, call_runtime);
-+ __ mov(string, FieldOperand(string, ConsString::kFirstOffset));
-+ __ jmp(&indirect_string_loaded);
-+
-+ // Distinguish sequential and external strings. Only these two string
-+ // representations can reach here (slices and flat cons strings have been
-+ // reduced to the underlying sequential or external string).
-+ Label seq_string;
-+ __ bind(&check_sequential);
-+ STATIC_ASSERT(kSeqStringTag == 0);
-+ __ test(result, Immediate(kStringRepresentationMask));
-+ __ j(zero, &seq_string, Label::kNear);
-+
-+ // Handle external strings.
-+ Label one_byte_external, done;
-+ if (FLAG_debug_code) {
-+ // Assert that we do not have a cons or slice (indirect strings) here.
-+ // Sequential strings have already been ruled out.
-+ __ test(result, Immediate(kIsIndirectStringMask));
-+ __ Assert(zero, kExternalStringExpectedButNotFound);
-+ }
-+ // Rule out short external strings.
-+ STATIC_ASSERT(kShortExternalStringTag != 0);
-+ __ test_b(result, Immediate(kShortExternalStringMask));
-+ __ j(not_zero, call_runtime);
-+ // Check encoding.
-+ STATIC_ASSERT(kTwoByteStringTag == 0);
-+ __ test_b(result, Immediate(kStringEncodingMask));
-+ __ mov(result, FieldOperand(string, ExternalString::kResourceDataOffset));
-+ __ j(not_equal, &one_byte_external, Label::kNear);
-+ // Two-byte string.
-+ __ movzx_w(result, Operand(result, index, times_2, 0));
-+ __ jmp(&done, Label::kNear);
-+ __ bind(&one_byte_external);
-+ // One-byte string.
-+ __ movzx_b(result, Operand(result, index, times_1, 0));
-+ __ jmp(&done, Label::kNear);
-+
-+ // Dispatch on the encoding: one-byte or two-byte.
-+ Label one_byte;
-+ __ bind(&seq_string);
-+ STATIC_ASSERT((kStringEncodingMask & kOneByteStringTag) != 0);
-+ STATIC_ASSERT((kStringEncodingMask & kTwoByteStringTag) == 0);
-+ __ test(result, Immediate(kStringEncodingMask));
-+ __ j(not_zero, &one_byte, Label::kNear);
-+
-+ // Two-byte string.
-+ // Load the two-byte character code into the result register.
-+ __ movzx_w(result, FieldOperand(string,
-+ index,
-+ times_2,
-+ SeqTwoByteString::kHeaderSize));
-+ __ jmp(&done, Label::kNear);
-+
-+ // One-byte string.
-+ // Load the byte into the result register.
-+ __ bind(&one_byte);
-+ __ movzx_b(result, FieldOperand(string,
-+ index,
-+ times_1,
-+ SeqOneByteString::kHeaderSize));
-+ __ bind(&done);
-+}
-+
-+
-+#undef __
-+
-+
-+CodeAgingHelper::CodeAgingHelper(Isolate* isolate) {
-+ USE(isolate);
-+ DCHECK(young_sequence_.length() == kNoCodeAgeSequenceLength);
-+ CodePatcher patcher(isolate, young_sequence_.start(),
-+ young_sequence_.length());
-+ patcher.masm()->push(ebp);
-+ patcher.masm()->mov(ebp, esp);
-+ patcher.masm()->push(esi);
-+ patcher.masm()->push(edi);
-+}
-+
-+
-+#ifdef DEBUG
-+bool CodeAgingHelper::IsOld(byte* candidate) const {
-+ return *candidate == kCallOpcode;
-+}
-+#endif
-+
-+
-+bool Code::IsYoungSequence(Isolate* isolate, byte* sequence) {
-+ bool result = isolate->code_aging_helper()->IsYoung(sequence);
-+ DCHECK(result || isolate->code_aging_helper()->IsOld(sequence));
-+ return result;
-+}
-+
-+Code::Age Code::GetCodeAge(Isolate* isolate, byte* sequence) {
-+ if (IsYoungSequence(isolate, sequence)) return kNoAgeCodeAge;
-+
-+ sequence++; // Skip the kCallOpcode byte
-+ Address target_address = sequence + *reinterpret_cast<int*>(sequence) +
-+ Assembler::kCallTargetAddressOffset;
-+ Code* stub = GetCodeFromTargetAddress(target_address);
-+ return GetAgeOfCodeAgeStub(stub);
-+}
-+
-+void Code::PatchPlatformCodeAge(Isolate* isolate, byte* sequence,
-+ Code::Age age) {
-+ uint32_t young_length = isolate->code_aging_helper()->young_sequence_length();
-+ if (age == kNoAgeCodeAge) {
-+ isolate->code_aging_helper()->CopyYoungSequenceTo(sequence);
-+ Assembler::FlushICache(isolate, sequence, young_length);
-+ } else {
-+ Code* stub = GetCodeAgeStub(isolate, age);
-+ CodePatcher patcher(isolate, sequence, young_length);
-+ patcher.masm()->call(stub->instruction_start(), RelocInfo::NONE32);
-+ }
-+}
-+
-+
-+} // namespace internal
-+} // namespace v8
-+
-+#endif // V8_TARGET_ARCH_X87
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/x87/codegen-x87.h qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/x87/codegen-x87.h
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/x87/codegen-x87.h	1970-01-01 01:00:00.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/x87/codegen-x87.h	2018-02-18 19:00:54.195418178 +0100
-@@ -0,0 +1,33 @@
-+// Copyright 2011 the V8 project authors. All rights reserved.
-+// Use of this source code is governed by a BSD-style license that can be
-+// found in the LICENSE file.
-+
-+#ifndef V8_X87_CODEGEN_X87_H_
-+#define V8_X87_CODEGEN_X87_H_
-+
-+#include "src/macro-assembler.h"
-+
-+namespace v8 {
-+namespace internal {
-+
-+
-+class StringCharLoadGenerator : public AllStatic {
-+ public:
-+ // Generates the code for handling different string types and loading the
-+ // indexed character into |result|. We expect |index| as untagged input and
-+ // |result| as untagged output.
-+ static void Generate(MacroAssembler* masm,
-+ Factory* factory,
-+ Register string,
-+ Register index,
-+ Register result,
-+ Label* call_runtime);
-+
-+ private:
-+ DISALLOW_COPY_AND_ASSIGN(StringCharLoadGenerator);
-+};
-+
-+} // namespace internal
-+} // namespace v8
-+
-+#endif // V8_X87_CODEGEN_X87_H_
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/x87/code-stubs-x87.cc qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/x87/code-stubs-x87.cc
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/x87/code-stubs-x87.cc	1970-01-01 01:00:00.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/x87/code-stubs-x87.cc	2018-02-18 19:00:54.197418149 +0100
-@@ -0,0 +1,2668 @@
-+// Copyright 2012 the V8 project authors. All rights reserved.
-+// Use of this source code is governed by a BSD-style license that can be
-+// found in the LICENSE file.
-+
-+#if V8_TARGET_ARCH_X87
-+
-+#include "src/code-stubs.h"
-+#include "src/api-arguments.h"
-+#include "src/base/bits.h"
-+#include "src/bootstrapper.h"
-+#include "src/codegen.h"
-+#include "src/ic/handler-compiler.h"
-+#include "src/ic/ic.h"
-+#include "src/ic/stub-cache.h"
-+#include "src/isolate.h"
-+#include "src/regexp/jsregexp.h"
-+#include "src/regexp/regexp-macro-assembler.h"
-+#include "src/runtime/runtime.h"
-+#include "src/x87/code-stubs-x87.h"
-+#include "src/x87/frames-x87.h"
-+
-+namespace v8 {
-+namespace internal {
-+
-+#define __ ACCESS_MASM(masm)
-+
-+void ArrayNArgumentsConstructorStub::Generate(MacroAssembler* masm) {
-+ __ pop(ecx);
-+ __ mov(MemOperand(esp, eax, times_4, 0), edi);
-+ __ push(edi);
-+ __ push(ebx);
-+ __ push(ecx);
-+ __ add(eax, Immediate(3));
-+ __ TailCallRuntime(Runtime::kNewArray);
-+}
-+
-+
-+void StoreBufferOverflowStub::Generate(MacroAssembler* masm) {
-+ // We don't allow a GC during a store buffer overflow so there is no need to
-+ // store the registers in any particular way, but we do have to store and
-+ // restore them.
-+ __ pushad();
-+ if (save_doubles()) {
-+ // Save FPU stat in m108byte.
-+ __ sub(esp, Immediate(108));
-+ __ fnsave(Operand(esp, 0));
-+ }
-+ const int argument_count = 1;
-+
-+ AllowExternalCallThatCantCauseGC scope(masm);
-+ __ PrepareCallCFunction(argument_count, ecx);
-+ __ mov(Operand(esp, 0 * kPointerSize),
-+ Immediate(ExternalReference::isolate_address(isolate())));
-+ __ CallCFunction(
-+ ExternalReference::store_buffer_overflow_function(isolate()),
-+ argument_count);
-+ if (save_doubles()) {
-+ // Restore FPU stat in m108byte.
-+ __ frstor(Operand(esp, 0));
-+ __ add(esp, Immediate(108));
-+ }
-+ __ popad();
-+ __ ret(0);
-+}
-+
-+
-+class FloatingPointHelper : public AllStatic {
-+ public:
-+ enum ArgLocation {
-+ ARGS_ON_STACK,
-+ ARGS_IN_REGISTERS
-+ };
-+
-+ // Code pattern for loading a floating point value. Input value must
-+ // be either a smi or a heap number object (fp value). Requirements:
-+ // operand in register number. Returns operand as floating point number
-+ // on FPU stack.
-+ static void LoadFloatOperand(MacroAssembler* masm, Register number);
-+
-+ // Test if operands are smi or number objects (fp). Requirements:
-+ // operand_1 in eax, operand_2 in edx; falls through on float
-+ // operands, jumps to the non_float label otherwise.
-+ static void CheckFloatOperands(MacroAssembler* masm,
-+ Label* non_float,
-+ Register scratch);
-+};
-+
-+
-+void DoubleToIStub::Generate(MacroAssembler* masm) {
-+ Register input_reg = this->source();
-+ Register final_result_reg = this->destination();
-+ DCHECK(is_truncating());
-+
-+ Label check_negative, process_64_bits, done, done_no_stash;
-+
-+ int double_offset = offset();
-+
-+ // Account for return address and saved regs if input is esp.
-+ if (input_reg.is(esp)) double_offset += 3 * kPointerSize;
-+
-+ MemOperand mantissa_operand(MemOperand(input_reg, double_offset));
-+ MemOperand exponent_operand(MemOperand(input_reg,
-+ double_offset + kDoubleSize / 2));
-+
-+ Register scratch1;
-+ {
-+ Register scratch_candidates[3] = { ebx, edx, edi };
-+ for (int i = 0; i < 3; i++) {
-+ scratch1 = scratch_candidates[i];
-+ if (!final_result_reg.is(scratch1) && !input_reg.is(scratch1)) break;
-+ }
-+ }
-+ // Since we must use ecx for shifts below, use some other register (eax)
-+ // to calculate the result if ecx is the requested return register.
-+ Register result_reg = final_result_reg.is(ecx) ? eax : final_result_reg;
-+ // Save ecx if it isn't the return register and therefore volatile, or if it
-+ // is the return register, then save the temp register we use in its stead for
-+ // the result.
-+ Register save_reg = final_result_reg.is(ecx) ? eax : ecx;
-+ __ push(scratch1);
-+ __ push(save_reg);
-+
-+ bool stash_exponent_copy = !input_reg.is(esp);
-+ __ mov(scratch1, mantissa_operand);
-+ __ mov(ecx, exponent_operand);
-+ if (stash_exponent_copy) __ push(ecx);
-+
-+ __ and_(ecx, HeapNumber::kExponentMask);
-+ __ shr(ecx, HeapNumber::kExponentShift);
-+ __ lea(result_reg, MemOperand(ecx, -HeapNumber::kExponentBias));
-+ __ cmp(result_reg, Immediate(HeapNumber::kMantissaBits));
-+ __ j(below, &process_64_bits);
-+
-+ // Result is entirely in lower 32-bits of mantissa
-+ int delta = HeapNumber::kExponentBias + Double::kPhysicalSignificandSize;
-+ __ sub(ecx, Immediate(delta));
-+ __ xor_(result_reg, result_reg);
-+ __ cmp(ecx, Immediate(31));
-+ __ j(above, &done);
-+ __ shl_cl(scratch1);
-+ __ jmp(&check_negative);
-+
-+ __ bind(&process_64_bits);
-+ // Result must be extracted from shifted 32-bit mantissa
-+ __ sub(ecx, Immediate(delta));
-+ __ neg(ecx);
-+ if (stash_exponent_copy) {
-+ __ mov(result_reg, MemOperand(esp, 0));
-+ } else {
-+ __ mov(result_reg, exponent_operand);
-+ }
-+ __ and_(result_reg,
-+ Immediate(static_cast<uint32_t>(Double::kSignificandMask >> 32)));
-+ __ add(result_reg,
-+ Immediate(static_cast<uint32_t>(Double::kHiddenBit >> 32)));
-+ __ shrd_cl(scratch1, result_reg);
-+ __ shr_cl(result_reg);
-+ __ test(ecx, Immediate(32));
-+ {
-+ Label skip_mov;
-+ __ j(equal, &skip_mov, Label::kNear);
-+ __ mov(scratch1, result_reg);
-+ __ bind(&skip_mov);
-+ }
-+
-+ // If the double was negative, negate the integer result.
-+ __ bind(&check_negative);
-+ __ mov(result_reg, scratch1);
-+ __ neg(result_reg);
-+ if (stash_exponent_copy) {
-+ __ cmp(MemOperand(esp, 0), Immediate(0));
-+ } else {
-+ __ cmp(exponent_operand, Immediate(0));
-+ }
-+ {
-+ Label skip_mov;
-+ __ j(less_equal, &skip_mov, Label::kNear);
-+ __ mov(result_reg, scratch1);
-+ __ bind(&skip_mov);
-+ }
-+
-+ // Restore registers
-+ __ bind(&done);
-+ if (stash_exponent_copy) {
-+ __ add(esp, Immediate(kDoubleSize / 2));
-+ }
-+ __ bind(&done_no_stash);
-+ if (!final_result_reg.is(result_reg)) {
-+ DCHECK(final_result_reg.is(ecx));
-+ __ mov(final_result_reg, result_reg);
-+ }
-+ __ pop(save_reg);
-+ __ pop(scratch1);
-+ __ ret(0);
-+}
-+
-+
-+void FloatingPointHelper::LoadFloatOperand(MacroAssembler* masm,
-+ Register number) {
-+ Label load_smi, done;
-+
-+ __ JumpIfSmi(number, &load_smi, Label::kNear);
-+ __ fld_d(FieldOperand(number, HeapNumber::kValueOffset));
-+ __ jmp(&done, Label::kNear);
-+
-+ __ bind(&load_smi);
-+ __ SmiUntag(number);
-+ __ push(number);
-+ __ fild_s(Operand(esp, 0));
-+ __ pop(number);
-+
-+ __ bind(&done);
-+}
-+
-+
-+void FloatingPointHelper::CheckFloatOperands(MacroAssembler* masm,
-+ Label* non_float,
-+ Register scratch) {
-+ Label test_other, done;
-+ // Test if both operands are floats or smi -> scratch=k_is_float;
-+ // Otherwise scratch = k_not_float.
-+ __ JumpIfSmi(edx, &test_other, Label::kNear);
-+ __ mov(scratch, FieldOperand(edx, HeapObject::kMapOffset));
-+ Factory* factory = masm->isolate()->factory();
-+ __ cmp(scratch, factory->heap_number_map());
-+ __ j(not_equal, non_float); // argument in edx is not a number -> NaN
-+
-+ __ bind(&test_other);
-+ __ JumpIfSmi(eax, &done, Label::kNear);
-+ __ mov(scratch, FieldOperand(eax, HeapObject::kMapOffset));
-+ __ cmp(scratch, factory->heap_number_map());
-+ __ j(not_equal, non_float); // argument in eax is not a number -> NaN
-+
-+ // Fall-through: Both operands are numbers.
-+ __ bind(&done);
-+}
-+
-+
-+void MathPowStub::Generate(MacroAssembler* masm) {
-+ const Register scratch = ecx;
-+
-+ // Load the double_exponent into x87 FPU
-+ __ fld_d(Operand(esp, 0 * kDoubleSize + 4));
-+ // Load the double_base into x87 FPU
-+ __ fld_d(Operand(esp, 1 * kDoubleSize + 4));
-+
-+ // Call ieee754 runtime directly.
-+ {
-+ AllowExternalCallThatCantCauseGC scope(masm);
-+ __ PrepareCallCFunction(4, scratch);
-+ // Put the double_base parameter in call stack
-+ __ fstp_d(Operand(esp, 0 * kDoubleSize));
-+ // Put the double_exponent parameter in call stack
-+ __ fstp_d(Operand(esp, 1 * kDoubleSize));
-+ __ CallCFunction(ExternalReference::power_double_double_function(isolate()),
-+ 4);
-+ }
-+ // Return value is in st(0) on ia32.
-+ __ ret(0);
-+}
-+
-+
-+static int NegativeComparisonResult(Condition cc) {
-+ DCHECK(cc != equal);
-+ DCHECK((cc == less) || (cc == less_equal)
-+ || (cc == greater) || (cc == greater_equal));
-+ return (cc == greater || cc == greater_equal) ? LESS : GREATER;
-+}
-+
-+
-+static void CheckInputType(MacroAssembler* masm, Register input,
-+ CompareICState::State expected, Label* fail) {
-+ Label ok;
-+ if (expected == CompareICState::SMI) {
-+ __ JumpIfNotSmi(input, fail);
-+ } else if (expected == CompareICState::NUMBER) {
-+ __ JumpIfSmi(input, &ok);
-+ __ cmp(FieldOperand(input, HeapObject::kMapOffset),
-+ Immediate(masm->isolate()->factory()->heap_number_map()));
-+ __ j(not_equal, fail);
-+ }
-+ // We could be strict about internalized/non-internalized here, but as long as
-+ // hydrogen doesn't care, the stub doesn't have to care either.
-+ __ bind(&ok);
-+}
-+
-+
-+static void BranchIfNotInternalizedString(MacroAssembler* masm,
-+ Label* label,
-+ Register object,
-+ Register scratch) {
-+ __ JumpIfSmi(object, label);
-+ __ mov(scratch, FieldOperand(object, HeapObject::kMapOffset));
-+ __ movzx_b(scratch, FieldOperand(scratch, Map::kInstanceTypeOffset));
-+ STATIC_ASSERT(kInternalizedTag == 0 && kStringTag == 0);
-+ __ test(scratch, Immediate(kIsNotStringMask | kIsNotInternalizedMask));
-+ __ j(not_zero, label);
-+}
-+
-+
-+void CompareICStub::GenerateGeneric(MacroAssembler* masm) {
-+ Label runtime_call, check_unequal_objects;
-+ Condition cc = GetCondition();
-+
-+ Label miss;
-+ CheckInputType(masm, edx, left(), &miss);
-+ CheckInputType(masm, eax, right(), &miss);
-+
-+ // Compare two smis.
-+ Label non_smi, smi_done;
-+ __ mov(ecx, edx);
-+ __ or_(ecx, eax);
-+ __ JumpIfNotSmi(ecx, &non_smi, Label::kNear);
-+ __ sub(edx, eax); // Return on the result of the subtraction.
-+ __ j(no_overflow, &smi_done, Label::kNear);
-+ __ not_(edx); // Correct sign in case of overflow. edx is never 0 here.
-+ __ bind(&smi_done);
-+ __ mov(eax, edx);
-+ __ ret(0);
-+ __ bind(&non_smi);
-+
-+ // NOTICE! This code is only reached after a smi-fast-case check, so
-+ // it is certain that at least one operand isn't a smi.
-+
-+ // Identical objects can be compared fast, but there are some tricky cases
-+ // for NaN and undefined.
-+ Label generic_heap_number_comparison;
-+ {
-+ Label not_identical;
-+ __ cmp(eax, edx);
-+ __ j(not_equal, &not_identical);
-+
-+ if (cc != equal) {
-+ // Check for undefined. undefined OP undefined is false even though
-+ // undefined == undefined.
-+ __ cmp(edx, isolate()->factory()->undefined_value());
-+ Label check_for_nan;
-+ __ j(not_equal, &check_for_nan, Label::kNear);
-+ __ Move(eax, Immediate(Smi::FromInt(NegativeComparisonResult(cc))));
-+ __ ret(0);
-+ __ bind(&check_for_nan);
-+ }
-+
-+ // Test for NaN. Compare heap numbers in a general way,
-+ // to handle NaNs correctly.
-+ __ cmp(FieldOperand(edx, HeapObject::kMapOffset),
-+ Immediate(isolate()->factory()->heap_number_map()));
-+ __ j(equal, &generic_heap_number_comparison, Label::kNear);
-+ if (cc != equal) {
-+ __ mov(ecx, FieldOperand(eax, HeapObject::kMapOffset));
-+ __ movzx_b(ecx, FieldOperand(ecx, Map::kInstanceTypeOffset));
-+ // Call runtime on identical JSObjects. Otherwise return equal.
-+ __ cmpb(ecx, Immediate(FIRST_JS_RECEIVER_TYPE));
-+ __ j(above_equal, &runtime_call, Label::kFar);
-+ // Call runtime on identical symbols since we need to throw a TypeError.
-+ __ cmpb(ecx, Immediate(SYMBOL_TYPE));
-+ __ j(equal, &runtime_call, Label::kFar);
-+ }
-+ __ Move(eax, Immediate(Smi::FromInt(EQUAL)));
-+ __ ret(0);
-+
-+
-+ __ bind(&not_identical);
-+ }
-+
-+ // Strict equality can quickly decide whether objects are equal.
-+ // Non-strict object equality is slower, so it is handled later in the stub.
-+ if (cc == equal && strict()) {
-+ Label slow; // Fallthrough label.
-+ Label not_smis;
-+ // If we're doing a strict equality comparison, we don't have to do
-+ // type conversion, so we generate code to do fast comparison for objects
-+ // and oddballs. Non-smi numbers and strings still go through the usual
-+ // slow-case code.
-+ // If either is a Smi (we know that not both are), then they can only
-+ // be equal if the other is a HeapNumber. If so, use the slow case.
-+ STATIC_ASSERT(kSmiTag == 0);
-+ DCHECK_EQ(static_cast<Smi*>(0), Smi::kZero);
-+ __ mov(ecx, Immediate(kSmiTagMask));
-+ __ and_(ecx, eax);
-+ __ test(ecx, edx);
-+ __ j(not_zero, &not_smis, Label::kNear);
-+ // One operand is a smi.
-+
-+ // Check whether the non-smi is a heap number.
-+ STATIC_ASSERT(kSmiTagMask == 1);
-+ // ecx still holds eax & kSmiTag, which is either zero or one.
-+ __ sub(ecx, Immediate(0x01));
-+ __ mov(ebx, edx);
-+ __ xor_(ebx, eax);
-+ __ and_(ebx, ecx); // ebx holds either 0 or eax ^ edx.
-+ __ xor_(ebx, eax);
-+ // if eax was smi, ebx is now edx, else eax.
-+
-+ // Check if the non-smi operand is a heap number.
-+ __ cmp(FieldOperand(ebx, HeapObject::kMapOffset),
-+ Immediate(isolate()->factory()->heap_number_map()));
-+ // If heap number, handle it in the slow case.
-+ __ j(equal, &slow, Label::kNear);
-+ // Return non-equal (ebx is not zero)
-+ __ mov(eax, ebx);
-+ __ ret(0);
-+
-+ __ bind(&not_smis);
-+ // If either operand is a JSObject or an oddball value, then they are not
-+ // equal since their pointers are different
-+ // There is no test for undetectability in strict equality.
-+
-+ // Get the type of the first operand.
-+ // If the first object is a JS object, we have done pointer comparison.
-+ Label first_non_object;
-+ STATIC_ASSERT(LAST_TYPE == LAST_JS_RECEIVER_TYPE);
-+ __ CmpObjectType(eax, FIRST_JS_RECEIVER_TYPE, ecx);
-+ __ j(below, &first_non_object, Label::kNear);
-+
-+ // Return non-zero (eax is not zero)
-+ Label return_not_equal;
-+ STATIC_ASSERT(kHeapObjectTag != 0);
-+ __ bind(&return_not_equal);
-+ __ ret(0);
-+
-+ __ bind(&first_non_object);
-+ // Check for oddballs: true, false, null, undefined.
-+ __ CmpInstanceType(ecx, ODDBALL_TYPE);
-+ __ j(equal, &return_not_equal);
-+
-+ __ CmpObjectType(edx, FIRST_JS_RECEIVER_TYPE, ecx);
-+ __ j(above_equal, &return_not_equal);
-+
-+ // Check for oddballs: true, false, null, undefined.
-+ __ CmpInstanceType(ecx, ODDBALL_TYPE);
-+ __ j(equal, &return_not_equal);
-+
-+ // Fall through to the general case.
-+ __ bind(&slow);
-+ }
-+
-+ // Generate the number comparison code.
-+ Label non_number_comparison;
-+ Label unordered;
-+ __ bind(&generic_heap_number_comparison);
-+ FloatingPointHelper::CheckFloatOperands(
-+ masm, &non_number_comparison, ebx);
-+ FloatingPointHelper::LoadFloatOperand(masm, eax);
-+ FloatingPointHelper::LoadFloatOperand(masm, edx);
-+ __ FCmp();
-+
-+ // Don't base result on EFLAGS when a NaN is involved.
-+ __ j(parity_even, &unordered, Label::kNear);
-+
-+ Label below_label, above_label;
-+ // Return a result of -1, 0, or 1, based on EFLAGS.
-+ __ j(below, &below_label, Label::kNear);
-+ __ j(above, &above_label, Label::kNear);
-+
-+ __ Move(eax, Immediate(0));
-+ __ ret(0);
-+
-+ __ bind(&below_label);
-+ __ mov(eax, Immediate(Smi::FromInt(-1)));
-+ __ ret(0);
-+
-+ __ bind(&above_label);
-+ __ mov(eax, Immediate(Smi::FromInt(1)));
-+ __ ret(0);
-+
-+ // If one of the numbers was NaN, then the result is always false.
-+ // The cc is never not-equal.
-+ __ bind(&unordered);
-+ DCHECK(cc != not_equal);
-+ if (cc == less || cc == less_equal) {
-+ __ mov(eax, Immediate(Smi::FromInt(1)));
-+ } else {
-+ __ mov(eax, Immediate(Smi::FromInt(-1)));
-+ }
-+ __ ret(0);
-+
-+ // The number comparison code did not provide a valid result.
-+ __ bind(&non_number_comparison);
-+
-+ // Fast negative check for internalized-to-internalized equality.
-+ Label check_for_strings;
-+ if (cc == equal) {
-+ BranchIfNotInternalizedString(masm, &check_for_strings, eax, ecx);
-+ BranchIfNotInternalizedString(masm, &check_for_strings, edx, ecx);
-+
-+ // We've already checked for object identity, so if both operands
-+ // are internalized they aren't equal. Register eax already holds a
-+ // non-zero value, which indicates not equal, so just return.
-+ __ ret(0);
-+ }
-+
-+ __ bind(&check_for_strings);
-+
-+ __ JumpIfNotBothSequentialOneByteStrings(edx, eax, ecx, ebx,
-+ &check_unequal_objects);
-+
-+ // Inline comparison of one-byte strings.
-+ if (cc == equal) {
-+ StringHelper::GenerateFlatOneByteStringEquals(masm, edx, eax, ecx, ebx);
-+ } else {
-+ StringHelper::GenerateCompareFlatOneByteStrings(masm, edx, eax, ecx, ebx,
-+ edi);
-+ }
-+#ifdef DEBUG
-+ __ Abort(kUnexpectedFallThroughFromStringComparison);
-+#endif
-+
-+ __ bind(&check_unequal_objects);
-+ if (cc == equal && !strict()) {
-+ // Non-strict equality. Objects are unequal if
-+ // they are both JSObjects and not undetectable,
-+ // and their pointers are different.
-+ Label return_equal, return_unequal, undetectable;
-+ // At most one is a smi, so we can test for smi by adding the two.
-+ // A smi plus a heap object has the low bit set, a heap object plus
-+ // a heap object has the low bit clear.
-+ STATIC_ASSERT(kSmiTag == 0);
-+ STATIC_ASSERT(kSmiTagMask == 1);
-+ __ lea(ecx, Operand(eax, edx, times_1, 0));
-+ __ test(ecx, Immediate(kSmiTagMask));
-+ __ j(not_zero, &runtime_call);
-+
-+ __ mov(ecx, FieldOperand(eax, HeapObject::kMapOffset));
-+ __ mov(ebx, FieldOperand(edx, HeapObject::kMapOffset));
-+
-+ __ test_b(FieldOperand(ebx, Map::kBitFieldOffset),
-+ Immediate(1 << Map::kIsUndetectable));
-+ __ j(not_zero, &undetectable, Label::kNear);
-+ __ test_b(FieldOperand(ecx, Map::kBitFieldOffset),
-+ Immediate(1 << Map::kIsUndetectable));
-+ __ j(not_zero, &return_unequal, Label::kNear);
-+
-+ __ CmpInstanceType(ebx, FIRST_JS_RECEIVER_TYPE);
-+ __ j(below, &runtime_call, Label::kNear);
-+ __ CmpInstanceType(ecx, FIRST_JS_RECEIVER_TYPE);
-+ __ j(below, &runtime_call, Label::kNear);
-+
-+ __ bind(&return_unequal);
-+ // Return non-equal by returning the non-zero object pointer in eax.
-+ __ ret(0); // eax, edx were pushed
-+
-+ __ bind(&undetectable);
-+ __ test_b(FieldOperand(ecx, Map::kBitFieldOffset),
-+ Immediate(1 << Map::kIsUndetectable));
-+ __ j(zero, &return_unequal, Label::kNear);
-+
-+ // If both sides are JSReceivers, then the result is false according to
-+ // the HTML specification, which says that only comparisons with null or
-+ // undefined are affected by special casing for document.all.
-+ __ CmpInstanceType(ebx, ODDBALL_TYPE);
-+ __ j(zero, &return_equal, Label::kNear);
-+ __ CmpInstanceType(ecx, ODDBALL_TYPE);
-+ __ j(not_zero, &return_unequal, Label::kNear);
-+
-+ __ bind(&return_equal);
-+ __ Move(eax, Immediate(EQUAL));
-+ __ ret(0); // eax, edx were pushed
-+ }
-+ __ bind(&runtime_call);
-+
-+ if (cc == equal) {
-+ {
-+ FrameScope scope(masm, StackFrame::INTERNAL);
-+ __ Push(esi);
-+ __ Call(strict() ? isolate()->builtins()->StrictEqual()
-+ : isolate()->builtins()->Equal(),
-+ RelocInfo::CODE_TARGET);
-+ __ Pop(esi);
-+ }
-+ // Turn true into 0 and false into some non-zero value.
-+ STATIC_ASSERT(EQUAL == 0);
-+ __ sub(eax, Immediate(isolate()->factory()->true_value()));
-+ __ Ret();
-+ } else {
-+ // Push arguments below the return address.
-+ __ pop(ecx);
-+ __ push(edx);
-+ __ push(eax);
-+ __ push(Immediate(Smi::FromInt(NegativeComparisonResult(cc))));
-+
-+ // Restore return address on the stack.
-+ __ push(ecx);
-+ // Call the native; it returns -1 (less), 0 (equal), or 1 (greater)
-+ // tagged as a small integer.
-+ __ TailCallRuntime(Runtime::kCompare);
-+ }
-+
-+ __ bind(&miss);
-+ GenerateMiss(masm);
-+}
-+
-+
-+static void CallStubInRecordCallTarget(MacroAssembler* masm, CodeStub* stub) {
-+ // eax : number of arguments to the construct function
-+ // ebx : feedback vector
-+ // edx : slot in feedback vector (Smi)
-+ // edi : the function to call
-+
-+ {
-+ FrameScope scope(masm, StackFrame::INTERNAL);
-+
-+ // Number-of-arguments register must be smi-tagged to call out.
-+ __ SmiTag(eax);
-+ __ push(eax);
-+ __ push(edi);
-+ __ push(edx);
-+ __ push(ebx);
-+ __ push(esi);
-+
-+ __ CallStub(stub);
-+
-+ __ pop(esi);
-+ __ pop(ebx);
-+ __ pop(edx);
-+ __ pop(edi);
-+ __ pop(eax);
-+ __ SmiUntag(eax);
-+ }
-+}
-+
-+
-+static void GenerateRecordCallTarget(MacroAssembler* masm) {
-+ // Cache the called function in a feedback vector slot. Cache states
-+ // are uninitialized, monomorphic (indicated by a JSFunction), and
-+ // megamorphic.
-+ // eax : number of arguments to the construct function
-+ // ebx : feedback vector
-+ // edx : slot in feedback vector (Smi)
-+ // edi : the function to call
-+ Isolate* isolate = masm->isolate();
-+ Label initialize, done, miss, megamorphic, not_array_function;
-+
-+ // Load the cache state into ecx.
-+ __ mov(ecx, FieldOperand(ebx, edx, times_half_pointer_size,
-+ FixedArray::kHeaderSize));
-+
-+ // A monomorphic cache hit or an already megamorphic state: invoke the
-+ // function without changing the state.
-+ // We don't know if ecx is a WeakCell or a Symbol, but it's harmless to read
-+ // at this position in a symbol (see static asserts in feedback-vector.h).
-+ Label check_allocation_site;
-+ __ cmp(edi, FieldOperand(ecx, WeakCell::kValueOffset));
-+ __ j(equal, &done, Label::kFar);
-+ __ CompareRoot(ecx, Heap::kmegamorphic_symbolRootIndex);
-+ __ j(equal, &done, Label::kFar);
-+ __ CompareRoot(FieldOperand(ecx, HeapObject::kMapOffset),
-+ Heap::kWeakCellMapRootIndex);
-+ __ j(not_equal, &check_allocation_site);
-+
-+ // If the weak cell is cleared, we have a new chance to become monomorphic.
-+ __ JumpIfSmi(FieldOperand(ecx, WeakCell::kValueOffset), &initialize);
-+ __ jmp(&megamorphic);
-+
-+ __ bind(&check_allocation_site);
-+ // If we came here, we need to see if we are the array function.
-+ // If we didn't have a matching function, and we didn't find the megamorph
-+ // sentinel, then we have in the slot either some other function or an
-+ // AllocationSite.
-+ __ CompareRoot(FieldOperand(ecx, 0), Heap::kAllocationSiteMapRootIndex);
-+ __ j(not_equal, &miss);
-+
-+ // Make sure the function is the Array() function
-+ __ LoadGlobalFunction(Context::ARRAY_FUNCTION_INDEX, ecx);
-+ __ cmp(edi, ecx);
-+ __ j(not_equal, &megamorphic);
-+ __ jmp(&done, Label::kFar);
-+
-+ __ bind(&miss);
-+
-+ // A monomorphic miss (i.e, here the cache is not uninitialized) goes
-+ // megamorphic.
-+ __ CompareRoot(ecx, Heap::kuninitialized_symbolRootIndex);
-+ __ j(equal, &initialize);
-+ // MegamorphicSentinel is an immortal immovable object (undefined) so no
-+ // write-barrier is needed.
-+ __ bind(&megamorphic);
-+ __ mov(
-+ FieldOperand(ebx, edx, times_half_pointer_size, FixedArray::kHeaderSize),
-+ Immediate(FeedbackVector::MegamorphicSentinel(isolate)));
-+ __ jmp(&done, Label::kFar);
-+
-+ // An uninitialized cache is patched with the function or sentinel to
-+ // indicate the ElementsKind if function is the Array constructor.
-+ __ bind(&initialize);
-+ // Make sure the function is the Array() function
-+ __ LoadGlobalFunction(Context::ARRAY_FUNCTION_INDEX, ecx);
-+ __ cmp(edi, ecx);
-+ __ j(not_equal, &not_array_function);
-+
-+ // The target function is the Array constructor,
-+ // Create an AllocationSite if we don't already have it, store it in the
-+ // slot.
-+ CreateAllocationSiteStub create_stub(isolate);
-+ CallStubInRecordCallTarget(masm, &create_stub);
-+ __ jmp(&done);
-+
-+ __ bind(&not_array_function);
-+ CreateWeakCellStub weak_cell_stub(isolate);
-+ CallStubInRecordCallTarget(masm, &weak_cell_stub);
-+
-+ __ bind(&done);
-+ // Increment the call count for all function calls.
-+ __ add(FieldOperand(ebx, edx, times_half_pointer_size,
-+ FixedArray::kHeaderSize + kPointerSize),
-+ Immediate(Smi::FromInt(1)));
-+}
-+
-+
-+void CallConstructStub::Generate(MacroAssembler* masm) {
-+ // eax : number of arguments
-+ // ebx : feedback vector
-+ // edx : slot in feedback vector (Smi, for RecordCallTarget)
-+ // edi : constructor function
-+
-+ Label non_function;
-+ // Check that function is not a smi.
-+ __ JumpIfSmi(edi, &non_function);
-+ // Check that function is a JSFunction.
-+ __ CmpObjectType(edi, JS_FUNCTION_TYPE, ecx);
-+ __ j(not_equal, &non_function);
-+
-+ GenerateRecordCallTarget(masm);
-+
-+ Label feedback_register_initialized;
-+ // Put the AllocationSite from the feedback vector into ebx, or undefined.
-+ __ mov(ebx, FieldOperand(ebx, edx, times_half_pointer_size,
-+ FixedArray::kHeaderSize));
-+ Handle<Map> allocation_site_map =
-+ isolate()->factory()->allocation_site_map();
-+ __ cmp(FieldOperand(ebx, 0), Immediate(allocation_site_map));
-+ __ j(equal, &feedback_register_initialized);
-+ __ mov(ebx, isolate()->factory()->undefined_value());
-+ __ bind(&feedback_register_initialized);
-+
-+ __ AssertUndefinedOrAllocationSite(ebx);
-+
-+ // Pass new target to construct stub.
-+ __ mov(edx, edi);
-+
-+ // Tail call to the function-specific construct stub (still in the caller
-+ // context at this point).
-+ __ mov(ecx, FieldOperand(edi, JSFunction::kSharedFunctionInfoOffset));
-+ __ mov(ecx, FieldOperand(ecx, SharedFunctionInfo::kConstructStubOffset));
-+ __ lea(ecx, FieldOperand(ecx, Code::kHeaderSize));
-+ __ jmp(ecx);
-+
-+ __ bind(&non_function);
-+ __ mov(edx, edi);
-+ __ Jump(isolate()->builtins()->Construct(), RelocInfo::CODE_TARGET);
-+}
-+
-+
-+bool CEntryStub::NeedsImmovableCode() {
-+ return false;
-+}
-+
-+
-+void CodeStub::GenerateStubsAheadOfTime(Isolate* isolate) {
-+ CEntryStub::GenerateAheadOfTime(isolate);
-+ StoreBufferOverflowStub::GenerateFixedRegStubsAheadOfTime(isolate);
-+ // It is important that the store buffer overflow stubs are generated first.
-+ CommonArrayConstructorStub::GenerateStubsAheadOfTime(isolate);
-+ CreateAllocationSiteStub::GenerateAheadOfTime(isolate);
-+ CreateWeakCellStub::GenerateAheadOfTime(isolate);
-+ StoreFastElementStub::GenerateAheadOfTime(isolate);
-+}
-+
-+
-+void CodeStub::GenerateFPStubs(Isolate* isolate) {
-+ CEntryStub save_doubles(isolate, 1, kSaveFPRegs);
-+ // Stubs might already be in the snapshot, detect that and don't regenerate,
-+ // which would lead to code stub initialization state being messed up.
-+ Code* save_doubles_code;
-+ if (!save_doubles.FindCodeInCache(&save_doubles_code)) {
-+ save_doubles_code = *(save_doubles.GetCode());
-+ }
-+}
-+
-+
-+void CEntryStub::GenerateAheadOfTime(Isolate* isolate) {
-+ CEntryStub stub(isolate, 1, kDontSaveFPRegs);
-+ stub.GetCode();
-+}
-+
-+
-+void CEntryStub::Generate(MacroAssembler* masm) {
-+ // eax: number of arguments including receiver
-+ // ebx: pointer to C function (C callee-saved)
-+ // ebp: frame pointer (restored after C call)
-+ // esp: stack pointer (restored after C call)
-+ // esi: current context (C callee-saved)
-+ // edi: JS function of the caller (C callee-saved)
-+ //
-+ // If argv_in_register():
-+ // ecx: pointer to the first argument
-+
-+ ProfileEntryHookStub::MaybeCallEntryHook(masm);
-+
-+ // Reserve space on the stack for the three arguments passed to the call. If
-+ // result size is greater than can be returned in registers, also reserve
-+ // space for the hidden argument for the result location, and space for the
-+ // result itself.
-+ int arg_stack_space = result_size() < 3 ? 3 : 4 + result_size();
-+
-+ // Enter the exit frame that transitions from JavaScript to C++.
-+ if (argv_in_register()) {
-+ DCHECK(!save_doubles());
-+ DCHECK(!is_builtin_exit());
-+ __ EnterApiExitFrame(arg_stack_space);
-+
-+ // Move argc and argv into the correct registers.
-+ __ mov(esi, ecx);
-+ __ mov(edi, eax);
-+ } else {
-+ __ EnterExitFrame(
-+ arg_stack_space, save_doubles(),
-+ is_builtin_exit() ? StackFrame::BUILTIN_EXIT : StackFrame::EXIT);
-+ }
-+
-+ // ebx: pointer to C function (C callee-saved)
-+ // ebp: frame pointer (restored after C call)
-+ // esp: stack pointer (restored after C call)
-+ // edi: number of arguments including receiver (C callee-saved)
-+ // esi: pointer to the first argument (C callee-saved)
-+
-+ // Result returned in eax, or eax+edx if result size is 2.
-+
-+ // Check stack alignment.
-+ if (FLAG_debug_code) {
-+ __ CheckStackAlignment();
-+ }
-+ // Call C function.
-+ if (result_size() <= 2) {
-+ __ mov(Operand(esp, 0 * kPointerSize), edi); // argc.
-+ __ mov(Operand(esp, 1 * kPointerSize), esi); // argv.
-+ __ mov(Operand(esp, 2 * kPointerSize),
-+ Immediate(ExternalReference::isolate_address(isolate())));
-+ } else {
-+ DCHECK_EQ(3, result_size());
-+ // Pass a pointer to the result location as the first argument.
-+ __ lea(eax, Operand(esp, 4 * kPointerSize));
-+ __ mov(Operand(esp, 0 * kPointerSize), eax);
-+ __ mov(Operand(esp, 1 * kPointerSize), edi); // argc.
-+ __ mov(Operand(esp, 2 * kPointerSize), esi); // argv.
-+ __ mov(Operand(esp, 3 * kPointerSize),
-+ Immediate(ExternalReference::isolate_address(isolate())));
-+ }
-+ __ call(ebx);
-+
-+ if (result_size() > 2) {
-+ DCHECK_EQ(3, result_size());
-+#ifndef _WIN32
-+ // Restore the "hidden" argument on the stack which was popped by caller.
-+ __ sub(esp, Immediate(kPointerSize));
-+#endif
-+ // Read result values stored on stack. Result is stored above the arguments.
-+ __ mov(kReturnRegister0, Operand(esp, 4 * kPointerSize));
-+ __ mov(kReturnRegister1, Operand(esp, 5 * kPointerSize));
-+ __ mov(kReturnRegister2, Operand(esp, 6 * kPointerSize));
-+ }
-+ // Result is in eax, edx:eax or edi:edx:eax - do not destroy these registers!
-+
-+ // Check result for exception sentinel.
-+ Label exception_returned;
-+ __ cmp(eax, isolate()->factory()->exception());
-+ __ j(equal, &exception_returned);
-+
-+ // Check that there is no pending exception, otherwise we
-+ // should have returned the exception sentinel.
-+ if (FLAG_debug_code) {
-+ __ push(edx);
-+ __ mov(edx, Immediate(isolate()->factory()->the_hole_value()));
-+ Label okay;
-+ ExternalReference pending_exception_address(
-+ IsolateAddressId::kPendingExceptionAddress, isolate());
-+ __ cmp(edx, Operand::StaticVariable(pending_exception_address));
-+ // Cannot use check here as it attempts to generate call into runtime.
-+ __ j(equal, &okay, Label::kNear);
-+ __ int3();
-+ __ bind(&okay);
-+ __ pop(edx);
-+ }
-+
-+ // Exit the JavaScript to C++ exit frame.
-+ __ LeaveExitFrame(save_doubles(), !argv_in_register());
-+ __ ret(0);
-+
-+ // Handling of exception.
-+ __ bind(&exception_returned);
-+
-+ ExternalReference pending_handler_context_address(
-+ IsolateAddressId::kPendingHandlerContextAddress, isolate());
-+ ExternalReference pending_handler_code_address(
-+ IsolateAddressId::kPendingHandlerCodeAddress, isolate());
-+ ExternalReference pending_handler_offset_address(
-+ IsolateAddressId::kPendingHandlerOffsetAddress, isolate());
-+ ExternalReference pending_handler_fp_address(
-+ IsolateAddressId::kPendingHandlerFPAddress, isolate());
-+ ExternalReference pending_handler_sp_address(
-+ IsolateAddressId::kPendingHandlerSPAddress, isolate());
-+
-+ // Ask the runtime for help to determine the handler. This will set eax to
-+ // contain the current pending exception, don't clobber it.
-+ ExternalReference find_handler(Runtime::kUnwindAndFindExceptionHandler,
-+ isolate());
-+ {
-+ FrameScope scope(masm, StackFrame::MANUAL);
-+ __ PrepareCallCFunction(3, eax);
-+ __ mov(Operand(esp, 0 * kPointerSize), Immediate(0)); // argc.
-+ __ mov(Operand(esp, 1 * kPointerSize), Immediate(0)); // argv.
-+ __ mov(Operand(esp, 2 * kPointerSize),
-+ Immediate(ExternalReference::isolate_address(isolate())));
-+ __ CallCFunction(find_handler, 3);
-+ }
-+
-+ // Retrieve the handler context, SP and FP.
-+ __ mov(esi, Operand::StaticVariable(pending_handler_context_address));
-+ __ mov(esp, Operand::StaticVariable(pending_handler_sp_address));
-+ __ mov(ebp, Operand::StaticVariable(pending_handler_fp_address));
-+
-+ // If the handler is a JS frame, restore the context to the frame. Note that
-+ // the context will be set to (esi == 0) for non-JS frames.
-+ Label skip;
-+ __ test(esi, esi);
-+ __ j(zero, &skip, Label::kNear);
-+ __ mov(Operand(ebp, StandardFrameConstants::kContextOffset), esi);
-+ __ bind(&skip);
-+
-+ // Compute the handler entry address and jump to it.
-+ __ mov(edi, Operand::StaticVariable(pending_handler_code_address));
-+ __ mov(edx, Operand::StaticVariable(pending_handler_offset_address));
-+ // Check whether it's a turbofanned exception handler code before jump to it.
-+ Label not_turbo;
-+ __ push(eax);
-+ __ mov(eax, Operand(edi, Code::kKindSpecificFlags1Offset - kHeapObjectTag));
-+ __ and_(eax, Immediate(1 << Code::kIsTurbofannedBit));
-+ __ j(zero, &not_turbo);
-+ __ fninit();
-+ __ fld1();
-+ __ bind(&not_turbo);
-+ __ pop(eax);
-+ __ lea(edi, FieldOperand(edi, edx, times_1, Code::kHeaderSize));
-+ __ jmp(edi);
-+}
-+
-+
-+void JSEntryStub::Generate(MacroAssembler* masm) {
-+ Label invoke, handler_entry, exit;
-+ Label not_outermost_js, not_outermost_js_2;
-+
-+ ProfileEntryHookStub::MaybeCallEntryHook(masm);
-+
-+ // Set up frame.
-+ __ push(ebp);
-+ __ mov(ebp, esp);
-+
-+ // Push marker in two places.
-+ int marker = type();
-+ __ push(Immediate(Smi::FromInt(marker))); // marker
-+ ExternalReference context_address(IsolateAddressId::kContextAddress,
-+ isolate());
-+ __ push(Operand::StaticVariable(context_address)); // context
-+ // Save callee-saved registers (C calling conventions).
-+ __ push(edi);
-+ __ push(esi);
-+ __ push(ebx);
-+
-+ // Save copies of the top frame descriptor on the stack.
-+ ExternalReference c_entry_fp(IsolateAddressId::kCEntryFPAddress, isolate());
-+ __ push(Operand::StaticVariable(c_entry_fp));
-+
-+ // If this is the outermost JS call, set js_entry_sp value.
-+ ExternalReference js_entry_sp(IsolateAddressId::kJSEntrySPAddress, isolate());
-+ __ cmp(Operand::StaticVariable(js_entry_sp), Immediate(0));
-+ __ j(not_equal, &not_outermost_js, Label::kNear);
-+ __ mov(Operand::StaticVariable(js_entry_sp), ebp);
-+ __ push(Immediate(Smi::FromInt(StackFrame::OUTERMOST_JSENTRY_FRAME)));
-+ __ jmp(&invoke, Label::kNear);
-+ __ bind(&not_outermost_js);
-+ __ push(Immediate(Smi::FromInt(StackFrame::INNER_JSENTRY_FRAME)));
-+
-+ // Jump to a faked try block that does the invoke, with a faked catch
-+ // block that sets the pending exception.
-+ __ jmp(&invoke);
-+ __ bind(&handler_entry);
-+ handler_offset_ = handler_entry.pos();
-+ // Caught exception: Store result (exception) in the pending exception
-+ // field in the JSEnv and return a failure sentinel.
-+ ExternalReference pending_exception(
-+ IsolateAddressId::kPendingExceptionAddress, isolate());
-+ __ mov(Operand::StaticVariable(pending_exception), eax);
-+ __ mov(eax, Immediate(isolate()->factory()->exception()));
-+ __ jmp(&exit);
-+
-+ // Invoke: Link this frame into the handler chain.
-+ __ bind(&invoke);
-+ __ PushStackHandler();
-+
-+ // Fake a receiver (NULL).
-+ __ push(Immediate(0)); // receiver
-+
-+ // Invoke the function by calling through JS entry trampoline builtin and
-+ // pop the faked function when we return. Notice that we cannot store a
-+ // reference to the trampoline code directly in this stub, because the
-+ // builtin stubs may not have been generated yet.
-+ if (type() == StackFrame::ENTRY_CONSTRUCT) {
-+ ExternalReference construct_entry(Builtins::kJSConstructEntryTrampoline,
-+ isolate());
-+ __ mov(edx, Immediate(construct_entry));
-+ } else {
-+ ExternalReference entry(Builtins::kJSEntryTrampoline, isolate());
-+ __ mov(edx, Immediate(entry));
-+ }
-+ __ mov(edx, Operand(edx, 0)); // deref address
-+ __ lea(edx, FieldOperand(edx, Code::kHeaderSize));
-+ __ call(edx);
-+
-+ // Unlink this frame from the handler chain.
-+ __ PopStackHandler();
-+
-+ __ bind(&exit);
-+ // Check if the current stack frame is marked as the outermost JS frame.
-+ __ pop(ebx);
-+ __ cmp(ebx, Immediate(Smi::FromInt(StackFrame::OUTERMOST_JSENTRY_FRAME)));
-+ __ j(not_equal, &not_outermost_js_2);
-+ __ mov(Operand::StaticVariable(js_entry_sp), Immediate(0));
-+ __ bind(&not_outermost_js_2);
-+
-+ // Restore the top frame descriptor from the stack.
-+ __ pop(Operand::StaticVariable(
-+ ExternalReference(IsolateAddressId::kCEntryFPAddress, isolate())));
-+
-+ // Restore callee-saved registers (C calling conventions).
-+ __ pop(ebx);
-+ __ pop(esi);
-+ __ pop(edi);
-+ __ add(esp, Immediate(2 * kPointerSize)); // remove markers
-+
-+ // Restore frame pointer and return.
-+ __ pop(ebp);
-+ __ ret(0);
-+}
-+
-+
-+// -------------------------------------------------------------------------
-+// StringCharCodeAtGenerator
-+
-+void StringCharCodeAtGenerator::GenerateFast(MacroAssembler* masm) {
-+ // If the receiver is a smi trigger the non-string case.
-+ if (check_mode_ == RECEIVER_IS_UNKNOWN) {
-+ __ JumpIfSmi(object_, receiver_not_string_);
-+
-+ // Fetch the instance type of the receiver into result register.
-+ __ mov(result_, FieldOperand(object_, HeapObject::kMapOffset));
-+ __ movzx_b(result_, FieldOperand(result_, Map::kInstanceTypeOffset));
-+ // If the receiver is not a string trigger the non-string case.
-+ __ test(result_, Immediate(kIsNotStringMask));
-+ __ j(not_zero, receiver_not_string_);
-+ }
-+
-+ // If the index is non-smi trigger the non-smi case.
-+ __ JumpIfNotSmi(index_, &index_not_smi_);
-+ __ bind(&got_smi_index_);
-+
-+ // Check for index out of range.
-+ __ cmp(index_, FieldOperand(object_, String::kLengthOffset));
-+ __ j(above_equal, index_out_of_range_);
-+
-+ __ SmiUntag(index_);
-+
-+ Factory* factory = masm->isolate()->factory();
-+ StringCharLoadGenerator::Generate(
-+ masm, factory, object_, index_, result_, &call_runtime_);
-+
-+ __ SmiTag(result_);
-+ __ bind(&exit_);
-+}
-+
-+
-+void StringCharCodeAtGenerator::GenerateSlow(
-+ MacroAssembler* masm, EmbedMode embed_mode,
-+ const RuntimeCallHelper& call_helper) {
-+ __ Abort(kUnexpectedFallthroughToCharCodeAtSlowCase);
-+
-+ // Index is not a smi.
-+ __ bind(&index_not_smi_);
-+ // If index is a heap number, try converting it to an integer.
-+ __ CheckMap(index_,
-+ masm->isolate()->factory()->heap_number_map(),
-+ index_not_number_,
-+ DONT_DO_SMI_CHECK);
-+ call_helper.BeforeCall(masm);
-+ if (embed_mode == PART_OF_IC_HANDLER) {
-+ __ push(LoadWithVectorDescriptor::VectorRegister());
-+ __ push(LoadDescriptor::SlotRegister());
-+ }
-+ __ push(object_);
-+ __ push(index_); // Consumed by runtime conversion function.
-+ __ CallRuntime(Runtime::kNumberToSmi);
-+ if (!index_.is(eax)) {
-+ // Save the conversion result before the pop instructions below
-+ // have a chance to overwrite it.
-+ __ mov(index_, eax);
-+ }
-+ __ pop(object_);
-+ if (embed_mode == PART_OF_IC_HANDLER) {
-+ __ pop(LoadDescriptor::SlotRegister());
-+ __ pop(LoadWithVectorDescriptor::VectorRegister());
-+ }
-+ // Reload the instance type.
-+ __ mov(result_, FieldOperand(object_, HeapObject::kMapOffset));
-+ __ movzx_b(result_, FieldOperand(result_, Map::kInstanceTypeOffset));
-+ call_helper.AfterCall(masm);
-+ // If index is still not a smi, it must be out of range.
-+ STATIC_ASSERT(kSmiTag == 0);
-+ __ JumpIfNotSmi(index_, index_out_of_range_);
-+ // Otherwise, return to the fast path.
-+ __ jmp(&got_smi_index_);
-+
-+ // Call runtime. We get here when the receiver is a string and the
-+ // index is a number, but the code of getting the actual character
-+ // is too complex (e.g., when the string needs to be flattened).
-+ __ bind(&call_runtime_);
-+ call_helper.BeforeCall(masm);
-+ __ push(object_);
-+ __ SmiTag(index_);
-+ __ push(index_);
-+ __ CallRuntime(Runtime::kStringCharCodeAtRT);
-+ if (!result_.is(eax)) {
-+ __ mov(result_, eax);
-+ }
-+ call_helper.AfterCall(masm);
-+ __ jmp(&exit_);
-+
-+ __ Abort(kUnexpectedFallthroughFromCharCodeAtSlowCase);
-+}
-+
-+void StringHelper::GenerateFlatOneByteStringEquals(MacroAssembler* masm,
-+ Register left,
-+ Register right,
-+ Register scratch1,
-+ Register scratch2) {
-+ Register length = scratch1;
-+
-+ // Compare lengths.
-+ Label strings_not_equal, check_zero_length;
-+ __ mov(length, FieldOperand(left, String::kLengthOffset));
-+ __ cmp(length, FieldOperand(right, String::kLengthOffset));
-+ __ j(equal, &check_zero_length, Label::kNear);
-+ __ bind(&strings_not_equal);
-+ __ Move(eax, Immediate(Smi::FromInt(NOT_EQUAL)));
-+ __ ret(0);
-+
-+ // Check if the length is zero.
-+ Label compare_chars;
-+ __ bind(&check_zero_length);
-+ STATIC_ASSERT(kSmiTag == 0);
-+ __ test(length, length);
-+ __ j(not_zero, &compare_chars, Label::kNear);
-+ __ Move(eax, Immediate(Smi::FromInt(EQUAL)));
-+ __ ret(0);
-+
-+ // Compare characters.
-+ __ bind(&compare_chars);
-+ GenerateOneByteCharsCompareLoop(masm, left, right, length, scratch2,
-+ &strings_not_equal, Label::kNear);
-+
-+ // Characters are equal.
-+ __ Move(eax, Immediate(Smi::FromInt(EQUAL)));
-+ __ ret(0);
-+}
-+
-+
-+void StringHelper::GenerateCompareFlatOneByteStrings(
-+ MacroAssembler* masm, Register left, Register right, Register scratch1,
-+ Register scratch2, Register scratch3) {
-+ Counters* counters = masm->isolate()->counters();
-+ __ IncrementCounter(counters->string_compare_native(), 1);
-+
-+ // Find minimum length.
-+ Label left_shorter;
-+ __ mov(scratch1, FieldOperand(left, String::kLengthOffset));
-+ __ mov(scratch3, scratch1);
-+ __ sub(scratch3, FieldOperand(right, String::kLengthOffset));
-+
-+ Register length_delta = scratch3;
-+
-+ __ j(less_equal, &left_shorter, Label::kNear);
-+ // Right string is shorter. Change scratch1 to be length of right string.
-+ __ sub(scratch1, length_delta);
-+ __ bind(&left_shorter);
-+
-+ Register min_length = scratch1;
-+
-+ // If either length is zero, just compare lengths.
-+ Label compare_lengths;
-+ __ test(min_length, min_length);
-+ __ j(zero, &compare_lengths, Label::kNear);
-+
-+ // Compare characters.
-+ Label result_not_equal;
-+ GenerateOneByteCharsCompareLoop(masm, left, right, min_length, scratch2,
-+ &result_not_equal, Label::kNear);
-+
-+ // Compare lengths - strings up to min-length are equal.
-+ __ bind(&compare_lengths);
-+ __ test(length_delta, length_delta);
-+ Label length_not_equal;
-+ __ j(not_zero, &length_not_equal, Label::kNear);
-+
-+ // Result is EQUAL.
-+ STATIC_ASSERT(EQUAL == 0);
-+ STATIC_ASSERT(kSmiTag == 0);
-+ __ Move(eax, Immediate(Smi::FromInt(EQUAL)));
-+ __ ret(0);
-+
-+ Label result_greater;
-+ Label result_less;
-+ __ bind(&length_not_equal);
-+ __ j(greater, &result_greater, Label::kNear);
-+ __ jmp(&result_less, Label::kNear);
-+ __ bind(&result_not_equal);
-+ __ j(above, &result_greater, Label::kNear);
-+ __ bind(&result_less);
-+
-+ // Result is LESS.
-+ __ Move(eax, Immediate(Smi::FromInt(LESS)));
-+ __ ret(0);
-+
-+ // Result is GREATER.
-+ __ bind(&result_greater);
-+ __ Move(eax, Immediate(Smi::FromInt(GREATER)));
-+ __ ret(0);
-+}
-+
-+
-+void StringHelper::GenerateOneByteCharsCompareLoop(
-+ MacroAssembler* masm, Register left, Register right, Register length,
-+ Register scratch, Label* chars_not_equal,
-+ Label::Distance chars_not_equal_near) {
-+ // Change index to run from -length to -1 by adding length to string
-+ // start. This means that loop ends when index reaches zero, which
-+ // doesn't need an additional compare.
-+ __ SmiUntag(length);
-+ __ lea(left,
-+ FieldOperand(left, length, times_1, SeqOneByteString::kHeaderSize));
-+ __ lea(right,
-+ FieldOperand(right, length, times_1, SeqOneByteString::kHeaderSize));
-+ __ neg(length);
-+ Register index = length; // index = -length;
-+
-+ // Compare loop.
-+ Label loop;
-+ __ bind(&loop);
-+ __ mov_b(scratch, Operand(left, index, times_1, 0));
-+ __ cmpb(scratch, Operand(right, index, times_1, 0));
-+ __ j(not_equal, chars_not_equal, chars_not_equal_near);
-+ __ inc(index);
-+ __ j(not_zero, &loop);
-+}
-+
-+
-+void CompareICStub::GenerateBooleans(MacroAssembler* masm) {
-+ DCHECK_EQ(CompareICState::BOOLEAN, state());
-+ Label miss;
-+ Label::Distance const miss_distance =
-+ masm->emit_debug_code() ? Label::kFar : Label::kNear;
-+
-+ __ JumpIfSmi(edx, &miss, miss_distance);
-+ __ mov(ecx, FieldOperand(edx, HeapObject::kMapOffset));
-+ __ JumpIfSmi(eax, &miss, miss_distance);
-+ __ mov(ebx, FieldOperand(eax, HeapObject::kMapOffset));
-+ __ JumpIfNotRoot(ecx, Heap::kBooleanMapRootIndex, &miss, miss_distance);
-+ __ JumpIfNotRoot(ebx, Heap::kBooleanMapRootIndex, &miss, miss_distance);
-+ if (!Token::IsEqualityOp(op())) {
-+ __ mov(eax, FieldOperand(eax, Oddball::kToNumberOffset));
-+ __ AssertSmi(eax);
-+ __ mov(edx, FieldOperand(edx, Oddball::kToNumberOffset));
-+ __ AssertSmi(edx);
-+ __ xchg(eax, edx);
-+ }
-+ __ sub(eax, edx);
-+ __ Ret();
-+
-+ __ bind(&miss);
-+ GenerateMiss(masm);
-+}
-+
-+
-+void CompareICStub::GenerateSmis(MacroAssembler* masm) {
-+ DCHECK(state() == CompareICState::SMI);
-+ Label miss;
-+ __ mov(ecx, edx);
-+ __ or_(ecx, eax);
-+ __ JumpIfNotSmi(ecx, &miss, Label::kNear);
-+
-+ if (GetCondition() == equal) {
-+ // For equality we do not care about the sign of the result.
-+ __ sub(eax, edx);
-+ } else {
-+ Label done;
-+ __ sub(edx, eax);
-+ __ j(no_overflow, &done, Label::kNear);
-+ // Correct sign of result in case of overflow.
-+ __ not_(edx);
-+ __ bind(&done);
-+ __ mov(eax, edx);
-+ }
-+ __ ret(0);
-+
-+ __ bind(&miss);
-+ GenerateMiss(masm);
-+}
-+
-+
-+void CompareICStub::GenerateNumbers(MacroAssembler* masm) {
-+ DCHECK(state() == CompareICState::NUMBER);
-+
-+ Label generic_stub, check_left;
-+ Label unordered, maybe_undefined1, maybe_undefined2;
-+ Label miss;
-+
-+ if (left() == CompareICState::SMI) {
-+ __ JumpIfNotSmi(edx, &miss);
-+ }
-+ if (right() == CompareICState::SMI) {
-+ __ JumpIfNotSmi(eax, &miss);
-+ }
-+
-+ // Inlining the double comparison and falling back to the general compare
-+ // stub if NaN is involved or SSE2 or CMOV is unsupported.
-+ __ JumpIfSmi(eax, &check_left, Label::kNear);
-+ __ cmp(FieldOperand(eax, HeapObject::kMapOffset),
-+ isolate()->factory()->heap_number_map());
-+ __ j(not_equal, &maybe_undefined1, Label::kNear);
-+
-+ __ bind(&check_left);
-+ __ JumpIfSmi(edx, &generic_stub, Label::kNear);
-+ __ cmp(FieldOperand(edx, HeapObject::kMapOffset),
-+ isolate()->factory()->heap_number_map());
-+ __ j(not_equal, &maybe_undefined2, Label::kNear);
-+
-+ __ bind(&unordered);
-+ __ bind(&generic_stub);
-+ CompareICStub stub(isolate(), op(), CompareICState::GENERIC,
-+ CompareICState::GENERIC, CompareICState::GENERIC);
-+ __ jmp(stub.GetCode(), RelocInfo::CODE_TARGET);
-+
-+ __ bind(&maybe_undefined1);
-+ if (Token::IsOrderedRelationalCompareOp(op())) {
-+ __ cmp(eax, Immediate(isolate()->factory()->undefined_value()));
-+ __ j(not_equal, &miss);
-+ __ JumpIfSmi(edx, &unordered);
-+ __ CmpObjectType(edx, HEAP_NUMBER_TYPE, ecx);
-+ __ j(not_equal, &maybe_undefined2, Label::kNear);
-+ __ jmp(&unordered);
-+ }
-+
-+ __ bind(&maybe_undefined2);
-+ if (Token::IsOrderedRelationalCompareOp(op())) {
-+ __ cmp(edx, Immediate(isolate()->factory()->undefined_value()));
-+ __ j(equal, &unordered);
-+ }
-+
-+ __ bind(&miss);
-+ GenerateMiss(masm);
-+}
-+
-+
-+void CompareICStub::GenerateInternalizedStrings(MacroAssembler* masm) {
-+ DCHECK(state() == CompareICState::INTERNALIZED_STRING);
-+ DCHECK(GetCondition() == equal);
-+
-+ // Registers containing left and right operands respectively.
-+ Register left = edx;
-+ Register right = eax;
-+ Register tmp1 = ecx;
-+ Register tmp2 = ebx;
-+
-+ // Check that both operands are heap objects.
-+ Label miss;
-+ __ mov(tmp1, left);
-+ STATIC_ASSERT(kSmiTag == 0);
-+ __ and_(tmp1, right);
-+ __ JumpIfSmi(tmp1, &miss, Label::kNear);
-+
-+ // Check that both operands are internalized strings.
-+ __ mov(tmp1, FieldOperand(left, HeapObject::kMapOffset));
-+ __ mov(tmp2, FieldOperand(right, HeapObject::kMapOffset));
-+ __ movzx_b(tmp1, FieldOperand(tmp1, Map::kInstanceTypeOffset));
-+ __ movzx_b(tmp2, FieldOperand(tmp2, Map::kInstanceTypeOffset));
-+ STATIC_ASSERT(kInternalizedTag == 0 && kStringTag == 0);
-+ __ or_(tmp1, tmp2);
-+ __ test(tmp1, Immediate(kIsNotStringMask | kIsNotInternalizedMask));
-+ __ j(not_zero, &miss, Label::kNear);
-+
-+ // Internalized strings are compared by identity.
-+ Label done;
-+ __ cmp(left, right);
-+ // Make sure eax is non-zero. At this point input operands are
-+ // guaranteed to be non-zero.
-+ DCHECK(right.is(eax));
-+ __ j(not_equal, &done, Label::kNear);
-+ STATIC_ASSERT(EQUAL == 0);
-+ STATIC_ASSERT(kSmiTag == 0);
-+ __ Move(eax, Immediate(Smi::FromInt(EQUAL)));
-+ __ bind(&done);
-+ __ ret(0);
-+
-+ __ bind(&miss);
-+ GenerateMiss(masm);
-+}
-+
-+
-+void CompareICStub::GenerateUniqueNames(MacroAssembler* masm) {
-+ DCHECK(state() == CompareICState::UNIQUE_NAME);
-+ DCHECK(GetCondition() == equal);
-+
-+ // Registers containing left and right operands respectively.
-+ Register left = edx;
-+ Register right = eax;
-+ Register tmp1 = ecx;
-+ Register tmp2 = ebx;
-+
-+ // Check that both operands are heap objects.
-+ Label miss;
-+ __ mov(tmp1, left);
-+ STATIC_ASSERT(kSmiTag == 0);
-+ __ and_(tmp1, right);
-+ __ JumpIfSmi(tmp1, &miss, Label::kNear);
-+
-+ // Check that both operands are unique names. This leaves the instance
-+ // types loaded in tmp1 and tmp2.
-+ __ mov(tmp1, FieldOperand(left, HeapObject::kMapOffset));
-+ __ mov(tmp2, FieldOperand(right, HeapObject::kMapOffset));
-+ __ movzx_b(tmp1, FieldOperand(tmp1, Map::kInstanceTypeOffset));
-+ __ movzx_b(tmp2, FieldOperand(tmp2, Map::kInstanceTypeOffset));
-+
-+ __ JumpIfNotUniqueNameInstanceType(tmp1, &miss, Label::kNear);
-+ __ JumpIfNotUniqueNameInstanceType(tmp2, &miss, Label::kNear);
-+
-+ // Unique names are compared by identity.
-+ Label done;
-+ __ cmp(left, right);
-+ // Make sure eax is non-zero. At this point input operands are
-+ // guaranteed to be non-zero.
-+ DCHECK(right.is(eax));
-+ __ j(not_equal, &done, Label::kNear);
-+ STATIC_ASSERT(EQUAL == 0);
-+ STATIC_ASSERT(kSmiTag == 0);
-+ __ Move(eax, Immediate(Smi::FromInt(EQUAL)));
-+ __ bind(&done);
-+ __ ret(0);
-+
-+ __ bind(&miss);
-+ GenerateMiss(masm);
-+}
-+
-+
-+void CompareICStub::GenerateStrings(MacroAssembler* masm) {
-+ DCHECK(state() == CompareICState::STRING);
-+ Label miss;
-+
-+ bool equality = Token::IsEqualityOp(op());
-+
-+ // Registers containing left and right operands respectively.
-+ Register left = edx;
-+ Register right = eax;
-+ Register tmp1 = ecx;
-+ Register tmp2 = ebx;
-+ Register tmp3 = edi;
-+
-+ // Check that both operands are heap objects.
-+ __ mov(tmp1, left);
-+ STATIC_ASSERT(kSmiTag == 0);
-+ __ and_(tmp1, right);
-+ __ JumpIfSmi(tmp1, &miss);
-+
-+ // Check that both operands are strings. This leaves the instance
-+ // types loaded in tmp1 and tmp2.
-+ __ mov(tmp1, FieldOperand(left, HeapObject::kMapOffset));
-+ __ mov(tmp2, FieldOperand(right, HeapObject::kMapOffset));
-+ __ movzx_b(tmp1, FieldOperand(tmp1, Map::kInstanceTypeOffset));
-+ __ movzx_b(tmp2, FieldOperand(tmp2, Map::kInstanceTypeOffset));
-+ __ mov(tmp3, tmp1);
-+ STATIC_ASSERT(kNotStringTag != 0);
-+ __ or_(tmp3, tmp2);
-+ __ test(tmp3, Immediate(kIsNotStringMask));
-+ __ j(not_zero, &miss);
-+
-+ // Fast check for identical strings.
-+ Label not_same;
-+ __ cmp(left, right);
-+ __ j(not_equal, &not_same, Label::kNear);
-+ STATIC_ASSERT(EQUAL == 0);
-+ STATIC_ASSERT(kSmiTag == 0);
-+ __ Move(eax, Immediate(Smi::FromInt(EQUAL)));
-+ __ ret(0);
-+
-+ // Handle not identical strings.
-+ __ bind(&not_same);
-+
-+ // Check that both strings are internalized. If they are, we're done
-+ // because we already know they are not identical. But in the case of
-+ // non-equality compare, we still need to determine the order. We
-+ // also know they are both strings.
-+ if (equality) {
-+ Label do_compare;
-+ STATIC_ASSERT(kInternalizedTag == 0);
-+ __ or_(tmp1, tmp2);
-+ __ test(tmp1, Immediate(kIsNotInternalizedMask));
-+ __ j(not_zero, &do_compare, Label::kNear);
-+ // Make sure eax is non-zero. At this point input operands are
-+ // guaranteed to be non-zero.
-+ DCHECK(right.is(eax));
-+ __ ret(0);
-+ __ bind(&do_compare);
-+ }
-+
-+ // Check that both strings are sequential one-byte.
-+ Label runtime;
-+ __ JumpIfNotBothSequentialOneByteStrings(left, right, tmp1, tmp2, &runtime);
-+
-+ // Compare flat one byte strings. Returns when done.
-+ if (equality) {
-+ StringHelper::GenerateFlatOneByteStringEquals(masm, left, right, tmp1,
-+ tmp2);
-+ } else {
-+ StringHelper::GenerateCompareFlatOneByteStrings(masm, left, right, tmp1,
-+ tmp2, tmp3);
-+ }
-+
-+ // Handle more complex cases in runtime.
-+ __ bind(&runtime);
-+ if (equality) {
-+ {
-+ FrameScope scope(masm, StackFrame::INTERNAL);
-+ __ Push(left);
-+ __ Push(right);
-+ __ CallRuntime(Runtime::kStringEqual);
-+ }
-+ __ sub(eax, Immediate(masm->isolate()->factory()->true_value()));
-+ __ Ret();
-+ } else {
-+ __ pop(tmp1); // Return address.
-+ __ push(left);
-+ __ push(right);
-+ __ push(tmp1);
-+ __ TailCallRuntime(Runtime::kStringCompare);
-+ }
-+
-+ __ bind(&miss);
-+ GenerateMiss(masm);
-+}
-+
-+
-+void CompareICStub::GenerateReceivers(MacroAssembler* masm) {
-+ DCHECK_EQ(CompareICState::RECEIVER, state());
-+ Label miss;
-+ __ mov(ecx, edx);
-+ __ and_(ecx, eax);
-+ __ JumpIfSmi(ecx, &miss, Label::kNear);
-+
-+ STATIC_ASSERT(LAST_TYPE == LAST_JS_RECEIVER_TYPE);
-+ __ CmpObjectType(eax, FIRST_JS_RECEIVER_TYPE, ecx);
-+ __ j(below, &miss, Label::kNear);
-+ __ CmpObjectType(edx, FIRST_JS_RECEIVER_TYPE, ecx);
-+ __ j(below, &miss, Label::kNear);
-+
-+ DCHECK_EQ(equal, GetCondition());
-+ __ sub(eax, edx);
-+ __ ret(0);
-+
-+ __ bind(&miss);
-+ GenerateMiss(masm);
-+}
-+
-+
-+void CompareICStub::GenerateKnownReceivers(MacroAssembler* masm) {
-+ Label miss;
-+ Handle<WeakCell> cell = Map::WeakCellForMap(known_map_);
-+ __ mov(ecx, edx);
-+ __ and_(ecx, eax);
-+ __ JumpIfSmi(ecx, &miss, Label::kNear);
-+
-+ __ GetWeakValue(edi, cell);
-+ __ cmp(edi, FieldOperand(eax, HeapObject::kMapOffset));
-+ __ j(not_equal, &miss, Label::kNear);
-+ __ cmp(edi, FieldOperand(edx, HeapObject::kMapOffset));
-+ __ j(not_equal, &miss, Label::kNear);
-+
-+ if (Token::IsEqualityOp(op())) {
-+ __ sub(eax, edx);
-+ __ ret(0);
-+ } else {
-+ __ PopReturnAddressTo(ecx);
-+ __ Push(edx);
-+ __ Push(eax);
-+ __ Push(Immediate(Smi::FromInt(NegativeComparisonResult(GetCondition()))));
-+ __ PushReturnAddressFrom(ecx);
-+ __ TailCallRuntime(Runtime::kCompare);
-+ }
-+
-+ __ bind(&miss);
-+ GenerateMiss(masm);
-+}
-+
-+
-+void CompareICStub::GenerateMiss(MacroAssembler* masm) {
-+ {
-+ // Call the runtime system in a fresh internal frame.
-+ FrameScope scope(masm, StackFrame::INTERNAL);
-+ __ push(edx); // Preserve edx and eax.
-+ __ push(eax);
-+ __ push(edx); // And also use them as the arguments.
-+ __ push(eax);
-+ __ push(Immediate(Smi::FromInt(op())));
-+ __ CallRuntime(Runtime::kCompareIC_Miss);
-+ // Compute the entry point of the rewritten stub.
-+ __ lea(edi, FieldOperand(eax, Code::kHeaderSize));
-+ __ pop(eax);
-+ __ pop(edx);
-+ }
-+
-+ // Do a tail call to the rewritten stub.
-+ __ jmp(edi);
-+}
-+
-+
-+// Helper function used to check that the dictionary doesn't contain
-+// the property. This function may return false negatives, so miss_label
-+// must always call a backup property check that is complete.
-+// This function is safe to call if the receiver has fast properties.
-+// Name must be a unique name and receiver must be a heap object.
-+void NameDictionaryLookupStub::GenerateNegativeLookup(MacroAssembler* masm,
-+ Label* miss,
-+ Label* done,
-+ Register properties,
-+ Handle<Name> name,
-+ Register r0) {
-+ DCHECK(name->IsUniqueName());
-+
-+ // If names of slots in range from 1 to kProbes - 1 for the hash value are
-+ // not equal to the name and kProbes-th slot is not used (its name is the
-+ // undefined value), it guarantees the hash table doesn't contain the
-+ // property. It's true even if some slots represent deleted properties
-+ // (their names are the hole value).
-+ for (int i = 0; i < kInlinedProbes; i++) {
-+ // Compute the masked index: (hash + i + i * i) & mask.
-+ Register index = r0;
-+ // Capacity is smi 2^n.
-+ __ mov(index, FieldOperand(properties, kCapacityOffset));
-+ __ dec(index);
-+ __ and_(index,
-+ Immediate(Smi::FromInt(name->Hash() +
-+ NameDictionary::GetProbeOffset(i))));
-+
-+ // Scale the index by multiplying by the entry size.
-+ STATIC_ASSERT(NameDictionary::kEntrySize == 3);
-+ __ lea(index, Operand(index, index, times_2, 0)); // index *= 3.
-+ Register entity_name = r0;
-+ // Having undefined at this place means the name is not contained.
-+ STATIC_ASSERT(kSmiTagSize == 1);
-+ __ mov(entity_name, Operand(properties, index, times_half_pointer_size,
-+ kElementsStartOffset - kHeapObjectTag));
-+ __ cmp(entity_name, masm->isolate()->factory()->undefined_value());
-+ __ j(equal, done);
-+
-+ // Stop if found the property.
-+ __ cmp(entity_name, Handle<Name>(name));
-+ __ j(equal, miss);
-+
-+ Label good;
-+ // Check for the hole and skip.
-+ __ cmp(entity_name, masm->isolate()->factory()->the_hole_value());
-+ __ j(equal, &good, Label::kNear);
-+
-+ // Check if the entry name is not a unique name.
-+ __ mov(entity_name, FieldOperand(entity_name, HeapObject::kMapOffset));
-+ __ JumpIfNotUniqueNameInstanceType(
-+ FieldOperand(entity_name, Map::kInstanceTypeOffset), miss);
-+ __ bind(&good);
-+ }
-+
-+ NameDictionaryLookupStub stub(masm->isolate(), properties, r0, r0,
-+ NEGATIVE_LOOKUP);
-+ __ push(Immediate(name));
-+ __ push(Immediate(name->Hash()));
-+ __ CallStub(&stub);
-+ __ test(r0, r0);
-+ __ j(not_zero, miss);
-+ __ jmp(done);
-+}
-+
-+void NameDictionaryLookupStub::Generate(MacroAssembler* masm) {
-+ // This stub overrides SometimesSetsUpAFrame() to return false. That means
-+ // we cannot call anything that could cause a GC from this stub.
-+ // Stack frame on entry:
-+ // esp[0 * kPointerSize]: return address.
-+ // esp[1 * kPointerSize]: key's hash.
-+ // esp[2 * kPointerSize]: key.
-+ // Registers:
-+ // dictionary_: NameDictionary to probe.
-+ // result_: used as scratch.
-+ // index_: will hold an index of entry if lookup is successful.
-+ // might alias with result_.
-+ // Returns:
-+ // result_ is zero if lookup failed, non zero otherwise.
-+
-+ Label in_dictionary, maybe_in_dictionary, not_in_dictionary;
-+
-+ Register scratch = result();
-+
-+ __ mov(scratch, FieldOperand(dictionary(), kCapacityOffset));
-+ __ dec(scratch);
-+ __ SmiUntag(scratch);
-+ __ push(scratch);
-+
-+ // If names of slots in range from 1 to kProbes - 1 for the hash value are
-+ // not equal to the name and kProbes-th slot is not used (its name is the
-+ // undefined value), it guarantees the hash table doesn't contain the
-+ // property. It's true even if some slots represent deleted properties
-+ // (their names are the null value).
-+ for (int i = kInlinedProbes; i < kTotalProbes; i++) {
-+ // Compute the masked index: (hash + i + i * i) & mask.
-+ __ mov(scratch, Operand(esp, 2 * kPointerSize));
-+ if (i > 0) {
-+ __ add(scratch, Immediate(NameDictionary::GetProbeOffset(i)));
-+ }
-+ __ and_(scratch, Operand(esp, 0));
-+
-+ // Scale the index by multiplying by the entry size.
-+ STATIC_ASSERT(NameDictionary::kEntrySize == 3);
-+ __ lea(index(), Operand(scratch, scratch, times_2, 0)); // index *= 3.
-+
-+ // Having undefined at this place means the name is not contained.
-+ STATIC_ASSERT(kSmiTagSize == 1);
-+ __ mov(scratch, Operand(dictionary(), index(), times_pointer_size,
-+ kElementsStartOffset - kHeapObjectTag));
-+ __ cmp(scratch, isolate()->factory()->undefined_value());
-+ __ j(equal, &not_in_dictionary);
-+
-+ // Stop if found the property.
-+ __ cmp(scratch, Operand(esp, 3 * kPointerSize));
-+ __ j(equal, &in_dictionary);
-+
-+ if (i != kTotalProbes - 1 && mode() == NEGATIVE_LOOKUP) {
-+ // If we hit a key that is not a unique name during negative
-+ // lookup we have to bailout as this key might be equal to the
-+ // key we are looking for.
-+
-+ // Check if the entry name is not a unique name.
-+ __ mov(scratch, FieldOperand(scratch, HeapObject::kMapOffset));
-+ __ JumpIfNotUniqueNameInstanceType(
-+ FieldOperand(scratch, Map::kInstanceTypeOffset),
-+ &maybe_in_dictionary);
-+ }
-+ }
-+
-+ __ bind(&maybe_in_dictionary);
-+ // If we are doing negative lookup then probing failure should be
-+ // treated as a lookup success. For positive lookup probing failure
-+ // should be treated as lookup failure.
-+ if (mode() == POSITIVE_LOOKUP) {
-+ __ mov(result(), Immediate(0));
-+ __ Drop(1);
-+ __ ret(2 * kPointerSize);
-+ }
-+
-+ __ bind(&in_dictionary);
-+ __ mov(result(), Immediate(1));
-+ __ Drop(1);
-+ __ ret(2 * kPointerSize);
-+
-+ __ bind(&not_in_dictionary);
-+ __ mov(result(), Immediate(0));
-+ __ Drop(1);
-+ __ ret(2 * kPointerSize);
-+}
-+
-+
-+void StoreBufferOverflowStub::GenerateFixedRegStubsAheadOfTime(
-+ Isolate* isolate) {
-+ StoreBufferOverflowStub stub(isolate, kDontSaveFPRegs);
-+ stub.GetCode();
-+ StoreBufferOverflowStub stub2(isolate, kSaveFPRegs);
-+ stub2.GetCode();
-+}
-+
-+
-+// Takes the input in 3 registers: address_ value_ and object_. A pointer to
-+// the value has just been written into the object, now this stub makes sure
-+// we keep the GC informed. The word in the object where the value has been
-+// written is in the address register.
-+void RecordWriteStub::Generate(MacroAssembler* masm) {
-+ Label skip_to_incremental_noncompacting;
-+ Label skip_to_incremental_compacting;
-+
-+ // The first two instructions are generated with labels so as to get the
-+ // offset fixed up correctly by the bind(Label*) call. We patch it back and
-+ // forth between a compare instructions (a nop in this position) and the
-+ // real branch when we start and stop incremental heap marking.
-+ __ jmp(&skip_to_incremental_noncompacting, Label::kNear);
-+ __ jmp(&skip_to_incremental_compacting, Label::kFar);
-+
-+ if (remembered_set_action() == EMIT_REMEMBERED_SET) {
-+ __ RememberedSetHelper(object(), address(), value(), save_fp_regs_mode(),
-+ MacroAssembler::kReturnAtEnd);
-+ } else {
-+ __ ret(0);
-+ }
-+
-+ __ bind(&skip_to_incremental_noncompacting);
-+ GenerateIncremental(masm, INCREMENTAL);
-+
-+ __ bind(&skip_to_incremental_compacting);
-+ GenerateIncremental(masm, INCREMENTAL_COMPACTION);
-+
-+ // Initial mode of the stub is expected to be STORE_BUFFER_ONLY.
-+ // Will be checked in IncrementalMarking::ActivateGeneratedStub.
-+ masm->set_byte_at(0, kTwoByteNopInstruction);
-+ masm->set_byte_at(2, kFiveByteNopInstruction);
-+}
-+
-+
-+void RecordWriteStub::GenerateIncremental(MacroAssembler* masm, Mode mode) {
-+ regs_.Save(masm);
-+
-+ if (remembered_set_action() == EMIT_REMEMBERED_SET) {
-+ Label dont_need_remembered_set;
-+
-+ __ mov(regs_.scratch0(), Operand(regs_.address(), 0));
-+ __ JumpIfNotInNewSpace(regs_.scratch0(), // Value.
-+ regs_.scratch0(),
-+ &dont_need_remembered_set);
-+
-+ __ JumpIfInNewSpace(regs_.object(), regs_.scratch0(),
-+ &dont_need_remembered_set);
-+
-+ // First notify the incremental marker if necessary, then update the
-+ // remembered set.
-+ CheckNeedsToInformIncrementalMarker(
-+ masm,
-+ kUpdateRememberedSetOnNoNeedToInformIncrementalMarker,
-+ mode);
-+ InformIncrementalMarker(masm);
-+ regs_.Restore(masm);
-+ __ RememberedSetHelper(object(), address(), value(), save_fp_regs_mode(),
-+ MacroAssembler::kReturnAtEnd);
-+
-+ __ bind(&dont_need_remembered_set);
-+ }
-+
-+ CheckNeedsToInformIncrementalMarker(
-+ masm,
-+ kReturnOnNoNeedToInformIncrementalMarker,
-+ mode);
-+ InformIncrementalMarker(masm);
-+ regs_.Restore(masm);
-+ __ ret(0);
-+}
-+
-+
-+void RecordWriteStub::InformIncrementalMarker(MacroAssembler* masm) {
-+ regs_.SaveCallerSaveRegisters(masm, save_fp_regs_mode());
-+ int argument_count = 3;
-+ __ PrepareCallCFunction(argument_count, regs_.scratch0());
-+ __ mov(Operand(esp, 0 * kPointerSize), regs_.object());
-+ __ mov(Operand(esp, 1 * kPointerSize), regs_.address()); // Slot.
-+ __ mov(Operand(esp, 2 * kPointerSize),
-+ Immediate(ExternalReference::isolate_address(isolate())));
-+
-+ AllowExternalCallThatCantCauseGC scope(masm);
-+ __ CallCFunction(
-+ ExternalReference::incremental_marking_record_write_function(isolate()),
-+ argument_count);
-+
-+ regs_.RestoreCallerSaveRegisters(masm, save_fp_regs_mode());
-+}
-+
-+
-+void RecordWriteStub::CheckNeedsToInformIncrementalMarker(
-+ MacroAssembler* masm,
-+ OnNoNeedToInformIncrementalMarker on_no_need,
-+ Mode mode) {
-+ Label need_incremental, need_incremental_pop_object;
-+
-+#ifndef V8_CONCURRENT_MARKING
-+ Label object_is_black;
-+ // Let's look at the color of the object: If it is not black we don't have
-+ // to inform the incremental marker.
-+ __ JumpIfBlack(regs_.object(),
-+ regs_.scratch0(),
-+ regs_.scratch1(),
-+ &object_is_black,
-+ Label::kNear);
-+
-+ regs_.Restore(masm);
-+ if (on_no_need == kUpdateRememberedSetOnNoNeedToInformIncrementalMarker) {
-+ __ RememberedSetHelper(object(), address(), value(), save_fp_regs_mode(),
-+ MacroAssembler::kReturnAtEnd);
-+ } else {
-+ __ ret(0);
-+ }
-+
-+ __ bind(&object_is_black);
-+#endif
-+
-+ // Get the value from the slot.
-+ __ mov(regs_.scratch0(), Operand(regs_.address(), 0));
-+
-+ if (mode == INCREMENTAL_COMPACTION) {
-+ Label ensure_not_white;
-+
-+ __ CheckPageFlag(regs_.scratch0(), // Contains value.
-+ regs_.scratch1(), // Scratch.
-+ MemoryChunk::kEvacuationCandidateMask,
-+ zero,
-+ &ensure_not_white,
-+ Label::kNear);
-+
-+ __ CheckPageFlag(regs_.object(),
-+ regs_.scratch1(), // Scratch.
-+ MemoryChunk::kSkipEvacuationSlotsRecordingMask,
-+ not_zero,
-+ &ensure_not_white,
-+ Label::kNear);
-+
-+ __ jmp(&need_incremental);
-+
-+ __ bind(&ensure_not_white);
-+ }
-+
-+ // We need an extra register for this, so we push the object register
-+ // temporarily.
-+ __ push(regs_.object());
-+ __ JumpIfWhite(regs_.scratch0(), // The value.
-+ regs_.scratch1(), // Scratch.
-+ regs_.object(), // Scratch.
-+ &need_incremental_pop_object, Label::kNear);
-+ __ pop(regs_.object());
-+
-+ regs_.Restore(masm);
-+ if (on_no_need == kUpdateRememberedSetOnNoNeedToInformIncrementalMarker) {
-+ __ RememberedSetHelper(object(), address(), value(), save_fp_regs_mode(),
-+ MacroAssembler::kReturnAtEnd);
-+ } else {
-+ __ ret(0);
-+ }
-+
-+ __ bind(&need_incremental_pop_object);
-+ __ pop(regs_.object());
-+
-+ __ bind(&need_incremental);
-+
-+ // Fall through when we need to inform the incremental marker.
-+}
-+
-+
-+void ProfileEntryHookStub::MaybeCallEntryHookDelayed(TurboAssembler* tasm,
-+ Zone* zone) {
-+ if (tasm->isolate()->function_entry_hook() != NULL) {
-+ tasm->CallStubDelayed(new (zone) ProfileEntryHookStub(nullptr));
-+ }
-+}
-+
-+void ProfileEntryHookStub::MaybeCallEntryHook(MacroAssembler* masm) {
-+ if (masm->isolate()->function_entry_hook() != NULL) {
-+ ProfileEntryHookStub stub(masm->isolate());
-+ masm->CallStub(&stub);
-+ }
-+}
-+
-+void ProfileEntryHookStub::Generate(MacroAssembler* masm) {
-+ // Save volatile registers.
-+ const int kNumSavedRegisters = 3;
-+ __ push(eax);
-+ __ push(ecx);
-+ __ push(edx);
-+
-+ // Calculate and push the original stack pointer.
-+ __ lea(eax, Operand(esp, (kNumSavedRegisters + 1) * kPointerSize));
-+ __ push(eax);
-+
-+ // Retrieve our return address and use it to calculate the calling
-+ // function's address.
-+ __ mov(eax, Operand(esp, (kNumSavedRegisters + 1) * kPointerSize));
-+ __ sub(eax, Immediate(Assembler::kCallInstructionLength));
-+ __ push(eax);
-+
-+ // Call the entry hook.
-+ DCHECK(isolate()->function_entry_hook() != NULL);
-+ __ call(FUNCTION_ADDR(isolate()->function_entry_hook()),
-+ RelocInfo::RUNTIME_ENTRY);
-+ __ add(esp, Immediate(2 * kPointerSize));
-+
-+ // Restore ecx.
-+ __ pop(edx);
-+ __ pop(ecx);
-+ __ pop(eax);
-+
-+ __ ret(0);
-+}
-+
-+template <class T>
-+static void CreateArrayDispatch(MacroAssembler* masm,
-+ AllocationSiteOverrideMode mode) {
-+ if (mode == DISABLE_ALLOCATION_SITES) {
-+ T stub(masm->isolate(), GetInitialFastElementsKind(), mode);
-+ __ TailCallStub(&stub);
-+ } else if (mode == DONT_OVERRIDE) {
-+ int last_index =
-+ GetSequenceIndexFromFastElementsKind(TERMINAL_FAST_ELEMENTS_KIND);
-+ for (int i = 0; i <= last_index; ++i) {
-+ Label next;
-+ ElementsKind kind = GetFastElementsKindFromSequenceIndex(i);
-+ __ cmp(edx, kind);
-+ __ j(not_equal, &next);
-+ T stub(masm->isolate(), kind);
-+ __ TailCallStub(&stub);
-+ __ bind(&next);
-+ }
-+
-+ // If we reached this point there is a problem.
-+ __ Abort(kUnexpectedElementsKindInArrayConstructor);
-+ } else {
-+ UNREACHABLE();
-+ }
-+}
-+
-+static void CreateArrayDispatchOneArgument(MacroAssembler* masm,
-+ AllocationSiteOverrideMode mode) {
-+ // ebx - allocation site (if mode != DISABLE_ALLOCATION_SITES)
-+ // edx - kind (if mode != DISABLE_ALLOCATION_SITES)
-+ // eax - number of arguments
-+ // edi - constructor?
-+ // esp[0] - return address
-+ // esp[4] - last argument
-+ STATIC_ASSERT(PACKED_SMI_ELEMENTS == 0);
-+ STATIC_ASSERT(HOLEY_SMI_ELEMENTS == 1);
-+ STATIC_ASSERT(PACKED_ELEMENTS == 2);
-+ STATIC_ASSERT(HOLEY_ELEMENTS == 3);
-+ STATIC_ASSERT(PACKED_DOUBLE_ELEMENTS == 4);
-+ STATIC_ASSERT(HOLEY_DOUBLE_ELEMENTS == 5);
-+
-+ if (mode == DISABLE_ALLOCATION_SITES) {
-+ ElementsKind initial = GetInitialFastElementsKind();
-+ ElementsKind holey_initial = GetHoleyElementsKind(initial);
-+
-+ ArraySingleArgumentConstructorStub stub_holey(
-+ masm->isolate(), holey_initial, DISABLE_ALLOCATION_SITES);
-+ __ TailCallStub(&stub_holey);
-+ } else if (mode == DONT_OVERRIDE) {
-+ // is the low bit set? If so, we are holey and that is good.
-+ Label normal_sequence;
-+ __ test_b(edx, Immediate(1));
-+ __ j(not_zero, &normal_sequence);
-+
-+ // We are going to create a holey array, but our kind is non-holey.
-+ // Fix kind and retry.
-+ __ inc(edx);
-+
-+ if (FLAG_debug_code) {
-+ Handle<Map> allocation_site_map =
-+ masm->isolate()->factory()->allocation_site_map();
-+ __ cmp(FieldOperand(ebx, 0), Immediate(allocation_site_map));
-+ __ Assert(equal, kExpectedAllocationSite);
-+ }
-+
-+ // Save the resulting elements kind in type info. We can't just store r3
-+ // in the AllocationSite::transition_info field because elements kind is
-+ // restricted to a portion of the field...upper bits need to be left alone.
-+ STATIC_ASSERT(AllocationSite::ElementsKindBits::kShift == 0);
-+ __ add(
-+ FieldOperand(ebx, AllocationSite::kTransitionInfoOrBoilerplateOffset),
-+ Immediate(Smi::FromInt(kFastElementsKindPackedToHoley)));
-+
-+ __ bind(&normal_sequence);
-+ int last_index =
-+ GetSequenceIndexFromFastElementsKind(TERMINAL_FAST_ELEMENTS_KIND);
-+ for (int i = 0; i <= last_index; ++i) {
-+ Label next;
-+ ElementsKind kind = GetFastElementsKindFromSequenceIndex(i);
-+ __ cmp(edx, kind);
-+ __ j(not_equal, &next);
-+ ArraySingleArgumentConstructorStub stub(masm->isolate(), kind);
-+ __ TailCallStub(&stub);
-+ __ bind(&next);
-+ }
-+
-+ // If we reached this point there is a problem.
-+ __ Abort(kUnexpectedElementsKindInArrayConstructor);
-+ } else {
-+ UNREACHABLE();
-+ }
-+}
-+
-+template <class T>
-+static void ArrayConstructorStubAheadOfTimeHelper(Isolate* isolate) {
-+ int to_index =
-+ GetSequenceIndexFromFastElementsKind(TERMINAL_FAST_ELEMENTS_KIND);
-+ for (int i = 0; i <= to_index; ++i) {
-+ ElementsKind kind = GetFastElementsKindFromSequenceIndex(i);
-+ T stub(isolate, kind);
-+ stub.GetCode();
-+ if (AllocationSite::ShouldTrack(kind)) {
-+ T stub1(isolate, kind, DISABLE_ALLOCATION_SITES);
-+ stub1.GetCode();
-+ }
-+ }
-+}
-+
-+void CommonArrayConstructorStub::GenerateStubsAheadOfTime(Isolate* isolate) {
-+ ArrayConstructorStubAheadOfTimeHelper<ArrayNoArgumentConstructorStub>(
-+ isolate);
-+ ArrayConstructorStubAheadOfTimeHelper<ArraySingleArgumentConstructorStub>(
-+ isolate);
-+ ArrayNArgumentsConstructorStub stub(isolate);
-+ stub.GetCode();
-+
-+ ElementsKind kinds[2] = {PACKED_ELEMENTS, HOLEY_ELEMENTS};
-+ for (int i = 0; i < 2; i++) {
-+ // For internal arrays we only need a few things
-+ InternalArrayNoArgumentConstructorStub stubh1(isolate, kinds[i]);
-+ stubh1.GetCode();
-+ InternalArraySingleArgumentConstructorStub stubh2(isolate, kinds[i]);
-+ stubh2.GetCode();
-+ }
-+}
-+
-+void ArrayConstructorStub::GenerateDispatchToArrayStub(
-+ MacroAssembler* masm, AllocationSiteOverrideMode mode) {
-+ Label not_zero_case, not_one_case;
-+ __ test(eax, eax);
-+ __ j(not_zero, &not_zero_case);
-+ CreateArrayDispatch<ArrayNoArgumentConstructorStub>(masm, mode);
-+
-+ __ bind(&not_zero_case);
-+ __ cmp(eax, 1);
-+ __ j(greater, ¬_one_case);
-+ CreateArrayDispatchOneArgument(masm, mode);
-+
-+ __ bind(&not_one_case);
-+ ArrayNArgumentsConstructorStub stub(masm->isolate());
-+ __ TailCallStub(&stub);
-+}
-+
-+void ArrayConstructorStub::Generate(MacroAssembler* masm) {
-+ // ----------- S t a t e -------------
-+ // -- eax : argc (only if argument_count() is ANY or MORE_THAN_ONE)
-+ // -- ebx : AllocationSite or undefined
-+ // -- edi : constructor
-+ // -- edx : Original constructor
-+ // -- esp[0] : return address
-+ // -- esp[4] : last argument
-+ // -----------------------------------
-+ if (FLAG_debug_code) {
-+ // The array construct code is only set for the global and natives
-+ // builtin Array functions which always have maps.
-+
-+ // Initial map for the builtin Array function should be a map.
-+ __ mov(ecx, FieldOperand(edi, JSFunction::kPrototypeOrInitialMapOffset));
-+ // Will both indicate a NULL and a Smi.
-+ __ test(ecx, Immediate(kSmiTagMask));
-+ __ Assert(not_zero, kUnexpectedInitialMapForArrayFunction);
-+ __ CmpObjectType(ecx, MAP_TYPE, ecx);
-+ __ Assert(equal, kUnexpectedInitialMapForArrayFunction);
-+
-+ // We should either have undefined in ebx or a valid AllocationSite
-+ __ AssertUndefinedOrAllocationSite(ebx);
-+ }
-+
-+ Label subclassing;
-+
-+ // Enter the context of the Array function.
-+ __ mov(esi, FieldOperand(edi, JSFunction::kContextOffset));
-+
-+ __ cmp(edx, edi);
-+ __ j(not_equal, &subclassing);
-+
-+ Label no_info;
-+ // If the feedback vector is the undefined value call an array constructor
-+ // that doesn't use AllocationSites.
-+ __ cmp(ebx, isolate()->factory()->undefined_value());
-+ __ j(equal, &no_info);
-+
-+ // Only look at the lower 16 bits of the transition info.
-+ __ mov(edx,
-+ FieldOperand(ebx, AllocationSite::kTransitionInfoOrBoilerplateOffset));
-+ __ SmiUntag(edx);
-+ STATIC_ASSERT(AllocationSite::ElementsKindBits::kShift == 0);
-+ __ and_(edx, Immediate(AllocationSite::ElementsKindBits::kMask));
-+ GenerateDispatchToArrayStub(masm, DONT_OVERRIDE);
-+
-+ __ bind(&no_info);
-+ GenerateDispatchToArrayStub(masm, DISABLE_ALLOCATION_SITES);
-+
-+ // Subclassing.
-+ __ bind(&subclassing);
-+ __ mov(Operand(esp, eax, times_pointer_size, kPointerSize), edi);
-+ __ add(eax, Immediate(3));
-+ __ PopReturnAddressTo(ecx);
-+ __ Push(edx);
-+ __ Push(ebx);
-+ __ PushReturnAddressFrom(ecx);
-+ __ JumpToExternalReference(ExternalReference(Runtime::kNewArray, isolate()));
-+}
-+
-+void InternalArrayConstructorStub::GenerateCase(MacroAssembler* masm,
-+ ElementsKind kind) {
-+ Label not_zero_case, not_one_case;
-+ Label normal_sequence;
-+
-+ __ test(eax, eax);
-+ __ j(not_zero, &not_zero_case);
-+ InternalArrayNoArgumentConstructorStub stub0(isolate(), kind);
-+ __ TailCallStub(&stub0);
-+
-+ __ bind(&not_zero_case);
-+ __ cmp(eax, 1);
-+ __ j(greater, &not_one_case);
-+
-+ if (IsFastPackedElementsKind(kind)) {
-+ // We might need to create a holey array
-+ // look at the first argument
-+ __ mov(ecx, Operand(esp, kPointerSize));
-+ __ test(ecx, ecx);
-+ __ j(zero, &normal_sequence);
-+
-+ InternalArraySingleArgumentConstructorStub stub1_holey(
-+ isolate(), GetHoleyElementsKind(kind));
-+ __ TailCallStub(&stub1_holey);
-+ }
-+
-+ __ bind(&normal_sequence);
-+ InternalArraySingleArgumentConstructorStub stub1(isolate(), kind);
-+ __ TailCallStub(&stub1);
-+
-+ __ bind(&not_one_case);
-+ ArrayNArgumentsConstructorStub stubN(isolate());
-+ __ TailCallStub(&stubN);
-+}
-+
-+void InternalArrayConstructorStub::Generate(MacroAssembler* masm) {
-+ // ----------- S t a t e -------------
-+ // -- eax : argc
-+ // -- edi : constructor
-+ // -- esp[0] : return address
-+ // -- esp[4] : last argument
-+ // -----------------------------------
-+
-+ if (FLAG_debug_code) {
-+ // The array construct code is only set for the global and natives
-+ // builtin Array functions which always have maps.
-+
-+ // Initial map for the builtin Array function should be a map.
-+ __ mov(ecx, FieldOperand(edi, JSFunction::kPrototypeOrInitialMapOffset));
-+ // Will both indicate a NULL and a Smi.
-+ __ test(ecx, Immediate(kSmiTagMask));
-+ __ Assert(not_zero, kUnexpectedInitialMapForArrayFunction);
-+ __ CmpObjectType(ecx, MAP_TYPE, ecx);
-+ __ Assert(equal, kUnexpectedInitialMapForArrayFunction);
-+ }
-+
-+ // Figure out the right elements kind
-+ __ mov(ecx, FieldOperand(edi, JSFunction::kPrototypeOrInitialMapOffset));
-+
-+ // Load the map's "bit field 2" into |result|. We only need the first byte,
-+ // but the following masking takes care of that anyway.
-+ __ mov(ecx, FieldOperand(ecx, Map::kBitField2Offset));
-+ // Retrieve elements_kind from bit field 2.
-+ __ DecodeField<Map::ElementsKindBits>(ecx);
-+
-+ if (FLAG_debug_code) {
-+ Label done;
-+ __ cmp(ecx, Immediate(PACKED_ELEMENTS));
-+ __ j(equal, &done);
-+ __ cmp(ecx, Immediate(HOLEY_ELEMENTS));
-+ __ Assert(equal, kInvalidElementsKindForInternalArrayOrInternalPackedArray);
-+ __ bind(&done);
-+ }
-+
-+ Label fast_elements_case;
-+ __ cmp(ecx, Immediate(PACKED_ELEMENTS));
-+ __ j(equal, &fast_elements_case);
-+ GenerateCase(masm, HOLEY_ELEMENTS);
-+
-+ __ bind(&fast_elements_case);
-+ GenerateCase(masm, PACKED_ELEMENTS);
-+}
-+
-+// Generates an Operand for saving parameters after PrepareCallApiFunction.
-+static Operand ApiParameterOperand(int index) {
-+ return Operand(esp, index * kPointerSize);
-+}
-+
-+
-+// Prepares stack to put arguments (aligns and so on). Reserves
-+// space for return value if needed (assumes the return value is a handle).
-+// Arguments must be stored in ApiParameterOperand(0), ApiParameterOperand(1)
-+// etc. Saves context (esi). If space was reserved for return value then
-+// stores the pointer to the reserved slot into esi.
-+static void PrepareCallApiFunction(MacroAssembler* masm, int argc) {
-+ __ EnterApiExitFrame(argc);
-+ if (__ emit_debug_code()) {
-+ __ mov(esi, Immediate(bit_cast<int32_t>(kZapValue)));
-+ }
-+}
-+
-+
-+// Calls an API function. Allocates HandleScope, extracts returned value
-+// from handle and propagates exceptions. Clobbers ebx, edi and
-+// caller-save registers. Restores context. On return removes
-+// stack_space * kPointerSize (GCed).
-+static void CallApiFunctionAndReturn(MacroAssembler* masm,
-+ Register function_address,
-+ ExternalReference thunk_ref,
-+ Operand thunk_last_arg, int stack_space,
-+ Operand* stack_space_operand,
-+ Operand return_value_operand,
-+ Operand* context_restore_operand) {
-+ Isolate* isolate = masm->isolate();
-+
-+ ExternalReference next_address =
-+ ExternalReference::handle_scope_next_address(isolate);
-+ ExternalReference limit_address =
-+ ExternalReference::handle_scope_limit_address(isolate);
-+ ExternalReference level_address =
-+ ExternalReference::handle_scope_level_address(isolate);
-+
-+ DCHECK(edx.is(function_address));
-+ // Allocate HandleScope in callee-save registers.
-+ __ mov(ebx, Operand::StaticVariable(next_address));
-+ __ mov(edi, Operand::StaticVariable(limit_address));
-+ __ add(Operand::StaticVariable(level_address), Immediate(1));
-+
-+ if (FLAG_log_timer_events) {
-+ FrameScope frame(masm, StackFrame::MANUAL);
-+ __ PushSafepointRegisters();
-+ __ PrepareCallCFunction(1, eax);
-+ __ mov(Operand(esp, 0),
-+ Immediate(ExternalReference::isolate_address(isolate)));
-+ __ CallCFunction(ExternalReference::log_enter_external_function(isolate),
-+ 1);
-+ __ PopSafepointRegisters();
-+ }
-+
-+
-+ Label profiler_disabled;
-+ Label end_profiler_check;
-+ __ mov(eax, Immediate(ExternalReference::is_profiling_address(isolate)));
-+ __ cmpb(Operand(eax, 0), Immediate(0));
-+ __ j(zero, &profiler_disabled);
-+
-+ // Additional parameter is the address of the actual getter function.
-+ __ mov(thunk_last_arg, function_address);
-+ // Call the api function.
-+ __ mov(eax, Immediate(thunk_ref));
-+ __ call(eax);
-+ __ jmp(&end_profiler_check);
-+
-+ __ bind(&profiler_disabled);
-+ // Call the api function.
-+ __ call(function_address);
-+ __ bind(&end_profiler_check);
-+
-+ if (FLAG_log_timer_events) {
-+ FrameScope frame(masm, StackFrame::MANUAL);
-+ __ PushSafepointRegisters();
-+ __ PrepareCallCFunction(1, eax);
-+ __ mov(Operand(esp, 0),
-+ Immediate(ExternalReference::isolate_address(isolate)));
-+ __ CallCFunction(ExternalReference::log_leave_external_function(isolate),
-+ 1);
-+ __ PopSafepointRegisters();
-+ }
-+
-+ Label prologue;
-+ // Load the value from ReturnValue
-+ __ mov(eax, return_value_operand);
-+
-+ Label promote_scheduled_exception;
-+ Label delete_allocated_handles;
-+ Label leave_exit_frame;
-+
-+ __ bind(&prologue);
-+ // No more valid handles (the result handle was the last one). Restore
-+ // previous handle scope.
-+ __ mov(Operand::StaticVariable(next_address), ebx);
-+ __ sub(Operand::StaticVariable(level_address), Immediate(1));
-+ __ Assert(above_equal, kInvalidHandleScopeLevel);
-+ __ cmp(edi, Operand::StaticVariable(limit_address));
-+ __ j(not_equal, &delete_allocated_handles);
-+
-+ // Leave the API exit frame.
-+ __ bind(&leave_exit_frame);
-+ bool restore_context = context_restore_operand != NULL;
-+ if (restore_context) {
-+ __ mov(esi, *context_restore_operand);
-+ }
-+ if (stack_space_operand != nullptr) {
-+ __ mov(ebx, *stack_space_operand);
-+ }
-+ __ LeaveApiExitFrame(!restore_context);
-+
-+ // Check if the function scheduled an exception.
-+ ExternalReference scheduled_exception_address =
-+ ExternalReference::scheduled_exception_address(isolate);
-+ __ cmp(Operand::StaticVariable(scheduled_exception_address),
-+ Immediate(isolate->factory()->the_hole_value()));
-+ __ j(not_equal, &promote_scheduled_exception);
-+
-+#if DEBUG
-+ // Check if the function returned a valid JavaScript value.
-+ Label ok;
-+ Register return_value = eax;
-+ Register map = ecx;
-+
-+ __ JumpIfSmi(return_value, &ok, Label::kNear);
-+ __ mov(map, FieldOperand(return_value, HeapObject::kMapOffset));
-+
-+ __ CmpInstanceType(map, LAST_NAME_TYPE);
-+ __ j(below_equal, &ok, Label::kNear);
-+
-+ __ CmpInstanceType(map, FIRST_JS_RECEIVER_TYPE);
-+ __ j(above_equal, &ok, Label::kNear);
-+
-+ __ cmp(map, isolate->factory()->heap_number_map());
-+ __ j(equal, &ok, Label::kNear);
-+
-+ __ cmp(return_value, isolate->factory()->undefined_value());
-+ __ j(equal, &ok, Label::kNear);
-+
-+ __ cmp(return_value, isolate->factory()->true_value());
-+ __ j(equal, &ok, Label::kNear);
-+
-+ __ cmp(return_value, isolate->factory()->false_value());
-+ __ j(equal, &ok, Label::kNear);
-+
-+ __ cmp(return_value, isolate->factory()->null_value());
-+ __ j(equal, &ok, Label::kNear);
-+
-+ __ Abort(kAPICallReturnedInvalidObject);
-+
-+ __ bind(&ok);
-+#endif
-+
-+ if (stack_space_operand != nullptr) {
-+ DCHECK_EQ(0, stack_space);
-+ __ pop(ecx);
-+ __ add(esp, ebx);
-+ __ jmp(ecx);
-+ } else {
-+ __ ret(stack_space * kPointerSize);
-+ }
-+
-+ // Re-throw by promoting a scheduled exception.
-+ __ bind(&promote_scheduled_exception);
-+ __ TailCallRuntime(Runtime::kPromoteScheduledException);
-+
-+ // HandleScope limit has changed. Delete allocated extensions.
-+ ExternalReference delete_extensions =
-+ ExternalReference::delete_handle_scope_extensions(isolate);
-+ __ bind(&delete_allocated_handles);
-+ __ mov(Operand::StaticVariable(limit_address), edi);
-+ __ mov(edi, eax);
-+ __ mov(Operand(esp, 0),
-+ Immediate(ExternalReference::isolate_address(isolate)));
-+ __ mov(eax, Immediate(delete_extensions));
-+ __ call(eax);
-+ __ mov(eax, edi);
-+ __ jmp(&leave_exit_frame);
-+}
-+
-+void CallApiCallbackStub::Generate(MacroAssembler* masm) {
-+ // ----------- S t a t e -------------
-+ // -- edi : callee
-+ // -- ebx : call_data
-+ // -- ecx : holder
-+ // -- edx : api_function_address
-+ // -- esi : context
-+ // --
-+ // -- esp[0] : return address
-+ // -- esp[4] : last argument
-+ // -- ...
-+ // -- esp[argc * 4] : first argument
-+ // -- esp[(argc + 1) * 4] : receiver
-+ // -----------------------------------
-+
-+ Register callee = edi;
-+ Register call_data = ebx;
-+ Register holder = ecx;
-+ Register api_function_address = edx;
-+ Register context = esi;
-+ Register return_address = eax;
-+
-+ typedef FunctionCallbackArguments FCA;
-+
-+ STATIC_ASSERT(FCA::kContextSaveIndex == 6);
-+ STATIC_ASSERT(FCA::kCalleeIndex == 5);
-+ STATIC_ASSERT(FCA::kDataIndex == 4);
-+ STATIC_ASSERT(FCA::kReturnValueOffset == 3);
-+ STATIC_ASSERT(FCA::kReturnValueDefaultValueIndex == 2);
-+ STATIC_ASSERT(FCA::kIsolateIndex == 1);
-+ STATIC_ASSERT(FCA::kHolderIndex == 0);
-+ STATIC_ASSERT(FCA::kNewTargetIndex == 7);
-+ STATIC_ASSERT(FCA::kArgsLength == 8);
-+
-+ __ pop(return_address);
-+
-+ // new target
-+ __ PushRoot(Heap::kUndefinedValueRootIndex);
-+
-+ // context save.
-+ __ push(context);
-+
-+ // callee
-+ __ push(callee);
-+
-+ // call data
-+ __ push(call_data);
-+
-+ // return value
-+ __ push(Immediate(masm->isolate()->factory()->undefined_value()));
-+ // return value default
-+ __ push(Immediate(masm->isolate()->factory()->undefined_value()));
-+ // isolate
-+ __ push(Immediate(reinterpret_cast<int>(masm->isolate())));
-+ // holder
-+ __ push(holder);
-+
-+ Register scratch = call_data;
-+ __ mov(scratch, esp);
-+
-+ // push return address
-+ __ push(return_address);
-+
-+ if (!is_lazy()) {
-+ // load context from callee
-+ __ mov(context, FieldOperand(callee, JSFunction::kContextOffset));
-+ }
-+
-+ // API function gets reference to the v8::Arguments. If CPU profiler
-+ // is enabled wrapper function will be called and we need to pass
-+ // address of the callback as additional parameter, always allocate
-+ // space for it.
-+ const int kApiArgc = 1 + 1;
-+
-+ // Allocate the v8::Arguments structure in the arguments' space since
-+ // it's not controlled by GC.
-+ const int kApiStackSpace = 3;
-+
-+ PrepareCallApiFunction(masm, kApiArgc + kApiStackSpace);
-+
-+ // FunctionCallbackInfo::implicit_args_.
-+ __ mov(ApiParameterOperand(2), scratch);
-+ __ add(scratch, Immediate((argc() + FCA::kArgsLength - 1) * kPointerSize));
-+ // FunctionCallbackInfo::values_.
-+ __ mov(ApiParameterOperand(3), scratch);
-+ // FunctionCallbackInfo::length_.
-+ __ Move(ApiParameterOperand(4), Immediate(argc()));
-+
-+ // v8::InvocationCallback's argument.
-+ __ lea(scratch, ApiParameterOperand(2));
-+ __ mov(ApiParameterOperand(0), scratch);
-+
-+ ExternalReference thunk_ref =
-+ ExternalReference::invoke_function_callback(masm->isolate());
-+
-+ Operand context_restore_operand(ebp,
-+ (2 + FCA::kContextSaveIndex) * kPointerSize);
-+ // Stores return the first js argument
-+ int return_value_offset = 0;
-+ if (is_store()) {
-+ return_value_offset = 2 + FCA::kArgsLength;
-+ } else {
-+ return_value_offset = 2 + FCA::kReturnValueOffset;
-+ }
-+ Operand return_value_operand(ebp, return_value_offset * kPointerSize);
-+ int stack_space = 0;
-+ Operand length_operand = ApiParameterOperand(4);
-+ Operand* stack_space_operand = &length_operand;
-+ stack_space = argc() + FCA::kArgsLength + 1;
-+ stack_space_operand = nullptr;
-+ CallApiFunctionAndReturn(masm, api_function_address, thunk_ref,
-+ ApiParameterOperand(1), stack_space,
-+ stack_space_operand, return_value_operand,
-+ &context_restore_operand);
-+}
-+
-+
-+void CallApiGetterStub::Generate(MacroAssembler* masm) {
-+ // Build v8::PropertyCallbackInfo::args_ array on the stack and push property
-+ // name below the exit frame to make GC aware of them.
-+ STATIC_ASSERT(PropertyCallbackArguments::kShouldThrowOnErrorIndex == 0);
-+ STATIC_ASSERT(PropertyCallbackArguments::kHolderIndex == 1);
-+ STATIC_ASSERT(PropertyCallbackArguments::kIsolateIndex == 2);
-+ STATIC_ASSERT(PropertyCallbackArguments::kReturnValueDefaultValueIndex == 3);
-+ STATIC_ASSERT(PropertyCallbackArguments::kReturnValueOffset == 4);
-+ STATIC_ASSERT(PropertyCallbackArguments::kDataIndex == 5);
-+ STATIC_ASSERT(PropertyCallbackArguments::kThisIndex == 6);
-+ STATIC_ASSERT(PropertyCallbackArguments::kArgsLength == 7);
-+
-+ Register receiver = ApiGetterDescriptor::ReceiverRegister();
-+ Register holder = ApiGetterDescriptor::HolderRegister();
-+ Register callback = ApiGetterDescriptor::CallbackRegister();
-+ Register scratch = ebx;
-+ DCHECK(!AreAliased(receiver, holder, callback, scratch));
-+
-+ __ pop(scratch); // Pop return address to extend the frame.
-+ __ push(receiver);
-+ __ push(FieldOperand(callback, AccessorInfo::kDataOffset));
-+ __ PushRoot(Heap::kUndefinedValueRootIndex); // ReturnValue
-+ // ReturnValue default value
-+ __ PushRoot(Heap::kUndefinedValueRootIndex);
-+ __ push(Immediate(ExternalReference::isolate_address(isolate())));
-+ __ push(holder);
-+ __ push(Immediate(Smi::kZero)); // should_throw_on_error -> false
-+ __ push(FieldOperand(callback, AccessorInfo::kNameOffset));
-+ __ push(scratch); // Restore return address.
-+
-+ // v8::PropertyCallbackInfo::args_ array and name handle.
-+ const int kStackUnwindSpace = PropertyCallbackArguments::kArgsLength + 1;
-+
-+ // Allocate v8::PropertyCallbackInfo object, arguments for callback and
-+ // space for optional callback address parameter (in case CPU profiler is
-+ // active) in non-GCed stack space.
-+ const int kApiArgc = 3 + 1;
-+
-+ // Load address of v8::PropertyAccessorInfo::args_ array.
-+ __ lea(scratch, Operand(esp, 2 * kPointerSize));
-+
-+ PrepareCallApiFunction(masm, kApiArgc);
-+ // Create v8::PropertyCallbackInfo object on the stack and initialize
-+ // its args_ field.
-+ Operand info_object = ApiParameterOperand(3);
-+ __ mov(info_object, scratch);
-+
-+ // Name as handle.
-+ __ sub(scratch, Immediate(kPointerSize));
-+ __ mov(ApiParameterOperand(0), scratch);
-+ // Arguments pointer.
-+ __ lea(scratch, info_object);
-+ __ mov(ApiParameterOperand(1), scratch);
-+ // Reserve space for optional callback address parameter.
-+ Operand thunk_last_arg = ApiParameterOperand(2);
-+
-+ ExternalReference thunk_ref =
-+ ExternalReference::invoke_accessor_getter_callback(isolate());
-+
-+ __ mov(scratch, FieldOperand(callback, AccessorInfo::kJsGetterOffset));
-+ Register function_address = edx;
-+ __ mov(function_address,
-+ FieldOperand(scratch, Foreign::kForeignAddressOffset));
-+ // +3 is to skip prolog, return address and name handle.
-+ Operand return_value_operand(
-+ ebp, (PropertyCallbackArguments::kReturnValueOffset + 3) * kPointerSize);
-+ CallApiFunctionAndReturn(masm, function_address, thunk_ref, thunk_last_arg,
-+ kStackUnwindSpace, nullptr, return_value_operand,
-+ NULL);
-+}
-+
-+#undef __
-+
-+} // namespace internal
-+} // namespace v8
-+
-+#endif // V8_TARGET_ARCH_X87
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/x87/code-stubs-x87.h qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/x87/code-stubs-x87.h
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/x87/code-stubs-x87.h 1970-01-01 01:00:00.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/x87/code-stubs-x87.h 2018-02-18 19:00:54.197418149 +0100
-@@ -0,0 +1,351 @@
-+// Copyright 2011 the V8 project authors. All rights reserved.
-+// Use of this source code is governed by a BSD-style license that can be
-+// found in the LICENSE file.
-+
-+#ifndef V8_X87_CODE_STUBS_X87_H_
-+#define V8_X87_CODE_STUBS_X87_H_
-+
-+namespace v8 {
-+namespace internal {
-+
-+
-+void ArrayNativeCode(MacroAssembler* masm,
-+ bool construct_call,
-+ Label* call_generic_code);
-+
-+
-+class StringHelper : public AllStatic {
-+ public:
-+ // Compares two flat one byte strings and returns result in eax.
-+ static void GenerateCompareFlatOneByteStrings(MacroAssembler* masm,
-+ Register left, Register right,
-+ Register scratch1,
-+ Register scratch2,
-+ Register scratch3);
-+
-+ // Compares two flat one byte strings for equality and returns result in eax.
-+ static void GenerateFlatOneByteStringEquals(MacroAssembler* masm,
-+ Register left, Register right,
-+ Register scratch1,
-+ Register scratch2);
-+
-+ private:
-+ static void GenerateOneByteCharsCompareLoop(
-+ MacroAssembler* masm, Register left, Register right, Register length,
-+ Register scratch, Label* chars_not_equal,
-+ Label::Distance chars_not_equal_near = Label::kFar);
-+
-+ DISALLOW_IMPLICIT_CONSTRUCTORS(StringHelper);
-+};
-+
-+
-+class NameDictionaryLookupStub: public PlatformCodeStub {
-+ public:
-+ enum LookupMode { POSITIVE_LOOKUP, NEGATIVE_LOOKUP };
-+
-+ NameDictionaryLookupStub(Isolate* isolate, Register dictionary,
-+ Register result, Register index, LookupMode mode)
-+ : PlatformCodeStub(isolate) {
-+ minor_key_ = DictionaryBits::encode(dictionary.code()) |
-+ ResultBits::encode(result.code()) |
-+ IndexBits::encode(index.code()) | LookupModeBits::encode(mode);
-+ }
-+
-+ static void GenerateNegativeLookup(MacroAssembler* masm,
-+ Label* miss,
-+ Label* done,
-+ Register properties,
-+ Handle<Name> name,
-+ Register r0);
-+
-+ bool SometimesSetsUpAFrame() override { return false; }
-+
-+ private:
-+ static const int kInlinedProbes = 4;
-+ static const int kTotalProbes = 20;
-+
-+ static const int kCapacityOffset =
-+ NameDictionary::kHeaderSize +
-+ NameDictionary::kCapacityIndex * kPointerSize;
-+
-+ static const int kElementsStartOffset =
-+ NameDictionary::kHeaderSize +
-+ NameDictionary::kElementsStartIndex * kPointerSize;
-+
-+ Register dictionary() const {
-+ return Register::from_code(DictionaryBits::decode(minor_key_));
-+ }
-+
-+ Register result() const {
-+ return Register::from_code(ResultBits::decode(minor_key_));
-+ }
-+
-+ Register index() const {
-+ return Register::from_code(IndexBits::decode(minor_key_));
-+ }
-+
-+ LookupMode mode() const { return LookupModeBits::decode(minor_key_); }
-+
-+ class DictionaryBits: public BitField<int, 0, 3> {};
-+ class ResultBits: public BitField<int, 3, 3> {};
-+ class IndexBits: public BitField<int, 6, 3> {};
-+ class LookupModeBits: public BitField<LookupMode, 9, 1> {};
-+
-+ DEFINE_NULL_CALL_INTERFACE_DESCRIPTOR();
-+ DEFINE_PLATFORM_CODE_STUB(NameDictionaryLookup, PlatformCodeStub);
-+};
-+
-+
-+class RecordWriteStub: public PlatformCodeStub {
-+ public:
-+ RecordWriteStub(Isolate* isolate, Register object, Register value,
-+ Register address, RememberedSetAction remembered_set_action,
-+ SaveFPRegsMode fp_mode)
-+ : PlatformCodeStub(isolate),
-+ regs_(object, // An input reg.
-+ address, // An input reg.
-+ value) { // One scratch reg.
-+ minor_key_ = ObjectBits::encode(object.code()) |
-+ ValueBits::encode(value.code()) |
-+ AddressBits::encode(address.code()) |
-+ RememberedSetActionBits::encode(remembered_set_action) |
-+ SaveFPRegsModeBits::encode(fp_mode);
-+ }
-+
-+ RecordWriteStub(uint32_t key, Isolate* isolate)
-+ : PlatformCodeStub(key, isolate), regs_(object(), address(), value()) {}
-+
-+ enum Mode {
-+ STORE_BUFFER_ONLY,
-+ INCREMENTAL,
-+ INCREMENTAL_COMPACTION
-+ };
-+
-+ bool SometimesSetsUpAFrame() override { return false; }
-+
-+ static const byte kTwoByteNopInstruction = 0x3c; // Cmpb al, #imm8.
-+ static const byte kTwoByteJumpInstruction = 0xeb; // Jmp #imm8.
-+
-+ static const byte kFiveByteNopInstruction = 0x3d; // Cmpl eax, #imm32.
-+ static const byte kFiveByteJumpInstruction = 0xe9; // Jmp #imm32.
-+
-+ static Mode GetMode(Code* stub) {
-+ byte first_instruction = stub->instruction_start()[0];
-+ byte second_instruction = stub->instruction_start()[2];
-+
-+ if (first_instruction == kTwoByteJumpInstruction) {
-+ return INCREMENTAL;
-+ }
-+
-+ DCHECK(first_instruction == kTwoByteNopInstruction);
-+
-+ if (second_instruction == kFiveByteJumpInstruction) {
-+ return INCREMENTAL_COMPACTION;
-+ }
-+
-+ DCHECK(second_instruction == kFiveByteNopInstruction);
-+
-+ return STORE_BUFFER_ONLY;
-+ }
-+
-+ static void Patch(Code* stub, Mode mode) {
-+ switch (mode) {
-+ case STORE_BUFFER_ONLY:
-+ DCHECK(GetMode(stub) == INCREMENTAL ||
-+ GetMode(stub) == INCREMENTAL_COMPACTION);
-+ stub->instruction_start()[0] = kTwoByteNopInstruction;
-+ stub->instruction_start()[2] = kFiveByteNopInstruction;
-+ break;
-+ case INCREMENTAL:
-+ DCHECK(GetMode(stub) == STORE_BUFFER_ONLY);
-+ stub->instruction_start()[0] = kTwoByteJumpInstruction;
-+ break;
-+ case INCREMENTAL_COMPACTION:
-+ DCHECK(GetMode(stub) == STORE_BUFFER_ONLY);
-+ stub->instruction_start()[0] = kTwoByteNopInstruction;
-+ stub->instruction_start()[2] = kFiveByteJumpInstruction;
-+ break;
-+ }
-+ DCHECK(GetMode(stub) == mode);
-+ Assembler::FlushICache(stub->GetIsolate(), stub->instruction_start(), 7);
-+ }
-+
-+ DEFINE_NULL_CALL_INTERFACE_DESCRIPTOR();
-+
-+ private:
-+ // This is a helper class for freeing up 3 scratch registers, where the third
-+ // is always ecx (needed for shift operations). The input is two registers
-+ // that must be preserved and one scratch register provided by the caller.
-+ class RegisterAllocation {
-+ public:
-+ RegisterAllocation(Register object,
-+ Register address,
-+ Register scratch0)
-+ : object_orig_(object),
-+ address_orig_(address),
-+ scratch0_orig_(scratch0),
-+ object_(object),
-+ address_(address),
-+ scratch0_(scratch0) {
-+ DCHECK(!AreAliased(scratch0, object, address, no_reg));
-+ scratch1_ = GetRegThatIsNotEcxOr(object_, address_, scratch0_);
-+ if (scratch0.is(ecx)) {
-+ scratch0_ = GetRegThatIsNotEcxOr(object_, address_, scratch1_);
-+ }
-+ if (object.is(ecx)) {
-+ object_ = GetRegThatIsNotEcxOr(address_, scratch0_, scratch1_);
-+ }
-+ if (address.is(ecx)) {
-+ address_ = GetRegThatIsNotEcxOr(object_, scratch0_, scratch1_);
-+ }
-+ DCHECK(!AreAliased(scratch0_, object_, address_, ecx));
-+ }
-+
-+ void Save(MacroAssembler* masm) {
-+ DCHECK(!address_orig_.is(object_));
-+ DCHECK(object_.is(object_orig_) || address_.is(address_orig_));
-+ DCHECK(!AreAliased(object_, address_, scratch1_, scratch0_));
-+ DCHECK(!AreAliased(object_orig_, address_, scratch1_, scratch0_));
-+ DCHECK(!AreAliased(object_, address_orig_, scratch1_, scratch0_));
-+ // We don't have to save scratch0_orig_ because it was given to us as
-+ // a scratch register. But if we had to switch to a different reg then
-+ // we should save the new scratch0_.
-+ if (!scratch0_.is(scratch0_orig_)) masm->push(scratch0_);
-+ if (!ecx.is(scratch0_orig_) &&
-+ !ecx.is(object_orig_) &&
-+ !ecx.is(address_orig_)) {
-+ masm->push(ecx);
-+ }
-+ masm->push(scratch1_);
-+ if (!address_.is(address_orig_)) {
-+ masm->push(address_);
-+ masm->mov(address_, address_orig_);
-+ }
-+ if (!object_.is(object_orig_)) {
-+ masm->push(object_);
-+ masm->mov(object_, object_orig_);
-+ }
-+ }
-+
-+ void Restore(MacroAssembler* masm) {
-+ // These will have been preserved the entire time, so we just need to move
-+ // them back. Only in one case is the orig_ reg different from the plain
-+ // one, since only one of them can alias with ecx.
-+ if (!object_.is(object_orig_)) {
-+ masm->mov(object_orig_, object_);
-+ masm->pop(object_);
-+ }
-+ if (!address_.is(address_orig_)) {
-+ masm->mov(address_orig_, address_);
-+ masm->pop(address_);
-+ }
-+ masm->pop(scratch1_);
-+ if (!ecx.is(scratch0_orig_) &&
-+ !ecx.is(object_orig_) &&
-+ !ecx.is(address_orig_)) {
-+ masm->pop(ecx);
-+ }
-+ if (!scratch0_.is(scratch0_orig_)) masm->pop(scratch0_);
-+ }
-+
-+ // If we have to call into C then we need to save and restore all caller-
-+ // saved registers that were not already preserved. The caller saved
-+ // registers are eax, ecx and edx. The three scratch registers (incl. ecx)
-+ // will be restored by other means so we don't bother pushing them here.
-+ void SaveCallerSaveRegisters(MacroAssembler* masm, SaveFPRegsMode mode) {
-+ masm->PushCallerSaved(mode, ecx, scratch0_, scratch1_);
-+ }
-+
-+ inline void RestoreCallerSaveRegisters(MacroAssembler* masm,
-+ SaveFPRegsMode mode) {
-+ masm->PopCallerSaved(mode, ecx, scratch0_, scratch1_);
-+ }
-+
-+ inline Register object() { return object_; }
-+ inline Register address() { return address_; }
-+ inline Register scratch0() { return scratch0_; }
-+ inline Register scratch1() { return scratch1_; }
-+
-+ private:
-+ Register object_orig_;
-+ Register address_orig_;
-+ Register scratch0_orig_;
-+ Register object_;
-+ Register address_;
-+ Register scratch0_;
-+ Register scratch1_;
-+ // Third scratch register is always ecx.
-+
-+ Register GetRegThatIsNotEcxOr(Register r1,
-+ Register r2,
-+ Register r3) {
-+ for (int i = 0; i < Register::kNumRegisters; i++) {
-+ if (RegisterConfiguration::Crankshaft()->IsAllocatableGeneralCode(i)) {
-+ Register candidate = Register::from_code(i);
-+ if (candidate.is(ecx)) continue;
-+ if (candidate.is(r1)) continue;
-+ if (candidate.is(r2)) continue;
-+ if (candidate.is(r3)) continue;
-+ return candidate;
-+ }
-+ }
-+ UNREACHABLE();
-+ }
-+ friend class RecordWriteStub;
-+ };
-+
-+ enum OnNoNeedToInformIncrementalMarker {
-+ kReturnOnNoNeedToInformIncrementalMarker,
-+ kUpdateRememberedSetOnNoNeedToInformIncrementalMarker
-+ };
-+
-+ inline Major MajorKey() const final { return RecordWrite; }
-+
-+ void Generate(MacroAssembler* masm) override;
-+ void GenerateIncremental(MacroAssembler* masm, Mode mode);
-+ void CheckNeedsToInformIncrementalMarker(
-+ MacroAssembler* masm,
-+ OnNoNeedToInformIncrementalMarker on_no_need,
-+ Mode mode);
-+ void InformIncrementalMarker(MacroAssembler* masm);
-+
-+ void Activate(Code* code) override {
-+ code->GetHeap()->incremental_marking()->ActivateGeneratedStub(code);
-+ }
-+
-+ Register object() const {
-+ return Register::from_code(ObjectBits::decode(minor_key_));
-+ }
-+
-+ Register value() const {
-+ return Register::from_code(ValueBits::decode(minor_key_));
-+ }
-+
-+ Register address() const {
-+ return Register::from_code(AddressBits::decode(minor_key_));
-+ }
-+
-+ RememberedSetAction remembered_set_action() const {
-+ return RememberedSetActionBits::decode(minor_key_);
-+ }
-+
-+ SaveFPRegsMode save_fp_regs_mode() const {
-+ return SaveFPRegsModeBits::decode(minor_key_);
-+ }
-+
-+ class ObjectBits: public BitField<int, 0, 3> {};
-+ class ValueBits: public BitField<int, 3, 3> {};
-+ class AddressBits: public BitField<int, 6, 3> {};
-+ class RememberedSetActionBits: public BitField<RememberedSetAction, 9, 1> {};
-+ class SaveFPRegsModeBits : public BitField<SaveFPRegsMode, 10, 1> {};
-+
-+ RegisterAllocation regs_;
-+
-+ DISALLOW_COPY_AND_ASSIGN(RecordWriteStub);
-+};
-+
-+
-+} // namespace internal
-+} // namespace v8
-+
-+#endif // V8_X87_CODE_STUBS_X87_H_
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/x87/cpu-x87.cc qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/x87/cpu-x87.cc
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/x87/cpu-x87.cc 1970-01-01 01:00:00.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/x87/cpu-x87.cc 2018-02-18 19:00:54.197418149 +0100
-@@ -0,0 +1,43 @@
-+// Copyright 2011 the V8 project authors. All rights reserved.
-+// Use of this source code is governed by a BSD-style license that can be
-+// found in the LICENSE file.
-+
-+// CPU specific code for ia32 independent of OS goes here.
-+
-+#ifdef __GNUC__
-+#include "src/third_party/valgrind/valgrind.h"
-+#endif
-+
-+#if V8_TARGET_ARCH_X87
-+
-+#include "src/assembler.h"
-+#include "src/macro-assembler.h"
-+
-+namespace v8 {
-+namespace internal {
-+
-+void CpuFeatures::FlushICache(void* start, size_t size) {
-+ // No need to flush the instruction cache on Intel. On Intel instruction
-+ // cache flushing is only necessary when multiple cores running the same
-+ // code simultaneously. V8 (and JavaScript) is single threaded and when code
-+ // is patched on an intel CPU the core performing the patching will have its
-+ // own instruction cache updated automatically.
-+
-+ // If flushing of the instruction cache becomes necessary Windows has the
-+ // API function FlushInstructionCache.
-+
-+ // By default, valgrind only checks the stack for writes that might need to
-+ // invalidate already cached translated code. This leads to random
-+ // instability when code patches or moves are sometimes unnoticed. One
-+ // solution is to run valgrind with --smc-check=all, but this comes at a big
-+ // performance cost. We can notify valgrind to invalidate its cache.
-+#ifdef VALGRIND_DISCARD_TRANSLATIONS
-+ unsigned res = VALGRIND_DISCARD_TRANSLATIONS(start, size);
-+ USE(res);
-+#endif
-+}
-+
-+} // namespace internal
-+} // namespace v8
-+
-+#endif // V8_TARGET_ARCH_X87
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/x87/deoptimizer-x87.cc qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/x87/deoptimizer-x87.cc
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/x87/deoptimizer-x87.cc 1970-01-01 01:00:00.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/x87/deoptimizer-x87.cc 2018-02-18 19:00:54.198418134 +0100
-@@ -0,0 +1,412 @@
-+// Copyright 2012 the V8 project authors. All rights reserved.
-+// Use of this source code is governed by a BSD-style license that can be
-+// found in the LICENSE file.
-+
-+#if V8_TARGET_ARCH_X87
-+
-+#include "src/codegen.h"
-+#include "src/deoptimizer.h"
-+#include "src/full-codegen/full-codegen.h"
-+#include "src/register-configuration.h"
-+#include "src/safepoint-table.h"
-+#include "src/x87/frames-x87.h"
-+
-+namespace v8 {
-+namespace internal {
-+
-+const int Deoptimizer::table_entry_size_ = 10;
-+
-+
-+int Deoptimizer::patch_size() {
-+ return Assembler::kCallInstructionLength;
-+}
-+
-+
-+void Deoptimizer::EnsureRelocSpaceForLazyDeoptimization(Handle<Code> code) {
-+ Isolate* isolate = code->GetIsolate();
-+ HandleScope scope(isolate);
-+
-+ // Compute the size of relocation information needed for the code
-+ // patching in Deoptimizer::PatchCodeForDeoptimization below.
-+ int min_reloc_size = 0;
-+ int prev_pc_offset = 0;
-+ DeoptimizationInputData* deopt_data =
-+ DeoptimizationInputData::cast(code->deoptimization_data());
-+ for (int i = 0; i < deopt_data->DeoptCount(); i++) {
-+ int pc_offset = deopt_data->Pc(i)->value();
-+ if (pc_offset == -1) continue;
-+ pc_offset = pc_offset + 1; // We will encode the pc offset after the call.
-+ DCHECK_GE(pc_offset, prev_pc_offset);
-+ int pc_delta = pc_offset - prev_pc_offset;
-+ // We use RUNTIME_ENTRY reloc info which has a size of 2 bytes
-+ // if encodable with small pc delta encoding and up to 6 bytes
-+ // otherwise.
-+ if (pc_delta <= RelocInfo::kMaxSmallPCDelta) {
-+ min_reloc_size += 2;
-+ } else {
-+ min_reloc_size += 6;
-+ }
-+ prev_pc_offset = pc_offset;
-+ }
-+
-+ // If the relocation information is not big enough we create a new
-+ // relocation info object that is padded with comments to make it
-+ // big enough for lazy deoptimization.
-+ int reloc_length = code->relocation_info()->length();
-+ if (min_reloc_size > reloc_length) {
-+ int comment_reloc_size = RelocInfo::kMinRelocCommentSize;
-+ // Padding needed.
-+ int min_padding = min_reloc_size - reloc_length;
-+ // Number of comments needed to take up at least that much space.
-+ int additional_comments =
-+ (min_padding + comment_reloc_size - 1) / comment_reloc_size;
-+ // Actual padding size.
-+ int padding = additional_comments * comment_reloc_size;
-+ // Allocate new relocation info and copy old relocation to the end
-+ // of the new relocation info array because relocation info is
-+ // written and read backwards.
-+ Factory* factory = isolate->factory();
-+ Handle<ByteArray> new_reloc =
-+ factory->NewByteArray(reloc_length + padding, TENURED);
-+ MemCopy(new_reloc->GetDataStartAddress() + padding,
-+ code->relocation_info()->GetDataStartAddress(), reloc_length);
-+ // Create a relocation writer to write the comments in the padding
-+ // space. Use position 0 for everything to ensure short encoding.
-+ RelocInfoWriter reloc_info_writer(
-+ new_reloc->GetDataStartAddress() + padding, 0);
-+ intptr_t comment_string
-+ = reinterpret_cast<intptr_t>(RelocInfo::kFillerCommentString);
-+ RelocInfo rinfo(0, RelocInfo::COMMENT, comment_string, NULL);
-+ for (int i = 0; i < additional_comments; ++i) {
-+#ifdef DEBUG
-+ byte* pos_before = reloc_info_writer.pos();
-+#endif
-+ reloc_info_writer.Write(&rinfo);
-+ DCHECK(RelocInfo::kMinRelocCommentSize ==
-+ pos_before - reloc_info_writer.pos());
-+ }
-+ // Replace relocation information on the code object.
-+ code->set_relocation_info(*new_reloc);
-+ }
-+}
-+
-+
-+void Deoptimizer::PatchCodeForDeoptimization(Isolate* isolate, Code* code) {
-+ Address code_start_address = code->instruction_start();
-+
-+ // Fail hard and early if we enter this code object again.
-+ byte* pointer = code->FindCodeAgeSequence();
-+ if (pointer != NULL) {
-+ pointer += kNoCodeAgeSequenceLength;
-+ } else {
-+ pointer = code->instruction_start();
-+ }
-+ CodePatcher patcher(isolate, pointer, 1);
-+ patcher.masm()->int3();
-+
-+ DeoptimizationInputData* data =
-+ DeoptimizationInputData::cast(code->deoptimization_data());
-+ int osr_offset = data->OsrPcOffset()->value();
-+ if (osr_offset > 0) {
-+ CodePatcher osr_patcher(isolate, code_start_address + osr_offset, 1);
-+ osr_patcher.masm()->int3();
-+ }
-+
-+ // We will overwrite the code's relocation info in-place. Relocation info
-+ // is written backward. The relocation info is the payload of a byte
-+ // array. Later on we will slide this to the start of the byte array and
-+ // create a filler object in the remaining space.
-+ ByteArray* reloc_info = code->relocation_info();
-+ Address reloc_end_address = reloc_info->address() + reloc_info->Size();
-+ RelocInfoWriter reloc_info_writer(reloc_end_address, code_start_address);
-+
-+ // Since the call is a relative encoding, write new
-+ // reloc info. We do not need any of the existing reloc info because the
-+ // existing code will not be used again (we zap it in debug builds).
-+ //
-+ // Emit call to lazy deoptimization at all lazy deopt points.
-+ DeoptimizationInputData* deopt_data =
-+ DeoptimizationInputData::cast(code->deoptimization_data());
-+#ifdef DEBUG
-+ Address prev_call_address = NULL;
-+#endif
-+ // For each LLazyBailout instruction insert a call to the corresponding
-+ // deoptimization entry.
-+ for (int i = 0; i < deopt_data->DeoptCount(); i++) {
-+ if (deopt_data->Pc(i)->value() == -1) continue;
-+ // Patch lazy deoptimization entry.
-+ Address call_address = code_start_address + deopt_data->Pc(i)->value();
-+ CodePatcher patcher(isolate, call_address, patch_size());
-+ Address deopt_entry = GetDeoptimizationEntry(isolate, i, LAZY);
-+ patcher.masm()->call(deopt_entry, RelocInfo::NONE32);
-+ // We use RUNTIME_ENTRY for deoptimization bailouts.
-+ RelocInfo rinfo(call_address + 1, // 1 after the call opcode.
-+ RelocInfo::RUNTIME_ENTRY,
-+ reinterpret_cast<intptr_t>(deopt_entry), NULL);
-+ reloc_info_writer.Write(&rinfo);
-+ DCHECK_GE(reloc_info_writer.pos(),
-+ reloc_info->address() + ByteArray::kHeaderSize);
-+ DCHECK(prev_call_address == NULL ||
-+ call_address >= prev_call_address + patch_size());
-+ DCHECK(call_address + patch_size() <= code->instruction_end());
-+#ifdef DEBUG
-+ prev_call_address = call_address;
-+#endif
-+ }
-+
-+ // Move the relocation info to the beginning of the byte array.
-+ const int new_reloc_length = reloc_end_address - reloc_info_writer.pos();
-+ MemMove(code->relocation_start(), reloc_info_writer.pos(), new_reloc_length);
-+
-+ // Right trim the relocation info to free up remaining space.
-+ const int delta = reloc_info->length() - new_reloc_length;
-+ if (delta > 0) {
-+ isolate->heap()->RightTrimFixedArray(reloc_info, delta);
-+ }
-+}
-+
-+
-+#define __ masm()->
-+
-+void Deoptimizer::TableEntryGenerator::Generate() {
-+ GeneratePrologue();
-+
-+ // Save all general purpose registers before messing with them.
-+ const int kNumberOfRegisters = Register::kNumRegisters;
-+
-+ const int kDoubleRegsSize = kDoubleSize * X87Register::kMaxNumRegisters;
-+
-+ // Reserve space for x87 fp registers.
-+ __ sub(esp, Immediate(kDoubleRegsSize));
-+
-+ __ pushad();
-+
-+ ExternalReference c_entry_fp_address(IsolateAddressId::kCEntryFPAddress,
-+ isolate());
-+ __ mov(Operand::StaticVariable(c_entry_fp_address), ebp);
-+
-+ // GP registers are safe to use now.
-+ // Save used x87 fp registers in correct position of previous reserve space.
-+ Label loop, done;
-+ // Get the layout of x87 stack.
-+ __ sub(esp, Immediate(kPointerSize));
-+ __ fistp_s(MemOperand(esp, 0));
-+ __ pop(eax);
-+ // Preserve stack layout in edi
-+ __ mov(edi, eax);
-+ // Get the x87 stack depth, the first 3 bits.
-+ __ mov(ecx, eax);
-+ __ and_(ecx, 0x7);
-+ __ j(zero, &done, Label::kNear);
-+
-+ __ bind(&loop);
-+ __ shr(eax, 0x3);
-+ __ mov(ebx, eax);
-+ __ and_(ebx, 0x7); // Extract the st_x index into ebx.
-+ // Pop TOS to the correct position. The disp(0x20) is due to pushad.
-+ // The st_i should be saved to (esp + ebx * kDoubleSize + 0x20).
-+ __ fstp_d(Operand(esp, ebx, times_8, 0x20));
-+ __ dec(ecx); // Decrease stack depth.
-+ __ j(not_zero, &loop, Label::kNear);
-+ __ bind(&done);
-+
-+ const int kSavedRegistersAreaSize =
-+ kNumberOfRegisters * kPointerSize + kDoubleRegsSize;
-+
-+ // Get the bailout id from the stack.
-+ __ mov(ebx, Operand(esp, kSavedRegistersAreaSize));
-+
-+ // Get the address of the location in the code object
-+ // and compute the fp-to-sp delta in register edx.
-+ __ mov(ecx, Operand(esp, kSavedRegistersAreaSize + 1 * kPointerSize));
-+ __ lea(edx, Operand(esp, kSavedRegistersAreaSize + 2 * kPointerSize));
-+
-+ __ sub(edx, ebp);
-+ __ neg(edx);
-+
-+ __ push(edi);
-+ // Allocate a new deoptimizer object.
-+ __ PrepareCallCFunction(6, eax);
-+ __ mov(eax, Immediate(0));
-+ Label context_check;
-+ __ mov(edi, Operand(ebp, CommonFrameConstants::kContextOrFrameTypeOffset));
-+ __ JumpIfSmi(edi, &context_check);
-+ __ mov(eax, Operand(ebp, JavaScriptFrameConstants::kFunctionOffset));
-+ __ bind(&context_check);
-+ __ mov(Operand(esp, 0 * kPointerSize), eax); // Function.
-+ __ mov(Operand(esp, 1 * kPointerSize), Immediate(type())); // Bailout type.
-+ __ mov(Operand(esp, 2 * kPointerSize), ebx); // Bailout id.
-+ __ mov(Operand(esp, 3 * kPointerSize), ecx); // Code address or 0.
-+ __ mov(Operand(esp, 4 * kPointerSize), edx); // Fp-to-sp delta.
-+ __ mov(Operand(esp, 5 * kPointerSize),
-+ Immediate(ExternalReference::isolate_address(isolate())));
-+ {
-+ AllowExternalCallThatCantCauseGC scope(masm());
-+ __ CallCFunction(ExternalReference::new_deoptimizer_function(isolate()), 6);
-+ }
-+
-+ __ pop(edi);
-+
-+ // Preserve deoptimizer object in register eax and get the input
-+ // frame descriptor pointer.
-+ __ mov(ebx, Operand(eax, Deoptimizer::input_offset()));
-+
-+ // Fill in the input registers.
-+ for (int i = kNumberOfRegisters - 1; i >= 0; i--) {
-+ int offset = (i * kPointerSize) + FrameDescription::registers_offset();
-+ __ pop(Operand(ebx, offset));
-+ }
-+
-+ int double_regs_offset = FrameDescription::double_registers_offset();
-+ const RegisterConfiguration* config = RegisterConfiguration::Crankshaft();
-+ // Fill in the double input registers.
-+ for (int i = 0; i < X87Register::kMaxNumAllocatableRegisters; ++i) {
-+ int code = config->GetAllocatableDoubleCode(i);
-+ int dst_offset = code * kDoubleSize + double_regs_offset;
-+ int src_offset = code * kDoubleSize;
-+ __ fld_d(Operand(esp, src_offset));
-+ __ fstp_d(Operand(ebx, dst_offset));
-+ }
-+
-+  // Clear all FPU exceptions.
-+ // TODO(ulan): Find out why the TOP register is not zero here in some cases,
-+ // and check that the generated code never deoptimizes with unbalanced stack.
-+ __ fnclex();
-+
-+ // Remove the bailout id, return address and the double registers.
-+ __ add(esp, Immediate(kDoubleRegsSize + 2 * kPointerSize));
-+
-+ // Compute a pointer to the unwinding limit in register ecx; that is
-+ // the first stack slot not part of the input frame.
-+ __ mov(ecx, Operand(ebx, FrameDescription::frame_size_offset()));
-+ __ add(ecx, esp);
-+
-+ // Unwind the stack down to - but not including - the unwinding
-+ // limit and copy the contents of the activation frame to the input
-+ // frame description.
-+ __ lea(edx, Operand(ebx, FrameDescription::frame_content_offset()));
-+ Label pop_loop_header;
-+ __ jmp(&pop_loop_header);
-+ Label pop_loop;
-+ __ bind(&pop_loop);
-+ __ pop(Operand(edx, 0));
-+ __ add(edx, Immediate(sizeof(uint32_t)));
-+ __ bind(&pop_loop_header);
-+ __ cmp(ecx, esp);
-+ __ j(not_equal, &pop_loop);
-+
-+ // Compute the output frame in the deoptimizer.
-+ __ push(edi);
-+ __ push(eax);
-+ __ PrepareCallCFunction(1, ebx);
-+ __ mov(Operand(esp, 0 * kPointerSize), eax);
-+ {
-+ AllowExternalCallThatCantCauseGC scope(masm());
-+ __ CallCFunction(
-+ ExternalReference::compute_output_frames_function(isolate()), 1);
-+ }
-+ __ pop(eax);
-+ __ pop(edi);
-+ __ mov(esp, Operand(eax, Deoptimizer::caller_frame_top_offset()));
-+
-+ // Replace the current (input) frame with the output frames.
-+ Label outer_push_loop, inner_push_loop,
-+ outer_loop_header, inner_loop_header;
-+ // Outer loop state: eax = current FrameDescription**, edx = one past the
-+ // last FrameDescription**.
-+ __ mov(edx, Operand(eax, Deoptimizer::output_count_offset()));
-+ __ mov(eax, Operand(eax, Deoptimizer::output_offset()));
-+ __ lea(edx, Operand(eax, edx, times_4, 0));
-+ __ jmp(&outer_loop_header);
-+ __ bind(&outer_push_loop);
-+ // Inner loop state: ebx = current FrameDescription*, ecx = loop index.
-+ __ mov(ebx, Operand(eax, 0));
-+ __ mov(ecx, Operand(ebx, FrameDescription::frame_size_offset()));
-+ __ jmp(&inner_loop_header);
-+ __ bind(&inner_push_loop);
-+ __ sub(ecx, Immediate(sizeof(uint32_t)));
-+ __ push(Operand(ebx, ecx, times_1, FrameDescription::frame_content_offset()));
-+ __ bind(&inner_loop_header);
-+ __ test(ecx, ecx);
-+ __ j(not_zero, &inner_push_loop);
-+ __ add(eax, Immediate(kPointerSize));
-+ __ bind(&outer_loop_header);
-+ __ cmp(eax, edx);
-+ __ j(below, &outer_push_loop);
-+
-+
-+ // In case of a failed STUB, we have to restore the x87 stack.
-+ // x87 stack layout is in edi.
-+ Label loop2, done2;
-+ // Get the x87 stack depth, the first 3 bits.
-+ __ mov(ecx, edi);
-+ __ and_(ecx, 0x7);
-+ __ j(zero, &done2, Label::kNear);
-+
-+ __ lea(ecx, Operand(ecx, ecx, times_2, 0));
-+ __ bind(&loop2);
-+ __ mov(eax, edi);
-+ __ shr_cl(eax);
-+ __ and_(eax, 0x7);
-+ __ fld_d(Operand(ebx, eax, times_8, double_regs_offset));
-+ __ sub(ecx, Immediate(0x3));
-+ __ j(not_zero, &loop2, Label::kNear);
-+ __ bind(&done2);
-+
-+ // Push state, pc, and continuation from the last output frame.
-+ __ push(Operand(ebx, FrameDescription::state_offset()));
-+ __ push(Operand(ebx, FrameDescription::pc_offset()));
-+ __ push(Operand(ebx, FrameDescription::continuation_offset()));
-+
-+
-+ // Push the registers from the last output frame.
-+ for (int i = 0; i < kNumberOfRegisters; i++) {
-+ int offset = (i * kPointerSize) + FrameDescription::registers_offset();
-+ __ push(Operand(ebx, offset));
-+ }
-+
-+ // Restore the registers from the stack.
-+ __ popad();
-+
-+ // Return to the continuation point.
-+ __ ret(0);
-+}
-+
-+
-+void Deoptimizer::TableEntryGenerator::GeneratePrologue() {
-+ // Create a sequence of deoptimization entries.
-+ Label done;
-+ for (int i = 0; i < count(); i++) {
-+ int start = masm()->pc_offset();
-+ USE(start);
-+ __ push_imm32(i);
-+ __ jmp(&done);
-+ DCHECK(masm()->pc_offset() - start == table_entry_size_);
-+ }
-+ __ bind(&done);
-+}
-+
-+
-+void FrameDescription::SetCallerPc(unsigned offset, intptr_t value) {
-+ SetFrameSlot(offset, value);
-+}
-+
-+
-+void FrameDescription::SetCallerFp(unsigned offset, intptr_t value) {
-+ SetFrameSlot(offset, value);
-+}
-+
-+
-+void FrameDescription::SetCallerConstantPool(unsigned offset, intptr_t value) {
-+ // No embedded constant pool support.
-+ UNREACHABLE();
-+}
-+
-+
-+#undef __
-+
-+
-+} // namespace internal
-+} // namespace v8
-+
-+#endif // V8_TARGET_ARCH_X87
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/x87/disasm-x87.cc qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/x87/disasm-x87.cc
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/x87/disasm-x87.cc 1970-01-01 01:00:00.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/x87/disasm-x87.cc 2018-02-18 19:00:54.199418119 +0100
-@@ -0,0 +1,1874 @@
-+// Copyright 2011 the V8 project authors. All rights reserved.
-+// Use of this source code is governed by a BSD-style license that can be
-+// found in the LICENSE file.
-+
-+#include <assert.h>
-+#include <stdarg.h>
-+#include <stdio.h>
-+
-+#if V8_TARGET_ARCH_X87
-+
-+#include "src/base/compiler-specific.h"
-+#include "src/disasm.h"
-+
-+namespace disasm {
-+
-+enum OperandOrder {
-+ UNSET_OP_ORDER = 0,
-+ REG_OPER_OP_ORDER,
-+ OPER_REG_OP_ORDER
-+};
-+
-+
-+//------------------------------------------------------------------
-+// Tables
-+//------------------------------------------------------------------
-+struct ByteMnemonic {
-+ int b; // -1 terminates, otherwise must be in range (0..255)
-+ const char* mnem;
-+ OperandOrder op_order_;
-+};
-+
-+static const ByteMnemonic two_operands_instr[] = {
-+    {0x01, "add", OPER_REG_OP_ORDER}, {0x03, "add", REG_OPER_OP_ORDER},
-+    {0x09, "or", OPER_REG_OP_ORDER}, {0x0B, "or", REG_OPER_OP_ORDER},
-+    {0x13, "adc", REG_OPER_OP_ORDER}, {0x1B, "sbb", REG_OPER_OP_ORDER},
-+    {0x21, "and", OPER_REG_OP_ORDER}, {0x23, "and", REG_OPER_OP_ORDER},
-+    {0x29, "sub", OPER_REG_OP_ORDER}, {0x2A, "subb", REG_OPER_OP_ORDER},
-+    {0x2B, "sub", REG_OPER_OP_ORDER}, {0x31, "xor", OPER_REG_OP_ORDER},
-+    {0x33, "xor", REG_OPER_OP_ORDER}, {0x38, "cmpb", OPER_REG_OP_ORDER},
-+    {0x39, "cmp", OPER_REG_OP_ORDER}, {0x3A, "cmpb", REG_OPER_OP_ORDER},
-+    {0x3B, "cmp", REG_OPER_OP_ORDER}, {0x84, "test_b", REG_OPER_OP_ORDER},
-+    {0x85, "test", REG_OPER_OP_ORDER}, {0x86, "xchg_b", REG_OPER_OP_ORDER},
-+    {0x87, "xchg", REG_OPER_OP_ORDER}, {0x8A, "mov_b", REG_OPER_OP_ORDER},
-+    {0x8B, "mov", REG_OPER_OP_ORDER}, {0x8D, "lea", REG_OPER_OP_ORDER},
-+    {-1, "", UNSET_OP_ORDER}};
-+
-+static const ByteMnemonic zero_operands_instr[] = {
-+ {0xC3, "ret", UNSET_OP_ORDER},
-+ {0xC9, "leave", UNSET_OP_ORDER},
-+ {0x90, "nop", UNSET_OP_ORDER},
-+ {0xF4, "hlt", UNSET_OP_ORDER},
-+ {0xCC, "int3", UNSET_OP_ORDER},
-+ {0x60, "pushad", UNSET_OP_ORDER},
-+ {0x61, "popad", UNSET_OP_ORDER},
-+ {0x9C, "pushfd", UNSET_OP_ORDER},
-+ {0x9D, "popfd", UNSET_OP_ORDER},
-+ {0x9E, "sahf", UNSET_OP_ORDER},
-+ {0x99, "cdq", UNSET_OP_ORDER},
-+ {0x9B, "fwait", UNSET_OP_ORDER},
-+ {0xFC, "cld", UNSET_OP_ORDER},
-+ {0xAB, "stos", UNSET_OP_ORDER},
-+ {-1, "", UNSET_OP_ORDER}
-+};
-+
-+
-+static const ByteMnemonic call_jump_instr[] = {
-+ {0xE8, "call", UNSET_OP_ORDER},
-+ {0xE9, "jmp", UNSET_OP_ORDER},
-+ {-1, "", UNSET_OP_ORDER}
-+};
-+
-+
-+static const ByteMnemonic short_immediate_instr[] = {
-+ {0x05, "add", UNSET_OP_ORDER},
-+ {0x0D, "or", UNSET_OP_ORDER},
-+ {0x15, "adc", UNSET_OP_ORDER},
-+ {0x25, "and", UNSET_OP_ORDER},
-+ {0x2D, "sub", UNSET_OP_ORDER},
-+ {0x35, "xor", UNSET_OP_ORDER},
-+ {0x3D, "cmp", UNSET_OP_ORDER},
-+ {-1, "", UNSET_OP_ORDER}
-+};
-+
-+
-+// Generally we don't want to generate these because they are subject to partial
-+// register stalls. They are included for completeness and because the cmp
-+// variant is used by the RecordWrite stub. Because it does not update the
-+// register it is not subject to partial register stalls.
-+static ByteMnemonic byte_immediate_instr[] = {
-+ {0x0c, "or", UNSET_OP_ORDER},
-+ {0x24, "and", UNSET_OP_ORDER},
-+ {0x34, "xor", UNSET_OP_ORDER},
-+ {0x3c, "cmp", UNSET_OP_ORDER},
-+ {-1, "", UNSET_OP_ORDER}
-+};
-+
-+
-+static const char* const jump_conditional_mnem[] = {
-+ /*0*/ "jo", "jno", "jc", "jnc",
-+ /*4*/ "jz", "jnz", "jna", "ja",
-+ /*8*/ "js", "jns", "jpe", "jpo",
-+ /*12*/ "jl", "jnl", "jng", "jg"
-+};
-+
-+
-+static const char* const set_conditional_mnem[] = {
-+ /*0*/ "seto", "setno", "setc", "setnc",
-+ /*4*/ "setz", "setnz", "setna", "seta",
-+ /*8*/ "sets", "setns", "setpe", "setpo",
-+ /*12*/ "setl", "setnl", "setng", "setg"
-+};
-+
-+
-+static const char* const conditional_move_mnem[] = {
-+ /*0*/ "cmovo", "cmovno", "cmovc", "cmovnc",
-+ /*4*/ "cmovz", "cmovnz", "cmovna", "cmova",
-+ /*8*/ "cmovs", "cmovns", "cmovpe", "cmovpo",
-+ /*12*/ "cmovl", "cmovnl", "cmovng", "cmovg"
-+};
-+
-+
-+enum InstructionType {
-+ NO_INSTR,
-+ ZERO_OPERANDS_INSTR,
-+ TWO_OPERANDS_INSTR,
-+ JUMP_CONDITIONAL_SHORT_INSTR,
-+ REGISTER_INSTR,
-+ MOVE_REG_INSTR,
-+ CALL_JUMP_INSTR,
-+ SHORT_IMMEDIATE_INSTR,
-+ BYTE_IMMEDIATE_INSTR
-+};
-+
-+
-+struct InstructionDesc {
-+ const char* mnem;
-+ InstructionType type;
-+ OperandOrder op_order_;
-+};
-+
-+
-+class InstructionTable {
-+ public:
-+ InstructionTable();
-+ const InstructionDesc& Get(byte x) const { return instructions_[x]; }
-+ static InstructionTable* get_instance() {
-+ static InstructionTable table;
-+ return &table;
-+ }
-+
-+ private:
-+ InstructionDesc instructions_[256];
-+ void Clear();
-+ void Init();
-+ void CopyTable(const ByteMnemonic bm[], InstructionType type);
-+ void SetTableRange(InstructionType type,
-+ byte start,
-+ byte end,
-+ const char* mnem);
-+ void AddJumpConditionalShort();
-+};
-+
-+
-+InstructionTable::InstructionTable() {
-+ Clear();
-+ Init();
-+}
-+
-+
-+void InstructionTable::Clear() {
-+ for (int i = 0; i < 256; i++) {
-+ instructions_[i].mnem = "";
-+ instructions_[i].type = NO_INSTR;
-+ instructions_[i].op_order_ = UNSET_OP_ORDER;
-+ }
-+}
-+
-+
-+void InstructionTable::Init() {
-+ CopyTable(two_operands_instr, TWO_OPERANDS_INSTR);
-+ CopyTable(zero_operands_instr, ZERO_OPERANDS_INSTR);
-+ CopyTable(call_jump_instr, CALL_JUMP_INSTR);
-+ CopyTable(short_immediate_instr, SHORT_IMMEDIATE_INSTR);
-+ CopyTable(byte_immediate_instr, BYTE_IMMEDIATE_INSTR);
-+ AddJumpConditionalShort();
-+ SetTableRange(REGISTER_INSTR, 0x40, 0x47, "inc");
-+ SetTableRange(REGISTER_INSTR, 0x48, 0x4F, "dec");
-+ SetTableRange(REGISTER_INSTR, 0x50, 0x57, "push");
-+ SetTableRange(REGISTER_INSTR, 0x58, 0x5F, "pop");
-+ SetTableRange(REGISTER_INSTR, 0x91, 0x97, "xchg eax,"); // 0x90 is nop.
-+ SetTableRange(MOVE_REG_INSTR, 0xB8, 0xBF, "mov");
-+}
-+
-+
-+void InstructionTable::CopyTable(const ByteMnemonic bm[],
-+ InstructionType type) {
-+ for (int i = 0; bm[i].b >= 0; i++) {
-+ InstructionDesc* id = &instructions_[bm[i].b];
-+ id->mnem = bm[i].mnem;
-+ id->op_order_ = bm[i].op_order_;
-+ DCHECK_EQ(NO_INSTR, id->type); // Information not already entered.
-+ id->type = type;
-+ }
-+}
-+
-+
-+void InstructionTable::SetTableRange(InstructionType type,
-+ byte start,
-+ byte end,
-+ const char* mnem) {
-+ for (byte b = start; b <= end; b++) {
-+ InstructionDesc* id = &instructions_[b];
-+ DCHECK_EQ(NO_INSTR, id->type); // Information not already entered.
-+ id->mnem = mnem;
-+ id->type = type;
-+ }
-+}
-+
-+
-+void InstructionTable::AddJumpConditionalShort() {
-+ for (byte b = 0x70; b <= 0x7F; b++) {
-+ InstructionDesc* id = &instructions_[b];
-+ DCHECK_EQ(NO_INSTR, id->type); // Information not already entered.
-+ id->mnem = jump_conditional_mnem[b & 0x0F];
-+ id->type = JUMP_CONDITIONAL_SHORT_INSTR;
-+ }
-+}
-+
-+
-+// The X87 disassembler implementation.
-+class DisassemblerX87 {
-+ public:
-+ DisassemblerX87(const NameConverter& converter,
-+ bool abort_on_unimplemented = true)
-+ : converter_(converter),
-+ instruction_table_(InstructionTable::get_instance()),
-+ tmp_buffer_pos_(0),
-+ abort_on_unimplemented_(abort_on_unimplemented) {
-+ tmp_buffer_[0] = '\0';
-+ }
-+
-+ virtual ~DisassemblerX87() {}
-+
-+ // Writes one disassembled instruction into 'buffer' (0-terminated).
-+ // Returns the length of the disassembled machine instruction in bytes.
-+ int InstructionDecode(v8::internal::Vector<char> buffer, byte* instruction);
-+
-+ private:
-+ const NameConverter& converter_;
-+ InstructionTable* instruction_table_;
-+ v8::internal::EmbeddedVector<char, 128> tmp_buffer_;
-+ unsigned int tmp_buffer_pos_;
-+ bool abort_on_unimplemented_;
-+
-+ enum {
-+ eax = 0,
-+ ecx = 1,
-+ edx = 2,
-+ ebx = 3,
-+ esp = 4,
-+ ebp = 5,
-+ esi = 6,
-+ edi = 7
-+ };
-+
-+
-+ enum ShiftOpcodeExtension {
-+ kROL = 0,
-+ kROR = 1,
-+ kRCL = 2,
-+ kRCR = 3,
-+ kSHL = 4,
-+ KSHR = 5,
-+ kSAR = 7
-+ };
-+
-+
-+ const char* NameOfCPURegister(int reg) const {
-+ return converter_.NameOfCPURegister(reg);
-+ }
-+
-+
-+ const char* NameOfByteCPURegister(int reg) const {
-+ return converter_.NameOfByteCPURegister(reg);
-+ }
-+
-+
-+ const char* NameOfXMMRegister(int reg) const {
-+ return converter_.NameOfXMMRegister(reg);
-+ }
-+
-+
-+ const char* NameOfAddress(byte* addr) const {
-+ return converter_.NameOfAddress(addr);
-+ }
-+
-+
-+ // Disassembler helper functions.
-+ static void get_modrm(byte data, int* mod, int* regop, int* rm) {
-+ *mod = (data >> 6) & 3;
-+ *regop = (data & 0x38) >> 3;
-+ *rm = data & 7;
-+ }
-+
-+
-+ static void get_sib(byte data, int* scale, int* index, int* base) {
-+ *scale = (data >> 6) & 3;
-+ *index = (data >> 3) & 7;
-+ *base = data & 7;
-+ }
-+
-+ typedef const char* (DisassemblerX87::*RegisterNameMapping)(int reg) const;
-+
-+ int PrintRightOperandHelper(byte* modrmp, RegisterNameMapping register_name);
-+ int PrintRightOperand(byte* modrmp);
-+ int PrintRightByteOperand(byte* modrmp);
-+ int PrintRightXMMOperand(byte* modrmp);
-+ int PrintOperands(const char* mnem, OperandOrder op_order, byte* data);
-+ int PrintImmediateOp(byte* data);
-+ int F7Instruction(byte* data);
-+ int D1D3C1Instruction(byte* data);
-+ int JumpShort(byte* data);
-+ int JumpConditional(byte* data, const char* comment);
-+ int JumpConditionalShort(byte* data, const char* comment);
-+ int SetCC(byte* data);
-+ int CMov(byte* data);
-+ int FPUInstruction(byte* data);
-+ int MemoryFPUInstruction(int escape_opcode, int regop, byte* modrm_start);
-+ int RegisterFPUInstruction(int escape_opcode, byte modrm_byte);
-+ PRINTF_FORMAT(2, 3) void AppendToBuffer(const char* format, ...);
-+
-+ void UnimplementedInstruction() {
-+ if (abort_on_unimplemented_) {
-+ UNIMPLEMENTED();
-+ } else {
-+ AppendToBuffer("'Unimplemented Instruction'");
-+ }
-+ }
-+};
-+
-+
-+void DisassemblerX87::AppendToBuffer(const char* format, ...) {
-+ v8::internal::Vector<char> buf = tmp_buffer_ + tmp_buffer_pos_;
-+ va_list args;
-+ va_start(args, format);
-+ int result = v8::internal::VSNPrintF(buf, format, args);
-+ va_end(args);
-+ tmp_buffer_pos_ += result;
-+}
-+
-+int DisassemblerX87::PrintRightOperandHelper(
-+ byte* modrmp,
-+ RegisterNameMapping direct_register_name) {
-+ int mod, regop, rm;
-+  get_modrm(*modrmp, &mod, &regop, &rm);
-+ RegisterNameMapping register_name = (mod == 3) ? direct_register_name :
-+ &DisassemblerX87::NameOfCPURegister;
-+ switch (mod) {
-+ case 0:
-+ if (rm == ebp) {
-+ int32_t disp = *reinterpret_cast<int32_t*>(modrmp+1);
-+ AppendToBuffer("[0x%x]", disp);
-+ return 5;
-+ } else if (rm == esp) {
-+ byte sib = *(modrmp + 1);
-+ int scale, index, base;
-+ get_sib(sib, &scale, &index, &base);
-+ if (index == esp && base == esp && scale == 0 /*times_1*/) {
-+ AppendToBuffer("[%s]", (this->*register_name)(rm));
-+ return 2;
-+ } else if (base == ebp) {
-+ int32_t disp = *reinterpret_cast<int32_t*>(modrmp + 2);
-+ AppendToBuffer("[%s*%d%s0x%x]",
-+ (this->*register_name)(index),
-+ 1 << scale,
-+ disp < 0 ? "-" : "+",
-+ disp < 0 ? -disp : disp);
-+ return 6;
-+ } else if (index != esp && base != ebp) {
-+ // [base+index*scale]
-+ AppendToBuffer("[%s+%s*%d]",
-+ (this->*register_name)(base),
-+ (this->*register_name)(index),
-+ 1 << scale);
-+ return 2;
-+ } else {
-+ UnimplementedInstruction();
-+ return 1;
-+ }
-+ } else {
-+ AppendToBuffer("[%s]", (this->*register_name)(rm));
-+ return 1;
-+ }
-+ break;
-+ case 1: // fall through
-+ case 2:
-+ if (rm == esp) {
-+ byte sib = *(modrmp + 1);
-+ int scale, index, base;
-+ get_sib(sib, &scale, &index, &base);
-+ int disp = mod == 2 ? *reinterpret_cast<int32_t*>(modrmp + 2)
-+ : *reinterpret_cast<int8_t*>(modrmp + 2);
-+      if (index == base && index == rm /*esp*/ && scale == 0 /*times_1*/) {
-+ AppendToBuffer("[%s%s0x%x]",
-+ (this->*register_name)(rm),
-+ disp < 0 ? "-" : "+",
-+ disp < 0 ? -disp : disp);
-+ } else {
-+ AppendToBuffer("[%s+%s*%d%s0x%x]",
-+ (this->*register_name)(base),
-+ (this->*register_name)(index),
-+ 1 << scale,
-+ disp < 0 ? "-" : "+",
-+ disp < 0 ? -disp : disp);
-+ }
-+ return mod == 2 ? 6 : 3;
-+ } else {
-+ // No sib.
-+ int disp = mod == 2 ? *reinterpret_cast<int32_t*>(modrmp + 1)
-+ : *reinterpret_cast<int8_t*>(modrmp + 1);
-+ AppendToBuffer("[%s%s0x%x]",
-+ (this->*register_name)(rm),
-+ disp < 0 ? "-" : "+",
-+ disp < 0 ? -disp : disp);
-+ return mod == 2 ? 5 : 2;
-+ }
-+ break;
-+ case 3:
-+ AppendToBuffer("%s", (this->*register_name)(rm));
-+ return 1;
-+ default:
-+ UnimplementedInstruction();
-+ return 1;
-+ }
-+ UNREACHABLE();
-+}
-+
-+
-+int DisassemblerX87::PrintRightOperand(byte* modrmp) {
-+ return PrintRightOperandHelper(modrmp, &DisassemblerX87::NameOfCPURegister);
-+}
-+
-+
-+int DisassemblerX87::PrintRightByteOperand(byte* modrmp) {
-+ return PrintRightOperandHelper(modrmp,
-+ &DisassemblerX87::NameOfByteCPURegister);
-+}
-+
-+
-+int DisassemblerX87::PrintRightXMMOperand(byte* modrmp) {
-+ return PrintRightOperandHelper(modrmp,
-+ &DisassemblerX87::NameOfXMMRegister);
-+}
-+
-+
-+// Returns number of bytes used including the current *data.
-+// Writes instruction's mnemonic, left and right operands to 'tmp_buffer_'.
-+int DisassemblerX87::PrintOperands(const char* mnem,
-+ OperandOrder op_order,
-+ byte* data) {
-+ byte modrm = *data;
-+ int mod, regop, rm;
-+  get_modrm(modrm, &mod, &regop, &rm);
-+ int advance = 0;
-+ switch (op_order) {
-+ case REG_OPER_OP_ORDER: {
-+ AppendToBuffer("%s %s,", mnem, NameOfCPURegister(regop));
-+ advance = PrintRightOperand(data);
-+ break;
-+ }
-+ case OPER_REG_OP_ORDER: {
-+ AppendToBuffer("%s ", mnem);
-+ advance = PrintRightOperand(data);
-+ AppendToBuffer(",%s", NameOfCPURegister(regop));
-+ break;
-+ }
-+ default:
-+ UNREACHABLE();
-+ break;
-+ }
-+ return advance;
-+}
-+
-+
-+// Returns number of bytes used by machine instruction, including *data byte.
-+// Writes immediate instructions to 'tmp_buffer_'.
-+int DisassemblerX87::PrintImmediateOp(byte* data) {
-+ bool sign_extension_bit = (*data & 0x02) != 0;
-+ byte modrm = *(data+1);
-+ int mod, regop, rm;
-+  get_modrm(modrm, &mod, &regop, &rm);
-+ const char* mnem = "Imm???";
-+ switch (regop) {
-+ case 0: mnem = "add"; break;
-+ case 1: mnem = "or"; break;
-+ case 2: mnem = "adc"; break;
-+ case 4: mnem = "and"; break;
-+ case 5: mnem = "sub"; break;
-+ case 6: mnem = "xor"; break;
-+ case 7: mnem = "cmp"; break;
-+ default: UnimplementedInstruction();
-+ }
-+ AppendToBuffer("%s ", mnem);
-+ int count = PrintRightOperand(data+1);
-+ if (sign_extension_bit) {
-+ AppendToBuffer(",0x%x", *(data + 1 + count));
-+ return 1 + count + 1 /*int8*/;
-+ } else {
-+    AppendToBuffer(",0x%x", *reinterpret_cast<int32_t*>(data + 1 + count));
-+ return 1 + count + 4 /*int32_t*/;
-+ }
-+}
-+
-+
-+// Returns number of bytes used, including *data.
-+int DisassemblerX87::F7Instruction(byte* data) {
-+ DCHECK_EQ(0xF7, *data);
-+ byte modrm = *++data;
-+ int mod, regop, rm;
-+  get_modrm(modrm, &mod, &regop, &rm);
-+ const char* mnem = NULL;
-+ switch (regop) {
-+ case 0:
-+ mnem = "test";
-+ break;
-+ case 2:
-+ mnem = "not";
-+ break;
-+ case 3:
-+ mnem = "neg";
-+ break;
-+ case 4:
-+ mnem = "mul";
-+ break;
-+ case 5:
-+ mnem = "imul";
-+ break;
-+ case 6:
-+ mnem = "div";
-+ break;
-+ case 7:
-+ mnem = "idiv";
-+ break;
-+ default:
-+ UnimplementedInstruction();
-+ }
-+ AppendToBuffer("%s ", mnem);
-+ int count = PrintRightOperand(data);
-+ if (regop == 0) {
-+ AppendToBuffer(",0x%x", *reinterpret_cast<int32_t*>(data + count));
-+ count += 4;
-+ }
-+ return 1 + count;
-+}
-+
-+
-+int DisassemblerX87::D1D3C1Instruction(byte* data) {
-+ byte op = *data;
-+ DCHECK(op == 0xD1 || op == 0xD3 || op == 0xC1);
-+ byte modrm = *++data;
-+ int mod, regop, rm;
-+  get_modrm(modrm, &mod, &regop, &rm);
-+ int imm8 = -1;
-+ const char* mnem = NULL;
-+ switch (regop) {
-+ case kROL:
-+ mnem = "rol";
-+ break;
-+ case kROR:
-+ mnem = "ror";
-+ break;
-+ case kRCL:
-+ mnem = "rcl";
-+ break;
-+ case kRCR:
-+ mnem = "rcr";
-+ break;
-+ case kSHL:
-+ mnem = "shl";
-+ break;
-+ case KSHR:
-+ mnem = "shr";
-+ break;
-+ case kSAR:
-+ mnem = "sar";
-+ break;
-+ default:
-+ UnimplementedInstruction();
-+ }
-+ AppendToBuffer("%s ", mnem);
-+ int count = PrintRightOperand(data);
-+ if (op == 0xD1) {
-+ imm8 = 1;
-+ } else if (op == 0xC1) {
-+ imm8 = *(data + 1);
-+ count++;
-+ } else if (op == 0xD3) {
-+ // Shift/rotate by cl.
-+ }
-+ if (imm8 >= 0) {
-+ AppendToBuffer(",%d", imm8);
-+ } else {
-+ AppendToBuffer(",cl");
-+ }
-+ return 1 + count;
-+}
-+
-+
-+// Returns number of bytes used, including *data.
-+int DisassemblerX87::JumpShort(byte* data) {
-+ DCHECK_EQ(0xEB, *data);
-+ byte b = *(data+1);
-+ byte* dest = data + static_cast<int8_t>(b) + 2;
-+ AppendToBuffer("jmp %s", NameOfAddress(dest));
-+ return 2;
-+}
-+
-+
-+// Returns number of bytes used, including *data.
-+int DisassemblerX87::JumpConditional(byte* data, const char* comment) {
-+ DCHECK_EQ(0x0F, *data);
-+ byte cond = *(data+1) & 0x0F;
-+ byte* dest = data + *reinterpret_cast<int32_t*>(data+2) + 6;
-+ const char* mnem = jump_conditional_mnem[cond];
-+ AppendToBuffer("%s %s", mnem, NameOfAddress(dest));
-+ if (comment != NULL) {
-+ AppendToBuffer(", %s", comment);
-+ }
-+ return 6; // includes 0x0F
-+}
-+
-+
-+// Returns number of bytes used, including *data.
-+int DisassemblerX87::JumpConditionalShort(byte* data, const char* comment) {
-+ byte cond = *data & 0x0F;
-+ byte b = *(data+1);
-+ byte* dest = data + static_cast<int8_t>(b) + 2;
-+ const char* mnem = jump_conditional_mnem[cond];
-+ AppendToBuffer("%s %s", mnem, NameOfAddress(dest));
-+ if (comment != NULL) {
-+ AppendToBuffer(", %s", comment);
-+ }
-+ return 2;
-+}
-+
-+
-+// Returns number of bytes used, including *data.
-+int DisassemblerX87::SetCC(byte* data) {
-+ DCHECK_EQ(0x0F, *data);
-+ byte cond = *(data+1) & 0x0F;
-+ const char* mnem = set_conditional_mnem[cond];
-+ AppendToBuffer("%s ", mnem);
-+ PrintRightByteOperand(data+2);
-+ return 3; // Includes 0x0F.
-+}
-+
-+
-+// Returns number of bytes used, including *data.
-+int DisassemblerX87::CMov(byte* data) {
-+ DCHECK_EQ(0x0F, *data);
-+ byte cond = *(data + 1) & 0x0F;
-+ const char* mnem = conditional_move_mnem[cond];
-+ int op_size = PrintOperands(mnem, REG_OPER_OP_ORDER, data + 2);
-+ return 2 + op_size; // includes 0x0F
-+}
-+
-+
-+// Returns number of bytes used, including *data.
-+int DisassemblerX87::FPUInstruction(byte* data) {
-+ byte escape_opcode = *data;
-+ DCHECK_EQ(0xD8, escape_opcode & 0xF8);
-+ byte modrm_byte = *(data+1);
-+
-+ if (modrm_byte >= 0xC0) {
-+ return RegisterFPUInstruction(escape_opcode, modrm_byte);
-+ } else {
-+ return MemoryFPUInstruction(escape_opcode, modrm_byte, data+1);
-+ }
-+}
-+
-+int DisassemblerX87::MemoryFPUInstruction(int escape_opcode,
-+ int modrm_byte,
-+ byte* modrm_start) {
-+ const char* mnem = "?";
-+ int regop = (modrm_byte >> 3) & 0x7; // reg/op field of modrm byte.
-+ switch (escape_opcode) {
-+ case 0xD9: switch (regop) {
-+ case 0: mnem = "fld_s"; break;
-+ case 2: mnem = "fst_s"; break;
-+ case 3: mnem = "fstp_s"; break;
-+ case 5:
-+ mnem = "fldcw";
-+ break;
-+ case 7:
-+ mnem = "fnstcw";
-+ break;
-+ default: UnimplementedInstruction();
-+ }
-+ break;
-+
-+ case 0xDB: switch (regop) {
-+ case 0: mnem = "fild_s"; break;
-+ case 1: mnem = "fisttp_s"; break;
-+ case 2: mnem = "fist_s"; break;
-+ case 3: mnem = "fistp_s"; break;
-+ default: UnimplementedInstruction();
-+ }
-+ break;
-+
-+ case 0xDC:
-+ switch (regop) {
-+ case 0:
-+ mnem = "fadd_d";
-+ break;
-+ case 1:
-+ mnem = "fmul_d";
-+ break;
-+ case 4:
-+ mnem = "fsub_d";
-+ break;
-+ case 5:
-+ mnem = "fsubr_d";
-+ break;
-+ case 6:
-+ mnem = "fdiv_d";
-+ break;
-+ case 7:
-+ mnem = "fdivr_d";
-+ break;
-+ default:
-+ UnimplementedInstruction();
-+ }
-+ break;
-+
-+ case 0xDD: switch (regop) {
-+ case 0: mnem = "fld_d"; break;
-+ case 1: mnem = "fisttp_d"; break;
-+ case 2: mnem = "fst_d"; break;
-+ case 3: mnem = "fstp_d"; break;
-+ case 4:
-+ mnem = "frstor";
-+ break;
-+ case 6:
-+ mnem = "fnsave";
-+ break;
-+ default: UnimplementedInstruction();
-+ }
-+ break;
-+
-+ case 0xDF: switch (regop) {
-+ case 5: mnem = "fild_d"; break;
-+ case 7: mnem = "fistp_d"; break;
-+ default: UnimplementedInstruction();
-+ }
-+ break;
-+
-+ default: UnimplementedInstruction();
-+ }
-+ AppendToBuffer("%s ", mnem);
-+ int count = PrintRightOperand(modrm_start);
-+ return count + 1;
-+}
-+
-+int DisassemblerX87::RegisterFPUInstruction(int escape_opcode,
-+ byte modrm_byte) {
-+ bool has_register = false; // Is the FPU register encoded in modrm_byte?
-+ const char* mnem = "?";
-+
-+ switch (escape_opcode) {
-+ case 0xD8:
-+ has_register = true;
-+ switch (modrm_byte & 0xF8) {
-+ case 0xC0: mnem = "fadd_i"; break;
-+ case 0xE0: mnem = "fsub_i"; break;
-+ case 0xC8: mnem = "fmul_i"; break;
-+ case 0xF0: mnem = "fdiv_i"; break;
-+ default: UnimplementedInstruction();
-+ }
-+ break;
-+
-+ case 0xD9:
-+ switch (modrm_byte & 0xF8) {
-+ case 0xC0:
-+ mnem = "fld";
-+ has_register = true;
-+ break;
-+ case 0xC8:
-+ mnem = "fxch";
-+ has_register = true;
-+ break;
-+ default:
-+ switch (modrm_byte) {
-+ case 0xE0: mnem = "fchs"; break;
-+ case 0xE1: mnem = "fabs"; break;
-+ case 0xE4: mnem = "ftst"; break;
-+ case 0xE8: mnem = "fld1"; break;
-+ case 0xEB: mnem = "fldpi"; break;
-+ case 0xED: mnem = "fldln2"; break;
-+ case 0xEE: mnem = "fldz"; break;
-+ case 0xF0: mnem = "f2xm1"; break;
-+ case 0xF1: mnem = "fyl2x"; break;
-+ case 0xF4: mnem = "fxtract"; break;
-+ case 0xF5: mnem = "fprem1"; break;
-+ case 0xF7: mnem = "fincstp"; break;
-+ case 0xF8: mnem = "fprem"; break;
-+ case 0xFC: mnem = "frndint"; break;
-+ case 0xFD: mnem = "fscale"; break;
-+ case 0xFE: mnem = "fsin"; break;
-+ case 0xFF: mnem = "fcos"; break;
-+ default: UnimplementedInstruction();
-+ }
-+ }
-+ break;
-+
-+ case 0xDA:
-+ if (modrm_byte == 0xE9) {
-+ mnem = "fucompp";
-+ } else {
-+ UnimplementedInstruction();
-+ }
-+ break;
-+
-+ case 0xDB:
-+ if ((modrm_byte & 0xF8) == 0xE8) {
-+ mnem = "fucomi";
-+ has_register = true;
-+ } else if (modrm_byte == 0xE2) {
-+ mnem = "fclex";
-+ } else if (modrm_byte == 0xE3) {
-+ mnem = "fninit";
-+ } else {
-+ UnimplementedInstruction();
-+ }
-+ break;
-+
-+ case 0xDC:
-+ has_register = true;
-+ switch (modrm_byte & 0xF8) {
-+ case 0xC0: mnem = "fadd"; break;
-+ case 0xE8: mnem = "fsub"; break;
-+ case 0xC8: mnem = "fmul"; break;
-+ case 0xF8: mnem = "fdiv"; break;
-+ default: UnimplementedInstruction();
-+ }
-+ break;
-+
-+ case 0xDD:
-+ has_register = true;
-+ switch (modrm_byte & 0xF8) {
-+ case 0xC0: mnem = "ffree"; break;
-+ case 0xD0: mnem = "fst"; break;
-+ case 0xD8: mnem = "fstp"; break;
-+ default: UnimplementedInstruction();
-+ }
-+ break;
-+
-+ case 0xDE:
-+ if (modrm_byte == 0xD9) {
-+ mnem = "fcompp";
-+ } else {
-+ has_register = true;
-+ switch (modrm_byte & 0xF8) {
-+ case 0xC0: mnem = "faddp"; break;
-+ case 0xE8: mnem = "fsubp"; break;
-+ case 0xC8: mnem = "fmulp"; break;
-+ case 0xF8: mnem = "fdivp"; break;
-+ default: UnimplementedInstruction();
-+ }
-+ }
-+ break;
-+
-+ case 0xDF:
-+ if (modrm_byte == 0xE0) {
-+ mnem = "fnstsw_ax";
-+ } else if ((modrm_byte & 0xF8) == 0xE8) {
-+ mnem = "fucomip";
-+ has_register = true;
-+ }
-+ break;
-+
-+ default: UnimplementedInstruction();
-+ }
-+
-+ if (has_register) {
-+ AppendToBuffer("%s st%d", mnem, modrm_byte & 0x7);
-+ } else {
-+ AppendToBuffer("%s", mnem);
-+ }
-+ return 2;
-+}
-+
-+
-+// Mnemonics for two-byte instructions that start with the 0x0F byte.
-+// Returns NULL if the instruction is not handled here.
-+static const char* F0Mnem(byte f0byte) {
-+ switch (f0byte) {
-+ case 0x0B:
-+ return "ud2";
-+ case 0x18:
-+ return "prefetch";
-+ case 0xA2:
-+ return "cpuid";
-+ case 0xBE:
-+ return "movsx_b";
-+ case 0xBF:
-+ return "movsx_w";
-+ case 0xB6:
-+ return "movzx_b";
-+ case 0xB7:
-+ return "movzx_w";
-+ case 0xAF:
-+ return "imul";
-+ case 0xA4:
-+ return "shld";
-+ case 0xA5:
-+ return "shld";
-+ case 0xAD:
-+ return "shrd";
-+ case 0xAC:
-+ return "shrd"; // 3-operand version.
-+ case 0xAB:
-+ return "bts";
-+ case 0xB0:
-+ return "cmpxchg_b";
-+ case 0xB1:
-+ return "cmpxchg";
-+ case 0xBC:
-+ return "bsf";
-+ case 0xBD:
-+ return "bsr";
-+ default: return NULL;
-+ }
-+}
-+
-+
-+// Disassembles the instruction '*instr' and writes it into 'out_buffer'.
-+int DisassemblerX87::InstructionDecode(v8::internal::Vector<char> out_buffer,
-+ byte* instr) {
-+  tmp_buffer_pos_ = 0; // starting to write at position 0
-+ byte* data = instr;
-+ // Check for hints.
-+ const char* branch_hint = NULL;
-+ // We use these two prefixes only with branch prediction
-+ if (*data == 0x3E /*ds*/) {
-+ branch_hint = "predicted taken";
-+ data++;
-+ } else if (*data == 0x2E /*cs*/) {
-+ branch_hint = "predicted not taken";
-+ data++;
-+ } else if (*data == 0xF0 /*lock*/) {
-+ AppendToBuffer("lock ");
-+ data++;
-+ }
-+
-+ bool processed = true; // Will be set to false if the current instruction
-+ // is not in 'instructions' table.
-+ const InstructionDesc& idesc = instruction_table_->Get(*data);
-+ switch (idesc.type) {
-+ case ZERO_OPERANDS_INSTR:
-+ AppendToBuffer("%s", idesc.mnem);
-+ data++;
-+ break;
-+
-+ case TWO_OPERANDS_INSTR:
-+ data++;
-+ data += PrintOperands(idesc.mnem, idesc.op_order_, data);
-+ break;
-+
-+ case JUMP_CONDITIONAL_SHORT_INSTR:
-+ data += JumpConditionalShort(data, branch_hint);
-+ break;
-+
-+ case REGISTER_INSTR:
-+      AppendToBuffer("%s %s", idesc.mnem, NameOfCPURegister(*data & 0x07));
-+ data++;
-+ break;
-+
-+ case MOVE_REG_INSTR: {
-+      byte* addr =
-+          reinterpret_cast<byte*>(*reinterpret_cast<int32_t*>(data+1));
-+ AppendToBuffer("mov %s,%s",
-+ NameOfCPURegister(*data & 0x07),
-+ NameOfAddress(addr));
-+ data += 5;
-+ break;
-+ }
-+
-+ case CALL_JUMP_INSTR: {
-+ byte* addr = data + *reinterpret_cast<int32_t*>(data+1) + 5;
-+ AppendToBuffer("%s %s", idesc.mnem, NameOfAddress(addr));
-+ data += 5;
-+ break;
-+ }
-+
-+ case SHORT_IMMEDIATE_INSTR: {
-+      byte* addr =
-+          reinterpret_cast<byte*>(*reinterpret_cast<int32_t*>(data+1));
-+ AppendToBuffer("%s eax,%s", idesc.mnem, NameOfAddress(addr));
-+ data += 5;
-+ break;
-+ }
-+
-+ case BYTE_IMMEDIATE_INSTR: {
-+ AppendToBuffer("%s al,0x%x", idesc.mnem, data[1]);
-+ data += 2;
-+ break;
-+ }
-+
-+ case NO_INSTR:
-+ processed = false;
-+ break;
-+
-+ default:
-+ UNIMPLEMENTED(); // This type is not implemented.
-+ }
-+ //----------------------------
-+ if (!processed) {
-+ switch (*data) {
-+ case 0xC2:
-+        AppendToBuffer("ret 0x%x", *reinterpret_cast<uint16_t*>(data+1));
-+ data += 3;
-+ break;
-+
-+ case 0x6B: {
-+ data++;
-+ data += PrintOperands("imul", REG_OPER_OP_ORDER, data);
-+ AppendToBuffer(",%d", *data);
-+ data++;
-+ } break;
-+
-+ case 0x69: {
-+ data++;
-+ data += PrintOperands("imul", REG_OPER_OP_ORDER, data);
-+ AppendToBuffer(",%d", *reinterpret_cast<int32_t*>(data));
-+ data += 4;
-+ }
-+ break;
-+
-+ case 0xF6:
-+ { data++;
-+ int mod, regop, rm;
-+          get_modrm(*data, &mod, &regop, &rm);
-+ if (regop == eax) {
-+ AppendToBuffer("test_b ");
-+ data += PrintRightByteOperand(data);
-+ int32_t imm = *data;
-+ AppendToBuffer(",0x%x", imm);
-+ data++;
-+ } else {
-+ UnimplementedInstruction();
-+ }
-+ }
-+ break;
-+
-+ case 0x81: // fall through
-+ case 0x83: // 0x81 with sign extension bit set
-+ data += PrintImmediateOp(data);
-+ break;
-+
-+ case 0x0F:
-+ { byte f0byte = data[1];
-+ const char* f0mnem = F0Mnem(f0byte);
-+ if (f0byte == 0x18) {
-+ data += 2;
-+ int mod, regop, rm;
-+            get_modrm(*data, &mod, &regop, &rm);
-+            const char* suffix[] = {"nta", "1", "2", "3"};
-+ AppendToBuffer("%s%s ", f0mnem, suffix[regop & 0x03]);
-+ data += PrintRightOperand(data);
-+ } else if (f0byte == 0x1F && data[2] == 0) {
-+ AppendToBuffer("nop"); // 3 byte nop.
-+ data += 3;
-+          } else if (f0byte == 0x1F && data[2] == 0x40 && data[3] == 0) {
-+ AppendToBuffer("nop"); // 4 byte nop.
-+ data += 4;
-+          } else if (f0byte == 0x1F && data[2] == 0x44 && data[3] == 0 &&
-+                     data[4] == 0) {
-+ AppendToBuffer("nop"); // 5 byte nop.
-+ data += 5;
-+          } else if (f0byte == 0x1F && data[2] == 0x80 && data[3] == 0 &&
-+                     data[4] == 0 && data[5] == 0 && data[6] == 0) {
-+ AppendToBuffer("nop"); // 7 byte nop.
-+ data += 7;
-+          } else if (f0byte == 0x1F && data[2] == 0x84 && data[3] == 0 &&
-+                     data[4] == 0 && data[5] == 0 && data[6] == 0 &&
-+                     data[7] == 0) {
-+ AppendToBuffer("nop"); // 8 byte nop.
-+ data += 8;
-+ } else if (f0byte == 0x0B || f0byte == 0xA2 || f0byte == 0x31) {
-+ AppendToBuffer("%s", f0mnem);
-+ data += 2;
-+ } else if (f0byte == 0x28) {
-+ data += 2;
-+ int mod, regop, rm;
-+            get_modrm(*data, &mod, &regop, &rm);
-+ AppendToBuffer("movaps %s,%s",
-+ NameOfXMMRegister(regop),
-+ NameOfXMMRegister(rm));
-+ data++;
-+ } else if (f0byte >= 0x53 && f0byte <= 0x5F) {
-+ const char* const pseudo_op[] = {
-+ "rcpps",
-+ "andps",
-+ "andnps",
-+ "orps",
-+ "xorps",
-+ "addps",
-+ "mulps",
-+ "cvtps2pd",
-+ "cvtdq2ps",
-+ "subps",
-+ "minps",
-+ "divps",
-+ "maxps",
-+ };
-+
-+ data += 2;
-+ int mod, regop, rm;
-+            get_modrm(*data, &mod, &regop, &rm);
-+ AppendToBuffer("%s %s,",
-+ pseudo_op[f0byte - 0x53],
-+ NameOfXMMRegister(regop));
-+ data += PrintRightXMMOperand(data);
-+ } else if (f0byte == 0x50) {
-+ data += 2;
-+ int mod, regop, rm;
-+            get_modrm(*data, &mod, &regop, &rm);
-+ AppendToBuffer("movmskps %s,%s",
-+ NameOfCPURegister(regop),
-+ NameOfXMMRegister(rm));
-+ data++;
-+          } else if (f0byte == 0xC6) {
-+ // shufps xmm, xmm/m128, imm8
-+ data += 2;
-+ int mod, regop, rm;
-+            get_modrm(*data, &mod, &regop, &rm);
-+ int8_t imm8 = static_cast<int8_t>(data[1]);
-+ AppendToBuffer("shufps %s,%s,%d",
-+ NameOfXMMRegister(rm),
-+ NameOfXMMRegister(regop),
-+ static_cast<int>(imm8));
-+ data += 2;
-+ } else if ((f0byte & 0xF0) == 0x80) {
-+ data += JumpConditional(data, branch_hint);
-+ } else if (f0byte == 0xBE || f0byte == 0xBF || f0byte == 0xB6 ||
-+ f0byte == 0xB7 || f0byte == 0xAF) {
-+ data += 2;
-+ data += PrintOperands(f0mnem, REG_OPER_OP_ORDER, data);
-+ } else if ((f0byte & 0xF0) == 0x90) {
-+ data += SetCC(data);
-+ } else if ((f0byte & 0xF0) == 0x40) {
-+ data += CMov(data);
-+ } else if (f0byte == 0xA4 || f0byte == 0xAC) {
-+ // shld, shrd
-+ data += 2;
-+ AppendToBuffer("%s ", f0mnem);
-+ int mod, regop, rm;
-+            get_modrm(*data, &mod, &regop, &rm);
-+ int8_t imm8 = static_cast<int8_t>(data[1]);
-+ data += 2;
-+ AppendToBuffer("%s,%s,%d", NameOfCPURegister(rm),
-+ NameOfCPURegister(regop), static_cast<int>(imm8));
-+ } else if (f0byte == 0xAB || f0byte == 0xA5 || f0byte == 0xAD) {
-+ // shrd_cl, shld_cl, bts
-+ data += 2;
-+ AppendToBuffer("%s ", f0mnem);
-+ int mod, regop, rm;
-+            get_modrm(*data, &mod, &regop, &rm);
-+ data += PrintRightOperand(data);
-+ if (f0byte == 0xAB) {
-+ AppendToBuffer(",%s", NameOfCPURegister(regop));
-+ } else {
-+ AppendToBuffer(",%s,cl", NameOfCPURegister(regop));
-+ }
-+ } else if (f0byte == 0xB0) {
-+ // cmpxchg_b
-+ data += 2;
-+ AppendToBuffer("%s ", f0mnem);
-+ int mod, regop, rm;
-+            get_modrm(*data, &mod, &regop, &rm);
-+ data += PrintRightOperand(data);
-+ AppendToBuffer(",%s", NameOfByteCPURegister(regop));
-+ } else if (f0byte == 0xB1) {
-+ // cmpxchg
-+ data += 2;
-+ data += PrintOperands(f0mnem, OPER_REG_OP_ORDER, data);
-+ } else if (f0byte == 0xBC) {
-+ data += 2;
-+ int mod, regop, rm;
-+            get_modrm(*data, &mod, &regop, &rm);
-+ AppendToBuffer("%s %s,", f0mnem, NameOfCPURegister(regop));
-+ data += PrintRightOperand(data);
-+ } else if (f0byte == 0xBD) {
-+ data += 2;
-+ int mod, regop, rm;
-+            get_modrm(*data, &mod, &regop, &rm);
-+ AppendToBuffer("%s %s,", f0mnem, NameOfCPURegister(regop));
-+ data += PrintRightOperand(data);
-+ } else {
-+ UnimplementedInstruction();
-+ }
-+ }
-+ break;
-+
-+ case 0x8F:
-+ { data++;
-+ int mod, regop, rm;
-+          get_modrm(*data, &mod, &regop, &rm);
-+ if (regop == eax) {
-+ AppendToBuffer("pop ");
-+ data += PrintRightOperand(data);
-+ }
-+ }
-+ break;
-+
-+ case 0xFF:
-+ { data++;
-+ int mod, regop, rm;
-+          get_modrm(*data, &mod, &regop, &rm);
-+ const char* mnem = NULL;
-+ switch (regop) {
-+ case esi: mnem = "push"; break;
-+ case eax: mnem = "inc"; break;
-+ case ecx: mnem = "dec"; break;
-+ case edx: mnem = "call"; break;
-+ case esp: mnem = "jmp"; break;
-+ default: mnem = "???";
-+ }
-+ AppendToBuffer("%s ", mnem);
-+ data += PrintRightOperand(data);
-+ }
-+ break;
-+
-+ case 0xC7: // imm32, fall through
-+ case 0xC6: // imm8
-+ { bool is_byte = *data == 0xC6;
-+ data++;
-+ if (is_byte) {
-+ AppendToBuffer("%s ", "mov_b");
-+ data += PrintRightByteOperand(data);
-+ int32_t imm = *data;
-+ AppendToBuffer(",0x%x", imm);
-+ data++;
-+ } else {
-+ AppendToBuffer("%s ", "mov");
-+ data += PrintRightOperand(data);
-+ int32_t imm = *reinterpret_cast<int32_t*>(data);
-+ AppendToBuffer(",0x%x", imm);
-+ data += 4;
-+ }
-+ }
-+ break;
-+
-+ case 0x80:
-+ { data++;
-+ int mod, regop, rm;
-+          get_modrm(*data, &mod, &regop, &rm);
-+ const char* mnem = NULL;
-+ switch (regop) {
-+ case 5: mnem = "subb"; break;
-+ case 7: mnem = "cmpb"; break;
-+ default: UnimplementedInstruction();
-+ }
-+ AppendToBuffer("%s ", mnem);
-+ data += PrintRightByteOperand(data);
-+ int32_t imm = *data;
-+ AppendToBuffer(",0x%x", imm);
-+ data++;
-+ }
-+ break;
-+
-+ case 0x88: // 8bit, fall through
-+ case 0x89: // 32bit
-+ { bool is_byte = *data == 0x88;
-+ int mod, regop, rm;
-+ data++;
-+          get_modrm(*data, &mod, &regop, &rm);
-+ if (is_byte) {
-+ AppendToBuffer("%s ", "mov_b");
-+ data += PrintRightByteOperand(data);
-+ AppendToBuffer(",%s", NameOfByteCPURegister(regop));
-+ } else {
-+ AppendToBuffer("%s ", "mov");
-+ data += PrintRightOperand(data);
-+ AppendToBuffer(",%s", NameOfCPURegister(regop));
-+ }
-+ }
-+ break;
-+
-+ case 0x66: // prefix
-+ while (*data == 0x66) data++;
-+ if (*data == 0xf && data[1] == 0x1f) {
-+ AppendToBuffer("nop"); // 0x66 prefix
-+ } else if (*data == 0x39) {
-+ data++;
-+ data += PrintOperands("cmpw", OPER_REG_OP_ORDER, data);
-+ } else if (*data == 0x3B) {
-+ data++;
-+ data += PrintOperands("cmpw", REG_OPER_OP_ORDER, data);
-+ } else if (*data == 0x81) {
-+ data++;
-+ AppendToBuffer("cmpw ");
-+ data += PrintRightOperand(data);
-+ int imm = *reinterpret_cast<int16_t*>(data);
-+ AppendToBuffer(",0x%x", imm);
-+ data += 2;
-+ } else if (*data == 0x87) {
-+ data++;
-+ int mod, regop, rm;
-+          get_modrm(*data, &mod, &regop, &rm);
-+ AppendToBuffer("xchg_w %s,", NameOfCPURegister(regop));
-+ data += PrintRightOperand(data);
-+ } else if (*data == 0x89) {
-+ data++;
-+ int mod, regop, rm;
-+          get_modrm(*data, &mod, &regop, &rm);
-+ AppendToBuffer("mov_w ");
-+ data += PrintRightOperand(data);
-+ AppendToBuffer(",%s", NameOfCPURegister(regop));
-+ } else if (*data == 0x8B) {
-+ data++;
-+ data += PrintOperands("mov_w", REG_OPER_OP_ORDER, data);
-+ } else if (*data == 0x90) {
-+ AppendToBuffer("nop"); // 0x66 prefix
-+ } else if (*data == 0xC7) {
-+ data++;
-+ AppendToBuffer("%s ", "mov_w");
-+ data += PrintRightOperand(data);
-+ int imm = *reinterpret_cast<int16_t*>(data);
-+ AppendToBuffer(",0x%x", imm);
-+ data += 2;
-+ } else if (*data == 0xF7) {
-+ data++;
-+ AppendToBuffer("%s ", "test_w");
-+ data += PrintRightOperand(data);
-+ int imm = *reinterpret_cast<int16_t*>(data);
-+ AppendToBuffer(",0x%x", imm);
-+ data += 2;
-+ } else if (*data == 0x0F) {
-+ data++;
-+ if (*data == 0x38) {
-+ data++;
-+ if (*data == 0x17) {
-+ data++;
-+ int mod, regop, rm;
-+              get_modrm(*data, &mod, &regop, &rm);
-+ AppendToBuffer("ptest %s,%s",
-+ NameOfXMMRegister(regop),
-+ NameOfXMMRegister(rm));
-+ data++;
-+ } else if (*data == 0x2A) {
-+ // movntdqa
-+ UnimplementedInstruction();
-+ } else {
-+ UnimplementedInstruction();
-+ }
-+ } else if (*data == 0x3A) {
-+ data++;
-+ if (*data == 0x0B) {
-+ data++;
-+ int mod, regop, rm;
-+              get_modrm(*data, &mod, &regop, &rm);
-+ int8_t imm8 = static_cast<int8_t>(data[1]);
-+ AppendToBuffer("roundsd %s,%s,%d",
-+ NameOfXMMRegister(regop),
-+ NameOfXMMRegister(rm),
-+ static_cast<int>(imm8));
-+ data += 2;
-+ } else if (*data == 0x16) {
-+ data++;
-+ int mod, regop, rm;
-+              get_modrm(*data, &mod, &rm, &regop);
-+ int8_t imm8 = static_cast<int8_t>(data[1]);
-+ AppendToBuffer("pextrd %s,%s,%d",
-+ NameOfCPURegister(regop),
-+ NameOfXMMRegister(rm),
-+ static_cast<int>(imm8));
-+ data += 2;
-+ } else if (*data == 0x17) {
-+ data++;
-+ int mod, regop, rm;
-+              get_modrm(*data, &mod, &regop, &rm);
-+ int8_t imm8 = static_cast<int8_t>(data[1]);
-+ AppendToBuffer("extractps %s,%s,%d",
-+ NameOfCPURegister(rm),
-+ NameOfXMMRegister(regop),
-+ static_cast<int>(imm8));
-+ data += 2;
-+ } else if (*data == 0x22) {
-+ data++;
-+ int mod, regop, rm;
-+              get_modrm(*data, &mod, &regop, &rm);
-+ int8_t imm8 = static_cast<int8_t>(data[1]);
-+ AppendToBuffer("pinsrd %s,%s,%d",
-+ NameOfXMMRegister(regop),
-+ NameOfCPURegister(rm),
-+ static_cast<int>(imm8));
-+ data += 2;
-+ } else {
-+ UnimplementedInstruction();
-+ }
-+ } else if (*data == 0x2E || *data == 0x2F) {
-+            const char* mnem = (*data == 0x2E) ? "ucomisd" : "comisd";
-+ data++;
-+ int mod, regop, rm;
-+            get_modrm(*data, &mod, &regop, &rm);
-+ if (mod == 0x3) {
-+ AppendToBuffer("%s %s,%s", mnem,
-+ NameOfXMMRegister(regop),
-+ NameOfXMMRegister(rm));
-+ data++;
-+ } else {
-+ AppendToBuffer("%s %s,", mnem, NameOfXMMRegister(regop));
-+ data += PrintRightOperand(data);
-+ }
-+ } else if (*data == 0x50) {
-+ data++;
-+ int mod, regop, rm;
-+            get_modrm(*data, &mod, &regop, &rm);
-+ AppendToBuffer("movmskpd %s,%s",
-+ NameOfCPURegister(regop),
-+ NameOfXMMRegister(rm));
-+ data++;
-+ } else if (*data == 0x54) {
-+ data++;
-+ int mod, regop, rm;
-+            get_modrm(*data, &mod, &regop, &rm);
-+ AppendToBuffer("andpd %s,%s",
-+ NameOfXMMRegister(regop),
-+ NameOfXMMRegister(rm));
-+ data++;
-+ } else if (*data == 0x56) {
-+ data++;
-+ int mod, regop, rm;
-+            get_modrm(*data, &mod, &regop, &rm);
-+ AppendToBuffer("orpd %s,%s",
-+ NameOfXMMRegister(regop),
-+ NameOfXMMRegister(rm));
-+ data++;
-+ } else if (*data == 0x57) {
-+ data++;
-+ int mod, regop, rm;
-+            get_modrm(*data, &mod, &regop, &rm);
-+ AppendToBuffer("xorpd %s,%s",
-+ NameOfXMMRegister(regop),
-+ NameOfXMMRegister(rm));
-+ data++;
-+ } else if (*data == 0x6E) {
-+ data++;
-+ int mod, regop, rm;
-+            get_modrm(*data, &mod, &regop, &rm);
-+ AppendToBuffer("movd %s,", NameOfXMMRegister(regop));
-+ data += PrintRightOperand(data);
-+ } else if (*data == 0x6F) {
-+ data++;
-+ int mod, regop, rm;
-+            get_modrm(*data, &mod, &regop, &rm);
-+ AppendToBuffer("movdqa %s,", NameOfXMMRegister(regop));
-+ data += PrintRightXMMOperand(data);
-+ } else if (*data == 0x70) {
-+ data++;
-+ int mod, regop, rm;
-+            get_modrm(*data, &mod, &regop, &rm);
-+ int8_t imm8 = static_cast<int8_t>(data[1]);
-+ AppendToBuffer("pshufd %s,%s,%d",
-+ NameOfXMMRegister(regop),
-+ NameOfXMMRegister(rm),
-+ static_cast<int>(imm8));
-+ data += 2;
-+ } else if (*data == 0x76) {
-+ data++;
-+ int mod, regop, rm;
-+            get_modrm(*data, &mod, &regop, &rm);
-+ AppendToBuffer("pcmpeqd %s,%s",
-+ NameOfXMMRegister(regop),
-+ NameOfXMMRegister(rm));
-+ data++;
-+ } else if (*data == 0x90) {
-+ data++;
-+ AppendToBuffer("nop"); // 2 byte nop.
-+ } else if (*data == 0xF3) {
-+ data++;
-+ int mod, regop, rm;
-+            get_modrm(*data, &mod, &regop, &rm);
-+ AppendToBuffer("psllq %s,%s",
-+ NameOfXMMRegister(regop),
-+ NameOfXMMRegister(rm));
-+ data++;
-+ } else if (*data == 0x73) {
-+ data++;
-+ int mod, regop, rm;
-+            get_modrm(*data, &mod, &regop, &rm);
-+ int8_t imm8 = static_cast<int8_t>(data[1]);
-+ DCHECK(regop == esi || regop == edx);
-+ AppendToBuffer("%s %s,%d",
-+ (regop == esi) ? "psllq" : "psrlq",
-+ NameOfXMMRegister(rm),
-+ static_cast<int>(imm8));
-+ data += 2;
-+ } else if (*data == 0xD3) {
-+ data++;
-+ int mod, regop, rm;
-+            get_modrm(*data, &mod, &regop, &rm);
-+ AppendToBuffer("psrlq %s,%s",
-+ NameOfXMMRegister(regop),
-+ NameOfXMMRegister(rm));
-+ data++;
-+ } else if (*data == 0x7F) {
-+ AppendToBuffer("movdqa ");
-+ data++;
-+ int mod, regop, rm;
-+            get_modrm(*data, &mod, &regop, &rm);
-+ data += PrintRightXMMOperand(data);
-+ AppendToBuffer(",%s", NameOfXMMRegister(regop));
-+ } else if (*data == 0x7E) {
-+ data++;
-+ int mod, regop, rm;
-+            get_modrm(*data, &mod, &regop, &rm);
-+ AppendToBuffer("movd ");
-+ data += PrintRightOperand(data);
-+ AppendToBuffer(",%s", NameOfXMMRegister(regop));
-+ } else if (*data == 0xDB) {
-+ data++;
-+ int mod, regop, rm;
-+            get_modrm(*data, &mod, &regop, &rm);
-+ AppendToBuffer("pand %s,%s",
-+ NameOfXMMRegister(regop),
-+ NameOfXMMRegister(rm));
-+ data++;
-+ } else if (*data == 0xE7) {
-+ data++;
-+ int mod, regop, rm;
-+            get_modrm(*data, &mod, &regop, &rm);
-+ if (mod == 3) {
-+ // movntdq
-+ UnimplementedInstruction();
-+ } else {
-+ UnimplementedInstruction();
-+ }
-+ } else if (*data == 0xEF) {
-+ data++;
-+ int mod, regop, rm;
-+            get_modrm(*data, &mod, &regop, &rm);
-+ AppendToBuffer("pxor %s,%s",
-+ NameOfXMMRegister(regop),
-+ NameOfXMMRegister(rm));
-+ data++;
-+ } else if (*data == 0xEB) {
-+ data++;
-+ int mod, regop, rm;
-+            get_modrm(*data, &mod, &regop, &rm);
-+ AppendToBuffer("por %s,%s",
-+ NameOfXMMRegister(regop),
-+ NameOfXMMRegister(rm));
-+ data++;
-+ } else if (*data == 0xB1) {
-+ data++;
-+ data += PrintOperands("cmpxchg_w", OPER_REG_OP_ORDER, data);
-+ } else {
-+ UnimplementedInstruction();
-+ }
-+ } else {
-+ UnimplementedInstruction();
-+ }
-+ break;
-+
-+ case 0xFE:
-+ { data++;
-+ int mod, regop, rm;
-+          get_modrm(*data, &mod, &regop, &rm);
-+ if (regop == ecx) {
-+ AppendToBuffer("dec_b ");
-+ data += PrintRightOperand(data);
-+ } else {
-+ UnimplementedInstruction();
-+ }
-+ }
-+ break;
-+
-+ case 0x68:
-+        AppendToBuffer("push 0x%x", *reinterpret_cast<int32_t*>(data+1));
-+ data += 5;
-+ break;
-+
-+ case 0x6A:
-+        AppendToBuffer("push 0x%x", *reinterpret_cast<int8_t*>(data + 1));
-+ data += 2;
-+ break;
-+
-+ case 0xA8:
-+        AppendToBuffer("test al,0x%x", *reinterpret_cast<uint8_t*>(data+1));
-+ data += 2;
-+ break;
-+
-+ case 0xA9:
-+        AppendToBuffer("test eax,0x%x", *reinterpret_cast<int32_t*>(data+1));
-+ data += 5;
-+ break;
-+
-+ case 0xD1: // fall through
-+ case 0xD3: // fall through
-+ case 0xC1:
-+ data += D1D3C1Instruction(data);
-+ break;
-+
-+ case 0xD8: // fall through
-+ case 0xD9: // fall through
-+ case 0xDA: // fall through
-+ case 0xDB: // fall through
-+ case 0xDC: // fall through
-+ case 0xDD: // fall through
-+ case 0xDE: // fall through
-+ case 0xDF:
-+ data += FPUInstruction(data);
-+ break;
-+
-+ case 0xEB:
-+ data += JumpShort(data);
-+ break;
-+
-+ case 0xF2:
-+ if (*(data+1) == 0x0F) {
-+ byte b2 = *(data+2);
-+ if (b2 == 0x11) {
-+ AppendToBuffer("movsd ");
-+ data += 3;
-+ int mod, regop, rm;
-+          get_modrm(*data, &mod, &regop, &rm);
-+ data += PrintRightXMMOperand(data);
-+ AppendToBuffer(",%s", NameOfXMMRegister(regop));
-+ } else if (b2 == 0x10) {
-+ data += 3;
-+ int mod, regop, rm;
-+          get_modrm(*data, &mod, &regop, &rm);
-+ AppendToBuffer("movsd %s,", NameOfXMMRegister(regop));
-+ data += PrintRightXMMOperand(data);
-+ } else if (b2 == 0x5A) {
-+ data += 3;
-+ int mod, regop, rm;
-+          get_modrm(*data, &mod, &regop, &rm);
-+ AppendToBuffer("cvtsd2ss %s,", NameOfXMMRegister(regop));
-+ data += PrintRightXMMOperand(data);
-+ } else {
-+ const char* mnem = "?";
-+ switch (b2) {
-+ case 0x2A: mnem = "cvtsi2sd"; break;
-+ case 0x2C: mnem = "cvttsd2si"; break;
-+ case 0x2D: mnem = "cvtsd2si"; break;
-+ case 0x51: mnem = "sqrtsd"; break;
-+ case 0x58: mnem = "addsd"; break;
-+ case 0x59: mnem = "mulsd"; break;
-+ case 0x5C: mnem = "subsd"; break;
-+ case 0x5E: mnem = "divsd"; break;
-+ }
-+ data += 3;
-+ int mod, regop, rm;
-+          get_modrm(*data, &mod, &regop, &rm);
-+ if (b2 == 0x2A) {
-+ AppendToBuffer("%s %s,", mnem, NameOfXMMRegister(regop));
-+ data += PrintRightOperand(data);
-+ } else if (b2 == 0x2C || b2 == 0x2D) {
-+ AppendToBuffer("%s %s,", mnem, NameOfCPURegister(regop));
-+ data += PrintRightXMMOperand(data);
-+ } else if (b2 == 0xC2) {
-+ // Intel manual 2A, Table 3-18.
-+ const char* const pseudo_op[] = {
-+ "cmpeqsd",
-+ "cmpltsd",
-+ "cmplesd",
-+ "cmpunordsd",
-+ "cmpneqsd",
-+ "cmpnltsd",
-+ "cmpnlesd",
-+ "cmpordsd"
-+ };
-+ AppendToBuffer("%s %s,%s",
-+ pseudo_op[data[1]],
-+ NameOfXMMRegister(regop),
-+ NameOfXMMRegister(rm));
-+ data += 2;
-+ } else {
-+ AppendToBuffer("%s %s,", mnem, NameOfXMMRegister(regop));
-+ data += PrintRightXMMOperand(data);
-+ }
-+ }
-+ } else {
-+ UnimplementedInstruction();
-+ }
-+ break;
-+
-+ case 0xF3:
-+ if (*(data+1) == 0x0F) {
-+ byte b2 = *(data+2);
-+ if (b2 == 0x11) {
-+ AppendToBuffer("movss ");
-+ data += 3;
-+ int mod, regop, rm;
-+          get_modrm(*data, &mod, &regop, &rm);
-+ data += PrintRightXMMOperand(data);
-+ AppendToBuffer(",%s", NameOfXMMRegister(regop));
-+ } else if (b2 == 0x10) {
-+ data += 3;
-+ int mod, regop, rm;
-+          get_modrm(*data, &mod, &regop, &rm);
-+ AppendToBuffer("movss %s,", NameOfXMMRegister(regop));
-+ data += PrintRightXMMOperand(data);
-+ } else if (b2 == 0x2C) {
-+ data += 3;
-+ int mod, regop, rm;
-+          get_modrm(*data, &mod, &regop, &rm);
-+ AppendToBuffer("cvttss2si %s,", NameOfCPURegister(regop));
-+ data += PrintRightXMMOperand(data);
-+ } else if (b2 == 0x5A) {
-+ data += 3;
-+ int mod, regop, rm;
-+          get_modrm(*data, &mod, &regop, &rm);
-+ AppendToBuffer("cvtss2sd %s,", NameOfXMMRegister(regop));
-+ data += PrintRightXMMOperand(data);
-+ } else if (b2 == 0x6F) {
-+ data += 3;
-+ int mod, regop, rm;
-+          get_modrm(*data, &mod, &regop, &rm);
-+ AppendToBuffer("movdqu %s,", NameOfXMMRegister(regop));
-+ data += PrintRightXMMOperand(data);
-+ } else if (b2 == 0x7F) {
-+ AppendToBuffer("movdqu ");
-+ data += 3;
-+ int mod, regop, rm;
-+          get_modrm(*data, &mod, &regop, &rm);
-+ data += PrintRightXMMOperand(data);
-+ AppendToBuffer(",%s", NameOfXMMRegister(regop));
-+ } else {
-+ UnimplementedInstruction();
-+ }
-+ } else if (*(data+1) == 0xA5) {
-+ data += 2;
-+ AppendToBuffer("rep_movs");
-+ } else if (*(data+1) == 0xAB) {
-+ data += 2;
-+ AppendToBuffer("rep_stos");
-+ } else {
-+ UnimplementedInstruction();
-+ }
-+ break;
-+
-+ case 0xF7:
-+ data += F7Instruction(data);
-+ break;
-+
-+ default:
-+ UnimplementedInstruction();
-+ }
-+ }
-+
-+ if (tmp_buffer_pos_ < sizeof tmp_buffer_) {
-+ tmp_buffer_[tmp_buffer_pos_] = '\0';
-+ }
-+
-+ int instr_len = data - instr;
-+ if (instr_len == 0) {
-+ printf("%02x", *data);
-+ }
-+ DCHECK(instr_len > 0); // Ensure progress.
-+
-+ int outp = 0;
-+ // Instruction bytes.
-+ for (byte* bp = instr; bp < data; bp++) {
-+ outp += v8::internal::SNPrintF(out_buffer + outp, "%02x", *bp);
-+ }
-+ for (int i = 6 - instr_len; i >= 0; i--) {
-+ outp += v8::internal::SNPrintF(out_buffer + outp, " ");
-+ }
-+
-+  outp += v8::internal::SNPrintF(out_buffer + outp, " %s",
-+                                 tmp_buffer_.start());
-+ return instr_len;
-+} // NOLINT (function is too long)
-+
-+
-+//------------------------------------------------------------------------------
-+
-+
-+static const char* const cpu_regs[8] = {
-+    "eax", "ecx", "edx", "ebx", "esp", "ebp", "esi", "edi"
-+};
-+
-+
-+static const char* const byte_cpu_regs[8] = {
-+    "al", "cl", "dl", "bl", "ah", "ch", "dh", "bh"
-+};
-+
-+
-+static const char* const xmm_regs[8] = {
-+    "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5", "xmm6", "xmm7"
-+};
-+
-+
-+const char* NameConverter::NameOfAddress(byte* addr) const {
-+ v8::internal::SNPrintF(tmp_buffer_, "%p", static_cast<void*>(addr));
-+ return tmp_buffer_.start();
-+}
-+
-+
-+const char* NameConverter::NameOfConstant(byte* addr) const {
-+ return NameOfAddress(addr);
-+}
-+
-+
-+const char* NameConverter::NameOfCPURegister(int reg) const {
-+ if (0 <= reg && reg < 8) return cpu_regs[reg];
-+ return "noreg";
-+}
-+
-+
-+const char* NameConverter::NameOfByteCPURegister(int reg) const {
-+ if (0 <= reg && reg < 8) return byte_cpu_regs[reg];
-+ return "noreg";
-+}
-+
-+
-+const char* NameConverter::NameOfXMMRegister(int reg) const {
-+ if (0 <= reg && reg < 8) return xmm_regs[reg];
-+ return "noxmmreg";
-+}
-+
-+
-+const char* NameConverter::NameInCode(byte* addr) const {
-+ // X87 does not embed debug strings at the moment.
-+ UNREACHABLE();
-+}
-+
-+
-+//------------------------------------------------------------------------------
-+
-+Disassembler::Disassembler(const NameConverter& converter)
-+ : converter_(converter) {}
-+
-+
-+Disassembler::~Disassembler() {}
-+
-+
-+int Disassembler::InstructionDecode(v8::internal::Vector<char> buffer,
-+ byte* instruction) {
-+ DisassemblerX87 d(converter_, false /*do not crash if unimplemented*/);
-+ return d.InstructionDecode(buffer, instruction);
-+}
-+
-+
-+// The IA-32 assembler does not currently use constant pools.
-+int Disassembler::ConstantPoolSizeAt(byte* instruction) { return -1; }
-+
-+
-+/*static*/ void Disassembler::Disassemble(FILE* f, byte* begin, byte* end) {
-+ NameConverter converter;
-+ Disassembler d(converter);
-+ for (byte* pc = begin; pc < end;) {
-+ v8::internal::EmbeddedVector<char, 128> buffer;
-+ buffer[0] = '\0';
-+ byte* prev_pc = pc;
-+ pc += d.InstructionDecode(buffer, pc);
-+ fprintf(f, "%p", static_cast<void*>(prev_pc));
-+ fprintf(f, " ");
-+
-+ for (byte* bp = prev_pc; bp < pc; bp++) {
-+ fprintf(f, "%02x", *bp);
-+ }
-+ for (int i = 6 - (pc - prev_pc); i >= 0; i--) {
-+ fprintf(f, " ");
-+ }
-+ fprintf(f, " %s\n", buffer.start());
-+ }
-+}
-+
-+
-+} // namespace disasm
-+
-+#endif // V8_TARGET_ARCH_X87
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/x87/frames-x87.cc qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/x87/frames-x87.cc
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/x87/frames-x87.cc 1970-01-01 01:00:00.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/x87/frames-x87.cc 2018-02-18 19:00:54.199418119 +0100
-@@ -0,0 +1,27 @@
-+// Copyright 2006-2008 the V8 project authors. All rights reserved.
-+// Use of this source code is governed by a BSD-style license that can be
-+// found in the LICENSE file.
-+
-+#if V8_TARGET_ARCH_X87
-+
-+#include "src/assembler.h"
-+#include "src/frames.h"
-+#include "src/x87/assembler-x87-inl.h"
-+#include "src/x87/assembler-x87.h"
-+#include "src/x87/frames-x87.h"
-+
-+namespace v8 {
-+namespace internal {
-+
-+
-+Register JavaScriptFrame::fp_register() { return ebp; }
-+Register JavaScriptFrame::context_register() { return esi; }
-+Register JavaScriptFrame::constant_pool_pointer_register() {
-+ UNREACHABLE();
-+}
-+
-+
-+} // namespace internal
-+} // namespace v8
-+
-+#endif // V8_TARGET_ARCH_X87
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/x87/frames-x87.h qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/x87/frames-x87.h
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/x87/frames-x87.h 1970-01-01 01:00:00.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/x87/frames-x87.h 2018-02-18 19:00:54.199418119 +0100
-@@ -0,0 +1,78 @@
-+// Copyright 2012 the V8 project authors. All rights reserved.
-+// Use of this source code is governed by a BSD-style license that can be
-+// found in the LICENSE file.
-+
-+#ifndef V8_X87_FRAMES_X87_H_
-+#define V8_X87_FRAMES_X87_H_
-+
-+namespace v8 {
-+namespace internal {
-+
-+
-+// Register lists
-+// Note that the bit values must match those used in actual instruction encoding
-+const int kNumRegs = 8;
-+
-+
-+// Caller-saved registers
-+const RegList kJSCallerSaved =
-+ 1 << 0 | // eax
-+ 1 << 1 | // ecx
-+ 1 << 2 | // edx
-+ 1 << 3 | // ebx - used as a caller-saved register in JavaScript code
-+ 1 << 7; // edi - callee function
-+
-+const int kNumJSCallerSaved = 5;
-+
-+
-+// Number of registers for which space is reserved in safepoints.
-+const int kNumSafepointRegisters = 8;
-+
-+// ----------------------------------------------------
-+
-+
-+class EntryFrameConstants : public AllStatic {
-+ public:
-+ static const int kCallerFPOffset = -6 * kPointerSize;
-+
-+ static const int kNewTargetArgOffset = +2 * kPointerSize;
-+ static const int kFunctionArgOffset = +3 * kPointerSize;
-+ static const int kReceiverArgOffset = +4 * kPointerSize;
-+ static const int kArgcOffset = +5 * kPointerSize;
-+ static const int kArgvOffset = +6 * kPointerSize;
-+};
-+
-+class ExitFrameConstants : public TypedFrameConstants {
-+ public:
-+ static const int kSPOffset = TYPED_FRAME_PUSHED_VALUE_OFFSET(0);
-+ static const int kCodeOffset = TYPED_FRAME_PUSHED_VALUE_OFFSET(1);
-+ DEFINE_TYPED_FRAME_SIZES(2);
-+
-+ static const int kCallerFPOffset = 0 * kPointerSize;
-+ static const int kCallerPCOffset = +1 * kPointerSize;
-+
-+ // FP-relative displacement of the caller's SP. It points just
-+ // below the saved PC.
-+ static const int kCallerSPDisplacement = +2 * kPointerSize;
-+
-+ static const int kConstantPoolOffset = 0; // Not used
-+};
-+
-+
-+class JavaScriptFrameConstants : public AllStatic {
-+ public:
-+ // FP-relative.
-+ static const int kLocal0Offset = StandardFrameConstants::kExpressionsOffset;
-+ static const int kLastParameterOffset = +2 * kPointerSize;
-+ static const int kFunctionOffset = StandardFrameConstants::kFunctionOffset;
-+
-+ // Caller SP-relative.
-+ static const int kParam0Offset = -2 * kPointerSize;
-+ static const int kReceiverOffset = -1 * kPointerSize;
-+};
-+
-+
-+} // namespace internal
-+} // namespace v8
-+
-+#endif // V8_X87_FRAMES_X87_H_
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/x87/interface-descriptors-x87.cc qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/x87/interface-descriptors-x87.cc
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/x87/interface-descriptors-x87.cc 1970-01-01 01:00:00.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/x87/interface-descriptors-x87.cc 2018-02-18 19:00:54.199418119 +0100
-@@ -0,0 +1,450 @@
-+// Copyright 2012 the V8 project authors. All rights reserved.
-+// Use of this source code is governed by a BSD-style license that can be
-+// found in the LICENSE file.
-+
-+#if V8_TARGET_ARCH_X87
-+
-+#include "src/interface-descriptors.h"
-+
-+namespace v8 {
-+namespace internal {
-+
-+const Register CallInterfaceDescriptor::ContextRegister() { return esi; }
-+
-+void CallInterfaceDescriptor::DefaultInitializePlatformSpecific(
-+ CallInterfaceDescriptorData* data, int register_parameter_count) {
-+ const Register default_stub_registers[] = {eax, ebx, ecx, edx, edi};
-+ CHECK_LE(static_cast<size_t>(register_parameter_count),
-+ arraysize(default_stub_registers));
-+ data->InitializePlatformSpecific(register_parameter_count,
-+ default_stub_registers);
-+}
-+
-+const Register FastNewFunctionContextDescriptor::FunctionRegister() {
-+ return edi;
-+}
-+const Register FastNewFunctionContextDescriptor::SlotsRegister() { return eax; }
-+
-+const Register LoadDescriptor::ReceiverRegister() { return edx; }
-+const Register LoadDescriptor::NameRegister() { return ecx; }
-+const Register LoadDescriptor::SlotRegister() { return eax; }
-+
-+const Register LoadWithVectorDescriptor::VectorRegister() { return ebx; }
-+
-+const Register LoadICProtoArrayDescriptor::HandlerRegister() { return edi; }
-+
-+const Register StoreDescriptor::ReceiverRegister() { return edx; }
-+const Register StoreDescriptor::NameRegister() { return ecx; }
-+const Register StoreDescriptor::ValueRegister() { return eax; }
-+const Register StoreDescriptor::SlotRegister() { return edi; }
-+
-+const Register StoreWithVectorDescriptor::VectorRegister() { return ebx; }
-+
-+const Register StoreTransitionDescriptor::SlotRegister() { return no_reg; }
-+const Register StoreTransitionDescriptor::VectorRegister() { return ebx; }
-+const Register StoreTransitionDescriptor::MapRegister() { return edi; }
-+
-+const Register StringCompareDescriptor::LeftRegister() { return edx; }
-+const Register StringCompareDescriptor::RightRegister() { return eax; }
-+
-+const Register StringConcatDescriptor::ArgumentsCountRegister() { return eax; }
-+
-+const Register ApiGetterDescriptor::HolderRegister() { return ecx; }
-+const Register ApiGetterDescriptor::CallbackRegister() { return eax; }
-+
-+const Register MathPowTaggedDescriptor::exponent() { return eax; }
-+
-+const Register MathPowIntegerDescriptor::exponent() {
-+ return MathPowTaggedDescriptor::exponent();
-+}
-+
-+
-+const Register GrowArrayElementsDescriptor::ObjectRegister() { return eax; }
-+const Register GrowArrayElementsDescriptor::KeyRegister() { return ebx; }
-+
-+
-+void FastNewClosureDescriptor::InitializePlatformSpecific(
-+ CallInterfaceDescriptorData* data) {
-+ // SharedFunctionInfo, vector, slot index.
-+ Register registers[] = {ebx, ecx, edx};
-+ data->InitializePlatformSpecific(arraysize(registers), registers, NULL);
-+}
-+
-+// static
-+const Register TypeConversionDescriptor::ArgumentRegister() { return eax; }
-+
-+void TypeofDescriptor::InitializePlatformSpecific(
-+ CallInterfaceDescriptorData* data) {
-+ Register registers[] = {ebx};
-+ data->InitializePlatformSpecific(arraysize(registers), registers, NULL);
-+}
-+
-+
-+void FastCloneRegExpDescriptor::InitializePlatformSpecific(
-+ CallInterfaceDescriptorData* data) {
-+ Register registers[] = {edi, eax, ecx, edx};
-+ data->InitializePlatformSpecific(arraysize(registers), registers);
-+}
-+
-+
-+void FastCloneShallowArrayDescriptor::InitializePlatformSpecific(
-+ CallInterfaceDescriptorData* data) {
-+ Register registers[] = {eax, ebx, ecx};
-+ data->InitializePlatformSpecific(arraysize(registers), registers);
-+}
-+
-+
-+void FastCloneShallowObjectDescriptor::InitializePlatformSpecific(
-+ CallInterfaceDescriptorData* data) {
-+ Register registers[] = {eax, ebx, ecx, edx};
-+ data->InitializePlatformSpecific(arraysize(registers), registers, NULL);
-+}
-+
-+
-+void CreateAllocationSiteDescriptor::InitializePlatformSpecific(
-+ CallInterfaceDescriptorData* data) {
-+ Register registers[] = {ebx, edx};
-+ data->InitializePlatformSpecific(arraysize(registers), registers);
-+}
-+
-+
-+void CreateWeakCellDescriptor::InitializePlatformSpecific(
-+ CallInterfaceDescriptorData* data) {
-+ Register registers[] = {ebx, edx, edi};
-+ data->InitializePlatformSpecific(arraysize(registers), registers);
-+}
-+
-+
-+void CallFunctionDescriptor::InitializePlatformSpecific(
-+ CallInterfaceDescriptorData* data) {
-+ Register registers[] = {edi};
-+ data->InitializePlatformSpecific(arraysize(registers), registers, NULL);
-+}
-+
-+void CallICTrampolineDescriptor::InitializePlatformSpecific(
-+ CallInterfaceDescriptorData* data) {
-+ Register registers[] = {edi, eax, edx};
-+ data->InitializePlatformSpecific(arraysize(registers), registers);
-+}
-+
-+void CallICDescriptor::InitializePlatformSpecific(
-+ CallInterfaceDescriptorData* data) {
-+ Register registers[] = {edi, eax, edx, ebx};
-+ data->InitializePlatformSpecific(arraysize(registers), registers);
-+}
-+
-+
-+void CallConstructDescriptor::InitializePlatformSpecific(
-+ CallInterfaceDescriptorData* data) {
-+ // eax : number of arguments
-+ // ebx : feedback vector
-+ // ecx : new target (for IsSuperConstructorCall)
-+ // edx : slot in feedback vector (Smi, for RecordCallTarget)
-+ // edi : constructor function
-+ // TODO(turbofan): So far we don't gather type feedback and hence skip the
-+ // slot parameter, but ArrayConstructStub needs the vector to be undefined.
-+ Register registers[] = {eax, edi, ecx, ebx};
-+ data->InitializePlatformSpecific(arraysize(registers), registers, NULL);
-+}
-+
-+
-+void CallTrampolineDescriptor::InitializePlatformSpecific(
-+ CallInterfaceDescriptorData* data) {
-+ // eax : number of arguments
-+ // edi : the target to call
-+ Register registers[] = {edi, eax};
-+ data->InitializePlatformSpecific(arraysize(registers), registers);
-+}
-+
-+void CallVarargsDescriptor::InitializePlatformSpecific(
-+ CallInterfaceDescriptorData* data) {
-+ // eax : number of arguments (on the stack, not including receiver)
-+ // edi : the target to call
-+ // ebx : arguments list (FixedArray)
-+ // ecx : arguments list length (untagged)
-+ Register registers[] = {edi, eax, ebx, ecx};
-+ data->InitializePlatformSpecific(arraysize(registers), registers);
-+}
-+
-+void CallForwardVarargsDescriptor::InitializePlatformSpecific(
-+ CallInterfaceDescriptorData* data) {
-+ // eax : number of arguments
-+ // ecx : start index (to support rest parameters)
-+ // edi : the target to call
-+ Register registers[] = {edi, eax, ecx};
-+ data->InitializePlatformSpecific(arraysize(registers), registers);
-+}
-+
-+void CallWithSpreadDescriptor::InitializePlatformSpecific(
-+ CallInterfaceDescriptorData* data) {
-+ // eax : number of arguments (on the stack, not including receiver)
-+ // edi : the target to call
-+ // ebx : the object to spread
-+ Register registers[] = {edi, eax, ebx};
-+ data->InitializePlatformSpecific(arraysize(registers), registers);
-+}
-+
-+void CallWithArrayLikeDescriptor::InitializePlatformSpecific(
-+ CallInterfaceDescriptorData* data) {
-+ // edi : the target to call
-+ // ebx : the arguments list
-+ Register registers[] = {edi, ebx};
-+ data->InitializePlatformSpecific(arraysize(registers), registers);
-+}
-+
-+void ConstructVarargsDescriptor::InitializePlatformSpecific(
-+ CallInterfaceDescriptorData* data) {
-+ // eax : number of arguments (on the stack, not including receiver)
-+ // edi : the target to call
-+ // edx : the new target
-+ // ebx : arguments list (FixedArray)
-+ // ecx : arguments list length (untagged)
-+ Register registers[] = {edi, edx, eax, ebx, ecx};
-+ data->InitializePlatformSpecific(arraysize(registers), registers);
-+}
-+
-+void ConstructForwardVarargsDescriptor::InitializePlatformSpecific(
-+ CallInterfaceDescriptorData* data) {
-+ // eax : number of arguments
-+ // edx : the new target
-+ // ecx : start index (to support rest parameters)
-+ // edi : the target to call
-+ Register registers[] = {edi, edx, eax, ecx};
-+ data->InitializePlatformSpecific(arraysize(registers), registers);
-+}
-+
-+void ConstructWithSpreadDescriptor::InitializePlatformSpecific(
-+ CallInterfaceDescriptorData* data) {
-+ // eax : number of arguments (on the stack, not including receiver)
-+ // edi : the target to call
-+ // edx : the new target
-+ // ebx : the object to spread
-+ Register registers[] = {edi, edx, eax, ebx};
-+ data->InitializePlatformSpecific(arraysize(registers), registers);
-+}
-+
-+void ConstructWithArrayLikeDescriptor::InitializePlatformSpecific(
-+ CallInterfaceDescriptorData* data) {
-+ // edi : the target to call
-+ // edx : the new target
-+ // ebx : the arguments list
-+ Register registers[] = {edi, edx, ebx};
-+ data->InitializePlatformSpecific(arraysize(registers), registers);
-+}
-+
-+void ConstructStubDescriptor::InitializePlatformSpecific(
-+ CallInterfaceDescriptorData* data) {
-+ // eax : number of arguments
-+ // edx : the new target
-+ // edi : the target to call
-+ // ebx : allocation site or undefined
-+ Register registers[] = {edi, edx, eax, ebx};
-+ data->InitializePlatformSpecific(arraysize(registers), registers);
-+}
-+
-+
-+void ConstructTrampolineDescriptor::InitializePlatformSpecific(
-+ CallInterfaceDescriptorData* data) {
-+ // eax : number of arguments
-+ // edx : the new target
-+ // edi : the target to call
-+ Register registers[] = {edi, edx, eax};
-+ data->InitializePlatformSpecific(arraysize(registers), registers);
-+}
-+
-+
-+void TransitionElementsKindDescriptor::InitializePlatformSpecific(
-+ CallInterfaceDescriptorData* data) {
-+ Register registers[] = {eax, ebx};
-+ data->InitializePlatformSpecific(arraysize(registers), registers, NULL);
-+}
-+
-+
-+void AllocateHeapNumberDescriptor::InitializePlatformSpecific(
-+ CallInterfaceDescriptorData* data) {
-+ // register state
-+ data->InitializePlatformSpecific(0, nullptr, nullptr);
-+}
-+
-+void ArrayConstructorDescriptor::InitializePlatformSpecific(
-+ CallInterfaceDescriptorData* data) {
-+ // kTarget, kNewTarget, kActualArgumentsCount, kAllocationSite
-+ Register registers[] = {edi, edx, eax, ebx};
-+ data->InitializePlatformSpecific(arraysize(registers), registers, NULL);
-+}
-+
-+void ArrayNoArgumentConstructorDescriptor::InitializePlatformSpecific(
-+ CallInterfaceDescriptorData* data) {
-+ // register state
-+ // eax -- number of arguments
-+ // edi -- function
-+ // ebx -- allocation site with elements kind
-+ Register registers[] = {edi, ebx, eax};
-+ data->InitializePlatformSpecific(arraysize(registers), registers, NULL);
-+}
-+
-+void ArraySingleArgumentConstructorDescriptor::InitializePlatformSpecific(
-+ CallInterfaceDescriptorData* data) {
-+ // register state
-+ // eax -- number of arguments
-+ // edi -- function
-+ // ebx -- allocation site with elements kind
-+ Register registers[] = {edi, ebx, eax};
-+ data->InitializePlatformSpecific(arraysize(registers), registers, NULL);
-+}
-+
-+void ArrayNArgumentsConstructorDescriptor::InitializePlatformSpecific(
-+ CallInterfaceDescriptorData* data) {
-+ // register state
-+ // eax -- number of arguments
-+ // edi -- function
-+ // ebx -- allocation site with elements kind
-+ Register registers[] = {edi, ebx, eax};
-+ data->InitializePlatformSpecific(arraysize(registers), registers, NULL);
-+}
-+
-+void VarArgFunctionDescriptor::InitializePlatformSpecific(
-+ CallInterfaceDescriptorData* data) {
-+ // stack param count needs (arg count)
-+ Register registers[] = {eax};
-+ data->InitializePlatformSpecific(arraysize(registers), registers);
-+}
-+
-+void CompareDescriptor::InitializePlatformSpecific(
-+ CallInterfaceDescriptorData* data) {
-+ Register registers[] = {edx, eax};
-+ data->InitializePlatformSpecific(arraysize(registers), registers, NULL);
-+}
-+
-+
-+void BinaryOpDescriptor::InitializePlatformSpecific(
-+ CallInterfaceDescriptorData* data) {
-+ Register registers[] = {edx, eax};
-+ data->InitializePlatformSpecific(arraysize(registers), registers, NULL);
-+}
-+
-+
-+void BinaryOpWithAllocationSiteDescriptor::InitializePlatformSpecific(
-+ CallInterfaceDescriptorData* data) {
-+ Register registers[] = {ecx, edx, eax};
-+ data->InitializePlatformSpecific(arraysize(registers), registers, NULL);
-+}
-+
-+void BinaryOpWithVectorDescriptor::InitializePlatformSpecific(
-+ CallInterfaceDescriptorData* data) {
-+ // register state
-+ // edx -- lhs
-+ // eax -- rhs
-+ // edi -- slot id
-+ // ebx -- vector
-+ Register registers[] = {edx, eax, edi, ebx};
-+ data->InitializePlatformSpecific(arraysize(registers), registers);
-+}
-+
-+void CountOpDescriptor::InitializePlatformSpecific(
-+ CallInterfaceDescriptorData* data) {
-+ Register registers[] = {eax};
-+ data->InitializePlatformSpecific(arraysize(registers), registers);
-+}
-+
-+void StringAddDescriptor::InitializePlatformSpecific(
-+ CallInterfaceDescriptorData* data) {
-+ Register registers[] = {edx, eax};
-+ data->InitializePlatformSpecific(arraysize(registers), registers, NULL);
-+}
-+
-+void ArgumentAdaptorDescriptor::InitializePlatformSpecific(
-+ CallInterfaceDescriptorData* data) {
-+ Register registers[] = {
-+ edi, // JSFunction
-+ edx, // the new target
-+ eax, // actual number of arguments
-+ ebx, // expected number of arguments
-+ };
-+ data->InitializePlatformSpecific(arraysize(registers), registers);
-+}
-+
-+void ApiCallbackDescriptor::InitializePlatformSpecific(
-+ CallInterfaceDescriptorData* data) {
-+ Register registers[] = {
-+ edi, // callee
-+ ebx, // call_data
-+ ecx, // holder
-+ edx, // api_function_address
-+ };
-+ data->InitializePlatformSpecific(arraysize(registers), registers);
-+}
-+
-+void InterpreterDispatchDescriptor::InitializePlatformSpecific(
-+ CallInterfaceDescriptorData* data) {
-+ Register registers[] = {
-+ kInterpreterAccumulatorRegister, kInterpreterBytecodeOffsetRegister,
-+ kInterpreterBytecodeArrayRegister, kInterpreterDispatchTableRegister};
-+ data->InitializePlatformSpecific(arraysize(registers), registers);
-+}
-+
-+void InterpreterPushArgsThenCallDescriptor::InitializePlatformSpecific(
-+ CallInterfaceDescriptorData* data) {
-+ Register registers[] = {
-+ eax, // argument count (not including receiver)
-+ ebx, // address of first argument
-+ edi // the target callable to be call
-+ };
-+ data->InitializePlatformSpecific(arraysize(registers), registers);
-+}
-+
-+void InterpreterPushArgsThenConstructDescriptor::InitializePlatformSpecific(
-+ CallInterfaceDescriptorData* data) {
-+ Register registers[] = {
-+ eax, // argument count (not including receiver)
-+ edx, // new target
-+ edi, // constructor
-+ ebx, // allocation site feedback
-+ ecx, // address of first argument
-+ };
-+ data->InitializePlatformSpecific(arraysize(registers), registers);
-+}
-+
-+void InterpreterPushArgsThenConstructArrayDescriptor::
-+ InitializePlatformSpecific(CallInterfaceDescriptorData* data) {
-+ Register registers[] = {
-+ eax, // argument count (not including receiver)
-+ edx, // target to the call. It is checked to be Array function.
-+ ebx, // allocation site feedback
-+ ecx, // address of first argument
-+ };
-+ data->InitializePlatformSpecific(arraysize(registers), registers);
-+}
-+
-+void InterpreterCEntryDescriptor::InitializePlatformSpecific(
-+ CallInterfaceDescriptorData* data) {
-+ Register registers[] = {
-+ eax, // argument count (argc)
-+ ecx, // address of first argument (argv)
-+ ebx // the runtime function to call
-+ };
-+ data->InitializePlatformSpecific(arraysize(registers), registers);
-+}
-+
-+void ResumeGeneratorDescriptor::InitializePlatformSpecific(
-+ CallInterfaceDescriptorData* data) {
-+ Register registers[] = {
-+ eax, // the value to pass to the generator
-+ ebx, // the JSGeneratorObject to resume
-+ edx // the resume mode (tagged)
-+ };
-+ data->InitializePlatformSpecific(arraysize(registers), registers);
-+}
-+
-+void FrameDropperTrampolineDescriptor::InitializePlatformSpecific(
-+ CallInterfaceDescriptorData* data) {
-+ Register registers[] = {
-+ ebx, // loaded new FP
-+ };
-+ data->InitializePlatformSpecific(arraysize(registers), registers);
-+}
-+
-+} // namespace internal
-+} // namespace v8
-+
-+#endif // V8_TARGET_ARCH_X87
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/x87/macro-assembler-x87.cc qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/x87/macro-assembler-x87.cc
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/x87/macro-assembler-x87.cc 1970-01-01 01:00:00.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/x87/macro-assembler-x87.cc 2018-02-18 19:00:54.200418105 +0100
-@@ -0,0 +1,2584 @@
-+// Copyright 2012 the V8 project authors. All rights reserved.
-+// Use of this source code is governed by a BSD-style license that can be
-+// found in the LICENSE file.
-+
-+#if V8_TARGET_ARCH_X87
-+
-+#include "src/base/bits.h"
-+#include "src/base/division-by-constant.h"
-+#include "src/base/utils/random-number-generator.h"
-+#include "src/bootstrapper.h"
-+#include "src/codegen.h"
-+#include "src/debug/debug.h"
-+#include "src/runtime/runtime.h"
-+#include "src/x87/frames-x87.h"
-+#include "src/x87/macro-assembler-x87.h"
-+
-+namespace v8 {
-+namespace internal {
-+
-+// -------------------------------------------------------------------------
-+// MacroAssembler implementation.
-+
-+MacroAssembler::MacroAssembler(Isolate* isolate, void* buffer, int size,
-+ CodeObjectRequired create_code_object)
-+ : TurboAssembler(isolate, buffer, size, create_code_object),
-+ jit_cookie_(0) {
-+ if (FLAG_mask_constants_with_cookie) {
-+ jit_cookie_ = isolate->random_number_generator()->NextInt();
-+ }
-+}
-+
-+
-+void MacroAssembler::Load(Register dst, const Operand& src, Representation r) {
-+ DCHECK(!r.IsDouble());
-+ if (r.IsInteger8()) {
-+ movsx_b(dst, src);
-+ } else if (r.IsUInteger8()) {
-+ movzx_b(dst, src);
-+ } else if (r.IsInteger16()) {
-+ movsx_w(dst, src);
-+ } else if (r.IsUInteger16()) {
-+ movzx_w(dst, src);
-+ } else {
-+ mov(dst, src);
-+ }
-+}
-+
-+
-+void MacroAssembler::Store(Register src, const Operand& dst, Representation r) {
-+ DCHECK(!r.IsDouble());
-+ if (r.IsInteger8() || r.IsUInteger8()) {
-+ mov_b(dst, src);
-+ } else if (r.IsInteger16() || r.IsUInteger16()) {
-+ mov_w(dst, src);
-+ } else {
-+ if (r.IsHeapObject()) {
-+ AssertNotSmi(src);
-+ } else if (r.IsSmi()) {
-+ AssertSmi(src);
-+ }
-+ mov(dst, src);
-+ }
-+}
-+
-+
-+void MacroAssembler::LoadRoot(Register destination, Heap::RootListIndex index) {
-+ if (isolate()->heap()->RootCanBeTreatedAsConstant(index)) {
-+ Handle<Object> object = isolate()->heap()->root_handle(index);
-+ if (object->IsHeapObject()) {
-+ mov(destination, Handle<HeapObject>::cast(object));
-+ } else {
-+ mov(destination, Immediate(Smi::cast(*object)));
-+ }
-+ return;
-+ }
-+ ExternalReference roots_array_start =
-+ ExternalReference::roots_array_start(isolate());
-+ mov(destination, Immediate(index));
-+ mov(destination, Operand::StaticArray(destination,
-+ times_pointer_size,
-+ roots_array_start));
-+}
-+
-+
-+void MacroAssembler::StoreRoot(Register source,
-+ Register scratch,
-+ Heap::RootListIndex index) {
-+ DCHECK(Heap::RootCanBeWrittenAfterInitialization(index));
-+ ExternalReference roots_array_start =
-+ ExternalReference::roots_array_start(isolate());
-+ mov(scratch, Immediate(index));
-+ mov(Operand::StaticArray(scratch, times_pointer_size, roots_array_start),
-+ source);
-+}
-+
-+
-+void MacroAssembler::CompareRoot(Register with,
-+ Register scratch,
-+ Heap::RootListIndex index) {
-+ ExternalReference roots_array_start =
-+ ExternalReference::roots_array_start(isolate());
-+ mov(scratch, Immediate(index));
-+ cmp(with, Operand::StaticArray(scratch,
-+ times_pointer_size,
-+ roots_array_start));
-+}
-+
-+
-+void MacroAssembler::CompareRoot(Register with, Heap::RootListIndex index) {
-+ DCHECK(isolate()->heap()->RootCanBeTreatedAsConstant(index));
-+ Handle<Object> object = isolate()->heap()->root_handle(index);
-+ if (object->IsHeapObject()) {
-+ cmp(with, Handle<HeapObject>::cast(object));
-+ } else {
-+ cmp(with, Immediate(Smi::cast(*object)));
-+ }
-+}
-+
-+
-+void MacroAssembler::CompareRoot(const Operand& with,
-+ Heap::RootListIndex index) {
-+ DCHECK(isolate()->heap()->RootCanBeTreatedAsConstant(index));
-+ Handle<Object> object = isolate()->heap()->root_handle(index);
-+ if (object->IsHeapObject()) {
-+ cmp(with, Handle<HeapObject>::cast(object));
-+ } else {
-+ cmp(with, Immediate(Smi::cast(*object)));
-+ }
-+}
-+
-+
-+void MacroAssembler::PushRoot(Heap::RootListIndex index) {
-+ DCHECK(isolate()->heap()->RootCanBeTreatedAsConstant(index));
-+ PushObject(isolate()->heap()->root_handle(index));
-+}
-+
-+#define REG(Name) \
-+ { Register::kCode_##Name }
-+
-+static const Register saved_regs[] = {REG(eax), REG(ecx), REG(edx)};
-+
-+#undef REG
-+
-+static const int kNumberOfSavedRegs = sizeof(saved_regs) / sizeof(Register);
-+
-+void MacroAssembler::PushCallerSaved(SaveFPRegsMode fp_mode,
-+ Register exclusion1, Register exclusion2,
-+ Register exclusion3) {
-+ // We don't allow a GC during a store buffer overflow so there is no need to
-+ // store the registers in any particular way, but we do have to store and
-+ // restore them.
-+ for (int i = 0; i < kNumberOfSavedRegs; i++) {
-+ Register reg = saved_regs[i];
-+ if (!reg.is(exclusion1) && !reg.is(exclusion2) && !reg.is(exclusion3)) {
-+ push(reg);
-+ }
-+ }
-+ if (fp_mode == kSaveFPRegs) {
-+ // Save FPU state in m108byte.
-+ sub(esp, Immediate(108));
-+ fnsave(Operand(esp, 0));
-+ }
-+}
-+
-+void MacroAssembler::PopCallerSaved(SaveFPRegsMode fp_mode, Register exclusion1,
-+ Register exclusion2, Register exclusion3) {
-+ if (fp_mode == kSaveFPRegs) {
-+ // Restore FPU state in m108byte.
-+ frstor(Operand(esp, 0));
-+ add(esp, Immediate(108));
-+ }
-+
-+ for (int i = kNumberOfSavedRegs - 1; i >= 0; i--) {
-+ Register reg = saved_regs[i];
-+ if (!reg.is(exclusion1) && !reg.is(exclusion2) && !reg.is(exclusion3)) {
-+ pop(reg);
-+ }
-+ }
-+}
-+
-+void MacroAssembler::InNewSpace(Register object, Register scratch, Condition cc,
-+ Label* condition_met,
-+ Label::Distance distance) {
-+ CheckPageFlag(object, scratch, MemoryChunk::kIsInNewSpaceMask, cc,
-+ condition_met, distance);
-+}
-+
-+
-+void MacroAssembler::RememberedSetHelper(
-+ Register object, // Only used for debug checks.
-+ Register addr, Register scratch, SaveFPRegsMode save_fp,
-+ MacroAssembler::RememberedSetFinalAction and_then) {
-+ Label done;
-+ if (emit_debug_code()) {
-+ Label ok;
-+ JumpIfNotInNewSpace(object, scratch, &ok, Label::kNear);
-+ int3();
-+ bind(&ok);
-+ }
-+ // Load store buffer top.
-+ ExternalReference store_buffer =
-+ ExternalReference::store_buffer_top(isolate());
-+ mov(scratch, Operand::StaticVariable(store_buffer));
-+ // Store pointer to buffer.
-+ mov(Operand(scratch, 0), addr);
-+ // Increment buffer top.
-+ add(scratch, Immediate(kPointerSize));
-+ // Write back new top of buffer.
-+ mov(Operand::StaticVariable(store_buffer), scratch);
-+ // Call stub on end of buffer.
-+ // Check for end of buffer.
-+ test(scratch, Immediate(StoreBuffer::kStoreBufferMask));
-+ if (and_then == kReturnAtEnd) {
-+ Label buffer_overflowed;
-+ j(equal, &buffer_overflowed, Label::kNear);
-+ ret(0);
-+ bind(&buffer_overflowed);
-+ } else {
-+ DCHECK(and_then == kFallThroughAtEnd);
-+ j(not_equal, &done, Label::kNear);
-+ }
-+ StoreBufferOverflowStub store_buffer_overflow(isolate(), save_fp);
-+ CallStub(&store_buffer_overflow);
-+ if (and_then == kReturnAtEnd) {
-+ ret(0);
-+ } else {
-+ DCHECK(and_then == kFallThroughAtEnd);
-+ bind(&done);
-+ }
-+}
-+
-+
-+void MacroAssembler::ClampTOSToUint8(Register result_reg) {
-+ Label done, conv_failure;
-+ sub(esp, Immediate(kPointerSize));
-+ fnclex();
-+ fist_s(Operand(esp, 0));
-+ pop(result_reg);
-+ X87CheckIA();
-+ j(equal, &conv_failure, Label::kNear);
-+ test(result_reg, Immediate(0xFFFFFF00));
-+ j(zero, &done, Label::kNear);
-+ setcc(sign, result_reg);
-+ sub(result_reg, Immediate(1));
-+ and_(result_reg, Immediate(255));
-+ jmp(&done, Label::kNear);
-+ bind(&conv_failure);
-+ fnclex();
-+ fldz();
-+ fld(1);
-+ FCmp();
-+ setcc(below, result_reg); // 1 if negative, 0 if positive.
-+ dec_b(result_reg); // 0 if negative, 255 if positive.
-+ bind(&done);
-+}
-+
-+
-+void MacroAssembler::ClampUint8(Register reg) {
-+ Label done;
-+ test(reg, Immediate(0xFFFFFF00));
-+ j(zero, &done, Label::kNear);
-+ setcc(negative, reg); // 1 if negative, 0 if positive.
-+ dec_b(reg); // 0 if negative, 255 if positive.
-+ bind(&done);
-+}
-+
-+
-+void TurboAssembler::SlowTruncateToIDelayed(Zone* zone, Register result_reg,
-+ Register input_reg, int offset) {
-+ CallStubDelayed(
-+ new (zone) DoubleToIStub(nullptr, input_reg, result_reg, offset, true));
-+}
-+
-+void MacroAssembler::SlowTruncateToI(Register result_reg,
-+ Register input_reg,
-+ int offset) {
-+ DoubleToIStub stub(isolate(), input_reg, result_reg, offset, true);
-+ CallStub(&stub);
-+}
-+
-+
-+void TurboAssembler::TruncateX87TOSToI(Zone* zone, Register result_reg) {
-+ sub(esp, Immediate(kDoubleSize));
-+ fst_d(MemOperand(esp, 0));
-+ SlowTruncateToIDelayed(zone, result_reg, esp, 0);
-+ add(esp, Immediate(kDoubleSize));
-+}
-+
-+
-+void MacroAssembler::X87TOSToI(Register result_reg,
-+ MinusZeroMode minus_zero_mode,
-+ Label* lost_precision, Label* is_nan,
-+ Label* minus_zero, Label::Distance dst) {
-+ Label done;
-+ sub(esp, Immediate(kPointerSize));
-+ fld(0);
-+ fist_s(MemOperand(esp, 0));
-+ fild_s(MemOperand(esp, 0));
-+ pop(result_reg);
-+ FCmp();
-+ j(not_equal, lost_precision, dst);
-+ j(parity_even, is_nan, dst);
-+ if (minus_zero_mode == FAIL_ON_MINUS_ZERO) {
-+ test(result_reg, Operand(result_reg));
-+ j(not_zero, &done, Label::kNear);
-+ // To check for minus zero, we load the value again as float, and check
-+ // if that is still 0.
-+ sub(esp, Immediate(kPointerSize));
-+ fst_s(MemOperand(esp, 0));
-+ pop(result_reg);
-+ test(result_reg, Operand(result_reg));
-+ j(not_zero, minus_zero, dst);
-+ }
-+ bind(&done);
-+}
-+
-+
-+void MacroAssembler::TruncateHeapNumberToI(Register result_reg,
-+ Register input_reg) {
-+ Label done, slow_case;
-+
-+ SlowTruncateToI(result_reg, input_reg);
-+ bind(&done);
-+}
-+
-+
-+void TurboAssembler::LoadUint32NoSSE2(const Operand& src) {
-+ Label done;
-+ push(src);
-+ fild_s(Operand(esp, 0));
-+ cmp(src, Immediate(0));
-+ j(not_sign, &done, Label::kNear);
-+ ExternalReference uint32_bias =
-+ ExternalReference::address_of_uint32_bias();
-+ fld_d(Operand::StaticVariable(uint32_bias));
-+ faddp(1);
-+ bind(&done);
-+ add(esp, Immediate(kPointerSize));
-+}
-+
-+
-+void MacroAssembler::RecordWriteField(
-+ Register object, int offset, Register value, Register dst,
-+ SaveFPRegsMode save_fp, RememberedSetAction remembered_set_action,
-+ SmiCheck smi_check, PointersToHereCheck pointers_to_here_check_for_value) {
-+ // First, check if a write barrier is even needed. The tests below
-+ // catch stores of Smis.
-+ Label done;
-+
-+ // Skip barrier if writing a smi.
-+ if (smi_check == INLINE_SMI_CHECK) {
-+ JumpIfSmi(value, &done, Label::kNear);
-+ }
-+
-+ // Although the object register is tagged, the offset is relative to the start
-+ // of the object, so the offset must be a multiple of kPointerSize.
-+ DCHECK(IsAligned(offset, kPointerSize));
-+
-+ lea(dst, FieldOperand(object, offset));
-+ if (emit_debug_code()) {
-+ Label ok;
-+ test_b(dst, Immediate(kPointerSize - 1));
-+ j(zero, &ok, Label::kNear);
-+ int3();
-+ bind(&ok);
-+ }
-+
-+ RecordWrite(object, dst, value, save_fp, remembered_set_action,
-+ OMIT_SMI_CHECK, pointers_to_here_check_for_value);
-+
-+ bind(&done);
-+
-+ // Clobber clobbered input registers when running with the debug-code flag
-+ // turned on to provoke errors.
-+ if (emit_debug_code()) {
-+ mov(value, Immediate(bit_cast<int32_t>(kZapValue)));
-+ mov(dst, Immediate(bit_cast<int32_t>(kZapValue)));
-+ }
-+}
-+
-+
-+void MacroAssembler::RecordWriteForMap(Register object, Handle<Map> map,
-+ Register scratch1, Register scratch2,
-+ SaveFPRegsMode save_fp) {
-+ Label done;
-+
-+ Register address = scratch1;
-+ Register value = scratch2;
-+ if (emit_debug_code()) {
-+ Label ok;
-+ lea(address, FieldOperand(object, HeapObject::kMapOffset));
-+ test_b(address, Immediate(kPointerSize - 1));
-+ j(zero, &ok, Label::kNear);
-+ int3();
-+ bind(&ok);
-+ }
-+
-+ DCHECK(!object.is(value));
-+ DCHECK(!object.is(address));
-+ DCHECK(!value.is(address));
-+ AssertNotSmi(object);
-+
-+ if (!FLAG_incremental_marking) {
-+ return;
-+ }
-+
-+ // Compute the address.
-+ lea(address, FieldOperand(object, HeapObject::kMapOffset));
-+
-+ // A single check of the map's pages interesting flag suffices, since it is
-+ // only set during incremental collection, and then it's also guaranteed that
-+ // the from object's page's interesting flag is also set. This optimization
-+ // relies on the fact that maps can never be in new space.
-+ DCHECK(!isolate()->heap()->InNewSpace(*map));
-+ CheckPageFlagForMap(map,
-+ MemoryChunk::kPointersToHereAreInterestingMask,
-+ zero,
-+ &done,
-+ Label::kNear);
-+
-+ RecordWriteStub stub(isolate(), object, value, address, OMIT_REMEMBERED_SET,
-+ save_fp);
-+ CallStub(&stub);
-+
-+ bind(&done);
-+
-+ // Count number of write barriers in generated code.
-+ isolate()->counters()->write_barriers_static()->Increment();
-+ IncrementCounter(isolate()->counters()->write_barriers_dynamic(), 1);
-+
-+ // Clobber clobbered input registers when running with the debug-code flag
-+ // turned on to provoke errors.
-+ if (emit_debug_code()) {
-+ mov(value, Immediate(bit_cast<int32_t>(kZapValue)));
-+ mov(scratch1, Immediate(bit_cast<int32_t>(kZapValue)));
-+ mov(scratch2, Immediate(bit_cast<int32_t>(kZapValue)));
-+ }
-+}
-+
-+
-+void MacroAssembler::RecordWrite(
-+ Register object, Register address, Register value, SaveFPRegsMode fp_mode,
-+ RememberedSetAction remembered_set_action, SmiCheck smi_check,
-+ PointersToHereCheck pointers_to_here_check_for_value) {
-+ DCHECK(!object.is(value));
-+ DCHECK(!object.is(address));
-+ DCHECK(!value.is(address));
-+ AssertNotSmi(object);
-+
-+ if (remembered_set_action == OMIT_REMEMBERED_SET &&
-+ !FLAG_incremental_marking) {
-+ return;
-+ }
-+
-+ if (emit_debug_code()) {
-+ Label ok;
-+ cmp(value, Operand(address, 0));
-+ j(equal, &ok, Label::kNear);
-+ int3();
-+ bind(&ok);
-+ }
-+
-+ // First, check if a write barrier is even needed. The tests below
-+ // catch stores of Smis and stores into young gen.
-+ Label done;
-+
-+ if (smi_check == INLINE_SMI_CHECK) {
-+ // Skip barrier if writing a smi.
-+ JumpIfSmi(value, &done, Label::kNear);
-+ }
-+
-+ if (pointers_to_here_check_for_value != kPointersToHereAreAlwaysInteresting) {
-+ CheckPageFlag(value,
-+ value, // Used as scratch.
-+ MemoryChunk::kPointersToHereAreInterestingMask,
-+ zero,
-+ &done,
-+ Label::kNear);
-+ }
-+ CheckPageFlag(object,
-+ value, // Used as scratch.
-+ MemoryChunk::kPointersFromHereAreInterestingMask,
-+ zero,
-+ &done,
-+ Label::kNear);
-+
-+ RecordWriteStub stub(isolate(), object, value, address, remembered_set_action,
-+ fp_mode);
-+ CallStub(&stub);
-+
-+ bind(&done);
-+
-+ // Count number of write barriers in generated code.
-+ isolate()->counters()->write_barriers_static()->Increment();
-+ IncrementCounter(isolate()->counters()->write_barriers_dynamic(), 1);
-+
-+ // Clobber clobbered registers when running with the debug-code flag
-+ // turned on to provoke errors.
-+ if (emit_debug_code()) {
-+ mov(address, Immediate(bit_cast<int32_t>(kZapValue)));
-+ mov(value, Immediate(bit_cast<int32_t>(kZapValue)));
-+ }
-+}
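(Aside, not part of the patch: the fast-path filtering that `RecordWrite` emits above — skip the stub for smi stores and for pages whose flags say pointers are uninteresting — can be sketched in Python. The boolean parameters stand in for the page-flag tests; the names are illustrative, not V8 API.)

```python
def needs_write_barrier(value_is_smi: bool,
                        value_page_pointers_interesting: bool,
                        object_page_pointers_interesting: bool) -> bool:
    """Sketch of RecordWrite's inline checks: the RecordWriteStub is only
    called when every cheap filter fails to rule the store out."""
    if value_is_smi:
        return False  # JumpIfSmi: smis are not heap pointers
    if not value_page_pointers_interesting:
        return False  # kPointersToHereAreInterestingMask clear on value's page
    if not object_page_pointers_interesting:
        return False  # kPointersFromHereAreInterestingMask clear on object's page
    return True
```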
-+
-+void MacroAssembler::RecordWriteCodeEntryField(Register js_function,
-+ Register code_entry,
-+ Register scratch) {
-+ const int offset = JSFunction::kCodeEntryOffset;
-+
-+ // Since a code entry (value) is always in old space, we don't need to update
-+ // remembered set. If incremental marking is off, there is nothing for us to
-+ // do.
-+ if (!FLAG_incremental_marking) return;
-+
-+ DCHECK(!js_function.is(code_entry));
-+ DCHECK(!js_function.is(scratch));
-+ DCHECK(!code_entry.is(scratch));
-+ AssertNotSmi(js_function);
-+
-+ if (emit_debug_code()) {
-+ Label ok;
-+ lea(scratch, FieldOperand(js_function, offset));
-+ cmp(code_entry, Operand(scratch, 0));
-+ j(equal, &ok, Label::kNear);
-+ int3();
-+ bind(&ok);
-+ }
-+
-+ // First, check if a write barrier is even needed. The tests below
-+ // catch stores of Smis and stores into young gen.
-+ Label done;
-+
-+ CheckPageFlag(code_entry, scratch,
-+ MemoryChunk::kPointersToHereAreInterestingMask, zero, &done,
-+ Label::kNear);
-+ CheckPageFlag(js_function, scratch,
-+ MemoryChunk::kPointersFromHereAreInterestingMask, zero, &done,
-+ Label::kNear);
-+
-+ // Save input registers.
-+ push(js_function);
-+ push(code_entry);
-+
-+ const Register dst = scratch;
-+ lea(dst, FieldOperand(js_function, offset));
-+
-+ // Save caller-saved registers.
-+ PushCallerSaved(kDontSaveFPRegs, js_function, code_entry);
-+
-+ int argument_count = 3;
-+ PrepareCallCFunction(argument_count, code_entry);
-+ mov(Operand(esp, 0 * kPointerSize), js_function);
-+ mov(Operand(esp, 1 * kPointerSize), dst); // Slot.
-+ mov(Operand(esp, 2 * kPointerSize),
-+ Immediate(ExternalReference::isolate_address(isolate())));
-+
-+ {
-+ AllowExternalCallThatCantCauseGC scope(this);
-+ CallCFunction(
-+ ExternalReference::incremental_marking_record_write_code_entry_function(
-+ isolate()),
-+ argument_count);
-+ }
-+
-+ // Restore caller-saved registers.
-+ PopCallerSaved(kDontSaveFPRegs, js_function, code_entry);
-+
-+ // Restore input registers.
-+ pop(code_entry);
-+ pop(js_function);
-+
-+ bind(&done);
-+}
-+
-+void MacroAssembler::MaybeDropFrames() {
-+ // Check whether we need to drop frames to restart a function on the stack.
-+ ExternalReference restart_fp =
-+ ExternalReference::debug_restart_fp_address(isolate());
-+ mov(ebx, Operand::StaticVariable(restart_fp));
-+ test(ebx, ebx);
-+ j(not_zero, isolate()->builtins()->FrameDropperTrampoline(),
-+ RelocInfo::CODE_TARGET);
-+}
-+
-+void TurboAssembler::ShlPair(Register high, Register low, uint8_t shift) {
-+ if (shift >= 32) {
-+ mov(high, low);
-+ shl(high, shift - 32);
-+ xor_(low, low);
-+ } else {
-+ shld(high, low, shift);
-+ shl(low, shift);
-+ }
-+}
-+
-+void TurboAssembler::ShlPair_cl(Register high, Register low) {
-+ shld_cl(high, low);
-+ shl_cl(low);
-+ Label done;
-+ test(ecx, Immediate(0x20));
-+ j(equal, &done, Label::kNear);
-+ mov(high, low);
-+ xor_(low, low);
-+ bind(&done);
-+}
-+
-+void TurboAssembler::ShrPair(Register high, Register low, uint8_t shift) {
-+ if (shift >= 32) {
-+ mov(low, high);
-+ shr(low, shift - 32);
-+ xor_(high, high);
-+ } else {
-+ shrd(high, low, shift);
-+ shr(high, shift);
-+ }
-+}
-+
-+void TurboAssembler::ShrPair_cl(Register high, Register low) {
-+ shrd_cl(low, high);
-+ shr_cl(high);
-+ Label done;
-+ test(ecx, Immediate(0x20));
-+ j(equal, &done, Label::kNear);
-+ mov(low, high);
-+ xor_(high, high);
-+ bind(&done);
-+}
-+
-+void TurboAssembler::SarPair(Register high, Register low, uint8_t shift) {
-+ if (shift >= 32) {
-+ mov(low, high);
-+ sar(low, shift - 32);
-+ sar(high, 31);
-+ } else {
-+ shrd(high, low, shift);
-+ sar(high, shift);
-+ }
-+}
-+
-+void TurboAssembler::SarPair_cl(Register high, Register low) {
-+ shrd_cl(low, high);
-+ sar_cl(high);
-+ Label done;
-+ test(ecx, Immediate(0x20));
-+ j(equal, &done, Label::kNear);
-+ mov(low, high);
-+ sar(high, 31);
-+ bind(&done);
-+}
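(Aside, not part of the patch: the `shld`/`shl` pairing in `ShlPair` above performs a 64-bit shift across two 32-bit registers. A minimal Python sketch of the semantics, assuming the `shift >= 32` split shown in the code:)

```python
def shl_pair(high: int, low: int, shift: int) -> tuple:
    """64-bit left shift of a (high, low) pair of 32-bit words,
    mirroring the sequence emitted by TurboAssembler::ShlPair."""
    M = 0xFFFFFFFF
    if shift >= 32:
        # Entire low word moves into high; low becomes zero.
        high = (low << (shift - 32)) & M
        low = 0
    else:
        # shld(high, low, shift): shift high left, filling from low's top bits.
        high = ((high << shift) | (low >> (32 - shift))) & M
        low = (low << shift) & M
    return high, low
```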
-+
-+bool MacroAssembler::IsUnsafeImmediate(const Immediate& x) {
-+ static const int kMaxImmediateBits = 17;
-+ if (!RelocInfo::IsNone(x.rmode_)) return false;
-+ return !is_intn(x.immediate(), kMaxImmediateBits);
-+}
-+
-+
-+void MacroAssembler::SafeMove(Register dst, const Immediate& x) {
-+ if (IsUnsafeImmediate(x) && jit_cookie() != 0) {
-+ Move(dst, Immediate(x.immediate() ^ jit_cookie()));
-+ xor_(dst, jit_cookie());
-+ } else {
-+ Move(dst, x);
-+ }
-+}
-+
-+
-+void MacroAssembler::SafePush(const Immediate& x) {
-+ if (IsUnsafeImmediate(x) && jit_cookie() != 0) {
-+ push(Immediate(x.immediate() ^ jit_cookie()));
-+ xor_(Operand(esp, 0), Immediate(jit_cookie()));
-+ } else {
-+ push(x);
-+ }
-+}
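(Aside, not part of the patch: `SafeMove`/`SafePush` above XOR large non-relocatable immediates with a secret JIT cookie so attacker-chosen constants never appear verbatim in the code stream, a mitigation against JIT spraying. A sketch of the idea, with illustrative names:)

```python
def safe_move_immediate(imm: int, jit_cookie: int) -> tuple:
    """Embed imm XOR cookie in the code, then un-XOR at runtime,
    as SafeMove's Move + xor_ pair does."""
    embedded = imm ^ jit_cookie        # what actually lands in the instruction stream
    recovered = embedded ^ jit_cookie  # xor_(dst, jit_cookie) at runtime
    return embedded, recovered
```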
-+
-+
-+void MacroAssembler::CmpObjectType(Register heap_object,
-+ InstanceType type,
-+ Register map) {
-+ mov(map, FieldOperand(heap_object, HeapObject::kMapOffset));
-+ CmpInstanceType(map, type);
-+}
-+
-+
-+void MacroAssembler::CmpInstanceType(Register map, InstanceType type) {
-+ cmpb(FieldOperand(map, Map::kInstanceTypeOffset), Immediate(type));
-+}
-+
-+void MacroAssembler::CompareMap(Register obj, Handle<Map> map) {
-+ cmp(FieldOperand(obj, HeapObject::kMapOffset), map);
-+}
-+
-+
-+void MacroAssembler::CheckMap(Register obj,
-+ Handle<Map> map,
-+ Label* fail,
-+ SmiCheckType smi_check_type) {
-+ if (smi_check_type == DO_SMI_CHECK) {
-+ JumpIfSmi(obj, fail);
-+ }
-+
-+ CompareMap(obj, map);
-+ j(not_equal, fail);
-+}
-+
-+
-+Condition MacroAssembler::IsObjectStringType(Register heap_object,
-+ Register map,
-+ Register instance_type) {
-+ mov(map, FieldOperand(heap_object, HeapObject::kMapOffset));
-+ movzx_b(instance_type, FieldOperand(map, Map::kInstanceTypeOffset));
-+ STATIC_ASSERT(kNotStringTag != 0);
-+ test(instance_type, Immediate(kIsNotStringMask));
-+ return zero;
-+}
-+
-+
-+void TurboAssembler::FCmp() {
-+ fucompp();
-+ push(eax);
-+ fnstsw_ax();
-+ sahf();
-+ pop(eax);
-+}
-+
-+
-+void MacroAssembler::FXamMinusZero() {
-+ fxam();
-+ push(eax);
-+ fnstsw_ax();
-+ and_(eax, Immediate(0x4700));
-+ // For minus zero, C3 == 1 && C1 == 1.
-+ cmp(eax, Immediate(0x4200));
-+ pop(eax);
-+ fstp(0);
-+}
-+
-+
-+void MacroAssembler::FXamSign() {
-+ fxam();
-+ push(eax);
-+ fnstsw_ax();
-+ // For negative value (including -0.0), C1 == 1.
-+ and_(eax, Immediate(0x0200));
-+ pop(eax);
-+ fstp(0);
-+}
-+
-+
-+void MacroAssembler::X87CheckIA() {
-+ push(eax);
-+ fnstsw_ax();
-+ // For #IA, IE == 1 && SF == 0.
-+ and_(eax, Immediate(0x0041));
-+ cmp(eax, Immediate(0x0001));
-+ pop(eax);
-+}
-+
-+
-+// rc=00B, round to nearest.
-+// rc=01B, round down.
-+// rc=10B, round up.
-+// rc=11B, round toward zero.
-+void TurboAssembler::X87SetRC(int rc) {
-+ sub(esp, Immediate(kPointerSize));
-+ fnstcw(MemOperand(esp, 0));
-+ and_(MemOperand(esp, 0), Immediate(0xF3FF));
-+ or_(MemOperand(esp, 0), Immediate(rc));
-+ fldcw(MemOperand(esp, 0));
-+ add(esp, Immediate(kPointerSize));
-+}
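(Aside, not part of the patch: `X87SetRC` above rewrites the RC field, bits 10-11, of the x87 control word — the `0xF3FF` mask clears those two bits before OR-ing in the new mode. This sketch assumes `rc` is passed already shifted into bits 10-11, which is what the mask/OR sequence implies:)

```python
def x87_set_rounding(cw: int, rc: int) -> int:
    """Update the RC field (bits 10-11) of an x87 control word,
    as X87SetRC's fnstcw/and/or/fldcw sequence does."""
    # 0x0000 nearest, 0x0400 down, 0x0800 up, 0x0C00 toward zero
    assert rc in (0x0000, 0x0400, 0x0800, 0x0C00)
    return (cw & 0xF3FF) | rc
```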
-+
-+
-+void TurboAssembler::X87SetFPUCW(int cw) {
-+ RecordComment("-- X87SetFPUCW start --");
-+ push(Immediate(cw));
-+ fldcw(MemOperand(esp, 0));
-+ add(esp, Immediate(kPointerSize));
-+ RecordComment("-- X87SetFPUCW end--");
-+}
-+
-+
-+void MacroAssembler::AssertSmi(Register object) {
-+ if (emit_debug_code()) {
-+ test(object, Immediate(kSmiTagMask));
-+ Check(equal, kOperandIsNotASmi);
-+ }
-+}
-+
-+
-+void MacroAssembler::AssertFixedArray(Register object) {
-+ if (emit_debug_code()) {
-+ test(object, Immediate(kSmiTagMask));
-+ Check(not_equal, kOperandIsASmiAndNotAFixedArray);
-+ Push(object);
-+ CmpObjectType(object, FIXED_ARRAY_TYPE, object);
-+ Pop(object);
-+ Check(equal, kOperandIsNotAFixedArray);
-+ }
-+}
-+
-+
-+void MacroAssembler::AssertFunction(Register object) {
-+ if (emit_debug_code()) {
-+ test(object, Immediate(kSmiTagMask));
-+ Check(not_equal, kOperandIsASmiAndNotAFunction);
-+ Push(object);
-+ CmpObjectType(object, JS_FUNCTION_TYPE, object);
-+ Pop(object);
-+ Check(equal, kOperandIsNotAFunction);
-+ }
-+}
-+
-+
-+void MacroAssembler::AssertBoundFunction(Register object) {
-+ if (emit_debug_code()) {
-+ test(object, Immediate(kSmiTagMask));
-+ Check(not_equal, kOperandIsASmiAndNotABoundFunction);
-+ Push(object);
-+ CmpObjectType(object, JS_BOUND_FUNCTION_TYPE, object);
-+ Pop(object);
-+ Check(equal, kOperandIsNotABoundFunction);
-+ }
-+}
-+
-+void MacroAssembler::AssertGeneratorObject(Register object) {
-+ if (emit_debug_code()) {
-+ test(object, Immediate(kSmiTagMask));
-+ Check(not_equal, kOperandIsASmiAndNotAGeneratorObject);
-+ Push(object);
-+ CmpObjectType(object, JS_GENERATOR_OBJECT_TYPE, object);
-+ Pop(object);
-+ Check(equal, kOperandIsNotAGeneratorObject);
-+ }
-+}
-+
-+void MacroAssembler::AssertUndefinedOrAllocationSite(Register object) {
-+ if (emit_debug_code()) {
-+ Label done_checking;
-+ AssertNotSmi(object);
-+ cmp(object, isolate()->factory()->undefined_value());
-+ j(equal, &done_checking);
-+ cmp(FieldOperand(object, 0),
-+ Immediate(isolate()->factory()->allocation_site_map()));
-+ Assert(equal, kExpectedUndefinedOrCell);
-+ bind(&done_checking);
-+ }
-+}
-+
-+
-+void MacroAssembler::AssertNotSmi(Register object) {
-+ if (emit_debug_code()) {
-+ test(object, Immediate(kSmiTagMask));
-+ Check(not_equal, kOperandIsASmi);
-+ }
-+}
-+
-+void TurboAssembler::StubPrologue(StackFrame::Type type) {
-+ push(ebp); // Caller's frame pointer.
-+ mov(ebp, esp);
-+ push(Immediate(Smi::FromInt(type)));
-+}
-+
-+
-+void TurboAssembler::Prologue(bool code_pre_aging) {
-+ PredictableCodeSizeScope predictible_code_size_scope(this,
-+ kNoCodeAgeSequenceLength);
-+ if (code_pre_aging) {
-+ // Pre-age the code.
-+ call(isolate()->builtins()->MarkCodeAsExecutedOnce(),
-+ RelocInfo::CODE_AGE_SEQUENCE);
-+ Nop(kNoCodeAgeSequenceLength - Assembler::kCallInstructionLength);
-+ } else {
-+ push(ebp); // Caller's frame pointer.
-+ mov(ebp, esp);
-+ push(esi); // Callee's context.
-+ push(edi); // Callee's JS function.
-+ }
-+}
-+
-+void MacroAssembler::EmitLoadFeedbackVector(Register vector) {
-+ mov(vector, Operand(ebp, JavaScriptFrameConstants::kFunctionOffset));
-+ mov(vector, FieldOperand(vector, JSFunction::kFeedbackVectorOffset));
-+ mov(vector, FieldOperand(vector, Cell::kValueOffset));
-+}
-+
-+
-+void TurboAssembler::EnterFrame(StackFrame::Type type) {
-+ push(ebp);
-+ mov(ebp, esp);
-+ push(Immediate(Smi::FromInt(type)));
-+ if (type == StackFrame::INTERNAL) {
-+ push(Immediate(CodeObject()));
-+ }
-+ if (emit_debug_code()) {
-+ cmp(Operand(esp, 0), Immediate(isolate()->factory()->undefined_value()));
-+ Check(not_equal, kCodeObjectNotProperlyPatched);
-+ }
-+}
-+
-+
-+void TurboAssembler::LeaveFrame(StackFrame::Type type) {
-+ if (emit_debug_code()) {
-+ cmp(Operand(ebp, CommonFrameConstants::kContextOrFrameTypeOffset),
-+ Immediate(Smi::FromInt(type)));
-+ Check(equal, kStackFrameTypesMustMatch);
-+ }
-+ leave();
-+}
-+
-+void MacroAssembler::EnterBuiltinFrame(Register context, Register target,
-+ Register argc) {
-+ Push(ebp);
-+ Move(ebp, esp);
-+ Push(context);
-+ Push(target);
-+ Push(argc);
-+}
-+
-+void MacroAssembler::LeaveBuiltinFrame(Register context, Register target,
-+ Register argc) {
-+ Pop(argc);
-+ Pop(target);
-+ Pop(context);
-+ leave();
-+}
-+
-+void MacroAssembler::EnterExitFramePrologue(StackFrame::Type frame_type) {
-+ DCHECK(frame_type == StackFrame::EXIT ||
-+ frame_type == StackFrame::BUILTIN_EXIT);
-+
-+ // Set up the frame structure on the stack.
-+ DCHECK_EQ(+2 * kPointerSize, ExitFrameConstants::kCallerSPDisplacement);
-+ DCHECK_EQ(+1 * kPointerSize, ExitFrameConstants::kCallerPCOffset);
-+ DCHECK_EQ(0 * kPointerSize, ExitFrameConstants::kCallerFPOffset);
-+ push(ebp);
-+ mov(ebp, esp);
-+
-+ // Reserve room for entry stack pointer and push the code object.
-+ push(Immediate(Smi::FromInt(frame_type)));
-+ DCHECK_EQ(-2 * kPointerSize, ExitFrameConstants::kSPOffset);
-+ push(Immediate(0)); // Saved entry sp, patched before call.
-+ DCHECK_EQ(-3 * kPointerSize, ExitFrameConstants::kCodeOffset);
-+ push(Immediate(CodeObject())); // Accessed from ExitFrame::code_slot.
-+
-+ // Save the frame pointer and the context in top.
-+ ExternalReference c_entry_fp_address(IsolateAddressId::kCEntryFPAddress,
-+ isolate());
-+ ExternalReference context_address(IsolateAddressId::kContextAddress,
-+ isolate());
-+ ExternalReference c_function_address(IsolateAddressId::kCFunctionAddress,
-+ isolate());
-+ mov(Operand::StaticVariable(c_entry_fp_address), ebp);
-+ mov(Operand::StaticVariable(context_address), esi);
-+ mov(Operand::StaticVariable(c_function_address), ebx);
-+}
-+
-+
-+void MacroAssembler::EnterExitFrameEpilogue(int argc, bool save_doubles) {
-+ // Optionally save FPU state.
-+ if (save_doubles) {
-+ // Store FPU state to m108byte.
-+ int space = 108 + argc * kPointerSize;
-+ sub(esp, Immediate(space));
-+ const int offset = -ExitFrameConstants::kFixedFrameSizeFromFp;
-+ fnsave(MemOperand(ebp, offset - 108));
-+ } else {
-+ sub(esp, Immediate(argc * kPointerSize));
-+ }
-+
-+ // Get the required frame alignment for the OS.
-+ const int kFrameAlignment = base::OS::ActivationFrameAlignment();
-+ if (kFrameAlignment > 0) {
-+ DCHECK(base::bits::IsPowerOfTwo(kFrameAlignment));
-+ and_(esp, -kFrameAlignment);
-+ }
-+
-+ // Patch the saved entry sp.
-+ mov(Operand(ebp, ExitFrameConstants::kSPOffset), esp);
-+}
-+
-+void MacroAssembler::EnterExitFrame(int argc, bool save_doubles,
-+ StackFrame::Type frame_type) {
-+ EnterExitFramePrologue(frame_type);
-+
-+ // Set up argc and argv in callee-saved registers.
-+ int offset = StandardFrameConstants::kCallerSPOffset - kPointerSize;
-+ mov(edi, eax);
-+ lea(esi, Operand(ebp, eax, times_4, offset));
-+
-+ // Reserve space for argc, argv and isolate.
-+ EnterExitFrameEpilogue(argc, save_doubles);
-+}
-+
-+
-+void MacroAssembler::EnterApiExitFrame(int argc) {
-+ EnterExitFramePrologue(StackFrame::EXIT);
-+ EnterExitFrameEpilogue(argc, false);
-+}
-+
-+
-+void MacroAssembler::LeaveExitFrame(bool save_doubles, bool pop_arguments) {
-+ // Optionally restore FPU state.
-+ if (save_doubles) {
-+ const int offset = -ExitFrameConstants::kFixedFrameSizeFromFp;
-+ frstor(MemOperand(ebp, offset - 108));
-+ }
-+
-+ if (pop_arguments) {
-+ // Get the return address from the stack and restore the frame pointer.
-+ mov(ecx, Operand(ebp, 1 * kPointerSize));
-+ mov(ebp, Operand(ebp, 0 * kPointerSize));
-+
-+ // Pop the arguments and the receiver from the caller stack.
-+ lea(esp, Operand(esi, 1 * kPointerSize));
-+
-+ // Push the return address to get ready to return.
-+ push(ecx);
-+ } else {
-+ // Otherwise just leave the exit frame.
-+ leave();
-+ }
-+
-+ LeaveExitFrameEpilogue(true);
-+}
-+
-+
-+void MacroAssembler::LeaveExitFrameEpilogue(bool restore_context) {
-+ // Restore current context from top and clear it in debug mode.
-+ ExternalReference context_address(IsolateAddressId::kContextAddress,
-+ isolate());
-+ if (restore_context) {
-+ mov(esi, Operand::StaticVariable(context_address));
-+ }
-+#ifdef DEBUG
-+ mov(Operand::StaticVariable(context_address), Immediate(0));
-+#endif
-+
-+ // Clear the top frame.
-+ ExternalReference c_entry_fp_address(IsolateAddressId::kCEntryFPAddress,
-+ isolate());
-+ mov(Operand::StaticVariable(c_entry_fp_address), Immediate(0));
-+}
-+
-+
-+void MacroAssembler::LeaveApiExitFrame(bool restore_context) {
-+ mov(esp, ebp);
-+ pop(ebp);
-+
-+ LeaveExitFrameEpilogue(restore_context);
-+}
-+
-+
-+void MacroAssembler::PushStackHandler() {
-+ // Adjust this code if not the case.
-+ STATIC_ASSERT(StackHandlerConstants::kSize == 1 * kPointerSize);
-+ STATIC_ASSERT(StackHandlerConstants::kNextOffset == 0);
-+
-+ // Link the current handler as the next handler.
-+ ExternalReference handler_address(IsolateAddressId::kHandlerAddress,
-+ isolate());
-+ push(Operand::StaticVariable(handler_address));
-+
-+ // Set this new handler as the current one.
-+ mov(Operand::StaticVariable(handler_address), esp);
-+}
-+
-+
-+void MacroAssembler::PopStackHandler() {
-+ STATIC_ASSERT(StackHandlerConstants::kNextOffset == 0);
-+ ExternalReference handler_address(IsolateAddressId::kHandlerAddress,
-+ isolate());
-+ pop(Operand::StaticVariable(handler_address));
-+ add(esp, Immediate(StackHandlerConstants::kSize - kPointerSize));
-+}
-+
-+
-+// Compute the hash code from the untagged key. This must be kept in sync with
-+// ComputeIntegerHash in utils.h and KeyedLoadGenericStub in
-+// code-stub-hydrogen.cc
-+//
-+// Note: r0 will contain hash code
-+void MacroAssembler::GetNumberHash(Register r0, Register scratch) {
-+ // Xor original key with a seed.
-+ if (serializer_enabled()) {
-+ ExternalReference roots_array_start =
-+ ExternalReference::roots_array_start(isolate());
-+ mov(scratch, Immediate(Heap::kHashSeedRootIndex));
-+ mov(scratch,
-+ Operand::StaticArray(scratch, times_pointer_size, roots_array_start));
-+ SmiUntag(scratch);
-+ xor_(r0, scratch);
-+ } else {
-+ int32_t seed = isolate()->heap()->HashSeed();
-+ xor_(r0, Immediate(seed));
-+ }
-+
-+ // hash = ~hash + (hash << 15);
-+ mov(scratch, r0);
-+ not_(r0);
-+ shl(scratch, 15);
-+ add(r0, scratch);
-+ // hash = hash ^ (hash >> 12);
-+ mov(scratch, r0);
-+ shr(scratch, 12);
-+ xor_(r0, scratch);
-+ // hash = hash + (hash << 2);
-+ lea(r0, Operand(r0, r0, times_4, 0));
-+ // hash = hash ^ (hash >> 4);
-+ mov(scratch, r0);
-+ shr(scratch, 4);
-+ xor_(r0, scratch);
-+ // hash = hash * 2057;
-+ imul(r0, r0, 2057);
-+ // hash = hash ^ (hash >> 16);
-+ mov(scratch, r0);
-+ shr(scratch, 16);
-+ xor_(r0, scratch);
-+ and_(r0, 0x3fffffff);
-+}
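(Aside, not part of the patch: the shift/xor/multiply steps that `GetNumberHash` emits are spelled out in its comments; transcribed into Python with 32-bit wrapping they read as below. The final mask keeps the low 30 bits, matching `and_(r0, 0x3fffffff)`.)

```python
def compute_integer_hash(key: int, seed: int = 0) -> int:
    """32-bit integer hash mirroring the instruction sequence above."""
    M = 0xFFFFFFFF
    h = (key ^ seed) & M      # xor original key with a seed
    h = (~h + (h << 15)) & M  # hash = ~hash + (hash << 15)
    h ^= h >> 12              # hash = hash ^ (hash >> 12)
    h = (h + (h << 2)) & M    # hash = hash + (hash << 2)
    h ^= h >> 4               # hash = hash ^ (hash >> 4)
    h = (h * 2057) & M        # hash = hash * 2057
    h ^= h >> 16              # hash = hash ^ (hash >> 16)
    return h & 0x3FFFFFFF     # keep 30 bits
```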
-+
-+void MacroAssembler::LoadAllocationTopHelper(Register result,
-+ Register scratch,
-+ AllocationFlags flags) {
-+ ExternalReference allocation_top =
-+ AllocationUtils::GetAllocationTopReference(isolate(), flags);
-+
-+ // Just return if allocation top is already known.
-+ if ((flags & RESULT_CONTAINS_TOP) != 0) {
-+ // No use of scratch if allocation top is provided.
-+ DCHECK(scratch.is(no_reg));
-+#ifdef DEBUG
-+ // Assert that result actually contains top on entry.
-+ cmp(result, Operand::StaticVariable(allocation_top));
-+ Check(equal, kUnexpectedAllocationTop);
-+#endif
-+ return;
-+ }
-+
-+ // Move address of new object to result. Use scratch register if available.
-+ if (scratch.is(no_reg)) {
-+ mov(result, Operand::StaticVariable(allocation_top));
-+ } else {
-+ mov(scratch, Immediate(allocation_top));
-+ mov(result, Operand(scratch, 0));
-+ }
-+}
-+
-+
-+void MacroAssembler::UpdateAllocationTopHelper(Register result_end,
-+ Register scratch,
-+ AllocationFlags flags) {
-+ if (emit_debug_code()) {
-+ test(result_end, Immediate(kObjectAlignmentMask));
-+ Check(zero, kUnalignedAllocationInNewSpace);
-+ }
-+
-+ ExternalReference allocation_top =
-+ AllocationUtils::GetAllocationTopReference(isolate(), flags);
-+
-+ // Update new top. Use scratch if available.
-+ if (scratch.is(no_reg)) {
-+ mov(Operand::StaticVariable(allocation_top), result_end);
-+ } else {
-+ mov(Operand(scratch, 0), result_end);
-+ }
-+}
-+
-+
-+void MacroAssembler::Allocate(int object_size,
-+ Register result,
-+ Register result_end,
-+ Register scratch,
-+ Label* gc_required,
-+ AllocationFlags flags) {
-+ DCHECK((flags & (RESULT_CONTAINS_TOP | SIZE_IN_WORDS)) == 0);
-+ DCHECK(object_size <= kMaxRegularHeapObjectSize);
-+ if (!FLAG_inline_new) {
-+ if (emit_debug_code()) {
-+ // Trash the registers to simulate an allocation failure.
-+ mov(result, Immediate(0x7091));
-+ if (result_end.is_valid()) {
-+ mov(result_end, Immediate(0x7191));
-+ }
-+ if (scratch.is_valid()) {
-+ mov(scratch, Immediate(0x7291));
-+ }
-+ }
-+ jmp(gc_required);
-+ return;
-+ }
-+ DCHECK(!result.is(result_end));
-+
-+ // Load address of new object into result.
-+ LoadAllocationTopHelper(result, scratch, flags);
-+
-+ ExternalReference allocation_limit =
-+ AllocationUtils::GetAllocationLimitReference(isolate(), flags);
-+
-+ // Align the next allocation. Storing the filler map without checking top is
-+ // safe in new-space because the limit of the heap is aligned there.
-+ if ((flags & DOUBLE_ALIGNMENT) != 0) {
-+ DCHECK(kPointerAlignment * 2 == kDoubleAlignment);
-+ Label aligned;
-+ test(result, Immediate(kDoubleAlignmentMask));
-+ j(zero, &aligned, Label::kNear);
-+ if ((flags & PRETENURE) != 0) {
-+ cmp(result, Operand::StaticVariable(allocation_limit));
-+ j(above_equal, gc_required);
-+ }
-+ mov(Operand(result, 0),
-+ Immediate(isolate()->factory()->one_pointer_filler_map()));
-+ add(result, Immediate(kDoubleSize / 2));
-+ bind(&aligned);
-+ }
-+
-+ // Calculate new top and bail out if space is exhausted.
-+ Register top_reg = result_end.is_valid() ? result_end : result;
-+
-+ if (!top_reg.is(result)) {
-+ mov(top_reg, result);
-+ }
-+ add(top_reg, Immediate(object_size));
-+ cmp(top_reg, Operand::StaticVariable(allocation_limit));
-+ j(above, gc_required);
-+
-+ UpdateAllocationTopHelper(top_reg, scratch, flags);
-+
-+ if (top_reg.is(result)) {
-+ sub(result, Immediate(object_size - kHeapObjectTag));
-+ } else {
-+ // Tag the result.
-+ DCHECK(kHeapObjectTag == 1);
-+ inc(result);
-+ }
-+}
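(Aside, not part of the patch: the bump-pointer fast path of `Allocate` above — align, bump top, compare against the limit, tag — can be sketched as follows. This is a simplification: the real code writes a one-pointer filler map at the alignment gap and takes the `gc_required` label on failure; `4` here is `kDoubleSize / 2` on ia32 and the `+ 1` is `kHeapObjectTag`.)

```python
def bump_allocate(top: int, limit: int, size: int, double_align: bool):
    """Sketch of the new-space bump-pointer path in MacroAssembler::Allocate.
    Returns (tagged_result, new_top) on success, None when GC is required."""
    if double_align and top % 8 != 0:
        top += 4           # skip a word; real code stores a filler map there
    new_top = top + size
    if new_top > limit:
        return None        # j(above, gc_required)
    return top + 1, new_top  # inc(result): tag with kHeapObjectTag == 1
```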
-+
-+
-+void MacroAssembler::Allocate(int header_size,
-+ ScaleFactor element_size,
-+ Register element_count,
-+ RegisterValueType element_count_type,
-+ Register result,
-+ Register result_end,
-+ Register scratch,
-+ Label* gc_required,
-+ AllocationFlags flags) {
-+ DCHECK((flags & SIZE_IN_WORDS) == 0);
-+ if (!FLAG_inline_new) {
-+ if (emit_debug_code()) {
-+ // Trash the registers to simulate an allocation failure.
-+ mov(result, Immediate(0x7091));
-+ mov(result_end, Immediate(0x7191));
-+ if (scratch.is_valid()) {
-+ mov(scratch, Immediate(0x7291));
-+ }
-+ // Register element_count is not modified by the function.
-+ }
-+ jmp(gc_required);
-+ return;
-+ }
-+ DCHECK(!result.is(result_end));
-+
-+ // Load address of new object into result.
-+ LoadAllocationTopHelper(result, scratch, flags);
-+
-+ ExternalReference allocation_limit =
-+ AllocationUtils::GetAllocationLimitReference(isolate(), flags);
-+
-+ // Align the next allocation. Storing the filler map without checking top is
-+ // safe in new-space because the limit of the heap is aligned there.
-+ if ((flags & DOUBLE_ALIGNMENT) != 0) {
-+ DCHECK(kPointerAlignment * 2 == kDoubleAlignment);
-+ Label aligned;
-+ test(result, Immediate(kDoubleAlignmentMask));
-+ j(zero, &aligned, Label::kNear);
-+ if ((flags & PRETENURE) != 0) {
-+ cmp(result, Operand::StaticVariable(allocation_limit));
-+ j(above_equal, gc_required);
-+ }
-+ mov(Operand(result, 0),
-+ Immediate(isolate()->factory()->one_pointer_filler_map()));
-+ add(result, Immediate(kDoubleSize / 2));
-+ bind(&aligned);
-+ }
-+
-+ // Calculate new top and bail out if space is exhausted.
-+ // We assume that element_count*element_size + header_size does not
-+ // overflow.
-+ if (element_count_type == REGISTER_VALUE_IS_SMI) {
-+ STATIC_ASSERT(static_cast<ScaleFactor>(times_2 - 1) == times_1);
-+ STATIC_ASSERT(static_cast<ScaleFactor>(times_4 - 1) == times_2);
-+ STATIC_ASSERT(static_cast<ScaleFactor>(times_8 - 1) == times_4);
-+ DCHECK(element_size >= times_2);
-+ DCHECK(kSmiTagSize == 1);
-+ element_size = static_cast<ScaleFactor>(element_size - 1);
-+ } else {
-+ DCHECK(element_count_type == REGISTER_VALUE_IS_INT32);
-+ }
-+ lea(result_end, Operand(element_count, element_size, header_size));
-+ add(result_end, result);
-+ j(carry, gc_required);
-+ cmp(result_end, Operand::StaticVariable(allocation_limit));
-+ j(above, gc_required);
-+
-+ // Tag result.
-+ DCHECK(kHeapObjectTag == 1);
-+ inc(result);
-+
-+ // Update allocation top.
-+ UpdateAllocationTopHelper(result_end, scratch, flags);
-+}
-+
-+void MacroAssembler::Allocate(Register object_size,
-+ Register result,
-+ Register result_end,
-+ Register scratch,
-+ Label* gc_required,
-+ AllocationFlags flags) {
-+ DCHECK((flags & (RESULT_CONTAINS_TOP | SIZE_IN_WORDS)) == 0);
-+ if (!FLAG_inline_new) {
-+ if (emit_debug_code()) {
-+ // Trash the registers to simulate an allocation failure.
-+ mov(result, Immediate(0x7091));
-+ mov(result_end, Immediate(0x7191));
-+ if (scratch.is_valid()) {
-+ mov(scratch, Immediate(0x7291));
-+ }
-+ // object_size is left unchanged by this function.
-+ }
-+ jmp(gc_required);
-+ return;
-+ }
-+ DCHECK(!result.is(result_end));
-+
-+ // Load address of new object into result.
-+ LoadAllocationTopHelper(result, scratch, flags);
-+
-+ ExternalReference allocation_limit =
-+ AllocationUtils::GetAllocationLimitReference(isolate(), flags);
-+
-+ // Align the next allocation. Storing the filler map without checking top is
-+ // safe in new-space because the limit of the heap is aligned there.
-+ if ((flags & DOUBLE_ALIGNMENT) != 0) {
-+ DCHECK(kPointerAlignment * 2 == kDoubleAlignment);
-+ Label aligned;
-+ test(result, Immediate(kDoubleAlignmentMask));
-+ j(zero, &aligned, Label::kNear);
-+ if ((flags & PRETENURE) != 0) {
-+ cmp(result, Operand::StaticVariable(allocation_limit));
-+ j(above_equal, gc_required);
-+ }
-+ mov(Operand(result, 0),
-+ Immediate(isolate()->factory()->one_pointer_filler_map()));
-+ add(result, Immediate(kDoubleSize / 2));
-+ bind(&aligned);
-+ }
-+
-+ // Calculate new top and bail out if space is exhausted.
-+ if (!object_size.is(result_end)) {
-+ mov(result_end, object_size);
-+ }
-+ add(result_end, result);
-+ cmp(result_end, Operand::StaticVariable(allocation_limit));
-+ j(above, gc_required);
-+
-+ // Tag result.
-+ DCHECK(kHeapObjectTag == 1);
-+ inc(result);
-+
-+ UpdateAllocationTopHelper(result_end, scratch, flags);
-+}
-+
-+void MacroAssembler::AllocateHeapNumber(Register result,
-+ Register scratch1,
-+ Register scratch2,
-+ Label* gc_required,
-+ MutableMode mode) {
-+ // Allocate heap number in new space.
-+ Allocate(HeapNumber::kSize, result, scratch1, scratch2, gc_required,
-+ NO_ALLOCATION_FLAGS);
-+
-+ Handle<Map> map = mode == MUTABLE
-+ ? isolate()->factory()->mutable_heap_number_map()
-+ : isolate()->factory()->heap_number_map();
-+
-+ // Set the map.
-+ mov(FieldOperand(result, HeapObject::kMapOffset), Immediate(map));
-+}
-+
-+void MacroAssembler::AllocateJSValue(Register result, Register constructor,
-+ Register value, Register scratch,
-+ Label* gc_required) {
-+ DCHECK(!result.is(constructor));
-+ DCHECK(!result.is(scratch));
-+ DCHECK(!result.is(value));
-+
-+ // Allocate JSValue in new space.
-+ Allocate(JSValue::kSize, result, scratch, no_reg, gc_required,
-+ NO_ALLOCATION_FLAGS);
-+
-+ // Initialize the JSValue.
-+ LoadGlobalFunctionInitialMap(constructor, scratch);
-+ mov(FieldOperand(result, HeapObject::kMapOffset), scratch);
-+ LoadRoot(scratch, Heap::kEmptyFixedArrayRootIndex);
-+ mov(FieldOperand(result, JSObject::kPropertiesOrHashOffset), scratch);
-+ mov(FieldOperand(result, JSObject::kElementsOffset), scratch);
-+ mov(FieldOperand(result, JSValue::kValueOffset), value);
-+ STATIC_ASSERT(JSValue::kSize == 4 * kPointerSize);
-+}
-+
-+void MacroAssembler::InitializeFieldsWithFiller(Register current_address,
-+ Register end_address,
-+ Register filler) {
-+ Label loop, entry;
-+ jmp(&entry, Label::kNear);
-+ bind(&loop);
-+ mov(Operand(current_address, 0), filler);
-+ add(current_address, Immediate(kPointerSize));
-+ bind(&entry);
-+ cmp(current_address, end_address);
-+ j(below, &loop, Label::kNear);
-+}
-+
-+
-+void MacroAssembler::BooleanBitTest(Register object,
-+ int field_offset,
-+ int bit_index) {
-+ bit_index += kSmiTagSize + kSmiShiftSize;
-+ DCHECK(base::bits::IsPowerOfTwo(kBitsPerByte));
-+ int byte_index = bit_index / kBitsPerByte;
-+ int byte_bit_index = bit_index & (kBitsPerByte - 1);
-+ test_b(FieldOperand(object, field_offset + byte_index),
-+ Immediate(1 << byte_bit_index));
-+}
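(Aside, not part of the patch: `BooleanBitTest` above locates a single bit inside a smi-tagged bitfield by first skewing the index, then splitting it into a byte offset and a bit-within-byte. A sketch, assuming `kSmiTagSize + kSmiShiftSize == 1` as on ia32:)

```python
def boolean_bit_test(field_bytes: bytes, bit_index: int, smi_skew: int = 1) -> bool:
    """Mirror of BooleanBitTest's index arithmetic: test one bit of a
    byte-addressed field whose low smi_skew bits are tag bits."""
    bit_index += smi_skew              # kSmiTagSize + kSmiShiftSize
    byte_index = bit_index // 8        # bit_index / kBitsPerByte
    byte_bit_index = bit_index % 8     # bit_index & (kBitsPerByte - 1)
    return bool(field_bytes[byte_index] & (1 << byte_bit_index))
```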
-+
-+void MacroAssembler::GetMapConstructor(Register result, Register map,
-+ Register temp) {
-+ Label done, loop;
-+ mov(result, FieldOperand(map, Map::kConstructorOrBackPointerOffset));
-+ bind(&loop);
-+ JumpIfSmi(result, &done, Label::kNear);
-+ CmpObjectType(result, MAP_TYPE, temp);
-+ j(not_equal, &done, Label::kNear);
-+ mov(result, FieldOperand(result, Map::kConstructorOrBackPointerOffset));
-+ jmp(&loop);
-+ bind(&done);
-+}
-+
-+void MacroAssembler::CallStub(CodeStub* stub) {
-+ DCHECK(AllowThisStubCall(stub)); // Calls are not allowed in some stubs.
-+ call(stub->GetCode(), RelocInfo::CODE_TARGET);
-+}
-+
-+void TurboAssembler::CallStubDelayed(CodeStub* stub) {
-+ DCHECK(AllowThisStubCall(stub)); // Calls are not allowed in some stubs.
-+ call(stub);
-+}
-+
-+void MacroAssembler::TailCallStub(CodeStub* stub) {
-+ jmp(stub->GetCode(), RelocInfo::CODE_TARGET);
-+}
-+
-+bool TurboAssembler::AllowThisStubCall(CodeStub* stub) {
-+ return has_frame() || !stub->SometimesSetsUpAFrame();
-+}
-+
-+void MacroAssembler::CallRuntime(const Runtime::Function* f, int num_arguments,
-+ SaveFPRegsMode save_doubles) {
-+ // If the expected number of arguments of the runtime function is
-+ // constant, we check that the actual number of arguments match the
-+ // expectation.
-+ CHECK(f->nargs < 0 || f->nargs == num_arguments);
-+
-+ // TODO(1236192): Most runtime routines don't need the number of
-+ // arguments passed in because it is constant. At some point we
-+ // should remove this need and make the runtime routine entry code
-+ // smarter.
-+ Move(eax, Immediate(num_arguments));
-+ mov(ebx, Immediate(ExternalReference(f, isolate())));
-+ CEntryStub ces(isolate(), 1, save_doubles);
-+ CallStub(&ces);
-+}
-+
-+void TurboAssembler::CallRuntimeDelayed(Zone* zone, Runtime::FunctionId fid,
-+ SaveFPRegsMode save_doubles) {
-+ const Runtime::Function* f = Runtime::FunctionForId(fid);
-+ // TODO(1236192): Most runtime routines don't need the number of
-+ // arguments passed in because it is constant. At some point we
-+ // should remove this need and make the runtime routine entry code
-+ // smarter.
-+ Move(eax, Immediate(f->nargs));
-+ mov(ebx, Immediate(ExternalReference(f, isolate())));
-+ CallStubDelayed(new (zone) CEntryStub(nullptr, 1, save_doubles));
-+}
-+
-+void MacroAssembler::CallExternalReference(ExternalReference ref,
-+ int num_arguments) {
-+ mov(eax, Immediate(num_arguments));
-+ mov(ebx, Immediate(ref));
-+
-+ CEntryStub stub(isolate(), 1);
-+ CallStub(&stub);
-+}
-+
-+
-+void MacroAssembler::TailCallRuntime(Runtime::FunctionId fid) {
-+ // ----------- S t a t e -------------
-+ // -- esp[0] : return address
-+ // -- esp[8] : argument num_arguments - 1
-+ // ...
-+ // -- esp[8 * num_arguments] : argument 0 (receiver)
-+ //
-+ // For runtime functions with variable arguments:
-+ // -- eax : number of arguments
-+ // -----------------------------------
-+
-+ const Runtime::Function* function = Runtime::FunctionForId(fid);
-+ DCHECK_EQ(1, function->result_size);
-+ if (function->nargs >= 0) {
-+ // TODO(1236192): Most runtime routines don't need the number of
-+ // arguments passed in because it is constant. At some point we
-+ // should remove this need and make the runtime routine entry code
-+ // smarter.
-+ mov(eax, Immediate(function->nargs));
-+ }
-+ JumpToExternalReference(ExternalReference(fid, isolate()));
-+}
-+
-+void MacroAssembler::JumpToExternalReference(const ExternalReference& ext,
-+ bool builtin_exit_frame) {
-+ // Set the entry point and jump to the C entry runtime stub.
-+ mov(ebx, Immediate(ext));
-+ CEntryStub ces(isolate(), 1, kDontSaveFPRegs, kArgvOnStack,
-+ builtin_exit_frame);
-+ jmp(ces.GetCode(), RelocInfo::CODE_TARGET);
-+}
-+
-+void TurboAssembler::PrepareForTailCall(
-+ const ParameterCount& callee_args_count, Register caller_args_count_reg,
-+ Register scratch0, Register scratch1, ReturnAddressState ra_state,
-+ int number_of_temp_values_after_return_address) {
-+#if DEBUG
-+ if (callee_args_count.is_reg()) {
-+ DCHECK(!AreAliased(callee_args_count.reg(), caller_args_count_reg, scratch0,
-+ scratch1));
-+ } else {
-+ DCHECK(!AreAliased(caller_args_count_reg, scratch0, scratch1));
-+ }
-+ DCHECK(ra_state != ReturnAddressState::kNotOnStack ||
-+ number_of_temp_values_after_return_address == 0);
-+#endif
-+
-+ // Calculate the destination address where we will put the return address
-+ // after we drop current frame.
-+ Register new_sp_reg = scratch0;
-+ if (callee_args_count.is_reg()) {
-+ sub(caller_args_count_reg, callee_args_count.reg());
-+ lea(new_sp_reg,
-+ Operand(ebp, caller_args_count_reg, times_pointer_size,
-+ StandardFrameConstants::kCallerPCOffset -
-+ number_of_temp_values_after_return_address * kPointerSize));
-+ } else {
-+ lea(new_sp_reg, Operand(ebp, caller_args_count_reg, times_pointer_size,
-+ StandardFrameConstants::kCallerPCOffset -
-+ (callee_args_count.immediate() +
-+ number_of_temp_values_after_return_address) *
-+ kPointerSize));
-+ }
-+
-+ if (FLAG_debug_code) {
-+ cmp(esp, new_sp_reg);
-+ Check(below, kStackAccessBelowStackPointer);
-+ }
-+
-+ // Copy return address from caller's frame to current frame's return address
-+ // to avoid its trashing and let the following loop copy it to the right
-+ // place.
-+ Register tmp_reg = scratch1;
-+ if (ra_state == ReturnAddressState::kOnStack) {
-+ mov(tmp_reg, Operand(ebp, StandardFrameConstants::kCallerPCOffset));
-+ mov(Operand(esp, number_of_temp_values_after_return_address * kPointerSize),
-+ tmp_reg);
-+ } else {
-+ DCHECK(ReturnAddressState::kNotOnStack == ra_state);
-+ DCHECK_EQ(0, number_of_temp_values_after_return_address);
-+ Push(Operand(ebp, StandardFrameConstants::kCallerPCOffset));
-+ }
-+
-+ // Restore caller's frame pointer now as it could be overwritten by
-+ // the copying loop.
-+ mov(ebp, Operand(ebp, StandardFrameConstants::kCallerFPOffset));
-+
-+ // +2 here is to copy both receiver and return address.
-+ Register count_reg = caller_args_count_reg;
-+ if (callee_args_count.is_reg()) {
-+ lea(count_reg, Operand(callee_args_count.reg(),
-+ 2 + number_of_temp_values_after_return_address));
-+ } else {
-+ mov(count_reg, Immediate(callee_args_count.immediate() + 2 +
-+ number_of_temp_values_after_return_address));
-+ // TODO(ishell): Unroll copying loop for small immediate values.
-+ }
-+
-+ // Now copy callee arguments to the caller frame going backwards to avoid
-+ // callee arguments corruption (source and destination areas could overlap).
-+ Label loop, entry;
-+ jmp(&entry, Label::kNear);
-+ bind(&loop);
-+ dec(count_reg);
-+ mov(tmp_reg, Operand(esp, count_reg, times_pointer_size, 0));
-+ mov(Operand(new_sp_reg, count_reg, times_pointer_size, 0), tmp_reg);
-+ bind(&entry);
-+ cmp(count_reg, Immediate(0));
-+ j(not_equal, &loop, Label::kNear);
-+
-+ // Leave current frame.
-+ mov(esp, new_sp_reg);
-+}
-+
-+void MacroAssembler::InvokePrologue(const ParameterCount& expected,
-+ const ParameterCount& actual,
-+ Label* done,
-+ bool* definitely_mismatches,
-+ InvokeFlag flag,
-+ Label::Distance done_near,
-+ const CallWrapper& call_wrapper) {
-+ bool definitely_matches = false;
-+ *definitely_mismatches = false;
-+ Label invoke;
-+ if (expected.is_immediate()) {
-+ DCHECK(actual.is_immediate());
-+ mov(eax, actual.immediate());
-+ if (expected.immediate() == actual.immediate()) {
-+ definitely_matches = true;
-+ } else {
-+ const int sentinel = SharedFunctionInfo::kDontAdaptArgumentsSentinel;
-+ if (expected.immediate() == sentinel) {
-+ // Don't worry about adapting arguments for builtins that
-+ // don't want that done. Skip adaption code by making it look
-+ // like we have a match between expected and actual number of
-+ // arguments.
-+ definitely_matches = true;
-+ } else {
-+ *definitely_mismatches = true;
-+ mov(ebx, expected.immediate());
-+ }
-+ }
-+ } else {
-+ if (actual.is_immediate()) {
-+ // Expected is in register, actual is immediate. This is the
-+ // case when we invoke function values without going through the
-+ // IC mechanism.
-+ mov(eax, actual.immediate());
-+ cmp(expected.reg(), actual.immediate());
-+ j(equal, &invoke);
-+ DCHECK(expected.reg().is(ebx));
-+ } else if (!expected.reg().is(actual.reg())) {
-+ // Both expected and actual are in (different) registers. This
-+ // is the case when we invoke functions using call and apply.
-+ cmp(expected.reg(), actual.reg());
-+ j(equal, &invoke);
-+ DCHECK(actual.reg().is(eax));
-+ DCHECK(expected.reg().is(ebx));
-+ } else {
-+ definitely_matches = true;
-+ Move(eax, actual.reg());
-+ }
-+ }
-+
-+ if (!definitely_matches) {
-+ Handle<Code> adaptor =
-+ isolate()->builtins()->ArgumentsAdaptorTrampoline();
-+ if (flag == CALL_FUNCTION) {
-+ call_wrapper.BeforeCall(CallSize(adaptor, RelocInfo::CODE_TARGET));
-+ call(adaptor, RelocInfo::CODE_TARGET);
-+ call_wrapper.AfterCall();
-+ if (!*definitely_mismatches) {
-+ jmp(done, done_near);
-+ }
-+ } else {
-+ jmp(adaptor, RelocInfo::CODE_TARGET);
-+ }
-+ bind(&invoke);
-+ }
-+}
-+
-+void MacroAssembler::CheckDebugHook(Register fun, Register new_target,
-+ const ParameterCount& expected,
-+ const ParameterCount& actual) {
-+ Label skip_hook;
-+ ExternalReference debug_hook_active =
-+ ExternalReference::debug_hook_on_function_call_address(isolate());
-+ cmpb(Operand::StaticVariable(debug_hook_active), Immediate(0));
-+ j(equal, &skip_hook);
-+ {
-+ FrameScope frame(this,
-+ has_frame() ? StackFrame::NONE : StackFrame::INTERNAL);
-+ if (expected.is_reg()) {
-+ SmiTag(expected.reg());
-+ Push(expected.reg());
-+ }
-+ if (actual.is_reg()) {
-+ SmiTag(actual.reg());
-+ Push(actual.reg());
-+ }
-+ if (new_target.is_valid()) {
-+ Push(new_target);
-+ }
-+ Push(fun);
-+ Push(fun);
-+ CallRuntime(Runtime::kDebugOnFunctionCall);
-+ Pop(fun);
-+ if (new_target.is_valid()) {
-+ Pop(new_target);
-+ }
-+ if (actual.is_reg()) {
-+ Pop(actual.reg());
-+ SmiUntag(actual.reg());
-+ }
-+ if (expected.is_reg()) {
-+ Pop(expected.reg());
-+ SmiUntag(expected.reg());
-+ }
-+ }
-+ bind(&skip_hook);
-+}
-+
-+
-+void MacroAssembler::InvokeFunctionCode(Register function, Register new_target,
-+ const ParameterCount& expected,
-+ const ParameterCount& actual,
-+ InvokeFlag flag,
-+ const CallWrapper& call_wrapper) {
-+ // You can't call a function without a valid frame.
-+ DCHECK(flag == JUMP_FUNCTION || has_frame());
-+ DCHECK(function.is(edi));
-+ DCHECK_IMPLIES(new_target.is_valid(), new_target.is(edx));
-+
-+ if (call_wrapper.NeedsDebugHookCheck()) {
-+ CheckDebugHook(function, new_target, expected, actual);
-+ }
-+
-+ // Clear the new.target register if not given.
-+ if (!new_target.is_valid()) {
-+ mov(edx, isolate()->factory()->undefined_value());
-+ }
-+
-+ Label done;
-+ bool definitely_mismatches = false;
-+ InvokePrologue(expected, actual, &done, &definitely_mismatches, flag,
-+ Label::kNear, call_wrapper);
-+ if (!definitely_mismatches) {
-+ // We call indirectly through the code field in the function to
-+ // allow recompilation to take effect without changing any of the
-+ // call sites.
-+ Operand code = FieldOperand(function, JSFunction::kCodeEntryOffset);
-+ if (flag == CALL_FUNCTION) {
-+ call_wrapper.BeforeCall(CallSize(code));
-+ call(code);
-+ call_wrapper.AfterCall();
-+ } else {
-+ DCHECK(flag == JUMP_FUNCTION);
-+ jmp(code);
-+ }
-+ bind(&done);
-+ }
-+}
-+
-+
-+void MacroAssembler::InvokeFunction(Register fun, Register new_target,
-+ const ParameterCount& actual,
-+ InvokeFlag flag,
-+ const CallWrapper& call_wrapper) {
-+ // You can't call a function without a valid frame.
-+ DCHECK(flag == JUMP_FUNCTION || has_frame());
-+
-+ DCHECK(fun.is(edi));
-+ mov(ebx, FieldOperand(edi, JSFunction::kSharedFunctionInfoOffset));
-+ mov(esi, FieldOperand(edi, JSFunction::kContextOffset));
-+ mov(ebx, FieldOperand(ebx, SharedFunctionInfo::kFormalParameterCountOffset));
-+
-+ ParameterCount expected(ebx);
-+ InvokeFunctionCode(edi, new_target, expected, actual, flag, call_wrapper);
-+}
-+
-+
-+void MacroAssembler::InvokeFunction(Register fun,
-+ const ParameterCount& expected,
-+ const ParameterCount& actual,
-+ InvokeFlag flag,
-+ const CallWrapper& call_wrapper) {
-+ // You can't call a function without a valid frame.
-+ DCHECK(flag == JUMP_FUNCTION || has_frame());
-+
-+ DCHECK(fun.is(edi));
-+ mov(esi, FieldOperand(edi, JSFunction::kContextOffset));
-+
-+ InvokeFunctionCode(edi, no_reg, expected, actual, flag, call_wrapper);
-+}
-+
-+
-+void MacroAssembler::InvokeFunction(Handle<JSFunction> function,
-+ const ParameterCount& expected,
-+ const ParameterCount& actual,
-+ InvokeFlag flag,
-+ const CallWrapper& call_wrapper) {
-+ Move(edi, function);
-+ InvokeFunction(edi, expected, actual, flag, call_wrapper);
-+}
-+
-+
-+void MacroAssembler::LoadContext(Register dst, int context_chain_length) {
-+ if (context_chain_length > 0) {
-+ // Move up the chain of contexts to the context containing the slot.
-+ mov(dst, Operand(esi, Context::SlotOffset(Context::PREVIOUS_INDEX)));
-+ for (int i = 1; i < context_chain_length; i++) {
-+ mov(dst, Operand(dst, Context::SlotOffset(Context::PREVIOUS_INDEX)));
-+ }
-+ } else {
-+ // Slot is in the current function context. Move it into the
-+ // destination register in case we store into it (the write barrier
-+ // cannot be allowed to destroy the context in esi).
-+ mov(dst, esi);
-+ }
-+
-+ // We should not have found a with context by walking the context chain
-+ // (i.e., the static scope chain and runtime context chain do not agree).
-+ // A variable occurring in such a scope should have slot type LOOKUP and
-+ // not CONTEXT.
-+ if (emit_debug_code()) {
-+ cmp(FieldOperand(dst, HeapObject::kMapOffset),
-+ isolate()->factory()->with_context_map());
-+ Check(not_equal, kVariableResolvedToWithContext);
-+ }
-+}
-+
-+
-+void MacroAssembler::LoadGlobalProxy(Register dst) {
-+ mov(dst, NativeContextOperand());
-+ mov(dst, ContextOperand(dst, Context::GLOBAL_PROXY_INDEX));
-+}
-+
-+void MacroAssembler::LoadGlobalFunction(int index, Register function) {
-+ // Load the native context from the current context.
-+ mov(function, NativeContextOperand());
-+ // Load the function from the native context.
-+ mov(function, ContextOperand(function, index));
-+}
-+
-+
-+void MacroAssembler::LoadGlobalFunctionInitialMap(Register function,
-+ Register map) {
-+ // Load the initial map. The global functions all have initial maps.
-+ mov(map, FieldOperand(function, JSFunction::kPrototypeOrInitialMapOffset));
-+ if (emit_debug_code()) {
-+ Label ok, fail;
-+ CheckMap(map, isolate()->factory()->meta_map(), &fail, DO_SMI_CHECK);
-+ jmp(&ok);
-+ bind(&fail);
-+ Abort(kGlobalFunctionsMustHaveInitialMap);
-+ bind(&ok);
-+ }
-+}
-+
-+
-+// Store the value in register src in the safepoint register stack
-+// slot for register dst.
-+void MacroAssembler::StoreToSafepointRegisterSlot(Register dst, Register src) {
-+ mov(SafepointRegisterSlot(dst), src);
-+}
-+
-+
-+void MacroAssembler::StoreToSafepointRegisterSlot(Register dst, Immediate src) {
-+ mov(SafepointRegisterSlot(dst), src);
-+}
-+
-+
-+void MacroAssembler::LoadFromSafepointRegisterSlot(Register dst, Register src) {
-+ mov(dst, SafepointRegisterSlot(src));
-+}
-+
-+
-+Operand MacroAssembler::SafepointRegisterSlot(Register reg) {
-+ return Operand(esp, SafepointRegisterStackIndex(reg.code()) * kPointerSize);
-+}
-+
-+
-+int MacroAssembler::SafepointRegisterStackIndex(int reg_code) {
-+ // The registers are pushed starting with the lowest encoding,
-+ // which means that lowest encodings are furthest away from
-+ // the stack pointer.
-+ DCHECK(reg_code >= 0 && reg_code < kNumSafepointRegisters);
-+ return kNumSafepointRegisters - reg_code - 1;
-+}
-+
-+
-+void MacroAssembler::CmpHeapObject(Register reg, Handle<HeapObject> object) {
-+ cmp(reg, object);
-+}
-+
-+void MacroAssembler::PushObject(Handle<Object> object) {
-+ if (object->IsHeapObject()) {
-+ Push(Handle<HeapObject>::cast(object));
-+ } else {
-+ Push(Smi::cast(*object));
-+ }
-+}
-+
-+void MacroAssembler::GetWeakValue(Register value, Handle<WeakCell> cell) {
-+ mov(value, cell);
-+ mov(value, FieldOperand(value, WeakCell::kValueOffset));
-+}
-+
-+
-+void MacroAssembler::LoadWeakValue(Register value, Handle<WeakCell> cell,
-+ Label* miss) {
-+ GetWeakValue(value, cell);
-+ JumpIfSmi(value, miss);
-+}
-+
-+void TurboAssembler::Ret() { ret(0); }
-+
-+void TurboAssembler::Ret(int bytes_dropped, Register scratch) {
-+ if (is_uint16(bytes_dropped)) {
-+ ret(bytes_dropped);
-+ } else {
-+ pop(scratch);
-+ add(esp, Immediate(bytes_dropped));
-+ push(scratch);
-+ ret(0);
-+ }
-+}
-+
-+
-+void TurboAssembler::VerifyX87StackDepth(uint32_t depth) {
-+ // Turn off the stack depth check when serializer is enabled to reduce the
-+ // code size.
-+ if (serializer_enabled()) return;
-+ // Make sure the floating point stack is either empty or has depth items.
-+ DCHECK(depth <= 7);
-+ // This is very expensive.
-+ DCHECK(FLAG_debug_code && FLAG_enable_slow_asserts);
-+
-+ // The top-of-stack (tos) is 7 if there is one item pushed.
-+ int tos = (8 - depth) % 8;
-+ const int kTopMask = 0x3800;
-+ push(eax);
-+ fwait();
-+ fnstsw_ax();
-+ and_(eax, kTopMask);
-+ shr(eax, 11);
-+ cmp(eax, Immediate(tos));
-+ Check(equal, kUnexpectedFPUStackDepthAfterInstruction);
-+ fnclex();
-+ pop(eax);
-+}
-+
-+
-+void MacroAssembler::Drop(int stack_elements) {
-+ if (stack_elements > 0) {
-+ add(esp, Immediate(stack_elements * kPointerSize));
-+ }
-+}
-+
-+
-+void TurboAssembler::Move(Register dst, Register src) {
-+ if (!dst.is(src)) {
-+ mov(dst, src);
-+ }
-+}
-+
-+
-+void TurboAssembler::Move(Register dst, const Immediate& x) {
-+ if (!x.is_heap_object_request() && x.is_zero() &&
-+ RelocInfo::IsNone(x.rmode())) {
-+ xor_(dst, dst); // Shorter than mov of 32-bit immediate 0.
-+ } else {
-+ mov(dst, x);
-+ }
-+}
-+
-+
-+void TurboAssembler::Move(const Operand& dst, const Immediate& x) {
-+ mov(dst, x);
-+}
-+
-+
-+void TurboAssembler::Move(Register dst, Handle<HeapObject> object) {
-+ mov(dst, object);
-+}
-+
-+
-+void TurboAssembler::Lzcnt(Register dst, const Operand& src) {
-+ // TODO(intel): Add support for LZCNT (with ABM/BMI1).
-+ Label not_zero_src;
-+ bsr(dst, src);
-+ j(not_zero, &not_zero_src, Label::kNear);
-+ Move(dst, Immediate(63)); // 63^31 == 32
-+ bind(&not_zero_src);
-+ xor_(dst, Immediate(31)); // for x in [0..31], 31^x == 31-x.
-+}
-+
-+
-+void TurboAssembler::Tzcnt(Register dst, const Operand& src) {
-+ // TODO(intel): Add support for TZCNT (with ABM/BMI1).
-+ Label not_zero_src;
-+ bsf(dst, src);
-+ j(not_zero, &not_zero_src, Label::kNear);
-+ Move(dst, Immediate(32)); // The result of tzcnt is 32 if src = 0.
-+ bind(&not_zero_src);
-+}
-+
-+
-+void TurboAssembler::Popcnt(Register dst, const Operand& src) {
-+ // TODO(intel): Add support for POPCNT (with POPCNT)
-+ // if (CpuFeatures::IsSupported(POPCNT)) {
-+ // CpuFeatureScope scope(this, POPCNT);
-+ // popcnt(dst, src);
-+ // return;
-+ // }
-+ UNREACHABLE();
-+}
-+
-+
-+void MacroAssembler::SetCounter(StatsCounter* counter, int value) {
-+ if (FLAG_native_code_counters && counter->Enabled()) {
-+ mov(Operand::StaticVariable(ExternalReference(counter)), Immediate(value));
-+ }
-+}
-+
-+
-+void MacroAssembler::IncrementCounter(StatsCounter* counter, int value) {
-+ DCHECK(value > 0);
-+ if (FLAG_native_code_counters && counter->Enabled()) {
-+ Operand operand = Operand::StaticVariable(ExternalReference(counter));
-+ if (value == 1) {
-+ inc(operand);
-+ } else {
-+ add(operand, Immediate(value));
-+ }
-+ }
-+}
-+
-+
-+void MacroAssembler::DecrementCounter(StatsCounter* counter, int value) {
-+ DCHECK(value > 0);
-+ if (FLAG_native_code_counters && counter->Enabled()) {
-+ Operand operand = Operand::StaticVariable(ExternalReference(counter));
-+ if (value == 1) {
-+ dec(operand);
-+ } else {
-+ sub(operand, Immediate(value));
-+ }
-+ }
-+}
-+
-+
-+void MacroAssembler::IncrementCounter(Condition cc,
-+ StatsCounter* counter,
-+ int value) {
-+ DCHECK(value > 0);
-+ if (FLAG_native_code_counters && counter->Enabled()) {
-+ Label skip;
-+ j(NegateCondition(cc), &skip);
-+ pushfd();
-+ IncrementCounter(counter, value);
-+ popfd();
-+ bind(&skip);
-+ }
-+}
-+
-+
-+void MacroAssembler::DecrementCounter(Condition cc,
-+ StatsCounter* counter,
-+ int value) {
-+ DCHECK(value > 0);
-+ if (FLAG_native_code_counters && counter->Enabled()) {
-+ Label skip;
-+ j(NegateCondition(cc), &skip);
-+ pushfd();
-+ DecrementCounter(counter, value);
-+ popfd();
-+ bind(&skip);
-+ }
-+}
-+
-+
-+void TurboAssembler::Assert(Condition cc, BailoutReason reason) {
-+ if (emit_debug_code()) Check(cc, reason);
-+}
-+
-+void TurboAssembler::AssertUnreachable(BailoutReason reason) {
-+ if (emit_debug_code()) Abort(reason);
-+}
-+
-+
-+
-+void TurboAssembler::Check(Condition cc, BailoutReason reason) {
-+ Label L;
-+ j(cc, &L);
-+ Abort(reason);
-+ // will not return here
-+ bind(&L);
-+}
-+
-+
-+void TurboAssembler::CheckStackAlignment() {
-+ int frame_alignment = base::OS::ActivationFrameAlignment();
-+ int frame_alignment_mask = frame_alignment - 1;
-+ if (frame_alignment > kPointerSize) {
-+ DCHECK(base::bits::IsPowerOfTwo(frame_alignment));
-+ Label alignment_as_expected;
-+ test(esp, Immediate(frame_alignment_mask));
-+ j(zero, &alignment_as_expected);
-+ // Abort if stack is not aligned.
-+ int3();
-+ bind(&alignment_as_expected);
-+ }
-+}
-+
-+
-+void TurboAssembler::Abort(BailoutReason reason) {
-+#ifdef DEBUG
-+ const char* msg = GetBailoutReason(reason);
-+ if (msg != NULL) {
-+ RecordComment("Abort message: ");
-+ RecordComment(msg);
-+ }
-+
-+ if (FLAG_trap_on_abort) {
-+ int3();
-+ return;
-+ }
-+#endif
-+
-+ Move(edx, Smi::FromInt(static_cast<int>(reason)));
-+
-+ // Disable stub call restrictions to always allow calls to abort.
-+ if (!has_frame()) {
-+ // We don't actually want to generate a pile of code for this, so just
-+ // claim there is a stack frame, without generating one.
-+ FrameScope scope(this, StackFrame::NONE);
-+ Call(isolate()->builtins()->Abort(), RelocInfo::CODE_TARGET);
-+ } else {
-+ Call(isolate()->builtins()->Abort(), RelocInfo::CODE_TARGET);
-+ }
-+ // will not return here
-+ int3();
-+}
-+
-+
-+void MacroAssembler::LoadInstanceDescriptors(Register map,
-+ Register descriptors) {
-+ mov(descriptors, FieldOperand(map, Map::kDescriptorsOffset));
-+}
-+
-+
-+void MacroAssembler::NumberOfOwnDescriptors(Register dst, Register map) {
-+ mov(dst, FieldOperand(map, Map::kBitField3Offset));
-+ DecodeField<Map::NumberOfOwnDescriptorsBits>(dst);
-+}
-+
-+
-+void MacroAssembler::LoadAccessor(Register dst, Register holder,
-+ int accessor_index,
-+ AccessorComponent accessor) {
-+ mov(dst, FieldOperand(holder, HeapObject::kMapOffset));
-+ LoadInstanceDescriptors(dst, dst);
-+ mov(dst, FieldOperand(dst, DescriptorArray::GetValueOffset(accessor_index)));
-+ int offset = accessor == ACCESSOR_GETTER ? AccessorPair::kGetterOffset
-+ : AccessorPair::kSetterOffset;
-+ mov(dst, FieldOperand(dst, offset));
-+}
-+
-+void MacroAssembler::JumpIfNotBothSequentialOneByteStrings(Register object1,
-+ Register object2,
-+ Register scratch1,
-+ Register scratch2,
-+ Label* failure) {
-+ // Check that both objects are not smis.
-+ STATIC_ASSERT(kSmiTag == 0);
-+ mov(scratch1, object1);
-+ and_(scratch1, object2);
-+ JumpIfSmi(scratch1, failure);
-+
-+ // Load instance type for both strings.
-+ mov(scratch1, FieldOperand(object1, HeapObject::kMapOffset));
-+ mov(scratch2, FieldOperand(object2, HeapObject::kMapOffset));
-+ movzx_b(scratch1, FieldOperand(scratch1, Map::kInstanceTypeOffset));
-+ movzx_b(scratch2, FieldOperand(scratch2, Map::kInstanceTypeOffset));
-+
-+ // Check that both are flat one-byte strings.
-+ const int kFlatOneByteStringMask =
-+ kIsNotStringMask | kStringRepresentationMask | kStringEncodingMask;
-+ const int kFlatOneByteStringTag =
-+ kStringTag | kOneByteStringTag | kSeqStringTag;
-+ // Interleave bits from both instance types and compare them in one check.
-+ const int kShift = 8;
-+ DCHECK_EQ(0, kFlatOneByteStringMask & (kFlatOneByteStringMask << kShift));
-+ and_(scratch1, kFlatOneByteStringMask);
-+ and_(scratch2, kFlatOneByteStringMask);
-+ shl(scratch2, kShift);
-+ or_(scratch1, scratch2);
-+ cmp(scratch1, kFlatOneByteStringTag | (kFlatOneByteStringTag << kShift));
-+ j(not_equal, failure);
-+}
-+
-+
-+void MacroAssembler::JumpIfNotUniqueNameInstanceType(Operand operand,
-+ Label* not_unique_name,
-+ Label::Distance distance) {
-+ STATIC_ASSERT(kInternalizedTag == 0 && kStringTag == 0);
-+ Label succeed;
-+ test(operand, Immediate(kIsNotStringMask | kIsNotInternalizedMask));
-+ j(zero, &succeed);
-+ cmpb(operand, Immediate(SYMBOL_TYPE));
-+ j(not_equal, not_unique_name, distance);
-+
-+ bind(&succeed);
-+}
-+
-+
-+void MacroAssembler::EmitSeqStringSetCharCheck(Register string,
-+ Register index,
-+ Register value,
-+ uint32_t encoding_mask) {
-+ Label is_object;
-+ JumpIfNotSmi(string, &is_object, Label::kNear);
-+ Abort(kNonObject);
-+ bind(&is_object);
-+
-+ push(value);
-+ mov(value, FieldOperand(string, HeapObject::kMapOffset));
-+ movzx_b(value, FieldOperand(value, Map::kInstanceTypeOffset));
-+
-+ and_(value, Immediate(kStringRepresentationMask | kStringEncodingMask));
-+ cmp(value, Immediate(encoding_mask));
-+ pop(value);
-+ Check(equal, kUnexpectedStringType);
-+
-+ // The index is assumed to be untagged coming in, tag it to compare with the
-+ // string length without using a temp register, it is restored at the end of
-+ // this function.
-+ SmiTag(index);
-+ Check(no_overflow, kIndexIsTooLarge);
-+
-+ cmp(index, FieldOperand(string, String::kLengthOffset));
-+ Check(less, kIndexIsTooLarge);
-+
-+ cmp(index, Immediate(Smi::kZero));
-+ Check(greater_equal, kIndexIsNegative);
-+
-+ // Restore the index
-+ SmiUntag(index);
-+}
-+
-+
-+void TurboAssembler::PrepareCallCFunction(int num_arguments, Register scratch) {
-+ int frame_alignment = base::OS::ActivationFrameAlignment();
-+ if (frame_alignment != 0) {
-+ // Make stack end at alignment and make room for num_arguments words
-+ // and the original value of esp.
-+ mov(scratch, esp);
-+ sub(esp, Immediate((num_arguments + 1) * kPointerSize));
-+ DCHECK(base::bits::IsPowerOfTwo(frame_alignment));
-+ and_(esp, -frame_alignment);
-+ mov(Operand(esp, num_arguments * kPointerSize), scratch);
-+ } else {
-+ sub(esp, Immediate(num_arguments * kPointerSize));
-+ }
-+}
-+
-+
-+void TurboAssembler::CallCFunction(ExternalReference function,
-+ int num_arguments) {
-+ // Trashing eax is ok as it will be the return value.
-+ mov(eax, Immediate(function));
-+ CallCFunction(eax, num_arguments);
-+}
-+
-+
-+void TurboAssembler::CallCFunction(Register function, int num_arguments) {
-+ DCHECK(has_frame());
-+ // Check stack alignment.
-+ if (emit_debug_code()) {
-+ CheckStackAlignment();
-+ }
-+
-+ call(function);
-+ if (base::OS::ActivationFrameAlignment() != 0) {
-+ mov(esp, Operand(esp, num_arguments * kPointerSize));
-+ } else {
-+ add(esp, Immediate(num_arguments * kPointerSize));
-+ }
-+}
-+
-+
-+#ifdef DEBUG
-+bool AreAliased(Register reg1,
-+ Register reg2,
-+ Register reg3,
-+ Register reg4,
-+ Register reg5,
-+ Register reg6,
-+ Register reg7,
-+ Register reg8) {
-+ int n_of_valid_regs = reg1.is_valid() + reg2.is_valid() +
-+ reg3.is_valid() + reg4.is_valid() + reg5.is_valid() + reg6.is_valid() +
-+ reg7.is_valid() + reg8.is_valid();
-+
-+ RegList regs = 0;
-+ if (reg1.is_valid()) regs |= reg1.bit();
-+ if (reg2.is_valid()) regs |= reg2.bit();
-+ if (reg3.is_valid()) regs |= reg3.bit();
-+ if (reg4.is_valid()) regs |= reg4.bit();
-+ if (reg5.is_valid()) regs |= reg5.bit();
-+ if (reg6.is_valid()) regs |= reg6.bit();
-+ if (reg7.is_valid()) regs |= reg7.bit();
-+ if (reg8.is_valid()) regs |= reg8.bit();
-+ int n_of_non_aliasing_regs = NumRegs(regs);
-+
-+ return n_of_valid_regs != n_of_non_aliasing_regs;
-+}
-+#endif
-+
-+
-+CodePatcher::CodePatcher(Isolate* isolate, byte* address, int size)
-+ : address_(address),
-+ size_(size),
-+ masm_(isolate, address, size + Assembler::kGap, CodeObjectRequired::kNo) {
-+ // Create a new macro assembler pointing to the address of the code to patch.
-+ // The size is adjusted with kGap on order for the assembler to generate size
-+ // bytes of instructions without failing with buffer size constraints.
-+ DCHECK(masm_.reloc_info_writer.pos() == address_ + size_ + Assembler::kGap);
-+}
-+
-+
-+CodePatcher::~CodePatcher() {
-+ // Indicate that code has changed.
-+ Assembler::FlushICache(masm_.isolate(), address_, size_);
-+
-+ // Check that the code was patched as expected.
-+ DCHECK(masm_.pc_ == address_ + size_);
-+ DCHECK(masm_.reloc_info_writer.pos() == address_ + size_ + Assembler::kGap);
-+}
-+
-+
-+void TurboAssembler::CheckPageFlag(Register object, Register scratch, int mask,
-+ Condition cc, Label* condition_met,
-+ Label::Distance condition_met_distance) {
-+ DCHECK(cc == zero || cc == not_zero);
-+ if (scratch.is(object)) {
-+ and_(scratch, Immediate(~Page::kPageAlignmentMask));
-+ } else {
-+ mov(scratch, Immediate(~Page::kPageAlignmentMask));
-+ and_(scratch, object);
-+ }
-+ if (mask < (1 << kBitsPerByte)) {
-+ test_b(Operand(scratch, MemoryChunk::kFlagsOffset), Immediate(mask));
-+ } else {
-+ test(Operand(scratch, MemoryChunk::kFlagsOffset), Immediate(mask));
-+ }
-+ j(cc, condition_met, condition_met_distance);
-+}
-+
-+
-+void MacroAssembler::CheckPageFlagForMap(
-+ Handle<Map> map,
-+ int mask,
-+ Condition cc,
-+ Label* condition_met,
-+ Label::Distance condition_met_distance) {
-+ DCHECK(cc == zero || cc == not_zero);
-+ Page* page = Page::FromAddress(map->address());
-+ DCHECK(!serializer_enabled()); // Serializer cannot match page_flags.
-+ ExternalReference reference(ExternalReference::page_flags(page));
-+ // The inlined static address check of the page's flags relies
-+ // on maps never being compacted.
-+ DCHECK(!isolate()->heap()->mark_compact_collector()->
-+ IsOnEvacuationCandidate(*map));
-+ if (mask < (1 << kBitsPerByte)) {
-+ test_b(Operand::StaticVariable(reference), Immediate(mask));
-+ } else {
-+ test(Operand::StaticVariable(reference), Immediate(mask));
-+ }
-+ j(cc, condition_met, condition_met_distance);
-+}
-+
-+
-+void MacroAssembler::JumpIfBlack(Register object,
-+ Register scratch0,
-+ Register scratch1,
-+ Label* on_black,
-+ Label::Distance on_black_near) {
-+ HasColor(object, scratch0, scratch1, on_black, on_black_near, 1,
-+ 1); // kBlackBitPattern.
-+ DCHECK(strcmp(Marking::kBlackBitPattern, "11") == 0);
-+}
-+
-+
-+void MacroAssembler::HasColor(Register object,
-+ Register bitmap_scratch,
-+ Register mask_scratch,
-+ Label* has_color,
-+ Label::Distance has_color_distance,
-+ int first_bit,
-+ int second_bit) {
-+ DCHECK(!AreAliased(object, bitmap_scratch, mask_scratch, ecx));
-+
-+ GetMarkBits(object, bitmap_scratch, mask_scratch);
-+
-+ Label other_color, word_boundary;
-+ test(mask_scratch, Operand(bitmap_scratch, MemoryChunk::kHeaderSize));
-+ j(first_bit == 1 ? zero : not_zero, &other_color, Label::kNear);
-+ add(mask_scratch, mask_scratch); // Shift left 1 by adding.
-+ j(zero, &word_boundary, Label::kNear);
-+ test(mask_scratch, Operand(bitmap_scratch, MemoryChunk::kHeaderSize));
-+ j(second_bit == 1 ? not_zero : zero, has_color, has_color_distance);
-+ jmp(&other_color, Label::kNear);
-+
-+ bind(&word_boundary);
-+ test_b(Operand(bitmap_scratch, MemoryChunk::kHeaderSize + kPointerSize),
-+ Immediate(1));
-+
-+ j(second_bit == 1 ? not_zero : zero, has_color, has_color_distance);
-+ bind(&other_color);
-+}
-+
-+
-+void MacroAssembler::GetMarkBits(Register addr_reg,
-+ Register bitmap_reg,
-+ Register mask_reg) {
-+ DCHECK(!AreAliased(addr_reg, mask_reg, bitmap_reg, ecx));
-+ mov(bitmap_reg, Immediate(~Page::kPageAlignmentMask));
-+ and_(bitmap_reg, addr_reg);
-+ mov(ecx, addr_reg);
-+ int shift =
-+ Bitmap::kBitsPerCellLog2 + kPointerSizeLog2 - Bitmap::kBytesPerCellLog2;
-+ shr(ecx, shift);
-+ and_(ecx,
-+ (Page::kPageAlignmentMask >> shift) & ~(Bitmap::kBytesPerCell - 1));
-+
-+ add(bitmap_reg, ecx);
-+ mov(ecx, addr_reg);
-+ shr(ecx, kPointerSizeLog2);
-+ and_(ecx, (1 << Bitmap::kBitsPerCellLog2) - 1);
-+ mov(mask_reg, Immediate(1));
-+ shl_cl(mask_reg);
-+}
-+
-+
-+void MacroAssembler::JumpIfWhite(Register value, Register bitmap_scratch,
-+ Register mask_scratch, Label* value_is_white,
-+ Label::Distance distance) {
-+ DCHECK(!AreAliased(value, bitmap_scratch, mask_scratch, ecx));
-+ GetMarkBits(value, bitmap_scratch, mask_scratch);
-+
-+ // If the value is black or grey we don't need to do anything.
-+ DCHECK(strcmp(Marking::kWhiteBitPattern, "00") == 0);
-+ DCHECK(strcmp(Marking::kBlackBitPattern, "11") == 0);
-+ DCHECK(strcmp(Marking::kGreyBitPattern, "10") == 0);
-+ DCHECK(strcmp(Marking::kImpossibleBitPattern, "01") == 0);
-+
-+ // Since both black and grey have a 1 in the first position and white does
-+ // not have a 1 there we only need to check one bit.
-+ test(mask_scratch, Operand(bitmap_scratch, MemoryChunk::kHeaderSize));
-+ j(zero, value_is_white, Label::kNear);
-+}
-+
-+
-+void MacroAssembler::EnumLength(Register dst, Register map) {
-+ STATIC_ASSERT(Map::EnumLengthBits::kShift == 0);
-+ mov(dst, FieldOperand(map, Map::kBitField3Offset));
-+ and_(dst, Immediate(Map::EnumLengthBits::kMask));
-+ SmiTag(dst);
-+}
-+
-+
-+void MacroAssembler::CheckEnumCache(Label* call_runtime) {
-+ Label next, start;
-+ mov(ecx, eax);
-+
-+ // Check if the enum length field is properly initialized, indicating that
-+ // there is an enum cache.
-+ mov(ebx, FieldOperand(ecx, HeapObject::kMapOffset));
-+
-+ EnumLength(edx, ebx);
-+ cmp(edx, Immediate(Smi::FromInt(kInvalidEnumCacheSentinel)));
-+ j(equal, call_runtime);
-+
-+ jmp(&start);
-+
-+ bind(&next);
-+ mov(ebx, FieldOperand(ecx, HeapObject::kMapOffset));
-+
-+ // For all objects but the receiver, check that the cache is empty.
-+ EnumLength(edx, ebx);
-+ cmp(edx, Immediate(Smi::kZero));
-+ j(not_equal, call_runtime);
-+
-+ bind(&start);
-+
-+ // Check that there are no elements. Register rcx contains the current JS
-+ // object we've reached through the prototype chain.
-+ Label no_elements;
-+ mov(ecx, FieldOperand(ecx, JSObject::kElementsOffset));
-+ cmp(ecx, isolate()->factory()->empty_fixed_array());
-+ j(equal, &no_elements);
-+
-+ // Second chance, the object may be using the empty slow element dictionary.
-+ cmp(ecx, isolate()->factory()->empty_slow_element_dictionary());
-+ j(not_equal, call_runtime);
-+
-+ bind(&no_elements);
-+ mov(ecx, FieldOperand(ebx, Map::kPrototypeOffset));
-+ cmp(ecx, isolate()->factory()->null_value());
-+ j(not_equal, &next);
-+}
-+
-+
-+void MacroAssembler::TestJSArrayForAllocationMemento(
-+ Register receiver_reg,
-+ Register scratch_reg,
-+ Label* no_memento_found) {
-+ Label map_check;
-+ Label top_check;
-+ ExternalReference new_space_allocation_top =
-+ ExternalReference::new_space_allocation_top_address(isolate());
-+ const int kMementoMapOffset = JSArray::kSize - kHeapObjectTag;
-+ const int kMementoLastWordOffset =
-+ kMementoMapOffset + AllocationMemento::kSize - kPointerSize;
-+
-+ // Bail out if the object is not in new space.
-+ JumpIfNotInNewSpace(receiver_reg, scratch_reg, no_memento_found);
-+ // If the object is in new space, we need to check whether it is on the same
-+ // page as the current top.
-+ lea(scratch_reg, Operand(receiver_reg, kMementoLastWordOffset));
-+ xor_(scratch_reg, Operand::StaticVariable(new_space_allocation_top));
-+ test(scratch_reg, Immediate(~Page::kPageAlignmentMask));
-+ j(zero, &top_check);
-+ // The object is on a different page than allocation top. Bail out if the
-+ // object sits on the page boundary as no memento can follow and we cannot
-+ // touch the memory following it.
-+ lea(scratch_reg, Operand(receiver_reg, kMementoLastWordOffset));
-+ xor_(scratch_reg, receiver_reg);
-+ test(scratch_reg, Immediate(~Page::kPageAlignmentMask));
-+ j(not_zero, no_memento_found);
-+ // Continue with the actual map check.
-+ jmp(&map_check);
-+ // If top is on the same page as the current object, we need to check whether
-+ // we are below top.
-+ bind(&top_check);
-+ lea(scratch_reg, Operand(receiver_reg, kMementoLastWordOffset));
-+ cmp(scratch_reg, Operand::StaticVariable(new_space_allocation_top));
-+ j(greater_equal, no_memento_found);
-+ // Memento map check.
-+ bind(&map_check);
-+ mov(scratch_reg, Operand(receiver_reg, kMementoMapOffset));
-+ cmp(scratch_reg, Immediate(isolate()->factory()->allocation_memento_map()));
-+}
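The `xor`/`test` sequence above is the standard same-page check: `(a ^ b) & ~kPageAlignmentMask == 0` exactly when `a` and `b` share all page-number bits. A minimal Python sketch, with an assumed page size (V8's actual page size differs per build):

```python
PAGE_SIZE = 1 << 19                 # assumed page size for this sketch
PAGE_ALIGNMENT_MASK = PAGE_SIZE - 1

def on_same_page(a, b):
    # xor cancels the bits the two addresses share; masking off the
    # within-page bits leaves zero iff the page numbers are identical.
    return ((a ^ b) & ~PAGE_ALIGNMENT_MASK) == 0

assert on_same_page(0x80000, 0x80010)       # same page
assert not on_same_page(0x7FFF0, 0x80010)   # straddles a page boundary
```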
-+
-+void MacroAssembler::TruncatingDiv(Register dividend, int32_t divisor) {
-+ DCHECK(!dividend.is(eax));
-+ DCHECK(!dividend.is(edx));
-+ base::MagicNumbersForDivision<uint32_t> mag =
-+ base::SignedDivisionByConstant(static_cast<uint32_t>(divisor));
-+ mov(eax, Immediate(mag.multiplier));
-+ imul(dividend);
-+ bool neg = (mag.multiplier & (static_cast<uint32_t>(1) << 31)) != 0;
-+ if (divisor > 0 && neg) add(edx, dividend);
-+ if (divisor < 0 && !neg && mag.multiplier > 0) sub(edx, dividend);
-+ if (mag.shift > 0) sar(edx, mag.shift);
-+ mov(eax, dividend);
-+ shr(eax, 31);
-+ add(edx, eax);
-+}
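TruncatingDiv replaces a division by a constant with a multiply-high plus fix-ups. A Python sketch for the concrete case divisor = 3, whose well-known magic multiplier is 0x55555556 with shift 0 (the constants come from the signed-division-by-constant algorithm that `base::SignedDivisionByConstant` implements); the multiplier is positive and the shift is zero, so the `add`/`sub`/`sar` corrections in the emitted code all drop out here:

```python
MAGIC_FOR_3 = 0x55555556  # magic multiplier for signed division by 3

def truncating_div3(n):
    """Truncating n / 3 for a 32-bit signed n, mirroring the emitted code."""
    assert -2**31 <= n < 2**31
    hi = (MAGIC_FOR_3 * n) >> 32    # imul: keep the high half (edx)
    return hi + ((n >> 31) & 1)     # shr eax, 31; add edx, eax

assert truncating_div3(7) == 2
assert truncating_div3(-7) == -2    # truncates toward zero, not -3
```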
-+
-+
-+} // namespace internal
-+} // namespace v8
-+
-+#endif // V8_TARGET_ARCH_X87
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/x87/macro-assembler-x87.h qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/x87/macro-assembler-x87.h
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/x87/macro-assembler-x87.h	1970-01-01 01:00:00.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/x87/macro-assembler-x87.h	2018-02-18 19:00:54.200418105 +0100
-@@ -0,0 +1,923 @@
-+// Copyright 2012 the V8 project authors. All rights reserved.
-+// Use of this source code is governed by a BSD-style license that can be
-+// found in the LICENSE file.
-+
-+#ifndef V8_X87_MACRO_ASSEMBLER_X87_H_
-+#define V8_X87_MACRO_ASSEMBLER_X87_H_
-+
-+#include "src/assembler.h"
-+#include "src/bailout-reason.h"
-+#include "src/frames.h"
-+#include "src/globals.h"
-+
-+namespace v8 {
-+namespace internal {
-+
-+// Give alias names to registers for calling conventions.
-+const Register kReturnRegister0 = {Register::kCode_eax};
-+const Register kReturnRegister1 = {Register::kCode_edx};
-+const Register kReturnRegister2 = {Register::kCode_edi};
-+const Register kJSFunctionRegister = {Register::kCode_edi};
-+const Register kContextRegister = {Register::kCode_esi};
-+const Register kAllocateSizeRegister = {Register::kCode_edx};
-+const Register kInterpreterAccumulatorRegister = {Register::kCode_eax};
-+const Register kInterpreterBytecodeOffsetRegister = {Register::kCode_ecx};
-+const Register kInterpreterBytecodeArrayRegister = {Register::kCode_edi};
-+const Register kInterpreterDispatchTableRegister = {Register::kCode_esi};
-+const Register kJavaScriptCallArgCountRegister = {Register::kCode_eax};
-+const Register kJavaScriptCallNewTargetRegister = {Register::kCode_edx};
-+const Register kRuntimeCallFunctionRegister = {Register::kCode_ebx};
-+const Register kRuntimeCallArgCountRegister = {Register::kCode_eax};
-+
-+// Spill slots used by interpreter dispatch calling convention.
-+const int kInterpreterDispatchTableSpillSlot = -1;
-+
-+// Convenience for platform-independent signatures. We do not normally
-+// distinguish memory operands from other operands on ia32.
-+typedef Operand MemOperand;
-+
-+enum RememberedSetAction { EMIT_REMEMBERED_SET, OMIT_REMEMBERED_SET };
-+enum SmiCheck { INLINE_SMI_CHECK, OMIT_SMI_CHECK };
-+enum PointersToHereCheck {
-+ kPointersToHereMaybeInteresting,
-+ kPointersToHereAreAlwaysInteresting
-+};
-+
-+enum RegisterValueType { REGISTER_VALUE_IS_SMI, REGISTER_VALUE_IS_INT32 };
-+
-+enum class ReturnAddressState { kOnStack, kNotOnStack };
-+
-+#ifdef DEBUG
-+bool AreAliased(Register reg1, Register reg2, Register reg3 = no_reg,
-+ Register reg4 = no_reg, Register reg5 = no_reg,
-+ Register reg6 = no_reg, Register reg7 = no_reg,
-+ Register reg8 = no_reg);
-+#endif
-+
-+class TurboAssembler: public Assembler {
-+ public:
-+ TurboAssembler(Isolate* isolate, void* buffer, int buffer_size,
-+ CodeObjectRequired create_code_object)
-+ : Assembler(isolate, buffer, buffer_size), isolate_(isolate) {
-+ if (create_code_object == CodeObjectRequired::kYes) {
-+ code_object_ =
-+ Handle<HeapObject>::New(isolate->heap()->undefined_value(), isolate);
-+ }
-+ }
-+
-+ void set_has_frame(bool value) { has_frame_ = value; }
-+ bool has_frame() { return has_frame_; }
-+
-+ Isolate* isolate() const { return isolate_; }
-+
-+ Handle<HeapObject> CodeObject() {
-+ DCHECK(!code_object_.is_null());
-+ return code_object_;
-+ }
-+
-+ void CheckPageFlag(Register object, Register scratch, int mask, Condition cc,
-+ Label* condition_met,
-+ Label::Distance condition_met_distance = Label::kFar);
-+
-+ // Activation support.
-+ void EnterFrame(StackFrame::Type type);
-+ void EnterFrame(StackFrame::Type type, bool load_constant_pool_pointer_reg) {
-+ // Out-of-line constant pool not implemented on x87.
-+ UNREACHABLE();
-+ }
-+ void LeaveFrame(StackFrame::Type type);
-+
-+ // Print a message to stdout and abort execution.
-+ void Abort(BailoutReason reason);
-+
-+ // Calls Abort(msg) if the condition cc is not satisfied.
-+ // Use --debug_code to enable.
-+ void Assert(Condition cc, BailoutReason reason);
-+
-+ // Like Assert(), but without condition.
-+ // Use --debug_code to enable.
-+ void AssertUnreachable(BailoutReason reason);
-+
-+ // Like Assert(), but always enabled.
-+ void Check(Condition cc, BailoutReason reason);
-+
-+ // Check that the stack is aligned.
-+ void CheckStackAlignment();
-+
-+ // Nop, because x87 does not have a root register.
-+ void InitializeRootRegister() {}
-+
-+ // Move a constant into a destination using the most efficient encoding.
-+ void Move(Register dst, const Immediate& x);
-+
-+ void Move(Register dst, Smi* source) { Move(dst, Immediate(source)); }
-+
-+ // Move if the registers are not identical.
-+ void Move(Register target, Register source);
-+
-+ void Move(const Operand& dst, const Immediate& x);
-+
-+ void Move(Register dst, Handle<HeapObject> handle);
-+
-+ void Call(Handle<Code> target, RelocInfo::Mode rmode) { call(target, rmode); }
-+ void Call(Label* target) { call(target); }
-+
-+ inline bool AllowThisStubCall(CodeStub* stub);
-+ void CallStubDelayed(CodeStub* stub);
-+
-+ void CallRuntimeDelayed(Zone* zone, Runtime::FunctionId fid,
-+ SaveFPRegsMode save_doubles = kDontSaveFPRegs);
-+
-+ // Jump if the register contains a smi.
-+ inline void JumpIfSmi(Register value, Label* smi_label,
-+ Label::Distance distance = Label::kFar) {
-+ test(value, Immediate(kSmiTagMask));
-+ j(zero, smi_label, distance);
-+ }
-+ // Jump if the operand is a smi.
-+ inline void JumpIfSmi(Operand value, Label* smi_label,
-+ Label::Distance distance = Label::kFar) {
-+ test(value, Immediate(kSmiTagMask));
-+ j(zero, smi_label, distance);
-+ }
-+
-+ void SmiUntag(Register reg) { sar(reg, kSmiTagSize); }
-+
-+ // Removes current frame and its arguments from the stack preserving
-+ // the arguments and a return address pushed to the stack for the next call.
-+ // |ra_state| defines whether return address is already pushed to stack or
-+ // not. Both |callee_args_count| and |caller_args_count_reg| do not include
-+ // receiver. |callee_args_count| is not modified, |caller_args_count_reg|
-+ // is trashed. |number_of_temp_values_after_return_address| specifies
-+ // the number of words pushed to the stack after the return address. This is
-+ // to allow "allocation" of scratch registers that this function requires
-+ // by saving their values on the stack.
-+ void PrepareForTailCall(const ParameterCount& callee_args_count,
-+ Register caller_args_count_reg, Register scratch0,
-+ Register scratch1, ReturnAddressState ra_state,
-+ int number_of_temp_values_after_return_address);
-+
-+ // Before calling a C-function from generated code, align arguments on stack.
-+ // After aligning the frame, arguments must be stored in esp[0], esp[4],
-+ // etc., not pushed. The argument count assumes all arguments are word sized.
-+ // Some compilers/platforms require the stack to be aligned when calling
-+ // C++ code.
-+ // Needs a scratch register to do some arithmetic. This register will be
-+ // trashed.
-+ void PrepareCallCFunction(int num_arguments, Register scratch);
-+
-+ // Calls a C function and cleans up the space for arguments allocated
-+ // by PrepareCallCFunction. The called function is not allowed to trigger a
-+ // garbage collection, since that might move the code and invalidate the
-+ // return address (unless this is somehow accounted for by the called
-+ // function).
-+ void CallCFunction(ExternalReference function, int num_arguments);
-+ void CallCFunction(Register function, int num_arguments);
-+
-+ void ShlPair(Register high, Register low, uint8_t imm8);
-+ void ShlPair_cl(Register high, Register low);
-+ void ShrPair(Register high, Register low, uint8_t imm8);
-+ void ShrPair_cl(Register high, Register src);
-+ void SarPair(Register high, Register low, uint8_t imm8);
-+ void SarPair_cl(Register high, Register low);
-+
-+ // Generates function and stub prologue code.
-+ void StubPrologue(StackFrame::Type type);
-+ void Prologue(bool code_pre_aging);
-+
-+ void Lzcnt(Register dst, Register src) { Lzcnt(dst, Operand(src)); }
-+ void Lzcnt(Register dst, const Operand& src);
-+
-+ void Tzcnt(Register dst, Register src) { Tzcnt(dst, Operand(src)); }
-+ void Tzcnt(Register dst, const Operand& src);
-+
-+ void Popcnt(Register dst, Register src) { Popcnt(dst, Operand(src)); }
-+ void Popcnt(Register dst, const Operand& src);
-+
-+ void Ret();
-+
-+ // Return and drop arguments from stack, where the number of arguments
-+ // may be bigger than 2^16 - 1. Requires a scratch register.
-+ void Ret(int bytes_dropped, Register scratch);
-+
-+ // Insert code to verify that the x87 stack has the specified depth (0-7)
-+ void VerifyX87StackDepth(uint32_t depth);
-+
-+ void LoadUint32NoSSE2(Register src) {
-+ LoadUint32NoSSE2(Operand(src));
-+ }
-+ void LoadUint32NoSSE2(const Operand& src);
-+
-+ // FCmp is similar to integer cmp, but requires unsigned
-+ // jcc instructions (ja, jae, jb, jbe, je, and jz).
-+ void FCmp();
-+ void X87SetRC(int rc);
-+ void X87SetFPUCW(int cw);
-+
-+ void SlowTruncateToIDelayed(Zone* zone, Register result_reg,
-+ Register input_reg,
-+ int offset = HeapNumber::kValueOffset -
-+ kHeapObjectTag);
-+ void TruncateX87TOSToI(Zone* zone, Register result_reg);
-+
-+ void Push(Register src) { push(src); }
-+ void Push(const Operand& src) { push(src); }
-+ void Push(Immediate value) { push(value); }
-+ void Push(Handle<HeapObject> handle) { push(Immediate(handle)); }
-+ void Push(Smi* smi) { Push(Immediate(smi)); }
-+
-+ private:
-+ bool has_frame_;
-+ Isolate* isolate_;
-+ // This handle will be patched with the code object on installation.
-+ Handle<HeapObject> code_object_;
-+};
-+
-+// MacroAssembler implements a collection of frequently used macros.
-+class MacroAssembler: public TurboAssembler {
-+ public:
-+ MacroAssembler(Isolate* isolate, void* buffer, int size,
-+ CodeObjectRequired create_code_object);
-+
-+ int jit_cookie() const { return jit_cookie_; }
-+
-+ void Load(Register dst, const Operand& src, Representation r);
-+ void Store(Register src, const Operand& dst, Representation r);
-+
-+ // Load a register with a long value as efficiently as possible.
-+ void Set(Register dst, int32_t x) {
-+ if (x == 0) {
-+ xor_(dst, dst);
-+ } else {
-+ mov(dst, Immediate(x));
-+ }
-+ }
-+ void Set(const Operand& dst, int32_t x) { mov(dst, Immediate(x)); }
-+
-+ // Operations on roots in the root-array.
-+ void LoadRoot(Register destination, Heap::RootListIndex index);
-+ void StoreRoot(Register source, Register scratch, Heap::RootListIndex index);
-+ void CompareRoot(Register with, Register scratch, Heap::RootListIndex index);
-+ // These methods can only be used with constant roots (i.e. non-writable
-+ // and not in new space).
-+ void CompareRoot(Register with, Heap::RootListIndex index);
-+ void CompareRoot(const Operand& with, Heap::RootListIndex index);
-+ void PushRoot(Heap::RootListIndex index);
-+
-+ // Compare the object in a register to a value and jump if they are equal.
-+ void JumpIfRoot(Register with, Heap::RootListIndex index, Label* if_equal,
-+ Label::Distance if_equal_distance = Label::kFar) {
-+ CompareRoot(with, index);
-+ j(equal, if_equal, if_equal_distance);
-+ }
-+ void JumpIfRoot(const Operand& with, Heap::RootListIndex index,
-+ Label* if_equal,
-+ Label::Distance if_equal_distance = Label::kFar) {
-+ CompareRoot(with, index);
-+ j(equal, if_equal, if_equal_distance);
-+ }
-+
-+ // Compare the object in a register to a value and jump if they are not equal.
-+ void JumpIfNotRoot(Register with, Heap::RootListIndex index,
-+ Label* if_not_equal,
-+ Label::Distance if_not_equal_distance = Label::kFar) {
-+ CompareRoot(with, index);
-+ j(not_equal, if_not_equal, if_not_equal_distance);
-+ }
-+ void JumpIfNotRoot(const Operand& with, Heap::RootListIndex index,
-+ Label* if_not_equal,
-+ Label::Distance if_not_equal_distance = Label::kFar) {
-+ CompareRoot(with, index);
-+ j(not_equal, if_not_equal, if_not_equal_distance);
-+ }
-+
-+ // These functions do not arrange the registers in any particular order so
-+ // they are not useful for calls that can cause a GC. The caller can
-+ // exclude up to 3 registers that do not need to be saved and restored.
-+ void PushCallerSaved(SaveFPRegsMode fp_mode, Register exclusion1 = no_reg,
-+ Register exclusion2 = no_reg,
-+ Register exclusion3 = no_reg);
-+ void PopCallerSaved(SaveFPRegsMode fp_mode, Register exclusion1 = no_reg,
-+ Register exclusion2 = no_reg,
-+ Register exclusion3 = no_reg);
-+
-+ // ---------------------------------------------------------------------------
-+ // GC Support
-+ enum RememberedSetFinalAction { kReturnAtEnd, kFallThroughAtEnd };
-+
-+ // Record in the remembered set the fact that we have a pointer to new space
-+ // at the address pointed to by the addr register. Only works if addr is not
-+ // in new space.
-+ void RememberedSetHelper(Register object, // Used for debug code.
-+ Register addr, Register scratch,
-+ SaveFPRegsMode save_fp,
-+ RememberedSetFinalAction and_then);
-+
-+ void CheckPageFlagForMap(
-+ Handle<Map> map, int mask, Condition cc, Label* condition_met,
-+ Label::Distance condition_met_distance = Label::kFar);
-+
-+ // Check if object is in new space. Jumps if the object is not in new space.
-+ // The register scratch can be object itself, but scratch will be clobbered.
-+ void JumpIfNotInNewSpace(Register object, Register scratch, Label* branch,
-+ Label::Distance distance = Label::kFar) {
-+ InNewSpace(object, scratch, zero, branch, distance);
-+ }
-+
-+ // Check if object is in new space. Jumps if the object is in new space.
-+ // The register scratch can be object itself, but it will be clobbered.
-+ void JumpIfInNewSpace(Register object, Register scratch, Label* branch,
-+ Label::Distance distance = Label::kFar) {
-+ InNewSpace(object, scratch, not_zero, branch, distance);
-+ }
-+
-+ // Check if an object has a given incremental marking color. Also uses ecx!
-+ void HasColor(Register object, Register scratch0, Register scratch1,
-+ Label* has_color, Label::Distance has_color_distance,
-+ int first_bit, int second_bit);
-+
-+ void JumpIfBlack(Register object, Register scratch0, Register scratch1,
-+ Label* on_black,
-+ Label::Distance on_black_distance = Label::kFar);
-+
-+ // Checks the color of an object. If the object is white we jump to the
-+ // incremental marker.
-+ void JumpIfWhite(Register value, Register scratch1, Register scratch2,
-+ Label* value_is_white, Label::Distance distance);
-+
-+ // Notify the garbage collector that we wrote a pointer into an object.
-+ // |object| is the object being stored into, |value| is the object being
-+ // stored. value and scratch registers are clobbered by the operation.
-+ // The offset is the offset from the start of the object, not the offset from
-+ // the tagged HeapObject pointer. For use with FieldOperand(reg, off).
-+ void RecordWriteField(
-+ Register object, int offset, Register value, Register scratch,
-+ SaveFPRegsMode save_fp,
-+ RememberedSetAction remembered_set_action = EMIT_REMEMBERED_SET,
-+ SmiCheck smi_check = INLINE_SMI_CHECK,
-+ PointersToHereCheck pointers_to_here_check_for_value =
-+ kPointersToHereMaybeInteresting);
-+
-+ // As above, but the offset has the tag presubtracted. For use with
-+ // Operand(reg, off).
-+ void RecordWriteContextSlot(
-+ Register context, int offset, Register value, Register scratch,
-+ SaveFPRegsMode save_fp,
-+ RememberedSetAction remembered_set_action = EMIT_REMEMBERED_SET,
-+ SmiCheck smi_check = INLINE_SMI_CHECK,
-+ PointersToHereCheck pointers_to_here_check_for_value =
-+ kPointersToHereMaybeInteresting) {
-+ RecordWriteField(context, offset + kHeapObjectTag, value, scratch, save_fp,
-+ remembered_set_action, smi_check,
-+ pointers_to_here_check_for_value);
-+ }
-+
-+ // For page containing |object| mark region covering |address|
-+ // dirty. |object| is the object being stored into, |value| is the
-+ // object being stored. The address and value registers are clobbered by the
-+ // operation. RecordWrite filters out smis so it does not update the
-+ // write barrier if the value is a smi.
-+ void RecordWrite(
-+ Register object, Register address, Register value, SaveFPRegsMode save_fp,
-+ RememberedSetAction remembered_set_action = EMIT_REMEMBERED_SET,
-+ SmiCheck smi_check = INLINE_SMI_CHECK,
-+ PointersToHereCheck pointers_to_here_check_for_value =
-+ kPointersToHereMaybeInteresting);
-+
-+ // Notify the garbage collector that we wrote a code entry into a
-+ // JSFunction. Only scratch is clobbered by the operation.
-+ void RecordWriteCodeEntryField(Register js_function, Register code_entry,
-+ Register scratch);
-+
-+ // For page containing |object| mark the region covering the object's map
-+ // dirty. |object| is the object being stored into, |map| is the Map object
-+ // that was stored.
-+ void RecordWriteForMap(Register object, Handle<Map> map, Register scratch1,
-+ Register scratch2, SaveFPRegsMode save_fp);
-+
-+ // Frame restart support
-+ void MaybeDropFrames();
-+
-+ // Enter specific kind of exit frame. Expects the number of
-+ // arguments in register eax and sets up the number of arguments in
-+ // register edi and the pointer to the first argument in register
-+ // esi.
-+ void EnterExitFrame(int argc, bool save_doubles, StackFrame::Type frame_type);
-+
-+ void EnterApiExitFrame(int argc);
-+
-+ // Leave the current exit frame. Expects the return value in
-+ // register eax:edx (untouched) and the pointer to the first
-+ // argument in register esi (if pop_arguments == true).
-+ void LeaveExitFrame(bool save_doubles, bool pop_arguments = true);
-+
-+ // Leave the current exit frame. Expects the return value in
-+ // register eax (untouched).
-+ void LeaveApiExitFrame(bool restore_context);
-+
-+ // Find the function context up the context chain.
-+ void LoadContext(Register dst, int context_chain_length);
-+
-+ // Load the global proxy from the current context.
-+ void LoadGlobalProxy(Register dst);
-+
-+ // Load the global function with the given index.
-+ void LoadGlobalFunction(int index, Register function);
-+
-+ // Load the initial map from the global function. The registers
-+ // function and map can be the same.
-+ void LoadGlobalFunctionInitialMap(Register function, Register map);
-+
-+ // Push and pop the registers that can hold pointers.
-+ void PushSafepointRegisters() { pushad(); }
-+ void PopSafepointRegisters() { popad(); }
-+ // Store the value in register/immediate src in the safepoint
-+ // register stack slot for register dst.
-+ void StoreToSafepointRegisterSlot(Register dst, Register src);
-+ void StoreToSafepointRegisterSlot(Register dst, Immediate src);
-+ void LoadFromSafepointRegisterSlot(Register dst, Register src);
-+
-+ void CmpHeapObject(Register reg, Handle<HeapObject> object);
-+ void PushObject(Handle<Object> object);
-+
-+ void CmpObject(Register reg, Handle<Object> object) {
-+ AllowDeferredHandleDereference heap_object_check;
-+ if (object->IsHeapObject()) {
-+ CmpHeapObject(reg, Handle<HeapObject>::cast(object));
-+ } else {
-+ cmp(reg, Immediate(Smi::cast(*object)));
-+ }
-+ }
-+
-+ void GetWeakValue(Register value, Handle<WeakCell> cell);
-+ void LoadWeakValue(Register value, Handle<WeakCell> cell, Label* miss);
-+
-+ // ---------------------------------------------------------------------------
-+ // JavaScript invokes
-+
-+ // Invoke the JavaScript function code by either calling or jumping.
-+
-+ void InvokeFunctionCode(Register function, Register new_target,
-+ const ParameterCount& expected,
-+ const ParameterCount& actual, InvokeFlag flag,
-+ const CallWrapper& call_wrapper);
-+
-+ // On function call, call into the debugger if necessary.
-+ void CheckDebugHook(Register fun, Register new_target,
-+ const ParameterCount& expected,
-+ const ParameterCount& actual);
-+
-+ // Invoke the JavaScript function in the given register. Changes the
-+ // current context to the context in the function before invoking.
-+ void InvokeFunction(Register function, Register new_target,
-+ const ParameterCount& actual, InvokeFlag flag,
-+ const CallWrapper& call_wrapper);
-+
-+ void InvokeFunction(Register function, const ParameterCount& expected,
-+ const ParameterCount& actual, InvokeFlag flag,
-+ const CallWrapper& call_wrapper);
-+
-+ void InvokeFunction(Handle<JSFunction> function,
-+ const ParameterCount& expected,
-+ const ParameterCount& actual, InvokeFlag flag,
-+ const CallWrapper& call_wrapper);
-+
-+ // Support for constant splitting.
-+ bool IsUnsafeImmediate(const Immediate& x);
-+ void SafeMove(Register dst, const Immediate& x);
-+ void SafePush(const Immediate& x);
-+
-+ // Compare object type for heap object.
-+ // Incoming register is heap_object and outgoing register is map.
-+ void CmpObjectType(Register heap_object, InstanceType type, Register map);
-+
-+ // Compare instance type for map.
-+ void CmpInstanceType(Register map, InstanceType type);
-+
-+ // Compare an object's map with the specified map.
-+ void CompareMap(Register obj, Handle<Map> map);
-+
-+ // Check if the map of an object is equal to a specified map and branch to
-+ // label if not. Skip the smi check if not required (object is known to be a
-+ // heap object). If mode is ALLOW_ELEMENT_TRANSITION_MAPS, then also match
-+ // against maps that are ElementsKind transition maps of the specified map.
-+ void CheckMap(Register obj, Handle<Map> map, Label* fail,
-+ SmiCheckType smi_check_type);
-+
-+ // Check if the object in register heap_object is a string. Afterwards the
-+ // register map contains the object map and the register instance_type
-+ // contains the instance_type. The registers map and instance_type can be the
-+ // same in which case it contains the instance type afterwards. Either of the
-+ // registers map and instance_type can be the same as heap_object.
-+ Condition IsObjectStringType(Register heap_object, Register map,
-+ Register instance_type);
-+
-+ void FXamMinusZero();
-+ void FXamSign();
-+ void X87CheckIA();
-+
-+ void ClampUint8(Register reg);
-+ void ClampTOSToUint8(Register result_reg);
-+
-+ void SlowTruncateToI(Register result_reg, Register input_reg,
-+ int offset = HeapNumber::kValueOffset - kHeapObjectTag);
-+
-+ void TruncateHeapNumberToI(Register result_reg, Register input_reg);
-+
-+ void X87TOSToI(Register result_reg, MinusZeroMode minus_zero_mode,
-+ Label* lost_precision, Label* is_nan, Label* minus_zero,
-+ Label::Distance dst = Label::kFar);
-+
-+ // Smi tagging support.
-+ void SmiTag(Register reg) {
-+ STATIC_ASSERT(kSmiTag == 0);
-+ STATIC_ASSERT(kSmiTagSize == 1);
-+ add(reg, reg);
-+ }
-+
-+ // Modifies the register even if it does not contain a Smi!
-+ void UntagSmi(Register reg, Label* is_smi) {
-+ STATIC_ASSERT(kSmiTagSize == 1);
-+ sar(reg, kSmiTagSize);
-+ STATIC_ASSERT(kSmiTag == 0);
-+ j(not_carry, is_smi);
-+ }
-+
-+ // Jump if the register contains a non-smi.
-+ inline void JumpIfNotSmi(Register value, Label* not_smi_label,
-+ Label::Distance distance = Label::kFar) {
-+ test(value, Immediate(kSmiTagMask));
-+ j(not_zero, not_smi_label, distance);
-+ }
-+ // Jump if the operand is not a smi.
-+ inline void JumpIfNotSmi(Operand value, Label* smi_label,
-+ Label::Distance distance = Label::kFar) {
-+ test(value, Immediate(kSmiTagMask));
-+ j(not_zero, smi_label, distance);
-+ }
-+ // Jump if the value cannot be represented by a smi.
-+ inline void JumpIfNotValidSmiValue(Register value, Register scratch,
-+ Label* on_invalid,
-+ Label::Distance distance = Label::kFar) {
-+ mov(scratch, value);
-+ add(scratch, Immediate(0x40000000U));
-+ j(sign, on_invalid, distance);
-+ }
-+
-+ // Jump if the unsigned integer value cannot be represented by a smi.
-+ inline void JumpIfUIntNotValidSmiValue(
-+ Register value, Label* on_invalid,
-+ Label::Distance distance = Label::kFar) {
-+ cmp(value, Immediate(0x40000000U));
-+ j(above_equal, on_invalid, distance);
-+ }
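The smi helpers above all rest on the ia32 tagging scheme: a smi is a 31-bit integer shifted left by one, with tag bit 0. A Python sketch (modeling registers as 32-bit wrapping words) showing SmiTag, the tag test, and the `add 0x40000000; js` range check, which sets the sign bit exactly when the value falls outside [-2^30, 2^30):

```python
def u32(x):
    return x & 0xFFFFFFFF           # model 32-bit wraparound

def smi_tag(v):                     # SmiTag: add reg, reg (shift left 1)
    return u32(v << 1)

def is_smi(word):                   # test value, kSmiTagMask; j(zero, ...)
    return (word & 1) == 0

def fits_smi(v):                    # JumpIfNotValidSmiValue: add then js
    return (u32(v + 0x40000000) & 0x80000000) == 0

assert is_smi(smi_tag(42))
assert fits_smi(2**30 - 1) and fits_smi(-2**30)
assert not fits_smi(2**30) and not fits_smi(-2**30 - 1)
```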
-+
-+ void LoadInstanceDescriptors(Register map, Register descriptors);
-+ void EnumLength(Register dst, Register map);
-+ void NumberOfOwnDescriptors(Register dst, Register map);
-+ void LoadAccessor(Register dst, Register holder, int accessor_index,
-+ AccessorComponent accessor);
-+
-+ template<typename Field>
-+ void DecodeField(Register reg) {
-+ static const int shift = Field::kShift;
-+ static const int mask = Field::kMask >> Field::kShift;
-+ if (shift != 0) {
-+ sar(reg, shift);
-+ }
-+ and_(reg, Immediate(mask));
-+ }
-+
-+ template<typename Field>
-+ void DecodeFieldToSmi(Register reg) {
-+ static const int shift = Field::kShift;
-+ static const int mask = (Field::kMask >> Field::kShift) << kSmiTagSize;
-+ STATIC_ASSERT((mask & (0x80000000u >> (kSmiTagSize - 1))) == 0);
-+ STATIC_ASSERT(kSmiTag == 0);
-+ if (shift < kSmiTagSize) {
-+ shl(reg, kSmiTagSize - shift);
-+ } else if (shift > kSmiTagSize) {
-+ sar(reg, shift - kSmiTagSize);
-+ }
-+ and_(reg, Immediate(mask));
-+ }
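The DecodeField template above extracts a bitfield described by a (shift, mask) pair as `(reg >> shift) & (mask >> shift)`. A minimal Python sketch with a hypothetical field occupying bits 3..6:

```python
def decode_field(word, shift, mask):
    # sar reg, shift (skipped when shift == 0); and_ reg, mask >> shift
    return (word >> shift) & (mask >> shift)

# Hypothetical field: bits 3..6, i.e. shift = 3, mask = 0b1111000.
assert decode_field(0b1010111, 3, 0b1111000) == 0b1010
```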
-+
-+ // Abort execution if argument is not a smi, enabled via --debug-code.
-+ void AssertSmi(Register object);
-+
-+ // Abort execution if argument is a smi, enabled via --debug-code.
-+ void AssertNotSmi(Register object);
-+
-+ // Abort execution if argument is not a FixedArray, enabled via --debug-code.
-+ void AssertFixedArray(Register object);
-+
-+ // Abort execution if argument is not a JSFunction, enabled via --debug-code.
-+ void AssertFunction(Register object);
-+
-+ // Abort execution if argument is not a JSBoundFunction,
-+ // enabled via --debug-code.
-+ void AssertBoundFunction(Register object);
-+
-+ // Abort execution if argument is not a JSGeneratorObject (or subclass),
-+ // enabled via --debug-code.
-+ void AssertGeneratorObject(Register object);
-+
-+ // Abort execution if argument is not undefined or an AllocationSite, enabled
-+ // via --debug-code.
-+ void AssertUndefinedOrAllocationSite(Register object);
-+
-+ // ---------------------------------------------------------------------------
-+ // Exception handling
-+
-+ // Push a new stack handler and link it into stack handler chain.
-+ void PushStackHandler();
-+
-+ // Unlink the stack handler on top of the stack from the stack handler chain.
-+ void PopStackHandler();
-+
-+ // ---------------------------------------------------------------------------
-+ // Inline caching support
-+
-+ void GetNumberHash(Register r0, Register scratch);
-+
-+ // ---------------------------------------------------------------------------
-+ // Allocation support
-+
-+ // Allocate an object in new space or old space. If the given space
-+ // is exhausted control continues at the gc_required label. The allocated
-+ // object is returned in result and end of the new object is returned in
-+ // result_end. The register scratch can be passed as no_reg in which case
-+ // an additional object reference will be added to the reloc info. The
-+ // returned pointers in result and result_end have not yet been tagged as
-+ // heap objects. If result_contains_top_on_entry is true the content of
-+ // result is known to be the allocation top on entry (could be result_end
-+ // from a previous call). If result_contains_top_on_entry is true scratch
-+ // should be no_reg as it is never used.
-+ void Allocate(int object_size, Register result, Register result_end,
-+ Register scratch, Label* gc_required, AllocationFlags flags);
-+
-+ void Allocate(int header_size, ScaleFactor element_size,
-+ Register element_count, RegisterValueType element_count_type,
-+ Register result, Register result_end, Register scratch,
-+ Label* gc_required, AllocationFlags flags);
-+
-+ void Allocate(Register object_size, Register result, Register result_end,
-+ Register scratch, Label* gc_required, AllocationFlags flags);
-+
-+ // Allocate a heap number in new space with undefined value. The
-+ // register scratch2 can be passed as no_reg; the others must be
-+ // valid registers. Returns tagged pointer in result register, or
-+ // jumps to gc_required if new space is full.
-+ void AllocateHeapNumber(Register result, Register scratch1, Register scratch2,
-+ Label* gc_required, MutableMode mode = IMMUTABLE);
-+
-+ // Allocate and initialize a JSValue wrapper with the specified {constructor}
-+ // and {value}.
-+ void AllocateJSValue(Register result, Register constructor, Register value,
-+ Register scratch, Label* gc_required);
-+
-+ // Initialize fields with filler values. Fields starting at |current_address|
-+ // not including |end_address| are overwritten with the value in |filler|. At
-+ // the end of the loop, |current_address| takes the value of |end_address|.
-+ void InitializeFieldsWithFiller(Register current_address,
-+ Register end_address, Register filler);
-+
-+ // ---------------------------------------------------------------------------
-+ // Support functions.
-+
-+ // Check a boolean-bit of a Smi field.
-+ void BooleanBitTest(Register object, int field_offset, int bit_index);
-+
-+ // Machine code version of Map::GetConstructor().
-+ // |temp| holds |result|'s map when done.
-+ void GetMapConstructor(Register result, Register map, Register temp);
-+
-+ // ---------------------------------------------------------------------------
-+ // Runtime calls
-+
-+ // Call a code stub. Generate the code if necessary.
-+ void CallStub(CodeStub* stub);
-+
-+ // Tail call a code stub (jump). Generate the code if necessary.
-+ void TailCallStub(CodeStub* stub);
-+
-+ // Call a runtime routine.
-+ void CallRuntime(const Runtime::Function* f, int num_arguments,
-+ SaveFPRegsMode save_doubles = kDontSaveFPRegs);
-+ void CallRuntimeSaveDoubles(Runtime::FunctionId fid) {
-+ const Runtime::Function* function = Runtime::FunctionForId(fid);
-+ CallRuntime(function, function->nargs, kSaveFPRegs);
-+ }
-+
-+ // Convenience function: Same as above, but takes the fid instead.
-+ void CallRuntime(Runtime::FunctionId fid,
-+ SaveFPRegsMode save_doubles = kDontSaveFPRegs) {
-+ const Runtime::Function* function = Runtime::FunctionForId(fid);
-+ CallRuntime(function, function->nargs, save_doubles);
-+ }
-+
-+ // Convenience function: Same as above, but takes the fid instead.
-+ void CallRuntime(Runtime::FunctionId fid, int num_arguments,
-+ SaveFPRegsMode save_doubles = kDontSaveFPRegs) {
-+ CallRuntime(Runtime::FunctionForId(fid), num_arguments, save_doubles);
-+ }
-+
-+ // Convenience function: call an external reference.
-+ void CallExternalReference(ExternalReference ref, int num_arguments);
-+
-+ // Convenience function: tail call a runtime routine (jump).
-+ void TailCallRuntime(Runtime::FunctionId fid);
-+
-+ // Jump to a runtime routine.
-+ void JumpToExternalReference(const ExternalReference& ext,
-+ bool builtin_exit_frame = false);
-+
-+ // ---------------------------------------------------------------------------
-+ // Utilities
-+
-+ // Emit code that loads |parameter_index|'th parameter from the stack to
-+ // the register according to the CallInterfaceDescriptor definition.
-+ // |sp_to_caller_sp_offset_in_words| specifies the number of words pushed
-+ // below the caller's sp (on x87 it's at least return address).
-+ template <class Descriptor>
-+ void LoadParameterFromStack(
-+ Register reg, typename Descriptor::ParameterIndices parameter_index,
-+ int sp_to_ra_offset_in_words = 1) {
-+ DCHECK(Descriptor::kPassLastArgsOnStack);
-+ DCHECK_LT(parameter_index, Descriptor::kParameterCount);
-+ DCHECK_LE(Descriptor::kParameterCount - Descriptor::kStackArgumentsCount,
-+ parameter_index);
-+ int offset = (Descriptor::kParameterCount - parameter_index - 1 +
-+ sp_to_ra_offset_in_words) *
-+ kPointerSize;
-+ mov(reg, Operand(esp, offset));
-+ }
-+
-+ // Emit code to discard a non-negative number of pointer-sized elements
-+ // from the stack, clobbering only the esp register.
-+ void Drop(int element_count);
-+
-+ void Jump(Handle<Code> target, RelocInfo::Mode rmode) { jmp(target, rmode); }
-+ void Pop(Register dst) { pop(dst); }
-+ void Pop(const Operand& dst) { pop(dst); }
-+ void PushReturnAddressFrom(Register src) { push(src); }
-+ void PopReturnAddressTo(Register dst) { pop(dst); }
-+
-+ // Emit code for a truncating division by a constant. The dividend register is
-+ // unchanged, the result is in edx, and eax gets clobbered.
-+ void TruncatingDiv(Register dividend, int32_t divisor);
-+
-+ // ---------------------------------------------------------------------------
-+ // StatsCounter support
-+
-+ void SetCounter(StatsCounter* counter, int value);
-+ void IncrementCounter(StatsCounter* counter, int value);
-+ void DecrementCounter(StatsCounter* counter, int value);
-+ void IncrementCounter(Condition cc, StatsCounter* counter, int value);
-+ void DecrementCounter(Condition cc, StatsCounter* counter, int value);
-+
-+ // ---------------------------------------------------------------------------
-+ // String utilities.
-+
-+ // Checks if both objects are sequential one-byte strings, and jumps to label
-+ // if either is not.
-+ void JumpIfNotBothSequentialOneByteStrings(
-+ Register object1, Register object2, Register scratch1, Register scratch2,
-+ Label* on_not_flat_one_byte_strings);
-+
-+ // Checks if the given register or operand is a unique name
-+ void JumpIfNotUniqueNameInstanceType(Register reg, Label* not_unique_name,
-+ Label::Distance distance = Label::kFar) {
-+ JumpIfNotUniqueNameInstanceType(Operand(reg), not_unique_name, distance);
-+ }
-+
-+ void JumpIfNotUniqueNameInstanceType(Operand operand, Label* not_unique_name,
-+ Label::Distance distance = Label::kFar);
-+
-+ void EmitSeqStringSetCharCheck(Register string, Register index,
-+ Register value, uint32_t encoding_mask);
-+
-+ static int SafepointRegisterStackIndex(Register reg) {
-+ return SafepointRegisterStackIndex(reg.code());
-+ }
-+
-+ // Load the type feedback vector from a JavaScript frame.
-+ void EmitLoadFeedbackVector(Register vector);
-+
-+ void EnterBuiltinFrame(Register context, Register target, Register argc);
-+ void LeaveBuiltinFrame(Register context, Register target, Register argc);
-+
-+ // Expects object in eax and returns map with validated enum cache
-+ // in eax. Assumes that any other register can be used as a scratch.
-+ void CheckEnumCache(Label* call_runtime);
-+
-+ // AllocationMemento support. Arrays may have an associated
-+ // AllocationMemento object that can be checked for in order to pretransition
-+ // to another type.
-+ // On entry, receiver_reg should point to the array object.
-+ // scratch_reg gets clobbered.
-+ // If allocation info is present, conditional code is set to equal.
-+ void TestJSArrayForAllocationMemento(Register receiver_reg,
-+ Register scratch_reg,
-+ Label* no_memento_found);
-+
-+ private:
-+ int jit_cookie_;
-+
-+ // Helper functions for generating invokes.
-+ void InvokePrologue(const ParameterCount& expected,
-+ const ParameterCount& actual, Label* done,
-+ bool* definitely_mismatches, InvokeFlag flag,
-+ Label::Distance done_distance,
-+ const CallWrapper& call_wrapper);
-+
-+ void EnterExitFramePrologue(StackFrame::Type frame_type);
-+ void EnterExitFrameEpilogue(int argc, bool save_doubles);
-+
-+ void LeaveExitFrameEpilogue(bool restore_context);
-+
-+ // Allocation support helpers.
-+ void LoadAllocationTopHelper(Register result, Register scratch,
-+ AllocationFlags flags);
-+
-+ void UpdateAllocationTopHelper(Register result_end, Register scratch,
-+ AllocationFlags flags);
-+
-+ // Helper for implementing JumpIfNotInNewSpace and JumpIfInNewSpace.
-+ void InNewSpace(Register object, Register scratch, Condition cc,
-+ Label* condition_met,
-+ Label::Distance condition_met_distance = Label::kFar);
-+
-+ // Helper for finding the mark bits for an address. Afterwards, the
-+ // bitmap register points at the word with the mark bits and the mask
-+ // the position of the first bit. Uses ecx as scratch and leaves addr_reg
-+ // unchanged.
-+ inline void GetMarkBits(Register addr_reg, Register bitmap_reg,
-+ Register mask_reg);
-+
-+ // Compute memory operands for safepoint stack slots.
-+ Operand SafepointRegisterSlot(Register reg);
-+ static int SafepointRegisterStackIndex(int reg_code);
-+
-+ // Needs access to SafepointRegisterStackIndex for compiled frame
-+ // traversal.
-+ friend class StandardFrame;
-+};
-+
-+// The code patcher is used to patch (typically) small parts of code e.g. for
-+// debugging and other types of instrumentation. When using the code patcher
-+// the exact number of bytes specified must be emitted. Is not legal to emit
-+// relocation information. If any of these constraints are violated it causes
-+// an assertion.
-+class CodePatcher {
-+ public:
-+ CodePatcher(Isolate* isolate, byte* address, int size);
-+ ~CodePatcher();
-+
-+ // Macro assembler to emit code.
-+ MacroAssembler* masm() { return &masm_; }
-+
-+ private:
-+ byte* address_; // The address of the code being patched.
-+ int size_; // Number of bytes of the expected patch size.
-+ MacroAssembler masm_; // Macro assembler used to generate the code.
-+};
-+
-+// -----------------------------------------------------------------------------
-+// Static helper functions.
-+
-+// Generate an Operand for loading a field from an object.
-+inline Operand FieldOperand(Register object, int offset) {
-+ return Operand(object, offset - kHeapObjectTag);
-+}
-+
-+// Generate an Operand for loading an indexed field from an object.
-+inline Operand FieldOperand(Register object, Register index, ScaleFactor scale,
-+ int offset) {
-+ return Operand(object, index, scale, offset - kHeapObjectTag);
-+}
-+
-+inline Operand FixedArrayElementOperand(Register array, Register index_as_smi,
-+ int additional_offset = 0) {
-+ int offset = FixedArray::kHeaderSize + additional_offset * kPointerSize;
-+ return FieldOperand(array, index_as_smi, times_half_pointer_size, offset);
-+}
-+
-+inline Operand ContextOperand(Register context, int index) {
-+ return Operand(context, Context::SlotOffset(index));
-+}
-+
-+inline Operand ContextOperand(Register context, Register index) {
-+ return Operand(context, index, times_pointer_size, Context::SlotOffset(0));
-+}
-+
-+inline Operand NativeContextOperand() {
-+ return ContextOperand(esi, Context::NATIVE_CONTEXT_INDEX);
-+}
-+
-+#define ACCESS_MASM(masm) masm->
-+
-+} // namespace internal
-+} // namespace v8
-+
-+#endif // V8_X87_MACRO_ASSEMBLER_X87_H_
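For context on the FieldOperand helpers near the end of the header above: V8 keeps a tag in the low bit of heap-object pointers (kHeapObjectTag is 1), so a field access subtracts the tag before adding the byte offset, which is exactly the `offset - kHeapObjectTag` arithmetic in the inline functions. A minimal Python sketch of that arithmetic, with made-up addresses (illustrative only, not V8 code):

```python
# Why FieldOperand computes (object + offset - kHeapObjectTag):
# heap-object "pointers" carry a 1-bit tag, so the raw field address
# is recovered by subtracting the tag from the tagged base.
kHeapObjectTag = 1  # low-bit tag on heap-object pointers in V8


def field_operand(tagged_ptr, offset):
    """Untagged address of the field at `offset` in a tagged object."""
    return tagged_ptr + offset - kHeapObjectTag


# A fake, word-aligned object at 0x1000, tagged by setting the low bit.
raw_base = 0x1000
tagged = raw_base + kHeapObjectTag
# A field at byte offset 8 resolves back to an aligned raw address.
assert field_operand(tagged, 8) == raw_base + 8
```

The same subtraction shows up in every FieldOperand overload above; only the index/scale arithmetic differs.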
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/x87/OWNERS qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/x87/OWNERS
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/x87/OWNERS 1970-01-01 01:00:00.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/x87/OWNERS 2018-02-18 19:00:54.200418105 +0100
-@@ -0,0 +1,2 @@
-+weiliang.lin(a)intel.com
-+chunyang.dai(a)intel.com
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/x87/simulator-x87.cc qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/x87/simulator-x87.cc
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/x87/simulator-x87.cc 1970-01-01 01:00:00.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/x87/simulator-x87.cc 2018-02-18 19:00:54.200418105 +0100
-@@ -0,0 +1,7 @@
-+// Copyright 2008 the V8 project authors. All rights reserved.
-+// Use of this source code is governed by a BSD-style license that can be
-+// found in the LICENSE file.
-+
-+#include "src/x87/simulator-x87.h"
-+
-+// Since there is no simulator for the ia32 architecture this file is empty.
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/x87/simulator-x87.h qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/x87/simulator-x87.h
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/src/x87/simulator-x87.h 1970-01-01 01:00:00.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/src/x87/simulator-x87.h 2018-02-18 19:00:54.200418105 +0100
-@@ -0,0 +1,52 @@
-+// Copyright 2012 the V8 project authors. All rights reserved.
-+// Use of this source code is governed by a BSD-style license that can be
-+// found in the LICENSE file.
-+
-+#ifndef V8_X87_SIMULATOR_X87_H_
-+#define V8_X87_SIMULATOR_X87_H_
-+
-+#include "src/allocation.h"
-+
-+namespace v8 {
-+namespace internal {
-+
-+// Since there is no simulator for the ia32 architecture the only thing we can
-+// do is to call the entry directly.
-+#define CALL_GENERATED_CODE(isolate, entry, p0, p1, p2, p3, p4) \
-+ (entry(p0, p1, p2, p3, p4))
-+
-+
-+typedef int (*regexp_matcher)(String*, int, const byte*,
-+ const byte*, int*, int, Address, int, Isolate*);
-+
-+// Call the generated regexp code directly. The code at the entry address should
-+// expect eight int/pointer sized arguments and return an int.
-+#define CALL_GENERATED_REGEXP_CODE(isolate, entry, p0, p1, p2, p3, p4, p5, p6, \
-+ p7, p8) \
-+ (FUNCTION_CAST<regexp_matcher>(entry)(p0, p1, p2, p3, p4, p5, p6, p7, p8))
-+
-+
-+// The stack limit beyond which we will throw stack overflow errors in
-+// generated code. Because generated code on ia32 uses the C stack, we
-+// just use the C stack limit.
-+class SimulatorStack : public v8::internal::AllStatic {
-+ public:
-+ static inline uintptr_t JsLimitFromCLimit(Isolate* isolate,
-+ uintptr_t c_limit) {
-+ USE(isolate);
-+ return c_limit;
-+ }
-+
-+ static inline uintptr_t RegisterCTryCatch(Isolate* isolate,
-+ uintptr_t try_catch_address) {
-+ USE(isolate);
-+ return try_catch_address;
-+ }
-+
-+ static inline void UnregisterCTryCatch(Isolate* isolate) { USE(isolate); }
-+};
-+
-+} // namespace internal
-+} // namespace v8
-+
-+#endif // V8_X87_SIMULATOR_X87_H_
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/test/cctest/BUILD.gn qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/test/cctest/BUILD.gn
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/test/cctest/BUILD.gn 2018-02-02 11:39:52.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/test/cctest/BUILD.gn 2018-02-18 19:00:54.200418105 +0100
-@@ -287,6 +287,17 @@
- "test-macro-assembler-x64.cc",
- "test-run-wasm-relocation-x64.cc",
- ]
-+ } else if (v8_current_cpu == "x87") {
-+ sources += [ ### gcmole(arch:x87) ###
-+ "test-assembler-x87.cc",
-+ "test-code-stubs-x87.cc",
-+ "test-code-stubs.cc",
-+ "test-code-stubs.h",
-+ "test-disasm-x87.cc",
-+ "test-log-stack-tracer.cc",
-+ "test-macro-assembler-x87.cc",
-+ "test-run-wasm-relocation-x87.cc",
-+ ]
- } else if (v8_current_cpu == "ppc" || v8_current_cpu == "ppc64") {
- sources += [ ### gcmole(arch:ppc) ###
- "test-assembler-ppc.cc",
-@@ -332,7 +343,7 @@
-
- defines = []
-
-- if (is_component_build) {
-+ if (is_component_build || v8_build_shared) {
- # cctest can't be built against a shared library, so we
- # need to depend on the underlying static target in that case.
- deps += [ "../..:v8_maybe_snapshot" ]
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/test/cctest/cctest.gyp qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/test/cctest/cctest.gyp
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/test/cctest/cctest.gyp 2018-02-02 11:39:52.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/test/cctest/cctest.gyp 2018-02-18 19:00:54.289416795 +0100
-@@ -308,6 +308,16 @@
- 'test-disasm-mips64.cc',
- 'test-macro-assembler-mips64.cc',
- ],
-+ 'cctest_sources_x87': [ ### gcmole(arch:x87) ###
-+ 'test-assembler-x87.cc',
-+ 'test-code-stubs.cc',
-+ 'test-code-stubs.h',
-+ 'test-code-stubs-x87.cc',
-+ 'test-disasm-x87.cc',
-+ 'test-macro-assembler-x87.cc',
-+ 'test-log-stack-tracer.cc',
-+ 'test-run-wasm-relocation-x87.cc',
-+ ],
- },
- 'includes': ['../../gypfiles/toolchain.gypi', '../../gypfiles/features.gypi'],
- 'targets': [
-@@ -392,6 +402,11 @@
- '<@(cctest_sources_mips64el)',
- ],
- }],
-+ ['v8_target_arch=="x87"', {
-+ 'sources': [
-+ '<@(cctest_sources_x87)',
-+ ],
-+ }],
- [ 'OS=="linux" or OS=="qnx"', {
- 'sources': [
- 'test-platform-linux.cc',
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/tools/dev/gen-tags.py qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/tools/dev/gen-tags.py
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/tools/dev/gen-tags.py 2018-02-02 11:39:52.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/tools/dev/gen-tags.py 2018-02-18 19:00:54.289416795 +0100
-@@ -20,7 +20,7 @@
- import sys
-
- # All arches that this script understands.
--ARCHES = ["ia32", "x64", "arm", "arm64", "mips", "mips64", "ppc", "s390"]
-+ARCHES = ["ia32", "x64", "arm", "arm64", "mips", "mips64", "ppc", "s390", "x87"]
-
- def PrintHelpAndExit():
- print(__doc__)
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/tools/dev/gm.py qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/tools/dev/gm.py
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/tools/dev/gm.py 2018-02-02 11:39:52.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/tools/dev/gm.py 2018-02-18 19:00:54.366415663 +0100
-@@ -33,7 +33,7 @@
-
- # All arches that this script understands.
- ARCHES = ["ia32", "x64", "arm", "arm64", "mipsel", "mips64el", "ppc", "ppc64",
--           "s390", "s390x"]
-+           "s390", "s390x", "x87"]
- # Arches that get built/run when you don't specify any.
- DEFAULT_ARCHES = ["ia32", "x64", "arm", "arm64"]
- # Modes that this script understands.
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/tools/run-tests.py qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/tools/run-tests.py
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/tools/run-tests.py 2018-02-02 11:39:52.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/tools/run-tests.py 2018-02-18 19:00:54.366415663 +0100
-@@ -187,6 +187,7 @@
- "android_x64",
- "arm",
- "ia32",
-+ "x87",
- "mips",
- "mipsel",
- "mips64",
-@@ -210,6 +211,7 @@
- "mips64el",
- "s390",
- "s390x",
-+ "x87",
- "arm64"]
-
-
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/tools/testrunner/local/statusfile.py qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/tools/testrunner/local/statusfile.py
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/tools/testrunner/local/statusfile.py 2018-02-02 11:39:52.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/tools/testrunner/local/statusfile.py 2018-02-18 19:00:54.443414530 +0100
-@@ -59,10 +59,10 @@
- # Support arches, modes to be written as keywords instead of strings.
- VARIABLES = {ALWAYS: True}
- for var in ["debug", "release", "big", "little",
--            "android_arm", "android_arm64", "android_ia32", "android_x64",
--            "arm", "arm64", "ia32", "mips", "mipsel", "mips64", "mips64el",
--            "x64", "ppc", "ppc64", "s390", "s390x", "macos", "windows",
--            "linux", "aix"]:
-+            "android_arm", "android_arm64", "android_ia32", "android_x87",
-+            "android_x64", "arm", "arm64", "ia32", "mips", "mipsel", "mips64",
-+            "mips64el", "x64", "x87", "ppc", "ppc64", "s390", "s390x", "macos",
-+            "windows", "linux", "aix"]:
- VARIABLES[var] = var
-
- # Allow using variants as keywords.
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/tools/verify_source_deps.py qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/tools/verify_source_deps.py
---- qtwebengine-everywhere-src-5.10.1/src/3rdparty/chromium/v8/tools/verify_source_deps.py 2018-02-02 11:39:52.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/3rdparty/chromium/v8/tools/verify_source_deps.py 2018-02-18 19:00:54.514413486 +0100
-@@ -82,6 +82,7 @@
- 'solaris',
- 'vtune',
- 'v8-version.h',
-+ 'x87',
- ]
-
- ALL_GN_PREFIXES = [
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/core/core_module.pro qtwebengine-everywhere-src-5.10.1-no-sse2/src/core/core_module.pro
---- qtwebengine-everywhere-src-5.10.1/src/core/core_module.pro 2018-02-09 05:07:39.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/core/core_module.pro 2018-02-18 19:00:54.514413486 +0100
-@@ -53,6 +53,31 @@
-
- POST_TARGETDEPS += $$NINJA_TARGETDEPS
-
-+# go through the shared libraries that GN wants to link to
-+# ignore the dummy convert_dict shared library used only to get a .pri file
-+# add the ones NOT in lib/sse2 to LIBS_PRIVATE
-+# don't add those in lib/sse2 that are only replacements for the normal ones
-+# collect all shared libraries, non-SSE2 and SSE2, so they can be installed
-+for(shlib, NINJA_SOLIBS) {
-+ !contains(shlib, .*convert_dict.*) {
-+ contains(shlib, .*/lib/sse2/.*) {
-+ shlibs_sse2 += $$shlib
-+ } else {
-+ LIBS_PRIVATE += $$shlib
-+ shlibs += $$shlib
-+ }
-+ }
-+}
-+
-+# set the shared libraries to be installed
-+# add an rpath to their installation location
-+shlib_install_path = $$[QT_INSTALL_LIBS]/qtwebengine
-+!isEmpty(shlibs) {
-+ shlibs.files += $$shlibs
-+ shlibs_sse2.files += $$shlibs_sse2
-+ LIBS_PRIVATE += -Wl,--rpath,$$shlib_install_path
-+}
-+
-
- LIBS_PRIVATE += -L$$api_library_path
- CONFIG *= no_smart_library_merge
-@@ -122,7 +147,12 @@
- locales.path = $$[QT_INSTALL_TRANSLATIONS]/qtwebengine_locales
- resources.CONFIG += no_check_exist
- resources.path = $$[QT_INSTALL_DATA]/resources
-- INSTALLS += locales resources
-+ # install the shared libraries
-+ shlibs.CONFIG += no_check_exist
-+ shlibs.path = $$shlib_install_path
-+ shlibs_sse2.CONFIG += no_check_exist
-+ shlibs_sse2.path = $$shlib_install_path/sse2
-+ INSTALLS += locales resources shlibs shlibs_sse2
-
- !qtConfig(webengine-system-icu) {
- icu.CONFIG += no_check_exist
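The qmake loop added to core_module.pro above partitions GN's shared libraries three ways: the convert_dict dummy is skipped, lib/sse2 replacements are installed but not linked, and everything else is both linked and installed. The equivalent logic, sketched in Python with made-up library paths (the path names are hypothetical, not taken from a real build):

```python
# Sketch of the core_module.pro classification loop, using invented paths.
def classify(ninja_solibs):
    libs_private, shlibs, shlibs_sse2 = [], [], []
    for shlib in ninja_solibs:
        if "convert_dict" in shlib:
            continue                    # dummy lib, only used to get a .pri file
        if "/lib/sse2/" in shlib:
            shlibs_sse2.append(shlib)   # SSE2 replacement: install only
        else:
            libs_private.append(shlib)  # link against it...
            shlibs.append(shlib)        # ...and install it
    return libs_private, shlibs, shlibs_sse2


linked, installed, sse2 = classify([
    "out/lib/libqtwebengine_extra.so",
    "out/lib/sse2/libqtwebengine_extra.so",
    "out/lib/libconvert_dict.so",
])
assert linked == installed == ["out/lib/libqtwebengine_extra.so"]
assert sse2 == ["out/lib/sse2/libqtwebengine_extra.so"]
```

At runtime the dynamic loader picks the sse2/ copy on SSE2-capable CPUs, which is why both variants are installed side by side.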
-diff -Nur qtwebengine-everywhere-src-5.10.1/src/process/process.pro qtwebengine-everywhere-src-5.10.1-no-sse2/src/process/process.pro
---- qtwebengine-everywhere-src-5.10.1/src/process/process.pro 2018-02-09 05:07:39.000000000 +0100
-+++ qtwebengine-everywhere-src-5.10.1-no-sse2/src/process/process.pro 2018-02-18 19:00:54.515413471 +0100
-@@ -9,6 +9,8 @@
-
- SOURCES = main.cpp
-
-+QMAKE_LFLAGS += -Wl,-rpath-link,$$OUT_PWD/../core/release
-+
- win32 {
- SOURCES += \
- support_win.cpp
diff --git a/qtwebengine-everywhere-src-5.12.0-gn-bootstrap-verbose.patch b/qtwebengine-everywhere-src-5.12.0-gn-bootstrap-verbose.patch
new file mode 100644
index 0000000..9d4cadb
--- /dev/null
+++ b/qtwebengine-everywhere-src-5.12.0-gn-bootstrap-verbose.patch
@@ -0,0 +1,12 @@
+diff -up qtwebengine-everywhere-src-5.12.0/src/buildtools/gn.pro.gn-bootstrap-verbose qtwebengine-everywhere-src-5.12.0/src/buildtools/gn.pro
+--- qtwebengine-everywhere-src-5.12.0/src/buildtools/gn.pro.gn-bootstrap-verbose 2018-12-07 09:53:18.262171677 -0600
++++ qtwebengine-everywhere-src-5.12.0/src/buildtools/gn.pro 2018-12-07 09:57:53.246646133 -0600
+@@ -18,7 +18,7 @@ build_pass|!debug_and_release {
+ src_3rd_party_dir = $$absolute_path("$${getChromiumSrcDir()}/../", "$$QTWEBENGINE_ROOT")
+ gn_bootstrap = $$system_path($$absolute_path(gn/build/gen.py, $$src_3rd_party_dir))
+
+- gn_configure = $$system_quote($$gn_bootstrap) --no-last-commit-position --out-path $$out_path
++ gn_configure = $$system_quote($$gn_bootstrap) --verbose --no-last-commit-position --out-path $$out_path
+ !system("$$pythonPathForSystem() $$gn_configure") {
+ error("GN generation error!")
+ }
diff --git a/qtwebengine-everywhere-src-5.12.1-python2.patch b/qtwebengine-everywhere-src-5.12.1-python2.patch
new file mode 100644
index 0000000..259952d
--- /dev/null
+++ b/qtwebengine-everywhere-src-5.12.1-python2.patch
@@ -0,0 +1,11 @@
+diff -up qtwebengine-everywhere-src-5.12.1/src/core/config/linux.pri.python2 qtwebengine-everywhere-src-5.12.1/src/core/config/linux.pri
+--- qtwebengine-everywhere-src-5.12.1/src/core/config/linux.pri.python2 2019-02-01 09:30:19.194657298 -0600
++++ qtwebengine-everywhere-src-5.12.1/src/core/config/linux.pri 2019-02-01 10:53:16.756357279 -0600
+@@ -205,5 +205,5 @@ gn_args += linux_link_libpci=true
+ CHROMIUM_SRC_DIR = "$$QTWEBENGINE_ROOT/$$getChromiumSrcDir()"
+ R_G_F_PY = "$$CHROMIUM_SRC_DIR/build/linux/unbundle/replace_gn_files.py"
+ R_G_F_PY_ARGS = "--system-libraries yasm"
+-log("Running python $$R_G_F_PY $$R_G_F_PY_ARGS$${EOL}")
+-!system("python $$R_G_F_PY $$R_G_F_PY_ARGS"): error("-- unbundling failed")
++log("Running python2 $$R_G_F_PY $$R_G_F_PY_ARGS$${EOL}")
++!system("python2 $$R_G_F_PY $$R_G_F_PY_ARGS"): error("-- unbundling failed")
diff --git a/qtwebengine-opensource-src-5.12.1-fix-extractcflag.patch b/qtwebengine-opensource-src-5.12.1-fix-extractcflag.patch
new file mode 100644
index 0000000..3f047c2
--- /dev/null
+++ b/qtwebengine-opensource-src-5.12.1-fix-extractcflag.patch
@@ -0,0 +1,12 @@
+diff -up qtwebengine-everywhere-src-5.12.1/mkspecs/features/functions.prf.fix-extractcflag qtwebengine-everywhere-src-5.12.1/mkspecs/features/functions.prf
+--- qtwebengine-everywhere-src-5.12.1/mkspecs/features/functions.prf.fix-extractcflag 2019-02-01 09:25:44.950965875 -0600
++++ qtwebengine-everywhere-src-5.12.1/mkspecs/features/functions.prf 2019-02-01 09:28:39.290041131 -0600
+@@ -11,7 +11,7 @@ defineReplace(getChromiumSrcDir) {
+ }
+
+ defineReplace(extractCFlag) {
+- CFLAGS = $$QMAKE_CC $$QMAKE_CFLAGS
++ CFLAGS = $$QMAKE_CC $$QMAKE_CFLAGS $$QMAKE_CFLAGS_RELEASE
+ OPTION = $$find(CFLAGS, $$1)
+ OPTION = $$split(OPTION, =)
+ PARAM = $$member(OPTION, 1)
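To see why the one-line change above matters: extractCFlag searches the compiler flag string for a pattern like `-mfpu=.*` and returns the part after `=`; on Fedora the `-mfpu` flag lives in the release-only flags, so without appending QMAKE_CFLAGS_RELEASE the lookup comes back empty. A loose Python rendering of that behavior (simplified, with an example flag value, not the actual qmake implementation):

```python
import re


def extract_cflag(cflags, pattern):
    """Return the value after '=' of the first flag matching `pattern`,
    loosely mirroring qmake's extractCFlag ($$find + $$split + $$member)."""
    for flag in cflags.split():
        if re.fullmatch(pattern, flag):
            parts = flag.split("=")
            return parts[1] if len(parts) > 1 else ""
    return ""


# Before the patch only QMAKE_CC + QMAKE_CFLAGS were searched; the fix also
# appends QMAKE_CFLAGS_RELEASE, where the distro's -mfpu flag actually lives.
cflags = "gcc -O2"                  # QMAKE_CC + QMAKE_CFLAGS
cflags_release = "-mfpu=vfpv3-d16"  # QMAKE_CFLAGS_RELEASE (example value)
assert extract_cflag(cflags, r"-mfpu=.*") == ""
assert extract_cflag(cflags + " " + cflags_release, r"-mfpu=.*") == "vfpv3-d16"
```

The empty result in the unpatched case is what fed a bogus `arm_fpu` value into the GN args on ARM builds.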
diff --git a/qtwebengine-opensource-src-5.9.0-fix-extractcflag.patch b/qtwebengine-opensource-src-5.9.0-fix-extractcflag.patch
deleted file mode 100644
index 4fcd592..0000000
--- a/qtwebengine-opensource-src-5.9.0-fix-extractcflag.patch
+++ /dev/null
@@ -1,12 +0,0 @@
-diff -ur qtwebengine-opensource-src-5.9.0/mkspecs/features/functions.prf qtwebengine-opensource-src-5.9.0-fix-extractcflag/mkspecs/features/functions.prf
---- qtwebengine-opensource-src-5.9.0/mkspecs/features/functions.prf 2017-05-19 06:22:04.000000000 +0200
-+++ qtwebengine-opensource-src-5.9.0-fix-extractcflag/mkspecs/features/functions.prf 2017-06-08 00:36:16.303520106 +0200
-@@ -302,7 +302,7 @@
- }
-
- defineReplace(extractCFlag) {
-- CFLAGS = $$QMAKE_CC $$QMAKE_CFLAGS
-+ CFLAGS = $$QMAKE_CC $$QMAKE_CFLAGS $$QMAKE_CFLAGS_RELEASE
- OPTION = $$find(CFLAGS, $$1)
- OPTION = $$split(OPTION, =)
- return ($$member(OPTION, 1))
diff --git a/qtwebengine-opensource-src-5.9.0-webrtc-neon-detect.patch b/qtwebengine-opensource-src-5.9.0-webrtc-neon-detect.patch
deleted file mode 100644
index a21802a..0000000
--- a/qtwebengine-opensource-src-5.9.0-webrtc-neon-detect.patch
+++ /dev/null
@@ -1,14 +0,0 @@
-diff -ur qtwebengine-opensource-src-5.9.0/src/3rdparty/chromium/third_party/webrtc/system_wrappers/BUILD.gn qtwebengine-opensource-src-5.9.0-webrtc-neon-detect/src/3rdparty/chromium/third_party/webrtc/system_wrappers/BUILD.gn
---- qtwebengine-opensource-src-5.9.0/src/3rdparty/chromium/third_party/webrtc/system_wrappers/BUILD.gn 2017-05-18 16:51:44.000000000 +0200
-+++ qtwebengine-opensource-src-5.9.0-webrtc-neon-detect/src/3rdparty/chromium/third_party/webrtc/system_wrappers/BUILD.gn 2017-06-10 13:20:14.959007488 +0200
-@@ -93,9 +93,7 @@
- if (is_linux) {
- defines += [ "WEBRTC_THREAD_RR" ]
-
-- if (!build_with_chromium) {
-- deps += [ ":cpu_features_linux" ]
-- }
-+ deps += [ ":cpu_features_linux" ]
-
- libs += [ "rt" ]
- }
diff --git a/qtwebengine-opensource-src-5.9.2-arm-fpu-fix.patch b/qtwebengine-opensource-src-5.9.2-arm-fpu-fix.patch
deleted file mode 100644
index a475199..0000000
--- a/qtwebengine-opensource-src-5.9.2-arm-fpu-fix.patch
+++ /dev/null
@@ -1,11 +0,0 @@
-diff -ur qtwebengine-opensource-src-5.9.0/src/core/config/linux.pri qtwebengine-opensource-src-5.9.0-arm-fpu-fix/src/core/config/linux.pri
---- qtwebengine-opensource-src-5.9.0/src/core/config/linux.pri 2017-05-19 06:22:04.000000000 +0200
-+++ qtwebengine-opensource-src-5.9.0-arm-fpu-fix/src/core/config/linux.pri 2017-06-13 14:51:26.986633933 +0200
-@@ -64,6 +64,7 @@
- gn_args += arm_use_neon=true
- } else {
- MFPU = $$extractCFlag("-mfpu=.*")
-+ !isEmpty(MFPU): gn_args += arm_fpu=\"$$MFPU\"
- !isEmpty(MFPU):contains(MFPU, ".*neon.*") {
- gn_args += arm_use_neon=true
- } else {
diff --git a/sources b/sources
index 965905d..135d182 100644
--- a/sources
+++ b/sources
@@ -1 +1 @@
-SHA512 (qtwebengine-everywhere-src-5.11.3-clean.tar.xz) = 02b787df5a79eaa9c30a2ecfaee899291dd5033078c356313d580d86284a78d86aa13de1dd67968b9c1e214662adbb1623ea0afb7289f0eb3c0b99d8d2a53a2e
+SHA512 (qtwebengine-everywhere-src-5.12.1-clean.tar.xz) = 779d63b93849a6a5b8ecea1c1480ce80c01cc678929947ba64ea5003f9de51e76b49f06d9f0dee89afb28a7713f11d2a7412b55acb3203c548ca8ebf564b30cb
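The `sources` file updated above maps each dist-git tarball to its SHA-512 digest, which the build tooling checks when the tarball is downloaded. The check itself is plain SHA-512 hashing; a minimal sketch using the standard FIPS 180-2 test vector for `"abc"` (the tarball names and digests in the diff are left untouched, this only demonstrates the hash):

```python
import hashlib


def sha512_hex(data: bytes) -> str:
    """Hex digest in the format recorded in the `sources` file."""
    return hashlib.sha512(data).hexdigest()


# FIPS 180-2 test vector for SHA-512("abc").
assert sha512_hex(b"abc") == (
    "ddaf35a193617abacc417349ae20413112e6fa4e89a97ea20a9eeee64b55d39a"
    "2192992a274fc1a836ba3c23a3feebbd454d4423643ce80e2a9ac94fa54ca49f"
)
```

A mismatch between the recorded and computed digest aborts the fetch, which is how the `sources` swap enforces that the new 5.12.1 tarball is the one actually built.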
commit 9ba4864fbb245d8aa5cc97547a1b35d6ff035eda
Author: Björn Esser <besser82(a)fedoraproject.org>
Date: Tue Feb 5 15:03:56 2019 +0100
rebuilt (libvpx)
diff --git a/qt5-qtwebengine.spec b/qt5-qtwebengine.spec
index ac72844..6232444 100644
--- a/qt5-qtwebengine.spec
+++ b/qt5-qtwebengine.spec
@@ -51,7 +51,7 @@
Summary: Qt5 - QtWebEngine components
Name: qt5-qtwebengine
Version: 5.11.3
-Release: 4%{?dist}
+Release: 5%{?dist}
# See LICENSE.GPL LICENSE.LGPL LGPL_EXCEPTION.txt, for details
# See also http://qt-project.org/doc/qt-5.0/qtdoc/licensing.html
@@ -601,6 +601,9 @@ done
%changelog
* Tue Feb 05 2019 Björn Esser <besser82(a)fedoraproject.org> - 5.11.3-5
+- rebuilt (libvpx)
+
* Sat Feb 02 2019 Fedora Release Engineering <releng(a)fedoraproject.org> - 5.11.3-4
- Rebuilt for https://fedoraproject.org/wiki/Fedora_30_Mass_Rebuild
commit 25fd9ec4f76377997f086ce3bd0acb7aafb975a5
Author: Fedora Release Engineering <releng(a)fedoraproject.org>
Date: Sat Feb 2 10:46:43 2019 +0000
- Rebuilt for https://fedoraproject.org/wiki/Fedora_30_Mass_Rebuild
Signed-off-by: Fedora Release Engineering <releng(a)fedoraproject.org>
diff --git a/qt5-qtwebengine.spec b/qt5-qtwebengine.spec
index 4dc9995..ac72844 100644
--- a/qt5-qtwebengine.spec
+++ b/qt5-qtwebengine.spec
@@ -51,7 +51,7 @@
Summary: Qt5 - QtWebEngine components
Name: qt5-qtwebengine
Version: 5.11.3
-Release: 3%{?dist}
+Release: 4%{?dist}
# See LICENSE.GPL LICENSE.LGPL LGPL_EXCEPTION.txt, for details
# See also http://qt-project.org/doc/qt-5.0/qtdoc/licensing.html
@@ -601,6 +601,9 @@ done
%changelog
+* Sat Feb 02 2019 Fedora Release Engineering <releng(a)fedoraproject.org> - 5.11.3-4
- Rebuilt for https://fedoraproject.org/wiki/Fedora_30_Mass_Rebuild
+
* Thu Jan 03 2019 Rex Dieter <rdieter(a)fedoraproject.org> - 5.11.3-3
- -devtools subpkg, workaround multilib conflicts (#1663299)