rpms/kernel/devel iwlwifi-fix-internal-scan-race.patch, NONE, 1.1.2.2 iwlwifi-fix-scan-races.patch, NONE, 1.1.2.1 iwlwifi-recalculate-average-tpt-if-not-current.patch, NONE, 1.1.2.2 iwlwifi-recover_from_tx_stall.patch, NONE, 1.1.2.2 keys-find-keyring-by-name-can-gain-access-to-the-freed-keyring.patch, NONE, 1.1.2.1 patch-2.6.32.14.bz2.sign, NONE, 1.1.2.1 .cvsignore, 1.1014.2.44, 1.1014.2.45 kernel.spec, 1.1294.2.104, 1.1294.2.105 linux-2.6-utrace.patch, 1.107.6.9, 1.107.6.10 sources, 1.976.2.45, 1.976.2.46 upstream, 1.888.2.44, 1.888.2.45 xen.pvops.patch, 1.1.2.67, 1.1.2.68 btrfs-check-for-read-permission-on-src-file-in-clone-ioctl.patch, 1.1.2.1, NONE iwlwifi_-clear-all-the-stop_queue-flag-after-load-firmware.patch, 1.1.2.1, NONE patch-2.6.32.13.bz2.sign, 1.1.2.1, NONE revert-ath9k_-fix-lockdep-warning-when-unloading-module.patch, 1.1.2.1, NONE

myoung myoung at fedoraproject.org
Sat May 29 14:05:09 UTC 2010


Author: myoung

Update of /cvs/pkgs/rpms/kernel/devel
In directory cvs01.phx2.fedoraproject.org:/tmp/cvs-serv8841

Modified Files:
      Tag: private-myoung-dom0-branch
	.cvsignore kernel.spec linux-2.6-utrace.patch sources upstream 
	xen.pvops.patch 
Added Files:
      Tag: private-myoung-dom0-branch
	iwlwifi-fix-internal-scan-race.patch 
	iwlwifi-fix-scan-races.patch 
	iwlwifi-recalculate-average-tpt-if-not-current.patch 
	iwlwifi-recover_from_tx_stall.patch 
	keys-find-keyring-by-name-can-gain-access-to-the-freed-keyring.patch 
	patch-2.6.32.14.bz2.sign 
Removed Files:
      Tag: private-myoung-dom0-branch
	btrfs-check-for-read-permission-on-src-file-in-clone-ioctl.patch 
	iwlwifi_-clear-all-the-stop_queue-flag-after-load-firmware.patch 
	patch-2.6.32.13.bz2.sign 
	revert-ath9k_-fix-lockdep-warning-when-unloading-module.patch 
Log Message:
another pvops update (to 2.6.32.14)


iwlwifi-fix-internal-scan-race.patch:
 iwl-scan.c |   22 ++++++++++++++++++----
 1 file changed, 18 insertions(+), 4 deletions(-)

--- NEW FILE iwlwifi-fix-internal-scan-race.patch ---
>From reinette.chatre at intel.com Thu May 13 17:49:59 2010
Return-path: <reinette.chatre at intel.com>
Envelope-to: linville at tuxdriver.com
Delivery-date: Thu, 13 May 2010 17:49:59 -0400
Received: from mga09.intel.com ([134.134.136.24])
	by smtp.tuxdriver.com with esmtp (Exim 4.63)
	(envelope-from <reinette.chatre at intel.com>)
	id 1OCgI1-0007H3-Eg
	for linville at tuxdriver.com; Thu, 13 May 2010 17:49:59 -0400
Received: from orsmga002.jf.intel.com ([10.7.209.21])
  by orsmga102.jf.intel.com with ESMTP; 13 May 2010 14:48:04 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.53,224,1272870000"; 
   d="scan'208";a="517743256"
Received: from rchatre-desk.amr.corp.intel.com.jf.intel.com (HELO localhost.localdomain) ([134.134.15.94])
  by orsmga002.jf.intel.com with ESMTP; 13 May 2010 14:49:12 -0700
From: Reinette Chatre <reinette.chatre at intel.com>
To: linville at tuxdriver.com
Cc: linux-wireless at vger.kernel.org, ipw3945-devel at lists.sourceforge.net, Reinette Chatre <reinette.chatre at intel.com>
Subject: [PATCH 1/2] iwlwifi: fix internal scan race
Date: Thu, 13 May 2010 14:49:44 -0700
Message-Id: <1273787385-9248-2-git-send-email-reinette.chatre at intel.com>
X-Mailer: git-send-email 1.6.3.3
In-Reply-To: <1273787385-9248-1-git-send-email-reinette.chatre at intel.com>
References: <1273787385-9248-1-git-send-email-reinette.chatre at intel.com>
X-Spam-Score: -4.2 (----)
X-Spam-Status: No
Status: RO
Content-Length: 3370
Lines: 91

From: Reinette Chatre <reinette.chatre at intel.com>

It is possible for an internal scan to race against itself if the device
does not return scan results for the first request. What happens in this
case is that the cleanup done during the abort of the first internal scan
also cleans up part of the new scan, causing it to access memory it
shouldn't.

Here are the details:
* First internal scan is triggered and scan command sent to device.
* After seven seconds there are no scan results, so the watchdog timer
  triggers a scan abort.
* The scan abort succeeds and a SCAN_COMPLETE_NOTIFICATION is received for
  the failed scan.
* During processing of SCAN_COMPLETE_NOTIFICATION we clear STATUS_SCANNING
  and queue the "scan_completed" work.
** At this time, since the problem that caused the internal scan in the
   first place is still present, a new internal scan is triggered.
The behavior at this point is a bit different between 2.6.34 and 2.6.35
since 2.6.35 has a lot of this synchronized. The rest of the race
description will thus be generalized.
** As part of preparing for the scan, "is_internal_short_scan" is set to
   true.
* At this point the completion work for the first scan is run. As part of
  this, locking is missing around the "is_internal_short_scan" variable
  and it is set to "false".
** Now the second scan runs and considers itself a real (not internal)
   scan, and thus causes problems with the wrong memory being accessed.

The fix is twofold.
* Since "is_internal_short_scan" should be protected by the mutex, fix the
  scan completion work so that changes to it are serialized.
* Do not queue a new internal scan if one is in progress.

This fixes https://bugzilla.kernel.org/show_bug.cgi?id=15824

Signed-off-by: Reinette Chatre <reinette.chatre at intel.com>
---
 drivers/net/wireless/iwlwifi/iwl-scan.c |   21 ++++++++++++++++++---
 1 files changed, 18 insertions(+), 3 deletions(-)

diff --git a/drivers/net/wireless/iwlwifi/iwl-scan.c b/drivers/net/wireless/iwlwifi/iwl-scan.c
index 2367286..a2c4855 100644
--- a/drivers/net/wireless/iwlwifi/iwl-scan.c
+++ b/drivers/net/wireless/iwlwifi/iwl-scan.c
@@ -560,6 +560,11 @@ static void iwl_bg_start_internal_scan(struct work_struct *work)
 
 	mutex_lock(&priv->mutex);
 
+	if (priv->is_internal_short_scan == true) {
+		IWL_DEBUG_SCAN(priv, "Internal scan already in progress\n");
+		goto unlock;
+	}
+
 	if (!iwl_is_ready_rf(priv)) {
 		IWL_DEBUG_SCAN(priv, "not ready or exit pending\n");
 		goto unlock;
@@ -957,17 +962,27 @@ void iwl_bg_scan_completed(struct work_struct *work)
 {
 	struct iwl_priv *priv =
 	    container_of(work, struct iwl_priv, scan_completed);
+	bool internal = false;
 
 	IWL_DEBUG_SCAN(priv, "SCAN complete scan\n");
 
 	cancel_delayed_work(&priv->scan_check);
 
-	if (!priv->is_internal_short_scan)
-		ieee80211_scan_completed(priv->hw, false);
-	else {
+	mutex_lock(&priv->mutex);
+	if (priv->is_internal_short_scan) {
 		priv->is_internal_short_scan = false;
 		IWL_DEBUG_SCAN(priv, "internal short scan completed\n");
+		internal = true;
 	}
+	mutex_unlock(&priv->mutex);
+
+	/*
+	 * Do not hold mutex here since this will cause mac80211 to call
+	 * into driver again into functions that will attempt to take
+	 * mutex.
+	 */
+	if (!internal)
+		ieee80211_scan_completed(priv->hw, false);
 
 	if (test_bit(STATUS_EXIT_PENDING, &priv->status))
 		return;
-- 
1.6.3.3




iwlwifi-fix-scan-races.patch:
 iwl-agn.c  |    1 +
 iwl-core.c |    1 -
 iwl-core.h |    2 +-
 iwl-dev.h  |    1 +
 iwl-scan.c |   31 ++++++++++++++++++++-----------
 5 files changed, 23 insertions(+), 13 deletions(-)

--- NEW FILE iwlwifi-fix-scan-races.patch ---
commit 88be026490ed89c2ffead81a52531fbac5507e01
Author: Johannes Berg <johannes.berg at intel.com>
Date:   Wed Apr 7 00:21:36 2010 -0700

    iwlwifi: fix scan races
    
    When an internal scan is started, nothing protects the
    is_internal_short_scan variable which can cause crashes,
    cf. https://bugzilla.kernel.org/show_bug.cgi?id=15667.
    Fix this by making the short scan request use the mutex
    for locking, which requires making the request go to a
    work struct so that it can sleep.
    
    Reported-by: Peter Zijlstra <peterz at infradead.org>
    Signed-off-by: Johannes Berg <johannes.berg at intel.com>
    Signed-off-by: Reinette Chatre <reinette.chatre at intel.com>

diff --git a/drivers/net/wireless/iwlwifi/iwl-agn.c b/drivers/net/wireless/iwlwifi/iwl-agn.c
index e4c2e1e..ba0fdba 100644
--- a/drivers/net/wireless/iwlwifi/iwl-agn.c
+++ b/drivers/net/wireless/iwlwifi/iwl-agn.c
@@ -3330,6 +3330,7 @@ static void iwl_cancel_deferred_work(struct iwl_priv *priv)
 
 	cancel_delayed_work_sync(&priv->init_alive_start);
 	cancel_delayed_work(&priv->scan_check);
+	cancel_work_sync(&priv->start_internal_scan);
 	cancel_delayed_work(&priv->alive_start);
 	cancel_work_sync(&priv->beacon_update);
 	del_timer_sync(&priv->statistics_periodic);
diff --git a/drivers/net/wireless/iwlwifi/iwl-core.c b/drivers/net/wireless/iwlwifi/iwl-core.c
index 894bcb8..1459cdb 100644
--- a/drivers/net/wireless/iwlwifi/iwl-core.c
+++ b/drivers/net/wireless/iwlwifi/iwl-core.c
@@ -3357,7 +3357,6 @@ static void iwl_force_rf_reset(struct iwl_priv *priv)
 	 */
 	IWL_DEBUG_INFO(priv, "perform radio reset.\n");
 	iwl_internal_short_hw_scan(priv);
-	return;
 }
 
 
diff --git a/drivers/net/wireless/iwlwifi/iwl-core.h b/drivers/net/wireless/iwlwifi/iwl-core.h
index 732590f..36940a9 100644
--- a/drivers/net/wireless/iwlwifi/iwl-core.h
+++ b/drivers/net/wireless/iwlwifi/iwl-core.h
@@ -506,7 +506,7 @@ void iwl_init_scan_params(struct iwl_priv *priv);
 int iwl_scan_cancel(struct iwl_priv *priv);
 int iwl_scan_cancel_timeout(struct iwl_priv *priv, unsigned long ms);
 int iwl_mac_hw_scan(struct ieee80211_hw *hw, struct cfg80211_scan_request *req);
-int iwl_internal_short_hw_scan(struct iwl_priv *priv);
+void iwl_internal_short_hw_scan(struct iwl_priv *priv);
 int iwl_force_reset(struct iwl_priv *priv, int mode);
 u16 iwl_fill_probe_req(struct iwl_priv *priv, struct ieee80211_mgmt *frame,
 		       const u8 *ie, int ie_len, int left);
diff --git a/drivers/net/wireless/iwlwifi/iwl-dev.h b/drivers/net/wireless/iwlwifi/iwl-dev.h
index 6054c5f..ef1720a 100644
--- a/drivers/net/wireless/iwlwifi/iwl-dev.h
+++ b/drivers/net/wireless/iwlwifi/iwl-dev.h
@@ -1296,6 +1296,7 @@ struct iwl_priv {
 	struct work_struct tt_work;
 	struct work_struct ct_enter;
 	struct work_struct ct_exit;
+	struct work_struct start_internal_scan;
 
 	struct tasklet_struct irq_tasklet;
 
diff --git a/drivers/net/wireless/iwlwifi/iwl-scan.c b/drivers/net/wireless/iwlwifi/iwl-scan.c
index bd2f7c4..5062f4e 100644
--- a/drivers/net/wireless/iwlwifi/iwl-scan.c
+++ b/drivers/net/wireless/iwlwifi/iwl-scan.c
@@ -469,6 +469,8 @@ EXPORT_SYMBOL(iwl_init_scan_params);
 
 static int iwl_scan_initiate(struct iwl_priv *priv)
 {
+	WARN_ON(!mutex_is_locked(&priv->mutex));
+
 	IWL_DEBUG_INFO(priv, "Starting scan...\n");
 	set_bit(STATUS_SCANNING, &priv->status);
 	priv->is_internal_short_scan = false;
@@ -546,24 +548,31 @@ EXPORT_SYMBOL(iwl_mac_hw_scan);
  * internal short scan, this function should only been called while associated.
  * It will reset and tune the radio to prevent possible RF related problem
  */
-int iwl_internal_short_hw_scan(struct iwl_priv *priv)
+void iwl_internal_short_hw_scan(struct iwl_priv *priv)
 {
-	int ret = 0;
+	queue_work(priv->workqueue, &priv->start_internal_scan);
+}
+
+static void iwl_bg_start_internal_scan(struct work_struct *work)
+{
+	struct iwl_priv *priv =
+		container_of(work, struct iwl_priv, start_internal_scan);
+
+	mutex_lock(&priv->mutex);
 
 	if (!iwl_is_ready_rf(priv)) {
-		ret = -EIO;
 		IWL_DEBUG_SCAN(priv, "not ready or exit pending\n");
-		goto out;
+		goto unlock;
 	}
+
 	if (test_bit(STATUS_SCANNING, &priv->status)) {
 		IWL_DEBUG_SCAN(priv, "Scan already in progress.\n");
-		ret = -EAGAIN;
-		goto out;
+		goto unlock;
 	}
+
 	if (test_bit(STATUS_SCAN_ABORTING, &priv->status)) {
 		IWL_DEBUG_SCAN(priv, "Scan request while abort pending\n");
-		ret = -EAGAIN;
-		goto out;
+		goto unlock;
 	}
 
 	priv->scan_bands = 0;
@@ -576,9 +585,8 @@ int iwl_internal_short_hw_scan(struct iwl_priv *priv)
 	set_bit(STATUS_SCANNING, &priv->status);
 	priv->is_internal_short_scan = true;
 	queue_work(priv->workqueue, &priv->request_scan);
-
-out:
-	return ret;
+ unlock:
+	mutex_unlock(&priv->mutex);
 }
 EXPORT_SYMBOL(iwl_internal_short_hw_scan);
 
@@ -964,6 +972,7 @@ void iwl_setup_scan_deferred_work(struct iwl_priv *priv)
 	INIT_WORK(&priv->scan_completed, iwl_bg_scan_completed);
 	INIT_WORK(&priv->request_scan, iwl_bg_request_scan);
 	INIT_WORK(&priv->abort_scan, iwl_bg_abort_scan);
+	INIT_WORK(&priv->start_internal_scan, iwl_bg_start_internal_scan);
 	INIT_DELAYED_WORK(&priv->scan_check, iwl_bg_scan_check);
 }
 EXPORT_SYMBOL(iwl_setup_scan_deferred_work);

iwlwifi-recalculate-average-tpt-if-not-current.patch:
 iwl-agn-rs.c |    8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

--- NEW FILE iwlwifi-recalculate-average-tpt-if-not-current.patch ---
Backport of the following upstream commit...

commit 3d79b2a9eeaa066b35c49fbb17e3156a3c482c3e
Author: Reinette Chatre <reinette.chatre at intel.com>
Date:   Mon May 3 10:55:07 2010 -0700

    iwlwifi: recalculate average tpt if not current
    
    We currently have this check as a BUG_ON, which is being hit by people.
    Previously it was handled as an error, recalculating if the value was
    not current; return to that behavior.
    
    The BUG_ON was introduced by:
    commit 3110bef78cb4282c58245bc8fd6d95d9ccb19749
    Author: Guy Cohen <guy.cohen at intel.com>
    Date:   Tue Sep 9 10:54:54 2008 +0800
    
        iwlwifi: Added support for 3 antennas
    
    ... the portion adding the BUG_ON is reverted, since we are encountering
    the error and the BUG_ON was created with the assumption that the error
    is not encountered.
    
    Signed-off-by: Reinette Chatre <reinette.chatre at intel.com>
    Signed-off-by: John W. Linville <linville at tuxdriver.com>

diff -up linux-2.6.32.noarch/drivers/net/wireless/iwlwifi/iwl-agn-rs.c.orig linux-2.6.32.noarch/drivers/net/wireless/iwlwifi/iwl-agn-rs.c
--- linux-2.6.32.noarch/drivers/net/wireless/iwlwifi/iwl-agn-rs.c.orig	2010-05-25 14:25:21.000000000 -0400
+++ linux-2.6.32.noarch/drivers/net/wireless/iwlwifi/iwl-agn-rs.c	2010-05-25 14:26:45.000000000 -0400
@@ -2195,8 +2195,12 @@ static void rs_rate_scale_perform(struct
 	/* Else we have enough samples; calculate estimate of
 	 * actual average throughput */
 
-	BUG_ON(window->average_tpt != ((window->success_ratio *
-			tbl->expected_tpt[index] + 64) / 128));
+	if (window->average_tpt != ((window->success_ratio *
+			tbl->expected_tpt[index] + 64) / 128)) {
+		IWL_ERR(priv, "expected_tpt should have been calculated by now\n");
+		window->average_tpt = ((window->success_ratio *
+					tbl->expected_tpt[index] + 64) / 128);
+	}
 
 	/* If we are searching for better modulation mode, check success. */
 	if (lq_sta->search_better_tbl &&

iwlwifi-recover_from_tx_stall.patch:
 iwl-3945.c |    1 +
 1 file changed, 1 insertion(+)

--- NEW FILE iwlwifi-recover_from_tx_stall.patch ---
https://bugzilla.redhat.com/show_bug.cgi?id=589777#c5

diff -up linux-2.6.33.noarch/drivers/net/wireless/iwlwifi/iwl-3945.c.orig linux-2.6.33.noarch/drivers/net/wireless/iwlwifi/iwl-3945.c
--- linux-2.6.33.noarch/drivers/net/wireless/iwlwifi/iwl-3945.c.orig	2010-05-19 16:07:15.000000000 -0400
+++ linux-2.6.33.noarch/drivers/net/wireless/iwlwifi/iwl-3945.c	2010-05-19 16:09:42.000000000 -0400
@@ -2794,6 +2794,7 @@ static struct iwl_lib_ops iwl3945_lib = 
 	.post_associate = iwl3945_post_associate,
 	.isr = iwl_isr_legacy,
 	.config_ap = iwl3945_config_ap,
+	.recover_from_tx_stall = iwl_bg_monitor_recover,
 };
 
 static struct iwl_hcmd_utils_ops iwl3945_hcmd_utils = {

keys-find-keyring-by-name-can-gain-access-to-the-freed-keyring.patch:
 keyring.c |   19 +++++++++----------
 1 file changed, 9 insertions(+), 10 deletions(-)

--- NEW FILE keys-find-keyring-by-name-can-gain-access-to-the-freed-keyring.patch ---
>From cea7daa3589d6b550546a8c8963599f7c1a3ae5c Mon Sep 17 00:00:00 2001
From: Toshiyuki Okajima <toshi.okajima at jp.fujitsu.com>
Date: Fri, 30 Apr 2010 14:32:13 +0100
Subject: [PATCH] KEYS: find_keyring_by_name() can gain access to a freed keyring

find_keyring_by_name() can gain access to a keyring that has had its reference
count reduced to zero, and is thus ready to be freed.  This then allows the
dead keyring to be brought back into use whilst it is being destroyed.

The following timeline illustrates the process:

|(cleaner)                           (user)
|
| free_user(user)                    sys_keyctl()
|  |                                  |
|  key_put(user->session_keyring)     keyctl_get_keyring_ID()
|  ||	//=> keyring->usage = 0        |
|  |schedule_work(&key_cleanup_task)   lookup_user_key()
|  ||                                   |
|  kmem_cache_free(,user)               |
|  .                                    |[KEY_SPEC_USER_KEYRING]
|  .                                    install_user_keyrings()
|  .                                    ||
| key_cleanup() [<= worker_thread()]    ||
|  |                                    ||
|  [spin_lock(&key_serial_lock)]        |[mutex_lock(&key_user_keyr..mutex)]
|  |                                    ||
|  atomic_read() == 0                   ||
|  |{ rb_erase(&key->serial_node,) }    ||
|  |                                    ||
|  [spin_unlock(&key_serial_lock)]      |find_keyring_by_name()
|  |                                    |||
|  keyring_destroy(keyring)             ||[read_lock(&keyring_name_lock)]
|  ||                                   |||
|  |[write_lock(&keyring_name_lock)]    ||atomic_inc(&keyring->usage)
|  |.                                   ||| *** GET freeing keyring ***
|  |.                                   ||[read_unlock(&keyring_name_lock)]
|  ||                                   ||
|  |list_del()                          |[mutex_unlock(&key_user_k..mutex)]
|  ||                                   |
|  |[write_unlock(&keyring_name_lock)]  ** INVALID keyring is returned **
|  |                                    .
|  kmem_cache_free(,keyring)            .
|                                       .
|                                       atomic_dec(&keyring->usage)
v                                         *** DESTROYED ***
TIME

If CONFIG_SLUB_DEBUG=y then we may see the following message generated:

	=============================================================================
	BUG key_jar: Poison overwritten
	-----------------------------------------------------------------------------

	INFO: 0xffff880197a7e200-0xffff880197a7e200. First byte 0x6a instead of 0x6b
	INFO: Allocated in key_alloc+0x10b/0x35f age=25 cpu=1 pid=5086
	INFO: Freed in key_cleanup+0xd0/0xd5 age=12 cpu=1 pid=10
	INFO: Slab 0xffffea000592cb90 objects=16 used=2 fp=0xffff880197a7e200 flags=0x200000000000c3
	INFO: Object 0xffff880197a7e200 @offset=512 fp=0xffff880197a7e300

	Bytes b4 0xffff880197a7e1f0:  5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a ZZZZZZZZZZZZZZZZ
	  Object 0xffff880197a7e200:  6a 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b jkkkkkkkkkkkkkkk

Alternatively, we may see a system panic happen, such as:

	BUG: unable to handle kernel NULL pointer dereference at 0000000000000001
	IP: [<ffffffff810e61a3>] kmem_cache_alloc+0x5b/0xe9
	PGD 6b2b4067 PUD 6a80d067 PMD 0
	Oops: 0000 [#1] SMP
	last sysfs file: /sys/kernel/kexec_crash_loaded
	CPU 1
	...
	Pid: 31245, comm: su Not tainted 2.6.34-rc5-nofixed-nodebug #2 D2089/PRIMERGY
	RIP: 0010:[<ffffffff810e61a3>]  [<ffffffff810e61a3>] kmem_cache_alloc+0x5b/0xe9
	RSP: 0018:ffff88006af3bd98  EFLAGS: 00010002
	RAX: 0000000000000000 RBX: 0000000000000001 RCX: ffff88007d19900b
	RDX: 0000000100000000 RSI: 00000000000080d0 RDI: ffffffff81828430
	RBP: ffffffff81828430 R08: ffff88000a293750 R09: 0000000000000000
	R10: 0000000000000001 R11: 0000000000100000 R12: 00000000000080d0
	R13: 00000000000080d0 R14: 0000000000000296 R15: ffffffff810f20ce
	FS:  00007f97116bc700(0000) GS:ffff88000a280000(0000) knlGS:0000000000000000
	CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
	CR2: 0000000000000001 CR3: 000000006a91c000 CR4: 00000000000006e0
	DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
	DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
	Process su (pid: 31245, threadinfo ffff88006af3a000, task ffff8800374414c0)
	Stack:
	 0000000512e0958e 0000000000008000 ffff880037f8d180 0000000000000001
	 0000000000000000 0000000000008001 ffff88007d199000 ffffffff810f20ce
	 0000000000008000 ffff88006af3be48 0000000000000024 ffffffff810face3
	Call Trace:
	 [<ffffffff810f20ce>] ? get_empty_filp+0x70/0x12f
	 [<ffffffff810face3>] ? do_filp_open+0x145/0x590
	 [<ffffffff810ce208>] ? tlb_finish_mmu+0x2a/0x33
	 [<ffffffff810ce43c>] ? unmap_region+0xd3/0xe2
	 [<ffffffff810e4393>] ? virt_to_head_page+0x9/0x2d
	 [<ffffffff81103916>] ? alloc_fd+0x69/0x10e
	 [<ffffffff810ef4ed>] ? do_sys_open+0x56/0xfc
	 [<ffffffff81008a02>] ? system_call_fastpath+0x16/0x1b
	Code: 0f 1f 44 00 00 49 89 c6 fa 66 0f 1f 44 00 00 65 4c 8b 04 25 60 e8 00 00 48 8b 45 00 49 01 c0 49 8b 18 48 85 db 74 0d 48 63 45 18 <48> 8b 04 03 49 89 00 eb 14 4c 89 f9 83 ca ff 44 89 e6 48 89 ef
	RIP  [<ffffffff810e61a3>] kmem_cache_alloc+0x5b/0xe9

The problem is that find_keyring_by_name() does not confirm that the
keyring is valid before accepting it.

Skipping keyrings that have been reduced to a zero count seems the way to go.
To this end, use atomic_inc_not_zero() to increment the usage count and skip
the candidate keyring if that returns false.

The following script _may_ cause the bug to happen, but there's no guarantee
as the window of opportunity is small:

	#!/bin/sh
	LOOP=100000
	USER=dummy_user
	/bin/su -c "exit;" $USER || { /usr/sbin/adduser -m $USER; add=1; }
	for ((i=0; i<LOOP; i++))
	do
		/bin/su -c "echo '$i' > /dev/null" $USER
	done
	(( add == 1 )) && /usr/sbin/userdel -r $USER
	exit

Note that the nominated user must not be in use.

An alternative way of testing this may be:

	for ((i=0; i<100000; i++))
	do
		keyctl session foo /bin/true || break
	done >&/dev/null

as that uses a keyring named "foo" rather than relying on the user and
user-session named keyrings.

Reported-by: Toshiyuki Okajima <toshi.okajima at jp.fujitsu.com>
Signed-off-by: David Howells <dhowells at redhat.com>
Tested-by: Toshiyuki Okajima <toshi.okajima at jp.fujitsu.com>
Acked-by: Serge Hallyn <serue at us.ibm.com>
Signed-off-by: James Morris <jmorris at namei.org>
---
 security/keys/keyring.c |   18 +++++++++---------
 1 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/security/keys/keyring.c b/security/keys/keyring.c
index dd7cd0f..0b27271 100644
--- a/security/keys/keyring.c
+++ b/security/keys/keyring.c
@@ -526,9 +526,8 @@ struct key *find_keyring_by_name(const char *name, bool skip_perm_check)
 	struct key *keyring;
 	int bucket;
 
-	keyring = ERR_PTR(-EINVAL);
 	if (!name)
-		goto error;
+		return ERR_PTR(-EINVAL);
 
 	bucket = keyring_hash(name);
 
@@ -555,17 +554,18 @@ struct key *find_keyring_by_name(const char *name, bool skip_perm_check)
 					   KEY_SEARCH) < 0)
 				continue;
 
-			/* we've got a match */
-			atomic_inc(&keyring->usage);
-			read_unlock(&keyring_name_lock);
-			goto error;
+			/* we've got a match but we might end up racing with
+			 * key_cleanup() if the keyring is currently 'dead'
+			 * (ie. it has a zero usage count) */
+			if (!atomic_inc_not_zero(&keyring->usage))
+				continue;
+			goto out;
 		}
 	}
 
-	read_unlock(&keyring_name_lock);
 	keyring = ERR_PTR(-ENOKEY);
-
- error:
+out:
+	read_unlock(&keyring_name_lock);
 	return keyring;
 
 } /* end find_keyring_by_name() */
-- 
1.7.1



--- NEW FILE patch-2.6.32.14.bz2.sign ---
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.9 (GNU/Linux)
Comment: See http://www.kernel.org/signature.html for info

iD8DBQBL/Z2cyGugalF9Dw4RAhrTAJ93gCefgqW5XJX2dNuOnAXkjge5TQCfRGUF
m5IXFo0cdMqQyzHyVIW2Vds=
=DZ3B
-----END PGP SIGNATURE-----


Index: .cvsignore
===================================================================
RCS file: /cvs/pkgs/rpms/kernel/devel/.cvsignore,v
retrieving revision 1.1014.2.44
retrieving revision 1.1014.2.45
diff -u -p -r1.1014.2.44 -r1.1014.2.45
--- .cvsignore	18 May 2010 21:58:01 -0000	1.1014.2.44
+++ .cvsignore	29 May 2010 14:04:42 -0000	1.1014.2.45
@@ -5,4 +5,4 @@ kernel-2.6.*.config
 temp-*
 kernel-2.6.32
 linux-2.6.32.tar.bz2
-patch-2.6.32.13.bz2
+patch-2.6.32.14.bz2


Index: kernel.spec
===================================================================
RCS file: /cvs/pkgs/rpms/kernel/devel/kernel.spec,v
retrieving revision 1.1294.2.104
retrieving revision 1.1294.2.105
diff -u -p -r1.1294.2.104 -r1.1294.2.105
--- kernel.spec	18 May 2010 21:58:02 -0000	1.1294.2.104
+++ kernel.spec	29 May 2010 14:04:45 -0000	1.1294.2.105
@@ -61,7 +61,7 @@ Summary: The Linux kernel
 %if 0%{?released_kernel}
 
 # Do we have a -stable update to apply?
-%define stable_update 13
+%define stable_update 14
 # Is it a -stable RC?
 %define stable_rc 0
 # Set rpm version accordingly
@@ -766,7 +766,6 @@ Patch3051: linux-2.6-nfs4-callback-hidde
 
 # btrfs
 Patch3100: linux-2.6-btrfs-fix-acl.patch
-Patch3101: btrfs-check-for-read-permission-on-src-file-in-clone-ioctl.patch
 
 # XFS
 
@@ -804,7 +803,6 @@ Patch12391: iwlwifi-reset-card-during-pr
 
 # patches from Intel to address intermittent firmware failures with iwlagn
 Patch12401: iwlwifi_-check-for-aggregation-frame-and-queue.patch
-Patch12403: iwlwifi_-clear-all-the-stop_queue-flag-after-load-firmware.patch
 Patch12404: iwlwifi_-add-function-to-reset_tune-radio-if-needed.patch
 Patch12405: iwlwifi_-Logic-to-control-how-frequent-radio-should-be-reset-if-needed.patch
 Patch12406: iwlwifi_-Tune-radio-to-prevent-unexpected-behavior.patch
@@ -821,8 +819,18 @@ Patch12416: iwlwifi_-iwl_good_ack_health
 # fix possible corruption with ssd
 Patch12700: ext4-issue-discard-operation-before-releasing-blocks.patch
 
-# Revert "ath9k: fix lockdep warning when unloading module"
-Patch12900: revert-ath9k_-fix-lockdep-warning-when-unloading-module.patch
+# iwlwifi: fix scan races
+Patch12910: iwlwifi-fix-scan-races.patch
+# iwlwifi: fix internal scan race
+Patch12911: iwlwifi-fix-internal-scan-race.patch
+# iwlwifi: recover_from_tx_stall
+Patch12912: iwlwifi-recover_from_tx_stall.patch
+
+# iwlwifi: recalculate average tpt if not current
+Patch12920: iwlwifi-recalculate-average-tpt-if-not-current.patch
+
+# CVE-2010-1437
+Patch13000: keys-find-keyring-by-name-can-gain-access-to-the-freed-keyring.patch
 
 Patch19997: xen.pvops.pre.patch
 Patch19998: xen.pvops.patch
@@ -1297,7 +1305,6 @@ ApplyPatch linux-2.6-execshield.patch
 
 # btrfs
 ApplyPatch linux-2.6-btrfs-fix-acl.patch
-ApplyPatch btrfs-check-for-read-permission-on-src-file-in-clone-ioctl.patch
 
 # eCryptfs
 
@@ -1484,7 +1491,6 @@ ApplyPatch iwlwifi-reset-card-during-pro
 
 # patches from Intel to address intermittent firmware failures with iwlagn
 ApplyPatch iwlwifi_-check-for-aggregation-frame-and-queue.patch
-ApplyPatch iwlwifi_-clear-all-the-stop_queue-flag-after-load-firmware.patch
 ApplyPatch iwlwifi_-add-function-to-reset_tune-radio-if-needed.patch
 ApplyPatch iwlwifi_-Logic-to-control-how-frequent-radio-should-be-reset-if-needed.patch
 ApplyPatch iwlwifi_-Tune-radio-to-prevent-unexpected-behavior.patch
@@ -1501,8 +1507,18 @@ ApplyPatch iwlwifi_-iwl_good_ack_health-
 # fix possible corruption with ssd
 ApplyPatch ext4-issue-discard-operation-before-releasing-blocks.patch
 
-# Revert "ath9k: fix lockdep warning when unloading module"
-ApplyPatch revert-ath9k_-fix-lockdep-warning-when-unloading-module.patch
+# iwlwifi: fix scan races
+ApplyPatch iwlwifi-fix-scan-races.patch
+# iwlwifi: fix internal scan race
+ApplyPatch iwlwifi-fix-internal-scan-race.patch
+# iwlwifi: recover_from_tx_stall
+ApplyPatch iwlwifi-recover_from_tx_stall.patch
+
+# iwlwifi: recalculate average tpt if not current
+ApplyPatch iwlwifi-recalculate-average-tpt-if-not-current.patch
+
+# CVE-2010-1437
+ApplyPatch keys-find-keyring-by-name-can-gain-access-to-the-freed-keyring.patch
 
 ApplyPatch xen.pvops.pre.patch
 ApplyPatch xen.pvops.patch
@@ -2159,7 +2175,30 @@ fi
 # plz don't put in a version string unless you're going to tag
 # and build.
 
+
+
 %changelog
+* Sat May 29 2010 Michael Young <m.a.young at durham.ac.uk>
+- update pvops
+
+* Thu May 27 2010 Chuck Ebbert <cebbert at redhat.com>  2.6.32.14-127
+- CVE-2010-1437: keyrings: find_keyring_by_name() can gain the freed keyring
+
+* Wed May 26 2010 Chuck Ebbert <cebbert at redhat.com>  2.6.32.14-126
+- Linux 2.6.32.14
+- Drop merged patches:
+    btrfs-check-for-read-permission-on-src-file-in-clone-ioctl.patch
+    iwlwifi_-clear-all-the-stop_queue-flag-after-load-firmware.patch
+    revert-ath9k_-fix-lockdep-warning-when-unloading-module.patch
+
+* Mon May 24 2010 John W. Linville <linville at redhat.com>
+- iwlwifi: recalculate average tpt if not current (#588021)
+
+* Mon May 24 2010 John W. Linville <linville at redhat.com>
+- iwlwifi: recover_from_tx_stall (#589777)
+- iwlwifi: fix scan races (#592011)
+- iwlwifi: fix internal scan race (#592011)
+
 * Tue May 18 2010 Michael Young <m.a.young at durham.ac.uk>
 - update pvops
 

linux-2.6-utrace.patch:
 Documentation/DocBook/Makefile    |    2 
 Documentation/DocBook/utrace.tmpl |  590 +++++++++
 fs/proc/array.c                   |    3 
 include/linux/sched.h             |    5 
 include/linux/tracehook.h         |   87 +
 include/linux/utrace.h            |  692 ++++++++++
 init/Kconfig                      |    9 
 kernel/Makefile                   |    1 
 kernel/fork.c                     |    3 
 kernel/ptrace.c                   |   14 
 kernel/utrace.c                   | 2436 ++++++++++++++++++++++++++++++++++++++
 11 files changed, 3840 insertions(+), 2 deletions(-)

Index: linux-2.6-utrace.patch
===================================================================
RCS file: /cvs/pkgs/rpms/kernel/devel/linux-2.6-utrace.patch,v
retrieving revision 1.107.6.9
retrieving revision 1.107.6.10
diff -u -p -r1.107.6.9 -r1.107.6.10
--- linux-2.6-utrace.patch	18 May 2010 21:58:03 -0000	1.107.6.9
+++ linux-2.6-utrace.patch	29 May 2010 14:04:48 -0000	1.107.6.10
@@ -656,9 +656,9 @@ index 822c2d5..9069c91 100644  
  #include <linux/ptrace.h>
  #include <linux/tracehook.h>
 +#include <linux/utrace.h>
- #include <linux/swapops.h>
  
  #include <asm/pgtable.h>
+ #include <asm/processor.h>
 @@ -189,6 +190,8 @@ static inline void task_state(struct seq
  		cred->uid, cred->euid, cred->suid, cred->fsuid,
  		cred->gid, cred->egid, cred->sgid, cred->fsgid);


Index: sources
===================================================================
RCS file: /cvs/pkgs/rpms/kernel/devel/sources,v
retrieving revision 1.976.2.45
retrieving revision 1.976.2.46
diff -u -p -r1.976.2.45 -r1.976.2.46
--- sources	18 May 2010 21:58:04 -0000	1.976.2.45
+++ sources	29 May 2010 14:04:52 -0000	1.976.2.46
@@ -1,2 +1,2 @@
 260551284ac224c3a43c4adac7df4879  linux-2.6.32.tar.bz2
-ba6abb1ffee513a1d4f831599ddae490  patch-2.6.32.13.bz2
+90f0ec928aff643f05a8b98fad54b10c  patch-2.6.32.14.bz2


Index: upstream
===================================================================
RCS file: /cvs/pkgs/rpms/kernel/devel/upstream,v
retrieving revision 1.888.2.44
retrieving revision 1.888.2.45
diff -u -p -r1.888.2.44 -r1.888.2.45
--- upstream	18 May 2010 21:58:04 -0000	1.888.2.44
+++ upstream	29 May 2010 14:04:52 -0000	1.888.2.45
@@ -1,2 +1,2 @@
 linux-2.6.32.tar.bz2
-patch-2.6.32.13.bz2
+patch-2.6.32.14.bz2

xen.pvops.patch:
 Documentation/x86/x86_64/boot-options.txt       |    6 
 arch/ia64/include/asm/dma-mapping.h             |    2 
 arch/ia64/include/asm/swiotlb.h                 |    2 
 arch/ia64/include/asm/xen/events.h              |    4 
 arch/ia64/kernel/pci-swiotlb.c                  |    4 
 arch/powerpc/include/asm/dma-mapping.h          |    2 
 arch/powerpc/kernel/setup_32.c                  |    2 
 arch/powerpc/kernel/setup_64.c                  |    2 
 arch/x86/Kconfig                                |    4 
 arch/x86/include/asm/amd_iommu.h                |    4 
 arch/x86/include/asm/calgary.h                  |    2 
 arch/x86/include/asm/dma-mapping.h              |    7 
 arch/x86/include/asm/gart.h                     |    9 
 arch/x86/include/asm/hpet.h                     |    2 
 arch/x86/include/asm/hugetlb.h                  |   30 
 arch/x86/include/asm/io.h                       |   15 
 arch/x86/include/asm/io_apic.h                  |    3 
 arch/x86/include/asm/iommu.h                    |    2 
 arch/x86/include/asm/irq_vectors.h              |   14 
 arch/x86/include/asm/microcode.h                |    9 
 arch/x86/include/asm/mmu.h                      |    3 
 arch/x86/include/asm/paravirt.h                 |    7 
 arch/x86/include/asm/paravirt_types.h           |    2 
 arch/x86/include/asm/pci.h                      |    8 
 arch/x86/include/asm/pci_x86.h                  |    2 
 arch/x86/include/asm/pgtable.h                  |    6 
 arch/x86/include/asm/pgtable_64.h               |    2 
 arch/x86/include/asm/processor.h                |    4 
 arch/x86/include/asm/swiotlb.h                  |   11 
 arch/x86/include/asm/syscalls.h                 |    8 
 arch/x86/include/asm/tlbflush.h                 |    6 
 arch/x86/include/asm/x86_init.h                 |   10 
 arch/x86/include/asm/xen/hypercall.h            |   50 
 arch/x86/include/asm/xen/hypervisor.h           |   25 
 arch/x86/include/asm/xen/interface.h            |    8 
 arch/x86/include/asm/xen/interface_32.h         |    5 
 arch/x86/include/asm/xen/interface_64.h         |   13 
 arch/x86/include/asm/xen/iommu.h                |   12 
 arch/x86/include/asm/xen/page.h                 |   16 
 arch/x86/include/asm/xen/pci.h                  |  104 +
 arch/x86/include/asm/xen/swiotlb-xen.h          |   14 
 arch/x86/kernel/Makefile                        |    1 
 arch/x86/kernel/acpi/boot.c                     |   21 
 arch/x86/kernel/acpi/processor.c                |   17 
 arch/x86/kernel/acpi/sleep.c                    |    2 
 arch/x86/kernel/amd_iommu.c                     |   23 
 arch/x86/kernel/amd_iommu_init.c                |   25 
 arch/x86/kernel/aperture_64.c                   |    4 
 arch/x86/kernel/apic/io_apic.c                  |   53 
 arch/x86/kernel/cpu/mtrr/Makefile               |    1 
 arch/x86/kernel/cpu/mtrr/amd.c                  |    6 
 arch/x86/kernel/cpu/mtrr/centaur.c              |    6 
 arch/x86/kernel/cpu/mtrr/cyrix.c                |    6 
 arch/x86/kernel/cpu/mtrr/generic.c              |   13 
 arch/x86/kernel/cpu/mtrr/main.c                 |   20 
 arch/x86/kernel/cpu/mtrr/mtrr.h                 |   11 
 arch/x86/kernel/cpu/mtrr/xen.c                  |  109 +
 arch/x86/kernel/crash.c                         |    1 
 arch/x86/kernel/hpet.c                          |    2 
 arch/x86/kernel/ioport.c                        |   40 
 arch/x86/kernel/ldt.c                           |    3 
 arch/x86/kernel/microcode_core.c                |    6 
 arch/x86/kernel/microcode_xen.c                 |  201 ++
 arch/x86/kernel/paravirt.c                      |    1 
 arch/x86/kernel/pci-calgary_64.c                |   73 -
 arch/x86/kernel/pci-dma.c                       |   38 
 arch/x86/kernel/pci-gart_64.c                   |   40 
 arch/x86/kernel/pci-nommu.c                     |   11 
 arch/x86/kernel/pci-swiotlb.c                   |   21 
 arch/x86/kernel/process.c                       |   27 
 arch/x86/kernel/reboot.c                        |    4 
 arch/x86/kernel/setup.c                         |    6 
 arch/x86/kernel/x86_init.c                      |    8 
 arch/x86/mm/Makefile                            |    5 
 arch/x86/mm/gup.c                               |    5 
 arch/x86/mm/pat.c                               |    2 
 arch/x86/mm/pgtable.c                           |   19 
 arch/x86/mm/tlb.c                               |   37 
 arch/x86/pci/Makefile                           |    1 
 arch/x86/pci/common.c                           |   18 
 arch/x86/pci/i386.c                             |    2 
 arch/x86/pci/init.c                             |    6 
 arch/x86/pci/xen.c                              |  154 ++
 arch/x86/xen/Kconfig                            |   37 
 arch/x86/xen/Makefile                           |    5 
 arch/x86/xen/apic.c                             |   33 
 arch/x86/xen/enlighten.c                        |  254 +++
 arch/x86/xen/mmu.c                              |  527 +++++++
 arch/x86/xen/pci-swiotlb-xen.c                  |   52 
 arch/x86/xen/pci.c                              |  296 ++++
 arch/x86/xen/setup.c                            |  125 +
 arch/x86/xen/smp.c                              |    9 
 arch/x86/xen/suspend.c                          |    4 
 arch/x86/xen/time.c                             |   16 
 arch/x86/xen/vga.c                              |   67 
 arch/x86/xen/xen-ops.h                          |   20 
 block/blk-core.c                                |    2 
 drivers/acpi/Makefile                           |    1 
 drivers/acpi/acpi_memhotplug.c                  |   19 
 drivers/acpi/acpica/hwsleep.c                   |   16 
 drivers/acpi/processor_core.c                   |   35 
 drivers/acpi/processor_idle.c                   |   20 
 drivers/acpi/processor_perflib.c                |    4 
 drivers/acpi/processor_xen.c                    |  616 ++++++++
 drivers/acpi/sleep.c                            |   19 
 drivers/block/Kconfig                           |    1 
 drivers/block/xen-blkfront.c                    |  346 +++-
 drivers/char/agp/intel-agp.c                    |   23 
 drivers/char/hvc_xen.c                          |  101 -
 drivers/gpu/drm/drm_drv.c                       |    2 
 drivers/gpu/drm/drm_gem.c                       |    2 
 drivers/gpu/drm/drm_scatter.c                   |   67 
 drivers/gpu/drm/ttm/ttm_bo_vm.c                 |    2 
 drivers/input/xen-kbdfront.c                    |    7 
 drivers/net/Kconfig                             |    1 
 drivers/net/xen-netfront.c                      |   11 
 drivers/pci/Kconfig                             |   10 
 drivers/pci/Makefile                            |    4 
 drivers/pci/bus.c                               |    1 
 drivers/pci/dmar.c                              |    7 
 drivers/pci/intel-iommu.c                       |    6 
 drivers/pci/msi.c                               |   17 
 drivers/pci/xen-iommu.c                         |  271 +++
 drivers/pci/xen-pcifront.c                      | 1156 +++++++++++++++
 drivers/video/Kconfig                           |    1 
 drivers/video/broadsheetfb.c                    |    2 
 drivers/video/fb_defio.c                        |    4 
 drivers/video/hecubafb.c                        |    2 
 drivers/video/metronomefb.c                     |    2 
 drivers/video/xen-fbfront.c                     |    9 
 drivers/xen/Kconfig                             |  138 +
 drivers/xen/Makefile                            |   29 
 drivers/xen/acpi.c                              |   23 
 drivers/xen/acpi_processor.c                    |  417 +++++
 drivers/xen/balloon.c                           |  264 ++-
 drivers/xen/biomerge.c                          |   14 
 drivers/xen/blkback/Makefile                    |    4 
 drivers/xen/blkback/blkback-pagemap.c           |  109 +
 drivers/xen/blkback/blkback-pagemap.h           |   36 
 drivers/xen/blkback/blkback.c                   |  675 +++++++++
 drivers/xen/blkback/common.h                    |  143 +
 drivers/xen/blkback/interface.c                 |  186 ++
 drivers/xen/blkback/vbd.c                       |  161 ++
 drivers/xen/blkback/xenbus.c                    |  546 +++++++
 drivers/xen/blktap/Makefile                     |    3 
 drivers/xen/blktap/blktap.h                     |  253 +++
 drivers/xen/blktap/control.c                    |  284 +++
 drivers/xen/blktap/device.c                     | 1138 +++++++++++++++
 drivers/xen/blktap/request.c                    |  297 ++++
 drivers/xen/blktap/ring.c                       |  615 ++++++++
 drivers/xen/blktap/sysfs.c                      |  451 ++++++
 drivers/xen/blktap/wait_queue.c                 |   40 
 drivers/xen/cpu_hotplug.c                       |    1 
 drivers/xen/events.c                            |  529 ++++++-
 drivers/xen/evtchn.c                            |   83 -
 drivers/xen/features.c                          |    2 
 drivers/xen/gntdev.c                            |  626 ++++++++
 drivers/xen/grant-table.c                       |  176 ++
 drivers/xen/manage.c                            |  103 +
 drivers/xen/mce.c                               |  216 ++
 drivers/xen/netback/Makefile                    |    3 
 drivers/xen/netback/common.h                    |  316 ++++
 drivers/xen/netback/interface.c                 |  437 ++++++
 drivers/xen/netback/netback.c                   | 1746 ++++++++++++++++++++++++
 drivers/xen/netback/xenbus.c                    |  528 +++++++
 drivers/xen/pci.c                               |  124 +
 drivers/xen/pciback/Makefile                    |   17 
 drivers/xen/pciback/conf_space.c                |  435 +++++
 drivers/xen/pciback/conf_space.h                |  126 +
 drivers/xen/pciback/conf_space_capability.c     |   66 
 drivers/xen/pciback/conf_space_capability.h     |   26 
 drivers/xen/pciback/conf_space_capability_msi.c |  110 +
 drivers/xen/pciback/conf_space_capability_pm.c  |  113 +
 drivers/xen/pciback/conf_space_capability_vpd.c |   40 
 drivers/xen/pciback/conf_space_header.c         |  385 +++++
 drivers/xen/pciback/conf_space_quirks.c         |  140 +
 drivers/xen/pciback/conf_space_quirks.h         |   35 
 drivers/xen/pciback/controller.c                |  442 ++++++
 drivers/xen/pciback/passthrough.c               |  178 ++
 drivers/xen/pciback/pci_stub.c                  | 1370 ++++++++++++++++++
 drivers/xen/pciback/pciback.h                   |  142 +
 drivers/xen/pciback/pciback_ops.c               |  242 +++
 drivers/xen/pciback/slot.c                      |  191 ++
 drivers/xen/pciback/vpci.c                      |  244 +++
 drivers/xen/pciback/xenbus.c                    |  722 +++++++++
 drivers/xen/pcpu.c                              |  420 +++++
 drivers/xen/platform-pci.c                      |  259 +++
 drivers/xen/sys-hypervisor.c                    |    1 
 drivers/xen/xen_acpi_memhotplug.c               |  209 ++
 drivers/xen/xenbus/Makefile                     |    5 
 drivers/xen/xenbus/xenbus_client.c              |   92 -
 drivers/xen/xenbus/xenbus_probe.c               |  409 +----
 drivers/xen/xenbus/xenbus_probe.h               |   29 
 drivers/xen/xenbus/xenbus_probe_backend.c       |  293 ++++
 drivers/xen/xenbus/xenbus_probe_frontend.c      |  314 ++++
 drivers/xen/xenbus/xenbus_xs.c                  |   57 
 drivers/xen/xenfs/Makefile                      |    3 
 drivers/xen/xenfs/privcmd.c                     |  404 +++++
 drivers/xen/xenfs/super.c                       |  100 +
 drivers/xen/xenfs/xenfs.h                       |    3 
 drivers/xen/xenfs/xenstored.c                   |   67 
 include/acpi/acpi_drivers.h                     |   21 
 include/acpi/processor.h                        |   22 
 include/asm-generic/pci.h                       |    2 
 include/drm/drmP.h                              |    2 
 include/linux/bootmem.h                         |    1 
 include/linux/dmar.h                            |   15 
 include/linux/fb.h                              |    1 
 include/linux/interrupt.h                       |    1 
 include/linux/mm.h                              |   15 
 include/linux/page-flags.h                      |   20 
 include/linux/swiotlb.h                         |  115 +
 include/linux/vmalloc.h                         |    2 
 include/xen/Kbuild                              |    1 
 include/xen/acpi.h                              |  106 +
 include/xen/balloon.h                           |    8 
 include/xen/blkif.h                             |  123 +
 include/xen/events.h                            |   40 
 include/xen/gntdev.h                            |  119 +
 include/xen/grant_table.h                       |   44 
 include/xen/hvm.h                               |   32 
 include/xen/interface/features.h                |    3 
 include/xen/interface/grant_table.h             |   23 
 include/xen/interface/hvm/hvm_op.h              |   72 
 include/xen/interface/hvm/params.h              |  112 +
 include/xen/interface/io/pciif.h                |  124 +
 include/xen/interface/io/ring.h                 |    3 
 include/xen/interface/io/xenbus.h               |    8 
 include/xen/interface/memory.h                  |   92 +
 include/xen/interface/physdev.h                 |   68 
 include/xen/interface/platform.h                |  381 +++++
 include/xen/interface/platform_pci.h            |   45 
 include/xen/interface/xen-mca.h                 |  429 +++++
 include/xen/interface/xen.h                     |   45 
 include/xen/pcpu.h                              |   30 
 include/xen/platform_pci.h                      |   47 
 include/xen/privcmd.h                           |   80 +
 include/xen/xen-ops.h                           |   13 
 include/xen/xen.h                               |   32 
 include/xen/xenbus.h                            |    3 
 kernel/irq/manage.c                             |    3 
 lib/Makefile                                    |    3 
 lib/swiotlb-core.c                              |  572 +++++++
 lib/swiotlb-xen.c                               |  504 ++++++
 lib/swiotlb.c                                   |  551 -------
 mm/bootmem.c                                    |   24 
 mm/memory.c                                     |   43 
 mm/mmap.c                                       |   12 
 mm/page_alloc.c                                 |   14 
 mm/vmalloc.c                                    |    7 
 250 files changed, 26589 insertions(+), 1571 deletions(-)

Index: xen.pvops.patch
===================================================================
RCS file: /cvs/pkgs/rpms/kernel/devel/Attic/xen.pvops.patch,v
retrieving revision 1.1.2.67
retrieving revision 1.1.2.68
diff -u -p -r1.1.2.67 -r1.1.2.68
--- xen.pvops.patch	18 May 2010 21:58:04 -0000	1.1.2.67
+++ xen.pvops.patch	29 May 2010 14:04:52 -0000	1.1.2.68
@@ -2714,7 +2714,7 @@ index aaa6b78..7d2829d 100644
  	}
  }
 diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
-index d0ba107..0b4f9d1 100644
+index 5fd5b07..11d8667 100644
 --- a/arch/x86/kernel/process.c
 +++ b/arch/x86/kernel/process.c
 @@ -73,16 +73,12 @@ void exit_thread(void)
@@ -5120,6 +5120,19 @@ index 360f8d8..632ea35 100644
  	per_cpu(cpu_state, cpu) = CPU_UP_PREPARE;
  
  	/* make sure interrupts start blocked */
+diff --git a/arch/x86/xen/suspend.c b/arch/x86/xen/suspend.c
+index 987267f..a9c6611 100644
+--- a/arch/x86/xen/suspend.c
++++ b/arch/x86/xen/suspend.c
+@@ -60,6 +60,6 @@ static void xen_vcpu_notify_restore(void *data)
+ 
+ void xen_arch_resume(void)
+ {
+-	smp_call_function(xen_vcpu_notify_restore,
+-			       (void *)CLOCK_EVT_NOTIFY_RESUME, 1);
++	on_each_cpu(xen_vcpu_notify_restore,
++		    (void *)CLOCK_EVT_NOTIFY_RESUME, 1);
+ }
 diff --git a/arch/x86/xen/time.c b/arch/x86/xen/time.c
 index 9d1f853..af5463a 100644
 --- a/arch/x86/xen/time.c
@@ -5370,7 +5383,7 @@ index cc22f9a..747d96f 100644
  	status = acpi_hw_write_pm1_control(pm1a_control, pm1b_control);
  	if (ACPI_FAILURE(status)) {
 diff --git a/drivers/acpi/processor_core.c b/drivers/acpi/processor_core.c
-index ec742a4..4ccecf6 100644
+index ec742a4..492a899 100644
 --- a/drivers/acpi/processor_core.c
 +++ b/drivers/acpi/processor_core.c
 @@ -58,6 +58,7 @@
@@ -5429,7 +5442,23 @@ index ec742a4..4ccecf6 100644
  {
  
  	if (acpi_device_dir(device)) {
-@@ -711,7 +710,7 @@ static int acpi_processor_get_info(struct acpi_device *device)
+@@ -408,15 +407,6 @@ static int acpi_processor_remove_fs(struct acpi_device *device)
+ 
+ 	return 0;
+ }
+-#else
+-static inline int acpi_processor_add_fs(struct acpi_device *device)
+-{
+-	return 0;
+-}
+-static inline int acpi_processor_remove_fs(struct acpi_device *device)
+-{
+-	return 0;
+-}
+ #endif
+ 
+ /* Use the acpiid in MADT to map cpus in case of SMP */
+@@ -711,7 +701,7 @@ static int acpi_processor_get_info(struct acpi_device *device)
  
  static DEFINE_PER_CPU(void *, processor_device_array);
  
@@ -5438,7 +5467,7 @@ index ec742a4..4ccecf6 100644
  {
  	struct acpi_processor *pr = acpi_driver_data(device);
  	int saved;
-@@ -879,7 +878,7 @@ err_free_cpumask:
+@@ -879,7 +869,7 @@ err_free_cpumask:
  	return result;
  }
  
@@ -5447,7 +5476,7 @@ index ec742a4..4ccecf6 100644
  {
  	struct acpi_processor *pr = NULL;
  
-@@ -1154,7 +1153,11 @@ static int __init acpi_processor_init(void)
+@@ -1154,7 +1144,11 @@ static int __init acpi_processor_init(void)
  	if (result < 0)
  		goto out_proc;
  
@@ -5460,7 +5489,7 @@ index ec742a4..4ccecf6 100644
  	if (result < 0)
  		goto out_cpuidle;
  
-@@ -1190,7 +1193,10 @@ static void __exit acpi_processor_exit(void)
+@@ -1190,7 +1184,10 @@ static void __exit acpi_processor_exit(void)
  
  	acpi_processor_uninstall_hotplug_notify();
  
@@ -6176,7 +6205,7 @@ index 0000000..2f37c9c
 +	acpi_bus_unregister_driver(&xen_acpi_processor_driver);
 +}
 diff --git a/drivers/acpi/sleep.c b/drivers/acpi/sleep.c
-index 7c85265..882ed92 100644
+index 9ed9292..3770a02 100644
 --- a/drivers/acpi/sleep.c
 +++ b/drivers/acpi/sleep.c
 @@ -19,6 +19,8 @@
@@ -17276,10 +17305,10 @@ index 0000000..e346e81
 +xen-netback-y := netback.o xenbus.o interface.o
 diff --git a/drivers/xen/netback/common.h b/drivers/xen/netback/common.h
 new file mode 100644
-index 0000000..51f97c0
+index 0000000..81050a6
 --- /dev/null
 +++ b/drivers/xen/netback/common.h
-@@ -0,0 +1,227 @@
+@@ -0,0 +1,316 @@
 +/******************************************************************************
 + * arch/xen/drivers/netif/backend/common.h
 + *
@@ -17340,6 +17369,7 @@ index 0000000..51f97c0
 +struct xen_netif {
 +	/* Unique identifier for this interface. */
 +	domid_t          domid;
++	int              group;
 +	unsigned int     handle;
 +
 +	u8               fe_dev_addr[6];
@@ -17506,13 +17536,101 @@ index 0000000..51f97c0
 +	return netif->features & NETIF_F_SG;
 +}
 +
++struct pending_tx_info {
++	struct xen_netif_tx_request req;
++	struct xen_netif *netif;
++};
++typedef unsigned int pending_ring_idx_t;
++
++struct netbk_rx_meta {
++	skb_frag_t frag;
++	int id;
++};
++
++struct netbk_tx_pending_inuse {
++	struct list_head list;
++	unsigned long alloc_time;
++};
++
++#define MAX_PENDING_REQS 256
++
++/* extra field used in struct page */
++union page_ext {
++	struct {
++#if BITS_PER_LONG < 64
++#define IDX_WIDTH   8
++#define GROUP_WIDTH (BITS_PER_LONG - IDX_WIDTH)
++		unsigned int group:GROUP_WIDTH;
++		unsigned int idx:IDX_WIDTH;
++#else
++		unsigned int group, idx;
++#endif
++	} e;
++	void *mapping;
++};
++
++struct xen_netbk {
++	union {
++		struct {
++			struct tasklet_struct net_tx_tasklet;
++			struct tasklet_struct net_rx_tasklet;
++		} tasklet;
++
++		struct {
++			wait_queue_head_t netbk_action_wq;
++			struct task_struct *task;
++		} kthread;
++	};
++
++	struct sk_buff_head rx_queue;
++	struct sk_buff_head tx_queue;
++
++	struct timer_list net_timer;
++	struct timer_list netbk_tx_pending_timer;
++
++	struct page **mmap_pages;
++
++	pending_ring_idx_t pending_prod;
++	pending_ring_idx_t pending_cons;
++	pending_ring_idx_t dealloc_prod;
++	pending_ring_idx_t dealloc_cons;
++
++	struct list_head pending_inuse_head;
++	struct list_head net_schedule_list;
++
++	/* Protect the net_schedule_list in netif. */
++	spinlock_t net_schedule_list_lock;
++
++	atomic_t netfront_count;
++
++	struct pending_tx_info pending_tx_info[MAX_PENDING_REQS];
++	struct netbk_tx_pending_inuse pending_inuse[MAX_PENDING_REQS];
++	struct gnttab_unmap_grant_ref tx_unmap_ops[MAX_PENDING_REQS];
++	struct gnttab_map_grant_ref tx_map_ops[MAX_PENDING_REQS];
++
++	grant_handle_t grant_tx_handle[MAX_PENDING_REQS];
++	u16 pending_ring[MAX_PENDING_REQS];
++	u16 dealloc_ring[MAX_PENDING_REQS];
++
++	struct multicall_entry rx_mcl[NET_RX_RING_SIZE+3];
++	struct mmu_update rx_mmu[NET_RX_RING_SIZE];
++	struct gnttab_transfer grant_trans_op[NET_RX_RING_SIZE];
++	struct gnttab_copy grant_copy_op[NET_RX_RING_SIZE];
++	unsigned char rx_notify[NR_IRQS];
++	u16 notify_list[NET_RX_RING_SIZE];
++	struct netbk_rx_meta meta[NET_RX_RING_SIZE];
++};
++
++extern struct xen_netbk *xen_netbk;
++extern int xen_netbk_group_nr;
++
 +#endif /* __NETIF__BACKEND__COMMON_H__ */
 diff --git a/drivers/xen/netback/interface.c b/drivers/xen/netback/interface.c
 new file mode 100644
-index 0000000..086d939
+index 0000000..172ef4c
 --- /dev/null
 +++ b/drivers/xen/netback/interface.c
-@@ -0,0 +1,410 @@
+@@ -0,0 +1,437 @@
 +/******************************************************************************
 + * arch/xen/drivers/netif/backend/interface.c
 + *
@@ -17569,8 +17687,33 @@ index 0000000..086d939
 +static unsigned long netbk_queue_length = 32;
 +module_param_named(queue_length, netbk_queue_length, ulong, 0644);
 +
++static void netbk_add_netif(struct xen_netbk *netbk, int group_nr,
++			   struct xen_netif *netif)
++{
++	int i;
++	int min_netfront_count;
++	int min_group = 0;
++	min_netfront_count = atomic_read(&netbk[0].netfront_count);
++	for (i = 0; i < group_nr; i++) {
++		int netfront_count = atomic_read(&netbk[i].netfront_count);
++		if (netfront_count < min_netfront_count) {
++			min_group = i;
++			min_netfront_count = netfront_count;
++		}
++	}
++
++	netif->group = min_group;
++	atomic_inc(&netbk[netif->group].netfront_count);
++}
++
++static void netbk_remove_netif(struct xen_netbk *netbk, struct xen_netif *netif)
++{
++	atomic_dec(&netbk[netif->group].netfront_count);
++}
++
 +static void __netif_up(struct xen_netif *netif)
 +{
++	netbk_add_netif(xen_netbk, xen_netbk_group_nr, netif);
 +	enable_irq(netif->irq);
 +	netif_schedule_work(netif);
 +}
@@ -17579,6 +17722,7 @@ index 0000000..086d939
 +{
 +	disable_irq(netif->irq);
 +	netif_deschedule_work(netif);
++	netbk_remove_netif(xen_netbk, netif);
 +}
 +
 +static int net_open(struct net_device *dev)
@@ -17729,6 +17873,7 @@ index 0000000..086d939
 +	netif = netdev_priv(dev);
 +	memset(netif, 0, sizeof(*netif));
 +	netif->domid  = domid;
++	netif->group  = -1;
 +	netif->handle = handle;
 +	netif->features = NETIF_F_SG;
 +	atomic_set(&netif->refcnt, 1);
@@ -17925,10 +18070,10 @@ index 0000000..086d939
 +}
 diff --git a/drivers/xen/netback/netback.c b/drivers/xen/netback/netback.c
 new file mode 100644
-index 0000000..5dc4f98
+index 0000000..b23fab0
 --- /dev/null
 +++ b/drivers/xen/netback/netback.c
-@@ -0,0 +1,1609 @@
+@@ -0,0 +1,1746 @@
 +/******************************************************************************
 + * drivers/xen/netback/netback.c
 + *
@@ -17969,6 +18114,7 @@ index 0000000..5dc4f98
 +
 +#include <linux/tcp.h>
 +#include <linux/udp.h>
++#include <linux/kthread.h>
 +
 +#include <xen/balloon.h>
 +#include <xen/events.h>
@@ -17979,18 +18125,10 @@ index 0000000..5dc4f98
 +
 +/*define NETBE_DEBUG_INTERRUPT*/
 +
-+struct netbk_rx_meta {
-+	skb_frag_t frag;
-+	int id;
-+};
-+
-+struct netbk_tx_pending_inuse {
-+	struct list_head list;
-+	unsigned long alloc_time;
-+};
++struct xen_netbk *xen_netbk;
++int xen_netbk_group_nr;
 +
-+
-+static void netif_idx_release(u16 pending_idx);
++static void netif_idx_release(struct xen_netbk *netbk, u16 pending_idx);
 +static void make_tx_response(struct xen_netif *netif,
 +			     struct xen_netif_tx_request *txp,
 +			     s8       st);
@@ -18001,47 +18139,44 @@ index 0000000..5dc4f98
 +					     u16      size,
 +					     u16      flags);
 +
-+static void net_tx_action(unsigned long unused);
-+static DECLARE_TASKLET(net_tx_tasklet, net_tx_action, 0);
-+
-+static void net_rx_action(unsigned long unused);
-+static DECLARE_TASKLET(net_rx_tasklet, net_rx_action, 0);
-+
-+static struct timer_list net_timer;
-+static struct timer_list netbk_tx_pending_timer;
-+
-+#define MAX_PENDING_REQS 256
++static void net_tx_action(unsigned long data);
 +
-+static struct sk_buff_head rx_queue;
++static void net_rx_action(unsigned long data);
 +
-+static struct page **mmap_pages;
-+static inline unsigned long idx_to_pfn(unsigned int idx)
++static inline unsigned long idx_to_pfn(struct xen_netbk *netbk,
++				       unsigned int idx)
 +{
-+	return page_to_pfn(mmap_pages[idx]);
++	return page_to_pfn(netbk->mmap_pages[idx]);
 +}
 +
-+static inline unsigned long idx_to_kaddr(unsigned int idx)
++static inline unsigned long idx_to_kaddr(struct xen_netbk *netbk,
++					 unsigned int idx)
 +{
-+	return (unsigned long)pfn_to_kaddr(idx_to_pfn(idx));
++	return (unsigned long)pfn_to_kaddr(idx_to_pfn(netbk, idx));
 +}
 +
 +/* extra field used in struct page */
-+static inline void netif_set_page_index(struct page *pg, unsigned int index)
++static inline void netif_set_page_ext(struct page *pg, unsigned int group,
++		unsigned int idx)
 +{
-+	*(unsigned long *)&pg->mapping = index + 1;
++	union page_ext ext = { .e = { .group = group + 1, .idx = idx } };
++
++	BUILD_BUG_ON(sizeof(ext) > sizeof(ext.mapping));
++	pg->mapping = ext.mapping;
 +}
 +
-+static inline int netif_page_index(struct page *pg)
++static inline unsigned int netif_page_group(const struct page *pg)
 +{
-+	unsigned long idx = (unsigned long)pg->mapping - 1;
++	union page_ext ext = { .mapping = pg->mapping };
 +
-+	if (!PageForeign(pg))
-+		return -1;
++	return ext.e.group - 1;
++}
 +
-+	if ((idx >= MAX_PENDING_REQS) || (mmap_pages[idx] != pg))
-+		return -1;
++static inline unsigned int netif_page_index(const struct page *pg)
++{
++	union page_ext ext = { .mapping = pg->mapping };
 +
-+	return idx;
++	return ext.e.idx;
 +}
 +
 +/*
@@ -18052,46 +18187,17 @@ index 0000000..5dc4f98
 + */
 +#define PKT_PROT_LEN 72
 +
-+static struct pending_tx_info {
-+	struct xen_netif_tx_request req;
-+	struct xen_netif *netif;
-+} pending_tx_info[MAX_PENDING_REQS];
-+static u16 pending_ring[MAX_PENDING_REQS];
-+typedef unsigned int pending_ring_idx_t;
-+
 +static inline pending_ring_idx_t pending_index(unsigned i)
 +{
 +	return i & (MAX_PENDING_REQS-1);
 +}
 +
-+static pending_ring_idx_t pending_prod, pending_cons;
-+
-+static inline pending_ring_idx_t nr_pending_reqs(void)
++static inline pending_ring_idx_t nr_pending_reqs(struct xen_netbk *netbk)
 +{
-+	return MAX_PENDING_REQS - pending_prod + pending_cons;
++	return MAX_PENDING_REQS -
++		netbk->pending_prod + netbk->pending_cons;
 +}
 +
-+/* Freed TX SKBs get batched on this ring before return to pending_ring. */
-+static u16 dealloc_ring[MAX_PENDING_REQS];
-+static pending_ring_idx_t dealloc_prod, dealloc_cons;
-+
-+/* Doubly-linked list of in-use pending entries. */
-+static struct netbk_tx_pending_inuse pending_inuse[MAX_PENDING_REQS];
-+static LIST_HEAD(pending_inuse_head);
-+
-+static struct sk_buff_head tx_queue;
-+
-+static grant_handle_t grant_tx_handle[MAX_PENDING_REQS];
-+static struct gnttab_unmap_grant_ref tx_unmap_ops[MAX_PENDING_REQS];
-+static struct gnttab_map_grant_ref tx_map_ops[MAX_PENDING_REQS];
-+
-+static LIST_HEAD(net_schedule_list);
-+static DEFINE_SPINLOCK(net_schedule_list_lock);
-+
-+#define MAX_MFN_ALLOC 64
-+static unsigned long mfn_list[MAX_MFN_ALLOC];
-+static unsigned int alloc_index = 0;
-+
 +/* Setting this allows the safe use of this driver without netloop. */
 +static int MODPARM_copy_skb = 1;
 +module_param_named(copy_skb, MODPARM_copy_skb, bool, 0);
@@ -18099,18 +18205,31 @@ index 0000000..5dc4f98
 +
 +int netbk_copy_skb_mode;
 +
-+static inline unsigned long alloc_mfn(void)
++static int MODPARM_netback_kthread;
++module_param_named(netback_kthread, MODPARM_netback_kthread, bool, 0);
++MODULE_PARM_DESC(netback_kthread, "Use kernel thread to replace tasklet");
++
++/*
++ * Netback bottom half handler.
++ * dir indicates the data direction.
++ * rx: 1, tx: 0.
++ */
++static inline void xen_netbk_bh_handler(struct xen_netbk *netbk, int dir)
 +{
-+	BUG_ON(alloc_index == 0);
-+	return mfn_list[--alloc_index];
++	if (MODPARM_netback_kthread)
++		wake_up(&netbk->kthread.netbk_action_wq);
++	else if (dir)
++		tasklet_schedule(&netbk->tasklet.net_rx_tasklet);
++	else
++		tasklet_schedule(&netbk->tasklet.net_tx_tasklet);
 +}
 +
-+static inline void maybe_schedule_tx_action(void)
++static inline void maybe_schedule_tx_action(struct xen_netbk *netbk)
 +{
 +	smp_mb();
-+	if ((nr_pending_reqs() < (MAX_PENDING_REQS/2)) &&
-+	    !list_empty(&net_schedule_list))
-+		tasklet_schedule(&net_tx_tasklet);
++	if ((nr_pending_reqs(netbk) < (MAX_PENDING_REQS/2)) &&
++	    !list_empty(&netbk->net_schedule_list))
++		xen_netbk_bh_handler(netbk, 0);
 +}
 +
 +static struct sk_buff *netbk_copy_skb(struct sk_buff *skb)
@@ -18215,9 +18334,15 @@ index 0000000..5dc4f98
 +int netif_be_start_xmit(struct sk_buff *skb, struct net_device *dev)
 +{
 +	struct xen_netif *netif = netdev_priv(dev);
++	struct xen_netbk *netbk;
 +
 +	BUG_ON(skb->dev != dev);
 +
++	if (netif->group == -1)
++		goto drop;
++
++	netbk = &xen_netbk[netif->group];
++
 +	/* Drop the packet if the target domain has no receive buffers. */
 +	if (unlikely(!netif_schedulable(netif) || netbk_queue_full(netif)))
 +		goto drop;
@@ -18259,9 +18384,9 @@ index 0000000..5dc4f98
 +			mod_timer(&netif->tx_queue_timeout, jiffies + HZ/2);
 +		}
 +	}
++	skb_queue_tail(&netbk->rx_queue, skb);
 +
-+	skb_queue_tail(&rx_queue, skb);
-+	tasklet_schedule(&net_rx_tasklet);
++	xen_netbk_bh_handler(netbk, 1);
 +
 +	return 0;
 +
@@ -18294,6 +18419,7 @@ index 0000000..5dc4f98
 +	struct gnttab_copy *copy_gop;
 +	struct xen_netif_rx_request *req;
 +	unsigned long old_mfn;
++	int group = netif_page_group(page);
 +	int idx = netif_page_index(page);
 +
 +	old_mfn = virt_to_mfn(page_address(page));
@@ -18302,8 +18428,9 @@ index 0000000..5dc4f98
 +
 +	copy_gop = npo->copy + npo->copy_prod++;
 +	copy_gop->flags = GNTCOPY_dest_gref;
-+	if (idx > -1) {
-+		struct pending_tx_info *src_pend = &pending_tx_info[idx];
++	if (PageForeign(page)) {
++		struct xen_netbk *netbk = &xen_netbk[group];
++		struct pending_tx_info *src_pend = &netbk->pending_tx_info[idx];
 +		copy_gop->source.domid = src_pend->netif->domid;
 +		copy_gop->source.u.ref = src_pend->req.gref;
 +		copy_gop->flags |= GNTCOPY_source_gref;
@@ -18403,9 +18530,10 @@ index 0000000..5dc4f98
 +	}
 +}
 +
-+static void net_rx_action(unsigned long unused)
++static void net_rx_action(unsigned long data)
 +{
 +	struct xen_netif *netif = NULL;
++	struct xen_netbk *netbk = (struct xen_netbk *)data;
 +	s8 status;
 +	u16 id, irq, flags;
 +	struct xen_netif_rx_response *resp;
@@ -18418,30 +18546,19 @@ index 0000000..5dc4f98
 +	int count;
 +	unsigned long offset;
 +
-+	/*
-+	 * Putting hundreds of bytes on the stack is considered rude.
-+	 * Static works because a tasklet can only be on one CPU at any time.
-+	 */
-+	static struct multicall_entry rx_mcl[NET_RX_RING_SIZE+3];
-+	static struct mmu_update rx_mmu[NET_RX_RING_SIZE];
-+	static struct gnttab_transfer grant_trans_op[NET_RX_RING_SIZE];
-+	static struct gnttab_copy grant_copy_op[NET_RX_RING_SIZE];
-+	static unsigned char rx_notify[NR_IRQS];
-+	static u16 notify_list[NET_RX_RING_SIZE];
-+	static struct netbk_rx_meta meta[NET_RX_RING_SIZE];
-+
 +	struct netrx_pending_operations npo = {
-+		mmu: rx_mmu,
-+		trans: grant_trans_op,
-+		copy: grant_copy_op,
-+		mcl: rx_mcl,
-+		meta: meta};
++		.mmu   = netbk->rx_mmu,
++		.trans = netbk->grant_trans_op,
++		.copy  = netbk->grant_copy_op,
++		.mcl   = netbk->rx_mcl,
++		.meta  = netbk->meta,
++	};
 +
 +	skb_queue_head_init(&rxq);
 +
 +	count = 0;
 +
-+	while ((skb = skb_dequeue(&rx_queue)) != NULL) {
++	while ((skb = skb_dequeue(&netbk->rx_queue)) != NULL) {
 +		nr_frags = skb_shinfo(skb)->nr_frags;
 +		*(int *)skb->cb = nr_frags;
 +
@@ -18456,39 +18573,39 @@ index 0000000..5dc4f98
 +			break;
 +	}
 +
-+	BUG_ON(npo.meta_prod > ARRAY_SIZE(meta));
++	BUG_ON(npo.meta_prod > ARRAY_SIZE(netbk->meta));
 +
 +	npo.mmu_mcl = npo.mcl_prod;
 +	if (npo.mcl_prod) {
 +		BUG_ON(xen_feature(XENFEAT_auto_translated_physmap));
-+		BUG_ON(npo.mmu_prod > ARRAY_SIZE(rx_mmu));
++		BUG_ON(npo.mmu_prod > ARRAY_SIZE(netbk->rx_mmu));
 +		mcl = npo.mcl + npo.mcl_prod++;
 +
 +		BUG_ON(mcl[-1].op != __HYPERVISOR_update_va_mapping);
 +		mcl[-1].args[MULTI_UVMFLAGS_INDEX] = UVMF_TLB_FLUSH|UVMF_ALL;
 +
 +		mcl->op = __HYPERVISOR_mmu_update;
-+		mcl->args[0] = (unsigned long)rx_mmu;
++		mcl->args[0] = (unsigned long)netbk->rx_mmu;
 +		mcl->args[1] = npo.mmu_prod;
 +		mcl->args[2] = 0;
 +		mcl->args[3] = DOMID_SELF;
 +	}
 +
 +	if (npo.trans_prod) {
-+		BUG_ON(npo.trans_prod > ARRAY_SIZE(grant_trans_op));
++		BUG_ON(npo.trans_prod > ARRAY_SIZE(netbk->grant_trans_op));
 +		mcl = npo.mcl + npo.mcl_prod++;
 +		mcl->op = __HYPERVISOR_grant_table_op;
 +		mcl->args[0] = GNTTABOP_transfer;
-+		mcl->args[1] = (unsigned long)grant_trans_op;
++		mcl->args[1] = (unsigned long)netbk->grant_trans_op;
 +		mcl->args[2] = npo.trans_prod;
 +	}
 +
 +	if (npo.copy_prod) {
-+		BUG_ON(npo.copy_prod > ARRAY_SIZE(grant_copy_op));
++		BUG_ON(npo.copy_prod > ARRAY_SIZE(netbk->grant_copy_op));
 +		mcl = npo.mcl + npo.mcl_prod++;
 +		mcl->op = __HYPERVISOR_grant_table_op;
 +		mcl->args[0] = GNTTABOP_copy;
-+		mcl->args[1] = (unsigned long)grant_copy_op;
++		mcl->args[1] = (unsigned long)netbk->grant_copy_op;
 +		mcl->args[2] = npo.copy_prod;
 +	}
 +
@@ -18496,7 +18613,7 @@ index 0000000..5dc4f98
 +	if (!npo.mcl_prod)
 +		return;
 +
-+	BUG_ON(npo.mcl_prod > ARRAY_SIZE(rx_mcl));
++	BUG_ON(npo.mcl_prod > ARRAY_SIZE(netbk->rx_mcl));
 +
 +	ret = HYPERVISOR_multicall(npo.mcl, npo.mcl_prod);
 +	BUG_ON(ret != 0);
@@ -18513,7 +18630,7 @@ index 0000000..5dc4f98
 +
 +		status = netbk_check_gop(nr_frags, netif->domid, &npo);
 +
-+		id = meta[npo.meta_cons].id;
++		id = netbk->meta[npo.meta_cons].id;
 +		flags = nr_frags ? NETRXF_more_data : 0;
 +
 +		if (skb->ip_summed == CHECKSUM_PARTIAL) /* local packet? */
@@ -18526,7 +18643,7 @@ index 0000000..5dc4f98
 +		resp = make_rx_response(netif, id, status, offset,
 +					skb_headlen(skb), flags);
 +
-+		if (meta[npo.meta_cons].frag.size) {
++		if (netbk->meta[npo.meta_cons].frag.size) {
 +			struct xen_netif_extra_info *gso =
 +				(struct xen_netif_extra_info *)
 +				RING_GET_RESPONSE(&netif->rx,
@@ -18534,7 +18651,7 @@ index 0000000..5dc4f98
 +
 +			resp->flags |= NETRXF_extra_info;
 +
-+			gso->u.gso.size = meta[npo.meta_cons].frag.size;
++			gso->u.gso.size = netbk->meta[npo.meta_cons].frag.size;
 +			gso->u.gso.type = XEN_NETIF_GSO_TYPE_TCPV4;
 +			gso->u.gso.pad = 0;
 +			gso->u.gso.features = 0;
@@ -18544,15 +18661,15 @@ index 0000000..5dc4f98
 +		}
 +
 +		netbk_add_frag_responses(netif, status,
-+					 meta + npo.meta_cons + 1,
-+					 nr_frags);
++				netbk->meta + npo.meta_cons + 1,
++				nr_frags);
 +
 +		RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&netif->rx, ret);
 +		irq = netif->irq;
-+		if (ret && !rx_notify[irq] &&
++		if (ret && !netbk->rx_notify[irq] &&
 +				(netif->smart_poll != 1)) {
-+			rx_notify[irq] = 1;
-+			notify_list[notify_nr++] = irq;
++			netbk->rx_notify[irq] = 1;
++			netbk->notify_list[notify_nr++] = irq;
 +		}
 +
 +		if (netif_queue_stopped(netif->dev) &&
@@ -18577,24 +18694,27 @@ index 0000000..5dc4f98
 +	}
 +
 +	while (notify_nr != 0) {
-+		irq = notify_list[--notify_nr];
-+		rx_notify[irq] = 0;
++		irq = netbk->notify_list[--notify_nr];
++		netbk->rx_notify[irq] = 0;
 +		notify_remote_via_irq(irq);
 +	}
 +
 +	/* More work to do? */
-+	if (!skb_queue_empty(&rx_queue) && !timer_pending(&net_timer))
-+		tasklet_schedule(&net_rx_tasklet);
++	if (!skb_queue_empty(&netbk->rx_queue) &&
++			!timer_pending(&netbk->net_timer))
++		xen_netbk_bh_handler(netbk, 1);
 +}
 +
-+static void net_alarm(unsigned long unused)
++static void net_alarm(unsigned long data)
 +{
-+	tasklet_schedule(&net_rx_tasklet);
++	struct xen_netbk *netbk = (struct xen_netbk *)data;
++	xen_netbk_bh_handler(netbk, 1);
 +}
 +
-+static void netbk_tx_pending_timeout(unsigned long unused)
++static void netbk_tx_pending_timeout(unsigned long data)
 +{
-+	tasklet_schedule(&net_tx_tasklet);
++	struct xen_netbk *netbk = (struct xen_netbk *)data;
++	xen_netbk_bh_handler(netbk, 0);
 +}
 +
 +struct net_device_stats *netif_be_get_stats(struct net_device *dev)
@@ -18610,37 +18730,40 @@ index 0000000..5dc4f98
 +
 +static void remove_from_net_schedule_list(struct xen_netif *netif)
 +{
-+	spin_lock_irq(&net_schedule_list_lock);
++	struct xen_netbk *netbk = &xen_netbk[netif->group];
++	spin_lock_irq(&netbk->net_schedule_list_lock);
 +	if (likely(__on_net_schedule_list(netif))) {
 +		list_del_init(&netif->list);
 +		netif_put(netif);
 +	}
-+	spin_unlock_irq(&net_schedule_list_lock);
++	spin_unlock_irq(&netbk->net_schedule_list_lock);
 +}
 +
 +static void add_to_net_schedule_list_tail(struct xen_netif *netif)
 +{
++	struct xen_netbk *netbk = &xen_netbk[netif->group];
 +	if (__on_net_schedule_list(netif))
 +		return;
 +
-+	spin_lock_irq(&net_schedule_list_lock);
++	spin_lock_irq(&netbk->net_schedule_list_lock);
 +	if (!__on_net_schedule_list(netif) &&
 +	    likely(netif_schedulable(netif))) {
-+		list_add_tail(&netif->list, &net_schedule_list);
++		list_add_tail(&netif->list, &netbk->net_schedule_list);
 +		netif_get(netif);
 +	}
-+	spin_unlock_irq(&net_schedule_list_lock);
++	spin_unlock_irq(&netbk->net_schedule_list_lock);
 +}
 +
 +void netif_schedule_work(struct xen_netif *netif)
 +{
++	struct xen_netbk *netbk = &xen_netbk[netif->group];
 +	int more_to_do;
 +
 +	RING_FINAL_CHECK_FOR_REQUESTS(&netif->tx, more_to_do);
 +
 +	if (more_to_do) {
 +		add_to_net_schedule_list_tail(netif);
-+		maybe_schedule_tx_action();
++		maybe_schedule_tx_action(netbk);
 +	}
 +}
 +
@@ -18677,13 +18800,15 @@ index 0000000..5dc4f98
 +	netif_schedule_work(netif);
 +}
 +
-+static inline int copy_pending_req(pending_ring_idx_t pending_idx)
++static inline int copy_pending_req(struct xen_netbk *netbk,
++				   pending_ring_idx_t pending_idx)
 +{
-+	return gnttab_copy_grant_page(grant_tx_handle[pending_idx],
-+				      &mmap_pages[pending_idx]);
++	return gnttab_copy_grant_page(
++			netbk->grant_tx_handle[pending_idx],
++			&netbk->mmap_pages[pending_idx]);
 +}
 +
-+inline static void net_tx_action_dealloc(void)
++static inline void net_tx_action_dealloc(struct xen_netbk *netbk)
 +{
 +	struct netbk_tx_pending_inuse *inuse, *n;
 +	struct gnttab_unmap_grant_ref *gop;
@@ -18693,49 +18818,56 @@ index 0000000..5dc4f98
 +	int ret;
 +	LIST_HEAD(list);
 +
-+	dc = dealloc_cons;
-+	gop = tx_unmap_ops;
++	dc = netbk->dealloc_cons;
++	gop = netbk->tx_unmap_ops;
 +
 +	/*
 +	 * Free up any grants we have finished using
 +	 */
 +	do {
-+		dp = dealloc_prod;
++		dp = netbk->dealloc_prod;
 +
 +		/* Ensure we see all indices enqueued by netif_idx_release(). */
 +		smp_rmb();
 +
 +		while (dc != dp) {
 +			unsigned long pfn;
++			struct netbk_tx_pending_inuse *pending_inuse =
++					netbk->pending_inuse;
 +
-+			pending_idx = dealloc_ring[pending_index(dc++)];
++			pending_idx = netbk->dealloc_ring[pending_index(dc++)];
 +			list_move_tail(&pending_inuse[pending_idx].list, &list);
 +
-+			pfn = idx_to_pfn(pending_idx);
++			pfn = idx_to_pfn(netbk, pending_idx);
 +			/* Already unmapped? */
 +			if (!phys_to_machine_mapping_valid(pfn))
 +				continue;
 +
-+			gnttab_set_unmap_op(gop, idx_to_kaddr(pending_idx),
-+					    GNTMAP_host_map,
-+					    grant_tx_handle[pending_idx]);
++			gnttab_set_unmap_op(gop,
++					idx_to_kaddr(netbk, pending_idx),
++					GNTMAP_host_map,
++					netbk->grant_tx_handle[pending_idx]);
 +			gop++;
 +		}
 +
 +		if (netbk_copy_skb_mode != NETBK_DELAYED_COPY_SKB ||
-+		    list_empty(&pending_inuse_head))
++		    list_empty(&netbk->pending_inuse_head))
 +			break;
 +
 +		/* Copy any entries that have been pending for too long. */
-+		list_for_each_entry_safe(inuse, n, &pending_inuse_head, list) {
++		list_for_each_entry_safe(inuse, n,
++				&netbk->pending_inuse_head, list) {
++			struct pending_tx_info *pending_tx_info;
++			pending_tx_info = netbk->pending_tx_info;
++
 +			if (time_after(inuse->alloc_time + HZ / 2, jiffies))
 +				break;
 +
-+			pending_idx = inuse - pending_inuse;
++			pending_idx = inuse - netbk->pending_inuse;
 +
 +			pending_tx_info[pending_idx].netif->nr_copied_skbs++;
 +
-+			switch (copy_pending_req(pending_idx)) {
++			switch (copy_pending_req(netbk, pending_idx)) {
 +			case 0:
 +				list_move_tail(&inuse->list, &list);
 +				continue;
@@ -18748,16 +18880,21 @@ index 0000000..5dc4f98
 +
 +			break;
 +		}
-+	} while (dp != dealloc_prod);
++	} while (dp != netbk->dealloc_prod);
 +
-+	dealloc_cons = dc;
++	netbk->dealloc_cons = dc;
 +
 +	ret = HYPERVISOR_grant_table_op(
-+		GNTTABOP_unmap_grant_ref, tx_unmap_ops, gop - tx_unmap_ops);
++		GNTTABOP_unmap_grant_ref, netbk->tx_unmap_ops,
++		gop - netbk->tx_unmap_ops);
 +	BUG_ON(ret);
 +
 +	list_for_each_entry_safe(inuse, n, &list, list) {
-+		pending_idx = inuse - pending_inuse;
++		struct pending_tx_info *pending_tx_info;
++		pending_ring_idx_t index;
++
++		pending_tx_info = netbk->pending_tx_info;
++		pending_idx = inuse - netbk->pending_inuse;
 +
 +		netif = pending_tx_info[pending_idx].netif;
 +
@@ -18765,9 +18902,10 @@ index 0000000..5dc4f98
 +				 NETIF_RSP_OKAY);
 +
 +		/* Ready for next use. */
-+		gnttab_reset_grant_page(mmap_pages[pending_idx]);
++		gnttab_reset_grant_page(netbk->mmap_pages[pending_idx]);
 +
-+		pending_ring[pending_index(pending_prod++)] = pending_idx;
++		index = pending_index(netbk->pending_prod++);
++		netbk->pending_ring[index] = pending_idx;
 +
 +		netif_put(netif);
 +
@@ -18775,7 +18913,8 @@ index 0000000..5dc4f98
 +	}
 +}
 +
-+static void netbk_tx_err(struct xen_netif *netif, struct xen_netif_tx_request *txp, RING_IDX end)
++static void netbk_tx_err(struct xen_netif *netif,
++		struct xen_netif_tx_request *txp, RING_IDX end)
 +{
 +	RING_IDX cons = netif->tx.req_cons;
 +
@@ -18831,7 +18970,8 @@ index 0000000..5dc4f98
 +	return frags;
 +}
 +
-+static struct gnttab_map_grant_ref *netbk_get_requests(struct xen_netif *netif,
++static struct gnttab_map_grant_ref *netbk_get_requests(struct xen_netbk *netbk,
++						  struct xen_netif *netif,
 +						  struct sk_buff *skb,
 +						  struct xen_netif_tx_request *txp,
 +						  struct gnttab_map_grant_ref *mop)
@@ -18845,9 +18985,14 @@ index 0000000..5dc4f98
 +	start = ((unsigned long)shinfo->frags[0].page == pending_idx);
 +
 +	for (i = start; i < shinfo->nr_frags; i++, txp++) {
-+		pending_idx = pending_ring[pending_index(pending_cons++)];
++		pending_ring_idx_t index;
++		struct pending_tx_info *pending_tx_info =
++			netbk->pending_tx_info;
++
++		index = pending_index(netbk->pending_cons++);
++		pending_idx = netbk->pending_ring[index];
 +
-+		gnttab_set_map_op(mop++, idx_to_kaddr(pending_idx),
++		gnttab_set_map_op(mop++, idx_to_kaddr(netbk, pending_idx),
 +				  GNTMAP_host_map | GNTMAP_readonly,
 +				  txp->gref, netif->domid);
 +
@@ -18860,11 +19005,13 @@ index 0000000..5dc4f98
 +	return mop;
 +}
 +
-+static int netbk_tx_check_mop(struct sk_buff *skb,
-+			       struct gnttab_map_grant_ref **mopp)
++static int netbk_tx_check_mop(struct xen_netbk *netbk,
++			      struct sk_buff *skb,
++			      struct gnttab_map_grant_ref **mopp)
 +{
 +	struct gnttab_map_grant_ref *mop = *mopp;
 +	int pending_idx = *((u16 *)skb->data);
++	struct pending_tx_info *pending_tx_info = netbk->pending_tx_info;
 +	struct xen_netif *netif = pending_tx_info[pending_idx].netif;
 +	struct xen_netif_tx_request *txp;
 +	struct skb_shared_info *shinfo = skb_shinfo(skb);
@@ -18874,15 +19021,17 @@ index 0000000..5dc4f98
 +	/* Check status of header. */
 +	err = mop->status;
 +	if (unlikely(err)) {
++		pending_ring_idx_t index;
++		index = pending_index(netbk->pending_prod++);
 +		txp = &pending_tx_info[pending_idx].req;
 +		make_tx_response(netif, txp, NETIF_RSP_ERROR);
-+		pending_ring[pending_index(pending_prod++)] = pending_idx;
++		netbk->pending_ring[index] = pending_idx;
 +		netif_put(netif);
 +	} else {
 +		set_phys_to_machine(
-+			__pa(idx_to_kaddr(pending_idx)) >> PAGE_SHIFT,
++			__pa(idx_to_kaddr(netbk, pending_idx)) >> PAGE_SHIFT,
 +			FOREIGN_FRAME(mop->dev_bus_addr >> PAGE_SHIFT));
-+		grant_tx_handle[pending_idx] = mop->handle;
++		netbk->grant_tx_handle[pending_idx] = mop->handle;
 +	}
 +
 +	/* Skip first skb fragment if it is on same page as header fragment. */
@@ -18890,26 +19039,30 @@ index 0000000..5dc4f98
 +
 +	for (i = start; i < nr_frags; i++) {
 +		int j, newerr;
++		pending_ring_idx_t index;
 +
 +		pending_idx = (unsigned long)shinfo->frags[i].page;
 +
 +		/* Check error status: if okay then remember grant handle. */
 +		newerr = (++mop)->status;
 +		if (likely(!newerr)) {
++			unsigned long addr;
++			addr = idx_to_kaddr(netbk, pending_idx);
 +			set_phys_to_machine(
-+				__pa(idx_to_kaddr(pending_idx))>>PAGE_SHIFT,
++				__pa(addr)>>PAGE_SHIFT,
 +				FOREIGN_FRAME(mop->dev_bus_addr>>PAGE_SHIFT));
-+			grant_tx_handle[pending_idx] = mop->handle;
++			netbk->grant_tx_handle[pending_idx] = mop->handle;
 +			/* Had a previous error? Invalidate this fragment. */
 +			if (unlikely(err))
-+				netif_idx_release(pending_idx);
++				netif_idx_release(netbk, pending_idx);
 +			continue;
 +		}
 +
 +		/* Error on this fragment: respond to client with an error. */
-+		txp = &pending_tx_info[pending_idx].req;
++		txp = &netbk->pending_tx_info[pending_idx].req;
 +		make_tx_response(netif, txp, NETIF_RSP_ERROR);
-+		pending_ring[pending_index(pending_prod++)] = pending_idx;
++		index = pending_index(netbk->pending_prod++);
++		netbk->pending_ring[index] = pending_idx;
 +		netif_put(netif);
 +
 +		/* Not the first error? Preceding frags already invalidated. */
@@ -18918,10 +19071,10 @@ index 0000000..5dc4f98
 +
 +		/* First error: invalidate header and preceding fragments. */
 +		pending_idx = *((u16 *)skb->data);
-+		netif_idx_release(pending_idx);
++		netif_idx_release(netbk, pending_idx);
 +		for (j = start; j < i; j++) {
 +			pending_idx = (unsigned long)shinfo->frags[i].page;
-+			netif_idx_release(pending_idx);
++			netif_idx_release(netbk, pending_idx);
 +		}
 +
 +		/* Remember the error: invalidate all subsequent fragments. */
@@ -18932,7 +19085,7 @@ index 0000000..5dc4f98
 +	return err;
 +}
 +
-+static void netbk_fill_frags(struct sk_buff *skb)
++static void netbk_fill_frags(struct xen_netbk *netbk, struct sk_buff *skb)
 +{
 +	struct skb_shared_info *shinfo = skb_shinfo(skb);
 +	int nr_frags = shinfo->nr_frags;
@@ -18945,12 +19098,12 @@ index 0000000..5dc4f98
 +
 +		pending_idx = (unsigned long)frag->page;
 +
-+		pending_inuse[pending_idx].alloc_time = jiffies;
-+		list_add_tail(&pending_inuse[pending_idx].list,
-+			      &pending_inuse_head);
++		netbk->pending_inuse[pending_idx].alloc_time = jiffies;
++		list_add_tail(&netbk->pending_inuse[pending_idx].list,
++			      &netbk->pending_inuse_head);
 +
-+		txp = &pending_tx_info[pending_idx].req;
-+		frag->page = virt_to_page(idx_to_kaddr(pending_idx));
++		txp = &netbk->pending_tx_info[pending_idx].req;
++		frag->page = virt_to_page(idx_to_kaddr(netbk, pending_idx));
 +		frag->size = txp->size;
 +		frag->page_offset = txp->offset;
 +
@@ -19082,15 +19235,15 @@ index 0000000..5dc4f98
 +	return false;
 +}
 +
-+static unsigned net_tx_build_mops(void)
++static unsigned net_tx_build_mops(struct xen_netbk *netbk)
 +{
 +	struct gnttab_map_grant_ref *mop;
 +	struct sk_buff *skb;
 +	int ret;
 +
-+	mop = tx_map_ops;
-+	while (((nr_pending_reqs() + MAX_SKB_FRAGS) < MAX_PENDING_REQS) &&
-+		!list_empty(&net_schedule_list)) {
++	mop = netbk->tx_map_ops;
++	while (((nr_pending_reqs(netbk) + MAX_SKB_FRAGS) < MAX_PENDING_REQS) &&
++		!list_empty(&netbk->net_schedule_list)) {
 +		struct xen_netif *netif;
 +		struct xen_netif_tx_request txreq;
 +		struct xen_netif_tx_request txfrags[MAX_SKB_FRAGS];
@@ -19099,9 +19252,11 @@ index 0000000..5dc4f98
 +		RING_IDX idx;
 +		int work_to_do;
 +		unsigned int data_len;
++		pending_ring_idx_t index;
 +	
 +		/* Get a netif from the list with work to do. */
-+		netif = list_first_entry(&net_schedule_list, struct xen_netif, list);
++		netif = list_first_entry(&netbk->net_schedule_list,
++				struct xen_netif, list);
 +		netif_get(netif);
 +		remove_from_net_schedule_list(netif);
 +
@@ -19160,7 +19315,8 @@ index 0000000..5dc4f98
 +			continue;
 +		}
 +
-+		pending_idx = pending_ring[pending_index(pending_cons)];
++		index = pending_index(netbk->pending_cons);
++		pending_idx = netbk->pending_ring[index];
 +
 +		data_len = (txreq.size > PKT_PROT_LEN &&
 +			    ret < MAX_SKB_FRAGS) ?
@@ -19188,14 +19344,14 @@ index 0000000..5dc4f98
 +			}
 +		}
 +
-+		gnttab_set_map_op(mop, idx_to_kaddr(pending_idx),
++		gnttab_set_map_op(mop, idx_to_kaddr(netbk, pending_idx),
 +				  GNTMAP_host_map | GNTMAP_readonly,
 +				  txreq.gref, netif->domid);
 +		mop++;
 +
-+		memcpy(&pending_tx_info[pending_idx].req,
++		memcpy(&netbk->pending_tx_info[pending_idx].req,
 +		       &txreq, sizeof(txreq));
-+		pending_tx_info[pending_idx].netif = netif;
++		netbk->pending_tx_info[pending_idx].netif = netif;
 +		*((u16 *)skb->data) = pending_idx;
 +
 +		__skb_put(skb, data_len);
@@ -19210,40 +19366,40 @@ index 0000000..5dc4f98
 +			skb_shinfo(skb)->frags[0].page = (void *)~0UL;
 +		}
 +
-+		__skb_queue_tail(&tx_queue, skb);
++		__skb_queue_tail(&netbk->tx_queue, skb);
 +
-+		pending_cons++;
++		netbk->pending_cons++;
 +
-+		mop = netbk_get_requests(netif, skb, txfrags, mop);
++		mop = netbk_get_requests(netbk, netif, skb, txfrags, mop);
 +
 +		netif->tx.req_cons = idx;
 +		netif_schedule_work(netif);
 +
-+		if ((mop - tx_map_ops) >= ARRAY_SIZE(tx_map_ops))
++		if ((mop - netbk->tx_map_ops) >= ARRAY_SIZE(netbk->tx_map_ops))
 +			break;
 +	}
 +
-+	return mop - tx_map_ops;
++	return mop - netbk->tx_map_ops;
 +}
 +
-+static void net_tx_submit(void)
++static void net_tx_submit(struct xen_netbk *netbk)
 +{
 +	struct gnttab_map_grant_ref *mop;
 +	struct sk_buff *skb;
 +
-+	mop = tx_map_ops;
-+	while ((skb = __skb_dequeue(&tx_queue)) != NULL) {
++	mop = netbk->tx_map_ops;
++	while ((skb = __skb_dequeue(&netbk->tx_queue)) != NULL) {
 +		struct xen_netif_tx_request *txp;
 +		struct xen_netif *netif;
 +		u16 pending_idx;
 +		unsigned data_len;
 +
 +		pending_idx = *((u16 *)skb->data);
-+		netif       = pending_tx_info[pending_idx].netif;
-+		txp         = &pending_tx_info[pending_idx].req;
++		netif = netbk->pending_tx_info[pending_idx].netif;
++		txp = &netbk->pending_tx_info[pending_idx].req;
 +
 +		/* Check the remap error code. */
-+		if (unlikely(netbk_tx_check_mop(skb, &mop))) {
++		if (unlikely(netbk_tx_check_mop(netbk, skb, &mop))) {
 +			DPRINTK("netback grant failed.\n");
 +			skb_shinfo(skb)->nr_frags = 0;
 +			kfree_skb(skb);
@@ -19252,7 +19408,7 @@ index 0000000..5dc4f98
 +
 +		data_len = skb->len;
 +		memcpy(skb->data,
-+		       (void *)(idx_to_kaddr(pending_idx)|txp->offset),
++		       (void *)(idx_to_kaddr(netbk, pending_idx)|txp->offset),
 +		       data_len);
 +		if (data_len < txp->size) {
 +			/* Append the packet payload as a fragment. */
@@ -19260,7 +19416,7 @@ index 0000000..5dc4f98
 +			txp->size -= data_len;
 +		} else {
 +			/* Schedule a response immediately. */
-+			netif_idx_release(pending_idx);
++			netif_idx_release(netbk, pending_idx);
 +		}
 +
 +		if (txp->flags & NETTXF_csum_blank)
@@ -19268,7 +19424,7 @@ index 0000000..5dc4f98
 +		else if (txp->flags & NETTXF_data_validated)
 +			skb->ip_summed = CHECKSUM_UNNECESSARY;
 +
-+		netbk_fill_frags(skb);
++		netbk_fill_frags(netbk, skb);
 +
 +		/*
 +		 * If the initial fragment was < PKT_PROT_LEN then
@@ -19301,70 +19457,84 @@ index 0000000..5dc4f98
 +			continue;
 +		}
 +
-+		netif_rx(skb);
++		netif_rx_ni(skb);
 +		netif->dev->last_rx = jiffies;
 +	}
 +
 +	if (netbk_copy_skb_mode == NETBK_DELAYED_COPY_SKB &&
-+	    !list_empty(&pending_inuse_head)) {
++	    !list_empty(&netbk->pending_inuse_head)) {
 +		struct netbk_tx_pending_inuse *oldest;
 +
-+		oldest = list_entry(pending_inuse_head.next,
++		oldest = list_entry(netbk->pending_inuse_head.next,
 +				    struct netbk_tx_pending_inuse, list);
-+		mod_timer(&netbk_tx_pending_timer, oldest->alloc_time + HZ);
++		mod_timer(&netbk->netbk_tx_pending_timer,
++				oldest->alloc_time + HZ);
 +	}
 +}
 +
 +/* Called after netfront has transmitted */
-+static void net_tx_action(unsigned long unused)
++static void net_tx_action(unsigned long data)
 +{
++	struct xen_netbk *netbk = (struct xen_netbk *)data;
 +	unsigned nr_mops;
 +	int ret;
 +
-+	if (dealloc_cons != dealloc_prod)
-+		net_tx_action_dealloc();
++	if (netbk->dealloc_cons != netbk->dealloc_prod)
++		net_tx_action_dealloc(netbk);
 +
-+	nr_mops = net_tx_build_mops();
++	nr_mops = net_tx_build_mops(netbk);
 +
 +	if (nr_mops == 0)
 +		return;
 +
 +	ret = HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref,
-+					tx_map_ops, nr_mops);
++					netbk->tx_map_ops, nr_mops);
 +	BUG_ON(ret);
 +
-+	net_tx_submit();
++	net_tx_submit(netbk);
 +}
 +
-+static void netif_idx_release(u16 pending_idx)
++static void netif_idx_release(struct xen_netbk *netbk, u16 pending_idx)
 +{
 +	static DEFINE_SPINLOCK(_lock);
 +	unsigned long flags;
++	pending_ring_idx_t index;
 +
 +	spin_lock_irqsave(&_lock, flags);
-+	dealloc_ring[pending_index(dealloc_prod)] = pending_idx;
++	index = pending_index(netbk->dealloc_prod);
++	netbk->dealloc_ring[index] = pending_idx;
 +	/* Sync with net_tx_action_dealloc: insert idx /then/ incr producer. */
 +	smp_wmb();
-+	dealloc_prod++;
++	netbk->dealloc_prod++;
 +	spin_unlock_irqrestore(&_lock, flags);
 +
-+	tasklet_schedule(&net_tx_tasklet);
++	xen_netbk_bh_handler(netbk, 0);
 +}
 +
 +static void netif_page_release(struct page *page, unsigned int order)
 +{
++	int group = netif_page_group(page);
 +	int idx = netif_page_index(page);
++	struct xen_netbk *netbk = &xen_netbk[group];
 +	BUG_ON(order);
-+	BUG_ON(idx < 0);
-+	netif_idx_release(idx);
++	BUG_ON(group < 0 || group >= xen_netbk_group_nr);
++	BUG_ON(idx < 0 || idx >= MAX_PENDING_REQS);
++	BUG_ON(netbk->mmap_pages[idx] != page);
++	netif_idx_release(netbk, idx);
 +}
 +
 +irqreturn_t netif_be_int(int irq, void *dev_id)
 +{
 +	struct xen_netif *netif = dev_id;
++	struct xen_netbk *netbk;
++
++	if (netif->group == -1)
++		return IRQ_NONE;
++
++	netbk = &xen_netbk[netif->group];
 +
 +	add_to_net_schedule_list_tail(netif);
-+	maybe_schedule_tx_action();
++	maybe_schedule_tx_action(netbk);
 +
 +	if (netif_schedulable(netif) && !netbk_queue_full(netif))
 +		netif_wake_queue(netif->dev);
@@ -19432,75 +19602,180 @@ index 0000000..5dc4f98
 +	struct list_head *ent;
 +	struct xen_netif *netif;
 +	int i = 0;
++	int group = 0;
 +
 +	printk(KERN_ALERT "netif_schedule_list:\n");
-+	spin_lock_irq(&net_schedule_list_lock);
 +
-+	list_for_each (ent, &net_schedule_list) {
-+		netif = list_entry(ent, struct xen_netif, list);
-+		printk(KERN_ALERT " %d: private(rx_req_cons=%08x "
-+		       "rx_resp_prod=%08x\n",
-+		       i, netif->rx.req_cons, netif->rx.rsp_prod_pvt);
-+		printk(KERN_ALERT "   tx_req_cons=%08x tx_resp_prod=%08x)\n",
-+		       netif->tx.req_cons, netif->tx.rsp_prod_pvt);
-+		printk(KERN_ALERT "   shared(rx_req_prod=%08x "
-+		       "rx_resp_prod=%08x\n",
-+		       netif->rx.sring->req_prod, netif->rx.sring->rsp_prod);
-+		printk(KERN_ALERT "   rx_event=%08x tx_req_prod=%08x\n",
-+		       netif->rx.sring->rsp_event, netif->tx.sring->req_prod);
-+		printk(KERN_ALERT "   tx_resp_prod=%08x, tx_event=%08x)\n",
-+		       netif->tx.sring->rsp_prod, netif->tx.sring->rsp_event);
-+		i++;
++	for (group = 0; group < xen_netbk_group_nr; group++) {
++		struct xen_netbk *netbk = &xen_netbk[group];
++		spin_lock_irq(&netbk->net_schedule_list_lock);
++		printk(KERN_ALERT "xen_netback group number: %d\n", group);
++		list_for_each(ent, &netbk->net_schedule_list) {
++			netif = list_entry(ent, struct xen_netif, list);
++			printk(KERN_ALERT " %d: private(rx_req_cons=%08x "
++				"rx_resp_prod=%08x\n",
++				i, netif->rx.req_cons, netif->rx.rsp_prod_pvt);
++			printk(KERN_ALERT
++				"   tx_req_cons=%08x, tx_resp_prod=%08x)\n",
++				netif->tx.req_cons, netif->tx.rsp_prod_pvt);
++			printk(KERN_ALERT
++				"   shared(rx_req_prod=%08x "
++				"rx_resp_prod=%08x\n",
++				netif->rx.sring->req_prod,
++				netif->rx.sring->rsp_prod);
++			printk(KERN_ALERT
++				"   rx_event=%08x, tx_req_prod=%08x\n",
++				netif->rx.sring->rsp_event,
++				netif->tx.sring->req_prod);
++			printk(KERN_ALERT
++				"   tx_resp_prod=%08x, tx_event=%08x)\n",
++				netif->tx.sring->rsp_prod,
++				netif->tx.sring->rsp_event);
++			i++;
++		}
++		spin_unlock_irq(&netbk->net_schedule_list_lock);
 +	}
 +
-+	spin_unlock_irq(&net_schedule_list_lock);
 +	printk(KERN_ALERT " ** End of netif_schedule_list **\n");
 +
 +	return IRQ_HANDLED;
 +}
 +#endif
 +
++static inline int rx_work_todo(struct xen_netbk *netbk)
++{
++	return !skb_queue_empty(&netbk->rx_queue);
++}
++
++static inline int tx_work_todo(struct xen_netbk *netbk)
++{
++	if (netbk->dealloc_cons != netbk->dealloc_prod)
++		return 1;
++
++	if (((nr_pending_reqs(netbk) + MAX_SKB_FRAGS) < MAX_PENDING_REQS) &&
++			!list_empty(&netbk->net_schedule_list))
++		return 1;
++
++	return 0;
++}
++
++static int netbk_action_thread(void *data)
++{
++	struct xen_netbk *netbk = (struct xen_netbk *)data;
++	while (!kthread_should_stop()) {
++		wait_event_interruptible(netbk->kthread.netbk_action_wq,
++				rx_work_todo(netbk)
++				|| tx_work_todo(netbk)
++				|| kthread_should_stop());
++		cond_resched();
++
++		if (kthread_should_stop())
++			break;
++
++		if (rx_work_todo(netbk))
++			net_rx_action((unsigned long)netbk);
++
++		if (tx_work_todo(netbk))
++			net_tx_action((unsigned long)netbk);
++	}
++
++	return 0;
++}
++
 +static int __init netback_init(void)
 +{
 +	int i;
 +	struct page *page;
 +	int rc = 0;
++	int group;
 +
 +	if (!xen_domain())
 +		return -ENODEV;
 +
++	xen_netbk_group_nr = num_online_cpus();
++	xen_netbk = (struct xen_netbk *)vmalloc(sizeof(struct xen_netbk) *
++					    xen_netbk_group_nr);
++	if (!xen_netbk) {
++		printk(KERN_ALERT "%s: out of memory\n", __func__);
++		return -ENOMEM;
++	}
++
 +	/* We can increase reservation by this much in net_rx_action(). */
 +//	balloon_update_driver_allowance(NET_RX_RING_SIZE);
 +
-+	skb_queue_head_init(&rx_queue);
-+	skb_queue_head_init(&tx_queue);
++	for (group = 0; group < xen_netbk_group_nr; group++) {
++		struct xen_netbk *netbk = &xen_netbk[group];
++		skb_queue_head_init(&netbk->rx_queue);
++		skb_queue_head_init(&netbk->tx_queue);
++
++		init_timer(&netbk->net_timer);
++		netbk->net_timer.data = (unsigned long)netbk;
++		netbk->net_timer.function = net_alarm;
++
++		init_timer(&netbk->netbk_tx_pending_timer);
++		netbk->netbk_tx_pending_timer.data = (unsigned long)netbk;
++		netbk->netbk_tx_pending_timer.function =
++			netbk_tx_pending_timeout;
++
++		netbk->mmap_pages =
++			alloc_empty_pages_and_pagevec(MAX_PENDING_REQS);
++		if (!netbk->mmap_pages) {
++			printk(KERN_ALERT "%s: out of memory\n", __func__);
++			del_timer(&netbk->netbk_tx_pending_timer);
++			del_timer(&netbk->net_timer);
++			rc = -ENOMEM;
++			goto failed_init;
++		}
++
++		for (i = 0; i < MAX_PENDING_REQS; i++) {
++			page = netbk->mmap_pages[i];
++			SetPageForeign(page, netif_page_release);
++			netif_set_page_ext(page, group, i);
++			INIT_LIST_HEAD(&netbk->pending_inuse[i].list);
++		}
++
++		netbk->pending_cons = 0;
++		netbk->pending_prod = MAX_PENDING_REQS;
++		for (i = 0; i < MAX_PENDING_REQS; i++)
++			netbk->pending_ring[i] = i;
++
++		if (MODPARM_netback_kthread) {
++			init_waitqueue_head(&netbk->kthread.netbk_action_wq);
++			netbk->kthread.task =
++				kthread_create(netbk_action_thread,
++					       (void *)netbk,
++					       "netback/%u", group);
++
++			if (!IS_ERR(netbk->kthread.task)) {
++				kthread_bind(netbk->kthread.task, group);
++				wake_up_process(netbk->kthread.task);
++			} else {
++				printk(KERN_ALERT
++					"kthread_run() fails at netback\n");
++				free_empty_pages_and_pagevec(netbk->mmap_pages,
++						MAX_PENDING_REQS);
++				del_timer(&netbk->netbk_tx_pending_timer);
++				del_timer(&netbk->net_timer);
++				rc = PTR_ERR(netbk->kthread.task);
++				goto failed_init;
++			}
++		} else {
++			tasklet_init(&netbk->tasklet.net_tx_tasklet,
++				     net_tx_action,
++				     (unsigned long)netbk);
++			tasklet_init(&netbk->tasklet.net_rx_tasklet,
++				     net_rx_action,
++				     (unsigned long)netbk);
++		}
++
++		INIT_LIST_HEAD(&netbk->pending_inuse_head);
++		INIT_LIST_HEAD(&netbk->net_schedule_list);
 +
-+	init_timer(&net_timer);
-+	net_timer.data = 0;
-+	net_timer.function = net_alarm;
-+
-+	init_timer(&netbk_tx_pending_timer);
-+	netbk_tx_pending_timer.data = 0;
-+	netbk_tx_pending_timer.function = netbk_tx_pending_timeout;
-+
-+	mmap_pages = alloc_empty_pages_and_pagevec(MAX_PENDING_REQS);
-+	if (mmap_pages == NULL) {
-+		printk("%s: out of memory\n", __FUNCTION__);
-+		return -ENOMEM;
-+	}
++		spin_lock_init(&netbk->net_schedule_list_lock);
 +
-+	for (i = 0; i < MAX_PENDING_REQS; i++) {
-+		page = mmap_pages[i];
-+		SetPageForeign(page, netif_page_release);
-+		netif_set_page_index(page, i);
-+		INIT_LIST_HEAD(&pending_inuse[i].list);
++		atomic_set(&netbk->netfront_count, 0);
 +	}
 +
-+	pending_cons = 0;
-+	pending_prod = MAX_PENDING_REQS;
-+	for (i = 0; i < MAX_PENDING_REQS; i++)
-+		pending_ring[i] = i;
-+
 +	netbk_copy_skb_mode = NETBK_DONT_COPY_SKB;
 +	if (MODPARM_copy_skb) {
 +		if (HYPERVISOR_grant_table_op(GNTTABOP_unmap_and_replace,
@@ -19520,7 +19795,7 @@ index 0000000..5dc4f98
 +	(void)bind_virq_to_irqhandler(VIRQ_DEBUG,
 +				      0,
 +				      netif_be_dbg,
-+				      SA_SHIRQ,
++				      IRQF_SHARED,
 +				      "net-be-dbg",
 +				      &netif_be_dbg);
 +#endif
@@ -19528,9 +19803,16 @@ index 0000000..5dc4f98
 +	return 0;
 +
 +failed_init:
-+	free_empty_pages_and_pagevec(mmap_pages, MAX_PENDING_REQS);
-+	del_timer(&netbk_tx_pending_timer);
-+	del_timer(&net_timer);
++	for (i = 0; i < group; i++) {
++		struct xen_netbk *netbk = &xen_netbk[i];
++		free_empty_pages_and_pagevec(netbk->mmap_pages,
++				MAX_PENDING_REQS);
++		del_timer(&netbk->netbk_tx_pending_timer);
++		del_timer(&netbk->net_timer);
++		if (MODPARM_netback_kthread)
++			kthread_stop(netbk->kthread.task);
++	}
++	vfree(xen_netbk);
 +	return rc;
 +
 +}
@@ -19540,10 +19822,10 @@ index 0000000..5dc4f98
 +MODULE_LICENSE("Dual BSD/GPL");
 diff --git a/drivers/xen/netback/xenbus.c b/drivers/xen/netback/xenbus.c
 new file mode 100644
-index 0000000..70636d0
+index 0000000..ba7b1de
 --- /dev/null
 +++ b/drivers/xen/netback/xenbus.c
-@@ -0,0 +1,523 @@
+@@ -0,0 +1,528 @@
 +/*  Xenbus code for netif backend
 +    Copyright (C) 2005 Rusty Russell <rusty at rustcorp.com.au>
 +    Copyright (C) 2005 XenSource Ltd
@@ -19708,12 +19990,17 @@ index 0000000..70636d0
 + */
 +static int netback_uevent(struct xenbus_device *xdev, struct kobj_uevent_env *env)
 +{
-+	struct backend_info *be = dev_get_drvdata(&xdev->dev);
-+	struct xen_netif *netif = be->netif;
++	struct backend_info *be;
++	struct xen_netif *netif;
 +	char *val;
 +
 +	DPRINTK("netback_uevent");
 +
++	be = dev_get_drvdata(&xdev->dev);
++	if (!be)
++		return 0;
++	netif = be->netif;
++
 +	val = xenbus_read(XBT_NIL, xdev->nodename, "script", NULL);
 +	if (IS_ERR(val)) {
 +		int err = PTR_ERR(val);
@@ -28494,16 +28781,28 @@ index f4906f6..e7233e8 100644
 +
  #endif /*__ACPI_DRIVERS_H__*/
 diff --git a/include/acpi/processor.h b/include/acpi/processor.h
-index 740ac3a..3d1205f 100644
+index 740ac3a..7ee588d 100644
 --- a/include/acpi/processor.h
 +++ b/include/acpi/processor.h
-@@ -238,6 +238,13 @@ struct acpi_processor_errata {
+@@ -238,6 +238,25 @@ struct acpi_processor_errata {
  	} piix4;
  };
  
 +extern int acpi_processor_errata(struct acpi_processor *pr);
++#ifdef CONFIG_ACPI_PROCFS
 +extern int acpi_processor_add_fs(struct acpi_device *device);
 +extern int acpi_processor_remove_fs(struct acpi_device *device);
++#else
++static inline int acpi_processor_add_fs(struct acpi_device *device)
++{
++	return 0;
++}
++
++static inline int acpi_processor_remove_fs(struct acpi_device *device)
++{
++	return 0;
++}
++#endif
 +extern int acpi_processor_set_pdc(struct acpi_processor *pr);
 +extern int acpi_processor_remove(struct acpi_device *device, int type);
 +extern void acpi_processor_notify(struct acpi_device *device, u32 event);
@@ -28511,7 +28810,7 @@ index 740ac3a..3d1205f 100644
  extern int acpi_processor_preregister_performance(struct
  						  acpi_processor_performance
  						  *performance);
-@@ -295,6 +302,8 @@ static inline void acpi_processor_ffh_cstate_enter(struct acpi_processor_cx
+@@ -295,6 +314,8 @@ static inline void acpi_processor_ffh_cstate_enter(struct acpi_processor_cx
  void acpi_processor_ppc_init(void);
  void acpi_processor_ppc_exit(void);
  int acpi_processor_ppc_has_changed(struct acpi_processor *pr);
@@ -28520,7 +28819,7 @@ index 740ac3a..3d1205f 100644
  #else
  static inline void acpi_processor_ppc_init(void)
  {
-@@ -331,6 +340,7 @@ int acpi_processor_power_init(struct acpi_processor *pr,
+@@ -331,6 +352,7 @@ int acpi_processor_power_init(struct acpi_processor *pr,
  int acpi_processor_cst_has_changed(struct acpi_processor *pr);
  int acpi_processor_power_exit(struct acpi_processor *pr,
  			      struct acpi_device *device);
@@ -28648,19 +28947,6 @@ index 24c3956..3d74515 100644
  #ifdef CONFIG_NUMA
  	/*
  	 * set_policy() op must add a reference to any non-NULL @new mempolicy
-diff --git a/include/linux/module.h b/include/linux/module.h
-index 460df15..482efc8 100644
---- a/include/linux/module.h
-+++ b/include/linux/module.h
-@@ -455,7 +455,7 @@ void symbol_put_addr(void *addr);
- static inline local_t *__module_ref_addr(struct module *mod, int cpu)
- {
- #ifdef CONFIG_SMP
--	return (local_t *) per_cpu_ptr(mod->refptr, cpu);
-+	return (local_t *) (mod->refptr + per_cpu_offset(cpu));
- #else
- 	return &mod->ref;
- #endif
 diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
 index 6b202b1..b03950e 100644
 --- a/include/linux/page-flags.h


--- btrfs-check-for-read-permission-on-src-file-in-clone-ioctl.patch DELETED ---


--- iwlwifi_-clear-all-the-stop_queue-flag-after-load-firmware.patch DELETED ---


--- patch-2.6.32.13.bz2.sign DELETED ---


--- revert-ath9k_-fix-lockdep-warning-when-unloading-module.patch DELETED ---


