Packaging of noVNC and Websockets
by Adam Young
It looks like a couple of projects are interested in using the noVNC
viewer as a way of talking to machines from a web browser. I've made a
first stab at packaging them, and, in doing so, learned a little bit.
The noVNC code is designed around a proxy that, under the Debian deploy,
lives in /usr/share/noVNC/utils/. This directory contains shell
scripts, a shared object complete with Makefile, and lots of python
code. Needless to say, it does not match Fedora packaging standards.
It uses the WebSocket protocol, which is not quite HTTP. Apache HTTPD
does not support WebSocket natively, although there is apparently a
path to do so via http://code.google.com/p/pywebsocket/. However, the
noVNC approach is to bundle a simple web server and WebSocket
implementation. In addition, a Python script called websockify handles
SSL.
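As a rough illustration, a hypothetical invocation of the bundled proxy might look like the following; the port numbers, certificate path, and exact flag names are assumptions and may differ between websockify versions:

    # listen on 6080, forward to the VNC server on 5900, with SSL via a PEM cert
    ./utils/websockify --cert=/etc/pki/tls/certs/novnc.pem 6080 localhost:5900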
When deployed, the web proxy does not lock down browsing of
subdirectories. When run from an init script that does not set the
working directory, it exposes the entire directory tree underneath. The
normal usage is better; devstack runs:

    $ cd /opt/stack/noVNC && ./utils/nova-novncproxy --config-file /etc/nova/nova.conf --web .

Run this way, it only exposes the /usr/share/noVNC directory as
read-only, but it really should not allow directory indexing either.
However, our current init script runs:
daemon --user nova --pidfile $pidfile "$exec --flagfile $config
--logfile $logfile &>/dev/null & echo \$! > $pidfile"
where $exec is
/usr/bin/nova-vncproxy.
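One possible fix, sketched under the assumption that the packaged proxy accepts the same --web option devstack uses, would be to pin the web root in the init script rather than relying on the working directory:

    # hypothetical: point the proxy at the packaged web root explicitly
    daemon --user nova --pidfile $pidfile "$exec --flagfile $config \
        --web /usr/share/noVNC --logfile $logfile &>/dev/null & echo \$! > $pidfile"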
In my spec file, in order to match this, I moved the executables from
/opt/stack/noVNC/utils to /usr/bin, but that does not seem like a good
long-term solution: they are generically named and should have novnc as
part of their name as well.
I've also renamed /opt/stack/noVNC/utils/nova-novncproxy to
/usr/bin/nova-vncproxy, which seems like it should not be necessary.
Currently, the OpenStack-specific code is in the upstream git repo for
noVNC, but it really should be moved to the Nova git repository. I'll
talk to the original author to find out his rationale, and to see if we
can get it moved over.
I've posted my current work here:
http://admiyo.fedorapeople.org/noVNC/
but I would not suggest that people use it yet. I am certainly willing
to take feedback on the spec file:
http://admiyo.fedorapeople.org/noVNC/novnc.spec
Dan B suggested a few things that I'd like to record here (sketched briefly below):
1. Is there a need to create a novnc user with an empty home dir to run in?
2. The python code should be made into a site-package.
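A minimal sketch of what the first suggestion might look like in the spec, assuming a system account named novnc with /var/lib/novnc as its (empty) home directory; none of these names come from the current spec:

    %pre
    # create a dedicated system user for the proxy, if it does not already exist
    getent group novnc >/dev/null || groupadd -r novnc
    getent passwd novnc >/dev/null || \
        useradd -r -g novnc -d /var/lib/novnc -s /sbin/nologin \
        -c "noVNC proxy user" novnc
    exit 0

For the second suggestion, the Python modules would be installed under %{python_sitelib} and listed there in %files, rather than being shipped under /usr/share/noVNC/utils.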
Fedora 17 openstack packages Status
by Pádraig Brady
Here's a summary of the OpenStack package status for Fedora 17.
Updates have been submitted for Essex final for all
of the OpenStack packages. Thanks to those involved!
Package                          Status                  Karma needed for stable
---------------------------------------------------------------------------------
openstack-nova-2012.1-1          updates-testing         2
openstack-glance-2012.1-3        stable-pending          0
openstack-keystone-2012.1-1      stable-pending          0
python-django-horizon-2012.1-1   updates-testing         1
python-novaclient-2012.1-1       updates-testing         2
python-eventlet-0.9.16-6         updates-testing         2
openstack-quantum-2012.1-1       updates-testing         3
python-quantumclient-2012.1-1    updates-testing         3
python-keystoneclient-2012.1-1   updates-testing         2
openstack-swift-1.4.8-1          updates-testing         2
openstack-utils-2012.1-1         package-review-request
We can push from updates-testing to stable after 3 days
(when stable is open), but any karma feedback for the
above packages would be appreciated.
As for stuff left to do and Fedora release dates:
updates-testing is open as you can see.
stable will open again April 17th (after beta release)
stable will close on May 7th (final change deadline)
Also I'd appreciate a review of openstack-utils-2012.1-1
https://bugzilla.redhat.com/show_bug.cgi?id=811601
http://fedoraproject.org/wiki/Packaging:ReviewGuidelines
cheers,
Pádraig.
Essex EPEL Testing
by Brown, David M JR
Fedora Devs,
I just spent the last couple of days fighting with Essex on RHEL6. It's been entertaining, and I'd like to share some of the oddities and experiences.
The system configuration is the following:
* Two nodes on their own /24, connected to each other by a crossover cable on the second interface.
* The first node is the cloud controller and has tons of storage (11 TB), 32 GB of RAM, and 16 cores.
* The second node, which I would like to make an extra compute node, has 24 GB of RAM and 8 cores (still a work in progress).
Originally the cloud controller was running Diablo on RHEL6 and was working fine.
I couldn't find any 'upgrade' instructions for going between Diablo and Essex, and I wasn't too worried because usage of the cloud was limited to just a couple of guys, so I was satisfied with backing up all the data manually and rebuilding the cluster. When I did the update I noticed that things stopped working, and following the install instructions blew away all local data in the cloud.
I was following the instructions found at the following URL.
http://fedoraproject.org/wiki/Getting_started_with_OpenStack_EPEL
I got the packages from
http://pbrady.fedorapeople.org/openstack-el6/
First issue: wow, this is long. It's almost long enough that an uber script in a common package somewhere could replace most of the manual commands. I'd suggest first pulling out all the openstack-config-set commands and putting them in a script to run. I'm not sure what to do about the swift documentation bits; that seems like a very manual set of configurations, and why aren't they part of the swift RPM? Another suggestion would be to split it into a couple of documents, one describing installation and configuration, and the next describing putting data/users into it and starting stuff. Thoughts?
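As a rough illustration of that first suggestion, a wrapper script could look something like this; the openstack-config-set argument order (file, section, key, value) is assumed from the wiki page, and the keys shown are just examples rather than a complete configuration:

    #!/bin/sh
    # collect the repeated openstack-config-set calls from the wiki into one place
    CONF=/etc/nova/nova.conf
    openstack-config-set $CONF DEFAULT auth_strategy keystone
    openstack-config-set $CONF DEFAULT network_manager nova.network.manager.FlatDHCPManager
    openstack-config-set $CONF DEFAULT sql_connection mysql://nova:nova@localhost/nova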
After I got everything set up and working, I noticed an issue with the dashboard: most of the static content wasn't showing up, and I had to add a symlink:
/usr/share/openstack-dashboard/static -> openstack_dashboard/static
Then the dashboard picked up the right stuff and it worked.
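Something along these lines is what created it (the exact path layout is from my install and may differ):

    cd /usr/share/openstack-dashboard && ln -s openstack_dashboard/static static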
There are some consistency issues, and I'm not sure if this is an OpenStack issue in general. The euca tools, as configured with keystone, only seem to work with your personal instances and configuration, while the dashboard seems to show users everything associated with the project instead. For example, floating IPs I allocate from the website won't show up when I run euca-describe-addresses, and likewise an IP allocated with euca-allocate-address won't show up in the dashboard. I've looked at the database: project ids are used when going through the dashboard, and user ids are used when going through the euca tools. I think the euca tools could be set up to see everything the dashboard sees, but the documentation doesn't explain how to do that.
There also seem to be some serious functionality faults that I can't work around. I can't attach a user to multiple projects, and I'm not sure how to do that. Also, there's a lot of "huh, that doesn't seem to be implemented yet." This seems like a general OpenStack issue: the documentation says X, but that doesn't work yet, or anymore.
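For what it's worth, my understanding is that attaching a user to a second project would be done by granting the user a role on that tenant, roughly like the following; the exact flag names vary between keystoneclient versions, so treat this as a sketch rather than verified syntax:

    # grant an existing role to the user on a second tenant (IDs are placeholders)
    keystone user-role-add --user-id <USER_ID> --role-id <ROLE_ID> --tenant-id <TENANT_ID>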
I'm having a serious issue getting the second compute node working: `nova-manage service list' doesn't show ':-)' for the compute and network services running on that node. I've followed the instructions to the letter and tried getting things working, but it's not going.
nova.conf for the controller:
[DEFAULT]
logdir = /var/log/nova
state_path = /var/lib/nova
lock_path = /var/lib/nova/tmp
dhcpbridge = /usr/bin/nova-dhcpbridge
dhcpbridge_flagfile = /etc/nova/nova.conf
force_dhcp_release = False
injected_network_template = /usr/share/nova/interfaces.template
libvirt_xml_template = /usr/share/nova/libvirt.xml.template
libvirt_nonblocking = True
vpn_client_template = /usr/share/nova/client.ovpn.template
credentials_template = /usr/share/nova/novarc.template
network_manager = nova.network.manager.FlatDHCPManager
iscsi_helper = tgtadm
sql_connection = mysql://nova:nova@localhost/nova
connection_type = libvirt
firewall_driver = nova.virt.libvirt.firewall.IptablesFirewallDriver
rpc_backend = nova.rpc.impl_qpid
root_helper = sudo nova-rootwrap
auth_strategy = keystone
public_interface = eth0
quota_floating_ips = 100
nova.conf on the compute node:
[DEFAULT]
logdir = /var/log/nova
state_path = /var/lib/nova
lock_path = /var/lib/nova/tmp
dhcpbridge = /usr/bin/nova-dhcpbridge
dhcpbridge_flagfile = /etc/nova/nova.conf
force_dhcp_release = True
injected_network_template = /usr/share/nova/interfaces.template
libvirt_xml_template = /usr/share/nova/libvirt.xml.template
libvirt_nonblocking = True
vpn_client_template = /usr/share/nova/client.ovpn.template
credentials_template = /usr/share/nova/novarc.template
network_manager = nova.network.manager.FlatDHCPManager
iscsi_helper = tgtadm
sql_connection = mysql://nova:nova@CC_NAME/nova
connection_type = libvirt
firewall_driver = nova.virt.libvirt.firewall.IptablesFirewallDriver
rpc_backend = nova.rpc.impl_qpid
root_helper = sudo nova-rootwrap
rabbit_host = CC_NAME
glance_api_servers = CC_NAME:9292
iscsi_ip_prefix = CC_ADDR
public_interface = eth2
verbose = True
s3_host = CC_NAME
ec2_api = CC_NAME
ec2_url = http://CC_NAME:8773/services/Cloud
fixed_range = 10.0.0.0/24
network_size = 256
Any help would be appreciated.
Thanks,
- David Brown
OpenStack status
by Pádraig Brady
Hi,
It's been nearly 3 weeks since the last status update, so here's the latest:
https://fedoraproject.org/wiki/OpenStack_status_report_2012-04-24
Historical archives are here:
http://fedoraproject.org/wiki/OpenStack_status_reports
Cheers,
Pádraig.
(appended below for convenience)
= OpenStack Upstream =
== NEWS ==
There were significant upstream OpenStack news items since the last summary.
* Apr 5th: The [http://www.openstack.org/projects/essex/press-release/ Essex release] was completed
* Apr 16-18: The [http://wiki.openstack.org/FolsomSummitEtherpads Folsom design summit] took place
* Apr 12th: An [http://lists.openstack.org/pipermail/foundation/2012-April/000254.html OpenStack Foundation announcement] was made, listing companies who have signed the framework acknowledgement letter, including Red Hat who have signed at the platinum level.
** http://gb.redhat.com/about/news/archive/2012/4/red-hat-openstack-and-open...
** http://gb.redhat.com/about/news/archive/2012/4/Red-Hat-and-OpenStack-Anno...
This generated significant interest in the media:
** http://www.huffingtonpost.com/arnal-dayaratna/openstack-foundation-sets_b...
** http://gigaom.com/cloud/its-official-ibm-and-red-hat-are-onboard-with-ope...
== Essex contribution stats ==
Mark McLoughlin used gitdm to analyze [http://markmail.org/message/uw3d5a3sxs7a4iru who contributed to Essex],
which was widely reported both before and during the OpenStack Folsom design summit.
* http://www.readwriteweb.com/cloud/2012/04/who-wrote-openstack-essex-a-de.php
* http://www.internetnews.com/blog/skerner/red-hat-contributes-more-to-open...
== Stable branches ==
There was some discussion, before and at the summit, about the OpenStack
stable branches, which are the basis of the Fedora OpenStack packages.
The thought was that two stable branches could be maintained,
one actively and one passively (only NB fixes and no point releases):
* http://openstack.markmail.org/thread/5en25luqxahzah4n
= Fedora OpenStack Packages =
== Essex package status ==
As of April 10th, Essex final was available from the Fedora 17 updates-testing repo:
* http://lists.fedoraproject.org/pipermail/cloud/2012-April/001365.html
These will be pushed to stable before it closes on May 7th (F17 final change deadline)
== Essex for Fedora 16 ==
Alan Pevec continues to update the [https://fedoraproject.org/wiki/OpenStack#Preview_repository Fedora 16 preview repo] with the
latest Essex packages as they're released for Fedora 17.
== OpenStack database fixes ==
There were a couple of issues fixed with the interaction of the OpenStack services with MySQL.
* There was an issue with the db-setup script where it created [https://bugzilla.redhat.com/811130 root-owned log files], thus interfering with nova service startup.
This was fixed, and the db-setup script was refactored out of each OpenStack service and
incorporated in a new [https://admin.fedoraproject.org/pkgdb/acls/name/openstack-utils openstack-utils package] along with other support utilities for OpenStack in Fedora/EPEL.
* Also, Derek Higgins identified an [https://bugzilla.redhat.com/815812 issue with MySQL on F16], which was resolved
by updating systemd.
== Security updates ==
There were two security updates applied:
* Apr 17: XSS vulnerability in Horizon log viewer - CVE-2012-2094
* Apr 19: No quota enforced on Nova security group rules - CVE-2012-2101
= EPEL =
== Preview Repository ==
Work on Essex for EPEL6 continues with new OpenStack services horizon and quantum added.
A [http://pbrady.fedorapeople.org/openstack-el6/ preview repo] is available,
and many users have tested it, reporting both successes and any issues they hit.
== Setup Guide ==
Adam Young prepared detailed [https://fedoraproject.org/wiki/Getting_started_with_OpenStack_EPEL setup notes]
for installing those preview packages.
Derek Higgins amended the above with swift and keystone configuration details.
== CentOS Chinese notes ==
"DarkFlower" posted to the openstack dev list with a document detailing
[https://lists.launchpad.net/openstack/msg10168.html pip installing Essex packages on Centos 6.2]
= Misc =
== Keystone in httpd ==
Adam Young implemented a [http://adam.younglogic.com/2012/04/keystone-httpd/ proof of concept] for running keystone
under a more standard web server, rather than its eventlet-based one.
== SELinux improvements ==
Adam also identified some SELinux issues with the OpenStack (and base python) packages,
and is looking into addressing those.
== DevStack ==
Some users are [https://twitter.com/#!/robynbergeron/statuses/190064780311146496 reporting success with devstack on Fedora].
== Heat Project ==
The Heat project, an AWS CloudFormation API implementation for OpenStack,
was [http://openstack.markmail.org/thread/phbfasxphhnk2g6r announced].
It's progressing quickly, with [https://lists.launchpad.net/openstack/msg10478.html API v2 released].
Also, OpenStack [https://github.com/heat-api/heat/wiki/Configuring-Floating-IPs floating IP setup details] were documented by the Heat project.