Stephan, can you please reprovision fed-cloud09-15? I.e. both the controller and all compute nodes.
I'm CCing the mailing list so that if you are experimenting with FC, you know why your VMs are gone. There should be no production VMs there.
This may be the final version, but I'm not sure, because I made some changes to packstack which I could not test right now. So it may be final, or I may request one more reprovision.
Once it is declared final, I plan to do a Fedora classroom for everybody who wants to learn how it is set up, what you can do, and what you should not do.
Mirek
On 20 April 2015 at 07:07, Miroslav Suchy msuchy@redhat.com wrote:
Stephan, can you please reprovision fed-cloud09-15? I.e. both controller and all compute nodes.
I will start on this project in a short while. My apologies for the delay... this got stuck in a folder for autoemails for some reason.
_______________________________________________
infrastructure mailing list
infrastructure@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/infrastructure
On 20 April 2015 at 18:18, Stephen John Smoogen smooge@gmail.com wrote:
I have run into a crashing bug while rebuilding the boxes. fed-cloud09, fed-cloud10, and fed-cloud11 are rebuilt; fed-cloud12 -> fed-cloud15 are not.
I will finish fed-cloud12 -> fed-cloud15 later 'today' once I get some feedback on the crasher.
Systems have been rebuilt. Ansible on fed-cloud09 currently dies at:
TASK: [command vgrename vg_guests cinder-volumes] *****************************
failed: [fed-cloud09.cloud.fedoraproject.org] => {"changed": true, "cmd": ["vgrename", "vg_guests", "cinder-volumes"], "delta": "0:00:00.040102", "end": "2015-04-21 19:25:44.696825", "rc": 5, "start": "2015-04-21 19:25:44.656723", "warnings": []}
stderr: New volume group "cinder-volumes" already exists
...ignoring

TASK: [lvg vg=cinder-volumes pvs=/dev/md127 pesize=32 vg_options=""] **********
ok: [fed-cloud09.cloud.fedoraproject.org]

TASK: [Create logical volume for Swift] ***************************************
ok: [fed-cloud09.cloud.fedoraproject.org]

TASK: [Create FS on Swift storage] ********************************************
ok: [fed-cloud09.cloud.fedoraproject.org]

TASK: [template src={{ files }}/fedora-cloud/hosts dest=/etc/hosts owner=root mode=0644] ***
ok: [fed-cloud09.cloud.fedoraproject.org]

TASK: [stat path=/etc/packstack_sucessfully_finished] *************************
ok: [fed-cloud09.cloud.fedoraproject.org]

TASK: [service name=NetworkManager state=stopped enabled=no] ******************
ok: [fed-cloud09.cloud.fedoraproject.org]

TASK: [service name=network state=started enabled=yes] ************************
failed: [fed-cloud09.cloud.fedoraproject.org] => {"failed": true}
msg: Job for network.service failed. See 'systemctl status network.service' and 'journalctl -xn' for details.
FATAL: all hosts have already failed -- aborting
=====
I ran it twice in case it was a "we needed to run it once and then again to get past this" type error.
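For what it's worth, the vgrename error itself is an idempotency symptom: an earlier run already renamed vg_guests to cinder-volumes, so a rerun exits with rc 5, and the "...ignoring" in the log suggests the playbook already masks that with ignore_errors. A hedged sketch of an alternative that only tolerates the expected stderr (the task name and register variable here are illustrative, not from the actual playbook):

```yaml
# Illustrative only -- not the real playbook task. Instead of ignoring every
# error, treat "already exists" on a rerun as success and fail on anything else.
- name: rename vg_guests to cinder-volumes
  command: vgrename vg_guests cinder-volumes
  register: vgrename_result
  failed_when: vgrename_result.rc != 0 and 'already exists' not in vgrename_result.stderr
```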
Further update:
fed-cloud10 through fed-cloud15 have had the openstack playbook run on them. For some reason fed-cloud12 -> fed-cloud15 would not run correctly: something deleted the eth1 file either before the playbook was run or during the run. Not sure what is causing it to work on fed-cloud09 -> fed-cloud10.
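To narrow down whether the eth1 file vanishes before or during the run, a pre/post-flight check on each host might help. A minimal sketch, assuming the standard RHEL ifcfg layout (the path and the helper name are my assumptions, not anything from the playbook):

```shell
# Hypothetical helper: report whether the eth1 interface config is present.
# The directory assumes the standard RHEL /etc/sysconfig/network-scripts layout.
check_eth1() {
    # $1 = directory holding the ifcfg-* files
    if [ -f "$1/ifcfg-eth1" ]; then
        echo "present"
    else
        echo "MISSING"
    fi
}

# Run this before and after the playbook (e.g. over ssh on each fed-cloud
# host) to see at which point the file disappears.
check_eth1 /etc/sysconfig/network-scripts
```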