Searching again...

Paul W. Frields stickster at gmail.com
Fri Feb 3 15:56:26 UTC 2012


On Thu, Feb 02, 2012 at 05:14:47PM -0500, Eric H. Christensen wrote:
> On Thu, Feb 02, 2012 at 08:44:18PM +0000, Robert 'Bob' Jensen wrote:
> > 
> > ----- "Kevin Fenzi" <kevin at scrye.com> wrote:
> > 
> > > So, I got to looking at search engines again the other day. In
> > > particular the horrible horrible mediawiki one we are using on the
> > > wiki. 
> > > 
> > > This pointed me to sphinx. 
> > > 
> > > - There is a mediawiki sphinx plugin. (needs packaging)
> > > - sphinx is c++ and already packaged. 
> > > - sphinx uses mysql directly to index the database contents. 
> > > - You can pass other data into it via an xml format. This could be a
> > >   pain for any non wiki setups. 
> > > 
> > > It was noted that the new tagger application uses xapian as its
> > > search engine. 
> > > 
> > > - xapian is also c++
> > > - xapian has a web crawler/indexer (omega) that could index our other
> > >   stuff more easily than sphinx. 
> > > - There's no mediawiki plugin for xapian, but we could point the wiki
> > >   search box to a site wide search using xapian. 
> > > 
> > > So, there are tradeoffs either way. 
> > > 
> > > Would anyone care to lead an effort to test these two? 
> > > xapian would probably be easy to test from anywhere. 
> > > sphinx might require some access to our mediawiki database, but you
> > > could also just set up a new mediawiki, the plugin, and sphinx and
> > > see how it works there. 
> > > 
> > > If no one steps up I can look at doing it next week. ;) 
> > > 
> > 
> > My concern has always been the horrible wiki content search, as
> > Kevin also mentioned. From the description provided, sphinx sounds
> > like the best tool for that job out of the box. I have a couple of
> > concerns that we need to be sure to test, given that xapian works
> > as a crawler.
> > 
> > - Will this work for pages on the wiki that are already hard to
> >   find because they are not linked from anywhere?
> > - Are we sure it will work on docs.fp.o and its JavaScript
> >   navigation menu?
> > 
> > I am willing to help out testing if another can take the lead on
> > it.
> > 
> > -- Bob
> 
> When we were discussing sphinx the other day, I seem to remember
> something about it being able to read DocBook (or am I just
> misremembering the entire conversation?).  That could be interesting
> for docs.fp.o.  Docs.fp.o has a fallback mode for the JavaScript
> with a document index of sorts that could be helpful for crawling.
> 
> The web crawling functionality sounds interesting but, like Bob
> noted, if wiki pages aren't linked then they may never be found.  Do
> we know exactly what the mediawiki plugin for sphinx does?

If sphinx accesses the wiki's backend DB as Kevin indicated, it seems
unlikely pages would be hidden from searching simply because they're
not linked.  Only testing will tell for sure.  On the other hand, if
that's the only way it works, then I'm not sure how it could digest
docs.fp.o, which has no similar DB.
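For what it's worth, the XML input Kevin mentioned is sphinx's
xmlpipe2 format, so in principle we could feed it docs.fp.o content by
generating a stream along these lines.  This is a rough sketch from
memory, and the field names and document are just examples, not a
tested config:

```xml
<?xml version="1.0" encoding="utf-8"?>
<sphinx:docset>
  <sphinx:schema>
    <sphinx:field name="title"/>
    <sphinx:field name="content"/>
  </sphinx:schema>
  <!-- one document element per page to index; ids must be unique -->
  <sphinx:document id="1">
    <title>Example page title</title>
    <content>Body text extracted from the page...</content>
  </sphinx:document>
</sphinx:docset>
```

Someone would still have to write the script that extracts page text
and emits this, which is the "pain for non wiki setups" Kevin noted.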

> I like the idea of having a site-wide search feature as you don't
> know if the answer you seek is in a document, the wiki, or a
> webpage.
> 
> Depending on how badly my day job keeps me moving this weekend, I
> could possibly test one or the other.  I think I'd like to look at
> xapian just to see how well it indexes the wiki.
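If it helps whoever picks this up: as far as I know, omega's indexer
is the omindex tool, and it walks a document tree on disk rather than
crawling over HTTP, so a quick test could run it against a local
mirror of the site.  Roughly (the paths here are made up):

```shell
# Build a Xapian database from a local copy of the site.
# --db names the database; --url sets the base URL recorded
# for each document so search results link back correctly.
omindex --db /var/lib/omega/default --url / /srv/mirror/site
```

I haven't checked those flags against the packaged version, so treat
this as a sketch rather than a recipe.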

-- 
Paul W. Frields                                http://paul.frields.org/
  gpg fingerprint: 3DA6 A0AC 6D58 FEC4 0233  5906 ACDB C937 BD11 3717
  http://redhat.com/   -  -  -  -   http://pfrields.fedorapeople.org/
    The open source story continues to grow: http://opensource.com


More information about the infrastructure mailing list