Per http://thread.gmane.org/gmane.comp.version-control.fossil-scm.user/1489:
On Dec 2, 2009, at 10:52 AM, Daniel Clark wrote:
> For a while it's seemed odd to me that google hadn't updated its
> indexes of a few sites, so I took a look and noticed the obvious -
> robots.txt was disallowing all crawlers:
>
> (from main.c)
>
>   /* Prevent robots from indexing this site.
>   */
>   if( strcmp(g.zPath, "robots.txt")==0 ){
>     cgi_set_content_type("text/plain");
>     @ User-agent: *
>     @ Disallow: /
>     cgi_reply();
>     exit(0);
>   }
>
> As far as I can tell there isn't a way to disable or tune this from
> the fossil level; if there isn't interest in changing this I'm sure I
> can just redirect via apache to some actual file, but IMHO it would be
> good to be able to easily make fossil projects searchable (perhaps even
> have this be the default), esp. since at the moment the only reason that
> say fossil-scm.org is searchable is because the robots.txt file happens
> to be at:
>
> http://fossil-scm.org/index.html/robots.txt
>
> (e.g., a random apache configuration choice to have rather ugly URLs
> with "index.html" in all of them.)
Note that http://www.fossil-scm.org/ does not use apache. The
redirect occurs within fossil itself.
I suppose that since the existing "robots.txt" is essentially a no-op,
we might as well remove it.