OSM Route Manager and History Viewer are now hosted on the FOSSGIS development server under the new URLs http://osmrm.openstreetmap.de/ and http://osmhv.openstreetmap.de/. The old URLs (http://osm.cdauth.eu/route-manager/ and http://osm.cdauth.eu/history-viewer/) will continue to work and now redirect to the new locations.
I have implemented new caching and queueing mechanisms so that only one request is processed at a time (to avoid excessive use of resources and API calls). Hopefully, these changes will finally make the service run acceptably stably.
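For the curious, the single-worker idea can be sketched roughly as follows. This is a minimal Python illustration of "one shared queue, one request at a time", not the actual implementation; the `submit` helper and job names are invented for the example.

```python
import queue
import threading

# All requests go into one shared queue and are handled by a single
# worker thread, so at most one analysis runs at a time and memory
# use stays bounded. (Illustrative sketch only, not the real code.)
job_queue = queue.Queue()
results = {}

def worker():
    while True:
        job_id, task = job_queue.get()
        try:
            results[job_id] = task()  # run the analysis
        finally:
            job_queue.task_done()

threading.Thread(target=worker, daemon=True).start()

def submit(job_id, task):
    """Enqueue a task; callers poll `results` for completion."""
    job_queue.put((job_id, task))

submit("changeset-1", lambda: "analysed")
job_queue.join()  # block until the worker has drained the queue
print(results["changeset-1"])
```

The trade-off, as discussed in the comments below, is that one slow task at the head of the queue delays everything behind it.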
Comment from seav on 1 January 2011 at 04:14
I find the History Viewer an invaluable tool for inspecting changesets. Thanks for developing this!
Comment from vsandre on 3 January 2011 at 11:14
THX for your great tool.
Comment from Candid Dauth on 3 January 2011 at 15:40
I read about that bookmarklet some time ago and created two Greasemonkey scripts that create a link on the changeset and relation pages: http://userscripts.org/scripts/show/92779 and http://userscripts.org/scripts/show/92776
Comment from seav on 4 January 2011 at 14:27
Hmmm, not sure where to report this but it seems that since you put the queuing mechanism in the History Viewer, either:
a) some changesets take a really long time to analyze (maybe put a timeout?) leading to a long processing time in the queue, or
b) the queue doesn't "pop" until somebody has viewed (or fetched the HTML output of) the latest processed changeset leading to a possible deadlock?
Comment from Candid Dauth on 4 January 2011 at 15:17
What position in the queue does it display for your changeset? As there is one queue for all functions of Route Manager and History Viewer, it is not unlikely that the queue takes a long time to work through, especially since the Relation Blame function of History Viewer can take very long.
Comment from Petr Dlouhý on 9 January 2011 at 17:52
I have written you a private message about this. I thought that the service had been temporarily frozen, but now I see that the problem has lasted for a longer time. I have been waiting for about an hour now without any change in my queue position, which is very annoying.
If the service is being blocked by one task, there obviously should be a timeout. I would expect a maximum timeout of about one minute per task (depending on service usage, it could be adjusted dynamically).
Comment from Candid Dauth on 9 January 2011 at 17:58
I have reset the queue now. I don’t know how to avoid this, but one minute is way too short as a timeout (the relation blame function takes a long time even for small relations). Maybe I will create separate queues for the different functions.
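Separate queues with per-function timeouts could be sketched like this. This is a hypothetical Python illustration only; the function names and timeout values are made up, and a thread-based timeout can only detect an overrunning task, not forcibly kill it.

```python
from concurrent.futures import ThreadPoolExecutor, TimeoutError

# One single-worker executor per function, so a slow Relation Blame
# job can no longer block changeset analyses. Timeouts differ per
# function, since "blame" legitimately runs much longer.
# (Illustrative sketch; names and limits are invented.)
executors = {
    "changeset": ThreadPoolExecutor(max_workers=1),
    "blame": ThreadPoolExecutor(max_workers=1),
}
timeouts = {"changeset": 60, "blame": 3600}  # seconds

def run(kind, task):
    future = executors[kind].submit(task)
    try:
        return future.result(timeout=timeouts[kind])
    except TimeoutError:
        future.cancel()  # only succeeds if the task has not started yet
        return None

print(run("changeset", lambda: "done"))
```

With this split, a deadlock or runaway job in one queue at least leaves the other functions usable.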
Comment from Petr Dlouhý on 9 January 2011 at 18:53
The restart didn't help - the queue is still stuck.
I haven't had a chance to try the new services yet, so I have no measure for the waiting times. Waiting times of more than a few minutes might be a big blocker for users of this service (at least for the basic functions).
Separating the queues is a good approach, and I can imagine other measures that could make waiting times shorter or fairer:
-Long-running tasks could be suspended (or detected in advance) and moved to a queue for time-expensive tasks, to be completed later.
-More tasks could run at a time (this could help on a multi-processor/core system, and would also let short tasks overtake long-running ones).
Comment from Candid Dauth on 9 January 2011 at 19:14
Running more tasks at a time will not bring a speed improvement but will only consume more memory, as the limiting factor is the OSM API.
The fairest policy would be to have no queue at all and to process everything at the same time. The problem is that for an analysis to run, lots of OSM objects need to be kept in memory. That’s the reason why I implemented the queue: this way the memory consumption is kept as low as possible.
Comment from Petr Dlouhý on 9 January 2011 at 19:31
OK, I don't know the details, so my suggestions are a little pointless.
The service seems to be working now, so thank you for that.
BTW, didn't the old version of the relation manager have an altitude profile of the route?
Comment from seav on 27 January 2011 at 13:45
There really needs to be a timeout. Maybe an hour? There seems to be something at the head of the queue right now that is taking almost 24 hours to process. The changeset that I wanted analyzed is at the same queue position as it was almost 24 hours ago.
Alternatively, I could just run an instance of the History Viewer for myself. :-P
Comment from Candid Dauth on 27 January 2011 at 15:47
Okay, I have activated some detailed logging, and it seems that there is in fact a deadlock somewhere.
Comment from seav on 28 January 2011 at 18:02
It seems to have happened again, though.
Comment from seav on 24 April 2011 at 06:11
Is there a deadlock again? Maybe there should really be a timeout.
Comment from seav on 2 May 2011 at 13:29
Hi. I'm really sorry for the bother, but the queue is now up to 445 as of this moment.