{23} Trac comments (3729 matches)

Results (2401 - 2500 of 3729)

Ticket Posixtime Author Newvalue
#1427 1319709925000000 dread Done in cset:ec9a9efc2d40 for release 1.5
#1428 1319804773000000 dread Fixed in a3b0080467d4 for release 1.5
#1430 1320339962000000 amercader I've been digging more into this one. To reproduce it, you just have to edit the same dataset in both sites (production and testing). Just after editing the dataset, the search index gets mixed site_ids. I checked the jetty logs (see attached files) and just after editing a dataset there are two POST requests to update the index. The request logs don't show the request params so it's hard to tell what the second call does (it is probably the commit): https://bitbucket.org/okfn/ckan/src/97e1e90d66d7/ckan/lib/search/index.py#cl-144 In any case, it's clear that the problem may be related to the datasets in the two cores sharing the same id. We are currently using the dataset id as uniqueKey in SOLR, i.e. in our schema.xml we define:
{{{
<uniqueKey>id</uniqueKey>
}}}
According to the SOLR docs: "If a document is added that contains the same value for this field as an existing document, the old document will be deleted." http://wiki.apache.org/solr/SchemaXml#The_Unique_Key_Field I would expect the uniqueKey not to be shared between cores, but it looks like it is. Maybe we should generate a solr_id specific to each document for each site, as described here: http://wiki.apache.org/solr/UniqueKey#UUID_techniques (Note that apart from the testing/production site use case, at some point sites involved in harvesting could also end up with datasets with the same id.) Again, I'm not a SOLR expert, so the problem could be a completely different one!
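A minimal sketch of the per-site id idea mentioned above (illustrative only, not the actual CKAN patch; make_index_id and the hashing scheme are assumptions): hash the site_id together with the dataset id so the same dataset indexed from two sites produces two distinct SOLR documents.
{{{
import hashlib

def make_index_id(site_id, dataset_id):
    """Build a SOLR uniqueKey that is specific to one CKAN site.

    Hashing site_id together with the dataset id means a dataset edited on
    both production and testing maps to two different SOLR documents, so one
    site can no longer overwrite the other's entry.
    """
    return hashlib.md5(('%s/%s' % (site_id, dataset_id)).encode('utf-8')).hexdigest()

# e.g. make_index_id('iatiregistry.org', some_id) differs from
#      make_index_id('testing.iatiregistry.org', some_id)
}}}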
#1430 1320431138000000 amercader Right, more news on this front. I've tested a patch which uses a hash of the dataset id and site_id to produce a unique id, and then configured the iati solr cores to use index_id as uniqueKey: https://bitbucket.org/okfn/ckan/changeset/855f5a452f60 Unfortunately, that did not solve the issue. Again, updating the same dataset in both apps messes things up. In this case, documents don't get replaced but duplicated, so the new uniqueKey is working. I am more inclined to think that this is caused by a misconfiguration of the SOLR instance on s046. This is the file where the two cores are configured:
{{{
<solr persistent="true" sharedLib="lib">
  <cores adminPath="/admin/cores">
    <core name="testing.iatiregistry.org" instanceDir="testing.iatiregistry.org">
      <property name="dataDir" value="/usr/share/solr/testing.iatiregistry.org/data" />
    </core>
    <core name="iatiregistry.org" instanceDir="iatiregistry.org">
      <property name="dataDir" value="/usr/share/solr/iatiregistry.org/data" />
    </core>
  </cores>
</solr>
}}}
Following these paths: /usr/share/solr/iatiregistry.org symlinks to /etc/solr/iatiregistry.org, and /etc/solr/iatiregistry.org/data is empty (as is the testing equivalent). On the other hand, looking at the admin interface and at some errors that I got, it seems that the data folder both cores are actually using is /var/lib/solr/data/index. Maybe that's the problem?
#1430 1320444567000000 pudo You probably know this by now, but the solrconfig.xml for both points to /var/lib/solr/data (line 71).
#1430 1320660144000000 amercader No, I didn't know it. Is this supposed to be the correct setting or should each core have its own data dir?
#1430 1320667523000000 rgrp @pudo: brilliant spot. How did that misconfig end up there? Anyway, commenting it out and rebooting means we now have separate cores, which seems to fix it.
#1430 1320670062000000 amercader As rgrp mentioned, we commented out the <dataDir> directive in the solrconfig.xml files and rebooted. That made the cores use the data dir they were supposed to (the one in solr.xml) and, from the tests I made, it looks like it finally fixed the issue.
#1430 1324033923000000 dread Fixed in CKAN 1.5.1. Affects all previous CKANs that use SOLR in these circumstances.
#1431 1320084104000000 dread Fixed in cset:711a68a12d90 for release 1.5.
#1432 1324290164000000 ross PhantomJS ( http://www.phantomjs.org/ ) looks like the perfect way to run a JS scraper offline but still within the context of a browser. Would need to consider how the script would be executed over multi-page datasets, but I'm fairly sure it would be possible to come up with a workable solution.
#1433 1320666509000000 kindly Done but waiting to merge after 1.5 release.
#1433 1321827123000000 kindly cset: 68c37312ef70349431213465005761edf434d27e
#1433 1324472583000000 dread Has gone into CKAN 1.5.1 release.
#1434 1338203960000000 seanh At least for the core extensions, any i18n-able strings should be getting pulled into the ckan.pot file by default.
#1434 1341236921000000 seanh Done for the core extensions, pull request: https://github.com/okfn/ckan/pull/47 Perhaps strings from the non-core but officially supported extensions should be added too, but we haven't decided which extensions those are yet, so another ticket has been added to do that later: http://trac.ckan.org/ticket/2625
#1435 1320930310000000 dread Interesting. What's the reason to justify the effort?
#1435 1323168783000000 thejimmyg I personally can't see the benefit of switching to a generic paid service when we already have a highly customised and working infrastructure based on buildbot and buildkit - we do testing in VMs as well as continuous integration. What is the advantage? Suggest wont fix?
#1435 1323283538000000 dread From IRC today:
{{{
<rgrp> dread: btw have a big suggestion -- switch to continuous.io for our buildbot stuff ...
<rgrp> openspending have done this and it's a nice setup ...
<dread> rgrp: interesting - what's better?
<dread> i assume it just starts a script and reports the result?
<rgrp> that's right though also has integration with some backends (but we probably don't need that)
<rgrp> the point is we don't need to boot machines, install and configure buildbot etc (though we may now have automated that ...)
<dread> ah, i see, it's in the cloud
<dread> did all that months ago, and don't need to set up the machine any more.
<dread> if we do it again though, I'm all for it
}}}
#1436 1320230896000000 johnglover This also applies to CKAN core
#1436 1320243278000000 johnglover Fixed in branch feature-1371-task-status-logic-layer, will merge with default after 1.5 release later today.
#1437 1320173496000000 dread This can be fixed with one line of code, so I'm doing it for release 1.5.
#1437 1320173795000000 dread Fixed in cset:5d0bf20a1746 for 1.5. Broken since 1.4.3.
#1439 1325474974000000 rgrp Moving out of v1.6 (has no super ticket atm and seems of low relative importance -- nice to have but not essential).
#1440 1320235084000000 dread I've done this. 'db load' now also does 'db upgrade' and 'search-index rebuild'. If you want to debug the load (or don't need the search index, cos it takes ages to do), then you can just do 'db load-only' which does what 'db load' did before. Changed in cset:9e51df5c496b for 1.5 release.
#1441 1320235062000000 dread Fixed in cset:35542f1c60a2 for release 1.5.
#1442 1320275623000000 dread Fixed in cset:819a74bd1c03 for release 1.5. Note release 1.5 doesn't have #1381 feature, so you're not supposed to create with groups anyway, but this will be useful if people try it.
#1443 1320432511000000 dread Fixed in cset:96b5d9af70a7 for release-v1.5
#1444 1323089960000000 dread I've not looked at this yet, but now that detection is disabled it's less important.
#1444 1340190852000000 ross 7 months, no activity, dead bug.
#1445 1323178176000000 zephod JohnGlover has done most of this and I've been adding aesthetic tweaks. The resource page is now tidy and has a similar look and feel to its parent Dataset view page. I'm closing this ticket and its sibling #1450; I am merging the feature-1450-dataset-view branch which contains all of this work.
#1445 1330019916000000 dread This went into CKAN 1.6
#1446 1321872724000000 rgrp Have made significant progress on new recline but not yet operational. See https://github.com/okfn/recline and https://github.com/okfn/recline/issues/6 https://github.com/okfn/recline/issues/12 https://github.com/okfn/recline/issues/8 also https://github.com/okfn/recline/issues/10 and more ...
#1446 1326281594000000 rgrp Marking as complete as Data Explorer is now functional enough for our purposes. Tickets done include (* indicates improvement over current explorer):
 * Core Backbone Models representing Dataset and Tabular data: https://github.com/okfn/recline/issues/10
 * New theme: https://github.com/okfn/recline/issues/22
 * Read-only mode: https://github.com/okfn/recline/issues/17
 * Introduce hash navigation / state support (*): https://github.com/okfn/recline/issues/19
 * Re-enable editing in DataTable (*): https://github.com/okfn/recline/issues/13
 * [super] DataTable view (in Backbone) (*): https://github.com/okfn/recline/issues/14
 * DataExplorer parent view (*): https://github.com/okfn/recline/issues/12
 * Simple graph widget using flot: https://github.com/okfn/recline/issues/11
NB: quite a bit of work was done before the 2012-01-09 iteration, but it was in that iteration that we finished the work.
#1446 1326281658000000 rgrp Total time:
 * 2012-01-09: 4.5d
 * Pre that date: 5d
#1447 1322220690000000 dread This appears to have happened again today on test.ckan.net and someone has sorted it. The problem is visible on munin as inodes running out.
 * eu25 seems ready to fall over in about a week: http://munin.okfn.org/okfn.org/eu25.okfn.org-df_inode.html
 * thedatahub.org on s055 (and other fry instances) seem to have a dynamically adjusted inode table size (by the kernel) so it is less of a problem
#1447 1322648873000000 dread As predicted, this happened again today. From the following analysis it confirms that the problem is the cache growing and growing. Disk usage in megabytes: {{{ okfn@s025:~/var/srvc/publicdata.eu$ du -s -m /* 7 /bin 22 /boot 1 /dev 10 /etc 4157 /home 0 /initrd.img 0 /initrd.img.old 114 /lib 1 /lost+found 1 /media 1 /mnt 1 /opt 0 /proc 1 /root 7 /sbin 1 /selinux 1 /srv 0 /sys 1 /tmp 421 /usr 443 /var 0 /vmlinuz 0 /vmlinuz.old }}} {{{ okfn@s025:~/var/srvc/publicdata.eu$ du -s -m /home/okfn/var/srvc/publicdata.eu/*2173 /home/okfn/var/srvc/publicdata.eu/backup 1 /home/okfn/var/srvc/publicdata.eu/backup_RENAMED_TO_AVOID_MAYHEM.sh 1 /home/okfn/var/srvc/publicdata.eu/common.sh 1893 /home/okfn/var/srvc/publicdata.eu/data 1 /home/okfn/var/srvc/publicdata.eu/fetch.sh 1 /home/okfn/var/srvc/publicdata.eu/gather.sh 1 /home/okfn/var/srvc/publicdata.eu/pip-requirements.txt 1 /home/okfn/var/srvc/publicdata.eu/publicdata.eu.ini 86 /home/okfn/var/srvc/publicdata.eu/pyenv 1 /home/okfn/var/srvc/publicdata.eu/run.sh 1 /home/okfn/var/srvc/publicdata.eu/sstore 0 /home/okfn/var/srvc/publicdata.eu/who.ini }}} {{{ okfn@s025:~/var/srvc/publicdata.eu$ ls -l /home/okfn/var/srvc/publicdata.eu/backup total 2224588 -rw-r--r-- 1 okfn okfn 343199744 2011-06-14 20:50 db-20110614-2050.sql -rw-r--r-- 1 okfn okfn 0 2011-06-14 20:51 db-20110614-2051.sql -rw-r--r-- 1 okfn okfn 0 2011-06-14 20:52 db-20110614-2052.sql -rw-r--r-- 1 okfn okfn 0 2011-06-14 20:53 db-20110614-2053.sql -rw-r--r-- 1 okfn okfn 0 2011-06-14 20:54 db-20110614-2054.sql -rw-r--r-- 1 okfn okfn 0 2011-06-14 20:55 db-20110614-2055.sql -rw-r--r-- 1 okfn okfn 0 2011-06-14 20:56 db-20110614-2056.sql -rw-r--r-- 1 okfn okfn 0 2011-06-14 20:57 db-20110614-2057.sql -rw-r--r-- 1 okfn okfn 0 2011-06-14 20:58 db-20110614-2058.sql -rw-r--r-- 1 okfn okfn 0 2011-06-14 20:59 db-20110614-2059.sql -rw-r--r-- 1 okfn okfn 1036288 2011-06-14 22:00 db-20110614-2200.sql -rw-r--r-- 1 okfn okfn 0 2011-06-14 22:01 db-20110614-2201.sql -rw-r--r-- 1 okfn okfn 0 2011-06-14 22:02 db-20110614-2202.sql -rw-r--r-- 1 okfn okfn 0 2011-06-14 22:03 db-20110614-2203.sql -rw-r--r-- 1 okfn okfn 0 2011-06-14 22:04 db-20110614-2204.sql -rw-r--r-- 1 okfn okfn 0 2011-06-14 22:05 db-20110614-2205.sql -rw-r--r-- 1 okfn okfn 0 2011-06-14 22:06 db-20110614-2206.sql -rw-r--r-- 1 okfn okfn 0 2011-06-14 22:07 db-20110614-2207.sql -rw-r--r-- 1 okfn okfn 0 2011-06-14 22:08 db-20110614-2208.sql -rw-r--r-- 1 okfn okfn 0 2011-06-14 22:09 db-20110614-2209.sql -rw-r--r-- 1 okfn okfn 0 2011-06-14 22:10 db-20110614-2210.sql -rw-r--r-- 1 okfn okfn 0 2011-06-14 22:11 db-20110614-2211.sql -rw-r--r-- 1 okfn okfn 0 2011-06-14 22:12 db-20110614-2212.sql -rw-r--r-- 1 okfn okfn 0 2011-06-14 22:13 db-20110614-2213.sql -rw-r--r-- 1 okfn okfn 0 2011-06-14 22:14 db-20110614-2214.sql -rw-r--r-- 1 okfn okfn 0 2011-06-14 22:15 db-20110614-2215.sql -rw-r--r-- 1 okfn okfn 0 2011-06-14 22:16 db-20110614-2216.sql -rw-r--r-- 1 okfn okfn 0 2011-06-14 22:17 db-20110614-2217.sql -rw-r--r-- 1 okfn okfn 0 2011-06-14 22:18 db-20110614-2218.sql -rw-r--r-- 1 okfn okfn 0 2011-06-14 22:19 db-20110614-2219.sql -rw-r--r-- 1 okfn okfn 0 2011-06-14 22:20 db-20110614-2220.sql -rw-r--r-- 1 okfn okfn 0 2011-06-14 22:21 db-20110614-2221.sql -rw-r--r-- 1 okfn okfn 0 2011-06-14 22:22 db-20110614-2222.sql -rw-r--r-- 1 okfn okfn 0 2011-06-14 22:23 db-20110614-2223.sql -rw-r--r-- 1 okfn okfn 0 2011-06-14 22:24 db-20110614-2224.sql -rw-r--r-- 1 okfn okfn 0 2011-06-14 22:25 db-20110614-2225.sql -rw-r--r-- 1 okfn okfn 0 2011-06-14 
22:26 db-20110614-2226.sql -rw-r--r-- 1 okfn okfn 0 2011-06-14 22:27 db-20110614-2227.sql -rw-r--r-- 1 okfn okfn 0 2011-06-14 22:28 db-20110614-2228.sql -rw-r--r-- 1 okfn okfn 0 2011-06-14 22:29 db-20110614-2229.sql -rw-r--r-- 1 okfn okfn 0 2011-06-14 22:30 db-20110614-2230.sql -rw-r--r-- 1 okfn okfn 0 2011-06-14 22:31 db-20110614-2231.sql -rw-r--r-- 1 okfn okfn 0 2011-06-14 22:32 db-20110614-2232.sql -rw-r--r-- 1 okfn okfn 0 2011-06-14 22:33 db-20110614-2233.sql -rw-r--r-- 1 okfn okfn 0 2011-06-14 22:34 db-20110614-2234.sql -rw-r--r-- 1 okfn okfn 0 2011-06-14 22:35 db-20110614-2235.sql -rw-r--r-- 1 okfn okfn 0 2011-06-14 22:36 db-20110614-2236.sql -rw-r--r-- 1 okfn okfn 0 2011-06-14 22:37 db-20110614-2237.sql -rw-r--r-- 1 okfn okfn 0 2011-06-14 22:38 db-20110614-2238.sql -rw-r--r-- 1 okfn okfn 0 2011-06-14 22:39 db-20110614-2239.sql -rw-r--r-- 1 okfn okfn 0 2011-06-14 22:40 db-20110614-2240.sql -rw-r--r-- 1 okfn okfn 0 2011-06-14 22:41 db-20110614-2241.sql -rw-r--r-- 1 okfn okfn 0 2011-06-14 22:42 db-20110614-2242.sql -rw-r--r-- 1 okfn okfn 0 2011-06-14 22:43 db-20110614-2243.sql -rw-r--r-- 1 okfn okfn 0 2011-06-14 22:44 db-20110614-2244.sql -rw-r--r-- 1 okfn okfn 0 2011-06-14 22:45 db-20110614-2245.sql -rw-r--r-- 1 okfn okfn 0 2011-06-14 22:46 db-20110614-2246.sql -rw-r--r-- 1 okfn okfn 0 2011-06-14 22:47 db-20110614-2247.sql -rw-r--r-- 1 okfn okfn 0 2011-06-14 22:48 db-20110614-2248.sql -rw-r--r-- 1 okfn okfn 0 2011-06-14 22:49 db-20110614-2249.sql -rw-r--r-- 1 okfn okfn 0 2011-06-14 22:50 db-20110614-2250.sql -rw-r--r-- 1 okfn okfn 0 2011-06-14 22:51 db-20110614-2251.sql -rw-r--r-- 1 okfn okfn 0 2011-06-14 22:52 db-20110614-2252.sql -rw-r--r-- 1 okfn okfn 0 2011-06-14 22:53 db-20110614-2253.sql -rw-r--r-- 1 okfn okfn 0 2011-06-14 22:54 db-20110614-2254.sql -rw-r--r-- 1 okfn okfn 0 2011-06-14 22:55 db-20110614-2255.sql -rw-r--r-- 1 okfn okfn 0 2011-06-14 22:56 db-20110614-2256.sql -rw-r--r-- 1 okfn okfn 0 2011-06-14 22:57 db-20110614-2257.sql -rw-r--r-- 1 okfn okfn 0 2011-06-14 22:58 db-20110614-2258.sql -rw-r--r-- 1 okfn okfn 0 2011-06-14 22:59 db-20110614-2259.sql -rw-r--r-- 1 okfn okfn 0 2011-06-15 00:00 db-20110615-0000.sql -rw-r--r-- 1 okfn okfn 0 2011-06-15 00:01 db-20110615-0001.sql -rw-r--r-- 1 okfn okfn 0 2011-06-15 00:02 db-20110615-0002.sql -rw-r--r-- 1 okfn okfn 0 2011-06-15 00:03 db-20110615-0003.sql -rw-r--r-- 1 okfn okfn 0 2011-06-15 00:04 db-20110615-0004.sql -rw-r--r-- 1 okfn okfn 0 2011-06-15 00:05 db-20110615-0005.sql -rw-r--r-- 1 okfn okfn 0 2011-06-15 00:06 db-20110615-0006.sql -rw-r--r-- 1 okfn okfn 0 2011-06-15 00:07 db-20110615-0007.sql -rw-r--r-- 1 okfn okfn 0 2011-06-15 00:08 db-20110615-0008.sql -rw-r--r-- 1 okfn okfn 0 2011-06-15 00:09 db-20110615-0009.sql -rw-r--r-- 1 okfn okfn 0 2011-06-15 00:10 db-20110615-0010.sql -rw-r--r-- 1 okfn okfn 0 2011-06-15 00:11 db-20110615-0011.sql -rw-r--r-- 1 okfn okfn 0 2011-06-15 00:12 db-20110615-0012.sql -rw-r--r-- 1 okfn okfn 0 2011-06-15 00:13 db-20110615-0013.sql -rw-r--r-- 1 okfn okfn 0 2011-06-15 00:14 db-20110615-0014.sql -rw-r--r-- 1 okfn okfn 0 2011-06-15 00:15 db-20110615-0015.sql -rw-r--r-- 1 okfn okfn 0 2011-06-15 00:16 db-20110615-0016.sql -rw-r--r-- 1 okfn okfn 0 2011-06-15 00:17 db-20110615-0017.sql -rw-r--r-- 1 okfn okfn 0 2011-06-15 00:18 db-20110615-0018.sql -rw-r--r-- 1 okfn okfn 0 2011-06-15 00:19 db-20110615-0019.sql -rw-r--r-- 1 okfn okfn 0 2011-06-15 00:20 db-20110615-0020.sql -rw-r--r-- 1 okfn okfn 0 2011-06-15 00:21 db-20110615-0021.sql -rw-r--r-- 1 okfn okfn 0 2011-06-15 00:22 db-20110615-0022.sql 
-rw-r--r-- 1 okfn okfn 0 2011-06-15 00:23 db-20110615-0023.sql -rw-r--r-- 1 okfn okfn 0 2011-06-15 00:24 db-20110615-0024.sql -rw-r--r-- 1 okfn okfn 0 2011-06-15 00:25 db-20110615-0025.sql -rw-r--r-- 1 okfn okfn 0 2011-06-15 00:26 db-20110615-0026.sql -rw-r--r-- 1 okfn okfn 0 2011-06-15 00:27 db-20110615-0027.sql -rw-r--r-- 1 okfn okfn 0 2011-06-15 00:28 db-20110615-0028.sql -rw-r--r-- 1 okfn okfn 0 2011-06-15 00:29 db-20110615-0029.sql -rw-r--r-- 1 okfn okfn 0 2011-06-15 00:30 db-20110615-0030.sql -rw-r--r-- 1 okfn okfn 0 2011-06-15 00:31 db-20110615-0031.sql -rw-r--r-- 1 okfn okfn 0 2011-06-15 00:32 db-20110615-0032.sql -rw-r--r-- 1 okfn okfn 0 2011-06-15 00:33 db-20110615-0033.sql -rw-r--r-- 1 okfn okfn 0 2011-06-15 00:34 db-20110615-0034.sql -rw-r--r-- 1 okfn okfn 0 2011-06-15 00:35 db-20110615-0035.sql -rw-r--r-- 1 okfn okfn 0 2011-06-15 00:36 db-20110615-0036.sql -rw-r--r-- 1 okfn okfn 0 2011-06-15 00:37 db-20110615-0037.sql -rw-r--r-- 1 okfn okfn 0 2011-06-15 00:38 db-20110615-0038.sql -rw-r--r-- 1 okfn okfn 0 2011-06-15 00:39 db-20110615-0039.sql -rw-r--r-- 1 okfn okfn 0 2011-06-15 00:40 db-20110615-0040.sql -rw-r--r-- 1 okfn okfn 0 2011-06-15 00:41 db-20110615-0041.sql -rw-r--r-- 1 okfn okfn 0 2011-06-15 00:42 db-20110615-0042.sql -rw-r--r-- 1 okfn okfn 0 2011-06-15 00:43 db-20110615-0043.sql -rw-r--r-- 1 okfn okfn 0 2011-06-15 00:44 db-20110615-0044.sql -rw-r--r-- 1 okfn okfn 0 2011-06-15 00:45 db-20110615-0045.sql -rw-r--r-- 1 okfn okfn 0 2011-06-15 00:46 db-20110615-0046.sql -rw-r--r-- 1 okfn okfn 483144447 2011-06-15 10:00 db-20110615-1000.sql -rw-r--r-- 1 okfn okfn 482136064 2011-06-15 10:07 db-20110615-1007.sql -rw-r--r-- 1 okfn okfn 483144447 2011-06-15 10:50 db-20110615-1050.sql -rw-r--r-- 1 okfn okfn 483053568 2011-06-15 10:51 db-20110615-1051.sql }}} {{{ okfn@s025:~/var/srvc/publicdata.eu$ du -s -m /home/okfn/var/srvc/publicdata.eu/data/sessions/container_file/* 117 /home/okfn/var/srvc/publicdata.eu/data/sessions/container_file/0 116 /home/okfn/var/srvc/publicdata.eu/data/sessions/container_file/1 117 /home/okfn/var/srvc/publicdata.eu/data/sessions/container_file/2 116 /home/okfn/var/srvc/publicdata.eu/data/sessions/container_file/3 116 /home/okfn/var/srvc/publicdata.eu/data/sessions/container_file/4 116 /home/okfn/var/srvc/publicdata.eu/data/sessions/container_file/5 116 /home/okfn/var/srvc/publicdata.eu/data/sessions/container_file/6 117 /home/okfn/var/srvc/publicdata.eu/data/sessions/container_file/7 116 /home/okfn/var/srvc/publicdata.eu/data/sessions/container_file/8 117 /home/okfn/var/srvc/publicdata.eu/data/sessions/container_file/9 117 /home/okfn/var/srvc/publicdata.eu/data/sessions/container_file/a 116 /home/okfn/var/srvc/publicdata.eu/data/sessions/container_file/b 116 /home/okfn/var/srvc/publicdata.eu/data/sessions/container_file/c 116 /home/okfn/var/srvc/publicdata.eu/data/sessions/container_file/d 117 /home/okfn/var/srvc/publicdata.eu/data/sessions/container_file/e 116 /home/okfn/var/srvc/publicdata.eu/data/sessions/container_file/f }}}
#1447 1324374324000000 dread Once again, eu25 ran out of space today.
#1447 1326279495000000 dread eu25 ran out of space again this weekend and eu8 (at/it/us_co) today.
#1447 1326296987000000 nils.toedtmann For the time being, I created a cron script [https://bitbucket.org/okfn/sysadmin/src/default/etc/cron/remove_old_files remove_old_files]. You could just copy it to /etc/cron.daily/, but I recommend not running it as root: if it's misconfigured, it could wipe the system! So better copy it to /home/okfn/sbin/ (not /home/okfn/bin/, which is often the sysadmin HG repo) and add it to some unprivileged user's crontab. In most cases the leftover files are owned by user "www-data", so
{{{
$ sudo crontab -e -u www-data
}}}
and then add something like
{{{
37 4 * * * /home/okfn/sbin/remove_old_files
}}}
Don't forget to edit the remove_old_files script itself and list the directories you want cleaned up. This is already done on s008/eu8 and s019/eu19. dread, do you want to do this for s025/eu25 and see how it goes?
----
Todo nils: verify tomorrow on s019 that it worked properly, e.g. this should show only a few files:
{{{
find /var/lib/ckan/nederland/data/sessions/ -type f -amin +$((7*24*60)) -ls
}}}
#1447 1330082636000000 nils.toedtmann I had forgotten to check on s019 how well my cleanup script was working (and now s019 is gone), but at least it didn't destroy the machine :-) You might want to give it a try on s025/PDEU. (Tell me if you want me to do that).
#1447 1330082808000000 dread Yes please Nils!
#1447 1330089662000000 nils.toedtmann OK, I fixed a bug in my script and refactored it so that it can now be dropped into /etc/cron.daily/ while still deleting as an unprivileged user. It is now running on s025, removing everything older than 7 days. Please verify in 9 days or so that it's working. Consider adding [https://bitbucket.org/okfn/sysadmin/src/default/etc/cron/remove_old_files this cron job] to the ckan deb package, e.g. as "/etc/cron.daily/ckan-cleanup".
#1447 1332510790000000 nils.toedtmann Just checked s025 (which is deprecated now), and it looks like my script is working fine - nothing older than a week in /home/okfn/var/srvc/publicdata.eu/data/sessions/. We should activate this script on other hosts as well, e.g. s055/thedatahub.
#1447 1332510913000000 nils.toedtmann Just to add: the remove_old_files script is only a workaround, not a fix. CKAN should clean up after itself. Feel free to re-open this ticket for a proper solution ;-)
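As a rough illustration of what "cleaning up after itself" could mean here, a sketch in Python (assumptions: Beaker file-backed sessions under a data/sessions directory and a one-week retention matching the cron script; this is not the remove_old_files script itself):
{{{
import os
import time

MAX_AGE = 7 * 24 * 60 * 60  # one week, matching the cron script's retention

def remove_old_sessions(session_dir, max_age=MAX_AGE):
    """Delete Beaker session files not accessed for max_age seconds."""
    cutoff = time.time() - max_age
    for dirpath, dirnames, filenames in os.walk(session_dir):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.getatime(path) < cutoff:
                os.remove(path)

# e.g. remove_old_sessions('/home/okfn/var/srvc/publicdata.eu/data/sessions')
}}}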
#1447 1332511544000000 rgrp @kindly: hope ok to assign to you (maybe just for review and thought on who would be best placed to look at ...)
#1447 1332519029000000 nils.toedtmann Ticket http://trac.okfn.org/ticket/1222 tracks the effort to push the clean-up script onto CKAN hosts.
#1447 1340726283000000 nils.toedtmann This is becoming painful for the sysadmins. Please fix.
#1447 1340727330000000 dread BTW on DGU I have set it up to use memcached for these sessions (v. easy) and I think it solves the problem.
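For reference, a minimal sketch of what memcached-backed sessions amount to, assuming a standard Beaker/Pylons stack and a local memcached on the default port (the option names are Beaker's generic ones; the exact keys used in the DGU config may differ):
{{{
from beaker.middleware import SessionMiddleware

def wsgi_app(environ, start_response):
    # stand-in for the real CKAN/Pylons WSGI app
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return ['ok']

session_opts = {
    'session.type': 'ext:memcached',          # sessions live in memcached, not on disk
    'session.url': '127.0.0.1:11211',         # memcached server address
    'session.lock_dir': '/tmp/beaker_locks',  # Beaker still wants a lock dir on disk
}
app = SessionMiddleware(wsgi_app, session_opts)
}}}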
#1448 1325774155000000 rgrp @kindly: Any details ;-) (even when this happened, approach taken)?
#1449 1321465348000000 johnglover Basic change done in branch feature-1449-resource-listing. Currently showing:
 * resource name (clickable) or (none)
 * resource description
 * resource format
 * resource last modified (if exists)
#1450 1321875986000000 johnglover Still working on this, moving to new sprint
#1450 1323109665000000 zephod johnglover architected this and I've added aesthetic tweaks. Now pushed on branch: feature-1450-dataset-view. One bug ticket remains (#1517) before the work can be merged.
#1450 1330019974000000 dread In CKAN 1.6.
#1451 1321876014000000 johnglover Haven't looked at this yet, moving to new sprint
#1451 1322567112000000 johnglover This was taken from the UX pad. I'm looking at it now to make sure that it's working and that we can use it on the new dataset view page. Is it really a good idea to move it to core though? There was no reason given on the pad, so I'm not sure why it should be moved.
#1451 1322568784000000 dread With the cron job setup, maybe this is best left as an extension. The core CKAN was designed to be mean and lean.
#1451 1323165781000000 johnglover Scripts and templates updated for CKAN 1.5.1. Waiting for Tom to finish the dataset view and resource view updates; then we will discuss the best place to put the download stats on the page.
#1451 1324288407000000 johnglover Need to add analytics javascript to new resource view page, will do under ticket #1519.
#1451 1324401792000000 johnglover This ticket again brought up the need for inversion of control when writing to CKAN templates (main templates should really ask extensions for data or provide hooks, rather than having extensions overwrite sections using genshi/jquery). Have decided to defer this issue for now, as it will be looked at when making our whole extension system more robust early next year. Related point:
 * We should provide a dashboard area (again via interfaces/hooks) that groups together report data from related extensions, so stats and googleanalytics info should be available from a similar area / url. This is also deferred until the extension overhaul/refactor.
Other outstanding issues:
 * Location / style of the download count can be improved
 * Download count should be shown on resource pages
These should be looked at by whoever takes over frontend design.
#1452 1321276189000000 dread Seb Bacon: > I agree.  It is quite standard for people to have their browser > language as en-US in many countries. > Yes, geo-location is likely to be better if you want to automate it. > > http://www.maxmind.com/app/geoip_country > > But I wouldn't automatically detect it at all.  Just have a default > language for each site, as you suggest.
#1452 1340190587000000 ross I believe this has been addressed by Toby.
#1453 1320947955000000 icmurray Added the restriction of not allowing the double quote character, '"', as well as commas, as this simplifies any use of quoting multiple words to mean a single tag name. For example, this simplifies the use of quotes when identifying tags in internal markdown links:
{{{
tag:"multiple word tag name"
}}}
A possible alternative is to allow escaping, such as:
{{{
tag:"something about \"Ian\""
}}}
But I think the compromise is a better solution than allowing escaping, as it's simpler, and this may crop up elsewhere.
#1453 1321548452000000 icmurray The allowable characters in a tag name have changed to "unicode alphanumeric plus simple punctuation". This means:
 - alphanumeric (inc. foreign characters)
 - [ .-_]
The completed feature is in the [https://github.com/okfn/ckan/tree/feature-1453-flexible-tag-names feature-1453-flexible-tag-names branch]. Awaiting a code review.
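A rough sketch of what that character rule amounts to (illustrative only; is_valid_tag_name and the exact regex are assumptions, not the validator in the branch):
{{{
import re

# Unicode letters and digits, plus space, dot, hyphen and underscore.
# Double quotes and commas are excluded by construction, which keeps the
# tag:"multiple word tag name" quoting from #1453 simple.
TAG_NAME_RE = re.compile(r'^[\w .-]+$', re.UNICODE)

def is_valid_tag_name(name):
    return bool(TAG_NAME_RE.match(name))

# e.g. is_valid_tag_name(u'köln-data') is True, is_valid_tag_name(u'bad,tag') is False
}}}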
#1453 1321635178000000 dread Code review:
 * basically - really excellent code and very thorough :-)
 * links should have %20 rather than spaces (tests/misc/test_format_text.py:61)
 * also check unicode chars encoding in urls (tests/misc/test_format_text.py:115)
 * also check searching for the tag with this encoding (ckan/tests/functional/api/model/test_tag.py:35)
 * we follow the PEP8 coding style, which I interpret to mean not having blank lines after a function definition. But whichever way, we're not consistent from file to file, although we should be within each file. e.g. ckan/tests/forms/test_package.py:12.
 * moo package problem - need to ensure the test works on its own and when run as part of the suite, i.e. independent of whether moo exists. tests/functional/api/test_action.py
 * best to make tag search case insensitive - see ckan/tests/functional/api/model
 * It's worth keeping the old test in addition to your modified one - because a query for just {q:''} will return both packages too. ckan/tests/functional/api/test_package_search.py:203
 * Let's add an example of tag search with quotes in /doc/api.rst:337
 * Please put imports at the top of the file, unless there's a good reason. ckan/tests/functional/api/test_package_search.py:296
 * Is the old test not needed any more? It seems sufficiently different from the test you changed it to, so can we include both? ckan/tests/functional/api/test_package_search.py:295
#1453 1321965613000000 icmurray Updated code now in feature-1453-flexible-tag-names branch. (Also, deleted the ian-review branch.)
#1453 1322581830000000 dread I believe this is finished now. This was merged into master in cset:c0aaa31c4b7ded54d and headed for release 1.5.2.
#1453 1329395697000000 dread This has gone into release 1.6.
#1454 1320849527000000 dread Done in cset:138c5daf7765 heading for release 1.5.1.
#1455 1321872633000000 dread John to look at.
#1455 1322491506000000 johnglover Fixed - commit: https://github.com/okfn/ckan/commit/05b675a4314ad269c6e6a095d57e3f2a21e771eb Note:
 - Includes a small change to the Solr schema file.
 - The search index will need to be rebuilt for the changes to take effect.
#1455 1324474466000000 dread Fix has gone into CKAN 1.5.1
#1456 1320930770000000 dread I see you've done this in cset:939e0e0809c1. Close now? BTW Take a look at the first suggestion from #1423 too, whilst in the area.
#1456 1324472178000000 dread Went into 1.5.1 release
#1458 1340632821000000 icmurray Re-assigned to amercader. Kept on ckan-future.
#1458 1340632932000000 icmurray Sorry, I shouldn't have touched this. I pulled it from the wrong milestone.
#1460 1328001114000000 rgrp @dread: re-assigning to you (at least to review). Would be really good to have this closed out asap.
#1460 1328638393000000 dread Have not looked at this yet. Bumping to this sprint.
#1461 1321359503000000 dread Fixed in cset:6ea5d3c50444 so all functions supply api key.
#1462 1323172211000000 thejimmyg Let's work together to fix the packaging aspect too.
#1462 1326283656000000 rgrp @amercader: update? Could you also either close or move sprint (or to backlog as appropriate).
#1462 1326284357000000 amercader Closing as this has been fixed and deployed. @thejimmyg Not sure if there are still issues regarding packaging. Feel free to create a specific ticket for this if we need to work on it.
#1462 1330083671000000 dread This went into CKAN 1.5.1.
#1463 1321874259000000 johnglover This works with the new Celery feature that will be in 1.5.1 (which should be released in this sprint). So, will not update this old version of QA for 1.5, people should use the new version (on okfn Github) after 1.5.1 is released.
#1464 1323169763000000 thejimmyg As part of the queue upgrade we'll also fix #1064 which explains that the current queue implementation is over-engineered.
#1464 1323710659000000 dread Pls update status and milestone
#1464 1328529042000000 rgrp Closing as wontfix as no further info and seems unimportant.
#1465 1323710679000000 dread Pls update status and milestone.
#1466 1323710030000000 dread Is this still critical, James?
#1466 1324298713000000 rgrp Moving to v1.6 as no specific milestone for this.
#1466 1328529062000000 rgrp Moving to backlog.
#1467 1321872507000000 dread Leaving this to James to schedule
#1467 1325462196000000 rgrp Moving to current sprint as this sprint is now long finished. @jimmyg: please close, defer, update as necessary!
#1467 1326104869000000 thejimmyg The publisher issue seems to be resolved now, although during investigation I also found these issues:
 * 9 of the records don't have a 'published by' and I wondered why
 * Lots of them are state=deleted (so do we really want to include these?)
 * We're still showing the deprecated agency field
 * Many of the departments are blank
Pawel is not available to work on these at the moment anyway, so let's pick them up as part of the disintegration work to migrate to CKAN. Marking the main ticket as "worksforme" since it does now.
#1467 1326119954000000 dread I believe that the "publisher issue" that James alludes to is that the dump doesn't contain the 'parent publisher' field that is generated in the DGU system on the Drupal side. This information will be stored following the Groups Refactor #1477 and should be added to the dump at this point. Excluding Datasets that are state=deleted is a good idea. I've split that off into #1623 The other issues mentioned are simply data quality - the same whether viewing the dump or elsewhere.
#1467 1326120319000000 dread > 'parent publisher' Sorry, I meant 'parent department'
#1468 1321875777000000 johnglover New ticket, moving over to new sprint
#1468 1322495417000000 johnglover Done - commit: https://github.com/okfn/ckan/commit/7789e85c973c9e085f623486bced6be14f25678f rebuild can now take an optional package name/id (single package to be consistent with other paster commands, not a list of packages)
#1468 1322591997000000 dread We originally talked about a command-line interface for deleting packages. I've done this here: #1499. Note: you can update the search index from a paster shell, simply by doing this before running your commands that edit packages:
{{{
from ckan import plugins
plugins.load('synchronous_search')
}}}
#1469 1329760150000000 amercader This is mostly done (current form is http://i.imgur.com/zmfc5.png). Still some tests missing and a little bit of cleanup and documentation required.