{23} Trac comments (3729 matches)

Results (3201 - 3300 of 3729)

Ticket Posixtime Author Newvalue
#324 1278599927000000 dread Done in changesets leading up to cset:ca565562129d.
#323 1277722845000000 dread Done but not pushed yet. Took 3.5 days.
#322 1274773856000000 pudo This looks very reasonable. Maybe we should have a webhooks client as a simple demo for this?
#322 1274807530000000 pudo Replying to [comment:2 pudo]: > This looks very reasonable. Maybe we should have a webhooks client as a simple demo for this? c.f. #327
#322 1277722821000000 dread Done but not pushed. Took 3.5 days.
#321 1291831399000000 anonymous This has now been superseded with this proposal: #787
#320 1279105983000000 dread site_title added in cset:b4c0e0a5630d site_logo is changeable in one place in the template so not essential
#320 1279130535000000 dread Took 1.5h
#319 1274366882000000 dread Fixed in cset:a1ef783d27d2 on default and metastable.
#318 1274377385000000 wwaites Some more datapoints from Leigh Dodds of Talis:

I'm still having no joy with this I'm afraid. I'm test-parsing the data locally using the TDB command-line tools, specifically tdbcheck, which will parse the data and generate warnings/exceptions. This uses the same parsing code, data and URI validation code as we're using on the Platform.

Currently it's giving me warnings for invalid lexical values for dates, e.g.:

Lexical not valid for datatype: "2008"^^http://www.w3.org/2001/XMLSchema#date

While these aren't a major issue, looking at some of the data suggests that there are more underlying data problems that need checking and fixing up, e.g.:

Lexical not valid for datatype: "n/a"^^http://www.w3.org/2001/XMLSchema#date
Lexical not valid for datatype: "27/04/2006 13:56"^^http://www.w3.org/2001/XMLSchema#date
Lexical not valid for datatype: "Real time calculation"^^http://www.w3.org/2001/XMLSchema#date
Lexical not valid for datatype: "varies by country"^^http://www.w3.org/2001/XMLSchema#date

And there are still some invalid URIs, e.g.:

<https://mqi.ic.nhs.uk/IndicatorDataView.aspx?query=NRLS%3&ref=3.02.16>
Code: 30/ILLEGAL_PERCENT_ENCODING in QUERY: The host component a percent occurred without two following hexadecimal digits.

Can I suggest you try running the converted data through tdbcheck to iron out any problems? Then I can push it into the Platform.
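The invalid lexical values flagged above can be caught mechanically. A minimal sketch (my illustration, not CKAN or Talis code) of an xsd:date lexical check in Python 3, deliberately simplified — it ignores negative years and the optional timezone suffix that xsd:date permits:

```python
import re
from datetime import date

# Hypothetical helper, not CKAN code: check whether a string is a valid
# (simplified) lexical form for xsd:date, i.e. YYYY-MM-DD naming a real
# calendar date.  Negative years and timezone suffixes are ignored.
XSD_DATE_RE = re.compile(r"^\d{4}-\d{2}-\d{2}$")

def is_valid_xsd_date(value):
    if not XSD_DATE_RE.match(value):
        return False
    try:
        year, month, day = value.split("-")
        date(int(year), int(month), int(day))  # rejects e.g. month 13
        return True
    except ValueError:
        return False

# Every value Talis flagged fails the check:
for bad in ('2008', 'n/a', '27/04/2006 13:56', 'varies by country'):
    assert not is_valid_xsd_date(bad)
```

A check like this could sit in the form validation dread mentions below, or in the data-quality scoring.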
#318 1275320677000000 dread We can't change any of the metadata without permission from the various departments who supplied it. I think "Don't shoot the messenger" is apt here. Adding this to the form validation isn't going to change any of the existing data. I think this is better off in the data quality scoring.
#318 1276271343000000 wwaites url validation reputed to be here: http://www.livinglogic.de/Python/url/Howto.html
#318 1276438793000000 wwaites Some good news: ll.url seems to take bad urls and make them into good urls, viz:

{{{
In [1]: from ll import url
In [2]: print url.URL("https://mqi.ic.nhs.uk/IndicatorDataView.aspx?query=NRLS%3&ref=3.02.16")
------> print(url.URL("https://mqi.ic.nhs.uk/IndicatorDataView.aspx?query=NRLS%3&ref=3.02.16"))
/Users/ww/Work/OKF/ckanrdf/lib/python2.6/site-packages/ll/url.py:2358: UserWarning: truncated escape at position 4
  value = _unescape(namevalue[1].replace("+", " "))
https://mqi.ic.nhs.uk/IndicatorDataView.aspx?query=NRLS%253&ref=3%2E02%2E16
}}}
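For comparison, a rough Python 3 stdlib sketch (my alternative, not the ll.url approach used in the ticket) that achieves a similar repair: split the URL, re-encode the query pairs, and reassemble, so the stray `%3` becomes valid percent-encoding:

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Sketch only -- not the ll.url call shown above.
def normalise_url(url):
    parts = urlsplit(url)
    # parse_qsl passes truncated escapes through as literal text;
    # urlencode then re-escapes everything cleanly.
    query = urlencode(parse_qsl(parts.query, keep_blank_values=True))
    return urlunsplit((parts.scheme, parts.netloc, parts.path, query,
                       parts.fragment))

fixed = normalise_url(
    "https://mqi.ic.nhs.uk/IndicatorDataView.aspx?query=NRLS%3&ref=3.02.16")
```

Unlike ll.url, this only touches the query string, which is where the reported breakage was.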
#318 1276438832000000 wwaites Also FYI, getting ll.url is done like so: {{{ pip install ll-xist }}}
#318 1276438907000000 wwaites I've updated ckanrdf to strip out datatypes and use this ll.url on external references, so that should be sufficient to hold off Talis. Still need to work particularly on validating dates, though...
#318 1280737620000000 rgrp Important but low priority according to CO, so bumping into next milestone (v1.2). NB: did not seem to be able to update the milestone in the trac interface! (Perhaps due to agilo stuff?)
#318 1283179768000000 wwaites CO may not realise the implications when they said it was low priority. The implication of this lack of validation is that it is impossible to generate valid URIs in the RDF which means it cannot be imported by Talis. So until there is a solution to this, no RDF catalog.
#318 1296340768000000 rgrp Still not sure what the priority is so moving to awaiting triage.
#318 1296467308000000 pudo This will be implicit in #852, thus not building something specific for it now.
#318 1296482049000000 wwaites We still require form validation to check URIs. They are not free-form strings. This is not the same as #852, nor necessarily included in it.
#318 1311176497000000 thejimmyg Assigning to John so that he can see whether the QA code correctly flags these kinds of problems. If it does, we can close this ticket because although the API will serve invalid URLs, the publishers will be notified to clean up.
#318 1311770683000000 johnglover The QA code should identify invalid URLs. Resources with invalid urls will have an 'openness_score' of 0 and an 'openness_score_reason' of 'Invalid url scheme' or 'invalid URL'.
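An illustrative sketch of the check described above (the real QA code lives in ckanext-qa; the field names come from the comment, but the scheme whitelist and control flow are my assumptions):

```python
from urllib.parse import urlsplit

# Illustrative only -- not the actual ckanext-qa implementation.
def score_resource(url):
    parts = urlsplit(url)
    if parts.scheme not in ('http', 'https', 'ftp'):
        return {'openness_score': 0,
                'openness_score_reason': 'Invalid url scheme'}
    if not parts.netloc:
        return {'openness_score': 0, 'openness_score_reason': 'invalid URL'}
    return None  # passes this check; real scoring continues elsewhere

assert score_resource('htp://example.com/x')['openness_score'] == 0
assert score_resource('http://example.com/data.csv') is None
```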
#318 1349778662000000 dread Here's a real example - one of many from MOD {{{http://www.dasa.mod.uk/applications/newWeb/www/index.php?page=48&thiscontent=180&date=2011-05-26&pubType=1&PublishTime=09:30:00&from=home&tabOption=1}}} Browsers accept colons and slashes happily, which is the main usage of our links. The URL looks better with the colons and slashes, rather than the encoded version. The average departmental user doesn't understand that the reason to encode them is for some academic RFC and RDF which is not "liberal in what it accepts". Since the RDF tool has a satisfactory way to encode links, this problem is essentially solved. Therefore I'm changing ckanext-archiver to accept these unencoded links, I'm afraid.
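The lenient-encoding stance above can be sketched as follows (a hypothetical illustration, not the actual ckanext-archiver change): percent-encode a URL for strict consumers while leaving the ':' and '/' — and the existing query separators — that browsers happily accept untouched:

```python
from urllib.parse import quote

# Hypothetical sketch, not the real ckanext-archiver code.  The safe set
# keeps exactly the characters the comment above argues should survive.
def encode_leniently(url):
    return quote(url, safe=":/?&=#%")

url = ("http://www.dasa.mod.uk/applications/newWeb/www/index.php"
       "?page=48&thiscontent=180&date=2011-05-26&pubType=1"
       "&PublishTime=09:30:00&from=home&tabOption=1")
assert encode_leniently(url) == url  # the MOD example needs no escaping
```

Genuinely unsafe characters (e.g. spaces) would still be escaped, so the approach is lenient rather than lawless.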
#317 1279005278000000 pudo this has been in for a while now but still needs to be extended to include the indexing of entities (ckan.model.search_index)
#317 1279286041000000 pudo should be done after refactoring the search functions.
#316 1274366801000000 dread This exception occurs for ckan.net with just this one character: http://ckan.net/package/search?q=%C2 (you can wget it), but I can't recreate it on my machine. Maybe it's a version issue. The client that is making all these crazy calls is googlebot.
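For context (my note, not from the ticket): `%C2` decodes to the single byte 0xC2, which is the lead byte of a two-byte UTF-8 sequence, so decoding the truncated query string fails:

```python
from urllib.parse import unquote_to_bytes

raw = unquote_to_bytes("%C2")   # the lone byte 0xC2
try:
    raw.decode("utf-8")         # 0xC2 is an incomplete two-byte sequence
    decoded_ok = True
except UnicodeDecodeError:
    decoded_ok = False
assert not decoded_ok

# A defensive handler can substitute U+FFFD instead of raising:
assert raw.decode("utf-8", errors="replace") == "\ufffd"
```

Which would explain why only crawlers sending mangled query strings, like googlebot here, trigger it.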
#316 1291831177000000 thejimmyg I've just tested this on ckan.net and it gives a sensible message: There was an error while searching. Please try another search term.
#315 1275846764000000 dread Fixed in cset:61548ced8b7d - Quote marks correctly read in for data4nr data, which makes this problem record ok (which opened in openoffice fine incidentally). Fields in package are now dumped in correct order to make it clearer. Not changed resource serialisation - if you want tidy json, then use the json dump. No real call for half-way house dump.
#313 1275404524000000 johnbywater Fixed in changeset 06c949266644.
#312 1311176173000000 thejimmyg This ticket is more than 6 months old so closing in line with our current ticketing policy.
#311 1274282065000000 rgrp Resolved (sort of) in cset:489007a10bb9. This was a migration issue. Tracked this down to the fact that on ckan.net we have:

"package_resource_revision_pkey" PRIMARY KEY, btree (id)

when it should be:

"package_resource_revision_pkey" PRIMARY KEY, btree (id, revision_id)

Looking in browser:ckan/migration/versions/012_add_resources.py we find:

{{{
Column('revision_id', UnicodeText, ForeignKey('revision.id')),
#NB revision_id should have been primary_key too (joint with id)
}}}

How come this was not corrected here, or at least noted for the upgrade of ckan.net? I have now fixed this so that others doing the migration (at least with v1.0) will end up with correct code. I have also fixed the issue on ckan.net by manual SQL.
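To see why the joint key matters, here is a stand-in demo using stdlib sqlite3 (ckan.net runs Postgres; table and value names are illustrative): with PRIMARY KEY (id) alone, storing a second revision of the same resource violates uniqueness, while (id, revision_id) allows it:

```python
import sqlite3

db = sqlite3.connect(":memory:")

# Corrected shape: composite key, so multiple revisions of one resource fit.
db.execute("""CREATE TABLE package_resource_revision (
                  id TEXT, revision_id TEXT,
                  PRIMARY KEY (id, revision_id))""")
db.execute("INSERT INTO package_resource_revision VALUES ('res-1', 'rev-1')")
db.execute("INSERT INTO package_resource_revision VALUES ('res-1', 'rev-2')")

# Buggy migration shape: id alone is the key, second revision is rejected.
db.execute("CREATE TABLE buggy (id TEXT PRIMARY KEY, revision_id TEXT)")
db.execute("INSERT INTO buggy VALUES ('res-1', 'rev-1')")
try:
    db.execute("INSERT INTO buggy VALUES ('res-1', 'rev-2')")
    duplicate_allowed = True
except sqlite3.IntegrityError:
    duplicate_allowed = False
assert not duplicate_allowed
```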
#310 1279300525000000 dread Fixed in cset:a0acf179355c Cost: 2h
#309 1280743432000000 pudo fixed in cset:1382
#308 1275302577000000 rgrp Duplicate of ticket:234.
#307 1288027815000000 rgrp Done in cset:90e318c3c7dc/datapkg cset:0036b5c505eb/datapkg etc
#306 1318181194000000 rgrp Duplicate of https://github.com/okfn/datapkg/issues/4
#305 1272994804000000 rgrp Closed in cset:8136e7369c0c
#304 1272447296000000 johnbywater Fixed in c4bf92996b8a.
#303 1272454025000000 johnbywater We could also fix up the temporal model.
#303 1272912573000000 dread Package history page now shows revisions for tag, extra and resources. Needs tidying up and adding to REST. Done in cset:dc99df3ab4bd
#303 1272988619000000 dread Done in cset:dc99df3ab4bd cset:beb72a0aa810 cset:96bab1eb53f5 and vdm cset:bb9f97b1c4b0
#302 1272453821000000 johnbywater Fixed in 61c8b3107f0e.
#301 1274832112000000 tim Duplicate of #190, although that ticket has a different implementation: it prefers implementing comments as blog comments.
#301 1281342885000000 rgrp Should not have been closed. ticket:190 is about comments this is about a wiki-like discussion page which is very different.
#301 1340632055000000 icmurray Unassign in order for it to be triaged.
#300 1272384474000000 dread Rufus fixed this in cset:e6e3
#299 1294407099000000 thejimmyg Merging into #896.
#298 1294407080000000 thejimmyg Merging into #896.
#297 1294407051000000 thejimmyg Merging into #896.
#296 1294407032000000 thejimmyg Merging into #896.
#295 1272384758000000 dread Done in cset:18edc4d95f2f. Took: 3h
#294 1291830960000000 thejimmyg Duplicate of #812
#293 1271885457000000 johnbywater Can't reproduce this exception. Have added tests covering adding, removing and updating resources, and it all seems to work.
#293 1271940083000000 johnbywater With the metastable revision of CKAN, the package resource data structure in ckanclient scripts must have all four keys set in the Resource-Dict. Setting 'hash' in the resource data cures this issue with "metastable". The "bad" code was:

{{{
resources.append((res['url'], res['format'], res['description'], res['hash']))
KeyError: 'hash'
}}}

This code was adjusted in revision 40c4fe04038d to default unspecified resource attributes to the empty string. It was changed further in subsequent revisions to use parameters of the Package.add_resource method to default unspecified values. However, the API doc didn't mention the resource hash, although it is required; I have just fixed that in the source (in revision 0f20bfb45d13). http://knowledgeforge.net/ckan/doc/ckan/api.html
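The defaulting pattern described can be sketched like this (my illustration, not the actual revision 40c4fe04038d diff): missing resource attributes fall back to the empty string instead of raising KeyError: 'hash':

```python
# Sketch of the fix's pattern: dict.get with a default instead of
# direct indexing, so an absent 'hash' no longer raises KeyError.
def resource_tuple(res):
    return tuple(res.get(key, '')
                 for key in ('url', 'format', 'description', 'hash'))

resources = []
resources.append(resource_tuple({'url': 'http://example.com/data.csv',
                                 'format': 'CSV',
                                 'description': 'Example'}))
assert resources[0] == ('http://example.com/data.csv', 'CSV', 'Example', '')
```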
#292 1272286005000000 dread Achieved in cset:56b02fda195e (rgrp), cset:95498407d15e and cset:f5af59a3365c. Remaining broken test put in ticket:300.
#291 1273254895000000 dread This seems spurious. options.q is unicode and finds foreign characters fine. The hack has since been taken out.
#289 1271249368000000 dread Done in cset:bf98b63331cf
#288 1271173777000000 dread Fixed in cset:ad64bd0f6073
#287 1270801210000000 dread Done in cset:76560fa09db8 through cset:ea397fc03587
#286 1270723629000000 dread Done in cset:76560fa09db8 and cset:752a634a3095 and error handling in cset:aa021336d64d
#285 1340631923000000 icmurray Unassign to be triaged.
#283 1270715817000000 rgrp Can I *strongly* suggest we just use the existing perfectly good system for flagging stuff called "tags" :) I suggest we agree in the community on a standard set of "meta" tags for this kind of stuff. E.g. I'm already using the "duplicate" tag for marking duplicates (I also add in notes the link to the duplicate package, but that's optional). So I suggest we:

1. Create "reserved" tag prefix "meta"
2. Create the following specific tags (suggestions/comments welcome):
   * meta.duplicate - duplicate of another package. If possible indicate in notes or an extras field (to be decided) what it is a duplicate of
   * meta.spam

Editors can then just visit http://ckan.net/tag/read/meta.spam and work through the list of packages there. If "push" notifications are required as well as "pull" then I suggest this be put in an external service (e.g. rss2email) rather than integrated into CKAN core.
#282 1338206417000000 ross This was discussed with Toby who has a ticket for this same feature because of disqus requirements.
#281 1270723675000000 dread Done in cset:5e9f8ce150c2
#280 1271173769000000 dread Fixed on default in cset:ad64bd0f6073
#279 1272451384000000 johnbywater Fixed in 5b6029c72f9a.
#278 1271173752000000 dread Fixed in cset:ad64bd0f6073 - copes with spaces now.
#277 1294416367000000 thejimmyg My opinion is that having configuration in a database is a bad idea. We are currently considering moving to a system where CKAN is installable using apt-get. Since we're already moving functionality into CKAN extensions, choosing what kind of CKAN you would like would then be as simple as choosing the package to install. Configuring it would just be editing the config file. I don't think this is as relevant as it was 10 months ago. Anyone mind if I change this to wontfix?
#277 1296340723000000 rgrp I think this is useful, but there may be complexities given the non-reloading nature of python apps. Have also converted this to an extension.
#277 1296470458000000 kindly I think generally this is a bad idea. In a few controlled circumstances some configuration is worth changing at runtime; however, looking through the development.ini file, I see hardly anything in there that does not require a restart anyway. It would be good to have some clear examples of things that would be in the extension.
#276 1271250866000000 dread I could not recreate this. I think it only happens for particular packages?
#276 1272276020000000 dread Estimate: 0.5day
#276 1272996237000000 dread Fixed in cset:060e2df72148
#275 1280737701000000 rgrp Also, now if you visit ckan.net/admin/Package you get a 500 error and the following in the logs:

{{{
[Mon Aug 02 08:17:35 2010] [error] [client 86.26.8.30] Error - <type 'exceptions.UnicodeEncodeError'>: 'ascii' codec can't encode character u'\\xf4' in position 988: ordinal not in range(128)
}}}
#275 1280999840000000 pudo cset:1396 fixes this
#275 1280999963000000 pudo Replying to [comment:3 pudo]: > cset:1396 fixes this Where "this" is the field renderer.
#275 1281001911000000 pudo Replying to [comment:2 rgrp]: > Also, now if you visit ckan.net/admin/Package get 500 error and following in logs: > > [Mon Aug 02 08:17:35 2010] [error] [client 86.26.8.30] Error - <type 'exceptions.UnicodeEncodeError'>: 'ascii' codec can't encode character u'\\xf4' in position 988: ordinal not in range(128) Cannot reproduce this at the moment. Will wait for it to occur again and then create another ticket.
#274 1278700842000000 dread A significant chunk of the work towards this done in cset:742adebb707c (refactoring search options).
#274 1279890237000000 pudo This is included in the Solr indexing engine and will become available as Solr is adopted.
#274 1280262229000000 rgrp Need to work on postgres and test there.
#274 1281452015000000 dread The docs are now out of date and there doesn't seem to be a test for this.
#274 1287398398000000 dread I fixed the docs a couple of weeks ago. Need to ensure there is a test though.
#274 1287402155000000 pudo Replying to [comment:7 dread]: > I fixed the docs a couple of weeks ago. Need to ensure there is a test though. there is as of cset:c2e66cec3610
#274 1287402800000000 dread Replying to [comment:8 pudo]: > Replying to [comment:7 dread]: > > I fixed the docs a couple of weeks ago. Need to ensure there is a test though. > > there is as of cset:c2e66cec3610 Error: Invalid Changeset Number
#273 1268996987000000 pudo cf http://lists.okfn.org/pipermail/ckan-discuss/2010-February/000042.html
#273 1270717895000000 pudo SOLR Requirements:
* 4GB Memory
* Sun Java
* Tomcat
* Scala (for Etherpad)
* MySQL 5 (for Etherpad)
* Cheap bandwidth/low latency to the CKAN servers.
#273 1278578527000000 dread rgrp: We plan to use SOLR. May investigate Xapian. Nothing more to do in this investigation.
#272 1271764003000000 dread Package feed done in ticket:255
#272 1290004225000000 cygri A strong +1! Feeds for groups and tags would be extremely useful.
#271 1272280005000000 johnbywater Initial spike solution has been written, covering four user stories:

1. Commit CKAN revisions to changeset system (#296)
2. Update CKAN repository from changeset system (#297)
3. Pull changesets from remote CKAN instance (#298)
4. Merge diverging lines of changesets (#299)

Emails to ckan-discuss include:
* http://lists.okfn.org/pipermail/ckan-discuss/2010-March/000109.html
* http://lists.okfn.org/pipermail/ckan-discuss/2010-March/000154.html
#270 1290596640000000 dread <[email protected]> Nearly everything mentioned here has been achieved with the SpreadsheetData, DataRecords<-SpreadsheetDataRecords, PackageImporter<-SpreadsheetPackageImporter design. New imports can take advantage of this.
#269 1288036269000000 rgrp I think this has been long obsoleted by other work (e.g. more recent work on gov form).
#269 1288037103000000 dread Several of these points haven't been considered in the recent work.
#269 1291830780000000 thejimmyg Just discussed this with Evan...

* notes field could use a WYSIWYG - No, Evan wants to discourage fancy features; plain text/markdown is fine
* auto-complete on tags - DONE
* department drop-down options list interacting with user permissions - Evan is building the API we need for this now
* licenses -> drop-down is fine; let's just have OGL as the default

So just the default licence, and replacing department with provider and via, to be implemented on this ticket. Evan will provide:

* organisation.one() to look up one organisation by ID
* organisation.many() to look up a list of organisations by ID all at once
* organisation.match() to match a string and return an organisation ID
* organisation.department() to take an organisation ID and return the organisation ID of the department it represents.
#269 1291897538000000 dread Licence is defaulted in CKAN cset:5bfbcd457426 (merged into default) and DGU cset:2d798e8af3d7. "replacing department with provider" is covered in ticket: https://trac.dataco.coi.gov.uk/projects/datagov/ticket/742
#268 1285070682000000 dread Duplicate of ticket:652
#267 1272960518000000 rgrp Fixed in cset:9e2e66cced90/vdm
#264 1272390013000000 dread Mainly sorted in ticket:292. Also related changes in cset:ed4c500fcd90
#263 1271690219000000 johnbywater All seems to work. The reported Wordpress trouble may result from the user having a Wordpress account but no blog (i.e. they have a 'myname' login, but haven't got a 'myname.wordpress.com' blog). Having the blog makes it work (otherwise you get told that you aren't the owner of the identity).
#263 1273072985000000 rgrp Issue where if you click on Google or Yahoo it seems to "remember" the login, so the next time you click on a Google or Yahoo item it takes you to what you clicked on previously (e.g. click on Google, then go back and click on Yahoo, and you are taken to the Google login). Also think this plugin may be nicer: http://code.google.com/p/openid-selector/

Also:
* typo in page ("indentity")
* Not sure "sign in or create new account" is clear without some indication of how OpenID works; e.g. better to say something like: "Login to CKAN using OpenID"