{23} Trac comments (3729 matches)

Results (501 - 600 of 3729)

Ticket Posixtime Author Newvalue
#313 1275404524000000 johnbywater Fixed in changeset 06c949266644.
#315 1275846764000000 dread Fixed in cset:61548ced8b7d - Quote marks correctly read in for data4nr data, which makes this problem record ok (which opened in openoffice fine incidentally). Fields in package are now dumped in correct order to make it clearer. Not changed resource serialisation - if you want tidy json, then use the json dump. No real call for half-way house dump.
#316 1274366801000000 dread This exception occurs for ckan.net with just this one character: http://ckan.net/package/search?q=%C2 (you can wget it) But I can't recreate it on my machine. Maybe it's a version issue. The client that is making all these crazy calls is googlebot.
#316 1291831177000000 thejimmyg I've just tested this on ckan.net and it gives a sensible message: There was an error while searching. Please try another search term.
#317 1279005278000000 pudo this has been in for a while now but still needs to be extended to include the indexing of entities (ckan.model.search_index)
#317 1279286041000000 pudo should be done after refactoring the search functions.
#318 1274377385000000 wwaites Some more datapoints from Leigh Dodds of Talis: I'm still having no joy with this I'm afraid. I'm test parsing the data locally using the TDB command-line tools, specifically tdbcheck, which will parse the data and generate warnings/exceptions. This uses the same parsing code, data and URI validation code as we're using on the Platform. Currently it's giving me warnings for invalid lexical values for dates, e.g.: Lexical not valid for datatype: "2008"^^http://www.w3.org/2001/XMLSchema#date While these aren't a major issue, looking at some of the data suggests that there are more underlying data problems that need checking and fixing up, e.g.: Lexical not valid for datatype: "n/a"^^http://www.w3.org/2001/XMLSchema#date Lexical not valid for datatype: "27/04/2006 13:56"^^http://www.w3.org/2001/XMLSchema#date Lexical not valid for datatype: "Real time calculation"^^http://www.w3.org/2001/XMLSchema#date Lexical not valid for datatype: "varies by country"^^http://www.w3.org/2001/XMLSchema#date And there are still some invalid URIs, e.g.: <https://mqi.ic.nhs.uk/IndicatorDataView.aspx?query=NRLS%3&ref=3.02.16> Code: 30/ILLEGAL_PERCENT_ENCODING in QUERY: The host component a percent occurred without two following hexadecimal digits. Can I suggest you try running the converted data through tdbcheck to iron out any problems? Then I can push it into the Platform.
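The lexical problems above can be caught with a simple pre-check before export. A minimal sketch in Python — is_valid_xsd_date is a hypothetical helper, not part of tdbcheck or ckanrdf, and a full xsd:date check would also allow an optional timezone suffix:
{{{
from datetime import datetime

def is_valid_xsd_date(value):
    # Accept only the core YYYY-MM-DD lexical form of xsd:date.
    try:
        datetime.strptime(value, "%Y-%m-%d")
        return True
    except ValueError:
        return False

# Samples from the tdbcheck warnings above: only the last one passes.
for sample in ["2008", "n/a", "27/04/2006 13:56", "2006-04-27"]:
    print(sample, is_valid_xsd_date(sample))
}}}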
#318 1275320677000000 dread We can't change any of the metadata without permission from the various departments who supplied it. I think "Don't shoot the messenger" is apt here. Adding this to the form validation isn't going to change any of the existing data. I think this is better off in the data quality scoring.
#318 1276271343000000 wwaites url validation reputed to be here: http://www.livinglogic.de/Python/url/Howto.html
#318 1276438793000000 wwaites Some good news, ll.url seems to take bad urls and make them into good urls. viz: {{{ In [1]: from ll import url In [2]: print url.URL("https://mqi.ic.nhs.uk/IndicatorDataView.aspx?query=NRLS%3&ref=3.02.16") ------> print(url.URL("https://mqi.ic.nhs.uk/IndicatorDataView.aspx?query=NRLS%3&ref=3.02.16")) /Users/ww/Work/OKF/ckanrdf/lib/python2.6/site-packages/ll/url.py:2358: UserWarning: truncated escape at position 4 value = _unescape(namevalue[1].replace("+", " ")) https://mqi.ic.nhs.uk/IndicatorDataView.aspx?query=NRLS%253&ref=3%2E02%2E16 }}}
#318 1276438832000000 wwaites Also fyi, getting ll.url is done like so: {{{ pip install ll-xist }}}
#318 1276438907000000 wwaites I've updated ckanrdf to strip out datatypes and use this ll.url on external references so that should be sufficient to hold off talis. Still need to work particularly on validating dates though...
#318 1280737620000000 rgrp Important but low priority according to CO so bumping into next milestone (v1.2). NB: did not seem to be able to update the milestone in the Trac interface! (Perhaps due to agilo stuff?)
#318 1283179768000000 wwaites CO may not realise the implications when they said it was low priority. The implication of this lack of validation is that it is impossible to generate valid URIs in the RDF which means it cannot be imported by Talis. So until there is a solution to this, no RDF catalog.
#318 1296340768000000 rgrp Still not sure what the priority is so moving to awaiting triage.
#318 1296467308000000 pudo This will be implicit in #852, so we're not building something specific for it now.
#318 1296482049000000 wwaites We still require form validation to check URIs. They are not free-form strings. This is not the same as 852 or necessarily included in it.
#318 1311176497000000 thejimmyg Assigning to John so that he can see whether the QA code correctly flags these kinds of problems. If it does, we can close this ticket because although the API will serve invalid URLs, the publishers will be notified to clean up.
#318 1311770683000000 johnglover The QA code should identify invalid URLs. Resources with invalid urls will have an 'openness_score' of 0 and an 'openness_score_reason' of 'Invalid url scheme' or 'invalid URL'.
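A rough sketch of the kind of check described — not the actual ckanext-qa code, and ALLOWED_SCHEMES is an assumption:
{{{
from urllib.parse import urlparse

ALLOWED_SCHEMES = ('http', 'https', 'ftp')  # assumed list, not from the ticket

def openness_check(url):
    # Score 0 with a reason for unparseable URLs or unknown schemes,
    # mirroring the 'openness_score_reason' values mentioned above.
    try:
        parts = urlparse(url)
    except ValueError:
        return 0, 'invalid URL'
    if parts.scheme not in ALLOWED_SCHEMES:
        return 0, 'Invalid url scheme'
    return None, None  # scheme is fine; the real scoring continues elsewhere

print(openness_check('htp://example.com/data.csv'))  # (0, 'Invalid url scheme')
}}}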
#318 1349778662000000 dread Here's a real example - one of many from MOD {{{http://www.dasa.mod.uk/applications/newWeb/www/index.php?page=48&thiscontent=180&date=2011-05-26&pubType=1&PublishTime=09:30:00&from=home&tabOption=1}}} Browsers accept colons and slashes happily, which is the main usage of our links. The URL looks better with the colons and slashes, rather than the encoded version. The average departmental user doesn't understand that the reason to encode them is for some academic RFC and RDF which is not "liberal in what it accepts". Since the RDF tool has a satisfactory way to encode links, this problem is essentially solved. Therefore I'm changing ckanext-archiver to accept these unencoded links, I'm afraid.
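For illustration, Python's standard library can take the tolerant approach described, encoding only characters that genuinely need it while leaving colons and slashes readable (a sketch, not the ckanext-archiver change itself):
{{{
from urllib.parse import quote

raw = ('http://www.dasa.mod.uk/applications/newWeb/www/index.php'
       '?page=48&thiscontent=180&date=2011-05-26&pubType=1'
       '&PublishTime=09:30:00&from=home&tabOption=1')

# Keep URL structure characters intact; everything here is already safe,
# so the link passes through unchanged rather than being rejected.
print(quote(raw, safe=':/?&=') == raw)  # True
}}}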
#319 1274366882000000 dread Fixed in cset:a1ef783d27d2 on default and metastable.
#320 1279105983000000 dread site_title added in cset:b4c0e0a5630d. site_logo is changeable in one place in the template, so not essential.
#320 1279130535000000 dread Took 1.5h
#321 1291831399000000 anonymous This has now been superseded by this proposal: #787
#322 1274773856000000 pudo This looks very reasonable. Maybe we should have a webhooks client as a simple demo for this?
#322 1274807530000000 pudo Replying to [comment:2 pudo]: > This looks very reasonable. Maybe we should have a webhooks client as a simple demo for this? c.f. #327
#322 1277722821000000 dread Done but not pushed. Took 3.5 days.
#323 1277722845000000 dread Done but not pushed yet. Took 3.5 days.
#324 1274807970000000 pudo I am currently writing a Solr subclass for the search index (#317) and would propose adding standard methods to the ckan.lib.search.Search class: index_package(), index_tag(), index_group(). Those could then be called by a generic queue consumer, irrespective of the search back-end used. I will prototype such a consumer soon, so we should talk to avoid duplicating work here.
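The proposed interface might look like the sketch below — the method names come from the comment; everything else (class names, the consumer) is hypothetical:
{{{
class Search(object):
    # Back-end-agnostic indexing interface proposed above.
    def index_package(self, package):
        raise NotImplementedError
    def index_tag(self, tag):
        raise NotImplementedError
    def index_group(self, group):
        raise NotImplementedError

class SolrSearch(Search):
    def index_package(self, package):
        pass  # e.g. post the package dict to Solr here

def consume(queue, search):
    # Generic queue consumer: dispatch each (kind, entity) message to the
    # matching index method, irrespective of the configured back-end.
    for kind, entity in queue:
        getattr(search, 'index_%s' % kind)(entity)
}}}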
#324 1278599927000000 dread Done in changesets leading up to cset:ca565562129d.
#325 1278599979000000 dread Both sending and received tickets closed.
#326 1274789296000000 dread Done in cset:66c21d78d2f8. Took 20mins.
#327 1281342690000000 rgrp Remove from v1.1 as this awaiting triage now and we are not sure when exactly we will do this.
#327 1296467361000000 pudo Nerdy solution that doesn't really seem to catch on, and does nothing that cannot be done through queue workers.
#328 1275318745000000 dread Done in cset:170cac0b50ac and uploaded to kforge.
#329 1275079189000000 dread Fixed in cset:d264f9d57477 and cset:07701ef4085e
#330 1275303122000000 dread Fixed in cset:2f18d0e661fd on metastable and default branches.
#331 1282833125000000 dread CKAN timestamps should not carry a timezone, since when the clocks go back this could cause problems for vdm. But there may be cases when CKAN is running on a machine whose clock is set for a particular country (say for a front-end running on the same machine), so vdm should be changed to create timestamps using UTC specifically (rather than adding a timezone, since a mixture of timezones won't sort). When we display a timestamp (or reply to a request) we convert it to the local timezone, as suggested in the description.
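A minimal sketch of that convention, assuming Python 3's datetime (not the vdm code itself): store UTC so timestamps always sort, and convert only for display.
{{{
from datetime import datetime, timezone

stored = datetime.now(timezone.utc)  # what vdm would record
shown = stored.astimezone()          # viewer's local timezone, display only
print(stored.isoformat(), '->', shown.isoformat())
}}}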
#332 1275306933000000 dread Switched from YUI a couple of months ago.
#332 1280743320000000 pudo was fixed in cset:1381
#333 1275407987000000 dread Use case 1: decided that when the user is redirected back to the front-end system, the URL contains a parameter with the package just edited. (In addition to the notification message.) Use case 2: decided that the load on the front-end from 100 non-web requests is not high. Should it become a problem in future, the queue consumer could be adapted to slow down / amalgamate multiple requests.
#334 1280743667000000 pudo fixed in cset:1380
#335 1275997752000000 dread Done in cset:5c0c0b6e1342
#335 1276179605000000 dread On discussion with rgrp it's clear that it's also useful to set the redirect url in a config variable - then the client doesn't have to change. This was done in cset:b9fdd208dd45
#336 1276162601000000 dread
#336 1278700021000000 dread Done in cset:742adebb707c and cset:1748e6554e77.
#336 1278700266000000 dread Took 0.75 days.
#336 1279368544000000 Donny http://ckan.net/api/search/resource?url=http://scraperwiki.com&all_fields=1&callback=ckantest yields Bad search option: Field "callback" not recognised in Resource search.
#336 1279373842000000 dread Fixed in cset:e719f449bc74
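One way to avoid this class of error is to treat JSONP's callback as a reserved API-layer parameter rather than a search field — a sketch only, not necessarily how cset:e719f449bc74 fixed it:
{{{
RESERVED_PARAMS = ('callback',)  # handled by the API layer, not the search

def extract_search_options(request_params):
    return {key: value for key, value in request_params.items()
            if key not in RESERVED_PARAMS}

print(extract_search_options({'url': 'http://scraperwiki.com',
                              'all_fields': '1', 'callback': 'ckantest'}))
# {'url': 'http://scraperwiki.com', 'all_fields': '1'}
}}}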
#337 1279300972000000 dread Fixed in cset:f5cc13ade0e8 Cost: 30m
#338 1279886392000000 johnbywater Putting this into API Version 2 (similar to package references).
#340 1276010234000000 dread Done in cset:f77639ddcf0d
#340 1328807317000000 dread Went into CKAN 1.0.2?
#341 1277483030000000 dread Done in cset:c7a9ba55db0d on metastable and merged to default.
#342 1276278485000000 dread Done in cset:61f145b7d4a8
#343 1277824018000000 johnbywater Regarding the package search, if I do: $ paster db clean && paster db init && paster create-test-data and then: $ paster serve development.ini & and then: $ curl http://127.0.0.1:5000/api/1/search/package?name=warandpeace I get: {"count": 1, "results": ["warandpeace"]} and with: $ curl http://127.0.0.1:5000/api/2/search/package?name=warandpeace I get: {"count": 1, "results": ["c90b6c00-9496-4c8c-b7fa-7bdd3ef65c72"]} Am I missing something?
#343 1277892699000000 johnbywater Okay, so in version 2, names were still being used in the relationships part of the package entity. But I don't see why these entities can't be retrieved independently, with references to the entities returned in representations of the entities which reference them.
#344 1277477712000000 dread Done in cset:13737a7ba4d9
#345 1291831615000000 thejimmyg This is a bit out of date. We have moved to a system of "stable" and "default" branches with feature branches for features, bugs and tickets. We already have default and stable tested by buildbot.
#346 1294410298000000 thejimmyg Could you take a look at this at some point please David? If it is already resolved could you please close the ticket? Thanks!
#346 1296477510000000 dread We no longer use the "Gdu" SoS doc.
#349 1277820679000000 anonymous Mostly done, but issue regarding departments still outstanding: can the association between packages, groups, and departments be placed elsewhere?
#350 1277072822000000 [email protected] Also check out the SEO review done by Charles Coxhead - http://www.quicksitereviews.com/ckan-net/ * Pagination – In the tag section of the site particularly I’d suggest replacing the traditional numbered pagination with alphabetical links, i.e. display 26 links A-Z and then on each of those pages display the relevant tags. The point being that the tag pages represent an opportunity to index pages which intersect with very targeted search demand, so let’s give them every opportunity to get crawled and indexed. Linking to all 26 alphabetically arranged pages from the main tag page (and maybe even the home page) will bring all the tag pages closer to the home page and give them all some more link popularity. Also suggest something similar for the listings in the Packages section, so arrange these on alphabetical pages. * Tag page titles – Take a programmatic but more focussed approach to these. Currently all tag pages have the title “CKAN – Comprehensive Knowledge Archive Network – Tags – [tag]” – I would make this something like “[tag] – Open Data – CKAN”, which puts the emphasis on the tag keyword and adds some meaningful qualifiers. * Tag page listings – On the tag pages there are related open data sets listed and next to each the associated tags. This leads to masses of tag links on the page and loads of duplicated links. If possible I recommend removing the tags from each data set listing and instead displaying a cloud of “Related tags” on the right hand side. * Group pages – The user-curated group pages are possibly even better than the tag pages from a search perspective because they all intersect with very clear search demand, so they would also benefit from improved page titles. I suggest a similar approach to the above, where the group name becomes the main keyword with some qualifiers added. It might even be a good idea to let group owners define their own page titles. * Data set pages – These also could do with improved page titles, presumably the data set name, e.g. “[data-set-name] – CKAN”. * Heading tags – Not a big deal, but I’m always in favor of using H tags strictly for headlines (rather than wrapping an H1 around the site logo, for example).
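The alphabetical pagination suggested there could work along these lines (a hypothetical sketch, not the code from cset:b9b82e7ae078 below):
{{{
from itertools import groupby

def alphabetical_pages(tags):
    # Group tag names under their first letter so the tag section can
    # link 26 A-Z pages instead of numbered pagination.
    tags = sorted(tags, key=lambda t: t.lower())
    return {letter.upper(): list(group)
            for letter, group in groupby(tags, key=lambda t: t[0].lower())}

print(alphabetical_pages(['water', 'Weather', 'air', 'budget']))
# {'A': ['air'], 'B': ['budget'], 'W': ['water', 'Weather']}
}}}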
#350 1279230686000000 dread Done in cset:b9b82e7ae078 Cost 9 hours, including: alphabetical pager, title changes, removed links from tags in package list.
#353 1280262363000000 pudo cset:1366:e719f449bc74 fixes this
#353 1280737966000000 pudo Add a supervisord example config
#353 1280756379000000 pudo This is done as of cset:8 (of ckanext!)
#355 1311177552000000 thejimmyg Our policy is to recommend the use of hyphens, but not to enforce it. The new package name suggestion autocomplete JavaScript uses hyphens.
#356 1278931816000000 rgrp Done in cset:a1af5f8fe59e. Have not done the advanced search link: normal package search currently provides no info about how to use advanced features.
#357 1277461466000000 johnbywater Fixed in 79c426c0acb6.
#358 1303122109000000 kindly This ticket needs a more thorough spec, which needs to include: * Examples of put/post requests to resources and whether they are needed. * Dealing with resources that do not have a related package, in terms of authorization. Do they have a new action? How granular is the authorization? Per resource? System level? etc. * The rules relating to authorization for resources attached to packages, i.e. do you only get read permission when the related package has read permission, or do they have their own rules?
#358 1303123611000000 dread This ticket was designed only for reading resources, following an ongoing requirement from the Scraperwiki collaboration. Assume PUT/POST is out of scope. I suggest dealing with resources that aren't attached to packages in an entirely new ticket or CEP, as the implications are wider than this aspect of the API.
#358 1310128782000000 thejimmyg Merging with #922 go there for latest updates.
#359 1291135692000000 rgrp Done in cset:90e318c3c7dc/datapkg and cset:0036b5c505eb/datapkg and others.
#360 1288004891000000 rgrp Done in cset:beaa842ed502/datapkg and following.
#361 1291135756000000 rgrp Done in cset:7305c1d04692/datapkg
#362 1296469470000000 rgrp Rating are currently disabled (invisible) so moving this down.
#362 1311176564000000 thejimmyg This ticket is more than 6 months old so marking as invalid in line with our ticketing policy.
#363 1291733459000000 dread Changes to user properties aren't linked to a package.
#363 1298840718000000 kindly Revision objects are made every time a new revision is made, even if there are no changes.
#363 1310125872000000 thejimmyg This will eventually be fixed as part of broader VDM changes. This work cannot be prioritised above other things we want to do.
#364 1281451132000000 dread But this works with the new SOLR search now - close?
#364 1291637291000000 rgrp Have now switched to solr search (and maybe working in postgres by now). Note correct link is http://ckan.net/package?q=statistics
#365 1279300621000000 dread Fixed in cset:c11738dcb1ba Cost: 1d
#366 1297075053000000 rgrp Now #938 is done this is straightforward.
#366 1299845116000000 dread I'm very pleased that this now works when you try to edit a package you are not allowed to edit. Are there other circumstances we should cover, or can we close this?
#366 1299845781000000 pudo You're right, that's done!
#366 1300212171000000 dread changeset d7a8df888f44
#367 1279303693000000 dread Done in cset:79200de013e1 Cost: 1h
#368 1291831811000000 thejimmyg I don't have enough information to debug this problem. I'm assuming that since this has been a while that the problem is solved? If not please re-open the ticket and add your contact details.
#371 1292257189000000 nils.toedtmann (I know the term "QoS" as a very specific networking term about classifying and prioritising network traffic. I assume here it means ''uptime'', ''availability'', ''performance monitoring''?) There seem to be at least three monitors already in place: * http://munin.okfn.org/ on eu1 monitoring eu[0-7] and us1, gathering additional health information via locally installed daemons. Munin's notification subsystem is not configured. * http://nagios.hmg.ckan.net/ on hmg.ckan.net monitoring the CKAN-HMG service group (network monitoring only). Notifications are not configured (or are they?) * We have a http://wasitup.com/ account which is watching some OKFN services (e.g. {ca,de,www}.ckan.org, {blog,www}.okfn.org) and sending loads of alerts to [email protected]. Only checking for "HTTP 200 OK" and whether the response contains a configurable string. My 2ct: We should consolidate. What do we want? * A webservice like https://www.pingdom.com/ ($40/month incl 30 checks and 200 SMS, $0.5/month per extra check, $0.14-20/SMS) or http://www.serverdensity.com/ ($10/server-month plus 5-10p/SMS)? * Or run our own monitor (nagios, opsview, munin)? In the latter case we want a separate machine which is not on EC2 (but e.g. ByteMark), dedicated to monitoring only. We should also include root mails in the alert/notification policies. Root mails should be trimmed down to important warnings and errors only.
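For comparison with the hosted options, the wasitup-style check ("HTTP 200 OK" plus a configurable string in the body) is only a few lines — a hypothetical sketch:
{{{
from urllib.request import urlopen

def site_is_up(url, must_contain, timeout=10):
    # Succeed only on HTTP 200 with the expected string in the body.
    try:
        response = urlopen(url, timeout=timeout)
        body = response.read().decode('utf-8', errors='replace')
        return response.status == 200 and must_contain in body
    except OSError:  # covers URLError, HTTPError and socket timeouts
        return False

print(site_is_up('http://ckan.net/', 'CKAN'))
}}}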
#371 1292257389000000 nils.toedtmann The nagios fork [http://www.opsview.com/ OPSview] might be worth a look.
#371 1292704716000000 nils.toedtmann Replying to [comment:9 nils.toedtmann]: > There seem to be at least three monitors already in place: [[BR]] Correction: at least four, we seem to have a Montastic account, too:[[BR]] On 18/12/10 15:03, [email protected] wrote: {{{ Dear okfn, This is a monthly reminder that you have an account on Montastic, the website monitor service. ### ACCOUNT INFORMATION Signup date: 2009-10-06 Email you signup with: [email protected] ### 20 WEBSITES MONITORED [OK] - http://www.ckan.net/ [OK] - http://www.knowledgeforge.net/ [OK] - http://okfn.org/ [not monitored] - http://blog.okfn.org/ [...] ### EMAIL ALERT RECIPIENTS - [email protected] - [email protected] - [email protected] [...] To make changes to your account or contact us, go to www.montastic.com. [...] }}}
#371 1294411939000000 thejimmyg It is implied in this that the performance of sites should beat the QoS criteria, therefore closing #485. Ensuring this happens is an ongoing process.
#371 1294417434000000 thejimmyg From #440 we'll also need to "Write and pass comprehensive performance tests"
#371 1294417553000000 thejimmyg From #395: At the moment, some pages within CKAN tend to load slowly. We should create a profiling setup in which we can measure response times for complete requests and individual methods calls. This could be used to identify bottlenecks and find an appropriate caching or tuning strategy to improve CKAN performance. NB: We should also agree on a maximum request latency. TODO: Read up on all those QoS tickets to avoid overlapping efforts.
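A profiling setup like the one described could start with request timing middleware — a minimal sketch, not an existing CKAN module; the latency budget is an assumed example value:
{{{
import time

MAX_LATENCY = 2.0  # assumed maximum request latency, in seconds

class TimingMiddleware(object):
    # WSGI middleware that logs wall-clock time per request so slow
    # pages can be spotted and checked against the latency budget.
    def __init__(self, app, log=print):
        self.app = app
        self.log = log

    def __call__(self, environ, start_response):
        started = time.time()
        try:
            return self.app(environ, start_response)
        finally:
            elapsed = time.time() - started
            flag = ' SLOW' if elapsed > MAX_LATENCY else ''
            self.log('%s took %.3fs%s' % (environ.get('PATH_INFO'), elapsed, flag))
}}}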
#371 1294676093000000 anonymous Mainly handled in http://knowledgeforge.net/okfn/tasks/ticket/564 now. Close here?
#371 1300217820000000 thejimmyg Marking as closed since http://knowledgeforge.net/okfn/tasks/ticket/600 now takes on this ticket. I will check that nils has added the new DGU Bytemark servers to Nagios.
#372 1280514163000000 johnbywater Moved from sprint 1.1.1
#373 1286376071000000 dread Done