{23} Trac comments (3729 matches)

Results (3001 - 3100 of 3729)

Ticket Posixtime Author Newvalue
#2614 1343685572000000 aron.carroll Done in 71aca07. Moving paster to a separate ticket.
#2667 1342530810000000 aron.carroll Done in 63c193e. Assigning to Ira for QA
#128 1260294960000000 dread Done in 57c5b5ed4737 Cost: 30 mins
#2803 1344256274000000 aron.carroll Done in 3b2427e
#2804 1344269865000000 aron.carroll Done in 38f824b
#160 1261399380000000 dread Done in 1h in cset:9c3e64104cbf. Not allowing space - non-standard.
#2794 1343903064000000 aron.carroll Done in 11f3b6297c
#71 1250181211000000 dread Done in 1.5 hrs. Changeset 2b434d63d5fd
#2384 1337100810000000 dread Done in 0bc06c6495c749d6b481164d59cceeca873acefb
#2834 1344856692000000 aron.carroll Done in 092d257
#74 1250182938000000 dread Done in 0.5h Checked in to dd2f9713a6a2. Third detail done - enquiry can be done for all packages. Still need to change isitopen site to make use of the parameter ?ckan-package=<package_name>
#1480 1321978546000000 dread Done in /api/util. Added length limits to mungers. Also added markdown test and documentation. Cset:e43009db393cb for 1.5.2
#1528 1323198001000000 zephod Done here. https://github.com/okfn/ckan/commit/b2748e395760083be2071b007641e787a071f955
#213 1264436021000000 dread Done from cset:f342b4928466
#1434 1341236921000000 seanh Done for the core extensions, pull request: https://github.com/okfn/ckan/pull/47 Perhaps strings from the non-core but officially supported extensions should be added too, but we haven't decided which extensions those are yet, so add another ticket to do that later: http://trac.ckan.org/ticket/2625
#201 1264439427000000 dread Done by rgrp
#1281 1314021919000000 dread Done by pudo in cset:eaf342823caf on default branch, so headed for release 1.4.4. "Work around the fact that locale changes do not affect the running request and therefore led to an incorrect flashing message. Now manually changing language."
#207 1270569952000000 dread Done by nick, put into forms branch and will be merged in soon.
#1433 1320666509000000 kindly Done but waiting to merge after 1.5 release.
#2345 1337962454000000 seanh Done but now I need to fix readthedocs again
#322 1277722821000000 dread Done but not pushed. Took 3.5 days.
#323 1277722845000000 dread Done but not pushed yet. Took 3.5 days.
#82 1256565343000000 dread Done basic work in vdm in cset:2a51e39be179. Previous work in ckan in cset:2cfa1c47acd2 - maybe not needed.
#528 1283536475000000 pudo Done at http://iati.ckan.net
#2611 1342008509000000 aron.carroll Done aside from "Change 'Groups' to 'Publishers' everywhere", which is now #2658
#21 1254740405000000 dread Done as part of ticket:126
#182 1270567116000000 rgrp Done as part of UI overhaul for v0.11 around xmas. Current openness icons seem good enough.
#683 1287142289000000 dread Done apart from ultrastable - not in use at moment.
#1324 1315948336000000 rgrp Done and merged into main at cset:8bb0720a2150 (and deployed!)
#222 1282909280000000 dread Done already.
#677 1292587315000000 dread Done a few weeks ago in dgu repo.
#390 1282214629000000 dread Done - removed 'test' package: 88c485ec-fb70-44b2-9e18-b5dbcb7de57e 2010-07-28 14:35:30.922018 frontend2
#1468 1322495417000000 johnglover Done - commit: https://github.com/okfn/ckan/commit/7789e85c973c9e085f623486bced6be14f25678f rebuild can now take an optional package name/id (single package to be consistent with other paster commands, not a list of packages)
#832 1296334980000000 rgrp Done (~4w ago). See https://bitbucket.org/okfn/ckanext-stats and remove from core cset:311313e4afdb.
#2770 1343389747000000 seanh Done (but not log messages on history page, won't do those)
#102 1260285104000000 rgrp Done (a couple of weeks or more ago) in cset:061e3f3d253b/vdm. Migration script as used in CKAN in browser:ckan/migration/versions/008_update_vdm_ids.py.
#1571 1325555270000000 rgrp Done (4d ago): rename todo(s) -> issue(s) in extension
#373 1286376071000000 dread Done
#421 1282909772000000 dread Done
#454 1286376044000000 dread Done
#479 1288004211000000 dread Done
#507 1282909852000000 dread Done
#565 1294412708000000 thejimmyg Don't understand this one? Anyone else know about it?
#154 1257535066000000 rgrp Don't think anything obvious to fix at present (and perhaps plan a larger ticket on form customization).
#686 1287997047000000 dread Don't need this yet.
#757 1294233016000000 thejimmyg Don't fully understand this ticket. Will return to it.
#2323 1335516967000000 ross Doesn't make sense in Organizations mode. Adding existing dataset would mean that dataset would have to already be in another organization. Leaving this open though to fix in case we decide that datasets *can* be in more than one organization at a time.
#2676 1342461123000000 aron.carroll Does this actually offer any benefit over just visiting the related data page?
#427 1297686183000000 thejimmyg Documentation of the licenses service was handled in #973. Changing this ticket to be about matching the license service in UKLII.
#2751 1343597577000000 shevski Do we need two versions? 1 for thedatahub - where we stick to having "groups" resources etc and 1 for datasuite where groups are re-named as "publishers" and more emphasis on data files vs resources?
#906 1324299793000000 rgrp Do we need to change in core code or just configure solr?
#2458 1343737692000000 shevski Do we need a design for this? Or can we just use the same format as datahub.io/stats but within the new template?
#2527 1339683264000000 toby Do we have a specific usecase for this?
#191 1305732414000000 dread Do this after refactor #1129
#50 1267648356000000 rgrp Do not see there is much more to do here.
#2240 1332408769000000 ross Distributed to ckan-coord
#2710 1342630558000000 aron.carroll Displayed in a lightbox as of f347f11
#1289 1317315211000000 dread Discussions have not resolved this either way. Decided to leave it like it is for now.
#800 1294245610000000 thejimmyg Discussion for this ticket is now at #728
#103 1301943140000000 dread Didn't take this up in #1012 after all. Closing as wontfix.
#1519 1324297045000000 johnglover Didn't have time to look at this in previous sprint, moving to current sprint.
#1109 1304698621000000 dread Did we decide that this facility for storing non-strings via the API is a new feature, rather than a bug?
#1630 1327620031000000 rgrp Did not make it by end of day but now done!
#1737 1330908235000000 rgrp Did not get to this sprint as focused on #1797.
#2641 1343737625000000 shevski Design here: https://okfn.basecamphq.com/projects/9558659-demo-ckan-front-end/posts/65443876/comments
#2212 1331720191000000 johnglover Deployed on test server, where it imported 4087 datasets. A small number of datasets were not created as they failed CKAN validation - most of which had strange values such as 9999 and 0 for date fields (some also didn't have unique names).
#1640 1326710888000000 amercader Depends on #1655
#4 1220900713000000 rgrp Dependent on upgrade to vdm v0.2 (sqlalchemy). Once that is done should be fairly simple (can port query stuff from microfacts?).
#39 1220900869000000 rgrp Dependent on ticket:51 (upgrade to vdm v0.2).
#1570 1324314741000000 rgrp Deleting this ticket as integrated file storage has been available and finished for months.
#1214 1310044531000000 dread Deleting an extra using 'null' works for me: {{{ $ curl -d '{"name":"dtest2", "extras":{"1":"1", "2":"2", "3":"3"}}' http://test.ckan.net/api/rest/package -H "Authorization: tester" ... $ curl -d '{"name":"dtest2", "extras":{"1":"1", "2":"2", "3":null}}' http://test.ckan.net/api/rest/package -H "Authorization: tester" {"maintainer": null, "name": "dtest2", "relationships_as_subject": [], "author": null, "url": null, "relationships_as_object": [], "notes": null, "title": "dtest2", "maintainer_email": null, "revision_timestamp": "2011-07-07T12:57:18.454890", "author_email": null, "state": "active", "version": null, "groups": [], "license_id": null, "revision_id": "f0ff31c0-027b-49ce-9daf-94a73d96a913", "tags": [], "id": "fdeeb287-2783-4aac-9fc7-a6717e54e22f", "resources": [], "extras": [{"state": "active", "value": "\"1\"", "revision_timestamp": "2011-07-07T12:57:18.454890", "package_id": "fdeeb287-2783-4aac-9fc7-a6717e54e22f", "key": "1", "revision_id": "f0ff31c0-027b-49ce-9daf-94a73d96a913", "id": "d1937073-7bfc-48c5-b6ff-b00d90b451ae"}, {"state": "active", "value": "\"2\"", "revision_timestamp": "2011-07-07T12:57:18.454890", "package_id": "fdeeb287-2783-4aac-9fc7-a6717e54e22f", "key": "2", "revision_id": "f0ff31c0-027b-49ce-9daf-94a73d96a913", "id": "8147886f-9769-440c-8c35-b7d6a2f46de7"}]} }}}
#1394 1324292900000000 dread Defo for this sprint.
#39 1214244190000000 rgrp Deferring as transition to new vdm has not yet happened; will do as part of 0.7. Also fairly low priority ...
#1810 1329918479000000 johnglover Deferred for now, moving to backlog.
#1811 1329918517000000 johnglover Deferred for now, moving to backlog
#62 1249410921000000 rgrp Defer until after conversion to formalchemy (ticket:76) is complete.
#141 1255007583000000 dread Decision made to put it in a section alongside REST docs at api/index. Search API docs already done in cset:5562b3e53977. Refactored in cset:a096132a6c6b
#1831 1331655458000000 ross Decided to delay this until later but code is in feature-1831-login-by-email
#1518 1323760656000000 rgrp Debug via js console revealed the problem: Google storage replaces spaces with +.
#402 1296467635000000 pudo De-dup: #891
#1044 1300627007000000 pudo David, thanks for writing those tests - perhaps we should combine them with the ones below ("TestLockedDownUsage") which attempt to basically do the same. As for the inconsistency introduced by the global check in the REST API you're right: Of course it is strange that WUI access checks are more granular than the API checks. The alternative is that we either move authz checks into all relevant REST places (a major refactoring I would be suspicious of) or that we introduce duplicate checks on the WUI actions (I actually have performance concerns about that, authz is incredibly slow - and it introduces another level of special authz that I think we really should not have). Given the choice I'd opt for the REST refactor - there is no good reason to make SITE_READ a duplicate check where authz already applies. On the other hand, this is a function we really don't want to enable or even have and that was only added to satisfy some very specific user demands. Given that these are fulfilled, I'm actually OK keeping the inconsistency for now - nobody will see it in normal operations and in a locked-down environment, users will need to have API keys anyway. Regarding the naming, I'm pretty opinion-less: SITE_READ was proposed by rgrp and I think it's pretty fitting, while OTHER_READ would just confuse me. PUBLIC_READ might work, though.
#252 1294410341000000 thejimmyg David, do you know where this requirement has come from? Is it still relevant?
#103 1311179429000000 thejimmyg David, I'm in the middle of a ticket refactor. Please don't open tickets I've just closed ;) This will be taken forward as part of #1233
#1298 1324384191000000 seanh David, Ignore my link above to a branch on my ckan fork. Now that I have permissions I'm pushing my branches to the okfn ckan repo on github. My super branch for the activity streams feature is '''feature-1515-activity-streams''' on the okfn ckan repo on github. This page comparing my branch to master is particularly useful: https://github.com/okfn/ckan/compare/master...feature-1515-activity-streams (click on the Files Changed tab) For reviewing this ticket, the relevant changes to review are: * ckan/lib/activity.py, all of it * ckan/model/activity.py, all of it * My changes to ckan/model/meta.py * My changes to ckan/model/package.py * My changes to ckan/model/resource.py * ckan/tests/models/test_activity.py, all of it The other changes on my branch are for other activity streams tickets that follow this one. The super ticket #1515 has an overview of it all.
#1280 1328786670000000 dread David tells me that this was fixed in CKAN when we moved to SQLAlchemy 0.7 #1433 which went into CKAN 1.5.1.
#1680 1326900832000000 ross David suggested that we could implement this with a group extra instead of a new attribute.
#954 1303118513000000 thejimmyg David Raznick has implemented JSON errors for the v1 and v2 API, we'll look at this over the next few weeks.
#888 1311773103000000 johnglover Dataproxy / Dataapi now deprecated in favour of the combination of new QA archive / process commands and the webstore. Changes in relation to Dataproxy / Dataapi: * Currently only supports CSV files, but plans to add support for excel and google docs spreadsheets soon. * Uses David Raznick's CSV parser instead of Brewery for parsing, handles messy CSV data better. Changes in relation to old QA functionality: * decoupled archiving (downloading) and QA process * added a new 'process' command which parses downloaded files and adds them to a local webstore Closing for now, any improvements/feature requests should be in tickets relating to either the QA functionality or the webstore.
#698 1293472613000000 anonymous Data proxy documentation: http://democracyfarm.org/dataproxy/api.html (included in sources) Updated ('s' as in structured) data proxy app: http://sdataproxy.appspot.com
#1797 1330863639000000 rgrp Data Viewer support for new DataStore in https://github.com/okfn/ckan/commit/9ab8b0283bb086eb4cd663ff73c27066bdd3c79a
#2468 1339757934000000 rgrp DONE. (Closed on github)
#234 1294410993000000 thejimmyg Cygri opened #815 which I've closed as a duplicate. He requests: "The search field (on the homepage and in the top right corner of each page) should have autocomplete for package name. If a package name is selected, it should not do a search but go straight to the package page." @memespring - Is this something you are looking at?
#540 1283324947000000 wwaites Cut-and-paste from ckan-discuss: I had a look at Varnish and I agree that the configuration language is complicated. In fact by default Varnish disregards cache control headers and in general behaves in a very standards-non-compliant way. I have no doubt that it is very fast -- if you are willing to spend the effort to customise its configuration for the exact layout of pages and headers of each web site it is going to be used with. In other words, there is a large administrative burden. So I decided to change tack and see where the Squid proxy has gotten to in the decade or so since I last met it. Squid is a general purpose caching proxy that can be configured as an http accelerator. The configuration is simple. You tell it where your web servers are for which sites. The web servers make sure to set the cache control headers appropriately. Here are some results from my testing, against http://de.ckan.net/package/list?page=B which is an example of a slow page. Except for the first, which only did 100 requests, the tests were set to 8 simultaneous connections and a total of 1000 requests. {{{ No caching of any kind: Requests per second: 0.44 [#/sec] (mean) Beaker Cache (filesystem): Requests per second: 43.16 [#/sec] (mean) SQUID setting cache control headers correctly: Requests per second: 421.33 [#/sec] (mean) }}} The results are clear. Using the application cache is about 100 times faster than doing nothing. Using squid is about 1000 times faster. (Doing both wouldn't necessarily help very much). I'm sure we could squeeze a bit more performance out of it if we used Varnish, but probably not an order of magnitude and I don't think it is worth the administrative burden. If we set up a production Squid instance (or farm), with a bare minimum of work it can cache for any number of sites, not just CKAN.
For the python coders, here's what you have to do to set the headers properly so that squid will cache the page: {{{ del response.headers["Pragma"] del response.headers["Cache-Control"] from time import gmtime, strftime response.headers["Last-Modified"] = strftime("%a, %d %b %Y %H:%M:%S GMT", gmtime()) response.cache_expires(seconds=3600) }}} A further advantage is that the *browsers* will also understand these cache-control headers and do their own caching - just setting them properly without even using Squid should result in some subjective performance improvements. That's all for now, I suggest we dedicate a machine to just running squid, the more RAM the better and big discs are good, and put it between the world and the ckans. Oh, and comb through the controllers setting the headers correctly where appropriate...
#720 1288459344000000 pudo Customer has stated they do not want this in the current iteration.
#1508 1323184952000000 ross Custom form is referenced in #1485
#1819 1332163324000000 kindly Currently using package_show_rest. Should be moved to just use package_show but that is another ticket.
#2304 1337073870000000 seanh Currently implemented in this branch https://github.com/okfn/ckan/compare/master...feature-2304-follow waiting for code review and merge into master
#677 1287745278000000 dread Currently blocked, waiting for exact details of script.
#2209 1331551170000000 ross Current thinking is that option 4 is a default (as per ckanext-rdf) rdf output that is generated not in code (as currently) but using a genshi xml template to read the package into an RDF format (as if it were HTML). This would then be overrideable so that for ecportal, where the format of the RDF is different (change of vocabs etc), we can just point the config to a new template. Pros: easy to implement; easy to use; not hard-coded as currently; fast execution. Cons: requires knowledge of the required RDF output if the default is not useful; RDF only, not any of the other formats yet; only works with package/resource/tags unless more work is done.
#2812 1344507240000000 shevski Current text does need updating, but I think we need an explanation along the lines of: "Groups allow you to group together datasets under an organisation (for example, the Department of Health) or topic (e.g. Transport, Health) to make it easier for users to browse datasets by theme. Groups also enable you to assign roles and authorisation to members of the group - i.e. individuals can be given the right to publish datasets from a particular organisation." But even that could be clearer. Mark's text could be misleading since we haven't currently implemented private datasets or the right auth settings.