{23} Trac comments (3729 matches)

Results (2601 - 2700 of 3729)

Ticket Posixtime Author Newvalue
#794 1294248216000000 thejimmyg This work is now underway in the underlying DGU implementation.
#766 1294248066000000 thejimmyg I haven't heard this mentioned yet, but yes, let's try to implement it if possible. Appears to expose a CSW interface? http://webhelp.esri.com/geoportal_extension/9.3.1/index.htm#ext_csw_clnts.htm
#753 1294247922000000 thejimmyg Not an explicit requirement yet so closing ticket. Will re-open if needed.
#692 1294247841000000 thejimmyg CKAN doesn't need to implement this, Drupal does. Incidentally, the initial version is implemented on UAT anyway.
#739 1294247758000000 thejimmyg This needs to be implemented via an API call that Drupal can use
#884 1294247654000000 thejimmyg Related to ticket #665, reopening so that I can close it once I've got it working with Drupal too.
#665 1294247602000000 thejimmyg Duplicate of #884
#800 1294245610000000 thejimmyg Discussion for this ticket is now at #728
#801 1294233386000000 thejimmyg Perhaps this should be implemented differently so that each harvest attempt creates its own database entry with a timestamp and a status attached (a sketch follows this comment). We'll need to move to this when we move to a queue-based system.
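A hedged sketch of the per-attempt record being proposed - the table and column names here are illustrative guesses, not the eventual DGU schema:
{{{
#!python
# Illustrative SQLAlchemy table: one row per harvest attempt. The
# 'harvest_source' foreign key and the status values are assumptions.
import datetime
from sqlalchemy import (Table, Column, Integer, String, DateTime,
                        ForeignKey, MetaData)

metadata = MetaData()
harvest_attempt = Table('harvest_attempt', metadata,
    Column('id', Integer, primary_key=True),
    Column('source_id', Integer, ForeignKey('harvest_source.id')),
    Column('timestamp', DateTime, default=datetime.datetime.utcnow),
    Column('status', String(20)),  # e.g. 'queued', 'running', 'errored', 'ok'
)
}}}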
#802 1294233294000000 thejimmyg Merging with #801
#804 1294233156000000 thejimmyg This is the document I drafted after discussions with each party which was approved by JF.
#757 1294233016000000 thejimmyg Don't fully understand this ticket. Will return to it.
#728 1294232752000000 thejimmyg WAF records may always need to be re-harvested to see if they have changed. Does CSW provide any functionality that allows us to see what has changed?
#799 1294232675000000 thejimmyg How can we tell a WAF document has changed? Surely we simply need to re-harvest it to see. Moving the issue to ticket #728 to be dealt with together.
#711 1294232521000000 thejimmyg At the moment DGU has locations such as Wales, England etc. This won't change, and we won't try to merge these locations with INSPIRE datasets or bounding boxes. What we do need is a flag for INSPIRE so that the different types of package can be highlighted.
#566 1294232284000000 thejimmyg To a large extent it does now so closing this ticket in favour of more specific ones.
#563 1294231961000000 thejimmyg There is a choice here as to whether we provide an export to GeoNetwork or support a minimal CSW interface ourselves.
#679 1294166120000000 dread I added some extra bits in cset:1ca7ba29d409. Resource formats disagree between DGU and the FAQ - have sided with DGU for now as it's simpler. I think this ticket is complete now.
#510 1294138332000000 dread Completed 21st Dec 2010.
#853 1294047550000000 pudo Looking at the changeset this cannot be functional yet: where is the implementation of the policy document exchange? It seems to me that this is currently adding the actual credentials to the request (self.ofs.conn.add_aws_auth_header(headers, 'PUT', path)). cf.:
* http://doc.s3.amazonaws.com/proposals/post.html#Form_Fields
* http://code.google.com/apis/storage/docs/reference-methods.html#postobject
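For reference, the POST flow in the linked S3 docs signs a base64-encoded policy document server-side, so the secret key never travels with the request. A minimal sketch of that signing step (not the ckanext-storage code; the field handling is simplified):
{{{
#!python
# Sign an S3 POST policy document as described in the linked docs;
# policy_dict would hold the conditions (bucket, key prefix, expiry, ...).
import base64, hmac, hashlib, json

def sign_policy(policy_dict, aws_secret_key):
    policy = base64.b64encode(json.dumps(policy_dict))
    signature = base64.b64encode(
        hmac.new(aws_secret_key, policy, hashlib.sha1).digest())
    return policy, signature  # both go into the form; the secret does not
}}}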
#889 1293715109000000 rgrp Done in cset:cdbb2e6128b3 (had added item to template earlier pulling from 'g' object but had not tied that up to config as yet). Also added test and also '''removed existing google analytics code''' -- so need to explicitly set this in config from now on.
#698 1293649815000000 rgrp This ticket is complete:
* ckanext-dataapi: working /api/data/{resource-id} with tests
* https://bitbucket.org/okfn/dataproxy - the dataproxy code running at http://jsonpdataproxy.appspot.com
* functioning, but needs tests and improvements
There are a whole bunch of improvements to be done, but these will be in ticket:888.
#698 1293472613000000 anonymous Data proxy documentation: http://democracyfarm.org/dataproxy/api.html (included in sources) Updated ('s' as in structured) data proxy app: http://sdataproxy.appspot.com
#868 1293401808000000 anonymous Attached are the timings I have for the tests after I upgraded to 0.57 and after a few simple test tweaks. They do not include setup and teardown time at the class level as they are not assignable to individual tests.
#884 1293319984000000 wwaites Implemented in feature-884-rmsource branch at https://bitbucket.org/ww/ckan/changeset/50cbfc5a6bee
#885 1293314219000000 wwaites See: https://bitbucket.org/ww/ckanext-ows - not yet possible to test against a live server from the OS.
#885 1293278564000000 wwaites Friedrich had some mixed experiences with owslib and some German CSW endpoints: http://pudo.okfnpad.org/geodaten
#876 1293218714000000 anonymous I agree with all your points about testing apart from using sqlite, especially splitting out the functional tests and continuous integration.
> Longer term I agree that it would be better to run local tests against postgres too, but that will I think involve refactoring many of the tests.
Well, there are two options: 1. refactor the tests, or 2. refactor the code to work on both sqlite and postgres. It is a value judgment as to which is more complicated. I personally think 2 is more complicated, but may be wrong on that. The real danger with 2 is that you are needlessly adding complication to production code; with 1 you are only changing the tests. Upgrading to sqlalchemy 0.5+ should happen first regardless. You will need up-to-date documentation. There is another option too: put the postgres data directory on tempfs/ramfs and turn off durability [http://www.postgresql.org/docs/9.0/static/non-durability.html here]. We would need a way to run db init before the tests (or at boot). This may be the best of both worlds. Anyway, happy xmas!!
#876 1293188088000000 anonymous Thanks for your feedback, very useful. I don't really agree with the people in the linked discussion who say it's pointless testing against a different database from production. The goal here is to make it easy enough for people to run as many tests as possible that they actually do so. Even 15 minutes is too long in that case. With sqlite we can get it to under 5 minutes. I would also like to identify the longest-running tests (which I would characterise as "functional" or "integration" tests) and make them run as a separate suite, and then encourage a culture of writing true unit tests before functional tests, so that running unit tests can happen in 1 minute and be part of the regular development cycle. That's no replacement for also running *all* tests periodically, and also running tests under postgres, which we can continue to do on the continuous integration server. Longer term I agree that it would be better to run local tests against postgres too, but that will I think involve refactoring many of the tests.
#527 1293097531000000 rgrp Just to note: did this relate to IATI or ...? Any way to add component and milestone?
#851 1293025112000000 wwaites Implemented curlReq, which does a curl request and returns statements analogous to httpReq's. Requires curate<=0.8. This in a cron job is now sufficient to go through all the packages and update them with a broken link tag:
{{{
curate -r https://github.com/wwaites/curate/raw/master/examples/tagging.n3 -s -k API_KEY
}}}
#533 1292957374000000 dread Not a current issue afaik.
#735 1292957248000000 dread I have been through looking for package names with a trailing underscore, checking whether they should indeed be separate packages from those without. #872 and #873 cover the creation of the duplicates in the first place.
#872 1292957110000000 dread Done in ckanext cset:8a7e931ef37c
#510 1292955569000000 dread Now setup - just need to check the cron fires ok.
#880 1292945137000000 dread Fixed in ckanclient cset:e7c0af586367 and ckanext cset:82d974ab6860 with test in dgu cset:c0b2c5fd95ea
#851 1292941303000000 wwaites urllib2 is good for http(s) urls but not, unfortunately, for other types, most prominently ftp. Change the httpReq action to use http://curl.haxx.se/libcurl/python/ (a sketch follows this comment).
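A minimal sketch of what a pycurl-based check could look like, handling ftp:// as well as http(s):// URLs; the function name and return convention are illustrative, not the curate code:
{{{
#!python
import pycurl

def curl_check(url):
    """Return the HTTP/FTP response code for url, or None if unreachable."""
    c = pycurl.Curl()
    c.setopt(pycurl.URL, url)
    c.setopt(pycurl.NOBODY, True)          # don't fetch the body
    c.setopt(pycurl.FOLLOWLOCATION, True)  # follow redirects
    try:
        c.perform()
        return c.getinfo(pycurl.RESPONSE_CODE)
    except pycurl.error:
        return None
    finally:
        c.close()
}}}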
#876 1292939711000000 [email protected] I have read quite a lot of people having problems with savepoints with sqlite and thought they were not supported on sqlalchemy. They are at least not consistant with postgres ones. I may well be out of date on this. Here is an [http://groups.google.com/group/sqlalchemy/browse_thread/thread/dc9d1b61044bf730/65a62a33ec313842?lnk=gst&q=no+such+savepoint#65a62a33ec313842 example] even though its a bit old. I did get some non deterninistic errors, the above seemed to fix them. A failed subtransaction is not handled well by sqlalchemy and I think this causes knockon effects due to the unresolved transaction. I would stay well clear of them entirely if possible. What are the errors you are getting?? My 2 cents. ignore me at will... I would think about using a different backend for testing than production. [http://stackoverflow.com/questions/2716847/sqlalchemy-sqlite-for-testing-and-postgresql-for-development-how-to-port look here]. If you want to support both then you should test on both. There are simple ways to scrape a few more minutes off the tests. If you want real speed, then a multiprocess solution (with a database per core) would be sensible if a bit tricky.
#873 1292939274000000 dread Fixed in dgu cset:8bfb867e247a and ckanext cset:fabd31544a73
#876 1292924064000000 sebbacon Thanks for the info! Re. nested transactions: I am getting repeated non-deterministic test failures against sqlite (and indeed postgres, but these failures appear more frequent against sqlite). One of them I seemed to be able to get rid of by eliminating the savepoint as per your first point. However, it appears that sqlite [http://www.sqlite.org/lang_savepoint.html does] support savepoints; to demonstrate it, the following test code appears to work in the latest sqlalchemy:
{{{
#!python
from sqlalchemy import *
from sqlalchemy.orm import *

db = create_engine('sqlite:///')
metadata = MetaData(db)
users = Table('users', metadata,
              Column('user_id', Integer, primary_key=True),
              Column('name', String(40)),)
users.create()

class User(object):
    pass

usermapper = mapper(User, users)
Session = sessionmaker()
session = Session()

fred = User()
fred.name = "Fred"
sue = User()
sue.name = "Sue"
amy = User()
amy.name = "Amy"

session.add(fred)
session.add(amy)
session.begin_nested()
session.add(sue)
session.rollback()
session.commit()

assert session.query(User).count() == 2
print "OK"
}}}
So, while I agree they're not needed, I'm not sure they're a problem. What do you think? Also, have you seen non-deterministic test errors like this?
#868 1292921456000000 sebbacon I mean #876, of course.
#868 1292921428000000 sebbacon See also #867 Thanks for the patch.
#851 1292894661000000 wwaites Currently running against ckan.net, adding a broken_link tag if a broken link is found. Perhaps something more elaborate should be done? Works for now anyhow...
#851 1292892515000000 wwaites Ready to put into a cron job. cf:
* http://groups.google.com/group/fuxi-discussion/browse_thread/thread/47f131fc2e3817e3 (Actions)
* http://groups.google.com/group/fuxi-discussion/browse_thread/thread/bf955620a6ae77d8 (denoted/calculated functions)
* http://groups.google.com/group/fuxi-discussion/browse_thread/thread/71a94191e9fef384 (FuXi 1.2)
* https://github.com/wwaites/curate/commit/042a96c1589c0fa4980aca733c64c080e02f111e (curate tool update)
#876 1292891133000000 [email protected] I have looked into this already so I can give you a head start. I am working on a project that uses many backends so I have some experience. So here is what I have found so far. == nested transactions == VDM does not support sqlite, as it uses nested transactions. I do not think vdm needs nested transactions. It can use a flush instead. Here is the patch that works. All vdm tests pass. {{{ --- a/vdm/sqlalchemy/base.py Sat Sep 11 23:06:26 2010 +0000 +++ b/vdm/sqlalchemy/base.py Mon Dec 20 16:16:34 2010 +0000 @@ -40,9 +40,8 @@ self.setattr(session, 'HEAD', True) self.setattr(session, 'revision', revision) if revision.id is None: - session.begin_nested() session.add(revision) - session.commit() + session.flush() }}} == indexes == The index file 021_postgres_upgrade.sql in the migrate repository will not run as it uses syntax particular to postgres. Another will need to be made thats similar. sqlite does not support complex indexes like upper(text), so a work around will need to be found. == unicode == The harvesting returns utf8 encoded strings and pysqlite dbapi only supports python unicode objects (as far as I can tell). There will need to be a process in converting all strings that get into the database with string.decode("utf8") == dates == Have not looked into this one too much. However, as sqlite stores everything as strings the timestamps appear to be failing on conversion back into python. I have solved the above two issues before by adding attribute extensions to sqlalchemy mappers to do the conversions without effecting too much code. == in memory sqlite == Some tests need to change in order to make sure the database is created first because the database gets lost each time. In the tests that I have made pass, they run in about a seventh of the time as they do on postgres. == Other things to keep in mind. == * Need a new flag in test.ini to remove full text indexing completely, or always use it with solr. * There are enough incompatibilities between the databases that you would also want to test against postgres as well, at least before a release. * I would probably upgrade sqlalchemy first, so you will not have to the changes twice. The new versions are significantly faster too. * I have submitted a patch to #868 that makes the tests run about 2.5 times as fast and I think there are more low hanging fruit if the aim is test speed.
#868 1292890251000000 [email protected] Below is a patch to make the tests run at least 2.5 times faster (about 15 mins on my old laptop). Instead of dropping the tables each time, it just deletes everything in them, using a low level connection. All the tests pass this way. It's a surprisingly clean patch. Here are a few points concerning it. * I tested truncating the tables but it's slower. If there are any big tables in the tests this way is the fastest (faster than drop). * The sequences (id columns) will start from where they left off. * I also investigated making postgres template database and cloning it, but the complication was not worth it. * sqlalchemy iterates the tables in reverse dependency order, which make this possible. * I targeted rebuild_db as that what most of the tests I saw where using, however I have not checked all tests to see if they all are. * There is a slight hack on the repo object to make sure it knows that "clean_db" is coming from the tests. * I refactored init_db for code reuse. * I have not done a version check. sqlalchemy >= 0.5 do this in a different way as outlined in the comments. {{{ diff -r 7f2239b0f743 ckan/model/__init__.py --- a/ckan/model/__init__.py Fri Dec 17 10:34:47 2010 +0000 +++ b/ckan/model/__init__.py Mon Dec 20 23:25:04 2010 +0000 @@ -41,6 +41,9 @@ def init_db(self): super(Repository, self).init_db() + self.add_initial_data() + + def add_initial_data(self): # assume if this exists everything else does too if not User.by_name(PSEUDO_USER__VISITOR): visitor = User(name=PSEUDO_USER__VISITOR) @@ -69,6 +72,26 @@ import migrate.versioning.api as mig version = mig.version(self.migrate_repository) return version + + def clean_db(self): + # delete only added for tests + if hasattr(self, "delete_only") and self.delete_only: + self.delete_all() + else: + super(Repository, self).clean_db() + + def delete_all(self): + + self.session.remove() + ## use raw connection for performance + connection = self.session.connection() + ## sqla sorts in reverse dependancy order. + ## in >= 0.5 use reversed(metadata.sorted_tables()) instead of table_iterator + for table in self.metadata.table_iterator(): + connection.execute('delete from "%s"' % table.name) + self.session.commit() + + self.add_initial_data() def setup_migration_version_control(self, version=None): import migrate.versioning.exceptions diff -r 7f2239b0f743 ckan/tests/__init__.py --- a/ckan/tests/__init__.py Fri Dec 17 10:34:47 2010 +0000 +++ b/ckan/tests/__init__.py Mon Dec 20 23:25:04 2010 +0000 @@ -55,6 +55,7 @@ import ckan.model as model model.repo.rebuild_db() +model.repo.delete_only = True class BaseCase(object): }}}
#851 1292860957000000 wwaites
* The link checker above uses the queue; the queue is not generally running.
* The quickest way forward is just to put the curate tool in a cron job and make a suitable rule. Shall do this soonest.
#847 1292844627000000 rgrp Still have ticket:669 and ticket:874 to do (431 probably won't be done for a while).
#698 1292781368000000 Stiivi Pushed parameter passing; changed handling of unknown reply types on the proxy side: do not raise an exception, but reply with "200 Error - unknown reply type, use json/jsonp".
#763 1292775248000000 thejimmyg Notes from my discussion with David a while ago: following irc discussion, it looks like read-only mode is simply achieved by Apache config:
* 503 for PUT/POST operations - stops writes
* 503 for GETs to URIs containing /edit, /create, /new, /authz - stops providing forms that lead to a write
* setenv CKAN_READONLY="Undergoing maintenance 12.00 UTC for one hour", which can be picked up by CKAN to be displayed as we see fit in the future
Friedrich's current IATI sprint may link into this and grey out edit links etc.
#371 1292704716000000 nils.toedtmann Replying to [comment:9 nils.toedtmann]:
> There seem to be at least three monitors already in place: [[BR]]
Correction: at least four, we seem to have a Montastic account too: [[BR]]
On 18/12/10 15:03, [email protected] wrote:
{{{
Dear okfn,

This is a monthly reminder that you have an account on Montastic,
the website monitor service.

### ACCOUNT INFORMATION
Signup date: 2009-10-06
Email you signup with: [email protected]

### 20 WEBSITES MONITORED
[OK] - http://www.ckan.net/
[OK] - http://www.knowledgeforge.net/
[OK] - http://okfn.org/
[not monitored] - http://blog.okfn.org/
[...]

### EMAIL ALERT RECIPIENTS
- [email protected]
- [email protected]
- [email protected]
[...]

To make changes to your account or contact us, go to www.montastic.com.
[...]
}}}
#698 1292596589000000 Stiivi Here is the fork for the (json) data proxy: https://bitbucket.org/Stiivi/dataproxy I've refactored it and moved the transformations into separate modules. For each resource type there should be a module in transform/<type>_transform.py. Each module should implement ``transform(flow, url, query)`` and should return a dictionary as a result (a sketch follows this comment). Existing modules:
* transform/csv_transform - CSV files
* transform/xls_transform - Excel XLS files
If there is no resource_type module, "HTTP 200 Error Resource type not supported" is returned. You can override the URL file extension, or specify the type if the extension is missing, through the type= URL option. For example, if you have a URL that contains CSV data but the url is just foo.com/data, then you can pass: url=http://foo.com/data&type=csv Note: source refactored/updated in example/dataproxy, being tested by running locally at localhost:8000.
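A sketch of what one of these transform modules might look like; the 'flow' argument and the exact keys of the returned dictionary are assumptions based on the description above, not the dataproxy source:
{{{
#!python
# transform/csv_transform.py (illustrative)
import csv
import urllib2

def transform(flow, url, query):
    """Fetch a CSV resource and return its rows as a dictionary."""
    response = urllib2.urlopen(url)
    rows = list(csv.reader(response))
    return {
        'header': rows[0] if rows else [],
        'data': rows[1:],
        'url': url,
    }
}}}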
#734 1292587603000000 dread Found problems and ticketed: #872 #873
#679 1292587426000000 dread James has started this, but still some things to add.
#677 1292587315000000 dread Done a few weeks ago in dgu repo.
#451 1292587233000000 dread Duplicate of #742
#470 1292587187000000 dread apikey_header_name was set to X-CKAN-API-Key some time ago I believe.
#777 1292586843000000 dread James did this a while ago.
#422 1292586586000000 dread Ticket remaining is #427. No need for this story ticket now.
#502 1292586466000000 dread Not doing data4nr at the moment.
#441 1292586309000000 dread No need for requirements. Story tickets exist: #763 & #765
#867 1292410515000000 dread Also ckan cset:c5b018bfe9bb
#871 1292323656000000 nils.toedtmann Re postfix: I second ww. I like to run some super-simple local MTA (e.g. "nullmailer") on all but one server, using a central postfix (or a send-only GMail account) as a smarthost. I've been happy with postfix for >10 years; it's straightforward and rock solid.
#871 1292322361000000 wwaites Regarding rkhunter -- yes, eu1 appears to be clean. Regarding the upgrade -- upgraded to 4.72 from backports which, looking more closely, appears to still have the privilege escalation bug but not the remote root exploit. Regarding exim on other hosts, there is no reason for them to be running a full MTA; something like ssmtp should suffice. Also well worth considering a move to postfix. It's much easier to configure and I haven't known it to have any comparable bugs in the decade or so I've been running it. In fact I've never seen anyone actually use exim before...
#371 1292257389000000 nils.toedtmann The nagios fork [http://www.opsview.com/ OPSview] might be worth a look.
#371 1292257189000000 nils.toedtmann (I know the term "QoS" as a very specific networking term about classifying and prioritising network traffic. I assume here it means ''uptime'', ''availability'' and ''performance monitoring''?) There seem to be at least three monitors already in place:
* http://munin.okfn.org/ on eu1, monitoring eu[0-7] and us1 and gathering additional health information via locally installed daemons. Munin's notification subsystem is not configured.
* http://nagios.hmg.ckan.net/ on hmg.ckan.net, monitoring the CKAN-HMG service group (network monitoring only). Notifications are not configured (or are they?)
* We have a http://wasitup.com/ account which is watching some OKFN services (e.g. {ca,de,www}.ckan.org, {blog,www}.okfn.org) and sending loads of alerts to [email protected]. It only checks for "HTTP 200 OK" and whether the response contains a configurable string.
My 2ct: we should consolidate. What do we want?
* A webservice like https://www.pingdom.com/ ($40/month incl 30 checks and 200 SMS, $0.5/month per extra check, $0.14-20/SMS) or http://www.serverdensity.com/ ($10/server-month plus 5-10p/SMS)?
* Or run our own monitor (nagios, opsview, munin)? In the latter case we want a separate machine dedicated to monitoring only, which is not on EC2 (but e.g. at ByteMark).
We should also include root mails in the alert/notification policies. Root mails should be trimmed down to important warnings and errors only.
#698 1292239372000000 rgrp
1. Move repo to bitbucket
2. Clone James' proxy code and modify to make google spreadsheets compatible (add a test ...)
3. Update the ckanext to pass on parameters ....
4. Deploy all of this to test.ckan.net
5. Rufus: check redirects with javascript
#869 1292059662000000 rgrp Closed by cset:83734b5e251c which implements an IConfigurer interface.
#698 1292001709000000 Stiivi "draft": https://github.com/Stiivi/ckanext-dataapi - requires that the client handles HTTP 302 Redirect correctly.
#741 1291988087000000 rgrp Done in cset:49d5bd0a6a99 and cset:68522feabfeb among others. Documentation of progress on http://ckan.okfnpad.org/plugins
#867 1291916278000000 dread Done in ckanclient cset:d40fb101aba9
#269 1291897538000000 dread Licence is defaulted in CKAN cset:5bfbcd457426 (merged into default) and DGU cset:2d798e8af3d7. "replacing department with provider" is covered in ticket: https://trac.dataco.coi.gov.uk/projects/datagov/ticket/742
#698 1291859298000000 Stiivi One more note: it would be good if packages had names/identifiers as well, as referencing internal IDs from the outside world is not very good practice - they are quite volatile, mostly in regard to the expected objects. References would look like PACKAGE/RESOURCE_REFERENCE. Possible resource references (a resolver sketch follows this comment):
- 'default' - reserved keyword for 'the only resource' if there is only one, the first resource if there are more, or the one flagged 'default'
- 'latest' - to be able to access the 'latest' resource within a package (or 'actual' or 'last'?)
- alphanumeric identifier (not starting with a number)
- number - index of the resource as a human/visitor sees it on the page (not the same as the "position" attribute, as that one might contain gaps or be different - and it is in some cases); the index of a resource should be something like:
{{{
SELECT package_id, id, url,
       ROW_NUMBER() OVER (PARTITION BY package_id ORDER BY position) AS index
FROM package_resource
}}}
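An illustrative resolver for this PACKAGE/RESOURCE_REFERENCE scheme; the attribute names on the resource objects are assumptions, not CKAN's model:
{{{
#!python
def resolve_resource(package, reference):
    resources = sorted(package.resources, key=lambda r: r.position)
    if not resources:
        return None
    if reference == 'default':
        return resources[0]       # or the one flagged 'default'
    if reference == 'latest':
        return resources[-1]
    if reference.isdigit():
        return resources[int(reference) - 1]  # 1-based visitor-visible index
    for resource in resources:    # alphanumeric identifier
        if getattr(resource, 'name', None) == reference:
            return resource
    return None
}}}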
#698 1291858346000000 Stiivi I have created a "proof of concept" implementation that will use the external data proxy service when accessing:
{{{
/api/data/PACKAGE_ID
}}}
like:
{{{
http://127.0.0.1:5000/api/data/069c80f8-8476-452e-bfd4-0a9077666c14
}}}
It just works, and requires refactoring to match ckan standards. I would need help from someone who knows ckan internals better.
#698 1291851275000000 Stiivi @thejimmyg: It is a neat, simple solution. You have suggested a proxy API: ''There will be a new API at ``/api/spreadsheet?callback=jsonpcallback&url=`` '' There are two options:
1. Have a public ckan data proxy as a stand-alone service: I get the package resource URL from CKAN and pass it to the proxy.
2. Have a ckan data API (as the ticket title suggests): if I am talking to CKAN, I am getting data from CKAN. I should not care about the proxy or anything behind it, nor should I care about the original data source - I care about resource data in a format that I can process (CSV/JSON).
For a CKAN data API I would suggest something like:
{{{
/api/resource_data/RESOURCE_ID?...
}}}
or more human readable:
{{{
/api/resource_data/PACKAGE_NAME/RESOURCE_NUMBER?...
}}}
This will allow others to get only CKAN resources. Moreover, allowing only resource data to be fetched (not data from any URL) would allow us to pre-process resources in the future. First version/implementation: pass each requested resource URL to your proxy service (external, not CKAN related), which determines the file type by the file extension in the URL, and fails on an unknown or unprocessable file. /api/resource_data/PACKAGE/RESOURCE?output=jsonp&sheet=1... would be redirected to (for example): http://1.latest.jsonpdataproxy.appspot.com/?url=RESOURCE["URL"]&sheet=1... Second version/implementation: determine the file type in advance and pass it to the appropriate conversion service when requested. If you upload a document to scribd or slideshare it gets processed in the background; this can be done in CKAN after any resource change. We do not need to download the file at the moment, however what can be done is (see the sketch after this comment):
1. try a converter by URL file extension
2. try a converter by MIME type (content-type header)
3. brute-force try all converters
No need to store copies of files, just store the determined file type somewhere in the resource record (as a mime type). Also, it would be nice if any data conversion service provided output in both JSON and CSV. Then we would be able to have a "Download CSV" link directly on the CKAN web page for browsing users: /api/resource_data/PACKAGE/RESOURCE?output=csv...
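The three fallback steps could be as simple as the following; CONVERTERS and the surrounding names are hypothetical, used only to make the flow concrete:
{{{
#!python
import mimetypes

CONVERTERS = {}  # e.g. {'csv': csv_converter, 'xls': xls_converter}

def find_converter(url, content_type=None):
    # 1. try a converter by URL file extension
    ext = url.rsplit('.', 1)[-1].lower()
    if ext in CONVERTERS:
        return CONVERTERS[ext]
    # 2. try a converter by MIME type (content-type header)
    if content_type:
        guessed = mimetypes.guess_extension(content_type.split(';')[0])
        if guessed and guessed.lstrip('.') in CONVERTERS:
            return CONVERTERS[guessed.lstrip('.')]
    # 3. brute-force: signal the caller to try all converters in turn
    return None
}}}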
#836 1291851041000000 rgrp Fixed in cset:2f6d54341b47 and branch feature-286-siteurl
#698 1291832133000000 thejimmyg Actually we've implemented a first version which doesn't store the data. See this post: http://blog.ckan.org/2010/12/04/open-data-day-announcing-ckan-data-proxy/ You can get data like this: http://1.latest.jsonpdataproxy.appspot.com/?sheet=1&indent=4&url=http://research.dwp.gov.uk/asd/asd4/r1_values.xls
#368 1291831811000000 thejimmyg I don't have enough information to debug this problem. I'm assuming that since this has been a while, the problem is solved? If not, please re-open the ticket and add your contact details.
#345 1291831615000000 thejimmyg This is a bit out of date. We have moved to a system of "stable" and "default" branches with feature branches for features, bugs and tickets. We already have default and stable tested by buildbot.
#321 1291831399000000 anonymous This has now been superseded by this proposal: #787
#316 1291831177000000 thejimmyg I've just tested this on ckan.net and it gives a sensible message: "There was an error while searching. Please try another search term."
#294 1291830960000000 thejimmyg Duplicate of #812
#269 1291830780000000 thejimmyg Just discussed this with Evan...
* notes field could use a WYSIWYG? No, Evan wants to discourage fancy features; plain text/markdown is fine
* auto-complete on tags - DONE
* department drop-down options list interacting with user permissions - Evan is building the API we need for this now
* licenses -> a drop-down is fine; let's just use OGL as the default
So just the default licence, and replacing department with provider and via, remain to be implemented on this ticket. Evan will provide (a usage sketch follows this comment):
* organisation.one() to look up one organisation by ID
* organisation.many() to look up a list of organisations by ID all at once
* organisation.match() to match a string and return an organisation ID
* organisation.department() to take an organisation ID and return the organisation ID of the department it represents
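A usage sketch for the promised calls; the module name and return shapes are assumptions until Evan's API lands:
{{{
#!python
# Entirely hypothetical: 'organisation' stands for whatever module the
# API ends up living in; the calls mirror the list above.
org_id = organisation.match('Department of Health')  # string -> organisation ID
org = organisation.one(org_id)                       # one organisation by ID
orgs = organisation.many([org_id])                   # several at once
dept_id = organisation.department(org_id)            # owning department's ID
}}}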
#185 1291830039000000 thejimmyg This is probably no longer necessary. I've implemented JavaScript to hide the help text and allow it to be revealed by clicking "More >". This makes the form look simpler without needing to hide actual fields.
#146 1291829862000000 thejimmyg I've just tested this too and it works for me. Let's close this ticket.
#109 1291829457000000 thejimmyg This is effectively implemented by the util API: http://knowledgeforge.net/ckan/hg/file/tip/doc/api/version2.rst John has a separate proposal to move the util API into the REST API, but that is a different discussion. Here's how you can now search on tags: /api/2/util/tag/autocomplete?incomplete=ru
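For example, calling it with the standard library (the host is illustrative):
{{{
#!python
import urllib2
print urllib2.urlopen(
    'http://ckan.net/api/2/util/tag/autocomplete?incomplete=ru').read()
}}}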
#838 1291812318000000 memespring Add download formats to search results (http://ckan.org/ticket/866)
#698 1291742752000000 Stiivi I see two possible options:
* Option A: store only mirrors of source files, and have file-format-based plugins for querying the files.
* Option B: store mirrors of source files, have plugin-based loading scripts into a "common structured format", and have a single query module.
I would go with option B as it is:
- easier to implement - file-format-based transformations are simpler than file-format-based queries
- a more transparent data management process
- only one simple query module (see attached ckan-srcmirror.png)
Option B will also fit better in the broader data architecture context: http://democracyfarm.org/f/ckan/data_arch.png Concerning the API, I would suggest trying to be compatible with the google spreadsheet API: http://code.google.com/apis/spreadsheets/data/3.0/reference.html
#864 1291741028000000 memespring Haven't implemented all the text changes because I'm worried about the impact on other themes.
#839 1291736541000000 memespring Done, with the exception of the discuss/comments page. The plugin won't install on my setup; Friedrich is looking into it.
#838 1291736461000000 memespring Search results changes: http://ckan.org/ticket/864
#509 1291734435000000 dread Story no longer required. Work to do is still described in #510
#858 1291734297000000 dread On branch feature-858-diff removed ckan.lib.diff as it was not being used. Merged into default in cset:f9ba1ae63ddd
#192 1291733895000000 dread The gov form has had this for months.
#714 1291733788000000 dread This was implemented with DGU ticket 614
#363 1291733459000000 dread Changes to user properties aren't linked to a package.
#838 1291729819000000 memespring Prompt users to enter missing info - http://ckan.org/ticket/863
#862 1291726261000000 wwaites not to mention sparql endpoint...
#860 1291726067000000 wwaites http://bitbucket.org/ww/ckanrdf/changeset/67df6dc33ec4
#845 1291723492000000 dread This was completed on the feature-845-required-fields branch and merged into default in cset:3b5635ffaa7d
Note: See TracReports for help on using and creating reports.