{23} Trac comments (3729 matches)

Results (2201 - 2300 of 3729)

Ticket Posixtime Author Newvalue
#896 1300217375000000 thejimmyg Actually, I think this should be implemented instead by considering a different CKAN instance as a potential harvest source and then harvesting from it. By thinking of it this way in the first instance, we effectively get a read-only copy of other data in CKAN but one which will be kept up to date. Marking as duplicate. Discussion can now carry on via #987 Common harvesting framework.
#894 1300196388000000 thejimmyg All test CSWs have been successfully harvested from. CSW harvesting is disabled on that source so we can't harvest from it. I don't think we need a ticket for this now do we?
#893 1298293527000000 thejimmyg We don't understand the use case for this requirement. Closing for now until a use case can be demonstrated.
#892 1324314636000000 rgrp Moved to v1.6.
#892 1324402480000000 johnglover https://github.com/okfn/ckan/compare/bff538b%5E...d700680 Merged with master and deployed on test.ckan.org
#891 1318602128000000 rgrp May only do link-checker and not do full storage in this sprint.
#891 1320143424000000 johnglover Almost finished (see http://github.com/okfn/ckanext-archiver). Still to address: - check headers to see if hash / cache / max-age / expires indicates that the resource does not need to be downloaded. - add cache url to resource
#891 1320149841000000 johnglover Added cache_url and cache_last_updated to resources after archiving. Not checking for hash value in headers. This process will generally only run when a new resource is added or someone updates a URL, so we don't expect to be regularly downloading the same resource. We will need something along these lines if this is running as a regular cron job, but in that case the logic will be added to the cron job itself (probably a paster command).
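As an aside, a minimal sketch of the kind of header check discussed above; this is not the ckanext-archiver code, and the function and parameter names are illustrative only:
{{{
#!python
import urllib2

def needs_download(url, last_etag=None, last_modified=None):
    """Conditional GET: return False if the server says the resource is
    unchanged since we last archived it, so no download is needed."""
    req = urllib2.Request(url)
    if last_etag:
        req.add_header('If-None-Match', last_etag)
    if last_modified:
        req.add_header('If-Modified-Since', last_modified)
    try:
        urllib2.urlopen(req)
    except urllib2.HTTPError, e:
        if e.code == 304:  # Not Modified: the archived copy is current
            return False
        raise
    return True
}}}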
#890 1318529648000000 rgrp May want to close as invalid as obsoleted by more recent queue work.
#890 1318599247000000 kindly Invalid due to #1397; we will be using celery instead.
#889 1293715109000000 rgrp Done in cset:cdbb2e6128b3 (had added item to template earlier pulling from 'g' object but had not tied that up to config as yet). Also added test and also '''removed existing google analytics code''' -- so need to explicitly set this in config from now on.
#888 1294830297000000 Stiivi Changes to Data Proxy:
 * tests added with configurable list of known URLs
 * use brewery for transformations (included reference to brewery framework in a new vendor directory)
 * side effect: code to make google find external packages in vendor directory (from now on, all external packages should go there and be referenced from .hgsub if they are mercurial repositories)
 * changed response contents: moved from 'headers' to root, renamed 'response' to 'data', added field list as 'fields'
 * changed way of registering transformers (now a class object is used instead of a name)
 * added 'encoding' and 'dialect' parameters for CSV
 * added optional data audit (parameter 'audit')

Changes: https://bitbucket.org/Stiivi/dataproxy/changeset/fccbdd275be5
Data information: http://databrewery.org/doc/data_quality.html#brewery.dq.FieldStatistics
#888 1307352537000000 thejimmyg I don't think any progress has been made on this for a bit so I'm assigning it to me.
#888 1311773103000000 johnglover Dataproxy / Dataapi now deprecated in favour of the combination of the new QA archive / process commands and the webstore.

Changes in relation to Dataproxy / Dataapi:
 * Currently only supports CSV files, but plans to add support for excel and google docs spreadsheets soon.
 * Uses David Raznick's CSV parser instead of Brewery for parsing; handles messy CSV data better.

Changes in relation to old QA functionality:
 * decoupled archiving (downloading) and the QA process
 * added a new 'process' command which parses downloaded files and adds them to a local webstore

Closing for now; any improvements/feature requests should be in tickets relating to either the QA functionality or the webstore.
#887 1300196600000000 thejimmyg This is done. See https://bitbucket.org/okfn/ckanext-harvest At the moment we are not adding migrations to remove the harvesting tables. We are designing a harvesting refactor and will write migrations once the refactor is complete, so that instances that use harvesting get upgraded and those that don't get their harvesting tables removed. Also see #1030 for moving harvesting out of the REST API.
#886 1294916538000000 dread Being done as part of DGU ticket: https://trac.dataco.coi.gov.uk/projects/datagov/ticket/757
#885 1293278564000000 wwaites Friedrich had some mixed experiences with owslib and some German CSW endpoints: http://pudo.okfnpad.org/geodaten
#885 1293314219000000 wwaites See: https://bitbucket.org/ww/ckanext-ows (not yet possible to test against a live server from the OS)
#885 1294253497000000 wwaites done with feature-885-owslib
#884 1293319984000000 wwaites Implemented in feature-884-rmsource branch at https://bitbucket.org/ww/ckan/changeset/50cbfc5a6bee
#884 1294247654000000 thejimmyg Related to ticket #665, reopening so that I can close it once I've got it working with Drupal too.
#884 1294253563000000 wwaites See feature-884-rmsource in the main ckan repo; deleting my own repo.
#884 1296592140000000 thejimmyg OK, I've implemented an API call for this too now and all is merged into default.
#883 1300196622000000 thejimmyg Now complete.
#881 1294410574000000 thejimmyg Hello fccoelho, Could you please post the output you get from running pip? Thanks, James
#881 1296335072000000 rgrp Closing due to lack of clarification.
#880 1292945137000000 dread Fixed in ckanclient cset:e7c0af586367 and ckanext cset:82d974ab6860 with test in dgu cset:c0b2c5fd95ea
#879 1294604066000000 wwaites first cut: https://bitbucket.org/ww/ckanext-storage/src
#878 1314404621000000 rgrp A working version of this landed in ckanjs 2 weeks ago. Need to integrate it into main CKAN.
#878 1315820838000000 rgrp This is now done in feature-1294-ux-improvements-dataset. see e.g. cset:c6f7f5018b4f
#877 1297680579000000 rgrp Basic pass on an implementation (no permissions yet etc): https://bitbucket.org/okfn/ckanext-upload/changeset/9ae543f0645f
#877 1298624165000000 rgrp Various tidying in https://bitbucket.org/okfn/ckanext-upload/changeset/0fad7aa7aa97 (success messages, permissions on uploaded file - public-read) and completed permissions in https://bitbucket.org/okfn/ckanext-upload/changeset/a83ce00a1266. Still need to integrate into general workflow (e.g. create a Resource on successful upload) but that is a separate item so this ticket is now done.
#876 1292891133000000 [email protected] I have looked into this already so I can give you a head start. I am working on a project that uses many backends so I have some experience. So here is what I have found so far. == nested transactions == VDM does not support sqlite, as it uses nested transactions. I do not think vdm needs nested transactions. It can use a flush instead. Here is the patch that works. All vdm tests pass. {{{ --- a/vdm/sqlalchemy/base.py Sat Sep 11 23:06:26 2010 +0000 +++ b/vdm/sqlalchemy/base.py Mon Dec 20 16:16:34 2010 +0000 @@ -40,9 +40,8 @@ self.setattr(session, 'HEAD', True) self.setattr(session, 'revision', revision) if revision.id is None: - session.begin_nested() session.add(revision) - session.commit() + session.flush() }}} == indexes == The index file 021_postgres_upgrade.sql in the migrate repository will not run as it uses syntax particular to postgres. Another will need to be made thats similar. sqlite does not support complex indexes like upper(text), so a work around will need to be found. == unicode == The harvesting returns utf8 encoded strings and pysqlite dbapi only supports python unicode objects (as far as I can tell). There will need to be a process in converting all strings that get into the database with string.decode("utf8") == dates == Have not looked into this one too much. However, as sqlite stores everything as strings the timestamps appear to be failing on conversion back into python. I have solved the above two issues before by adding attribute extensions to sqlalchemy mappers to do the conversions without effecting too much code. == in memory sqlite == Some tests need to change in order to make sure the database is created first because the database gets lost each time. In the tests that I have made pass, they run in about a seventh of the time as they do on postgres. == Other things to keep in mind. == * Need a new flag in test.ini to remove full text indexing completely, or always use it with solr. * There are enough incompatibilities between the databases that you would also want to test against postgres as well, at least before a release. * I would probably upgrade sqlalchemy first, so you will not have to the changes twice. The new versions are significantly faster too. * I have submitted a patch to #868 that makes the tests run about 2.5 times as fast and I think there are more low hanging fruit if the aim is test speed.
#876 1292924064000000 sebbacon Thanks for the info! Re. nested transactions: I am getting repeated non-deterministic test failures against sqlite (and indeed postgres, but these failures appear more frequent against sqlite). One of them I seemed to be able to get rid of by eliminating the savepoint as per your first point. However, it appears that sqlite [http://www.sqlite.org/lang_savepoint.html does] support savepoints; to demonstrate it, the following test code appears to work in the latest sqlalchemy:
{{{
#!python
from sqlalchemy import *
from sqlalchemy.orm import *

db = create_engine('sqlite:///')
metadata = MetaData(db)
users = Table('users', metadata,
              Column('user_id', Integer, primary_key=True),
              Column('name', String(40)),)
users.create()

class User(object):
    pass

usermapper = mapper(User, users)
Session = sessionmaker()
session = Session()

fred = User()
fred.name = "Fred"
sue = User()
sue.name = "Sue"
amy = User()
amy.name = "Amy"

session.add(fred)
session.add(amy)
session.begin_nested()
session.add(sue)
session.rollback()
session.commit()

assert session.query(User).count() == 2
print "OK"
}}}
So, while I agree they're not needed, I'm not sure they're a problem. What do you think? Also, have you seen non-deterministic test errors like this?
#876 1292939711000000 [email protected] I have read quite a lot of people having problems with savepoints with sqlite and thought they were not supported on sqlalchemy. They are at least not consistant with postgres ones. I may well be out of date on this. Here is an [http://groups.google.com/group/sqlalchemy/browse_thread/thread/dc9d1b61044bf730/65a62a33ec313842?lnk=gst&q=no+such+savepoint#65a62a33ec313842 example] even though its a bit old. I did get some non deterninistic errors, the above seemed to fix them. A failed subtransaction is not handled well by sqlalchemy and I think this causes knockon effects due to the unresolved transaction. I would stay well clear of them entirely if possible. What are the errors you are getting?? My 2 cents. ignore me at will... I would think about using a different backend for testing than production. [http://stackoverflow.com/questions/2716847/sqlalchemy-sqlite-for-testing-and-postgresql-for-development-how-to-port look here]. If you want to support both then you should test on both. There are simple ways to scrape a few more minutes off the tests. If you want real speed, then a multiprocess solution (with a database per core) would be sensible if a bit tricky.
#876 1293188088000000 anonymous Thanks for your feedback, very useful. I don't really agree with the people in the linked discussion who say it's pointless testing against a different database from production. The goal here is to make it easy enough for people to run as many tests as possible that they actually do so. Even 15 minutes is too long in that case. With sqlite we can get it in at under 5 minutes. I would also like to identify the longest-running tests (which I would characterise as "functional" or "integration" tests) and make them run as a separate suite, and then encourage a culture of writing true unit tests before functional tests, so that running unit tests can happen in 1 minute and be part of the regular development cycle. That's no replacement for also running *all* tests periodically, and also running tests under postgres, which we can continue to do on the continuous integration server. Longer term I agree that it would be better to run local tests against postgres too, but that will, I think, involve refactoring many of the tests.
#876 1293218714000000 anonymous I agree with all your points about testing apart from using sqlite, especially splitting out the functional tests and continuous integration.

> Longer term I agree that it would be better to run local tests against postgres too, but that will I think involve refactoring many of the tests.

Well, there are two options:
 1. refactor the tests
 2. refactor the code to use sqlite and postgres

It is a value judgment as to which is more complicated. I personally think 2 is more complicated, but may be wrong on that. The real danger with 2 is that you are needlessly adding complication to production code; with 1 you are only changing the tests. Upgrading to sqlalchemy 0.5+ should happen first regardless. You will need up-to-date documentation.

There is another option too. Put the postgres data directory on tempfs/ramfs and turn off durability: [http://www.postgresql.org/docs/9.0/static/non-durability.html here]. We would need a way to init the db before the tests are run (or at boot). This may be the best of both worlds. Anyway, Happy xmas!!
#876 1294753889000000 dread Seb and David have completed this I believe. I've merged the changes into core CKAN in cset:68d63fda4814.
#875 1297085261000000 pudo So far opting for route 1) (not implementing facets), therefore this can be closed!
#873 1292939274000000 dread Fixed in dgu cset:8bfb867e247a and ckanext cset:fabd31544a73
#872 1292957110000000 dread Done in ckanext cset:8a7e931ef37c
#871 1292322361000000 wwaites Regarding rkhunter -- yes, eu1 appears to be clean. Regarding the upgrade -- upgraded to 4.72 from backports which, looking more closely, appears to still have the privilege escalation bug but not the remote root exploit. Regarding exim on other hosts, there is no reason for them to be running a full mta; something like ssmtp should suffice. Also well worth thinking about moving to postfix. It's much easier to configure and I haven't known it to have any comparable bugs in the decade or so I've been running it. In fact I've never seen anyone actually use exim before...
#871 1292323656000000 nils.toedtmann Re postfix: I second ww. I like to run a super-simple local MTA (e.g. "nullmailer") on all but one server, using a central postfix (or a send-only GMail account) as smarthost. Have been happy with postfix for >10 years; it's straightforward and rock solid.
#871 1296340558000000 rgrp This is not a ckan issue; it should have been on http://knowledgeforge.net/okfn/tasks
#870 1294862485000000 [email protected] A patch is available here. https://bitbucket.org/kindly/ckan/changeset/9a1d6f55587b
#870 1294914243000000 anonymous Merged into default in cset:54ae110094be
#869 1292059662000000 rgrp Closed by cset:83734b5e251c which implements an IConfigurer interface.
#868 1292890251000000 [email protected] Below is a patch to make the tests run at least 2.5 times faster (about 15 mins on my old laptop). Instead of dropping the tables each time, it just deletes everything in them, using a low level connection. All the tests pass this way. It's a surprisingly clean patch. Here are a few points concerning it. * I tested truncating the tables but it's slower. If there are any big tables in the tests this way is the fastest (faster than drop). * The sequences (id columns) will start from where they left off. * I also investigated making postgres template database and cloning it, but the complication was not worth it. * sqlalchemy iterates the tables in reverse dependency order, which make this possible. * I targeted rebuild_db as that what most of the tests I saw where using, however I have not checked all tests to see if they all are. * There is a slight hack on the repo object to make sure it knows that "clean_db" is coming from the tests. * I refactored init_db for code reuse. * I have not done a version check. sqlalchemy >= 0.5 do this in a different way as outlined in the comments. {{{ diff -r 7f2239b0f743 ckan/model/__init__.py --- a/ckan/model/__init__.py Fri Dec 17 10:34:47 2010 +0000 +++ b/ckan/model/__init__.py Mon Dec 20 23:25:04 2010 +0000 @@ -41,6 +41,9 @@ def init_db(self): super(Repository, self).init_db() + self.add_initial_data() + + def add_initial_data(self): # assume if this exists everything else does too if not User.by_name(PSEUDO_USER__VISITOR): visitor = User(name=PSEUDO_USER__VISITOR) @@ -69,6 +72,26 @@ import migrate.versioning.api as mig version = mig.version(self.migrate_repository) return version + + def clean_db(self): + # delete only added for tests + if hasattr(self, "delete_only") and self.delete_only: + self.delete_all() + else: + super(Repository, self).clean_db() + + def delete_all(self): + + self.session.remove() + ## use raw connection for performance + connection = self.session.connection() + ## sqla sorts in reverse dependancy order. + ## in >= 0.5 use reversed(metadata.sorted_tables()) instead of table_iterator + for table in self.metadata.table_iterator(): + connection.execute('delete from "%s"' % table.name) + self.session.commit() + + self.add_initial_data() def setup_migration_version_control(self, version=None): import migrate.versioning.exceptions diff -r 7f2239b0f743 ckan/tests/__init__.py --- a/ckan/tests/__init__.py Fri Dec 17 10:34:47 2010 +0000 +++ b/ckan/tests/__init__.py Mon Dec 20 23:25:04 2010 +0000 @@ -55,6 +55,7 @@ import ckan.model as model model.repo.rebuild_db() +model.repo.delete_only = True class BaseCase(object): }}}
#868 1292921428000000 sebbacon See also #867. Thanks for the patch.
#868 1292921456000000 sebbacon I mean #876, of course.
#868 1293401808000000 anonymous Attached are the timings I have for the tests after I upgraded to 0.57 and after a few simple test tweaks. They do not include setup and teardown time at the class level as they are not assignable to individual tests.
#868 1294753596000000 dread I've merged in David Raznick's patches:
 * no_autoflush_deletes.diff cset:2b9591172182
 * postgres_speed.diff cset:fa1b7e3a4e0f
 * vdm_purge_no_autoflush.diff vdm cset 8accdd0b9b7f
I've also merged in Seb's fork: cset:68d63fda4814 which closes this ticket, achieving test speeds of under 3 minutes!
#867 1291916278000000 dread Done in ckanclient cset:d40fb101aba9
#867 1292410515000000 dread Also ckan cset:c5b018bfe9bb
#867 1299866685000000 rgrp This was a breaking change for loaders code. Obviously we don't have tests for that, so it would not have been noticed... Fixed in cset:af81e54bd590/ckanclient
#865 1294414639000000 thejimmyg Isn't this in there now? Can we close? Thanks
#865 1295259773000000 pudo This has been fixed on the respective branch and merged into default.
#864 1291741028000000 memespring Haven't implemented all text changes because of worries about the impact on other themes.
#863 1295259827000000 rgrp Removing milestone as not certain when we'll do this.
#863 1338206455000000 ross UI has changed rather a lot in the last 18 months, so I am killing this bug.
#862 1291726261000000 wwaites not to mention sparql endpoint...
#861 1311168845000000 thejimmyg This ticket has been open for more than 6 months so I'm closing it.
#860 1291726067000000 wwaites http://bitbucket.org/ww/ckanrdf/changeset/67df6dc33ec4
#859 1311177461000000 thejimmyg This ticket is more than 6 months old so closing it in line with our new ticketing policy. We know that test coverage needs to be improved, particularly in the logic layer.
#858 1291734297000000 dread On branch feature-858-diff removed ckan.lib.diff as it was not being used. Merged into default in cset:f9ba1ae63ddd
#857 1311177075000000 thejimmyg This ticket is more than 6 months old so closing it in line with our new ticketing policy. We know that test coverage needs to be improved, particularly in the logic layer.
#856 1311177066000000 thejimmyg This ticket is more than 6 months old so closing it in line with our new ticketing policy. We know that test coverage needs to be improved, particularly in the logic layer.
#855 1311176988000000 thejimmyg This ticket is more than 6 months old so closing it in line with our new ticketing policy. The auth code is in the process of a major refactor anyway.
#854 1304351843000000 johnlawrenceaspden Coverage now up to 84% and 81%. Remaining untested code is error conditions, which we decided weren't worth the effort of locking down. Fixed on feature-854-tests-for-authz-groups, now merged into default.
#853 1291723143000000 wwaites Done in http://bitbucket.org/ww/ofs; need to merge back into the main ofs repo.
#853 1294047550000000 pudo Looking at the changeset, this cannot be functional yet: where is the implementation of the policy document exchange? It seems to me like this is currently adding the actual credentials to the request (self.ofs.conn.add_aws_auth_header(headers, 'PUT', path)). c.f.:
 * http://doc.s3.amazonaws.com/proposals/post.html#Form_Fields
 * http://code.google.com/apis/storage/docs/reference-methods.html#postobject
#853 1294594581000000 wwaites We don't need a policy document exchange; it's simpler than that. The "server" instance already has permissions to upload. It just calculates the headers and such that are needed (based on the "client"'s initial headers) and gives them to the "client"; the client then uploads without knowing the "server"'s credentials. The "client" never needs any of its own goostor credentials at all. The only separate step is to make the widget readable by the world. Ticket #879 is to expose this as a small set of API calls.
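For illustration, a minimal sketch of the exchange described above, assuming the old boto S3Connection API (the add_aws_auth_header call quoted in the previous comment); the function name and key placeholders are hypothetical:
{{{
#!python
from boto.s3.connection import S3Connection

# "server" side: the only place the real credentials live (placeholders)
conn = S3Connection('SERVER_ACCESS_KEY', 'SERVER_SECRET_KEY')

def sign_upload(client_headers, path):
    """Hypothetical helper: compute auth headers for a PUT to `path` on
    behalf of the "client", which never sees the secret key."""
    headers = dict(client_headers)
    # adds the Date and Authorization headers, signed with the server's key
    conn.add_aws_auth_header(headers, 'PUT', path)
    return headers
}}}
The "client" would then PUT its bytes to the storage URL with exactly these headers, never holding any credentials itself.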
#852 1291723207000000 wwaites plugin documentation: http://packages.python.org/ckan/plugins.html
#852 1295259620000000 rgrp Removing from sprint and moving to release as a master ticket.
#852 1315821628000000 rgrp Moved dataset archiving into new ticket #1327 and this is therefore now done.
#851 1292860957000000 wwaites
 * link checker above uses the queue; queue not running generally
 * quickest way forward is just to put the curate tool in a cron job and make a suitable rule; shall do this soonest
#851 1292892515000000 wwaites Ready to put into cron job. cf:
 * http://groups.google.com/group/fuxi-discussion/browse_thread/thread/47f131fc2e3817e3 (Actions)
 * http://groups.google.com/group/fuxi-discussion/browse_thread/thread/bf955620a6ae77d8 (denoted/calculated functions)
 * http://groups.google.com/group/fuxi-discussion/browse_thread/thread/71a94191e9fef384 (FuXi 1.2)
 * https://github.com/wwaites/curate/commit/042a96c1589c0fa4980aca733c64c080e02f111e (curate tool update)
#851 1292894661000000 wwaites currently running against ckan.net, adding broken_link tag if a broken link is found. perhaps something more elaborate should be done? works for now anyhow...
#851 1292941303000000 wwaites urllib2 is good for http(s) urls but not, unfortunately, for other types, most prominently ftp. Change the httpReq action to use http://curl.haxx.se/libcurl/python/
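A minimal sketch of what a pycurl-based check might look like; the curate httpReq/curlReq internals aren't shown in this ticket, so the function here is illustrative only:
{{{
#!python
import pycurl

def check_link(url):
    """Return the response code for url, or None if unreachable.
    pycurl speaks ftp as well as http(s), unlike urllib2."""
    c = pycurl.Curl()
    c.setopt(pycurl.URL, url)
    c.setopt(pycurl.NOBODY, True)          # headers only, skip the body
    c.setopt(pycurl.FOLLOWLOCATION, True)  # chase redirects
    c.setopt(pycurl.CONNECTTIMEOUT, 30)
    try:
        c.perform()
        return c.getinfo(pycurl.RESPONSE_CODE)
    except pycurl.error:
        return None                        # treat as a broken link
    finally:
        c.close()
}}}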
#851 1293025112000000 wwaites Implemented curlReq that does a curl request and returns statements similar or analogous to httpReq. Requires curate<=0.8. This in a cron job is sufficient to go through all the packages and update them with a broken link tag now:
{{{
curate -r https://github.com/wwaites/curate/raw/master/examples/tagging.n3 -s -k API_KEY
}}}
#849 1291715179000000 pudo == #846
#847 1291639903000000 rgrp Done my best to pull all the solr-related tickets and work together in one place.
#847 1292844627000000 rgrp Still have ticket:669 and ticket:874 to do (431 probably won't be done for a while).
#847 1295259902000000 pudo This is now functional, minus the index testing tool.
#846 1291715212000000 rgrp Reopening: the fix used illegal css ;) -- it is illegal to have an @import statement (as we do with extras.css) after normal css statements. We therefore need to move the core css back out into a ckan.css (or similar) file.
#846 1291719074000000 memespring It's been done on a branch and pushed to bitbucket: http://bitbucket.org/memespring/ckan/src/9e74c40ff073/ckan/public/css/style.css
#845 1291723492000000 dread This was completed on the feature-845-required-fields branch and merged into default in cset:3b5635ffaa7d
#844 1296340486000000 rgrp Looking at DNS this apparently has been fixed.
#843 1291652696000000 pudo The real name is also requested via AX/SReg extensions in OpenID so for new users who are not using google, this should usually be filled in automatically.
#843 1319710380000000 dread
 * "we should show the openid as well to distinguish between users with the same name." - when "Full name" is not distinguishable, maybe best to display the unique 'name' field as a hover-over.
 * "on account creation, the user should be redirected to their personal details page to encourage them to fill in a human readable name." - yes, you always get taken to the personal details page. We should use a flash message at this point if they have not filled in the "Full name" field, to suggest they click edit and do this.
 * "List is to long" - this has been addressed - see http://thedatahub.org/user
#843 1319721601000000 dread Ok, I misunderstood this ticket. This is referring to adding a user in e.g. http://thedatahub.org/group/authz/energy-data This UI seems to have been updated: you start typing the name, full name or openid of the person and it has a dropdown that autocompletes. This seems to be sufficient for Will's points 1 and 3. It would still be good to have a flash message on account creation to encourage people to add their full name. This is similar to #1413 so I'll close this ticket and add it there.
#842 1296468313000000 rgrp Change to awaitingtriage as definitely not critical.
#842 1303474131000000 thejimmyg As a user I come to a package:
 * Have a todo count at the top that takes you down to the todo list (which may say nothing todo)
 * At the bottom is a section of the package display titled "ToDo" where I see a list of all todos for the package, most recent at the top
 * If I am logged in I see a form for "Add to do" at the top of the todo section and can add one straight away. I see a "now resolved" button next to each, which goes green when you hover; when clicked, the todo fades away.
 * Otherwise I see a button that says "login to add todo" which expands out the form

The form:
 * One of the fields is category -> autocomplete the category (not constrained)
 * Add a description
 * Submit; the todo gets added via AJAX to the list at the top as the most recent todo

Model (a sketch in code follows after the next comment):

todo:
 * id
 * package_id
 * todo_category_id (required)
 * description (required)
 * date=NOW()
 * resolved=False

todo_category:
 * id
 * name

Prepopulate with: broken-resource-link, no-author, bad-format
#842 1303474228000000 thejimmyg
> Otherwise I see a button that says "login to add todo"
>
> expands out the form

Actually, rather than expanding the form, you will go away to the login page and come back to see the expanded form (question: how does this redirect you back to the bottom?)
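A sketch of the todo model specified two comments up, in SQLAlchemy declarative form; the declarative style and the package foreign-key type are assumptions for illustration, not how CKAN actually declares its tables:
{{{
#!python
import datetime
from sqlalchemy import (Column, Integer, UnicodeText, Boolean,
                        DateTime, ForeignKey)
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class TodoCategory(Base):
    __tablename__ = 'todo_category'
    id = Column(Integer, primary_key=True)
    name = Column(UnicodeText, nullable=False)

class Todo(Base):
    __tablename__ = 'todo'
    id = Column(Integer, primary_key=True)
    package_id = Column(UnicodeText, ForeignKey('package.id'))
    todo_category_id = Column(Integer, ForeignKey('todo_category.id'),
                              nullable=False)
    description = Column(UnicodeText, nullable=False)
    date = Column(DateTime, default=datetime.datetime.now)
    resolved = Column(Boolean, default=False)

# categories the ticket says to prepopulate
DEFAULT_CATEGORIES = [u'broken-resource-link', u'no-author', u'bad-format']
}}}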
#841 1300364333000000 thejimmyg See #995
#840 1291325251000000 dread dread:
] That's the proxy_cache decorator which sets a Beaker expiry time only.
] Whether that is on or off, you still end up running the
] "etag_cache(cache_key)" command in the package controller, won't you?
] And won't that either insert the Etags header or abort 304 for a
] repeat? I must admit I've not tried it; asking purely with the code in
] front of me, and welcome you pointing out where I've gone wrong.

ww: Right. So those etag calls predate the cache decorators and should probably be either moved up into the decorator (e.g. use @ckan_cache instead of @proxy_cache) or wrapped in a check for the config parameter. (And @ckan_cache could be changed to use pylons' etag_cache function rather than just setting the header...)
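A sketch of ww's suggestion, assuming Pylons' etag_cache helper and the decorator module; the decorator body and cache-key derivation are hypothetical, not CKAN's actual caching code:
{{{
#!python
from decorator import decorator
from pylons.controllers.util import etag_cache

def make_cache_key(func, args):
    # hypothetical: a real key might use the package id and revision
    return '%s:%s' % (func.__name__, repr(args[1:]))

@decorator
def ckan_cache(func, *args, **kwargs):
    # etag_cache() sets the ETag header, and aborts with "304 Not
    # Modified" if the client sent a matching If-None-Match header
    etag_cache(make_cache_key(func, args))
    return func(*args, **kwargs)
}}}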
#840 1302694123000000 dread Basic on/off switch added, tested & documented in cset:0da189c9630e on default.
#839 1291736541000000 memespring Done, with the exception of the discuss/comments page. The plugin won't install on my setup; Friedrich is looking into it.
#838 1291299716000000 memespring Package page redesign: http://ckan.org/ticket/839
#838 1291636351000000 memespring Merge css files http://ckan.org/ticket/846