{23} Trac comments (3729 matches)

Results (1901 - 2000 of 3729)

Ticket Posixtime Author Newvalue
#868 1292890251000000 [email protected] Below is a patch to make the tests run at least 2.5 times faster (about 15 mins on my old laptop). Instead of dropping the tables each time, it just deletes everything in them, using a low-level connection. All the tests pass this way. It's a surprisingly clean patch. Here are a few points concerning it.
 * I tested truncating the tables, but it's slower. If there are any big tables in the tests, this way is the fastest (faster than drop).
 * The sequences (id columns) will start from where they left off.
 * I also investigated making a postgres template database and cloning it, but the complication was not worth it.
 * sqlalchemy iterates the tables in reverse dependency order, which makes this possible.
 * I targeted rebuild_db as that is what most of the tests I saw were using; however, I have not checked all tests to see if they all are.
 * There is a slight hack on the repo object to make sure it knows that "clean_db" is coming from the tests.
 * I refactored init_db for code reuse.
 * I have not done a version check. sqlalchemy >= 0.5 does this in a different way, as outlined in the comments (see the sketch after the patch).
{{{
diff -r 7f2239b0f743 ckan/model/__init__.py
--- a/ckan/model/__init__.py Fri Dec 17 10:34:47 2010 +0000
+++ b/ckan/model/__init__.py Mon Dec 20 23:25:04 2010 +0000
@@ -41,6 +41,9 @@
     def init_db(self):
         super(Repository, self).init_db()
+        self.add_initial_data()
+
+    def add_initial_data(self):
         # assume if this exists everything else does too
         if not User.by_name(PSEUDO_USER__VISITOR):
             visitor = User(name=PSEUDO_USER__VISITOR)
@@ -69,6 +72,26 @@
         import migrate.versioning.api as mig
         version = mig.version(self.migrate_repository)
         return version
+
+    def clean_db(self):
+        # delete only added for tests
+        if hasattr(self, "delete_only") and self.delete_only:
+            self.delete_all()
+        else:
+            super(Repository, self).clean_db()
+
+    def delete_all(self):
+
+        self.session.remove()
+        ## use raw connection for performance
+        connection = self.session.connection()
+        ## sqla sorts in reverse dependency order.
+        ## in >= 0.5 use reversed(metadata.sorted_tables) instead of table_iterator
+        for table in self.metadata.table_iterator():
+            connection.execute('delete from "%s"' % table.name)
+        self.session.commit()
+
+        self.add_initial_data()

     def setup_migration_version_control(self, version=None):
         import migrate.versioning.exceptions
diff -r 7f2239b0f743 ckan/tests/__init__.py
--- a/ckan/tests/__init__.py Fri Dec 17 10:34:47 2010 +0000
+++ b/ckan/tests/__init__.py Mon Dec 20 23:25:04 2010 +0000
@@ -55,6 +55,7 @@
 import ckan.model as model
 model.repo.rebuild_db()
+model.repo.delete_only = True

 class BaseCase(object):
}}}
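For the sqlalchemy >= 0.5 case mentioned in the last bullet, a minimal sketch of the equivalent delete loop, where the sorted_tables list replaces table_iterator(); the DSN and the reflection step are illustrative assumptions, not part of the patch:
{{{
from sqlalchemy import create_engine, MetaData

engine = create_engine('postgresql://ckan:pass@localhost/ckan_test')  # hypothetical DSN
metadata = MetaData()
metadata.reflect(bind=engine)  # or reuse the model's own metadata

connection = engine.connect()
transaction = connection.begin()
# sorted_tables is a list in dependency order, so reverse it to delete
# children before parents without violating foreign key constraints.
for table in reversed(metadata.sorted_tables):
    connection.execute(table.delete())
transaction.commit()
connection.close()
}}}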
#870 1294862485000000 [email protected] A patch is available here: https://bitbucket.org/kindly/ckan/changeset/9a1d6f55587b
#876 1292891133000000 [email protected] I have looked into this already, so I can give you a head start. I am working on a project that uses many backends, so I have some experience. Here is what I have found so far.

== nested transactions ==
VDM does not support sqlite, as it uses nested transactions. I do not think vdm needs nested transactions; it can use a flush instead. Here is the patch that works. All vdm tests pass.
{{{
--- a/vdm/sqlalchemy/base.py Sat Sep 11 23:06:26 2010 +0000
+++ b/vdm/sqlalchemy/base.py Mon Dec 20 16:16:34 2010 +0000
@@ -40,9 +40,8 @@
         self.setattr(session, 'HEAD', True)
         self.setattr(session, 'revision', revision)
         if revision.id is None:
-            session.begin_nested()
             session.add(revision)
-            session.commit()
+            session.flush()
}}}

== indexes ==
The index file 021_postgres_upgrade.sql in the migrate repository will not run, as it uses syntax particular to postgres. Another, similar one will need to be made. sqlite does not support complex indexes like upper(text), so a workaround will need to be found.

== unicode ==
The harvesting returns utf8-encoded strings, and the pysqlite dbapi only supports python unicode objects (as far as I can tell). There will need to be a process for converting all strings that get into the database with string.decode("utf8").

== dates ==
Have not looked into this one too much. However, as sqlite stores everything as strings, the timestamps appear to be failing on conversion back into python. I have solved the above two issues before by adding attribute extensions to sqlalchemy mappers to do the conversions without affecting too much code.

== in memory sqlite ==
Some tests need to change in order to make sure the database is created first, because the database gets lost each time. The tests that I have made pass run in about a seventh of the time they take on postgres.

== Other things to keep in mind ==
 * Need a new flag in test.ini to remove full text indexing completely, or always use it with solr.
 * There are enough incompatibilities between the databases that you would also want to test against postgres as well, at least before a release.
 * I would probably upgrade sqlalchemy first, so you will not have to make the changes twice. The new versions are significantly faster too.
 * I have submitted a patch to #868 that makes the tests run about 2.5 times as fast, and I think there is more low-hanging fruit if the aim is test speed.
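For the unicode section above, one possible central place for the string.decode("utf8") conversion — a sketch of my own, assuming a TypeDecorator rather than the mapper attribute extensions mentioned, and python 2 string semantics — would be:
{{{
from sqlalchemy import types

class UTF8Text(types.TypeDecorator):
    # decodes utf8 byte strings to unicode on the way into the database,
    # since pysqlite only accepts python unicode objects
    impl = types.UnicodeText

    def process_bind_param(self, value, dialect):
        if isinstance(value, str):  # harvested values may arrive utf8-encoded
            value = value.decode('utf8')
        return value
}}}
Columns would then use UTF8Text in place of UnicodeText.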
#876 1292939711000000 [email protected] I have read quite a lot about people having problems with savepoints in sqlite and thought they were not supported by sqlalchemy. They are at least not consistent with postgres ones. I may well be out of date on this. Here is an [http://groups.google.com/group/sqlalchemy/browse_thread/thread/dc9d1b61044bf730/65a62a33ec313842?lnk=gst&q=no+such+savepoint#65a62a33ec313842 example], even though it's a bit old. I did get some non-deterministic errors, and the above seemed to fix them. A failed subtransaction is not handled well by sqlalchemy, and I think this causes knock-on effects due to the unresolved transaction. I would stay well clear of them entirely if possible. What are the errors you are getting? My 2 cents, ignore me at will... I would think about using a different backend for testing than for production. [http://stackoverflow.com/questions/2716847/sqlalchemy-sqlite-for-testing-and-postgresql-for-development-how-to-port Look here]. If you want to support both then you should test on both. There are simple ways to scrape a few more minutes off the tests. If you want real speed, then a multiprocess solution (with a database per core) would be sensible, if a bit tricky.
#277 1296470458000000 kindly I think generally this is a bad idea. In a few controlled circumstances some configuration is worth changing at runtime; however, looking through the development.ini file, I see hardly anything in there that does not require a restart anyway. It would be good to have some clear examples of things that would be in the extension.
#358 1303122109000000 kindly This ticket needs a more thorough spec, which needs to include:
 * Examples of put/post requests to resources, and whether they are needed.
 * Dealing with resources that do not have a related package, in terms of authorization. Do they have a new action? How granular is the authorization? Per resource? System level? etc.
 * The rules relating to authorization for resources attached to packages, i.e. do you only get read permission when the related package has read permission, or do they have their own rules?
#363 1298840718000000 kindly Revision objects are made every time a new revision is made, even if there are no changes.
#560 1297084192000000 kindly changeset https://bitbucket.org/okfn/ckan/changeset/b899085071a8 cset:b899085071a8
#663 1298913603000000 kindly cset:76a77439ecd0
#664 1300371645000000 kindly fixed cset:a5f4a49190e2
#826 1297416879000000 kindly There is nothing to stop anyone from putting any extra attributes in the extra_info field dict, so you have the flexibility you need. The config option is to add some fields that act in exactly the same way as python attributes, having the same semantics as them; i.e. if you have an extra field called alturl, you can do obj.alturl = 'fdsffs'. This is the best of both worlds as far as I can tell.
#826 1297417900000000 kindly I forgot to mention that the main advantage of the fixed fields is that we can make them properly searchable, i.e. the values searchable. This currently does not work for package extra values, as they are stored as JSON. I have added this searchability for the sql backend.
#826 1297423342000000 kindly This would be too much of a hack. You do not want users overwriting any attributes on the object. If they called the attribute "__init__" it would write over the actual __init__.
#890 1318599247000000 kindly Invalid due to #1397; we will be using celery instead.
#920 1300319140000000 kindly The only issue here is that we are listing tags that relate to 'inactive' packages. We are already not listing tags that relate to NO packages. I have fixed this. cset:cd0347eed69f The tag in the example is related to a deleted package so should not be deleted. With this patch it no longer gets displayed.
#954 1320142744000000 kindly This is basically complete now, with documentation. The child tickets no longer seem to fit and are not essential for completion.
#956 1299489084000000 kindly cset:1305b9192d49
#965 1297682632000000 kindly added with cset:553421d05ce8
#981 1297682773000000 kindly fixed see cset:3d1f720a2e5b
#984 1297628554000000 kindly These are the errors, listed in severity order:
 * group revision tables joined wrongly in many ways.
 * changemask table not added.
 * licence_id wrong type.
 * package_revision.download_url and changeset.status not dropped.
 * package.name and tag.name unique constraints not added.
 * update cascades defined wrongly.
Attached are the fixes, which will need to be run in ckan_migration_fixes.sql.
#984 1297682116000000 kindly fixed see https://bitbucket.org/okfn/ckan/changeset/d56ea86d4303
#989 1297700363000000 kindly It would be nice to know some use cases. I think that plugins should control their own storage, or share a storage that is designed to be flexible (mongo, redis ...). We do not seem to be able to keep our current migrate repository in sync let alone add plugins to the mix.
#989 1297706620000000 kindly I do not think we need to 'extend the model' if you intend to make the migrations separate. If the schema is decoupled, then there are no problems. So each plugin can have its own model and use sqlalchemy independently, i.e. have its own metadata, classes and mappers. They do not even have to use sqlalchemy. What I mean is that there is no need to do anything apart from:
 * Agree on a naming convention for the plugin tables (including their own migrate table each).
 * Agree to the rule that no plugin can add a column to an existing table.
 * Agree that no table can have a (database-level) foreign key constraint between the core tables and itself, in either direction. They *can* have implied sqlalchemy-level joins (see the sketch below).
 * Maybe have a hook so that on db upgrade all plugins are upgraded.
Each plugin will have to redefine the tables, classes and mappers they need to join onto the core tables themselves. Reusing/extending the core model will not be worth the trouble. This seems to cover your use cases, and this way everything is nicely decoupled. Best of all, there is very little work to do...
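To make the decoupling concrete, here is a sketch of what a plugin table could look like under these rules; the table name, class and join are my own illustrative assumptions, not code from any ticket:
{{{
from sqlalchemy import Column, MetaData, Table, types, orm
import ckan.model as model

plugin_metadata = MetaData()  # separate from the core model's metadata

rating_table = Table('myplugin_rating', plugin_metadata,  # plugin-prefixed name
    Column('id', types.Integer, primary_key=True),
    Column('package_id', types.UnicodeText),  # no ForeignKey(), per the rules
    Column('rating', types.Integer),
)

class Rating(object):
    pass

orm.mapper(Rating, rating_table, properties={
    # an implied sqlalchemy-level join; nothing is enforced in the database
    'package': orm.relation(model.Package,
        primaryjoin=rating_table.c.package_id == model.package_table.c.id,
        foreign_keys=[rating_table.c.package_id]),
})
}}}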
#994 1298912830000000 kindly see cset:93188d42fc12
#1000 1298912726000000 kindly fixed cset:630513f550d5
#1012 1300352334000000 kindly This is entirely non-trivial at the moment. Hopefully it will become easier after the dictization. Simply putting the package revision object in place of the package does not work. It will obviously work for changes to the package object itself; however, there are no mappers on that object for getting out the related package_tags, resources and extras at that revision. You would have to construct a fake pkg object with some messy and painful queries using dates.
#1012 1300362584000000 kindly cset:5649d6e761fc The basic revision history is merged. I will keep this ticket open in case it is not sufficient. All it does is give a list, most recent first, of revision ids, authors and timestamps, e.g.
{{{
[{"timestamp": "2011-03-16T15:55:19.941961",
  "author": "southampton-ac-uk",
  "revision": "202e9eb8-afaa-4bc9-b8a1-a317561547ea"},
 {"timestamp": "2011-03-15T17:59:16.430804",
  "author": "southampton-ac-uk",
  "revision": "8235bd0a-d39a-49e0-887a-b0f231be8a92"}]
}}}
#1015 1298902753000000 kindly The migration fixes should sort this out, but I will keep the ticket open to check.
#1043 1300321033000000 kindly cset:c894f92c5b9a Session.remove() needed to be run before configure, because we do not want the originally created session to be used instead of the newly configured one.
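In scoped-session terms the ordering described is roughly this (a sketch, assuming the usual ckan.model.meta names):
{{{
from sqlalchemy import create_engine
from ckan.model import meta

new_engine = create_engine('sqlite://')     # illustrative engine
meta.Session.remove()                       # discard any session built with the old config
meta.Session.configure(bind=new_engine)     # sessions created from now on use the new engine
}}}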
#1046 1302777668000000 kindly cset:35ba6ad033ae
#1054 1301002995000000 kindly Looks great. Had a look and changed a minor thing, because I was not confident about the handling of the null values. I made a fake resource and did an as_dict on that to make the identity. I reverted part of my patch, as I had mucked up my version of vdm, so it was not needed.
#1078 1305828965000000 kindly complete: cset:843b78bae016
#1079 1302777496000000 kindly Complete see cset:35ba6ad033ae
#1092 1305570822000000 kindly completed cset: ca1ac4112ea2
#1109 1303862352000000 kindly It is fixed now in 1.3.4.1.
#1109 1305124697000000 kindly I am happy this is fixed in cset:445fc04333dd.
#1113 1304024611000000 kindly cset: 52a3fb230074
#1129 1305212467000000 kindly
> * There seems some misunderstanding: the change to have a logic layer has almost nothing to do with being able to remove the main stateful stuff in vdm. To be able to remove most of the stateful stuff in vdm requires us to make some other changes (re foreign keys from revision objects to continuity).

The logic layer does not automatically help out; however, it makes our life easier if we want to handle state ourselves. For example, take package tags: if we remove the stateful_m2m properties and just use normal sqlalchemy relations, we will still want statefulness (i.e. active, pending, deleted) on the package_tags table. We should update those on the table ourselves in the logic layer (see the sketch below).

> * There are other simplifications we should make to vdm before embarking on this (e.g. move to SessionExtension from MapperExtension). This is easy as that work has been done in the changeset branch and can be backported.

I agree, but even though the MapperExtension way is not great, it is very well field tested.
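For the package tags example, "handling state ourselves in the logic layer" could look something like this sketch; the function and flow are assumptions of mine, not code from the branch:
{{{
import ckan.model as model

def remove_tag(package, tag):
    # a plain column update instead of vdm's stateful_m2m machinery
    package_tag = model.Session.query(model.PackageTag).filter_by(
        package=package, tag=tag).one()
    package_tag.state = 'deleted'
    model.repo.commit()
}}}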
#1136 1305216218000000 kindly This would be nice but it is not necessary. The current mapper implementation may be nice but is not battle tested.
#1136 1305217553000000 kindly Replying to [comment:2 kindly]:
> This would be nice but it is not necessary. The current mapper implementation may be nice but is not battle tested.

I meant the mapper extension IS battle tested.
#1137 1305217449000000 kindly I am not sure this needs to be done. I think we should keep the continuity object in the table always, even if it is deleted. The querying should be done through the logic layer, so the deleted state should not be an issue. The clients should be entirely state aware. The only thing that needs to be done is to remove all statefulness from relations; those are the only things that are complicated. This would make vdm just a simple copy-on-write mechanism, with the client controlling the state.
#1140 1312491320000000 kindly fixed cset:987da68ea4f6 The package group table needed to trigger a reindex of the package.
#1148 1305969925000000 kindly complete cset:96a43c9d8bd7
#1149 1305971672000000 kindly cset:b1634d405066
#1149 1306090663000000 kindly Failed dgu tests due to as_dict not working on deleted objects. fixed by cset:c9b9cf513e44
#1193 1309768960000000 kindly fixed cset:87d6140e06ad
#1205 1309768720000000 kindly fixed in default cset:5e2070688e54 The upgrade now works locally, so it should work now.
#1215 1310335541000000 kindly fixed cset: 8a317eadbb36 If you delete the last row then it just clears it instead of deleting the row.
#1230 1311154142000000 kindly The standard way to add tables in a plugin has converged on creating the tables in the plugin's IConfigurable hook. This runs at the correct point when the application starts normally. For testing, however, there are issues due to the tables potentially being dropped, especially in the sqlite case. The fix is to make sure the IConfigurable hooks are run at the start of each set of tests, hence adding it to the ckan_nose_plugin. This is not pretty, but good enough. cset:8531b9fc1ee2
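A sketch of the plugin side (names assumed), creating its tables in the IConfigurable hook so they exist both at normal startup and when the nose plugin re-runs the configurables for tests:
{{{
from ckan.plugins import SingletonPlugin, implements, IConfigurable
from sqlalchemy import Column, MetaData, Table, types

metadata = MetaData()
myplugin_table = Table('myplugin_data', metadata,
    Column('id', types.Integer, primary_key=True),
    Column('value', types.UnicodeText),
)

class MyPlugin(SingletonPlugin):
    implements(IConfigurable)

    def configure(self, config):
        import ckan.model as model
        # create_all is idempotent: it only creates tables that are missing
        metadata.create_all(model.meta.engine)
}}}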
#1238 1311583876000000 kindly I cannot reproduce this; to me it looks like it does get the correct revision. If, for example, you look at the package a millisecond before, i.e. http://ckan.net/package/osm%402010-11-30%2000%3A21%3A49.627829 , the tags that were added in that revision are no longer there.
#1256 1312409296000000 kindly This has also caused problems in authorization for deleted packages.
#1256 1312813529000000 kindly fixed cset:357cf9377b25
#1258 1312813614000000 kindly cset:e22d4e385fc8
#1268 1317212499000000 kindly fixed cset:eebbe6071741
#1316 1338193724000000 kindly The get_or_bust function now handles this.
#1339 1316010607000000 kindly I have fixed the isodate validator and made a slightly modified int_converter for this case, in the correct place, which raises Invalid when the value is not an int. cset:a4af115116bb The thinking was that the input of these fields would be through the api, so the empty string case did not arise. These should clearly be converted to None (Null). What issues in general? Having done this lots of times before, you always end up needing to write your own little validators, as the standard ones never do what you want. That's the point of them. Look in ckan/lib/validators if you need examples. So what you did was correct...
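For reference, a small validator in that spirit — a sketch assuming the navl-style converter signature of this era, with the empty-string-to-None behaviour discussed above:
{{{
from ckan.lib.navl.dictization_functions import Invalid

def int_converter(value, context):
    # treat missing input as NULL rather than failing on ''
    if value in ('', None):
        return None
    try:
        return int(value)
    except (TypeError, ValueError):
        raise Invalid('Please enter an integer value')
}}}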
#1341 1315996368000000 kindly This was run to delete the users, and their mistaken revisions, that were created:
{{{
BEGIN;

delete from revision r where r.id in (
    select r.id from "user"
    join revision r on r.author = "user".name
    left join resource_revision rr on rr.revision_id = r.id
    left join package_revision pr on pr.revision_id = r.id
    left join group_revision gr on gr.revision_id = r.id
    where "user".created between '2011-08-15' and '2011-09-06'
      and gr.id is null and rr.id is null and pr.id is null
      and ("user".name similar to '%[0-9]'
           or "user".fullname similar to '[A-Z][a-z]*[A-Z]%')
      and "user".name not like 'http%'
);

delete from "user" u where u.id in (
    select "user".id from "user"
    left join revision r on r.author = "user".name
    where r.id is null
      and "user".created between '2011-08-15' and '2011-09-06'
      and ("user".name similar to '%[0-9]'
           or "user".fullname similar to '[A-Z][a-z]*[A-Z]%')
      and "user".name not like 'http%'
);

COMMIT;
}}}
#1344 1316078757000000 kindly fixed cset:f08215845dab
#1356 1317220284000000 kindly fixed cset:1defa48097f5
#1364 1318199135000000 kindly fixed cset:294a0b6577b0
#1376 1318091245000000 kindly fixed cset:39acf62f30b0
#1383 1319405579000000 kindly We only added the IResourceUrlChange interface because we made IDomainObjectModification include the Package.
#1383 1320141781000000 kindly fixed cset:ecfb0f8b633c
#1398 1321826380000000 kindly deployed on test.ckan.net docs added cset:47216c49fcec881f4eacc7170cb02d0a443500a2
#1408 1320141847000000 kindly fixed cset:51c7d51f3c17
#1433 1320666509000000 kindly Done but waiting to merge after 1.5 release.
#1433 1321827123000000 kindly cset: 68c37312ef70349431213465005761edf434d27e
#1474 1321826753000000 kindly fixed cset:8f3d917e24390f91db577fdbd8b8c6a1d6228505
#1477 1328016209000000 kindly This is done, pending the new superticket publisher_profile (#1669).
#1478 1323161054000000 kindly completed in about 2.8 days. cset:58b7a09328111b97da7d8ac65b4710b94dac54e3
#1487 1322095808000000 kindly fixed cset:4160859c8c9786588dbf0893981b93d9621019a9
#1522 1324333827000000 kindly fixed cset:060efe4a0e7e4ede3337623092848740c58107f9
#1531 1326155226000000 kindly final commit 8457a34dff227e50aed8833673600b22683a23a1
#1595 1325604696000000 kindly This will be fixed when the activity stream is in place of the revision list. There is no bug with the revisioning; it is just getting everything related to the group.
#1603 1338202654000000 kindly Duplicate as new theme will implement this.
#1612 1325688886000000 kindly finished a4d1f616caf3c3f2dcd963369c3e14299433097d
#1614 1325689136000000 kindly a1ca82bbc5cf89a0e308dee278f6d8ea23af8b7e
#1715 1328494878000000 kindly Mostly there; need to add types and stopword files. Need to add actual multilingual fields. Decided to translate the title as well, for relevance.
#1738 1328494709000000 kindly cset: 117dce4d64de731e7b0a3c55175a1d093f2bf540
#1739 1328495651000000 kindly fixed cset:117dce4d64de731e7b0a3c55175a1d093f2bf540
#1741 1329750838000000 kindly done cset:7825caed3361e88a245b5dd2f946da8bedb160e0
#1779 1329393759000000 kindly complete at 669a8e9f7a768b147b1668940842b72b2a302088
#1781 1329393814000000 kindly complete at 669a8e9f7a768b147b1668940842b72b2a302088
#1819 1332163324000000 kindly Currently using package_show_rest. Should be moved to just use package_show but that is another ticket.
#2198 1339771453000000 kindly Already in the docs.
#2283 1340623843000000 kindly No super tickets anymore.
#2317 1340033433000000 kindly Getting replaced as part of the new theme.
#2331 1337782689000000 kindly This is a wontfix. I think terms should be ORed by default. All modern search engines work like this. If there is an issue due to relevancy (i.e. you type multiple words and your result does not come near the top), then we should use those examples so we can tweak the results.
#2331 1338455981000000 kindly Scoring is primary in my opinion. Who cares if you have 1000 results if the top few are yours? If things are hard to find, we need to change our relevancy first. So if you have examples of where you think the scoring is wrong, then please make a ticket for that. Google, I imagine, just has a cutoff where anything under a certain score is not shown. We could do that as well, but it would take some working out what we wanted that score to be. Full "AND" queries also limit any accidental discovery, especially of rare terms. If you do not get your search exactly correct then you get nothing, which is bad. Obviously you can still AND things or +things. The correct solution to this is adding a minimum match parameter, which is a middle ground; e.g. you can say that you want to match over half of the terms, so "thing1 thing2 thing3 thing4" means you have to match at least 2. There are many options described here: http://wiki.apache.org/solr/DisMaxQParserPlugin. You can change this in an extension if you want; it just requires adding an mm field in before_search in the IPackageController interface (see the sketch below). I do not personally think we should change the default. I am closing it as wontfix, as it is trivial to change and is a philosophical difference, not a technical one.
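A sketch of that extension route, using the IPackageController interface named above, setting Solr's minimum-match parameter so that, for example, over half of the terms must match:
{{{
from ckan.plugins import SingletonPlugin, implements, IPackageController

class MinimumMatchPlugin(SingletonPlugin):
    implements(IPackageController, inherit=True)

    def before_search(self, search_params):
        # mm is passed through to Solr's DisMax query parser
        search_params['mm'] = '50%'
        return search_params
}}}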
#2331 1343862230000000 kindly As I said, I do not agree with this being a UX problem or *weird*. *Weird* is not an acceptable response to a thought-out comment. So I am closing it again.
#2402 1337302474000000 kindly cset: 12da5e9effeeb1aca0df321c355d8438647ef426
#2403 1337302700000000 kindly cset: 05144f8621ee719c345373934e70719f46e87cf6
#2581 1340711431000000 kindly This looks fine; just make doubly sure that if this flag is set, then whatever sets it explicitly sets the state, i.e. overrides what the user sent.
#2581 1340728155000000 kindly No, any state without active, pending or deleted in its name is fine.
#2877 1345600430000000 kindly
 1. This is fixed; need to reload data to test though.
 2. Fixed as far as I am concerned; limit 0 now returns the correct total. If there are no results in the filter, the total returned is 0.
 3. Want to keep postgres types. This will stop the need for mappings in both directions and makes everything simpler. We are currently not storing any metadata on tables and would like it to stay that way.
#2446 1352206530000000 johnmartin I can't get access to said project. Can someone please give me access so I can triage this?
#2451 1352206567000000 johnmartin I can't get access to said project. Can someone please give me access so I can triage this?
#2457 1352206516000000 johnmartin I can't get access to said project. Can someone please give me access so I can triage this?
#2471 1352206679000000 johnmartin I can't get context on this. Closing.
#2562 1352206599000000 johnmartin I'm closing this because I can't get context on this.