{23} Trac comments (3729 matches)

Results (1401 - 1500 of 3729)

Ticket Posixtime Author Newvalue
#976 1297346954000000 dread Done in cset:1566d08d529f for release 1.3
#569 1297347214000000 thejimmyg This is now complete in ckanext-csw.
#933 1297348975000000 dread Done on enh_933_get_rid_self_in_cls_meth and merged into default cset:5750c77e580d ready for ckan 1.4.
#682 1297358266000000 dread Various improvements to ckanclient to enable this: cset:1bfefd7596d3 and cset:47fd07087547 and installed on buildbot now.
#826 1297414412000000 dread It's great that the extras are added in-line with other Resource properties (as opposed to package extras, which are a dict not off 'package' but off 'package.extras'). However, the resource extra field keys are defined in the config option "ckan.extra_resource_fields". This config option should be removed - extras need to be entirely flexible for our purposes. (In the next ticket we should make it possible to add/remove both keys and values from the Web UI or API.) It would also be good to tidy things in this direction: http://wiki.okfn.org/Coding_Standards I've merged from default to the branch enhancment_826_resource_extra_fields ready for you.
#826 1297415095000000 pudo I want to strongly support david in his call for fully flexible extras: one of my use cases for them is to store the various bits of fallout from an archiving process, such as: * last-status * last-crawled * last-etag * last-expires * last-md5 * failure-count * fall-back-url These are things needed to really archive the data well, but they have nothing to do with CKAN core ops. Essentially, the archiver is a separate concern and it need not appear in the CKAN config.
#826 1297416879000000 kindly There is nothing to stop anyone from putting any extra attributes in the extra_info field dict, so you have the flexibility you need. The config option is to add some fields that act in exactly the same way as python attributes, having the same semantics as them, i.e. if you have an extra field called alturl, you can do obj.alturl = 'fdsffs'. This is the best of both worlds as far as I can tell.
#826 1297417900000000 kindly I forgot to mention that the main advantage of the fixed fields is that we can make them properly searchable, i.e. the values are searchable. This currently does not work for package extra values as they are stored as JSON. I have added this searchability for the sql backend.
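A toy sketch of the idea described in the two comments above: a configured extra field exposed with plain attribute semantics, backed by the extra_info dict underneath. The property-based approach and the Resource class here are illustrative only, not CKAN's actual implementation.
{{{
def make_extra_property(key):
    # expose extra_info[key] as a plain attribute
    def getter(self):
        return self.extra_info.get(key)
    def setter(self, value):
        self.extra_info[key] = value
    return property(getter, setter)

class Resource(object):                 # stand-in for the real model class
    def __init__(self):
        self.extra_info = {}

# e.g. for a config setting like: ckan.extra_resource_fields = alturl
Resource.alturl = make_extra_property('alturl')

res = Resource()
res.alturl = 'http://example.org/data.csv'
assert res.extra_info['alturl'] == 'http://example.org/data.csv'
}}}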
#977 1297420742000000 dread Fixed in cset:8809fbefaf8c for 1.3 and merged to default.
#826 1297421022000000 dread Ah - I understand. I'd like it if you could make all the extra fields work as attributes and be searchable - is that possible?
#826 1297423342000000 kindly This would be too much of a hack. You do not want users overwriting any attributes on the object. If they called the attribute "__init__" it would write over the actual __init__.
#826 1297429910000000 dread Merged into default in cset:613d7bd5fc96. Future tickets in this area include: * #978 Full edit in Web UI * #979 Edit Resource extras in the API
#958 1297430414000000 dread Sorry I didn't see this and created #978. Closing this one as #978 is fuller.
#958 1297430602000000 dread Apologies, #978 is subtly different - reopening.
#980 1297501755000000 rgrp No owner :) -- also what about the issue that when trying to edit a package for which you did not have permission you got a 500 ...
#980 1297503120000000 pudo Re package editing issue, that was a "double error" fixed by the first two changesets mentioned.
#982 1297518386000000 anonymous Well we could transfer all the dependencies and version numbers to a config file for the fabfile, but we don't achieve much.
#982 1297525477000000 rgrp I don't think I understand why this has been closed (and it would surely be wontfix rather than invalid ...). Let me explain in more detail: we would move to having one and only one pip-requirements.txt in the repo at any given revision and it would simply have the correct info for whatever branch/revision it was on. At the moment we are adding extra pip-requirements-....txt files to label branch pip-requirements when we could just use the branching facility of mercurial. You'd do {{{ wget http://bitbucket.org/okfn/ckan/src/metastable/pip-requirements.txt }}} rather than {{{ wget http://bitbucket.org/okfn/ckan/src/default/pip-requirements-metastable.txt }}}
#984 1297628554000000 kindly These are the errors, listed in severity order: * group revision table joins wrong in many ways. * changemask table not added. * licence_id wrong type. * package_revision.download_url and changeset.status not dropped. * package.name and tag.name unique constraints not added. * update cascades defined wrongly. Attached are the fixes that will need to be run in ckan_migration_fixes.sql
#973 1297678851000000 rgrp Done. See http://licenses.opendefinition.org/ the new licenses release (0.6) http://pypi.python.org/pypi/licenses/0.6 and cset:b8e54186faee Actual cost: 4-6h (as I refactored the licenses package heavily)
#962 1297678925000000 rgrp In progress but not yet completed so moving to next sprint. Estimate remaining time at: 2h.
#810 1297679091000000 pudo At the moment, this crashes the groups field for some mysterious reason. Since this is going to be redundant with the new forms and the ticket has a low priority, I'm bumping this back 2 weeks.
#877 1297680579000000 rgrp Basic pass on an implementation (no permissions yet etc): https://bitbucket.org/okfn/ckanext-upload/changeset/9ae543f0645f
#984 1297682116000000 kindly fixed see https://bitbucket.org/okfn/ckan/changeset/d56ea86d4303
#965 1297682632000000 kindly added with cset:553421d05ce8
#981 1297682773000000 kindly fixed see cset:3d1f720a2e5b
#496 1297684298000000 thejimmyg The latest plan after update with PP is as follows: * CKAN will have a CSW interface * OS GeoNetwork will use this interface to determine: * New documents * Modified documents * Removed documents * OS will then handle the serving back to the EU since GeoNetwork already implements the custom filters the EU may require (their docs are ambiguous). This means the creation of the CSW server extension is now a high priority but it only needs to know about documents in the harvested_documents table. I've already implemented a REST API to get those documents (but depending on the implementation it may need changing).
#738 1297685069000000 thejimmyg It may be handy to have a history of harvested documents and also be able to delete documents but still get their revisions. The package comparison code will need to look at these revisions.
#929 1297686088000000 thejimmyg The licenses service was down; it is back up now. We should be able to cope with this situation better though. Renaming the ticket.
#427 1297686183000000 thejimmyg Documentation of the licenses service was handled in #973. Changing this ticket to be about matching the license service in UKLII.
#794 1297686491000000 thejimmyg Actually, for the time being we will match but not do anything with that matched information, until there is a clear use case. Publisher is simply the publisher for which the source was registered. Closing this ticket.
#801 1297686706000000 thejimmyg We do have a requirement for this now. The job model has changed so that it is hidden from the user. We therefore want to know the timestamp the job started and the timestamp it finished. We'll therefore need to add migrations too.
#941 1297689750000000 thejimmyg The system will need some way of plugging in the model. See ticket #989 for progress on this. Other ideas: * The apps will need an image upload * We might like a voting system for apps and ideas, potentially that could be re-used later. Let's discuss the above ideas after the basic functionality is in place.
#937 1297689781000000 sebbacon I did a very quick hacky thing at the end of last week on top of the "insert google analytics code" extension we discussed, to work out "most popular packages" based off data harvested from the Google Analytics API. Needs making generic, tests etc but could be a starting point: https://bitbucket.org/sebbacon/ckanext-googleanalytics/src
#937 1297689859000000 sebbacon (and it would also need some proper caching as the GA API is very slow)
#989 1297700363000000 kindly It would be nice to know some use cases. I think that plugins should control their own storage, or share a storage that is designed to be flexible (mongo, redis ...). We do not seem to be able to keep our current migrate repository in sync let alone add plugins to the mix.
#989 1297700818000000 pudo Kindly, I agree - it would be much preferable to have independent storage for plugins and this would be easy to do if we were using another type of storage already. As it stands, however, our storage mechanism is SQL. I think we should use it for what it is as much as possible and do the weird, vertical stuff (k,v tables, swapping to redis) only if we really need it. For everything else: let's use SQL as it was intended. Examples: * We want to develop an apps catalogue as a CKAN plugin. While we could certainly put this in Redis, there is no reason why we can't have the following table: application (id, name, title, description, author, project_url, site_url, code_url, image). * A watchlist plugin could essentially work on UUIDs alone. What you'd end up with is something like this: watch (id, user, scope_id). Re migrations, you're right, but my first intention would be to handle that separately for each plugin (i.e. they need to have their own migration repositories that they keep track of, e.g. via an apps_migrate_version table)
#989 1297706620000000 kindly I do not think we need to 'extend the model' if you intend to make the migrations separate. If the schema is decoupled, then there are no problems. So each plugin can have its own model and use sqlalchemy independently, i.e. have its own metadata, classes and mappers. They do not even have to use sqlalchemy. What I mean is that there is no need to do anything apart from the following: * Agree on a naming convention for the plugin tables (including their own migrate table each) * Agree to the rule that no plugin can add a column to an existing table. * Agree that no table can have a (database level) foreign key constraint between the core tables and itself in either direction. They *can* have implied sqlalchemy level joins. * Maybe have a hook so that on db upgrade all plugins are upgraded. Each plugin will have to redefine the tables, classes and mappers they need to join onto the core tables themselves; reusing/extending the core model will not be worth the trouble. This seems to cover your use cases and this way everything is nicely decoupled. Best of all there is very little work to do...
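A minimal sketch of a plugin model following the rules above: its own metadata, a prefixed table name, and no database-level foreign key onto the core tables. The table and column names are invented for illustration, not taken from any actual extension.
{{{
from sqlalchemy import Table, Column, MetaData, types
from sqlalchemy.orm import mapper

plugin_metadata = MetaData()            # separate from CKAN core metadata

application_table = Table('apps_application', plugin_metadata,
    Column('id', types.UnicodeText, primary_key=True),
    Column('name', types.UnicodeText),
    Column('title', types.UnicodeText),
    Column('code_url', types.UnicodeText),
    # plain column -- no ForeignKey('package.id'), per the rules above;
    # any join onto package happens at the sqlalchemy level only
    Column('package_id', types.UnicodeText),
)

class Application(object):
    pass

mapper(Application, application_table)

# plugin_metadata.create_all(engine) would be run by the plugin's own
# migration repository, not by CKAN core's migrations
}}}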
#983 1297773407000000 dread Error was tracked down to cset:214a8f9fc1c2 (26-9-2010): upgrade_db called validate_authorization_setup() which calls setup_default_user_roles(System()) Fixed in cset:9f51a1c8ac83 for 1.3 branch and merged to default.
#808 1297783658000000 pudo implemented in cset:8200247e74e9
#715 1297796784000000 pudo fixed in cset:69c4210f635a
#986 1297812401000000 wwitzel3 https://bitbucket.org/okfn/ckanext-qa/src/be57e20c60ef/
#982 1297850732000000 anonymous This is now rolled into #963. Marking as duplicate. People can get the pip requirements file from a branch over HTTP like this: https://bitbucket.org/okfn/ckan/src/<branch-name>/path/to/file/you/want
#963 1297850773000000 thejimmyg We will also remove all the different pip files as part of this, fixing #982 at the same time.
#991 1298037717000000 dread Fixed in cset:56cccbbb9d1a in time for ckan 1.3 release. This did not affect previous releases.
#992 1298060474000000 rgrp Fixed in cset:08548ef8f0e9
#430 1298283075000000 thejimmyg We are doing other refactoring that is more important than this, such as: * Plugin APIs to enable extensions * Form refactoring This ticket is 6 months old so closing.
#936 1298283172000000 thejimmyg Hi Wayne, I'm assigning this to you but it isn't a priority yet. We'll put it in a sprint when it is time to do it. Cheers, James
#435 1298284084000000 thejimmyg Haven't seen this myself and it is 6 months old now.
#482 1298284158000000 thejimmyg This is now 6 months old and there still doesn't seem to be a requirement for this. Marking wontfix and we can come back to it if it comes up again.
#963 1298284252000000 thejimmyg You can now get CKAN from the repository http://apt-alpha.ckan.org/debian
#893 1298293527000000 thejimmyg We don't understand the use case for this requirement. Closing for now until a use case can be demonstrated.
#505 1298368280000000 dread Now complete
#998 1298369862000000 dread 'paster db create' (or init) should do exactly what we ask. Surely we should simply tell people to use 'paster db upgrade' instead?
#998 1298371191000000 anonymous I am happy to get rid of paster db create altogether as a compromise? Or add a deprecation warning to it?
#998 1298372171000000 dread Yes I agree - either of those sounds good. I think I've always used 'db init' in preference anyway.
#993 1298373114000000 dread Fixed on 1.3 cset:7708c8b521ed and merged to default. Deployed to ckan.net.
#805 1298379084000000 dread Migration tests added to buildbot using kindly's new nose option #965. Also removed legacy system of migration testing in: ckan/migration/tests and updated docs. cset:643673c7db3e
#931 1298379187000000 dread This was completed in ckanclient in cset:1bfefd7596d3
#659 1298379892000000 dread Smoketest scripts exist for exactly this in ckanext. It would be great to have this running on nagios. It is as simple as running: python blackbox/smoke.py -H ckan.net blackbox/ckan.net.profile.json See here for code: https://bitbucket.org/okfn/ckanext/src/default/blackbox
#659 1298424109000000 nils.toedtmann Good idea. Listed this in my nagios ticket http://knowledgeforge.net/okfn/tasks/ticket/600
#982 1298482394000000 dread Need to do this for older branches, which aren't subject to #963.
#821 1298486642000000 dread Investigating several of these packages, it works for me (and David Raznick). For example, for ni_013_migrants_english_language_skills_and_knowledge one resource is seen created in the diffs and is displayed in CKAN, in the API and in the dumps. Yet looking at the dump from 17/11/10 when this ticket was created, the resource didn't have a URI, which by the current model is a requirement. This suggests the data was fine underneath but there were problems displaying this field, and it is now fixed.
#926 1298489517000000 rgrp @Seb: I believe this is now decided following discussion last week. Please could you detail results and close :)
#1003 1298490126000000 rgrp Work so far in http://bitbucket.org/rgrp/ckanjs
#926 1298541597000000 anonymous Goals: We want the interface for updating an object to be loosely coupled to the method for updating it. We might update a Package from:
 - HTML forms
 - a REST API (using JSON)
 - a CLI (potentially using command line arguments, YAML, XML or ini files)

Right now, data is validated using a form framework, even if we're not using forms. Data is written to the object as part of the forms framework (using the "sync()" method), making the process hard to customise and hard to discover. Instead, there should be a standard chain for:
 - deserialising untyped data (such as that received from an HTTP POST or parsed from a YAML file) into valid data
 - returning structured errors suitable for displaying to the user
 - saving the validated, deserialised data

Ideally, it would look something like:
{{{
schema = MySchemaDefinition()
raw_data = open("raw.csv", "r").read()
structured_data = to_python(raw_data, schema)
try:
    validated = validate(structured_data)
    myobject.update_from_dict(validated)
    return "Updated OK"
except ValidationError, e:
    return "Error: %s" % e.to_dict()
}}}
The inverse would be something like:
{{{
structured_data = myobject.render_to_dict()
raw_file.write(to_csv(structured_data, schema))
print "Wrote CSV %s" % to_logformat(structured_data, schema)
}}}
The question of how to generate and display forms should be completely decoupled from this. It should be easy to write forms by hand, which means it should be simple to flatten the serialized data to key, value pairs and match up any validation errors to each key. Optionally, a form widget generation framework is a nice-to-have, but not essential, as it is expected that, given enough time, the majority of forms will require manual coding to accommodate edge conditions. A form widget generation framework should be reasonably complete if it's worth trying at all, which means it should support things like:
 - nested fields (at least repeating, multi-value fieldsets)
 - widgets for dates and file uploads
 - internationalisation
...but note I'd settle for *no* widget generation.

Components of a serialisation / validation framework:
 - a simple, obvious way to define a schema
 - a lightweight validation implementation
 - a simple interface for validators
 - easy to match validation errors to data structure items

Overall, I'd like to see:
 - loose coupling, no framework dependencies
 - maximal test coverage
 - extensive documentation with readily available examples

## Findings

I looked at flatland, formencode, FormAlchemy, formish, WTForms, Django, web2py, deform/colander, formconvert and web.py.
 - **web2py** just helps build HTML from python, so isn't what I'm after at all
 - **web.py** has rudimentary validation which is only aimed at HTML forms and is hence tightly coupled with them
 - **Django**'s forms are again tightly coupled to HTML forms (and their generation)
 - **FormAlchemy** similarly couples validation to forms, and is focussed on inferring a schema from an SQLAlchemy data model
 - **WTForms** again focuses on form generation and doesn't make it easy to deserialise arbitrary data

This leaves us with Flatland, Formencode, Formish, Colander/Peppercorn/Deform, and FormConvert. Having reviewed all of these, I rejected Formencode on the basis of its patchy documentation and relatively low unit test coverage. I also found it mixed concerns a bit much for my taste. Formish felt similarly sparsely documented.
Of the remainder, I'd be happy using any of them, but opted for Colander in the end as it has the most exhaustive documentation and unit tests and has been used in production for a long time. FormConvert has a nice design but is a bit of a moving target at the moment -- worth revisiting in the future.
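A minimal Colander example in the spirit of the pseudocode above; the schema and field names are invented for illustration and are not the schema that was eventually adopted.
{{{
import colander

class PackageSchema(colander.MappingSchema):
    name = colander.SchemaNode(colander.String(),
                               validator=colander.Length(min=2))
    title = colander.SchemaNode(colander.String(), missing=u'')

schema = PackageSchema()
try:
    validated = schema.deserialize({'name': 'x', 'title': 'A title'})
except colander.Invalid, e:
    # structured errors, keyed by field name, ready to show to the user
    print e.asdict()    # e.g. {'name': 'Shorter than minimum length 2'}
}}}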
#985 1298571248000000 pudo digitialiser.dk has been assigned to Stefan Marsirske to get him into this Framework, everything else is delayed.
#944 1298571917000000 pudo Won't work on this for now - IATI is now running against plain CKAN but this is not deployed. We will shelve this for now and continue work on it once IATI requests more functionality.
#877 1298624165000000 rgrp Various tidying in https://bitbucket.org/okfn/ckanext-upload/changeset/0fad7aa7aa97 (success messages, permissions on uploaded file - public-read) and completed permissions in https://bitbucket.org/okfn/ckanext-upload/changeset/a83ce00a1266. Still need to integrate into general workflow (e.g. create a Resource on successful upload) but that is a separate item so this ticket is now done.
#1006 1298631145000000 dread This command is slightly different to your branch policy as of two weeks ago:
{{{
stable: stable code
metastable: (will soon be deprecated) for code preparing to be stable
default: development HEAD
}}}
which I prefer. My ideal would be to get rid of the confusing name 'metastable' and unneeded 'stable' and start a new branch called 'released', which will act the same as 'master' in this diagram but with a more intuitive name: http://nvie.com/posts/a-successful-git-branching-model
Then for each ckan instance we can either use the most recent release (from 'released') or choose a specific one (e.g. 'ckan-1.3' or even 'default' or 'enh-865' for getting latest features). This gives a good degree of flexibility, is more understandable to newbies and is probably a more widely understood branching model.
#1009 1298638447000000 pudo Some more ideas: * /user should list users, sorted by number of packages contributed/edited * /user/{name}/packages shows a list of packages to which users have contributed
#1010 1298733856000000 rgrp Complete, see branch feature-1010-list-users and closing changeset cset:feature-1010-list-users.
#1010 1298740889000000 rgrp Meant this cset:d2651db566ef
#1011 1298820235000000 sebbacon On reflection, may as well make a Plugin interface called IAuthorizer, which allows customisation of get_authorization_groups, get_roles, and is_authorized....
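For illustration, such an interface could look roughly like the following; the method names come from the comment above, but the import path and signatures are assumptions rather than the code that was eventually proposed or merged.
{{{
from ckan.plugins.interfaces import Interface   # assumed import path

class IAuthorizer(Interface):
    def get_authorization_groups(self, username):
        '''Return the authorization groups for this user, e.g. looked up
        from an external service.'''

    def get_roles(self, username, domain_obj):
        '''Return the roles the user has on the given domain object.'''

    def is_authorized(self, username, action, domain_obj):
        '''Decide whether the named action is allowed on domain_obj.'''
}}}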
#1011 1298821699000000 rgrp I agree that IAuthorizer is useful but not sure how it addresses the requirement of the ticket. AuthorizationGroups are already editable via the web interface at /authorizationgroup
#1008 1298821826000000 rgrp I've removed the eval in cset:1b8fedeb7ab0 - the more general question about caching should go in a separate ticket.
#1011 1298824649000000 sebbacon The "external source" is an Oauth service. We need to lookup user groups from that service.
#1011 1298825600000000 sebbacon Proposed implementation at https://bitbucket.org/okfn/ckan/changeset/187e65afb35f
#363 1298840718000000 kindly Revision objects are made every time a new revision is made, even if there are no changes.
#941 1298886391000000 wwitzel3 Continued work on the community plugin. I am still learning the layout of templates and how they work within ckan and figuring out Genshi templates, so this is where most of the delay has been. I've been able to determine a pretty good plugin layout for extensions that create models. I am currently focusing on getting the rest of the UI in place and trying to determine the best way to get colander to do the desired validation beyond ensuring the form has all the elements. After today's work, I will push what I've done and I would like to walk through the design with someone at some point.
#982 1298887980000000 dread Buildbot scripts now fixed.
#962 1298889078000000 rgrp Nearly done.
#833 1298889104000000 rgrp In progress now (sysadmin view and update nearly done).
#1003 1298889293000000 rgrp Have now started refactor to use backbone and have basic inline editing working and started on Add dataset view.
#937 1298892547000000 sebbacon The current implementation I referenced above will be a good starting point. Work that remains: * Add download click tracking to individual download links (currently we just record page views for packages, not downloads) * Somehow cache the download stats against each package (the Google API is very slow); package Redis or sqlite or similar as local storage for the extension * Expose download information in the relevant places in the UI (all users? package owners? where?) This is about 2 days' work. Unlikely to get it done in this sprint.
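One possible shape for that local cache, assuming sqlite; the table and column names here are invented for illustration.
{{{
import sqlite3, time

def save_stats(db_path, stats):
    '''stats: iterable of (package_name, download_count) from the GA API.'''
    conn = sqlite3.connect(db_path)
    conn.execute('''CREATE TABLE IF NOT EXISTS package_stats
                    (package_name TEXT PRIMARY KEY,
                     downloads INTEGER,
                     fetched_at REAL)''')
    now = time.time()
    conn.executemany('INSERT OR REPLACE INTO package_stats VALUES (?, ?, ?)',
                     [(name, count, now) for name, count in stats])
    conn.commit()
    conn.close()

def get_downloads(db_path, package_name):
    conn = sqlite3.connect(db_path)
    row = conn.execute('SELECT downloads FROM package_stats '
                       'WHERE package_name = ?', (package_name,)).fetchone()
    conn.close()
    return row[0] if row else 0
}}}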
#1015 1298902753000000 kindly The migration fixes should sort this out, but I will keep the ticket open to check.
#1000 1298912726000000 kindly fixed cset:630513f550d5
#994 1298912830000000 kindly see cset:93188d42fc12
#663 1298913603000000 kindly cset:76a77439ecd0
#1018 1299073340000000 dread Done in cset:e4167f8b3f80 on default
#427 1299164063000000 thejimmyg This is done in the latest release to test.
#496 1299164106000000 thejimmyg Will has implemented this now and OS have confirmed their export to GeoNetwork works.
#1019 1299166930000000 pudo fixed in https://bitbucket.org/okfn/ckanext-webhooks/changeset/034647931921
#971 1299245064000000 sebbacon folded into #1013
#1013 1299245157000000 sebbacon This is now resolved, but depends on core CKAN behaviour (specifically pluggable middleware and unicode-aware error pages) to function: https://bitbucket.org/okfn/ckan/changeset/c846794c1799
#1011 1299245206000000 sebbacon Merged to default https://bitbucket.org/okfn/ckan/changeset/e8217c317a8e
#1014 1299245293000000 sebbacon Run out of time for decoupling, but tests and README.txt written (including pointers about how to customise for anyone who needs to decouple in the future)
#956 1299489084000000 kindly cset:1305b9192d49
#1022 1299512991000000 pudo We're now using fileConfig to configure the logger API from the worker config file and this enables us to use SMTPHandler to send out error messages on queue processing failures. Marking as fixed.
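Roughly what the fileConfig handler section amounts to when expressed directly in Python; the mail host, addresses and logger name are placeholders rather than the real worker config.
{{{
import logging
from logging.handlers import SMTPHandler

handler = SMTPHandler(mailhost='localhost',
                      fromaddr='worker@example.org',
                      toaddrs=['sysadmin@example.org'],
                      subject='CKAN queue worker error')
handler.setLevel(logging.ERROR)          # only mail out errors and above
logging.getLogger('ckanext.queue').addHandler(handler)
}}}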
#1023 1299514847000000 pudo Tried implementing this with AMQP's msg.requeue() and channel.basic_recover() but RabbitMQ yields a NOT_IMPLEMENTED error. Bit clueless on how to proceed.