Custom Query (2152 matches)

Results (901 - 903 of 2152)

#1339 (fixed): Issues / question re navl and data conversion (owner: kindly, reporter: rgrp)

Reported by rgrp, 3 years ago.

Description

I ran into a bug with the size field on resources.

  • It would not accept an empty value from the form (IMO this clearly equates to null/None)
  • This could be fixed by using ignore_empty instead of ignore_missing
  • However, using this means there was no way to empty the field (e.g. I may just want to set the size field back to null, not just change it to another value)
  • Similar issues could arise around other fields (such as last_modified ...)
    • cf. cset:645031d07b60

To solve this (cset:58acdcfe6d4e) I created an int_converter temporarily in logic/schema.py (this is almost certainly the wrong place). But I think it raises a bigger issue about the conversion layer and how it works.
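For reference, a minimal sketch of what such a converter might look like. The import path, the two-argument (value, context) signature and the Invalid exception follow the usual navl validator conventions, but they are assumptions here, not the actual code from cset:58acdcfe6d4e:

    from ckan.lib.navl.dictization_functions import Invalid

    def int_converter(value, context):
        # Treat an empty form value as None so the field can be cleared,
        # rather than being forced to keep (or change to) another value.
        if value in ('', None):
            return None
        try:
            return int(value)
        except (TypeError, ValueError):
            raise Invalid('Please enter an integer value')

The key point is the first branch: by mapping the empty string to None explicitly, the converter avoids the ignore_empty/ignore_missing trade-off described above.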

#1344 (fixed): datetime error json conversion on search (owner: kindly, reporter: kindly)

Reported by kindly, 3 years ago.

Description

JSON decoding error on search, due to a date in resources.
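A plausible cause (an assumption, not stated in the ticket) is that Python's json module cannot serialise datetime objects, so any datetime left in a resource dict breaks the conversion. A minimal sketch of the usual workaround, rendering dates as ISO 8601 strings via the default hook:

    import datetime
    import json

    def json_default(obj):
        # Fallback serialiser: render dates/datetimes as ISO 8601 strings.
        if isinstance(obj, (datetime.datetime, datetime.date)):
            return obj.isoformat()
        raise TypeError('%r is not JSON serializable' % obj)

    resource = {'url': 'http://example.com/data.csv',
                'last_modified': datetime.datetime(2011, 9, 14, 17, 49)}

    # json.dumps(resource) alone raises TypeError; with the hook it succeeds.
    print(json.dumps(resource, default=json_default))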

#1345 (fixed): Investigate possible memory leak (owner: kindly, reporter: nils.toedtmann)

Reported by nils.toedtmann, 3 years ago.

Description

There is some evidence pointing to CKAN handling memory inefficiently or even leaking under certain conditions:

When we migrated ckan.net/thedatahub.org from eu7.okfn.org (32-bit) to s053.okserver.org (64-bit) (ticket), we experienced extraordinary memory usage peaks (ticket). Here are the observed values with Apache default settings:

  • eu7, mpm-prefork: base level ~0.6GB, peaks up to 2GB
  • s055, mpm-prefork: base level ~1GB, peaks up to 4GB
  • s055, mpm-worker: base level ~1.5GB, peaks up to 6GB

William reduced the lifetime of a WSGI CKAN process from 500 requests down to 25 requests (changeset). This (together with two other tweaks) changed the situation drastically:

  • s055, mpm-event: base level ~1.4GB, no peaks

This suggests that the more requests a CKAN process serves over time, the more memory it consumes, i.e. bad memory management or a leak.

To prove this theory, one could reduce the total number of WSGI CKAN processes as much as possible without killing performance (e.g. down to processes=3), and then observe the relation between maximum-requests=25...500 and memory consumption.
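For concreteness, this is roughly how that experiment could be expressed in the Apache mod_wsgi configuration. The directives (processes, threads, maximum-requests) are standard mod_wsgi options, but the daemon name and values below are purely illustrative:

    # Illustrative only: pin CKAN to a small, fixed number of daemon
    # processes, then vary maximum-requests between 25 and 500 across
    # runs while watching memory consumption.
    WSGIDaemonProcess ckan processes=3 threads=15 maximum-requests=25
    WSGIProcessGroup ckan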

On 14/09/11 17:49, David Read wrote:

Someone to do a bit of top-down memory-use profiling would be very useful. Also useful would be something in the tests that reported what test cases use lots of memory - this could be in the nose plugin.

+1
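Picking up the nose-plugin suggestion: a rough sketch (class name and report format are hypothetical) of a plugin that records how much the process's maximum RSS grows around each test and reports the heaviest offenders:

    import resource

    from nose.plugins import Plugin

    class MemoryReport(Plugin):
        """Report which test cases grow the process's max RSS the most."""
        name = 'memory-report'

        def begin(self):
            self._usage = []

        def startTest(self, test):
            self._before = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss

        def stopTest(self, test):
            after = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
            self._usage.append((after - self._before, str(test)))

        def report(self, stream):
            stream.write('\nTests by max RSS growth (kB on Linux):\n')
            for delta, name in sorted(self._usage, reverse=True)[:10]:
                stream.write('%8d  %s\n' % (delta, name))

Registered as a nose entry point, a plugin like this would be enabled with the auto-generated --with-memory-report flag. Because ru_maxrss is a high-water mark it only ever grows, so the per-test delta is a coarse measure, but it is enough to point at the test cases worth profiling in depth.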
