Search Results

Search found 21702 results on 869 pages for 'large objects'.

  • How do I delete the 32k errored document?

    - by Ramkumar
    I have a number of documents. When I try to open one of them, I get an error: "field is too large (32k) or view's column & selection formulas are too large". Whenever I try to delete the document I get the same error, so I cannot delete it. We could try to get at the document via the back end, but there I cannot get a document handle: whatever I search for, the document collection count is 0. Important: I am using Notes 6.5.2. Thanks in advance.

    Read the article

  • Can Crystal Reports Scale to Fit Page

    - by Jacob Reyes
    Hello, can a Crystal Report be scaled to fit the page? I'm hoping to achieve something similar to Microsoft Excel's Scale to Fit feature, where a large spreadsheet can be scaled to fit an 8.5"x11" page (in Excel 2007, go to Page Layout > Scale to Fit). I'm looking for a way to make a large report fit onto a smaller page when printing. For example, a report designed for a Legal (8.5"x14") page should shrink when print previewed for a Letter (8.5"x11") page. My Crystal Report should be scaled to fit the page by default. I was thinking maybe there's a Crystal Reports setting or a C# code technique that I've missed. Any hint or link in the right direction is appreciated. Thanks!

    Read the article

  • Webservice and ORM Framework?

    - by Sebastian
    Does anybody know a good web framework that includes an ORM mapper and allows straightforward implementation of web services? I'm looking for a framework written in PHP or C++ with the following features (not all of them required, some will do nicely):

    - data definition in one place, used by both the database and the web service
    - WSDL generation
    - XML output/JSON output
    - boilerplate code generation

    So what I would like is a framework that lets me specify the objects and the web service functions on those objects, and then generates everything that is required, leaving me to fill in the business logic (connecting the database to the web service). Is anything like that out there?

    Background information for why I need this: I'm looking into creating a web project. The client is a rich web application that fetches all its data using AJAX; it will be completely custom made, using only a low-level JavaScript library. The server back end is supposed to serve static content and JavaScript (basically the rich web application) and to provide a RESTful web service API (which I would like to implement using the aforementioned framework).

    Read the article

  • Are we using IoC effectively?

    - by Juliet
    So my company uses the Castle Windsor IoC container, but in a way that feels "off":

    - All the data types are registered in code, not in the config file.
    - All data types are hard-coded to use one interface implementation. In fact, for nearly all given interfaces, there is and will only ever be one implementation.
    - All registered data types have a default constructor, so Windsor doesn't instantiate an object graph for any registered types.

    The people who designed the system insist the IoC container makes the system better. We have 1200+ public classes, so it's a big system, the kind where you'd expect to find a framework like Windsor. But I'm still skeptical. Is my company using IoC effectively? Is there an advantage to new'ing objects with Windsor over new'ing objects with the new keyword?

    Read the article

  • fluent interface program in Ruby

    - by intern
    We have written the following code and are trying to run it:

        class Numeric
          def gram
            self
          end
          alias_method :grams, :gram

          def of(name)
            ingredient = Ingredient.new(name)
            ingredient.quantity = self
            return ingredient
          end
        end

        class Ingredient
          def initialize(n)
            @@name = n
          end

          def quantity=(o)
            @@quantity = o
            return @@quantity
          end

          def name
            return @@name
          end

          def quantity
            return @@quantity
          end
        end

        e = 42.grams.of("Test")
        a = Ingredient.new("Testjio")
        puts e.quantity
        a.quantity = 90
        puts a.quantity
        puts e.quantity

    The problem we are facing is that the output of puts a.quantity and puts e.quantity is the same even though the objects are different. What we observed is that the second object, a, replaces the value of the first object, e. The output comes out as 42 90 90, but the required output is 42 90 42. Can anyone suggest why this is happening? It is not replacing the object, as the object IDs are different; only the values of the objects are replaced.

    Read the article

  • Django: order by count of a ForeignKey field?

    - by AP257
    This is almost certainly a duplicate question, in which case apologies, but I've been searching for around half an hour on SO and can't find the answer here. I'm probably using the wrong search terms, sorry. I have a User model and a Submission model. Each Submission has a ForeignKey field called uploaded_by for the User who uploaded it:

        class Submission(models.Model):
            uploaded_by = models.ForeignKey('User')

        class User(models.Model):
            name = models.CharField(max_length=250)

    My question is pretty simple: how can I get a list of the three users with the most Submissions? I tried creating a num_submissions method on the User model:

        def num_submissions(self):
            num_submissions = Submission.objects.filter(uploaded_by=self).count()
            return num_submissions

    and then doing:

        top_users = User.objects.filter(problem_user=False).order_by('num_submissions')[:3]

    but this fails, as do all the other things I've tried. Can I actually do it using a smart database query? Or should I just do something more hacky in the views file?
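
    One way to express "top three users by submission count" as a single query (a sketch, not part of the original question, assuming Django's aggregation API and the default reverse relation name for Submission.uploaded_by) would be:

        from django.db.models import Count

        # Annotate each user with the number of related submissions, then order by it.
        # 'submission' is the default reverse accessor name for Submission.uploaded_by;
        # adjust it if the ForeignKey declares a related_name.
        top_users = (
            User.objects.filter(problem_user=False)
                        .annotate(num_submissions=Count('submission'))
                        .order_by('-num_submissions')[:3]
        )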

    Read the article

  • Adding a new column to Table which contains live data

    - by Ardman
    I have a large table consisting of over 60 million records, and I would like to add two new columns for data migration purposes. There are indexes on the table and some of them are large. By adding the two new columns, do I run the risk of slowing down the database while it attempts to add them, and maybe timing out? Or will it just work? I know that if I try to rearrange the columns SQL Server will ask me to drop and re-create the table, so I definitely don't want that. Is this something everyone is challenged with?

    Read the article

  • Using Core Data Concurrently and Reliably

    - by John Topley
    I'm building my first iOS app, which in theory should be pretty straightforward, but I'm having difficulty making it sufficiently bulletproof for me to feel confident submitting it to the App Store.

    Briefly, the main screen has a table view; upon selecting a row it segues to another table view that displays information relevant to the selected row in a master-detail fashion. The underlying data is retrieved as JSON data from a web service once a day and then cached in a Core Data store. The data from previous days is deleted to stop the SQLite database file from growing indefinitely. All data persistence operations are performed using Core Data, with an NSFetchedResultsController underpinning the detail table view.

    The problem I am seeing is that if you switch quickly between the master and detail screens several times whilst fresh data is being retrieved, parsed and saved, the app freezes or crashes completely. There seems to be some sort of race condition, maybe due to Core Data importing data in the background whilst the main thread is trying to perform a fetch, but I'm speculating. I've had trouble capturing any meaningful crash information; usually it's a SIGSEGV deep in the Core Data stack.

    This is the actual order of events (split between the main thread and a background thread) when the detail table view controller is loaded:

    1. viewDidLoad
    2. Get JSON data (using AFNetworking)
    3. Create child NSManagedObjectContext (MOC)
    4. Parse JSON data
    5. Insert managed objects in child MOC
    6. Save child MOC
    7. Post import completion notification
    8. Receive import completion notification
    9. Save parent MOC
    10. Perform fetch and reload table view
    11. Delete old managed objects in child MOC
    12. Save child MOC
    13. Post deletion completion notification
    14. Receive deletion completion notification
    15. Save parent MOC

    Once the AFNetworking completion block is triggered when the JSON data has arrived, a nested NSManagedObjectContext is created and passed to an "importer" object that parses the JSON data and saves the objects to the Core Data store. The importer executes using the new performBlock method introduced in iOS 5:

        NSManagedObjectContext *child = [[NSManagedObjectContext alloc] initWithConcurrencyType:NSPrivateQueueConcurrencyType];
        [child setParentContext:self.managedObjectContext];
        [child performBlock:^{
            // Create importer instance, passing it the child MOC...
        }];

    The importer object observes its own MOC's NSManagedObjectContextDidSaveNotification and then posts its own notification, which is observed by the detail table view controller. When this notification is posted, the table view controller performs a save on its own (parent) MOC. I use the same basic pattern with a "deleter" object for deleting the old data after the new data for the day has been imported. This occurs asynchronously after the new data has been fetched by the fetched results controller and the detail table view has been reloaded.

    One thing I am not doing is observing any merge notifications or locking any of the managed object contexts or the persistent store coordinator. Is this something I should be doing? I'm a bit unsure how to architect this all correctly, so I would appreciate any advice.

    Read the article

  • dynamic module creation

    - by intuited
    I'd like to dynamically create a module from a dictionary, and I'm wondering if adding an element to sys.modules is really the best way to do this. E.g.:

        context = { a: 1, b: 2 }
        import types
        test_context_module = types.ModuleType('TestContext', 'Module created to provide a context for tests')
        test_context_module.__dict__.update(context)
        import sys
        sys.modules['TestContext'] = test_context_module

    My immediate goal in this regard is to be able to provide a context for timing test execution:

        import timeit
        timeit.Timer('a + b', 'from TestContext import *')

    It seems that there are other ways to do this, since the Timer constructor takes objects as well as strings. I'm still interested in learning how to do this, though, since a) it has other potential applications, and b) I'm not sure exactly how to use objects with the Timer constructor; doing so may prove to be less appropriate than this approach in some circumstances.

    EDITS/REVELATIONS/PHOOEYS/EUREKAE:

    I've realized that the example code relating to running timing tests won't actually work, because import * only works at the module level, and the context in which that statement is executed is that of a function in the timeit module. In other words, the globals dictionary used when executing that code is that of __main__, since that's where I was when I wrote the code in the interactive shell. So that rationale for figuring this out is a bit botched, but it's still a valid question.

    I've discovered that the code run in the first set of examples has the undesirable effect that the namespace in which the newly created module's code executes is that of the module in which it was declared, not its own module. This is like way weird, and could lead to all sorts of unexpected rattlesnakeic sketchiness. So I'm pretty sure that this is not how this sort of thing is meant to be done, if it is in fact something that the Guido doth shine upon.

    The similar-but-subtly-different case of dynamically loading a module from a file that is not in Python's include path is quite easily accomplished using imp.load_source('NewModuleName', 'path/to/module/module_to_load.py'). This does load the module into sys.modules. However, this doesn't really answer my question, because really, what if you're running Python on an embedded platform with no filesystem? I'm battling a considerable case of information overload at the moment, so I could be mistaken, but there doesn't seem to be anything in the imp module that's capable of this.

    But the question, essentially, at this point is how to set the global (i.e. module) context for an object. Maybe I should ask that more specifically? And at a larger scope, how to get Python to do this while shoehorning objects into a given module?
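
    For what it's worth, here is a minimal sketch (not from the original post; the make_module helper is hypothetical) of giving dynamically created code its own module-level namespace by executing it against the new module's __dict__:

        import sys
        import types

        def make_module(name, source, doc=None):
            # Build an empty module object, run the source with the module's own
            # __dict__ as globals so the code executes in its own namespace, and
            # register it so later imports resolve to the same object.
            module = types.ModuleType(name, doc)
            exec(source, module.__dict__)
            sys.modules[name] = module
            return module

        ctx = make_module('TestContext', 'a = 1\nb = 2')
        print(ctx.a + ctx.b)  # 3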

    Read the article

  • [C#] Finding the index of a queue that holds a member of a containing object for a given value

    - by Luke Mcneice
    I have a Queue that contains a collection of objects; one of these objects is a class called GlobalMarker that has a member called GlobalIndex. What I want to be able to do is find the index in the queue where the GlobalIndex contains a given value (this will always be unique). Simply using the .Contains function shown below returns a bool:

        RealTimeBuffer.OfType<GlobalMarker>().Select(o => o.GlobalIndex).Contains(INT_VALUE);

    How can I obtain the queue index of this match?

    Read the article

  • Log with timestamps that have millisecond accuracy & resolution in Windows C++

    - by Psychic
    I'm aware that for timing accuracy, functions like timeGetTime, timeBeginPeriod and QueryPerformanceCounter are great, giving both good resolution and accuracy, but they only measure time since boot, with no direct link to clock time.

    However, I don't want to time events as such. I want to be able to produce an exact timestamp (local time) so that I can display it in a log file, e.g. 31-12-2010 12:38:35.345, for each entry made. (I need millisecond accuracy.)

    The standard Windows time functions, like GetLocalTime, whilst they give millisecond values, don't have millisecond resolution, depending on the OS running. I'm using XP, so I can't expect much better than about a 15ms resolution.

    What I need is a way to get the best of both worlds, without creating a large overhead to get the required output. Overly large methods/calculations would mean that the logger would start to eat up too much time during its operation. What would be the best/simplest way to do this?

    Read the article

  • CUDA small kernel 2d convolution - how to do it

    - by paulAl
    I've been experimenting with CUDA kernels for days to perform a fast 2D convolution between a 500x500 image (but I could also vary the dimensions) and a very small 2D kernel (a Laplacian 2D kernel, so it's a 3x3 kernel, too small to take huge advantage of all the CUDA threads).

    I created a classic CPU implementation (two for loops, as easy as you would think) and then I started creating CUDA kernels. After a few disappointing attempts to perform a faster convolution I ended up with this code: http://www.evl.uic.edu/sjames/cs525/final.html (see the Shared Memory section). It basically lets a 16x16 thread block load all the convolution data it needs into shared memory and then performs the convolution. Nothing: the CPU is still a lot faster. I didn't try the FFT approach because the CUDA SDK states that it is efficient with large kernel sizes.

    Whether or not you read everything I wrote, my question is: how can I perform a fast 2D convolution between a relatively large image and a very small kernel (3x3) with CUDA?

    Read the article

  • Django paging object has issues with Postgresql QuerySets

    - by pivotal
    I have some Django code that runs fine on a SQLite database or on a MySQL database, but it runs into problems with Postgres, and it's making me crazy that no one has had this issue before. I think it may also be related to the way querysets are evaluated by the pager.

    In a view I have:

        def index(request, page=1):
            latest_posts = Post.objects.all().order_by('-pub_date')
            paginator = Paginator(latest_posts, 5)
            try:
                posts = paginator.page(page)
            except (EmptyPage, InvalidPage):
                posts = paginator.page(paginator.num_pages)
            return render_to_response('blog/index.html', {'posts': posts})

    And inside the template:

        {% for post in posts.object_list %}
            {# some rendering jazz #}
        {% endfor %}

    This works fine with SQLite, but Postgres gives me:

        Caught TypeError while rendering: 'NoneType' object is not callable

    To further complicate things, when I switch the queryset call to:

        latest_posts = Post.objects.all()

    everything works great. I've tried re-reading the documentation but found nothing, although I admit I'm a bit clouded by frustration at this point. What am I missing? Thanks in advance.
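
    One way to narrow down where things go wrong (a debugging sketch, not part of the original question) is to force the ordered queryset to evaluate before it ever reaches the paginator; Paginator accepts any sliceable sequence, so a plain list works:

        # If this still fails, the problem is in the query or database driver; if it
        # succeeds, the problem is in how the lazily evaluated, ordered queryset is
        # consumed during pagination or template rendering.
        latest_posts = list(Post.objects.all().order_by('-pub_date'))
        paginator = Paginator(latest_posts, 5)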

    Read the article

  • Choosing a design pattern for a class that might change its internal attributes

    - by the_drow
    I have a class that holds arbitrary state, and it's defined like this:

        class AbstractFoo
        {
        };

        template <class StatePolicy>
        class Foo : public StatePolicy, public AbstractFoo
        {
        };

    The state policy contains only protected attributes that represent the state. The state might be the same for multiple behaviors, and they can be replaced at runtime. All Foo objects have the same interface to abstract the state itself and to enable storing Foo objects in containers. I would like to find the least verbose and most maintainable way to express this.

    Read the article

  • How to find full module path of a class to import in other file

    - by Pooya
    I have a method that returns the module path of a given class name:

        def findModulePath(path, className):
            attributes = []
            for root, dirs, files in os.walk(path):
                for source in (s for s in files if s.endswith(".py")):
                    name = os.path.splitext(os.path.basename(source))[0]
                    full_name = os.path.splitext(source)[0].replace(os.path.sep, '.')
                    m = imp.load_module(full_name, *imp.find_module(name, [root]))
                    try:
                        attr = getattr(m, className)
                        attributes.append(attr)
                    except:
                        pass
            if len(attributes) <= 0:
                raise Exception, "Class %s not found" % className
            for element in attributes:
                print "%s.%s" % (element.__module__, className)

    But it does not return the full path of the module. For example, I have a Python file named "objectmodel" in the objects package, and it contains a Model class, so I call findModulePath(MyProjectPath, "Model"). It prints objectmodel.Model, but I need objects.objectmodel.Model.
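
    Since os.walk yields bare file names, full_name above never includes the package directories. One way to derive a package-qualified name instead (a sketch, not from the original question; dotted_name is a hypothetical helper) is to work from the path relative to the search root:

        import os

        def dotted_name(root_path, file_path):
            # e.g. root_path='/my/project', file_path='/my/project/objects/objectmodel.py'
            rel = os.path.relpath(file_path, root_path)                 # 'objects/objectmodel.py'
            return os.path.splitext(rel)[0].replace(os.path.sep, '.')   # 'objects.objectmodel'

        print(dotted_name('/my/project', '/my/project/objects/objectmodel.py'))
        # objects.objectmodel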

    Read the article

  • Array of pointers in Objective-C using NSArray

    - by Amir
    Hello, I am writing a program for my iPhone and have a question. Let's say I have a class named my_obj:

        class my_obj {
            NSString *name;
            NSinteger *id;
            NSinteger *foo;
            NSString *boo;
        }

    Now I allocate 100 objects of type my_obj and insert them into an array of type NSArray. Then I want to sort the array in two different ways: one by name and the second by id. I want to allocate another two arrays of type NSArray, *arraySortByName and *arraySortById. What do I need to do if I just want the sorted arrays to reference the objects in the original array, so that I get two sorted arrays that point into the original array (which doesn't change)? In other words, I don't want to allocate another 100 objects for each sorted array.

    Read the article

  • android java.lang.OutOfMemoryError

    - by xiangdream
    Hi all, when I download large data from a website, I get this error information:

        I/global  (20094): Default buffer size used in BufferedInputStream constructor. It would be better to be explicit if an 8k buffer is required.
        D/dalvikvm(20094): GC freed 6153 objects / 3650840 bytes in 335ms
        I/dalvikvm-heap(20094): Forcing collection of SoftReferences for 3599051-byte allocation
        D/dalvikvm(20094): GC freed 320 objects / 11400 bytes in 144ms
        E/dalvikvm-heap(20094): Out of memory on a 3599051-byte allocation.
        I/dalvikvm(20094): "Thread-9" prio=5 tid=17 RUNNABLE
        I/dalvikvm(20094):   | group="main" sCount=0 dsCount=0 s=0 obj=0x439b9480
        I/dalvikvm(20094):   | sysTid=25762 nice=0 sched=0/0 handle=4065496

    Can anyone help me?

    Read the article

  • Iterating dictionary indexes in django templates

    - by unclaimedbaggage
    Hi folks... I have a dictionary with embedded objects, which looks something like this:

        notes = {
            2009: [<Note: Test note>, <Note: Another test note>],
            2010: [<Note: Third test note>, <Note: Fourth test note>],
        }

    I'm trying to access each of the Note objects inside a Django template, and having a helluva time navigating to them. In short, I'm not sure how to extract by index in Django templating. The current template code is:

        <h3>Notes</h3>
        {% for year in notes %}
            {{ year }}                      {# works fine #}
            {% for note in notes.year %}
                {{ note }}                  {# returns blank #}
            {% endfor %}
        {% endfor %}

    If I replace {% for note in notes.year %} with {% for note in notes.2010 %}, things work fine, but I need that '2010' to be dynamic. Any suggestions much appreciated.
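
    One way to sidestep the dynamic-key lookup entirely (a sketch, not part of the original question; the view function and variable names are made up) is to hand the template a list of (year, notes) pairs built in the view, which the template can then unpack in a single loop such as {% for year, year_notes in notes_by_year %}:

        from django.shortcuts import render_to_response

        # Convert the dict into an ordered list of (year, [notes]) pairs in the view,
        # so the template never has to look a year up by a dynamic key.
        def notes_index(request):
            # assumes `notes` is the dict shown above
            notes_by_year = sorted(notes.items())   # [(2009, [...]), (2010, [...])]
            return render_to_response('notes.html', {'notes_by_year': notes_by_year})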

    Read the article

  • XML serialization options in .NET

    - by Borek
    I'm building a service that returns XML (no SOAP, no Atom, just plain old XML). Say that I have my domain objects already filled with data and just need to transform them to the XML format. What options do I have on .NET?

    Requirements:

    - The transformation is not 1:1. Say that I have an Address property of type Address with nested properties like Line1, City, Postcode etc. This may need to result in an XML like <xaddr city="...">Line1, Postcode</xaddr>, i.e. quite different.
    - Some XML elements/attributes are conditional; for example, if a Customer is under 18, the XML needs to contain some additional information.
    - I only need to serialize the objects to XML; the other direction (XML to objects) is not important.
    - Some technologies, e.g. Data Contracts, use .NET attributes. Other means of configuration (external XML config, buddy classes etc.) would be a plus.

    Here are the options as I see them at the moment. Corrections/additions will be very welcome.

    - String concatenation - forget it, it was a joke :)
    - LINQ to XML - complete control, but quite a lot of hand-written code; would need a good suite of unit tests.
    - View engines in ASP.NET MVC (or even Web Forms, theoretically), with the logic in controllers. It's a question how to structure it: I can have a simple rules engine in my controller(s) and one view template per possible output, or have the decision logic directly in the template. Both have upsides and downsides.
    - XML Serialization - I'm not sure about the flexibility here.
    - Data Contracts from WCF - not sure about the flexibility either, plus would they work in a simple ASP.NET MVC app (a non-WCF service)? Are they a super-set of the standard XML serialization now?
    - If it exists, some XML-to-object mapper. The more I think about it, the more I think I'm looking for something like this, but I couldn't find anything appropriate.

    Any comments / other options?

    Read the article

  • How to persist non-trivial fields in Play Framework

    - by AlexR
    I am trying to persist complex objects using Ebean in Play Framework (2.0.3). In particular, I've created a class that contains a field of type weka.classifiers.Classifier (Weka is a popular machine learning library - see http://weka.sourceforge.net/doc/weka/classifiers/Classifier.html). Classifier implements Serializable, so I hoped that I could get away with something like:

        @Entity
        @Table(name = "classifiers")
        public class ClassifierData extends Model {

            @Id
            public Long id;

            public Classifier classifier;
        }

    However, the Evolutions script suggests the following database structure:

        create table classifiers (
          id                        bigint auto_increment not null,
          constraint pk_classifiers primary key (id))
        ;

    In other words, it ignores the field of type Classifier. (The database is MySQL, if it makes any difference.) What should I do to store complex serializable objects using Ebean/Evolutions/Play Framework?

    Read the article

  • Sort/Enumerate an NSArray from somewhere in the middle?

    - by Kenny Winker
    I have an NSArray, with objects ordered like so: a b c d e f. I would like to enumerate through this array in this order: c d e f a b, i.e. starting at some point that is not the beginning, but hitting all the items. I imagine this is a matter of sorting first and then enumerating the array, but I'm not sure if that's the best technique... or, frankly, how to sort the array like this.

    Edit: Adding details. These will be relatively small arrays, varying from 3 to 10 objects. There is no concurrent access. I'd like to permanently alter the sort order of the array each time I do this operation.

    Read the article

  • AS3: Removing EventListeners without knowing amount or names

    - by DevEight
    Hello! First, a short description of how my site works: when a link is clicked, it checks whether something is already displayed on either the left or the right side of the screen (the website looks like a book, so I have a left page I want to display information on, and a right page). If there is already something showing, it hides it and displays the new object; together with this it enables all the buttons within that object (I have separate functions to set up each object). An example of such an EventListener would be:

        pathTo.Button1.addEventListener(MouseEvent.CLICK, function():void { showText(side, object); });

    What I'm trying to do is remove all the previously set EventListeners without having to create separate functions for removing the links inside every object as well. Shorter version: how do I remove all EventListeners on all objects inside another object? The only variable I want to store is the object containing everything. There are, however, not always EventListeners within the objects.

    Read the article

  • What classes should I map against with NHibernate?

    - by apollodude217
    Currently, we use NHibernate to map business objects to database tables. Said business objects enforce business rules: the set accessors will throw an exception on the spot if the contract for that property is violated. The properties also enforce relationships with other objects (sometimes bidirectional!). Whenever NHibernate loads an object from the database (e.g. when ISession.Get(id) is called), the set accessors of the mapped properties are used to put the data into the object.

    What's good is that the middle tier of the application enforces business logic. What's bad is that the database does not. Sometimes crap finds its way into the database. If crap is loaded into the application, it bails (throws an exception). Sometimes it clearly should bail because it cannot do anything, but what if it can continue working? E.g., an admin tool that gathers real-time reports runs a high risk of failing unnecessarily instead of allowing an admin to even fix a (potential) problem. I don't have an example on me right now, but in some instances, letting NHibernate use the "front door" properties that also enforce relationships (especially bidirectional ones) leads to bugs.

    What are the best solutions? Currently, I will, on a per-property basis, create a "back door" just for NHibernate:

        public virtual int Blah { get { return _Blah; } set { /* enforces BR's */ } }
        protected virtual int _Blah { get { return blah; } set { blah = value; } }
        private int blah;

    I showed the above in C# 2 (no default properties) to demonstrate how this gets us basically three layers of, or views onto, blah!!! While this certainly works, it does not seem ideal, as it requires the BL to provide one (public) interface for the app at large and another (protected) interface for the data access layer.

    There is an additional problem: to my knowledge, NHibernate does not give you a way to distinguish between the name of the property in the BL and the name of the property in the entity model (i.e. the name you use when you query, e.g. via HQL - whenever you give NHibernate the name (string) of a property). This becomes a problem when, at first, the BRs for some property Blah are no problem, so you refer to it in your O/R mapping... but then later, you have to add some BRs that do become a problem, so then you have to change your O/R mapping to use a new _Blah property, which breaks all existing queries using "Blah" (a common problem with programming against strings).

    Has anyone solved these problems?!

    Read the article
