Search Results

Search found 15637 results on 626 pages for 'memory efficient'.


  • Why use shorter VARCHAR(n) fields?

    - by chryss
    It is frequently advised to choose database field sizes to be as narrow as possible. I am wondering to what degree this applies to SQL Server 2005 VARCHAR columns: storing 10-letter English words in a VARCHAR(255) field will not take up more storage than in a VARCHAR(10) field. Are there other reasons to restrict the size of VARCHAR fields to stick as closely as possible to the size of the data? I'm thinking of:
    Performance: Is there an advantage to using a smaller n when selecting, filtering and sorting on the data?
    Memory, including on the application side (C++)?
    Style/validation: How important do you consider restricting column size to force nonsensical data imports to fail (such as 200-character surnames)?
    Anything else?
    Background: I help data integrators with the design of data flows into a database-backed system. They have to use an API that restricts their choice of data types. For character data, only VARCHAR(n) with n <= 255 is available; CHAR, NCHAR, NVARCHAR and TEXT are not. We're trying to lay down some "good practices" rules, and the question has come up whether there is a real detriment to using VARCHAR(255) even for data whose real maximum size will never exceed 30 bytes or so. Typical data volumes for one table are 1-10 million records with up to 150 attributes. Query performance (SELECT, with frequently extensive WHERE clauses) and application-side retrieval performance are paramount.

    Read the article

  • Very fast document similarity

    - by peyton
    Hello, I am trying to determine document similarity between a single document and each of a large number of documents (n ~= 1 million) as quickly as possible. More specifically, the documents I'm comparing are e-mails; they are grouped (i.e., there are folders or tags) and I'd like to determine which group is most appropriate for a new e-mail. Fast performance is critical.
    My a priori assumption is that the cosine similarity between term vectors is appropriate for this application; please comment on whether this is a good measure to use or not!
    I have already taken into account the following possibilities for speeding up performance:
    Pre-normalize all the term vectors.
    Calculate a term vector for each group (n ~= 10,000) rather than for each e-mail (n ~= 1,000,000); this would probably be acceptable for my application, but if you can think of a reason not to do it, let me know!
    I have a few questions:
    If a new e-mail has a new term never before seen in any of the previous e-mails, does that mean I need to re-compute all of my term vectors? This seems expensive.
    Is there some clever way to only consider vectors which are likely to be close to the query document?
    Is there some way to be more frugal about the amount of memory I'm using for all these vectors?
    Thanks!
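
    A minimal illustration of the pre-normalization idea (a sketch, not from the original post; the names and types are assumptions): if every term vector is divided by its L2 norm once up front, the cosine similarity at query time reduces to a plain dot product over the sparse entries.

        #include <cmath>
        #include <unordered_map>

        using TermVector = std::unordered_map<int, double>;  // term id -> weight

        // Normalize in place so that later comparisons are a single dot product.
        void normalize(TermVector& v) {
            double norm = 0.0;
            for (const auto& [term, w] : v) norm += w * w;
            norm = std::sqrt(norm);
            if (norm > 0.0)
                for (auto& [term, w] : v) w /= norm;
        }

        // Cosine similarity of two pre-normalized sparse vectors.
        double cosine(const TermVector& a, const TermVector& b) {
            const TermVector& small = (a.size() < b.size()) ? a : b;
            const TermVector& large = (a.size() < b.size()) ? b : a;
            double dot = 0.0;
            for (const auto& [term, w] : small) {
                auto it = large.find(term);
                if (it != large.end()) dot += w * it->second;
            }
            return dot;
        }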

    Read the article

  • Aliasing `T*` with `char*` is allowed. Is it also allowed the other way around?

    - by StackedCrooked
    Note: This question has been renamed and reduced to make it more focused and readable. Most of the comments refer to the old text.
    According to the standard, objects of different types may not share the same memory location. So this would not be legal:
        int i = 0;
        short * s = reinterpret_cast<short*>(&i); // BAD!
    The standard however allows an exception to this rule: any object may be accessed through a pointer to char or unsigned char:
        int i = 0;
        char * c = reinterpret_cast<char*>(&i); // OK
    However, it is not clear to me if this is also allowed the other way around. For example:
        char * c = read_socket(...);
        unsigned * u = reinterpret_cast<unsigned*>(c); // huh?
    Summary of the answers: The answer is NO, for two reasons:
    You can only access an existing object as char*. There is no object in my sample code, only a byte buffer.
    The pointer address may not have the right alignment for the target object. In that case dereferencing it would result in undefined behavior. On the Intel and AMD platforms it will result in performance overhead. On ARM it will trigger a CPU trap and your program will be terminated!
    This is a simplified explanation. For more detailed information see the answers by @Luc Danton, @Cheers and hth. - Alf and @David Rodríguez.
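
    A common way around both problems in the summary (a sketch with hypothetical names, shown only to illustrate the answers): copy the bytes into a real object instead of reinterpreting the pointer, so an object of the target type actually exists and its alignment is handled by the compiler.

        #include <cstring>

        // buf points at raw bytes received from a socket; it may not be aligned for unsigned.
        unsigned read_unsigned(const char* buf) {
            unsigned u;                      // a real unsigned object with correct alignment
            std::memcpy(&u, buf, sizeof u);  // byte-wise copy is well-defined
            return u;
        }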

    Read the article

  • Build OpenGL model in parallel?

    - by Brendan Long
    I have a program which draws some terrain and simulates water flowing over it (in a cheap and easy way). Updating the water was easy to parallelize using OpenMP, so I can do ~50 updates per second.
    The problem is that even with a small amount of water, my draws per second are very, very low (it starts at 5 and drops to around 2 once there's a significant amount of water). It's not a problem with the video card, because the terrain is more complicated and gets drawn so quickly that boost::timer tells me I get infinity draws per second if I turn the water off. It may be related to memory bandwidth though (since I assume the model stays on the card and doesn't have to be transferred every time).
    What I'm concerned about is that on every draw, I'm calling glVertex3f() about a million times (max size is 450*600, 4 vertices each), and it's done entirely sequentially because GLUT won't let me call anything in parallel.
    So... is there some way of building the list in parallel and then passing it to OpenGL all at once? Or some other way of making it draw faster? Am I using the wrong method (besides the obvious "use fewer vertices")?
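
    One answer-style sketch of the idea asked about at the end (the grid size and the heightAt() helper are assumptions, not the poster's code): fill a plain vertex array in parallel with OpenMP, then hand the whole array to legacy OpenGL in a single call instead of a million glVertex3f() calls.

        #include <GL/gl.h>
        #include <omp.h>
        #include <vector>

        extern float heightAt(int x, int y);      // hypothetical water-surface lookup
        const int W = 450, H = 600;

        void drawWater() {
            static std::vector<GLfloat> verts(W * H * 4 * 3);  // 4 vertices per cell, xyz each

            // Building the array is pure CPU work, so it can be parallelized.
            #pragma omp parallel for
            for (int y = 0; y < H; ++y) {
                for (int x = 0; x < W; ++x) {
                    GLfloat* v = &verts[(y * W + x) * 12];
                    const int dx[4] = {0, 1, 1, 0}, dy[4] = {0, 0, 1, 1};
                    for (int k = 0; k < 4; ++k) {
                        v[k * 3 + 0] = GLfloat(x + dx[k]);
                        v[k * 3 + 1] = heightAt(x + dx[k], y + dy[k]);
                        v[k * 3 + 2] = GLfloat(y + dy[k]);
                    }
                }
            }

            // Submission stays on the GL thread, but it is one call, not millions.
            glEnableClientState(GL_VERTEX_ARRAY);
            glVertexPointer(3, GL_FLOAT, 0, verts.data());
            glDrawArrays(GL_QUADS, 0, GLsizei(W * H * 4));
            glDisableClientState(GL_VERTEX_ARRAY);
        }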

    Read the article

  • How can I use io.StringIO() with the csv module?

    - by Tim Pietzcker
    I tried to backport a Python 3 program to 2.7, and I'm stuck with a strange problem:
        >>> import io
        >>> import csv
        >>> output = io.StringIO()
        >>> output.write("Hello!")    # Fail: io.StringIO expects Unicode
        Traceback (most recent call last):
          File "<stdin>", line 1, in <module>
        TypeError: unicode argument expected, got 'str'
        >>> output.write(u"Hello!")   # This works as expected.
        6L
        >>> writer = csv.writer(output)          # Now let's try this with the csv module:
        >>> csvdata = [u"Hello", u"Goodbye"]     # Look ma, all Unicode! (?)
        >>> writer.writerow(csvdata)             # Sadly, no.
        Traceback (most recent call last):
          File "<stdin>", line 1, in <module>
        TypeError: unicode argument expected, got 'str'
    According to the docs, io.StringIO() returns an in-memory stream for Unicode text. It works correctly when I try and feed it a Unicode string manually. Why does it fail in conjunction with the csv module, even if all the strings being written are Unicode strings? Where does the str come from that causes the exception?
    (I do know that I can use StringIO.StringIO() instead, but I'm wondering what's wrong with io.StringIO() in this scenario.)

    Read the article

  • How to change UIButton title at runtime in Objective-C?

    - by Sarah
    Hello, I know this question has been asked many times, and I am also trying the same approach for changing the title of a UIButton, I guess. Let me clarify my problem first.
    I have one UIButton named btnType; on clicking it a picker pops up, and after selecting one value I hit a Done button to hide the picker. At the same time I change the title of the button with this code:
        [btnType setTitle:btnTitle forState:UIControlEventTouchUpInside];
        [btnType setTitleColor:[UIColor redColor] forState:UIControlEventAllEvents];
    But to my surprise, the title is not changed and the application crashes with signal EXC_BAD_ACCESS. I am not seeing where I am making a mistake. I have allocated memory for btnType in viewDidLoad. I am also using
        -(IBAction)pressAddType {
            toolBar.hidden = FALSE;
            dateTypePicker.hidden = FALSE;
        }
    as the event for pressing the button that opens the picker. I would also like to mention that I made the connection in IB with the TouchUpInside event for pressAddType.
    Any guesses? I will be grateful if you could help me. Thanks.

    Read the article

  • How to determine 2D unsigned short pointer array length in C++

    - by tuman
    Hello, I am finding it difficult to determine the length of the columns in a 2D unsigned short pointer array. I have done the memory allocation correctly as far as I know, and I can print the values correctly. Please see the following code segment:
        int number_of_array_index_required_for_pointer_abc=3;
        char A[3][16];
        strcpy(A[0],"Hello");
        strcpy(A[1],"World");
        strcpy(A[2],"Tumanicko");
        cout<<number_of_array_index_required_for_pointer_abc*sizeof(unsigned short)<<endl;
        unsigned short ** pqr=(unsigned short **)malloc(number_of_array_index_required_for_pointer_abc*sizeof(unsigned short));
        for(int i=0;i<number_of_array_index_required_for_pointer_abc;i++)
        {
            int ajira = strlen(A[i])*sizeof(unsigned short);
            cout<<i<<" = "<<ajira<<endl;
            pqr[i]=(unsigned short *)malloc(ajira);
            cout<<"alocated pqr[i]= "<<sizeof pqr<<endl;
            int j=0;
            for(j=0;j<strlen(A[i]);j++)
            {
                pqr[i][j]=(unsigned short)A[i][j];
            }
            pqr[i][j]='\0';
        }
        for(int i=0;i<number_of_array_index_required_for_pointer_abc;i++)
        {
            //ln= (sizeof pqr[i])/(sizeof pqr[0]);
            //cout<<"Size of pqr["<<i<<"]= "<<ln<<endl;
            // I want to know the size of the columns, i.e. pqr[i]'s length, instead of searching for '\0'
            for(int k=0;(char)pqr[i][k]!='\0';k++)
                cout<<(char)pqr[i][k];
            cout<<endl;
        }
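
    A raw pointer does not remember how many elements were allocated behind it, so in C++ the usual answer is to let a container track the length. A small sketch of the same conversion using std::vector (an illustration, not from the original post):

        #include <iostream>
        #include <string>
        #include <vector>

        int main() {
            const char* A[3] = {"Hello", "World", "Tumanicko"};

            // Each inner vector knows its own size, so no '\0' scanning is needed.
            std::vector<std::vector<unsigned short>> pqr;
            for (const char* s : A) {
                std::string row(s);
                pqr.emplace_back(row.begin(), row.end());
            }

            for (const auto& row : pqr) {
                std::cout << "length " << row.size() << ": ";
                for (unsigned short ch : row) std::cout << char(ch);
                std::cout << '\n';
            }
        }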

    Read the article

  • What does the destructor do silently?

    - by zhanwu
    Consider the following code, in which the destructor looks like it does no real work; yet valgrind clearly showed me a memory leak when the destructor is not used. Can anybody explain what the destructor does in this case?
        #include <iostream>
        using namespace std;
        class A
        {
        private:
            int value;
            A* follower;
        public:
            A(int);
            ~A();
            void insert(int);
        };
        A::A(int n)
        {
            value = n;
            follower = NULL;
        }
        A::~A()
        {
            if (follower != NULL)
                delete follower;
            cout << "do nothing!" << endl;
        }
        void A::insert(int n)
        {
            if (this->follower == NULL)
            {
                A* f = new A(n);
                this->follower = f;
            }
            else
                this->follower->insert(n);
        }
        int main(int argc, char* argv[])
        {
            A* objectA = new A(1);
            int i;
            for (i = 0; i < 10; i++)
                objectA->insert(i);
            delete objectA;
        }
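
    The destructor is not silent here: delete follower runs ~A() on the next node, which deletes its own follower in turn, so the whole chain built by insert() is torn down recursively. A sketch of the same idea with std::unique_ptr, where that chain deletion is generated for you (an illustration, not the original code):

        #include <memory>

        class A {
            int value;
            std::unique_ptr<A> follower;   // owns the next node; deleted automatically
        public:
            explicit A(int n) : value(n) {}
            void insert(int n) {
                if (!follower)
                    follower = std::make_unique<A>(n);
                else
                    follower->insert(n);
            }
            // No user-written destructor needed: ~A() destroys follower,
            // which destroys its follower, and so on down the chain.
        };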

    Read the article

  • How to construct objects based on XML code?

    - by the_drow
    I have XML files that are a representation of a portion of HTML code. Those XML files also have widget declarations. Example XML file:
        <message id="msg">
          <p>
            <Widget name="foo" type="SomeComplexWidget" attribute="value">
              inner text here, sets another attribute or inserts another widget to the tree if needed...
            </Widget>
          </p>
        </message>
    I have a main Widget class that all of my widgets inherit from. The question is how I would create them. Here are my options:
    1. Create a compile-time tool that will parse the XML file and create the necessary code to bind the widgets to the needed objects.
       Advantages: No extra run-time overhead induced on the system. It's easy to bind setters.
       Disadvantages: Adds another step to the build chain. Hard to maintain, as every widget in the system would have to be added to the parser. Use of macros to bind the widgets. Complex code.
    2. Find a method to register all widgets into a factory automatically.
       Advantages: All of the binding is done completely automatically. Easier to maintain than option 1, as every new widget only needs to call a WidgetFactory method that registers it.
       Disadvantages: No idea how to bind setters without introducing a maintainability nightmare. Adds memory and run-time overhead. Complex code.
    What do you think is better? Can you guys suggest a better solution?
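
    For option 2, a common C++ shape is a factory keyed by the type string from the XML, with each widget registering itself through a static helper object. A minimal sketch (the names are assumptions, and the setter-binding problem is left open, as in the question):

        #include <functional>
        #include <map>
        #include <memory>
        #include <string>

        struct Widget { virtual ~Widget() = default; };

        class WidgetFactory {
        public:
            using Creator = std::function<std::unique_ptr<Widget>()>;
            static WidgetFactory& instance() { static WidgetFactory f; return f; }
            void registerType(const std::string& name, Creator c) { creators_[name] = std::move(c); }
            std::unique_ptr<Widget> create(const std::string& name) const {
                auto it = creators_.find(name);
                return it == creators_.end() ? nullptr : it->second();
            }
        private:
            std::map<std::string, Creator> creators_;
        };

        // Each widget registers itself by defining one static Registrar instance.
        template <class T>
        struct Registrar {
            explicit Registrar(const std::string& name) {
                WidgetFactory::instance().registerType(name, [] { return std::make_unique<T>(); });
            }
        };

        struct SomeComplexWidget : Widget { /* ... */ };
        static Registrar<SomeComplexWidget> reg{"SomeComplexWidget"};

        // The XML loader would then call:
        //   auto w = WidgetFactory::instance().create(typeAttributeFromXml);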

    Read the article

  • UIImagePickerController Crash after 5 to 7 pictures - again

    - by Sophtware
    OK, I know this one has been beaten to death on this forum, but I'm still having the memory problem and I have tried all the techniques on the web to get around this. I have an application that uses the UIImagePickerController to capture an image from the camera. I've tried both creating and destroying the controller for each picture, and keeping it around for the life of the app. Both are failing. The first way crashes the phone almost immediately. While the second, leaving the controller around, crashes the app after about 5 to 7 pictures. My original app used an undocumented API to get around this issue, but Apple rejected it because of this. I really need to get my app to the store. Does anyone have code showing how they got around the issue? I know there is a way because there are apps on the store using the camera, but I just can't seem to get it. Any help is greatly appreciated! I can post my code here too, if needed.

    Read the article

  • JMS message. Model to include data or pointers to data?

    - by John
    I am trying to resolve a design difference of opinion where neither of us has experience with JMS. We want to use JMS to communicate between a j2ee application and a stand-alone application when a new event occurs. We would be using a single point-to-point queue. Both sides are Java-based. The question is whether to send the event data itself in the JMS message body or to send a pointer to the data so that the stand-alone program can retrieve it. Details below.
    I have a j2ee application that supports data entry of new and updated persons and related events. The person records and associated events are written to an Oracle database. There are also stand-alone, separate programs that contribute new person and event records to the database. When a new event occurs through any of 5-10 different application functions, I need to notify remote systems through an outbound interface using an industry-specific standard messaging protocol. The outbound interface has been designed as a stand-alone application to support scalability through asynchronous operation and by moving it to a separate server.
    The j2ee application currently has most of the data in memory at the time the event is entered. The data would consist of approximately 6 different objects: a person object and some with multiple instances, for an average size in the range of 3,000 to 20,000 bytes. Some special cases could be many times this amount.
    From a performance and reliability perspective, should I model the JMS message to pass all the data needed to create the interface message, or model the JMS message to contain record keys for the data and have the stand-alone Java application retrieve the data to create the interface message?

    Read the article

  • Peculiar JRE behaviour running RMI server under load, should I worry?

    - by darri
    I've been developing a minimalistic Java rich client CRUD application framework for the past few years, mostly as a hobby but also actively using it to write applications for my current employer. The framework provides database access to clients either via a local JDBC based connection or a lightweight RMI server.
    Last night I started a load testing application, which ran 100 headless clients bombarding the server with requests, each client waiting only 1-2 seconds between running simple use cases, consisting of selecting records along with associated detail records from a simple e-store database (Chinook). This morning when I looked at the telemetry results from the server profiling session I noticed something which to me seemed strange (and made me keep the setup running for the remainder of the day). I don't really know what conclusions to draw from it. Here are the results (telemetry screenshots): Memory, GC activity, Threads, CPU load. Interesting, right?
    So the question is, is this normal or erratic? Is this simply the JRE (1.6.0_03 on Windows XP) doing its thing (perhaps related to the JRE configuration) or is my framework design somehow causing this? Running the server against MySQL as opposed to an embedded H2 database does not affect the pattern. I am leaving out the details of my server design, but I'll be happy to elaborate if this behaviour is deemed erratic.

    Read the article

  • Using an object's date (without time) for a table header instead of an object's date and time (iPhone)

    - by billywilliamton
    I've been working on an iPhone project and have run into an issue. Currently, in the table view where it displays all the objects, I use headers based on the objects' datePerformed field. The only problem is that my code apparently creates a header that contains both the date and the time, resulting in objects not being grouped solely by their date as I intended, but rather by their date and time. I'm not sure if it matters, but when an object is created I use a date picker to pick the date, but not the time. I was wondering if anyone could give me any suggestions or advice.
    Here is the code where I set up the fetchedResultsController:
        - (NSFetchedResultsController *)fetchedResultsController {
            if (fetchedResultsController != nil) {
                return fetchedResultsController;
            }
            // Create and configure a fetch request with the Exercise entity.
            NSFetchRequest *fetchRequest = [[NSFetchRequest alloc] init];
            NSEntityDescription *entity = [NSEntityDescription entityForName:@"Exercise" inManagedObjectContext:managedObjectContext];
            [fetchRequest setEntity:entity];
            // Create the sort descriptors array using date and name
            NSSortDescriptor *dateDescriptor = [[NSSortDescriptor alloc] initWithKey:@"datePerformed" ascending:NO];
            NSSortDescriptor *nameDescriptor = [[NSSortDescriptor alloc] initWithKey:@"name" ascending:YES];
            NSArray *sortDescriptors = [[NSArray alloc] initWithObjects:dateDescriptor, nameDescriptor, nil];
            [fetchRequest setSortDescriptors:sortDescriptors];
            // Create and initialize the fetch results controller
            NSFetchedResultsController *aFetchedResultsController = [[NSFetchedResultsController alloc] initWithFetchRequest:fetchRequest managedObjectContext:managedObjectContext sectionNameKeyPath:@"datePerformed" cacheName:@"Root"];
            self.fetchedResultsController = aFetchedResultsController;
            fetchedResultsController.delegate = self;
            // Memory management calls
            [aFetchedResultsController release];
            [fetchRequest release];
            [dateDescriptor release];
            [nameDescriptor release];
            [sortDescriptors release];
            return fetchedResultsController;
        }
    Here's where I set up the table header properties:
        - (NSString *)tableView:(UITableView *)tableView titleForHeaderInSection:(NSInteger)section {
            // Display the exercise's date as section headings.
            return [[[fetchedResultsController sections] objectAtIndex:section] name];
        }
    Any suggestions welcome. Thanks for your time. -Billy Williamton

    Read the article

  • C/C++ Bit Array or Bit Vector

    - by MovieYoda
    Hi, I am learning C/C++ programming and have encountered the usage of 'bit arrays' or 'bit vectors'. I am not able to understand their purpose. Here are my doubts:
    Are they used as boolean flags?
    Can one use int arrays instead? (More memory, of course, but...)
    What's this concept of bit masking?
    If bit masking is simple bit operations to get an appropriate flag, how does one program for them? Is it not difficult to work out in one's head what the flag would be, as opposed to decimal numbers?
    I am looking for applications, so that I can understand better. For example:
    Q. You are given a file containing integers in the range 1 to 1 million. There are some duplicates, and hence some numbers are missing. Find the fastest way of finding the missing numbers.
    For the above question, I have read solutions telling me to use bit arrays. How would one store each integer in a bit?
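
    A sketch of how the missing-numbers example maps onto a bit array (an illustration, not from the original post): each integer i owns one bit, selected with a shift and a mask, so a million flags fit in roughly 125 KB.

        #include <cstdint>
        #include <iostream>
        #include <vector>

        int main() {
            const std::uint32_t maxValue = 1000000;
            std::vector<std::uint64_t> bits((maxValue / 64) + 1, 0);  // one bit per possible integer

            auto setBit  = [&](std::uint32_t i) { bits[i / 64] |=  (std::uint64_t(1) << (i % 64)); };
            auto testBit = [&](std::uint32_t i) { return (bits[i / 64] >> (i % 64)) & 1u; };

            // Pretend these came from the file; duplicates simply set the same bit twice.
            for (std::uint32_t n : {1u, 2u, 2u, 4u, 5u}) setBit(n);

            // Every unset bit in the covered range is a missing number.
            for (std::uint32_t i = 1; i <= 5; ++i)          // scan a small range for the demo
                if (!testBit(i)) std::cout << i << " is missing\n";
        }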

    Read the article

  • Still some questions about USB

    - by b-gen-jack-o-neill
    Hi, a few days ago I asked here about implementing USB. Now, if I may, I would like to ask a few more questions about things I didn't quite understand.
    So, first, if I am right, Windows has a device driver for the USB interface, i.e. for the physical device that sends and receives communication. But what does this driver offer to the system (user)? I mean, the USB protocol is made so that its devices are addressed: you first address a device, then send a message to it. But how sophisticated is the device controller (HW) and its driver? Is it so sophisticated that it is a chip you just send a device address and data to, and it writes the outgoing data out and the incoming data to some internal register to be read, or through DMA directly to memory? Or, how do its drivers (SW) really work? Does its driver have some advanced functions like send _data to _device? Because I somewhat internally hope there is a way to directly send some data over USB, maybe by calling the USB drivers themselves?
    Is there any good article or tutorial you know of that really explains how all this logic works? Thanks.

    Read the article

  • How to know the type of an object in a list?

    - by nacho4d
    Hi, I want to know the type of object (or type) I have in my list, so I wrote this:
        void **list; //list of references
        list = new void * [2];
        Foo foo = Foo();
        const char *not_table [] = {"tf", "ft", 0 };
        list[0] = &foo;
        list[1] = not_table;
        if (dynamic_cast<LogicProcessor*>(list[0])) { //ERROR here ;(
            printf("Foo was found\n");
        }
        if (dynamic_cast<char*> (list[0])) { //ERROR here ;(
            printf("char was found\n");
        }
    but I get:
        error: cannot dynamic_cast '* list' (of type 'void*') to type 'class Foo*' (source is not a pointer to class)
        error: cannot dynamic_cast '* list' (of type 'void*') to type 'char*' (target is not pointer or reference to class)
    Why is this? What am I doing wrong here? Is dynamic_cast what I should use here? Thanks in advance.
    EDIT: I know the above code is much like plain C and surely looks bad from the C++ point of view, but I was just trying something before really implementing it. I have the following situation: I have two arrays of length n, but both arrays will never have an object at the same index. Hence, either array1[i] != NULL or array2[i] != NULL. This is obviously a waste of memory, so I thought everything would be solved if I could have both kinds of objects in a single array of length n. I am looking for something like Cocoa's (Objective-C) NSArray, where you don't care about the type of the object to be put in. Not knowing the type of the object is not a problem, since you can use another method to get the class of a certain object later. Is there something like it in C++ (preferably not third-party C++ libraries)? Thanks in advance ;)
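
    dynamic_cast only works on pointers or references to polymorphic class types, so one answer-style sketch is to give every stored item a common base class with a virtual function and store Base* instead of void* (an illustration only; Foo and the string wrapper here are assumptions):

        #include <cstdio>

        struct Item { virtual ~Item() = default; };          // polymorphic base: enables dynamic_cast
        struct Foo : Item { };
        struct Text : Item { const char* const* table = nullptr; };  // wraps the char* table from the question

        int main() {
            Foo foo;
            Text text;
            Item* list[2] = { &foo, &text };

            for (Item* item : list) {
                if (dynamic_cast<Foo*>(item))  std::printf("Foo was found\n");
                if (dynamic_cast<Text*>(item)) std::printf("text was found\n");
            }
        }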

    Read the article

  • APC decreasing PHP performance? (PHP 5.3, Apache 2.2, Windows Vista 64-bit)

    - by M.M.
    Hi, I have Apache/2.2.15 (VC9) and PHP/5.3.2 (VC9 thread safe) running as an Apache module on a Vista 64-bit machine. All running fine. The project that I'm benchmarking (with Apache's ab utility) is basically a standard Zend Framework project with no db connection involved. The average (median) Apache response is about 0.15 seconds.
    After I installed APC (3.1.4-dev VC9 thread safe) with standard settings, the request response time suddenly rose to 1.3 seconds (!), which is unacceptable... All APC settings always looked good (through the apc.php script: enough shm memory, cache not full, fragmentation 0%). The only thing that made a difference was disabling the stat lookup (apc.stat = 0). Then the response dropped to 0.09 seconds, which was finally better than without APC.
    IIRC, it's expected and obvious that the stat lookup creates some overhead, but shouldn't it still be far more performant compared to running without the APC extension at all? Or, put differently, why is apc.stat creating so much overhead? Apparently something is not working as it should, and I don't really know where to start looking. Thank you for your time/answers/direction in advance. Cheers, m.

    Read the article

  • Rails Nested Attributes Doesn't Insert ID Correctly

    - by MunkiPhD
    I'm attempting to edit a model's nested attributes, much as outlined here, replicated below:
        <%= form_for @person do |person_form| %>
          <%= person_form.text_field :name %>
          <% for address in @person.addresses %>
            <%= person_form.fields_for address, :index => address do |address_form| %>
              <%= address_form.text_field :city %>
            <% end %>
          <% end %>
        <% end %>
    In my code, I have the following:
        <%= form_for(@meal) do |f| %>
          <!-- some other stuff that's irrelevant... -->
          <% for subitem in @meal.meal_line_items %>
            <%= f.fields_for subitem, :index => subitem do |line_item_form| %>
              <%= line_item_form.label :servings %><br/>
              <%= line_item_form.text_field :servings %><br/>
              <%= line_item_form.label :food_id %><br/>
              <%= line_item_form.text_field :food_id %><br/>
            <% end %>
          <% end %>
          <%= f.submit %>
        <% end %>
    This works great, except that when I look at the HTML, it's creating inputs that look like the following, failing to insert the correct id and instead placing the memory representation(?) of the model:
        <input type="text" value="2" size="30" name="meal[meal_line_item][#<MealLineItem:0x00000005c5d618>][servings]" id="meal_meal_line_item_#<MealLineItem:0x00000005c5d618>_servings">

    Read the article

  • How can I synchronize one set of data with another?

    - by RenderIn
    I have an old database and a new database. The old records were converted to the new database recently. All our old applications continue to point to the old database, but the new applications point to the new database. Currently the old database is the only one being updated, so throughout the day the new database becomes out of sync. It is acceptable for the new database to be out of sync for a day, so until all our applications are pointed to the new database I just need to write a nightly cron job that will bring it up to date.
    I do not want to purge the new database and run the complete conversion script each night, as that would reduce uptime and would create a mess in our auditing of that table.
    I'm thinking about selecting all the data from the old database, converting it to the new database structure in memory, and then checking for the existence of each record before inserting it in the new database. After that's done, I'd select everything from the new database and check if it exists in the old one, and if not delete it. Is this the simplest way to do this?

    Read the article

  • GTK+: How do I process RadioMenuItem choice without marking it chosen? And vice versa

    - by eugene.shatsky
    In my program, I've got a menu with a group of RadioMenuItem entries. Choosing one of them should trigger a function which can either succeed or fail. If it fails, the RadioMenuItem shouldn't be marked chosen (the previous one should persist). Besides, sometimes I want to set the marked item without running the choice-processing function. Here is my current code:
        # Update seat menu list
        def update_seat_menu(self, seats, selected_seat=None):
            seat_menu = self.builder.get_object('seat_menu')
            # Delete seat menu items
            for menu_item in seat_menu:
                # TODO: is it a good way? does remove() delete obsolete menu_item from memory?
                if menu_item.__class__.__name__ == 'RadioMenuItem':
                    seat_menu.remove(menu_item)
            # Fill menu with new items
            group = []
            for seat in seats:
                menu_item = Gtk.RadioMenuItem.new_with_label(group, str(seat[0]))
                group = menu_item.get_group()
                seat_menu.append(menu_item)
                if str(seat[0]) == selected_seat:
                    menu_item.activate()
                menu_item.connect("activate", self.choose_seat, str(seat[0]))
                menu_item.show()

        # Process item choice
        def choose_seat(self, entry, seat_name):
            # Looks like this is called when item is deselected, too; must check if active
            if entry.get_active():
                # This can either succeed or fail
                self.logind.AttachDevice(seat_name, '/sys'+self.device_syspath, True)
    The chosen RadioMenuItem gets marked irrespective of the choose_seat() execution result, and the only way to set the marked item without triggering choose_seat() is to re-run update_seat_menu() with the selected_seat argument, which is overkill.
    I tried connecting choose_seat() to 'button-release-event' instead of 'activate' and calling entry.activate() in choose_seat() if AttachDevice() succeeds, but this resulted in the whole X desktop locking up until AttachDevice() timed out, and the chosen item still got marked.

    Read the article

  • Calling member functions dynamically

    - by user652511
    I'm pretty sure it's possible to call a class and its member function dynamically in Delphi, but I can't quite seem to make it work. What am I missing?
        // Here's a list of classes (some code removed for clarity)
        moClassList : TList;
        moClassList.Add( TClassA );
        moClassList.Add( TClassB );

        // Here is where I want to call an object's member function if the
        // object's class is in the list:
        for i := 0 to moClassList.Count - 1 do
          if oObject is TClass(moClassList[i]) then
            with oObject as TClass(moClassList[i]) do
              Foo();
    I get an undeclared identifier for Foo() at compile time.
    Clarification/additional information: What I'm trying to accomplish is to create a change-notification system between business classes. Class A registers to be notified of changes in Class B, and the system stores a mapping of Class A - Class B. Then, when a Class B object changes, the system will call A.Foo() to process the change. I'd like the notification system not to require any hard-coded classes if possible. There will always be a Foo() for any class that registers for notification. Maybe this can't be done, or there's a completely different and better approach to my problem. By the way, this is not exactly an "Observer" design pattern, because it's not dealing with objects in memory. Managing changes between related persistent data seems like a standard problem to be solved, but I've not found very much discussion about it. Again, any assistance would be greatly appreciated. Jeff

    Read the article

  • Creation of a model in Core Data on the fly

    - by user1740045
    How can we create a model in Core Data on the fly, i.e. get the schema of the database from somewhere and then create a Core Data object graph?
    Question: Yes, that's fine, agreed with all the advantages. But can anybody tell me, practically, what the benefit is of integrating Core Data into a project instead of using SQL directly?
    1. No need to write SQL boilerplate code [but you need to learn the Core Data model (steep curve)].
    2. We can undo and redo changes [but practically who needs it].
    3. We can migrate to another schema [that can be done with SQLite as well; we just need to add another field to the table].
    4. For, say, aggregation on some field in a table: in Core Data we need to loop through Core Data objects, whereas in SQLite we need to first write SQLite boilerplate code and then the basic aggregation SQL query, which is easy to write; only the length of code will increase... But in the case of Core Data, we need to learn a lot.
    So apart from reducing the length of code, does it actually add value to the project, for example in terms of memory efficiency, performance, etc.?
    PS: If anybody has actually worked with Core Data (model creation on the fly), please share and give pointers if possible. Thanks!

    Read the article

  • Why doesn't this program print?

    - by Alex
    What I'm trying to do is to print my two-dimensional array, but I'm lost. The first function is running perfectly; the problem is the second, or maybe the way I'm passing it to the "print" function.
        #include <stdio.h>
        #include <stdlib.h>
        #define ROW 2
        #define COL 2

        //Memory allocation and values input
        void func(int **arr)
        {
            int i, j;
            arr = (int**)calloc(ROW,sizeof(int*));
            for(i=0; i < ROW; i++)
                arr[i] = (int*)calloc(COL,sizeof(int));
            printf("Input: \n");
            for(i=0; i<ROW; i++)
                for(j=0; j<COL; j++)
                    scanf_s("%d", &arr[i][j]);
        }

        //This is where the problem begins or maybe it's in the main
        void print(int **arr)
        {
            int i, j;
            for(i=0; i<ROW; i++)
            {
                for(j=0; j<COL; j++)
                    printf("%5d", arr[i][j]);
                printf("\n");
            }
        }

        void main()
        {
            int *arr;
            func(&arr);
            print(&arr); //maybe I'm not passing the arr right ?
        }
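
    The assignment inside func() only changes the function's local copy of the pointer, so main's arr is never set, and print(&arr) then passes the address of that single pointer where a row array is expected. A sketch of one conventional fix (illustrative only, not the poster's code), returning the allocated array and passing it straight through:

        #include <stdio.h>
        #include <stdlib.h>
        #define ROW 2
        #define COL 2

        /* Allocate the rows and columns and read the values; return the new array. */
        int **func(void)
        {
            int **arr = (int**)calloc(ROW, sizeof(int*));
            printf("Input: \n");
            for (int i = 0; i < ROW; i++) {
                arr[i] = (int*)calloc(COL, sizeof(int));
                for (int j = 0; j < COL; j++)
                    scanf("%d", &arr[i][j]);
            }
            return arr;
        }

        void print(int **arr)
        {
            for (int i = 0; i < ROW; i++) {
                for (int j = 0; j < COL; j++)
                    printf("%5d", arr[i][j]);
                printf("\n");
            }
        }

        int main(void)
        {
            int **arr = func();   /* arr now really points at the allocated rows */
            print(arr);           /* pass the pointer itself, not its address */
            return 0;
        }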

    Read the article

  • Good C string library

    - by chamakits
    Hello all. I recently got inspired to start a project I've been wanting to code for a while. I want to do it in C, because memory handling is key in this application. I was searching around for a good implementation of strings in C, since I know doing it myself could lead to some messy buffer overflows, and I expect to be dealing with a fairly large number of strings.
    I found this article which gives details on each, but each one seems to have a good number of cons going for it (don't get me wrong, the article is EXTREMELY helpful, but it still worries me that even if I were to choose one of those, I wouldn't be using the best I can get). I also don't know how up to date the article is, hence my current plea.
    What I'm looking for is something that can hold a large number of characters and simplifies the process of searching through the string. If it allows me to tokenize the string in any way, even better. Also, it should have some pretty good I/O performance. Printing, and formatted printing, isn't quite a top priority. I know I shouldn't expect a library to do all the work for me, but I was just wondering if there was a well-documented string library out there that could save me some time and some work. Any help is greatly appreciated. Thanks in advance!
    EDIT: I was asked about the license I prefer. Any sort of open source license will do, but preferably GPL (v2 or v3).
    EDIT 2: I found the betterString (bstring) library and it looks pretty good. Good documentation, a small yet versatile set of functions, and easy to mix with C strings. Anyone have any good or bad stories about it? The only downside I've read about is that it lacks Unicode support (again, I've only read about this, haven't seen it face to face just yet), but everything else seems pretty good.
    EDIT 3: Also, preferably it should be pure C.
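
    For what it's worth, a rough sketch of how bstrlib code tends to look (written from memory as an illustration; the exact function names and signatures should be checked against bstrlib.h before relying on them):

        #include <stdio.h>
        #include "bstrlib.h"

        int main(void)
        {
            bstring s = bfromcstr("Hello");      /* build a bstring from a C string */
            bcatcstr(s, ", world");              /* append; the buffer grows as needed */

            /* bstrings carry their length, so no strlen() walks or overflow-prone strcat() */
            printf("%s (%d chars)\n", bdata(s), blength(s));

            bdestroy(s);                         /* free the string and its buffer */
            return 0;
        }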

    Read the article

  • Pointers to a typed variable in C#: Interface, Generic, object or Class? (Boxing/Unboxing)

    - by PaulG
    First of all, I apologize if this has been asked a thousand times. I read my C# book, I googled it, but I can't seem to find the answer I am looking for, or I am missing the point big time. I am very confused by the whole boxing/unboxing issue.
    Say I have fields of different classes, all returning typed variables (e.g. 'double'), and I would like to have a variable point to any of these fields. In plain old C I would do something like:
        double * newVar;
        newVar = &oldVar;
        newVar = &anotherVar;
        ...
    In C#, it seems I could use an interface, but that would require that all fields be properties and named the same. It breaks down when one of the properties doesn't have the same name or is not a property.
    I could also create a generic class returning double, but it seems a bit absurd to create a class to represent a 'double' when a 'double' class already exists. If I am not mistaken, it doesn't even need to be generic; it could be a simple class returning double.
    I could create an object and box the typed variable into the newly created object, but then I would have to cast every time I use it.
    Of course, I always have the unsafe option... but I'm afraid of wandering into unknown memory space, dividing by zero and bringing an end to this world.
    None of these seem to be the same as the old simple 'double * variable'. Am I missing something here?

    Read the article
