Search Results

Search found 12398 results on 496 pages for 'in memory oltp'.

Page 447/496 | < Previous Page | 443 444 445 446 447 448 449 450 451 452 453 454  | Next Page >

  • Is programming overrated?

    - by aengine
    [Subjective and intended to be a community wiki] I am sorry for such an offensive question, but here are my arguments. Most of the progress in "computing" has come from non-programming sources, i.e. people invented faster microprocessors, better routers and novel memory devices. I don't think people are, on average, writing more efficient programs than those written 10 years ago, and the newer, popular languages are in fact slower than C (though speed is one of the lesser criteria). Most of the progress came from novel paradigms: the Web, the Internet, cloud computing and social networking are novel paradigms and did not involve progress in programming as such. Heck, even Facebook was written in PHP and not some extreme language. It did face scalability issues (same with Twitter), but I believe money and better programmers (who came in much later) took care of that. Thus ideating capability trumped programming capability. Even things like MapReduce, column-oriented databases and probabilistic algorithms (e.g. Bloom filters) came from hardcore algorithms research rather than from some programming convention. So my final point is: why is programming skill so overstressed? To cite a recent example, supposedly only 10% of programmers can "write code" (binary search) without debugging. Isn't it a bit hypocritical, considering your real success lies in coming up with a better algorithm or a novel feature rather than getting it right the first time?

    Read the article

  • Porting Symbian C++ to Android NDK

    - by Donal Rafferty
    I've been given some Symbian C++ code to port over for use with the Android NDK. The code has lots of Symbian-specific code in it and I have very little experience with C++, so it's not going very well. The main thing slowing me down is figuring out the standard C++ alternatives to the Symbian-specific code. At the minute the compiler is throwing out all sorts of errors for unrecognised types. From my recent research, these are the types that I believe are Symbian-specific: TInt, TBool, TDesC8, RSocket, TInetAddress, TBuf, HBufC, RPointerArray. Changing TInt and TBool to int and bool respectively satisfies the compiler, but I am unsure what to use for the other types. Can anyone help me out with them? Especially TDesC, TBuf and HBufC. Also, Symbian has a two-phase constructor using NewL and NewLC; would changing this to a normal C++ constructor be OK? Finally, Symbian uses the cleanup stack to help eliminate memory leaks, I believe; would removing the cleanup stack code be acceptable? I presume it should be replaced with try/catch statements?
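
    A rough starting point (not an official mapping, just one possible set of substitutions, assuming the descriptors only carry text or raw byte data) is to alias the Symbian types to standard C++ equivalents and collapse the two-phase construction into an ordinary constructor that throws:

        #include <string>
        #include <vector>
        #include <stdexcept>

        typedef int         TInt;    // TInt  -> int
        typedef bool        TBool;   // TBool -> bool
        typedef std::string TBuf;    // TBuf / TDesC / HBufC (text data) -> std::string
        typedef std::string TDesC8;  // 8-bit descriptor -> std::string (raw bytes)
        // RPointerArray<T> is roughly std::vector<T*>; RSocket has no direct
        // equivalent and needs rewriting on top of BSD sockets.

        // Two-phase NewL/NewLC construction collapses into an ordinary constructor
        // that throws on failure; the cleanup stack is replaced by RAII plus try/catch.
        class CExample {
        public:
            explicit CExample(TInt aValue) : iValue(aValue) {
                if (aValue < 0) throw std::runtime_error("construction failed");
            }
        private:
            TInt iValue;
        };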

    Read the article

  • Call 32-bit or 64-bit program from bootloader

    - by user1002358
    There seems to be quite a lot of identical information on the Internet about writing the following 3 bootloaders: an infinite loop (jmp $), printing a single character, and printing "Hello World". This is fantastic, and I've gone through these 3 variations with very little trouble. I'd like to write some 32- or 64-bit code in C, compile it, and call that code from the bootloader... basically a bootloader that, for example, sets the computer up to run some simple numerical simulation. I'll start by listing primes, for example, and then maybe some input/output from the user to compute a Fourier transform. I don't know. I haven't found any information on how to do this, but I can already foresee some problems before I even begin. First of all, compiling a C program produces one of several different file formats, depending on the target. For Windows, it's a PE file. For Linux, it's an a.out or ELF file. These files are quite different. In my instance, the target isn't Windows or Linux, it's just whatever I have written in the bootloader. Secondly, where would the actual code reside? The bootloader is exactly 512 bytes, but the program I write in C will certainly compile to something much larger. It will need to sit on my (virtual) hard disk, probably in some sort of file system (which I haven't even defined!), and I will need to load the information from this file into memory before I can even think about executing it. But from my understanding, all this is many, many orders of magnitude more complex than a 12-line "Hello World" bootloader. So my question is: how do I call a large 32- or 64-bit program (written in C/C++) from my 16-bit bootloader?
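
    A very rough sketch of the usual approach (all names and build flags here are illustrative, not a recipe): compile the C code freestanding, link it as a flat binary at a known address, have the bootloader load it from disk with BIOS int 13h, switch to protected mode, and jump to it. The C side cannot rely on any OS services, so even output has to go straight to hardware, e.g. the VGA text buffer:

        /* kmain.c -- hypothetical flat-binary payload, built roughly with:
         *   gcc -m32 -ffreestanding -nostdlib -c kmain.c
         *   ld -m elf_i386 -Ttext 0x100000 --oformat binary -o kernel.bin kmain.o
         * The bootloader loads kernel.bin to 0x100000, enters protected mode,
         * and jumps to it. */
        void kmain(void)
        {
            volatile char *video = (volatile char *)0xB8000; /* VGA text memory */
            const char *msg = "Primes: 2 3 5 7 11";
            for (int i = 0; msg[i] != '\0'; ++i) {
                video[2 * i]     = msg[i];  /* character */
                video[2 * i + 1] = 0x07;    /* light grey on black */
            }
            for (;;) { }                    /* never return to the bootloader */
        }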

    Read the article

  • FileNotFoundException while inserting image into SDCard.

    - by Are
    Hi, I want to save images to my SD card, so I used the code below:

        m_cImagePath = "/sdcard/" + String.format("%d.jpg", System.currentTimeMillis());
        FileOutputStream lObjOutStream = null;
        try {
            lObjOutStream = new FileOutputStream(m_cImagePath);
            if (null != lObjOutStream && null != finalBitmap) {
                finalBitmap.compress(Bitmap.CompressFormat.JPEG, 85, lObjOutStream);
                lObjOutStream.close();
            }
        } catch (FileNotFoundException fe) {
            fe.printStackTrace();
        }

    Sometimes it throws FileNotFoundException even though my SD card has free space; when I remove some images from the SD card it works smoothly again. Why does this happen? How can I know that the file was written successfully to the SD card, and is there any functionality in Java 1.5 to query the available space on the SD card like there is in Java 1.6? Also, how can I know the length of a file before writing it to the SD card? (I searched on Google and found that when the file is not physically there, file.length() always returns 0.) I want to know the length before writing, so that comparing it to the available SD card space is simple. Note: one idea I had was to run the Unix command "df sdcard" through the Runtime class to find the SD card space. Please give me an idea for this problem. Regards, Android Developer
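
    On Android itself (rather than plain Java 1.5), the free space can be read with android.os.StatFs, so one possible pre-check before writing, sketched under the assumption that finalBitmap is the bitmap being saved, looks like this:

        import android.graphics.Bitmap;
        import android.os.Environment;
        import android.os.StatFs;
        import java.io.ByteArrayOutputStream;

        public final class SdCardHelper {
            // True if external storage has at least 'needed' bytes free.
            public static boolean hasRoomFor(long needed) {
                String path = Environment.getExternalStorageDirectory().getPath();
                StatFs stat = new StatFs(path);
                long free = (long) stat.getAvailableBlocks() * (long) stat.getBlockSize();
                return free >= needed;
            }

            // The compressed size is only known after compressing, so compress to a
            // byte array first, check the space, then write those bytes to the file.
            public static byte[] compressToBytes(Bitmap finalBitmap) {
                ByteArrayOutputStream buffer = new ByteArrayOutputStream();
                finalBitmap.compress(Bitmap.CompressFormat.JPEG, 85, buffer);
                return buffer.toByteArray();
            }
        }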

    Read the article

  • Passing C++ object to C++ code through Python?

    - by cornail
    Hi all, I have written some physics simulation code in C++ and parsing the input text files is a bottleneck. As one of the input parameters, the user has to specify a math function which will be evaluated many times at run-time. The C++ code has some pre-defined function classes for this (they are actually quite complex on the math side) and some limited parsing capability, but I am not satisfied with this construction at all. What I need is for both the algorithm and the function evaluation to remain speedy, so it is advantageous to keep them both as compiled code (and preferably the math functions as C++ function objects). However, I thought of gluing the whole simulation together with Python: the user could specify the input parameters in a Python script, while also implementing storage, visualization of the results (matplotlib) and the GUI in Python. I know that most of the time exposing C++ classes can be done, e.g. with SWIG, but I still have a question concerning the parsing of the user-defined math function in Python: is it possible to somehow construct a C++ function object in Python and pass it to the C++ algorithm? E.g. when I call f = WrappedCPPGaussianFunctionClass(sigma=0.5) and then WrappedCPPAlgorithm(f) in Python, it would return a pointer to a C++ object which would then be passed to a C++ routine requiring such a pointer, or something similar... (don't ask me about memory management in this case, though :S) The point is that no callback should be made to Python code in the algorithm. Later I would like to extend this example to also do some simple expression parsing on the Python side, such as sums or products of functions, and return some compound, parse-tree-like C++ object, but let's stay with the basics for now. Sorry for the long post and thanks for the suggestions in advance.
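
    The usual shape of this (a sketch only; the class and function names here are invented, and the actual exposure to Python would be done with SWIG or Boost.Python) is a small abstract functor interface on the C++ side, so the Python script only ever selects and parameterises concrete C++ objects and the algorithm never calls back into Python:

        #include <cmath>

        // Abstract interface the algorithm evaluates many times, all in compiled code.
        struct Function {
            virtual ~Function() {}
            virtual double operator()(double x) const = 0;
        };

        // One concrete, parameterised function object exposed to Python.
        class GaussianFunction : public Function {
        public:
            explicit GaussianFunction(double sigma) : sigma_(sigma) {}
            double operator()(double x) const {
                return std::exp(-x * x / (2.0 * sigma_ * sigma_));
            }
        private:
            double sigma_;
        };

        // The algorithm takes the interface; Python just hands it a wrapped pointer.
        double run_algorithm(const Function& f) {
            double sum = 0.0;
            for (int i = 0; i < 1000; ++i) sum += f(i * 0.01);
            return sum;
        }

    On the Python side this would then look roughly like f = GaussianFunction(0.5); run_algorithm(f), with the wrapper generator handling the pointer passing.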

    Read the article

  • Efficient and accurate way to compact and compare Python lists?

    - by daveslab
    Hi folks, I'm trying to do a somewhat sophisticated diff between individual rows in two CSV files. I need to ensure that a row from one file does not appear in the other file, but I am given no guarantee of the order of the rows in either file. As a starting point, I've been trying to compare the hashes of the string representations of the rows (i.e. Python lists). For example:

        import csv
        hashes = []
        for row in csv.reader(open('old.csv','rb')):
            hashes.append( hash(str(row)) )

        for row in csv.reader(open('new.csv','rb')):
            if hash(str(row)) not in hashes:
                print 'Not found'

    But this is failing miserably. I am constrained by artificially imposed memory limits that I cannot change, so I went with the hashes instead of storing and comparing the lists directly. Some of the files I am comparing can be hundreds of megabytes in size. Any ideas for a way to accurately compress Python lists so that they can be compared in terms of simple equality to other lists? I.e. a hashing system that actually works? Bonus points: why didn't the above method work?
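
    One likely problem with the snippet above is not the hashing itself but the list: "not in hashes" is a linear scan, so the comparison is O(n*m) over files with millions of rows. A minimal variation using a set (and, if hash() collisions are a worry, a fixed-size digest instead) might look like this:

        import csv
        import hashlib

        def row_key(row):
            # A stable, fixed-size key per row; md5 here is for identity, not security.
            return hashlib.md5('\x00'.join(row)).digest()

        old_keys = set()
        for row in csv.reader(open('old.csv', 'rb')):
            old_keys.add(row_key(row))

        for row in csv.reader(open('new.csv', 'rb')):
            if row_key(row) not in old_keys:
                print 'Not found:', row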

    Read the article

  • MATLAB: What is an appropriate Data Structure for a Matrix with Random Variable Entries?

    - by user568249
    I'm currently working in an area related to simulation and trying to design a data structure that can include random variables within matrices. To motivate this, say I have the following matrix: [a b; c d]. I want a data structure that allows a, b, c, d to be either real numbers or random variables. As an example, let's say that a = 1, b = -1, c = 2, but let d be a normally distributed random variable with mean 0 and SD 1. The data structure I have in mind will initially give no value to d. However, I also want to be able to design a function that can take in the structure, simulate a uniform(0,1) draw, obtain a value for d using an inverse CDF, and then spit out an actual matrix. I have several ideas for doing this (all related to the MATLAB icdf function) but would like to know how more experienced programmers would do it. In this application, it's important that the structure is as "lean" as possible, since I will be working with very, very large matrices and memory will be an issue.
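
    One lightweight sketch of this idea (names invented; it assumes the Statistics Toolbox for norminv, and a cell array is not the leanest possible representation, but it shows the shape of the "realize" step the question describes):

        % Deterministic entries are stored as numbers, random entries as
        % zero-argument function handles that map a uniform(0,1) draw through
        % an inverse CDF.
        M = {1, -1; 2, @() norminv(rand(), 0, 1)};   % d ~ Normal(0, 1)

        % realize(M) samples every function-handle entry and returns a plain matrix.
        function A = realize(M)
            A = zeros(size(M));
            for k = 1:numel(M)
                if isa(M{k}, 'function_handle')
                    A(k) = M{k}();
                else
                    A(k) = M{k};
                end
            end
        end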

    Read the article

  • Is it possible to cache all the data in a SQL Server CE database using LinqToSql?

    - by DanM
    I'm using LinqToSql to query a small, simple SQL Server CE database. I've noticed that any operations involving sub-properties are disappointingly slow. For example, if I have a Customer table that is referenced by an Order table, LinqToSql will automatically create an EntitySet<Order> property. This is a nice convenience, allowing me to do things like Customer.Order.Where(o => o.ProductName == "Stopwatch"), but for some reason SQL Server CE hangs up pretty badly when I try to do stuff like this. One of my queries, which isn't really that complicated, takes 3-4 seconds to complete. I can get the speed up to acceptable, even fast, if I just grab the two tables individually, convert them to List<Customer> and List<Order>, and then join them manually with my own query, but this throws out a lot of what makes LinqToSql so appealing. So, I'm wondering if I can somehow get the whole database into RAM and just query that way, then occasionally save it. Is this possible? How? If not, is there anything else I can do to boost the performance besides resorting to doing all the joins manually? Note: my database in its initial state is about 250 KB and I don't expect it to grow to more than 1-2 MB, so loading the data into RAM certainly wouldn't be a problem from a memory point of view. Update: here are the table definitions for the example I used in my question:

        create table Order
        (
            Id int identity(1, 1) primary key,
            ProductName ntext null
        )

        create table Customer
        (
            Id int identity(1, 1) primary key,
            OrderId int null references Order (Id)
        )
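
    One in-memory variant of this (a sketch, not a drop-in: it assumes the Customer and Order entity classes generated by LINQ to SQL from the tables above, and the query shape is invented) is to pull both tables into lists once, e.g. with dataContext.Customers.ToList(), and then run the "sub-property" style queries with LINQ to Objects, which keeps the LINQ syntax but avoids repeated round trips to the CE engine:

        using System.Collections.Generic;
        using System.Linq;

        static class CachedQueries
        {
            // Runs against lists that were loaded once from the CE database,
            // so everything here executes in RAM.
            public static IEnumerable<Customer> CustomersWhoOrdered(
                List<Customer> customers, List<Order> orders, string productName)
            {
                return from c in customers
                       join o in orders on c.OrderId equals o.Id
                       where o.ProductName == productName
                       select c;
            }
        }

    LINQ to SQL's DataLoadOptions.LoadWith can also eager-load the association in a single round trip, which may be enough on its own.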

    Read the article

  • How to load/save C++ class instance (using STL containers) to disk

    - by supert
    I have a C++ class representing a hierarchically organised data tree which is very large (~GB, basically as large as I can get away with in memory). It uses an STL list to store information at each node, plus iterators to other nodes. Each node has only one parent, but 0-10 children. Abstracted, it looks something like:

        struct node {
        public:
            node_list_iterator parent;             // iterator to the single parent node
            double node_data_array[X];
            map<int, node_list_iterator> children; // iterators to child nodes
        };

        class strategy {
        private:
            list<node> tree;        // hierarchically linked list of nodes
            struct some_other_data;
        public:
            void build();           // build the tree
            void save();            // save the tree to disk
            void load();            // load the tree from disk
            void use();             // use the tree
        };

    I would like to implement load() and save() to disk, and it should be fairly fast. However, the obvious problems are: I don't know the size in advance; the data contains iterators, which are volatile; and my ignorance of C++ is prodigious. Could anyone suggest a pure C++ solution please?
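
    The usual trick (sketched below with invented helper names, and assuming the node payload is plain data) is to replace iterators with integer indices while writing: walk the list once, record each node's position, then store every node as its payload plus the parent's index, which is enough to rebuild the links on load.

        #include <fstream>
        #include <list>

        struct FlatNode {              // what actually goes to disk
            long parent;               // index into the flattened order, -1 for root
            double data[4];            // the node payload (fixed-size here)
        };

        void save_tree(const std::list<FlatNode>& tree, const char* path) {
            std::ofstream out(path, std::ios::binary);
            std::size_t n = tree.size();
            out.write(reinterpret_cast<const char*>(&n), sizeof n);   // size header
            for (std::list<FlatNode>::const_iterator it = tree.begin();
                 it != tree.end(); ++it)
                out.write(reinterpret_cast<const char*>(&*it), sizeof *it);
        }

        void load_tree(std::list<FlatNode>& tree, const char* path) {
            std::ifstream in(path, std::ios::binary);
            std::size_t n = 0;
            in.read(reinterpret_cast<char*>(&n), sizeof n);
            for (std::size_t i = 0; i < n; ++i) {
                FlatNode fn;
                in.read(reinterpret_cast<char*>(&fn), sizeof fn);
                tree.push_back(fn);
            }
        }

    Building FlatNode from the real node type means one pre-pass that fills a map from node to position (it can be keyed on node addresses, &*it, since list nodes never move); the children map can be rebuilt from the parent indices after loading.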

    Read the article

  • C# cross thread dialogue co-operation

    - by John Attridge
    OK, I am looking at a primarily single-threaded Windows Forms application in .NET 3.0. Recently my boss had a progress dialogue added on a separate thread, so the user would see some activity when the main thread went away, did some heavy-duty work and locked up the GUI. This works fine unless the user switches applications or minimizes, as the progress form sits topmost and will not disappear with the main application. This is not so bad if there are lots of little operations, as the event loop of the main form catches up with its events when it gets time, so the minimized and active flags can be checked and the dialog thread can hide or show itself accordingly. But if a long-running SQL operation kicks off, then no events fire. I have tried intercepting the WndProc messages, but these also appear queued while a long-running SQL operation is executing. I have also tried enumerating the processes, finding the current app and checking various window-state values (IsIconic and the like) inside the progress thread, but until the SQL operation finishes none of these get updated. Removing the topmost flag causes the dialog to disappear when another app activates, but if the main app is then brought back it does not appear again. So I need a way to find out whether the other thread's window is minimized or no longer active that does not involve querying the actual thread, as that locks until the SQL operation finishes. Now I know that this is not the best way to write this, and it would be better to have all the heavy processing on separate threads, leaving the GUI free, but as this is a huge, ancient legacy app, the time to re-write it in that fashion will not be provided, so I have to work with what I have got. Any help is appreciated.

    Read the article

  • Extra NotifyIcon shown in system tray

    - by Kettch19
    I'm having an issue with an app where my NotifyIcon displays an extra icon. The steps to reproduce it are easy, but the problem is that the extra icon shows up after any of the actual codebehind we've added fires. Put simply, clicking a button triggers execution of method FooBar(), which runs all the way through fine, but its primary duty is to fire a BackgroundWorker to log into another of our apps. It only appears if this particular button is clicked. Strangely enough, we have a WndProc method override, and if I step through until the extra NotifyIcon appears, it always appears during this method, so something else beyond the codebehind must be triggering the behavior. Our WndProc method is currently (although I don't think it's caused by the WndProc):

        Protected Overrides Sub WndProc(ByRef m As System.Windows.Forms.Message)
            'Check for WM_COPYDATA message from other app or drag/drop action and handle message
            If m.Msg = NativeMethods.WM_COPYDATA Then
                ' get the standard message structure from lparam
                Dim CD As NativeMethods.COPYDATASTRUCT = m.GetLParam(GetType(NativeMethods.COPYDATASTRUCT))
                'setup byte array
                Dim B(CD.cbData) As Byte
                'copy data from memory into array
                Runtime.InteropServices.Marshal.Copy(New IntPtr(CD.lpData), B, 0, CD.cbData)
                'Get message as string and process
                ProcessWMCopyData(System.Text.Encoding.Default.GetString(B))
                'empty array
                Erase B
                'set message result to 'true', meaning message handled
                m.Result = New IntPtr(1)
            End If
            'pass on result and all messages not handled by this app
            MyBase.WndProc(m)
        End Sub

    The only place in the code where the NotifyIcon in question is manipulated at all is in the following event handler (again, I don't think this is the culprit, but just for more info):

        Private Sub TrayIcon_MouseDoubleClick(ByVal sender As System.Object, ByVal e As System.Windows.Forms.MouseEventArgs) Handles TrayIcon.MouseDoubleClick
            If Me.Visible Then
                Me.Hide()
            Else
                PositionBottomRight()
                Me.Show()
            End If
        End Sub

    The BackgroundWorker's DoWork is as follows (just a class call to log in to our other app, but again just for info):

        Private Sub LoginBackgroundWorker_DoWork(ByVal sender As Object, ByVal e As System.ComponentModel.DoWorkEventArgs) Handles LoginBackgroundWorker.DoWork
            Settings.IsLoggedIn = _wdService.LogOn(Settings.UserName, Settings.Password)
        End Sub

    Does anyone else have ideas on what might be causing this or how to further debug it? I've been banging my head on this without seeing a pattern, so another set of eyes would be extremely appreciated. :) I've posted this on the MSDN WinForms forums as well and have had no luck there so far either.

    Read the article

  • No class def found error for JUnit Test on Android

    - by J Bellamy
    I am having some very bizarre behaviour. I have a large number of test cases for my Android application, and they all work except for one. When I run this one I get a java.lang.NoClassDefFoundError: org.JUnit.test. Yes, I have the JUnit 4 library imported into the project, and my other JUnit tests are running without any problems. What is particularly bizarre is that before I hit this problem I had an error in my code: basically, I tried writing a file to a read-only folder. When that occurred, the JUnit test would execute up to the point where it hit an IO exception for accessing a location it cannot access. I fixed this problem, and suddenly the Android emulator doesn't seem to know what org.JUnit.test is. I have examined the run configuration for this test class, and it is the same as my others. It is in the same folder as the other tests as well. It also uses the same import statements. Any idea what is going on? I am using the Android 10 emulator and Eclipse version 3.7.2. Edit: to clarify, the error I get appears in Logcat and not in my Eclipse project.

    Read the article

  • finding long repeated substrings in a massive string

    - by Will
    I naively imagined that I could build a suffix trie where I keep a visit count for each node, and then the deepest nodes with counts greater than one are the result set I'm looking for. I have a really, really long string (hundreds of megabytes). I have about 1 GB of RAM. This is why building a suffix trie with counting data is too inefficient space-wise to work for me. To quote Wikipedia's Suffix tree article: "storing a string's suffix tree typically requires significantly more space than storing the string itself. The large amount of information in each edge and node makes the suffix tree very expensive, consuming about ten to twenty times the memory size of the source text in good implementations. The suffix array reduces this requirement to a factor of four, and researchers have continued to find smaller indexing structures." And those were Wikipedia's comments on the tree, not the trie. How can I find long repeated sequences in such a large amount of data, and in a reasonable amount of time (e.g. less than an hour on a modern desktop machine)? (Some Wikipedia links to avoid people posting them as the 'answer': Algorithms on strings and especially Longest repeated substring problem ;-) )
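
    One memory-friendlier approach (a sketch only, not tuned for hundreds of megabytes: a production version would use a rolling hash and a verification pass to rule out collisions) is to binary-search on the repeat length L, checking each candidate L by hashing every window of that length and looking for a duplicate. This works because if a repeat of length L exists, so does a repeat of every shorter length.

        import hashlib

        def has_repeat_of_length(s, L):
            """True if some substring of length L occurs at least twice (s is bytes)."""
            if L <= 0:
                return True
            seen = set()
            for i in range(len(s) - L + 1):
                h = hashlib.md5(s[i:i + L]).digest()   # fixed-size key per window
                if h in seen:
                    return True
                seen.add(h)
            return False

        def longest_repeat_length(s):
            """Binary search on the length of the longest repeated substring."""
            lo, hi = 0, len(s) - 1                     # a repeat of length lo always exists
            while lo < hi:
                mid = (lo + hi + 1) // 2
                if has_repeat_of_length(s, mid):
                    lo = mid
                else:
                    hi = mid - 1
            return lo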

    Read the article

  • Caching vector addition over changing collections

    - by DRMacIver
    I have the following setup: I have a largish number of uuids (currently about 10k but expected to grow unboundedly - they're user IDs) and a function f : id -> sparse vector with 32-bit integer values (no need to worry about precision). The function is reasonably expensive (not outrageously so, but probably on the order of a few hundred milliseconds for a given id). The dimension of the sparse vectors should be assumed to be infinite, as new dimensions can appear over time, but in practice it is unlikely to ever exceed about 20k (and individual results of f are unlikely to have more than a few hundred non-zero values). I want to support the following operations efficiently: add a new ID to the collection; invalidate an existing ID; and retrieve the sum of f(id) over all IDs in O(changes since last retrieval). That is, I want to cache the sum of the vectors in a way that's reasonable to do incrementally. One option would be to support a remove-ID operation and treat invalidation as a remove followed by an add. The problem with this is that it requires us to keep track of all the old values of f, which is expensive in space. I potentially need to use many instances of this sort of cached structure, so I would like to avoid that. The likely usage pattern is that new IDs are added at a fairly continuous rate and are frequently invalidated at first. IDs which have been invalidated recently are much more likely to be invalidated again than ones which have remained valid for a long time, but in principle an old ID can still be invalidated. Ideally I don't want to do this in memory (or at least I want a way that lets me save the result to disk efficiently), so an idea which lets me piggyback off an existing DB implementation of some sort would be especially appreciated.
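
    A minimal sketch of the "piggyback on an existing store" idea (all names invented; shelve is in the Python standard library and keeps the old f(id) vectors on disk rather than in memory, so the in-memory cost is just the running total):

        import shelve
        from collections import Counter

        class CachedVectorSum(object):
            def __init__(self, f, path):
                self.f = f                      # f(uid) -> dict of {dimension: int}
                self.store = shelve.open(path)  # old vectors live on disk
                self.total = Counter()          # running sum, kept in memory

            def add(self, uid):                 # uid must be a string key
                vec = self.f(uid)
                self.store[uid] = vec
                self.total.update(vec)          # O(size of f(uid))

            def invalidate(self, uid):
                old = self.store.get(uid, {})
                self.total.subtract(old)        # undo the old contribution
                self.add(uid)                   # recompute and re-add

            def retrieve(self):
                return dict(self.total)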

    Read the article

  • How do I efficiently parse a CSV file in Perl?

    - by Mike
    I'm working on a project that involves parsing a large CSV-formatted file in Perl and am looking to make things more efficient. My approach has been to split() the file by lines first, and then split() each line again by commas to get the fields. But this is suboptimal since at least two passes over the data are required (once to split by lines, then once again for each line). This is a very large file, so cutting the processing in half would be a significant improvement to the entire application. My question is: what is the most time-efficient means of parsing a large CSV file using only built-in tools? Note: each line has a varying number of tokens, so we can't just ignore lines and split by commas only. Also, we can assume fields will contain only alphanumeric ASCII data (no special characters or other tricks). Also, I don't want to get into parallel processing, although it might work effectively. Edit: it can only involve built-in tools that ship with Perl 5.8. For bureaucratic reasons, I cannot use any third-party modules (even if hosted on CPAN). Another edit: let's assume that our solution is only allowed to deal with the file data once it is entirely loaded into memory. Yet another edit: I just grasped how stupid this question is. Sorry for wasting your time. Voting to close.

    Read the article

  • Why doesn't this program print?

    - by Alex
    What I'm trying to do is print my two-dimensional array, but I'm lost. The first function runs perfectly; the problem is the second one, or maybe the way I'm passing the array to the "print" function.

        #include <stdio.h>
        #include <stdlib.h>

        #define ROW 2
        #define COL 2

        //Memory allocation and values input
        void func(int **arr)
        {
            int i, j;
            arr = (int**)calloc(ROW, sizeof(int*));
            for (i = 0; i < ROW; i++)
                arr[i] = (int*)calloc(COL, sizeof(int));
            printf("Input: \n");
            for (i = 0; i < ROW; i++)
                for (j = 0; j < COL; j++)
                    scanf_s("%d", &arr[i][j]);
        }

        //This is where the problem begins or maybe it's in the main
        void print(int **arr)
        {
            int i, j;
            for (i = 0; i < ROW; i++)
            {
                for (j = 0; j < COL; j++)
                    printf("%5d", arr[i][j]);
                printf("\n");
            }
        }

        void main()
        {
            int *arr;
            func(&arr);
            print(&arr); //maybe I'm not passing the arr right ?
        }
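
    The most likely culprit (one possible fix is sketched below, and it may not be the only issue): func receives a copy of the pointer, so the calloc result is assigned to that local copy and main's arr is never set; print then dereferences an uninitialized pointer. Passing the address of the pointer through consistently, or returning the allocated pointer, makes it work:

        #include <stdio.h>
        #include <stdlib.h>

        #define ROW 2
        #define COL 2

        /* Allocate and fill the array; note the extra level of indirection. */
        void func(int ***arr)
        {
            *arr = malloc(ROW * sizeof **arr);
            for (int i = 0; i < ROW; i++) {
                (*arr)[i] = malloc(COL * sizeof ***arr);
                for (int j = 0; j < COL; j++)
                    if (scanf("%d", &(*arr)[i][j]) != 1)
                        (*arr)[i][j] = 0;
            }
        }

        void print(int **arr)
        {
            for (int i = 0; i < ROW; i++) {
                for (int j = 0; j < COL; j++)
                    printf("%5d", arr[i][j]);
                printf("\n");
            }
        }

        int main(void)
        {
            int **arr;            /* pointer to pointer, not a single pointer */
            func(&arr);
            print(arr);           /* pass the pointer itself, not its address */
            return 0;
        }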

    Read the article

  • Aliasing `T*` with `char*` is allowed. Is it also allowed the other way around?

    - by StackedCrooked
    Note: This question has been renamed and reduced to make it more focused and readable. Most of the comments refer to the old text. According to the standard, objects of different type may not share the same memory location. So this would not be legal:

        int i = 0;
        short * s = reinterpret_cast<short*>(&i); // BAD!

    The standard, however, allows an exception to this rule: any object may be accessed through a pointer to char or unsigned char:

        int i = 0;
        char * c = reinterpret_cast<char*>(&i); // OK

    However, it is not clear to me whether this is also allowed the other way around. For example:

        char * c = read_socket(...);
        unsigned * u = reinterpret_cast<unsigned*>(c); // huh?

    Summary of the answers: the answer is NO, for two reasons. First, you can only access an existing object as char*, and there is no object in my sample code, only a byte buffer. Second, the pointer address may not have the right alignment for the target object, in which case dereferencing it would result in undefined behavior: on the Intel and AMD platforms it will result in a performance overhead, while on ARM it will trigger a CPU trap and your program will be terminated! This is a simplified explanation. For more detailed information see the answers by @Luc Danton, @Cheers and hth. - Alf and @David Rodríguez.
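
    The standard-conforming way to get a typed value out of such a byte buffer (a small sketch; the buffer and length parameters here are invented) is to copy the bytes into a real object with memcpy, which sidesteps both the aliasing and the alignment problems:

        #include <cstring>
        #include <cassert>

        unsigned read_unsigned(const char* buffer, std::size_t len)
        {
            assert(len >= sizeof(unsigned));
            unsigned value;                             // a real unsigned object exists here
            std::memcpy(&value, buffer, sizeof value);  // byte-wise copy: no aliasing issue,
                                                        // no alignment requirement on 'buffer'
            return value;
        }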

    Read the article

  • NSManagedObject How To Reload

    - by crissag
    I have a view that consists of a table of existing objects and an Add button, which allows the user to create a new object. When the user presses Add, the object is created in the list view controller, so that the object will be part of that managed object context (via the NSEntityDescription insertNewObjectForEntityForName: method). The Add view has a property for the managed object. In the list view controller, I create an Add view controller, set the property to the managed object I created, and then push the Add view onto the navigation stack. In the Add view, I have two buttons for save and cancel. On save, I save the managed object and pass it back to the list view controller via a delegate method. If the user cancels, then I delete the object and pass nil back to the list view controller. The complication I am having in the Add view is related to a UIImagePickerController. In the Add view, I have a button which allows the user to take a photo of the object (or use an existing photo from the photo library). However, the process of transferring to the UIImagePickerController and having the user use the camera results in a didReceiveMemoryWarning in the Add view controller. Further, the view was unloaded, which also caused my NSManagedObject to get clobbered. My question is: how do you go about reloading the NSManagedObject in the case where it was released because of the low-memory situation?
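
    One common pattern (a sketch only; it assumes the Add view controller keeps the object's NSManagedObjectID in an addedObjectID property and has a reference to the managed object context) is to hold on to the objectID, which survives a memory warning, and re-fetch the object from the context when the view reloads:

        // Stash the ID when the object is first handed to the Add view controller.
        self.addedObjectID = [self.managedObject objectID];

        // Later, e.g. in viewDidLoad after a memory warning unloaded the view,
        // turn the ID back into a live managed object.
        NSError *error = nil;
        NSManagedObject *object =
            [self.managedObjectContext existingObjectWithID:self.addedObjectID
                                                      error:&error];
        if (object != nil) {
            self.managedObject = object;
        } else {
            NSLog(@"Could not reload managed object: %@", error);
        }

    Note that an unsaved object has a temporary ID, so obtainPermanentIDsForObjects:error: may need to be called before stashing it.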

    Read the article

  • Outlook VSTO AddIn for Meetings

    - by BigDubb
    We have created a VSTO addin for Outlook Meetings. As part of this we trap on the SendEvent of the message on the FormRegionShowing event: _apptEvents.Send += new Microsoft.Office.Interop.Outlook.ItemEvents_SendEventHandler(_apptEvents_Send); The method _apptEvents_Send then tests on a couple of properties and exits where appropriate. private void _apptEvents_Send(ref bool Cancel) { if (!_Qualified) { MessageBox.Show("Meeting has not been qualified", "Not Qualified Meeting", MessageBoxButtons.OK, MessageBoxIcon.Information); chkQualified.Focus(); Cancel = true; } } The problem that we're having is that some users' messages get sent twice. Once when the meeting is sent and a second time when the user re-opens outlook. I've looked for memory leaks, thinking that something might not be getting disposed of properly, and have added explicit object disposal on all finally calls to try and make sure resources are managed, but still getting the functionality incosistently across the organization. i.e. I never encountered the problem during development, nor other developers during testing. All users are up to date on framework (3.5 SP1) and Hotfixes for Outlook. Does anyone have any ideas on what might be causing this? Any ideas anyone might have would be greatly appreciated.

    Read the article

  • C++ class derivation and superconstructor confusion

    - by LukeN
    Hey, in some tutorial C++ code I found this particular piece of confusion:

        PlasmaTutorial1::PlasmaTutorial1(QObject *parent, const QVariantList &args)
            : Plasma::Applet(parent, args), // <- Okay, Plasma = namespace, Applet = class
              m_svg(this),                  // <- A member function of class "Applet"?
              m_icon("document")            // <- ditto?
        {
            m_svg.setImagePath("widgets/background"); // this will get us the standard applet background, for free!
            setBackgroundHints(DefaultBackground);
            resize(200, 200);
        }

    I'm not new to object-oriented programming, so class derivation and super-classes are nothing complicated, but this syntax got me confused. The header file defines the class like this:

        class PlasmaTutorial1 : public Plasma::Applet {

    Similar to above: namespace Plasma, class Applet. But what's the public doing there? I fear that I already know the concept but don't grasp the C++ syntax/way of doing it. In this question I picked up that these are called "superconstructors", at least that's what stuck in my memory, but I don't get this to the full extent. If we glance back at the first snippet, we see Class::Constructor(...) : NS::SuperClass(...); all fine till here. But what are m_svg(this) and m_icon("document") doing there? Is this some kind of method to make these particular functions known to the derived class? Is this part of C++ basics or something more involved? While I'm not completely lost in C++, I feel much more at home in C :) Most of the OOP I have done so far was done in D, Ruby or Python. For example, in D I would just define class MyClass : MySuperClass, override what I needed to, and call the super class' constructor if I needed to.
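
    What the snippet shows is a constructor initializer list, not calls into the base class: the first entry invokes the base-class constructor, and m_svg(this) / m_icon("document") construct the derived class's own data members. A stripped-down sketch of the same shape (all names invented here):

        #include <string>

        class Base {
        public:
            Base(int id) : id_(id) {}
        private:
            int id_;
        };

        class Widget {                       // stands in for the Svg/Icon member types
        public:
            explicit Widget(const std::string& name) : name_(name) {}
        private:
            std::string name_;
        };

        class Derived : public Base {        // "public" = public inheritance: Base's
        public:                              // public members stay public in Derived
            Derived(int id)
                : Base(id),                  // base-class constructor
                  m_widget("document")       // member initializer, not a function call
            {}
        private:
            Widget m_widget;                 // constructed before the body runs
        };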

    Read the article

  • Question on passing a pointer to a structure in C to a function?

    - by worlds-apart89
    Below, I wrote a primitive singly linked list in C. The function "addEditNode" MUST receive a pointer by value, which, I am guessing, means we can edit the data the pointer points to but cannot point it at something else. If I allocate memory using malloc in "addEditNode", can I still see the contents of first->next when the function returns? My second question is: do I have to free first->next, or is it only first that I should free? I am running into segmentation faults on Linux.

        #include <stdio.h>
        #include <stdlib.h>

        typedef struct list_node list_node_t;

        struct list_node {
            int value;
            list_node_t *next;
        };

        void addEditNode(list_node_t *node)
        {
            node->value = 10;
            node->next = (list_node_t*) malloc(sizeof(list_node_t));
            node->next->value = 1;
            node->next->next = NULL;
        }

        int main()
        {
            list_node_t *first = (list_node_t*) malloc(sizeof(list_node_t));
            first->value = 1;
            first->next = NULL;
            addEditNode(first);
            free(first);
            return 0;
        }
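
    On the second question: yes, every malloc'd node needs its own free, so freeing only first leaks the node allocated inside addEditNode. A small sketch of the usual pattern, written as a separate helper that uses the list_node_t type from the question:

        #include <stdlib.h>

        /* Frees every node reachable from 'first', including 'first' itself. */
        void free_list(list_node_t *first)
        {
            while (first != NULL) {
                list_node_t *next = first->next; /* remember the rest before freeing */
                free(first);
                first = next;
            }
        }

        /* In main: call free_list(first); instead of free(first); */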

    Read the article

  • Why use shorter VARCHAR(n) fields?

    - by chryss
    It is frequently advised to choose database field sizes to be as narrow as possible. I am wondering to what degree this applies to SQL Server 2005 VARCHAR columns: storing 10-letter English words in a VARCHAR(255) field will not take up more storage than in a VARCHAR(10) field. Are there other reasons to restrict the size of VARCHAR fields to stick as closely as possible to the size of the data? I'm thinking of: (1) performance: is there an advantage to using a smaller n when selecting, filtering and sorting on the data? (2) memory, including on the application side (C++)? (3) style/validation: how important do you consider restricting column size to force nonsensical data imports to fail (such as 200-character surnames)? (4) anything else? Background: I help data integrators with the design of data flows into a database-backed system. They have to use an API that restricts their choice of data types. For character data, only VARCHAR(n) with n <= 255 is available; CHAR, NCHAR, NVARCHAR and TEXT are not. We're trying to lay down some "good practices" rules, and the question has come up whether there is a real detriment to using VARCHAR(255) even for data where real maximum sizes will never exceed 30 bytes or so. Typical data volumes for one table are 1-10 million records with up to 150 attributes. Query performance (SELECT, with frequently extensive WHERE clauses) and application-side retrieval performance are paramount.

    Read the article

  • How can I use io.StringIO() with the csv module?

    - by Tim Pietzcker
    I tried to backport a Python 3 program to 2.7, and I'm stuck with a strange problem:

        >>> import io
        >>> import csv
        >>> output = io.StringIO()
        >>> output.write("Hello!")    # Fail: io.StringIO expects Unicode
        Traceback (most recent call last):
          File "<stdin>", line 1, in <module>
        TypeError: unicode argument expected, got 'str'
        >>> output.write(u"Hello!")   # This works as expected.
        6L
        >>> writer = csv.writer(output)        # Now let's try this with the csv module:
        >>> csvdata = [u"Hello", u"Goodbye"]   # Look ma, all Unicode! (?)
        >>> writer.writerow(csvdata)           # Sadly, no.
        Traceback (most recent call last):
          File "<stdin>", line 1, in <module>
        TypeError: unicode argument expected, got 'str'

    According to the docs, io.StringIO() returns an in-memory stream for Unicode text. It works correctly when I try and feed it a Unicode string manually. Why does it fail in conjunction with the csv module, even if all the strings being written are Unicode strings? Where does the str come from that causes the exception? (I do know that I can use StringIO.StringIO() instead, but I'm wondering what's wrong with io.StringIO() in this scenario.)
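
    The str comes from the csv module itself: in Python 2 it is a byte-oriented module and writes encoded byte strings to the file object, which io.StringIO rejects. A sketch of one common workaround on 2.7 is to encode the fields yourself and give csv a byte stream such as io.BytesIO:

        import csv
        import io

        output = io.BytesIO()                      # byte stream: what Py2's csv expects
        writer = csv.writer(output)

        csvdata = [u"Hello", u"Goodbye"]
        writer.writerow([field.encode("utf-8") for field in csvdata])

        # The buffer now holds UTF-8 bytes; decode when Unicode text is needed again.
        text = output.getvalue().decode("utf-8")
        print repr(text)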

    Read the article

  • Reading and writing to files simultaneously?

    - by vipersnake005
    Moved the question here. Suppose I want to store 1,000,000,000 integers and cannot fit them in memory. I would use a file (which can easily handle so much data). How can I read and write it at the same time? Using fstream file("file.txt", ios::out | ios::in); doesn't create a file in the first place. But supposing the file exists, I am unable to do reading and writing simultaneously. What I mean is this: let the contents of the file be 111111. Then if I run:

        #include <fstream>
        #include <iostream>
        using namespace std;

        int main()
        {
            fstream file("file.txt", ios::in | ios::out);
            char x;
            while (file >> x)
            {
                file << '0';
            }
            return 0;
        }

    Shouldn't the file's contents now be 101010? Read one character and then overwrite the next one with 0? Or, in case the entire contents were read at once into some buffer, should there not be at least one 0 in the file, i.e. 1111110? But the contents remain unaltered. Please explain. Thank you.
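
    A stream opened for update shares one position for reading and writing, and the library requires a seek when switching between the two; without that seek the writes are not guaranteed to land anywhere. A sketch that does what the question intends (read a character, overwrite the next one with '0'), assuming file.txt already exists:

        #include <fstream>

        int main()
        {
            std::fstream file("file.txt", std::ios::in | std::ios::out);
            char x;
            while (file.get(x)) {                  // read one character
                std::streampos pos = file.tellg(); // position of the next character
                file.seekp(pos);                   // switch to writing there
                if (!file.put('0'))                // overwrite it with '0'
                    break;
                file.seekg(file.tellp());          // switch back to reading after it
            }
            return 0;                              // "111111" becomes "101010"
        }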

    Read the article

  • Drawing line graphics leads Flash to spiral out of control!

    - by drpepper
    Hi, I'm having problems with some AS3 code that simply draws on a Sprite's Graphics object. The drawing happens as part of a larger procedure called on every ENTER_FRAME event of the stage. Flash neither crashes nor returns an error. Instead, it starts running at 100% CPU and grabs all the memory that it can, until I kill the process manually or my computer buckles under the pressure when it gets up to around 2-3 GB. This happens at a random time, and without any noticeable slowdown beforehand. WTF? Has anyone seen anything like this? PS: I used to do the drawing within a MOUSE_MOVE event handler, which brought this problem on even faster. PPS: I'm developing on Linux, but reproduced the same problem on Windows. UPDATE: You asked for some code, so here we are. The drawing function looks like this:

        public static function drawDashedLine(i_graphics : Graphics, i_from : Point, i_to : Point, i_on : Number, i_off : Number) : void
        {
            const vecLength : Number = Point.distance(i_from, i_to);
            i_graphics.moveTo(i_from.x, i_from.y);
            var dist : Number = 0;
            var lineIsOn : Boolean = true;
            while (dist < vecLength)
            {
                dist = Math.min(vecLength, dist + (lineIsOn ? i_on : i_off));
                const p : Point = Point.interpolate(i_from, i_to, 1 - dist / vecLength);
                if (lineIsOn)
                    i_graphics.lineTo(p.x, p.y);
                else
                    i_graphics.moveTo(p.x, p.y);
                lineIsOn = !lineIsOn;
            }
        }

    and is called like this (m_graphicsLayer is a Sprite):

        m_graphicsLayer.graphics.clear();
        if (m_destinationPoint)
        {
            m_graphicsLayer.graphics.lineStyle(2, m_fixedAim ? 0xff0000 : 0x333333, 1);
            drawDashedLine(m_graphicsLayer.graphics, m_initialPos, m_destinationPoint, 10, 10);
        }

    Read the article
