Search Results

Search found 16794 results on 672 pages for 'memory usage'.

Page 606/672

  • Why does an MFC application behave mysteriously in an encrypted hard drive environment?

    - by MauriceL
    I'm working on a bug where an MFC application does weird things when Sophos SafeGuard hard drive encryption is installed. I'm sorry to be so vague here, but I'm writing this away from the office, so this is all from my (poor) memory. Three things I've noticed:

    1. AfxGetResourceHandle() doesn't return a consistent resource handle. There is a single case where we try to load a string resource, and for some reason the resource handle we get from this method is different from the one used for all the other strings.

    2. We can't construct a CDocumentTemplate. There is a trace error which I can't seem to recall; I will edit and post it when I'm in tomorrow.

    3. The behaviour appears to manifest in the Visual Studio 2005 version of the project, but not the Visual Studio 2008 version. Unfortunately, moving to the 2008 version is not an option.

    The bug is not always reproducible if I step through with a debugger. Also, bringing up debug message boxes changes the behaviour, which leads me to think there is some kind of race condition in the way MFC events are being handled, though I'm not sure how I'd ever know for sure, or what I could do about it if I did. I think there's some underlying reason the app is behaving weirdly; what I've posted are more symptoms. Can anyone think of what I should check for? I've run Windows Update on the test environment to ensure everything was up to date, and I've examined the process in Procmon to see whether the disk encryption was getting in the way and conflicting with files. It didn't appear to be, but it does appear to be involved in some way, as our app accesses Sophos-related paths in the temp directory.

    Read the article

  • GUI for server-client program

    - by sksingh73
    I am making a server-client application in C++, in which I am also using shared memory and file read-write operations. My program is complete, and I now want to make a GUI for it. Someone suggested QT4, but when I tried it, I found I would have to rewrite 80% of the code because Qt has its own classes and variables, and I don't want to do that. I want suggestions from you in this regard. My requirements for the GUI are very simple: a main form with two text boxes in which all messages sent and received by client and server are shown, plus a line-edit box through which I can send messages to the other end. I don't know how to make this GUI. Someone suggested Tcl/Tk, someone else PHP/SWIG, and I am not sure how to go about this. My only requirement is to build this simple GUI with a minimum of changes to my code. Thanks.
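    One low-churn option, sketched below and not a tested recipe: keep the existing client binary untouched and wrap it in a thin Qt shell that runs it as a child process, assuming it can accept requests on stdin and print traffic on stdout. The ./my_client path is hypothetical.

        // Minimal Qt5 sketch: a GUI wrapper that never links against the
        // existing code; it just pipes the child process's stdin/stdout.
        #include <QApplication>
        #include <QProcess>
        #include <QTextEdit>
        #include <QLineEdit>
        #include <QVBoxLayout>
        #include <QWidget>

        int main(int argc, char *argv[]) {
            QApplication app(argc, argv);
            QWidget window;
            QTextEdit *log = new QTextEdit;       // shows messages sent/received
            log->setReadOnly(true);
            QLineEdit *input = new QLineEdit;     // type messages to send here

            QVBoxLayout *layout = new QVBoxLayout(&window);
            layout->addWidget(log);
            layout->addWidget(input);

            QProcess client;
            client.start("./my_client");          // hypothetical path to the existing binary

            // Append whatever the client prints to the log window.
            QObject::connect(&client, &QProcess::readyReadStandardOutput, [&]() {
                log->append(QString::fromLocal8Bit(client.readAllStandardOutput()));
            });
            // Forward typed lines to the client's stdin.
            QObject::connect(input, &QLineEdit::returnPressed, [&]() {
                client.write((input->text() + "\n").toLocal8Bit());
                input->clear();
            });

            window.show();
            return app.exec();
        }

    The same wrapper idea works with Tcl/Tk or any other toolkit, since the GUI never touches the existing classes.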

    Read the article

  • Question on passing a pointer to a structure in C to a function?

    - by worlds-apart89
    Below, I wrote a primitive singly linked list in C. The function addEditNode must receive a pointer by value, which, I am guessing, means we can edit the data the pointer refers to but cannot point it at something else. If I allocate memory using malloc in addEditNode, can I see the contents of first->next after the function returns? Second question: do I have to free first->next, or is it only first that I should free? I am running into segmentation faults on Linux.

        #include <stdio.h>
        #include <stdlib.h>

        typedef struct list_node list_node_t;

        struct list_node {
            int value;
            list_node_t *next;
        };

        void addEditNode(list_node_t *node) {
            node->value = 10;
            node->next = (list_node_t*) malloc(sizeof(list_node_t));
            node->next->value = 1;
            node->next->next = NULL;
        }

        int main() {
            list_node_t *first = (list_node_t*) malloc(sizeof(list_node_t));
            first->value = 1;
            first->next = NULL;
            addEditNode(first);
            free(first);
            return 0;
        }
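    For what it's worth: yes, first->next is visible after the call, because addEditNode modified the node that first points to, not the pointer itself. But every node that was malloc'd needs its own free, so free(first) alone leaks the second node. A minimal sketch of releasing the whole chain (freeList is a made-up name):

        /* Walk the list, saving each link before freeing the node it lives in. */
        void freeList(list_node_t *node) {
            while (node != NULL) {
                list_node_t *next = node->next;
                free(node);
                node = next;
            }
        }

    Calling freeList(first) in place of free(first) releases both nodes; a segmentation fault usually comes from touching a node after it has been freed, not from the leak itself.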

    Read the article

  • Parallel doseq for Clojure

    - by andrew cooke
    I haven't used multithreading in Clojure at all, so I am unsure where to start. I have a doseq whose body can run in parallel. What I'd like is for there always to be 3 threads running (leaving 1 core free) that evaluate the body in parallel until the range is exhausted. There's no shared state and nothing complicated; the equivalent of Python's multiprocessing would be just fine. So, something like:

        (dopar 3 [i (range 100)]
          ;; repeated 100 times in 3 parallel threads...
          ...)

    Where should I start looking? Is there a command for this? A standard package? A good reference? So far I have found pmap and could use that, but how do I restrict it to 3 at a time? It looks like it uses 32 at a time (no, the source says 2 + the number of processors), but it seems like this is a basic primitive that should already exist somewhere. To clarify: I really would like to control the number of threads. My processes are long-running and use a fair amount of memory, so creating a large number and hoping things work out OK isn't a good approach (each one uses a significant chunk of available memory). Update: I've started writing a macro that does this, and I need a semaphore (or a mutex, or an atom I can wait on). Do semaphores exist in Clojure? Or should I use a ThreadPoolExecutor? It seems odd to have to pull so much in from Java; I thought parallel programming in Clojure was supposed to be easy. Maybe I am thinking about this completely the wrong way? Hmmm. Agents?
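    A sketch of one way to get exactly n worker threads, leaning on a plain Java fixed thread pool rather than a semaphore (dopar* and do-work are made-up names):

        (defn dopar* [n f coll]
          (let [pool  (java.util.concurrent.Executors/newFixedThreadPool n)
                tasks (mapv (fn [x] (.submit pool ^Callable (fn [] (f x)))) coll)]
            (doseq [t tasks] (.get t))   ; block until every body has finished
            (.shutdown pool)))

        ;; usage: the doseq body becomes a function of i
        (dopar* 3 (fn [i] (do-work i)) (range 100))

    The pool never runs more than n bodies at once, which keeps the memory footprint bounded regardless of how long the range is.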

    Read the article

  • Question about array subscripting in C#

    - by Michael J
    Back in the old days of C, one could use array subscripting to address storage in very useful ways. For example, one could declare an array representing an EEPROM image with 8-bit words:

        BYTE eepromImage[1024] = { ... };

    and later refer to that array as if it were really multi-dimensional storage:

        BYTE mpuImage[2][512] = eepromImage;

    I'm sure I have the syntax wrong, but I hope you get the idea. Anyway, this projected a two-dimensional image of what is really single-dimensional storage; the two-dimensional projection represents the EEPROM image when loaded into the memory of an MPU with 16-bit words. In C, one could reference the storage multi-dimensionally and change values, and the changed values would show up in the real (single-dimension) storage almost as if by magic. Is it possible to do the same thing using C#? Our current solution uses multiple arrays and event handlers to keep things synchronized. This kind of works, but it is additional complexity that we would like to avoid if there is a better way.
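    A sketch of one C# equivalent: a small wrapper with a two-index indexer that maps [row, col] onto the flat array, so both views share the same storage with no copying and no events (ByteView2D is a made-up name):

        // A 2-D "view" over a flat byte array: writes through the view hit
        // the exact same storage the flat array uses.
        class ByteView2D
        {
            private readonly byte[] storage;
            private readonly int cols;

            public ByteView2D(byte[] storage, int cols)
            {
                this.storage = storage;
                this.cols = cols;
            }

            // row/column subscripting mapped onto the single-dimension storage
            public byte this[int row, int col]
            {
                get { return storage[row * cols + col]; }
                set { storage[row * cols + col] = value; }
            }
        }

        // usage:
        // byte[] eepromImage = new byte[1024];
        // var mpuImage = new ByteView2D(eepromImage, 512);
        // mpuImage[1, 3] = 0xFF;   // immediately visible as eepromImage[515]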

    Read the article

  • Need help with a Hashtable implementation

    - by rafael
    Hi all, I'm quite a beginner in C#. I tried to write a program that extracts words from an entered string, where the user enters a minimum word length to filter the output. My code doesn't look good or intuitive: I used two arrays, countStr to store the words and countArr to store the length of each corresponding word. The problem is that I need to use a hashtable instead of those two arrays, because both of their sizes depend on the length of the string the user enters, and I think that's not too safe for memory. Here's my humble code; again, I'm trying to replace those two arrays with one hashtable. How can this be done?

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using System.Collections;

        namespace ConsoleApplication2
        {
            class Program
            {
                static void Main(string[] args)
                {
                    int i = 0;
                    int j = 0;
                    string myString = "";
                    int counter = 0;
                    int detCounter = 0;

                    myString = Console.ReadLine();
                    string[] countStr = new string[myString.Length];
                    int[] countArr = new int[myString.Length];

                    Console.Write("Enter minimum word length:");
                    detCounter = int.Parse(Console.ReadLine());

                    for (i = 0; i < myString.Length; i++)
                    {
                        if (myString[i] != ' ')
                        {
                            counter++;
                            countStr[j] += myString[i];
                        }
                        else
                        {
                            countArr[j] = counter;
                            counter = 0;
                            j++;
                        }
                    }
                    if (i == myString.Length)
                    {
                        countArr[j] = counter;
                    }

                    for (i = 0; i < myString.Length; i++)
                    {
                        if (detCounter <= countArr[i])
                        {
                            Console.WriteLine(countStr[i]);
                        }
                    }
                    Console.ReadLine();
                }
            }
        }
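    A sketch of the same program with the two parallel arrays replaced by one dictionary (Dictionary<string, int> is the typed hashtable from System.Collections.Generic; the non-generic Hashtable would work the same way). A plain List<string> would also do, since a word's length can always be recomputed from the word itself:

        using System;
        using System.Collections.Generic;

        class Program
        {
            static void Main()
            {
                Console.Write("Enter a string: ");
                string input = Console.ReadLine();
                Console.Write("Enter minimum word length: ");
                int minLength = int.Parse(Console.ReadLine());

                // one structure instead of countStr + countArr;
                // it grows per word, not per character of the input
                var wordLengths = new Dictionary<string, int>();
                foreach (string word in input.Split(new[] { ' ' },
                             StringSplitOptions.RemoveEmptyEntries))
                {
                    wordLengths[word] = word.Length;  // duplicates just overwrite
                }

                foreach (var pair in wordLengths)
                {
                    if (pair.Value >= minLength)
                        Console.WriteLine(pair.Key);
                }
            }
        }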

    Read the article

  • Pointer and malloc issue

    - by Andy
    I am fairly new to C and am getting stuck with arrays and pointers when they refer to strings. I can ask for input of two numbers (ints) and then return the one I want (first number or second number) without any issues. But when I request names and try to return them, the program crashes after I enter the first name, and I'm not sure why. In theory, I am reserving memory for the first name and then expanding it to include a second name. Can anyone explain why this breaks? Thanks!

        #include <stdio.h>
        #include <stdlib.h>

        void main ()
        {
            int NumItems = 0;

            NumItems += 1;
            char* NameList = malloc(sizeof(char[10]) * NumItems);
            printf("Please enter name #1: \n");
            scanf("%9s", NameList[0]);
            fpurge(stdin);

            NumItems += 1;
            NameList = realloc(NameList, sizeof(char[10]) * NumItems);
            printf("Please enter name #2: \n");
            scanf("%9s", NameList[1]);
            fpurge(stdin);

            printf("The first name is: %s", NameList[0]);
            printf("The second name is: %s", NameList[1]);
            return 0;
        }
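    The crash comes from NameList[0]: since NameList is a char*, NameList[0] is a single char, not a pointer, so scanf writes through a garbage address. A sketch of one fix, treating the buffer as rows of 10 chars and passing the address of each row:

        #include <stdio.h>
        #include <stdlib.h>

        int main(void) {
            int numItems = 2;
            char (*names)[10] = malloc(numItems * sizeof *names); /* 2 rows of 10 chars */
            if (names == NULL) return 1;

            printf("Please enter name #1:\n");
            scanf("%9s", names[0]);            /* names[0] decays to char*, as intended */
            printf("Please enter name #2:\n");
            scanf("%9s", names[1]);

            printf("The first name is: %s\n", names[0]);
            printf("The second name is: %s\n", names[1]);
            free(names);
            return 0;
        }

    With %s you also have to pass something printable, so names[0] (a row) works where NameList[0] (a char) crashed for the same reason as the scanf calls.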

    Read the article

  • Rotate a linked list

    - by user408041
    I want to rotate a linked list that contains a number: 123 should be rotated to 231. The function produces 23, but the last character stays empty. Why?

        typedef struct node node;

        struct node {
            char digit;
            node* p;
        };

        void rotate(node** head) {
            node* walk = (*head);
            node* prev = (*head);
            char temp = walk->digit;
            while (walk->p != NULL) {
                walk->digit = walk->p->digit;
                walk = walk->p;
            }
            walk->digit = temp;
        }

    How I create the list:

        node* convert_to_list(int num) {
            node *curr, *head;
            int i = 0, length = 0;
            char *arr = NULL;
            head = NULL;
            length = (int) log10((double) num) + 1;
            arr = (char*) malloc(length * sizeof(char));  /* allocate memory */
            sprintf(arr, "%d", num);
            for (i = length; i >= 0; i--) {
                curr = (node *) malloc(sizeof(node));
                curr->digit = arr[i];
                curr->p = head;
                head = curr;
            }
            curr = head;
            return curr;
        }
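    The empty character is the string terminator: malloc(length) leaves no room for the '\0' that sprintf appends, and the loop starts at arr[length], so the terminator becomes the last node; rotate then shifts it into view. A sketch of the two off-by-one fixes:

        node* convert_to_list(int num) {
            node *curr, *head = NULL;
            int length = (int) log10((double) num) + 1;
            char *arr = malloc(length + 1);          /* +1 for sprintf's '\0' */
            sprintf(arr, "%d", num);
            for (int i = length - 1; i >= 0; i--) {  /* stop before the terminator */
                curr = malloc(sizeof(node));
                curr->digit = arr[i];
                curr->p = head;
                head = curr;
            }
            free(arr);
            return head;
        }

    With 123 this builds the three-node list 1 -> 2 -> 3, and rotate then produces 231 as intended.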

    Read the article

  • Better way to download a binary file?

    - by geoff
    I have a site where a user can download a file. Some files are extremely large (the largest being 323 MB). When I test downloading this file, I get an out-of-memory exception. The only way I know to download the file is shown below. The reason I'm using this code is that the URL is encoded and I can't let the user link directly to the file. Is there another way to download this file without having to read the whole thing into a byte array?

        FileStream fs = new FileStream(context.Server.MapPath(url),
            FileMode.Open, FileAccess.Read);
        BinaryReader br = new BinaryReader(fs);
        long numBytes = new FileInfo(context.Server.MapPath(url)).Length;
        byte[] bytes = br.ReadBytes((int) numBytes);
        string filename = Path.GetFileName(url);

        context.Response.Buffer = true;
        context.Response.Charset = "";
        context.Response.Cache.SetCacheability(HttpCacheability.NoCache);
        context.Response.ContentType = "application/x-rar-compressed";
        context.Response.AddHeader("content-disposition", "attachment;filename=" + filename);
        context.Response.BinaryWrite(bytes);
        context.Response.Flush();
        context.Response.End();
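    A sketch of streaming the file in chunks instead, so a 323 MB download never has to fit in memory at once:

        string path = context.Server.MapPath(url);
        context.Response.Buffer = false;   // don't accumulate the whole response
        context.Response.ContentType = "application/x-rar-compressed";
        context.Response.AddHeader("content-disposition",
            "attachment;filename=" + Path.GetFileName(url));

        using (FileStream fs = new FileStream(path, FileMode.Open, FileAccess.Read))
        {
            byte[] buffer = new byte[64 * 1024];          // 64 KB at a time
            int read;
            while ((read = fs.Read(buffer, 0, buffer.Length)) > 0)
            {
                if (!context.Response.IsClientConnected) break;  // stop if they left
                context.Response.OutputStream.Write(buffer, 0, read);
            }
        }
        context.Response.End();

    On ASP.NET 2.0 and later, context.Response.TransmitFile(path) does the same unbuffered send in a single call.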

    Read the article

  • Objective C iPhone performance issue

    - by Asad Khan
    OK guys, I am developing an iPhone app. I have a Model class which follows the singleton design pattern. It has an NSArray which is initialized to around 1000 NSStrings in the init method. Now I need to use this data in some view controller, so I import Model.h, create an array of NSString objects in the view controller, and set the data to it. But now the problem is that I have 2000 NSStrings allocated, which I believe is not a good thing on the iPhone due to memory considerations. Releasing the model object won't help, because I've overridden release to do nothing, according to the pattern, and I cannot change the design now because a lot of code assumes the model is a singleton. In the future, the initial NSStrings may grow to 2000 or even more, and then I'll have 4000 NSStrings allocated at one time. I am a little confused about how to go about this; any suggestions?
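    One observation, sketched below for the manual-retain era: NSString objects are immutable, and assigning or retaining an NSArray shares its elements rather than copying them, so the view controller can hold the model's existing array instead of building duplicates. The sharedModel and strings names are hypothetical accessors:

        // in the view controller: share the model's array instead of copying it
        // (retains the same 1000 NSString objects; no second set is allocated)
        NSArray *strings = [[[Model sharedModel] strings] retain];

    The doubling only happens if the strings themselves are copied; holding another reference costs one pointer per element.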

    Read the article

  • Is it not possible to make a C++ application "Crash Proof"?

    - by Enno Shioji
    Let's say we have an SDK in C++ that accepts some binary data (like a picture) and does something with it. Is it not possible to make this SDK "crash-proof"? By crash I primarily mean forceful termination by the OS upon a memory access violation, due to invalid input passed by the user (like abnormally short junk data). I have no experience with C++, but when I googled, I found several means that sounded like a solution (use a vector instead of an array, configure the compiler so that automatic bounds checks are performed, etc.). When I presented this to the developer, he said it is still not possible. Not that I don't believe him, but if so, how does a language like Java handle this? I thought the JVM performs a bounds check on every access. If so, why can't one do the same thing in C++ manually?

    Update: By "crash-proof" I don't mean that the application never terminates. I mean it should not abruptly terminate without information about what happened (I know it will dump core, etc., but is it not possible to display a message like "Argument x was not valid" instead?)
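    For reference, a sketch of the manual equivalent of Java's checked access: std::vector::at() throws std::out_of_range instead of corrupting memory, and the catch block can report which argument was bad rather than dying on a signal:

        #include <iostream>
        #include <stdexcept>
        #include <vector>

        int parse_pixel(const std::vector<unsigned char>& data, size_t i) {
            return data.at(i);   // checked: throws instead of reading out of bounds
        }

        int main() {
            std::vector<unsigned char> picture(16, 0);  // abnormally short junk input
            try {
                parse_pixel(picture, 1024);
            } catch (const std::out_of_range& e) {
                std::cerr << "Argument was not valid: " << e.what() << "\n";
            }
        }

    The catch: this only protects accesses that go through .at(); raw pointer arithmetic anywhere else in the SDK can still fault, which is likely what the developer meant by "still not possible".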

    Read the article

  • C++ ulong to class method pointer and back

    - by Simone Margaritelli
    Hi guys, I'm using a hash table (source code by Google Inc.) to store some method pointers, defined as:

        typedef Object *(Executor::*expression_delegate_t)( vframe_t *, Node * );

    where obviously Executor is the class. The function prototype to insert a value into the hash table is:

        hash_item_t *ht_insert( hash_table_t *ht, ulong key, ulong data );

    So basically I'm doing the insert by double-casting the method pointer:

        ht_insert( table, ASSIGN, reinterpret_cast<ulong>( (void *)&Executor::onAssign ) );

    where table is declared as a hash_table_t * inside the Executor class, ASSIGN is an unsigned long value, and onAssign is the method I have to map. Now, Executor::onAssign is stored as an unsigned long value (its address in memory, I think), and I need to cast the ulong back to a method pointer. But this code:

        hash_item_t* item = ht_find( table, ASSIGN );
        expression_delegate_t delegate = reinterpret_cast<expression_delegate_t>( item->data );

    gives me the following compilation error:

        src/executor.cpp:45: error: invalid cast from type ‘ulong’ to type ‘Object* (Executor::*)(vframe_t*, Node*)’

    I'm using GCC v4.4.3 on an x86 GNU/Linux machine. Any hints?
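    A sketch of one way around it: a pointer to a member function is not a plain address (with GCC it is wider than a ulong), which is why the cast back is rejected. Storing the address of an object that holds the member pointer keeps everything ulong-sized. DelegateBox is a made-up name, and executor/frame/node are hypothetical variables in scope:

        // Box the member pointer once, and store the box's address instead.
        struct DelegateBox {
            expression_delegate_t fn;
        };

        static DelegateBox assign_box = { &Executor::onAssign };
        ht_insert( table, ASSIGN, reinterpret_cast<ulong>( &assign_box ) );

        // Lookup: unbox, then invoke through an Executor instance.
        hash_item_t* item = ht_find( table, ASSIGN );
        DelegateBox* box = reinterpret_cast<DelegateBox*>( item->data );
        Object* result = (executor->*(box->fn))( frame, node );

    Pointer-to-object round-trips through an integer of pointer size are well defined, which is exactly what this leans on.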

    Read the article

  • DB optimization to use it as a queue

    - by anony
    We have a table called worktable with columns (key (primary key), ptime, aname, status, content). A producer puts rows into this table, and a consumer does an ORDER BY on the key column and fetches the first row whose status is 'pending'. The consumer does some processing on this row: 1. updates status to 'processing' 2. does some processing using content 3. deletes the row. We are facing contention issues when we try to run multiple consumers (probably due to the ORDER BY, which does a full table scan). Using Advanced Queues would be our next step, but before we go there, we want to check the maximum throughput we can achieve with multiple consumers and a producer on the table. What optimizations can we do to get the best numbers possible? Can we do in-memory processing where a consumer fetches 1000 rows at a time, processes them, and deletes them? Would that improve things? What are the other possibilities? Partitioning of the table? Parallelization? Index-organized tables?
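    A sketch of two changes worth measuring first, assuming Oracle (which the Advanced Queues mention suggests): an index so 'pending' rows are found without a full scan, and SKIP LOCKED so consumers don't queue up behind each other's locks:

        -- Index so pending rows are found without scanning the whole table.
        CREATE INDEX worktable_pending_ix ON worktable (status, key);

        -- Each consumer grabs the first pending row nobody else holds.
        DECLARE
            CURSOR next_work IS
                SELECT key, content
                  FROM worktable
                 WHERE status = 'pending'
                 ORDER BY key
                   FOR UPDATE SKIP LOCKED;
        BEGIN
            FOR w IN next_work LOOP
                -- ... process w.content here ...
                DELETE FROM worktable WHERE key = w.key;
                EXIT;        -- one item per dequeue call
            END LOOP;
            COMMIT;
        END;

    SKIP LOCKED is the same mechanism Advanced Queues uses internally, so this is a reasonable way to estimate how close a plain table can get.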

    Read the article

  • The best way to predict performance without actually porting the code?

    - by ardiyu07
    I believe there are people with the same experience as me, where one must give an (estimated) performance report for porting a program from sequential to parallel on some designated multicore hardware, with very little time given. For instance, if a 10K-LoC sequential program executes on an Intel i7-3770K (not vectorized) in 100 ms, how long would it take to run if one parallelized the code for a Tesla C2075 with NVIDIA CUDA, given that all kinds of parallelization optimization techniques were applied? (But you're only given 2-4 days to report the performance, and assume you don't know the algorithm at all. Or perhaps it'd be safer to just assume it's an impossible situation to finish the job.) Therefore, I'm wondering: what would most likely be the fastest way to produce such a performance report? Is it safe to calculate solely from the hardware's capabilities, such as peak GFLOPS and memory bandwidth? Is there a mathematical way to calculate it? If there is, please demonstrate your method with a corresponding problem description and algorithm, along with the target hardware's specifications. Or does a tool already exist to (roughly) estimate code porting? (Please don't answer: 'kill yourself is the fastest way.')
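    For what it's worth, the usual quick answer is a roofline bound built from nothing but the spec sheet. A sketch with nominal numbers (quoted from memory; double-check against the datasheets before putting them in a report):

        time >= max( flop_count / peak_flops , bytes_moved / bandwidth )

        Example: a kernel doing 1e9 single-precision flops over 4 GB of memory
        traffic on a Tesla C2075 (~1.03 TFLOP/s SP, ~144 GB/s):
            compute bound: 1e9 / 1.03e12 s  ~  1 ms
            memory bound:  4e9 / 144e9   s  ~ 28 ms   <-- this kernel is memory-bound

    The bound only says the ported code cannot be faster than this; how close it gets depends on the algorithm, which is exactly the part a 2-4 day estimate cannot know.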

    Read the article

  • Session scoped bean as class attribute of Spring MVC Controller

    - by Sotirios Delimanolis
    I have a User class:

        @Component
        @Scope("session")
        public class User {
            private String username;
        }

    And a Controller class:

        @Controller
        public class UserManager {
            @Autowired
            private User user;

            @ModelAttribute("user")
            private User createUser() {
                return user;
            }

            @RequestMapping(value = "/user")
            public String getUser(HttpServletRequest request) {
                Random r = new Random();
                user.setUsername(new Double(r.nextDouble()).toString());
                request.getSession().invalidate();
                request.getSession(true);
                return "user";
            }
        }

    I invalidate the session so that the next time I go to /user, I get another user. I'm expecting a different user because of User's session scope, but I get the same one. I checked in debug mode, and it is the same object ID in memory. My bean is declared like so:

        <bean id="user" class="org.synchronica.domain.User">
            <aop:scoped-proxy/>
        </bean>

    I'm new to Spring, so I'm obviously doing something wrong. I want one instance of User per session. How?
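    A sketch of the annotation-side equivalent of <aop:scoped-proxy/>, assuming Spring 3.x and that component scanning (not the XML <bean>) is the definition actually being used; declaring User both ways means one definition overrides the other, so the @Component version needs its own proxy setting:

        @Component
        @Scope(value = "session", proxyMode = ScopedProxyMode.TARGET_CLASS)
        public class User {
            private String username;
            // getters/setters...
        }

    With the proxy in place, the singleton controller holds a proxy that resolves to the current session's User on every call, so invalidating the session yields a fresh instance.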

    Read the article

  • How to optimize my game calendar in C#?

    - by MartyIX
    Hi, I've implemented a simple calendar (message system) for my game, which consists of:

        List<Event> calendar;

        public class Event {
            /// <summary>When to process the event</summary>
            public Int64 when;
            /// <summary>Which object should process the event</summary>
            public GameObject who;
            /// <summary>Type of event</summary>
            public EventType what;

            public int posX;
            public int posY;
            public int EventID;
        }

    with events added via calendar.Add(new Event(...)). The problem with this code is that even though the number of messages per second is not excessive, it still allocates new memory that the GC will eventually need to collect. Garbage collection may cause a slight lag in my game, and I'd therefore like to optimize the code. My considerations: 1) Change the Event class into a structure, but the structure is not entirely small, and it takes some time to copy it wherever I need it. 2) Reuse Event objects somehow (keep a queue of used events, and when a new event is needed, take one from this queue). Does anybody have another idea how to solve the problem? Thanks for suggestions!
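    A sketch of consideration 2, a simple free-list pool (EventPool is a made-up name): call Acquire wherever the code currently does new Event(...), and Release once the calendar is done processing an event, so allocations and GC pressure drop to near zero after warm-up:

        using System.Collections.Generic;

        class EventPool
        {
            private readonly Stack<Event> free = new Stack<Event>();

            public Event Acquire()
            {
                // reuse a retired event if one exists, allocate only otherwise
                return free.Count > 0 ? free.Pop() : new Event();
            }

            public void Release(Event e)
            {
                e.who = null;        // drop references so pooled events don't pin GameObjects
                free.Push(e);
            }
        }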

    Read the article

  • Call 32-bit or 64-bit program from bootloader

    - by user1002358
    There seems to be quite a lot of identical information on the Internet about writing the following three bootloaders: an infinite loop (jmp $), printing a single character, and printing "Hello World". This is fantastic, and I've gone through these three variations with very little trouble. I'd now like to write some 32- or 64-bit code in C, compile it, and call that code from the bootloader: basically a bootloader that, for example, sets the computer up to run some simple numerical simulation. I'll start by listing primes, for example, and then maybe some input/output from the user to maybe compute a Fourier transform. I don't know. I haven't found any information on how to do this, but I can already foresee some problems before I even begin. First of all, compiling a C program produces different file formats depending on the target. For Windows, it's a PE file; for Linux, it's an ELF file. These files are quite different, and in my case, the target isn't Windows or Linux; it's just whatever I have written in the bootloader. Secondly, where would the actual code reside? The bootloader is exactly 512 bytes, but the program I write in C will certainly compile to something much larger. It will need to sit on my (virtual) hard disk, probably in some sort of file system (which I haven't even defined!), and I will need to load it from this file into memory before I can even think about executing it. From my understanding, all this is many, many orders of magnitude more complex than a 12-line "Hello World" bootloader. So my question is: how do I call a large 32- or 64-bit program (written in C/C++) from my 16-bit bootloader?
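    For what it's worth, the usual recipe sidesteps the file-system problem at first: place the compiled program in the sectors right after the boot sector, build it as a flat binary linked at a fixed address, and have the bootloader read those sectors into memory with BIOS INT 13h, enable protected mode, and jump to the link address. A sketch of the build side (file names hypothetical):

        # compile the payload freestanding: no OS, no standard library assumptions
        gcc -m32 -ffreestanding -fno-pie -c kernel.c -o kernel.o
        # link as a raw flat binary at the address the bootloader will jump to
        ld -m elf_i386 -Ttext 0x8000 --oformat binary -o kernel.bin kernel.o
        # disk image: boot sector first, payload in the sectors right after it
        cat boot.bin kernel.bin > disk.img

    This avoids PE/ELF entirely (the linker emits no headers at all), and a real file system only becomes necessary once the loader outgrows "read N sectors starting at sector 2".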

    Read the article

  • JavaCard monitoring folder

    - by GxG
    I want to write a two-way application: an applet for JavaCard and an application in C#. I've got the C# part covered, but I want to know whether a JavaCard applet can monitor a folder in memory, and how I would go about doing that. I have a shared folder, call it temp, in which I want to store buffered information passing between the simulated smart card and the C# application. The C# application will only read from that folder and display the information, but it will also write requests towards the smart card. For example, I simulate entering the PIN for the card: the applet writes a file containing the available options, the C# application reads that file and displays them, and from the C# app I choose an option and write a request file in the same folder. This is when the smart card, which is monitoring that folder, should read the request and issue a response. Can I make the smart card monitor that folder? I was thinking of using encrypted XML files for the request/response operations, but simple .txt files are good too. I am limited to JavaCard v2.2.1, and every operation has to be encrypted/decrypted (I have no problem with the ciphering).

    Read the article

  • finding long repeated substrings in a massive string

    - by Will
    I naively imagined that I could build a suffix trie where I keep a visit count for each node, and then the deepest nodes with counts greater than one would be the result set I'm looking for. I have a really, really long string (hundreds of megabytes) and about 1 GB of RAM. This is why building a suffix trie with counting data is too space-inefficient to work for me. To quote Wikipedia's entry on suffix trees: "storing a string's suffix tree typically requires significantly more space than storing the string itself. The large amount of information in each edge and node makes the suffix tree very expensive, consuming about ten to twenty times the memory size of the source text in good implementations. The suffix array reduces this requirement to a factor of four, and researchers have continued to find smaller indexing structures." And those were Wikipedia's comments on the tree, not the trie. How can I find long repeated sequences in such a large amount of data, in a reasonable amount of time (e.g., less than an hour on a modern desktop machine)? (Some Wikipedia links to avoid people posting them as the 'answer': Algorithms on strings and especially Longest repeated substring problem.)
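    A sketch of the suffix-array route the quoted paragraph points at: sort the suffix start indices, and the longest repeated substring is the longest common prefix of two adjacent suffixes in that order. The naive sort below only illustrates the idea on toy input; at hundreds of megabytes you would swap in a linear-time construction (e.g. SA-IS) and compute LCPs with Kasai's algorithm, which is what keeps the memory at roughly 4x the text:

        #include <algorithm>
        #include <iostream>
        #include <string>
        #include <vector>

        int main() {
            std::string s = "banana";
            std::vector<int> sa(s.size());
            for (size_t i = 0; i < s.size(); ++i) sa[i] = (int)i;
            // sort suffix start positions lexicographically (naive, toy-sized only)
            std::sort(sa.begin(), sa.end(), [&](int a, int b) {
                return s.compare(a, std::string::npos, s, b, std::string::npos) < 0;
            });
            size_t best = 0; int at = 0;
            for (size_t i = 1; i < sa.size(); ++i) {   // LCP of adjacent suffixes
                size_t k = 0;
                while (sa[i] + k < s.size() && sa[i - 1] + k < s.size()
                       && s[sa[i] + k] == s[sa[i - 1] + k]) ++k;
                if (k > best) { best = k; at = sa[i]; }
            }
            std::cout << s.substr(at, best) << "\n";   // prints "ana"
        }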

    Read the article

  • Is it possible to cache all the data in a SQL Server CE database using LinqToSql?

    - by DanM
    I'm using LinqToSql to query a small, simple SQL Server CE database. I've noticed that any operations involving sub-properties are disappointingly slow. For example, if I have a Customer table that is referenced by an Order table, LinqToSql will automatically create an EntitySet<Order> property. This is a nice convenience, allowing me to write things like Customer.Order.Where(o => o.ProductName == "Stopwatch"), but for some reason, SQL Server CE hangs up pretty badly when I try to do stuff like this. One of my queries, which isn't really that complicated, takes 3-4 seconds to complete. I can get the speed up to acceptable, even fast, if I just grab the two tables individually, convert them to List<Customer> and List<Order>, and join them manually with my own query, but this throws out a lot of what makes LinqToSql so appealing. So, I'm wondering if I can somehow get the whole database into RAM, query it that way, and occasionally save it. Is this possible? How? If not, is there anything else I can do to boost performance besides resorting to doing all the joins manually? Note: my database in its initial state is about 250 KB, and I don't expect it to grow beyond 1-2 MB, so loading the data into RAM certainly wouldn't be a problem from a memory point of view.

    Update: here are the table definitions for the example I used in my question:

        create table Order
        (
            Id int identity(1, 1) primary key,
            ProductName ntext null
        )

        create table Customer
        (
            Id int identity(1, 1) primary key,
            OrderId int null references Order (Id)
        )
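    A sketch of the load-once idea, assuming db is the generated DataContext: pull both tables into lists up front, then run the joins with LINQ to Objects so nothing touches SQL Server CE until the next save:

        // one round-trip per table, then everything below runs in memory
        List<Customer> customers = db.Customers.ToList();
        List<Order> orders = db.Orders.ToList();

        var owners =
            from c in customers
            join o in orders on c.OrderId equals (int?)o.Id
            where o.ProductName == "Stopwatch"
            select c;

    The entities stay attached to the DataContext, so edits made to the in-memory lists can still be persisted with db.SubmitChanges() when it's time to save.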

    Read the article

  • Build OpenGL model in parallel?

    - by Brendan Long
    I have a program which draws some terrain and simulates water flowing over it (in a cheap and easy way). Updating the water was easy to parallelize using OpenMP, so I can do ~50 updates per second. The problem is that even with a small amount of water, my draws per second are very, very low (starting at 5 and dropping to around 2 once there's a significant amount of water). It's not a problem with the video card, because the terrain is more complicated and gets drawn so quickly that boost::timer tells me I get infinite draws per second if I turn the water off. It may be related to memory bandwidth, though (since I assume the model stays on the card and doesn't have to be transferred every time). What I'm concerned about is that on every draw, I'm calling glVertex3f() about a million times (the maximum size is 450*600 cells, 4 vertices each), and it's done entirely sequentially because Glut won't let me call anything in parallel. So: is there some way of building the list in parallel and then passing it to OpenGL all at once? Or some other way of making it draw faster? Am I using the wrong method (besides the obvious "use fewer vertices")?
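    A sketch of the vertex-array route, which matches the Glut-era GL in the question: the buffer fill is plain memory writes, so it parallelizes fine with OpenMP, and the million glVertex3f calls collapse into one draw call (fill_cell_quad is a made-up per-cell helper):

        #define COLS 600
        #define ROWS 450
        static GLfloat verts[ROWS * COLS * 4 * 3];   /* 4 corners, x/y/z each */

        void build_water_mesh(void) {
            int i;
            #pragma omp parallel for
            for (i = 0; i < ROWS * COLS; i++)
                fill_cell_quad(&verts[i * 12], i);   /* hypothetical filler */
        }

        void draw_water(void) {
            glEnableClientState(GL_VERTEX_ARRAY);
            glVertexPointer(3, GL_FLOAT, 0, verts);
            glDrawArrays(GL_QUADS, 0, ROWS * COLS * 4);
            glDisableClientState(GL_VERTEX_ARRAY);
        }

    Moving the array into a VBO afterwards removes the per-frame transfer too, but even the client-side version above eliminates the per-vertex call overhead that dominates at a million calls per frame.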

    Read the article

  • (How) Can I approximate a "dynamic" index (key extractor) for Boost MultiIndex?

    - by Sarah
    I have a MultiIndex container of boost::shared_ptrs to members of class Host. These members contain private arrays bool infections[NUM_SEROTYPES] revealing hosts' infection statuses with respect to each of 1,...,NUM_SEROTYPES serotypes. I want to be able to determine, at any time in the simulation, the number of people infected with a given serotype, but I'm not sure how: 1) Ideally, Boost MultiIndex would allow me to sort by, for example, Host::isInfected(int s), where s is the serotype of interest. From what I understand, MultiIndex key extractors aren't allowed to take arguments. 2) An alternative would be to define an index for each serotype, but I don't see how to write the MultiIndex container typedef in such an extensible way, and I will be changing the number of serotypes between simulations. (Do experienced programmers think this should be possible? I'll attempt it if so.) 3) There are 2^NUM_SEROTYPES possible infection statuses. For small numbers of serotypes, I could use a single index based on this number (or a binary string) and come up with some mapping from this key to the actual infection status. Counting is still darn slow. 4) I could maintain a separate structure counting the total number of infecteds with each serotype. The synchrony is a bit of a pain, but the memory is fine. I would prefer a slicker option, since I would like to do further sorts on other host attributes (e.g., after counting the number infected with serotype s, count the number of those infected who are also in a particular household and have a particular age). Thanks in advance.
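    For comparison, a sketch of option 4: the counter lives next to the container and is touched only at the two places infection status changes, so any serotype's count is O(1) with no per-serotype indices (class and method names made up):

        #include <vector>

        class InfectionCounts {
            std::vector<int> counts;                 // one slot per serotype
        public:
            explicit InfectionCounts(int numSerotypes) : counts(numSerotypes, 0) {}
            void onInfect(int s)  { ++counts[s]; }   // call where Host sets infections[s]
            void onRecover(int s) { --counts[s]; }   // call where Host clears it
            int  infected(int s) const { return counts[s]; }
        };

    The further sorts (household, age) can still run over ordinary MultiIndex indices on those attributes, filtered by isInfected(s); the counter only replaces the global tally, not the container.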

    Read the article

  • refactoring my code. My headers (Header Guard Issues)

    - by numerical25
    I made a similar post a while ago based on an error I was getting. I was able to fix it, but since then I've been having trouble because headers keep blocking other headers from using code. Honestly, these headers are confusing me, and if anyone has any resources that address these kinds of issues, that would be helpful. What I essentially want is for rModel.h to be included inside RenderEngine.h, but every time I add rModel.h to RenderEngine.h, rModel.h is no longer able to use RenderEngine.h (rModel.h has a #include of RenderEngine.h as well). So, in a nutshell, RenderEngine and rModel need to use each other's functionality. On top of all this, Main.cpp needs to use RenderEngine.

    stdafx.h:

        #include "targetver.h"

        #define WIN32_LEAN_AND_MEAN  // Exclude rarely-used stuff from Windows headers

        // Windows Header Files:
        #include <windows.h>

        // C RunTime Header Files
        #include <stdlib.h>
        #include <malloc.h>
        #include <memory.h>
        #include <tchar.h>

        #include "resource.h"

    main.cpp:

        #include "stdafx.h"
        #include "RenderEngine.h"
        #include "rModel.h"

        // Global Variables:
        RenderEngine go;
        rModel *g_pModel;
        // ...code...

    rModel.h:

        #ifndef _MODEL_H
        #define _MODEL_H

        #include "stdafx.h"
        #include <vector>
        #include <string>
        #include "rTri.h"
        #include "RenderEngine.h"
        // ...code...

    RenderEngine.h:

        #pragma once

        #include "stdafx.h"
        #include "d3d10.h"
        #include "d3dx10.h"
        #include "dinput.h"
        #include "rModel.h"
        // ...code...
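    A sketch of the standard fix for mutually dependent classes: each header forward-declares the other class instead of including its header, and only the .cpp files pull in the full definitions. Forward declarations are enough wherever a class is used only through pointers or references (the member functions shown are made up):

        // RenderEngine.h
        #pragma once
        class rModel;                  // forward declaration instead of #include
        class RenderEngine {
        public:
            void draw(rModel* model);  // pointers/references only need the name
        };

        // rModel.h
        #ifndef _MODEL_H
        #define _MODEL_H
        class RenderEngine;            // forward declaration instead of #include
        class rModel {
        public:
            void attach(RenderEngine* engine);
        };
        #endif

        // rModel.cpp: the full definitions are available where members are used
        #include "rModel.h"
        #include "RenderEngine.h"

    The cycle only bites when a header needs the complete type (e.g., a by-value member or inheritance); in that case one of the two dependencies has to be broken some other way.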

    Read the article

  • segmentation fault when using pointer to pointer

    - by user3697730
    I have been trying to use a pointer to a pointer in a function, but it seems I am not doing the memory allocation correctly. My code is:

        #include <stdio.h>
        #include <math.h>
        #include <ctype.h>
        #include <stdlib.h>
        #include <string.h>

        struct list {
            int data;
            struct list *next;
        };

        void abc(struct list **l, struct list **l2)
        {
            *l2 = NULL;
            l2 = (struct list**) malloc(sizeof(struct list*));
            (*l)->data = 12;
            printf("%d", (*l)->data);
            (*l2)->next = *l2;
        }

        int main()
        {
            struct list *l, *l2;
            abc(&l, &l2);
            system("pause");
            return 0;
        }

    This code compiles, but I cannot run the program; I get a segmentation fault. What should I do? Any help would be appreciated!
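    A sketch of the intended allocation: malloc the nodes themselves and store them through the double pointers, so main's l and l2 point at real memory before anything dereferences them. In the original, (*l)->data = 12 dereferences an uninitialized pointer, and the malloc result overwrites the local l2 instead of allocating a node:

        #include <stdio.h>
        #include <stdlib.h>

        struct list {
            int data;
            struct list *next;
        };

        void abc(struct list **l, struct list **l2)
        {
            *l  = malloc(sizeof **l);   /* allocate nodes, not pointer-to-pointers */
            *l2 = malloc(sizeof **l2);
            (*l)->data = 12;
            (*l)->next = *l2;
            (*l2)->data = 0;
            (*l2)->next = NULL;
            printf("%d\n", (*l)->data);
        }

        int main(void)
        {
            struct list *l, *l2;
            abc(&l, &l2);
            free(l2);
            free(l);
            return 0;
        }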

    Read the article

  • Very fast document similarity

    - by peyton
    Hello, I am trying to determine document similarity between a single document and each of a large number of documents (n ~= 1 million) as quickly as possible. More specifically, the documents I'm comparing are e-mails; they are grouped (i.e., there are folders or tags), and I'd like to determine which group is most appropriate for a new e-mail. Fast performance is critical. My a priori assumption is that the cosine similarity between term vectors is appropriate for this application; please comment on whether this is a good measure to use or not! I have already taken into account the following possibilities for speeding up performance: 1) pre-normalize all the term vectors; 2) calculate a term vector for each group (n ~= 10,000) rather than for each e-mail (n ~= 1,000,000); this would probably be acceptable for my application, but if you can think of a reason not to do it, let me know! I have a few questions: 1. If a new e-mail has a new term never before seen in any of the previous e-mails, does that mean I need to recompute all of my term vectors? This seems expensive. 2. Is there some clever way to only consider vectors which are likely to be close to the query document? 3. Is there some way to be more frugal about the amount of memory I'm using for all these vectors? Thanks!
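    On question 1, a sketch of why sparse vectors make new terms cheap (plain Python, TF weights assumed): a term that never occurred in an old vector simply isn't a key in it, so it contributes zero to the dot product and nothing needs recomputing. This stops being true if the weights include IDF, since IDF shifts when the vocabulary changes:

        import math

        def cosine(a: dict, b: dict) -> float:
            # iterate the smaller vector; terms missing from the other count as zero
            small, big = (a, b) if len(a) <= len(b) else (b, a)
            dot = sum(v * big.get(t, 0.0) for t, v in small.items())
            na = math.sqrt(sum(v * v for v in a.values()))
            nb = math.sqrt(sum(v * v for v in b.values()))
            return dot / (na * nb) if na and nb else 0.0

        query = {"meeting": 2.0, "budget": 1.0, "zeppelin": 1.0}  # brand-new term
        group = {"meeting": 5.0, "budget": 3.0}
        print(cosine(query, group))   # the unseen term just lowers the query norm

    For question 2, the standard trick is an inverted index: score only the group vectors sharing at least one term with the query, which is usually a tiny fraction of the 10,000.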

    Read the article
