Search Results

Search found 12634 results on 506 pages for 'transactional memory'.


  • scanf a byte then print it out?

    - by Sarah
    I've searched around to see if I can find this answer but I can't seem to (please let me know if I'm wrong). I am trying to use scanf to read in a byte, an unsigned int and a char in one .c file, and I am trying to access this byte in a different .c file and print it out. (I have already checked to make sure I have included all the appropriate parameters everywhere.) But I keep getting errors. The warnings are:

        database.c: In function ‘addCitizen’:
        database.c:23:2: warning: format ‘%hhu’ expects argument of type ‘int’, but argument 2 has type ‘byte *’ [-Wformat]
        database.c:24:2: warning: format ‘%u’ expects argument of type ‘unsigned int’, but argument 2 has type ‘int *’ [-Wformat]
        database.c:25:2: warning: format ‘%c’ expects argument of type ‘int’, but argument 2 has type ‘char *’ [-Wformat]

    Where I'm scanf'ing:

        // Request loop
        while (count-- != 0) {
            while (1) {
                // Get values from the user
                int error = scanf("%79s %hhu %u %c", tname, &tdist, &tyear, &tgender);
                addCitizen(db, tname, &tdist, &tyear, &tgender);

    Where I'm printing:

        void addCitizen(Database *db, char *tname, byte *tdist, int *tyear, char *tgender) {
            // needs to find the right place in memory to put this stuff and then put it there
            printf("\nName is: %79s\n", tname);
            printf("District is: %hhu\n", tdist);
            printf("Year of birth is: %u\n", tyear);
            printf("Gender is:%c\n", tgender);

    I'm not sure where I'm going wrong?
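
    For reference, a minimal sketch of the kind of fix those warnings point at (assumptions: byte is a typedef for unsigned char, tyear is made unsigned, and the Database parameter is dropped to keep the example self-contained). scanf takes the addresses, but the printf calls must be handed the pointed-to values, not the pointers themselves:

        #include <stdio.h>

        typedef unsigned char byte;   /* assumption: matches the asker's byte type */

        static void addCitizen(const char *tname, const byte *tdist,
                               const unsigned int *tyear, const char *tgender)
        {
            printf("Name is: %s\n", tname);
            printf("District is: %hhu\n", *tdist);    /* pass the value, not the pointer */
            printf("Year of birth is: %u\n", *tyear);
            printf("Gender is: %c\n", *tgender);
        }

        int main(void)
        {
            char tname[80];
            byte tdist;
            unsigned int tyear;
            char tgender;

            /* %79s wants char*, %hhu wants unsigned char*, %u wants unsigned int*,
               and the space before %c skips the intervening whitespace */
            if (scanf("%79s %hhu %u %c", tname, &tdist, &tyear, &tgender) == 4)
                addCitizen(tname, &tdist, &tyear, &tgender);
            return 0;
        }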

    Read the article

  • Weak hashmap with weak references to the values?

    - by Razor Storm
    I am building an Android app where each entity has a bitmap that represents its sprite. However, each entity can be duplicated (there might be 3 copies of entity asdf, for example). One approach is to load all the sprites upfront, and then put the correct sprite in the constructors of the entities. However, I want to decode the bitmaps lazily, so that the constructors of the entities will decode the bitmaps. The only problem with this is that duplicated entities will load the same bitmap twice, using 2x the memory (or n times if the entity is created n times). To fix this, I built a SingularBitmapFactory that will store a decoded Bitmap in a hash, and if the same bitmap is asked for again, will simply return the previously hashed one instead of building a new one. The problem with this, though, is that the factory holds a copy of all bitmaps, and so they won't ever get garbage collected. What's the best way to switch the hashmap to one with weakly referenced values? In other words, I want a structure where the values won't be GC'd if any other object holds a reference to them, but as long as no other object refers to them, they can be GC'd.

    Read the article

  • Recursion in assembly?

    - by Davis
    I'm trying to get a better grasp of assembly, and I am a little confused about how to recursively call functions when I have to deal with registers, popping/pushing, etc. I am embedding x86 assembly in C++. Here I am trying to make a method which, given an array of integers, will build a linked list containing these integers in the order they appear in the array. I am doing this by calling a recursive function:

        insertElem(struct elem *head, struct elem *newElem, int data)

    - head: head of the list
    - data: the number that will be inserted at the end of the list
    - newElem: points to the location in memory where I will store the new element (data field)

    My problem is that I keep overwriting the registers, so instead of a typical linked list, for an array like {2,3,1,8,3,9} my linked list will return the first element (head) and only the last element, because the elements keep overwriting each other after head is no longer null. So my linked list looks something like 2--9 instead of 2--3--1--8--3--9. I feel like I don't have a grasp on how to organize and handle the registers. newElem is in EBX and just keeps getting rewritten. Thanks in advance!
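
    A hedged C sketch of the recursion the assembly is meant to implement (the names follow the question; the allocation strategy is an assumption). The point it illustrates is that every inserted node needs its own struct elem, so whatever register holds newElem cannot simply be reused across calls without being saved:

        #include <stddef.h>

        struct elem {
            int          data;
            struct elem *next;
        };

        /* Appends newElem (storage supplied by the caller) holding `data`
           to the end of the list rooted at head; returns the (new) head. */
        struct elem *insertElem(struct elem *head, struct elem *newElem, int data)
        {
            if (head == NULL) {            /* reached the end: link in the new node */
                newElem->data = data;
                newElem->next = NULL;
                return newElem;
            }
            head->next = insertElem(head->next, newElem, data);
            return head;
        }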

    Read the article

  • Why do multiple sessions started on the same page get the same session id?

    - by Calmarius
    I tried the following:

        <?php
        session_name('user');
        session_start();        // sets a 'user' cookie with the session id.
        // This session stores the user's login data. It's an ordinary session.
        $USERSESSION = $_SESSION;   // saving this session
        $userSessId = session_id();
        session_write_close();      // closing this session

        session_name('chatroom');
        session_start();        // sets a 'chatroom' cookie but contains the same session id and points to the same data :( .
        // This session would be used to store the chat text. This session would be
        // shared between all users joined in this chat room by setting the same session id for all members.
        // Calling session_regenerate_id would make a different session id for every user and it won't be a chat room anymore.
        ?>

    So I want to do a chat-like thing with sessions. On the client side it would be done with AJAX that polls this PHP page every 5-10 seconds. Sessions may be cached in the server's memory so they can be accessed fast. I could store the chat in the database, but my service runs on a free webhost which is limited: only 4 MySQL connections are allowed at a time, which is almost nothing. I try to touch my database as few times as possible.

    Read the article

  • Which fieldtype is best for storing PRICE values?

    - by BerggreenDK
    Hi there, I am wondering what's the best "price field" in MSSQL for a shop-like structure? Looking at this overview: http://www.teratrax.com/sql_guide/data_types/sql_server_data_types.html we have datatypes called money and smallmoney, then we have decimal/numeric, and lastly float and real. Name, memory/disk usage and value ranges:

    - Money: 8 bytes (values: -922,337,203,685,477.5808 to +922,337,203,685,477.5807)
    - Smallmoney: 4 bytes (values: -214,748.3648 to +214,748.3647)
    - Decimal: 9 [default, min. 5] bytes (values: -10^38 +1 to 10^38 -1)
    - Float: 8 bytes (values: -1.79E+308 to 1.79E+308)
    - Real: 4 bytes (values: -3.40E+38 to 3.40E+38)

    My question is: is it really wise to store price values in those types? What about e.g. INT?

    - Int: 4 bytes (values: -2,147,483,648 to 2,147,483,647)

    Let's say a shop uses dollars. They have cents, but I don't see prices being $49.2142342, so using a lot of decimals to show cents seems a waste of SQL bandwidth. Secondly, most shops wouldn't show any prices near 200,000,000 (not in normal webshops at least... unless someone is trying to sell me a famous tower in Paris). So why not go for an INT? An int is fast, it's only 4 bytes, and you can easily make decimals by saving values in cents instead of dollars and then dividing when you present the values. The other approach would be to use smallmoney, which is 4 bytes too, but this will require the math part of the CPU to do the calculation, whereas int is plain integer arithmetic... on the downside you will need to divide every single outcome. Are there any currency-related problems with regional settings when using smallmoney/money fields? What will these map to in C#/.NET? Any pros/cons? Go for integer prices, smallmoney, or something else? What does your experience tell you?
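
    A tiny C-side sketch of the "integer cents" idea raised above (illustration only, not SQL; the 64-bit cents type is an assumption): store a whole number of minor units and split into dollars and cents only for presentation, so the arithmetic stays exact.

        #include <stdio.h>

        typedef long long cents_t;     /* assumption: 64-bit count of minor units */

        static void print_price(cents_t amount)
        {
            printf("$%lld.%02lld\n", amount / 100, amount % 100);
        }

        int main(void)
        {
            cents_t price = 4999;          /* $49.99 stored as 4999 cents */
            cents_t three = price * 3;     /* exact integer arithmetic, no rounding */
            print_price(price);            /* $49.99  */
            print_price(three);            /* $149.97 */
            return 0;
        }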

    Read the article

  • object / class methods serialized as well?

    - by Mat90
    I know that data members are saved to disk, but I was wondering whether an object's/class's methods are saved in binary format as well? Because I found some contradictory info. For example, Ivor Horton writes: "Class objects contain function members as well as data members, and all the members, both data and functions, have access specifiers; therefore, to record objects in an external file, the information written to the file must contain complete specifications of all the class structures involved." And: Are methods also serialized along with the data members in .NET? Thus: are a method's assembly instructions (opcodes and operands) stored to disk as well, just like in a precompiled LIB or DLL? During the DOS ages I used assembly now and then. As far as I remember from Delphi and the following site (answer by dan04): Are methods also serialized along with the data members in .NET? sizeof(<OBJECT or CLASS>) will give the size of all data members together (no methods/procedures). Also, a nice C example is given there, with data and members declared in one class/struct, but at runtime these methods are separate procedures acting on a struct of data. However, I think that later class/object implementations like Pascal's VMT may be different in memory.
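
    A small C sketch along the lines of the example mentioned above (the Point struct and its function are made up for illustration): the "method" is ordinary code stored once, so sizeof an instance covers only the data members, and serializing an instance writes data, not instructions.

        #include <stdio.h>

        struct Point {
            double x;
            double y;
        };

        /* conceptually a "method" of Point, but stored once in the code segment */
        static double point_length_squared(const struct Point *p)
        {
            return p->x * p->x + p->y * p->y;
        }

        int main(void)
        {
            struct Point p = { 3.0, 4.0 };
            printf("sizeof(struct Point) = %zu\n", sizeof p);   /* data members only */
            printf("length squared = %f\n", point_length_squared(&p));
            return 0;
        }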

    Read the article

  • XML Decryption Bug (referencing issue)

    - by OrangePekoe
    Hi, I'm needing some explanation of what exactly the decryption is doing, in addition to some help solving the problem. Currently, when a portion of XML is encrypted and then decrypted, the DOM appears to work correctly: we can see the element is encrypted and then see it return once it is decrypted. Our problem lies when a user tries to change data in that same element after decryption has occurred. When a user changes some settings, data in the XML should change. However, if the user attempts to change an XML element that has been decrypted, the changes are not reflected in the DOM. We have a reference pointer to the XML element that is used to bind the element to an object. If you encrypt the node and then decrypt it, the reference pointer now points to a valid orphaned XML element that is no longer part of the DOM. After decryption, there will be 2 copies of the XML element: one in the DOM as expected (though it will not reflect new changes), and one orphaned element in memory that is still referenced by our pointer. The orphaned element is valid (it reflects new changes). We can see that this orphaned element is owned by the DOM, but when we try to return its parent, it returns null. The question is: where did this orphaned XML element come from? And how can we get it to correctly replace the old data in the DOM? The code resembles:

        public static void Decrypt(XmlDocument Doc, SymmetricAlgorithm Alg)
        {
            if (Doc == null)
                throw new ArgumentNullException("Doc");
            if (Alg == null)
                throw new ArgumentNullException("Alg");

            XmlElement encryptedElement = Doc.GetElementsByTagName("EncryptedData")[0] as XmlElement;
            if (encryptedElement == null)
            {
                throw new XmlException("The EncryptedData element was not found.");
            }

            EncryptedData edElement = new EncryptedData();
            edElement.LoadXml(encryptedElement);

            EncryptedXml exml = new EncryptedXml();
            byte[] rgbOutput = exml.DecryptData(edElement, Alg);
            exml.ReplaceData(encryptedElement, rgbOutput);
        }

    Read the article

  • What's the best way to transfer a large dataset over a .NET web service?

    - by Malvineous
    I've inherited a C# .NET application which talks to a web service, and the web service talks to an Oracle database. I need to add an export function to the UI, to produce an Excel spreadsheet of some of the data. I have created a web service function to run a database query, load the data into a DataTable and then return it, which works fine for a small number of rows. However there is enough data in the full run that the client application locks up for a few minutes and then returns a timeout error. Obviously this isn't the best way to retrieve such a large dataset. Before I go ahead and come up with some dodgy way of splitting the call, I'm wondering if there is already something in place that can handle this. At the moment I'm thinking of a startExport function then repeatedly calling a next50Rows function until there is no data left, but because web services are stateless this means I'm going to have to keep some sort of ID number around and deal with the associated permissions. It would mean that I don't have to load the entire data set into the web server's memory though, which is one good thing. So if anyone knows a better way to retrieve a large amount of data (in a table format) over a .NET web service, please let me know!

    Read the article

  • python crc32 woes

    - by lazyr
    I'm writing a python program to extract data from the middle of a 6 GB bz2 file. A bzip2 file is made up of independently decompressible blocks of data, so I only need to find a block (they are delimited by magic bits), then create a temporary one-block bzip2 file from it in memory, and finally pass that to the bz2.decompress function. Easy, no? The bzip2 format has a crc32 checksum for the file at the end. No problem, binascii.crc32 to the rescue. But wait. The data to be checksummed does not necessarily end on a byte boundary, and the crc32 function operates on a whole number of bytes. My plan: use the binascii.crc32 function on all but the last byte, and then a function of my own to update the computed crc with the last 1-7 bits. But hours of coding and testing have left me bewildered, and my puzzlement can be boiled down to this question: how come crc32("\x00") is not 0x00000000? Shouldn't it be, according to the Wikipedia article? You start with 0b00000000 and pad with 32 0's, then do polynomial division with 0x04C11DB7 until there are no ones left in the first 8 bits, which is immediately. Your last 32 bits is the checksum, and how can that not be all zeroes? I've searched Google for answers and looked at the code of several crc32 implementations without finding any clue to why this is so.
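
    A hedged bit-by-bit sketch of the CRC-32 variant that binascii.crc32 and zlib use (reflected polynomial 0xEDB88320, initial register 0xFFFFFFFF, final XOR with 0xFFFFFFFF). The preset-to-ones start and the final inversion are exactly why a single zero byte does not hash to zero, even though the "pure" polynomial division described on Wikipedia would give zero:

        #include <stdint.h>
        #include <stdio.h>
        #include <stddef.h>

        static uint32_t crc32_bitwise(const unsigned char *buf, size_t len)
        {
            uint32_t crc = 0xFFFFFFFFu;                  /* preset to ones, not 0 */
            for (size_t i = 0; i < len; i++) {
                crc ^= buf[i];
                for (int bit = 0; bit < 8; bit++)
                    crc = (crc & 1u) ? (crc >> 1) ^ 0xEDB88320u : (crc >> 1);
            }
            return crc ^ 0xFFFFFFFFu;                    /* final inversion */
        }

        int main(void)
        {
            unsigned char zero = 0x00;
            printf("%08lX\n", (unsigned long)crc32_bitwise(&zero, 1));
            return 0;
        }

    With these standard parameters the single zero byte should come out as D202EF8D rather than 00000000, matching binascii.crc32("\x00") masked to 32 bits.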

    Read the article

  • Alternatives to static methods on interfaces for enforcing consistency

    - by jayshao
    In Java, I'd like to be able to define marker interfaces that forced implementations to provide static methods. For example, for simple text serialization/deserialization I'd like to be able to define an interface that looked something like this:

        public interface TextTransformable<T> {
            public static T fromText(String text);
            public String toText();
        }

    Since interfaces in Java can't contain static methods (as noted in a number of other posts/threads: here, here, and here), this code doesn't work. What I'm looking for, however, is some reasonable paradigm to express the same intent, namely symmetric methods, one of which is static, and enforced by the compiler. Right now the best we can come up with is some kind of static factory object or generic factory, neither of which is really satisfactory. Note: in our case the primary use-case is that we have many, many "value-object" types (enums, or other objects that have a limited number of values, typically carry no state beyond their value, and which we parse/de-parse thousands of times a second), so we actually do care about reusing instances (like Float, Integer, etc.) and the impact on memory consumption/GC. Any thoughts?

    Read the article

  • Do all C compilers allow functions to return structures?

    - by Jordan S
    I am working on a program in C and using the SDCC compiler for an 8051-architecture device. I am trying to write a function called GetName that will read 8 characters from flash memory and return the character array in some form. I know that it is not possible to return an array in C, so I am trying to do it using a struct, like this:

        // ******************** FLASH.h file *******************************
        MyStruct GetName(int i);   // Function prototype

        #define NAME_SIZE 8

        typedef struct
        {
            char Name[NAME_SIZE];
        } MyStruct;

        extern MyStruct GetName(int i);

        // ******************** FLASH.c file *******************************
        #include "FLASH.h"

        MyStruct GetName(int i)
        {
            MyStruct newNameStruct;
            //...
            // Fill the array by reading data from Flash
            //...
            return newNameStruct;
        }

    I don't have any references to this function yet, but for some reason I get a compiler error that says "Function cannot return aggregate." Does this mean that my compiler does not support functions that return structs? Or am I just doing something wrong?
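
    The quoted error suggests the compiler simply refuses aggregate returns, so a common workaround is to let the caller own the storage and pass a pointer. A minimal sketch (the flash-reading part is left as a placeholder):

        #define NAME_SIZE 8

        typedef struct
        {
            char Name[NAME_SIZE];
        } MyStruct;

        /* Fills the caller-supplied struct; nothing is returned by value. */
        void GetName(int i, MyStruct *out)
        {
            (void)i;
            /* ... read 8 characters from flash into out->Name here ... */
            out->Name[0] = '\0';   /* placeholder so the sketch is self-contained */
        }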

    Read the article

  • Using `<List>` when dealing with pointers in C#.

    - by Gorchestopher H
    How can I add an item to a list if that item is essentially a pointer, and avoid changing every item in my list to the newest instance of that item? Here's what I mean: I am doing image processing, and there is a chance that I will need to deal with images that come in faster than I can process them (for a short period of time). After this "burst" of images I will rely on the fact that I can process faster than the average image rate, and will "catch up" eventually. So, what I want to do is put my images into a <List> when I acquire them; then, if my processing thread isn't busy, I can take an image from that list and hand it over. My issue is that I am worried that since I am adding the image "Image1" to the list, then filling "Image1" with a new image (during the next image acquisition), I will be replacing the image stored in the list with the new image as well (as the image variable is actually just a pointer). So, my code looks a little like this:

        while (!exitcondition)
        {
            if (ImageAvailabe())
            {
                Image1 = AcquireImage();
                ImgList.Add(Image1);
            }
            if (ImgList.Count > 0)
            {
                ProcessEngine.NewImage(ImgList[0]);
                ImgList.RemoveAt(0);
            }
        }

    Given the above, how can I ensure that:
    - I don't replace all items in the list every time Image1 is modified.
    - I don't need to pre-declare a number of images in order to do this kind of processing.
    - I don't create a memory devouring monster.
    Any advice is greatly appreciated.
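
    Since the question is C#, the following is only a C-flavoured analogy of the same pitfall (acquire_image here is a hypothetical stand-in for the real acquisition call): pushing the same buffer pointer every iteration makes every queued "image" alias the latest frame, whereas allocating fresh storage per frame keeps the queued items distinct.

        #include <stdlib.h>
        #include <string.h>

        #define IMAGE_BYTES 1024

        /* hypothetical stand-in for the real acquisition call */
        static void acquire_image(unsigned char *dst)
        {
            memset(dst, 0, IMAGE_BYTES);
        }

        /* Allocate fresh storage per frame so queued frames never alias each other. */
        static unsigned char *acquire_image_copy(void)
        {
            unsigned char *frame = malloc(IMAGE_BYTES);
            if (frame != NULL)
                acquire_image(frame);
            return frame;   /* the caller queues this pointer and frees it after processing */
        }

        int main(void)
        {
            unsigned char *a = acquire_image_copy();
            unsigned char *b = acquire_image_copy();   /* a distinct buffer, not an alias of a */
            free(a);
            free(b);
            return 0;
        }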

    Read the article

  • How to get the size of a binary tree?

    - by Andrei Ciobanu
    I have a very simple binary tree structure, something like:

        struct nmbintree_s {
            unsigned int size;
            int (*cmp)(const void *e1, const void *e2);
            void (*destructor)(void *data);
            nmbintree_node *root;
        };

        struct nmbintree_node_s {
            void *data;
            struct nmbintree_node_s *right;
            struct nmbintree_node_s *left;
        };

    Sometimes I need to extract a 'tree' from another, and I need to get the size of the 'extracted tree' in order to update the size of the initial 'tree'. I was thinking of two approaches:

    1) Using a recursive function, something like:

        unsigned int nmbintree_size(struct nmbintree_node *node) {
            if (node == NULL) {
                return(0);
            }
            return( nmbintree_size(node->left) + nmbintree_size(node->right) + 1 );
        }

    2) A preorder / inorder / postorder traversal done in an iterative way (using a stack / queue) + counting the nodes.

    Which approach do you think is more 'memory failure proof' / performant? Any other suggestions / tips? NOTE: I am probably going to use this implementation in the future for small projects of mine, so I don't want it to unexpectedly fail :).
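
    A hedged sketch of option 2: counting nodes iteratively with an explicit, heap-allocated stack so a deep or degenerate tree cannot overflow the call stack. The nmbintree_node typedef is assumed from the question, and what to do on allocation failure is a judgment call:

        #include <stdlib.h>

        typedef struct nmbintree_node_s nmbintree_node;

        struct nmbintree_node_s {
            void *data;
            struct nmbintree_node_s *right;
            struct nmbintree_node_s *left;
        };

        unsigned int nmbintree_size_iterative(nmbintree_node *root)
        {
            unsigned int count = 0;
            size_t top = 0, capacity = 64;
            nmbintree_node **stack;

            if (root == NULL)
                return 0;
            stack = malloc(capacity * sizeof *stack);
            if (stack == NULL)
                return 0;                         /* out of memory: nothing counted */
            stack[top++] = root;
            while (top > 0) {
                nmbintree_node *node = stack[--top];
                count++;
                if (top + 2 > capacity) {         /* grow the explicit stack on demand */
                    nmbintree_node **grown = realloc(stack, capacity * 2 * sizeof *stack);
                    if (grown == NULL) {
                        free(stack);
                        return count;             /* partial count on allocation failure */
                    }
                    stack = grown;
                    capacity *= 2;
                }
                if (node->left != NULL)
                    stack[top++] = node->left;
                if (node->right != NULL)
                    stack[top++] = node->right;
            }
            free(stack);
            return count;
        }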

    Read the article

  • What C++ templates issue is going on with this error?

    - by WilliamKF
    Running gcc v3.4.6 against Botan v1.8.8, I get the following compile-time error building my application, after successfully building Botan and running its self test:

        ../../src/Botan-1.8.8/build/include/botan/secmem.h: In member function `Botan::MemoryVector<T>& Botan::MemoryVector<T>::operator=(const Botan::MemoryRegion<T>&)':
        ../../src/Botan-1.8.8/build/include/botan/secmem.h:310: error: missing template arguments before '(' token

    What is this compiler error telling me? Here is a snippet of secmem.h that includes line 130:

        [...]
        /**
        * This class represents variable length buffers that do not
        * make use of memory locking.
        */
        template<typename T>
        class MemoryVector : public MemoryRegion<T>
        {
        public:
            /**
            * Copy the contents of another buffer into this buffer.
            * @param in the buffer to copy the contents from
            * @return a reference to *this
            */
            MemoryVector<T>& operator=(const MemoryRegion<T>& in)
            {
                if(this != &in) set(in);
                return (*this);
            }   // This is line 130!
        [...]

    Read the article

  • User to kernel mode big picture?

    - by fsdfa
    I have to implement a char device, an LKM. I know some basics about operating systems, but I feel I don't have the big picture. In a C program, when I call a syscall, what I think happens is that the CPU is switched to ring 0, then it goes to the syscall vector and jumps to a kernel memory-space function that handles it. (I think it does int 0x80 and eax holds the index into the syscall vector; not sure.) Then I'm in the syscall itself, but I guess that for the kernel it is the same process as before, only now running in kernel mode; I mean, the current PCB is the process that called the syscall. So far, so good? Correct me if something is wrong. Other questions: how can I write/read a process's memory? If, in the syscall handler, I refer to an address, say 0xbfffffff, what does that address mean? A physical one? Some virtual kernel one?
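
    On the last point, a hedged Linux LKM fragment (modern kernel headers assumed, not a complete driver): a read() handler runs in kernel mode but in the context of the calling process, so the buf argument is a user-space virtual address in that process's address space, and it has to go through copy_to_user()/copy_from_user() rather than being dereferenced directly.

        #include <linux/errno.h>
        #include <linux/fs.h>
        #include <linux/uaccess.h>

        static const char greeting[] = "hello from kernel space\n";

        static ssize_t mydev_read(struct file *filp, char __user *buf,
                                  size_t count, loff_t *ppos)
        {
            size_t avail = sizeof(greeting) - 1;

            if (*ppos >= (loff_t)avail)
                return 0;                          /* past the end: report EOF */
            if (count > avail - (size_t)*ppos)
                count = avail - (size_t)*ppos;
            if (copy_to_user(buf, greeting + *ppos, count))
                return -EFAULT;                    /* buf was not a valid user address */
            *ppos += count;
            return count;
        }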

    Read the article

  • Fastest XML parser for small, simple documents in Java

    - by Varkhan
    I have to objectify very simple and small XML documents (less than 1k, and it's almost SGML: no namespaces, plain UTF-8, you name it...), read from a stream, in Java. I am using JAXP to process the data from my stream into a Document object. I have tried Xerces; it's way too big and slow. I am using Dom4j, but I am still spending way too much time in org.dom4j.io.SAXReader. Does anybody out there have any suggestion for a faster, more efficient implementation, keeping in mind that I have very tough CPU and memory constraints? [Edit 1] Keep in mind that my documents are very small, so the overhead of starting the parser can be important. For instance, I am spending as much time in org.xml.sax.helpers.XMLReaderFactory.createXMLReader as in org.dom4j.io.SAXReader.read. [Edit 2] The result has to be in DOM format, as I pass the document to decision tools that do arbitrary processing on it, like switching code based on the value of arbitrary XPaths, but also extracting lists of values packed as children of a predefined node. [Edit 3] In any case I eventually need to load/parse the complete document, since all the information it contains is going to be used at some point. (This question is related to, but different from, http://stackoverflow.com/questions/373833/best-xml-parser-for-java )

    Read the article

  • Drawing and filling different polygons at the same time in MATLAB

    - by Hossein
    Hi, I have the code below. It loads a CSV file into memory. This file contains the coordinates for different polygons. Each row of this file has X,Y coordinates and a string which tells to which polygon this data point belongs. For example, a polygon named "Poly1" with 100 data points has 100 rows in this file, like:

        Poly1,X1,Y1
        Poly1,X2,Y2
        ...
        Poly1,X100,Y100
        Poly2,X1,Y1
        .....

    The Index.csv file has the number of data points (number of rows) for each polygon in the file Polygons.csv. These details are not important. The thing is: I can successfully extract the data points for each polygon using the code below. However, when I plot, the lines of different polygons are connected to each other and the plot looks crappy. I need the polygons to be separated (they are connected and overlapping in some areas, though). I thought that by using "fill" I could actually see them better, but "fill" just fills every polygon that it can find, and that is not desirable. I only want to fill inside the polygons. Can someone help me? I can also send you my data points if necessary; they are less than 200 KB. Thanks

        [coordinates,routeNames,polygonData] = xlsread('Polygons.csv');
        index = dlmread('Index.csv');

        firstPointer = 0
        lastPointer = index(1)

        for Counter=2:size(index)
            firstPointer = firstPointer + index(Counter) + 1
            hold on
            plot(coordinates(firstPointer:lastPointer,2),coordinates(firstPointer:lastPointer,1),'r-')
            lastPointer = lastPointer + index(Counter)
        end

    Read the article

  • What could cause these Apache crash errors?

    - by jacobanderssen
    Hello guys. I had a server crash several days ago. I use Cacti to keep stats: at the time when the server crashed, a huge spike from load 1 to load 200 occurred, with over 800 processes in the run queue (from a 300 average). Upon checking /var/log/httpd I notice this:

        *** glibc detected *** /usr/sbin/httpd: double free or corruption (out): 0x00002b8f3142c2f0 ***

    Followed by a lot of these:

        [Sat Mar 13 19:20:20 2010] [warn] child process 3090 still did not exit, sending a SIGTERM
        [Sat Mar 13 19:20:20 2010] [warn] child process 3091 still did not exit, sending a SIGTERM

    Followed by this:

        ======= Backtrace: =========
        /lib64/libc.so.6[0x2b8f1463c2ef]
        /lib64/libc.so.6(cfree+0x4b)[0x2b8f1463c73b]
        /usr/lib64/libapr-1.so.0(apr_pool_destroy+0x131)[0x2b8f13f98821]
        /usr/sbin/httpd[0x2b8f126df47e]
        /usr/sbin/httpd[0x2b8f126df4ab]
        /lib64/libpthread.so.0[0x2b8f141b87c0]
        /etc/httpd/modules/mod_file_cache.so[0x2b8f1cdf00fb]
        ======= Memory map: ========

    And finally a lot of these:

        [Sat Mar 13 19:20:27 2010] [error] could not make child process 733 exit, attempting to continue anyway
        [Sat Mar 13 19:20:27 2010] [error] could not make child process 24560 exit, attempting to continue anyway
        [Sat Mar 13 19:20:27 2010] [error] could not make child process 31384 exit, attempting to continue anyway

    I am also noticing one or two lines like this:

        [Mon Mar 15 01:17:26 2010] [notice] child pid 20765 exit signal Segmentation fault (11)

    Please help me shed some light on this. Thanks!

    Read the article

  • How do the operators < and > work with pointers?

    - by Øystein
    Just for fun, I had a std::list of const char*, each element pointing to a null-terminated text string, and ran a std::list::sort() on it. As it happens, it sort of (no pun intended) did not sort the strings. Considering that it was working on pointers, that makes sense. According to the documentation of std::list::sort(), it (by default) uses the operator < between the elements to compare. Forgetting about the list for a moment, my actual question is: how do these (>, <, >=, <=) operators work on pointers in C++ and C? Do they simply compare the actual memory addresses?

        char* p1 = (char*) 0xDAB0BC47;
        char* p2 = (char*) 0xBABEC475;

    e.g. on a 32-bit, little-endian system, p1 > p2 because 0xDAB0BC47 > 0xBABEC475? Testing seems to confirm this, but I thought it'd be good to put it on StackOverflow for future reference. C and C++ both do some weird things to pointers, so you never really know...
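
    A short C sketch of the difference being described (the word list is invented for illustration): sorting the char* values themselves orders strings by whatever addresses they happen to live at, while a strcmp-based comparator orders them by contents. Strictly speaking, C only defines relational comparison for pointers into the same array or object; in C++, list::sort with a comparator that calls strcmp, or std::less<const char*>, is the portable route.

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        /* Order by the characters the pointers point at, not by the pointers. */
        static int by_contents(const void *a, const void *b)
        {
            const char *const *sa = a;
            const char *const *sb = b;
            return strcmp(*sa, *sb);
        }

        int main(void)
        {
            const char *words[] = { "pear", "apple", "orange" };
            size_t n = sizeof words / sizeof words[0];

            qsort(words, n, sizeof words[0], by_contents);
            for (size_t i = 0; i < n; i++)
                printf("%s\n", words[i]);   /* apple, orange, pear */
            return 0;
        }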

    Read the article

  • What does it mean for an OS to "execute within user processes"? Do any modern OSes use that approach?

    - by Chris Cooper
    I have recently become interested in operating systems, and a friend of mine lent me a book called Operating Systems: Internals and Design Principles (I have the third edition), published in 1998. It's been a very interesting book so far, but I have come to the part dealing with process control, and it uses UNIX System V as one of its examples of an operating system that executes within user processes. This concept has struck me as a little strange. First of all, does this mean that OS instructions and data are stored in each user process? Probably not, because that would be an absurdly redundant scheme. But if not, then what does it mean to "execute within" a user process? Do any modern operating systems use this approach? It seems much more logical to have the operating system execute as its own process, or even independently of all processes, if you're short on memory. All the inter-accessibility of process data required for this layout seems to greatly complicate things. (But maybe that's just because I don't quite get the concept ;D) Here is what the book says: "Execution within User Processes: An alternative that is common with operating systems on smaller machines is to execute virtually all operating system software in the context of a user process. ..."

    Read the article

  • What is the fastest way to filter a list of strings when making an Intellisense/Autocomplete list?

    - by user559548
    Hello everyone, I'm writing an Intellisense/Autocomplete like the one you find in Visual Studio. It's all fine up until the list contains roughly 2000+ items. I'm using a simple LINQ statement for doing the filtering:

        var filterCollection = from s in listCollection
                               where s.FilterValue.IndexOf(currentWord, StringComparison.OrdinalIgnoreCase) >= 0
                               orderby s.FilterValue
                               select s;

    I then assign this collection to a WPF ListBox's ItemsSource, and that's the end of it; it works fine. Note that the ListBox is also virtualised, so there will only be at most 7-8 visual elements in memory and in the visual tree. However, the caveat right now is that when the user types extremely fast in the RichTextBox, and on every key up I execute the filtering + binding, there's this semi-race condition, or out-of-sync filtering: the first key stroke's filtering could still be doing its filtering or binding work while the fourth key stroke is also doing the same. I know I could put in a delay before applying the filter, but I'm trying to achieve a seamless filtering much like the one in Visual Studio. I'm not sure where my problem exactly lies, so I'm also attributing it to IndexOf's string operation, or perhaps my list of strings could be optimised with some kind of index that could speed up searching. Any suggestions or code samples are much welcomed. Thanks.

    Read the article

  • SqlServer slow on production environment

    - by Lieven Cardoen
    I have a weird problem in a production environment at a customer. I can't give any details on the infrastructure, except that SQL Server runs on a virtual server and the data, log and filestream files are on another storage server (data and filestream together, and the log on a separate server). Now, there's this query that, when we run it, gives these durations (first we clear the cache): 300ms, 20ms, 15ms, 17ms, ... The first time it takes longer, but from then on it is cached. At the customer, on a SQL Server that is more powerful, these are the durations (I didn't have the rights to clear the cache; will try this tomorrow): 2500ms, 2600ms, 2400ms, ... The query can be improved, that's right, but that's not the question here. How would you tackle this? I don't know where to go from here. The servers at this customer are really more powerful, but they do have virtual servers (we don't). What could be the cause?
    - Not enough memory?
    - Fragmentation?
    - Physical storage?
    I know it can be a lot of things, but maybe some of you have got some info for me on how to go on with this issue...

    Read the article

  • What's the best way to store Logon User information for Web Application?

    - by Morgan Cheng
    I once worked on a project, a web application developed on ASP.NET. For each logon user, there is an object (let's call it UserSessionObject here) created and stored in RAM. For each HTTP request of a given user, the matching UserSessionObject instance is used to access user state information and the connection to the database. So this UserSessionObject is pretty important. This design brings several problems, found later:
    1) Since this UserSessionObject is cached in ASP.NET memory space, we have to configure the load balancer for sticky connections. That is, HTTP requests within a single session are always sent to the same web server behind it. This limits scalability and maintainability.
    2) This UserSessionObject is accessed on every HTTP request. To keep it consistent, there is an exclusive lock for the UserSessionObject. Only one HTTP request can be processed at any given time because it must obtain the lock first. Performance and response time are affected.
    Now, I'm wondering whether there is a better design to handle such logon user state. It seems a shared-nothing architecture would help; that means the logon user info is retrieved from the database on each request. I'm afraid that would hurt performance. Is there any design pattern for logon user state in web apps? Thanks.

    Read the article

  • Calling a WPF application and modifying exposed properties?

    - by Justin
    I have a WPF keyboard application. It is developed in such a way that another application could call it and modify its properties to adapt the keyboard to do what it needs to. Right now I have a file *.Keys.Set which tells the application (on open) to style itself according to that new style. I know this file could be passed as a command-line argument into the application; that would not be a problem. My concern is: is there a way, via a managed environment, to change the properties of the executable as long as they are exposed properly? An example:

        'Creates a new instance of the Keyboard Application
        Dim e_key As New WpfApplication("C:\egt\components\keyboard.exe")

        'Sets the style path
        e_key.SetStylePath("c:\users\joe\apps\me\default.keys.set")
        e_key.Refresh()        'Applies the style

        e_key.HideMenu()       'Hides the menu
        e_key.ShowDeck("PIN")  'Shows the custom "deck" of keyboard keys the developer
                               'created in the style application.

        ''work with events and response

        'Clear the instance from memory
        e_key.Close()
        e_key.Dispose()
        e_key = Nothing

    This would allow my application to become easily accessible to other touch-screen application developers, allowing them to use my keyboard and keep the functionality they need. It seems like it might be possible because (name of executable).application shows all the exposed functions, properties, and values; I just have never done this before. Any help would be appreciated, thank you in advance.

    Read the article

  • Rewriting jQuery to plain old javascript - are the performance gains worth it?

    - by Swader
    Since jQuery is an incredibly easy and banal library, I've developed a rather complex project fairly quickly with it. The entire interface is jQuery based, and memory is cleaned regularly to maintain optimum performance. Everything works very well in Firefox, and exceptionally so in Chrome (other browsers are of no concern for me as this is not a commercial or publicly available product). What I'm wondering now is - since pure plain old javascript is really not a complicated language to master, would it be performance enhancing to rewrite the whole thing in plain old JS, and if so, how much of a boost would you expect to get from it? If the answers prove positive enough, I'll go ahead and do it, run a benchmark and report back with the precise findings. Cheers Edit: Thanks guys, valuable insight. The purpose was not to "re-invent the wheel" - it was just for experience and personal improvement. Just because something exists, doesn't mean you shouldn't explore it into greater detail, know how it works or try to recreate it. This is the same reason I seldom use frameworks, I would much rather use my own code and iron it out and gain massive experience doing it, than start off by using someone else's code, regardless of how ironed out it is. Anyway, won't be doing it, thanks for saving me the effort :)

    Read the article
