Search Results

Search found 13869 results on 555 pages for 'memory dump'.

Page 496/555 | < Previous Page | 492 493 494 495 496 497 498 499 500 501 502 503  | Next Page >

  • NoSQL for filesystem storage organization and replication?

    - by wheaties
    We've been discussing the design of a data warehouse strategy within our group for meeting testing, reproducibility, and data syncing requirements. One of the suggested ideas is to adopt a NoSQL approach using an existing tool rather than try to re-implement a whole lot of the same on a file system. I don't know if a NoSQL approach is even the best approach to what we're trying to accomplish, but perhaps if I describe what we need/want you all can help. Most of our files are large, 50+ GB in size, held in a proprietary, third-party format. We need to be able to access each file by a name/date/source/time/artifact combination: essentially a key-value pair style look-up. When we query for a file, we don't want to have to load all of it into memory; they're really too large and would swamp our server. We want to be able to somehow get a reference to the file and then use a proprietary, third-party API to ingest portions of it. We want to easily add, remove, and export files from storage. We'd like to set up automatic file replication between two servers (we can write a script for this), i.e. sync the contents of one server with another. We don't need a distributed system where it only appears as if we have one server; we'd like complete replication. We also have other smaller files that have a tree-type relationship with the big files: one file's content will point to the next, and so on. It's not a "spoked wheel," it's a full-blown tree. We'd prefer a Python, C or C++ API to work with a system like this, but most of us are experienced with a variety of languages; we don't mind as long as it works, gets the job done, and saves us time. What do you think? Is there something out there like this?

    Read the article

  • Do all C compilers allow functions to return structures?

    - by Jordan S
    I am working on a program in C, using the SDCC compiler for an 8051-architecture device. I am trying to write a function called GetName that will read 8 characters from flash memory and return the character array in some form. I know that it is not possible to return an array in C, so I am trying to do it using a struct like this:

        //******************** FLASH.h file *******************************
        MyStruct GetName(int i);   // Function prototype
        #define NAME_SIZE 8
        typedef struct {
            char Name[NAME_SIZE];
        } MyStruct;
        extern MyStruct GetName(int i);

        // ***************** FLASH.c file **********************************
        #include "FLASH.h"
        MyStruct GetName(int i)
        {
            MyStruct newNameStruct;
            //...
            // Fill the array by reading data from Flash
            //...
            return newNameStruct;
        }

    I don't have any references to this function yet, but for some reason I get a compiler error that says "Function cannot return aggregate." Does this mean that my compiler does not support functions that return structs? Or am I just doing something wrong?
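
    Many small-target compilers, SDCC among them, simply don't implement returning aggregates by value. A common workaround is to let the caller own the storage and hand the function a pointer to fill in. A minimal sketch of that pattern follows; flash_read_byte is a hypothetical stand-in for whatever flash-access routine the project really uses:

        #define NAME_SIZE 8

        typedef struct {
            char Name[NAME_SIZE];
        } MyStruct;

        /* Hypothetical low-level flash accessor; replace with the real one. */
        extern char flash_read_byte(unsigned int address);

        /* Caller allocates the struct; GetName only fills it in. */
        void GetName(int i, MyStruct *out)
        {
            int k;
            for (k = 0; k < NAME_SIZE; k++)
                out->Name[k] = flash_read_byte(i * NAME_SIZE + k);
        }

        /* Usage:
         *     MyStruct s;
         *     GetName(3, &s);
         */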

    Read the article

  • PHP file upload issue

    - by Varun
    I am working on a PHP-based ticket management system. While creating a ticket, one can upload an attachment. I want to put a limit (say 10 MB) per file upload. To implement this I plan the following:
    1. In php.ini, set post_max_size = 10M.
    2. In the PHP script which receives the POST: since the file is larger than post_max_size, $_FILES[] will be empty, but I can still check the Content-Length header and discard the upload if the size is more than 10M.
    While testing this I tried uploading a file of 1 GB and analysed the HTTP traffic, and this is what I found:
    - The entire 1 GB of data is first uploaded to the server temporarily and discarded once the HTTP request completes. I couldn't find out exactly where the file was getting saved (it was not in the server's temporary directory), but my HTTP traffic analyzer showed that the browser did send 1 GB of data to the server.
    - The PHP script execution started only after completion of the HTTP request (i.e. after uploading the entire 1 GB).
    Now I have 2 concerns:
    a) People may exploit my server bandwidth by trying to upload large files, which I will have to discard anyway.
    b) Even worse, if someone starts uploading a huge file (say 100 GB), the entire 100 GB of data is first uploaded to the server temporarily, which means that for that period it will consume that much memory on my server.
    What's the common solution for this? Am I missing something here?

    Read the article

  • What's the best way to transfer a large dataset over a .NET web service?

    - by Malvineous
    I've inherited a C# .NET application which talks to a web service, and the web service talks to an Oracle database. I need to add an export function to the UI, to produce an Excel spreadsheet of some of the data. I have created a web service function to run a database query, load the data into a DataTable and then return it, which works fine for a small number of rows. However there is enough data in the full run that the client application locks up for a few minutes and then returns a timeout error. Obviously this isn't the best way to retrieve such a large dataset. Before I go ahead and come up with some dodgy way of splitting the call, I'm wondering if there is already something in place that can handle this. At the moment I'm thinking of a startExport function then repeatedly calling a next50Rows function until there is no data left, but because web services are stateless this means I'm going to have to keep some sort of ID number around and deal with the associated permissions. It would mean that I don't have to load the entire data set into the web server's memory though, which is one good thing. So if anyone knows a better way to retrieve a large amount of data (in a table format) over a .NET web service, please let me know!

    Read the article

  • SqlCeResultSet Problem

    - by Vlad
    Hello, I have a SmartDevice project (.NET CF 2.0) configured to be tested on the USA Windows Mobile 5.0 Pocket PC R2 emulator. My project uses SqlCe 3.0. Understanding that a SmartDevice project is "more careful" with the device's memory, I am using SqlCeResultSets. The result sets are strongly typed, autogenerated by Visual Studio 2008 using the custom tool MSResultSetGenerator. The problem I am facing is that the result set does not recognize any column names; the autogenerated code for the fields does not work. In the client code I am using:

        InfoResultSet rs = new InfoResultSet();
        rs.Open();
        rs.ReadFirst();
        string myFormattedDate = rs.MyDateColumn.ToString("dd/MM/yyyy");

    When the execution on the emulator reaches rs.MyDateColumn, the application throws a System.IndexOutOfRangeException. Investigating the stack trace:

        at System.Data.SqlServerCe.FieldNameLookup.GetOrdinal()
        at System.Data.SqlServerCe.SqlCeDataReader.GetOrdinal()

    I've tested the GetOrdinal method (in my autogenerated class that inherits SqlCeResultSet):

        this.GetOrdinal("MyDateColumn");    // throws an exception
        this.GetName(1);                    // returns "MyDateColumn"
        this.GetOrdinal(this.GetName(1));   // throws an exception :)

    [edit added] The table exists and it's filled with data. Using typed DataSets works like a charm. Regenerating the SqlCeResultSet does not solve the issue; the problem remains. The problem basically is that I am not able to access a column by its name. The data can be accessed through this.GetDateTime(1), using the column ordinal; the application fails executing this.GetOrdinal("MyDateColumn"). Also, I have updated Visual Studio 2008 to Service Pack 1. Additionally, I am developing the project on a virtual machine with Windows XP SP2, but in my opinion whether the machine is virtual or not should have no effect on development. Am I doing something wrong, or am I missing something? Thank you.

    Read the article

  • Why do multiple sessions started on the same page get the same session id?

    - by Calmarius
    I tried the following:

        <?php
        session_name('user');
        session_start(); // sets a 'user' cookie with the session id.
        // This session stores the user's login data. It's an ordinary session.
        $USERSESSION = $_SESSION; // saving this session
        $userSessId = session_id();
        session_write_close(); // closing this session

        session_name('chatroom');
        session_start(); // sets a 'chatroom' cookie, but it contains the same session id and points to the same data :( .
        // This session would be used to store the chat text. This session would be
        // shared between all users joined in this chat room by setting the same session id for all members.
        // Calling session_regenerate_id would make a different session id for every user and it won't be a chat room anymore.
        ?>

    So I want to do a chat-like thing with sessions. On the client side it would be done with AJAX that polls this PHP page every 5-10 seconds. Sessions may be cached in the server's memory so they can be accessed fast. I could store the chat in the database, but my service runs on a free webhost which is limited: only 4 MySQL connections are allowed at a time, which is almost nothing. I try to touch my database as few times as possible.

    Read the article

  • Recursion in assembly?

    - by Davis
    I'm trying to get a better grasp of assembly, and I am a little confused about how to recursively call functions when I have to deal with registers, popping/pushing, etc. I am embedding x86 assembly in C++. Here I am trying to make a method which, given an array of integers, will build a linked list containing these integers in the order they appear in the array. I am doing this by calling a recursive function:
        insertElem(struct elem *head, struct elem *newElem, int data)
    - head: head of the list
    - data: the number that will be inserted at the end of the list
    - newElem: points to the location in memory where I will store the new element (data field)
    My problem is that I keep overwriting the registers instead of building a typical linked list. For example, if I give it an array {2,3,1,8,3,9} my linked list will contain the first element (head) and only the last element, because the elements keep overwriting each other after head is no longer null. So my linked list looks something like 2--9 instead of 2--3--1--8--3--9. I feel like I don't have a grasp on how to organize and handle the registers; newElem is in EBX and just keeps getting rewritten. Thanks in advance!
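
    For reference, a plain-C version of the recursion the assembly is meant to implement could look like the sketch below (the elem field names are assumed from the description). Each recursive call has to preserve its own copies of the pointer registers, typically by pushing them before the call and popping them afterwards, which is what keeps a register like EBX from being clobbered across calls.

        struct elem {
            int data;
            struct elem *next;
        };

        /* Append newElem (carrying 'data') at the end of the list and
         * return the (possibly new) head. Field names are assumed. */
        struct elem *insertElem(struct elem *head, struct elem *newElem, int data)
        {
            if (head == NULL) {            /* end of list: link in the new node */
                newElem->data = data;
                newElem->next = NULL;
                return newElem;
            }
            head->next = insertElem(head->next, newElem, data);
            return head;
        }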

    Read the article

  • CUDA - multiple kernels to compute a single value

    - by Roger
    Hey, I'm trying to write a kernel to essentially do the following in C:

        float sum = 0.0;
        for(int i = 0; i < N; i++){
            sum += valueArray[i]*valueArray[i];
        }
        sum += sum / N;

    At the moment I have this inside my kernel, but it is not giving correct values:

        int i0 = blockIdx.x * blockDim.x + threadIdx.x;
        for(int i=i0; i<N; i += blockDim.x*gridDim.x){
            *d_sum += d_valueArray[i]*d_valueArray[i];
        }
        *d_sum = __fdividef(*d_sum, N);

    The code used to call the kernel is:

        kernelName<<<64,128>>>(N, d_valueArray, d_sum);
        cudaMemcpy(&sum, d_sum, sizeof(float), cudaMemcpyDeviceToHost);

    I think that each thread is calculating a partial sum, but the final divide statement is not taking into account the accumulated value from each of the threads. Every thread is producing its own final value for d_sum? Does anyone know how I could go about doing this in an efficient way? Maybe using shared memory between threads? I'm very new to GPU programming. Cheers
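
    One way to see the structure that is missing is a two-stage reduction: every block produces one partial sum, and only a single final step combines the partials and performs the divide. This is sketched below in plain host-side C (not CUDA) purely to show the shape of the computation; on the GPU the inner loop would be the per-block shared-memory reduction, and the final combine would be a second kernel or an atomic add.

        #define NUM_BLOCKS 64

        /* Stage 1: each "block" b accumulates its own partial sum over a
         * grid-stride slice of the input. Stage 2: combine the partials once,
         * then apply the final "sum += sum / N" once. */
        float sum_of_squares(const float *valueArray, int N)
        {
            float partial[NUM_BLOCKS];
            float sum = 0.0f;
            int b, i;

            for (b = 0; b < NUM_BLOCKS; b++) {
                partial[b] = 0.0f;
                for (i = b; i < N; i += NUM_BLOCKS)
                    partial[b] += valueArray[i] * valueArray[i];
            }

            for (b = 0; b < NUM_BLOCKS; b++)
                sum += partial[b];

            return sum + sum / N;
        }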

    Read the article

  • How do the operators < and > work with pointers?

    - by Øystein
    Just for fun, I had a std::list of const char*, each element pointing to a null-terminated text string, and ran a std::list::sort() on it. As it happens, it sort of (no pun intended) did not sort the strings. Considering that it was working on pointers, that makes sense. According to the documentation of std::list::sort(), it (by default) uses the operator < between the elements to compare. Forgetting about the list for a moment, my actual question is: how do these operators (>, <, >=, <=) work on pointers in C++ and C? Do they simply compare the actual memory addresses?

        char* p1 = (char*) 0xDAB0BC47;
        char* p2 = (char*) 0xBABEC475;

    e.g. on a 32-bit, little-endian system, p1 > p2 because 0xDAB0BC47 > 0xBABEC475? Testing seems to confirm this, but I thought it'd be good to put it on StackOverflow for future reference. C and C++ both do some weird things to pointers, so you never really know...
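
    As a small self-contained illustration: relational operators applied to the pointers themselves order them by address, which is why sorting a list of const char* doesn't sort the text; ordering by contents needs strcmp (or a comparator that calls it).

        #include <stdio.h>
        #include <string.h>

        int main(void)
        {
            const char *a = "pear";
            const char *b = "apple";

            /* Compares the addresses stored in the pointers (strictly speaking,
             * the result is only specified for pointers into the same object/array). */
            if (a < b)
                printf("a lives at a lower address than b\n");
            else
                printf("b lives at a lower address than a\n");

            /* Compares the characters the pointers point at. */
            if (strcmp(a, b) > 0)
                printf("\"%s\" sorts after \"%s\"\n", a, b);
            return 0;
        }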

    Read the article

  • Multi-reader IPC solution?

    - by gct
    I'm working on a framework in C++ (just for fun for now) that lets the user write plugins that use a standard API to stream data between each other. There are going to be three basic transport mechanisms for the data: files, sockets, and some kind of IPC piping system. The system is set up so that for the non-file transports, each stream can have multiple readers, i.e. once a server socket is set up, multiple computers can connect and stream the data. I'm a little stuck on the multi-reader IPC system though. All my plugins run in threads, so they live in the same address space, so some kind of shared memory system would work fine. I was thinking I'd write my own circular buffer with a write pointer and read pointers chasing it around the buffer, but I have my doubts that I can achieve the same performance as something like Linux pipes. I'm curious what people would suggest for a multi-reader solution to something like this. Is the overhead for pipes or domain sockets low enough that I could just open a connection to each reader and issue separate writes to each reader? This is intended to handle significant volumes of data (tens of mega-samples/sec), so performance is a must.
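
    A minimal sketch of the shared-memory idea described above: one writer index, one read cursor per reader, everything guarded by a single mutex. The sizes, types, and fixed reader count are illustrative assumptions, and a reader that falls a full buffer behind simply loses samples.

        #include <pthread.h>

        #define BUF_SLOTS   4096   /* illustrative capacity */
        #define MAX_READERS 8      /* illustrative fixed reader count */

        typedef struct {
            float           slots[BUF_SLOTS];
            unsigned long   write_pos;              /* total samples written */
            unsigned long   read_pos[MAX_READERS];  /* one cursor per reader */
            pthread_mutex_t lock;                   /* init with pthread_mutex_init() */
        } ring_t;

        /* Writer: append one sample (overwrites the oldest slot when full). */
        void ring_write(ring_t *r, float v)
        {
            pthread_mutex_lock(&r->lock);
            r->slots[r->write_pos % BUF_SLOTS] = v;
            r->write_pos++;
            pthread_mutex_unlock(&r->lock);
        }

        /* Reader `id`: returns 1 and stores the next unseen sample in *out,
         * or 0 if this reader has already seen everything written so far. */
        int ring_read(ring_t *r, int id, float *out)
        {
            int got = 0;
            pthread_mutex_lock(&r->lock);
            if (r->read_pos[id] < r->write_pos) {
                *out = r->slots[r->read_pos[id] % BUF_SLOTS];
                r->read_pos[id]++;
                got = 1;
            }
            pthread_mutex_unlock(&r->lock);
            return got;
        }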

    Read the article

  • Spring + iBatis + Hessian caching

    - by ILya
    Hi. I have a Hessian service on Spring + iBatis working on Tomcat. I'm wondering how to cache results... I've made the following config in my sqlmap file:

        <sqlMap namespace="Account">
            <cacheModel id="accountCache" type="MEMORY" readOnly="true" serialize="false">
                <flushInterval hours="24"/>
                <flushOnExecute statement="Account.addAccount"/>
                <flushOnExecute statement="Account.deleteAccount"/>
                <property name="reference-type" value="STRONG" />
            </cacheModel>
            <typeAlias alias="Account" type="domain.Account" />
            <select id="getAccounts" resultClass="Account" cacheModel="accountCache">
                fix all;
                select id, name, pin from accounts;
            </select>
            <select id="getAccount" parameterClass="Long" resultClass="Account" cacheModel="accountCache">
                fix all;
                select id, name, pin from accounts where id=#id#;
            </select>
            <insert id="addAccount" parameterClass="Account">
                fix all;
                insert into accounts (id, name, pin) values (#id#, #name#, #pin#);
            </insert>
            <delete id="deleteAccount" parameterClass="Long">
                fix all;
                delete from accounts where id = #id#;
            </delete>
        </sqlMap>

    Then I've done some tests... I have a Hessian client application. I'm calling getAccounts several times, and after each call there is a query to the DBMS. How can I make my service query the DBMS only the first time getAccounts is called (after a server restart) and use the cache for the following calls?

    Read the article

  • User to kernel mode big picture?

    - by fsdfa
    I have to implement a char device, an LKM. I know some basics about OS, but I feel I don't have the big picture. In a C program, when I call a syscall, what I think happens is that the CPU is switched to ring 0, then goes to the syscall vector and jumps to a kernel-memory-space function that handles it. (I think that it does int 0x80 and in eax is the offset into the syscall vector; not sure.) Then I'm in the syscall itself, but I guess that for the kernel it is the same process that was there before, only now it is in kernel mode; I mean the current PCB is the process that called the syscall. So far, so good? Correct me if something is wrong. Other questions... how can I write/read process memory? If in the syscall handler I refer to an address, say, 0xbfffffff, what does that address mean? A physical one? Some virtual kernel one?
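
    On the read/write question specifically: an address like 0xbfffffff seen from a syscall or driver entry point is a user-space virtual address of the calling process, so kernel code is expected to go through copy_from_user/copy_to_user rather than dereference it directly. A rough sketch of a char-device read handler is below (illustrative only; details vary between kernel versions):

        #include <linux/fs.h>
        #include <linux/uaccess.h>

        static char kbuf[] = "hello from kernel space\n";

        /* 'buf' is a user-space virtual address in the calling process:
         * it must be copied to, never dereferenced directly. */
        static ssize_t my_read(struct file *filp, char __user *buf,
                               size_t count, loff_t *ppos)
        {
            size_t len = sizeof(kbuf) - 1;

            if (*ppos >= len)
                return 0;                  /* end of "file" */
            if (count > len - *ppos)
                count = len - *ppos;

            if (copy_to_user(buf, kbuf + *ppos, count))
                return -EFAULT;            /* user passed a bad pointer */

            *ppos += count;
            return count;
        }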

    Read the article

  • Which fieldtype is best for storing PRICE values?

    - by BerggreenDK
    Hi there, I am wondering what's the best "price field" in MSSQL for a shop-like structure? Looking at this overview: http://www.teratrax.com/sql_guide/data_types/sql_server_data_types.html we have datatypes called money and smallmoney, then we have decimal/numeric, and lastly float and real. Name, memory/disk usage and value ranges:
    - Money: 8 bytes (values: -922,337,203,685,477.5808 to +922,337,203,685,477.5807)
    - Smallmoney: 4 bytes (values: -214,748.3648 to +214,748.3647)
    - Decimal: 9 [default, min. 5] bytes (values: -10^38 +1 to 10^38 -1)
    - Float: 8 bytes (values: -1.79E+308 to 1.79E+308)
    - Real: 4 bytes (values: -3.40E+38 to 3.40E+38)
    My question is: is it really wise to store price values in those types? What about e.g. INT?
    - Int: 4 bytes (values: -2,147,483,648 to 2,147,483,647)
    Let's say a shop uses dollars: they have cents, but I don't see prices like $49.2142342, so using a lot of decimals to show cents seems a waste of SQL bandwidth. Secondly, most shops wouldn't show any prices near 200,000,000 (not in normal webshops at least... unless someone is trying to sell me a famous tower in Paris). So why not go for an INT? An int is fast, it's only 4 bytes, and you can easily handle decimals by saving values in cents instead of dollars and then dividing when you present the values. The other approach would be to use smallmoney, which is 4 bytes too, but this will require the math part of the CPU to do the calculation, whereas INT is pure integer arithmetic; on the downside you will need to divide every single outcome. Are there any currency-related problems with regional settings when using smallmoney/money fields? What will these map to in C#/.NET? Any pros/cons? Go for integer prices, smallmoney, or something else? What does your experience tell you?

    Read the article

  • How to get the size of a binary tree?

    - by Andrei Ciobanu
    I have a very simple binary tree structure, something like:

        struct nmbintree_s {
            unsigned int size;
            int (*cmp)(const void *e1, const void *e2);
            void (*destructor)(void *data);
            nmbintree_node *root;
        };

        struct nmbintree_node_s {
            void *data;
            struct nmbintree_node_s *right;
            struct nmbintree_node_s *left;
        };

    Sometimes I need to extract a 'tree' from another, and I need to get the size of the 'extracted tree' in order to update the size of the initial 'tree'. I was thinking of two approaches:
    1) Using a recursive function, something like:

        unsigned int nmbintree_size(struct nmbintree_node *node) {
            if (node == NULL) {
                return(0);
            }
            return( nmbintree_size(node->left) + nmbintree_size(node->right) + 1 );
        }

    2) A preorder / inorder / postorder traversal done in an iterative way (using a stack / queue) + counting the nodes.
    Which approach do you think is more 'memory failure proof' / performant? Any other suggestions / tips?
    NOTE: I am probably going to use this implementation in the future for small projects of mine, so I don't want to unexpectedly fail :).
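
    For comparison with option 2), an iterative count using an explicit stack might look like the sketch below (the fixed stack depth is an assumption made for the sketch; a dynamically grown stack would remove it):

        #define MAX_DEPTH 1024   /* assumed bound on tree height for this sketch */

        unsigned int nmbintree_size_iter(struct nmbintree_node_s *root)
        {
            struct nmbintree_node_s *stack[MAX_DEPTH];
            int top = 0;
            unsigned int count = 0;

            if (root == NULL)
                return 0;
            stack[top++] = root;
            while (top > 0) {
                struct nmbintree_node_s *n = stack[--top];
                count++;                       /* count the node we just popped */
                if (n->right != NULL)
                    stack[top++] = n->right;
                if (n->left != NULL)
                    stack[top++] = n->left;
            }
            return count;
        }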

    Read the article

  • object / class methods serialized as well?

    - by Mat90
    I know that data members are saved to disk, but I was wondering whether an object's/class's methods are saved in binary format as well? I ask because I found some contradictory info. For example, Ivor Horton writes: "Class objects contain function members as well as data members, and all the members, both data and functions, have access specifiers; therefore, to record objects in an external file, the information written to the file must contain complete specifications of all the class structures involved." And: Are methods also serialized along with the data members in .NET? Thus: are a method's assembly instructions (opcodes and operands) stored to disk as well, just like in a precompiled LIB or DLL? During the DOS ages I used assembly every now and then. As far as I remember from Delphi and the following site (answer by dan04): Are methods also serialized along with the data members in .NET? sizeof(<OBJECT or CLASS>) will give the size of all data members together (no methods/procedures). Also a nice C example is given there, with data and members declared in one class/struct, but at runtime these methods are separate procedures acting on a struct of data. However, I think that later class/object implementations like Pascal's VMT may be different in memory.

    Read the article

  • python crc32 woes

    - by lazyr
    I'm writing a python program to extract data from the middle of a 6 GB bz2 file. A bzip2 file is made up of independently decryptable blocks of data, so I only need to find a block (they are delimited by magic bits), then create a temporary one-block bzip2 file from it in memory, and finally pass that to the bz2.decompress function. Easy, no? The bzip2 format has a crc32 checksum for the file at the end. No problem, binascii.crc32 to the rescue. But wait. The data to be checksummed does not necessarily end on a byte boundary, and the crc32 function operates on a whole number of bytes. My plan: use the binascii.crc32 function on all but the last byte, and then a function of my own to update the computed crc with the last 1-7 bits. But hours of coding and testing has left me bewildered, and my puzzlement can be boiled down to this question: how come crc32("\x00") is not 0x00000000? Shouldn't it be, according to the wikipedia article? You start with 0b00000000 and pad with 32 0's, then do polynomial division with 0x04C11DB7 until there are no ones left in the first 8 bits, which is immediately. Your last 32 bits is the checksum, and how can that not be all zeroes? I've searched google for answers and looked at the code of several crc32 implementations without finding any clue to why this is so.
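
    Part of the surprise is that the CRC-32 used by zlib/binascii is not the bare polynomial division from the article: it processes bits LSB-first with the reflected polynomial 0xEDB88320 (bit-reversed 0x04C11DB7), starts the register at 0xFFFFFFFF, and XORs the result with 0xFFFFFFFF at the end, which is why a single zero byte does not come out as zero. A small self-contained sketch of that variant (in C rather than Python, purely to show the algorithm):

        #include <stdio.h>
        #include <stdint.h>
        #include <stddef.h>

        /* Reflected CRC-32 as used by zlib/binascii:
         * init 0xFFFFFFFF, polynomial 0xEDB88320, final XOR 0xFFFFFFFF. */
        static uint32_t crc32_bitwise(const unsigned char *buf, size_t len)
        {
            uint32_t crc = 0xFFFFFFFFu;
            size_t i;
            int bit;

            for (i = 0; i < len; i++) {
                crc ^= buf[i];
                for (bit = 0; bit < 8; bit++) {
                    if (crc & 1u)
                        crc = (crc >> 1) ^ 0xEDB88320u;
                    else
                        crc >>= 1;
                }
            }
            return crc ^ 0xFFFFFFFFu;
        }

        int main(void)
        {
            unsigned char zero = 0x00;
            /* Prints d202ef8d, matching binascii.crc32(b"\x00") & 0xffffffff. */
            printf("%08x\n", crc32_bitwise(&zero, 1));
            return 0;
        }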

    Read the article

  • Array: Recursive problem cracked me up

    - by VaioIsBorn
    An array of integers A[i] (i >= 1) is defined in the following way: an element A[k] (k > 1) is the smallest number greater than A[k-1] such that the sum of its digits is equal to the sum of the digits of the number 4*A[k-1]. You need to write a program that calculates the Nth number in this array based on the given first element A[1].
    INPUT: In one line of standard input there are two numbers separated by a single space: A[1] (1 <= A[1] <= 100) and N (1 <= N <= 10000).
    OUTPUT: The standard output should only contain a single integer A[N], the Nth number of the defined sequence.
    Input: 7 4
    Output: 79
    Explanation: the elements of the array are as follows: 7, 19, 49, 79... and the 4th element is the solution.
    I tried solving this by coding a separate function that, for a given number A[k], calculates the sum of its digits and finds the smallest number greater than A[k-1] as it says in the problem, but with no success. The first test failed because of a memory limit, the second test failed because of a time limit, and now I don't have any idea how to solve this. One friend suggested recursion, but I don't know how to set that up. Anyone who can help me in any way, please write; also suggest some ideas about using recursion/DP for solving this problem. Thanks.
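
    A direct brute-force reading of the definition is sketched below; it reproduces the 7 -> 19 -> 49 -> 79 example, though for the worst-case limits the "smallest number greater than A[k-1] with a given digit sum" step may still need something smarter than scanning one number at a time.

        #include <stdio.h>

        /* Sum of the decimal digits of n. */
        static int digit_sum(long long n)
        {
            int s = 0;
            while (n > 0) {
                s += (int)(n % 10);
                n /= 10;
            }
            return s;
        }

        int main(void)
        {
            long long a;        /* current element A[k] */
            int n, k;

            if (scanf("%lld %d", &a, &n) != 2)
                return 1;

            for (k = 2; k <= n; k++) {
                int target = digit_sum(4 * a);       /* digit sum of 4*A[k-1] */
                long long next = a + 1;
                while (digit_sum(next) != target)    /* smallest number > A[k-1] */
                    next++;
                a = next;
            }
            printf("%lld\n", a);
            return 0;
        }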

    Read the article

  • Alternatives to static methods on interfaces for enforcing consistency

    - by jayshao
    In Java, I'd like to be able to define marker interfaces that force implementations to provide static methods. For example, for simple text serialization/deserialization I'd like to be able to define an interface that looked something like this:

        public interface TextTransformable<T>{
            public static T fromText(String text);
            public String toText();

    Since interfaces in Java can't contain static methods though (as noted in a number of other posts/threads: here, here, and here), this code doesn't work. What I'm looking for, however, is some reasonable paradigm to express the same intent, namely symmetric methods, one of which is static, and enforced by the compiler. Right now the best we can come up with is some kind of static factory object or generic factory, neither of which is really satisfactory.
    Note: in our case the primary use-case is that we have many, many "value-object" types: enums, or other objects that have a limited number of values, typically carry no state beyond their value, and which we parse/de-parse thousands of times a second, so we actually do care about reusing instances (like Float, Integer, etc.) and their impact on memory consumption/GC.
    Any thoughts?

    Read the article

  • What is the fastest way to filter a list of strings when making an Intellisense/Autocomplete list?

    - by user559548
    Hello everyone, I'm writing an Intellisense/Autocomplete like the one you find in Visual Studio. It's all fine up until the list contains 2000+ items. I'm using a simple LINQ statement for doing the filtering:

        var filterCollection = from s in listCollection
                               where s.FilterValue.IndexOf(currentWord, StringComparison.OrdinalIgnoreCase) >= 0
                               orderby s.FilterValue
                               select s;

    I then assign this collection to a WPF ListBox's ItemsSource, and that's the end of it; it works fine. Note that the ListBox is also virtualised, so there will only be at most 7-8 visual elements in memory and in the visual tree. However, the caveat right now is that when the user types extremely fast in the RichTextBox, and on every key-up I execute the filtering + binding, there's this semi race condition, or out-of-sync filtering: the first keystroke's filtering could still be doing its filtering or binding work while the fourth keystroke is also doing the same. I know I could put in a delay before applying the filter, but I'm trying to achieve a seamless filtering much like the one in Visual Studio. I'm not sure where my problem exactly lies, so I'm also attributing it to IndexOf's string operation; or perhaps my list of strings could be organised in some kind of index that could speed up searching. Any suggestions or code samples are much welcomed. Thanks.

    Read the article

  • Ember Data Sync - LocalStorage+REST+RealTime+Online/Offline

    - by Miguel Madero
    We have a combination of requirements in terms of data access.
    Pre-load some reference data. We need reference data to survive browser restarts instead of just living in memory, to avoid loading it all the time. I'm currently using the LocalStorageAdapter for that. Once we have it, we would like to sync changes (polling or using Socket.IO in the background and updating the LocalStorage could do the trick). There are other models that are more transactional, where we would need to go directly to the server and get/save them; it would be nice to use something like the RESTAdapter for that. Lastly, there are some operations that should work offline and whose changes should be synced later.
    To make it more concrete:
    * We pre-load vendor and "favorite products" into Local Storage. We work offline with those.
    * We need to sync server changes to vendor and product information.
    * If they search the full catalog, that requires them to be online.
    * When offline, we need to allow users to add something to their cart or even submit an order. We would like to queue this action and submit it when they have an Internet connection.
    So a few questions are derived from this:
    * Is there a way to use RESTAdapter in combination with LocalStorage?
    * Is there some Socket.IO support? (Happy to do this part manually.)
    * Is there queueing support? Ideally at the Ember-Data level.
    I know we will have to do a lot of this manually and pull together the different lego pieces, but I wanted to ask for some perspective from experienced Ember devs.

    Read the article

  • Using `<List>` when dealing with pointers in C#.

    - by Gorchestopher H
    How can I add an item to a list if that item is essentially a pointer, and avoid changing every item in my list to the newest instance of that item? Here's what I mean: I am doing image processing, and there is a chance that I will need to deal with images that come in faster than I can process them (for a short period of time). After this "burst" of images I will rely on the fact that I can process faster than the average image rate, and will "catch up" eventually. So, what I want to do is put my images into a <List> when I acquire them; then, if my processing thread isn't busy, I can take an image from that list and hand it over. My issue is that I am worried that since I am adding the image "Image1" to the list, then filling "Image1" with a new image (during the next image acquisition), I will be replacing the image stored in the list with the new image as well (as the image variable is actually just a pointer). So, my code looks a little like this:

        while (!exitcondition)
        {
            if(ImageAvailabe())
            {
                Image1 = AcquireImage();
                ImgList.Add(Image1);
            }
            if(ImgList.Count > 0)
            {
                ProcessEngine.NewImage(ImgList[0]);
                ImgList.RemoveAt(0);
            }
        }

    Given the above, how can I ensure that:
    - I don't replace all items in the list every time Image1 is modified.
    - I don't need to pre-declare a number of images in order to do this kind of processing.
    - I don't create a memory devouring monster.
    Any advice is greatly appreciated.

    Read the article

  • Drawing and filling different polygons at the same time in MATLAB

    - by Hossein
    Hi, I have the code below. It loads a CSV file into memory. This file contains the coordinates for different polygons. Each row of this file has X,Y coordinates and a string which tells to which polygon this data point belongs. For example, a polygon named "Poly1" with 100 data points has 100 rows in this file, like:
        Poly1,X1,Y1
        Poly1,X2,Y2
        ...
        Poly1,X100,Y100
        Poly2,X1,Y1
        .....
    The Index.csv file has the number of data points (number of rows) for each polygon in the file Polygons.csv. These details are not important. The thing is: I can successfully extract the data points for each polygon using the code below. However, when I plot, the lines of different polygons are connected to each other and the plot looks crappy. I need the polygons to be separated (they are connected and overlapping in some areas though). I thought that by using "fill" I could actually see them better, but "fill" just fills every polygon that it can find, and that is not desirable; I only want to fill inside the polygons. Can someone help me? I can also send you my data points if necessary; they are less than 200Kb. Thanks

        [coordinates,routeNames,polygonData] = xlsread('Polygons.csv');
        index = dlmread('Index.csv');
        firstPointer = 0
        lastPointer = index(1)
        for Counter=2:size(index)
            firstPointer = firstPointer + index(Counter) + 1
            hold on
            plot(coordinates(firstPointer:lastPointer,2),coordinates(firstPointer:lastPointer,1),'r-')
            lastPointer = lastPointer + index(Counter)
        end

    Read the article

  • Mysterious error

    - by Görkem Buzcu
    I get a "customer_service_simulator.exe stopped" error, but I don't know why. This is my C programming project and I have limited time left before the deadline. The code is:

        #include <stdio.h>
        #include <stdlib.h>
        #include <time.h>

        #define FALSE 0
        #define TRUE 1

        /* A Node declaration to store a value, pointer to the next node and a priority value */
        struct Node {
            int priority;        // arrival time
            int val;             // type
            int wait_time;
            int departure_time;
            struct Node *next;
        };

        /* Queue record that will store the following:
         *   size: total number of elements stored in the list
         *   front: it shows the front node of the queue (front of the queue)
         *   rear: it shows the rear node of the queue (rear of the queue)
         *   availability: availability of the teller */
        struct QueueRecord {
            struct Node *front;
            struct Node *rear;
            int size;
            int availability;
        };

        typedef struct Node *niyazi;
        typedef struct QueueRecord *Queue;

        Queue CreateQueue(int);
        void MakeEmptyQueue(Queue);
        void enqueue(Queue, int, int);
        int QueueSize(Queue);
        int FrontOfQueue(Queue);
        int RearOfQueue(Queue);
        niyazi dequeue(Queue);
        int IsFullQueue(Queue);
        int IsEmptyQueue(Queue);
        void DisplayQueue(Queue);
        void sorteddequeue(Queue);
        void sortedenqueue(Queue, int, int);
        void tellerzfunctionz(Queue *, Queue, int, int);

        int main() {
            int system_clock=0;
            Queue waitqueue;
            int exit, val, priority, customers, tellers, avg_serv_time, sim_time, counter;
            char command;
            waitqueue = CreateQueue(0);
            srand(time(NULL));
            fflush(stdin);
            printf("Enter number of customers, number of tellers, average service time, simulation time\n:");
            scanf("%d%c %d%c %d%c %d",&customers, &command,&tellers,&command,&avg_serv_time,&command,&sim_time);
            fflush(stdin);
            Queue tellerarray[tellers];
            for(counter=0;counter<tellers;counter++){
                tellerarray[counter]=CreateQueue(0); // here I create as many queues as there are tellers
            }
            for(counter=0;counter<customers;counter++){
                priority=1+(int)rand()%sim_time;        // this will generate the arrival time
                sortedenqueue(waitqueue,1,priority);    // here I put the customers in the waiting queue
            }
            tellerzfunctionz(tellerarray,waitqueue,tellers,customers);
            DisplayQueue(waitqueue);
            DisplayQueue(tellerarray[0]);
            DisplayQueue(tellerarray[1]);
            // waitqueue->
            printf("\n\n");
            system("PAUSE");
            return 0;
        }

        /* This function initialises the queue */
        Queue CreateQueue(int maxElements) {
            Queue q;
            q = (struct QueueRecord *) malloc(sizeof(struct QueueRecord));
            if (q == NULL)
                printf("Out of memory space\n");
            else
                MakeEmptyQueue(q);
            return q;
        }

        /* This function sets the queue size to 0, creates a dummy element and
           sets the front and rear to point to this dummy element */
        void MakeEmptyQueue(Queue q) {
            q->size = 0;
            q->availability=0;
            q->front = (struct Node *) malloc(sizeof(struct Node));
            if (q->front == NULL)
                printf("Out of memory space\n");
            else{
                q->front->next = NULL;
                q->rear = q->front;
            }
        }

        /* Shows if the queue is empty */
        int IsEmptyQueue(Queue q) {
            return (q->size == 0);
        }

        /* Returns the queue size */
        int QueueSize(Queue q) {
            return (q->size);
        }

        /* Shows whether the queue is full or not */
        int IsFullQueue(Queue q) {
            return FALSE;
        }

        /* Returns the value stored in the front of the queue */
        int FrontOfQueue(Queue q) {
            if (!IsEmptyQueue(q))
                return q->front->next->val;
            else {
                printf("The queue is empty\n");
                return -1;
            }
        }

        /* Returns the value stored in the rear of the queue */
        int RearOfQueue(Queue q) {
            if (!IsEmptyQueue(q))
                return q->rear->val;
            else {
                printf("The queue is empty\n");
                return -1;
            }
        }

        /* Displays the content of the queue */
        void DisplayQueue(Queue q) {
            struct Node *pos;
            pos=q->front->next;
            printf("Queue content:\n");
            printf("-->Priority Value\n");
            while (pos != NULL) {
                printf("--> %d\t %d\n", pos->priority, pos->val);
                pos = pos->next;
            }
        }

        void enqueue(Queue q, int element, int priority){
            if(IsFullQueue(q)){
                printf("Error queue is full");
            }
            else{
                q->rear->next=(struct Node *)malloc(sizeof(struct Node));
                q->rear=q->rear->next;
                q->rear->next=NULL;
                q->rear->val=element;
                q->rear->priority=priority;
                q->size++;
            }
        }

        void sortedenqueue(Queue q, int val, int priority) {
            struct Node *insert,*temp;
            insert=(struct Node *)malloc(sizeof(struct Node));
            insert->val=val;
            insert->priority=priority;
            temp=q->front;
            if(q->size==0){
                enqueue(q, val, priority);
            }
            else{
                while(temp->next!=NULL && temp->next->priority<insert->priority){
                    temp=temp->next;
                }
                //printf("%d",temp->priority);
                insert->next=temp->next;
                temp->next=insert;
                q->size++;
                if(insert->next==NULL){
                    q->rear=insert;
                }
            }
        }

        niyazi dequeue(Queue q) {
            niyazi del;
            niyazi deli;
            del=(niyazi)malloc(sizeof(struct Node));
            deli=(niyazi)malloc(sizeof(struct Node));
            if(IsEmptyQueue(q)){
                printf("Queue is empty!");
                return NULL;
            }
            else {
                del=q->front->next;
                q->front->next=del->next;
                deli->val=del->val;
                deli->priority=del->priority;
                free(del);
                q->size--;
                return deli;
            }
        }

        void sorteddequeue(Queue q) {
            struct Node *temp;
            struct Node *min;
            temp=q->front->next;
            min=q->front;
            int i;
            for(i=1;i<q->size;i++) {
                if(temp->next->priority<min->next->priority) {
                    min=temp;
                }
                temp=temp->next;
            }
            temp=min->next;
            min->next=min->next->next;
            free(temp);
            if(min->next==NULL){
                q->rear=min;
            }
            q->size--;
        }

        void tellerzfunctionz(Queue *a, Queue b, int c, int d){
            int i;
            int value=0;
            int priority;
            niyazi temp;
            temp=(niyazi)malloc(sizeof(struct Node));
            if(c==1){
                for(i=0;i<d;i++){
                    temp=dequeue(b);
                    sortedenqueue((*(a)),temp->val,temp->priority);
                }
            }
            else{
                for(i=0;i<d;i++){
                    while(b->front->next->val==1){
                        if((*(a+value))->availability==1){
                            temp=dequeue(b);
                            sortedenqueue((*(a+value)),temp->val,temp->priority);
                            (*(a+value))->rear->val=2;
                        }
                        else{
                            value++;
                        }
                    }
                }
            }
        }
        // end of the program

    Read the article

  • Rewriting jQuery to plain old javascript - are the performance gains worth it?

    - by Swader
    Since jQuery is an incredibly easy and banal library, I've developed a rather complex project fairly quickly with it. The entire interface is jQuery based, and memory is cleaned regularly to maintain optimum performance. Everything works very well in Firefox, and exceptionally so in Chrome (other browsers are of no concern for me as this is not a commercial or publicly available product). What I'm wondering now is - since pure plain old javascript is really not a complicated language to master, would it be performance enhancing to rewrite the whole thing in plain old JS, and if so, how much of a boost would you expect to get from it? If the answers prove positive enough, I'll go ahead and do it, run a benchmark and report back with the precise findings. Cheers Edit: Thanks guys, valuable insight. The purpose was not to "re-invent the wheel" - it was just for experience and personal improvement. Just because something exists, doesn't mean you shouldn't explore it into greater detail, know how it works or try to recreate it. This is the same reason I seldom use frameworks, I would much rather use my own code and iron it out and gain massive experience doing it, than start off by using someone else's code, regardless of how ironed out it is. Anyway, won't be doing it, thanks for saving me the effort :)

    Read the article

  • What's the best way to store Logon User information for Web Application?

    - by Morgan Cheng
    I was once on a project, a web application developed in ASP.NET. For each logon user, there is an object (let's call it UserSessionObject here) created and stored in RAM. For each HTTP request of a given user, the matching UserSessionObject instance is used to access user state information and the connection to the database. So, this UserSessionObject is pretty important. This design brings several problems, found later:
    1) Since this UserSessionObject is cached in ASP.NET memory space, we have to configure the load balancer for sticky connections. That is, HTTP requests within a single session will always be sent to one web server behind the balancer. This limits scalability and maintainability.
    2) This UserSessionObject is accessed in every HTTP request. To keep consistency, there is an exclusive lock on the UserSessionObject. Only one HTTP request can be processed at any given time, because it must obtain the lock first. Performance and response time are affected.
    Now, I'm wondering whether there is a better design to handle such logon user state. It seems a shared-nothing architecture would help; that means the logon user info is retrieved from the database each time, and I'm afraid that would hurt performance. Is there any design pattern for logon user state in web apps? Thanks.

    Read the article
