Search Results

Search found 65558 results on 2623 pages for 'large data'.


  • How to manage data access / preloading efficiently using web services in C#?

    - by Amadeus45
    Hello all. OK, this is a very generic question. We currently have a SQL Server database for which we need to develop an ASP.NET application, with all the business logic in C# web services. The thing is that, architecturally speaking, I'm not sure how to design the web service and the data management. There are many things to consider:

    We need very rapid access to data. Right now we have over a million "sales" and "purchases" records, from which we often need to calculate and load the current stock for a given day according to a series of parameters. I'm not sure how we should preload the data and keep it in the web service. Doing a stock calculation within a SQL query would be very lengthy. They currently have a stock calculation application that preloads all sales and purchases for the day and then calculates the stock on the code side.

    We want to develop powerful reporting tools. We want to implement a "pivot table" but are not sure how to implement it with good performance.

    For the reasons above, I'm not sure how to design the data model. Can anybody give me any guidelines on how to start, or share their personal experience (what have you done in the past?)? I'm not sure if it's possible to put a bounty on a question this new (I'd put 300 rep on it, since I really need something). If you know how, let me know. Thanks
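
    A minimal sketch of one common pattern, assuming a point-in-time cache is acceptable (the StockService class, Transaction type, and LoadTransactionsFromDb helper are illustrative, not from the question): preload the day's transactions once into an in-process cache and compute stock from the cached set.

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Runtime.Caching;

        public class StockService
        {
            private static readonly ObjectCache Cache = MemoryCache.Default;

            public class Transaction { public string ProductId; public decimal Quantity; }

            public decimal GetStock(string productId, DateTime day)
            {
                string key = "transactions-" + day.ToString("yyyy-MM-dd");
                var transactions = Cache.Get(key) as List<Transaction>;
                if (transactions == null)
                {
                    // One set-based query per day, shared by all subsequent calls.
                    transactions = LoadTransactionsFromDb(day);
                    Cache.Set(key, transactions, DateTimeOffset.Now.AddMinutes(10));
                }
                // Stock = sum of signed quantities (purchases positive, sales negative).
                return transactions.Where(t => t.ProductId == productId)
                                   .Sum(t => t.Quantity);
            }

            private static List<Transaction> LoadTransactionsFromDb(DateTime day)
            {
                // Placeholder: in reality, a single parameterized query for that day.
                return new List<Transaction>();
            }
        }

    Whether a ten-minute staleness window is acceptable depends on the business; the same pattern works with an explicit cache-invalidation call after each write.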


  • How to structure an index for type ahead for extremely large dataset using Lucene or similar?

    - by Pete
    I have a dataset of 200 million+ records and am looking to build a dedicated backend to power a type-ahead solution. Lucene is of interest given its popularity and license type, but I'm open to other open-source suggestions as well. I am looking for advice, tales from the trenches, or, even better, direct instruction on what I will need as far as amount of hardware and structure of software. Requirements:

    Must have:
    - The ability to do "starts with" substring matching (I type in 'st' and it should match 'Stephen')
    - The ability to return results very quickly; I'd say 500 ms is an upper bound

    Nice to have:
    - The ability to feed relevance information into the indexing process, so that, for example, more popular terms are returned ahead of others rather than just alphabetically, Google style
    - In-word substring matching (so 'st' would match 'bestseller')

    Note: this index will purely be used for type-ahead and does not need to serve standard search queries. I am not worried about getting advice on how to set up the front end or AJAX, as long as the index can be queried as a service or directly via Java code. Upvotes for any useful information that allows me to get closer to an enterprise-level type-ahead solution.
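
    A minimal sketch of the query side in Lucene (the directory variable and the "term" field name are assumptions, and exact classes vary by Lucene version): indexing each entry as an untokenized StringField and querying with a PrefixQuery gives "starts with" matching, and per-document boosts at index time are one way to fold in popularity.

        // Assumes an existing index whose documents carry an untokenized "term" field.
        IndexSearcher searcher = new IndexSearcher(DirectoryReader.open(directory));
        Query query = new PrefixQuery(new Term("term", "st"));  // matches "stephen", "stockholm", ...
        TopDocs hits = searcher.search(query, 10);
        for (ScoreDoc sd : hits.scoreDocs) {
            System.out.println(searcher.doc(sd.doc).get("term"));
        }

    In-word matching ('st' matching 'bestseller') is usually handled differently, e.g. by also indexing n-grams or suffixes of each term.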


  • PHP: is this if condition correct?

    - by phill
    I have the following if condition:

        if ( (strlen($data[70])>0) || ( (remove19((trim($data[29])) == '7135556666')) && isLongDistance($data[8])) )

    where $data is a recordset from a database. My goal is to include all rows where $data[70] isn't blank, and also include rows where $data[29] == '7135556666' and isLongDistance($data[8]) is TRUE. My question is: if isLongDistance($data[8]) returns false, will it still return the row, since $data[70] is not blank? Thanks in advance
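
    For what it's worth: yes, || short-circuits in PHP, so when strlen($data[70]) > 0 the right-hand side is never evaluated and the row is included regardless of isLongDistance(). Note also that the parentheses as written pass the boolean result of the comparison into remove19(); a sketch of the presumably intended grouping:

        if (strlen($data[70]) > 0
            || (remove19(trim($data[29])) == '7135556666'
                && isLongDistance($data[8]))) {
            // include this row
        }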


  • How can I optimize retrieving lowest edit distance from a large table in SQL?

    - by Matt
    Hey, I'm having trouble optimizing this Levenshtein distance calculation I'm doing. I need to do the following:

    1. Get the record with the minimum distance for the source string as well as for a trimmed version of the source string
    2. Pick the record with the minimum distance
    3. If the min distances are equal (original vs. trimmed), choose the trimmed one with the lowest distance
    4. If there are still multiple records that fall under the above two categories, pick the one with the highest frequency

    Here's my working version:

        DECLARE @Results TABLE
        (
            ID int,
            [Name] nvarchar(200),
            Distance int,
            Frequency int,
            Trimmed bit
        )

        INSERT INTO @Results
        SELECT ID, [Name], dbo.Levenshtein(@Source, [Name]) AS Distance,
               Frequency, 'False' AS Trimmed
        FROM MyTable

        INSERT INTO @Results
        SELECT ID, [Name], dbo.Levenshtein(@SourceTrimmed, [Name]) AS Distance,
               Frequency, 'True' AS Trimmed
        FROM MyTable

        SET @ResultID = (SELECT TOP 1 ID FROM @Results ORDER BY Distance, Trimmed, Frequency)
        SET @Result = (SELECT TOP 1 [Name] FROM @Results ORDER BY Distance, Trimmed, Frequency)
        SET @ResultDist = (SELECT TOP 1 Distance FROM @Results ORDER BY Distance, Trimmed, Frequency)
        SET @ResultTrimmed = (SELECT TOP 1 Trimmed FROM @Results ORDER BY Distance, Trimmed, Frequency)

    I believe what I need to do here is:

    - not dump the results into a temporary table,
    - do only one SELECT from MyTable,
    - set the results right in the initial SELECT statement (since a SELECT can assign multiple variables in one statement).

    I know there has to be a good implementation of this, but I can't figure it out... this is as far as I got:

        SELECT TOP 1
            @ResultID = ID,
            @Result = [Name],
            dbo.Levenshtein(@Source, [Name]) AS distOrig,
            dbo.Levenshtein(@SourceTrimmed, [Name]) AS distTrimmed,
            Frequency
        FROM MyTable
        WHERE /* ... yeah, I'm lost */
        ORDER BY distOrig, distTrimmed, Frequency

    Any ideas?
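
    A hedged sketch of one way to collapse this into a single statement: compute both distances via a derived table (each row appearing once per variant), then assign all four variables from one TOP 1. Note that T-SQL does not allow mixing variable assignments with plain output columns in the same SELECT, which is why the attempt above fails; also, DESC on Trimmed and Frequency follows requirements 3 and 4 (prefer trimmed on ties, then highest frequency):

        SELECT TOP 1
            @ResultID      = d.ID,
            @Result        = d.[Name],
            @ResultDist    = d.Distance,
            @ResultTrimmed = d.Trimmed
        FROM (
            SELECT ID, [Name], Frequency,
                   dbo.Levenshtein(@Source, [Name]) AS Distance,
                   CAST(0 AS bit) AS Trimmed
            FROM MyTable
            UNION ALL
            SELECT ID, [Name], Frequency,
                   dbo.Levenshtein(@SourceTrimmed, [Name]),
                   CAST(1 AS bit)
            FROM MyTable
        ) AS d
        ORDER BY d.Distance, d.Trimmed DESC, d.Frequency DESC;

    This still evaluates the UDF once per row per variant; the bigger win, if feasible, is usually pruning rows (e.g. by length difference) before calling the UDF at all.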


  • How can I get PHP to compile a LaTeX document if it (www-data) can't get access to the required packages?

    - by Mark Jones
    I have a PHP script that compiles LaTeX documents with:

        exec('cd /path/to/doc && /usr/bin/latexmk -pdf filename.tex');

    This works for some of my LaTeX documents, but my latest document doesn't compile, and a look at the log reveals:

        !pdfTeX error: pdflatex (file ecrm1000): Font ecrm1000 at 600 not found
        ==> Fatal error occurred, no output PDF file produced!

    which, I have found, is the result of LaTeX not being able to see the required font packages. When I run the same compile command under my username, the document compiles as it should. So my question is: how can I get PHP (executing as www-data) to get access to the necessary LaTeX packages? I have tried installing the required package under the www-data account using:

        sudo -u www-data sudo apt-get install texlive-fonts-recommended

    but it asks for www-data's password, which I don't believe was set by me and isn't anything I've thrown at it. I'm running Ubuntu 12.04, if it's any help.
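
    One common cause (an assumption here, but consistent with the symptoms): the missing bitmap font is generated on demand by mktexpk, which needs a writable HOME for its font cache, and www-data's home directory isn't writable. A minimal sketch of the workaround, pointing HOME somewhere www-data can write:

        <?php
        // Give latexmk/mktexpk a writable HOME so the font cache can be built.
        exec('cd /path/to/doc && HOME=/tmp /usr/bin/latexmk -pdf filename.tex 2>&1', $output, $status);
        if ($status !== 0) {
            error_log(implode("\n", $output));  // surface the LaTeX log on failure
        }

    Alternatively, pre-generating the fonts into a shared location (e.g. compiling the document once as root, or pointing kpathsea's VARTEXFONTS at a directory writable by www-data) avoids per-user caches.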


  • Import a text file into a temporary table using LOAD DATA INFILE inside a stored procedure (MySQL)

    - by Pankaj
    I need to import a text file into a temporary table and, from that, select portions of it to insert into different tables. I wanted to use LOAD DATA INFILE. Is there any way I can use LOAD DATA INFILE in a stored procedure? I am using MySQL.

        LOAD DATA LOCAL INFILE 'C:\\MyData.txt'
        INTO TABLE tempprod
        FIELDS TERMINATED BY ','
        LINES TERMINATED BY '\r\n';

        SELECT * FROM product p;
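
    As far as I know, MySQL does not permit LOAD DATA inside stored routines, so the usual pattern is to run the load from the client session and put only the follow-up logic in a procedure. A hedged sketch (the process_tempprod procedure is illustrative; temporary tables are session-scoped, so a procedure called in the same session can see tempprod):

        -- Run from the client session, not inside a routine:
        CREATE TEMPORARY TABLE tempprod LIKE product;

        LOAD DATA LOCAL INFILE 'C:\\MyData.txt'
        INTO TABLE tempprod
        FIELDS TERMINATED BY ','
        LINES TERMINATED BY '\r\n';

        -- The per-table INSERT ... SELECT logic can live in a procedure:
        CALL process_tempprod();

    Another option is to drive the LOAD DATA statement from outside MySQL (e.g. mysqlimport or a small script) as part of the same job.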


  • How do I use price data in one table for a calculation that is stored in another table?

    - by shane
    I'm still learning PHP/MySQL but have learned quite a bit thanks to the coders on StackOverflow. I'm trying to set up a sort of room reservation system using two tables.

    SETUP:

    Room price table: has prices for a type of room a client may want to rent, as well as the dates (day of week) they wish to use it. Pricing varies based on day of the week and per room. I've set up a different table for each room type, as each room type carries different pricing for each day of the week. So there is an Alpha room table, a Bravo room table, etc. Within the Alpha table are headers for the days of the week, with pricing pre-entered into the rows.

    Client info table: has the name, address, date of room use, etc. for the specific client.

    EXAMPLE:

        Alpha-room price table: Sun = $100; Mon = $200; Tue = $300; and so on.
        Bravo-room price table: Sun = $100; Mon = $200; Tue = $300; and so on.
        Client data table: ClientName; date-of-room-use; address; day_subtotal; grand_total.

    QUESTION: I'm trying to find PHP code that will: look at the date of room use in the client data table, look up the associated cost for that date in the specific room pricing table, record that unit cost in the day_subtotal of the client data table, and sum a grand total in the grand_total row of the client data table (assuming the room may be used more than one day by the customer). I know this has something to do with JOIN, but I'm finding the concept difficult to grasp, and if someone can demonstrate using this example, I think I will have a better understanding of how to work this sort of transaction. Thank you ALL in advance for your suggestions or alternative approaches.
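
    A hedged sketch with illustrative table and column names. Folding the per-room tables into one normalized price table (room type + weekday) makes the lookup a plain join:

        CREATE TABLE room_prices (
            room_type   VARCHAR(20)  NOT NULL,  -- 'alpha', 'bravo', ...
            day_of_week TINYINT      NOT NULL,  -- 1 = Sunday ... 7 = Saturday (MySQL DAYOFWEEK)
            price       DECIMAL(8,2) NOT NULL,
            PRIMARY KEY (room_type, day_of_week)
        );

        -- Per-day cost for each reservation:
        SELECT c.client_name, c.date_of_use, p.price AS day_subtotal
        FROM clients AS c
        JOIN room_prices AS p
          ON p.room_type = c.room_type
         AND p.day_of_week = DAYOFWEEK(c.date_of_use);

        -- Grand total per client across all their days:
        SELECT c.client_name, SUM(p.price) AS grand_total
        FROM clients AS c
        JOIN room_prices AS p
          ON p.room_type = c.room_type
         AND p.day_of_week = DAYOFWEEK(c.date_of_use)
        GROUP BY c.client_name;

    From PHP these are ordinary queries; the join simply matches each client row to the one price row sharing its room type and weekday.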


  • How can I access data that's stored in my App Delegate from my various view controllers?

    - by BeachRunnerJoe
    This question is similar to this other post, but I'm new to iPhone development and I'm getting used to good practices for organizing my data throughout my app. I understand the application delegate object to be the best place to manage data that is global to my app, correct? If so, how can I access data that's stored in my app delegate from various view controllers? Specifically, I have an array of table section titles for my root table view controller, created as such in appdelegate.m:

        sectionTitles = [[NSArray alloc] initWithObjects: @"Title1", @"Title2", @"Title3", nil];
        rootViewController.appDelegate = self;

    and I need to access it throughout the different views of my app, like this in rootviewcontroller.m:

        NSUInteger numSections = [self.appDelegate.sectionTitles count];

    Is this the best way to do it, or are there any reasons I should organize my data a better way? Thanks so much in advance for your help!
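
    For what it's worth, a common alternative to passing the delegate around is to fetch it from the shared application object wherever it's needed. A minimal sketch (MyAppDelegate is a stand-in for the actual delegate class name):

        // In any view controller:
        MyAppDelegate *appDelegate = (MyAppDelegate *)[[UIApplication sharedApplication] delegate];
        NSUInteger numSections = [appDelegate.sectionTitles count];

    That said, many developers reserve the app delegate for lifecycle plumbing and keep shared data in a dedicated model object instead.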


  • Why do I get access denied to the data folder when using adb?

    - by gregm
    I connected to my live device using adb and the following commands:

        C:\>adb -s HT829GZ52000 shell
        $ ls
        ls
        sqlite_stmt_journals
        cache
        sdcard
        etc
        system
        sys
        sbin
        proc
        logo.rle
        init.trout.rc
        init.rc
        init.goldfish.rc
        init
        default.prop
        data
        root
        dev
        $ cd data
        cd data
        $ ls
        ls
        opendir failed, Permission denied

    I was surprised to see access denied. How come I can't browse around the directories using the command line like this? How do I get root access on my phone?
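
    A hedged sketch of the usual options: /data is readable only by root, so on a production build the shell needs elevated rights (or a debuggable build/emulator) before it can enter it:

        # On an emulator or engineering build, restart adbd as root:
        adb root
        adb shell
        # ls /data

        # On a rooted retail device, elevate inside the shell instead:
        adb shell
        $ su
        # ls /data

    Without root, app-private data under /data simply isn't browsable; that restriction is by design.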


  • Beginner question: when extracting a large subset of a table from MySQL, how do indexing and the order of tables/conditions matter?

    - by chongman
    Sorry if this is too simple, but thanks in advance for helping. This is for MySQL but might be relevant for other RDBMSs.

    tblA has five columns: colA, colB, colC, mydata, A_id. It has about 10^9 records, with 10^3 distinct values for colA, colB, colC.
    tblB has three columns: colA, colB, B_id. It has about 10^4 records.

    I want all the records from tblA (except the A_id) that have a match in tblB. In other words, I want to use tblB to describe the subset that I want to extract and then extract those records from tblA. Namely:

        SELECT a.colA, a.colB, a.colC, a.mydata
        FROM tblA AS a
        INNER JOIN tblB AS b
            ON a.colA = b.colA
           AND a.colB = b.colB;

    It's taking a really long time (more than an hour) on a newish computer (4 GB, Core2Quad, Ubuntu), and I just want to check my understanding of the following optimization steps. Suppose this is the only query I will ever run on these tables, so ignore the need to run other queries. Now my questions:

    1. What indexes should I create to optimize this query? I think I just need a composite index on (colA, colB) for both tables. I don't think I need separate indexes for colA and colB. Another Stack Overflow article (that I can't find) mentioned that adding new indexes is slower when there are existing indexes, which might be a reason to use the composite index.
    2. Is INNER JOIN correct? I just want results where a match is found.
    3. Is it faster if I join tblA to tblB, or the other way around (tblB to tblA)? A previous answer says that the optimizer should take care of that.
    4. Does the order of the part after ON matter? A previous answer says that the optimizer also takes care of the execution order.
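
    A hedged sketch for point 1: with ~10^4 rows in tblB and ~10^9 in tblA, the composite index that matters most is the one on the big table, so each tblB row can probe tblA directly instead of forcing a scan:

        -- Composite indexes on the join keys; the tblA one is the critical one.
        CREATE INDEX idx_tblA_colA_colB ON tblA (colA, colB);
        CREATE INDEX idx_tblB_colA_colB ON tblB (colA, colB);

        -- EXPLAIN shows whether MySQL now drives from tblB and uses the tblA index:
        EXPLAIN SELECT a.colA, a.colB, a.colC, a.mydata
        FROM tblA AS a
        INNER JOIN tblB AS b
            ON a.colA = b.colA AND a.colB = b.colB;

    On points 2-4: INNER JOIN is right for "matches only", and in MySQL the optimizer normally picks both the table order and the evaluation order of the ON conditions.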


  • How to use PHP fgetcsv to create an array for each piece of data in a CSV file?

    - by Olivia
    I'm trying to import data from a CSV file into some HTML code, to use it in a graph I have already coded. I'm trying to use PHP and fgetcsv to create an array for each separate piece of data, so I can use PHP to put it into the HTML. I know how to open the CSV file, print it with PHP, and print each separate row, but not each piece of data separated by a comma. Is there a way to do this? If it helps, this is the CSV data I am trying to import:

        May 10,72,12,60
        May 11,86,24,62
        May 12,67,32,34
        May 13,87,12,75
        May 14,112,23,89
        May 17,69,21,48
        May 18,98,14,84
        May 19,115,18,97
        May 20,101,13,88
        May 21,107,32,75

    I hope that makes sense.
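
    A minimal sketch (the data.csv filename is an assumption): fgetcsv already splits each line on commas, returning one array of fields per row.

        <?php
        $rows = array();
        if (($handle = fopen('data.csv', 'r')) !== false) {
            while (($fields = fgetcsv($handle)) !== false) {
                // $fields[0] is the date label, $fields[1]..$fields[3] the three values.
                $rows[] = $fields;
            }
            fclose($handle);
        }
        // e.g. print the second value of the first row:
        echo $rows[0][1];  // 72

    From $rows, each value can be echoed into the graph's HTML wherever it's needed.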


  • For REST, how do I receive posted data using PHP?

    - by netrox
    I want to set up a tiny RESTful interface for my web services using PHP. The problem is that I've looked at frameworks and I cannot figure out how to receive the posted data without field names. For example, if a server posts data to my server, I cannot figure out how to get it without the post field (POST variables). Traditionally, with forms, people send POST data with field names, such as:

        curl_setopt($ch, CURLOPT_POSTFIELDS, 'postfield=postvalue');

    and I would use PHP code like this to get the value of postfield:

        $postvalue = $_POST['postfield'];

    But since the server posting the data is not using a post field and is just sending XML, how do I get it without fields? How do I capture the XML? That's where I am lost.
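
    A minimal sketch: PHP exposes the raw request body through the php://input stream, which is the usual way to receive a bare XML (or JSON) payload.

        <?php
        // Read the raw POST body, independent of any form fields.
        $raw = file_get_contents('php://input');

        // Parse it as XML (simplexml_load_string returns false on malformed input).
        $xml = simplexml_load_string($raw);
        if ($xml === false) {
            http_response_code(400);
            exit('Invalid XML');
        }

    http_response_code() requires PHP 5.4+; on older versions, header('HTTP/1.1 400 Bad Request') does the same job.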


  • Ruby: Why is Array.sort slow for large objects?

    - by David Waller
    A colleague needed to sort an array of ActiveRecord objects in a Rails app. He tried the obvious Array#sort! but it seemed surprisingly slow, taking 32 s for an array of 3700 objects. So, just in case it was these big fat objects slowing things down, he reimplemented the sort by sorting an array of small objects, then reordering the original array of ActiveRecord objects to match, as shown in the code below. Ta-da! The sort now takes 700 ms. That really surprised me. Does Ruby's sort method end up copying objects about the place rather than just references? He's using Ruby 1.8.6/7.

        def self.sort_events(events)
          event_sorters = Array.new(events.length) {|i| EventSorter.new(i, events[i])}
          event_sorters.sort!
          event_sorters.collect {|es| events[es.index]}
        end

        private

        # Class used by sort_events
        class EventSorter
          attr_reader :sqn
          attr_reader :time
          attr_reader :index

          def initialize(index, event)
            @index = index
            @sqn = event.sqn
            @time = event.time
          end

          def <=>(b)
            @time != b.time ? @time <=> b.time : @sqn <=> b.sqn
          end
        end
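
    Sorting copies references, not objects; the cost difference is almost certainly in the comparisons themselves. Each <=> on an ActiveRecord object goes through attribute-lookup machinery (hash access and method dispatch, which 1.8 makes expensive), and sort performs O(n log n) comparisons, so caching the values in plain instance variables makes every comparison far cheaper. A hedged sketch of the idiomatic shortcut, which computes each key once and sorts on the precomputed keys:

        # sort_by builds the [time, sqn] key once per event (a Schwartzian
        # transform), instead of re-reading the attributes on every comparison.
        sorted_events = events.sort_by { |e| [e.time, e.sqn] }

    Array comparison works element-wise, so [time, sqn] reproduces the "time first, then sqn" ordering of EventSorter#<=>.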


  • Best way to keep a large number of hobby projects alive; open sourcing?

    - by Daan van Yperen
    Because my time is limited, I can usually only focus on one or two of my hobby projects while the others sit there wasting away. I am looking for a solution that would allow me to divide my time better. Is open-sourcing a project, with me taking the role of guiding it, realistic, or are there better solutions? In my case, one project has a reasonably sized community of users going for it but is currently closed source. There have been requests to open source it.


  • How to use a data-receive event with the Socket class?

    - by affan
    I have written a simple client that uses TcpClient in .NET to communicate. In order to wait for data messages from the server, I use a read thread that makes a blocking Read() call on the socket. When I receive something, I have to generate various events. These events occur in the worker thread, and thus you cannot update a UI from them directly. Invoke() can be used, but for end developers it's difficult, as my SDK will be used by people who may not use a UI at all, or who use Presentation Framework, which has a different way of handling this. Invoke() in our test app (a MicroStation add-in) takes a lot of time at the moment: MicroStation is a single-threaded application, and calling Invoke() on its thread is not good, as it is always busy doing drawing and other stuff, so messages take too long to process.

    I want my events to be generated on the same thread as the UI, so users don't have to go through the Dispatcher or Invoke(). Now, I want to know: how can I be notified by the socket when data arrives? Is there a built-in callback for that? I'd like a WinSock-style receive event without the use of a separate read thread, and I also do not want to use a Windows timer to poll for data. I found the IOControlCode.AsyncIO flag for the IOControl() function, whose help says: "Enable notification for when data is waiting to be received. This value is equal to the Winsock 2 FIOASYNC constant." I could not find any example of how to use it to get a notification. In MFC/WinSock we had to create a window of size (0,0) which was just used for listening for the data-receive event and other socket events, but I don't know how to do that in a .NET application.
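
    A hedged sketch of one common pattern: use the socket's asynchronous receive and capture a SynchronizationContext from the thread that creates the client, so received data is posted back to that thread without the consumer ever calling Invoke() (OnDataReceived is an illustrative placeholder for raising your event):

        // Capture the creating thread's context (the UI thread, when one exists).
        private readonly SynchronizationContext context =
            SynchronizationContext.Current ?? new SynchronizationContext();
        private readonly byte[] buffer = new byte[4096];

        private void BeginRead(Socket socket)
        {
            socket.BeginReceive(buffer, 0, buffer.Length, SocketFlags.None, ar =>
            {
                int read = socket.EndReceive(ar);
                if (read > 0)
                {
                    var data = new byte[read];
                    Array.Copy(buffer, data, read);
                    context.Post(_ => OnDataReceived(data), null);  // back on the captured thread
                    BeginRead(socket);                              // keep listening
                }
            }, null);
        }

    When no synchronization context exists (e.g. a console host), the fallback context simply raises the event on the pool thread, which such hosts generally tolerate.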


  • How can I create the XML::Simple data structure using a Perl XML SAX parser?

    - by DVK
    Summary: I am looking for a fast XML parser (most likely a wrapper around some standard SAX parser) which will produce per-record data structures 100% identical to those produced by XML::Simple.

    Details: We have a large code infrastructure which depends on processing records one by one and expects each record to be a data structure in the format produced by XML::Simple, since it has always used XML::Simple since the early Jurassic era. An example simple XML is:

        <root>
          <rec><f1>v1</f1><f2>v2</f2></rec>
          <rec><f1>v1b</f1><f2>v2b</f2></rec>
          <rec><f1>v1c</f1><f2>v2c</f2></rec>
        </root>

    And example rough code is:

        sub process_record {
            my ($obj, $record_hash) = @_;
            # do_stuff
        }

        # XMLin drops the root element, so the records live under {rec}
        my $records = XML::Simple->XMLin(@args)->{rec};
        foreach my $record (@$records) { $obj->process_record($record) }

    As everyone knows, XML::Simple is, well, simple. And more importantly, it is very slow and a memory hog, being a DOM parser that needs to build/store 100% of the data in memory. So it's not the best tool for parsing an XML file consisting of a large number of small records, record by record. However, rewriting the entire code (which consists of a large number of process_record-like methods) to work with a standard SAX parser seems like a big task not worth the resources, even at the cost of living with XML::Simple.

    I'm looking for an existing module, probably based on a SAX parser (or anything fast with a small memory footprint), which can be used to produce $record hashrefs one by one, based on the XML pictured above, that can be passed to $obj->process_record($record) and be 100% identical to what XML::Simple's hashrefs would have been. I don't care much what the interface of the new module is, e.g. whether I need to call next_record() or give it a callback coderef accepting a record.
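
    A hedged sketch using XML::Twig, which parses incrementally and hands each <rec> element to a callback as it completes; its simplify() method is explicitly modeled on XML::Simple's output, though matching options (forcearray, keyattr, ...) may need tuning to hit "100% identical" for a given schema:

        use strict;
        use warnings;
        use XML::Twig;

        my $twig = XML::Twig->new(
            twig_handlers => {
                'root/rec' => sub {
                    my ($t, $rec) = @_;
                    my $record = $rec->simplify;   # hashref shaped like XML::Simple's
                    $obj->process_record($record);
                    $t->purge;                     # release finished records; memory stays flat
                },
            },
        );
        $twig->parsefile('records.xml');

    XML::SAX-based alternatives exist as well, but XML::Twig's simplify() is probably the shortest path to XML::Simple-compatible hashrefs.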


  • Autocorrelation method for pitch determination: what is the input data form?

    - by harsh
    I have read some code for pitch determination using the autocorrelation method. Can anybody please tell me what the input data (passed as the data argument to DetectPitch()) would be here?

        double DetectPitch(short* data)
        {
            int sampleRate = 2048;

            // Create sine wave
            double *buffer = malloc(1024 * sizeof(double));  /* note: the original said sizeof(short),
                                                                too small for an array of doubles */
            double amplitude = 0.25 * 32768;  // 0.25 * max value of short
            double frequency = 726.0;
            for (int n = 0; n < 1024; n++) {
                buffer[n] = (short)(amplitude * sin((2 * 3.14159265 * n * frequency) / sampleRate));
            }

            doHighPassFilter(data);

            printf("Pitch from sine wave: %f\n", detectPitchCalculation(buffer, 50.0, 1000.0, 1, 1));
            printf("Pitch from mic: %f\n", detectPitchCalculation(data, 50.0, 1000.0, 1, 1));
            return 0;
        }
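
    Judging from the short* type, the "mic" label, and the high-pass filter, data is presumably a buffer of raw 16-bit signed PCM samples (mono, at the sample rate used above). A minimal sketch of feeding it from a raw capture file (the filename and the 1024-sample length are assumptions chosen to match the sine buffer):

        #include <stdio.h>

        int main(void)
        {
            short samples[1024];
            FILE *f = fopen("mic_capture.raw", "rb");  /* raw 16-bit signed mono PCM */
            if (f != NULL) {
                if (fread(samples, sizeof(short), 1024, f) == 1024) {
                    DetectPitch(samples);  /* the function from the question above */
                }
                fclose(f);
            }
            return 0;
        }

    Whatever the source (file or audio API), the key point is that the pointer addresses a contiguous block of time-domain samples, one short per sample.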


  • [R] How to create a data.frame with an unknown number of columns?

    - by Olivier
    Hello. Inside a function, I would like to write a loop that creates a data.frame with a variable number of columns, with something like:

        a <- c("a", "b")
        b <- c(list(1, 2, 3), list(4, 5, 6))
        data.frame(a, b)

    I would like to get a data frame like:

        a 1 2 3
        b 4 5 6

    Instead, I obtain:

        a 1 2 3 4 5 6
        b 1 2 3 4 5 6

    Thank you! PS: I also tried with rbind, but it doesn't work...
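
    A minimal sketch of one way to get that shape: bind the value vectors together as rows of a matrix, convert to a data frame, and attach the names as row names (this assumes every row has the same number of values):

        a <- c("a", "b")
        b <- list(c(1, 2, 3), c(4, 5, 6))   # one vector per intended row

        df <- as.data.frame(do.call(rbind, b))
        rownames(df) <- a
        df
        #   V1 V2 V3
        # a  1  2  3
        # b  4  5  6

    do.call(rbind, b) works for any number of rows and columns, so the same code handles an unknown column count as long as the vectors agree in length.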


  • Servlet receiving data both in ISO-8859-1 and UTF-8. How to URL-decode?

    - by AJPerez
    I have a web application (well, in fact it is just a servlet) which receives data from three different sources:

    - Source A is an HTML document written in UTF-8 and sends the data via <form method="get">.
    - Source B is written in ISO-8859-1 and sends the data via <form method="get">, too.
    - Source C is written in ISO-8859-1 and sends the data via <a href="http://my-servlet-url?param=value&param2=value2&etc">.

    The servlet receives the request params and URL-decodes them using UTF-8. As you can expect, A works without problems, while B and C fail (you can't URL-decode in UTF-8 something that's encoded in ISO-8859-1...). I can make slight modifications to B and C, but I am not allowed to change them from ISO-8859-1 to UTF-8, which would solve all the problems.

    In B, I've been able to solve the problem by adding accept-charset="UTF-8" to the <form>, so the <form> sends the data in UTF-8 even with the page being ISO. What can I do to fix C? Alternatively, is there any way to determine the charset on the servlet side, so I can call URL-decode with the right encoding in each case?
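
    Two hedged sketches for C. First, at the source: since the link's query string is fixed text, its values can be pre-encoded as UTF-8 percent-escapes (bytes, hence immune to the page's charset), e.g. <a href="http://my-servlet-url?param=M%C3%BCller">. Second, a servlet-side heuristic: try a strict UTF-8 decode and fall back to ISO-8859-1 when the bytes aren't valid UTF-8 (this assumes the container decoded the parameters as ISO-8859-1, which preserves the original bytes one-to-one):

        // imports: java.nio.ByteBuffer, java.nio.charset.*
        String raw = request.getParameter("param");
        byte[] bytes = raw.getBytes("ISO-8859-1");      // recover the original bytes
        CharsetDecoder utf8 = Charset.forName("UTF-8").newDecoder();
        String value;
        try {
            value = utf8.decode(ByteBuffer.wrap(bytes)).toString();  // was valid UTF-8
        } catch (CharacterCodingException e) {
            value = raw;                                 // keep the ISO-8859-1 reading
        }

    The heuristic works because ISO-8859-1 text containing accented characters is rarely valid UTF-8, and pure-ASCII values decode identically either way.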


  • How to explicitly obtain POST data in Spring MVC?

    - by predhme
    Is there a way to obtain the POST data itself? I know Spring handles binding POST data to Java objects, but if I have two fields that I want to process manually, how do I obtain that data? Assuming I had two fields in my form:

        <input type="text" name="value1" id="value1"/>
        <input type="text" name="value2" id="value2"/>

    How would I go about retrieving those values in my controller?
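
    A minimal sketch (the /submit path and handler names are illustrative): @RequestParam binds individual fields by name, without a form-backing object.

        @Controller
        public class ManualFieldsController {

            @RequestMapping(value = "/submit", method = RequestMethod.POST)
            public String handle(@RequestParam("value1") String value1,
                                 @RequestParam("value2") String value2) {
                // process the two values manually
                return "result";
            }
        }

    For fully manual access, the handler can instead take an HttpServletRequest parameter and call request.getParameter("value1") directly.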


  • Easiest way to convert json data into objects with methods attached?

    - by John Mee
    What's the quickest and easiest way to convert my JSON, containing the data of the objects, into actual objects with methods attached? By way of example, I get data for a fruitbowl with an array of fruit objects, which in turn contain an array of seeds, thus:

        {"fruitbowl": [{
            "name": "apple",
            "color": "red",
            "seeds": []
        },{
            "name": "orange",
            "color": "orange",
            "seeds": [
                {"size": "small", "density": "hard"},
                {"size": "small", "density": "soft"}
            ]}
        ]}

    That's all nice and good, but down on the client we do stuff with this fruit, like eat it and plant trees...

        var fruitbowl = []

        function Fruit(name, color, seeds){
            this.name = name
            this.color = color
            this.seeds = seeds
            this.eat = function(){
                // munch munch
            }
        }

        function Seed(size, density){
            this.size = size
            this.density = density
            this.plant = function(){
                // grow grow
            }
        }

    My ajax success routine currently loops over the thing and constructs each object in turn, and it doesn't handle the seeds yet, because before I go looping over seed constructors I'm thinking: is there not a better way?

        success: function(data){
            fruitbowl.length = 0
            $.each(data.fruitbowl, function(i, f){
                fruitbowl.push(new Fruit(f.name, f.color, f.seeds))
            })
        }

    I haven't explored looping over the objects as they are and attaching all the methods. Would that work?
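
    A hedged sketch of the nested-constructor version: mapping the raw seed objects through the Seed constructor before handing them to Fruit keeps each constructor responsible for one level of the structure.

        success: function (data) {
            fruitbowl.length = 0;
            $.each(data.fruitbowl, function (i, f) {
                var seeds = $.map(f.seeds, function (s) {
                    return new Seed(s.size, s.density);
                });
                fruitbowl.push(new Fruit(f.name, f.color, seeds));
            });
        }

    Attaching methods onto the raw parsed objects would also work (assigning functions onto each one), but the constructor approach keeps the behavior defined in one place and the data shape explicit.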


  • Designing data structures for an address book in a C program

    - by osabri
    I want the number of address book items to be unknown in advance. I am thinking of using a linked list; is it the right choice? The assignment reads:

        "The user can enter new person data, or print the data for a given name. The query need not be a name; it can also be an address or a telephone number, and the program prints the whole information about that person. Print the content of the book in alphabetical order. Store some data in a file; retrieve it and save it after modification. The program should write a file to disk and retrieve the file from it. The program should be called with arguments."

    I will use malloc but I don't know when and how. Has somebody done a similar task, or does anyone have an idea that can help? Please.
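
    A linked list is a reasonable fit, since entries can keep being added until memory runs out. A minimal sketch (field sizes are arbitrary choices): each node holds one person, malloc() runs once per new entry, and new nodes are prepended to the head.

        #include <stdio.h>
        #include <stdlib.h>

        typedef struct Person {
            char name[64];
            char address[128];
            char phone[32];
            struct Person *next;
        } Person;

        /* Allocate and prepend a new entry; returns the new head (old head on failure). */
        Person *add_person(Person *head, const char *name,
                           const char *address, const char *phone)
        {
            Person *p = malloc(sizeof *p);
            if (p == NULL) return head;
            snprintf(p->name, sizeof p->name, "%s", name);
            snprintf(p->address, sizeof p->address, "%s", address);
            snprintf(p->phone, sizeof p->phone, "%s", phone);
            p->next = head;
            return p;
        }

    Searching by name, address, or phone is then one loop comparing the relevant field; alphabetical printing can either keep the list sorted on insert or sort an array of node pointers just before printing.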


  • How to properly load HTML data from third party website using MVC+AJAX?

    - by Dmitry
    I'm building an ASP.NET MVC 2 website that lets users store and analyze data about goods found on various online trade sites. When a user is filling in a form to create or edit an item, they should have an "Import data" button that automatically fills some fields based on data from a third-party website. The question is: what should this button do under the hood? I see at least two possible solutions.

    First: do the import on the client side using AJAX + jQuery's load method. I tried it in IE8 and received a browser warning popup about insecure script actions. Of course, that is completely unacceptable.

    Second: add a method ImportData(string url) to the ItemController class. It is called via AJAX, does the import and data processing server-side, and returns the result to the client as JSON. I tried it and received a server exception, "(503) Server Unavailable", when loading the HTML data into an XmlDocument. Also, I have a feeling that dealing with HTML that is not well-formed (missing closing tags, etc.) will be a huge pain. Any ideas how to parse such HTML documents?
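
    A hedged sketch of the server-side route (HtmlAgilityPack is one commonly used HTML-tolerant parser; the XPath and returned fields are illustrative): an XmlDocument chokes on real-world HTML, whereas an HTML parser is built for it.

        [HttpPost]
        public ActionResult ImportData(string url)
        {
            using (var client = new System.Net.WebClient())
            {
                // Some sites refuse requests without a User-Agent header.
                client.Headers["User-Agent"] = "Mozilla/5.0";
                string html = client.DownloadString(url);

                var doc = new HtmlAgilityPack.HtmlDocument();
                doc.LoadHtml(html);  // tolerant of missing closing tags

                var titleNode = doc.DocumentNode.SelectSingleNode("//title");
                return Json(new { title = titleNode == null ? "" : titleNode.InnerText });
            }
        }

    The 503 probably came from the remote site rather than from the parse step, which is another argument for logging the raw response before parsing it.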


  • Copying 6000 tables and data from SQL Server to Oracle: fastest method?

    - by nazer555
    I need to copy the tables and data (about 5 years of data, 6200 tables) stored in SQL Server. I am using DataStage with an ODBC connection to connect, and DataStage automatically creates each table with its data, but it's taking 2-3 hours per table, as the tables are very large (0.5 GB, 300+ columns, and about 400k rows). How can I achieve this faster? At this rate I can copy only 5 tables per day, but I need to move all 6000 tables within 30 days.
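
    A hedged sketch of the usual bulk-path alternative (server, table, and control-file names are illustrative): export each table with bcp on the SQL Server side and load it with direct-path SQL*Loader on the Oracle side, running several tables in parallel.

        rem Export one table to a flat file (character mode, pipe-delimited):
        bcp MyDb.dbo.MyTable out MyTable.dat -c -t"|" -S sqlserver-host -T

        rem Load it into Oracle with direct-path SQL*Loader (MyTable.ctl maps the columns):
        sqlldr userid=scott/tiger control=MyTable.ctl data=MyTable.dat direct=true

    Direct-path loads bypass much of the per-row overhead that makes generic ODBC inserts slow, and because each table is independent, the 6200 loads parallelize across as many streams as the hardware tolerates.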

