Search Results

Search found 65558 results on 2623 pages for 'large data'.


  • How to store and collect data for mining such information as most viewed for the last 24 hours, last 7 days, etc.

    - by Kirzilla
    Hello! Let's imagine that we have a high-traffic project (a tube site) which should provide sorting using these options (NOT IN REAL TIME). The number of videos is about 200K and all information about videos is stored in MySQL. The number of daily video views is about 1.5 million. As instruments we have the hard disk drive (text files), MySQL, and Redis. The views to sort by: top viewed, top viewed last 24 hours, top viewed last 7 days, top viewed last 30 days, top rated last 365 days. How should I store such information? The first idea is to log all visits to text files (a single file per hour, for example visits_20080101_00.log). At the beginning of each hour, calculate views per video for the previous hour and insert this information into MySQL. Then recalculate the totals (for the last 24 hours) and update the statistics tables. At the beginning of every day we have to do the same, but recalculate for the last 7 days, last 30 days, and last 365 days. This method seems very poor to me because we have to store information about the last 365 days for each video to make correct calculations. Are there any other good methods? Probably we have to choose other instruments for this? Thank you.
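
    One common pattern (a rough sketch, not necessarily the best fit here) is to keep one Redis sorted set per hourly bucket and merge the buckets of a window with ZUNIONSTORE, so only as many bucket keys as the longest window need to be kept. The snippet below assumes the Jedis client and hypothetical key names such as views:2008010100:

        import redis.clients.jedis.Jedis;

        public class ViewCounter {
            private final Jedis jedis = new Jedis("localhost");

            // Record one view in the bucket for the current hour, e.g. "views:2008010100".
            public void recordView(String videoId, String hourBucket) {
                String key = "views:" + hourBucket;
                jedis.zincrby(key, 1, videoId);
                jedis.expire(key, 366 * 24 * 3600); // keep at most a year of hourly buckets
            }

            // Merge the buckets covering a window (24 keys for a day, 168 for a week, ...)
            // into one destination set; the top of that set is the "top viewed" list.
            public void rebuildWindow(String destKey, String... hourlyKeys) {
                jedis.zunionstore(destKey, hourlyKeys);
            }
        }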

    Read the article

  • How do I ensure data consistency in this concurrent situation?

    - by MalcomTucker
    The problem is this: I have multiple competing threads (100+) that need to access one database table. Each thread will pass a String name - where that name exists in the table, the database should return the id for the row; where the name doesn't already exist, the name should be inserted and the id returned. There can only ever be one instance of name in the database - i.e. name must be unique. How do I ensure that thread one doesn't insert name1 at the same time as thread two also tries to insert name1? In other words, how do I guarantee the uniqueness of name in a concurrent environment? This also needs to be as efficient as possible - it has the potential to be a serious bottleneck. I am using MySQL and Java. Thanks
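
    One way to push the uniqueness guarantee into MySQL itself is a single upsert against a UNIQUE index, so the application code never has to coordinate threads. A rough sketch, assuming a hypothetical names table with an AUTO_INCREMENT id and a UNIQUE index on name:

        import java.sql.*;

        public final class NameDao {
            private static final String UPSERT =
                "INSERT INTO names (name) VALUES (?) " +
                "ON DUPLICATE KEY UPDATE id = LAST_INSERT_ID(id)";

            // Returns the id for name, inserting it if it does not exist yet.
            // The UNIQUE index on name makes this safe under heavy concurrency.
            public long getOrCreateId(Connection conn, String name) throws SQLException {
                try (PreparedStatement ps =
                         conn.prepareStatement(UPSERT, Statement.RETURN_GENERATED_KEYS)) {
                    ps.setString(1, name);
                    ps.executeUpdate();
                    try (ResultSet keys = ps.getGeneratedKeys()) {
                        keys.next();
                        return keys.getLong(1); // existing or newly generated id
                    }
                }
            }
        }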

    Read the article

  • Render different view depending on the type of the data

    - by Miau
    Hi there. So let's suppose we have an action in a controller that looks a bit like this: public ViewResult SomeAction(int id) { var data = _someService.GetData(id); ... //create new view model based on the data here return View(viewModel); } What I'm trying to figure out is the best way to render a different view based on the type of the data. The _someService.GetData method returns data that knows its own type (i.e. not only can you do typeof(data), you can also do data.DataType and you'll get an enum value), so I could achieve what I'm trying to do with something like this: public ViewResult SomeAction(int id) { var data = _someService.GetData(id); //I'm mapping fields to the viewModel here var viewModel = GetViewModel(data); switch(data.DataType) { case DataType.TypeOne: return View("TypeOne", viewModel); break; ... } } But this does not seem to be the nicest way (I don't even know if it would work). Is this the way to go? Should I use some sort of RenderPartial approach? After all, what will change in the view is mostly the order of the data (i.e. the rest of the view would be quite similar). Cheers

    Read the article

  • Enumerating large (20-digit) [probable] prime numbers

    - by Paul Baker
    Given A, on the order of 10^20, I'd like to quickly obtain a list of the first few prime numbers greater than A. OK, my needs aren't quite that exact - it's alright if occasionally a composite number ends up on the list. What's the fastest way to enumerate the (probable) primes greater than A? Is there a quicker way than stepping through all of the integers greater than A (other than obvious multiples of say, 2 and 3) and performing a primality test for each of them? If not, and the only method is to test each integer, what primality test should I be using?
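
    For numbers around 10^20, one simple approach (a sketch below, not claimed to be the fastest) is java.math.BigInteger: nextProbablePrime() steps to the next candidate that passes the built-in probabilistic tests, so an occasional composite is possible but extremely unlikely:

        import java.math.BigInteger;
        import java.util.ArrayList;
        import java.util.List;

        public class PrimesAbove {
            // Returns the first `count` probable primes greater than a.
            public static List<BigInteger> probablePrimesAbove(BigInteger a, int count) {
                List<BigInteger> result = new ArrayList<>();
                BigInteger p = a;
                while (result.size() < count) {
                    p = p.nextProbablePrime();   // chance of a composite is below 2^-100
                    result.add(p);
                }
                return result;
            }

            public static void main(String[] args) {
                BigInteger a = new BigInteger("100000000000000000000"); // ~10^20
                probablePrimesAbove(a, 5).forEach(System.out::println);
            }
        }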

    Read the article

  • Looking for nice Javascript/jQuery code for displaying large tables

    - by misha-moroshko
    I have an HTML table which may contain thousands of rows (the number of columns is not a problem here). I would like to be able to browse this table easily and do the following: decide how many rows are presented; jump to the next/previous X rows; scroll the table using the scroll bars to any desired line; customize/extend this JavaScript/jQuery code easily. Has anyone seen something similar? Thank you very much!

    Read the article

  • Most performant way to check how many objects are referenced by a to-many relationship in Core Data

    - by dontWatchMyProfile
    Let's say I have an employees relationship in a Company entity, and it's to-many. And they're really many - Apple in 100 years, with 1,258,500,073 employees. Could I simply do something like NSInteger numEmployees = [apple.employees count]; without firing 1,258,500,073 faults? (Well, in 100 years the iPhone will easily handle that many objects, for sure... but anyways.)

    Read the article

  • Is it compulsory to learn about data structures if you want to be a Java/C++ programmer?

    - by happysoul
    So do I really need to learn about them? Isn't there an interesting way to learn about stacks, linked lists, heaps, etc.? I found it a boring subject. While posting this question it showed some warning. Am I not allowed to post such a question? Admins please clarify and I will delete it :/ Warning: The question you're asking appears subjective and is likely to be closed.

    Read the article

  • Searching for online database software/cms

    - by ButterdBread
    I am searching for software or a CMS that manages and displays large online databases, as some kind of frontend to MySQL or any other database. It should be accessible through the browser and be as secure as possible (offering login). The data I'd like to store would be personal information such as name, address and birthday - I'd also need to be able to add custom fields. Forms and the possibility to download the data as an Excel table would be great too. phpMyAdmin is not an option; it should be similar to a CRM but more closely adapted to managing database tables, searching for entries and filtering data. It should be possible to have many user accounts with different rights, with each of them being able to access certain parts of the data and enter their own data. Is there something out there that comes close to what I imagine? I appreciate any help!

    Read the article

  • How to optimize indexing of large number of DB records using Zend_Lucene and Zend_Paginator

    - by jdichev
    So I have this cron script that is deployed and run using cron on a host, and it indexes all the records in a database table - the index is later used both for the front end of the site and for back-end operations as well. After the operation, the index is about 3-4 MB. The problem is it takes a lot of resources (CPU: 30+ and a good chunk of memory) and slows the machine down. My question is about how to optimize the operation described below: First there is a select query built using the Zend Framework API; this query is then passed to a Paginator factory that returns a paginator, which I am using to balance the current number of items being indexed and not iterate over too many items. The script iterates over the current items in the paginator object using a foreach loop until reaching the end, and then starts from the beginning after getting the items for the next page. I suspect this overhead is caused by Zend_Lucene, but I have no idea how it could be improved.

    Read the article

  • Make backup of large site with 100,000+ files/images

    - by niggles
    I tried backing up our site today using the Unix 'cp' command and ended up getting our office blocked out by PLESK - it added my IP to /etc/hosts.deny as it thought I was flooding the server. After tech support fixed the issue, they suggested I go folder by folder to back it up, but there are about 10,000 folders on the site totaling half a terabyte, each with multiple sub-folders, so this isn't viable. Basically I want to be able to mirror the domain on another domain we've got set up on the same dedicated server so I can test with live images (the bulk of our content). Any suggestions, e.g. adding some rules to open_base_dir and getting PHP to recursively copy the folders to the other domain (remember it's on the same dedicated box, so it just needs to traverse the directory, not FTP things)?

    Read the article

  • Empty R environment becomes large file when saved

    - by user1052019
    I'm getting behaviour I don't understand when saving environments. The code below demonstrates the problem. I would have expected the two files (far-too-big.RData and right-size.RData) to be the same size, and also very small, because the environments they contain are empty. In fact, far-too-big.RData ends up the same size as bigfile.RData. I get the same results using 2.14.1 and 2.15.2, both on WinXP 5.1 SP3. Can anyone explain why this is happening? Thanks.

        a <- matrix(runif(1000000, 0, 1), ncol=1000)
        save(a, file="c:/temp/bigfile.RData")
        test <- function() {
            load("c:/temp/bigfile.RData")
            test <- new.env()
            save(test, file="c:/temp/far-too-big.RData")
            test1 <- new.env(parent=globalenv())
            save(test1, file="c:/temp/right-size.RData")
        }
        test()

    Read the article

  • Servlet ArrayList and HashMap problem with result

    - by nonameplum
    Hi, I have this code:

        List<Map<String, Object>> data = new ArrayList<Map<String, Object>>();
        Map<String, Object> item = new HashMap<String, Object>();
        data.clear();
        item.clear();
        int i = 0;
        while (i < 5) {
            item.put("id", i);
            i++;
            out.println("id: " + item.get("id"));
            out.println("--------------------------");
            data.add(item);
        }
        for (i = 0; i < 5; i++) {
            out.println("print data[" + i + "]" + data.get(i));
        }

    The result of that is:

        id: 0
        --------------------------
        id: 1
        --------------------------
        id: 2
        --------------------------
        id: 3
        --------------------------
        id: 4
        --------------------------
        print data[0]{id=4}
        print data[1]{id=4}
        print data[2]{id=4}
        print data[3]{id=4}
        print data[4]{id=4}

    Why is only the last element stored?
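
    The loop above adds the same HashMap instance to the list five times, so the final put("id", 4) shows through every entry. A minimal sketch of the usual fix is to create a fresh map on each iteration:

        List<Map<String, Object>> data = new ArrayList<Map<String, Object>>();
        for (int i = 0; i < 5; i++) {
            Map<String, Object> item = new HashMap<String, Object>();  // new map per row
            item.put("id", i);
            data.add(item);
        }
        // data now prints as [{id=0}, {id=1}, {id=2}, {id=3}, {id=4}]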

    Read the article

  • How can I execute large MySQL queries fast?

    - by testkhan
    I have 4 MySQL tables and a single query with JOINs on multiple tables, and I am requesting it via jQuery AJAX, but it takes too long - about 1-3 minutes - while I want it to execute in 2-5 seconds on average. Is there any special way to execute the queries fast?

    Read the article

  • How to handle a large dataset like SproutCore does

    - by Nik
    Hello all, I really don't have any substantial code to show here; actually, that's kind of why I am writing: I looked at the SproutCore demo, especially the Collection demo, on http://demo.sproutcore.com/sample_controls/, and am amazed by its loading 200,000 records into the page so easily. I tried using Rails to provide 200,000 records in a completely blank HTML page with <% @projects.each do |p| %> <%= p.title %> <% end %>, and that freezes the browser for seconds on my m1530 laptop with 4gb ram and t7700 256gb ssd. Yet the SproutCore demo does not freeze and takes less than 3 seconds to load. What do you think the one technique they are using to enable this is? Thanks!

    Read the article

  • Large Table in iFrame crashes IE8

    - by Brian
    I have a page with an iFrame whose source is an ashx page. The handler takes in 3 arguments through the query string and generates a text/html response containing a table. When the table gets 1700 rows it crashes the IE8 browser. The browser freezes and returns a null reference error. If I take the html that is being rendered and place it inside a DIV on the page it renders fine in IE8. Any suggestions?

    Read the article

  • SQL select from a large number of IDs

    - by Claudiu
    I have a table, Foo. I run a query on Foo to get the ids from a subset of Foo. I then want to run a more complicated set of queries, but only on those IDs. Is there an efficient way to do this? The best I can think of is creating a query such as: SELECT ... --complicated stuff WHERE ... --more stuff AND id IN (1, 2, 3, 9, 413, 4324, ..., 939393) That is, I construct a huge "IN" clause. Is this efficient? Is there a more efficient way of doing this, or is the only way to JOIN with the initial query that gets the IDs? If it helps, I'm using SQLObject to connect to a PostgreSQL database, and I have access to the cursor that executed the query to get all the IDs.
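
    A common alternative to a huge IN list is to stage the ids in a temporary table and join against it, which lets the planner use an index on the id column. Sketched below with plain JDBC for concreteness (the question uses SQLObject/PostgreSQL, so treat this purely as an illustration of the idea; the table foo and its id column are taken from the question):

        import java.sql.*;
        import java.util.List;

        public class IdFilterQuery {
            // Stages the ids in a temporary table and joins against it, instead of
            // building one enormous IN (...) clause.
            public static ResultSet queryByIds(Connection conn, List<Long> ids) throws SQLException {
                try (Statement st = conn.createStatement()) {
                    st.execute("CREATE TEMP TABLE wanted_ids (id BIGINT PRIMARY KEY)");
                }
                try (PreparedStatement ins =
                         conn.prepareStatement("INSERT INTO wanted_ids (id) VALUES (?)")) {
                    for (Long id : ids) {
                        ins.setLong(1, id);
                        ins.addBatch();
                    }
                    ins.executeBatch();
                }
                // Add the rest of the "complicated stuff" to this statement's WHERE clause.
                PreparedStatement query = conn.prepareStatement(
                    "SELECT f.* FROM foo f JOIN wanted_ids w ON w.id = f.id");
                return query.executeQuery();
            }
        }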

    Read the article

  • Best way to support large dropdowns

    - by JustAProgrammer
    Say I have a report that can be restricted by specifying some value in a dropdown. This dropdown list references a table with 30,000 records. I don't think this would be feasible to populate a dropdown with! So, what is the best way to provide the user the ability to select a value given this situation? These values do not really have categories and even if I subdivided (by having some nesting dropdown situation) by the first letter of the value, that may still leave a few thousand entries. What's the best way to deal with this?
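
    A common alternative to a 30,000-entry dropdown is a type-ahead text box that asks the server for matches as the user types. A rough sketch of the server side as a Java servlet - the table name report_filter_values, its value column, and the connection details are all made up for illustration:

        import java.io.IOException;
        import java.sql.*;
        import javax.servlet.http.*;

        public class ValueSuggestServlet extends HttpServlet {
            @Override
            protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
                String prefix = req.getParameter("q");          // what the user has typed so far
                resp.setContentType("text/plain");
                try (Connection conn = DriverManager.getConnection(
                         "jdbc:mysql://localhost/reports", "user", "pass");
                     PreparedStatement ps = conn.prepareStatement(
                         "SELECT value FROM report_filter_values WHERE value LIKE ? ORDER BY value LIMIT 20")) {
                    ps.setString(1, prefix + "%");              // prefix match can use an index
                    try (ResultSet rs = ps.executeQuery()) {
                        while (rs.next()) {
                            resp.getWriter().println(rs.getString(1));
                        }
                    }
                } catch (SQLException e) {
                    throw new IOException(e);
                }
            }
        }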

    Read the article

  • Handling large numbers of sockets with .NET

    - by Dreaddan
    I'm looking at writing an application that needs to be able to handle in the region of 200 connections/sec, and was wondering if C# and .NET will handle this or if I really need to be looking at C++ to do it. It looks like SocketAsyncEventArgs may be the way to go, but I thought I'd check before I plough into it. Each transaction should only last less than a second, but each could take up to 15 seconds if that makes any difference.

    Read the article

  • Declaring two large 2d arrays gives segmentation fault.

    - by pfdevil
    Hello, I'm trying to declare and allocate memory for two 2D arrays. However, when trying to assign values to itemFeatureQ[39][16816] I get a segmentation fault. I can't understand it since I have 2GB of RAM and am only using 19MB on the heap. Here is the code:

        double** reserveMemory(int rows, int columns) {
            double **array;
            int i;
            array = (double**) malloc(rows * sizeof(double *));
            if (array == NULL) {
                fprintf(stderr, "out of memory\n");
                return NULL;
            }
            for (i = 0; i < rows; i++) {
                array[i] = (double*) malloc(columns * sizeof(double *));
                if (array == NULL) {
                    fprintf(stderr, "out of memory\n");
                    return NULL;
                }
            }
            return array;
        }

        void populateUserFeatureP(double **userFeatureP) {
            int x, y;
            for (x = 0; x < CUSTOMERS; x++) {
                for (y = 0; y < FEATURES; y++) {
                    userFeatureP[x][y] = 0;
                }
            }
        }

        void populateItemFeatureQ(double **itemFeatureQ) {
            int x, y;
            for (x = 0; x < FEATURES; x++) {
                for (y = 0; y < MOVIES; y++) {
                    printf("(%d,%d)\n", x, y);
                    itemFeatureQ[x][y] = 0;
                }
            }
        }

        int main(int argc, char *argv[]) {
            double **userFeatureP = reserveMemory(480189, 40);
            double **itemFeatureQ = reserveMemory(40, 17770);
            populateItemFeatureQ(itemFeatureQ);
            populateUserFeatureP(userFeatureP);
            return 0;
        }

    Read the article

  • Building a decision-making game in jQuery? Where would I store data....

    - by redconservatory
    I built a slideshow/decision-making game in Flash but would like to try to redo it using jQuery. The slideshow part seems simple enough, however I have a series of user decisions that I'm not sure how to approach. In flash, if the user makes a decision, I would just store this in a variable or shared local objects, is this the same for jQuery? i.e. mix regular javascript variables with the jQuery?

    Read the article

  • Sending large XML from Silverlight to SVC (WCF)

    - by alexbf
    Hi! I want to send a big XML string to a WCF SVC service from Silverlight. It looks like anything under about 50k is sent correctly, but if I try to send something over that limit, my request reaches the server (BeginRequest is called) but never reaches my SVC. I get the classic "NotFound" exception. Any idea on how to raise that limit? If I can't raise it, what are my other options? Thanks, Alex

    Read the article

  • Selecting from a Large Table SQL 2005

    - by Eugene
    I have a SQL table with more than 1,000,000 rows, and I need to select with the query you can see below: SELECT DISTINCT TOP (200) COUNT(1) AS COUNT, KEYWORD FROM QUERIES WITH(NOLOCK) WHERE KEYWORD LIKE '%Something%' GROUP BY KEYWORD ORDER BY 'COUNT' DESC Could you please tell me how I can optimize it to speed up the execution? Thank you for useful answers.

    Read the article

  • Linking to a Large address aware DLL.

    - by Canopus
    Suppose I have a DLL which is built with LARGEADDRESSAWARE linker flag set. Now I have an application dynamically linking to this DLL. Does this make my application LARGEADDRESSAWARE? If not then, does it make sense to have this flag set for any DLL?

    Read the article

  • Read a large result set in chunks from mysql

    - by ripper234
    I am trying to read a huge result set from MySQL. Reading it in a straightforward manner didn't work, as MySQL tries to return all the results together, which times out. I found the following piece of code, which tells MySQL to stream the results back one row at a time: stmt = conn.createStatement(java.sql.ResultSet.TYPE_FORWARD_ONLY, java.sql.ResultSet.CONCUR_READ_ONLY); stmt.setFetchSize(Integer.MIN_VALUE); Can I read a chunk at a time instead of one by one? I've tried setting the fetch size to a different value, but it doesn't work.
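
    With MySQL Connector/J, the streaming trick above returns rows one at a time; for real chunks, the driver's useCursorFetch connection property makes setFetchSize() an actual per-round-trip chunk size via a server-side cursor. A rough sketch, assuming a local database and a hypothetical big_table:

        import java.sql.*;

        public class ChunkedRead {
            public static void main(String[] args) throws SQLException {
                // useCursorFetch=true makes setFetchSize() a real chunk size (server-side cursor)
                String url = "jdbc:mysql://localhost/test?useCursorFetch=true";
                try (Connection conn = DriverManager.getConnection(url, "user", "pass");
                     Statement stmt = conn.createStatement(ResultSet.TYPE_FORWARD_ONLY,
                                                           ResultSet.CONCUR_READ_ONLY)) {
                    stmt.setFetchSize(1000);                    // rows fetched per round trip
                    try (ResultSet rs = stmt.executeQuery("SELECT id, payload FROM big_table")) {
                        while (rs.next()) {
                            process(rs.getLong("id"), rs.getString("payload"));
                        }
                    }
                }
            }

            private static void process(long id, String payload) {
                // handle one row at a time without holding the whole result set in memory
            }
        }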

    Read the article
