Search Results

Search found 65999 results on 2640 pages for 'large data volumes'.

Page 168/2640 | < Previous Page | 164 165 166 167 168 169 170 171 172 173 174 175  | Next Page >

  • Efficient paging with large tables in SQL 2008

    - by Kumar
    For tables with 1,000,000 rows and possibly many, many more! I haven't done any benchmarking myself, so I wanted to get the experts' opinion. I've looked at some articles on ROW_NUMBER(), but it seems to have performance implications. What are the other choices/alternatives?
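
    One commonly suggested alternative is keyset ("seek") paging: remember the last key the user saw and fetch the next page straight from the index, so the server never numbers or skips the preceding rows. A minimal sketch, assuming a hypothetical dbo.Orders table with a clustered key OrderID:

        -- Keyset paging: an index seek instead of numbering every row.
        -- @LastSeenOrderID is the highest key from the previous page.
        SELECT TOP (50) OrderID, CustomerName, OrderDate
        FROM dbo.Orders
        WHERE OrderID > @LastSeenOrderID
        ORDER BY OrderID;

    The trade-off is that this pages relative to a known position; jumping straight to an arbitrary page number still needs ROW_NUMBER() or a precomputed key list.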

    Read the article

  • Searching through large data set

    - by calccrypto
    How would I quickly search through a list of ~5 million 128-bit (or 256-bit, depending on how you look at it) strings and find the duplicates (in Python)? I can turn the strings into numbers, but I don't think that's going to help much. Since I haven't learned much information theory: is there anything about this in information theory? And since these are hashes already, there's no point in hashing them again.
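
    A minimal sketch of the usual set-based approach, assuming the list fits in memory (5 million 16-byte strings is a few hundred MB once Python object overhead is counted):

        def find_duplicates(hashes):
            """One pass over the list; set lookups are O(1) on average."""
            seen = set()
            dups = set()
            for h in hashes:
                if h in seen:
                    dups.add(h)
                else:
                    seen.add(h)
            return dups

    If memory is tight, sorting the list in place and scanning adjacent entries for equality finds the same duplicates with no extra sets, at the cost of an O(n log n) sort.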

    Read the article

  • HLSL: How can one pass data between shaders / read the existing colour value?

    - by RJFalconer
    Hello all, I have 2 HLSL ps_2_0 shaders. Simplified, they are:

    Shader 1
    - Reads a texture
    - Outputs a colour value based on this texture

    Shader 2
    - Needs to read in the existing colour (or have it passed in / read from a register)
    - Outputs the final colour, which is a function of the previous colour

    (They need to be different shaders, as I've reached the maximum vertex-shader outputs for one shader.) My problem is that I cannot work out how Shader 2 can access the existing fragment/pixel colour. Is the only way for shaders to interact really just the alpha-blending options? These aren't sufficient if I want to use the colour as input to my function.
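
    ps_2_0 cannot read the current framebuffer colour, so the usual workaround is multipass rendering: draw the first shader's output into a render target, then bind that target as a texture for the second pass. A sketch of the second pixel shader under that assumption (the sampler name and the final function are placeholders):

        sampler2D PreviousPass : register(s0);      // render target written by shader 1

        float4 main(float2 uv : TEXCOORD0) : COLOR
        {
            float4 prev = tex2D(PreviousPass, uv);  // colour from the first pass
            return prev * prev.a;                   // stand-in for your real function
        }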

    Read the article

  • MySQL - One large query vs Ajax individual queries

    - by Mark
    Hi guys, I guess no one will have a definitive answer to this, but considered predictions would be appreciated. I am in the process of developing a MySQL database for a web application, and my question is: is it more efficient to make a single query that returns a single row using AJAX, or to request 100-700 rows when the user will likely only ever use the results of two or three? Really, I am asking what is heavier for the server: 2-3 requests with one result each, or 1 request with 100-700 results? Thanks, Mark

    Read the article

  • Single Large vs Multiple Small MySQL tables for storing Options

    - by Prasad
    Hi there, I'm aware of several questions on this forum relating to this, but I'm not talking about splitting tables for the same entity (like user, for example). Suppose I have a huge options table that stores list options like Gender, Marital Status, and many more domain-specific groups with the same structure, which I plan to capture in an OPTIONS table. Another simple option is to have the field set as ENUM, but there are disadvantages to that as well: http://www.brandonsavage.net/why-you-should-replace-enum-with-something-else/

    OPTIONS table:
    - option_id <will be referred to instead of the name>
    - name
    - value
    - group

    Query: select .. from options where group = '15'

    - Since this table is expected to be multi-tenant, the number of rows could grow drastically.
    - I believe splitting the tables instead of filtering by the group would be easier to write and faster to execute.
    - Or perhaps partitioning by the group or tenant?

    Please suggest. Thanks
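
    A commonly suggested middle ground is to keep the single table and let a composite index do the "splitting", so lookups remain index seeks however many tenants accumulate. A sketch, assuming hypothetical tenant_id and group_id columns (the group column renamed, since GROUP is a reserved word in MySQL):

        CREATE INDEX idx_options_tenant_group
            ON options (tenant_id, group_id);

        -- Each lookup then touches only one tenant's slice of one group:
        SELECT option_id, name, value
        FROM options
        WHERE tenant_id = 42 AND group_id = 15;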

    Read the article

  • Use queried JSON data in a function

    - by SztupY
    I have code similar to this:

        $.ajax({
            success: function(data) {
                text = '';
                for (var i = 0; i < data.length; i++) {
                    text = text + '<a href="#" id="Data_' + i + '">' + data[i].Name + "</a><br />";
                }
                $("#SomeId").html(text);
                for (var i = 0; i < data.length; i++) {
                    $("#Data_" + i).click(function() {
                        alert(data[i]);
                        RunFunction(data[i]);
                        return false;
                    });
                }
            }
        });

    This gets an array of data in JSON format, then iterates through the array, generating a link for each entry. Now I want to add a function to each link that will do something with this data. The problem is that the data seems to be unavailable after the AJAX success function is called (although I thought the handlers would behave like closures). What is the best way to use the queried JSON data later on? (I think setting it as a global variable would do the job, but I want to avoid that, mainly because this AJAX request might be called multiple times.) Thanks.
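
    The usual diagnosis for this pattern is the classic loop-variable closure bug: every handler closes over the same i, which equals data.length by the time any click fires, so data[i] is undefined. A sketch of one common fix, giving each handler its own value via $.each (RunFunction is the question's own function):

        $.ajax({
            success: function(data) {
                var container = $("#SomeId").empty();
                $.each(data, function(i, item) {
                    // "item" is a fresh binding on every iteration, so each
                    // click handler closes over its own element of the response.
                    $('<a href="#"></a>')
                        .text(item.Name)
                        .click(function() {
                            RunFunction(item);
                            return false;
                        })
                        .appendTo(container)
                        .after('<br />');
                });
            }
        });

    Building the anchors as jQuery objects also removes the id="Data_i" round-trip, and nothing leaks into globals, so repeated AJAX calls stay independent of each other.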

    Read the article

  • WPF drawing performance with large numbers of geometries

    - by MyFaJoArCo
    Hello, I have problems with WPF drawing performance. There are a lot of small EllipseGeometry objects (1,024 ellipses, for example), which are added to three separate GeometryGroups with different foreground brushes. Afterwards, I render it all on a simple Image control. Code:

        DrawingGroup tmpDrawing = new DrawingGroup();
        GeometryGroup onGroup = new GeometryGroup();
        GeometryGroup offGroup = new GeometryGroup();
        GeometryGroup disabledGroup = new GeometryGroup();

        for (int x = 0; x < DisplayWidth; ++x)
        {
            for (int y = 0; y < DisplayHeight; ++y)
            {
                if (States[x, y] == true)
                    onGroup.Children.Add(new EllipseGeometry(new Rect((double)x * EDGE, (double)y * EDGE, EDGE, EDGE)));
                else if (States[x, y] == false)
                    offGroup.Children.Add(new EllipseGeometry(new Rect((double)x * EDGE, (double)y * EDGE, EDGE, EDGE)));
                else
                    disabledGroup.Children.Add(new EllipseGeometry(new Rect((double)x * EDGE, (double)y * EDGE, EDGE, EDGE)));
            }
        }

        tmpDrawing.Children.Add(new GeometryDrawing(OnBrush, null, onGroup));
        tmpDrawing.Children.Add(new GeometryDrawing(OffBrush, null, offGroup));
        tmpDrawing.Children.Add(new GeometryDrawing(DisabledBrush, null, disabledGroup));
        DisplayImage.Source = new DrawingImage(tmpDrawing);

    It works fine but takes too much time: 0.5 s on a Core 2 Quad and 2 s on a Pentium 4. I need <0.1 s everywhere. All ellipses, as you can see, are equal. The background of the control hosting DisplayImage is solid (black, for example), so we can use this fact. I tried using 1,024 Ellipse elements instead of an Image with EllipseGeometries, and it worked much faster (~0.5 s), but still not fast enough. How can I speed this up? Regards, Oleg Eremeev P.S. Sorry for my English.
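
    One commonly suggested optimisation, a sketch rather than a guaranteed fix: everything built here (DrawingGroup, GeometryGroup, EllipseGeometry, DrawingImage) is a Freezable, and freezing the finished drawing lets WPF skip per-object change tracking across all 1,024 geometries:

        // Freezing the root freezes all children (groups, geometries,
        // brushes) in one call; WPF then stops watching them for changes.
        tmpDrawing.Freeze();

        var image = new DrawingImage(tmpDrawing);
        image.Freeze();
        DisplayImage.Source = image;

    Reusing the previously built groups between updates, instead of reallocating ~1,024 EllipseGeometry objects on every redraw, is the other common suggestion.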

    Read the article

  • How large is a "buffer" in PostgreSQL

    - by Konrad Garus
    I am using the pg_buffercache module for finding hogs eating up my RAM cache. For example, when I run this query:

        SELECT c.relname, count(*) AS buffers
        FROM pg_buffercache b
        INNER JOIN pg_class c
            ON b.relfilenode = c.relfilenode
            AND b.reldatabase IN (0, (SELECT oid FROM pg_database WHERE datname = current_database()))
        GROUP BY c.relname
        ORDER BY 2 DESC
        LIMIT 10;

    I discover that sample_table is using 120 buffers. How much is 120 buffers in bytes?
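
    A buffer holds one disk block, which is 8 kB (8,192 bytes) unless PostgreSQL was compiled with a non-default block size, so 120 buffers is 120 * 8192 = 983,040 bytes (960 kB). The server can confirm its own value:

        SHOW block_size;   -- 8192 on a default build

        SELECT 120 * current_setting('block_size')::int AS bytes;   -- 983040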

    Read the article

  • C# System.Diagnostics.Process redirecting Standard Out for large amounts of data

    - by Matt
    I'm running an exe from a .NET app and trying to redirect standard out to a StreamReader. The problem is the volume of output: when I do myprocess.exe > out.txt, out.txt is close to 14 MB. When I run the command-line version it is very fast, but when I run the process from my C# app it is excruciatingly slow, because I believe the default StreamReader flushes every 4096 bytes. Is there a way to change the default stream reader for the Process object?
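
    There is no hook for replacing Process.StandardOutput's reader, but a commonly suggested workaround is to ignore it and wrap the underlying stream in your own StreamReader with a much larger buffer. A sketch (myprocess.exe is a placeholder):

        using System.Diagnostics;   // Process, ProcessStartInfo
        using System.IO;            // StreamReader
        using System.Text;          // Encoding

        var psi = new ProcessStartInfo("myprocess.exe")
        {
            UseShellExecute = false,
            RedirectStandardOutput = true
        };

        using (Process p = Process.Start(psi))
        // Same stream the default reader wraps, but with a 1 MB buffer
        // instead of the default 4 KB.
        using (var reader = new StreamReader(p.StandardOutput.BaseStream,
                                             Encoding.Default, false, 1 << 20))
        {
            string line;
            while ((line = reader.ReadLine()) != null)
            {
                // consume each line as it arrives
            }
            p.WaitForExit();
        }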

    Read the article

  • Compute weighted averages for large numbers

    - by Travis
    I'm trying to get the weighted average of a few numbers. Basically I have:

    Price - 134.42
    Quantity - 15236545

    There can be as few as one or two, or as many as fifty or sixty, pairs of prices and quantities. I need to figure out the weighted average of the price. Basically, the weighted average should give very little weight to pairs like

    Price - 100000000.00
    Quantity - 3

    and more to the pair above. The formula I currently have is:

    ((price)(quantity) + (price)(quantity) + ...) / totalQuantity

    So far I have this done:

        double optimalPrice = 0;
        int totalQuantity = 0;
        double rolling = 0;
        System.out.println(rolling);

        Iterator it = orders.entrySet().iterator();
        while (it.hasNext()) {
            System.out.println("inside");
            Map.Entry order = (Map.Entry) it.next();
            double price = (Double) order.getKey();
            int quantity = (Integer) order.getValue();
            System.out.println(price + " " + quantity);
            rolling += price * quantity;
            totalQuantity += quantity;
            System.out.println(rolling);
        }
        System.out.println(rolling);

        return totalQuantity / rolling;

    The problem is I very quickly max out the "rolling" variable. How can I actually get my weighted average? Thanks!
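
    Two things stand out: the return statement divides the wrong way round (the mean is rolling / totalQuantity, not the reverse), and the raw sum of price * quantity grows without bound. A sketch of the standard fix, an incremental weighted mean whose accumulator always stays in the range of the prices themselves:

        import java.util.Map;

        static double weightedAverage(Map<Double, Integer> orders) {
            double avg = 0.0;
            long totalQuantity = 0;   // long: the summed quantities alone can overflow int
            for (Map.Entry<Double, Integer> order : orders.entrySet()) {
                double price = order.getKey();
                int quantity = order.getValue();
                totalQuantity += quantity;
                // avg += (x - avg) * w / W keeps the accumulator bounded
                // by the smallest and largest price seen so far.
                avg += (price - avg) * quantity / totalQuantity;
            }
            return avg;
        }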

    Read the article

  • How do I display data in a table and allow users to copy selected data?

    - by cfouche
    Hi, I have a long list of data that I want to display in table format to users. The data changes when the user performs certain actions in my app, but it is not directly editable. So the user can create a reasonably big table of data, but he can't change individual cells' values. However, I do want the data to be copy-able: it should be possible for the user to select some or all of the cells, press Ctrl+C to copy the data to the clipboard, and then Ctrl+V to paste it into an external text editor. At the moment I'm displaying the data in a ListView with a GridView, and this works perfectly, except that GridView doesn't allow one to copy data. What other options can I try? Ours is a WPF app, coded in C#.
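
    One option to consider (the WPF Toolkit DataGrid, built into .NET 4) is a read-only DataGrid, which gives cell selection and Ctrl+C copying out of the box. A minimal sketch, with the Rows binding as a placeholder:

        <DataGrid ItemsSource="{Binding Rows}"
                  IsReadOnly="True"
                  SelectionMode="Extended"
                  SelectionUnit="CellOrRowHeader"
                  ClipboardCopyMode="IncludeHeader" />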

    Read the article

  • How to use an R-tree for plotting a large number of map markers on Google Maps

    - by Eeyore
    After searching SO and multiple articles, I haven't found a solution to my problem. What I am trying to achieve is to load 20,000 markers on Google Maps. An R-tree seems like a good approach, but it's only helpful when searching for points within the visible part of the map. When the map is zoomed out, it will return all of the points and... crash the browser. There is also the problem of dragging the map and re-running the query when the drag ends. I would like to know how I can use an R-tree and still achieve all of the above.

    Read the article

  • Best practice to structure a large HTML-based project

    - by AntonAL
    I develop a Rails-based website and enjoy using partials for some common "components". Recently, I faced a problem with CSS interference: styles for one component (described in CSS) override styles for other components. For example, one component has

        <ul class="items">

    and another component has it too, but the ul has a different meaning in the two components. On the other hand, I want to "inherit" some styles for one component from another. For example, let's have one component, called "post":

        <div class="post">
            <!-- post's stuff -->
            <ul class="items"> ... </ul>
        </div>

    and another component, called "new-post":

        <div class="new-post">
            <!-- post's stuff -->
            <ul class="items"> ... </ul>
            <!-- new-post's stuff -->
            <div class="tools">...</div>
        </div>

    post and new-post have something similar ("post's stuff"), and I want CSS rules that handle both post and new-post. new-post also has "subcomponents"; for example, its editing tools contain:

        <ul class="items">

    This is where the CSS rules start to interfere: some rules targeted at ul.items (in post and new-post) apply to the subcomponent of new-post called "tools". On the one hand, I want to inherit some styles; on the other hand, I want better encapsulation. What are the best practices to avoid this kind of problem?
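
    A sketch of one common convention: share the "post stuff" through a grouped selector, and scope each component's rules with the child combinator so they stop at nested components:

        /* Shared post stuff: matches the items list owned directly by
           either component, but not lists inside subcomponents. */
        .post > ul.items,
        .new-post > ul.items { list-style: none; }

        /* Rules for the tools subcomponent stay scoped to it. */
        .new-post .tools > ul.items { display: inline; }

    The discipline this buys is that a component's selector never reaches past the first nested component boundary, which is exactly where the interference described above starts.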

    Read the article

  • Linear Recurrence for very large n

    - by Android Decoded
    I was trying to solve this problem on SPOJ (http://www.spoj.pl/problems/REC/):

    F(n) = a * F(n-1) + b, and we have to find F(n) mod m, where 0 <= a, b, n <= 10^100 and 1 <= m <= 100000.

    I am trying to solve it with BigInteger in Java, but if I run a loop from 0 to n, I get TLE. How could I solve this problem? Can anyone give some hints? Don't post the solution. I want hints on how to solve it efficiently.
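
    A hint-sized sketch, not the solution: the bottleneck is the 0-to-n loop, not BigInteger. Powers such as a^n mod m can be computed in O(log n) multiplications with modular exponentiation, which BigInteger already provides, and the recurrence unfolds into a closed form built from exactly such powers:

        import java.math.BigInteger;

        class ModPowHint {
            public static void main(String[] args) {
                BigInteger a = new BigInteger("123456789");   // placeholder value
                BigInteger n = BigInteger.TEN.pow(100);       // n as large as 10^100
                BigInteger m = BigInteger.valueOf(100000L);
                // ~333 squarings instead of 10^100 loop iterations:
                System.out.println(a.modPow(n, m));
            }
        }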

    Read the article

  • How to use a data type (table) defined in another database in SQL2k8?

    - by Victor Rodrigues
    I have a table type defined in a database. It is used as a table-valued parameter in a stored procedure. I would like to call this procedure from another database, and in order to pass the parameter, I need to reference this defined type. But when I do DECLARE @table dbOtherDatabase.dbo.TypeName, it tells me that "The type name 'dbOtherDatabase.dbo.TypeName' contains more than the maximum number of prefixes. The maximum is 1." How can I reference this table type?
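
    The error is by design: user-defined table types cannot be referenced through multi-part names. The workaround usually reported for this is to create an identical type in the calling database and declare the variable with a one-part name. A sketch, with the column list and procedure name assumed:

        -- In the calling database (in its own batch, before the code below):
        CREATE TYPE dbo.TypeName AS TABLE (Id int, Value nvarchar(100));

        DECLARE @table dbo.TypeName;
        INSERT INTO @table VALUES (1, N'example');
        EXEC dbOtherDatabase.dbo.TheProcedure @param = @table;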

    Read the article

  • After adding data files to a file group, is there a way to distribute the data into the new files?

    - by Blootac
    I have a database with a single file group containing a single file. I've added 7 data files to this file group. Is there a way to rebalance the data over the 8 data files other than by telling SQL Server to empty the original? If this is the only way, is it possible to allow SQL Server to start writing to that file again? MSDN says that once it's empty, it's marked so that no new data will be written to it. What I'm aiming for is 8 equally balanced data files. I'm running SQL Server 2005 Standard Edition. Thanks

    Read the article

  • Flex - weird display behavior with a large number of Canvases

    - by itarato
    Hi, I have a Flex app (SDK 3.5 - FP10) that draws mindmap trees. Every node is a Canvas (I'm using Canvas-specific properties, so I needed it). Each has a drop-shadow effect, a background color, and some small UI elements on it (like icons and texts). It works perfectly until it goes over ~700 nodes (Canvases). Over that number it shows grey rectangles: http://yfrog.com/bhw2pj . If I turn off the DropShadowFilter effect for the Canvases, the grey rectangles are also gone, so I assume it's a DropShadowFilter problem: http://yfrog.com/2d9y8j . The effect is simple:

        private static var _nodeDropShadow:DropShadowFilter =
            new DropShadowFilter(1, 45, 0x888888, 1, 1, 1);
        _backgroundComp.filters = [_nodeDropShadow];

    Is it possible that Flex can't handle that much? Thanks in advance

    Read the article

  • CSC folder data access AND roaming profiles issues (Vista with Server 2003, then 2008)

    - by Alex Jones
    I'm a junior sysadmin for an IT contractor that helps small, local government agencies, like little towns and the like. One of our clients, a public library with ~50 staff users, was recently migrated from Server 2003 Standard to Server 2008 R2 Standard in a very short timeframe; our senior employee, the only network engineer, had suddenly put in his two weeks' notice, so management pushed him to do this project before quitting. A bit hasty on management's part? Perhaps. Could we do anything about that? Nope. Do I have to fix this all by myself? Pretty much.

    The network is set up like this:

    a) 50ish staff workstations, all running Vista Business SP2. All staff use MS Outlook, which uses RPC-over-HTTPS ("Outlook Anywhere") for cached Exchange access to an offsite location.

    b) One new (virtualized) Server 2008 R2 Standard instance, running atop a Server 2008 R2 host via Hyper-V. The VM is the domain's DC, and also the site's one and only file server. Let's call that VM "NEWBOX".

    c) One old physical Server 2003 Standard server, running the same roles. Let's call it "OLDBOX". It's still on the network and accessible, but it's been demoted, and its shares have been disabled. No data has been deleted.

    d) Gigabit Ethernet everywhere. The organization has only one domain, and it did not change during the migration.

    e) Most users were set up for a combo of redirected folders + offline files, but some older employees who had been with the organization a long time are still on roaming profiles.

    To sum up: the servers in question handle user accounts and files, nothing else (e.g., no TS, no mail, no IIS, etc.). I have two major problems I'm hoping you can help me with:

    1) Even though all domain users have had their redirected folders moved to the new server, and logging in to their workstations and testing confirms that the Documents/Music/Whatever folders point to the new paths, it appears some users (not laptops or anything either!) had been working offline from OLDBOX for a long time, and nobody realized it. Here's the ugly implication: a bunch of their data now lives only in their CSC folders, because they can't access the share on OLDBOX to finally sync with it. How do I get this data out of those CSC folders, and onto NEWBOX?

    2) What's the best way to migrate roaming-profile users to non-roaming ones, without losing vital data like documents, any lingering PSTs, etc.?

    Things I've thought about trying:

    For problem 1:

    a) Re-enable the documents share on OLDBOX, force an Offline Files sync for ALL domain users, then copy OLDBOX's share's data to the equivalent share on NEWBOX. Reinitialize the Offline Files cache for every user. With this: How do I safely force a domain-wide Offline Files sync? Could I lose data by re-enabling the share on OLDBOX and forcing the sync? Afterwards, how can I reinitialize the Offline Files cache for every user without doing it manually, workstation by workstation?

    b) Determine which users have unsynced changes to OLDBOX (again, how?), search each user's CSC folder domain-wide via workstation admin shares, and grab the unsynced data. Reinitialize the Offline Files cache for every user. With this: How can I detect which users have unsynced changes with a script? How can I search each user's CSC folder, when the ownership and permissions set for CSC folders are so restrictive? Again, afterwards, how can I reinitialize the Offline Files cache for every user without doing it manually, workstation by workstation?

    c) Manually visit each workstation, copy the contents of the CSC folder, and manually copy that data onto NEWBOX. Reinitialize the Offline Files cache for every user. With this: Again, how do I 'break into' the CSC folder and get to its data? As an experiment, I took one workstation's HD offsite, imaged it for safety, and then tried the following with one of our shop PCs after attaching the drive: grant myself full control of the folder (failed), grant myself ownership of the folder (failed), run chkdsk on the whole drive to make sure nothing's messed up (all OK), try to take full control of the entire drive (failed), try to take ownership of the entire drive (failed). MS KB articles and Googling around suggest there's a utility called CSCCMD that's meant for this exact scenario... but it looks like it's available for XP, not Vista, no? Again, afterwards, how can I reinitialize the Offline Files cache for every user without doing it manually, workstation by workstation?

    For problem 2:

    a) Figure out which users are on roaming profiles, and where their profiles 'live' on the server. Create new folders for them in the redirected-folders repository, migrate existing data, and disable the roaming. With this: Finding out who's roaming isn't hard. But what's the best way to disable the roaming itself? In AD Users and Computers, or on each user's workstation? Doing it centrally on the server seems more efficient; that said, all of the KB research I've done turns up articles on how to go from local to roaming, not the other way around, so I don't have good documentation on this.

    In closing: we have good backups of NEWBOX and OLDBOX, but not of the workstations themselves, so anything drastic on the client side would need imaging and testing for safety. Thanks for reading along this far! Hopefully you can help me dig us out of this mess.

    Read the article

  • Does Python work in larger teams?

    - by Kugel
    I read this post last night, and it got me thinking. I like Python and "batteries", PyPI, and such, but I've only done Python solo; I've never tried it in a team. Are the points that Ted mentions valid? If they are, how do teams cope with them? Does Python work in teams, or even large teams, or does it kill productivity? I personally see the problems he mentions when I come back to my old code. Even when working with other modules, sometimes I need to peek inside. I would like to hear from people with experience of this.

    Read the article

  • Indexing large DB's with Lucene/PHP

    - by thebluefox
    Afternoon chaps, Trying to index a 1.7-million-row table with the Zend port of Lucene. On small tests of a few thousand rows it's worked perfectly, but as soon as I try to up the rows to a few tens of thousands, it times out. Obviously I could increase the time PHP allows the script to run, but seeing as 360 seconds gets me ~10,000 rows, I'd hate to think how many seconds it'd take to do 1.7 million. I've also tried making the script index a few thousand, refresh, and then index the next few thousand, but doing this clears the index each time. Any ideas guys? Thanks :)
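
    The index being cleared on every run usually means the script calls Zend_Search_Lucene::create() each time; create() wipes the index, while open() appends to it, which makes batched, resumable indexing possible. A sketch, with the path and batch bookkeeping assumed:

        <?php
        // First run only: Zend_Search_Lucene::create('/path/to/index');
        $index = Zend_Search_Lucene::open('/path/to/index');   // append, don't wipe

        foreach ($rows as $row) {   // one batch of a few thousand rows per request
            $doc = new Zend_Search_Lucene_Document();
            $doc->addField(Zend_Search_Lucene_Field::text('title', $row['title']));
            $index->addDocument($doc);
        }
        $index->commit();

    Each invocation then indexes one slice of the 1.7 million rows and records where it stopped, staying well inside the PHP time limit.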

    Read the article

  • Is it bad practice to use an enum that maps to some seed data in a Database?

    - by skb
    I have a table in my database called "OrderItemType", which has about 5 records for the different OrderItemTypes in my system. Each OrderItem contains an OrderItemType, and this gives me referential integrity. In my middle-tier code, I also have an enum that matches the values in this table so that I can have business logic for the different types. My dev manager says he hates it when people do this, and I am not exactly sure why. Is there a better practice I should be following?
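
    A common compromise, sketched below with assumed names and values, is to pin each enum member to the table's primary key explicitly, so a reordered enum can never silently remap rows:

        public enum OrderItemType
        {
            // Values mirror the OrderItemType table's seed data -- never renumber.
            Standard = 1,
            BackOrder = 2,
            Discount = 3
        }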

    Read the article

  • Reverse massive text file in Java

    - by DanJanson
    What would be the best approach to reverse a large text file that is uploaded asynchronously to a servlet, which then reverses the file in a scalable and efficient way?

    - The text file can be massive (gigabytes long).
    - You can assume a multiple-server/clustered environment, so this can be done in a distributed manner.
    - Open-source libraries are encouraged.

    I was thinking of using Java NIO to treat the file as an array on disk (so that I don't have to treat the file as a string buffer in memory). Also, I am thinking of using MapReduce to break up the file and process it on separate machines. Any input is appreciated. Thanks. Daniel
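
    A sketch of the single-machine building block, assuming byte-level reversal (reversing line order is a variation on the same windowing): memory-map the file from the end, one window at a time, so multi-gigabyte inputs never sit in memory at once.

        import java.io.RandomAccessFile;
        import java.nio.MappedByteBuffer;
        import java.nio.channels.FileChannel;

        static void reverse(String in, String out) throws Exception {
            final int WINDOW = 64 << 20;   // 64 MB per mapped window
            try (RandomAccessFile src = new RandomAccessFile(in, "r");
                 RandomAccessFile dst = new RandomAccessFile(out, "rw")) {
                FileChannel ch = src.getChannel();
                long remaining = ch.size();
                byte[] chunk = new byte[WINDOW];
                while (remaining > 0) {
                    int len = (int) Math.min(WINDOW, remaining);
                    remaining -= len;   // next window ends where this one starts
                    MappedByteBuffer map = ch.map(FileChannel.MapMode.READ_ONLY, remaining, len);
                    map.get(chunk, 0, len);
                    for (int i = 0, j = len - 1; i < j; i++, j--) {   // reverse in place
                        byte t = chunk[i]; chunk[i] = chunk[j]; chunk[j] = t;
                    }
                    dst.write(chunk, 0, len);
                }
            }
        }

    In a MapReduce setting the same idea becomes the map step over fixed-size splits, with the reducer emitting the reversed splits in descending offset order.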

    Read the article
