Search Results

Search found 3618 results on 145 pages for 'huge'.


  • Any way to skip the huge apt-cache update every time a repository is added?

    - by Nirmik
    I keep adding at least one repository per day! I like new stuff and like to personalise my Ubuntu. But now, every time I add a repository, the apt cache needs to be updated, and it's almost a 10-15 minute update every day. It also consumes a lot of data. Is there any way or workaround to refresh just the newly added repository in the apt cache, without refreshing everything else? Also, sometimes I don't wish to install some updates at a given point in time due to my bandwidth limitations, but if I have to add a repository, the package lists for every source get refreshed by the sudo apt-get update command. I would appreciate any help; the bandwidth limit is actually a major issue for me. Thanks :)

    Read the article

  • In a browser, is it best to use one huge spritesheet or many (10000) different PNGs?

    - by Nick
    I'm creating a game in jQuery in which I use about 10000 32x32 tiles. Until now, I have been using them all separately (no sprite sheet). An average map uses about 2000 tiles (sometimes re-used PNGs, but all separate divs), and the performance ranges from stable (Chrome) to a bit laggy (Firefox). Each of these divs is positioned absolutely using CSS. They do not need to be updated every tick, just when a new map is loaded. Would it be better for performance to use sprite-sheet methods for the divs using CSS background positioning, as gameQuery does? Thank you in advance!

    Read the article

  • HTML5 - Does it have the power to handle a large 2D game with a huge world?

    - by user15858
    I have been using XNA Game Studio, but due to private reasons (as well as the ability to publish anywhere, and my heavy interest in the Isogenic engine), I would like to switch to HTML5. However, I have very high 2D graphics demands for my game. The game itself will have an on-disk size of anywhere between 6 GB (min) and 12 GB (max) for the full game deployed offline. The individual images aren't significantly large, so streaming would be entirely possible if only the assets required were streamed as needed; the game has a massive file size because of the sheer amount of content. Some images or sprite sheets would be quite massive (e.g. a very large dragon, which if animated in a sprite sheet would be split into two 4096x4096 sheets or one 8192x8192 sheet). Most assets would be very small: about 7 MB for a full character with 15 animations in every direction (not all animations are required immediately), so only a few hundred KB need to download before the game loads. My question, however, is whether the graphical power of HTML5 is enough to animate several characters on screen at once, flipping through frames quite rapidly. All my sprites have about 25 frames per animation, 5 directions (a sprite sheet for each direction and animation), and run at 30 fps. Upon a change of direction or animation, or a new character entering, sprite sheets would change and be constantly loading/unloading. If I pack all directions into a single sheet, it would be about 2048x2048 per sheet. Most frameworks have no problem with this, but from what I have read I am afraid that HTML5's graphical capabilities will limit me. Since it takes significant time simply to animate characters in any language, I'd like a quick answer.

    Read the article

  • Best setup/workflow for a distributed team to integrate DVCS with a fragmented, huge .NET site?

    - by lazfish
    So we have a team with two developers and one manager. The dev server sits in a home office and the live server sits in a rack somewhere, handled by the larger part of my company. We have freedom to do as we please, but I want to incorporate Kiln (DVCS) and FogBugz for us, with some standard procedures to make sense of our decisions/designs/goals. Our main product is web-based training through our .NET site with many videos etc., and we also do mobile apps for multiple platforms. Our code base is a 15-year-old fragmented mess. The approach has been rogue .asp/.aspx pages, with some class management implemented in the last 6 years. We still mix our HTML/VB/JS all in the same file when we add a feature/page to the site, and we do not separate the business logic from the rest of the code. Wiring anything up in VS for IntelliSense, testing or any other benefit is more frustrating than it is worth, because of having to manually rejigger everything back into one file. How do other teams approach this? I noticed that when I did wire everything up, VS wants to make a class for all functions. Do people normally compile DLLs for page-specific functions that won't be reusable? What approaches make sense for getting our practices under control, while still being able to fix old anti-patterns and outdated code and still moving towards a logical structure for future devs to build on?

    Read the article

  • Evaluating this huge .... thing? [closed]

    - by Shark
    I'll just leave this here:

        superTiled = ((((gctUINT32) (hardware->chipMinorFeatures)) >> (0 ? 12:12) & ((gctUINT32) ((((1 ? 12:12) - (0 ? 12:12) + 1) == 32) ? ~0 : (~(~0 << ((1 ? 12:12) - (0 ? 12:12) + 1)))))) == (0x1 & ((gctUINT32) ((((1 ? 12:12) - (0 ? 12:12) + 1) == 32) ? ~0 : (~(~0 << ((1 ? 12:12) - (0 ? 12:12) + 1)))))));

    If you can break it down into steps or a single numeric value, props to you :) I can see that this is generated/unwrapped macro code, but I can't really wrap my head around it at the moment. The first person who breaks this down or simplifies it a bit gets to know which driver I found it in. Cheers :)
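    Working it through (just arithmetic on the expression above, assuming standard C integer semantics): the field is bits 12:12, so its width is (12 - 12 + 1) = 1, the "== 32" branch is never taken, and the mask is ~(~0 << 1), i.e. 1. Both sides of the == then collapse, leaving a test of bit 12 of chipMinorFeatures. A small Python sketch of the same arithmetic, with a made-up sample register value:

        # Rough breakdown of the unwrapped macro (same arithmetic as the C expression).
        chip_minor_features = 0x1234  # hypothetical register value

        width = (12 if 1 else 12) - (12 if 0 else 12) + 1   # (1 ? 12:12) - (0 ? 12:12) + 1  ->  1
        mask = ~0 if width == 32 else ~(~0 << width)         # width is 1, so mask == 0b1

        lhs = (chip_minor_features >> 12) & mask              # extract bit 12
        rhs = 0x1 & mask                                      # == 1
        super_tiled = (lhs == rhs)                            # i.e. ((chipMinorFeatures >> 12) & 1) == 1

        print(super_tiled)

    In other words, the whole thing is equivalent to superTiled = ((chipMinorFeatures >> 12) & 1) == 1.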

    Read the article

  • How to compress a huge number of PNG images?

    - by T Pops
    At work, on certain projects, I have to manage a lot of images. Most of the time PNG files work best for what I'm doing. With such a huge number of images, I've tried using PNG compression with PNG Gauntlet, but sometimes the file doesn't really change, and sometimes PNG Gauntlet reports it would have made the file size bigger! Am I just maxing out the compression, or is there something more I can do?

    Read the article

  • Apache's htcacheclean doesn't scale: How to tame a huge Apache disk_cache?

    - by flight
    We have an Apache setup with a huge disk_cache (500,000 entries, 50 GB of disk space used). The cache grows by 16 GB every day. My problem is that the cache seems to be growing nearly as fast as it's possible to remove files and directories from the cache filesystem! The cache partition is an ext3 filesystem (100 GB, "-t news") on iSCSI storage. The Apache server (which acts as a caching proxy) is a VM. The disk_cache is configured with CacheDirLevels=2 and CacheDirLength=1, and includes variants. A typical file path is "/htcache/B/x/i_iGfmmHhxJRheg8NHcQ.header.vary/A/W/oGX3MAV3q0bWl30YmA_A.header". When I try to call htcacheclean to tame the cache (non-daemon mode, "htcacheclean -t -p /htcache -l 15G"), IOwait goes through the roof for several hours, without any visible action. Only after hours does htcacheclean start to delete files from the cache partition, which takes a couple more hours. (A similar problem was brought up on the Apache mailing list in 2009, without a solution: http://www.mail-archive.com/[email protected]/msg42683.html) The high IOwait leads to problems with the stability of the web server (the bridge to the Tomcat backend server sometimes stalls). I came up with my own prune script, which removes files and directories from random subdirectories of the cache, only to find that the deletion rate of the script is just slightly higher than the cache growth rate. The script takes ~10 seconds to read a subdirectory (e.g. /htcache/B/x) and frees some 5 MB of disk space; in those 10 seconds, the cache has grown by another 2 MB. As with htcacheclean, IOwait goes up to 25% when running the prune script continuously. Any ideas? Is this a problem specific to the (rather slow) iSCSI storage? Should I choose a different filesystem for a huge disk_cache - ext2? ext4? Are there any kernel parameter optimizations for this kind of scenario? (I already tried the deadline scheduler and a smaller read_ahead_kb, without effect.)

    Read the article

  • Which technology is best suited to store and query a huge read-only graph?

    - by asmaier
    I have a huge directed graph: it consists of 1.6 million nodes and 30 million edges. I want users to be able to find all the shortest connections (including incoming and outgoing edges) between two nodes of the graph (via a web interface). At the moment I have stored the graph in a PostgreSQL database, but that solution is not very efficient or elegant; I basically need to store all the edges of the graph twice (see my question "PostgreSQL: How to optimize my database for storing and querying a huge graph"). It was suggested to me to use a graph database like Neo4j or AllegroGraph. However, the free version of AllegroGraph is limited to 50 million nodes, and it also has a very high-level API (RDF), which seems too powerful and complex for my problem. Neo4j, on the other hand, has only a very low-level API (and the Python interface is not mature yet). Both of them seem better suited to problems where nodes and edges are frequently added to or removed from a graph; for a simple search on a graph, these graph databases seem too complex. One idea I had would be to "misuse" a search engine like Lucene for the job, since I'm basically only searching for connections in a graph. Another idea would be to have a server process store the whole graph (500 MB to 1 GB) in memory. Clients could then query the server process and traverse the graph very quickly, since the graph is stored in memory. Is there an easy way to write such a server (preferably in Python) using some existing framework? Which technology would you use to store and query such a huge read-only graph?
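    On the "whole graph in a server process's memory" idea, here is a minimal sketch of that approach, assuming Python, a plain adjacency-dict representation and unweighted edges (so breadth-first search gives a shortest path); the edge-pair source, node IDs and file name are hypothetical, and it returns one shortest path rather than all of them (finding all would mean keeping every predecessor found at the same BFS depth):

        from collections import deque

        def load_graph(edge_iter):
            """Build an in-memory adjacency dict from (source, target) pairs."""
            graph = {}
            for src, dst in edge_iter:
                graph.setdefault(src, []).append(dst)
                graph.setdefault(dst, []).append(src)  # keep the reverse link so incoming edges are searchable
            return graph

        def shortest_path(graph, start, goal):
            """Breadth-first search; returns one shortest path or None."""
            if start == goal:
                return [start]
            seen = {start: None}          # node -> predecessor
            queue = deque([start])
            while queue:
                node = queue.popleft()
                for nxt in graph.get(node, ()):
                    if nxt not in seen:
                        seen[nxt] = node
                        if nxt == goal:
                            path = [goal]
                            while seen[path[-1]] is not None:
                                path.append(seen[path[-1]])
                            return path[::-1]
                        queue.append(nxt)
            return None

        # hypothetical usage:
        # graph = load_graph(read_edge_pairs("edges.tsv"))
        # print(shortest_path(graph, 42, 117))

    With 1.6 million nodes and 30 million edges this kind of dict-of-lists easily fits the 500 MB to 1 GB estimate above; exposing it over a small web framework is then an ordinary request handler around shortest_path.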

    Read the article

  • One of my Apache processes is huge - how can I find out why?

    - by Malcolm Box
    I'm running Apache 2.2.12 with mod_wsgi, hosting a Django site. Most of the Apache child processes weigh in at about 125 MB RSS, but occasionally I see one child balloon to 1 GB RSS. At that point there is usually one huge process (1 GB), a couple of large ones (500 MB), and the rest are still ~125 MB. These are the mod_wsgi daemon processes. I've tried using memory leak tracing in Python to see if it's the Django code, and I see no leaks. Looking in the logs doesn't show any particularly strange requests. I'm stumped on how to figure out what's causing this - any ideas? Also, are there any workarounds to kill the large Apache process when it gets too big, without bringing Apache down? Some more details: not using mod_php; using prefork.
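    On the "kill it when it gets too big" workaround: a crude watchdog along these lines is sometimes used, sketched here in Python against Linux's /proc (the 1 GB threshold, the apache2/httpd name matching and the choice of SIGTERM are assumptions, it needs to run as root or the Apache user, and whether the child is respawned cleanly depends on the MPM/mod_wsgi setup - so treat this as a sketch, not a recommendation):

        import os
        import signal

        RSS_LIMIT_KB = 1024 * 1024  # ~1 GB resident; adjust to taste

        def rss_kb(pid):
            """Resident set size in kB, read from /proc/<pid>/status (Linux only)."""
            try:
                with open("/proc/%d/status" % pid) as f:
                    for line in f:
                        if line.startswith("VmRSS:"):
                            return int(line.split()[1])
            except (IOError, OSError):
                pass
            return 0

        def command_name(pid):
            """Short command name, from the second field of /proc/<pid>/stat."""
            try:
                with open("/proc/%d/stat" % pid) as f:
                    return f.read().split()[1].strip("()")
            except (IOError, OSError):
                return ""

        for entry in os.listdir("/proc"):
            if entry.isdigit():
                pid = int(entry)
                if command_name(pid) in ("apache2", "httpd") and rss_kb(pid) > RSS_LIMIT_KB:
                    os.kill(pid, signal.SIGTERM)  # the parent should respawn a fresh child

    Run from cron every few minutes, this trims the runaway child without touching the parent; it does not answer the "why", which still needs per-request memory profiling.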

    Read the article

  • How do you synchronise huge sparse files (VM disk images) between machines?

    - by chrisdew
    Is there a command, such as rsync, which can synchronise huge, sparse files from one Linux server to another? It is very important that the destination file remains sparse. It may be longer (but not bigger) than the drive which contains it. Only changed blocks should be sent across the wire. I have tried rsync, but got no joy: groups.google.com/group/mailing.unix.rsync/browse_thread/thread/94f39271980513d3 If I write a programme to do this, am I just reinventing the wheel? http://www.finalcog.com/synchronise-block-devices Thanks, Chris.

    Read the article

  • tricks for speeding up tar while tarring up a huge directory of little files?

    - by Trevor Harrison
    I'm trying to tar up a directory that has about 3M tiny files in it. Tar is chugging along, but I'm thinking it's going to take longer than I can wait. I'm wondering if telling tar not to store metadata (owner, group, perms) would reduce the churn on reading and re-reading this huge directory and maybe speed things up, and whether there is a tar switch that does this. My initial perusal of the man page only turned up something like --no-xattrs, which looks like a start, but I was hoping someone had some specific knowledge.

    Read the article

  • What's the best way to view/analyse/filter huge traces/logfiles?

    - by oliver
    This seems to be a recurring issue: we receive a bug report for our software and, with it, tons of traces or log files. Since finding errors is much easier when you have a visualisation of the log messages/events over time, it is convenient to use a tool that can display the progression of events in a graph etc. (e.g. Wireshark (http://www.wireshark.org) for analysing network traffic). What tool do you use for such a purpose? The problem with most tools I have used so far is that they mercilessly break down when you feed them huge data traces (> 1 GB), so some criteria for such a tool would be: it can deal with huge input files (> 1 GB); it is really fast (so you don't have to get coffee while a file is loading); and it has some sort of filtering mechanism.

    Read the article

  • Distributed version control for HUGE projects - is it feasible?

    - by Vilx-
    We're pretty happy with SVN right now, but Joel's tutorial intrigued me. So I was wondering - would it be feasible in our situation too? The thing is, our SVN repository is HUGE. The software itself has a 15-year-old legacy and has already survived several different source control systems. There are over 68,000 revisions (changesets), the source itself takes up over 100 MB, and I can't even begin to guess how many GB the whole repository consumes. The problem then is simple: a clone of the whole repository would probably take ages to make, and would consume far more space on the drive than is remotely sane. And since the very point of distributed version control is to have as many repositories as needed, I'm starting to have doubts. How does Mercurial (or any other distributed version control system) deal with this? Or are they unusable for such huge projects?

    Read the article

  • Huge host CPU usage in idle VMware guest. Ubuntu 10.04 host, Vista SP2 guest

    - by themesandmodules
    I'm experiencing huge host CPU usage with an idle VMware guest. Host: Ubuntu 10.04 32-bit, 2.6.32-24-generic-pae (a very new install, i.e. 24 hours old). Hardware is a Dell XPS M1530 laptop, 4 GB RAM, Intel Core 2 Duo T9300 2.50 GHz. The virtualization setting ("VT" or something) is enabled in my BIOS. Guest: completely fresh install of Windows Vista, upgraded to the latest SP2 with all Windows updates installed, 1024-1512 MB RAM allocated, and absolutely no other software installed on it apart from VMware Tools. Situation: when the guest is doing absolutely nothing, I watch with a Sysinternals process-watching tool on the guest. This shows the System Idle Process between 70 and 99%, usually around 95%, with no actual process doing anything. On the host, watching with top, I get CPU usage of 20%-80%, usually around 30%. What I have tried: single and dual processors available to the guest - no change; turning off all peripherals to the guest (no network, drives, USB etc.) - no change; turning off 3D acceleration for the guest - perhaps a small improvement, or no change; upping the RAM allocated to the guest from 1024 MB to 1512 MB - no change; yelling at VMware - no change. I have experienced a similar issue in the past, which was solved by setting the guest to have one CPU. This time that hasn't worked.

    Read the article

  • How do I use the command line and wmctrl to make a window larger than the screen, to get a huge screenshot?

    - by Mnebuerquo
    I use a program which produces a large image that I have to scroll to view. The program has no way to save the image, and I have no access to the source to modify it. The only way I have to get the image from the program is by screenshot. My goal is to save the full-size image without having to piece together individual screenshots. I'm using this script to try taking a screenshot:

        #!/bin/bash
        window=$(wmctrl -l | grep "Program$" | awk '{print $1}')
        wmctrl -v -i -r $window -e '0,0,0,6030,5828'
        wmctrl -i -a $window
        import -window $window ~/Desktop/screenshot.png

    This uses wmctrl to get the window id ($window) for a window named "Program". It then tries to resize the window to the desired dimensions, and uses ImageMagick (import) to save screenshot.png on the user's Desktop. All of this works except the resize step. I can resize the window using wmctrl -r -e, but sizes greater than the screen size don't work. I'm using Ubuntu 10.04 and the GNOME desktop. I run two monitors, but I've tried this with one of them disabled. Is there a way to resize the window larger than my screen to get a huge screenshot?

    Read the article

  • What are best practices for collecting, maintaining and ensuring accuracy of a huge data set?

    - by Kyle West
    I am posing this question looking for practical advice on how to design a system. Sites like Amazon.com and Pandora have and maintain huge data sets to run their core business. For example, Amazon (and every other major e-commerce site) has millions of products for sale, images of those products, pricing, specifications, etc. Ignoring the data coming in from third-party sellers and the user-generated content, all that "stuff" had to come from somewhere and is maintained by someone. It's also incredibly detailed and accurate. How do they do it? Is there just an army of data-entry clerks, or have they devised systems to handle the grunt work? My company is in a similar situation: we maintain a huge (tens of millions of records) catalog of automotive parts and the cars they fit. We've been at it for a while now and have come up with a number of programs and processes to keep our catalog growing and accurate; however, it seems that to grow the catalog to x items we need to grow the team to y. I need to figure out some ways to increase the efficiency of the data team, and hopefully I can learn from the work of others. Any suggestions are appreciated; even better would be links to content I could spend some serious time reading. THANKS! Kyle

    Read the article

  • Best approach to write huge XML data to a file?

    - by Kayes
    Hi. I'm currently exporting a database table with a huge amount of data (100,000+ records) into an XML file using the XmlTextWriter class, and I'm writing directly to a file on the physical drive:

        _XmlTextWriterObject = new XmlTextWriter(_xmlFilePath, null);

    While my code runs OK, my question is: is this the best approach? Or should I write the whole XML to a memory stream first and then write the XML document to the physical file from the memory stream? And what are the effects on memory/performance in both cases?
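    For illustration of the trade-off only (sketched in Python rather than C#, since it is just about the shape of the approach; the element names and row source are made up): streaming each record straight to the output file keeps memory flat regardless of record count, whereas building the whole document in a memory stream first holds everything in RAM before any byte hits disk.

        from xml.sax.saxutils import XMLGenerator

        def export_rows(rows, path):
            """Stream rows straight to disk; memory use stays flat no matter how many rows."""
            with open(path, "w", encoding="utf-8") as f:
                xml = XMLGenerator(f, encoding="utf-8")
                xml.startDocument()
                xml.startElement("rows", {})
                for row in rows:                      # rows can be a lazy cursor/generator
                    xml.startElement("row", {"id": str(row["id"])})
                    xml.characters(row["name"])
                    xml.endElement("row")
                xml.endElement("rows")
                xml.endDocument()

        # hypothetical usage with a generator, so nothing is accumulated in memory:
        # export_rows(({"id": i, "name": "item %d" % i} for i in range(100000)), "out.xml")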

    Read the article

  • What is the lightest way to make a huge chess-like grid?

    - by Sotkra
    Hey there. I'm working on a browser game, and I can't help but wonder what the lightest way is to make the grid/board on which the game takes place. Right now, as a mere sample, I'll show you this: http://sotkra.com/game/ As the grid gets bigger and bigger, the table and its td's create a very heavy page, which in turn sucks up more resources from the browser engine and computer. So, is a table with td's the most lightweight way to craft a huge grid-like board, or is there something lighter that you recommend? Cheers, Sotkra

    Read the article

  • Python: how to jump to a particular line in a huge text file?

    - by photographer
    Are there any alternatives to the code below:

        startFromLine = 141978  # or whatever line I need to jump to
        urlsfile = open(filename, "rb", 0)
        linesCounter = 1
        for line in urlsfile:
            if linesCounter > startFromLine:
                DoSomethingWithThisLine(line)
            linesCounter += 1

    if I'm processing a huge text file (~15 MB) with lines of unknown but varying length, and need to jump to a particular line whose number I know in advance? I feel bad processing them one by one when I know I could ignore at least the first half of the file. I'm looking for a more elegant solution, if there is one.
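    Since the lines vary in length, there is no way to seek straight to line N without some bookkeeping; one common alternative, when the file will be read more than once, is to scan it a single time, record the byte offset of each line start, and seek() directly on later jumps. A minimal sketch (the index is just a list of integers, tiny for a ~15 MB file):

        def build_line_index(path):
            """One pass over the file, recording the byte offset where each line starts."""
            offsets = [0]
            with open(path, "rb") as f:
                for line in f:
                    offsets.append(offsets[-1] + len(line))
            return offsets

        def read_line(path, offsets, lineno):
            """Jump straight to line `lineno` (1-based) using the precomputed offsets."""
            with open(path, "rb") as f:
                f.seek(offsets[lineno - 1])
                return f.readline()

        # offsets = build_line_index(filename)
        # print(read_line(filename, offsets, 141978))

    The index pays off when you jump more than once; for a single pass, the first full read is unavoidable anyway.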

    Read the article

  • 2D Game: Fast(est) way to find x closest entities for another entity - huge amount of entities, high

    - by Pygmy
    I'm working on a 2D game that has a huge number of dynamic entities. For fun's sake, let's call them soldiers, and let's say there are 50,000 of them (which I just randomly thought up; it might be much more or much less :)). All these soldiers are moving every frame according to rules - think boids / flocking / steering behaviour. For each soldier, to update its movement I need the X soldiers that are closest to the one I'm processing. What would be the best spatial hierarchy to store them in, to facilitate calculations like this without too much overhead? (All entities are updated/moved every frame, so it has to handle dynamic entities very well.)
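    A uniform grid (spatial hash) is a common fit for fully dynamic entities, because rebuilding it every frame is cheap and a neighbour query only looks at the surrounding cells. A minimal sketch, assuming Python, a cell size on the order of the query radius, and simple (x, y) tuples for positions (if neighbours can be farther than one cell away, you widen the search ring):

        from collections import defaultdict
        from math import hypot

        CELL = 64.0  # cell size; roughly the radius you expect to query

        def build_grid(positions):
            """Rebuild each frame: maps cell coordinates -> list of entity ids."""
            grid = defaultdict(list)
            for eid, (x, y) in positions.items():
                grid[(int(x // CELL), int(y // CELL))].append(eid)
            return grid

        def closest(positions, grid, eid, count):
            """Return up to `count` nearest entity ids, checking the 3x3 block of cells."""
            x, y = positions[eid]
            cx, cy = int(x // CELL), int(y // CELL)
            candidates = []
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    for other in grid.get((cx + dx, cy + dy), ()):
                        if other != eid:
                            ox, oy = positions[other]
                            candidates.append((hypot(ox - x, oy - y), other))
            candidates.sort()
            return [other for _, other in candidates[:count]]

        # positions = {eid: (x, y), ...}   # updated every frame
        # grid = build_grid(positions)     # cheap to rebuild per frame
        # neighbours = closest(positions, grid, some_id, 8)

    The same idea ports directly to a contiguous array of cells in a lower-level language; quadtrees work too, but they cost more to rebuild when everything moves every frame.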

    Read the article

  • How do I break a huge WordPress multi-site database up into separate MySQL databases?

    - by Faith McNulty
    How do I break a huge WordPress multi-site database up into separate MySQL databases? I have 24 WordPress MU / multi-site sites, holding over 20K users in total. My server says I must break them into smaller or separate databases, but I am at a loss as to how this is accomplished. I seem to remember an option somewhere in the original install asking whether WP should use separate databases (true or false), and that it was set to false by default, but now I can't seem to find it. Please help!

    Read the article
