Search Results

Search found 3758 results on 151 pages for 'efficient'.

  • Automatically change aspect ratio of WMC videos

    - by NeverwaY
    Is there a way to automatically change the aspect ratio flag of a video so that it displays as 16:9 in all video players, WITHOUT re-encoding? Quality loss is not a concern; I just need a quick and efficient way to convert several video files (older TV show series, for instance) to 16:9. I use a plasma TV for my media center and don't want black bars burnt into my screen when people who don't understand how to use the zoom function in WMC7 watch TV all day. Thanks.
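
    A minimal sketch of the usual no-re-encode approach, assuming ffmpeg is installed and the container supports a display-aspect flag (file names are placeholders; older ffmpeg builds spell -c copy as -vcodec copy -acodec copy):

        # Rewrite the container with a 16:9 display-aspect flag; streams are copied, not re-encoded
        ffmpeg -i input.avi -aspect 16:9 -c copy output.avi

        # For Matroska files, mkvpropedit patches the flag in place with no rewrite at all;
        # DisplayWidth/DisplayHeight only need to be in the right ratio, e.g. 16:9
        mkvpropedit input.mkv --edit track:v1 --set display-width=16 --set display-height=9

    Whether WMC honors the container-level flag rather than the codec-level aspect is worth testing on one file first.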

  • Looking for a router with multiple WAN ports and load balancing

    - by Cyrcle
    I'm going to be moving in a few months. The location I'm moving to is great, except it's on a road with very few people, so the only internet access option is DSL at 1.6 Mbps down, 384 kbps up. This is much slower than I'm used to. One option is to get at least two of the DSL lines. There's also a good possibility that I'll be able to get WiMAX or similar. I've been looking around a bit, and it seems like what I need is a load-balancing router with multiple WAN ports. Can anyone recommend some good ones? I could also go with a small, power-efficient Linux box with multiple NICs. What would be good software for that? It'd need to be able to handle around 10 Mbps. Thanks for any help.
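
    If you go the Linux-box route, a minimal sketch of per-connection load balancing with iproute2 (interface names and gateway addresses are placeholders for the two DSL modems):

        # Balance new connections across two uplinks with roughly equal weight
        ip route replace default scope global \
            nexthop via 192.168.1.1 dev eth0 weight 1 \
            nexthop via 192.168.2.1 dev eth1 weight 1

    You would still need policy routing (ip rule plus per-uplink routing tables) so that replies leave via the interface they arrived on.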

  • [C#][XNA] Draw() 20,000 32 by 32 Textures or 1 Large Texture 20,000 Times

    - by Rudi
    The title may be confusing - sorry about that, it's a poor summary. Here's my dilemma. I'm programming in C# using the .NET Framework 4, and aiming to make a tile-based game with XNA. I have one large texture (256 pixels by 4096 pixels). Remember this is a tile-based game, so the texture is this massive only because it contains many tiles, each 32 pixels by 32 pixels. The experts will know what a tile-based game is like; the orientation is orthogonal (like a chess board), not isometric. In the Game.Draw() method, I have two choices, one of which will be vastly more efficient than the other.

    Choice/Method #1, in semi-pseudocode:

        public void Draw()
        {
            // Map tiles are drawn left-to-right, top-to-bottom
            for (int x = 0; x < mapWidth; x++)
            {
                for (int y = 0; y < mapHeight; y++)
                {
                    spriteBatch.Draw(
                        MyLargeTexture,                        // one large 256 x 4096 texture
                        new Rectangle(x * 32, y * 32, 32, 32), // destination rectangle, in pixels
                        GetSourceRect(x, y),                   // hypothetical helper: 'cuts out' this tile's 32 by 32 square from the large texture
                        Color.White);                          // no tint
                }
            }
        }

    So, effectively, the first method references one large texture many, many times, each time using a small rectangle of that large texture to draw the appropriate tile image.

    Choice/Method #2, in semi-pseudocode:

        public void Draw()
        {
            // Map tiles are drawn left-to-right, top-to-bottom
            for (int x = 0; x < mapWidth; x++)
            {
                for (int y = 0; y < mapHeight; y++)
                {
                    // a small 32 by 32 texture, different on each iteration of the loop
                    Texture2D tileTexture = map.GetTileTexture(x, y);
                    spriteBatch.Draw(
                        tileTexture,
                        new Rectangle(x * 32, y * 32, 32, 32),                      // destination rectangle, in pixels
                        new Rectangle(0, 0, tileTexture.Width, tileTexture.Height), // source is the entire texture, because the entire texture IS 32 by 32
                        Color.White);                                               // no tint
                }
            }
        }

    So, effectively, the second method draws many small textures many times.

    The question: which method, and why? Personally, I would think the first method is vastly more efficient. Consider what that means for the tile array in a map (say a large map of 2000 by 2000 tiles): with method #1, each Tile object only has to contain two integers, for the X and Y position of its source rectangle in the one large texture - 8 bytes. With method #2, however, each Tile object in the map's tile array has to store a 32 by 32 texture - an image - which has to allocate memory for its RGBA pixels, 32 by 32 times - is that 4096 bytes per tile, then? So, which method, and why? First priority is speed, then memory load, then efficiency, or whatever you experts believe.

  • Rsync-like Windows backup tool

    - by Halfgaar
    I need to back up some Windows machines and have been unable to find the proper tool. What I need is a tool that does efficient copying of changed files to a Windows network location, like rsync does. In turn, the server will then back that up using rdiff-backup, a tool which does very clever incremental backups. Right now I'm using Windows 7's included backup feature, but I really don't get on with it. The details are too off-topic to go into, but it doesn't suffice (and seems buggy as well). I looked into Amanda, but as soon as it wanted to install MySQL, I aborted. I also tried DeltaCopy, but unfortunately I don't remember what the problem with that was... Any advice for an rsync-like tool that just does daily syncs to a network location?
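
    For what it's worth, the built-in robocopy covers part of this; a minimal sketch, assuming a Windows share as the target (paths are placeholders):

        rem Mirror changed files only to the network location; /FFT tolerates
        rem the 2-second timestamp granularity of some network filesystems
        robocopy C:\Data \\backupserver\backups\machine1 /MIR /FFT /R:2 /W:5 /LOG:C:\backup.log

    robocopy copies only files whose size or timestamp changed, which is close to rsync's default behavior, though it transfers whole files rather than deltas.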

  • nginx hashing on GET parameter

    - by Sparsh Gupta
    I have two Varnish servers and I plan to add more. I'm using an nginx load balancer to divide traffic between these Varnish servers. To utilize the maximum RAM of each Varnish server, I need the same request to always reach the same Varnish server. A request can be identified by one GET parameter in the request URL, say 'a'. In normal code, to divide all traffic between two Varnish servers, I would do something like:

        if ($arg_a % 2 == 0) { proxy_pass varnish1; }
        if ($arg_a % 2 == 1) { proxy_pass varnish2; }

    This does an even/odd check on the GET parameter a and then decides which upstream pool to send the request to. My questions are:

    1. What is the nginx equivalent of such code? I don't know whether nginx supports the modulo operator.
    2. Is there a better / more efficient hashing function built into nginx (0.8.54) that I could use? In the future I want to add more upstream pools, so I'd rather not have to change %2 to %3, %4, and so on.
    3. Is there any other way to solve this problem?
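
    For the record, newer nginx versions make this trivial; a minimal sketch assuming nginx 1.7.2 or later, which added the upstream hash directive (server names and ports are placeholders):

        upstream varnish_pool {
            hash $arg_a consistent;   # the same ?a= value always maps to the same backend
            server varnish1.example.com:6081;
            server varnish2.example.com:6081;
        }
        server {
            location / {
                proxy_pass http://varnish_pool;
            }
        }

    With the consistent (ketama) option, adding a third server remaps only about a third of the keys instead of nearly all of them, so no %2 / %3 bookkeeping is needed.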

  • Best way to record local modifications to an application's configuration files

    - by Menelaos Perdikeas
    I often install applications on Linux that don't come in package form; one just downloads a tarball, unpacks it, and runs the app out of the exploded folder. To adjust the application to my environment I need to modify the default configuration files, and perhaps add the odd script of my own, and I would like a way to record all these modifications automatically so I can apply them to another environment. Clearly the modifications cannot be reproduced verbatim, since things like IP addresses or usernames need to change from system to system; still, an exhaustive record of what was changed and added would be useful. My solution is a pattern involving git: after I explode the tarball, I do a git init and an initial commit, and later I can save to a file the output of git diff, plus a cat of all files appearing as new in git status -s. But I am sure there are more efficient ways.
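
    A minimal sketch of the pattern described, assuming git is available (paths and names are placeholders):

        cd /opt/someapp
        git init && git add -A && git commit -m "pristine tarball"
        # ... edit configs, add your own scripts ...
        git diff > /tmp/someapp-changes.patch                        # modifications to shipped files
        git ls-files --others | xargs tar czf /tmp/someapp-added.tgz # files added on top of the tarball

    The patch captures edits to the vendor's files, and git ls-files --others lists everything untracked since the pristine commit, so together they are the exhaustive record.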

  • What are the Windows G: through Z: drives used for?

    - by Tom Wijsman
    In Windows you have a C: drive. Everything labeled beyond that seems to be extra: my DVD drive is D:, and if you put in a USB stick it becomes F:. Some people also have A: and B:. But then, what and where are the G: through Z: drives for? Is it possible to connect enough things to a computer to put them all in use, or even more than that? Would it cause a BSOD, or slow down the system somehow, or what would happen? What if I want to connect even more drives to the computer? Because with hard drive size limits, it's more efficient to buy several drives than a single drive with a lot of capacity. Is it possible to create drive letters like 0: through Z:, or AA: through ZZ:?
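
    As an aside, letters are just labels that can be assigned freely, and volumes don't need one at all; a minimal sketch (paths and the volume GUID are placeholders):

        rem Map a folder to a spare drive letter
        subst Q: C:\Projects\Build

        rem Mount a volume into an NTFS folder instead of giving it a letter,
        rem which is how you get past the 26-letter limit
        mountvol C:\Mounts\Disk2 \\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\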

  • Moving toolbars in Acrobat X Pro

    - by user67275
    I'm using Acrobat X Pro for Windows. I find that two rows of toolbars take up too much space, so I would like to combine them into one row, or move them to the left or right side. I do know that I can hide the toolbars with Ctrl+H, but I want to use them without hiding them. Since most documents I use are letter-sized and portrait, toolbars on the left or right side would be a much more efficient use of space. Does anybody know how I can do this? (I think I could do it in Acrobat 9, but I can't in the X version.)

  • cat contents of one file into another file

    - by Attila O.
    I have a large (binary) file that has some corruption near the beginning. I also have a second, smaller file that I obtained by starting to download the same file again, interrupting once I had enough bytes to fix the original. My question is: how do I simply overwrite the beginning of the large file with the contents of the second, smaller file? I could use cat, tail, and head, but that would create a copy of the file. There must be a more efficient way. Oh yes, and I'm looking for a Linux command-line solution, if that wasn't obvious. I'm using bash, but I have other shells if that helps.
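
    A minimal sketch of the usual in-place fix with dd, assuming the good bytes start at offset 0 of both files (file names are placeholders):

        # conv=notrunc overwrites the first bytes of bigfile.bin in place
        # without truncating the rest of it
        dd if=goodhead.bin of=bigfile.bin conv=notrunc

    dd stops when goodhead.bin is exhausted, so only the corrupted head gets rewritten and no copy is made.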

  • How can I get the size of an Amazon S3 bucket?

    - by Garret Heaton
    I'd like to graph the size (in bytes, and number of items) of an Amazon S3 bucket and am looking for an efficient way to get the data. The s3cmd tools provide a way to get the total file size using s3cmd du s3://bucket_name, but I'm worried about its ability to scale, since it looks like it fetches data about every file and calculates its own sum. Since Amazon charges users in GB-months, it seems odd that they don't expose this value directly. Although Amazon's REST API returns the number of items in a bucket, s3cmd doesn't seem to expose it. I could do s3cmd ls -r s3://bucket_name | wc -l, but that seems like a hack. The Ruby AWS::S3 library looked promising, but only provides the number of bucket items, not the total bucket size. Is anyone aware of any other command-line tools or libraries (Perl, PHP, Python, or Ruby preferred) which provide ways of getting this data?
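
    A minimal sketch that gets both numbers in one listing pass, assuming s3cmd ls prints the object size in the third column (true of the versions I've seen, but worth verifying):

        s3cmd ls -r s3://bucket_name \
            | awk '{bytes += $3; objects++} END {printf "%d objects, %d bytes\n", objects, bytes}'

    It still enumerates every object, so it scales no better than s3cmd du, but it yields bytes and item count together instead of requiring two passes.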

  • How to make qmail feed email into a shell command?

    - by Nopik
    Hi, I have a server running qmail, with many users/domains configured via Plesk. I would like emails sent from a specified address to one of my users to be fed to a shell command, in addition to normal delivery. So basically I have a shell script, and I'd like qmail to invoke my script with the email and then continue with delivery as usual. If necessary I can do recipient/sender filtering on the script side, though it would be more efficient and cleaner if qmail fed only the matching emails to my script. Anyway, how do I accomplish that?
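
    A minimal sketch of the standard qmail mechanism, the dot-qmail file; the exact path is an assumption, since Plesk keeps these under its own mailnames tree and may manage them itself:

        # ~user/.qmail (under Plesk, typically /var/qmail/mailnames/<domain>/<user>/.qmail)
        |/usr/local/bin/myscript.sh
        ./Maildir/

    Each line is an independent delivery instruction: the pipe line feeds the full message to the script on stdin, and the Maildir line keeps normal delivery. The script's exit code matters to qmail (0 = success, 99 = stop further deliveries, 100 = permanent failure).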

  • Justification of Amazon EC2 Performance

    - by Adroidist
    I have a .jar file that implements a server which receives an image (of size at most 500 KB) in bytes over TCP and writes it to a file. It then runs a Sobel filter on the image and sends the result over a TCP socket to the client side. I ran it on my laptop and it was very fast. But when I put it on an Amazon EC2 m1.large instance, I found it is very slow - around 10 times slower. It might be inefficiency in the code's algorithm, but in fact my code does nothing but receive the image (like any byte file), run the Sobel algorithm, and send it. I have the following questions:

    1. Is this normal performance for an Amazon EC2 server? I have read the following links: link1 and link2.
    2. Even if the code is not that efficient, the server is ultimately handling a very low load (just one client); does "inefficient" code justify such performance?
    3. My laptop is only dual core... Why would the Amazon EC2 server perform worse than my laptop? How is this explained?

    Excuse me for my ignorance.

  • Why does Exim put emails on hold if there are frozen messages in the queue?

    - by user51932
    I have a CentOS server with cPanel working as an SMTP server, which currently uses 20 different hostnames and IP addresses to deliver email for an email newsletter service. However, it's extremely slow in sending emails - around 10 emails per minute, which I check by running the exim -bpc command. What could be affecting this? One thing I suspect is that there are frozen messages in the queue, which are slowing down sending until they're dealt with and are putting new messages on hold. What are the most common reasons a message can get frozen? Also, would it be more efficient to use 20 different small VPSs to send out email, rather than one large VPS with the 20 different hostnames and IPs on it?
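
    A minimal sketch for inspecting and clearing frozen mail, assuming the exiqgrep helper that ships with Exim is installed:

        exim -bp | grep -c frozen          # how many queued messages are frozen
        exiqgrep -z -i | xargs exim -Mrm   # remove all frozen messages by ID

    Checking the count before and after a slow sending period should show whether frozen mail is actually what's throttling the queue.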

  • How to Detect that Current (Bash) Shell is a (Vi/Vim) Subshell?

    - by Jeet
    From inside Vi/Vim, I can type :shell to drop into a shell. Is there any way to detect that I am in a Vi-spawned subshell? The environment variable SHLVL is 2, but that does not tell me explicitly that I am in a Vi/Vim-spawned subshell. On OS X, the following variables are also set: MYVIMRC, VIMRUNTIME, VIM. How universal are these? Can I count on them being set on any system, if and only if I am in a Vi/Vim subshell? If not, is there any portable, robust, and hopefully efficient way to tell that I am in a Vi/Vim subshell? Thanks.
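
    A minimal sketch of the usual heuristic, relying on the variables mentioned above (Vim appears to export VIMRUNTIME and VIM into :shell children everywhere, but that is an observation, not a documented guarantee):

        if [ -n "$VIMRUNTIME" ]; then
            echo "probably inside a Vim :shell"
        fi

    Classic vi (not Vim) exports none of these, so there is no fully portable test covering both editors.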

  • How can I copy files to an external drive and verify their integrity in OS X?

    - by jedavis
    I'm moving large amounts of data from one external drive to another, larger one. The files are important and the smaller drives need to be cleared and reused (HD camera). Is there some utility for moving files and verifying their integrity? I've been using this command in the terminal to create a list of MD5s for each file, then using diff to compare the two lists:

        find . -type f -exec md5 '{}' \; > md5list.txt

    However, I am moving 320 GB at a time, which takes a while by itself, and computing the checksums takes another hour or so. It would be much more efficient to do this on the fly, during the copy. I'm just hoping someone has already written the software...
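
    A minimal sketch of doing both steps with one tool, assuming rsync (which ships with OS X; paths are placeholders):

        rsync -a  /Volumes/CameraDrive/ /Volumes/BigDrive/camera-backup/  # copy
        rsync -anc /Volumes/CameraDrive/ /Volumes/BigDrive/camera-backup/ # verify: -c compares checksums, -n lists mismatches without copying

    An empty second run means every file matched by checksum. It still re-reads both sides, so it isn't on-the-fly verification, but it replaces the manual md5-list bookkeeping.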

  • Apache mod_wsgi elegant clustering method

    - by Dr I
    I'm currently trying to build a scalable infrastructure for my Python web servers; specifically, I'm trying to find the most elegant way to build a scalable cluster to host all my Python web services. For now, I'm using these servers:

        1 x Puppet master to deploy my servers
        2 x Apache reverse-proxy front-end servers
        1 x Apache httpd server which hosts the Python WSGI applications, bound via mod_wsgi
        4 x MongoDB clustered servers

    Everything is fine with the reverse proxies and the DB backend - I can easily add a new reverse proxy or a new DB node - but my problem is the Python web server. I thought of just provisioning a new node with exactly the same configuration and an rsync replication between the two nodes, but that's not really useful in terms of deployment for my developers, etc. So if you have a solution as efficient and elegant as a Tomcat cluster, I'll be really happy to hear it ;-)

  • Got root, now how should I configure my server?

    - by L. De Leo
    I've been a developer for years, and by trade I've had to know a little server-side configuration. But now I find myself needing to manage my own VPS instance (Amazon EC2) and I'm lost. I'd like to know the common ways to configure an Apache and MySQL server so that it is secure and efficient. For example, right now I'm doing everything as root, but I doubt that's the best way at all. My whole Apache is configured to serve one site, when I'd like it to be able to serve multiple sites. Where do I start?
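
    For the multiple-sites part, a minimal sketch of Apache name-based virtual hosts (domains and paths are placeholders; the NameVirtualHost line is needed on Apache 2.2 and was removed in 2.4):

        NameVirtualHost *:80

        <VirtualHost *:80>
            ServerName site1.example.com
            DocumentRoot /var/www/site1
        </VirtualHost>

        <VirtualHost *:80>
            ServerName site2.example.com
            DocumentRoot /var/www/site2
        </VirtualHost>

    The root-user and hardening questions are broader, but name-based vhosts are the standard mechanism for serving many sites from one Apache.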

  • nginx not returning 304 on cached content

    - by Don H
    I'm using nginx as a reverse proxy with an Apache back-end handling some PHP files. The files return the right expiry headers and proxy_cache does a good job of caching them, but I've noticed that the cached content returns a 200 on every refresh, when it might be more efficient to return a 304 for cached files. The files in question are generated by PHP; the URLs do not have .php in them, as they've been prettified. Any idea why nginx might not return 304 on repeated visits to cached PHP output? To clarify: it's using proxy_cache to cache dynamic PHP pages (not static HTML pages generated by PHP). I'm setting Expires headers in the PHP file to now + 24 hours. With that in mind, I was hoping nginx would return 304s on its cached versions during that 24-hour window.
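
    A quick way to see what nginx is actually negotiating, assuming curl is available (URL and date are placeholders):

        # First request: note whether Last-Modified or ETag headers are present at all
        curl -sI http://example.com/page | grep -iE 'last-modified|etag'

        # Replay with If-Modified-Since and watch the status code
        curl -s -o /dev/null -w "%{http_code}\n" \
            -H "If-Modified-Since: Tue, 01 May 2012 00:00:00 GMT" \
            http://example.com/page

    If the PHP backend never emits Last-Modified or ETag, there is nothing for nginx's conditional-request handling to compare against, and a 200 on every request is the expected outcome; Expires alone is not enough.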

  • How can I upload many files to Rackspace Cloud Files quickly?

    - by andy kim
    I have a lot of image files - about a million in a single directory - that I want to upload to Rackspace Cloud Files in the fastest and most efficient way. I'm uploading with a python-cloudfiles script, but it is very slow, and I want to know about different approaches or Python code. Uploading one file per connection is very slow. I think uploading one tar file and having it uncompressed into the directory would be a better way, but Cloud Files does not support that. Does anyone know another way?
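
    One approach that helps regardless of the API: run your existing uploader concurrently instead of serially. A minimal sketch, assuming GNU xargs and that your script can take a batch of file names as arguments (the script name is a placeholder for your own uploader):

        # 8 concurrent uploader processes, 100 files per invocation
        find images/ -type f -print0 | xargs -0 -n 100 -P 8 ./upload_batch.py

    Since Cloud Files has no server-side untar, per-object uploads are unavoidable, so concurrency is where the speedup lives.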

  • First-class help desk solution? [closed]

    - by Andy Gregory
    A help desk, for larger companies, is a centralized place within an enterprise to help the users of its products and services. To find the solution that best fits your business requirements, it is important to research, examine, and compare help desk software. As far as I know, Hesk is free helpdesk software with somewhat limited features, while h2desk can provide a hosted solution. For my small business, I just need web-based help desk software that provides ticket management and a knowledge base / FAQ. We need unlimited staff support, and freeware help desk software may not meet our needs, so we are willing to pay for an effective helpdesk solution - but it should be low-cost. We have two candidates so far: iKode Helpdesk and NetHelpDesk. Our helpdesk team leans toward iKode Helpdesk. Any other efficient, first-class help desk solutions to share?

  • APC UPS-5000 Power off remote servers

    - by Vishal
    Hi there, I have a UPS connected via the serial port to a server running PowerChute Business Edition. If a power outage occurs, I would like this server to start shutting down all the other servers on the network. Is there dedicated software to do this? I was thinking of creating a command file which runs a .bat file that sends shutdown commands to each server (using PsExec); I can set PowerChute to run this command file when a power failure occurs. Does APC provide anything with this functionality that would be more robust than a hand-written .bat file? Thanks.
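
    A minimal sketch of the .bat approach described, assuming PsExec is in the PATH and the account has shutdown rights on the targets (server names and credentials are placeholders):

        rem shutdown-all.bat -- invoked by PowerChute on power failure
        psexec \\server1 -u DOMAIN\admin -p secret shutdown /s /t 60 /c "UPS on battery"
        psexec \\server2 -u DOMAIN\admin -p secret shutdown /s /t 60 /c "UPS on battery"

    APC's own answer to this scenario is PowerChute Network Shutdown, but that expects a network management card in the UPS rather than the serial link, so the script route is the usual fallback.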

  • Efficiently installing fully-patched Windows XP, IE, and Office 2007 on an isolated PC

    - by JPaget
    I have been tasked to install Windows XP, IE, and Office 2007 on a computer that will become part of a standalone network not connected to the Internet. What is a good way to install all of the security updates? I'm installing from CDs of Windows XP SP2 and MS Office 2007. Next I plan to download Windows XP SP3 and Office 2007 SP2, burn them to CDs, and install both service packs. Finally I plan to go to the Microsoft Download Center, download all applicable security updates, burn them to CD, and install them; I estimate that there are over 100 of these updates. Is there a more efficient way to do this?

  • Backup software for Ubuntu - which one?

    - by Industrial
    Hi everybody, I have spent some time over the last weeks testing different backup solutions for my small home office, but haven't found anything that works out well yet. We can definitely live with a non-GUI script, as long as the requirements are fulfilled:

        Upload to Amazon S3 Europe. We get unbelievably slow upload speeds to the US, so uploading 400+ GB of data will not be happening anytime this year...
        Incremental backups - only changed files should be uploaded, or we will have a big bill from Amazon at the end of each month...
        Files should not be uploaded as one big per-folder archive. That is not efficient at all: if we change one file in a subfolder, a huge two-digit-GB file would have to be re-uploaded during the next backup. Bad for economy again, and for the traffic overhead on our internet connection.

    What options are available to us? Thanks!
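
    One tool worth testing against all three requirements is duplicity; a minimal sketch, assuming an S3 bucket created in the EU region (bucket name and paths are placeholders):

        # Incremental, volume-based backup to an EU S3 bucket
        duplicity --s3-use-new-style --s3-european-buckets \
            /home/office s3+http://my-backup-bucket/office

    duplicity does incremental backups (only changed data is uploaded, via rsync-style deltas) and splits uploads into fixed-size volumes rather than one giant per-folder archive, which matches the third requirement.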

  • Textwrangler (OS X) -- Simple Text Macro Help Needed

    - by bobber205
    I often need, when parsing log/error files, to replace &lt; with < and &gt; with > in order to be able to efficiently understand what's going on in the files. I know TextWrangler has a macro ability, but I can't figure out an efficient way to do this. Since I have to do it so often, I'd love to have a simple key binding or menu item that does this find-and-replace-all for me. Anyone know how to do this? ^_^
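
    If a one-shot command is acceptable, a minimal sketch with the BSD sed that ships on OS X (the file name is a placeholder):

        # -i '' edits in place with no backup file (BSD sed syntax)
        sed -i '' -e 's/&lt;/</g' -e 's/&gt;/>/g' error.log

    For a TextWrangler-native route, a shell script like this dropped into its Text Filters folder shows up under the Text menu, which gets you the one-click menu item - at least, that's how its Unix-filter support is meant to work.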

  • Is there a gotcha with DBCC CHECKDB ('DBNAME', NOINDEX)?

    - by Deb Anderson
    I am turning on DBCC CHECKDB in our OLTP environment (SQL Server 2005/2008). System overhead is a very visible thing on our servers, so I want the checks to be only as expensive as they need to be. Hence, I want to turn on the NOINDEX option, an option I've never used before. My thinking is this: if a problem with an index is detected outside the integrity check, I can just rebuild the index. Also, the duration of the integrity checks will be drastically reduced, and the nastier corruption will still be detected. What is the flaw in my plan? Thanks, Deb
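
    For comparison, a minimal sketch of the two common ways to cut CHECKDB cost on OLTP boxes (the database name is a placeholder):

        -- Skip checks of nonclustered indexes only
        DBCC CHECKDB ('MyDatabase', NOINDEX);

        -- Or: allocation and physical consistency plus torn-page/checksum
        -- detection, cheaper still, but skips all logical checks
        DBCC CHECKDB ('MyDatabase') WITH PHYSICAL_ONLY;

    PHYSICAL_ONLY is the option usually weighed against NOINDEX when the goal is reducing runtime on production systems.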
