Search Results

Search found 10417 results on 417 pages for 'large'.

Page 122/417

  • Looking for a tagging media library application for Windows

    - by E3 Group
    I'm looking for a program that can:

    1) index specific folders and capture video, music and picture files;
    2) allow me to assign tags or categories to these files;
    3) allow me to search by tags or filenames.

    I have a large collection of movies, music, etc. that I want to categorise and tag with multiple tags. I haven't yet been able to find any application that will do this for me.

  • Determine the percentage of a file that has been ftp'd from client to server

    - by klwillie
    I want to FTP a large file from a Windows client to a Windows server, using their IP addresses, on a standalone network with no internet access. While the file is transferring, I would like to determine how many bytes have been received by the server, and then use that information to compute, in real time, the percentage of the file that has been transferred. Any recommendations as to the ftp command syntax and C# code to achieve this?
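    A minimal sketch of the client side, assuming .NET's FtpWebRequest (the host address, credentials and file path below are placeholders, not taken from the question). One hedge: this counts bytes written to the request stream, which approximates but does not confirm bytes received by the server; an exact server-side figure would need something like polling the remote file size.

        // C# sketch: upload a file over FTP and print the percentage of
        // bytes written to the request stream so far.
        using System;
        using System.IO;
        using System.Net;

        class FtpProgressUpload
        {
            static void Main()
            {
                string localPath = @"C:\data\large.bin";  // placeholder file
                var request = (FtpWebRequest)WebRequest.Create("ftp://192.168.1.10/large.bin");
                request.Method = WebRequestMethods.Ftp.UploadFile;
                request.Credentials = new NetworkCredential("user", "password");

                long total = new FileInfo(localPath).Length;
                long sent = 0;
                byte[] buffer = new byte[64 * 1024];

                using (Stream src = File.OpenRead(localPath))
                using (Stream dst = request.GetRequestStream())
                {
                    int n;
                    while ((n = src.Read(buffer, 0, buffer.Length)) > 0)
                    {
                        dst.Write(buffer, 0, n);
                        sent += n;
                        Console.Write("\r{0:P1} transferred", (double)sent / total);
                    }
                }
                Console.WriteLine();
            }
        }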

  • How can I migrate Lab Manager VMs to vCloud Director?

    - by jfmessier
    A friend of mine has been asked to start experimenting with migrating VMs from VMware Lab Manager to vCloud Director. We know the machines/labs have to be undeployed and consolidated, which has been done for this testing lab. However, what should the next steps be to make this as efficient as possible, given that we have a large number of labs to migrate? Also, the dependency tree is huge and the consolidation process is a big job, but we know we cannot avoid it. Thanks :-)
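    One possible path, sketched under the assumption that VMware's OVF Tool (ovftool) is available and the consolidated VMs are visible through vCenter; every hostname, inventory path and org/vDC/vApp name below is a placeholder:

        # Export a consolidated VM to an OVF package via vCenter ...
        ovftool "vi://admin@vcenter.example.com/MyDatacenter/vm/MyLabVM" /tmp/MyLabVM.ovf

        # ... then import the package into vCloud Director.
        ovftool /tmp/MyLabVM.ovf \
          "vcloud://admin@vcloud.example.com:443?org=MyOrg&vdc=MyVDC&vapp=MyLabVApp"

    Scripting a loop over such export/import pairs is usually what makes a large lab count manageable.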

  • Ubuntu Deluge checking downloads at start-up slow

    - by solomongaby
    I am downloading a very large torrent (~60 GB), and when the Deluge client starts up it takes a long time to check the parts already downloaded, during which it uses the hard disk so heavily that the whole computer becomes very slow. Is there a way to skip this checking, or to make it less aggressive on the hard drive?
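    If the check itself cannot be avoided, one sketch of a workaround for the disk pressure, assuming the process is named deluged (desktop installs may name it deluge instead): drop its I/O priority so the re-check only touches the disk when nothing else needs it.

        # Put the Deluge process into the "idle" I/O scheduling class.
        ionice -c 3 -p "$(pidof deluged)"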

  • scp stalls and ssh sessions freeze up (but eventually start again)

    - by coleifer
    I am running Ubuntu on various computers on a home network: some on 9.04 x64, some on 10.04 x64, and one on 9.04 x32. Running scp with a large file starts out at about 2.1 MB/s and drops down to about 200 KB/s, stalling and recovering repeatedly until the transfer is complete. I've noticed this when I have a secure shell open on any of these servers as well. I have tried this with two different routers, both brand new and from different brands.
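    One low-risk diagnostic, assuming OpenSSH's scp (the target address is a placeholder): cap the bandwidth below the stall point. If a capped transfer holds a steady rate, the stalls likely come from buffering in the network path rather than from disk or CPU on the hosts.

        # Limit scp to ~8 Mbit/s (-l takes a limit in Kbit/s).
        scp -l 8192 bigfile user@192.168.1.20:/tmp/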

  • Local or public NTP servers?

    - by BeeOnRope
    For a relatively large network (thousands of hosts), what are the arguments for and against running a locally managed pool of NTP servers (perhaps periodically set via some public NTP server) and having all other hosts on the network use that pool, versus having all hosts simply use public NTP servers directly, say via pool.ntp.org? Aside from the pros and cons, what is typical best practice today?
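    For reference, the common local-pool pattern looks like the sketch below (the internal hostnames are placeholders): a few internal servers sync to the public pool, and every other host syncs only to them.

        # /etc/ntp.conf on the internal NTP servers: follow the public pool.
        server 0.pool.ntp.org iburst
        server 1.pool.ntp.org iburst
        server 2.pool.ntp.org iburst

        # /etc/ntp.conf on every other host: follow only the internal pool.
        server ntp1.example.internal iburst
        server ntp2.example.internal iburst
        server ntp3.example.internal iburst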

  • Is there a Distributed SAN/Storage System out there?

    - by Joel Coel
    Like many other places, we ask our users not to save files to their local machines. Instead, we encourage them to put files on a file server so that others (with appropriate permissions) can use them and so that the files are backed up properly. The result of this is that most users have large hard drives that are sitting mainly empty. It's 2010 now; surely there is a system out there that lets you turn that empty space into a virtual SAN or document library?

    What I envision is a client program that is pushed out to users' PCs and coordinates with a central server. The server looks to users just like a normal file server, but instead of keeping entire file contents it merely keeps a record of where those files can be found among the various user PCs. It then coordinates with the right clients to serve up file requests. The client software would be able to respond to such requests directly, as well as be smart enough to cache recent files locally. For redundancy the server could make sure files are copied to multiple PCs, perhaps allowing you to define groups in different locations so that an instance of the entire repository lives in each group, protecting against a disaster in one building taking down everything else.

    Obviously you wouldn't point your database server here, but for simpler things I see several advantages:

    - Files can often be transferred from a nearer machine.
    - Disk space grows automatically as your company does.
    - It should ultimately be cheaper, as you don't need to keep a separate set of disks.

    I can see a few downsides as well:

    - Occasional degradation of user PC performance, if the machine has to serve or accept a large file transfer during a busy period.
    - Writes have to be propagated around the network several times (though I suspect this isn't really much of a problem, as reading happens in most places more than writing).
    - You still need a way to send a complete copy of the data offsite occasionally, and this design would make it very hard to do differentials.

    Think of this like a cloud storage system that lives entirely within your corporate LAN and makes use of your existing user equipment. Our old main file server is due for retirement in about 2 years, and I'm looking into replacing it with a small SAN; I'm thinking something like this would be a better fit. As a school, we have a couple of computer labs I can leave running that would be perfect for adding a little extra redundancy to the system.

    Unfortunately, the closest thing I can find is Dienst, and it's just a paper that dates back to 1994. Am I just using the wrong buzzwords in my searches, or does this really not exist? If not, is there a big downside that I'm missing?

  • How can I prepare a TortoiseSVN installer to use the serf HTTP library instead of neon?

    - by Sam Johnson
    I'm going to be distributing instructions on how to access our new Subversion repository with TortoiseSVN. Because it's hosted on Windows and we have some large files in the repository, we have to use the Serf HTTP library instead of Neon. This is normally specified by manually editing the Subversion "servers" file on the client machine and adding the line:

    http-library=serf

    Is there a way I can customize the TortoiseSVN installer to do this automatically? I'm just trying to get things up and running as easily as possible for our new SVN users.
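    Short of rebuilding the installer, a hedged sketch of one deployment route: push the setting from a domain login script, since Subversion reads per-user configuration from %APPDATA%\Subversion\servers. The snippet appends to the end of the file; in the default template the last section is [global], but verify that on yours.

        :: Batch sketch: add the serf setting once per user profile.
        findstr /c:"http-library" "%APPDATA%\Subversion\servers" >nul 2>&1
        if errorlevel 1 (
          echo http-library = serf>> "%APPDATA%\Subversion\servers"
        )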

  • How can I detect hard drive failures?

    - by Francis
    I am in charge of a large number of Windows servers. Recently, many have been reporting hard drive errors with event codes 11 and 55. CHKDSK indicates that the drives are fine most of the time. What other diagnostic tools could I use to more accurately detect hard drive failures? Could these Windows events be false positives? I have already evaluated S.M.A.R.T., and it seems to have significant sensitivity and specificity issues.
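    One avenue beyond passive S.M.A.R.T. attribute monitoring, sketched with smartmontools (which also runs on Windows; the device name is a placeholder): the drive's extended self-test reads the entire surface, so it can catch failing media that attribute thresholds miss.

        rem Dump full SMART data, then start an extended offline self-test.
        smartctl -a /dev/sda
        smartctl -t long /dev/sda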

  • Linux file copy with ETA?

    - by bobby
    I'm copying a large number of files between disks, approximately 16 GB of data. I'd like to see progress information, and ideally an estimated time of completion, from the command line. Any advice?
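    Two common options, sketched with placeholder paths; the rsync flag assumes rsync 3.1 or newer.

        # Overall percentage, rate and ETA across the whole transfer.
        rsync -ah --info=progress2 /mnt/source/ /mnt/dest/

        # For a single large file, pv shows bytes copied, rate and ETA.
        pv /mnt/source/big.img > /mnt/dest/big.img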

  • Backing up Oracle to tape

    - by andreas
    Hi folks, our Oracle database has grown very large of late (roughly 400-500 GB), and backing up to the filesystem no longer scales for us. We are looking at using RMAN to back up to tape (directly, not to the filesystem and then to tape). Can anyone shed some light on this, please?
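    Direct-to-tape RMAN goes through the SBT interface, which requires a media-management library from your tape/backup vendor to be linked in. A minimal sketch, with the channel PARMS shown as a NetBackup-style placeholder that your own media manager's settings would replace:

        RUN {
          # "sbt" sends the backup through the media manager to tape.
          ALLOCATE CHANNEL t1 DEVICE TYPE sbt
            PARMS 'ENV=(NB_ORA_SERV=mediaserver.example.com)';
          BACKUP DATABASE PLUS ARCHIVELOG;
          RELEASE CHANNEL t1;
        }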

  • Excel / OpenOffice: append an incrementing value to all non-unique fields

    - by mheavers
    I have a large table of about 7500 store names. I need to search through those names and, if they are not unique, append an incrementing value, for example: store_1, store_2, etc. Anyone know how to do this? For another project, I was using this:

    =J1&IF(COUNTIF($J$1:J1,J1)>1,COUNTIF($J$1:J1,J1),"")

    but in OpenOffice this gives an error, and in Google Spreadsheets it times out because my database is so big. Any suggestions?
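    A hedged variant that numbers every occurrence of a duplicated name and adds the underscore separator, assuming the names sit in column J, rows 1-7500 (put this in K1 and fill down; the bounded range is an assumption based on the stated table size). If it still times out in Google Spreadsheets, the likely culprit is that the running COUNTIF makes the whole column quadratic in the number of rows.

        =J1&IF(COUNTIF($J$1:$J$7500,J1)>1,"_"&COUNTIF($J$1:J1,J1),"")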

  • Linux disk usage report inconsistency after removing file; cPanel inaccurate disk usage report

    - by brando
    relevant software: Red Hat Enterprise Linux Server release 6.3 (Santiago), cPanel 11.34.0 (build 7)

    background and problem: I was getting a disk usage warning (via cPanel) because /var seemed to be filling up on my server. The assumption would be that there was a log file growing too large and filling up the partition. I recently removed a large log file and changed my syslog config to rotate the log files more regularly: I removed something like /var/log/somefile and edited /etc/rsyslog.conf. This is why I was suspicious of the disk usage warning issued by cPanel; it didn't seem right.

    This is what df was reporting for the partitions:

    $ [/var]# df -h
    Filesystem            Size  Used Avail Use% Mounted on
    /dev/sda2             9.9G  511M  8.9G   6% /
    tmpfs                 5.9G     0  5.9G   0% /dev/shm
    /dev/sda1              99M   53M   42M  56% /boot
    /dev/sda8             883G  384G  455G  46% /home
    /dev/sdb1             9.9G  151M  9.3G   2% /tmp
    /dev/sda3             9.9G  7.8G  1.6G  84% /usr
    /dev/sda5             9.9G  9.3G  108M  99% /var

    This is what du was reporting for the /var mount point:

    $ [/var]# du -sh
    528M    .

    Clearly something funky was going on. I had a similar reporting inconsistency in the past, and df reporting seemed to be correct after I restarted the server, so I decided to reboot and see if the same thing would happen. This is what df reports now:

    $ [~]# df -h
    Filesystem            Size  Used Avail Use% Mounted on
    /dev/sda2             9.9G  511M  8.9G   6% /
    tmpfs                 5.9G     0  5.9G   0% /dev/shm
    /dev/sda1              99M   53M   42M  56% /boot
    /dev/sda8             883G  384G  455G  46% /home
    /dev/sdb1             9.9G  151M  9.3G   2% /tmp
    /dev/sda3             9.9G  7.8G  1.6G  84% /usr
    /dev/sda5             9.9G  697M  8.7G   8% /var

    This looks more like what I'd expect. For consistency, this is what du reports for /var:

    $ [/var]# du -sh
    638M    .

    question: This is a nuisance. I'm not sure where the disk usage reports issued by cPanel get their info, but it clearly isn't correct. How can I avoid this inaccurate reporting in the future? It seems like df reporting the wrong disk usage is a strong indicator of the underlying problem, but I'm not sure. Is there a way to 'refresh' the filesystem somehow so that the df report is accurate without restarting the server? Any other ideas for resolving this issue?
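    The usual cause, and a way to confirm it without rebooting, sketched below: deleting an open log file removes its directory entry (so du no longer counts it), but the blocks stay allocated until the process holding the file open, typically the syslog daemon, closes it (so df still counts them). Restarting that process releases the space.

        # List open files on /var whose on-disk link count is zero (deleted).
        lsof +L1 /var

        # If rsyslog is the holder, restarting it frees the space.
        service rsyslog restart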

  • Can someone explain iostat output?

    - by user37197
    I have an IBM server with Red Hat 5 ELsmp, connected to IBM storage over iSCSI (as sdb). Can someone explain this output from the iostat command?

    avg-cpu:  %user   %nice %system %iowait  %steal   %idle
              12.79    0.01    4.53   72.22    0.00   10.45

    Device:            tps   Blk_read/s   Blk_wrtn/s    Blk_read    Blk_wrtn
    sda              95.63        48.88       240.95   485589164  2393706728
    sdb              29.20       350.49       402.08  3481983365  3994494696

    Moving a large file to sdb is very slow. Does this seem normal?
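    A brief reading, plus a sketch of a more detailed capture: the %iowait of 72.22 means the CPU spends most of its time idle while waiting for outstanding disk I/O to complete, which fits the slow copy to the iSCSI device. The extended form of iostat adds per-device latency and utilization, showing directly whether sdb is the saturated device.

        # Extended device stats every 5 seconds: watch await (ms per I/O),
        # avgqu-sz (queue depth) and %util on sdb.
        iostat -x 5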

  • Is there a way to automatically keep Chrome/Ask Tool Bar from installing?

    - by hydroparadise
    Lately I've had to warn my users to watch out for unwanted programs that come in with Adobe Flash and Java updates: Adobe pushes Google's Chrome, and Java pushes the Ask.com Toolbar. I admit it could be much worse, since in both cases you simply have to uncheck a box at some point during the update process, but on a large scale, prevention is better than confrontation. Any suggestions?
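    For the Java side, a hedged sketch: newer Java offline installers accept a switch that suppresses the sponsor (Ask Toolbar) offer, so a silent push install can disable it fleet-wide. The installer filename below is a placeholder; check the switch against the JRE version you actually deploy.

        rem Silent JRE install with sponsor offers disabled.
        jre-8u25-windows-i586.exe /s SPONSORS=0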

  • CRC error when extracting to SSD from 2nd HDD

    - by gbn
    Hello. I have a large RAR file (split into parts) containing an ISO, stored on my 2nd HDD. When I extract it to the same HDD, it's OK; when I extract it to the system/OS SSD, I get CRC errors. I've checked memory (run memtests), checked wires, etc. I have no other issues, only with this one RAR file. Any ideas, please?
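    One way to narrow it down, sketched with Windows' built-in hashing tool (the paths are placeholders): copy one of the large archive parts to the SSD and compare checksums. Matching hashes would point back at the extraction itself; differing hashes would implicate the write path to the SSD.

        rem Hash the original and the copy on the SSD, then compare.
        certutil -hashfile D:\archive\part1.rar MD5
        certutil -hashfile C:\temp\part1.rar MD5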

  • Improve file transfer speed between Windows PCs and servers

    - by Geotarget
    I've set up a server that multiple PCs in my workplace connect to. Sadly, data transfer speeds max out at 3 MB/s per connection, which is slow for file transfers, especially of large files. I'm using Windows file sharing; the server is Windows Server 2008 (2 GHz CPU, 1 GB RAM) and the client PCs mostly run Windows 7. How can I detect bottlenecks in my network and improve file-sharing speed within the network?
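    A first step that separates the network from the file-sharing stack, sketched with the cross-platform iperf tool (the hostname is a placeholder): if raw TCP throughput measures well above 3 MB/s, the bottleneck is in SMB, the disks or server load rather than the wire.

        rem On the server:
        iperf -s

        rem On a client; reports raw TCP throughput to the server.
        iperf -c fileserver.example.local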

  • Alternative to Dropbox (on my server)?

    - by jweede
    I love using Dropbox to sync files between all my machines, and I've heard it uses rsync internally to keep files synced. Sometimes I need to sync very large things, and I don't necessarily want to pay for storage space on someone else's server when I have my own. Does anyone know of a nice cross-platform (preferably open source) automatic file-sync application for this?
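    One candidate fitting the cross-platform, open-source criteria is Unison, which does two-way synchronization over SSH; a sketch with placeholder paths and hostname:

        # Two-way sync between a local directory and the same path on my
        # server; -batch applies non-conflicting changes without prompting.
        unison /home/me/sync ssh://me@myserver.example.com//home/me/sync -batch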

  • Prefork or worker MPM for an Amazon X-Large server?

    - by Netismine
    I'm trying to work out whether the prefork or worker Apache MPM would be better for the server I'm working on: an Amazon Extra Large instance (15 GB memory, 8 EC2 Compute Units: 4 virtual cores with 2 EC2 Compute Units each) that will run a Magento website with about 50 concurrent users. The site serves a lot of images, about 45 requests per page. Images sometimes hang, so it seems worker would be a better option? Thanks
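    If worker wins out, a hedged starting point is sketched below; the numbers are illustrative, not tuned for Magento. One caveat worth weighing: mod_php is not thread-safe, so a worker setup typically means serving PHP through FastCGI while the worker threads handle the static images.

        # httpd.conf: illustrative worker MPM sizing for a 15 GB host.
        <IfModule mpm_worker_module>
            StartServers          4
            ServerLimit          16
            ThreadsPerChild      25
            MaxClients          400
            MinSpareThreads      64
            MaxSpareThreads     128
            MaxRequestsPerChild   0
        </IfModule>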

  • Does the Virtual PC XP Mode need safety measures?

    - by Ivo
    Does Virtual PC's XP Mode (or any other virtualized OS) require safety measures such as antivirus or a firewall? I'm wondering whether XP Mode is a large security loophole, since it's so tightly integrated into Windows 7. Actually, I'm wondering the same about Portable Ubuntu: are there any safety measures I should take so that I don't open a backdoor on my computer?

  • How to log size of cookies in request header with apache

    - by chrisst
    We have an issue on our site with cookies growing too large. We have already expanded the acceptable header size and throttled the cookie sizes for now, but I'd like to figure out what the average client's header size is, specifically for the cookies. I've created an Apache log format that captures the cookies sent on each request:

    LogFormat "%{Cookie}i" cookies

    But this just spits out the entire contents of all cookies in the header. Is there a way to have Apache log just the size (or just the length of the string) per request?
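    A close substitute, assuming mod_logio is loaded: the %I format logs total bytes received for the request, including the request line and all headers, so it tracks cookie bloat well even though it isn't the cookie length alone.

        # Requires mod_logio. %I = bytes received, headers included;
        # unusually large values flag clients with oversized cookies.
        LogFormat "%h %t %I \"%r\"" reqsize
        CustomLog logs/reqsize_log reqsize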
