Search Results

Search found 36925 results on 1477 pages for 'large xml document'.

  • Varnish: send to the client while caching

    - by osmano807
    I want Varnish to download the file into the cache while it is sending it to the client. From what I am seeing, it first downloads the whole file and only then sends it, which is slow with very large files. (Sorry for the English, I'm using an online translator.)
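
    If the box can be moved to Varnish 3.x, this is exactly what its streaming support is for; a minimal VCL sketch, where the URL pattern is only an assumption about which objects are large:

        sub vcl_fetch {
            # start sending the object to the client while the backend
            # fetch is still in progress, instead of buffering it fully
            if (req.url ~ "\.(iso|zip|mp4)$") {
                set beresp.do_stream = true;
            }
        }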

    Read the article

  • Security camera for HQ and remote sites?

    - by Atlas
    We want to install security cameras at our HQ site and 3 remote sites. Basically: (1) each site would have N cameras, and (2) each site should have a local DVR to record everything. We want HQ to be able to see the live and recorded video of each remote site, including HQ itself. Preferably HQ would have one large screen displaying all cameras from itself and the remote sites, say in a 32x32 grid of cells. Does such a system exist?

    Read the article

  • Looking for a user friendly file upload service to run on my own server

    - by Joe Mako
    My goal is to allow people to upload and download large files through their web browser with a simple user interface and user password/account management. Yousendit offers a service like what I am looking for, but it does not make sense to use their service if I already have a web server. Is there an open source version of Yousendit that I can run on my own web server? What is this type of software or service called? (calling it an FTP Server does not seem to fit)

    Read the article

  • How can I have vhosts with Lighttpd on Windows and keep PHP through mod_cgi?

    - by Pixelastic
    Hello, I installed Lighty on Windows 7 and managed to get it to correctly serve both static and PHP files (through mod_cgi). At first I got the "No input file selected" message when requesting a .php file, so I updated the doc_root value in my php.ini to match the server.document-root defined in my Lighty config, and PHP stopped complaining. Then I defined a vhost to point all foo.com requests to a specific directory. It worked well for all static files, but when requesting a .php file, mod_cgi was still picking files from the doc_root defined in php.ini, not from the directory I defined for server.document-root in my vhost. I know that's what is supposed to happen: PHP follows the config defined in php.ini, and I have to set this value in my php.ini, otherwise no PHP is processed at all. What I don't understand is how I'm supposed to have virtual hosts with mod_cgi enabled here. I tried adding a [HOST=foo.com] section in the php.ini without any luck. I tried mod_fastcgi but couldn't get it to work at all, and I also tried mod_simple_vhost but couldn't get it to handle PHP. I managed to get it working by copying my PHP install to another directory (and changing its doc_root value) and adding a cgi.assign pointing to that install in my vhost, but this is a really hackish way: it means having one PHP install for each virtual host. Note that I'm working on a development machine running Windows; this is not a production server, I just wanted to emulate the final server config locally to test some changes. I have googled this problem a lot, but all I can find are people installing Lighty on Windows with mod_cgi, or installing Lighty on Windows with virtual hosts, but never anyone who managed to get both.
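
    One avenue worth trying before duplicating the PHP install (a sketch, not a verified fix for this exact setup) is to stop pinning PHP to a doc_root at all: leave doc_root empty in php.ini and let php-cgi trust the SCRIPT_FILENAME each vhost passes, which on Windows CGI setups usually also means turning cgi.force_redirect off and cgi.fix_pathinfo on. All paths below are placeholders:

        ; php.ini
        doc_root =
        cgi.fix_pathinfo = 1
        cgi.force_redirect = 0

        # lighttpd.conf - one shared php-cgi binary, per-host document roots
        cgi.assign = ( ".php" => "C:/php/php-cgi.exe" )

        $HTTP["host"] == "foo.com" {
            server.document-root = "D:/sites/foo"
        }
        $HTTP["host"] == "bar.com" {
            server.document-root = "D:/sites/bar"
        }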

    Read the article

  • Suggestions for making sysfs parameters persist across reboots

    - by ewwhite
    I'm experimenting with large changes to Linux system runtime parameters exposed through the sysfs virtual file system. What is the most efficient way to maintain these parameters so that they persist across reboots on a RHEL/CentOS-style system? Is it simply dumping commands into /etc/rc.local? Is there an init script that's well-suited for this? I'm also thinking about standardization from a configuration management perspective. Is there a clean sysfs equivalent to sysctl?
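
    For the rc.local route, a minimal sketch (the two parameters are only examples of sysfs paths, not a recommendation); the closest sysctl-style equivalent I know of is the sysfsutils package, which applies /etc/sysfs.conf at boot:

        #!/bin/sh
        # /etc/rc.local - re-apply sysfs settings at the end of boot
        echo never    > /sys/kernel/mm/transparent_hugepage/enabled
        echo deadline > /sys/block/sda/queue/scheduler

        # equivalent entries in /etc/sysfs.conf (sysfsutils package):
        #   kernel/mm/transparent_hugepage/enabled = never
        #   block/sda/queue/scheduler = deadline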

    Read the article

  • visually documenting web server configuration and infrastructure

    - by Alex Ciarlillo
    I have just finished a large re-organization and update of our institution's web server(s). This server hosts 3 virtual hosts, 3-4 blogs, 2 wikis, some legacy static HTML pages, and many hosted documents (PDF, .jpg, .xls). I have organized the site into a structure something like: /var/www/sites/vhost1, vhost2, vhost3 .../wordpress/blogX .../mediawiki/wikiX Data is in a separate directory structure so I can run a cron task over it to make sure it is all writeable and such, and I then symlink to these data directories for each application: /var/www/data/vhost1, vhost2, vhost3 .../wordpress/blogX/uploads .../mediawiki/wikiX/images All Apache configs are in /etc/httpd/conf.d/vhosts.d/vhost1,2,3.conf. On top of this there is also a testing server which mirrors this setup; once changes are fully tested, they are rsynced down to the live server. All the WordPress and MediaWiki installs come straight from SVN, and updates are done by switching branches or running "svn up". So my question is: how can I best document this to share with a) co-workers, b) a possible future replacement, and c) myself 6 months from now? Obviously I can make a wiki page, Excel document, whatever and fill it with text, but I am looking for a more visual representation that I can use to explain the architecture to less technical people. Ideally it would be awesome if this visual representation could then be expanded to get more technical details.
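
    One lightweight option that stays close to the "config as text" spirit of the setup is to keep the diagram itself as a small Graphviz file next to the configs and regenerate a PNG for less technical readers whenever the layout changes; a rough sketch with placeholder names:

        // infra.dot  -  render with:  dot -Tpng infra.dot -o infra.png
        digraph infra {
            rankdir=LR;
            test [label="testing server"];
            live [label="live web server"];
            blog [label="blogX (WordPress)\n/var/www/sites/wordpress/blogX"];
            data [label="/var/www/data/wordpress/blogX/uploads"];
            test -> live [label="rsync after testing"];
            live -> blog;
            blog -> data [label="symlink"];
        }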

    Read the article

  • VNC - Is there any way to turn off logging/log files

    - by Ke
    Hi, I've looked everywhere for a solution to this. Is there any way to turn off this logging in VNC? VNC seems to be logging some large updates I'm doing in MySQL and it is taking up all of my hard drive space. The only way to get rid of the log file is to reboot, which I would prefer not to do if possible. Cheers
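
    If this is a Unix vncserver writing to ~/.vnc/<host>:<display>.log, one crude workaround (a hack rather than a supported option, and the exact log file name may differ on your box) is to point the log at /dev/null and restart the session:

        # stop the current session, redirect its log, start it again
        vncserver -kill :1
        ln -sf /dev/null ~/.vnc/$(hostname):1.log
        vncserver :1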

    Read the article

  • Is it possible to copy a set of files, but automatically skip if file already exists?

    - by awe
    I know that the copy command has a switch (/Y) to automatically replace a file if it already exists, but I want to know if there is a way to copy files only if they do not already exist. I do not know the actual file names in the batch code, as I copy from the source using wildcards in the copy command (copy *.zip c:\destination). The reason I want this instead of automatic overwrite is that the files are large, and skipping existing files would save a lot of execution time.
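
    copy itself has no skip-if-exists switch, but a short batch loop can add the check per file; a sketch using the same wildcard and destination as above (where robocopy is available, its /XC /XN /XO exclusions give the same "skip everything that already exists" effect):

        rem copy only the .zip files that are not already in the destination
        for %%F in (*.zip) do (
            if not exist "c:\destination\%%~nxF" copy "%%F" "c:\destination"
        )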

    Read the article

  • Media meta file system for windows

    - by Chris Marisic
    I have a large assortment of media that is currently arranged using folders. As the library has grown, I've started to notice that folders aren't the best at conveying meaning. Also, as the number of folders serving as categories/tags has grown, it has led to data duplication from not realizing something was already filed under a different tag. As I started to think about this I realized tag cloud visualization would be tremendously powerful, and figured there has to be something like this out there.

    Read the article

  • Slow down individual connections passing through a Linux router?

    - by davr
    We have a Linux server acting as a router/firewall for our office. Occasionally someone will upload a large file that takes up all our bandwidth. I don't want to implement any complex rules or traffic shaping, but I'm wondering if there is a way to slow down a single connection on the spot? I found tcpnice, but it doesn't slow down the transfers in my testing.
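
    Short of full traffic shaping, a pair of tc classes on the router can throttle just the offending host on the spot and be torn down afterwards; a sketch in which the interface, the rates and the 192.168.1.50 address are all assumptions:

        # one "normal" class plus one slow class on the outbound interface
        tc qdisc add dev eth0 root handle 1: htb default 10
        tc class add dev eth0 parent 1: classid 1:10 htb rate 100mbit
        tc class add dev eth0 parent 1: classid 1:20 htb rate 256kbit

        # push traffic from the offending machine into the slow class
        tc filter add dev eth0 parent 1: protocol ip prio 1 u32 \
            match ip src 192.168.1.50/32 flowid 1:20

        # remove the shaping once the upload is done
        # tc qdisc del dev eth0 root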

    Read the article

  • Moving directories full of files over the top

    - by JavaRocky
    I took a backup of a directory which has a number of directories and files inside it. Recently some files have gone missing, and I would like to move just the missing files back over from the backup. I prefer moving files instead of copying, as space is at a premium on this particular box and the files are quite large. How can I achieve this?
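
    rsync can do this in one pass; a sketch with assumed paths: --ignore-existing leaves anything already present in the destination untouched, and --remove-source-files deletes each file from the backup only after it has been transferred, which gives move-like behaviour without ever holding an extra copy of the missing files:

        rsync -av --ignore-existing --remove-source-files /mnt/backup/dir/ /srv/live/dir/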

    Read the article

  • Need help recovering a corrupt SQL database

    - by user570079
    I have a very special case that I have been working on for several days. I have a very large SQL Server 2008 database (about 2 TB) that contains 500 filegroups to support very large partitioned tables. Recently we had a catastrophic failure on one of the drives, lost several filegroups, and the database became inaccessible. We have been doing filegroup backups on a daily basis, but due to other issues we lost our most recent backup of the log and the primary filegroup. We have all the data backed up, but the primary filegroup backup is old. There have been no schema changes since the primary filegroup backup, but the LSNs are now all out of sync and we cannot recover the data. I have tried everything I could think of (and just about every trick and hack I could google), but I still end up at the same point: messages saying that the files for filegroup X do not match the primary filegroup. I am now at the point of trying to edit the system tables (we have a separate temporary environment for this, so we are not worried about corrupting any production databases). I have tried updating sys.sysdbreg, sys.sysbrickfiles, and sys.sysprufiles to trick SQL into thinking all the files are online, but a "Select * From OPENROWSET(TABLE DBPROP, 5)" shows a different database state from what I see in sys.sysdbreg. I am now thinking I need to somehow edit the headers of the actual data files to line up the LSNs with the primary. I appreciate any help anyone can give me here, but please do not respond with things like "you are not supposed to edit mdf/ndf files..." or "see msdn article...", etc. This is an advanced emergency case and I need a real hack so we can just get to the data in this corrupt database and export it to a fresh new database. I know there is a way to do this, but not knowing what the DBPROP system function does (i.e. does it look at the system tables or does it actually open the file?) is keeping me from figuring out how to fool SQL into letting me read these files. Thanks for any help.

    Read the article

  • Prevent abuse of public HTTP directory meant for images

    - by sutre
    The situation: each user has their own public HTTP directory, meant for images only. This could easily be abused by users using it to serve large files, wasting bandwidth. The question: is there any fairly simple way to prevent this abuse? Either by having the web server serve only images, by restricting file size, or by some other method.
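
    If the server is Apache 2.4-ish, a whitelist on the file extension inside those directories is usually enough; a sketch where the directory path and the allowed extensions are assumptions:

        <Directory "/var/www/userpics">
            # refuse everything by default...
            Require all denied
            # ...then allow only image extensions back in
            <FilesMatch "\.(jpe?g|png|gif)$">
                Require all granted
            </FilesMatch>
        </Directory>

    This does not stop someone renaming a large file to .jpg, so a per-directory size cap or quota is a reasonable companion measure.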

    Read the article

  • How to avoid duplicates when copying files that have been renamed at the destination

    - by Benoitt
    I have to get pictures (matched by extension) from a folder – with subfolders that are updated automatically. These files have to be copied into a folder where a PHP-based website will edit them (renaming them and creating an XML file) so they can be downloaded and integrated into an XML feed. Because of the rename step in that script, when I perform the copy again all the files are duplicated, since the script has already renamed the original ones at the destination. I've tried a few things with rsync, but I'm looking for something more powerful, because I can't copy files based on an external "history". My current script:

        #!/bin/bash
        find '/home/name/picture' -name '*.jpg' | while read FILE ; do
            rsync --backup --backup-dir=incremental --suffix=.old "$FILE" /var/www/media
        done
        wget --spider 'http://myscript.php'
        #exit 0

    PS: As a small addition, I'd like to replace '.' with a space just after the *.jpeg copy; my PHP script has trouble with file names containing extra dots because of the extension. I'm thinking about a find command – like the one above – combined with sed. Is that a good idea?
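
    One way to get that external history (a sketch rather than a drop-in fix; the manifest path is an assumption) is to record every file already handed to the website and skip anything on that list on later runs, so renames on the destination side no longer matter:

        #!/bin/bash
        SRC='/home/name/picture'
        DEST='/var/www/media'
        MANIFEST='/var/lib/picture-sync/transferred.list'

        mkdir -p "$(dirname "$MANIFEST")"
        touch "$MANIFEST"

        find "$SRC" -name '*.jpg' | while read -r FILE ; do
            # only copy files we have never pushed before
            if ! grep -Fxq "$FILE" "$MANIFEST" ; then
                cp -- "$FILE" "$DEST/" && echo "$FILE" >> "$MANIFEST"
            fi
        done

        wget --spider 'http://myscript.php'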

    Read the article

  • Automatically remove all music from mp3 talk show

    - by Schudel
    Is there a tool that can automatically remove the music sections of a given mp3 file so only the talking part is left? This would create a better listening experience when listening to the file on my portable mp3 player. Currently I'm doing the task manually by looking for 'dense' regions using Audacity, but this is very cumbersome for large files. Any help would be appreciated.

    Read the article

  • Burn 30GB zip file to DVD

    - by Joel Coehoorn
    I have a 30 GB zip file containing an archive of digital materials available in the school library that I want to burn to DVD. Of course, 30 GB is far too large for a single DVD, and the content is already zipped. I'm open to ideas, but leaning towards suggestions that will help me automatically spread the file over multiple DVDs, including a simple program to stitch it back together again later.
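
    If a Unix-style toolset is available (Cygwin on Windows works), split and cat cover the mechanics; 7-Zip's split-into-volumes option is the usual Windows-native equivalent. A sketch, with the file name assumed and the chunk size picked to fit single-layer DVDs:

        # cut the archive into DVD-sized pieces
        split -b 4400M library.zip library.zip.part_

        # burn each library.zip.part_* file to its own disc, then later:
        cat library.zip.part_* > library.zip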

    Read the article

  • Copy all text in a LibreOffice Draw drawing

    - by harbichidian
    I have a large flowchart, created in LibreOffice Draw (3.3.1), that I would like to copy all of the text from. I do not need, nor care about the order or structure, I just need all of the text from within the blocks. I can't seem to find any way to export without turning it into an image, and none of the "Paste Special" options allow me to get unformatted text. Is there a way to do this without retyping everything?
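
    If retyping is the only alternative, a small Tools > Macros > Basic routine can walk every page and shape and dump whatever text it finds; a rough, untested sketch (the service check and the getString call are the parts most likely to need adjusting):

        Sub DumpAllDrawText
            Dim sAll As String
            Dim oPage As Object, oShape As Object
            Dim i As Integer, j As Integer
            For i = 0 To ThisComponent.DrawPages.Count - 1
                oPage = ThisComponent.DrawPages.getByIndex(i)
                For j = 0 To oPage.Count - 1
                    oShape = oPage.getByIndex(j)
                    If oShape.supportsService("com.sun.star.drawing.Text") Then
                        sAll = sAll & oShape.getString() & Chr(10)
                    End If
                Next j
            Next i
            ' show it all at once; copy it from here or write to a file instead
            MsgBox sAll
        End Sub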

    Read the article

  • How to set ulimits for a service starting at boot?

    - by jayofdoom
    For MySQL to use large pages I need to set a ulimit, which I've done in limits.conf. However, limits.conf (pam_limits.so) doesn't get read for init, only for "real" shells. I solved this before by adding a "ulimit -l" call to the init script's start function, but I need some sort of repeatable way to do this now that the boxes are managed with Chef, and we don't want to take over a file that's actually owned by the RPM.
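
    One hook that leaves the RPM-owned init script alone is /etc/initscript: if it exists, SysV init runs every inittab entry (including the rc scripts that start services) through it, so limits set there are inherited by everything started at boot. A minimal sketch, easy to drop in from Chef:

        # /etc/initscript
        # raise the locked-memory limit for everything init starts
        ulimit -l unlimited
        umask 022
        # hand control to the command from the inittab entry
        eval exec "$4"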

    Read the article

  • How to read external USB hard drive formatted ext3 from Windows 7?

    - by CChriss
    I had to format a USB hard drive as ext3 to use it with a Linksys NAS box. Now I can't read the drive when I unplug it from the NAS and plug it directly into my Windows 7 computer. (The computer accesses the NAS over a wireless connection, so I like to plug the drive directly into my PC when transferring large files.) How can I leave the drive formatted with ext3 and still be able to access it (read/write) when I plug it directly into my PC?

    Read the article

  • using git for media libraries

    - by mpapis
    Rationale: I want to manage libraries of media files (music, images) using git. There is git-annex, but it requires the Haskell Platform, and the two do not play together well (it's also quite a big dependency for me). Question: Is there any other plugin with this functionality, or would it be possible to write such a plugin (and are there resources for doing so)? Similar questions: Self-hosted, cross-platform repository for large files; Using Git to Manage An iTunes Library?

    Read the article

  • Why is "start in" needed for Windows scheduled tasks?

    - by GomoX
    We develop a web application that can be deployed on Windows or Linux. The Linux implementation uses cron, and the Windows one uses a scheduled task to run a single PHP script that processes all scheduled jobs for our system. The task is created with schtasks during the install process. This has always worked under both W2003 and W2008. A week ago a customer reported that scheduled tasks were not running; he is running Windows 2008. We checked over and over and finally solved the issue by entering the folder that contains the .vbs script as the "start in" folder for the scheduled task. That said, there is no way to set the "start in..." value from schtasks without using an XML definition of the task, and XML definitions don't work on Windows 2003, so I would have to add Windows version detection to the installer, additional testing, etc. (I'd like to avoid this if at all possible). The only atypical thing I noticed about the install is that the system is installed in D:\ as opposed to the default C:\Program Files (x86)\, but I don't see how this would matter; all the paths are absolute in all the scripts. Can anyone suggest a reasonable solution for this?
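
    One workaround that keeps a single schtasks call working on both W2003 and W2008 is to not depend on the working directory at all and wrap the command so it changes directory first; a sketch with placeholder paths and schedule:

        schtasks /Create /SC HOURLY /TN "MyAppJobs" ^
            /TR "cmd /c cd /d D:\myapp\scripts && cscript //nologo runjobs.vbs"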

    Read the article

  • What is the best way to migrate to a new file share in a new domain?

    - by cbattlegear
    I am looking to move a file share (100 GB or so) from one domain/server to a new domain and server. I would like to do this with little to no downtime, and if possible I would like to be able to map permissions from the groups/users in the current domain to groups/users in the new domain. As a side question, a large number of the files in the share are Office documents with hard links to the old file server. Is there any way to programmatically change all those links to point at the new file server?
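
    For the data move itself, robocopy lets you pre-seed the new server while the old share stays live and then run a quick final pass at cutover; /COPYALL carries the NTFS ACLs across, which only helps if the security principals resolve in the new domain (re-ACLing for new groups/users is a separate step). Server and share names below are placeholders:

        rem seed the new server first, then re-run the same command near cutover;
        rem /MIR mirrors the tree (including deletions), /COPYALL carries ACLs
        robocopy \\oldserver\share \\newserver\share /MIR /COPYALL /R:1 /W:1 /LOG:C:\migration.log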

    Read the article

  • Is there a way to search for equations in Word 2007 documents?

    - by FM
    I have many large Word 2007 documents containing a few dozen equations each. Is there a way to locate the equations using Word's Find command, or do I have to hunt for them old-school? I tried searching for a graphic (^g) and a field (^d), but that didn't do the trick. Am I missing something obvious? Might there be a way to do this using VB or some other trick?
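
    Since VB is on the table: Word 2007's object model exposes equations as the document's OMaths collection, so a short macro can jump from the cursor to the next one; a rough sketch:

        ' jump to the first equation after the current selection
        Sub JumpToNextEquation()
            Dim eq As OMath
            For Each eq In ActiveDocument.OMaths
                If eq.Range.Start > Selection.Range.End Then
                    eq.Range.Select
                    Exit Sub
                End If
            Next eq
            MsgBox "No equation found after the cursor."
        End Sub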

    Read the article

  • Is the XP VMM a bottleneck on a multi core machine?

    - by JeffV
    I have a dual-Xeon, hex-core machine running an IO-intensive application (Windows XP 32-bit). I am seeing a hardware driver (half user mode, half kernel, streaming data) that is generating 6k delta page faults per second. When other applications load or allocate large amounts of memory, the driver's hardware buffer underruns (the application is not feeding it fast enough). Could this be because the kernel is only using one core to service page fault interrupts?

    Read the article
