Search Results

Search found 14206 results on 569 pages for 'compressed folder'.

  • Is it possible to stream input into RAR

    - by Dscoduc
    I'm using RARLAB's RAR.exe to archive/backup my server data. I am familiar with using RAR to create an archive and add files from a folder, but what about streaming data directly into an archive? For example, when backing up my MySQL databases I use the mysqldump command and redirect its output into a text file. It would be nice to skip the intermediate file and go directly into an archive, using something like the following syntax: mysqldump -uUserName -pPassword --all-databases | rar.exe newarchivename.rar Does anyone know if what I have described, or something similar, is even possible?
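
    For what it's worth, RAR does have a switch for reading from stdin — a hedged sketch, assuming a RAR build that supports -si (the name after the switch is what the streamed data is stored as inside the archive):

        mysqldump -uUserName -pPassword --all-databases | rar a -sialldb.sql backup.rar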

  • Best approach to utilize RamDisk for Chrome?

    - by laggingreflex
    I use a lot of tabs, and after a while the less recently used tabs take some time to become responsive again, which I guess is because their data is being evicted to the HDD while they're not needed. So after creating a RAM disk I have two options: use the --disk-cache-dir="G:/" switch so Chrome's disk cache lives on the RAM disk, or what I'm currently doing: using a directory junction for "[...]\AppData\Local\Google\Chrome\User Data\Default" to move that entire folder over to the RAM disk. I thought this would be better than just moving the disk cache, but what do I know. Is it? As one can guess, it'll be a pain saving/loading the RAM disk image each time I start Chrome, but if it really is better than the former approach I'll write a script or something.
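
    A minimal sketch of both approaches, assuming the RAM disk is mounted as G: and the profile was copied to G:\ChromeProfile (paths hypothetical):

        rem Option 1: point only the disk cache at the RAM disk
        chrome.exe --disk-cache-dir="G:\ChromeCache"

        rem Option 2: junction the whole profile onto the RAM disk (elevated prompt, Chrome closed)
        mklink /J "%LOCALAPPDATA%\Google\Chrome\User Data\Default" "G:\ChromeProfile"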

  • Snippets between desktop and laptop

    - by Jamie F
    The Situation: At work, I have a nice beefy desktop running Windows Server 2008 R2 (SharePoint dev machine). My handy ThinkPad is right next to it. Every once in a while I'd like to cut and paste or share something (usually text) between the machines: for example, I might be headed out and I'd like to send the URL I'm reading from the desktop to the laptop. Of course I can create a share or use the admin shares and create files to get stuff back and forth, but that seems heavyweight for what I'm thinking of. I'm thinking more along the lines of sending myself an IM. How do you get little things from machine to machine? Keep a shared folder pinned to the taskbar? Send an email to yourself? Bookmark sync? While I'm on it, I'm looking for a decent multiple-clipboard handler: maybe these two functions are combined in some nice little utility? I suspect I'm missing something simple here... Thanks... Jamie F.

  • How do I restore a non-system hard drive using Time Machine under OSX?

    - by richardtallent
    I dropped one of the external drives on my Mac Pro and it started making noises... so I bought a replacement drive. No biggie, that's why I have Time Machine, right? So now that I have the new drive up and initialized, how do I actually restore the drive from backup? Time Machine is intuitive when it comes to restoring the system drive or restoring individual folders/files on the same literal device, but I'm a bit stuck on how to properly restore an entire drive that is not the boot drive. I saw one suggestion to use the same volume name as the old drive and then go into Time Machine. I haven't tried that, since the information is unconfirmed. For now, I just went to the Time Machine volume, found the latest backup folder for that volume, and I'm copying the files via Finder. Of course, I expect this to work just fine, but I feel like I'm missing something if that's the "proper" way to do this.
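
    For reference, a hedged alternative if you're on OS X 10.7 or later: tmutil can copy a volume's latest backup with its Time Machine metadata intact (volume names here are hypothetical):

        sudo tmutil restore -v "/Volumes/TM Backup/Backups.backupdb/MacPro/Latest/OldDrive" "/Volumes/NewDrive"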

  • Apache - How to disable gzip content encoding (e.g. DEFLATE) for one set of URLs?

    - by Rory McCann
    I have an Ubuntu Apache webserver and I have enabled mod_deflate to gzip all the content. However, there's one folder I'd like to disable mod_deflate for. I was going to do something like this: <Location /myfolder> RemoveOutputFilter DEFLATE </Location> But that doesn't work. Rationale: I am trying to debug an XMLRPC server and I am using Wireshark to see what gets passed in the HTTP requests; since the replies are gzipped, I can't see what's going on.
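
    A common workaround is the no-gzip environment variable, which mod_deflate honors — a hedged sketch:

        <Location /myfolder>
            # mod_deflate skips any response where the no-gzip env var is set
            SetEnv no-gzip 1
        </Location>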

  • Read data on NTFS partition - Ubuntu

    - by albert green
    Hi, I had a Win XP machine with an NTFS partition for programs (C:), and I installed Ubuntu 10.10 on it. I will use Ubuntu from now on. On the disk there was the NTFS partition plus free space, and I created a new Linux partition (ext3) in the free space. Now, from Ubuntu, I used the Disk Utility and saw that the Windows partition is marked as free space. I had only one option, which was to create a partition, so I created it as NTFS. I did NOT format it. I don't care about the Windows system; I just need to access the Program Files folder on that partition to get my Chrome bookmarks, which I forgot to save before the installation of Linux. Do you think it is possible? If so, how? Thanks.
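
    Since the partition was re-created but never formatted, the old filesystem may well still be intact — a hedged sketch of recovering it with testdisk (assuming the disk is /dev/sda):

        sudo apt-get install testdisk
        sudo testdisk /dev/sda        # Analyse -> Quick Search, then write the recovered partition table
        sudo mkdir -p /mnt/windows
        sudo mount -t ntfs-3g -o ro /dev/sda1 /mnt/windows   # mount read-only and copy the bookmarks out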

  • Nautilus can't start due to segmentation fault

    - by Dmitriy Sukharev
    Out of the blue, I can't start Nautilus today. When I try to open any directory it tries to open, and sometimes I can even see the contents of the directory, but finally it closes; after that there are no icons on the desktop. When I tried to launch Nautilus from a terminal, I got:

        $ nautilus .
        Initializing nautilus-dropbox 0.7.1
        Initializing nautilus-gdu extension
        Segmentation fault (core dumped)

    I've tried moving the ~/.local/share/gvfs-metadata folder; I don't have the nautilus-open-terminal package and don't have the file /usr/local/lib/libgtk-3.so.0. Also, I can't update the system right now. All the time I'm getting the same hash-sum error:

        $ sudo apt-get update
        [sudo] password for dmitriy:
        Ign http://mirror.mirohost.net precise InRelease
        Ign http://mirror.mirohost.net precise-updates InRelease
        Ign http://mirror.mirohost.net precise-security InRelease
        Hit http://mirror.mirohost.net precise Release.gpg
        ...
        Ign http://ppa.launchpad.net precise/main Translation-en
        Hit http://mirror.mirohost.net precise-security/restricted Translation-en
        Hit http://mirror.mirohost.net precise-security/universe Translation-en
        Fetched 1 B in 1s (0 B/s)
        W: Failed to fetch gzip:/var/lib/apt/lists/partial/mirror.mirohost.net_ubuntu_dists_precise_universe_source_Sources Hash Sum mismatch
        E: Some index files failed to download. They have been ignored, or old ones used instead.

    Any ideas how to rescue my system? UPD: In syslog I have the following errors:

        Jul 7 21:35:02 dmitriy-desktop kernel: [ 58.059141] nautilus[1991]: segfault at 7fc09d9bb700 ip 00007fc0abb5feb6 sp 00007fff6caa4cf8 error 4 in libc-2.15.so[7fc0aba24000+1b3000]
        Jul 7 21:35:39 dmitriy-desktop kernel: [ 94.356490] update-notifier[3358]: segfault at 7f6507611700 ip 00007f64cc221eb6 sp 00007fffbcc0dd88 error 4 in libc-2.15.so[7f64cc0e6000+1b3000]
        Jul 7 21:37:45 dmitriy-desktop kernel: [ 220.501859] nautilus[3629]: segfault at 7f9b9744c700 ip 00007f9b7c9c6eb6 sp 00007fff72e990f8 error 4 in libc-2.15.so[7f9b7c88b000+1b3000]

    UPD2: Ubuntu version is 12.04.
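
    For the Hash Sum mismatch part, a commonly suggested fix is to clear the apt list cache and re-download the indexes — a sketch:

        sudo rm -rf /var/lib/apt/lists/*
        sudo apt-get update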

  • Issues importing PST into Archive Mailbox Exchange 2013

    - by atomicharri
    I've completed a successful mailbox import request into the archive mailbox for a particular user. There are no errors to speak of, and the size of the archive mailbox has grown to the expected size after import (approximately 6GB). However, in OWA I can't see any of the mail folders inside the archive mailbox, only the Deleted Items and RSS Feeds folders. Yet if I run some cmdlets in the Exchange Shell to list the contents of the Archive Mailbox\Inbox folder, the full list of subfolders comes up, and if I do a mail item search on the archive mailbox, emails from those individual folders appear as search results! I just can't see the folders in the navigation pane and hence cannot browse any of my old emails. Any help would be appreciated.
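
    For reference, a hedged sketch of the kind of Exchange Management Shell check described above (mailbox name hypothetical):

        Get-MailboxFolderStatistics -Identity "jsmith" -Archive | Select-Object Name,FolderPath,ItemsInFolder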

  • Can I use Ubuntu One to sync data files between two remote computers

    - by Sleepy John
    I've got two computers, both running Ubuntu, with files in their home folders synced to Ubuntu One. I'd like to know if it's possible to make Ubuntu One automatically download changes that one computer has uploaded, into the equivalent data file on the other. Clarifying a bit further: I've installed Red Notebook on both computers, so each has its own /.rednotebook/data folder containing a series of .txt files corresponding to the monthly entries. These are synced, so any changes to those .txt files are uploaded to Ubuntu One. My question is: can I, and if so how do I, make Ubuntu One automatically download and replace those .txt files on the other computer after they've been updated and uploaded from the first computer? I did laboriously manage to download all the text files which had been uploaded from the first computer, one by one from Ubuntu One to the second computer, but what I want to do is automate this process, and that's where I'm stuck. I'm aware that things could get a bit complicated if both my computers were online at the same time, both simultaneously making different Red Notebook entries, so that's not the scenario I'm trying to cover. All I want to achieve is that whatever updates to the files have been uploaded by one computer will automatically be downloaded to the same-named files on the other computer as soon as that second computer comes online and detects that Ubuntu One holds matching but more recent synced files than the ones it has.

  • procfs and YouTube Flash video

    - by trideceth12
    Up until recently (about 3 months ago), ALL open Flash videos had deleted file handles in the procfs virtual folder for the Flash plugin; I could see them thus:

        ps x | grep flash
        cd /proc/#PROCESS#/fd
        ls -l
        cp #FILE# ~/

    This still works for the vast majority of Flash video, but some YouTube videos no longer keep this open file handle. My questions are: A) Why not? B) Where are these files now stored? C) How can I get at the file? Yes, I know I could probably get a browser plugin; it just annoys me that they are hiding these files, so I want to keep doing it the hard way.

  • What is the 64-bit Firefox Beta PPA?

    - by JamesTheAwesomeDude
    I recently discovered that my computer is 64-bit. I have backed up my Home folder and reinstalled Ubuntu. The reinstall wasn't nearly as painful as I thought. There is one thing that I can't quite seem to figure out: how do I get the 64-bit Firefox Beta build? I always get the Beta builds, but I want to take advantage of the 64-bit architecture of my computer. This page says that Mozilla has come out with a 64-bit version of Firefox, but I can't seem to find it. I do understand the ramifications of using a 64-bit browser, but I've decided to jump right in and do it anyway. (Flash and Java are already 64-bit, and who cares about Silverlight, since it's not for Linux anyway?) There's only one issue, and it's a big one: I can't find the 64-bit Beta PPA!!! (I really hate using .tar.gz files, but I'd be willing to put up with that as long as I could still access Firefox via the Launcher. Oh, speaking of which, I don't understand .tar.gz files. Once, I managed to run one (the Dropbox Beta build), but I have no idea whatsoever how to install them, as in: click on the icon and go.)
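
    For what it's worth, the Mozilla Team's Firefox Beta PPA serves the architecture matching your install, so on a 64-bit Ubuntu it should pull 64-bit builds — a hedged sketch, assuming the PPA carries your release:

        sudo add-apt-repository ppa:mozillateam/firefox-next
        sudo apt-get update
        sudo apt-get install firefox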

  • Unable to copy files previously extracted from archives created on a Mac, even after claiming ownership

    - by Maxim Zaslavsky
    I reinstalled Windows on my computer today, and backed up my music to a USB drive. Now I'm trying to copy the files onto my fresh Windows partition, but I'm unable to copy files that I obtained within my previous Windows installation from zip archives created on Macs. When I try to copy those previously-extracted files, I get an error saying that I need permission from S-1-5-21-...-1000 (a long SID). The first thing I tried was to take ownership of the files by setting my new user account as the owner, but that resulted in errors saying that I need permission from myself! Some Googling suggested antivirus interference, so I excluded the relevant folders from Microsoft Security Essentials, but the issue persists. For what it's worth, it seems that some program (so far I've only installed Chrome, Microsoft Security Essentials, and the latest Windows updates) created an empty folder named 601c8c7f0e0c03f725 at the root of my external USB hard drive. What gives?
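
    A hedged sketch of forcing ownership and resetting the ACLs from an elevated command prompt (path hypothetical):

        takeown /F "D:\Music" /R /D Y
        icacls "D:\Music" /reset /T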

  • IIS 7.5 default permission - is restriction needed?

    - by Caroline Beltran
    I am using IIS 7.5, and I do not need to explicitly specify permissions for my ISAPI application to execute. Additionally, the application can create subdirectories, and create and delete files, without me specifying permissions. Since I am using the default permissions, I checked whether web.config was safe from prying eyes over the web, and it can't be read, which is good. My app also creates some .log and .ini files which are also not viewable over the web. I did notice that .txt files are viewable. I really don't know how the default permissions allow my app to do so much. Is this safe, or do I need to lock things down? To be honest, I don't know what accounts to restrict. App details: my ISAPI has an 'allowed' entry in ISAPI and CGI Restrictions; the folder and subfolders containing my application have 'default' permissions set; the application pool is using 'classic' pipeline mode and no managed code; pass-through authentication is in use. Thank you for your time.
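
    If the immediate worry is the viewable .txt files, one hedged option is IIS request filtering in the site's web.config:

        <configuration>
          <system.webServer>
            <security>
              <requestFiltering>
                <fileExtensions>
                  <!-- deny direct requests for .txt files in this application -->
                  <add fileExtension=".txt" allowed="false" />
                </fileExtensions>
              </requestFiltering>
            </security>
          </system.webServer>
        </configuration>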

  • Windows 7 Recovery Console File Access Denied

    - by Ty Rozak
    Recently my computer crashed and was stuck in a boot loop, so I created a Windows Recovery CD and booted from that. When I use the command prompt in the recovery console, I cannot see any of my personal files or folders (such as my Users folder with My Documents). Is there a way to access these files? The only reason I need to fix the computer properly is to get these files off of it and onto a hard drive. Any other fix suggestions would be greatly appreciated. I have tried both system repair and system restore from the Recovery Console, but neither seems to work. Thanks.
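
    One thing worth checking: drive letters are often shifted inside the recovery environment, so the Windows partition may not be C: there. A hedged sketch (E: assumed to be an attached external drive):

        dir C:\
        dir D:\                              rem find which letter actually holds \Users
        xcopy D:\Users\*.* E:\Backup\ /E /H /C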

  • If using eMule, how to keep current downloading files while adding a hard drive?

    - by the searcher
    If there are still files downloading (ones that will need an extra two weeks, or an unknown amount of time, because they are rare files), but I need to use a new hard drive because no space is left on the current one, is there a way to use the new hard drive while keeping the existing downloads going? I ask because if we change the folder in eMule from G: to H:, then all existing downloads will disappear too... Update: I can move the completed files over to the new hard drive... but that is going to be a never-ending task (old hard drive gets full... move some... and repeat).
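
    One hedged workaround: close eMule, move the Incoming folder to the new drive, and leave a directory junction at the old path so eMule's configured folder still resolves (paths hypothetical, elevated prompt):

        move G:\eMule\Incoming H:\eMule\Incoming
        mklink /J G:\eMule\Incoming H:\eMule\Incoming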

  • How do I install Apache Tomcat 7?

    - by Anupesh
    After downloading Apache Tomcat 7, these steps should get it running. Open a terminal, go to the folder containing the downloaded tar.gz file, and untar it:

        tar -xzvf filename.tar.gz

    Move the extracted directory (e.g. apache-tomcat-7.0.32) into place:

        sudo mv apache-tomcat-7.0.32 /usr/local/Tomcat7

    To set environment variables for Tomcat 7, edit the setenv script:

        sudo nano /usr/local/Tomcat7/bin/setenv.sh

    and in the Nano editor add:

        export JAVA_HOME=/usr/lib/jvm/java-6-sun
        export CATALINA_OPTS="$CATALINA_OPTS -Xms128m -Xmx1024m -XX:MaxPermSize=256m"

    then hit Ctrl+X to save the file. To set the roles, username and password:

        sudo nano /usr/local/Tomcat7/conf/tomcat-users.xml

        <role rolename="manager-gui" />
        <role rolename="manager-script" />
        <role rolename="manager-jmx" />
        <role rolename="manager-status" />
        <user username="admin" password="admin" roles="manager-gui,manager-script,manager-jmx,manager-status" />

    Ctrl+X to save the file. If a port conflict gets in the way, open the server config and change the port number:

        sudo nano /usr/local/Tomcat7/conf/server.xml

    then change the connector port as you want.
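
    A quick hedged smoke test, assuming the paths above:

        sudo /usr/local/Tomcat7/bin/startup.sh
        curl -I http://localhost:8080/    # a Tomcat response here means the install works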

  • Default permissions for courier imap folders

    - by JoeCoder
    I'm using Courier IMAP. When a mail client creates a new folder, it's created on the filesystem with 640 permissions. I need it to be writable by the group, i.e. 660. I currently have IMAP_UMASK=007 in /etc/courier/imapd, but that's not enough. I'm not sure what else to try. Any ideas? I'm using Ubuntu Server 12.04. EDIT: I added a 50pt bounty to this. For an acceptable answer, I need a way to make it work from a package in a standard repo. If I download the source and compile it myself, it won't be automatically kept up to date with security fixes. If I don't find a better answer, I'll add code to the admin script to call another sudo-approved script to chmod -R the whole directory before every change, but that is kind of hack-ish.
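
    A minimal sketch of that fallback — a root-owned helper the admin script can invoke through sudo (script name, invoking user and Maildir path are all hypothetical):

        # /etc/sudoers.d/maildir-perms
        mailadmin ALL=(root) NOPASSWD: /usr/local/sbin/fix-maildir-perms

        # /usr/local/sbin/fix-maildir-perms
        #!/bin/sh
        # relax group permissions on every IMAP folder under the Maildir
        chmod -R g+rw /var/mail/Maildir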

  • Enable anonymous access to report builder in reporting services 2008

    - by ilivewithian
    I have a 2008 Reporting Services server installed on Windows Server 2003. I am trying to allow anonymous access to the Report Builder folder so that my users do not have to select the 'remember password' option when they log in, if they want to use Report Builder. All I have found so far is that I should be able to do this with the IIS manager, but that only seems to work for Reporting Services 2005. Reporting Services 2008 does not show up in the IIS manager, so enabling anonymous access must be hidden somewhere else. How do I enable anonymous access to Report Builder in Reporting Services 2008?
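
    For background: SSRS 2008 no longer runs inside IIS — it hosts its own HTTP listener, and authentication is configured in rsreportserver.config instead. A hedged look at the relevant section (true anonymous access is said to require a custom security extension):

        <Authentication>
          <AuthenticationTypes>
            <RSWindowsNegotiate />
            <RSWindowsNTLM />
          </AuthenticationTypes>
        </Authentication>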

  • How to batch rename files using bash

    - by Alex Popov
    I know there are lots of such questions, but I couldn't find one (or a combination of several) which describes the things I want to do. I think I need to use regular expressions, but I am not very good with that. I use zsh. I have a folder with files which I want to rename: I want the files challenge1.rb, challenge2.rb, challenge3.rb, etc. to be renamed to c1.rb, c2.rb, etc. Similarly, task1.rb and the like must be renamed to t1.rb, etc. sample_spec_c1.rb, sample_spec_c2.rb, etc. must be renamed to c1_spec.rb, c2_spec.rb, etc. So I guess I need some combination of regular expressions and iteration, but I don't know how to write the script.
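
    A hedged sketch using plain parameter expansion, which works in both zsh and bash (run from inside the folder):

        for f in challenge*.rb; do mv "$f" "c${f#challenge}"; done
        for f in task*.rb; do mv "$f" "t${f#task}"; done
        for f in sample_spec_c*.rb; do
          n="${f#sample_spec_}"          # sample_spec_c1.rb -> c1.rb
          mv "$f" "${n%.rb}_spec.rb"     # c1.rb -> c1_spec.rb
        done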

  • How can I display and log PHP errors on IIS7?

    - by Ben
    We're running PHP 5.2.5 on an IIS 7 server and we're having problems making PHP errors visible... At the moment, whenever we have a PHP error the server sends back a 500 error with the message "The page cannot be displayed because an internal server error has occurred." This might be a good setting for production websites, but it's rather annoying on a development server... ;-) I have tried configuring php.ini to display errors to the screen as well as log them to a specific folder, but it seems that the server intercepts all errors and prevents any handling by PHP... Does someone know what we have to do to make IIS display PHP errors on screen? Any links, tips or tutorials on the subject would be appreciated!
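
    A hedged sketch of the two settings that usually matter here. First, stop IIS 7 from replacing the response body with its own error page (web.config):

        <configuration>
          <system.webServer>
            <httpErrors existingResponse="PassThrough" />
          </system.webServer>
        </configuration>

    and second, in php.ini (log path hypothetical):

        display_errors = On
        log_errors = On
        error_log = "C:\inetpub\logs\php-errors.log"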

  • Windows does not detect any modems anymore [HSPA]

    - by Shenal Silva
    By accident, I uninstalled my ZTE connect manager. When I re-install it, the virtual CD drive is detected, but the drivers are not. When I point it to the folder containing the relevant drivers (.sys files), it still doesn't detect them. Then I tried a modem of a different brand (Huawei), and the same problem persists. As I understand it, this is not a brand-specific driver issue, as it does not detect modems (dongles) of any brand; when I plugged in a different USB device (a pen drive), the driver was detected and it works perfectly. Please help me with this issue. I have tried System Restore, but it doesn't work. I want a repair that works without re-installing Windows. [Edit] I tried Karan's suggested method with USBOblivion; it didn't work. Now I'm sure that the issue does not stem from the registry.

  • Install system-wide PEAR on Debian Lenny

    - by artvolk
    Good day! I've installed PEAR on Debian Lenny using apt-get install php-pear; it was installed in /usr/share/php. When I try to install anything using pear install <package>, a PEAR folder is created under the current user's home directory and a separate copy of PEAR is installed there. I ended up installing a local copy of PEAR for one of the users like this: http://kuziel.info/log/archives/2006/04/01/Installation-of-local-PEAR-repository Is there a way to tell pear to install packages to the system-wide repository in /usr/share/php? What is the recommended way of using a system-wide PEAR copy? Thanks in advance!
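
    A hedged guess at the cause: run as an unprivileged user, pear falls back to a per-user configuration. Installing as root, and checking which config wins, often sorts this out:

        sudo pear install Package_Name       # install into the system-wide php_dir
        pear config-get php_dir              # where packages would currently land
        pear config-show                     # look for a per-user config file overriding the system one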

  • ODI 11g - Scripting a Reverse Engineer

    - by David Allan
    A common question is how to script the reverse engineer using the ODI SDK. This follows on from some of my posts on scripting in general and accelerated model and topology setup. Check out this viewlet here to see how to define a reverse engineering process using ODI's package. Using the ODI SDK, you can script this up using the OdiPackage and StepOdiCommand classes as follows:

        OdiPackage pkg = new OdiPackage(folder, "Pkg_Rev" + modName);
        StepOdiCommand step1 = new StepOdiCommand(pkg, "step1_cmd_reset");
        step1.setCommandExpression(new Expression("OdiReverseResetTable \"-MODEL=" + mod.getModelId() + "\"", null, Expression.SqlGroupType.NONE));
        StepOdiCommand step2 = new StepOdiCommand(pkg, "step2_cmd_reset");
        step2.setCommandExpression(new Expression("OdiReverseGetMetaData \"-MODEL=" + mod.getModelId() + "\"", null, Expression.SqlGroupType.NONE));
        StepOdiCommand step3 = new StepOdiCommand(pkg, "step3_cmd_reset");
        step3.setCommandExpression(new Expression("OdiReverseSetMetaData \"-MODEL=" + mod.getModelId() + "\"", null, Expression.SqlGroupType.NONE));
        pkg.setFirstStep(step1);
        step1.setNextStepAfterSuccess(step2);
        step2.setNextStepAfterSuccess(step3);

    The biggest leap of faith for users is getting to know which SDK classes have to be used to build the objects in the design; using StepOdiCommand isn't necessarily obvious, but once you see it in action it is very simple to use. The above snippet uses an OdiModel variable named mod; it's a snippet I added to the accelerated model creation script in the post linked above.

  • Php.ini: Local Value vs Master Value (safe_mode, specifically)

    - by Philipp Lenssen
    I can change php.ini values on my Apache and restart to see them in effect via a script showing phpinfo(). However, one setting is causing problems: safe_mode. I set it to "off" in php.ini, but phpinfo() still shows it as Local Value: On, Master Value: Off. How can I find out which local value is overriding the master value? There's no .htaccess directive of that kind in the httpdocs folder in question... (I already checked all the files phpinfo() lists as additional .ini files parsed, but safe_mode is not set in them.)
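
    A hedged place to look next: per-vhost or per-directory overrides in the Apache configuration itself, which show up as Local Values — for example:

        grep -ri "safe_mode" /etc/apache2/ /etc/php5/        # paths assume a Debian/Ubuntu-style layout
        grep -riE "php_(admin_)?(flag|value)" /etc/apache2/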

  • Unable to understand a line in Google CodePreview's README

    - by Masi
    The README is in Google's codepreview, which uses Google App Engine: "To run the app locally (e.g. for testing), download the Google App Engine SDK from http://code.google.com/appengine/downloads.html. You can then run the server using make serve". I ran make serve in my terminal after moving Google-appengine.app to my Applications folder in OS X Leopard. I get: make: *** No rule to make target `serve'. Stop. How can I run make serve to start the server for Google App Engine?
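
    A hedged note: make only finds targets in the Makefile of the current directory, so make serve needs to be run from the codepreview source checkout rather than anywhere else. If that still fails, the SDK's dev server can be started directly:

        cd /path/to/codepreview     # the checkout containing the Makefile and app.yaml (path hypothetical)
        make serve
        dev_appserver.py .          # equivalent fallback using the App Engine SDK directly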
