Search Results

Search found 5758 results on 231 pages for 'contents'.

  • Cookies blocked by router?

    - by Martin wiboe
    Hello, my friend has a D-Link DI-524 router that she uses for her home broadband. It's a pretty vanilla setup with the standard firewall settings, DHCP enabled, etc. Recently, however, she has experienced something strange: cookies have stopped working on every computer on her LAN, whether using Firefox 3.5 or IE8. I tried viewing the HTTP traffic with Fiddler2, and the requests come through fine (mind you, Internet browsing still works flawlessly), but whenever a website tries to set a cookie using the "Set-Cookie:" header, my computer sees that line as "Set-*ookie:" with the cookie contents removed. I have never seen anything like this - do you have any idea? Regards, Martin
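
    A quick way to confirm the mangling happens outside any browser is to fetch the raw headers from the command line; this is just a sketch against a placeholder URL:

        # dump only the response headers; 'ookie' deliberately matches both
        # "Set-Cookie" and the mangled "Set-*ookie"
        curl -sD - -o /dev/null http://example.com/ | grep -i 'ookie'

    If curl shows "Set-*ookie:" too, something on the path (the router being the prime suspect) is rewriting the stream itself.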

  • Disabling kextcache on 10.5.8 and 10.6.3

    - by Jeff Kelley
    We use Radmind to manage our Mac OS X loadsets and, as such, often run into difficulty when new OS releases come out due to, among other things, updated kernel extensions. The workflow in the past (OS revisions <= 10.4) was to delete the kernel extension cache, update the extensions, and then reboot; that worked just fine, as the system would re-create missing caches on boot. In Leopard, you have to delete the caches after replacing the kernel extensions with their new versions, because the system starts re-creating the caches automatically as soon as you replace the extensions; the only way to ensure you don't have invalid extensions cached is to delete the cache just before rebooting. I'm looking for a way to prevent the kernel extension cache from being re-created until the next reboot. If you modify the contents of /System/Library/Extensions/, kextcache starts up automatically. I've looked through /System/Library/LaunchDaemons/ and other places, but I can't find whatever it is that's starting kextcache. Any ideas?
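
    For reference, the deletion step looks roughly like this; the cache paths are assumptions from memory of where these releases keep them:

        # 10.5: the bundled kext cache (assumed path)
        sudo rm -f /System/Library/Extensions.mkext
        # 10.6: the kext cache directory (assumed path)
        sudo rm -rf /System/Library/Caches/com.apple.kext.caches

    The open question is what notices the change to /System/Library/Extensions/ and relaunches kextcache afterwards.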

  • Problem running mercurial against symlinked .hgrc file under Cygwin/Windows 7

    - by emptyset
    This is not a question about handling symlinks in the Mercurial repository. I have a setup at work where I keep my dotfiles in a separate directory (.configuration) so I can sync them between Cygwin/Windows and Linux, and then use symlinks instead of dotfiles in the home directory. So I have the symlink ~/.hgrc -> .configuration/.hgrc in my home directory. After setting this up, Mercurial complains thus:

        $ hg st
        hg: config error at C:\Users\aaf\.hgrc:1: '!<symlink>ÿþ.configuration/.hgrc'

    Removing the symlink and replacing it with the actual file works, so the contents of the .hgrc file are not at fault. I can live with that, I suppose, but I'd like to know why this happens. All the other tools I've configured the same way work great with symlinked dotfiles.
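
    For context, the setup is nothing more exotic than a Cygwin symlink, created like this (a sketch; paths as in my home directory):

        # from a Cygwin shell: replace the dotfile with a link into .configuration
        ln -s .configuration/.hgrc ~/.hgrc

    The '!<symlink>ÿþ' in the error message is literally what a default Cygwin symlink file contains on disk, which suggests the Windows build of Mercurial is reading the link file directly rather than resolving it the way Cygwin-aware tools do.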

  • Can you recover from a backup with bad blocks?

    - by Macbook-Recovery
    The hard drive in my MacBook recently gave up while I was using it on a plane (dual prop, lots of vibration, unfortunately). I have a backup of its contents from a few weeks ago, but there are files not included in it that I would like to recover. Right now the drive is plugged into my MacBook via USB; Snow Leopard recognizes it but can't mount it, so tools like DiskWarrior and TechTool don't work. I started cloning it with Data Rescue 3, but after 7 hours of activity (20% of the way through the drive), it has copied 130 GB but reports all of the data as "bad blocks". My question is this: is any data recoverable if the clone is composed entirely of bad blocks?
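
    In case comparing tools helps: GNU ddrescue is commonly used for cloning failing disks, and its log file makes the good/bad split explicit and lets you resume. A sketch, with the device names assumed (verify them with diskutil list first):

        # pass 1: copy the easy areas, skipping bad ones (-n), raw devices forced (-f)
        sudo ddrescue -f -n /dev/rdisk2 /dev/rdisk3 rescue.log
        # pass 2: go back and retry the bad areas a few times
        sudo ddrescue -f -r3 /dev/rdisk2 /dev/rdisk3 rescue.log

    If even that reports nothing but bad blocks across the whole surface, the drive itself, rather than isolated sectors, has probably failed.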

  • How do I build a DIY NAS?

    - by Kaushik Gopal
    I'm looking for good, detailed instructions on how to build a DIY NAS (Network Attached Storage). I'm planning on doing it cheap (old PC config + open source software). I would like to know:
    - What hardware I need to build one
    - What kind of hard-drive setup I should use (like RAID), plus any other relevant hardware advice (power supply, motherboard, etc.)
    - What software I should run on it, both the OS and the software to manage the contents effectively, so that the NAS is recognizable and accessible on my network, my Windows computers will recognize it (when using Linux distros), and I can access my files from outside my network
    I already did a fair bit of searching and found the links below, but while they are great, they delve more into the hardware side. I'm looking for more instructions on the software side (a sketch of the Windows-visibility piece follows below).
    - Ubuntu: Setting up a Home NAS
    - DIY NAS Smackdown
    - How to Configure an $80 File Server in 45 Minutes
    - FreeNAS
    - Build a NAS Device With an Old PC and Free Software
    - Build Your Own NAS Device
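
    On the software side, the piece that usually makes a Linux box visible to Windows machines is Samba; a minimal share definition might look like this (share name and path are placeholders):

        # /etc/samba/smb.conf -- a minimal guest-accessible share (sketch)
        [nas]
           path = /srv/nas
           read only = no
           guest ok = yes

    After editing, restart the smbd service and the share should show up under the machine's name in the Windows network browser.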

  • Archiving old, outdated hard drives

    - by Calvin
    I have a number of old hard drives that I've decided to throw out. Before I do, I'd like to keep the contents of the drives intact. I tried using the ISO format for archiving, but its major problems are that it loses file attributes and can't create directory trees more than 8 levels deep. The drives use a variety of file systems - FAT, NTFS, ext2, ext3, and HFS - and I'd like to archive them without any loss of information.
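
    One approach that sidesteps every filesystem quirk is a raw image per drive: attributes, deep directories, and the filesystem itself are preserved because you keep the bytes rather than the files. A sketch, device name assumed:

        # image the whole drive, then compress it; the image can later be
        # loop-mounted to get the files back
        sudo dd if=/dev/sdb of=old-drive.img bs=4M
        gzip old-drive.img

    The trade-off is size: free space gets archived too, unless you zero it beforehand.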

  • Syntax error in apc.ini: unexpected '='

    - by Ashley
    I installed APC on Ubuntu 10.04 and it seems to be working fine, but I'm seeing this error in my Apache error.log:

        PHP: syntax error, unexpected '=' in /etc/php5/apache2/conf.d/apc.ini on line 2

    The contents of the file are:

        $ cat /etc/php5/apache2/conf.d/apc.ini
        extension=apc.so
        apc.enabled="1"
        apc.shm_segments="1"
        apc.shm_size="192"
        apc.num_files_hint="1024"

    I have also tried it without the quotes (") around the values and get the same error. I've looked at loads of the tutorials on installing APC that mention apc.ini, and they all seem to use one of the two syntax formats I have tried. I'd appreciate any ideas. Update: this still causes it:

        $ cat /etc/php5/apache2/conf.d/apc.ini
        extension='apc.so'
        apc.enabled='1'
        apc.shm_segments='1'
        apc.shm_size='192'
        apc.num_files_hint='1024'

    I changed the file to just:

        $ cat /etc/php5/apache2/conf.d/apc.ini
        extension=apc.so

    and it still happens (there's no line 2 in the file now!). I'm assuming a /etc/init.d/apache2 reload is sufficient to pick up the new config - is that my mistake?
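
    Given that the error survives even a one-line file, it's worth confirming which ini files the running PHP actually parses - a stale copy of apc.ini elsewhere would explain it. A quick check (note the CLI can load different files than mod_php, so a phpinfo() page is the authoritative view):

        # list every ini file the CLI PHP parses
        php --ini
        # a full restart, rather than a reload, rules out a stale Apache worker
        sudo /etc/init.d/apache2 restart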

  • X forwarding over SSH from Mac to a Linux box

    - by Checkers
    I need to run Mac applications on a remote Mac machine and display them on a local Linux machine's X server (a lot of articles on the Internet seem to detail how to do it the opposite way).

        $ ssh -X mac-box
        $ cd /Developer/Applications/Xcode.app
        $ ./Contents/MacOS/Xcode
        Sat Oct 3 20:41:26 mac-box.local Xcode[15634] <Error>: kCGErrorFailure: Set a breakpoint @ CGErrorBreakpoint() to catch errors as they are logged.
        _RegisterApplication(), FAILED TO establish the default connection to the WindowServer, _CGSDefaultConnection() is NULL.
        ^C

    My $DISPLAY variable appears to be empty. What should it look like so that forwarding works correctly? Can I run OS X applications this way at all?
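
    A quick way to separate a forwarding problem from an application problem is to check whether SSH establishes a display at all; a sketch:

        # with working forwarding, the remote side gets DISPLAY set to
        # something like localhost:10.0 (-Y requests trusted forwarding)
        ssh -Y mac-box 'echo $DISPLAY'

    Note the error mentions the WindowServer rather than X11: Xcode is a native Aqua application, so even with a working DISPLAY it talks to the Mac's WindowServer. Only X11 applications can be forwarded this way.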

  • IIS FTP service - download timeouts and restarts getting the data twice

    - by accel229
    We have an IIS FTP site on a Windows Server 2003 x64 machine. The Application Layer Gateway service is disabled (so http://support.microsoft.com/kb/931130 does not apply), and the Windows Firewall service is disabled as well. The connection timeout for the FTP site (there is only one) is set to 1,200 seconds = 20 minutes. An external client can connect to the site, list directory contents, and download small files. When a client downloads a large file (e.g., a download that runs for 3 minutes - well under 20 minutes, but relatively long), the server sends all the data, then the connection times out. The client issues REST / RETR commands attempting to restart the download after the last byte (which I believe should succeed and receive exactly 0 bytes), but the server behaves as if the client tried to restart after byte 0 - it sends the entire file all over again. Any ideas on how to fix this?
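
    To take the client out of the equation, the restart behaviour can be reproduced by hand: curl resumes FTP transfers from an explicit offset by sending REST before RETR (host, path, and offset below are placeholders):

        # ask for the file starting at byte 104857600; a correct server returns
        # only the tail, a broken one re-sends the file from byte 0
        curl -C 104857600 -o tail.bin ftp://server/path/large.bin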

  • What's the quickest way to install Windows 7?

    - by SaultDon
    I am wondering what the quickest/fastest way to install Windows 7 would be. I've read that you can make a bootable USB stick with UNetbootin, or copy the ISO contents to a separate partition/hard drive and boot from there to install. Then I saw a method using ImageX to apply the needed files directly onto a new partition, which can then be booted from; it reportedly takes ~7 minutes plus ~5 minutes for the initial boot. I haven't tried it yet, but I would like to know if anyone knows of anything faster. If you could provide step-by-step instructions, that would be great - the ImageX method comes with a good tutorial, for example.
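
    For reference, the heart of the ImageX route is a single apply step run from WinPE; a sketch with drive letters assumed (they vary in WinPE):

        rem lay the install image from the Windows 7 media onto the target volume
        imagex /apply D:\sources\install.wim 1 C:\
        rem write the boot files so the volume boots on its own
        bcdboot C:\Windows /s C: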

  • Pause/play AJAX on particular tabs in firefox

    - by bguiz
    Hi, I want to know if there is some method to disable AJAX on particular tabs within Firefox and re-enable it later. My concern is that I have metered bandwidth and need to conserve my usage, but I also like to leave several Gmail tabs open in the background. It would be great if I could just hit a "Pause AJAX" button to stop the contents of that tab from sending or receiving anything, and then later hit a "Play" button when I want it to start doing its thing again. Any suggestions?

  • Digest authentication not working: endless cycles of asking for user/pass

    - by bcmcfc
    I'm trying to set up my SVN repository for remote access. In doing so, I have some settings in Apache's dav_svn.conf file. When navigating to hostname/svn, or using TortoiseSVN to do the same, it prompts for the user name and password as expected. However, when I enter the correct user name and password that were set in the password file linked to by AuthUserFile, it just asks for the credentials again. I think I'm probably missing something simple. The server is running Ubuntu Server 9.10, and accessing SVN remotely does currently work if the authentication lines of dav_svn.conf are commented out. These are the contents of the dav_svn.conf file:

        <Location /svn>
          DAV svn
          SVNPath /home/svn/repo
          AuthType Digest
          AuthName "Subversion Repository"
          AuthDigestDomain /svn/
          AuthUserFile /etc/svn_authfile
          Require valid-user
        </Location>
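
    One thing worth double-checking with Digest authentication: the password file must be created by htdigest (not htpasswd), and the realm passed to htdigest has to match AuthName exactly - otherwise every correct password is silently rejected, which looks exactly like this endless prompt. A sketch using the paths above:

        # create/update a digest user; "Subversion Repository" is the realm
        sudo htdigest -c /etc/svn_authfile "Subversion Repository" myuser

    (-c creates the file and should be omitted when adding further users.)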

  • Vanilla TeX Live 2009 on Ubuntu

    - by reprogrammer
    I installed TeX Live 2009 by following the instructions at http://www.tug.org/texlive/quickinstall. Then, to make my local TeX Live installation work with the Ubuntu package management system, I followed the instructions at http://www.tug.org/texlive/debian.html. That is, I performed the following steps:

        $ sudo aptitude install equivs
        $ mkdir /tmp/tl-equivs && cd /tmp/tl-equivs
        $ equivs-control texlive-local
        $ # replaced the contents of texlive-local with http://www.tug.org/texlive/debian-control-ex.txt
        $ equivs-build texlive-local
        $ sudo dpkg -i texlive-local_2009-1~1_all.deb

    However, when I go to install Kile through the Ubuntu package management system, it requires me to install a lot of dependencies that are already provided by my texlive-local package. Does anyone have a suggestion to fix this problem?
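
    It may help to verify that the dummy package actually advertises the virtual packages kile depends on; a quick check (package names as above):

        # what does the installed dummy package claim to provide?
        dpkg -s texlive-local | grep -i '^Provides'
        # which texlive packages does kile actually want?
        apt-cache depends kile | grep -i texlive

    If something kile depends on is missing from the Provides line, the control file needs that name added and the package rebuilt and reinstalled.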

  • Streaming video from a point-and-shoot camera that doesn't support it

    - by egasimus
    I have a Canon IXUS 120is (PowerShot SD940) - a nice digital camera that's a couple of years old. It records fairly decent video but, alas, can't function as a webcam - and I need to stream video over the Web. I've installed CHDK on it, and while CHDK is quite flexible, it doesn't seem to provide a solution to my problem. I suppose the video footage is written to the SD card in real time - is there a hack that allows me to monitor the file as it is being written and broadcast its contents over the Internet? Perhaps connecting the camera's card slot to my laptop's card reader via SDIO? I'm running Windows, but I'm roughly familiar with Linux; another question suggested a file-to-/dev/video driver - do such tools exist?

  • Mac OS X DVD ripping speed

    - by SlimSCSI
    I am using vobcopy (installed via MacPorts) to rip DVDs on a Mac. I have been doing this for a while on Linux with no problems; on the Mac, however, it is VERY slow. I am guessing the DVD drive is being limited to 1x in order to keep noise and power consumption down during playback. Is there a way to override this? Update: it is MUCH slower than 1x - it has taken about an hour to copy 300 MB. Notes: while I appreciate all suggestions, I am not looking for "Have you tried HandBrake?" - I am looking for a way to copy the contents of a DVD, not transcode them. Also, I am launching vobcopy from an AppleScript that is executed on DVD insertion, so a GUI solution is not desirable.

  • How can I repair the boot loader on my laptop?

    - by zbalata
    I had removed the hard drive from a Dell laptop and accessed it through an external HD enclosure on another computer. After returning it to the Dell laptop, however, the laptop will no longer boot. The PC came pre-installed with Windows 7, and I do not have an installation disc. None of the contents of the original install have been removed or modified. If I use another laptop running Windows 7 to create a repair/recovery disc, will I be able to use it on the Dell to repair the boot sector? How can I repair the bootmgr? It's frustrating knowing there's a perfectly good operating system there that won't boot. Thanks for your time!
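
    For what it's worth, a repair disc made on another Windows 7 machine of the same 32/64-bit flavour boots into the standard recovery tools, and from its command prompt the usual boot-repair sequence (nothing Dell-specific) is:

        rem rewrite the MBR, then the partition boot sector, then rescan for installs
        bootrec /fixmbr
        bootrec /fixboot
        bootrec /rebuildbcd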

  • How can I tell if ZFS (zfs-fuse) dedup/compression is applied to a particular file?

    - by asari
    I have a ZFS-formatted partition using zfs-fuse on Linux (Ubuntu). I had used it for a while and then enabled dedup and compression on it (zfs set compression=on / zfs set dedup=on). Now I think I have some files that are deduped and compressed, and some that are not yet, which sometimes confuses me. For example, the following command consumes almost 4 GB of my ZFS storage:

        cp oldfile.4GB newfile.4GB

    ...while this one consumes almost zero:

        cp newfile.4GB newfile.4GB.2

    This is because the old file is not yet compressed, so dedup doesn't happen, I think. My idea is: if I can find the old files that are not yet deduped/compressed, I can batch copy/rename/remove them to eliminate the duplication and redundancy. But how can I check that? I know that re-copying the whole contents of my storage should work (even better when checking the timestamp of each file), but I'd be happier if I had a zfsstat-like tool that shows these properties per file.
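
    There is no per-file flag I know of, but two rough checks can narrow things down; a sketch (dataset names assumed, and zfs-fuse may not expose every property):

        # pool/dataset-level ratios only
        zfs get compressratio tank/data
        zpool get dedupratio tank
        # per-file heuristic: allocated blocks (first column) vs. apparent size
        ls -ls bigfile

    A file written before compression was enabled should show allocated space close to its apparent size; a compressed one shows noticeably less. How faithfully zfs-fuse reports allocation through FUSE is an assumption worth testing on a known file first.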

  • Recovery of Pinnacle Studio Project Files

    - by seanieb
    My external hard drive had some sort of failure a few months ago, but I was able to recover my files using a data recovery program. However, my Pinnacle Studio project files are not being recovered as they were before: each project comes back as a directory with subdirectories and files inside it. I have tried several different recovery programs, and they all recover the projects as directories. Each project directory contains one file called README.TXT:

        WARNING: This directory contains the descriptive data of the project,
        split into various subdirectories and files for better access.
        DO NOT EDIT, ADD, CHANGE OR MODIFY ANY OF ITS CONTENTS!

    This gives me hope that I could somehow just turn the directory back into a .stu Pinnacle Studio project file. How would I go about doing this? Or is there another way to solve this problem?

  • Not able to access a folder in Windows 7, and not able to see it in Ubuntu

    - by Rohit
    I have four partitions on my hard disk. Partition C has Windows XP installed and partition G has Windows 7 installed; Ubuntu 10.10 is also installed, probably in F. Partitions C and G are NTFS. When I boot into C, XP loads, but when I click on the C drive in My Computer, it displays "Access is denied". Windows 7 displays the folder tree of C, but when I try to open a folder, I am not able to view its contents - the same "Access denied" error. When I try to view the C partition using Ubuntu, the entire partition is not visible. I tried the following commands to take ownership of the C drive:

        takeown /f C:
        cacls C: /G Rohit:F

    but still I am not able to get rid of "Access Denied". I tried the commands again from Windows 7 safe mode, but the problem persists: both commands return "Successful", but nothing changes.
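
    One thing to note: both commands above act only on the drive root, not on everything beneath it. A recursive variant might behave differently (icacls supersedes cacls on Windows 7; Rohit is the account name from the question); run from an elevated prompt:

        rem take ownership of the whole tree, then grant full control recursively
        takeown /f C:\ /r /d y
        icacls C:\ /grant Rohit:F /t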

  • Excel 2010: if( , , "") not treated the same as blank for pivot table group by date

    - by Confused
    I'm trying to group by date in an Excel 2010 pivot table. The column with the dates (i.e., the one I want to group by) should hold the later of 2 other columns' dates if neither is blank, and be blank otherwise - i.e., a formula like:

        =IF(AND(A4 <> "", B4 <> ""), MAX(A4, B4), "")

    Normally, this "" in the IF() formula acts the same as an empty cell, but in this case it is preventing me from grouping by date in the pivot table. If I filter the date column by (Blanks) and then clear the contents of all those cells, the pivot table does group by date OK - i.e., "" is not being treated the same as an empty cell.
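
    That behaviour is by design: a formula always produces a value, and "" is a zero-length text string rather than an empty cell, so the pivot table sees mixed text and dates and refuses to group. One workaround (a sketch, not the only option) is to return a far-future sentinel date instead, so every cell holds a genuine date:

        =IF(AND(A4<>"", B4<>""), MAX(A4, B4), DATE(9999,12,31))

    Grouping then works, and the 9999 bucket can simply be unticked in the date field's filter.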

  • Working of the trashcan utility on a Tru64 Unix server, or any other utility?

    - by RBA
    Hi, I used the mktrashcan command like this:

        mktrashcan deleteMe1 trashcan/

    Then I deleted all the contents of the deleteMe1 directory (rm -rf *). What happened is that only the two text files directly inside deleteMe1 (deleteMe2.txt, deleteMe3.txt) were moved into the trashcan folder; the subdirectories, and the files inside them, were nowhere to be found! Isn't there some way to make everything that is deleted move to the trashcan directory in the same way? Or is there any other utility that can perform the same task in a more advanced way? The directory was set up like this:

        mkdir deleteMe1
        mkdir deleteMe1/deleteMe2
        mkdir deleteMe1/deleteMe3
        touch ./deleteMe1/deleteMe2/deleteMe4.txt
        touch ./deleteMe1/deleteMe2/deleteMe5.txt
        touch ./deleteMe1/deleteMe3/deleteMe6.txt
        touch ./deleteMe1/deleteMe3/deleteMe7.txt
        touch ./deleteMe1/deleteMe2.txt
        touch ./deleteMe1/deleteMe3.txt

    Thanks.
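
    A guess at an explanation: if mktrashcan attaches a trashcan per directory rather than recursively, that would match what you saw - only files deleted directly from deleteMe1 were caught. Attaching the same trashcan to each subdirectory (same argument order as in the question) might then catch their files too:

        mktrashcan deleteMe1/deleteMe2 trashcan/
        mktrashcan deleteMe1/deleteMe3 trashcan/

    Even so, the directory structure itself would presumably not be preserved, only files deleted out of attached directories.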

  • Additional mailboxes in Outlook 07 SP2

    - by Nick
    I have had my Outlook 2007 open additional mailboxes via the advanced account settings. After updating to Office SP2, the list of emails in the additional account still displays, but I get a message for each mailbox item: "This item cannot be displayed in the Reading Pane. Open the item to read its contents." After double-clicking the message, I get a small error box which displays just "Cannot open this item. Unknown Error." Also, if I try to re-add the mailbox in the Advanced tab of my account settings, I get the error message "The name cannot be resolved. The connection to Microsoft Exchange is unavailable. Outlook must be online or connected to complete this action." However, the status bar indicates "Online with Microsoft Exchange", and I can both send and receive emails from my primary account. What could be going wrong?

  • 127.0.0.1 is working but localhost is not working on Mac XAMPP

    - by Ganim
    I installed XAMPP on my Mac months ago and it was working great. Now I get "Test Page For Apache Installation" when I try to browse localhost, and localhost/xampp is not found. But when I browse 127.0.0.1, it works just as localhost used to. I double-checked that my /etc/hosts file has the line "127.0.0.1 localhost" and that it is not commented out. Also, when I browse localhost/~username/test.php, I get the contents of test.php:

        <?php echo 'ganim'; ?>

    but if I browse 127.0.0.1/~username/test.php, I get:

        ganim

    What could have changed the handling of localhost, and how can I get localhost working again?
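
    The symptoms (a default test page, and PHP served raw) suggest the name localhost is being answered by a different Apache than 127.0.0.1 is - possibly the OS's built-in one. Two quick checks, with the standard XAMPP path assumed:

        # how does the system resolver actually resolve the name?
        dscacheutil -q host -a name localhost
        # which vhosts does the XAMPP Apache itself know about?
        sudo /Applications/XAMPP/xamppfiles/bin/apachectl -S

    If localhost resolves to the IPv6 address ::1 while XAMPP only listens on IPv4, another listener can grab those connections - that's an assumption to verify, not a diagnosis.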

  • Increment numbers in page headers in Microsoft Word

    - by Imray
    In Microsoft Word, I am laying out a process in steps. Pretty much every page is a new step that begins with a header like:

        3. Drive the body to a secure location

    I would like the numbers to increment automatically, particularly if I later decide to add a new step somewhere in the middle. Does anyone know the simplest way to achieve that? I already have a working Table of Contents, and I'd prefer not to do something that would mess with it, if that can be avoided.
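
    One approach that leaves the heading styles (and therefore the Table of Contents) untouched is a SEQ field in front of each step title; a sketch:

        { SEQ Step \* ARABIC }. Drive the body to a secure location

    Press Ctrl+F9 to insert the field braces (typing them by hand doesn't work), fill in the field text, then select all and press F9 to renumber everything after inserting a new step. "Step" here is just an arbitrary sequence name.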

  • Viewing a large field in a query in SQL Server Management Studio with zoom?

    - by smithym
    Hi there, can anyone help? I am using SQL Server Management Studio (SQL Server 2008) to run queries, and some of the fields that come back are varchar(max), for example, and hold a lot of text. Is there a zoom feature to open a window and show me the cell's contents with vertical and horizontal scrollbars? I remember there was one - I thought it was F2, but I must be mistaken, as that doesn't work. Right now I have to scroll horizontally within the field, and it's really difficult to see everything. Also, some of the fields contain newline codes, so it would be great if the zoom feature displayed the text using those line breaks. Does anybody know how to do this?
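
    I don't know of a built-in zoom in the SSMS 2008 grid, but one common workaround is to surface the column as XML: grid cells of XML type render as links that open the full value in a separate, scrollable document window, line breaks intact. A sketch (table and column names are placeholders, and contents containing "]]>" would still break the conversion):

        SELECT CAST('<t><![CDATA[' + big_column + ']]></t>' AS XML) AS zoomable
        FROM dbo.the_table;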
