Search Results

Search found 27946 results on 1118 pages for 'output buffer empty'.


  • How to migrate Outlook Express mail rules?

    - by ronwest
    I have a home computer that only had a 15 GB C: drive, and ran out of space with all the Microsoft Updates, etc., that keep coming down. So I fitted a 160 GB drive as a C: drive and altered the drive jumpers to make the old C: drive into a slave D: drive, to save migrating documents, etc. I've installed a clean copy of Windows XP SP3 and reassigned the new Outlook Express' mailstore path to point to the old mailstore folder that now has a D: drive letter - and it all works OK. However, my extensive list of mail rules has not been transferred to the new OE and I have not been able to identify how the rules are stored. To find out, I added a new rule to the new OE, exited OE, then searched the whole computer (including hidden/system files) for files altered around the time I added the rule. I hoped I could just overwrite a new empty file with an old one. But the only files that seem to have changed are Windows system-level files and some bits and pieces in the Windows\PreFetch sub-folder. None of them can be opened as XP has them locked, and none of them have names that have anything to do with email or rules. Does anyone know of any way of migrating OE rules, or do I have to re-enter them by hand? Thanks!
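
    A possible lead, sketched rather than confirmed: Outlook Express keeps message rules in the registry (per identity, under a key such as HKCU\Identities\{GUID}\Software\Microsoft\Outlook Express\5.0\Rules\Mail), not in the mail store, which would explain why a file search finds nothing. Since the old installation survives on D:, something along these lines might recover them - every path here is an assumption to verify:

       REM sketch: load the old profile's registry hive from the old drive,
       REM export the identity branch (which holds the OE rules), then unload
       reg load HKU\OldProfile "D:\Documents and Settings\YourName\NTUSER.DAT"
       reg export HKU\OldProfile\Identities oe-rules.reg
       reg unload HKU\OldProfile

    The identity GUID on the new install will differ, so the exported .reg file would need its paths edited to the new GUID before merging.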

    Read the article

  • make file readable by other users

    - by Alaa Gamal
    I was trying to share one session across all my subdomains. Subdomain number one, auth.site.com/session_test.php:

       session_set_cookie_params(0, '/', '.site.com');
       session_start();
       echo session_id().'<br />';
       $_SESSION['stop']='stopsss this';
       print_r($_SESSION);

    Subdomain number two, anscript.site.com/session_test.php:

       session_set_cookie_params(0, '/', '.site.com');
       session_start();
       echo session_id().'<br />';
       print_r($_SESSION);

    Now when I visit auth.site.com/session_test.php I get this result:

       06pqdthgi49oq7jnlvuvsr95q1 Array ( [stop] => stopsss this )

    And when I visit anscript.site.com/session_test.php I get this result:

       06pqdthgi49oq7jnlvuvsr95q1 Array ()

    The session ID is the same, but the session is empty. After two days of failed tries, I finally found the problem: file permissions. The session file is not readable by the other user. The session file on my server:

       -rw------- 1 auth auth 25 Jul 11 11:07 sess_06pqdthgi49oq7jnlvuvsr95q1

    When I run this command on the server:

       chmod 777 sess_06pqdthgi49oq7jnlvuvsr95q1

    the problem is fixed! The file becomes readable by (anscript.site.com). So, how do I fix this properly? How do I set the default permissions on session files? These are the permissions of the sessions directory:

       Access: (0777/drwxrwxrwx) Uid: ( 0/ root) Gid: ( 0/ root)
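
    One approach, sketched under the assumption that the two vhosts run as the users auth and anscript: put both users in a common group and point PHP at a setgid, group-writable session directory, instead of loosening permissions on individual session files:

       # create a shared group and a session directory owned by it
       groupadd sharedsess
       usermod -a -G sharedsess auth
       usermod -a -G sharedsess anscript
       mkdir -p /var/lib/php/shared_sessions
       chgrp sharedsess /var/lib/php/shared_sessions
       chmod 2770 /var/lib/php/shared_sessions   # setgid keeps the group on new files

    Both subdomains' PHP configs would then need session.save_path = /var/lib/php/shared_sessions, and php.ini's session.save_path umask behavior may still need checking so new session files come out group-readable.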

    Read the article

  • Reusing slot numbers in Linux software RAID arrays

    - by thkala
    When a hard disk drive in one of my Linux machines failed, I took the opportunity to migrate from RAID5 to a 6-disk software RAID6 array. At the time of the migration I did not have all 6 drives - more specifically the fourth and fifth (slots 3 and 4) drives were already in use in the originating array, so I created the RAID6 array with a couple of missing devices. I now need to add those drives in those empty slots. Using mdadm --add does result in a proper RAID6 configuration, with one glitch - the new drives are placed in new slots, which results in this /proc/mdstat snippet:

       ...
       md0 : active raid6 sde1[7] sdd1[6] sda1[0] sdf1[5] sdc1[2] sdb1[1]
             25185536 blocks super 1.0 level 6, 64k chunk, algorithm 2 [6/6] [UUUUUU]
       ...

    mdadm -E verifies that the actual slot numbers in the device superblocks are correct, yet the numbers shown in /proc/mdstat are still weird. I would like to fix this glitch, both to satisfy my inner perfectionist and to avoid any potential sources of future confusion in a crisis. Is there a way to specify which slot a new device should occupy in a RAID array?

    UPDATE: I have verified that the slot number persists in the component device superblock. For the version 1.0 superblocks that I am using that would be the dev_number field as defined in include/linux/raid/md_p.h of the Linux kernel source. I am now considering direct modification of said field to change the slot number - I don't suppose there is some standard way to manipulate the RAID superblock?
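
    As a starting point for the superblock angle, a read-only sketch that dumps what each component actually reports as its slot/role (device names assumed from the mdstat snippet above):

       # print the role each superblock records, per component device
       for d in /dev/sd[a-f]1; do
           echo "== $d"
           mdadm -E "$d" | grep -Ei 'device role|array slot|raid device'
       done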

    Read the article

  • 403 forbidden for cgi-bin/ and cannot protect site with password

    - by gasgdasdgasdg
    The first problem I have is that I am getting a 403 forbidden error for cgi-bin/. I have created a new /var/www2/; I can access it fine and PHP runs fine. The second problem is that I cannot password protect it. I first tried htpasswd: it asks for a login, but every time I log in it keeps asking again. It's getting frustrating - I have tried all the tricks and nothing seems to work. This is the virtual host config inside sites-available. httpd.conf is empty but I have apache2.conf:

       NameVirtualHost 12.12.12.12.
       <VirtualHost 12.12.12.12>
           ServerAdmin webmaster@localhost
           DocumentRoot /var/www2/
           <Directory />
               Options FollowSymLinks
               AllowOverride None
           </Directory>
           <Directory /var/www2/>
               Options Indexes FollowSymLinks MultiViews
               AllowOverride None
               Order allow,deny
               allow from all
           </Directory>
           ScriptAlias /cgi-bin/ /var/www2/cgi-bin/
           <Directory "/var/www2/cgi-bin/">
               AllowOverride Options
               Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
               AddHandler cgi-script cgi pl
               Order allow,deny
               Allow from all
           </Directory>
           ErrorLog /var/log/apache2/error.log
           # Possible values include: debug, info, notice, warn, error, crit,
           # alert, emerg.
           LogLevel warn
           CustomLog /var/log/apache2/access.log combined
           ServerSignature On
           Alias /doc/ "/usr/share/doc/"
           <Directory "/usr/share/doc/">
               Options Indexes MultiViews FollowSymLinks
               AllowOverride None
               Order deny,allow
               Deny from all
               Allow from 127.0.0.0/255.0.0.0 ::1/128
           </Directory>
       </VirtualHost>
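
    One way to take .htaccess out of the picture entirely (note that AllowOverride None makes Apache ignore .htaccess files, so auth directives placed there would do nothing): declare the auth in the vhost itself with an absolute AuthUserFile - an unreadable or relative path is a classic cause of the endlessly repeating password prompt. A minimal sketch; paths and the user name are placeholders:

       <Directory /var/www2/>
           AuthType Basic
           AuthName "Restricted"
           AuthUserFile /etc/apache2/.htpasswd
           Require valid-user
       </Directory>

    The password file would be created with htpasswd -c /etc/apache2/.htpasswd someuser (apache2-utils package), and it must be readable by the Apache user.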

    Read the article

  • Time Machine is getting stuck at "Preparing to Back Up" and my Trash isn't emptying

    - by zarose
    I have encountered two separate problems, but I am putting them in the same question in case they are related. First, my Trash would not empty. It seems to be getting stuck on certain files, because I will reset my Macbook and some of the files will be deleted, and then if I remove a file or two at random, more can be deleted. Some of these files had strange characters in their names. I tried changing the names to single characters, but this did not help. Next, I attempted to backup my Macbook using Time Machine. I plugged in the HDD I've been using for this, but every time I try to start the backup, Time Machine gets stuck at "Preparing to Back Up". I definitely need to know how to fix the Time Machine problem, but I am curious how to solve the trash problem as well, and whether or not these problems are related.

    EDIT: Console.app logged the following this morning before I left on a trip. I did not bring the HDD with me.

       6/5/12 7:41:28.312 AM com.apple.backupd: Starting standard backup
       6/5/12 7:41:46.877 AM com.apple.backupd: Error -35 while resolving alias to backup target
       6/5/12 7:41:58.368 AM com.apple.backupd: Backup failed with error: 19
       6/5/12 7:59:08.999 AM com.apple.backupd: Starting standard backup
       6/5/12 7:59:10.187 AM com.apple.backupd: Backing up to: /Volumes/Seagate 3TB Mac/Backups.backupdb
       6/5/12 7:59:13.308 AM com.apple.backupd: Event store UUIDs don't match for volume: Macintosh HD
       6/5/12 7:59:13.331 AM com.apple.backupd: Event store UUIDs don't match for volume: Blank
       6/5/12 7:59:13.683 AM com.apple.backupd: Deep event scan at path:/ reason:must scan subdirs|new event db|
       6/5/12 8:23:31.807 AM com.apple.backupd: Backup canceled.
       6/5/12 8:23:33.373 AM com.apple.backupd: Stopping backup to allow backup destination disk to be unmounted or ejected.
       6/5/12 9:51:21.572 PM com.apple.backupd: Starting standard backup
       6/5/12 9:51:22.515 PM com.apple.backupd: Error -35 while resolving alias to backup target
       6/5/12 9:51:32.741 PM com.apple.backupd: Backup failed with error: 19
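
    For the Trash half, the commonly suggested Terminal route is offered here only as a sketch - rm -rf is unforgiving, so verify the paths before running anything:

       # remove the stuck items directly, bypassing Finder's name handling
       sudo rm -rf ~/.Trash/*
       # external volumes keep their own per-user trash folders too
       sudo rm -rf /Volumes/YourDisk/.Trashes/501/*   # 501 is a placeholder; check yours with `id -u`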

    Read the article

  • Ubuntu server apt-get says "(-5 - No address associated with hostname)"

    - by Srini
    I have an Ubuntu 12.04 server. Running sudo apt-get update on it produces errors like this:

       W: Failed to fetch http://au.archive.ubuntu.com/ubuntu/dists/precise-backports/main/binary-i386/Packages
       Something wicked happened resolving 'au.archive.ubuntu.com:http' (-5 - No address associated with hostname)

    I am able to ping all the other hosts on the network and also Google's DNS 8.8.8.8, but I am unable to ping www.google.com. So I'm guessing something is wrong with my DNS setup, but I'm not sure what. I use a static IP and my /etc/network/interfaces looks like this:

       auto eth0
       iface eth0 inet static
       address 192.168.1.50
       netmask 255.255.255.0
       network 192.168.1.0
       broadcast 192.168.0.255
       gateway 192.168.1.1
       #dns-nameserver 203.12.160.35 203.12.160.36
       #nameserver 203.12.160.35 203.12.160.36

    My /etc/resolv.conf and /etc/resolvconf/resolv.conf.d/base are both empty, and my /etc/resolvconf/resolv.conf.d/original says:

       nameserver 192.168.1.1

    Any help would be greatly appreciated. P.S. I've googled it a bit and the common resolution is to switch to DHCP, which I don't want to do since this is my home server. Thanks, Srini
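
    Given that the DNS lines above are commented out (so nothing ever reaches resolv.conf), one plausible fix, sketched for 12.04 with resolvconf - note the directive is dns-nameservers, plural, and the servers below are examples:

       iface eth0 inet static
           address 192.168.1.50
           netmask 255.255.255.0
           gateway 192.168.1.1
           dns-nameservers 192.168.1.1 8.8.8.8

       # apply by reloading the interface
       sudo ifdown eth0 && sudo ifup eth0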

    Read the article

  • Linux: Send mail to external mail box from a server with that user's hostname

    - by dtbarne
    I've got sendmail running on a Linux box. Let's say the hostname of the box is bar.com. If I run the following command, I don't receive the email (which is hosted with a third party), presumably due to the hostname pointing to the local machine.

       echo "Test Body" | mail -s "Test Subject" [email protected]

    Is there any way to get this to work so that I can receive emails at my third party email address even though it has the same hostname? Do I have to change the hostname of this server (not preferred)? It may be worth noting that I created a user "foo" on my machine and noticed that the mailbox for that account is empty. I noticed these log entries, which may or may not be relevant:

       Jun 28 01:09:48 bar sendmail[14338]: p5S59min014338: from=apache, size=80, class=0, nrcpts=1, msgid=<[email protected]>, relay=apache@localhost
       Jun 28 01:09:48 bar sendmail[14339]: p5S59mIA014339: from=<[email protected]>, size=293, class=0, nrcpts=1, msgid=<[email protected]>, proto=ESMTP, daemon=MTA, relay=localhost.localdomain [127.0.$
       Jun 28 01:09:48 bar sendmail[14338]: p5S59min014338: [email protected], ctladdr=apache (48/48), delay=00:00:00, xdelay=00:00:00, mailer=relay, pri=30080, relay=[127.0.0.1] [127.0.0.1], dsn=2.0.0, stat=Sent (p5S59mIA$
       Jun 28 01:09:48 bar sendmail[14340]: p5S59mIA014339: to=<[email protected]>, ctladdr=<[email protected]> (48/48), delay=00:00:00, xdelay=00:00:00, mailer=local, pri=30495, dsn=2.0.0, stat=Sent
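
    One mechanism worth checking, as a sketch: sendmail treats every domain listed in /etc/mail/local-host-names as local, and the mailer=local in the last log line suggests exactly that - delivery to a local mailbox instead of an MX lookup. If bar.com appears there, removing it and restarting sendmail would be the usual fix:

       grep -n 'bar\.com' /etc/mail/local-host-names
       # edit the file to remove the line, then:
       service sendmail restart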

    Read the article

  • Migrating from one linux install to another: How to keep the second disk around?

    - by Jim Miller
    I've got a Linux box running Fedora 19 that I want to move to CentOS 6.4. Rather than trying to do something fancy with the current disk (which has also accumulated a lot of sludge over the years), I'm going to get a new disk, put CentOS on that, and then move the to-be-preserved bits from the old disk to the new one. I haven't done this yet, but I presume it should be semi-straightforward - do the CentOS install on the new disk, mount the old disk on /olddisk or somesuch, and start copying. However, I'm not sure how to handle getting the machine to recognize the new empty disk as the target of the CentOS install (I suppose I can just pull the old disk during the installation), making it remember that this is the intended boot disk once the install has happened, and tweaking /etc/fstab (right?) to set up the old disk on the desired mount point. (Both disks are, or will be, SATA.) I could probably hack it together without losing too much hair or doing too much damage, but could anyone offer some advice that would get/keep me on the right track? Thanks!
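
    The fstab side is mechanical; a sketch, with the UUID and device name as placeholders:

       # on the new CentOS install, identify the old disk's filesystem
       blkid /dev/sdb1
       # add a line like this to /etc/fstab (hypothetical UUID):
       #   UUID=1234-abcd-...  /olddisk  ext4  defaults  0  2
       mkdir -p /olddisk
       mount -a    # confirms the fstab entry works before the next reboot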

    Read the article

  • Ubuntu 13.10 - How to disable LVM and cryptsetup? cryptsetup: evms_activate is not available

    - by NeverEndingQueue
    I am trying to remove whole-drive encryption from my Ubuntu installation. I've run Ubuntu from a Live CD, mounted the crypt partition and copied it to another partition, /dev/sda3:

       sudo cryptsetup luksOpen /dev/sda5 crypt1
       sudo dd if=/dev/ubuntu-vg/root of=/dev/sda3 bs=1M

    After that I've run boot-repair: https://help.ubuntu.com/community/Boot-Repair

    Added an entry to /etc/fstab:

       UUID=<uuid> / ext4 errors=remount-ro 0 1

    Of course I've replaced <uuid> with the blkid result for my /dev/sda3. I've also deleted the overlayfs and tmpfs lines from /etc/fstab. (I just compared it to the content of /etc/fstab in a non-encrypted Ubuntu installation and could not find overlayfs and tmpfs.) I've chrooted from the Live CD into my system and rebuilt the initramfs: http://blog.leenix.co.uk/2012/07/evmsactivate-is-not-available-on-boot.html

    I've also removed cryptsetup using apt-get remove. Basically I can easily mount my system partition from the Live CD (without setting up the encryption and LVM stuff), but cannot boot from it. Instead I see:

       cryptsetup: evms_activate is not available

    When I've chosen Recovery mode I've seen this:

       Begin: Mounting root file system ...
       Begin: Running /script/local-top ...
       Reading all physical volumes. This may take a while ...
       No volume groups found
       cryptsetup: evms_activate is not available
       Begin: Waiting for encrypted source device ...

    My /etc/crypttab is empty. I am pretty sure that the system tries to find the encrypted partition, searches for LVMs, etc. Do you have ideas what could be the problem or how I can fix it? Thanks
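
    If the initramfs rebuild didn't fully take, a sketch of the chroot sequence that makes sure the cryptsetup/LVM hooks are gone from the image the system actually boots (device names assume the layout above, and lvm2 should only be purged if nothing else on the machine uses LVM):

       sudo mount /dev/sda3 /mnt
       for d in dev dev/pts proc sys; do sudo mount --bind /$d /mnt/$d; done
       sudo chroot /mnt
       apt-get purge cryptsetup lvm2   # lvm2 only if LVM is unused everywhere
       update-initramfs -u -k all      # rebuild for every installed kernel
       update-grub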

    Read the article

  • Upgrading memory in a laptop

    - by ulidtko
    I'm a bit confused about all the memory types and various bus frequencies of modern consumer PCs. Requesting expert help on the subject. So far I'm confident that:

    - I have an Asus X51L laptop with an unknown set of configuration options.
    - The CPU in there supports PAE, so I still have a chance to extend the memory beyond 3 GiB; and the upper limit of the system is 8 GiB. (?)
    - The laptop has two SODIMM slots, one of which is occupied by a 2 GiB module, and the other one is empty.
    - The dmidecode and lshw tools consistently state a 533 MHz frequency for the installed module.

    The last one confuses me the most. I failed to find out the characteristics of the northbridge in this laptop, and still can't figure out which DDR2 to look for. Is it DDR2-1066? Or, rather, PC2-8500/PC2-8600? Wouldn't a DDR2-800 module harm the system's performance? Which kind of modules should I look for in stores?

    UPDATE: I have bought a 2 GiB DDR2-800 SODIMM, and it seems that the system can't handle 4 GiB of memory. When installed by itself in either slot, both the new and the old module (which, by the way, happens to be marked GDDR2-677) work perfectly; i.e. any configuration resulting in 2 GiB works. When both modules are installed though (totalling 4 GiB), the memtest86 tool produces horrible artifacts and crashes, and the system reboots; an Ubuntu system can be started and even logged into a Unity session, but the system reboots in this case too, from even a minor RAM load. So it's pretty obvious to me now that this laptop doesn't support 4 GiB of RAM or more.
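
    For what it's worth, a quick read-only way to see what the firmware itself claims about the slots, the installed module, and the advertised ceiling:

       sudo dmidecode -t memory | grep -E 'Maximum Capacity|Size|Speed|Locator'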

    Read the article

  • Combine multiple rows into one

    - by Jim
    I am trying to combine multiple rows of data into one. Column A contains the value on which the groupings will be based -- rows whose Column A values match will be combined into one row. My range extends from column A through X, so I need a matching row of data to start in column Y. Example:

       1001   A   C
       1001   B   D
       1002   A   E
       1002   B   F
       1002   C   G

    Desired result:

       1001   A   C   B   D
       1002   A   E   B   F   C   G

    The VBA code I am currently using is not taking the entire contents of the matched row; it is only taking the data in the 2nd column and moving it up. VBA code:

       Sub Mergeitems()
           Dim cl As Range
           Dim rw As Range
           Set rw = ActiveCell
           ' for each row in data set
           Do While rw <> ""
               ' find first empty cell on row
               Set cl = rw.Offset(0, 1)
               Do While cl <> ""
                   Set cl = cl.Offset(0, 1)
               Loop
               ' if next row needs to be processed...
               Do While rw = rw.Offset(1, 0)
                   cl = rw.Offset(1, 1)                          ' move the data
                   Set cl = cl.Offset(0, 1)                      ' update pointer to next blank cell
                   rw.Offset(1, 0).EntireRow.Delete xlShiftUp    ' delete old data
               Loop
               ' next row
               Set rw = rw.Offset(1, 0)
           Loop
       End Sub
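
    A hedged sketch of a fix for the copy step - the idea is to copy every used cell of the duplicate row (column B onward) to the first empty column of the current row, instead of the single cell the code above moves. It assumes the layout shown above, with data contiguous from column A:

       ' replacement for the inner "move the data" loop
       Dim lastCol As Long
       Dim src As Range
       Do While rw.Value = rw.Offset(1, 0).Value
           ' last used column of the row below
           lastCol = Cells(rw.Row + 1, Columns.Count).End(xlToLeft).Column
           Set src = Range(Cells(rw.Row + 1, 2), Cells(rw.Row + 1, lastCol))
           src.Copy cl                               ' paste the whole row remainder
           Set cl = cl.Offset(0, src.Columns.Count)  ' advance past what was pasted
           rw.Offset(1, 0).EntireRow.Delete xlShiftUp
       Loop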

    Read the article

  • What is going on when I can't access an SMB server share (not accessible error) until I run cmdkey to delete the credential?

    - by Warren P
    I have a network connection share issue. The first connection works, and seems to stay connected for at least a few hours. However, after each time my Windows 7 PC reboots, it can no longer form a network connection to the shared folder, nor browse to it, until I not only unmap and remap the mapped drive, but also use cmdkey to delete the stored credentials, like this:

       cmdkey /delete:Domain:target=HOSTNAME

    My work PC is on a domain, and I am not the IT administrator, but I'm curious if there is anything I can do to investigate this issue. Are there any settings in the registry or group policy that I could examine to see why the first connection works, but each subsequent attempt (once a stored credential exists) to browse or use the connection fails with an error saying it is "not accessible"? I do not even get any error until at least several minutes go by; the first thing I see is a window, frozen and empty, and then the error appears. This has happened when connecting to a share on a DROBO device, and on a share which is not on the domain, but which was a Microsoft Home Server. I wonder if there's something broken in Windows 7 Professional with regards to connecting to non-domain shares when an Active Directory domain controller exists and a particular workstation is joined to a domain? The problem only occurs if I click "remember credentials". It is not fixed by any amount of working with net use. Using cmdkey to delete all stored credentials for the host is the only way to get back in, and it affects all non-domain shared folders.

    UPDATE: I'm hoping there are some registry locations I could check that could be misconfigured in some way that might explain why SMB/CIFS stored credentials for non-domain systems seem to be auto-invalidated in this weird way. Knowing how whacko Microsoft Windows domain and security handling is sometimes, this could be some kind of stupid "feature".
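
    A diagnostic sketch (server, share, and account names are placeholders): store the credential explicitly under the exact name used in the UNC path, map without persistence, and see whether the mapping survives the next reboot - this separates "the stored credential goes stale" from "the mapping itself breaks":

       REM clear whatever is stored for the host
       cmdkey /delete:Domain:target=HOSTNAME
       REM store the credential under the server name the UNC path uses
       cmdkey /add:HOSTNAME /user:HOSTNAME\username /pass
       REM map without persistence so each logon re-authenticates
       net use Z: \\HOSTNAME\share /persistent:no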

    Read the article

  • Why Would one VLAN have no Communication on one Switch?

    - by Webs
    So the problem is we have a device or host that needs to communicate on a specific VLAN. This VLAN is not new; it is running all throughout our environment and works fine. But the VLAN was recently configured on the switch in question, a Cisco 3750. The DHCP server is handing out addresses on that VLAN with no problem. I have verified the cable between the host and switch and tried multiple hosts, but none of them can communicate or get an address. I plugged my laptop into an empty port which had a different VLAN assigned and immediately got a DHCP address. When I changed that port to the same VLAN I'm having issues with, I got the same problem: the laptop just sits there and tries to DHCP an address, but nothing happens. I double-checked the cores and their Layer 3 VLAN config and it's fine too. Plus I figured the issue couldn't be with them, because the VLAN works fine everywhere else it exists. So the only other thing I can think of is the switch, but the VLAN exists on the switch and seems to be configured correctly, and the trunks appear to be configured just fine as well. Anyone have any ideas? I'm lost on this one.
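
    A sketch of the IOS checks that would narrow this down on the 3750 (the interface and VLAN numbers are placeholders):

       show vlan brief                       ! does the VLAN really exist in this switch's database?
       show interfaces gi1/0/10 switchport   ! is the access VLAN actually applied to the host port?
       show interfaces trunk                 ! is the VLAN allowed, active, and not pruned on the uplink?
       show spanning-tree vlan 100           ! is the port stuck in a blocking state?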

    Read the article

  • TFS 2010 Subfolder Permissions

    - by gmcalab
    I am a TFS admin. When a subfolder in a TFS project needs specific permissions to deny some users, I right-click the folder in question, hit Properties, and click the Security tab. There I select the Windows User or Group radio button, then click Add. I put in the AD user that I want specific permissions for and hit Check Names. That resolves, so I click OK. Next, I select the permissions to Allow or Deny below in the Permissions list, and hit OK. The permissions are honored by TFS: this user no longer has the PendChange permission, as I was expecting. The odd thing is, I was expecting to be able to go back into the Security tab and see that user in the list of Users and Groups and see the current state, but the list is always empty. Not sure why, but the permissions are definitely being honored; I can re-add the user with different permissions and those are also honored. Any ideas why the current users are not showing up in the Users and Groups list under the Security tab of a folder's properties? I also used tf permission $\... to see if there were any permissions, but it always returns: There are no permissions set for this item (Inherit: Yes)
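
    For a second view of what is actually stored, the same operations are scriptable; a sketch with the collection URL, server path, and account as placeholders:

       REM list effective permissions on the folder
       tf permission "$/Project/SomeFolder" /collection:http://tfsserver:8080/tfs/DefaultCollection
       REM set the same deny from the command line
       tf permission "$/Project/SomeFolder" /deny:PendChange /user:DOMAIN\someuser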

    Read the article

  • iMac G5 with no OS nor CD drive

    - by sinekonata
    What I want: Ubuntu on a G5 iMac.

    What I have:

    - An empty PC (Intel G5 17" iMac) with a broken CD drive. Its model is A1173.
    - This PC, with Ubuntu 12.04 and an old Vista partition.
    - A USB flash drive.

    Problems: No CD means the only boot drive I could use is USB. There is no BIOS on Macs, so I can't set boot settings or even see if it detects my USB drive. When I start the machine and press ALT, the first and only thing I see is an old corrupted WinXP partition, and not a single option or any additional information. So, assuming blindly that the Mac hardware/firmware works normally, I don't have any Mac OS to use any of the tools that I found in different tutorials for building a bootable .img drive for Macs. I can't find much software on Linux/Windows to substitute for those tools, for example for converting an .iso file (Win/Linux) to .img (Mac, I guess). Which makes me think that the scenario where someone like me has Mac hardware but no Mac OS is extremely rare. So other than finding someone who has a Mac, I have no solution. So I ask: what would you do? The only constraint is that it should not involve any money (I know Mac software is rarely free), which also excludes getting any Mac OS unless I can use a free Mac OS .img for a VM or restore the original Mac OS for free. Thank you
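
    One hedge worth stating up front: A1173 is usually listed as a PowerPC iMac G5 (17", iSight), not an Intel model, and the image must match the architecture. Assuming the Ubuntu 12.04 box writes the stick, a minimal sketch (the ISO name is an example; /dev/sdX is a placeholder - confirm the device with lsblk before writing, since dd will overwrite it):

       sudo dd if=ubuntu-12.04-desktop-i386.iso of=/dev/sdX bs=4M
       sync

    On a PowerPC machine a powerpc build would be needed instead, and whether the firmware boots from USB at all varies by model - Open Firmware on PPC Macs often needs manual boot commands.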

    Read the article

  • nginx logrotate config

    - by TomOP
    What's the best way to rotate nginx logfiles? In my opinion, I should create a file "nginx" in /etc/logrotate.d/, fill it with the following code, and do an /etc/init.d/syslog restart after that. This would be my config (I haven't tested it yet):

       /usr/local/nginx/logs/*.log {
           # rotate the logfile(s) daily
           daily
           # adds extension like YYYYMMDD instead of simply adding a number
           dateext
           # if log file is missing, go on to next one without issuing an error msg
           missingok
           # save logfiles for the last 49 days
           rotate 49
           # old versions of log files are compressed with gzip
           compress
           # postpone compression of the previous log file to the next rotation cycle
           delaycompress
           # do not rotate the log if it is empty
           notifempty
           # create mode owner group
           create 644 nginx nginx
           # after logfile is rotated and nginx.pid exists, send the USR1 signal
           postrotate
               [ ! -f /usr/local/nginx/logs/nginx.pid ] || kill -USR1 `cat /usr/local/nginx/logs/nginx.pid`
           endscript
       }

    I have both the access.log and error.log files in /usr/local/nginx/logs/ and want to rotate both daily. Can anyone please tell me if "dateext" is correct? I want the log filename to be something like "access.log-2010-12-04". One more thing: can I do the log rotation every day at a specific time (e.g. 11 pm)? If so, how? Thanks.
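
    Two details, sketched under the assumption of logrotate 3.7.7 or newer: plain dateext yields access.log-20101204; the dashed form needs the dateformat directive. And the time of day is controlled by cron, not by logrotate itself:

       # inside the stanza, after dateext:
       dateformat -%Y-%m-%d

       # logrotate normally runs from /etc/cron.daily; to pin it to 23:00,
       # a root crontab entry could invoke it explicitly instead:
       0 23 * * * /usr/sbin/logrotate /etc/logrotate.conf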

    Read the article

  • DNS resolve .com domain on local domain

    - by Joost Verdaasdonk
    I'm building a local 2008 R2 domain as a test case, to be able to write a roadmap for the real new domain that needs to be created soon. What I would like to know is whether I can make a record in DNS that points the domain names www.example.com and example.com to one of the servers in my network. I tried creating an A record for it, but that doesn't work. To be honest, I'm not even sure if this is possible. So can I do this? That way I would be able to fully test all our services (and web app) offline before I build the real domain and switch the DNS records at the provider. Some advice, if possible, and where to start is appreciated.

    The solution (thanks Brent):

    - Create a new forward lookup zone for example.com.
    - Create an empty A record pointing to the IP of the webserver you are targeting.
    - If www is needed, create an A record with the name www and the IP of your webserver.
    - For subdomains, repeat the process, but with names like sub or www.sub (and the IP of your webserver).
    - Be aware of the DNS cache while you are in this process. Things can take time, or do the following: right-click the server and choose Clear Cache; in CMD, run ipconfig /flushdns (to flush the client cache).
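
    The same setup is scriptable on the 2008 R2 DNS server, as shown in this sketch (the web server's IP is a placeholder):

       dnscmd /zoneadd example.com /primary /file example.com.dns
       dnscmd /recordadd example.com @ A 192.168.1.20
       dnscmd /recordadd example.com www A 192.168.1.20
       ipconfig /flushdns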

    Read the article

  • pcfg_openfile: unable to check htaccess file, ensure it is readable

    - by rxt
    After moving a website folder on my local development machine to another drive, then moving it back, I got a 403 error. Most of this problem probably had to do with rights that got messed up. After deleting the code and restoring it from SVN, the rights seemed all right. The error stayed, however. The setup is a bit complex, as follows:

    - I have Ubuntu 10.4 as the development machine, trying to mimic the server as much as possible.
    - We use Eclipse + SVN, and I create all projects in a local folder under my user account.
    - In /var/www-vhosts I create folders for each vhost, like this one: test.localhost
    - test.local/index.php includes the index file of the project
    - test.local/.htaccess is a symbolic link to the htaccess file in a project subfolder

    I get the following error in the Apache error log:

       [Thu Jul 08 15:55:56 2010] [crit] [client 127.0.0.1] (13)Permission denied: /var/www-vhosts/test.localhost/.htaccess pcfg_openfile: unable to check htaccess file, ensure it is readable

    The problem seems to be the .htaccess file, or the link to it:

    - When I empty the htaccess, nothing changes.
    - When I remove the link, the index include produces some output (in the Apache error log).
    - When I remove the link and replace it with the actual file, I get another error:

       [Thu Jul 08 16:47:54 2010] [error] [client 127.0.0.1] Symbolic link not allowed or link target not accessible: /var/www-vhosts/test.localhost/test

    I'm lost here and don't know what to do next. Do you have any ideas what I can try? This setup has worked before, but I don't know what is different now.
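
    A sketch of the usual diagnosis for both errors: Apache must be able to traverse every directory along both the link and its target, and +SymLinksIfOwnerMatch additionally requires the link and target to have the same owner. namei shows exactly where traversal fails (the target path below is hypothetical):

       namei -mo /var/www-vhosts/test.localhost/.htaccess
       namei -mo /home/youruser/workspace/test/.htaccess
       # one option, if the target lives under your home directory:
       sudo chmod o+x /home/youruser /home/youruser/workspace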

    Read the article

  • Cacti Login Page: Infinite loop occurs

    - by beicha
    Apache 2.2.3 | PHP 5.1.6 | MySQL 5.0.77

    I followed the Cacti installation guide to install the latest Cacti 0.8.7h on CentOS 5.5 (64-bit). The installation of PHP/Apache/MySQL went smoothly until I finished the setup and came to the login page. I can log in to http://.../cacti/index.php with the admin account, but the new page redirects to the same login page with the message "Please enter your Cacti user name and password below". This is an infinite loop! If I use a wrong admin password, I get the correct error message "Invalid User Name/Password Please Retype". If I log in with the Guest/guest account, "Error: Access Denied, user account disabled." displays. The Cacti log file (./cacti/log/cacti.log) is empty. I googled it, and it seems this problem has existed for a long time, but the forum posts I found had no follow-up solutions. Can anyone help me with this problem? If more information is needed, please let me know.

    Nov 18, 2011 UPDATE: I re-installed Cacti; this question remains UNSOLVED.
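
    Two commonly reported causes of exactly this loop, offered as sketches to check rather than confirmed fixes: a $url_path in include/config.php that doesn't match the URL being used (the login cookie then never comes back), and PHP sessions that never get written:

       # include/config.php - must match how the site is reached:
       #   $url_path = "/cacti/";

       # verify the session directory exists and the web server can write to it
       php -i | grep session.save_path
       ls -ld /var/lib/php/session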

    Read the article

  • How to recover my invisible HD again?

    - by pattulus
    I have done this several times now, but this time something bad happened.

    What I did: I installed Windows 7 on a 32 GB partition on the slot 2 HD in my Mac Pro. Windows 7 made a 105 MB partition... I knew this beforehand, but what I didn't know was that this partition is now on my slot 4 HD. My home folder, my private videos and some other stuff are on this 1 TB drive.

    What I found out so far: I'm currently logged in as another admin, since my OS partition as well as the two other HDs aren't harmed.

    Disk Utility only shows the 105 MB NTFS partition on this 1 TB volume. It isn't showing my old 1 TB partition/ex-HD named "storehouse". Only the Partition tab tells me that there is now 1 TB of empty, unpartitioned free space.

    Data Rescue II shows the volume as it used to be, with its old name "storehouse". A quick scan and a thorough scan were both done in 1 second, which leads me to the conclusion that nothing was actually deleted (» hope!). Data Rescue doesn't even mention the damn "system reserved" partition.

    Drive Genius also shows the old partition and doesn't mention the new one. But looking at the info, it tells me under "content": FDisk_partition_scheme (instead of Apple_partition_scheme). Well, d'oh...

    Tech Tools doesn't show the volume, otherwise I might have been tempted to press rebuild/repair.

    What to do next?? I think the best approach is to buy another 1 TB HD and let Disk Warrior clone my old one to it... just to be on the safe side. But what is the best thing to do after this...???
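
    Before buying anything, a read-only sketch of what the disk actually carries now - the FDisk_partition_scheme note suggests Windows rewrote the partition map, and these commands show both views without touching the data (disk2 is a placeholder; check the list output first):

       diskutil list
       sudo gpt -r show /dev/disk2    # the GPT view of the partition map
       sudo fdisk /dev/disk2          # the MBR view, i.e. what Windows wrote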

    Read the article

  • Why does BitLocker need a minimum volume size of 64 MB?

    - by Iszi
    Since the future of TrueCrypt appears to be still unclear, I figured I'd try to get my stuff migrated into BitLocker at least for the time being. I nearly never have to access my encrypted data from anything that's not BitLocker-capable, so cross-platform compatibility isn't a big deal to me at this time. However, I am having a bit of an issue understanding the minimum requirement of a 64 MB volume. With TrueCrypt, I was able to protect small files (and most of my protected files are fairly small) in containers down to 300 KB or even less. When I finally created a VHD of an appropriate size last night (100 MB), it seemed the file system itself only took up about 3 MB and encrypting it with BitLocker didn't appear to take up any more. While 3 MB is still an order of magnitude larger than the smallest volume I could make with TrueCrypt, it's still relatively reasonable in comparison to 64 MB. This is an especially large amount of overhead (and largely wasted at that, since it's mostly empty space for now) when I consider that some of these volumes will be stored and synced in the cloud. What possible reasons could BitLocker have for needing volumes to be 64 MB large, when it's not even appearing to use that space? BitLocker FAQ on TechNet
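
    For reference, the VHD experiment described above can be scripted; a sketch (the size, path, and drive letter are arbitrary, and BitLocker option names vary a little between Windows versions - check manage-bde -on -? on yours):

       REM inside diskpart:
       REM   create vdisk file=C:\secure.vhd maximum=100 type=fixed
       REM   attach vdisk
       REM   create partition primary
       REM   format fs=ntfs quick
       REM   assign letter=S
       REM then protect and encrypt the new volume:
       manage-bde -on S: -password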

    Read the article

  • How to tell nginx to honor backend's cache?

    - by ChocoDeveloper
    I'm using php-fpm with nginx as the http server (I don't know much about reverse proxies; I just installed it and didn't touch anything), without Apache or Varnish. I need nginx to understand and honor the http headers I send. I tried with this config (taken from the docs) but it didn't work:

    /etc/nginx/nginx.conf:

       fastcgi_cache_path /var/lib/nginx/cache levels=1:2 keys_zone=website:10m inactive=10m;
       fastcgi_cache_key "$scheme$request_method$host$request_uri";

    /etc/nginx/sites-available/website:

       server {
           fastcgi_cache website;
           #fastcgi_cache_valid 200 302 1h;
           #fastcgi_cache_valid 301 1d;
           #fastcgi_cache_valid any 1m;
           #fastcgi_cache_min_uses 1;
           #fastcgi_cache_use_stale error timeout invalid_header http_503;
           add_header X-Cache $upstream_cache_status;
       }

    I always get "MISS" and the cache dir is empty. If I uncomment the other directives, I get HIT, but I don't want those "dumb" settings; I need to control caching from my backend. For example, if my backend says "public, s-maxage=10", the cache should be considered stale after 10 secs. Instead, nginx will store it for 1h because of these directives. I was also wondering whether I should try proxy_cache; I'm not sure what the difference is. In both the fastcgi and proxy module docs it says this:

       The cache honors backend's Cache-Control, Expires, and etc. since version 0.7.48, Cache-Control: private and no-store only since 0.7.66, though. Vary handling is not implemented.

    nginx version: nginx/1.1.19. Any thoughts?

    PS: I also have the reverse proxy that is offered by Symfony2 (which I turn off to use nginx's). The headers are interpreted correctly by it, so I think I'm doing it right.
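
    One frequent culprit, noted as a sketch to check rather than a confirmed diagnosis: nginx refuses to cache any response carrying a Set-Cookie header (and PHP session handling adds one readily), and Cache-Control: private or no-store from the backend has the same suppressing effect. If the cookie really is dispensable for these pages, this directive tells nginx to stop honoring it:

       # caution: the cookie still reaches clients; only ignore it if the
       # cached pages genuinely don't depend on per-user cookies
       fastcgi_ignore_headers Set-Cookie;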

    Read the article

  • Fill down in Excel, but based on multiple values

    - by Jenn D.
    I have spreadsheets (not created by me) that have blank entries in one column where they should really have data. I want to take every empty cell and fill it with the nearest value above it. I'm looking for as little manual intervention as possible, because I'll have to do it repeatedly. I thought some previous version of Excel, or maybe another spreadsheet from the distant past, would do this by default -- that is, if you selected the column with foo and bar, and chose the equivalent of "fill down", you would get what's in the WANT column. What I actually get in Excel is the GET column.

       HAVE:     WANT:     GET:
       foo 1     foo 1     foo 1
           2     foo 2     foo 2
       bar 1     bar 1     foo 1
           2     bar 2     foo 2
           3     bar 3     foo 3

    I'm worried that this might need a macro to be done properly. I used to be a whiz with Excel macros, and then suddenly they were all in VB. My fallback position will be to dump the whole thing to CSV and write a Python script, but if there's any way to do it in Excel, that would be much preferable. Even if it involves a couple of different manual steps, that's fine; just not one step per group of lines. That is, a process of "copy the column, do X to it, cut and paste it back" would work, but "do X for each occurrence of foo or bar" won't. The files are too big for that. Any thoughts are appreciated!
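
    There is a standard no-macro recipe for exactly this: select the column, use Go To Special > Blanks, type = followed by a reference to the cell directly above the active blank, press Ctrl+Enter to fill every blank at once, then copy the column and Paste Special > Values. The macro equivalent, as a short sketch operating on the current selection:

       Sub FillBlanksFromAbove()
           ' point every blank cell in the selection at the cell above it, in one shot
           Selection.SpecialCells(xlCellTypeBlanks).FormulaR1C1 = "=R[-1]C"
           ' freeze the results so they survive later sorting
           Selection.Value = Selection.Value
       End Sub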

    Read the article

  • Force request to miss cache but still store the response

    - by Tom Marthenal
    I have a slow web app that I've placed Varnish in front of. All of the pages are static (they don't vary for different users), but they need to be updated every 5 minutes so they contain recent data. I have a simple script (wget --mirror) that crawls the entire website every 15 minutes. Each crawl takes about 5 minutes. The point of the crawl is to update every page in the Varnish cache so that a user never has to wait for the page to generate (since all pages have been generated recently thanks to the spider). The timeline looks like this:

       00:00:00 - Cache flushed
       00:00:00 - Spider starts crawling to update cache with new pages
       00:05:00 - Spider finishes crawling; all pages are updated until 1:15

    A request that comes in between 00:00:00 and 00:05:00 might hit a page that hasn't been updated yet and will be forced to wait a few seconds for a response. This isn't acceptable. What I'd like to do is, perhaps using some VCL magic, always forward requests from the spider to the backend, but still store the response in the cache. This way, a user will never have to wait for a page to generate, since there is no 5-minute window in which parts of the cache are empty (except perhaps at server startup). How can I do this?
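
    Varnish has a switch for precisely this: req.hash_always_miss forces a fresh backend fetch while still inserting the new object into the cache. A minimal VCL sketch (Varnish 3 syntax; matching on the Wget user agent is an assumption based on the wget --mirror spider):

       sub vcl_recv {
           if (req.http.User-Agent ~ "Wget") {
               set req.hash_always_miss = true;
           }
       }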

    Read the article

  • Using sed to Download ComboFix automatically

    - by user901398
    I'm trying to write a shell script to grab the dynamic URL at which ComboFix is located on BleepingComputer.com/download/combofix. However, for some reason I can't seem to get my regex to match the "click here" download link shown if the download doesn't start automatically. I used a regex tester and it said my pattern matched the link, but when I execute the script it turns up an empty result. Here's my entire script:

       #!/bin/bash
       # Download latest ComboFix from BleepingComputer
       wget -O Listing.html "http://www.bleepingcomputer.com/download/combofix/" -nv
       downloadpage=$(sed -ne 's@^.*<a href="\(http://www[.]bleepingcomputer[.]com/download/combofix/dl/[0-9]\+/\)" class="goodurl">.*$@\1@p' Listing.html)
       echo "DL Page: $downloadpage"
       secondpage="$downloadpage"
       wget -O Download.html $secondpage -nv
       file=$(sed -ne 's@^.*<a href="\(http://download[.]bleepingcomputer[.]com/dl/[0-9A-Fa-f]\+/[0-9A-Fa-f]\+/windows/security/anti[-]virus/c/combofix/ComboFix[.]exe\)">.*$@\1@p' Download.html)
       echo "File: $file"
       wget -O "ComboFix.exe" "$file" -nv
       rm Listing.html
       rm Download.html
       mkdir Tools
       mv "ComboFix.exe" "Tools/ComboFix.exe" -f

    The first two downloads work successfully, and I end up with:

       http://www.bleepingcomputer.com/download/combofix/dl/12/

    But the final sed fails to match, so I never get the download link. The code it's supposed to match is:

       <a href="http://download.bleepingcomputer.com/dl/6c497ccbaff8226ec84c97dcdfc3ce9a/5058d931/windows/security/anti-virus/c/combofix/ComboFix.exe">click here</a>
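
    Two likely failure modes, offered as guesses: sed is line-oriented, so the pattern dies if the href and the closing > land on different lines of Download.html, and any extra attribute between the href and the > also breaks the anchored match. A looser extraction to try in place of the second sed (the pattern is an assumption about the page's markup):

       file=$(grep -o 'http://download\.bleepingcomputer\.com/dl/[^"]*ComboFix\.exe' Download.html | head -n1)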

    Read the article
