Search Results

Search found 24334 results on 974 pages for 'directory loop'.

Page 844/974

  • Clarification for setting up SSH terminal access on Cisco IOS

    - by Matt Malesky
    I'm attempting to set up SSH on a Cisco 2811 and having some difficulties. The first step to this should be running crypto key generate rsa, but I seem to be missing that command:

        better#crypto key generate rsa
                          ^
        % Invalid input detected at '^' marker.
        better#

    Furthermore, the only available commands I have in the crypto key namespace are lock and unlock, which seem to indicate a locked keypair (for which I don't know the password):

        better#crypto key ?
          lock    Lock a keypair.
          unlock  Unlock a keypair.
        better#crypto key unlock ?
          rsa  RSA keys
        better#crypto key unlock rsa
        %% Please enter the passphrase:
        %% Unlocking failed.
        better#

    More or less, I'm asking what exactly this might mean, and whether I actually do have certificates on here already (it's a used router). Otherwise, how can I solve this? It's my first time configuring this feature, but I definitely believe it's part of my IOS. Speaking of my IOS, I'm running the image c2800nm-advsecurityk9-mz.124-24.T6.bin. I'll also note that I have my hostname and ip domain-name configured. I'll also give you a dir flash: below if it's at all of use:

        better#dir flash:
        Directory of flash:/
          2  -rw-      2748  Jul 27 2009 14:03:52 +00:00  sdmconfig-2811.cfg
          3  -rw-    931840  Jul 27 2009 14:04:10 +00:00  es.tar
          4  -rw-   1505280  Jul 27 2009 14:04:32 +00:00  common.tar
          5  -rw-      1038  Jul 27 2009 14:04:46 +00:00  home.shtml
          6  -rw-    112640  Jul 27 2009 14:05:00 +00:00  home.tar
          7  -rw-   1697952  Jul 27 2009 14:05:26 +00:00  securedesktop-ios-3.1.1.45-k9.pkg
          8  -rw-    415956  Jul 27 2009 14:05:46 +00:00  sslclient-win-1.1.4.176.pkg
          9  -rw-  38732900  Dec  8 2011 06:28:56 +00:00  c2800nm-advsecurityk9-mz.124-24.T6.bin
        64016384 bytes total (20598784 bytes free)
        better#
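
    For reference, the usual sequence for enabling SSH on IOS is sketched below. It assumes the image's crypto feature set is actually available; example.com is a placeholder for the real domain name:

        better#configure terminal
        better(config)#hostname better
        better(config)#ip domain-name example.com
        better(config)#crypto key generate rsa modulus 2048
        better(config)#ip ssh version 2
        better(config)#line vty 0 4
        better(config-line)#transport input ssh
        better(config-line)#login local
        better(config-line)#end

    If crypto key generate rsa still does not parse after hostname and ip domain-name are set, the feature set or licensing of the running image is the usual suspect.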

  • IIS web server creates new file: ownership?

    - by Beaud.
    IIS is configured for Integrated Windows Authentication, and web.config is configured as follows:

        <authentication mode="Windows" />
        <identity impersonate="true" />

    We are load balancing between \\webserver1 and \\webserver2 (Windows Server 2003). \\webserverX creates an XML file on \\share1 and access is denied. We got past the access denial by allowing Everyone to access the share. We would like the impersonated user to be the owner of the created file; instead, \\webserver1's computer account is the owner. How can we make sure that the impersonated user has ownership of the file at creation time?

    PROGRESSION: I decided to create the file locally in \\webserver1's root directory. The file's owner is NETWORK SERVICE even with impersonate="true", and I'm unable to change ownership of the file in C# code. Why won't IIS use the impersonated user's write permissions when creating a file? And if it actually does, what am I doing wrong?

  • Recover hard drive data

    - by gameshints
    I have a Dell laptop that recently "died" (it would get the blue screen of death upon starting) and whose hard drive makes weird cyclical clicking noises. I wanted to see if I could use some tools on my Linux machine to recover the data, so I plugged the drive into it. If I run fdisk I get:

        Disk /dev/sdb: 20.0 GB, 20003880960 bytes
        64 heads, 32 sectors/track, 19077 cylinders
        Units = cylinders of 2048 * 512 = 1048576 bytes
        Disk identifier: 0x64651a0a
        Disk /dev/sdb doesn't contain a valid partition table

    Fine, the partition table is messed up. However, if I run testdisk in an attempt to fix the table, it freezes at this point, making the same cyclical clicking noises:

        Disk /dev/sdb - 20 GB / 18 GiB - CHS 19078 64 32
        Analyse cylinder   158/19077: 00%

    I don't really care about the hard drive working again, just the data, so I ran gpart to figure out where the partitions used to be. I got this:

        dev(/dev/sdb) mss(512) chs(19077/64/32)(LBA) #s(39069696) size(19077mb)
        * Warning: strange partition table magic 0x2A55.
        Primary partition(1)
          type: 222(0xDE)(UNKNOWN)
          size: 15mb #s(31429) s(63-31491)
          chs:  (0/1/1)-(3/126/63)d (0/1/32)-(15/24/4)r
          hex:  00 01 01 00 DE 7E 3F 03 3F 00 00 00 C5 7A 00 00
        Primary partition(2)
          type: 007(0x07)(OS/2 HPFS, NTFS, QNX or Advanced UNIX) (BOOT)
          size: 19021mb #s(38956987) s(31492-38988478)
          chs:  (4/0/1)-(895/126/63)d (15/24/5)-(19037/21/31)r
          hex:  80 00 01 04 07 7E FF 7F 04 7B 00 00 BB 6F 52 02

    So I tried to mount just the old NTFS partition, but got an error:

        sudo mount -o loop,ro,offset=16123904 -t ntfs /dev/sdb /mnt/usb
        NTFS signature is missing.

    Ugh. Okay. But then I tried to get a raw data dump by running:

        dd if=/dev/sdb of=/home/erik/brokenhd skip=31492 count=38956987

    The file only got up to 59885568 bytes, with the same cyclical clicking noises. Obviously there is a bad sector, but I don't know what to do about it! The data is still there: if I view that 57 MB file in a text editor I can see raw data from files. How can I get my data back? Thanks for any suggestions.

    Solution: I was able to recover about 90% of my data:

    - Froze the hard drive in the freezer
    - Used ddrescue to make a copy of the drive
    - Since ddrescue wasn't able to get enough of the drive for testdisk to recover my partitions/file system, I ended up using photorec to recover most of my files
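
    For reference, a minimal GNU ddrescue sketch along the lines of the poster's solution; device names and paths are illustrative, and the image must live on a healthy disk:

        # pass 1: grab the easy areas quickly, without retrying bad blocks
        sudo ddrescue -n /dev/sdb /mnt/good-disk/sdb.img /mnt/good-disk/sdb.map
        # pass 2: go back for the bad areas with direct access and 3 retries
        sudo ddrescue -d -r3 /dev/sdb /mnt/good-disk/sdb.img /mnt/good-disk/sdb.map
        # then work on the image, never the dying drive; partition 2 started at
        # sector 31492, i.e. byte offset 31492 * 512 = 16123904
        sudo mount -o loop,ro,offset=16123904 -t ntfs /mnt/good-disk/sdb.img /mnt/usb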

  • Problems forwarding a zone to another DNS server

    - by sebastian nielsen
    I have an authoritative DNS server at 83.248.21.18 for the domain "finahemgoteborg.se". My registrar requires me to have 2 DNS servers for the domain, so I want the machine 85.228.103.141 to simply forward all incoming queries for "finahemgoteborg.se" to the 83.248.21.18 server. In the 85.228.103.141 BIND server, I have the following config:

        zone "finahemgoteborg.se" in {
            type forward;
            forwarders { 83.248.21.18; };
        };

    But the problem is that 85.228.103.141 still responds with "REFUSED" when queried for, e.g., the www.finahemgoteborg.se A record. How can I fix it? I do NOT want to set up a master/slave situation, just one nameserver that forwards to another.

    Edit - the rest of named.conf:

        options {
            directory "/var/cache/bind";
            version "none";
            allow-recursion { "none"; };
            minimal-responses no;
        };
        zone "sebn.us.to" in {
            type master;
            file "/etc/bind/sebn.us.to";
        };
        zone "ns1sebn.us.to" in {
            type master;
            file "/etc/bind/sebn.us.to";
        };
        zone "ns2sebn.us.to" in {
            type master;
            file "/etc/bind/sebn.us.to";
        };
        zone "finahemgoteborg.se" in {
            type forward;
            forwarders { 83.248.21.18; };
        };
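
    A note on why this configuration answers REFUSED: forwarding is a form of recursion, and this named.conf has allow-recursion { "none"; }, so queries that would be forwarded are refused. A forward zone also never makes the server authoritative, which registrars generally require of a listed nameserver. The conventional fix is a slave zone - ruled out by the poster, but sketched here for completeness:

        zone "finahemgoteborg.se" in {
            type slave;
            masters { 83.248.21.18; };
            file "/var/cache/bind/finahemgoteborg.se.db";
        };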

  • Nginx ignores auth_basic?

    - by Miko
    I have configured nginx to password protect a directory using auth_basic. The password prompt comes up and the login works fine. However, if I refuse to type in my credentials and instead hit Escape multiple times in a row, the page will eventually load without CSS and images. In other words, continuously telling the login prompt to go away will at some point allow the page to load anyway. Is this an issue with nginx, or with my configuration? Here is my virtual host:

        server {
            server_name sub.domain.com;
            root /www/sub.domain.com/;

            location / {
                index index.php index.html;
                root /www/sub.domain.com;
                auth_basic "Restricted";
                auth_basic_user_file /www/auth/sub.domain.com;
                error_page 404 = /www/404.php;
            }

            location ~ \.php$ {
                include /usr/local/nginx/conf/fastcgi_params;
            }
        }

    My server runs CentOS + nginx + php-fpm + xcache + MySQL.
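
    One configuration-level explanation, for reference: requests ending in .php match the location ~ \.php$ block, which carries no auth_basic of its own and does not inherit it from the separate location / block, so the PHP page itself is served unprotected while the static assets under / still demand credentials - which matches the "loads without CSS and images" symptom. Declaring auth at server level, where every location inherits it, closes that hole. A sketch (the fastcgi_pass is assumed to live in the included fastcgi_params):

        server {
            server_name sub.domain.com;
            root /www/sub.domain.com;

            # inherited by every location below
            auth_basic "Restricted";
            auth_basic_user_file /www/auth/sub.domain.com;

            location / {
                index index.php index.html;
            }

            location ~ \.php$ {
                include /usr/local/nginx/conf/fastcgi_params;
            }
        }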

  • Windows Network File Transfer to Samba server: “Are you sure you want to copy this file without its properties?”

    - by jimp
    I am transferring a lot of files to a new NAS based on OpenMediaVault, with the Samba 3.5.6 service running. I am transferring from Windows 7 64-bit to the NAS, and on some media files Windows prompts about losing some property data across the transfer. I have never seen this before when transferring to Samba boxes I have built myself (versus this turnkey solution), so I'm guessing there must be a Samba setting I can change to preserve the file properties in question instead of permanently losing whatever they contain (Date Taken? Exposure? Flash Fired? etc.). Or maybe I've just never encountered this before; I'm really not sure. I tried adding ea support = yes and store dos attributes = yes to the [global] section, but the problem remains. The Linux file system is ext4 mounted with user_xattr (full options: defaults,acl,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0) as Samba requires. Any ideas would be greatly appreciated. Thank you!

    Samba config:

        [global]
            workgroup = WORKGROUP
            server string = %h server
            include = /etc/samba/dhcp.conf
            dns proxy = no
            log level = 2
            syslog = 2
            log file = /var/log/samba/log.%m
            max log size = 1000
            syslog only = yes
            panic action = /usr/share/samba/panic-action %d
            encrypt passwords = true
            passdb backend = tdbsam
            obey pam restrictions = yes
            unix password sync = no
            passwd program = /usr/bin/passwd %u
            passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
            pam password change = yes
            socket options = TCP_NODELAY IPTOS_LOWDELAY
            guest account = nobody
            load printers = no
            disable spoolss = yes
            printing = bsd
            printcap name = /dev/null
            unix extensions = yes
            wide links = no
            create mask = 0777
            directory mask = 0777
            use sendfile = no
            null passwords = no
            local master = yes
            time server = yes
            wins support = yes
            ea support = yes
            store dos attributes = yes

    Note: I found a related question, but it explains the loss as being due to transferring from NTFS to FAT32.
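
    For what it's worth, the properties Windows warns about are typically NTFS alternate data streams and extended attributes riding along with the file. Samba can persist streams in user xattrs via the streams_xattr VFS module - a sketch, under the assumption that OpenMediaVault's Samba build includes the module (the ext4 volume is already mounted with user_xattr):

        [global]
            ea support = yes
            store dos attributes = yes
            vfs objects = streams_xattr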

  • WordPress 3.5 Multisite and nginx siteurl issues

    - by Florin Gogianu
    I'm setting up multisite on localhost in subdirectories. The problem is that when I try to access the dashboard of a site I just created (localhost/wptest/site/wp-admin) I get "This webpage has a redirect loop", and when I try to access the actual website (localhost/wptest/site) the page loads but without assets, such as CSS. When I access the network dashboard, or the primary site dashboard on localhost/wptest, everything is just fine. Also, when I edit the permalink of the second site in the network dashboard to be localhost/site, it also runs fine. How do I make it work with the default permalink structure, localhost/wptest/site?

    The WordPress files are in /usr/share/html/wptest and wp-config.php is as follows:

        define('WP_ALLOW_MULTISITE', true);
        define('MULTISITE', true);
        define('SUBDOMAIN_INSTALL', false);
        define('DOMAIN_CURRENT_SITE', 'localhost');
        define('PATH_CURRENT_SITE', '/wptest/');
        define('SITE_ID_CURRENT_SITE', 1);
        define('BLOG_ID_CURRENT_SITE', 1);

    And the server block / virtual host is like this:

        server {
            ##DM - uncomment following line for domain mapping
            listen 80 default_server;
            #server_name example.com *.example.com ;
            ##DM - uncomment following line for domain mapping
            #server_name_in_redirect off;
            access_log /var/log/nginx/example.com.access.log;
            error_log /var/log/nginx/example.com.error.log;
            root /usr/share/nginx/html/wptest;
            index index.html index.htm index.php;
            if (!-e $request_filename) {
                rewrite /wp-admin$ $scheme://$host$uri/ permanent;
                rewrite ^(/[^/]+)?(/wp-.*) $2 last;
                rewrite ^(/[^/]+)?(/.*\.php) $2 last;
            }
            location / {
                try_files $uri $uri/ /index.php?$args ;
            }
            location ~ \.php$ {
                try_files $uri /index.php;
                include fastcgi_params;
                fastcgi_pass unix:/var/run/php5-fpm.sock;
            }
            location ~* ^.+\.(ogg|ogv|svg|svgz|eot|otf|woff|mp4|ttf|rss|atom|jpg|jpeg|gif|png|ico|zip|tgz|gz|rar|bz2|doc|xls|exe|ppt|tar|mid|midi|wav|bmp|rtf)$ {
                access_log off;
                log_not_found off;
                expires max;
            }
            location = /robots.txt { access_log off; log_not_found off; }
            location ~ /\. { deny all; access_log off; log_not_found off; }
        }

    And finally here's an error log entry:

        2013/06/29 08:05:37 [error] 4056#0: *52 rewrite or internal redirection cycle while internally redirecting to "/index.php", client: 127.0.0.1, server: example.com, request: "GET /nginx HTTP/1.1", host: "localhost"
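
    For reference, the stock multisite rewrites assume WordPress sits at the web root; with the install based at /wptest/, the captured $2 strips the base path, which is consistent with both the redirect loop and the missing assets. A prefix-aware sketch (an assumption to adapt, not a tested drop-in):

        if (!-e $request_filename) {
            rewrite /wp-admin$ $scheme://$host$uri/ permanent;
            rewrite ^/wptest(/[^/]+)?(/wp-.*) /wptest$2 last;
            rewrite ^/wptest(/[^/]+)?(/.*\.php)$ /wptest$2 last;
        }

        location /wptest/ {
            try_files $uri $uri/ /wptest/index.php?$args;
        }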

  • Folder redirection GPO doesn't seem to be working

    - by user57999
    I've been trying to set up roaming profiles and folder redirection, but have hit a bit of a snag with the latter. This is exactly what I've done so far (I have OU permissions and GPO permissions over my division's OU):

    - Created a group called Roaming-Users in the OU 'Groups'
    - Added a single user (testuser) to the group
    - Using the Group Policy Management tool (via RSAT on Windows 7), right-clicked on the Groups OU and selected 'Create a GPO in this domain, and Link it here'
    - Added my Roaming-Users group to the Security Filtering section of the policy
    - Added the Folder Redirection option, specifically for Documents, redirecting to \\myserver\Homes$\%USERNAME%\Documents (Homes$ exists and is sharing-enabled)
    - Right-clicked on the policy under the Groups OU and checked Enforced
    - Logged into a machine as testuser successfully
    - Created a simple text file, saved some gibberish, logged off
    - Remoted into the server with Homes$ on it, and noticed that the directory Homes$\testuser was created but was empty; no text file to be found

    From what I've read, I did everything I ought to, but I can't quite figure out the issue. I had no errors when I logged off about syncing issues (offline files is enabled) or anything, so I can only imagine my file should have ended up on the share. Any ideas?

    EDIT: Using gpresult /R, I confirmed the user is in fact part of the Roaming-Users group, but does not have the policy applied, if that helps.

    EDIT 2: Apparently you can't apply GPOs to groups... so I applied it to users and used the same security filter to limit it to my test user. Nothing happens as far as redirection goes, but I now have the following error in the event log:

        Folder redirection policy application has been delayed until the next logon because the group policy logon optimization is in effect
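
    That last event points at Fast Logon Optimization: folder redirection needs a synchronous (foreground) policy application, which the "Always wait for the network at computer startup and logon" policy forces (Computer Configuration > Administrative Templates > System > Logon). The equivalent registry value is shown below for reference; in practice it is set through the GPO:

        reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows NT\CurrentVersion\Winlogon" /v SyncForegroundPolicy /t REG_DWORD /d 1 /f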

  • ClassNotFoundException returned for all plugins

    - by razumny
    I am trying to use a Java applet (any Java applet), but I always get a message saying "Error. Click for details". When I do so, the pop-up says:

        Application Error
        ClassNotFoundException
        jreVerification.class

    When I click the "Details" button, all I see is the following:

        Java Plug-in 10.7.2.10
        Using JRE version 1.7.0_07-b10 Java HotSpot(TM) Client VM
        User home directory = C:\Users\razumny
        ----------------------------------------------------
        c:   clear console window
        f:   finalize objects on finalization queue
        g:   garbage collect
        h:   display this help message
        l:   dump classloader list
        m:   print memory usage
        o:   trigger logging
        q:   hide console
        r:   reload policy configuration
        s:   dump system and deployment properties
        t:   dump thread list
        v:   dump thread stack
        x:   clear classloader cache
        0-5: set trace level to <n>
        ----------------------------------------------------

    I am running Windows 7 Professional and am up to date on patches. The problem occurs in Google Chrome, Mozilla Firefox and Internet Explorer, regardless of what Java applet I am running. The error quoted above came from http://java.com/en/download/installed.jsp?detect=jre

    I have attempted the following to rectify the issue:

    - Uninstall and reinstall Java
    - Uninstall Java, reboot, install Java
    - Uninstall Java, delete all registry entries, reboot, install Java

    In addition, I have run malware and virus scans, none of which have shown anything of relevance. At this point, I am at my wit's end, and so I turn to you.

  • Ubuntu won't boot from USB memory stick

    - by mackenir
    I used the instructions on this webpage to create a bootable USB drive for running Ubuntu 9.10. Unfortunately it doesn't work on my EeePC. Even with 'Removable Dev.' selected in the BIOS as the first boot device, the PC just boots into Windows 7. How do I troubleshoot this problem? The drive is readable and looks like this:

        Directory of E:\
        28/10/2009  21:14    <DIR>          .disk
        28/10/2009  21:14               222 README.diskdefines
        28/10/2009  21:14               143 autorun.inf
        28/10/2009  21:14    <DIR>          casper
        28/10/2009  21:14    <DIR>          dists
        28/10/2009  21:14    <DIR>          install
        28/10/2009  21:14    <DIR>          syslinux
        28/10/2009  21:14             4,098 md5sum.txt
        28/10/2009  21:14    <DIR>          pics
        28/10/2009  21:14    <DIR>          pool
        28/10/2009  21:14    <DIR>          preseed
        28/10/2009  21:14                 0 ubuntu
        26/10/2009  16:16         1,468,640 wubi.exe
        25/02/2010  00:28     2,147,483,648 casper-rw
                   8 Dir(s)   5,290,307,584 bytes free
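
    If the BIOS is reading the stick but finding no boot code, reinstalling the boot sector and boot flag from a Linux box is one avenue - a sketch assuming the stick shows up as /dev/sdb with a single FAT partition (the mbr.bin path matches the syslinux package of that era):

        # mark the first partition bootable
        sudo parted /dev/sdb set 1 boot on
        # reinstall the syslinux boot sector on the partition
        sudo syslinux /dev/sdb1
        # write a standard MBR to the drive itself
        sudo dd if=/usr/lib/syslinux/mbr.bin of=/dev/sdb bs=440 count=1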

  • Did chkdsk make it harder to restore files?

    - by neyl
    My friend asked me to try to fix his loaded Sansa Clip+, which wasn't playing. After opening it in MSC mode, I discovered that the Music directory was empty and the total of all files was only a few MB, yet the disk properties showed 7 GB in use. I then ran Tools - Error Checking, and Windows dutifully informed me that the disk was corrupt and that I should run the check again, allowing Windows to fix errors. I did that, and it told me everything was fixed and that all files were placed in the FOUND.000 directory. FOUND.000 was about 7.5 GB, containing FILE0000.CHK through FILE1546.CHK. (I am aware of methods like ChkBack to scan and convert to MP3 etc., BUT the original filenames and structure are needed!)

    Now I started getting worried that I had made things worse! I have plenty of experience with data recovery programs - Recuva, Restore My Files, etc. - and I was anyhow planning to use them to scan the drive. But NOW, after chkdsk "fixed" the drive, maybe it modified critical FAT information vital for data recovery. So I ran these programs and got 0 results!! No trace of files! I tried a ton of recovery programs with the same results, TILL EaseUS Data Recovery Wizard found all the files and I purchased the program for $55!

    My question: in your opinion, did running chkdsk with automatic fixing of errors make matters worse (i.e., many data recovery programs didn't find a trace, and they would have done if not for chkdsk), or was the filesystem too corrupt anyhow for regular file recovery programs? If I were a professional, would I be responsible for having run chkdsk with automatic fixing? And do you know of a better data recovery program than EaseUS Data Recovery Wizard? In my experience I haven't found better! Thanks

  • File access restricted after Ubuntu crashed

    - by Tim
    My Ubuntu 8.10 crashed due to CPU overheating while I had a directory open and was about to do some file transfers in Nautilus. After reboot, under GNOME, none of the files can be removed, their properties cannot be viewed, and they can only be opened, although everything is still fine from a terminal. I was wondering why that is and how I can fix it? Thanks and regards.

    Update:

        $ cat /etc/mtab
        /dev/sda7 / ext3 rw,relatime,errors=remount-ro 0 0
        tmpfs /lib/init/rw tmpfs rw,nosuid,mode=0755 0 0
        /proc /proc proc rw,noexec,nosuid,nodev 0 0
        sysfs /sys sysfs rw,noexec,nosuid,nodev 0 0
        varrun /var/run tmpfs rw,nosuid,mode=0755 0 0
        varlock /var/lock tmpfs rw,noexec,nosuid,nodev,mode=1777 0 0
        udev /dev tmpfs rw,mode=0755 0 0
        tmpfs /dev/shm tmpfs rw,nosuid,nodev 0 0
        devpts /dev/pts devpts rw,noexec,nosuid,gid=5,mode=620 0 0
        fusectl /sys/fs/fuse/connections fusectl rw 0 0
        lrm /lib/modules/2.6.27-15-generic/volatile tmpfs rw,mode=755 0 0
        /dev/sda8 /home ext3 rw,relatime 0 0
        /dev/sda2 /windows-c vfat rw,utf8,umask=007,gid=46 0 0
        /dev/sda5 /windows-d fuseblk rw,allow_other,blksize=4096 0 0
        securityfs /sys/kernel/security securityfs rw 0 0
        binfmt_misc /proc/sys/fs/binfmt_misc binfmt_misc rw,noexec,nosuid,nodev 0 0
        gvfs-fuse-daemon /home/tim/.gvfs fuse.gvfs-fuse-daemon rw,nosuid,nodev,user=tim 0 0

  • Restrict SSH user to connection from one machine

    - by Jonathan
    During set-up of a home server (running Kubuntu 10.04), I created an admin user for performing administrative tasks that may require an unmounted home. This user has a home directory on the root partition of the box. The machine has an internet-facing SSH server, and I have restricted the set of users that can connect via SSH, but I would like to restrict it further by making admin accessible only from my laptop (or perhaps only from the local 192.168.1.0/24 range). I currently have only an AllowGroups ssh-users, with myself and admin as members of the ssh-users group. What I want is something that works like you might expect this setup to work (but it doesn't):

        $ groups jonathan
        ... ssh-users
        $ groups admin
        ... ssh-restricted-users
        $ cat /etc/ssh/sshd_config
        ...
        AllowGroups ssh-users [email protected].*
        ...

    Is there a way to do this? I have also tried the following, but it did not work (admin could still log in remotely):

        AllowUsers [email protected].* *
        AllowGroups ssh-users

    with admin a member of ssh-users. I would also be fine with only allowing admin to log in with a key and disallowing password logins, but I could find no general setting for sshd; there is a setting that requires root logins to use a key, but not one for general users.
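
    Two sshd_config details answer most of this: AllowGroups entries are plain group names (user@host patterns are not matched there), and a bare * in AllowUsers admits every user, which is why both attempts failed. A sketch that should behave as intended:

        # AllowUsers does understand user@host patterns; no wildcard entry
        AllowUsers jonathan [email protected].*

        # and/or key-only logins for admin (Match blocks need OpenSSH 4.4+)
        Match User admin
            PasswordAuthentication no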

  • Start & shut down Tomcat as a non-root user

    - by user53864
    How do I start, shut down and restart Tomcat as a non-root user? I have Tomcat 5.x installed on an Ubuntu machine (/usr/share/tomcat). The tomcat directory has full permissions. When I shut down (/usr/share/tomcat/bin/shutdown.sh) or start up (/usr/share/tomcat/bin/startup.sh) Tomcat as a normal user, it starts and shuts down normally, as if it were executed as root, but I cannot access the webpage even after starting Tomcat as the non-root user.

        user1@station2:/usr/share/tomcat/webapps$ ../bin/shutdown.sh
        Using CATALINA_BASE:   /usr/share/tomcat
        Using CATALINA_HOME:   /usr/share/tomcat
        Using CATALINA_TMPDIR: /usr/share/tomcat/temp
        Using JRE_HOME:        /usr
        4 Oct, 2010 6:41:11 PM org.apache.catalina.startup.Catalina stopServer
        SEVERE: Catalina.stop:
        java.net.ConnectException: Connection refused
            at java.net.PlainSocketImpl.socketConnect(Native Method)
            at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:310)
            at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:176)
            at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:163)
            at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:384)
            at java.net.Socket.connect(Socket.java:546)
            at java.net.Socket.connect(Socket.java:495)
            at java.net.Socket.<init>(Socket.java:392)
            at java.net.Socket.<init>(Socket.java:206)
            at org.apache.catalina.startup.Catalina.stopServer(Catalina.java:421)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
            at java.lang.reflect.Method.invoke(Method.java:616)
            at org.apache.catalina.startup.Bootstrap.stopServer(Bootstrap.java:337)
            at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:415)
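
    For what it's worth, that stack trace just means shutdown.sh could not reach the shutdown port (8005 by default), i.e. no Tomcat was listening at that moment. A common arrangement, sketched under the assumption of a dedicated account named tomcat:

        # run the scripts as the non-root owner of the tomcat tree
        sudo -u tomcat /usr/share/tomcat/bin/startup.sh
        sudo -u tomcat /usr/share/tomcat/bin/shutdown.sh

        # non-root processes cannot bind ports below 1024; keep the HTTP
        # connector on 8080 and forward port 80 to it if required
        sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8080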

  • Cannot copy files after installing Windows

    - by Ali
    I am experiencing a weird problem. I was running Xubuntu on my laptop until yesterday, when I had to delete Xubuntu and install Windows. I had an NTFS partition on my Xubuntu system where I kept some files. Today, after installing Windows, I wanted to move all the files from that partition to an external HDD. I selected all files and folders and clicked Copy, then went to the HDD and clicked Paste, but nothing happened. I cannot do it, and I do not know why. I copy the files, and wherever I click Paste, nothing happens. If I try to copy the files and folders one by one, I can copy some of them, but some do not move. The other problem is that I cannot open some files, in particular PDF files. When I click on PDF files I get this error:

        There was an error opening this document. This file cannot be found.

    Also, I cannot play some MP4 files, and I cannot open some JPG and TXT files. For those I get this error:

        The directory name is invalid.

    So, in summary, after removing Xubuntu and installing Windows 7, I have the following problems with one of the NTFS partitions on my internal drive:

    - Cannot copy or cut all folders and files from that partition to any other partition; I also do not get any errors
    - Can copy some folders and files
    - Cannot access some PDF, JPEG, TXT and MP4 files, and get the above errors

    I should also mention I did not change anything on this partition during the installation or while formatting the other partitions.
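
    Given that the same partition survived a reinstall, checking the volume for filesystem damage is a reasonable first step - a sketch assuming the troublesome partition is D::

        chkdsk D: /f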

  • HTTPS request to a specific load-balanced virtual host (using Shibboleth for SSO)?

    - by Gary S. Weaver
    In one environment, we have three servers load balanced, each with a single Tomcat instance, fronted by two different Apache virtual hosts. Each of those two virtual hosts (served by all three servers) has its own load balancer. Internally, the first host (we'll call it barfoo) is served on port 443 (HTTPS) with its cert, and the second host (we'll call it foobar) is served on port 1443 (HTTPS). When you hit foobar, it goes to the load balancer, which uses IP affinity for that host, so you can easily test login/HTTPS on one of the servers serving foobar, but not the others (because you keep getting the same server for the lifetime of the LB session, iirc). In addition, each of the servers uses Shibboleth v2 for authN/SSO, using mod_shib (iirc). So, a normal request to foobar hits the LB, is directed to the 3rd server (and will be from then on for as long as the LB session lasts), then Apache, then the Shibboleth SP, which looks at the request and makes you log in via negotiation with the Shibboleth IdP; then you hit Apache again, which in turn hits Tomcat, renders, and returns the response. (I'm leaving out some steps there.)

    We'd like to hit one of the individual servers (foobar-03.acme.org, which we'll say has IP 1.2.3.4) via HTTPS, skipping the load balancer, so we at first tried putting this in /etc/hosts:

        1.2.3.4 foobar.acme.org

    But since foobar.acme.org is a secondary virtual host running on 1443, it attempts to get barfoo.acme.org rather than foobar.acme.org at port 1443 and sees that the cert for barfoo.acme.org is invalid for this case, since it doesn't match the request's host, foobar.acme.org. I thought an SSH tunnel might be easy enough, so I tried:

        ssh -L 7777:foobar-03.acme.org:1443 [email protected]

    I tried just hitting https://localhost:7777/webappname in a browser, but when the Shibboleth login is over, it again tries to redirect to barfoo.acme.org, which is the default host for 443, and we get into an infinite redirect loop. I then tried setting up an SSH tunnel with privileged port 443 locally going to port 1443 of foobar-03.acme.org:

        sudo ssh -L 443:foobar-03.acme.org:1443 [email protected]

    I also edited /etc/hosts to add:

        127.0.0.1 foobar.acme.org

    This finally worked, and I was able to get the browser to hit the individual HTTPS host at https://foobar.acme.org/webappname, bypassing the load balancer. This was a bit of a pain and wouldn't work for everyone, due to the requirement to use the local port 443 and to ssh to the server. Is there an easier way to browse to and log into an individual host in this case?

  • Configure Domino to use SMTP routing and hMailServer

    - by Sébastien Lachance
    I have been trying for a couple of days to set up a Domino 8.5 server. Basically, I want everything to run inside a local network. Right now I can send email to other users in the Domino directory without any mail address. I am pretty new to all this stuff, so maybe the answer will be really obvious. What I need is to be able to send a mail from somewhere else to a Domino user and have it redirected to his account. On the Domino server, I also have hMailServer installed on port 25, so I configured Domino to use port 26. I followed these steps to get where I am now:

    - I set the fully qualified Internet host name to "preview.notes".
    - I changed the SMTP Listener task to Enabled, to turn on the Listener so that the server can receive messages routed via SMTP.
    - I set up SMTP routing within the local Internet domain (http://www.h2l.com/help/help85%5Fadmin.nsf/f4b82fbb75e942a6852566ac0037f284/7f9738a49efc4f58852574d500097b01?OpenDocument).
    - I modified the person document to use the [email protected] address.
    - I used hMailServer (which has the local "preview.local" domain name) to send mail to [email protected].

    When sending mail I get an error telling me that the DNS is not set up correctly. Will using the Domino SMTP server instead of hMailServer solve the problem? I can telnet to the Domino SMTP server.

  • BIND zones and named files

    - by preethika
    I've installed BIND on my Windows Server 2003 machine. I've configured the named file in C:\named\etc\named.conf as:

        options {
            directory "c:\named\zones";
            allow-transfer { none; };
            recursion no;
        };
        zone "tisdns.com" IN {
            type master;
            file "db.tisdns.com.txt";
            allow-transfer { none; };
        };

    My zone file is configured in C:\named\zones\db.tisdns.com.txt as:

        $TTL 6h
        @   IN SOA ns1.tisdns.com. hostmaster.tisdns.com. (
                2010010901
                10800
                3600
                604800
                86400 )
        @    NS ns1.tisdns.com.
        ns1  IN A 192.168.0.17
        mug  IN A 192.168.0.103

    The configuration also includes:

        key "rndc-key" {
            algorithm hmac-md5;
            secret "M0oW24WFQZhMu9wTq8qepw==";
        };
        controls {
            inet 127.0.0.1 port 53 allow { 127.0.0.1; } keys { "rndc-key"; };
        };

    In the above I've given the domain the name "tisdns". I want to create a new domain name in a different zone file. How can I create it?
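
    Adding a second domain is just another zone stanza in named.conf plus its own zone file - a sketch with newdomain.com as a placeholder:

        zone "newdomain.com" IN {
            type master;
            file "db.newdomain.com.txt";
            allow-transfer { none; };
        };

    and C:\named\zones\db.newdomain.com.txt:

        $TTL 6h
        @   IN SOA ns1.newdomain.com. hostmaster.newdomain.com. (
                2010010901 ; serial
                10800      ; refresh
                3600       ; retry
                604800     ; expire
                86400 )    ; negative-caching TTL
        @    NS ns1.newdomain.com.
        ns1  IN A 192.168.0.17

    followed by a reload (rndc reload, or restarting the service).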

  • Immediate logout after login with PAM, Kerberos, and LDAP

    - by Dylan Klomparens
    I've set up remote login on a computer using Kerberos and LDAP. I've also configured NFS to mount onto /home so that a user's home directory is the same wherever they log in. Kerberos authentication seems to work fine: I can get a ticket using kinit user1 (assuming user1 is a remote user) and see the ticket with klist. I'm pretty sure LDAP is working, because I see the proper output from getent passwd, which lists all the remote users. The contents of /home are present when I list the files. The problem is: when I try to log in as a remote user, the session is immediately ended. Why is it not letting me stay logged in? Here is the output from /var/log/messages after a login attempt:

        Oct  9 10:57:53 tophat login[6472]: pam_krb5[6472]: authentication succeeds for 'user1' ([email protected])
        Oct  9 10:57:53 tophat login[6472]: pam_krb5[6472]: pam_setcred (establish credential) called
        Oct  9 10:57:53 tophat login[6472]: pam_krb5[6472]: pam_setcred (delete credential) called

    EDIT: The distro is openSUSE. Here are the common-* files in /etc/pam.d:

        # /etc/pam.d/common-account
        account required   pam_unix.so

        # /etc/pam.d/common-auth
        auth    sufficient pam_krb5.so minimum_uid=1000
        auth    required   pam_unix.so nullok_secure try_first_pass

        # /etc/pam.d/common-session
        session optional   pam_umask.so umask=002
        session sufficient pam_krb5.so minimum_uid=1000
        session required   pam_unix.so

    There doesn't appear to be a /var/log/auth.log file nor a /var/log/secure file.
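
    One observation, offered as an assumption rather than a diagnosis: common-account consults only pam_unix, and a user who exists solely in Kerberos/LDAP can fail the account phase even though authentication succeeded, which ends the session immediately. A sketch of the corresponding change:

        # /etc/pam.d/common-account
        account sufficient pam_krb5.so minimum_uid=1000
        account required   pam_unix.so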

  • Legitimacy of the tasks in the Task Scheduler

    - by Eyad
    Is there a way to know the source and legitimacy of the tasks in the Task Scheduler in Windows Server 2008 and 2003? Can I check whether a task was added by Microsoft (e.g., by SCCM) or by a third-party application? For each task in the Task Scheduler, I want to verify that the task has not been created by a third-party application: I only want to allow standard Microsoft tasks and disable all other non-standard tasks. I have created a PowerShell script that goes through all the XML files in the C:\Windows\System32\Tasks directory, and I was able to read all the XML task files successfully, but I am stuck on how to validate the tasks. Here is the script for your reference:

        Function TaskSniper() {
            # Getting all the files in the Tasks folder
            $files = Get-ChildItem "C:\Windows\System32\Tasks" -Recurse | Where-Object { !$_.PSIsContainer }
            [Xml] $StandardXmlFile = Get-Content "Edit Me"

            foreach ($file in $files) {
                # Constructing the file path
                $path = $file.DirectoryName + "\" + $file.Name
                # Reading the file as an XML doc
                [Xml] $xmlFile = Get-Content $path

                # DS SEE: http://social.technet.microsoft.com/Forums/en-US/w7itprogeneral/thread/caa8422f-6397-4510-ba6e-e28f2d2ee0d2/
                # (Get-AuthenticodeSignature C:\Windows\System32\appidpolicyconverter.exe).Status -eq "valid"

                # Display something
                $xmlFile.Task.Settings.Hidden
            }
        }

    Thank you
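
    One way to push the script further, sketched here as an assumption rather than a complete validator: pull each task's action out of the XML (Task/Actions/Exec/Command) and check the Authenticode signature of the executable it runs, which at least separates Microsoft-signed binaries from everything else:

        # inside the foreach loop, after $xmlFile is loaded
        $exe = $xmlFile.Task.Actions.Exec.Command
        if ($exe) {
            $exe = [Environment]::ExpandEnvironmentVariables($exe.Trim('"'))
            $sig = Get-AuthenticodeSignature -FilePath $exe
            "{0}`t{1}`t{2}" -f $file.Name, $sig.Status, $sig.SignerCertificate.Subject
        }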

  • Calling Excel from PHP 5 through COM fails on Windows 7 when Apache is started through Task Planner

    - by Stefan Pantke
    I'm currently writing an application which controls Excel through COM: the app creates a COM-based Excel instance, opens some XLS files and reads their contents.

    Scenario I: On Windows 7, I start Apache and MySQL using xampp-control with system administrator rights. All works as expected: the PHP-based controller script interacts with Excel as expected.

    Scenario II: A problem appears if I start Apache and MySQL as 'background jobs'. Here is how: I created two jobs using the Windows 7 Task Planner, one running apache_start.bat, the other running mysql_start.bat. Both tasks run as SYSTEM with elevated privileges when Windows 7 boots. Apache and MySQL work as expected: Apache serves HTTP requests from clients and PHP is able to talk to MySQL. But when I call the PHP controller, which calls and interacts with Excel using COM, I receive an error. The error seems to come from Excel [not COM itself] and reads like this:

    - Excel can't read the XLS file
    - Excel failed to save the file due to an ill-named worksheet

    Interestingly, during the first run of the PHP-based controller script, it takes a few seconds to render the error message; each subsequent run renders the error message immediately. The Windows system logs don't show a single problem report entry. Note that the PHP program and the Apache instance didn't change - except for the way Apache was started. The PHP controller script is perfectly able to read the file system, since it provides the paths to the XLS files through scandir() of a certain directory. Concurrency issues can't be the cause of the problem: a single instance of the specific PHP controller interacts with Excel.

    Question: Could someone provide details on why this happens?
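
    A widely reported workaround for Office COM automation under SYSTEM/service sessions - an assumption here, not documented API behavior - is that Excel expects a Desktop folder inside the system profile and misbehaves with exactly this kind of spurious file error when it is absent:

        mkdir C:\Windows\System32\config\systemprofile\Desktop
        mkdir C:\Windows\SysWOW64\config\systemprofile\Desktop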

  • Blank list of Windows services

    - by Joe
    Recently, when I open Windows Services (always as administrator), I get a blank list of services. When I try to click on one of the empty lines, I get a "Script Error" message, and this happens over and over again; after several times I restarted my computer. I can't pinpoint exactly when this started happening or whether I made any specific changes to my computer at that time. Someone told me to try running sfc /scannow as administrator, but when I try that, the scan stops at 34% and I get the message:

        Windows Resource Protection could not perform the requested operation.

    I am running Windows 7 Enterprise 64-bit, and I would really like to avoid reinstalling Windows. Does anyone know how to fix this?

    Edit - here is another attempt I made and some more information that might help: following WhoIsRich's suggestion, I tried the command sfc /scannow /offbootdir=c:\ /offwindir=c:\windows. This gave the error message "The arguments passed to sfc are invalid. The offline windows directory specified points to the online system", and then I realized this command is meant to be run after booting from another system. Since I don't have my Windows installation disk right now, I used my own system to create a recovery disk, restarted my computer and used the recovery disk to boot. I then ran the above command and got the following message: "Windows Resource Protection found corrupt files but was unable to fix some of them. Details are included in the CBS.log". I then restarted my computer and let it boot up normally. The problem with Windows Services persists, and the CBS.log file is a long log file with many entries; I don't know if there is useful information in it, and if there is, how to find it.
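
    The services snap-in renders its list view with the Windows script engines, and a commonly suggested fix for a blank MMC console plus "Script Error" - an assumption worth trying before anything drastic - is re-registering them from an elevated prompt:

        regsvr32 jscript.dll
        regsvr32 vbscript.dll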

  • On Windows 2008 R2, how do I back up DHCP if the DHCP .mdb database is always busy?

    - by johnny
    I get this from my backup software:

        C:\WINDOWS\system32\dhcp\dhcp.mdb   : The process cannot access the file because it is being used by another process.
        C:\WINDOWS\system32\dhcp\j50.log    : The process cannot access the file because it is being used by another process.
        C:\WINDOWS\system32\dhcp\j50tmp.log : The process cannot access the file because it is being used by another process.
        C:\WINDOWS\system32\dhcp\tmp.edb    : The process cannot access the file because it is being used by another process.

    My questions:

    - Should I be doing a manual backup of DHCP via command-line tools, or maybe with MMC (Action > Backup), before I run my backup?
    - Is the %SystemRoot%\System32\DHCP\Backup directory (which does get backed up by the backup software) always kept up to date? Answering my own question here: the registry key is set to 3c hex, i.e. 60 minutes, I believe:

        HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\DHCPServer\Parameters\BackupInterval

    This is not the backup software included with Windows; it is another product, but I have seen this with every backup software I've ever used.
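
    For reference, the DHCP server can be asked for a consistent copy while running, giving the backup job a quiescent directory to pick up - a sketch with an illustrative target path:

        rem point-in-time backup of the live database
        netsh dhcp server backup "C:\dhcp-backup"
        rem or a full configuration export
        netsh dhcp server export C:\dhcp-backup\dhcp.txt all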

  • Find and Replace String in filenames

    - by shekhar
    I have thousands of files with no specific extensions. What I need to do is search for a string in the filename and replace it with another string, then search for a second string and replace it with another string, and so on; i.e., I have multiple strings to replace with other multiple strings. It may be like:

    - "abc" in a filename is replaced with "def" (the string "abc" may be in many files)
    - "jkl" in a filename is replaced with "srt" (the string "jkl" may be in many files)
    - "pqr" in a filename is replaced with "xyz" (the string "pqr" may be in many files)

    I am currently using an Excel macro to get the file names into Excel, preserving the original names in one column and building the replaced names in another column. Then I create a batch file of renames, like:

        rename Path\OriginalName1 NewName1
        rename Path\OriginalName2 NewName2

    The problem with this procedure is that it takes a lot of time, as there are many files, and since I am using Excel 2003 there is a limitation on the number of rows as well. I need a batch-like script, e.g.:

        replacestr abc with def
        replacestr pqr with xyz

    in a single directory. Would it be better to do this in a Unix shell script?
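
    A shell version is short - a sketch for bash 4+ run inside the directory in question; the mapping table is the illustrative one from above:

        #!/bin/bash
        # substring -> replacement pairs
        declare -A map=( [abc]=def [jkl]=srt [pqr]=xyz )

        for old in "${!map[@]}"; do
            new=${map[$old]}
            for f in *"$old"*; do
                [ -e "$f" ] || continue            # pattern matched nothing
                mv -n -- "$f" "${f//$old/$new}"    # -n: never overwrite
            done
        done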

  • Logging won't stop on a log file after renaming/moving it... how do I stop it?

    - by Jakobud
    I just discovered that logrotate is not rotating our firewall log, so it's now up to 12 GB in size. I need to split the file into smaller chunks and start manually rotating them so I can get things back on track. However, before I start splitting the file up, I need to stop the firewall from logging to the current log file and force it to start logging to a new empty file. That way I'm not trying to split up or rotate a log file that is still constantly growing. I tried to simply do this:

        mv firewall firewall.old
        touch firewall

    I expected the new empty firewall file to start growing in size, but no... firewall.old is still being logged to. Then I tried stopping and starting iptables. No change; firewall.old is still the log file. I tried moving it to another directory. That didn't help. I tried stopping iptables, renaming the file, creating a new firewall file, and then starting iptables again, but no change. How do I stop the logging on this file and force it to start logging to a new file?
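
    The reason the rename changes nothing: the kernel firewall code doesn't write the file itself - the syslog daemon does, and it holds the file open by inode, so the old file keeps growing no matter what it is called. The daemon has to be told to reopen its outputs - a sketch; the daemon name and pid file vary by distro:

        mv /var/log/firewall /var/log/firewall.old
        touch /var/log/firewall
        # make syslog reopen its log files
        kill -HUP $(cat /var/run/syslogd.pid)    # or: /etc/init.d/rsyslog restart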
