Search Results

Search found 10931 results on 438 pages for 'struts config'.

Page 348/438 | < Previous Page | 344 345 346 347 348 349 350 351 352 353 354 355  | Next Page >

  • Issue with SSH on Ubuntu - Local connection ok, remote connection - Is it me or my ISP?

    - by Benjamin
    I have an issue with a server running Ubuntu 12.04. I am trying to set up remote access so I can reach the server at my work from out of town. I have installed the SSH server and reassigned the default port from 22 to 3399. A local connection from any OS works on the 192.168... address, but I cannot connect on the actual public IP address. I believe my configuration is correct, and I have attached it; if I have done something wrong in the config, please tell me and I will change it. I honestly think the router my ISP provided is horrible, and although the SSH port is forwarded, it might be blocking inbound traffic. Is there anything I can try to verify this? /var/log/auth does not show any errors when I connect via our static IP. All values not commented out in sshd_config are included below:
        Port 3399
        ListenAddress 0.0.0.0
        Protocol 2
        HostKey /etc/ssh/ssh_host_rsa_key
        HostKey /etc/ssh/ssh_host_dsa_key
        HostKey /etc/ssh/ssh_host_ecdsa_key
        UsePrivilegeSeparation yes
        KeyRegenerationInterval 3600
        ServerKeyBits 768
        SyslogFacility AUTH
        LogLevel INFO
        LoginGraceTime 120
        PermitRootLogin yes
        StrictModes yes
        UseDNS no
        RSAAuthentication yes
        IgnoreRhosts yes
        RhostsRSAAuthentication no
        HostbasedAuthentication no
        PermitEmptyPasswords no
        ChallengeResponseAuthentication no
        PasswordAuthentication yes
        GSSAPIAuthentication no
        X11Forwarding yes
        X11DisplayOffset 10
        PrintMotd no
        PrintLastLog yes
        TCPKeepAlive yes
        AcceptEnv LANG LC_*
        Subsystem sftp /usr/lib/openssh/sftp-server
        UsePAM yes
    Am I doing this wrong? (port forwarding screenshot attached)
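    One way to check whether the router or ISP is dropping inbound traffic - sketched here with placeholder addresses; the port is the 3399 from the question - is to confirm sshd is listening locally, then probe the forwarded port from a host outside the network while watching for the probe on the server:
        # on the server: confirm sshd is listening on the new port
        sudo ss -tlnp | grep 3399
        # from outside the network (e.g. a phone tether): probe the forwarded port
        nc -zv <public-ip> 3399
        # on the server: watch whether the probe arrives at all
        sudo tcpdump -ni any tcp port 3399
    If the probe never shows up in tcpdump, the traffic is being dropped upstream of the server (router or ISP) rather than by sshd.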

    Read the article

  • Wordpress on Apache is redirecting all https to http

    - by Krist van Besien
    I have a problem with a WordPress site on a server I admin; I don't know much about WordPress itself. The problem is that we want the site to be accessed over https, but somehow all requests to https:// URLs are answered by the server with a 302 redirecting to http. The WordPress site itself is configured to use https, and the links in the generated pages are all https links. In the Apache config there are no rewrite rules and no redirects, yet any request to an https:// URL is answered with a redirect to the equivalent http URL. I would really like to know where these redirects are coming from and what is generating them. I've increased the log level on the web server to DEBUG, but got no information there. I tried to enable debug logging in WordPress per the recipe here: http://codex.wordpress.org/Debugging_in_WordPress but did not get a debug.log file in the directory where one should appear. I'm really at a loss here and need to fix this urgently. Any hints as to where to start looking? Apache is 2.2.14 on Ubuntu. There are several other virtual hosts on this server using PHP and https without any problem...
    Edit: I created a small info.php script and dropped it in the web server's document root. Calling it yields the output of the script and no redirect is generated, which suggests it's not the web server but WordPress that is doing it. A second thing I noticed is that the redirect comes with several cookies, one of which has "httponly" set. Could that be it?
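    A quick way to narrow this down (a sketch only; the domain, database name, and wp_ table prefix are placeholders) is to look at exactly who emits the 302 and what URLs WordPress has stored, since a mismatched siteurl/home is a common source of these redirects:
        # see the redirect and its headers without a browser
        curl -sIk https://example.com/ | egrep -i 'HTTP/|location|set-cookie|x-'
        # check the URLs WordPress thinks it lives at
        mysql -e "SELECT option_name, option_value FROM wp_options WHERE option_name IN ('siteurl','home');" wordpressdb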

    Read the article

  • linux shutdown hang with wifi cifs mounts

    - by Sirex
    Since Fedora 15 (and now with 16) it seems that wireless clients take a long while to shut down when they have network filesystems mounted at shutdown time. I've pushed out a cifs mount via puppet, and all clients have it, including those on wireless. If a laptop is on a wired connection it shuts down just fine, but if it is on the wifi at the time (and has no wired connection) it hangs at the Fedora "f" logo. I'm not sure if it hangs indefinitely or just for a really long while, but I'll test that when I shut this machine down in a moment. Needless to say it's pretty annoying. Is there a way of making the machine shut down even if network connectivity has been lost at unmount time, or an official way to reorder events so the wireless card is kept up until after the unmount happens during the shutdown process (short of writing a custom shutdown script, which is a bit of a kludge)? It does this on multiple machines, and they all started doing it when we went from Fedora 14 to 15. It was such an obvious issue that I'd assumed someone must have reported it or there was an easy fix, but I've not discovered anything yet.
    Additional info: I can confirm that manually unmounting the mounts and then shutting down (sudo shutdown or the Xfce shutdown button) works fine; it only hangs if the mounts are still mounted. The puppet config that sets up the mount looks like this (now with the _netdev entry, which is indeed pushed to clients successfully but makes no difference):
        file { "/mnt/share":
            ensure => directory,
        }
        mount { "/mnt/share":
            atboot   => true,
            ensure   => mounted,
            remounts => false,
            fstype   => cifs,
            device   => "//srv/share",
            options  => "user,gid=shareusers,uid=${user},file_mode=0700,dir_mode=0700,credentials=/root/.smbcreds,_netdev",
            require  => [ File["/mnt/share"], Group["shareusers"] ],
        }

    Read the article

  • Software for RAID Failure Alerts?

    - by QF_Developer
    I have two 256 GB Samsung 840 Pro SSDs in a RAID 1 array and would like to receive a notification if one of the disks in the array fails. Can anybody recommend an application I can install on the server to send an email if such an event occurs? Some additional specs: Supermicro X9SCM-IIF motherboard using the hardware RAID controller; OS is Windows Server 2012 Standard. Also, is it possible to simulate a disk failure by pulling a drive out of the bay? SSDs appear to fail close together when in a mirrored config, so I'd like to know ASAP if one goes down so I can swap them out with minimum delay.
    Update, 26 June 2013: None of the software that ships with the Supermicro X9SCM-* motherboards offers support for RAID monitoring. As has been pointed out here, these boards are built on an Intel chipset for RAID, so I installed Intel Rapid Storage Technology, which supports automated email notifications on RAID failure: http://www.intel.com/support/chipsets/imsm/sb/cs-020784.htm One small issue: the software only allows you to send email notifications without SMTP authentication. There's a bunch of different workarounds here: http://communities.intel.com/thread/30771

    Read the article

  • IP Masquerade and forwarding

    - by poelinca
    Hi all, I have a dedicated server running Ubuntu Server 10.10 with 3 IP addresses on the same eth card (for example: eth0 192.168.0.1, eth0:0 188.78.45.0, eth0:1 ...) and 3 virtual machines running (the virtualization technology used is lxc, but I don't think that matters much). I need to redirect all open ports (I use ufw to open/close ports) from the IP 188.78.54.0 (eth0:0) to a virtual machine IP (let's say 192.168.2.3), and all requests made by that virtual machine should be redirected back through the same address. Say the second VM has the IP 192.168.2.4; I then need to redirect all open ports on eth0:1 to this IP, and vice versa, and so on. What are the iptables/ufw rules to get this done, and where do I save them (which config file) so they stay the same after a reboot? In a few words: redirect all requests coming from/to eth0:0 to a certain IP, all requests coming from/to eth0:1 to another IP, and so on. Remember I say "all open ports" because they might be changed dynamically. P.S. Please excuse my bad English.
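    A minimal sketch of the usual 1:1 NAT approach with iptables (addresses are the ones from the question; adjust to taste, and note ufw keeps its own rules files, so mixing the two needs care):
        # allow the host to forward traffic to the containers
        sysctl -w net.ipv4.ip_forward=1
        # forward everything arriving on the public alias to the VM (DNAT)
        iptables -t nat -A PREROUTING  -d 188.78.45.0 -j DNAT --to-destination 192.168.2.3
        # rewrite the VM's outbound traffic so it leaves from the same public IP (SNAT)
        iptables -t nat -A POSTROUTING -s 192.168.2.3 -j SNAT --to-source 188.78.45.0
        # persist across reboots (one common way on Ubuntu 10.10)
        iptables-save > /etc/iptables.rules
        # then in /etc/network/interfaces, under eth0:
        #   pre-up iptables-restore < /etc/iptables.rules
    Repeat the DNAT/SNAT pair per public alias and VM.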

    Read the article

  • Skipping nginx PHP cache for certain areas of a site?

    - by DisgruntledGoat
    I have just set up a new server with nginx (which I am new to) and PHP. On my site there are essentially 3 different types of files: static content like CSS, JS, and some images (most images are on an external CDN); the main PHP/MySQL database-driven website, which essentially acts like a static site; and a dynamic PHP/MySQL forum. It is my understanding from this question and this page that the static files need no special treatment and will be served as fast as possible. I followed the answer from the above question to set up caching for PHP files, and now I have a config like this:
        location ~ \.php$ {
            try_files $uri =404;
            fastcgi_cache one;
            fastcgi_cache_key $scheme$host$request_uri;
            fastcgi_cache_valid 200 302 304 30m;
            fastcgi_cache_valid 301 1h;
            include /etc/nginx/fastcgi_params;
            fastcgi_pass unix:/var/run/php-fastcgi/php-fastcgi.socket;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME /srv/www/example$fastcgi_script_name;
            fastcgi_param HTTPS off;
        }
    However, now I want to prevent caching on the forum (either for everyone or only for logged-in users - I haven't checked whether the latter is feasible with the forum software). I've heard that "if is evil" inside location blocks, so I am unsure how to proceed. With an if inside the location block I would probably add this in the middle:
        if ($request_uri ~* "^/forum/") {
            fastcgi_cache_bypass 1;
        }
        # or possibly this, if I'm able to cache pages for anonymous visitors
        if ($request_uri ~* "^/forum/" && $http_cookie ~* "loggedincookie") {
            fastcgi_cache_bypass 1;
        }
    Will that work fine, or is there a better way to achieve this?
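    One common way to avoid if here (a sketch; the cookie name is a placeholder) is to compute a bypass flag with map at the http level and reference it inside the existing PHP location:
        # in the http {} block
        map $request_uri $no_cache {
            default     0;
            ~^/forum/   1;
        }
        # inside the existing "location ~ \.php$" block
        fastcgi_cache_bypass $no_cache;
        fastcgi_no_cache     $no_cache;
    The same map could key on $cookie_loggedincookie instead of the URI if only logged-in forum users should bypass the cache.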

    Read the article

  • How can I get multiple video cards to work on linux?

    - by user17943
    I installed Fedora 12. I have 2 ATI cards that I used on Windows to run 4 monitors. A recurring problem has been getting them both detected in Linux: only my secondary card is picked up, and when I manage the displays it detects the 2 monitors connected to that card. What are the specific steps I should take to get the second card detected? Supposedly there is a tool called system-config-xfree, but I don't have it and yum can't find it. I also heard it has something to do with editing an xorg.conf file or something to that effect. I have absolutely no idea how to find the "bus id" of my card, or look up the horizontal refresh rates, etc. I would probably have no problem following the documentation and editing the file if I knew a good way to find these values. Someone also suggested installing Linux twice, saving the xorg.conf it generates each time (with a different card each time), and then merging the two by hand. That is like killing a fly with a hammer though; when I do this again in the future it would be nice not to have to take twice as long. Thanks
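    A sketch of finding the bus IDs and wiring them into xorg.conf (the BusID values and driver name below are placeholders; take the real addresses from the lspci output):
        # list the graphics cards and their PCI addresses
        lspci | grep -i vga
        # e.g. "01:00.0 VGA compatible controller: ATI ..." becomes BusID "PCI:1:0:0"
        # then in /etc/X11/xorg.conf, one Device section per card:
        Section "Device"
            Identifier "Card0"
            Driver     "radeon"
            BusID      "PCI:1:0:0"
        EndSection
        Section "Device"
            Identifier "Card1"
            Driver     "radeon"
            BusID      "PCI:2:0:0"
        EndSection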

    Read the article

  • Why does lighttpd keep static files in cache, even when modified on disk ?

    - by Pixelastic
    I am using lighttpd to serve static files. I have a bunch of images in a directory that I regularly update. This changes the file content (and file size) as well as the modification date, but not the filenames. When I access the files over HTTP, the updates are not taken into account and lighty serves the old file. If I manually rename a file to something different, lighttpd returns a 404 error, and if I rename the file back I get the correct, updated version. It seems like lighty is using some kind of cache mechanism of its own (which is fine) to return static files. Unfortunately, that mechanism doesn't seem to update itself when files are modified. I checked with Wireshark, and my browser really is making a request for the file; this is not a browser caching issue. Lighttpd returns a 200 OK when requesting from an empty cache, and a 304 Not Modified otherwise, as expected, but the file is returned with a wrong Last-Modified header that does not reflect the real last modification date. Maybe there is some config directive that I am not aware of? I would like the files returned by lighty to reflect the changes made on disk directly, or at least to be able to invalidate its cache.
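    This behaviour is often the stat cache; a sketch of the relevant lighttpd.conf directive (check the docs for the engines your lighttpd version supports):
        # lighttpd caches stat() results; "disable" makes it re-stat files on every request,
        # "simple" (the default) caches them briefly, "fam" invalidates on filesystem change
        server.stat-cache-engine = "disable"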

    Read the article

  • Size of modules within initrd

    - by LiKao
    I am currently trying to manually replace the kernel within ubuntu-core on an embedded device with a custom kernel. However, when I update the initrd, it becomes much bigger. Here is what I did: extract the initrd that was shipped with Ubuntu; make a list of all modules within the old initrd; take the same modules from the new module directory at /lib/modules/new_kernel_version; add these modules to the initrd and remove the old ones. If I do this, my initrd becomes quite a bit bigger than the original one, so I checked the individual modules, picking the btrfs.ko filesystem driver as an example. Comparing the two, the one I just put into the initrd is much bigger, which accounts for the difference in size:
        -rw-r--r-- 1 root root 999K Nov 14 15:06 btrfs.ko
    for the btrfs.ko within the shipped initrd, versus
        -rw-r--r-- 1 root root 7.2M Nov 14 15:08 btrfs.ko
    for the new btrfs.ko. What is different between these two modules? Could this be caused by some setting in the new kernel config? When producing the kernel I copied /proc/config.gz and used make oldconfig to update it, so all optimisations should be the same for both kernels. Or is there something else that is done to the modules before they are put into the initrd? Maybe there is even a better way to build a new initrd for the new kernel in Ubuntu altogether.
    Update: I also tested with an initrd created from scratch using the mkinitramfs command in Ubuntu, and it has the same size difference that I found for the initrd I manually updated.
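    A size jump like 999K to 7.2M is usually debug symbols left in the freshly built modules rather than a config difference; a sketch of checking and stripping them before repacking the initrd:
        # the large module will typically show debug sections; the shipped one will not
        file /lib/modules/new_kernel_version/kernel/fs/btrfs/btrfs.ko
        # strip debug info (not the module symbols) from all new modules
        find /lib/modules/new_kernel_version -name '*.ko' -exec strip --strip-debug {} +
        # or let the kernel build strip them at install time
        make INSTALL_MOD_STRIP=1 modules_install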

    Read the article

  • PXELinux and compressed kernels/images

    - by Yvan JANSSENS
    Is it possible to boot compressed kernels with a compressed initrd via PXELINUX? First, a little background: we created a custom Linux distro for diskless OpenCL computing nodes, and we want those nodes to fetch their OS from the network. Our distro is composed of a kernel (duh) and a large initrd which is loaded into RAM, and everything is executed from there. We chose to run everything off the initrd for two reasons: NFS was not an option for serving the filesystem's extra contents, and file access from RAM is fast. No persistent storage is needed; data and config are pulled dynamically through a SOAP service. Now, our initrd is about 450M in size, and at our network speeds it takes about two to three minutes to load a single client. Will compression speed up the download, and if yes, which one should be used? Is LZMA supported by PXELINUX, or do we need to stick to bzip2 or gzip? Because of the 2-3 minute loading time, booting 15 nodes over the same network link takes quite a lot of time. We decided not to use hard drives or CD/DVD drives for financial reasons (the cheapest HDD at €30, times 15, is a lot of money saved ;-) ). So, our question is: what compression options are available for this setup, and how do we do this? Thank you for your time! Yvan Janssens
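    For what it's worth, the kernel image (bzImage) is already compressed, and initramfs decompression is done by the kernel rather than by PXELINUX, so the choice depends on which CONFIG_RD_* options the kernel was built with. A sketch of checking and repacking an existing gzip initrd as xz/LZMA (file names are placeholders):
        # check what the kernel can unpack
        zgrep 'CONFIG_RD_' /proc/config.gz
        # repack with xz; the kernel's xz decompressor only accepts crc32 (or no) checks
        zcat initrd.img | xz -9 --check=crc32 > initrd-xz.img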

    Read the article

  • Access an external SSH server through a restrictive proxy [on hold]

    - by Cyrille
    I'm a software developer and I wish to access my computer at home through SSH. For example, I sometimes need to look at my personal projects' source code to check how I handled specific problems. Unfortunately, I currently work behind an over-restrictive and anti-productive proxy that wastes a hell of a lot of everyone's time (we often have to visit websites from our smartphones or use a web proxy to check very legitimate websites for answers, and don't get me started on the other "security" overkill features we have to cope with...). Back to the subject: I can access my home computer from my phone (SSH, ports 22 and 80 both redirected by the router to port 22). It works, but it's quite uncomfortable. From my office computer, this is what I have tried so far:
        export http_proxy=http://user:pass@proxyip:8080
        echo "user:pass" > ~/.corkscrew-auth
        echo "ProxyCommand corkscrew proxyip 8080 %h %p /home/me/.corkscrew-auth" > ~/.ssh/config
        ssh 82.23.34.56 -l me -p 80
        Proxy could not open connnection to 82.23.34.56: Forbidden
        ssh_exchange_identification: Connection closed by remote host
    (same without -p 80). Without corkscrew:
        ssh: connect to host 82.23.34.56 port 80: Connection timed out
        ssh: connect to host 82.23.34.56 port 22: Connection timed out
    Any other ideas?
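    Many corporate proxies only allow the CONNECT method to port 443, which would explain the "Forbidden" on ports 22 and 80. One thing worth trying (a sketch; the IP, usernames and paths are the ones from the question, and "home" is just a placeholder alias) is forwarding 443 on the home router to sshd as well and pointing corkscrew at that:
        # ~/.ssh/config
        Host home
            HostName     82.23.34.56
            Port         443
            User         me
            ProxyCommand corkscrew proxyip 8080 %h %p /home/me/.corkscrew-auth
        # then simply:
        ssh home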

    Read the article

  • Can't access certain web sites - reset router, any ideas?

    - by IniTech
    EDIT: This problem was resolved by my ISP - it had to do with damaged fiber in one of their locations. Thanks to everyone who helped.
    Not sure if this is the right site (I'm a StackOverflow user), so I thought I'd give it a shot. I'm having trouble connecting to certain sites from any of the 3 machines on my LAN. The following sites return "Problem Loading Page - The connection has timed out": Sourceforge.net, CNet.com, Microsoft.com, OpenDNS.com, and even my company's website. I was worried about possible malware/viruses, but I don't think that is the case, given the inability to access my company's site and the fact that all 3 machines have the same issue. I've tried IE8, FF, and Chrome, and I have reset my router (WRT54G) and my machine(s) multiple times. EDIT: It is also worth noting that this page spins constantly and no avatars show up (I'm assuming it is trying to access gravatar.com with no success). EDIT: I have the same issues when connected directly to the modem, so the router config is probably not the issue. I'm a programmer, not a network guy - any ideas?

    Read the article

  • Centos VM serving multiple public IP: how to configure network interface?

    - by Glasnhost
    I have a CentOS 5.6 VM (vSphere client) already responding to two different public IPs on eth0 and eth0:1, and I'm trying to add eth0:2. I copied the eth0 config file and restarted the network service. I don't understand which other steps are needed...
        ifconfig
        eth0      Link encap:Ethernet  HWaddr 00:40:46:B9:00:41
                  inet addr:10.1.12.10  Bcast:10.1.12.255  Mask:255.255.255.0
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:163371837 errors:77 dropped:0 overruns:0 frame:0
                  TX packets:168210961 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:1891221045 (1.7 GiB)  TX bytes:855899500 (816.2 MiB)
                  Interrupt:59 Base address:0x2000
        eth0:1    Link encap:Ethernet  HWaddr 00:40:46:B9:00:41
                  inet addr:10.1.12.11  Bcast:10.1.12.255  Mask:255.255.255.0
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  Interrupt:59 Base address:0x2000
        eth0:2    Link encap:Ethernet  HWaddr 00:40:46:B9:00:41
                  inet addr:10.1.12.12  Bcast:10.1.12.255  Mask:255.255.255.0
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  Interrupt:59 Base address:0x2000
        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:188976973 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:188976973 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:2015642664 (1.8 GiB)  TX bytes:2015642664 (1.8 GiB)
        more /etc/resolv.conf
        nameserver 10.1.12.1
        route -n
        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        10.1.12.0       0.0.0.0         255.255.255.0   U     0      0        0 eth0
        169.254.0.0     0.0.0.0         255.255.0.0     U     0      0        0 eth0
        0.0.0.0         10.1.12.1       0.0.0.0         UG    0      0        0 eth0
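    For reference, on CentOS 5 an alias normally only needs its own ifcfg file; a sketch of what the copied file should contain (values taken from the question's output, and note the DEVICE line must name the alias):
        # /etc/sysconfig/network-scripts/ifcfg-eth0:2
        DEVICE=eth0:2
        ONBOOT=yes
        BOOTPROTO=static
        IPADDR=10.1.12.12
        NETMASK=255.255.255.0
        # then: service network restart   (or: ifup eth0:2)
    Since eth0:2 already appears in the ifconfig output with its address, the alias itself seems to be up; the missing piece may be the upstream firewall/NAT mapping the third public IP to 10.1.12.12.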

    Read the article

  • SBS DC DNS entries going missing?

    - by Chris W
    I've been looking at a problem on a friend's SBS 2003 server where the client PCs aren't able to connect to the server, with a variety of errors reported. Checking the server itself, the only indicator of an issue is an error 5782: "Dynamic registration or deregistration of one or more DNS records failed with the following error: No DNS servers configured for the local system." Running dcdiag reports that there are no DNS records registered for the DC, so I fixed the problem by doing a netdiag /fix, after which dcdiag comes back clean and clients are OK again. It happened a few weeks ago as well, and the same fix solved it. What are the possible causes of the DC's DNS entries going missing? Is this a config option that needs tweaking, or could it be solved by something simple like scheduling the SBS server to reboot periodically? The only change they can think of that was made near the time of the first occurrence is that RRAS was started up to allow a VPN connection from a home user. NB: the server is set up with a pair of NICs in a team, so the server has a single virtual NIC providing both LAN/WAN connections. An external hardware firewall is in use rather than the Windows firewall.
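    Some commands that may help confirm where the registration is failing the next time it happens (a sketch; these are standard Windows Server 2003 / support tools commands the question already partly uses):
        ipconfig /all          (check the teamed NIC's DNS server list still points at the DC itself)
        dcdiag /test:dns /v    (verbose DNS health for the DC)
        ipconfig /registerdns  (re-register the host A records)
        netdiag /test:dns      (the non-destructive counterpart of netdiag /fix)
    RRAS changing the interface/DNS binding order on the teamed NIC is a plausible suspect, so capturing ipconfig /all output both before and after the failure would be informative.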

    Read the article

  • NGINX returning 404 error on a valid url

    - by Harrison
    We have a site that runs PHP-FPM and NGINX. The application sends invitations to site members that are keyed with 40-character random strings (alphanumerics only - example below). Today, for the first time, we ran into an issue with this approach. The following URL:
        http://oursite.com/notices/response/approve/1960/OzH0pedV3rJhefFlMezDuoOQSomlUVdhJUliAhjS
    is returning a 404 error. This URL format has been working for 6 months without an issue, and other URLs following this exact format continue to resolve properly. We have a very basic config with a simple redirect to a front controller, and everything else has been running fine for a while now. Also, if we change the last character from an "S" to anything other than a lower-case "s", there is no 404 error and the site handles the request properly, so I'm wondering if there's some security module that might see something wrong with this specific string... Not sure if that makes any sense. We are not sure where to look to find out what specifically is causing the issue, so any direction would be greatly appreciated. Thanks!
    Update: Adding a slash to the end of the URL allowed it to be handled properly... I would still like to get to the bottom of the issue though.
    Solved: The problem was caused by part of my configuration... I realized I should have posted it, but was headed out of town and didn't have a chance. Any URL that ended in, say, "css" or "js" - not necessarily preceded by a dot (so, for example, http://site.com/response/somerandomestringcss ) - was interpreted as a request for a file, and the request was not routed through the front controller. The problem was my regex for disabling logging and setting expiration headers on jpgs, gifs, icos, etc. I replaced this:
        location ~* ^.+(jpg|jpeg|gif|css|png|js|ico)$ {
    with this:
        location ~* \.(jpg|jpeg|gif|css|png|js|ico)$ {
    And now URLs ending in css, js, png, etc., are properly routed through the front controller. Hopefully that helps someone else out.

    Read the article

  • debian lenny email server

    - by Dal
    Hi, I am a newbie. I set up a Debian Lenny box at home with the web and email server from the default installation. I followed the instructions for Exim, ran dpkg-reconfigure exim4-config, and set it up for mydomainhere.com. I created a one-line message file and attempted to test exim by running the command exim [email protected] < msgfile. I also tried using exim4 and Exim, but I get the same error: -bash: Exim: command not found. Obviously I am ignorant of how to run and test exim. I also tried to run a PHP file that sends a test mail and had no success. That script is tested and works fine if I send it from my hosting ISP on a different domain, so I know the PHP script is good. I set up the Debian system behind a Netgear firewall and it uses a 192.168.1.x IP. The web server works great and users can visit my site, but I lack the knowledge to get the email working. I'd appreciate it if someone can guide me.
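    For reference, a sketch of the usual way to hand a test message to exim4 on Debian and watch what it does (the recipient address is a placeholder; command names are case-sensitive, and the binary lives in /usr/sbin, which is often not on a normal user's PATH):
        # send a test message verbosely
        echo "test body" | /usr/sbin/exim4 -v recipient@example.org
        # show what exim would do for an address, without sending
        /usr/sbin/exim4 -bt recipient@example.org
        # watch the log while testing
        tail -f /var/log/exim4/mainlog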

    Read the article

  • DRBD as a block device for XEN VM (Centos 5.3)

    - by SaberTooth
    Hi all, I have set up a DRBD resource between 2 server nodes - everything works correctly when doing sync tests between the two. (I want to create an HA cluster using DRBD, Xen and Heartbeat.) However, when I try to create a Xen VM with CentOS as the guest operating system, I get through to the partitioning screen of the installer, but when I select a partitioning type the next screen gives me the following error: "An error has occurred - no valid devices were found on which to create new file systems. Please check your hardware for the cause of this problem." This is my first time attempting a setup like this, and searching Google does not help much... My config files for DRBD and Xen:
    DRBD (just the section that is pertinent):
        on xennode0 {
            device /dev/drbd0;
            disk /dev/sda5;
            address X.X.X.X:7788;
            flexible-meta-disk internal;
        }
        on xennode1 {
            device /dev/drbd0;
            disk /dev/sda5;
            address X.X.X.X:7788;
            meta-disk internal;
        }
    Xen:
        kernel = "/boot/xeninstall/vmlinuz"
        ramdisk = "/boot/xeninstall/initrd.img"
        extra = "text"
        name = "VM"
        maxmem = 3000
        memory = 3000
        vcpus = 4
        on_poweroff = "destroy"
        on_reboot = "restart"
        on_crash = "restart"
        vfb = [ ]
        disk = [ "phy:/dev/drbd0,sda1,w", "tap:aio:/srv/xen/xenswap.img,sda2,w" ]
        vif = [ "mac=00:16:3e:11:67:ae,bridge=xenbr0" ]
        root = "/dev/sda1 ro"
    Thanks in advance!
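    Two things that often trip up the installer in this kind of setup (a sketch, not a confirmed fix for this case) are handing the guest partitions instead of whole disks, and a DRBD resource that is not Primary on the node running the VM:
        # make sure the resource is Primary on the node that runs the installer
        drbdadm primary r0        # r0 = your resource name (placeholder)
        cat /proc/drbd            # should show ro:Primary/...
        # present the DRBD device as a whole disk the installer can partition, e.g.:
        disk = [ "phy:/dev/drbd0,xvda,w", "tap:aio:/srv/xen/xenswap.img,xvdb,w" ]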

    Read the article

  • Scaling a node.js application, nginx as a base server, but varnish or redis for caching?

    - by AntelopeSalad
    I'm not close to being well versed in using nginx or varnish, but this is my setup at the moment: I have a node.js server running which serves either JSON, HTML templates, or socket.io events. Then I have nginx running in front of node, serving all static content (CSS, JS, etc.). At this point I would like to cache both static content and dynamic content in memory. It's my understanding that Varnish can cache static content quite well without requiring any changes to my application code. I also think it's capable of caching dynamic content, but there cannot be any cookie headers? I use Redis at the moment for holding session data, and I planned to use it for other things in the future, like keeping track of non-crucial but fun stats. I just have no idea how I should handle caching everything on the site. I think it comes down to these options, but there might be more:
    1. Throw Varnish in front of nginx and let Varnish cache static pages; no app code changes. Redis would cache dynamic DB calls, which would require modifying my app code.
    2. Ignore Varnish completely and let Redis handle caching everything, then use one of the nginx-redis modules. I'm not sure if this would require a lot of app code changes (for the static files).
    I'm not having any luck finding benchmarks that compare nginx+varnish vs nginx+redis, and I'm too inexperienced to benchmark it myself (high chance of my configs being awful). I'm basically looking for the solution that would be the most efficient in terms of req/sec and scalable in the future (throw new hardware at the problem + maybe adjust some values in a config = new servers up and running semi-painlessly).
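    For context, nginx itself can also micro-cache the node backend's responses without touching app code, which is sometimes enough before adding Varnish or Redis to the request path. A sketch (port, paths and zone name are placeholders):
        proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=dynamic:10m max_size=100m inactive=1m;
        server {
            location / {
                proxy_pass http://127.0.0.1:3000;    # the node.js app (placeholder port)
                proxy_cache dynamic;
                proxy_cache_valid 200 1s;            # short "micro-cache" for dynamic pages
                proxy_cache_bypass $http_cookie;     # skip the cache when a session cookie is present
                proxy_no_cache $http_cookie;
            }
        }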

    Read the article

  • Why is there an extra HDD under /dev being added in my Linux Kernel?

    - by user1279156
    I have built a Linux kernel, and for some reason an extra drive is always added at bootup. My hard drive is listed as /dev/sdb; a /dev/sda is created too, and it is 8 MB in size. I can't find anything in the kernel config that is creating this, but if I use a different kernel it is not there. Kernel logs show it as an attached SCSI device; it looks just like my hard drive but is only 8 MB and has no partition table. It also doesn't appear to be a physical device. I've tried the kernel on many different models of PCs and it is always there. Does anyone know how to remove it?
    /dev/disk/by-id gives me:
        scsi-1AMCC_U21413034D98EB000584
        scsi-1AMCC_U21413034D98EB000584-part1
        scsi-353333330000007d0
        scsi-SATA_ST3250312AS_5VY7SH42
        scsi-SATA_WDC_WD800JD-60L_WD-WMAM9Y085675
        scsi-SATA_WDC_WD800JD-60L_WD-WMAM9Y085675-part1
        scsi-SATA_WDC_WD800JD-60L_WD-WMAM9Y085675-part2
    hdparm -i /dev/sda gives me an "invalid argument". dd if=/dev/sda of=sda.img produces a file with no content. sdparm results:
        /dev/sda: Linux  scsi_debug  0004
        Device identification VPD page:
          Addressed logical unit:
            designator type: T10 vendor identification, code set: ASCII
              vendor id: Linux
              vendor specific: scsi_debug 2000
            designator type: NAA, code set: Binary
              0x53333330000007d0
          Target port:
            designator type: Relative target port, code set: Binary, transport: Serial Attached SCSI (SAS)
              Relative target port: 0x1
            designator type: NAA, code set: Binary, transport: Serial Attached SCSI (SAS)
              0x52222220000007ce
            designator type: Target port group, code set: Binary, transport: Serial Attached SCSI (SAS)
              Target port group: 0x100
          Target device that contains addressed lu:
            designator type: NAA, code set: Binary, transport: Serial Attached SCSI (SAS)
              0x52222220000007cd
            designator type: SCSI name string, code set: UTF-8, transport: Serial Attached SCSI (SAS)
              SCSI name string: naa.52222220000007CD
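    The sdparm output identifies the 8 MB device as the kernel's scsi_debug test target (a simulated SCSI host backed by RAM), which suggests CONFIG_SCSI_DEBUG got switched on in the new kernel's config. A sketch of confirming and removing it:
        # confirm the option is enabled in the running kernel's config
        zgrep CONFIG_SCSI_DEBUG /proc/config.gz      # or: grep CONFIG_SCSI_DEBUG /boot/config-$(uname -r)
        # if it was built as a module, it can be removed or blacklisted immediately
        rmmod scsi_debug
        # otherwise set CONFIG_SCSI_DEBUG=n and rebuild the kernel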

    Read the article

  • Have a set a cgi scripts shared by multiple domains

    - by rpat
    Goal: have multiple domains share a set of CGI (Perl) scripts. Environment: Apache 2.0 on a dedicated CentOS server, with the Apache configuration files generated by cPanel. I have dozens of domains on the dedicated server, set up by cPanel under VirtualHost sections. I have almost no knowledge of Apache; most of what I do is taken care of by cPanel. I would like to put a set of scripts under one directory (perhaps under / or /opt) and, for each of the domains, create a symbolic link in the individual cgi-bin pointing to this common directory. This way I am hoping to avoid having to keep a copy of the scripts for every domain. Since the Apache config files are generated by cPanel, I would not like to manually make changes to those; besides, I could mess things up. I see that cPanel recommends the use of include files rather than changing httpd.conf. Perhaps I need to have the following of symbolic links enabled in the cgi-bin directory and allow the web server user to execute scripts not owned by it. Maybe I am making things more complicated than they are; I would be glad to use any other means to achieve my goal. Thanks in advance for your help. (I asked this on Stack Overflow and someone suggested that I ask it on Server Fault.)
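    One commonly used alternative to per-domain symlinks (a sketch; the paths and include file name are placeholders, and on cPanel this belongs in an include file rather than httpd.conf) is a single shared ScriptAlias that every vhost inherits:
        # e.g. in a cPanel include such as /usr/local/apache/conf/includes/pre_virtualhost_global.conf
        ScriptAlias /shared-cgi/ /opt/shared-cgi/
        <Directory /opt/shared-cgi>
            Options +ExecCGI
            AddHandler cgi-script .pl .cgi
            Order allow,deny
            Allow from all
        </Directory>
    Each domain would then call the scripts as /shared-cgi/script.pl, and only one copy exists on disk.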

    Read the article

  • Cisco ASA 5505 - InterVLAN NAT Exemptions Implementation not working

    - by Brandon Bearden
    The short version is that we cannot communicate between our subnets. We have a Cisco ASA 5505 we are using as our network router, with a Netgear L3 switch behind it carrying 10 VLANs. Each VLAN is on its own subnet (10.0.10.x/24, 10.0.11.x/24, etc.), so the layout is ASA, then switch, then hosts. We have PAT for each subnet to our outside interface, and each subnet NATs out properly. I have NAT exemption enabled for 2 of the subnets (eventually I will need it for all of them, but I am just testing at the moment). The config is here: http://pastebin.com/pDsG7hsh I have tried multiple ways to get the NAT exemption to allow all traffic between our inside VLANs. At this point I am trying to get "Engineering" to communicate with all hosts on "AuthUser". I can ping some hosts, but not as many as when I am directly on the interface. I can reach a port 80 service, but not 443. I cannot access anything via hostname or NetBIOS. What am I missing to allow higher security level interfaces to fully communicate with lower security level interfaces? Thx!
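    For reference, a sketch of what inter-interface traffic plus NAT exemption typically looks like on a pre-8.3 ASA (interface and ACL names here are placeholders loosely based on the question; the actual config is in the pastebin, and 8.3+ uses different NAT syntax):
        same-security-traffic permit inter-interface
        access-list ENG_NONAT extended permit ip 10.0.10.0 255.255.255.0 10.0.11.0 255.255.255.0
        nat (Engineering) 0 access-list ENG_NONAT
    The hostname/NetBIOS part is usually a broadcast/WINS/DNS issue rather than NAT, since those lookups do not cross subnets by themselves.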

    Read the article

  • PassEnv does not find ENV variables

    - by quodlibetor
    I've got this /etc/profile.d/myfile.sh:
        export MYVAR=myval
    I also have a PassEnv MYVAR line in a <VirtualHost> section of a file in the Apache conf dir. That lets me do things like:
        $ echo $MYVAR
        myval
        $ python
        >>> import os; os.getenv('MYVAR')
        'myval'
        $ sudo echo $MYVAR
        myval
        $ sudo -i
        root# echo $MYVAR
        myval
    But then, despite that being the case, I get:
        root# /sbin/service httpd restart
        Stopping httpd:                                            [  OK  ]
        Starting httpd: [Mon Oct 22 14:44:02 2012] [warn] PassEnv variable MYVAR was undefined
                                                                   [  OK  ]
    And all of my attempts to access MYVAR from within my wsgi scripts just don't work. Thoughts? Am I doing something obviously wrong?
    EDIT for more detail: I've got a swarm of computers/VMs and a swarm of developers working on a swarm of projects. I need a simple central place to keep environment information; the most common item is the "environment" (dev/stage/prod). The scheme that we've got (modifying *.wsgi programmatically) is turning out to be more fragile than we'd like. The main options that I see are: put things in the shell environment, or put things in other config files. Getting things into the shell environment is the best, because we won't need to write yet more duplicated "what is my environment" code.
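    Worth noting: services started via /sbin/service get a scrubbed environment, so /etc/profile.d is never read when httpd starts, even though interactive shells (including sudo -i) see the variable. A sketch of the usual workaround on RHEL/CentOS-style systems (which the /sbin/service path suggests this is):
        # the httpd init script sources /etc/sysconfig/httpd, so export the variable there
        echo 'export MYVAR=myval' >> /etc/sysconfig/httpd
        service httpd restart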

    Read the article

  • Virtualhost one https site, the rest http

    - by RJP1
    I have a Linode server with Apache2 running a handful of sites with virtual hosting. All sites work fine on port 80, but one site has an SSL certificate and also runs okay. My problem is as follows: for the non-https sites, visiting https://domain.com shows the contents of the only secure site... Is there a way of disabling the *:443 match for these non-secure sites? Thanks!
    EDIT (more information): Here's a typical config in sites-available for a normal insecure http site:
        <VirtualHost *:80>
            ServerName www.insecure.com
            ServerAlias insecure.com
            ...
        </VirtualHost>
    The secure https site is as follows:
        <VirtualHost *:80>
            ServerName www.secure.com
            Redirect permanent / https://secure.com/
        </VirtualHost>
        <VirtualHost *:80>
            ServerName secure.com
            RedirectMatch permanent ^/(.*) https://secure.com/$1
        </VirtualHost>
        <VirtualHost *:443>
            SSLEngine on
            SSLProtocol all
            SSLCertificateChainFile ...
            SSLCertificateFile ...
            SSLCertificateKeyFile ...
            SSLCACertificateFile ...
            ServerName secure.com
            ServerAlias secure.com
            ...
        </VirtualHost>
    So, visiting:
        http:/insecure.com - works
        http:/www.insecure.com - works
        http:/secure.com - redirects to https:/secure.com - works
        http:/www.secure.com - redirects to https:/secure.com - works
        https:/insecure.com - shows https:/secure.com - WRONG!
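    Because only one vhost listens on 443, Apache hands every https request for any hostname to it. One workaround (a sketch; the certificate paths are placeholders for any self-signed cert) is a catch-all default *:443 vhost that loads before the real one and simply refuses the request:
        # name the file so it sorts first (e.g. 000-default-ssl) and becomes the default for *:443
        <VirtualHost *:443>
            ServerName default.invalid
            SSLEngine on
            SSLCertificateFile    /etc/ssl/certs/snakeoil.pem
            SSLCertificateKeyFile /etc/ssl/private/snakeoil.key
            Redirect 404 /
        </VirtualHost>
    Clients will still see a certificate-name warning first, which is unavoidable without separate IPs or per-name SNI certificates.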

    Read the article

  • Empty $upstream_http_location variable if response was cached

    - by Ivaldi
    I would like to cache the response of a redirect (cache the request to some site which returns a redirect, and cache the second request which returns the actual content). So far my config looks like this:
        location = /proxy {
            error_page 301 302 307 = @redir;
            resolver 8.8.8.8;
            proxy_pass $arg_url;
            proxy_intercept_errors on;
            proxy_cache pcache;
            proxy_cache_key $arg_url;
            proxy_cache_valid 200 301 302 307 1d;
            proxy_cache_min_uses 1;
            proxy_ignore_client_abort on;
            proxy_ignore_headers Set-Cookie Expires Cache-Control;
        }
        location @redir {
            resolver 8.8.8.8;
            # we need to assign $upstream_http_location to another var in order to use it with proxy_pass
            set $target $upstream_http_location;
            proxy_pass $target;
            proxy_cache predirects;
            proxy_cache_key $upstream_http_location;
            proxy_cache_valid 200 301 302 307 1d;
            proxy_cache_min_uses 1;
            proxy_ignore_headers Set-Cookie Expires Cache-Control;
        }
    It works for the first request, or if I leave the 30x codes out of proxy_cache_valid in the /proxy part, but $target and $upstream_http_location are empty if the response was served from the cache. Is there a nice solution to cache both requests? Thanks!

    Read the article

  • Passive mode FTP file download hangs from specific machine

    - by chiptuned
    I have a server (an AWS instance) that just cannot download files from one specific FTP server. I can connect to the FTP server fine and run some commands, but when I request a file it just hangs. Here is the debug output of the basic Linux ftp client after login:
        ---> SYST
        215 UNIX Type: Apache FtpServer
        Remote system type is UNIX.
        ftp> get outgoing/catalog.gz catalog.gz
        local: catalog.gz remote: outgoing/catalog.gz
        ---> PASV
        227 Entering Passive Mode (64,156,167,125,135,191)
        ---> RETR outgoing/catalog.gz
        150 File status okay; about to open data connection.
    That's it - it just sits there and nothing transfers. I have verified that a data connection is made, but the client gets no data:
        ? ss -nt dst 64.156.167.125
        State      Recv-Q Send-Q      Local Address:Port          Peer Address:Port
        ESTAB      0      0          10.185.147.150:41190       64.156.167.125:21
        ESTAB      0      0          10.185.147.150:48871       64.156.167.125:48557
    The FTP server is not under my control, and downloads from other FTP servers in passive mode have worked. Active mode does not work, as the system is behind a firewall. Every FTP client I've tried has the same problem. The download works from other systems, even from other AWS instances I have with the same Security Group - though not necessarily the same distro or config. I understand it may be some issue on the server side, but I want to know what it is about this particular machine that makes the transfer hang when it works on every other machine I can get my hands on. Please let me know what the culprit on the client side could be, or give me ideas on what else to look at.
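    Two client-side things that commonly produce this exact "data connection established but no bytes arrive" symptom are a PMTU blackhole and connection-tracking interference; a sketch of checking both (the address is the one from the question, and the interface name is a placeholder):
        # watch whether the file data ever arrives on the wire
        sudo tcpdump -ni eth0 host 64.156.167.125 and not port 21
        # probe for an MTU/PMTU blackhole toward the server (1472 = 1500 - 28 bytes of headers)
        ping -M do -s 1472 64.156.167.125
        # check whether the FTP conntrack helper is loaded and possibly interfering
        lsmod | grep ftp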

    Read the article
