Search Results

Search found 46178 results on 1848 pages for 'java home'.


  • Running $ORIGIN-linked binaries from setuid scripts on Linux

    - by drscroogemcduck
    I'm using suidperl to run some programs that require root permissions. However, the runtime linker won't expand library paths containing $ORIGIN entries, so the program I want to run (jstack from the JDK) fails. From the linker documentation: "There is one exception to the advice to make heavy use of $ORIGIN. The runtime linker will not expand tokens like $ORIGIN for secure (setuid) applications. This should not be a problem in the vast majority of cases." My program looks something like this:

      #!/usr/bin/perl
      $ENV{PATH} = "/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/java/jdk1.6.0_12/bin:/root/bin";
      $ENV{JAVA_HOME} = "/usr/java/jdk1.6.0_12";
      open(FILE, '/var/run/kil.pid');
      $pid = <FILE>;
      close(FILE);
      chomp($pid);
      if ($pid =~ /^(\d+)/) {
          $pid = $1;
      } else {
          die 'nopid';
      }
      system("/usr/java/jdk1.6.0_12/bin/jstack", "$pid");

    Is there any way to fork off a child process so that the linker will work correctly?
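
    One avenue worth testing (an assumption based on how the linker decides what is "secure", not a verified fix): the linker only refuses $ORIGIN when a process's real and effective UIDs differ, and re-executing through su sets both IDs to the target user before the exec. A minimal sketch of what the system() call could invoke instead:

      # Hypothetical workaround: su re-execs jstack with real uid == effective
      # uid == 0, so the runtime linker no longer treats it as a secure
      # (setuid) process and should expand $ORIGIN in its library paths.
      su root -c "/usr/java/jdk1.6.0_12/bin/jstack $pid"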

    Read the article

  • Subdomain creation issue: Ubuntu 12.04, Apache 2.2.22, Webmin

    - by anarchos78
    I have a technical question concerning subdomains. My installation is Ubuntu 12.04 with Webmin for administration (using the Apache web server). I am trying to create a subdomain on one IP (the domain is www.ithemis.gr and I want to create test.ithemis.gr and/or test1.ithemis.gr) with no success, although I think I've set the subdomains up the right way. The address is not resolving (I have already created DNS records via BIND). Do you have any suggestions? I am very new to server administration; any help will be greatly appreciated. Apache configuration, in /etc/apache2/sites-available — conf file www.ithemis.gr.conf (main website):

      <VirtualHost 184.171.255.110:80>
          DocumentRoot /home/ithemis.gr
          ServerName www.ithemis.gr
          <Directory "/home/ithemis.gr">
              allow from all
              #Options +Indexes
              Options +Includes -Indexes
          </Directory>
      </VirtualHost>

    Conf file www.test.ithemis.gr.conf (subdomain website):

      <VirtualHost *:80>
          DocumentRoot /home/test.ithemis.gr
          ServerName test.ithemis.gr
          <Directory "/home/test.ithemis.gr">
              allow from all
              Options +Indexes
          </Directory>
      </VirtualHost>

    My DNS records, master zone ithemis.gr:

      Name                    Type   TTL      Values
      ithemis.gr.             NS     Default  ns1.themis.gr.
      ithemis.gr.             A      Default  184.171.255.110
      ns1.ithemis.gr.         A      Default  184.171.255.110
      ns2.ithemis.gr.         A      Default  184.171.255.110
      mail.ithemis.gr.        A      Default  184.171.255.110
      www.ithemis.gr.         CNAME  Default  ithemis.gr.
      ithemis.gr.             MX     Default  5 mail.ithemis.gr.
      www.test.ithemis.gr.    CNAME  Default  ithemis.gr.
      test.ithemis.gr.        CNAME  Default  ithemis.gr.
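
    Before changing the Apache side, it may help to test each layer separately; a couple of quick checks (hostnames taken from the question):

      # Ask the authoritative nameserver directly -- if this prints nothing,
      # the problem is in the zone, not in Apache:
      dig +short test.ithemis.gr @ns1.ithemis.gr
      # Show which vhost Apache would pick for each configured name:
      apache2ctl -S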

    Read the article

  • Trying to install DataStax OpsCenter - Failed to load application: cannot import name _parse

    - by gansbrest
    I'm not familiar with Python — maybe someone could explain what's going on here?

      ec2-user@prod-opscenter-01:~ % java -version
      java version "1.7.0_45"
      Java(TM) SE Runtime Environment (build 1.7.0_45-b18)
      Java HotSpot(TM) 64-Bit Server VM (build 24.45-b08, mixed mode)
      ec2-user@prod-opscenter-01:~ % python -V
      Python 2.6.8
      ec2-user@prod-opscenter-01:~ % openssl version
      OpenSSL 1.0.1e-fips 11 Feb 2013

    And now the error:

      ec2-user@prod-opscenter-01:~ % sudo /etc/init.d/opscenterd start
      Starting Cassandra cluster manager opscenterd
      Starting opscenterd
      Unhandled Error
      Traceback (most recent call last):
        File "/usr/lib64/python2.6/site-packages/twisted/application/app.py", line 652, in run
          runApp(config)
        File "/usr/lib64/python2.6/site-packages/twisted/scripts/twistd.py", line 23, in runApp
          _SomeApplicationRunner(config).run()
        File "/usr/lib64/python2.6/site-packages/twisted/application/app.py", line 386, in run
          self.application = self.createOrGetApplication()
        File "/usr/lib64/python2.6/site-packages/twisted/application/app.py", line 451, in createOrGetApplication
          application = getApplication(self.config, passphrase)
      --- <exception caught here> ---
        File "/usr/lib64/python2.6/site-packages/twisted/application/app.py", line 462, in getApplication
          application = service.loadApplication(filename, style, passphrase)
        File "/usr/lib64/python2.6/site-packages/twisted/application/service.py", line 405, in loadApplication
          application = sob.loadValueFromFile(filename, 'application', passphrase)
        File "/usr/lib64/python2.6/site-packages/twisted/persisted/sob.py", line 210, in loadValueFromFile
          exec fileObj in d, d
        File "bin/start_opscenter.py", line 1, in <module>
          from opscenterd import opscenterd_tap
        File "/usr/lib/python2.6/site-packages/opscenterd/opscenterd_tap.py", line 37, in <module>
        File "/usr/lib/python2.6/site-packages/opscenterd/OpsCenterdService.py", line 13, in <module>
        File "/usr/lib/python2.6/site-packages/opscenterd/ClusterServices.py", line 22, in <module>
        File "/usr/lib/python2.6/site-packages/opscenterd/WebServer.py", line 40, in <module>
        File "/usr/lib/python2.6/site-packages/opscenterd/Agents.py", line 18, in <module>
      exceptions.ImportError: cannot import name _parse
      Failed to load application: cannot import name _parse

    Maybe there are open-source alternatives for monitoring Cassandra that I should look at? Thanks a lot.
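
    Since the import fails inside OpsCenter's bundled modules, one way to narrow it down (a guess at the usual kind of culprit — a mismatch between OpsCenter's expectations and the system Python libraries — not a confirmed diagnosis) is to record exactly which Twisted and pyOpenSSL versions the interpreter picks up, then compare them against what this OpsCenter release was built for:

      python -c "import twisted; print twisted.version"
      python -c "import OpenSSL; print OpenSSL.__version__"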

    Read the article

  • Creating multiple SFTP users for one account

    - by Tom Marthenal
    I'm in the process of migrating an aging shared-hosting system to more modern technologies. Right now, plain old insecure FTP is the only way for customers to access their files. I plan on replacing this with SFTP, but I need a way to create multiple SFTP users that correspond to one UNIX account. A customer has one account on the machine (e.g. customer) with a home directory like /home/customer/. Our clients are used to being able to create an arbitrary number of FTP accounts for their domains (to give out to different people), and we need the same capability with SFTP. My first thought was to use SSH keys and just add each new "user" to authorized_keys, but this is confusing for our customers, many of whom are not technically inclined and would prefer to stick with passwords. SSH is not an issue; only SFTP is available. How can we create multiple SFTP accounts (customer, customer_developer1, customer_developer2, etc.) that all function as equivalents and don't interfere with file permissions (ideally, all files should retain customer as their owner)? My initial thought was some kind of PAM module, but I don't have a clear idea of how to accomplish this within our constraints. We are open to using an alternative SSH daemon if OpenSSH isn't suitable for our situation; again, it needs to support only SFTP, not SSH. Currently our SSH configuration has this appended to it in order to jail the users in their own directories:

      # all customers have group 'customer'
      Match group customer
          # jail in home directories
          ChrootDirectory /home/%u
          AllowTcpForwarding no
          X11Forwarding no
          # force SFTP
          ForceCommand internal-sftp
          # for non-customer accounts we use keys instead
          PasswordAuthentication yes

    Our servers are running Ubuntu 12.04 LTS.
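
    One approach that satisfies the "files stay owned by customer" requirement is to give each extra login the customer's numeric UID (a sketch only — the names are illustrative, and non-unique UIDs have sharp edges of their own, e.g. in accounting and quota tools):

      # Extra login that shares customer's UID and home; anything it
      # creates is literally owned by "customer":
      uid=$(id -u customer)
      useradd -o -u "$uid" -g customer -d /home/customer -s /usr/sbin/nologin customer_developer1
      passwd customer_developer1

    Note that ChrootDirectory /home/%u would then point at a directory that does not exist for the new login names; switching to the %h token (the user's home directory) avoids that.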

    Read the article

  • Apache proxy: Why is one vhost returning Forbidden while the other one works?

    - by Stefan Majewsky
    I have a Java application that needs to talk to another intranet website using HTTPS in both directions. After fighting with Java's SSL implementations for some time, I gave up on that and have now set up an Apache that's supposed to act as a bidirectional reverse proxy:

      external app ---(HTTPS request)---> Apache ---(local HTTP request)---> Java app

    This direction works just fine; however, the other direction does not:

      Java app ---(local HTTP request)---> Apache ---(HTTPS request)---> external app

    This is the configuration for the vhost implementing the second proxy:

      Listen 127.0.0.1:8081
      <VirtualHost appgateway:8081>
          ServerName appgateway.local
          SSLProxyEngine on
          ProxyPass        / https://externalapp.corp:443/
          ProxyPassReverse / https://externalapp.corp:443/
          ProxyRequests Off
          AllowEncodedSlashes On
          # we do not need to apply any more restrictions here, because we listened on
          # local connections only in the first place (see the Listen directive above)
          <Proxy https://externalapp.corp:443/*>
              Order deny,allow
              Allow from all
          </Proxy>
      </VirtualHost>

    A curl http://127.0.0.1:8081/ should serve the equivalent of https://externalapp.corp, but instead results in 403 Forbidden, with the following message in the Apache error log:

      [Wed Jun 04 08:57:19 2014] [error] [client 127.0.0.1] Directory index forbidden by Options directive: /srv/www/htdocs/

    This message completely puzzles me: yes, I have not set up any permissions on the DocumentRoot of this vhost, but everything works fine for the other proxy direction, where I haven't either. For reference, here's the other vhost:

      Listen this_vm_hostname:443
      <VirtualHost javaapp:443>
          ServerName javaapp.corp
          SSLEngine on
          SSLProxyEngine on
          # not shown: SSLCipherSuite, SSLCertificateFile, SSLCertificateKeyFile
          SSLOptions +StdEnvVars
          ProxyPass        / http://localhost:8080/
          ProxyPassReverse / http://localhost:8080/
          ProxyRequests Off
          AllowEncodedSlashes On
          # Local reverse proxy authorization override
          <Proxy http://localhost:8080/*>
              Order deny,allow
              Allow from all
          </Proxy>
      </VirtualHost>
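
    The logged path (/srv/www/htdocs/) is a default DocumentRoot, not anything from the appgateway vhost, which suggests the request is being answered by the default server rather than by the proxy vhost — possibly because <VirtualHost appgateway:8081> does not match a connection made to 127.0.0.1. Two quick ways to test that hypothesis:

      # Which vhost does Apache actually associate with port 8081?
      apachectl -S
      # Watch the exchange; the served content reveals which vhost answered:
      curl -v http://127.0.0.1:8081/ 2>&1 | head -n 20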

    Read the article

  • How to reinstall the bootloader after migrating to an SSD

    - by hijarian
    I must say, it was difficult to name this question. Basically, I need to properly reinstall the bootloader on my system, because I already have working system disks for my OSes. The long story is this: I had a large, slow HDD with a Windows 7 / Debian Wheezy dual boot on it, perfectly bootable. Then I ordered an SSD and prepared my system partitions to fit onto the much smaller drive. I wanted the following scheme: 128 GB Windows, 24 GB Debian /, 86 GB Debian /home (a strange size for /home because there's no such thing as a true 256 GB disk drive). So I prepared such partitions on the original HDD, installed the new SSD, loaded a GParted live USB (I can't remember now what it was really called), and simply copy-pasted the partitions from the HDD to the SSD. So now I have the following partitions across the physical disks:

      SSD: 128 GB copy of the original Windows partition
           24 GB copy of (presumably) the Debian /
           86 GB copy of (presumably) the Debian /home
      HDD: 128 GB Windows
           24 GB Debian /
           86 GB Debian /home
           ... several other partitions with non-system data ...

    Right after the Ctrl+C, Ctrl+V in GParted, the system behaved as follows: no GRUB; it boots straight into the Windows on the HDD, even though the BIOS is set to boot from the SSD first. I created a Debian Testing installation USB, loaded it in rescue mode, found that it identified my SSD as /dev/sda, and installed GRUB to /dev/sda. Now the system loads a GRUB which lists both Windows and Debian — from the HDD. So I am back in the initial position. How should I set up GRUB so that it loads the OSes from the SSD? Should I fire up my Debian, fiddle with GRUB's config, and reinstall it again to the same place (on the SSD)?
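
    A sketch of the usual repair sequence, assuming the SSD really is /dev/sda and the copied Debian root is /dev/sda2 (adjust both to the actual layout). One caveat worth checking first: a GParted copy keeps the original filesystem UUIDs, so /etc/fstab and the generated GRUB menu can silently keep pointing at the HDD partitions.

      # From the rescue shell of the installer or a live system:
      mount /dev/sda2 /mnt
      for d in /dev /proc /sys; do mount --bind "$d" "/mnt$d"; done
      chroot /mnt grub-install /dev/sda
      chroot /mnt update-grub
      # Afterwards, verify that /mnt/etc/fstab refers to the SSD partitions.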

    Read the article

  • Nginx configuration leads to endless redirect loop

    - by brianthecoder
    I've looked at every sample configuration I could find, and yet every time I try to view a page that requires SSL, I end up in a redirect loop. I'm running nginx/0.8.53 and Passenger 3.0.2. Here's the SSL config:

      server {
          listen 443 default ssl;
          server_name <redacted>.com www.<redacted>.com;
          root /home/app/<redacted>/public;
          passenger_enabled on;
          rails_env production;
          ssl_certificate     /home/app/ssl/<redacted>.com.pem;
          ssl_certificate_key /home/app/ssl/<redacted>.key;
          proxy_set_header X-Real-IP $remote_addr;
          proxy_set_header X_FORWARDED_PROTO https;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
          proxy_set_header Host $http_host;
          proxy_set_header X-Url-Scheme $scheme;
          proxy_redirect off;
          proxy_max_temp_file_size 0;
          location /blog {
              rewrite ^/blog(/.*)?$ http://blog.<redacted>.com/$1 permanent;
          }
          location ~* \.(js|css|jpg|jpeg|gif|png)$ {
              if (-f $request_filename) {
                  expires max;
                  break;
              }
          }
          error_page 500 502 503 504 /50x.html;
          location = /50x.html {
              root html;
          }
      }

    Here's the non-SSL config:

      server {
          listen 80;
          server_name <redacted>.com www.<redacted>.com;
          root /home/app/<redacted>/public;
          passenger_enabled on;
          rails_env production;
          location /blog {
              rewrite ^/blog(/.*)?$ http://blog.<redacted>.com/$1 permanent;
          }
          location ~* \.(js|css|jpg|jpeg|gif|png)$ {
              if (-f $request_filename) {
                  expires max;
                  break;
              }
          }
          error_page 500 502 503 504 /50x.html;
          location = /50x.html {
              root html;
          }
      }

    Let me know if there's any additional info I can give to help diagnose the issue.
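
    One detail that stands out (an educated guess, not verified against this exact Passenger build): proxy_set_header only affects proxied upstreams, and Passenger is not a proxy, so the Rails app may never see a forwarded-protocol header at all and keeps redirecting to https. Passenger's nginx module passes such values as CGI parameters instead; a sketch for the SSL server block:

      # Directive from the Passenger-nginx documentation of that era --
      # confirm it exists in your Passenger version before relying on it:
      #     passenger_set_cgi_param HTTP_X_FORWARDED_PROTO https;
      # Then watch the response headers of a page that requires SSL:
      curl -skI https://www.example.com/ | head    # example.com stands in for the redacted domain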

    Read the article

  • SPARC Solaris 2.6 will not boot

    - by joshxdr
    I have a very old SPARC Solaris network that was working fine last week, but after a power outage none of the workstations will boot. The network looks like this:

      host A: Solaris 2.6; shares /export/home to the network by NFS
      host B: Solaris 8; runs the NIS server; mounts /export/home by NFS
      host C: RHEL 5; shares /share to the network by NFS; mounts /export/home by NFS

    I figured the main problem was host A, since the home directories need to be available for the other workstations to boot(?). Host A does not mount anything by NFS as far as I know, yet this workstation will NOT boot. The OBP boot sequence looks like this:

      Boot device <blah>
      configuring network interface le0
      Hostname <hostname>
      check file system <everything ok>
      check ufs filesystem <everything ok>
      NIS domainname is <name>
      starting router discovery
      starting rpc services: rpcbind keyserv ypbind done
      setting default interface for multicast: add net 224.0.0.0: gateway <hostname>
      <HANGS at this point>

    Is there some kind of debug mode that would give more detail on why the workstation won't boot? Is my network structure inherently susceptible to power outages? Is there a way to boot to a command line so I can at least turn off the NFS mounting?
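
    To get a shell before any of the network services start (so NFS and NIS can be ruled out one at a time), Solaris can be brought up single-user from the OpenBoot prompt; a sketch:

      ok boot -s
      # At the single-user root shell the rc scripts have not run: from here,
      # NFS entries in /etc/vfstab or the rc script that hangs (the multicast
      # route step suggests one of the late rc2.d network scripts) can be
      # disabled before continuing the boot with Ctrl-D.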

    Read the article

  • 403 error when accessing vhost

    - by Ortix92
    I'm having some trouble setting up my web server (CentOS 5.8). It's a brand-new server and I'm trying to point a vhost at the directory /home/exo/public_html. However, whenever I restart httpd I get the following warning:

      Starting httpd: Warning: DocumentRoot [/home/exo/public_html] does not exist

    Yes, the directory does exist. And whenever I visit the domain exo-l.com it gives me a 403 error. This is my config (I put it inside httpd.conf because the files in conf.d were not being included for some reason — or at least not my newly created vhost conf file, but that has zero priority for now):

      <VirtualHost *:80>
          DocumentRoot /home/exo/public_html
          ServerName www.exo-l.com
          ServerAlias exo-l.com
          <Directory /home/exo/public_html>
              Order allow,deny
              Allow from all
          </Directory>
      </VirtualHost>

    I'm completely clueless, because as far as I know this should work. httpd runs as apache:apache; I tried chowning the public_html directory (also recursively) to exo:apache, apache:apache, and root:root with no success, and chmod 777 doesn't do anything either. A tail of the log:

      [Sat Oct 13 15:10:04 2012] [error] [client 82.***.***.61] (13)Permission denied: access to / denied
      [Sat Oct 13 15:10:04 2012] [error] [client 82.***.***.61] (13)Permission denied: access to / denied

    I also found something about SELinux, and that disabling it might help — but do I really want to do that?
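
    Since ownership and mode already look sane, SELinux is a plausible suspect on CentOS, and it can be checked and accommodated without being disabled outright; a sketch:

      getenforce                                  # Enforcing?
      ls -ldZ /home/exo/public_html               # current SELinux context
      # Allow httpd to serve content from home directories, persistently:
      setsebool -P httpd_enable_homedirs 1
      chcon -R -t httpd_sys_content_t /home/exo/public_html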

    Read the article

  • Set up Linux box as WAP for MyBookLive?

    - by AcidFlask
    I inherited an old Linux box as well as a MyBookLive and would like to make the MyBookLive available over my wireless network, essentially using the Linux box as a wireless access point. I just wiped the Linux box (hostname: home) and installed Ubuntu 12.04 on it. My network setup currently looks like this:

      (192.168.0.1 netmask 255.255.255.0)
      ISP --- wireless router --- wlan0 on home (192.168.0.12)
                   |                   |
          MacBook (192.168.0.11)   eth0 on home --- MyBookLive

    so that the MyBookLive is basically a glorified external hard drive. The router does have an Ethernet port, but my roommate's computer is using it, so I can't plug the MyBookLive directly into the router. Right now I can ping MyBookLive.local and MacBook.local from home, but I am having trouble understanding and figuring out the correct iptables commands to make my MacBook see the MyBookLive through the Bonjour network. I'm also not sure whether I need to set up DNS forwarding for .local Bonjour/Zeroconf addresses. I tried the following to forward my entire wired network (which contains only the MyBookLive) to a single IP address:

      sysctl net.ipv4.ip_forward=1
      iptables -A FORWARD -i wlan0 -o eth0 -j ACCEPT
      iptables -A FORWARD -i eth0 -o wlan0 -j ACCEPT
      iptables -t nat -A PREROUTING -i eth0 -p tcp -j DNAT --to 192.168.0.66
      iptables -t nat -A PREROUTING -i eth0 -p udp -j DNAT --to 192.168.0.66

    but I can't ping this address from my MacBook. This is probably horribly wrong, but I am a complete noob at setting up this kind of network and could use some expert help getting it right.
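
    For what it's worth, the DNAT rules above match -i eth0 — traffic coming from the MyBookLive — while the goal is to steer traffic that arrives from the wireless side towards it. A sketch with the direction reversed (192.168.0.66 is kept from the attempt above as the drive's assumed address, and 445/SMB is just an example port):

      sysctl -w net.ipv4.ip_forward=1
      # Requests that reach "home" on its wlan0 address get rewritten to the drive:
      iptables -t nat -A PREROUTING -i wlan0 -d 192.168.0.12 -p tcp --dport 445 -j DNAT --to 192.168.0.66
      iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

    Bonjour discovery will not cross this hop by itself; avahi's reflector mode (enable-reflector=yes in avahi-daemon.conf) is one commonly suggested way to repeat .local announcements between the two segments.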

    Read the article

  • Port(s) not forwarding?

    - by user11189
    I have cable internet service through Charter Communications and feed two desktop computers through a Linksys RP614v3 router. One system is my wife's, running WinXP Home Edition; the other is mine, running Vista Home Premium (SP1). I have port forwarding configured in the Linksys so I can access the Vista system remotely using TightVNC. Initially it worked great, and I was able to tend email and access local files remotely while out of town for work. Lately, the cable internet service appears to flicker intermittently; afterwards, my MailWasher program loses the ability to access the net and I'm unable to make the remote connection. When I reset the port forwarded for email in the router control panel, MailWasher functionality returns — but since I'm home when that happens, I have no easy way to check remote access until the next time I'm on the road or at work. I'm at my wit's end: the TightVNC client connects fine from my wife's system behind the modem/router, but I don't know how to maintain whatever gets reset when I fiddle with the control panel, and the need to do so at all is new. I accessed it fine off and on for a week while out of town a month ago, and now I can't leave home and access it from work an hour later.

    Read the article

  • Remove an Apache alias subdirectory

    - by Hippyjim
    I'm using Apache 2 on Ubuntu 12.04. I added an alias for a subdirectory, pointing to gitweb. I then realised I should probably make it accessible only over https, so I removed the alias and restarted Apache — but I can still navigate to http://xyz/gitweb, even with no alias in any of my config files. How do I remove it? EDIT: the config file looked like this before:

      <VirtualHost *:80>
          ServerAdmin webmaster@localhost
          DocumentRoot /home/administrator/webroot
          <Directory />
              Options FollowSymLinks
              AllowOverride None
          </Directory>
          <Directory /home/administrator/webroot/>
              Options Indexes FollowSymLinks MultiViews
              AllowOverride All
              Order allow,deny
              allow from all
          </Directory>
          Alias /gitweb/ /usr/share/gitweb/
          <Directory /usr/share/gitweb/>
              Options ExecCGI +FollowSymLinks +SymLinksIfOwnerMatch
              AllowOverride All
              order allow,deny
              Allow from all
              AddHandler cgi-script .cgi
              DirectoryIndex gitweb.cgi
          </Directory>
          ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
          <Directory "/usr/lib/cgi-bin">
              AllowOverride None
              Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
              Order allow,deny
              Allow from all
          </Directory>
          ErrorLog ${APACHE_LOG_DIR}/error.log
          # Possible values include: debug, info, notice, warn, error, crit,
          # alert, emerg.
          LogLevel warn
          CustomLog ${APACHE_LOG_DIR}/access.log combined
      </VirtualHost>

    And this after:

      <VirtualHost *:80>
          ServerAdmin webmaster@localhost
          DocumentRoot /home/administrator/webroot
          <Directory />
              Options FollowSymLinks
              AllowOverride None
          </Directory>
          <Directory /home/administrator/webroot/>
              Options Indexes FollowSymLinks MultiViews
              AllowOverride All
              Order allow,deny
              allow from all
          </Directory>
          ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
          <Directory "/usr/lib/cgi-bin">
              AllowOverride None
              Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
              Order allow,deny
              Allow from all
          </Directory>
          ErrorLog ${APACHE_LOG_DIR}/error.log
          # Possible values include: debug, info, notice, warn, error, crit,
          # alert, emerg.
          LogLevel warn
          CustomLog ${APACHE_LOG_DIR}/access.log combined
      </VirtualHost>
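
    Two cheap things to rule out before digging deeper: a gitweb fragment shipped by the package itself (outside sites-available), and a cached response in the browser; a sketch:

      # Is the alias defined anywhere else under the Apache config tree?
      grep -Rn gitweb /etc/apache2/
      # Debian-family gitweb packages ship their own Apache fragment; look for it:
      dpkg -L gitweb | grep -i apache
      # Bypass any browser caching while testing:
      curl -sI http://xyz/gitweb/ | head -n 5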

    Read the article

  • exim4 seems to have stopped listening

    - by trakos
    Hey, I have a strange problem with my exim4 configuration. I have a dedicated server running Debian; it has been up for a long time, and since I'm not using it actively these days, everything just worked through lack of changes. Recently, however, exim4 stopped answering on port 25. It doesn't respond via localhost either, even though it's set to listen on any available interface. Some things I've checked:

      ks:/home/trakos/Maildir/new# netstat -ap | grep exim
      tcp        0      0 *:smtp      *:*      LISTEN      12521/exim4
      ks:/home/trakos/Maildir/new# exiwhat
      12521 daemon: -q30s, listening for SMTP on port 25 (IPv4)
      ks:/home/trakos/Maildir/new# cat /var/log/exim4/rejectlog
      ks:/home/trakos/Maildir/new# cat /var/log/exim4/paniclog

    The queue interval is set to 30s only because I was running it in non-daemon mode to watch the output. Strangely enough, there is no suspicious output at all; netstat even shows it listening on port 25, yet telnetting to it times out. The only things that may have changed recently: the server got a second IP, and a few days ago spamassassin crashed and I started it up again. So I'm really clueless about this one — I don't even know what could be failing. Could someone give me some ideas of what to check next? PS: the machine has an uptime of 442 days, and I haven't tried rebooting it yet.
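
    Given that the daemon is listening but connections time out rather than being refused, a packet filter between the socket and the client is a reasonable first suspect — particularly since a second IP was added recently. A few checks:

      iptables -nL INPUT          # any rule matching dpt:25?
      telnet 127.0.0.1 25         # "refused" vs. a silent timeout is informative
      exim4 -bh 127.0.0.1         # simulated SMTP session, no real socket involved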

    Read the article

  • Why doesn't SSHFS let me look into a mounted directory?

    - by Jan
    I use SSHFS to mount a directory from a remote server. There is a user xxx on both client and server, and the UID and GID are identical on both boxes. I use

      sshfs -o kernel_cache -o auto_cache -o reconnect -o compression=no \
            -o cache_timeout=600 -o ServerAliveInterval=15 \
            [email protected]:/mnt/content /home/xxx/path_to/content

    to mount the directory from the remote server. When I log in as xxx on the client I have no problems: I can cd into /home/xxx/path_to/content. But when I log in on the client as another user zzz and run

      $ ls -l /home/xxx/path_to

    I get this:

      d????????? ? ? ? ? ? content

    and on

      $ ls -l /home/xxx/path_to/content

    I get:

      ls: cannot access content: Permission denied

    When I run ls -l /mnt on the remote server I get:

      drwxr-xr-x 6 xxx xxx 4096 2011-07-25 12:51 content

    What am I doing wrong? The permissions look correct to me — am I wrong?
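
    This looks less like a UNIX-permissions problem than FUSE's default behaviour: a FUSE mount is invisible to every user except the one who mounted it unless allow_other is set. A sketch (allow_other for non-root mounts additionally requires user_allow_other in /etc/fuse.conf):

      # once, as root:
      echo user_allow_other >> /etc/fuse.conf
      # then mount with the extra options:
      sshfs -o allow_other,default_permissions -o kernel_cache -o auto_cache \
            -o reconnect -o compression=no -o cache_timeout=600 \
            -o ServerAliveInterval=15 [email protected]:/mnt/content \
            /home/xxx/path_to/content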

    Read the article

  • Why just splitting an Ethernet cable does not work?

    - by Sin Jeong-hun
    I thought Ethernet was logically a one-line communication bus (for argument's sake, I am excluding hubs): all machines attached to the bus hear the same signals, and the machines themselves avoid collisions by randomly backing off (see computer.howstuffworks.com/ethernet6.htm). If so, why doesn't splitting one Ethernet line from my home router into two and connecting two computers work? Why do I have to add a switch?

      What the Internet says would not work:
      [4-port home router] ---[one Ethernet cable]---[simple splitter]===[two computers]

      What the Internet says I should do:
      [4-port home router] ---[one Ethernet cable]---[switch]===[two computers]

    Is this because of signal degradation (reduced electric current)? Thank you for all the answers! The reason I didn't just use two ports of my home router: the 4-port gigabit router is in my room, and I had put a computer in another room (also my room, though). Since wired networking is far more reliable and secure, I bought a long Ethernet cable and connected the computer to the router. Now I'm thinking about adding another computer to that room. I could buy another long Ethernet cable, but then there would be two cables between the rooms. The one line is already a minor annoyance, so I wondered if I could share it between the two computers in that room. A switch would work, but it requires power and is a little pricey — which is why I wondered why simply splitting the physical Ethernet cable would not work. Apparently I don't completely understand how Ethernet and switches work; I just have some bits of knowledge I heard in my college class.

    Read the article

  • Ubuntu Launcher Items Don't Have Correct Environment Vars under NX

    - by ivarley
    I've got an environment variable issue I'm having trouble resolving. I'm running Ubuntu (Karmic, 9.10) and coming in via NX (NoMachine) from a Mac. I've added several environment variables in my .bashrc file, e.g.:

      export JAVA_HOME=$HOME/dev/tools/Linux/jdk/jdk1.6.0_16/

    Sitting at the machine, this environment variable is available on the command line, as well as to apps I launch from the Main Menu. Coming in over NX, however, the variable shows up correctly on the command line but NOT in things launched via the launcher. As an example, I created a simple shell script called testpath in my home folder:

      #!/bin/sh
      echo $PATH && sleep 5
      quit

    I gave it execute permissions:

      chmod +x testpath

    and then I created a launcher item in my Main Menu that simply runs:

      ./testpath

    When I'm sitting at the computer, this launcher runs and shows all the stuff I put into $PATH in my .bashrc file ($JAVA_HOME, etc.). But when I come in over NX, it shows a totally different value for $PATH — despite the fact that if I launch a terminal window (still in NX) and echo $PATH, it shows up correctly. I assume this has to do with which files the windowing system loads over NX, and that it's some other file, but I have no idea how to fix it. For the record, I also have a .profile file with the following in it:

      # if running bash
      if [ -n "$BASH_VERSION" ]; then
          # include .bashrc if it exists
          if [ -f "$HOME/.bashrc" ]; then
              . "$HOME/.bashrc"
          fi
      fi
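
    GUI launchers inherit their environment from the session process, not from an interactive bash, and an NX session may start that process without reading the files a local login does. One experiment (a Debian-family convention — whether NoMachine's session startup honours it needs testing): export the variables from ~/.xsessionrc, which is sourced when the X session starts.

      # ~/.xsessionrc
      export JAVA_HOME=$HOME/dev/tools/Linux/jdk/jdk1.6.0_16/
      export PATH=$JAVA_HOME/bin:$PATH

    Then log out of the NX session, reconnect, and re-run the testpath launcher.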

    Read the article

  • OpenLDAP 2.4.23 - Debian 6.0 - Import schema - Insufficient access (50)

    - by Yosifov
    Good day to everybody. I'm trying to add a new schema to OpenLDAP but get an error: ldap_add: Insufficient access (50).

      root@ldap:/# ldapadd -c -x -D cn=admin,dc=domain,dc=com -W -f /tmp/test.d/cn\=config/cn\=schema/cn\=\{5\}microsoft.ldif
      root@ldap:/# cat /tmp/test.d/cn\=config/cn\=schema/cn\=\{5\}microsoft.ldif
      dn: cn=microsoft,cn=schema,cn=config
      objectClass: olcSchemaConfig
      cn: microsoft
      olcAttributeTypes: {0}( 1.2.840.113556.1.4.302 NAME 'sAMAccountType' DESC 'Fssssully qualified name of distinguished Java class or interface' SYNTAX 1.3.6.1.4.1.1466.115.121.1.27 SINGLE-VALUE )
      olcAttributeTypes: {1}( 1.2.840.113556.1.4.146 NAME 'objectSid' DESC 'Fssssully qualified name of distinguished Java class or interfaced' SYNTAX 1.3.6.1.4.1.1466.115.121.1.40 SINGLE-VALUE )
      olcAttributeTypes: {2}( 1.2.840.113556.1.4.221 NAME 'sAMAccountName' DESC 'Fdssssully qualified name of distinguished Java class or interfaced' SYNTAX 1.3.6.1.4.1.1466.115.121.1.15 SINGLE-VALUE )
      olcAttributeTypes: {3}( 1.2.840.113556.1.4.1412 NAME 'primaryGroupToken' SYNTAX 1.3.6.1.4.1.1466.115.121.1.27 SINGLE-VALUE )
      olcAttributeTypes: {4}( 1.2.840.113556.1.2.102 NAME 'memberOf' SYNTAX 1.3.6.1.4.1.1466.115.121.1.12 SINGLE-VALUE )
      olcAttributeTypes: {5}( 1.2.840.113556.1.4.98 NAME 'primaryGroupID' SYNTAX 1.3.6.1.4.1.1466.115.121.1.27 SINGLE-VALUE )
      olcObjectClasses: {0}( 1.2.840.113556.1.5.6 NAME 'securityPrincipal' DESC 'Csontainer for a Java object' SUP top AUXILIARY MUST ( objectSid $ sAMAccountName ) MAY ( primaryGroupToken $ memberOf $ primaryGroupID ) )

    I also tried to add the schema via phpLDAPadmin but get the same error. I'm using the admin user that is created by default at installation. How can I give this user the needed permissions? Best wishes.
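
    On Debian's packaged OpenLDAP, the cn=config database is normally writable not by cn=admin,dc=domain,dc=com (that identity administers the dc=domain,dc=com data tree) but by the local root user via SASL EXTERNAL over the ldapi socket; a sketch:

      # run as root on the server itself:
      ldapadd -Y EXTERNAL -H ldapi:/// -f "/tmp/test.d/cn=config/cn=schema/cn={5}microsoft.ldif"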

    Read the article

  • Remote Desktop leaves host unresponsive

    - by Jeff Dalley
    I have my desktop PC at home set up to accept remote connections, and I often connect to it from work on my laptop via mstsc.exe. However, every time I remote into it, when I get home the monitor is on but receiving no image, and the computer looks as though it's hibernating or asleep. I basically have to restart it whenever I get home, and I know there's an explanation for why it's doing this. More details: when exiting the remote session, I have tried both logging off the account and closing the RDP window without logging off; both give the same result. When I get home I of course try moving the mouse, Ctrl+Alt+Del to see if it's responsive enough to restart, and multiple key presses to see if I can get any audio out of it. Nothing happens in any of these cases, and a physical restart is necessary — so it seems pretty obvious it's sleeping or hibernating in some way. Both desktop and laptop are running Windows 7 Ultimate. I really do think it's sleeping or hibernating, and I'm not sure why: left alone, my desktop's power options are set to never turn off the HDD or change its state, and I leave it on 24/7. This could be a stupid error on my part, but I just can't see it. Thanks.

    Read the article

  • ProxyPass for specific vhost

    - by Steve Robbins
    I have a web server set up to dynamically serve different document roots for different domains:

      <VirtualHost *:80>
          <IfModule mod_rewrite.c>
              # Stage sites :: www.[document root].server.company.com => /home/www/[document root]
              RewriteCond %{HTTP_HOST} ^www\.[^.]+\.server\.company\.com$
              RewriteRule ^(.+) %{HTTP_HOST}$1 [C]
              RewriteRule ^www\.([^.]+)\.server\.company\.com(.*) /home/www/$1/$2 [L]
          </IfModule>
      </VirtualHost>

    This makes www.foo.server.company.com serve the document root /home/www/foo/. For one of these sites, I need to add a ProxyPass, but I only want it applied to that one site. I tried something like:

      <VirtualHost *:80>
          <Directory /home/www/foo>
              UseCanonicalName Off
              ProxyPreserveHost On
              ProxyRequests Off
              ProxyPass        /services http://www-test.foo.com/services
              ProxyPassReverse /services http://www-test.foo.com/services
          </Directory>
      </VirtualHost>

    But then I get these errors:

      ProxyPreserveHost not allowed here
      ProxyPass|ProxyPassMatch can not have a path when defined in a location.

    How can I set up a ProxyPass for a single virtual host?
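
    Both error messages point the same way: ProxyPreserveHost is only valid at server (or virtual-host) scope, and inside a <Location> or <Directory> block ProxyPass takes no path argument. A sketch of a dedicated vhost for the one site that needs the proxy (the ServerName is illustrative):

      <VirtualHost *:80>
          ServerName www.foo.server.company.com
          DocumentRoot /home/www/foo
          ProxyPreserveHost On
          ProxyRequests Off
          <Location /services>
              ProxyPass http://www-test.foo.com/services
              ProxyPassReverse http://www-test.foo.com/services
          </Location>
      </VirtualHost>

    Because name-based matching prefers a vhost whose ServerName fits the request, this can sit alongside the rewrite-based catch-all above.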

    Read the article

  • Sudo won't execute command as another user

    - by TOdorus
    I'm trying to get a Unicorn server to start when the machine boots. I've created a shell script that works if I log in as the ubuntu user and run /etc/init.d/unicorn start. The shell script:

      #!/bin/sh
      case "$1" in
      start)
          cd /home/ubuntu/projects/asbest/current/
          unicorn_rails -c /home/ubuntu/projects/asbest/current/config/unicorn.rb -D -E production
          ;;
      stop)
          if ps aux | awk '{print $2 }' | grep `cat ~/projects/asbest/current/tmp/pids/unicorn.pid`> /dev/null; then
              kill `cat ~/projects/asbest/current/tmp/pids/uni$
          ;;
      restart)
          $0 stop
          $0 start
          ;;
      esac

    When I rebooted the server, I noticed that Unicorn wasn't listening on its socket. Since the script ran successfully as the ubuntu user, I modified it to always run the commands as that user via sudo:

      #!/bin/sh
      case "$1" in
      start)
          cd /home/ubuntu/projects/asbest/current/
          sudo -u ubuntu unicorn_rails -c /home/ubuntu/projects/asbest/current/config/unicorn.rb -D -E production
          ;;
      stop)
          if ps aux | awk '{print $2 }' | grep `cat ~/projects/asbest/current/tmp/pids/unicorn.pid`> /dev/null; then
              sudo -u ubuntu kill `cat ~/projects/asbest/current/tmp/pids/uni$
          ;;
      restart)
          $0 stop
          $0 start
          ;;
      esac

    After rebooting, Unicorn still wouldn't start, so I tried running the script from the command line. Now I get the following error:

      sudo: unicorn_rails: command not found

    I've searched high and low for the cause, but I'm afraid I've reached the limits of my Linux understanding. From what I can tell, although sudo uses the ubuntu user to execute the commands, it still uses the environment of the root user, which isn't configured to run Ruby or Unicorn. Does anybody have any experience with this?
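
    The diagnosis at the end is essentially right: sudo passes a stripped environment and PATH, so anything installed via a per-user Ruby setup is not found. Two common ways around it (sketches — the absolute path is an assumption; run which unicorn_rails as the ubuntu user to find the real one):

      # 1) Call the executable by absolute path:
      sudo -u ubuntu /usr/local/bin/unicorn_rails -c /home/ubuntu/projects/asbest/current/config/unicorn.rb -D -E production
      # 2) Or run through a full login shell so ubuntu's own environment loads:
      su - ubuntu -c 'cd ~/projects/asbest/current && unicorn_rails -c config/unicorn.rb -D -E production'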

    Read the article

  • Google Sites (via Apps) setup questions

    - by Dave
    I thought it would be a piece of cake to set up a Google site via Google Apps, but perhaps my previous (limited) experience with web development has given me unrealistic expectations. I have actually had a really tough time finding help with my exact question, which is: how do I change the home page contents? You see, I'm used to hosting with someone like GoDaddy, where I can just FTP in and drop my HTML files in the www folder. From research I have found that this is simply not possible with any flavor of Google Sites. That's fine, I can live with it. So let's say I have www.mydomain.com. When I hit that URL, it redirects me to a very long URL (unfortunately) like https://sites.google.com/a/mydomain.com/sites/system/app/pages/meta/domainWelcome, which just says: "Google Apps — Welcome to mydomain.com. If you are the domain administrator, get started creating your home page with Google Sites." Great! I want to do that. So I click on the "If you are the..." link and end up at a screen where I can choose a template, a name, and some visibility options. Under My Sites there isn't a "default" site, i.e. the one that www.mydomain.com displays. I figured that maybe I just had to create a site first, so I went ahead and did that. My first test was a publicly accessible site, on the theory that Google might decide it must be my home page, since it's the only one. But it doesn't, and I still get the "domain welcome" page. Under "More Actions" I didn't see anything interesting except "Manage site"; I went in there, had a peek around, and found nothing about using the site as the default home page. Am I looking for something that just doesn't exist? I can't believe there isn't a way to modify the "welcome to" page...

    Read the article

  • File upload folder permissions with FastCGI - how to make it writable?

    - by user6595
    I am using CentOS 5.7 with cPanel/WHM running FastCGI/suEXEC. I am trying to make a particular folder writable so a script can upload files, but I seem to be having problems. The folder (and all folders below it) I want writable is /home/mydomain/public_html/uploads, and I want only scripts run by the user "songbanc" to be able to write to this directory. I have tried the following:

      chown -R songbanc /home/mydomain/public_html/uploads
      chmod -R 755 /home/mydomain/public_html/uploads

    but it still doesn't work: the script will only upload files if I set the permissions manually to 777 via an FTP client. I assume I am misunderstanding how to set permissions for users with FastCGI, and hopefully someone can help me. Thanks in advance. EDIT: running getfacl on one of the scripts or folders gives the following:

      # file: home/mydomain/public_html/ripples/1.jpg
      # owner: songbanc
      # group: songbanc

    So it appears that the owner is correct? I'm now totally confused! EDIT 2: the plot thickens... lsattr and chattr return "Inappropriate ioctl for device" while reading flags.
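
    With suEXEC the script should already execute as the account user, so before loosening permissions any further it is worth confirming which UID actually runs it; a throwaway probe (assuming the uploader is PHP — adapt otherwise):

      # Drop this in the site, request it once over HTTP, then delete it:
      echo "<?php echo shell_exec('id'); ?>" > /home/mydomain/public_html/whoami.php

    If that prints a UID other than songbanc, the chown above fixed the wrong user, which would explain why only 777 works.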

    Read the article

  • NginxHttpAuthBasicModule with Sinatra & Passenger

    - by scainey
    Hi, I'm serving static pages from a Sinatra application using nginx. I've implemented basic authentication for one page on the site using NginxHttpAuthBasicModule; the authentication succeeds, but nginx doesn't resolve the link. The error log gives:

      2010/03/22 12:15:19 [error] 7143#0: *2902 open() "/home/me/live/mysite_home/public/mypage" failed
      (2: No such file or directory), client: 82.71.18.122, server: mysite.com,
      request: "GET /mypage HTTP/1.1", host: "mysite.com"

    The actual file is found at /home/me/live/mysite_home/live/mypage.erb. The configuration file is:

      server {
          listen 80;
          server_name mysite.com;
          root /home/me/live/mysite_home/public;
          passenger_enabled on;
          location /mypage {
              auth_basic "Restricted";
              auth_basic_user_file htpasswd;
          }
      }

      server {
          listen 443;
          server_name mysite.com;
          root /home/me/live/mysite_home/public;
          passenger_enabled on;
          ssl on;
          ssl_certificate /etc/nginx/conf/certs/server.crt;
          ssl_certificate_key /etc/nginx/conf/certs/server.key;
          keepalive_timeout 70;
          location /mypage {
              auth_basic "Restricted";
              auth_basic_user_file htpasswd;
          }
      }

    I'm not sure if this is a Sinatra, Passenger, or nginx thing, or if I'm just missing something.
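
    One hypothesis consistent with the log line (nginx open()s the URI as a static file instead of handing it to the application): the location block takes over request handling without Passenger enabled inside it, so Sinatra never sees /mypage. A sketch of the guarded location with Passenger re-enabled, worth testing in both server blocks:

      location /mypage {
          auth_basic           "Restricted";
          auth_basic_user_file htpasswd;
          passenger_enabled    on;
      }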

    Read the article

  • WebSphere hung threads, how can I track them down?

    - by Puzzled
    We have an application running on WebSphere (unfortunately version 6.1, which is no longer supported; it has not yet been migrated to a later version in production) which becomes entirely unresponsive because of hung threads. As far as I can tell, we exhaust one of the thread pools entirely. I have activated hung-thread detection, and I get a core/thread dump when hung threads are detected. The server can run for several days without problems, but it has crashed twice this week. When I load the core/thread dump into the IBM Thread and Monitor Dump Analyzer for Java, it tells me that there are a certain number of hung threads (2 this time, 11 the last) and multiple threads (usually around 40) "waiting on condition", plus some running threads. I believe one of the thread pools is about that size (50). What I see in the dump are threads waiting for locks, holding locks, or in wait; most of them show a stack trace that always ends like this:

      at java/lang/Object.wait(Native Method)
      at java/lang/Object.wait(Object.java:231)

    Now, how can I track this down to a server configuration problem, an application issue, a WebSphere problem, or something else? How is this supposed to help me find the problem when almost everything in the dump refers to IBM code? I cannot ask IBM for help, as 6.1 is now an unsupported version of WebSphere, and while work has been done to make the application run under WebSphere 7, we are not ready to switch Production to it yet.
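
    Even on an unsupported release you can generate more of your own data: on IBM JVMs, kill -3 writes a javacore without stopping the process, and comparing several dumps taken a minute apart shows whether the same threads stay parked and which application frames sit beneath the java/lang/Object.wait entries. A sketch (the pgrep pattern is an assumption — find the app-server PID however you normally do):

      # Take three javacores a minute apart; they appear in the profile directory:
      WAS_PID=$(pgrep -f com.ibm.ws.runtime | head -n 1)
      for i in 1 2 3; do kill -3 "$WAS_PID"; sleep 60; done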

    Read the article
