Search Results

  • How do I switch java versions to an earlier version in Fedora 17?

    - by JHutson456
    I just installed Fedora 17. I'm setting up the Android build environment and need Java. I downloaded and installed jdk-6u32-linux-amd64.rpm, ran java -version, and it reported the correct version. Well, a day or two later I tried my first compile in Fedora 17 and it complained about Java and failed. I ran java -version again and, lo and behold, it now reports:

        $ java -version
        java version "1.7.0_03-icedtea"
        OpenJDK Runtime Environment (fedora-2.1.fc17.7-x86_64)
        OpenJDK 64-Bit Server VM (build 22.0-b10, mixed mode)

    I'm stumped. I mean, I've run the update/upgrade commands since I installed, but I didn't think those updated full version revisions. So I ran alternatives --config java, and that only offered the Java 1.7 version. While digging around more, I discovered that the recommended version of Java for the build environment is jdk-6u27-linux-x64-rpm.bin, so I downloaded that from here: Oracle Download. When I ran:

        sudo sh jdk-6u27-linux-x64-rpm.bin

    it returned:

        Unpacking...
        Checksumming...
        Extracting...
        UnZipSFX 5.50 of 17 February 2002, by Info-ZIP ([email protected]).
          inflating: jdk-6u27-linux-amd64.rpm
          inflating: sun-javadb-common-10.6.2-1.1.i386.rpm
          inflating: sun-javadb-core-10.6.2-1.1.i386.rpm
          inflating: sun-javadb-client-10.6.2-1.1.i386.rpm
          inflating: sun-javadb-demo-10.6.2-1.1.i386.rpm
          inflating: sun-javadb-docs-10.6.2-1.1.i386.rpm
          inflating: sun-javadb-javadoc-10.6.2-1.1.i386.rpm
        Preparing...                ########################################### [100%]
                package jdk-2000:1.6.0_32-fcs.x86_64 (which is newer than jdk-2000:1.6.0_27-fcs.x86_64) is already installed
        Done.

    So now I'm confused. I ran alternatives --config java again, but it still only returns 1.7, so I don't know what to do. I want to end up with 6u27 as the installed and functional version of the JDK. Thank you.
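
    One likely explanation: alternatives only lists JDKs that have been registered with it, and the Sun/Oracle RPM installs under /usr/java without always registering itself on Fedora. A sketch of a possible fix, assuming the JDK lands in /usr/java/jdk1.6.0_27 (the RPM's usual location; the priority number 16027 is arbitrary):

        # remove the newer 6u32 package that blocks the 6u27 install
        # ("jdk" is the package name from the "already installed" message)
        sudo rpm -e jdk
        sudo rpm -ivh jdk-6u27-linux-amd64.rpm

        # register the Sun JDK with alternatives, then pick it interactively
        sudo alternatives --install /usr/bin/java java /usr/java/jdk1.6.0_27/bin/java 16027
        sudo alternatives --config java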

  • CouchDB crashes at startup when path to config file has space(s)

    - by Barry Wark
    I'm hoping to run CouchDB as a per-user Launch Agent on OS X, using the couchdbx-core folder from the CouchDB Server.app as the base of my CouchDB deployment. I'd like each user to have their own couch instance (on a different port), necessitating separate config files for each instance. The logical place to put these files is in ~/Library/Application Support/ for each user. I can put the entire distribution in ~/Library/Application Support/my-app/couchdbx and put the .ini at ~/Library/Application Support/my-app/local.ini. Starting couchdb as bin/couchdb -a ../local.ini (from ~/Library/Application Support/my-app/couchdbx) works great.

    But I'd like to save every user the ~50 MB couchdbx and install the couchdbx-core in a shared location (e.g. within my app's .app bundle). When I do this, the path to the per-user config file contains a space, and I get the following error when starting CouchDB:

        $ bin/couchdb -n -a ~/Library/Application\ Support/us.physion.ovation/default.ini
        {"init terminating in do_boot",{{badmatch,{error,{bad_return,{{couch_app,start,[normal,["/Users/hs/prj/build-couchdb/build/etc/couchdb/default.ini","/Users/hs/prj/build-couchdb/build/etc/couchdb/local.ini"]]},{'EXIT',{{badmatch,{error,{error,enoent}}},[{couch_server_sup,start_server,1,[{file,"/Users/hs/prj/build-couchdb/dependencies/couchdb/src/couchdb/couch_server_sup.erl"},{line,56}]},{application_master,start_it_old,4,[{file,"application_master.erl"},{line,274}]}]}}}}}},[{couch,start,0,[{file,"/Users/hs/prj/build-couchdb/dependencies/couchdb/src/couchdb/couch.erl"},{line,18}]},{init,start_it,1,[]},{init,start_em,1,[]}]}}

    Is there any way to provide a config file at the command line if that config file's path includes space(s)? Despite my best efforts in the mailing list archives, wiki and Google, I haven't been able to find a solution or a definitive "it can't work". Any help greatly appreciated.
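
    Since the enoent suggests the startup script re-splits the path on the space, one workaround (a sketch, untested here; the ~/.ovation link name is a placeholder) is to hide the space behind a symlink:

        # point a space-free path at the per-user config directory
        ln -s "$HOME/Library/Application Support/us.physion.ovation" "$HOME/.ovation"
        bin/couchdb -n -a "$HOME/.ovation/default.ini"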

  • What is the uninstall procedure for software installed via "make install" on CentOS 6.2?

    - by gkdsp
    I installed OCILIB on my CentOS 6.2 server some time ago, and now I want to install a newer version. The vendor requires an uninstall but doesn't provide instructions; I'm guessing that's because it's trivial for people with a Linux background. http://orclib.sourceforge.net/doc/html/group_g_install.html

    I installed this software using:

        # step 1
        ./configure --with-oracle-headers-path=/usr/include/oracle/11.2/client64 --with-oracle-lib-path=/usr/lib/oracle/11.2/client64/lib
        # step 2
        make
        # step 3
        su root
        # step 4
        make install
        # step 5
        gcc -g -DOCI_IMPORT_LINKAGE -DOCI_CHARSET_ANSI -L/usr/lib/oracle/11.2/client64/lib -lclntsh -L/usr/local/lib -locilib conn.c -o conn

    How would I go about uninstalling this? I tried following http://www.cyberciti.biz/faq/delete-uninstall-software-linux-commands/ but nothing was found on my disk using rpm -qa *oci* or yum list *oci*. Maybe since it wasn't installed with yum or rpm, I shouldn't expect either of those to find it. Are there generic instructions for uninstalling software on Linux that I could use, or do the instructions really depend on the specific software? Any help much appreciated.
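
    For an autotools-based package like this one (anything installed with ./configure && make install), the usual reverse operation is make uninstall, run from the same configured source tree. A sketch, assuming the original build directory (or a re-extracted copy of the same version, configured with the same flags) is still available; the paths in the last line are illustrative:

        cd /path/to/ocilib-source   # the directory where ./configure was originally run
        sudo make uninstall         # autotools' reverse of "make install"

        # if the source tree is gone, look for what landed under the default
        # /usr/local prefix and remove those files by hand:
        ls /usr/local/lib/libocilib* /usr/local/include/ocilib*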

  • Apache SSL is not working

    - by user1703321
    I have configured virtual hosts on ports 80 and 443 (CentOS 5.6 and Apache 2.2.3). The following is a sample; I wrote the configuration in this same order:

        Listen 80
        Listen 443
        NameVirtualHost *:80
        NameVirtualHost *:443

        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName www.abc.be
            ServerAlias abc.be
            ...
        </VirtualHost>

        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName www.abc.fr
            ServerAlias abc.fr
            ...
        </VirtualHost>

    Then I defined the 443 hosts:

        <VirtualHost *:443>
            ServerAdmin [email protected]
            ServerName www.abc.be
            ServerAlias abc.be
            ...
            SSLEngine on
            SSLCertificateFile /etc/ssl/private/abc.be.crt
            SSLCertificateKeyFile /etc/ssl/private/abc.be.key
            SSLCertificateChainFile /etc/ssl/private/gd_bundle_be.crt
        </VirtualHost>

        <VirtualHost *:443>
            ServerAdmin [email protected]
            ServerName www.abc.fr
            ServerAlias abc.fr
            ...
            SSLEngine on
            SSLCertificateFile /etc/ssl/private/abc.fr.crt
            SSLCertificateKeyFile /etc/ssl/private/abc.fr.key
            SSLCertificateChainFile /etc/ssl/private/gd_bundle_fr.crt
        </VirtualHost>

    The SSL certificate for the first domain, abc.be, is working fine, but the second domain, abc.fr, still serves the first certificate. The following is the output of apachectl -S:

        VirtualHost configuration:
        wildcard NameVirtualHosts and _default_ servers:
        *:443 is a NameVirtualHost
            default server www.abc.be (/etc/httpd/conf/httpd.conf:1071)
            port 443 namevhost www.abc.fr (/etc/httpd/conf/httpd.conf:1071)

    Thanks
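
    One likely cause: serving two different certificates on a single IP address requires SNI, and Apache only gained SNI support in 2.2.12 (built against an SNI-capable OpenSSL), so stock Apache 2.2.3 on CentOS 5 will always answer TLS handshakes with the first *:443 vhost's certificate. Short of upgrading, the classic workaround is one IP address per certificate; a sketch with placeholder addresses:

        # requires a second IP on the server; 203.0.113.10/.11 are placeholders
        <VirtualHost 203.0.113.10:443>
            ServerName www.abc.be
            SSLEngine on
            SSLCertificateFile /etc/ssl/private/abc.be.crt
            ...
        </VirtualHost>

        <VirtualHost 203.0.113.11:443>
            ServerName www.abc.fr
            SSLEngine on
            SSLCertificateFile /etc/ssl/private/abc.fr.crt
            ...
        </VirtualHost>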

  • Apache ProxyPassReverse and https

    - by joshuaball
    Hi, I would like to map all traffic on 80 and 443 from foo.com to an internal server: 192.168.1.101. I have a VirtualHost (Apache 2.2 on Ubuntu) set up as follows (note: I had to break up the hyperlinks below because I am a 'new user'):

        <VirtualHost *:80>
            ServerName foo.com
            ServerAlias *.foo.com
            ProxyRequests Off
            ProxyPreserveHost On
            <Proxy *>
                Order deny,allow
                Allow from all
            </Proxy>
            ProxyPass / http://192.168.1.101/
            ProxyPassReverse / http://192.168.1.101/
        </VirtualHost>

    And that works great for HTTP traffic. However, I can't seem to do the same thing for HTTPS. I have tried:

    - Changing VirtualHost *:80 to *, but that doesn't work (I need it http-to-http and https-to-https).
    - Creating a new VirtualHost entry for *:443 that redirects to http://192.168.1.101/, but that fails as well (browser timeouts).

    I did some searching, here and elsewhere, and the closest question I could find was this, but that didn't quite answer it. Also, just out of curiosity, I tried mapping all ports to HTTPS (by changing the two ProxyPass lines from http to https, and removing the :80 from the VirtualHost), and that didn't work either. How would you do that as well? Any thoughts? Thanks in advance.
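
    For the HTTPS side, the usual pattern is a *:443 vhost that terminates SSL itself and then proxies onward; proxying to an https:// backend additionally requires SSLProxyEngine. A sketch, with hypothetical certificate paths:

        <VirtualHost *:443>
            ServerName foo.com
            ServerAlias *.foo.com
            SSLEngine On
            SSLCertificateFile    /etc/ssl/certs/foo.com.crt    # hypothetical path
            SSLCertificateKeyFile /etc/ssl/private/foo.com.key  # hypothetical path
            SSLProxyEngine On        # only needed because the backend is https://
            ProxyRequests Off
            ProxyPreserveHost On
            ProxyPass        / https://192.168.1.101/
            ProxyPassReverse / https://192.168.1.101/
        </VirtualHost>

    Terminating SSL at the proxy and forwarding plain HTTP (ProxyPass / http://192.168.1.101/) also works, if the backend doesn't insist on TLS.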

  • Clustering/load balancing for cluster unaware applications

    - by AaronLS
    Forgive me if I use any of these terms incorrectly. I am wondering if there is any kind of software that would allow me to "join" two computers together such that a cluster-unaware application could utilize their combined computing resources. By "cluster unaware" I mean an application that isn't designed to share work across multiple servers. My understanding is that clustering is enabled by the specific application through its architecture, such that messaging between multiple instances of the application coordinates the sharing of work. Instead, I am looking for something that enables clustering at the OS or virtualization level, so that any application could essentially be clustered.

    Failing that, I am also wondering about the following scenario: we have three different applications, which we will call A, B, and C, and two single-core computers. At any given time, let's say that any combination of those applications will be CPU intensive. In cases where only two of those apps are very active, one of them could be moved over to a different server. In a nutshell: some sort of dynamic, automatic shuffling of the applications' load. I have heard of virtual machines that can be migrated across physical machines while live, but I am wondering if this can be done automatically in response to an application's or VM's CPU activity?

  • Apache Consuming Resources

    - by Chris Edwards
    Our web server suddenly has been giving us load issues. After I restart Apache the load stays low for a few hours up to a day or so, then it's back up to around 3.0 until I restart Apache again. Any suggestions on tracking down what is causing this? Thanks! Chris Edwards

        top - 20:15:05 up 19 days, 10:59, 1 user, load average: 2.11, 2.17, 2.47
        Tasks: 532 total, 6 running, 525 sleeping, 0 stopped, 1 zombie
        Cpu(s): 11.5%us, 0.4%sy, 0.0%ni, 88.1%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem: 32842656k total, 13185872k used, 19656784k free, 6143740k buffers
        Swap: 1048568k total, 0k used, 1048568k free, 3515252k cached

          PID USER   PR NI  VIRT  RES  SHR S %CPU %MEM    TIME+ COMMAND
        19089 apache 20  0 1912m 1.5g 6584 R 99.6  4.9 71:01.53 /usr/sbin/httpd
        21136 apache 20  0  392m  55m 5736 R 95.0  0.2  0:03.45 /usr/sbin/httpd
        21139 apache 20  0  374m  38m 5808 S 40.5  0.1  0:04.91 /usr/sbin/httpd
        21124 apache 20  0  389m  51m 5948 R 38.9  0.2  0:03.15 /usr/sbin/httpd
        21111 apache 20  0  371m  35m 5964 S 18.8  0.1  0:01.22 /usr/sbin/httpd
        21127 apache 20  0  375m  39m 5832 S 17.8  0.1  0:01.66 /usr/sbin/httpd
        21128 apache 20  0  374m  38m 5792 S 16.2  0.1  0:01.56 /usr/sbin/httpd
        21110 apache 20  0  374m  38m 5848 S 15.9  0.1  0:01.02 /usr/sbin/httpd
        21113 apache 20  0  374m  38m 5836 S 15.9  0.1  0:02.16 /usr/sbin/httpd
        21077 apache 20  0  379m  43m 6408 S 11.0  0.1  0:07.22 /usr/sbin/httpd
        21101 apache 20  0  384m  49m 6988 R  5.8  0.2  0:04.47 /usr/sbin/httpd
        21112 apache 20  0  374m  38m 5956 R  2.6  0.1  0:01.61 /usr/sbin/httpd
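
    To see what the runaway child (PID 19089, pinned near 100% CPU with 71 minutes of CPU time) is actually serving, mod_status is the usual first step. A sketch of enabling it, restricted to localhost:

        # httpd.conf -- then check http://localhost/server-status
        # (or run `apachectl fullstatus` on the box itself)
        ExtendedStatus On
        <Location /server-status>
            SetHandler server-status
            Order deny,allow
            Deny from all
            Allow from 127.0.0.1
        </Location>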

  • Unable to start tomcat6: Java error (Exception in thread "main")

    - by Nyxynyx
    After installing tomcat6 on CentOS 6.3, I am unable to start the tomcat6 server:

        root@host [/var/log/tomcat6]# service tomcat6 start
        Starting tomcat6:                       [  OK  ]

    Although it says OK, I can't access http://mydomain.com:8080. catalina.out shows:

        Exception in thread "main" java.lang.NullPointerException
           at java.lang.VMClassLoader.defineClass(libgcj.so.10)
           at java.lang.ClassLoader.defineClass(libgcj.so.10)
           at java.security.SecureClassLoader.defineClass(libgcj.so.10)
           at java.net.URLClassLoader.findClass(libgcj.so.10)
           at gnu.gcj.runtime.SystemClassLoader.findClass(libgcj.so.10)
           at java.lang.ClassLoader.loadClass(libgcj.so.10)
           at java.lang.ClassLoader.loadClass(libgcj.so.10)
           at gnu.java.lang.MainThread.run(libgcj.so.10)

    Tomcat6 was installed using yum:

        yum -y install java tomcat6 tomcat6-webapps tomcat6-admin-webapps

    When I tried to find the version with tomcat6 version, I got:

        Exception in thread "main" java.lang.NoClassDefFoundError: org.apache.catalina.util.ServerInfo
           at gnu.java.lang.MainThread.run(libgcj.so.10)
        Caused by: java.lang.ClassNotFoundException: org.apache.catalina.util.ServerInfo not found in gnu.gcj.runtime.SystemClassLoader{urls=[], parent=gnu.gcj.runtime.ExtensionClassLoader{urls=[], parent=null}}
           at java.net.URLClassLoader.findClass(libgcj.so.10)
           at gnu.gcj.runtime.SystemClassLoader.findClass(libgcj.so.10)
           at java.lang.ClassLoader.loadClass(libgcj.so.10)
           at java.lang.ClassLoader.loadClass(libgcj.so.10)
           at gnu.java.lang.MainThread.run(libgcj.so.10)

    Any idea what I should do? Thanks!
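
    The libgcj.so.10 frames in both traces suggest Tomcat is being run by GNU gcj rather than a real JVM, which Tomcat 6 cannot run on. A sketch of a possible fix (package and JRE path are the usual ones on stock CentOS 6, but worth verifying locally):

        yum -y install java-1.6.0-openjdk
        alternatives --config java   # select the OpenJDK entry instead of gcj

        # optionally pin it for the tomcat6 service:
        echo 'JAVA_HOME="/usr/lib/jvm/jre-1.6.0-openjdk.x86_64"' >> /etc/sysconfig/tomcat6
        service tomcat6 restart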

  • Postfix SASL Authentication using PAM_Python

    - by Christian Joudrey
    Cross-post from: http://stackoverflow.com/questions/4337995/postfix-sasl-authentication-using-pam-python

    Hey guys, I just set up a Postfix server on Ubuntu and I want to add SASL authentication using PAM_Python. I've compiled pam_python.so and made sure that it is in /lib/security. I've also created the /etc/pam.d/smtp file and added:

        auth required pam_python.so test.py

    The test.py file has been placed in /lib/security and contains:

        # Duplicates pam_permit.c
        DEFAULT_USER = "nobody"

        def pam_sm_authenticate(pamh, flags, argv):
            try:
                user = pamh.get_user(None)
            except pamh.exception, e:
                return e.pam_result
            if user == None:
                pamh.user = DEFAULT_USER  # the original read "pam.user", a typo that would raise a NameError
            return pamh.PAM_SUCCESS

        def pam_sm_setcred(pamh, flags, argv):
            return pamh.PAM_SUCCESS

        def pam_sm_acct_mgmt(pamh, flags, argv):
            return pamh.PAM_SUCCESS

        def pam_sm_open_session(pamh, flags, argv):
            return pamh.PAM_SUCCESS

        def pam_sm_close_session(pamh, flags, argv):
            return pamh.PAM_SUCCESS

        def pam_sm_chauthtok(pamh, flags, argv):
            return pamh.PAM_SUCCESS

    When I test the authentication using auth plain amltbXkAamltbXkAcmVhbC1zZWNyZXQ= I get the following response:

        535 5.7.8 Error: authentication failed: no mechanism available

    In the Postfix logs I have this:

        Dec 2 00:37:19 duo postfix/smtpd[16487]: warning: SASL authentication problem: unknown password verifier
        Dec 2 00:37:19 duo postfix/smtpd[16487]: warning: SASL authentication failure: Password verification failed
        Dec 2 00:37:19 duo postfix/smtpd[16487]: warning: localhost.localdomain[127.0.0.1]: SASL plain authentication failed: no mechanism available

    Any ideas? tl;dr: anyone have step-by-step instructions on how to set up PAM_Python with Postfix? Christian
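
    The "unknown password verifier" warning suggests Cyrus SASL was never told to hand authentication off to saslauthd (which is what talks to PAM, and thus to pam_python). A sketch of the usual glue on Ubuntu (file locations can vary by distribution):

        # /etc/postfix/sasl/smtpd.conf -- make Cyrus SASL use saslauthd
        pwcheck_method: saslauthd
        mech_list: PLAIN LOGIN

        # /etc/postfix/main.cf
        smtpd_sasl_auth_enable = yes

        # and saslauthd must be running with the PAM backend, e.g.:
        saslauthd -a pam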

  • SELinux blocking connection to sshd on Ubuntu 9.10

    - by Barton Chittenden
    When I try to log on to my laptop, which runs Ubuntu 9.10, the server rejects my login attempts. Checking /var/log/auth.log, I see the following:

        Feb 14 12:41:16 tiger-laptop sshd[6798]: error: ssh_selinux_getctxbyname: Failed to get default SELinux security context for tiger

    I googled for this and ran across the following thread: http://www.spinics.net/lists/fedora-.../msg13049.html. Here's the part that I think relates to the problem I'm having:

        What's wrong on my system? Why is it not possible to log in even if
        SELinux is in permissive mode? Any suggestions?

        I'd start by trying to figure out why sshd isn't running in sshd_t (it
        seems to be running in sysadm_t). Paul.

        Yes, sshd is running in sysadm_t:

            ps axZ | grep sshd
            system_u:system_r:sysadm_t 3632 ? Ss 0:00 /usr/sbin/sshd -o PidFile=/var/run/sshd.init.pi

            ls -Z /usr/sbin/sshd
            system_u:object_r:sshd_exec_t /usr/sbin/sshd

        Don't know why it's not sshd_t. I didn't modify anything; it's a standard
        installation of SLES11 with the default reference policy from Tresys.
        Maybe this snippet from policy/modules/services/ssh.te is responsible:

            # Allow ssh logins as sysadm_r:sysadm_t
            gen_tunable(ssh_sysadm_login, true)

        Any ideas?

        Do you have the boolean init_upstart set to on? If not, try setting it
        to on. I do not believe the ssh_sysadm_login boolean works currently,
        but I may be mistaken.

        Yeah, setting init_upstart to on did the trick! THANKS A LOT! Do you
        know why this prevents the user from logging in through ssh even if
        SELinux is set to permissive?

    OK, so the million dollar question is: where do I set init_upstart=1? It's not clear from context which configuration file needs to be edited, and I'm not at all familiar with SELinux configuration.
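
    SELinux booleans aren't set in a config file but with setsebool; a sketch, assuming the init_upstart boolean from the quoted thread actually exists in the installed policy:

        sudo setsebool -P init_upstart 1   # -P persists the value across reboots
        getsebool init_upstart             # verify the new value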

  • Managing Apache to Compensate for WebDAV's Security Masking

    - by Tohuw
    When a user creates a file via WebDAV, the default behavior is that the file is owned by the user and group running the Apache process, with a umask of 022. Unfortunately, this makes it impossible for unprivileged users to write to the files by other means without being a member of the group Apache runs under (which strikes me as a particularly bad idea).

    My current solution is to set umask 000 in Apache's envvars and remove all world permissions from the WebDAV parent directory. So, if the WebDAV share is /home/foo/www, then /home/foo/www is owned by www-data:foo with permissions of 770. This keeps other unprivileged users out, more or less, but it's hokey at best and a security disaster in waiting at worst.

    From my research and poking around at mod_dav and Apache, I cannot find a reasonable solution short of a cron job flipping all the permissions back (I'd rather not have the load and increased complexity on the server). SuExec won't work either, because WebDAV operations are not going to execute as a different user. Any thoughts on this? Thank you.
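
    One avenue that avoids both the umask games and the world-permission tricks is POSIX default ACLs, which grant the unprivileged owner access to whatever Apache creates without widening anything else. A sketch, assuming the filesystem is mounted with ACL support and "foo" is the user in question:

        # give user foo access to everything that already exists...
        setfacl -R -m u:foo:rwX /home/foo/www

        # ...and make that the default ACL on every directory, so files
        # WebDAV creates later inherit it automatically
        find /home/foo/www -type d -exec setfacl -m d:u:foo:rwX {} +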

  • Problem configuring php-fpm with nginx

    - by Nisanio
    First of all: I'm not an expert in configuring things. This is very new to me, so my apologies in advance. At work we have a CentOS server. The guy who worked here before installed nginx. We need to build a PHP site, so, obviously, I need to set up PHP and make it work with nginx. To make a very long tale short, I had to replace the nginx binary with a new one (because the old one was compiled without FastCGI), and I had to recompile and install PHP (because the new version has FPM). Then I struggled with the config files, ending up with this in nginx.conf (not the whole file):

        user php;

        location ~ \.php$ {
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
            include fastcgi_params;
        }

    I also uncommented some parameters in the php-fpm config (too much to detail here, but the important part is that group and user are "php"). I never could start php-fpm with the instructions in the book:

        sudo /usr/sbin/php-fpm start

    But after looking around on the net, I found this:

        sudo /usr/local/sbin/php-fpm --fpm-config=/usr/local/etc/php-fpm.conf

    This worked (I think). I restarted nginx. But... nothing happens with PHP. My requests for PHP files (via Firefox) don't even appear in the log (/opt/nginx/logs/error.log). I'm really, really exhausted and lost. Could anyone help me, please? :( Thanks in advance.
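
    One thing worth checking: the /scripts prefix in SCRIPT_FILENAME is a placeholder from nginx's stock sample config, so PHP-FPM is likely being asked for files that don't exist. A sketch of the usual form (the root path is hypothetical; use wherever the site actually lives):

        location ~ \.php$ {
            root          /var/www/html;    # hypothetical document root
            fastcgi_pass  127.0.0.1:9000;
            fastcgi_index index.php;
            # build the script path from the real document root, not "/scripts"
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include       fastcgi_params;
        }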

  • ESXi 4.0 - cannot copy files

    - by user21368
    I am unable to copy files or make directories on my installation of VMware ESXi 4.0. I have done so in the past (copied an ISO onto a datastore), but something has changed and I have no idea what.

    I cannot copy using the datastore browser (I get a dialog saying "Expected a PUT_FILE_DONE message. Got SESSION_COMPLETE"). I cannot create a directory through the datastore browser (I get a dialog saying "Cannot complete file creation operation"). When I ssh to the ESXi server I cannot create files or folders under /vmfs/volumes, but I can manipulate files elsewhere (including /vmfs). Here are the permissions for the directories (I am logged in as root):

        ~ # ls -lh /vmfs/volumes/
        drwxr-xr-t 1 root root 1.2k Sep  3 12:19 4a76f260-36b7eb85-c3b3-0024e8314929
        drwxr-xr-x 1 root root    8 Jan  1  1970 4a76f261-d6190a9e-3b89-0024e8314929
        drwxr-xr-t 1 root root 1.4k Sep 22 10:38 4a76f262-4ac21f0a-6bc1-0024e8314929
        l--------- 0 root root 1.9k Jan  1  1970 Hypervisor1 -> c42ce27f-eb8d7f70-7f70-0e7a85e8edc4
        l--------- 0 root root 1.9k Jan  1  1970 Hypervisor2 -> bbf1477b-4aec1d8c-caa5-5e8720bebd85
        l--------- 0 root root 1.9k Jan  1  1970 Hypervisor3 -> efd8efe3-03bc1cbf-15e0-080efd9e7379
        drwxr-xr-x 1 root root    8 Jan  1  1970 bbf1477b-4aec1d8c-caa5-5e8720bebd85
        drwxr-xr-x 1 root root    8 Jan  1  1970 c42ce27f-eb8d7f70-7f70-0e7a85e8edc4
        l--------- 0 root root 1.9k Jan  1  1970 datastore1 -> 4a76f260-36b7eb85-c3b3-0024e8314929
        l--------- 0 root root 1.9k Jan  1  1970 datastore2 -> 4a76f262-4ac21f0a-6bc1-0024e8314929
        drwxr-xr-x 1 root root    8 Jan  1  1970 efd8efe3-03bc1cbf-15e0-080efd9e7379
        ~ # touch /vmfs/foo.txt
        ~ # touch /vmfs/volumes/foo.txt
        touch: /vmfs/volumes/foo.txt: Operation not permitted

    I've googled and found nothing helpful. Does anyone out there have an idea as to what is going on? Thanks in advance. Pete.
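
    Two quick checks worth running (a sketch): /vmfs/volumes itself is a virtual namespace rather than a writable directory, so the touch test should target a datastore directory instead, and a full (or read-only) datastore can produce very similar symptoms:

        df -h                                    # is either datastore out of space?
        touch /vmfs/volumes/datastore1/foo.txt   # test inside a datastore, not /vmfs/volumes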

  • How to set up ProxMox 1.9 on VPN?

    - by Gnudiff
    Disclaimer: I have only rudimentary knowledge of VPNs. I would love to learn about them properly; however, at the moment I really need to make things work on short notice.

    I am trying to set up a ProxMox virtualization platform in an existing network. The network currently consists of several servers running the free edition of VMware. There is some sort of VPN defined in the switch. In order for the VMware management interface to be accessible, a checkbox needs to be ticked in the network settings for the VPN and the VPN id entered.

    I didn't notice any such configuration option during the ProxMox installation, so my ProxMox VE on the same physical server, using the same manual IP settings (IP/netmask/gateway), is not accessible. As I understand it, I should touch ProxMox's underlying Debian config in /etc/network/interfaces, but I have no idea what I should aim for: do I specify the settings for eth0, or do I make a virtual interface? How do I make it accessible for both ProxMox VE and the future VMs? I read the ProxMox installation guide, but unfortunately it presumes a better understanding of VPNs than I have. A config template or similar would be appreciated. Thanks in advance.
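
    A "VPN id" configured in a switch sounds like an 802.1q VLAN tag, which on Proxmox's Debian side is normally handled by bridging onto a tagged subinterface. A sketch of /etc/network/interfaces, with all addresses and the VLAN id 100 as placeholders for whatever the VMware hosts actually use (the 8021q kernel module must be available):

        # /etc/network/interfaces -- sketch only
        auto vmbr0
        iface vmbr0 inet static
            address  192.168.10.5
            netmask  255.255.255.0
            gateway  192.168.10.1
            bridge_ports eth0.100    # eth0 traffic tagged with VLAN id 100
            bridge_stp off
            bridge_fd 0

    VMs attached to vmbr0 then share the same tagged segment as the management interface.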

  • JBoss unreachable/slow behind Apache with AJP

    - by Niels
    I have a Linux server running a JBoss instance behind apache2. Apache uses an AJP connection to reverse-proxy to JBoss. I found these messages in the Apache error.log:

        [error] (70007)The timeout specified has expired: ajp_ilink_receive() can't receive header
        [error] ajp_read_header: ajp_ilink_receive failed
        [error] (120006)APR does not understand this error code: proxy: read response failed from 8.8.8.8:8009 (hostname)
        [error] (111)Connection refused: proxy: AJP: attempt to connect to 8.8.8.8:8009 (hostname) failed
        [error] ap_proxy_connect_backend disabling worker for (hostname)
        [error] proxy: AJP: failed to make connection to backend: hostname
        [error] proxy: AJP: disabled connection for (hostname)

    I googled around but I can't seem to find any related topics. Some people say this behavior can be caused by a misconfiguration between Apache and JBoss: if the maximum number of connections Apache allows is far greater than what JBoss allows, the Apache connections time out. But I know the app isn't used by thousands of simultaneous connections at a time, not even hundreds, so I don't believe that could be the cause. Does anybody have an idea, or could anyone tell me how to debug this problem? I'm using these versions:

        Debian 4.3.5-4 64-bit
        Apache 2.2.16
        JBoss 4.2.3.GA

    Thanks
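
    A mismatch that commonly produces exactly these errors: mod_jk keeps pooled AJP connections open for its connection_pool_timeout (in seconds), while the JBoss/Tomcat AJP connector silently drops idle connections after its own connectionTimeout (in milliseconds), so the next request hits a dead socket. A sketch of keeping the two sides in agreement (worker name and values are illustrative):

        # workers.properties -- Apache side, seconds
        worker.myworker.connection_pool_timeout=600

        <!-- JBoss server.xml AJP connector -- milliseconds -->
        <Connector port="8009" protocol="AJP/1.3" redirectPort="8443"
                   connectionTimeout="600000" />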

  • "cannot receive new filesystem stream: invalid backup stream" error when unpacking flash archive on solaris 10

    - by Bovril
    I've searched around, but I'm having no luck with some peculiar behavior of a flash archive. I'm using HP Server Automation 9.14 to deploy the OS. I'm creating a Solaris 10 flash archive to create a snapshot default build in our environment. I create the flash archive with:

        # flar create -c -S -n g8-solaris10-u10 g8-solaris10-u10.flar

    It seems to create the file without any problems (exit status 0). When deploying to a new system (same hardware), it extracts to a point and then bails. The last error I can see in the log is:

        Extracted 2047.00 MB ( 82% of 2488.98 MB archive)
        ERROR: Could not read file (172.27.118.100:/media/opsware/sunos/flar/g8-solaris10-u10.flar
        ERROR: Errors occurred during the extraction of flash archive.
               The file /tmp/flash_errors contains the list of errors encountered
        ERROR: Could not extract Flash archive
        ERROR: Flash installation failed

    The error log contained the following message:

        cannot receive new filesystem stream: invalid backup stream

    A previous version of this flash archive (1.8 GB) worked OK, so I suspect size may be a factor. The source system (the one the flash archive is an image of) is an HP BL460C GEN8; some more info below.

    OS version info:

        # uname -a
        SunOS testhostname 5.10 Generic_147441-01 i86pc i386 i86pc
        # who -r
        . run-level 3 Oct 15 08:15 3 0 S

    Disks:

        # echo | format
        Searching for disks...done

        AVAILABLE DISK SELECTIONS:
        0. c0t0d0 <DEFAULT cyl 17841 alt 2 hd 255 sec 63>
           /pci@0,0/pci8086,3c06@2,2/pci103c,3355@0/sd@0,0
        Specify disk (enter its number):

    Zpools:

        # zpool list
        NAME   SIZE  ALLOC  FREE  CAP  HEALTH  ALTROOT
        rpool  136G  24.6G  111G  18%  ONLINE  -

    Zones:

        # zoneadm list -cv
        ID NAME    STATUS   PATH  BRAND   IP
         0 global  running  /     native  shared

    The file size of 2047 MB seems suspiciously close to 2048, which is concerning. Any help would be greatly appreciated. Thanks

  • Apache2 config variable is not defined

    - by Kurt Bourbaki
    I installed apache2 on Ubuntu 13.10. If I try to restart it using sudo /etc/init.d/apache2 restart I get this message:

        AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1. Set the 'ServerName' directive globally to suppress this message

    So I read that I should edit my httpd.conf file. But, since I can't find it in the /etc/apache2/ folder, I tried to locate it using this command:

        /usr/sbin/apache2 -V

    But the output I get is this:

        [Fri Nov 29 17:35:43.942472 2013] [core:warn] [pid 14655] AH00111: Config variable ${APACHE_LOCK_DIR} is not defined
        [Fri Nov 29 17:35:43.942560 2013] [core:warn] [pid 14655] AH00111: Config variable ${APACHE_PID_FILE} is not defined
        [Fri Nov 29 17:35:43.942602 2013] [core:warn] [pid 14655] AH00111: Config variable ${APACHE_RUN_USER} is not defined
        [Fri Nov 29 17:35:43.942613 2013] [core:warn] [pid 14655] AH00111: Config variable ${APACHE_RUN_GROUP} is not defined
        [Fri Nov 29 17:35:43.942627 2013] [core:warn] [pid 14655] AH00111: Config variable ${APACHE_LOG_DIR} is not defined
        [Fri Nov 29 17:35:43.947913 2013] [core:warn] [pid 14655] AH00111: Config variable ${APACHE_LOG_DIR} is not defined
        [Fri Nov 29 17:35:43.948051 2013] [core:warn] [pid 14655] AH00111: Config variable ${APACHE_LOG_DIR} is not defined
        [Fri Nov 29 17:35:43.948075 2013] [core:warn] [pid 14655] AH00111: Config variable ${APACHE_LOG_DIR} is not defined
        AH00526: Syntax error on line 74 of /etc/apache2/apache2.conf:
        Invalid Mutex directory in argument file:${APACHE_LOCK_DIR}

    Line 74 of /etc/apache2/apache2.conf is this:

        Mutex file:${APACHE_LOCK_DIR} default

    I gave a look at my /etc/apache2/envvars file, but I don't know what to do with it. What should I do?
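
    Those ${APACHE_*} variables live in /etc/apache2/envvars and are only sourced by the init script and the apache2ctl wrapper, not by the bare binary. A sketch of both ways around the warnings, plus the usual fix for the AH00558 message:

        # either source the env file first...
        . /etc/apache2/envvars && /usr/sbin/apache2 -V
        # ...or use the wrapper, which sources it for you
        apache2ctl -V

        # silence AH00558 by setting a global ServerName
        echo "ServerName localhost" | sudo tee /etc/apache2/conf-available/servername.conf
        sudo a2enconf servername && sudo service apache2 reload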

  • Cannot send emails to other web servers

    - by developer
    I'm trying to limit my server's open ports in CSF. The IPv4 port settings include:

        # Allow incoming TCP ports
        TCP_IN = "22,25,53,80,110,143,443,587,3654,53343"

        # Allow outgoing TCP ports
        TCP_OUT = "22,53,80,113,443,465,995,3654"

        # Allow incoming UDP ports
        UDP_IN = "53"

        # Allow outgoing UDP ports
        # To allow outgoing traceroute add 33434:33523 to this list
        UDP_OUT = "53,113,123"

    As you can see, I have port 25 open in TCP_IN but have removed it from TCP_OUT. The reason is that I wanted my mail transmitted over SMTPS, so I have opened port 465 in TCP_OUT instead. Since I am using Roundcube in Directpanel, I have also set the following in Roundcube's config.inc.php:

        $config['default_host'] = 'ssl://mail.mydomain.com';
        $config['smtp_server'] = 'ssl://mail.mydomain.com';
        $config['smtp_port'] = 465;

    However, with port 25 removed from TCP_OUT, I can no longer send mail to, say, Gmail, though I can send mail to my own domain, and I can still receive all mail. Please let me know if I need to make any further changes. Do I need to disable port 25 at all to have my mail sent via SSL? Thanks
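
    The missing piece is probably that port 465 only covers mail submitted by clients (like Roundcube) to your own server. When your MTA then relays that mail onward to gmail.com, it makes an ordinary SMTP connection to the remote server's port 25, so outbound 25 cannot be closed on a box that delivers mail directly. A sketch of the adjusted line:

        # Allow outgoing TCP ports -- 25 restored for MTA-to-MTA delivery
        TCP_OUT = "22,25,53,80,113,443,465,995,3654"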

  • mod_jk problem: Tomcat is probably not started or is listening on the wrong port

    - by Konrad
    Hi, I am running an application on Tomcat 6.0.26. There is Apache in front of the web server, talking to it over mod_jk. Every few hours, when I try to access the application, the browser simply spins and no content is retrieved. No error is reported in the Tomcat logs, but I found errors like these in the mod_jk log:

        [Sun Jul 04 21:19:13 2010][error] ajp_service::jk_ajp_common.c (1758): Error connecting to tomcat. Tomcat is probably not started or is listening on the wrong port. worker=***** failed
        [Sun Jul 04 21:19:13 2010][info] jk_handler::mod_jk.c (1985): Service error=0 for worker==*****
        [Sun Jul 04 21:19:13 2010][info] ajp_connection_tcp_get_message::jk_ajp_common.c (955): Tomcat has forced a connection close for socket 46
        [Sun Jul 04 21:19:13 2010][info] ajp_connection_tcp_get_message::jk_ajp_common.c (955): Tomcat has forced a connection close for socket 46
        [Sun Jul 04 21:19:13 2010][info] ajp_connection_tcp_get_message::jk_ajp_common.c (955): Tomcat has forced a connection close for socket 46
        [Sun Jul 04 21:19:13 2010][error] ajp_get_reply::jk_ajp_common.c (1503): Tomcat is down or refused connection. No response has been sent to the client (yet)
        [Sun Jul 04 21:19:13 2010][error] ajp_get_reply::jk_ajp_common.c (1503): Tomcat is down or refused connection. No response has been sent to the client (yet)
        [Sun Jul 04 21:19:13 2010][info] ajp_connection_tcp_get_message::jk_ajp_common.c (955): Tomcat has forced a connection close for socket 46
        [Sun Jul 04 21:19:13 2010][error] ajp_get_reply::jk_ajp_common.c (1503): Tomcat is down or refused connection. No response has been sent to the client (yet)
        [Sun Jul 04 21:19:13 2010][info] ajp_connection_tcp_get_message::jk_ajp_common.c (955): Tomcat has forced a connection close for socket 45
        [Sun Jul 04 21:19:13 2010][info] ajp_connection_tcp_get_message::jk_ajp_common.c (955): Tomcat has forced a connection close for socket 46
        [Sun Jul 04 21:19:13 2010][info] ajp_service::jk_ajp_common.c (1721): Receiving from tomcat failed, recoverable operation attempt=0

    My worker is configured in the following way:

        worker.admanagonode.port=8009
        worker.admanagonode.host=*****.com
        worker.admanagonode.type=ajp13
        worker.admanagonode.ping_mode=A
        worker.admanagonode.socket_timeout=60
        worker.admanagonode.prepost_timeout=10000
        worker.admanagonode.connect_timeout=10000
        worker.admanagonode.connection_pool_size=200
        worker.admanagonode.connection_pool_timeout=300
        worker.admanagonode.retries=20
        worker.admanagonode.socket_keepalive=1
        worker.admanagonode.cachesize=10
        worker.admanagonode.cache_timeout=600

    Tomcat has the same port number in its Connector configuration:

        <Connector port="8009" protocol="AJP/1.3" redirectPort="8443" address="*********" />

    Does any of you have any idea what I am missing? What can cause such problems? Cheers, Konrad
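
    One thing that stands out: connection_pool_size=200 lets each Apache process hold up to 200 AJP connections, while a default Tomcat AJP connector has far fewer request-processing threads; exhausting them produces exactly this mix of forced closes and refused connections. A sketch of giving the connector headroom and an idle timeout at least as long as mod_jk's pool timeout (attribute values are illustrative):

        <Connector port="8009" protocol="AJP/1.3" redirectPort="8443"
                   address="*********"
                   maxThreads="250"
                   connectionTimeout="600000" />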

  • Rails/Mongo across multiple different geo-regions

    - by wmarbut
    I have a system that by necessity requires physical presence in three or more different locations, and I need advice on structuring it in such a way that my database stays replicated in a timely manner without horrible latency. I've seen MySQL access and replication be incredibly slow when the application server was trying to talk to a node that wasn't physically collocated. In this case I am using MongoDB.

    The stack is Linux/Passenger/Ruby/Rails/MongoDB. The database is write-heavy and read-light. The infrastructure is Amazon EC2. The application layer must be physically located in 3 or more different locations; I can't justify this requirement further than that it is a requirement. The database, however, needn't be located in more than one location if it can be written to quickly from the other locations.

    From reading Mongo's documentation, replication seems like more of a candidate than sharding, because my datastore is not huge. However, I don't see anything that addresses the issue of speed for servers communicating across large distances with potentially high latency.
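
    For what it's worth, a replica set only partially solves this: secondaries in each region can serve local reads, but every write still travels to the single primary, so write latency for a write-heavy app tracks the distance to wherever the primary lives. A sketch of a three-region set from the mongo shell (hostnames are placeholders; priority 0 keeps the far member from ever being elected primary):

        rs.initiate({
          _id: "geo",
          members: [
            { _id: 0, host: "us-east.example.com:27017", priority: 2 },
            { _id: 1, host: "us-west.example.com:27017", priority: 1 },
            { _id: 2, host: "eu-west.example.com:27017", priority: 0 }
          ]
        })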

  • Linux/Unix MTA with the smartest queue?

    - by threecheeseopera
    I am looking for an MTA that will allow me (a script, really) to proactively manage its send queue in response to status codes returned by the remote servers I am delivering to. Basically, for each mail sent I would like to be able to react to the SMTP reply code returned by the remote server, e.g. '250 OK', or to any error conditions like connection timeouts. Additionally, I would like to be able to manage the send queue going forward based on this information, e.g. 'example.com has timed out the last 5 connection attempts, so no longer queue mail for recipients @example.com'.

    I am currently using Postfix and Perl to parse its logs for this information, but I am playing a game of catch-up that is prone to errors (out-of-order log entries etc.) and it's starting to get messy (some real ugly regexes ;). I really don't want to reinvent the wheel and use some language's SMTP library; I would prefer to use a proven/fast/reliable MTA. I am, however, open to suggestions if what I need just isn't possible. Thanks for your help!
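
    Postfix won't hand a script per-reply callbacks, but it does expose per-destination queue controls that cover part of this, and a transport map can suspend a destination without any log scraping. A sketch (parameter names as in Postfix 2.5+; the retry transport defers mail with whatever text you give it):

        # main.cf -- pace deliveries per destination
        smtp_destination_concurrency_limit = 5
        transport_maps = hash:/etc/postfix/transport

        # /etc/postfix/transport -- suspend a misbehaving destination
        example.com    retry:4.7.0 example.com suspended after repeated timeouts

        # activate the map:
        postmap /etc/postfix/transport && postfix reload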

  • Gittornado with Nginx fails to push and pull

    - by Josh Buell
    I'm making a simple website to host git repositories, much like GitHub. I'm using Gittornado to handle git Smart HTTP requests, and it works perfectly locally; I can clone, push, pull, etc. But when I put it behind Nginx, git commands stop working, giving no errors except:

        fatal: The remote end hung up unexpectedly

    I know that it's Nginx causing the trouble, because if I open the port that Tornado is running on and run my git commands through that (i.e. git pull http://mysite.com:8000/myrepository master instead of git pull http://mysite.com/myrepository master), everything works as expected.

    The Nginx access and error logs don't seem to say anything interesting, so I'm reasonably sure it has something to do with the way Nginx is compressing or chunking the requests/responses, causing git to think there's been an unexpected hangup, but I'm not sure what to do to fix it, since this is my first time with Nginx.

    My Nginx configuration file is basically a clone of the one found here; I've tried commenting out various likely-seeming options to see if they were causing the problem, but none of them fixed it, so I assume there's some default behavior I need to suppress. I'm just not sure which. Any thoughts on how to fix this? Since it works when not going through Nginx, I'm considering just redirecting git requests to the Tornado port itself, but that feels like a hack rather than a clean solution...
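
    Two Nginx behaviors plausibly break git's Smart HTTP: response buffering interferes with the streamed pack data, and Nginx of this era does not proxy chunked request bodies at all, which git uses for large pushes. A sketch of the usual mitigations for the proxy location (port 8000 as in the question; other values illustrative):

        location / {
            proxy_pass http://127.0.0.1:8000;
            proxy_set_header Host $host;
            proxy_buffering off;       # stream pack responses instead of spooling them
            gzip off;                  # git packs are already compressed
            client_max_body_size 0;    # don't cap push sizes
            proxy_read_timeout 300;
        }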

  • How to create a VirtualHost in Ubuntu 12.10

    - by Mifas
    I have followed many articles on how to create a VirtualHost in Ubuntu. This is what I have done:

    1. Installed Apache:

        sudo apt-get install lamp-server^ phpmyadmin

    2. Created a folder called site1.com in /var/www/.

    3. Created the file /etc/apache2/sites-available/site1.com and added the following code to it:

        <VirtualHost *:80>
            ServerName www.site1.com
            ServerAdmin [email protected]
            ServerAlias site1.com
            DocumentRoot /var/www/site1.com
            # Other directives here
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /var/www/site1.com/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride all
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    4. Edited the hosts file, adding the following line:

        127.0.0.1 site1.com

    Edit: I also enabled the site via:

        sudo a2ensite site1.com

    Then I restarted the Apache service (I even restarted the PC). When I go to site1.com, it says "The connection has timed out". But I can browse via localhost/site1.com. I have been trying for the last two days with no solution, and I have followed many articles and videos.
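
    Two checks worth making (a sketch): apache2ctl -S confirms the vhost was actually loaded, and the 127.0.0.1 hosts entry must exist on the machine the browser runs on; otherwise site1.com resolves somewhere else entirely, which would explain a timeout rather than an Apache error:

        apache2ctl -S              # should list "port 80 namevhost site1.com"
        getent hosts site1.com     # on a Linux client machine: should print 127.0.0.1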

  • Why Is ModSecurity Unable to Access the Data Directory?

    - by tommytwoeyes
    Update: I think we've solved this; the problem appears to have been a result of the /modsec_storage directory having an incorrect value for its SELinux context type. However, we're still not sure, because although Apache was able to create files in that directory for the global and ip collections (global.dir/global.pag and ip.dir/ip.pag) after I changed the SELinux context type, the new files still have zero bytes. I'm new to ModSecurity and am not sure if the files are empty because something is wrong with the configuration, or if ModSecurity has simply determined it doesn't need to store IP addresses persistently after each transaction ends. Anyone able to offer guidance here?

    I recently installed ModSecurity (v2.5.12 / CRS v2.0.8) on our production server, and everything works great, except for these errors that it keeps writing to the Apache error log:

        Failed to access DBM file "/modsec_storage/global": Permission denied [hostname "www.internationalstudent.com"] [uri "/includes/soc_bookmarks/images/delicious.png"] [unique_id "LZ6jc38AAAEAAFO6408AAABO"]
        Failed to access DBM file "/modsec_storage/ip": Permission denied [hostname "www.internationalstudent.com"] [uri "/includes/soc_bookmarks/images/delicious.png"] [unique_id "LZ6jc38AAAEAAFO6408AAABO"]

    After following the instructions for file permission settings in the ModSecurity handbook by Ivan Ristic, with no success, I created a /modsec_storage directory, set the owner and group to apache, and set the permissions for the directory recursively to 777. However, ModSecurity is still reporting the same permission errors, so I am stumped. Can anyone tell me how to fix this?
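
    If the directory's SELinux type was indeed the culprit, the durable fix is to record the type with semanage rather than chcon, so a filesystem relabel doesn't undo it; a sketch using the writable-content type:

        semanage fcontext -a -t httpd_sys_rw_content_t "/modsec_storage(/.*)?"
        restorecon -Rv /modsec_storage

    Whether zero-byte collection files are normal likely depends on whether any rules actually write to the collections (e.g. setvar against ip or global), so that part may not indicate a problem at all.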

  • Route forwarded traffic through eth0 but local traffic through tun0

    - by Ross Patterson
    I have a Ubuntu 12.04/Zentyal 2.3 server configured with the WAN NATed on eth0, local interfaces eth1 and wlan0 bridged on br1 (on which DHCP runs), and an OpenVPN connection on tun0. I only need the VPN for some things running on the gateway itself, and I need to make sure that everything running on the gateway goes through the VPN's tun0.

        root:~# route
        Kernel IP routing table
        Destination  Gateway  Genmask        Flags Metric Ref Use Iface
        default      gw...    0.0.0.0        UG    100    0   0   eth0
        link-local   *        255.255.0.0    U     1000   0   0   br1
        192.168.1.0  *        255.255.255.0  U     0      0   0   br1
        A.B.C.0      *        255.255.255.0  U     0      0   0   eth0

        root:~# ip route
        169.254.0.0/16 dev br1 scope link metric 1000
        192.168.1.0/24 dev br1 proto kernel scope link src 192.168.1.1
        A.B.C.0/24 dev eth0 proto kernel scope link src A.B.C.186

        root:~# ip route show table main
        169.254.0.0/16 dev br1 scope link metric 1000
        192.168.1.0/24 dev br1 proto kernel scope link src 192.168.1.1
        A.B.C.0/24 dev eth0 proto kernel scope link src A.B.C.D

        root:~# ip route show table default
        default via A.B.C.1 dev eth0

    How can I configure routing (or otherwise) such that all forwarded traffic for other hosts on the LAN goes through eth0, but all traffic for the gateway itself goes through the VPN on tun0? Also, since the OpenVPN client changes routing on startup/shutdown, how can I make sure that everything running on the gateway itself loses all network access if the VPN goes down, and never goes out eth0?
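
    A sketch of the usual policy-routing approach: give forwarded LAN traffic its own routing table whose default route is eth0's gateway, and let the main table (which OpenVPN rewrites) keep steering locally generated traffic into tun0. A.B.C.1 stands in for the real eth0 gateway and 198.51.100.7 for the VPN endpoint:

        # one-time: name a spare routing table
        echo "100 lanfwd" >> /etc/iproute2/rt_tables

        # traffic sourced from the LAN (i.e. forwarded) uses the eth0 default
        ip rule add from 192.168.1.0/24 table lanfwd
        ip route add default via A.B.C.1 dev eth0 table lanfwd

        # kill switch: locally generated traffic may leave eth0 only to reach
        # the VPN endpoint itself (OUTPUT does not affect forwarded traffic)
        iptables -A OUTPUT -o eth0 -d 198.51.100.7 -j ACCEPT
        iptables -A OUTPUT -o eth0 -j REJECT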
