Search Results

Search found 11768 results on 471 pages for 'railstutorial org'.

Page 270 of 471

  • VirtualBox 4.1.20 (Windows 7 / Ubuntu 12.04 (32 bit)) copy/paste is broken

    - by user1628257
    I have a Windows 7 Pro host, and Ubuntu 12.04 LTS guest. I cannot get the shared clipboard working. I have installed Guest Additions 4.1.20 on VirtualBox 4.1.20, have restarted, followed instructions found at http://www.virtualbox.org/manual/ch04.html#idp18411760, and have enabled bidirectional clipboard sharing within VirtualBox options. However, I still cannot copy and paste between the host and guest. Copy/paste works great within the host, and within the guest, but not between the two. I'm out of ideas.
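
    One thing worth checking (a sketch, assuming the Linux Guest Additions installed the usual VBoxClient helper): the clipboard service must actually be running inside the guest, and restarting it sometimes revives a dead shared clipboard.

        # inside the Ubuntu guest
        ps aux | grep "[V]BoxClient"       # is the clipboard helper running at all?
        killall VBoxClient 2>/dev/null     # stop any stale instances
        VBoxClient --clipboard             # restart just the shared-clipboard service
        # VBoxClient-all restarts every desktop service, if that script exists on your guest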

    Read the article

  • Apache Server access log shows another domain's request and got redirected

    - by user3162764
    I found that my apache2 access log (Debian) includes some entries that are not related to my domain and that got a '301' redirection:

        ,-,-,[19/Aug/2014:10:09:54 +0800],"GET /admin.php HTTP/1.0",301,493,,,
        ,-,-,[19/Aug/2014:10:09:55 +0800],"GET /administrator/index.php HTTP/1.0",301,521,,,
        ,-,-,[19/Aug/2014:10:09:55 +0800],"GET /wp-login.php HTTP/1.0",301,499,,,

    Obviously those requests are not for my domain, but according to this source, Debian denies all proxy requests by default: https://wiki.apache.org/httpd/ClientDeniedByServerConfiguration Besides, I cannot find mod_proxy under /etc/apache2/mods-enabled. What I am anxious about: 1. Is the server acting as an open proxy? 2. Why is HTTP 301 returned? Thx.
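
    A quick way to answer the open-proxy worry yourself (a sketch; run the curl test from a machine outside your network, and substitute your real hostname for the placeholder below):

        # is mod_proxy even loaded?
        apachectl -M 2>/dev/null | grep -i proxy
        # from an external host: ask your server to proxy a request for a third-party site;
        # an open proxy would return that site's content instead of your own 301/404
        curl -x http://your-server.example.com:80 http://www.example.com/ -I
        # and check where the 301s actually point
        curl -I http://your-server.example.com/admin.php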

    Read the article

  • Manually start scheduled launchd job

    - by Pascal
    On our Mac OS X (10.6) Server we have set up several backup scripts that are controlled by launchd and launched at specific times. For this we have defined StartCalendarInterval, and this all works very well. Now it happens that I would like to start one of these jobs out of schedule, but the following does not start the job (and also does not give an error/warning):

        sudo launchctl start org.job-label

    The manpage of launchctl states that start is intended to test on-demand jobs; no word about scheduled jobs. Is there a way to kickstart scheduled jobs?
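
    For what it's worth, a minimal sequence to try on 10.6 (a sketch; it assumes the label matches the Label key in the job's plist, and the /Library/LaunchDaemons path is a guess for a system backup job):

        # confirm launchd actually knows the job under that label
        sudo launchctl list | grep org.job-label
        # (re)load it explicitly, then ask for an immediate run
        sudo launchctl load -w /Library/LaunchDaemons/org.job-label.plist
        sudo launchctl start org.job-label
        # watch the system log for the job starting or complaining
        tail -f /var/log/system.log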

    Read the article

  • Upgrading openSUSE 11.1 with Plesk Panel 9.3 to PHP 5.3

    - by Jonathan
    I'm running a VPS with openSUSE 11.1 (i586). Parallels Plesk Panel 9.3.0 is installed on the VPS. The current PHP version is PHP 5.2.11. I want to upgrade PHP to 5.3, but I can't find good instructions on how to do this. If I check for updates in Zypper, it says this is the latest release. There isn't an update in the Plesk Updates either, via either the web-based interface or the command line interface. On Software.openSUSE.org I can find packages for PHP 5.3.1 in both the server:php/server_apache_openSUSE_11.1 repo and the server:php/openSUSE_11.1 repo (can't post the link because I'm a newbie here). But if I add one of those to Zypper, I still don't see an update. Does anybody here know how to do this? And is it completely safe to update that way? I don't want to end up with a broken VPS... Thanks! Jonathan
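
    In case it helps, the usual zypper steps for pulling a newer package from an add-on repository look roughly like this (a sketch only: the repo URL is a placeholder for the server:php repository you found, and option names may differ slightly on 11.1's older zypper):

        sudo zypper ar <server:php repo URL for openSUSE 11.1> server-php
        sudo zypper refresh
        zypper search -s php5            # -s shows every available version and its repository
        # a plain "zypper up" won't switch vendors; install the newer build explicitly
        sudo zypper install --repo server-php php5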

    Read the article

  • How to install wget on this?

    - by Winluser
    I did download RubyStack 2.0.3 for VMWare from http://bitnami.org/files/stacks/rubystack/2.0-3… but I cannot download anything on it! It appears that all basic utilities are missing/screwed:

        bitnami@linux:/var/tmp$ wget
        -bash: wget: command not found
        bitnami@linux:/var/tmp$ curl
        curl: error while loading shared libraries: libcurl.so.4: cannot open shared object file: No such file or directory
        bitnami@linux:/var/tmp$ man wget
        -bash: man: command not found
        bitnami@linux:/var/tmp$ sudo apt-get install wget
        [sudo] password for bitnami:
        Reading package lists… Done
        Building dependency tree
        Reading state information… Done
        E: Couldn't find package wget

    Any ideas how I can download anything on this machine? (I don't have physical access to it)
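
    A couple of things worth trying, sketched under the assumption that apt and python themselves still work on the image:

        # refresh the package lists first - "Couldn't find package" often just means stale or empty lists
        sudo apt-get update
        sudo apt-get install wget
        # for a one-off download without wget/curl, python's urllib can stand in
        # (the URL and filename here are only placeholders)
        python -c "import urllib; urllib.urlretrieve('http://example.org/file.tar.gz', 'file.tar.gz')"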

    Read the article

  • How to setup a host as a sendmail relay for particular IP subnet

    - by Abhinav
    Hi, by default sendmail (I have version 8.13 on RHEL4) allows only local mail. I wanted to allow mail from a particular network to be relayed via the system, so I did the following based on suggestions from various places. In /etc/mail/access I added the subnet and the domain:

        8.37            RELAY
        mydomain.com    RELAY

    (I assume the second entry is the originating email's domain.) This alone did not work, so I added the following to sendmail.mc:

        FEATURE(access_db)dbl

    Now the problem is that it is allowing access from other domains as well. To test it out, I removed the 8.37 RELAY line from the access file and changed the email's From field to abhinav@notmydomain.org. However, I still receive the mail. What is the correct way to configure this, so that only mail from a particular subnet is relayed?
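
    For reference, a minimal sketch of subnet-only relaying. Two hedged notes first: the usual sendmail.mc form ends in dnl (FEATURE(`access_db')dnl), and the mydomain.com RELAY entry matches on more than the connecting network, which may explain why mail still got through after the subnet line was removed. The 192.168.8 prefix below is only an example, since the real subnet is abbreviated above:

        # /etc/mail/access - list only the connecting network you trust, not a domain name
        #   192.168.8              RELAY
        # rebuild the hashed map sendmail actually reads, then restart
        makemap hash /etc/mail/access < /etc/mail/access
        service sendmail restart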

    Read the article

  • Python2.7 / Pip2.7 install in Centos6: root does not see /usr/local/bin

    - by Erotemic
    I am trying to install Python 2.7 on CentOS 6. It's a pain, as CentOS 6 ships with Python 2.6 and yum is dependent on it. Furthermore, yum does not seem to have python2.7. I ended up building it from source:

        wget https://www.python.org/ftp/python/2.7.6/Python-2.7.6.tgz
        gunzip Python-2.7.6.tgz
        tar -xvf Python-2.7.6.tar
        cd Python-2.7.6
        ./configure --prefix=/usr/local --enable-unicode=ucs4 --enable-shared LDFLAGS="-Wl,-rpath /usr/local/lib"
        make
        sudo make altinstall
        cd ~

    This installed python2.7 to /usr/local/bin and I can use it, but I cannot call it with sudo unless I specify the whole pathname. To install pip I had to do:

        wget https://bootstrap.pypa.io/get-pip.py
        sudo /usr/local/bin/python2.7 get-pip.py

    Now whenever I want a package I have to call:

        sudo /usr/local/bin/pip2.7 install somepackage

    Is there a clean way to be able to run sudo pip2.7 install somepackage without having to specify the absolute path? Is a symlink into /usr/bin safe?
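
    Two common ways around this, sketched below. The root cause is usually that sudo resets PATH to its secure_path, which does not include /usr/local/bin on CentOS:

        # 1) symlinks into a directory sudo already searches - generally safe here, since
        #    "make altinstall" never touches the system python2.6 that yum needs
        sudo ln -s /usr/local/bin/python2.7 /usr/bin/python2.7
        sudo ln -s /usr/local/bin/pip2.7 /usr/bin/pip2.7
        # 2) or add /usr/local/bin to sudo's secure_path (edit with visudo):
        #    Defaults    secure_path = /sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin
        sudo visudo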

    Read the article

  • Solaris Non global Zone x server

    - by ankimal
    I am not even sure if this is possible, but how can I start an X server in a non-global zone? I created the xorg.conf by running /usr/X11/bin/xorgconfig. This is what happens if I run startx from within my zone:

        root@foo:/usr/X11/bin# startx
        xauth: creating new authority file /root/.serverauth.20957
        X.Org X Server 1.5.3
        Release Date: 5 November 2008
        X Protocol Version 11, Revision 0
        Build Operating System: SunOS 5.11 snv_108 i86pc
        Current Operating System: SunOS dsol101 5.11 snv_111b i86pc
        Build Date: 07 May 2009 04:44:56PM
        Solaris ABI: 64-bit
        SUNWxorg-server package version: 6.9.0.5.11.11100,REV=0.2009.05.07
        SUNWxorg-mesa package version: 6.9.0.5.11.11100,REV=0.2009.04.02
        Before reporting problems, check http://sunsolve.sun.com/ to make sure that you have the latest version.
        Markers: (--) probed, (**) from config file, (==) default setting, (++) from command line, (!!) notice, (II) informational, (WW) warning, (EE) error, (NI) not implemented, (??) unknown.
        (==) Log file: "/var/log/Xorg.0.log", Time: Tue Nov 10 19:17:53 2009
        (==) Using config file: "/etc/X11/xorg.conf"

        Fatal server error:
        xf86OpenConsole: Cannot open /dev/fb (No such file or directory)
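
    A non-global zone has no framebuffer device, which is exactly what the last line complains about, so a hardware X server cannot start there. A virtual framebuffer is the usual workaround - a sketch, assuming Xvfb ships with the installed X packages:

        # run X against a virtual framebuffer instead of /dev/fb
        /usr/X11/bin/Xvfb :1 -screen 0 1024x768x24 &
        DISPLAY=:1; export DISPLAY
        # clients (or a VNC / X11-forwarded session) can now talk to display :1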

    Read the article

  • Apache Passenger can't find gem

    - by purpletonic
    I'm running Ubuntu 10.04 and I've transferred over some sites built in Sinatra. I've set up Phusion Passenger, but when I visit the sites I'm getting a Passenger LoadError claiming that Passenger has 'no such file to load -- sinatra', yet when I run gem list or sudo gem list, I clearly see sinatra listed. Why can't Passenger find this gem? My sudo gem env output looks like this:

        RubyGems Environment:
          - RUBYGEMS VERSION: 1.3.5
          - RUBY VERSION: 1.8.7 (2009-12-24 patchlevel 248) [x86_64-linux]
          - INSTALLATION DIRECTORY: /usr/local/lib/ruby/gems/1.8
          - RUBY EXECUTABLE: /usr/local/bin/ruby
          - EXECUTABLE DIRECTORY: /usr/local/bin
          - RUBYGEMS PLATFORMS:
            - ruby
            - x86_64-linux
          - GEM PATHS:
            - /usr/local/lib/ruby/gems/1.8
            - /root/.gem/ruby/1.8
          - GEM CONFIGURATION:
            - :update_sources => true
            - :verbose => true
            - :benchmark => false
            - :backtrace => false
            - :bulk_threshold => 1000
          - REMOTE SOURCES:
            - http://gems.rubyforge.org/

    Running 'sudo ruby -v' I see the following:

        ruby 1.8.7 (2009-12-24 patchlevel 248) [x86_64-linux], MBARI 0x6770, Ruby Enterprise Edition 2010.01

    Is that correct, or should the two Ruby versions match up, displaying REE in both? Thanks in advance!
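
    A quick way to see whether two Ruby installations are in play (a sketch): compare the interpreter Passenger is configured to use with the one that owns the gems, and point Apache's PassengerRuby directive at the latter if they differ.

        which ruby && ruby -v                # the interpreter answering on the shell
        gem env | grep "RUBY EXECUTABLE"     # the interpreter that owns the installed gems
        # in the Apache/Passenger config, PassengerRuby should name that same binary, e.g.
        #   PassengerRuby /usr/local/bin/ruby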

    Read the article

  • Manual Duplex printing for Mac (and/or Linux)

    - by chris_l
    My printers don't support automatic duplex printing. I'm looking for a solution for my Mac and Linux computers like what I've seen with most Windows printer drivers:

    1. Check "Manual duplex" in the printer screen
    2. Printer starts printing one side
    3. A dialog appears, asking me to flip the pages
    4. Printer prints the other side.

    One thing I can do is print the odd pages, then reopen the dialog and print the even pages, but this is very inconvenient, especially when I only want to print a certain page range of the document, as the Mac dialog forgets my previous page range every time. It gets even more inconvenient when printing 2-up double-sided, or when changing additional settings for this one printout. Is there maybe some tool that can do this? Or maybe a "virtual printer driver" that can sit somewhere between the dialog and the actual printer driver, which manages these steps? (The Windows tool http://en.wikipedia.org/wiki/FinePrint can do something like that, but I don't need all of its features - and I need it on Mac/Linux) Thanks, Chris
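
    On the CUPS side (both Mac OS X and Linux), the odd/even split can at least be scripted from the command line, which keeps the page range and the other options in one place - a sketch, assuming a PDF and the standard lp options:

        # print the odd pages of the chosen range first
        lp -o page-ranges=5-24 -o page-set=odd document.pdf
        # flip the stack in the tray, then print the even pages of the same range
        # (drop outputorder=reverse if the sheets come out in the wrong order for your printer)
        lp -o page-ranges=5-24 -o page-set=even -o outputorder=reverse document.pdf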

    Read the article

  • Fsck stuck on "Clone Multiply-claimed blocks"

    - by user3436581
    Update: I fixed the issue. But I don't see an eth0 directory in /sys/class/net. Any idea how to fix that? I could not bring up eth0, and I need it badly so that I can back everything up over the network, since I'm working on a VM console. This virtual machine's sda1 is stuck. I've tried e2fsck and fsck, and both get stuck after "Clone multiply-claimed blocks? yes". I've waited for around 5 to 8 hours and it is still the same. I cannot mount the filesystem without fixing these errors. I'm doing this after unmounting all filesystems in rescue mode. Reboot does not help. Any suggestions? Screenshot: http://i.stack.imgur.com/lgixr.jpg Alternative screenshot URL: http://s27.postimg.org/grk4p9eeb/error.png
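
    On the missing eth0, a few quick checks (a sketch; the module names are only guesses for a typical VM NIC):

        ip link show                  # does anything besides lo exist at all?
        lspci | grep -i ether         # is the virtual NIC visible on the PCI bus?
        dmesg | grep -i eth           # did a driver claim it and name it something else?
        sudo modprobe e1000           # e1000 / virtio_net / vmxnet3 are the usual VM NIC modules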

    Read the article

  • Munin Aggregated Graphs Configuration Error

    - by Sparsh Gupta
    I tried making some Munin aggregated graphs but somehow I am unable to make the configuration work. I think I have followed the instructions, but since it's not working, I would love some assistance or guidance as to what I am doing wrong. I want to aggregate (sum) the total number of requests/second all my nginx servers are doing combined. The configuration looks like:

        [TRAFFIC.AGGREGATED]
        update no
        requests.graph_title nGinx requests
        requests.graph_vlabel nGinx requests per second
        requests.draw LINE2
        requests.graph_args --base 1000
        requests.graph_category nginx
        requests.label req/sec
        requests.type DERIVE
        requests.min 0
        requests.graph_order output
        requests.output.sum \
            lb1.visualwebsiteoptimizer.com:nginx_request_lb1.visualwebsiteoptimizer.com_request.request \
            lb3.visualwebsiteoptimizer.com:nginx_request_lb2.visualwebsiteoptimizer.com_request.request \
            lb3.visualwebsiteoptimizer.com:nginx_request_lb3.visualwebsiteoptimizer.com_request.request

    The munin graph I want to aggregate is http://exchange.munin-monitoring.org/plugins/nginx_request/details Thanks Sparsh Gupta

    Read the article

  • Git pull auto complete OSX

    - by vodkhang
    Following the instructions on this site http://denis.tumblr.com/post/71390665/adding-bash-completion-for-git-on-mac-os-x-leopard I got git auto-completion working on Mac OS. However, when I type git pull origin ma (for master) and then press Tab, it takes a long time for git to auto-complete to git pull origin master. I think it connects to the server to get the branches, but I am not sure. Is there any way to make it faster and only use the branches on the local machine?

        cd /tmp
        git clone git://git.kernel.org/pub/scm/git/git.git
        cd git
        git checkout v`git --version | awk '{print $3}'`
        cp contrib/completion/git-completion.bash ~/.git-completion.bash
        cd ~
        rm -rf /tmp/git
        echo -e "source ~/.git-completion.bash" >> .profile

    Read the article

  • Ubuntu 12.04 Netinstall URL? Xen Host

    - by notFound
    Well, I have a Xen server. I've got a CentOS container up fine, but a friend of mine wants (oh god) Ubuntu Server 12.04; why he can't use Debian is beyond my understanding. Anyway, I can't remember how I installed the CentOS container, but I'm giving virt-manager a try now. Since I don't have a disk image already, the only option is to give it a network install URL, since I'm using PV. So does anyone know what I should type in there? If it were CentOS I could easily type http://mirror.centos.org/centos/6.2/os/i386, for example. The furthest I've got in finding a suitable URL is http://archive.ubuntu.com/ubuntu/dists/precise/ but that of course won't work. Any ideas?
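
    For what it's worth, virt-manager/virt-install usually want the installer tree rather than the dists root; for Ubuntu that has traditionally been the main/installer-<arch>/ path. A hedged sketch - verify the URL exists in a browser first, I haven't checked it for precise:

        virt-install --name ubuntu1204 --paravirt --ram 1024 \
          --disk path=/var/lib/libvirt/images/ubuntu1204.img,size=10 \
          --location http://archive.ubuntu.com/ubuntu/dists/precise/main/installer-amd64/ \
          --nographics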

    Read the article

  • How to connect ftp server outside lan?

    - by srisar
    Hi all, I'm setting up a home FTP server so I can share some files with my friends outside my LAN. I am using FileZilla Server and everything is configured. http://www.canyouseeme.org/ even sees my port 21 as open, but when I connect through an FTP client or through a web browser, it says "530 User saravana access denied." How can I solve this problem? I checked the user name and password and everything is good, but I didn't set any passive mode (I didn't know how to set it); could that be causing the trouble? Can anyone help me? By the way, I can connect locally through localhost.

    Read the article

  • How would you shorten 5,000+ URLs? [closed]

    - by Tyler J Fisher
    How would you go about shortening approximately 5,000 permalinks? The links point to a remote media archiving server and are unlikely to change. Example URLs:

        rtsp://foo-1.bar.com/xx/xx/xx/xx.rm
        http://media.foo.org/xx/xx/xx.mp4

    The URLs are going to be stored in a local MySQL database, so it's crucial that the URLs are in a manageable form (e.g. bit.ly or ow.ly style). There are bulk URL shortening services, but those only allow shortening of 100 links/day, which isn't practical, so I need to think of something else.
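
    Since the URLs are headed into MySQL anyway, one self-hosted option is to skip third-party shorteners entirely: key each long URL by an auto-increment id and serve short paths like /u/12345 from your own application. A sketch - the database and table names here are made up:

        mysql -u root -p media_links <<'SQL'
        CREATE TABLE short_urls (
          id       INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
          long_url VARCHAR(2048) NOT NULL
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8;
        SQL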

    Read the article

  • Upgrade PHP to 5.3 in Ubuntu Server 8.04 with Plesk 9.5

    - by alcuadrado
    I have a dedicated server with Ubuntu 8.04, and I really need to upgrade PHP to version 5.3 in order to deploy a new version of the system. This version of PHP is the default one in Ubuntu 10.04, so I considered upgrading the OS, but after trying that, I lost my Plesk installation, which annoyed my client. I tried adding the dotdeb.org repositories, but I don't know why, after running an apt-get upgrade, I get this:

        # apt-get upgrade
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following packages have been kept back:
          libapache2-mod-php5 php5 php5-cgi php5-cli php5-common php5-curl php5-gd php5-imap php5-mysql php5-sqlite php5-xsl
        0 upgraded, 0 newly installed, 0 to remove and 11 not upgraded.

    Any idea why this is happening? Or do you know any alternative method (except compiling my own binaries) to upgrade PHP or update Ubuntu without losing Plesk? Thanks!
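
    "Kept back" is not an error as such: it usually means the upgrade would have to pull in new packages or remove old ones, which a plain apt-get upgrade refuses to do. A sketch of the usual ways forward (review what apt proposes before confirming, given how sensitive Plesk is):

        # either let apt install the new dependencies for everything...
        sudo apt-get dist-upgrade
        # ...or upgrade just the PHP packages explicitly
        sudo apt-get install php5 php5-cli php5-common libapache2-mod-php5
        # and confirm where the candidate 5.3 packages are coming from
        apt-cache policy php5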

    Read the article

  • Launching mysql server: same permissions for root and for user

    - by toinbis
    Hi folks, I have been directed here from Stack Overflow; I am reposting the question and adding my.cnf at the end of the post. So far in my 10+ years of experience with Linux, all the permission problems I've ever encountered have been successfully solved with chmod -R 777 /path/where/the/problem/has/occured (every lie has a grain of truth in it :). This time the trick doesn't work, so I'm turning to you for help.

    I'm compiling the MySQL server from scratch with zc.buildout (www . buildout . org). I launch it by executing /home/toinbis/.../parts/mysql/bin/mysqld_safe, and this works. The thing is that I'll be launching this from within a supervisor (supervisord . org) script, and when used on the deployment server, it'll need to be launched with root permissions (so that the nginx server, launched with the same script, has access to port 80). The problem is that sudo /home/toinbis/.../parts/mysql/bin/mysqld_safe fails, generating the error posted below in the MySQL error log (apache and nginx work as expected).

    http://lists.mysql.com/mysql/216045 suggests that "there are two errors: A missing table and a file system that mysqld doesn't have access to". The MySQL data dir and all the MySQL server binary files have 777 permissions, the table mysql.plugin does exist and has 777 permissions (so why "Can't open the mysql.plugin table"?), and "sudo touch mysql_datadir/tmp/file" does create a file (so why "Can't create/write to file /home/toinbis/.../runtime/mysql_datadir/tmp/ib4e9Huz"?). chgrp -R mysql mysql_datadir and adding the "root, toinbis, mysql" users to the mysql group (cat /etc/group | grep mysql outputs mysql:x:124:root,toinbis,mysql) has no effect - when I launch it as a casual user, it starts; when as root, it fails. Does the MySQL server, even when started as root, try to operate as another user, let's say the 'mysql' user? But even in that case, adding the mysql user to the mysql group and making all the mysql_datadir files belong to the mysql group should make things work smoothly.

    I do know that it might be a better idea to simply launch nginx as root and MySQL as just a user, but this error irritated me enough to devote the energy not only to "make things work", but to make things work exactly as I wanted initially, so as to have a proof of concept that it's possible. This is the generated error:

        091213 20:02:55 mysqld_safe Starting mysqld daemon with databases from /home/toinbis/.../runtime/mysql_datadir
        /home/toinbis/.../parts/mysql/libexec/mysqld: Table 'plugin' is read only
        091213 20:02:55 [ERROR] Can't open the mysql.plugin table. Please run mysql_upgrade to create it.
        /home/toinbis/.../parts/mysql/libexec/mysqld: Can't create/write to file '/home/toinbis/.../runtime/mysql_datadir/tmp/ib4e9Huz' (Errcode: 13)
        091213 20:02:55 InnoDB: Error: unable to create temporary file; errno: 13
        091213 20:02:55 [ERROR] Plugin 'InnoDB' init function returned error.
        091213 20:02:55 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
        091213 20:02:55 [ERROR] Can't start server : Bind on unix socket: Permission denied
        091213 20:02:55 [ERROR] Do you already have another mysqld server running on socket: /home/toinbis/.../runtime/var/pids/mysql.sock ?
        091213 20:02:55 [ERROR] Aborting
        091213 20:02:55 [Note] /home/toinbis/.../parts/mysql/libexec/mysqld: Shutdown complete
        091213 20:02:55 mysqld_safe mysqld from pid file /home/toinbis/.../runtime/var/pids/mysql.pid ended

    My my.cnf (the basedir and datadir, including tmpdir, have chmod -R 777 permissions):

        [client]
        socket          = /home/toinbis/.../runtime/var/pids/mysql.sock
        port            = 8002

        [mysqld_safe]
        socket          = /home/toinbis/.../runtime/var/pids/mysql.sock
        nice            = 0

        [mysqld]
        #
        # * Basic Settings
        #
        socket          = /home/toinbis/.../runtime/var/pids/mysql.sock
        port            = 8002
        pid-file        = /home/toinbis/.../runtime/var/pids/mysql.pid
        basedir         = /home/toinbis/.../parts/mysql
        datadir         = /home/toinbis/.../runtime/mysql_datadir
        tmpdir          = /home/toinbis/.../runtime/mysql_datadir/tmp
        skip-external-locking
        bind-address    = 127.0.0.1
        log-error       = /home/toinbis/.../runtime/logs/mysql_errorlog
        #
        # * Fine Tuning
        #
        key_buffer          = 16M
        max_allowed_packet  = 32M
        thread_stack        = 128K
        thread_cache_size   = 8
        myisam-recover      = BACKUP
        #max_connections    = 100
        #table_cache        = 64
        #thread_concurrency = 10
        #
        # * Query Cache Configuration
        #
        query_cache_limit   = 1M
        query_cache_size    = 16M
        #
        # * Logging and Replication
        #
        # Both location gets rotated by the cronjob.
        # Be aware that this log type is a performance killer.
        #log = /home/toinbis/.../runtime/logs/mysql_logs/mysql.log
        #
        # Error logging goes to syslog. This is a Debian improvement :)
        #
        # Here you can see queries with especially long duration
        #log_slow_queries = /home/toinbis/.../runtime/logs/mysql_logs/mysql-slow.log
        #long_query_time = 2
        #log-queries-not-using-indexes
        #
        # The following can be used as easy to replay backup logs or for replication.
        #server-id        = 1
        #log_bin          = /home/toinbis/.../runtime/mysql_datadir/mysql-bin.log
        #binlog_format    = ROW
        #read_only        = 0
        #expire_logs_days = 10
        #max_binlog_size  = 100M
        #sync_binlog      = 1
        #binlog_do_db     = include_database_name
        #binlog_ignore_db = include_database_name
        #
        # * InnoDB
        #
        innodb_data_file_path = ibdata1:10M:autoextend
        innodb_buffer_pool_size=64M
        innodb_log_file_size=16M
        innodb_log_buffer_size=8M
        innodb_flush_log_at_trx_commit=1
        innodb_file_per_table
        innodb_locks_unsafe_for_binlog=1

        [mysqldump]
        quick
        quote-names
        max_allowed_packet = 32M

        [mysql]
        #no-auto-rehash # faster start of mysql but no tab completion

        [isamchk]
        key_buffer = 16M

    Any ideas much appreciated! Regards, to

    P.S. Sorry for messy hyperlinks, it's my first post and the anti-spam feature of SF doesn't allow me to post them properly :)
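
    One thing that may help narrow it down (a hedged diagnostic, not a fix): the "Errcode: 13" and "Permission denied" lines point at a uid mismatch, so it is worth seeing which user mysqld actually ends up running as in the failing case, and who owns the directories it touches.

        sudo /home/toinbis/.../parts/mysql/bin/mysqld_safe &
        ps -eo user,comm | grep mysqld        # which uid did mysqld end up with?
        ls -ld /home/toinbis/.../runtime/mysql_datadir \
               /home/toinbis/.../runtime/mysql_datadir/tmp \
               /home/toinbis/.../runtime/var/pids
        # mysqld_safe also accepts an explicit --user=... which makes the two cases easier to compare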

    Read the article

  • Unable to log into Ubuntu

    - by Rodnower
    I have Ubuntu 12.04.1. Last time I did nothing special, but suddenly a problem appeared: I have a login screen (using lightdm); when I attempt a login, I get a console session and am returned to the login screen. I see that it is a known issue, so I tried everything from the following steps:

    1. Removed .Xauthority
    2. Configured to use gdm
    3. Reinstalled lightdm
    4. Included my user in the nopasswdlogin group

    But nothing helps... These are the errors from /var/log/auth.log:

        Oct 3 01:11:48 alphabet-2 lightdm: pam_unix(lightdm:session): session opened for user lightdm by (uid=0)
        Oct 3 01:11:48 alphabet-2 lightdm: pam_ck_connector(lightdm:session): nox11 mode, ignoring PAM_TTY :0
        Oct 3 01:11:48 alphabet-2 lightdm: pam_succeed_if(lightdm:auth): requirement "user ingroup nopasswdlogin" not met by user "andrey"
        Oct 3 01:11:48 alphabet-2 dbus[704]: [system] Rejected send message, 2 matched rules; type="method_call", sender=":1.35" (uid=104 pid=1709 comm="/usr/lib/indicator-datetime/indicator-datetime-ser") interface="org.freedesktop.DBus.Properties" member="GetAll" error name="(unset)" requested_reply="0" destination=":1.14" (uid=0 pid=1169 comm="/usr/sbin/console-kit-daemon --no-daemon ")

    Any ideas?
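
    Beyond the steps already tried, the usual culprits for a log-in-then-bounce loop are a root-owned .Xauthority, a full disk, or an error in the session startup files. A quick checklist (a sketch, using the user name from the log above):

        ls -l /home/andrey/.Xauthority              # must be owned by andrey, not root
        df -h /home /tmp                            # a full / or /tmp also kills session startup
        tail -n 50 /home/andrey/.xsession-errors    # errors from the failed session land here
        # if ownership turns out to be the problem:
        sudo chown andrey:andrey /home/andrey/.Xauthority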

    Read the article

  • MaxClients in apache. How to know the size of my proccess?

    - by Larry
    From http://httpd.apache.org/docs/2.2/misc/perf-tuning.html:

        The single biggest hardware issue affecting webserver performance is RAM. A webserver should never ever have to swap, as swapping increases the latency of each request beyond a point that users consider "fast enough". This causes users to hit stop and reload, further increasing the load. You can, and should, control the MaxClients setting so that your server does not spawn so many children it starts swapping. This procedure for doing this is simple: determine the size of your average Apache process, by looking at your process list via a tool such as top, and divide this into your total available memory, leaving some room for other processes.

    The main issue is that I can't understand how to determine that size: top shows the size of httpd as no more than 3888. But if we need to determine the number for MaxClients and I have 4GB of RAM, I get about 972 - so should I use something like 900 for MaxClients?
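
    One way to get the "size of your average Apache process" the document is talking about (a sketch; on Debian-family systems the process is apache2 rather than httpd, and the numbers are in KB):

        # average resident set size (RSS) across the running Apache children
        ps -ylC httpd --sort=rss | awk '$8+0 > 0 { sum += $8; n++ } END { if (n) printf "%.0f KB avg over %d procs\n", sum/n, n }'
        # MaxClients is then roughly: (RAM left over for Apache) / (average child size)

    Note that shared memory means per-child RSS overstates the true incremental cost, so the real ceiling is usually somewhat higher than the naive division suggests.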

    Read the article

  • Connect to SVN repository with Netbeans using SVN+SSH

    - by shuby_rocks
    I am trying to connect to an SVN server in order to import my project into it, using the svn+ssh authentication method. I am using the NetBeans IDE (6.8) with the Subversion plugin installed, on Windows XP SP2. I have plink installed, with its path set in the Windows PATH environment variable. When I use the similar-looking repository URL (XXXX and YYYY replaced with sensible things)

        svn+ssh://XXXX@YYYY/home/dce/svn/trunk

    along with this external tunnel command

        plink -l <myUserName> -i C:\\privateKey.ppk

    I keep getting this error:

        org.tigris.subversion.javahl.ClientException: Network connection closed unexpectedly

    I searched about it on the Internet and tried many things, but nothing worked out. Please help if anybody has some idea what may be going wrong. Thanks a lot in advance.
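
    A common gotcha with plink-backed tunnels, worth ruling out first: plink must already have the server's host key cached, because a GUI-driven tunnel cannot answer the interactive "Store key in cache?" prompt and the connection simply drops. A sketch, run once in a normal command prompt (using the placeholder host name from above), answering "y" to cache the key, then retry from NetBeans:

        plink -l <myUserName> -i C:\privateKey.ppk YYYY exit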

    Read the article

  • ICS as guest in Ubuntu 12.04 host

    - by oshirowanen
    I have installed Android as a guest OS via VirtualBox as per this guide: http://www.android-x86.org/documents/virtualboxhowto using the following ISO: android-x86-4.0-RC1-eeepc.iso. But I am unable to connect to the internet from within the Android virtual machine. The host OS is Ubuntu 12.04, where the internet works fine. I have internet access via a USB wireless connection to the home router. All this is fine. If I install Ubuntu 12.04 as a guest where the host is also Ubuntu 12.04, the guest OS's internet works fine out of the box. But for some reason, I can't get the above Android guest's internet to work out of the box. Does anyone know what I am doing wrong?
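
    If the emulated NIC was detected at all, Ethernet in android-x86 often has to be brought up by hand from the console (Alt-F1 inside the VM, Alt-F7 to get back to the UI). A hedged sketch - command availability varies between android-x86 builds:

        netcfg                         # list the interfaces the build actually sees
        netcfg eth0 up
        netcfg eth0 dhcp               # ask the VirtualBox NAT/bridge for a lease
        setprop net.dns1 8.8.8.8       # DNS is often not set even after a lease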

    Read the article

  • CENTOS: unsupported dictionary type: sqlite in POSTFIX

    - by Ferdinand
    Oct 30 09:24:15 postfix postfix/smtpd[1622]: fatal: unsupported dictionary type: sqlite
        Oct 30 09:24:16 postfix postfix/master[1165]: warning: process /usr/libexec/postfix/smtpd pid 1622 exit status 1
        Oct 30 09:24:16 postfix postfix/master[1165]: warning: /usr/libexec/postfix/smtpd: bad command startup -- throttling

    I'm trying to use sqlite with Postfix, but I get that error. I'm using CentOS 6.4 x64, and I have sqlite and sqlite-devel installed too. I'm assuming the Postfix from BASE (the CentOS repo) comes without sqlite support? I've not been able to recompile with sqlite support using this: http://www.postfix.org/SQLITE_README.html Is there another way to get it to work?
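
    A quick way to confirm the suspicion (a sketch): postconf -m lists the dictionary types the installed Postfix was built with. If sqlite is not in that list, no amount of configuration will enable it - per the SQLITE_README the support has to be compiled in (CCARGS with -DHAS_SQLITE and AUXLIBS with -lsqlite3), or the package swapped for a build that already includes it.

        postconf -m | grep -i sqlite    # empty output means the binary has no sqlite support
        postconf mail_version           # and note which build you are actually running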

    Read the article

  • Shibboleth: found encrypted assertions, but no CredentialResolver was available

    - by HorusKol
    I've gotten a Shibboleth Service Provider (SP) up and running, and I'm using the TestShib Identity Provider (IdP) for testing. The configuration appears to be all correct; when I requested my secured directory I was sent to the IdP, where I logged in, and then was sent back to https://example.org/Shibboleth.sso/SAML2/POST, where I am getting a generic error message. Checking the logs, I am told:

        found encrypted assertions, but no CredentialResolver was available

    I have rechecked the configuration, and there I have:

        <CredentialResolver type="File" key="/etc/shibboleth/sp-key.pem" certificate="/etc/shibboleth/sp-cert.pem"/>

    Both of these files are present at those locations. I've restarted Apache and retried, but still get the same error. I don't know if it makes a difference, but only a subdirectory of the site has been secured - the document root is publicly available.
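
    Two quick things worth checking before digging deeper (a sketch; paths and service names are the stock ones and may differ on your install): shibd, not Apache, does the decryption, so the key must be readable by the user shibd runs as, and shibd itself must have been restarted since the CredentialResolver line was added.

        ls -l /etc/shibboleth/sp-key.pem /etc/shibboleth/sp-cert.pem
        ps -o user= -C shibd                      # the user that must be able to read the key
        sudo service shibd restart && sudo service apache2 restart
        tail -f /var/log/shibboleth/shibd.log     # the real reason usually shows up here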

    Read the article

  • File system loop detected in /var/named/chroot/var/named/

    - by Iko
    The problem started with a "No space left on device" message. After investigating a little (with Google's help) I found:

        find: File system loop detected; '/var/named/chroot/var/named' is part of the same file system loop as '/var/named'.

    What I don't know is what to do next. I found this on centos.org: "...and see if the inode numbers are the same (they shouldn't be). If they are then you need to remove the /var/named/chroot/var/named/ hard link and recreate it as a directory." The inode numbers are the same, but I don't know exactly which folder to delete and what to do next. Thank you for any help.

        Linux xxxxx.onlinehome-server.info 2.6.32-220.13.1.el6.x86_64 #1 SMP Tue Apr 17 23:56:34 BST 2012 x86_64 x86_64 x86_64 GNU/Linux
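
    Before deleting anything: on CentOS 6 the bind-chroot setup normally bind-mounts /var/named into the chroot, so identical inode numbers can simply mean you are looking at the same directory through a mount, not a stray hard link. A hedged sketch of the checks, plus a way to find the real space hog:

        mount | grep named                                   # a bind mount shows up here
        stat -c '%i %n' /var/named /var/named/chroot/var/named
        # the "No space left on device" culprit is usually elsewhere; walk the tree by size
        du -xh --max-depth=1 / | sort -h | tail -20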

    Read the article
