Search Results

Search found 38064 results on 1523 pages for 'oracle linux'.


  • running vmstat around a command

    - by svrist
    I'm trying to automate a long list of database experiments. I would like to save vmstat output while specific parts of the experiments are running. Does something exist along the lines of time(1) or script(1) that I could use to say: "While this command is running, save vmstat 2 (or similar) output to a file"? I am aware that vmstat reports statistics for the complete system, including other processes, but that is fine for my usage. (Running Ubuntu 9.10, if that makes a difference.)
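
    A minimal sketch of one way to do this with a small bash wrapper (the script name and log file are just placeholders): start vmstat in the background, run the command, then stop vmstat when the command exits.

        #!/bin/bash
        # Usage: ./with-vmstat.sh <logfile> <command> [args...]
        OUT="$1"; shift

        vmstat 2 >> "$OUT" &        # sample every 2 seconds into the log
        VMSTAT_PID=$!

        "$@"                        # run the experiment command
        STATUS=$?

        kill "$VMSTAT_PID"          # stop sampling when the command finishes
        exit "$STATUS"

    Invoked as ./with-vmstat.sh vmstat.log ./run_experiment.sh, which keeps the time(1)-like "wrap a single command" feel the question asks about.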

    Read the article

  • Why do I have untrusted certificates for Google, Yahoo, Mozilla and others?

    - by jackweirdy
    In the HTTPS/SSL section of chrome://chrome/settings, I see certificates for Google, Yahoo, Mozilla and other sites listed as untrusted. What does this mean, and is there something wrong? I have a basic understanding of SSL/TLS - I'm not claiming to be completely familiar, but I'm fairly confident I know my way around it - but I don't understand why I have certificates installed on my machine specifically for these sites. From my understanding, I should have the certificates for Certificate Authorities, and any site I visit that uses SSL/TLS should have a certificate signed by one of these trusted CAs for me to trust the site. My worry is that if someone has maliciously installed a certificate for these sites on my machine, they could perform a DNS spoofing attack (or a number of other attacks) to hijack my connection to my email account without me knowing, and as they've got the private counterpart to the certificate on my machine, decrypt the communication. NB: I'm also aware that CA certificates aren't just within Chromium and are used system-wide as part of libssl - they're stored in /etc/ssl/certs. What I'd like to know is: Is this correct? (The big red boxes make me think not.) Is this malicious or benign? What can I do to resolve this problem, if indeed it is a problem? Thanks :)
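
    If you want to see what a site actually serves and whether it verifies against your system CA store (rather than trusting what any one browser displays), a quick hedged check from the shell - the hostname here is only an example - looks something like:

        # Capture the certificate the server presents, plus OpenSSL's verification verdict
        echo | openssl s_client -connect mail.google.com:443 -servername mail.google.com \
            -CApath /etc/ssl/certs > /tmp/servercert.txt 2>&1

        grep "Verify return code" /tmp/servercert.txt        # "0 (ok)" means the chain verifies
        openssl x509 -in /tmp/servercert.txt -noout -subject -issuer -dates -fingerprint

    Comparing that fingerprint with the one Chromium shows for the questionable entry should tell you whether the browser entry refers to the real site certificate or to something locally installed.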

    Read the article

  • Using an allowed user list with VSFTPD

    - by Naftuli Tzvi Kay
    According to the Wiki here, you can only allow certain users to log in over FTP using the following configuration in your /etc/vsftp.conf file:

        userlist_enable=YES
        userlist_file=/etc/vsftp.user_list
        userlist_deny=NO

    I've configured my system this way, and I only have one user which I'd like to expose over FTP, named streams, so my /etc/vsftp.user_list looks like this: streams. Interestingly enough, I cannot log in once I enable the user list. If I change userlist_enable to NO, then things work properly, but if I enable it, I can't log in at all; it just keeps trying to reconnect. I don't get a login-failed message, it just keeps trying to reconnect when using lftp. My /etc/vsftp.conf file is available on Pastebin here and my /etc/vsftp.user_list is available here. What am I doing wrong? I'd just like the streams user to be the only one able to log in.
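
    For comparison, a minimal allow-list setup that is commonly used looks like the following (file names follow the question; stock installs usually call the config /etc/vsftpd.conf). Note that with userlist_deny=NO the check happens before the password prompt, so a user missing from the file is rejected immediately, which some clients surface as an endless reconnect rather than a clean "login failed".

        # /etc/vsftpd.conf (relevant lines only)
        local_enable=YES
        userlist_enable=YES
        userlist_deny=NO                      # treat the list as an allow-list
        userlist_file=/etc/vsftpd.user_list   # one username per line, e.g. "streams"

    It is also worth checking that PAM is not separately rejecting the account; on Debian/Ubuntu, /etc/pam.d/vsftpd typically consults /etc/ftpusers as a deny list.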

    Read the article

  • Backing up VMs to a tape drive

    - by Aljoscha Vollmerhaus
    I've got myself one of these fancy tape drives, an HP LTO2 with 200/400 GB cartridges. The st driver reports it like this: scsi 1:0:0:0: Sequential-Access HP Ultrium 2-SCSI T65D. I can store and retrieve files like a charm using tar; both tar cf /dev/st0 somedirectory and tar xf /dev/st0 work flawlessly. However, what I really would like to back up are LVM LVs. They contain entire virtual machines with varying partition layouts, so using mount and tar is not an option. I've tried using something like dd if=/dev/VG/LV bs=64k of=/dev/st0 to achieve this, but there seem to be various problems associated with this approach.

    Firstly, I would like to be able to store more than 1 LV on a single tape. Now I guess I could seek to concatenate the data on the tape, but I think this would not work very well in an automated scenario with many different LVs of various sizes.

    Secondly, I would like to store a small XML file along with the raw data that contains some information about the VM contained in the LV. I could dump everything to a directory and tar it up - not very desirable, as I would have to set aside huge amounts of scratch space.

    Thirdly, from googling around it seems like it would be wise to use something like mbuffer when writing to the tape, to prevent what Wikipedia calls "shoe-shining" the tape. However, I can't get anything useful done with mbuffer. The mbuffer man page suggests this for writing to a tape device: mbuffer -t -m 10M -p 80 -f -o $TAPE. So I've tried this: dd if=/dev/VG/LV | mbuffer -t -m 10M -p 80 -f -d 64k -o /dev/st0 (note the added "-d 64k" to account for the 64k block size of the tape). However, reading data back from a tape written in this way never seems to yield any useful results - dd has been running for ages now, and has managed to transfer only 361M of data from the tape. What's wrong here?
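
    A sketch of one pattern that addresses the multi-LV and metadata points, using the non-rewinding device /dev/nst0 so that each stream lands in its own tape file, with mt to position between them (LV names and the metadata path are placeholders; double-check the mbuffer block-size flag against your version's man page):

        # Write: for each LV, one small tar'd metadata file followed by the raw image
        mt -f /dev/nst0 rewind
        for LV in LV1 LV2; do
            tar cf /dev/nst0 "/var/tmp/vm-meta/${LV}.xml"
            dd if="/dev/VG/${LV}" bs=64k | mbuffer -t -m 512M -p 80 -s 64k -o /dev/nst0
        done
        mt -f /dev/nst0 rewind

        # Read back the image of the second LV (tape files are numbered from 0,
        # so: 0=meta1, 1=image1, 2=meta2, 3=image2)
        mt -f /dev/nst0 fsf 3
        dd if=/dev/nst0 of=/dev/VG/LV2 bs=64k

    The read-back block size has to match what was written; if the tape was written through mbuffer with 64k blocks, reading it back with dd's default 512-byte bs is one plausible explanation for the painfully slow restore described above.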

    Read the article

  • "File" exists or not

    - by SnailTang
    ls -il gives me:

        ls: cannot access éaj/p+st.ó·e: No such file or directory
        ls: cannot access éaj/p+st.ó·e: No such file or directory
        ls: cannot access é@j/p¦ft.¦·N: No such file or directory
        ls: cannot access é@j/p¦ft.¦·N: No such file or directory
        total 55456
        ? -????????? ? ? ? ? ? éaj/p+st.ó·e
        ? -????????? ? ? ? ? ? éaj/p+st.ó·e
        ? -????????? ? ? ? ? ? é@j/p¦ft.¦·N
        ? -????????? ? ? ? ? ? é@j/p¦ft.¦·N

    and when I try to display these files by name, all I get is: p+st.ó·e and p¦ft.¦·N. Where do these files (or whatever they are) actually exist, and what makes them show up here?
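
    A few read-only commands can show what is really sitting in those directory entries (a sketch; run them in the directory in question):

        # List names with non-printable bytes escaped, plus inode numbers where stat() succeeds
        ls -lib

        # Print every raw directory entry on its own line with control characters made visible
        find . -maxdepth 1 -mindepth 1 -print | cat -A

    Entries that show "?" in every ls column mean the name exists in the directory but stat() on it fails, which usually points at a file name in a different character encoding than the current locale or, if the names keep changing like this, at filesystem damage; in the latter case fsck on the unmounted filesystem is the usual next step.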

    Read the article

  • Monitor LSI 3ware raid controller on ESXi

    - by aseq
    This concerns a server that runs ESXi (v. 4.x or 5.x) installed on drives that are configured into a RAID10 using an LSI 3ware 9750 RAID controller. I would like to know if there is a way to monitor the LSI 3ware series of controllers, in particular the 9750, through ESXi, and to hopefully also run the monitoring daemon LSI provides. I know you can set up a cronjob to execute tw_cli through ssh on the ESXi server, but that's not really ideal. I am not using vCenter, by the way. It would be nice to have more than just monitoring working, since the 3ware software has a very useful web client besides tw_cli.

    Read the article

  • Getting Error while running RED5 server - class path resource [red5.xml] cannot be opened because it does not exist

    - by sunil221
    Hi, I have installed java version "1.6.0_14" and Ant version 1.8.2 for the Red5 server. When I try to run the Red5 server I get the following error; please help.

        Root: /usr/local/red5
        Deploy type: bootstrap
        Logback selector: org.red5.logging.LoggingContextSelector
        Setting default logging context: default
        11:27:39.838 [main] INFO org.red5.server.Launcher - Red5 Server 1.0.0 RC1 $Rev: 4171 $ (http://code.google.com/p/red5/)
        Red5 Server 1.0.0 RC1 $Rev: 4171 $ (http://code.google.com/p/red5/)
        SLF4J: Class path contains multiple SLF4J bindings.
        SLF4J: Found binding in [jar:file:/usr/local/red5/red5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
        SLF4J: Found binding in [jar:file:/usr/local/red5/lib/logback-classic-0.9.26.jar!/org/slf4j/impl/StaticLoggerBinder.class]
        SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
        11:27:39.994 [main] INFO o.s.c.s.FileSystemXmlApplicationContext - Refreshing org.springframework.context.support.FileSystemXmlApplicationContext@39d85f79: startup date [Mon Dec 21 11:27:39 EST 2009]; root of context hierarchy
        11:27:40.149 [main] INFO o.s.b.f.xml.XmlBeanDefinitionReader - Loading XML bean definitions from class path resource [red5.xml]
        Exception org.springframework.beans.factory.BeanDefinitionStoreException: IOException parsing XML document from class path resource [red5.xml]; nested exception is java.io.FileNotFoundException: class path resource [red5.xml] cannot be opened because it does not exist
        Bootstrap complete

    Read the article

  • MySQL open files limit

    - by Brian
    This question is similar to set open_files_limit, but there was no good answer. I need to increase my table_open_cache, but first I need to increase the open_files_limit. I set the option in /etc/mysql/my.cnf: open-files-limit = 8192. This worked fine in my previous install (Ubuntu 8.04), but now in Ubuntu 10.04, when I start the server up, open_files_limit is reported to be 1710. That seems like a pretty random number for the limit to be clipped to. Anyway, I tried getting around it by adding a line like this in /etc/security/limits.conf: mysql hard nofile 8192. I also tried adding this to the pre-start script in mysql's upstart config (/etc/init/mysql.conf): ulimit -n 8192. Obviously neither of those things worked. So where is the hoop that has been added between Ubuntu 8.04 and 10.04 through which I must jump in order to actually increase the open files limit?
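
    On 10.04 mysqld is supervised by upstart, which does not consult /etc/security/limits.conf, and a ulimit call in the pre-start stanza only affects that pre-start shell, not the daemon. A hedged sketch of the usual fix is to set the limit in the upstart job itself and verify from inside MySQL:

        # /etc/init/mysql.conf -- add alongside the other top-level stanzas
        limit nofile 8192 8192      # soft and hard open-file limits for the mysqld process

        # then:
        #   sudo service mysql restart
        #   mysql -e "SHOW VARIABLES LIKE 'open_files_limit';"

    As an aside, 1710 is probably not random: when mysqld cannot raise the OS limit it falls back to roughly 10 + max_connections + 2 * table_open_cache, which with common values (100 and 800) comes out to exactly 1710.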

    Read the article

  • sequential SSH command execution not working in Ubuntu/Bash

    - by kumar
    My requirement: I have a set of commands in a text file that need to be executed one by one. My shell script has to read each command, execute it and store the results in a separate file. Here is the snippet which does this:

        while read command
        do
            echo 'Command :' $command >> "$OUTPUT_FILE"
            redirect_pos=`expr index "$command" '>>'`
            if [ `expr index "$command" '>>'` != 0 ]; then
                redirect_fn "$redirect_pos" "$command";
            else
                $command
                state=$?
                if [ $state != 0 ]; then
                    echo "command failed." >> "$OUTPUT_FILE"
                else
                    echo "executed successfully." >> "$OUTPUT_FILE"
                fi
            fi
            echo >> "$OUTPUT_FILE"
        done < "$INPUT_FILE"

    A sample Commands.txt looks like this:

        tar -rvf /var/tmp/logs.tar -C /var/tmp/ Commands_log.txt
        gzip /var/tmp/logs.tar
        rm -f /var/tmp/list.txt

    This works fine for commands that need to be executed on the local machine. But when I try to execute the following ssh commands, only the first command gets executed. Here are some of the ssh commands added in my text file:

        ssh uname@hostname1 tar -rvf /var/tmp/logs.tar -C /var/tmp/ Commands_log.txt
        ssh uname@hostname2 gzip /var/tmp/logs.tar
        ssh .. etc

    When I run these commands directly on the CLI they work fine. Could anybody help me with this?
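
    The usual culprit here is that ssh inherits the loop's standard input and reads the remaining lines of $INPUT_FILE as its own stdin, so the while loop never sees them. A hedged sketch of the executing branch with that closed off:

        while read -r command; do
            echo "Command : $command" >> "$OUTPUT_FILE"
            # </dev/null keeps an ssh invocation on this line from swallowing
            # the rest of $INPUT_FILE as its own standard input
            $command < /dev/null >> "$OUTPUT_FILE" 2>&1
            if [ $? -ne 0 ]; then
                echo "command failed." >> "$OUTPUT_FILE"
            else
                echo "executed successfully." >> "$OUTPUT_FILE"
            fi
            echo >> "$OUTPUT_FILE"
        done < "$INPUT_FILE"

    Equivalently, writing the lines in the commands file as ssh -n uname@hostname1 ... tells ssh itself not to read from stdin.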

    Read the article

  • How to tell X.org to reload input device module? (Working around suspend-to-ram crash on Acer laptop)

    - by Vi
    When X.org boots up, the Synaptics touchpad works well. But when I remove the module, X falls back to /dev/input/mice and doesn't use the normal driver even when the touchpad is available again. Xorg.0.log:

        ...
        (II) XINPUT: Adding extended input device "Synaptics Touchpad" (type: TOUCHPAD)
        (--) Synaptics Touchpad: touchpad found
        # { rmmod psmouse && echo mem > /sys/power/state && modprobe psmouse; }
        (WW) : No Device specified, looking for one...
        (II) : Setting Device option to "/dev/input/mice"
        ...

    How do I tell X.org to try its InputDevice again (without restarting the X server)? P.S. rmmod psmouse is needed to prevent crashing of the Acer Extensa 5220 when resuming from suspend-to-ram. Update: Found the answer myself: running xinput set-int-prop "Synaptics Touchpad" "Device Enabled" 8 1 after reloading the kernel module re-enables the touchpad. Now suspend-to-ram works OK.
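
    Putting the question's own commands together, a small suspend wrapper along these lines saves poking things by hand after every resume (a sketch; the DISPLAY value assumes the usual single local X server):

        #!/bin/sh
        # Suspend-to-RAM with the psmouse workaround, then re-enable the touchpad in X
        rmmod psmouse
        echo mem > /sys/power/state                 # blocks here until resume
        modprobe psmouse
        DISPLAY=:0 xinput set-int-prop "Synaptics Touchpad" "Device Enabled" 8 1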

    Read the article

  • mysqld causes high CPU load

    - by Radu
    My mysqld spikes to 99.9% CPU for a variable time (between 2 and 20 minutes) and then goes back to a normal 0.1% - 5%. I checked the processlist: all is normal, 1 to 20 inserts or updates that last 2 to 5 sec, and about 20 processes that are in Sleep mode (maybe because the scripts don't close the mysql connection, but they are closed in about 5 - 10 secs; I didn't write the scripts :P but the server was running fine for the last 2 years, since it was set up):

        | 15375 | root  | localhost | stoc  | Query | 0 | NULL | show processlist |
        | 79480 | pppoe | localhost | pppoe | Sleep | 4 | NULL | NULL |
        | 79481 | pppoe | localhost | pppoe | Sleep | 4 | NULL | NULL |
        | 79482 | pppoe | localhost | pppoe | Sleep | 4 | NULL | NULL |
        | 79483 | pppoe | localhost | pppoe | Query | 0 | init | UPDATE acc SET InputOctets="0", OutputOctets="0", InputPackets="unknown", OutputPackets="User |
        | 79484 | pppoe | localhost | pppoe | Sleep | 5 | NULL | NULL |
        | 79485 | pppoe | localhost | pppoe | Sleep | 5 | NULL | NULL |
        | 79486 | pppoe | localhost | pppoe | Sleep | 5 | NULL | NULL

    Checked the RAID, seems OK:

        [root@db2]# cat /proc/mdstat
        Personalities : [raid5] [raid4] [raid1]
        md0 : active raid1 sdd1[3] sdc1[2] sdb1[0] sda1[1]
              136448 blocks [4/4] [UUUU]
        md1 : active raid5 sdd2[3] sdc2[2] sdb2[0] sda2[1]
              12023808 blocks level 5, 256k chunk, algorithm 2 [4/4] [UUUU]
        md3 : active raid5 sda4[1] sdd4[3] sdc4[2] sdb4[0]
              203647488 blocks level 5, 256k chunk, algorithm 2 [4/4] [UUUU]
        md2 : active raid5 sda3[1] sdd3[3] sdc3[2] sdb3[0]
              24024576 blocks level 5, 256k chunk, algorithm 2 [4/4] [UUUU]
        unused devices: <none>

    top sees the mysqld CPU load, but nothing else seems to be wrong:

        [root@db2]# top
        top - 17:56:05 up 7 days, 3:55, 3 users, load average: 32.93, 24.72, 22.70
        Tasks: 75 total, 4 running, 71 sleeping, 0 stopped, 0 zombie
        Cpu(s): 63.4% us, 36.6% sy, 0.0% ni, 0.0% id, 0.0% wa, 0.0% hi, 0.0% si, 0.0% st
        Mem: 1988824k total, 1304776k used, 684048k free, 99588k buffers
        Swap: 12023800k total, 0k used, 12023800k free, 951028k cached
          PID USER  PR NI VIRT RES SHR  S %CPU %MEM TIME+    COMMAND
         5754 mysql 19  0 236m 57m 5108 R 99.9 2.9  21:58.76 mysqld
            1 root  16  0 7216 700 580  S  0.0 0.0  0:00.39  init
            2 root  RT  0    0   0   0  S  0.0 0.0  0:00.00  migration/0

    I have repaired all the mysql databases, reindexed the RAID ... I'm running out of ideas. Does anyone have an idea of what could be going wrong with this server? Thank you.
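
    Since the processlist looks idle whenever you happen to check it, it may help to capture state automatically the moment mysqld crosses a CPU threshold. A rough watcher sketch (log path and credentials handling are placeholders):

        #!/bin/bash
        # Snapshot the MySQL process list and top output whenever mysqld exceeds 90% CPU
        LOG=/var/log/mysqld-spike.log
        while true; do
            CPU=$(ps -C mysqld -o %cpu= | awk '{s+=$1} END {print int(s)}')
            if [ "${CPU:-0}" -gt 90 ]; then
                {
                    date
                    mysql -uroot -p'secret' -e 'SHOW FULL PROCESSLIST; SHOW GLOBAL STATUS LIKE "Threads_%";'
                    top -b -n1 | head -20
                } >> "$LOG"
            fi
            sleep 10
        done

    Whatever statement shows up consistently in those snapshots (the truncated UPDATE acc ... above is a candidate) is a good place to start looking for a missing index or a lock pile-up.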

    Read the article

  • TIME_WAIT connections not being cleaned up after timeout period expires

    - by Mark Dawson
    I am stress testing one of my servers by hitting it with a constant stream of new network connections. tcp_fin_timeout is set to 60, so if I send a constant stream of something like 100 requests per second, I would expect to see a rolling average of 6000 (60 * 100) connections in a TIME_WAIT state. This is happening, but looking in netstat (using -o) to see the timers, I see connections like: TIME_WAIT timewait (0.00/0/0) where the timeout has expired but the connection is still hanging around, and I then eventually run out of connections. Does anyone know why these connections don't get cleaned up? If I stop creating new connections they do eventually disappear, but while I am constantly creating new connections they don't; it seems like the kernel isn't getting a chance to clean them up. Are there some other config options I need to set to remove the connections as soon as they have expired? The server is running Ubuntu and my web server is nginx. It also has iptables with connection tracking; I am not sure if that would cause these TIME_WAIT connections to live on. Thanks, Mark.
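
    Since connection tracking is in the mix, one hedged thing to check is whether conntrack (which keeps its own TIME_WAIT timer, independent of the socket's) is what is holding the entries; the exact sysctl and proc file names vary a little between kernel versions:

        # Sockets in TIME_WAIT vs. conntrack entries in TIME_WAIT
        ss -tan state time-wait | wc -l
        grep -c TIME_WAIT /proc/net/nf_conntrack 2>/dev/null || grep -c TIME_WAIT /proc/net/ip_conntrack

        # Conntrack's own TIME_WAIT timeout (seconds), separate from tcp_fin_timeout
        sysctl net.netfilter.nf_conntrack_tcp_timeout_time_wait

    If the conntrack table (nf_conntrack_max) fills up, new connections fail even though the sockets themselves would have been reusable, which matches the "eventually run out of connections" symptom.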

    Read the article

  • Apache directory structure with multiple hosted languages.

    - by anomareh
    I just got a new work machine up and running and I'm trying to decide on how to set everything up directory-wise. I've done some digging around and really haven't been able to find anything conclusive. I know it's a question with a variety of answers, but I'm hoping there are some sort of general guidelines or best practices to go by. With that said, here are a few things specific to my situation. I will be doing actual development and testing on the same machine as the server. It is a single-user machine in the sense that I will be the only one working on it. There will be multiple hosted languages, specifically PHP and RoR, while possibly expanding later. I'd like the setup to translate well to a production environment. With those 3 things in mind there are a couple of things I've had in the back of my mind. Seeing as it's a single-user machine, I haven't been able to decide whether or not I should be working on things out of my home directory or if they should be located outside of it. I'm feeling that outside of a user directory would be better as it would translate better to a production environment, but I'm also not sure if that will come with any permission annoyances or concerns seeing as I'll be working on the same machine. Hosting multiple languages seems like it may be a bit quirky. With PHP I've found you're generally just dumping the project somewhere in the document root, whereas with something like a Rails app you have the entire project and you only want the public directory in the document root. Thanks for any insight, opinion, or just personal preference from experience anyone can offer.
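
    To make the PHP-versus-Rails difference concrete, one common arrangement is a shared web root outside any home directory with a vhost per project; the paths below are only an example, not a recommendation:

        /srv/www/
            phpapp/                 # PHP project: the project root itself is served
            railsapp/               # Rails project: only public/ is exposed
                app/  config/  public/  ...

        # /etc/apache2/sites-available/phpapp
        <VirtualHost *:80>
            ServerName phpapp.localhost
            DocumentRoot /srv/www/phpapp
        </VirtualHost>

        # /etc/apache2/sites-available/railsapp (served e.g. via Passenger)
        <VirtualHost *:80>
            ServerName railsapp.localhost
            DocumentRoot /srv/www/railsapp/public
        </VirtualHost>

    Keeping the tree under /srv (or /var/www) with your own user as owner and the web server group given read access tends to translate to production more cleanly than serving out of /home, at the cost of one chown/chgrp step per project.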

    Read the article

  • Where is xorg.conf in Ubuntu 10.04?

    - by Mikey.B
    Hi guys, I'm in the middle of trying to set up dual monitors on Ubuntu and would like to back up my xorg.conf... The documentation I've read thus far says to do the following: sudo cp /etc/X11/xorg.conf /etc/X11/xorg.conf_backup. But I don't see the xorg.conf file anywhere... Am I missing something? Where is it located?
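
    There is probably nothing to find: 10.04 does not ship an /etc/X11/xorg.conf by default, since X autoconfigures itself at startup. If you want a file to edit for the dual-monitor setup, one hedged way to generate a starting point (run it from a text console; the :1 display number just avoids clashing with a running server, and on some setups the running X server has to be stopped first) is:

        # Generate a skeleton xorg.conf from the detected hardware
        sudo Xorg :1 -configure            # writes xorg.conf.new into root's home directory
        sudo cp /root/xorg.conf.new /etc/X11/xorg.conf

    Alternatively, many dual-monitor cases on 10.04 can be handled entirely from the Monitors applet or the vendor tool (nvidia-settings, amdcccle), which write the configuration for you.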

    Read the article

  • HAproxy - Redirect issue - Uri Variables ?

    - by Justin
    I'm using haproxy 1.5dev3 and I was wondering if there is any way to grab the URI query parameters from a request and re-append them to the end of a redirect URL. What I'm trying to do is redirect from: http://www.domain.com/page/example.htm?id=1234567 to: http://www.domain.com/frame/newpage.cfm?id=1234567. redirect prefix doesn't work properly, as it tries to append /page/example.htm to the end of the redirect URL. Can I do some sort of rewrite to accomplish this? It would be awesome if you could use the URI and query as variables for redirection/pool selection like on an F5. Please help... Thanks!
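
    If an in-place rewrite (invisible to the client) is acceptable instead of a 302, one hedged option in this generation of haproxy is a reqrep rule on the request line; because only the path portion is matched, the query string is carried along untouched:

        # In the relevant frontend/listen section:
        # turns "GET /page/example.htm?id=1234567 HTTP/1.1"
        # into  "GET /frame/newpage.cfm?id=1234567 HTTP/1.1"
        reqrep ^([^\ :]*)\ /page/example\.htm(.*)     \1\ /frame/newpage.cfm\2

    If it has to be a real client-visible redirect with the query preserved in an arbitrary new path, that likely means doing the rewrite on the backend web server or moving to a newer haproxy build whose http-request redirect accepts log-format style arguments.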

    Read the article

  • IP tables gateway

    - by WojonsTech
    I am trying to make an iptables gateway. I ordered 3 dedicated servers from my hosting company, all with dual NICs. One server has been given all the IP addresses and is connected directly to the internet, with its other NIC connected to a switch that the other servers are all connected to as well. I want to set up iptables so that when, for example, traffic for the IP address 50.0.2.4 comes into my gateway server, it forwards all the traffic to a private IP address using the second NIC. This way the machine behind the second NIC can do whatever it needs and can respond back as well. I also want it set up so that if any of the other servers needs to download anything over the internet it is able to do so, using the same IP address that is used for its incoming traffic. Lastly, I would like to be able to set up DNS and the other needed networking stuff that I may not be thinking about.
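
    A minimal sketch of that kind of 1:1 NAT on the gateway; the private address 192.168.0.4 and the interface names (eth0 facing the internet, eth1 facing the switch) are assumptions for illustration:

        # Let the gateway forward packets at all
        echo 1 > /proc/sys/net/ipv4/ip_forward

        # Inbound: anything arriving for the public 50.0.2.4 goes to the backend server
        iptables -t nat -A PREROUTING  -d 50.0.2.4 -j DNAT --to-destination 192.168.0.4

        # Outbound: traffic from that backend leaves with the same public address
        iptables -t nat -A POSTROUTING -s 192.168.0.4 -o eth0 -j SNAT --to-source 50.0.2.4

        # And allow the forwarded traffic through the FORWARD chain
        iptables -A FORWARD -d 192.168.0.4 -j ACCEPT
        iptables -A FORWARD -s 192.168.0.4 -j ACCEPT

    Repeat the DNAT/SNAT pair per public IP/backend, make ip_forward persistent in /etc/sysctl.conf, and point the backend servers' default gateway at the gateway's eth1 address so replies and outbound downloads actually go back through it.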

    Read the article

  • LVM2 vs MDADM performance

    - by archer
    I've used MDADM + LVM2 on many boxes for quite a while. MDADM was serving both RAID0 and RAID1 arrays, while LVM2 was used for logical volumes on top of MDADM. Recently I've found that LVM2 can be used without MDADM (thus one layer fewer and, as a result, less overhead) for both mirroring and striping. However, some people claim that READ PERFORMANCE of an LVM2 mirrored array is not as fast as LVM2 (linear) on top of MDADM (RAID1), because LVM2 does not read from 2+ devices at a time, but only uses the 2nd and higher devices in case the 1st device fails, whereas MDADM reads from 2 devices at a time (even in mirrored mode). Who can confirm that?
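
    Rather than rely on claims, the behaviour is easy to measure on your own hardware with a read benchmark against both kinds of mirror; the device names below are placeholders, and iflag=direct keeps the page cache out of the numbers:

        # Sequential read from an LVM2 mirror LV
        dd if=/dev/vg0/mirror_lv of=/dev/null bs=1M count=4096 iflag=direct

        # Sequential read from an md RAID1 device (or an LV layered on top of it)
        dd if=/dev/md0 of=/dev/null bs=1M count=4096 iflag=direct

    A single sequential dd mostly exercises one leg of either mirror; to see whether reads are really spread across both devices, run two dd's in parallel at different offsets (skip=) or use a tool like fio, and watch iostat -x 1 to see which disks are actually busy.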

    Read the article

  • Server high memory usage at same time every day

    - by Sam Parmenter
    Right, we moved one of our main sites onto a new AWS box with plenty of grunt, as it would allow us more control than we had before and future-proof ourselves. About a month ago we started running into issues with high memory usage at the same time every day. In the morning an export is run to export data to a file, which is then FTPed to a local machine for processing. The issues were coinciding with the rough time of the export, but when we didn't run the export one day, the server still ran into the same issues. The export has since been run at other times in the day to monitor memory usage and see if it spikes. The conclusion is that the export is fine and barely touches the sides memory-wise; there is no noticeable change in memory usage. When the issue happens, its effect is to kill mysql and require us to restart the process. We think it might be a mysql memory issue, but it might just be that mysql is the first to feel it. Looking at the logs there is no particular query run before the memory usage hits 90%. When it strikes at about 9:20am, the memory usage spikes from a near-constant 25% to 98% and very quickly kills mysql to save itself. It usually takes about 3-4 minutes to die. There are no cron jobs running at that time of the day and we haven't noticed a spike in traffic over the period of the issues. Any help would be massively appreciated! Thanks.
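
    Since the window is so predictable, a per-minute snapshot plus a look for OOM-killer messages usually identifies what is actually allocating the memory; the log paths below are arbitrary:

        # /etc/cron.d/mem-snapshot -- record the top memory consumers every minute
        * * * * * root (date; free -m; ps aux --sort=-rss | head -15; echo) >> /var/log/mem-snapshot.log

        # After an incident, check whether the kernel OOM killer chose mysqld
        # grep -iE 'out of memory|oom' /var/log/kern.log /var/log/syslog

    If mysqld shows up as the OOM victim but its resident size was stable in the snapshots, the culprit is whichever other process balloons in the minutes leading up to 9:20, and the snapshot log will name it.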

    Read the article

  • Virtual host “Forbidden: You don't have permission to access / on this server” on Debian

    - by ulduz114
    Before I created a virtual host I could see http://localhost, but after I created the virtual host I can see neither http://localhost nor my virtual host http://test. Here is my virtual host config file:

        <VirtualHost test:80>
            ServerAdmin [email protected]
            ServerName test
            ServerAlias test
            DocumentRoot "/home/javad/Public/test/public"
            <Directory "/home/javad/Public/test/public/">
                Options Indexes FollowSymLinks MultiViews ExecCGI
                DirectoryIndex index.php
                AllowOverride all
                Order allow,deny
                allow from all
            </Directory>
        </VirtualHost>

    So I ran a2ensite test, added 127.0.0.1 test to the /etc/hosts file and restarted apache2 without problems. But after that I cannot access http://test or even http://localhost; I get: Forbidden. You don't have permission to access / on this server. When I delete my virtual host setting I can access http://localhost again.
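
    A very common cause of this exact symptom is that the Apache user cannot traverse the path to the DocumentRoot: every directory from / down to /home/javad/Public/test/public needs the execute (traverse) bit for "other" (or for the www-data group). A quick check and hedged fix:

        # Show owner and permissions of every path component Apache must traverse
        namei -mo /home/javad/Public/test/public/index.php

        # If /home/javad (or any component) lacks o+x, grant traverse permission
        chmod o+x /home/javad /home/javad/Public /home/javad/Public/test

    Losing http://localhost at the same time is usually a separate, smaller issue: a vhost defined as <VirtualHost test:80> rather than *:80 can stop the default site from matching, so switching to <VirtualHost *:80> and relying on ServerName is worth trying as well.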

    Read the article

  • Accessing Netatalk/AFP Shares from OS X Snow Leopard

    - by j4nus_
    I recently upgraded my Ubuntu home server from 8.04 client to 10.04 server and reinstalled all the services therein. One of them is a Netatalk daemon that I configured in a fashion similar to this website: http://www.kremalicious.com/2008/06/ubuntu-as-mac-file-server-and-time-machine-volume/ Finder recognizes my server and the afp service, yet when I attempt to log in (using valid credentials), Finder indicates it's the wrong username and password. I've tried altering some of the config files and used my Google-fu to look for solutions, but no luck. Any tips? (This was not an issue under 8.04, if it matters.)
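
    One thing worth ruling out first, hedged because it depends on how the netatalk package was built: "valid credentials rejected" against afpd is very often an authentication-module (UAM) mismatch rather than a real password problem, since newer OS X releases expect the encrypted DHX/DHX2 methods. Checking what afpd currently offers is quick:

        # Which UAMs is afpd configured to offer?
        grep -i uamlist /etc/netatalk/afpd.conf

        # A typical working default line (the leading "-" means "default server options"):
        # - -transall -uamlist uams_dhx.so,uams_dhx2.so -nosavepassword

        sudo /etc/init.d/netatalk restart

    If the dhx modules are not present on disk (look under /usr/lib/netatalk/), the stock package may have been built without them, which is exactly why guides like the one linked above rebuild netatalk by hand.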

    Read the article

  • gunicorn + django + nginx unix://socket failed (11: Resource temporarily unavailable)

    - by user1068118
    We are running very high volume traffic on these servers configured with Django, gunicorn, supervisor and nginx, but a lot of the time I tend to see 502 errors. So I checked the nginx logs to see what error is recorded, and this is it:

        [error] 2388#0: *208027 connect() to unix:/tmp/gunicorn-ourapp.socket failed (11: Resource temporarily unavailable) while connecting to upstream

    Can anyone help debug what might be causing this to happen? This is our nginx configuration:

        sendfile on;
        tcp_nopush on;
        tcp_nodelay off;
        listen 80 default_server;
        server_name imp.ourapp.com;
        access_log /mnt/ebs/nginx-log/ourapp-access.log;
        error_log /mnt/ebs/nginx-log/ourapp-error.log;
        charset utf-8;
        keepalive_timeout 60;
        client_max_body_size 8m;
        gzip_types text/plain text/xml text/css application/javascript application/x-javascript application/json;
        location / {
            proxy_pass http://unix:/tmp/gunicorn-ourapp.socket;
            proxy_pass_request_headers on;
            proxy_read_timeout 600s;
            proxy_connect_timeout 600s;
            proxy_redirect http://localhost/ http://imp.ourapp.com/;
            #proxy_set_header Host $host;
            #proxy_set_header X-Real-IP $remote_addr;
            #proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            #proxy_set_header X-Forwarded-Proto $my_scheme;
            #proxy_set_header X-Forwarded-Ssl $my_ssl;
        }

    We have configured Django to run in gunicorn as a generic WSGI application. supervisord is used to launch the gunicorn workers:

        /home/user/virtenv/bin/python2.7 /home/user/virtenv/bin/gunicorn --config /home/user/shared/etc/gunicorn.conf.py daggr.wsgi:application

    This is what the gunicorn.conf.py looks like:

        import multiprocessing
        bind = 'unix:/tmp/gunicorn-ourapp.socket'
        workers = multiprocessing.cpu_count() * 3 + 1
        timeout = 600
        graceful_timeout = 40

    Does anyone know where I can start digging to see what might be causing the problem? This is what my ulimit -a output looks like on the server:

        core file size          (blocks, -c) 0
        data seg size           (kbytes, -d) unlimited
        scheduling priority             (-e) 0
        file size               (blocks, -f) unlimited
        pending signals                 (-i) 59481
        max locked memory       (kbytes, -l) 64
        max memory size         (kbytes, -m) unlimited
        open files                      (-n) 50000
        pipe size            (512 bytes, -p) 8
        POSIX message queues     (bytes, -q) 819200
        real-time priority              (-r) 0
        stack size              (kbytes, -s) 8192
        cpu time               (seconds, -t) unlimited
        max user processes              (-u) 1024
        virtual memory          (kbytes, -v) unlimited
        file locks                      (-x) unlimited

    Read the article

  • Best practice for authenticating DMZ against AD in LAN

    - by Sergei
    We have a few customer-facing servers in the DMZ that also have user accounts; all accounts are in the shadow password file. I am trying to consolidate user logons and am thinking about letting LAN users authenticate against Active Directory. Services needing authentication are Apache, ProFTPD and ssh. After consulting the security team I have set up an authentication DMZ that has an LDAPS proxy, which in turn contacts another LDAPS proxy (proxy2) in the LAN, and this one passes authentication info via LDAP (as an LDAP bind) to the AD controller. The second LDAP proxy is only needed because the AD server refuses to speak TLS with our secure LDAP implementation. This works for Apache using the appropriate module. At a later stage I may try to move customer accounts from the servers to the LDAP proxy so they are not scattered around the servers. For SSH I joined proxy2 to the Windows domain so users can log on using their Windows credentials. Then I created ssh keys and copied them to the DMZ servers using ssh-copy, to enable passwordless logon once users are authenticated. Is this a good way to implement this kind of SSO? Did I miss any security issues here, or is there maybe a better way of achieving my goal?

    Read the article

  • Nagios orphaned services warnings

    - by Gordon
    We have had Nagios running on one of our servers without any problems for a while, but lately certain old service warnings have been reappearing and then disappearing on the service detail page. From looking at the logs I found warnings like the following: Warning: The check of service 'Tomcat' on host 'virtual1' looks like it was orphaned (results never came back). I'm scheduling an immediate check of the service... Has anyone ever come across this before, or does anyone know a way to delete the old orphaned warnings? The Nagios version we are running is 3.0b7, so an update might be in order. Thanks.

    Read the article

  • open_basedir vs sessions

    - by liquorvicar
    On a virtual hosting server I have open_basedir set to .:/path/to/vhost/web:/tmp:/usr/share/pear for each virtual host. I have a client who's running WordPress and he's complaining about open_basedir errors like this: PHP WARNING: file_exists() [function.file-exists]: open_basedir restriction in effect. File(/var/lib/php/session/sess_42k7jn3vjenj43g3njorrnrmf2) is not within the allowed path(s): (.:/path/to/vhost/web:/tmp:/usr/share/pear). So the PHP session save_path isn't included in open_basedir, yet sessions across all sites on the server seem to be working fine apart from this intermittent instance. I thought that perhaps the default session handler ignored open_basedir and this warning was caused by WP accessing the session file directly. However, from what I can see, PHP 5.2.4 introduced open_basedir checking for session.save_path: http://www.php.net/ChangeLog-5.php#5.2.4 (I am on PHP 5.2.13). Any ideas?
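
    One hedged way to make the two settings consistent is to give each vhost a session directory that its open_basedir already allows, set per vhost in the Apache config (the /path/to/vhost placeholders follow the question):

        # Inside the <VirtualHost> block
        php_admin_value open_basedir ".:/path/to/vhost/web:/path/to/vhost/tmp:/tmp:/usr/share/pear"
        php_admin_value session.save_path "/path/to/vhost/tmp/sessions"

        # The directory must exist and be writable by the web server user:
        #   mkdir -p /path/to/vhost/tmp/sessions && chown www-data: /path/to/vhost/tmp/sessions

    The simpler (but shared) alternative is to append /var/lib/php/session to every vhost's open_basedir, which silences the warning at the cost of letting all vhosts read each other's session files.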

    Read the article
