Search Results

Search found 24814 results on 993 pages for 'linux distro'.

Page 310 of 993

  • Alternatives to native LDAP

    - by Matt
    We've implemented an LDAP-to-NIS solution and have begun transitioning some systems to native LDAP binding for authentication and automount maps. Unfortunately we have a very mixed environment with more than 20 *nix variants. The setup for each variant is of course unique and has required various workarounds to get full functionality. We're now at the point where we're willing to revisit the solution and possibly migrate toward something like Likewise (http://www.likewise.org), but we would like to know what others are using to solve this problem.

  • MySQL open files limit

    - by Brian
    This question is similar to set open_files_limit, but there was no good answer. I need to increase my table_open_cache, but first I need to increase the open_files_limit. I set the option in /etc/mysql/my.cnf:

        open-files-limit = 8192

    This worked fine in my previous install (Ubuntu 8.04), but now on Ubuntu 10.04, when I start the server up, open_files_limit is reported to be 1710. That seems like a pretty random number for the limit to be clipped to. Anyway, I tried getting around it by adding a line like this in /etc/security/limits.conf:

        mysql hard nofile 8192

    I also tried adding this to the pre-start script in MySQL's upstart config (/etc/init/mysql.conf):

        ulimit -n 8192

    Obviously neither of those things worked. So where is the hoop that has been added between Ubuntu 8.04 and 10.04 through which I must jump in order to actually increase the open files limit?
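
    One avenue worth checking, purely as an editor's sketch and not something from the post: upstart runs the job itself outside a PAM session, so /etc/security/limits.conf does not apply to it, and a ulimit in pre-start only affects the pre-start shell. Upstart has its own per-job stanza for this:

        # in /etc/init/mysql.conf (sketch): soft and hard nofile limits for the job
        limit nofile 8192 8192

    Afterwards, the effective value can be confirmed with mysql -e "SHOW VARIABLES LIKE 'open_files_limit';".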

  • VMware Server 2.0.2 and Firefox 3.6 RC1

    - by Mads
    Hi, it seems that VMware Server 2.0.2 and Firefox 3.6 RC1 don't like each other. I have a reproducible problem on different networks, each with the same software configuration (FF 3.6 RC1 and VMware Server 2.0.2 on Ubuntu 8.04.3 LTS 64-bit). The login screen doesn't show up; Firefox reports a non-loadable page and cannot load it at all. The redirect is done (from http://:8222 to https://:8333). Maybe someone can help?

  • How to get AMD Catalyst working on Arch x86_64

    - by gh403
    I've got a Dell Inspiron 15R 7520 with AMD's hybrid "PowerXpress" graphics. The integrated graphics card is (if I understand it correctly) integrated with the i7-3612QM processor, and the discrete graphics card is a "Southern Islands" Radeon HD 7730M. The integrated graphics work perfectly under Arch. However, the discrete graphics don't. I have tried several different methods, and the one that seems to get me the farthest with the least effort is the AUR package catalyst-total-pxp. After installing, rebooting, and issuing the commands

        # aticonfig --initial
        # pxp_switch_catalyst amd
        # X

    X completely fails to start. The X log can be found here. I don't understand what is failing; potentially it has something to do with the way my card is hooked up (I think it's muxless, but I really don't know). What is the matter here? Any help would be appreciated.

  • Memcached Debugging/Server Logs: Monitor the Memcached Servers?

    - by user1179459
    I have a chat engine which is based on Memcached variables, putting them into arrays and reading them at the other end via jQuery. It works fine 95% of the time, but when the server load is high, Memcached (I presume it's Memcached) crashes and the browser gets stuck. I don't think it's a jQuery issue, since this only happens when the server load is very high. I need a way to monitor the Memcached servers, or somehow write a log file of where the failures/errors come in. Any idea how I can do this? Or any idea why Memcached servers fail? I run Memcached as follows:

        $GLOBALS['MemCached'] = FALSE;
        $GLOBALS['MemCached'] = new Memcache;
        $GLOBALS['MemCached']->pconnect('localhost', 11211);

    My memcached init script is as follows:

        #! /bin/sh
        #
        # chkconfig: - 55 45
        # description: The memcached daemon is a network memory cache service.
        # processname: memcached
        # config: /etc/sysconfig/memcached
        # pidfile: /var/run/memcached/memcached.pid

        # Standard LSB functions
        #. /lib/lsb/init-functions

        # Source function library.
        . /etc/init.d/functions

        PORT=11211
        USER=memcached
        MAXCONN=1024
        CACHESIZE=128
        OPTIONS=""

        if [ -f /etc/sysconfig/memcached ];then
            . /etc/sysconfig/memcached
        fi

        # Check that networking is up.
        . /etc/sysconfig/network

        if [ "$NETWORKING" = "no" ]
        then
            exit 0
        fi

        RETVAL=0
        prog="memcached"
        pidfile=${PIDFILE-/var/run/memcached/memcached.pid}
        lockfile=${LOCKFILE-/var/lock/subsys/memcached}

        start () {
            echo -n $"Starting $prog: "
            # Ensure that /var/run/memcached has proper permissions
            if [ "`stat -c %U /var/run/memcached`" != "$USER" ]; then
                chown $USER /var/run/memcached
            fi
            daemon --pidfile ${pidfile} memcached -d -p $PORT -u $USER -m $CACHESIZE -c $MAXCONN -P ${pidfile} $OPTIONS
            RETVAL=$?
            echo
            [ $RETVAL -eq 0 ] && touch ${lockfile}
        }

        stop () {
            echo -n $"Stopping $prog: "
            killproc -p ${pidfile} /usr/bin/memcached
            RETVAL=$?
            echo
            if [ $RETVAL -eq 0 ] ; then
                rm -f ${lockfile} ${pidfile}
            fi
        }

        restart () {
            stop
            start
        }

        # See how we were called.
        case "$1" in
          start)
            start
            ;;
          stop)
            stop
            ;;
          status)
            status -p ${pidfile} memcached
            RETVAL=$?
            ;;
          restart|reload|force-reload)
            restart
            ;;
          condrestart|try-restart)
            [ -f ${lockfile} ] && restart || :
            ;;
          *)
            echo $"Usage: $0 {start|stop|status|restart|reload|force-reload|condrestart|try-restart}"
            RETVAL=2
            ;;
        esac

        exit $RETVAL
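
    As a monitoring starting point (an editor's sketch, not part of the question): memcached keeps internal counters that can be sampled cheaply, and the init script above already has an OPTIONS hook for verbose logging. Assuming nc is installed and the sysconfig file exists:

        # /etc/sysconfig/memcached (sketch): -vv makes memcached log every
        # command and response; where that output lands depends on how the
        # daemon is launched, so it may need an explicit redirect
        OPTIONS="-vv"

        # cron entry (sketch): sample the stats counters once a minute and
        # watch evictions / connection counts around the times the chat stalls
        * * * * * echo stats | nc 127.0.0.1 11211 >> /var/log/memcached-stats.log 2>&1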

  • Using an allowed user list with VSFTPD

    - by Naftuli Tzvi Kay
    According to the Wiki here, you can allow only certain users to log in over FTP, using the following configuration in your /etc/vsftp.conf file:

        userlist_enable=YES
        userlist_file=/etc/vsftp.user_list
        userlist_deny=NO

    I've configured my system this way, and I only have one user which I'd like to expose over FTP, named streams, so my /etc/vsftp.user_list looks like this:

        streams

    Interestingly enough, I cannot log in once I enable the user list. If I change userlist_enable to NO, then things work properly, but if I enable it, I can't log in at all; it just keeps trying to reconnect. I don't get a login-failed message, it just keeps trying to reconnect when using lftp. My /etc/vsftp.conf file is available on Pastebin here and my /etc/vsftp.user_list is available here. What am I doing wrong here? I'd just like to make only the streams user able to log in.
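
    One thing worth ruling out, as an editor's guess rather than anything stated in the question: on many distros vsftpd's PAM service also consults a separate deny list (commonly /etc/ftpusers), which rejects a user even when the vsftpd userlist allows them, and the server log usually records why. A quick check, assuming standard paths:

        # is the user caught by a PAM-level deny list? (paths vary by distro)
        grep -n '^streams$' /etc/ftpusers 2>/dev/null

        # watch vsftpd's own log while attempting the login
        tail -f /var/log/vsftpd.log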

  • running vmstat around a command

    - by svrist
    I'm trying to automate a long list of database experiments. I would like to save vmstat output while specific parts of the experiments are running. Does something exist along the lines of time(1) or script(1) that I could use to say: "While this command is running, save vmstat 2 (or similar) output to a file"? I am aware that vmstat gives me statistics for the complete system, including other processes, but that is OK for my usage. (Running Ubuntu 9.10, if that makes a difference.)
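
    Nothing standard comes bundled for exactly this, but a minimal wrapper is easy to sketch with shell job control; the script name and layout below are invented for illustration:

        #!/bin/sh
        # vmstat-around (hypothetical): sample vmstat while a command runs
        # usage: vmstat-around <logfile> <command> [args...]
        LOG="$1"; shift
        vmstat 2 > "$LOG" &      # start the sampler in the background
        SAMPLER=$!
        "$@"                     # run the experiment itself
        STATUS=$?
        kill "$SAMPLER"          # stop sampling when the command finishes
        exit $STATUS

    Used as, e.g., vmstat-around run1.vmstat ./experiment1.sh, it leaves one vmstat log per experiment phase.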

  • How to tell X.org to reload input device module?

    - by Vi
    When X.org boots up, the Synaptics touchpad works well. But when I remove the module, X falls back to /dev/input/mice and doesn't use the normal driver even when the touchpad is available again. Xorg.0.log:

        ...
        (II) XINPUT: Adding extended input device "Synaptics Touchpad" (type: TOUCHPAD)
        (--) Synaptics Touchpad: touchpad found

        # { rmmod psmouse && echo mem > /sys/power/state && modprobe psmouse; }

        (WW) <default pointer>: No Device specified, looking for one...
        (II) <default pointer>: Setting Device option to "/dev/input/mice"
        ...

    How do I tell X.org to try its InputDevice again (without restarting the X server)? P.S. rmmod psmouse is needed to prevent crashing of the Acer Extensa 5220 when resuming from suspend-to-RAM. Update: found the answer myself: doing

        xinput set-int-prop "Synaptics Touchpad" "Device Enabled" 8 1

    after reloading the kernel module re-enables the touchpad. Now suspend-to-RAM works OK.

  • MySQL - Why would SHOW SLAVE HOSTS cause a binlog dump?

    - by Rory McCann
    We're getting loads of binlog files on our MySQL 5.0.x. We have a normal master/slave replication setup with 1 master and 1 slave. Looking at /var/log/mysql.log, nearly 90% of the time when the replicator connects and does a SHOW SLAVE HOSTS, it causes a binlog dump. For example:

        7020 Query        SHOW SLAVE HOSTS
        7020 Binlog Dump  Log: 'mysql-bin.029634'  Pos: 13273

    However, when I do a SHOW SLAVE HOSTS on the MySQL server myself, I get no results. Occasionally when the replicator does a SHOW SLAVE HOSTS, MySQL will hang for hours. I see nothing in /var/log/syslog at the same time... What's going on here? How can I debug this more? For the record, the MySQL master and slave servers are Ubuntu Dapper.
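
    On the "how can I debug this more" part, a few read-only checks from a shell on the master can narrow things down; this is an editor's sketch, not advice from the original thread:

        # is a client thread sitting in the "Binlog Dump" state, and for how long?
        mysql -e "SHOW FULL PROCESSLIST\G"

        # how many binlogs has the master accumulated?
        mysql -e "SHOW BINARY LOGS;"

        # on the slave: which master log file/position is it reading?
        mysql -e "SHOW SLAVE STATUS\G"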

  • "File" exists or not

    - by SnailTang
        ls -il
        ls: cannot access éaj/p+st.ó·e: No such file or directory
        ls: cannot access éaj/p+st.ó·e: No such file or directory
        ls: cannot access é@j/p¦ft.¦·N: No such file or directory
        ls: cannot access é@j/p¦ft.¦·N: No such file or directory
        total 55456
        ? -????????? ? ? ? ? ? éaj/p+st.ó·e
        ? -????????? ? ? ? ? ? éaj/p+st.ó·e
        ? -????????? ? ? ? ? ? é@j/p¦ft.¦·N
        ? -????????? ? ? ? ? ? é@j/p¦ft.¦·N

    And when I try to show these files, I get:

        p+st.ó·e
        p¦ft.¦·N

    Please, where do these files (or whatever else they are) actually exist? Or what makes them show up here?
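
    Two generic tools can help pin down entries like these; an editor's sketch, and note that the find-by-inode step only applies once ls manages to report a real inode number instead of "?":

        # print directory entries with non-printable bytes escaped
        ls -b

        # dump the exact bytes of every name in the directory
        ls | od -c | less

        # if ls -il ever shows a real inode number N, act on the entry by inode
        find . -maxdepth 1 -inum N -ls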

  • Install 64-bit Ubuntu or 32-bit?

    - by nitbuntu
    I'll be receiving a new notebook in a few days and was planning on running Ubuntu on it, as it's compatible and the notebook has no OS pre-installed. The specifications are: Core 2 Duo T6600, 4 GB RAM, Intel integrated graphics. I know that a year or two ago, running a 64-bit version of Ubuntu was not advised, because many applications and plugins (e.g. Flash) only ran on 32-bit. Is this still the case? Would I get better performance with 64-bit Ubuntu since I have 4 GB of RAM? Are there any downsides anymore?

  • sequential SSH command execution not working in Ubuntu/Bash

    - by kumar
    My requirement is that I will have a set of commands in a text file that need to be executed. My shell script has to read each command, execute it, and store the results in a separate file. Here is the snippet which does the above:

        while read command
        do
            echo 'Command :' $command >> "$OUTPUT_FILE"
            redirect_pos=`expr index "$command" '>>'`
            if [ `expr index "$command" '>>'` != 0 ];then
                redirect_fn "$redirect_pos" "$command";
            else
                $command
                state=$?
                if [ $state != 0 ];then
                    echo "command failed." >> "$OUTPUT_FILE"
                else
                    echo "executed successfully." >> "$OUTPUT_FILE"
                fi
            fi
            echo >> "$OUTPUT_FILE"
        done < "$INPUT_FILE"

    A sample Commands.txt will be like this:

        tar -rvf /var/tmp/logs.tar -C /var/tmp/ Commands_log.txt
        gzip /var/tmp/logs.tar
        rm -f /var/tmp/list.txt

    This works fine for commands which need to be executed on the local machine. But when I try to execute the following ssh commands, only the 1st command gets executed. Here are some of the ssh commands added to my text file:

        ssh uname@hostname1 tar -rvf /var/tmp/logs.tar -C /var/tmp/ Commands_log.txt
        ssh uname@hostname2 gzip /var/tmp/logs.tar
        ssh .. etc

    When I execute these on the CLI they work fine. Could anybody help me with this?
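
    A likely culprit, offered as an editor's guess rather than a confirmed diagnosis: ssh reads from the loop's stdin and swallows the rest of $INPUT_FILE, so the while read loop only ever sees the first line. Two common fixes, sketched:

        # 1) tell ssh not to read from stdin
        ssh -n uname@hostname1 tar -rvf /var/tmp/logs.tar -C /var/tmp/ Commands_log.txt

        # 2) or, inside the script, starve the command of the loop's stdin
        $command < /dev/null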

  • Apache directory structure with multiple hosted languages.

    - by anomareh
    I just got a new work machine up and running and I'm trying to decide on how to set everything up directory-wise. I've done some digging around and really haven't been able to find anything conclusive. I know it's a question with a variety of answers, but I'm hoping there are some general guidelines or best practices to go by. With that said, here are a few things specific to my situation. I will be doing actual development and testing on the same machine as the server. It is a single-user machine in the sense that I will be the only one working on it. There will be multiple hosted languages, specifically PHP and RoR, while possibly expanding later. I'd like the setup to translate well to a production environment. With those 3 things in mind, there are a couple of things I've had in the back of my mind. Seeing as it's a single-user machine, I haven't been able to decide whether or not I should be working on things out of my home directory or whether they should be located outside of it. I'm feeling that outside of a user directory would be better, as it would translate better to a production environment, but I'm also not sure if that will come with any permission annoyances or concerns, seeing as I'll be working on the same machine. Hosting multiple languages seems like it may be a bit quirky. With PHP, I've found you're generally just dumping the project somewhere in the document root, whereas with something like a Rails app you have the entire project and you only want the public directory in the document root. Thanks for any insight, opinion, or just personal preference from experience anyone can offer.
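
    On the PHP-docroot-versus-Rails-public point, one conventional shape (entirely an editor's sketch; the paths and hostnames are invented) is a project root outside any home directory with one virtual host per app:

        # /etc/apache2/sites-available/phpapp (sketch)
        <VirtualHost *:80>
            ServerName phpapp.local
            DocumentRoot /srv/www/phpapp            # PHP project lives in the docroot
        </VirtualHost>

        # /etc/apache2/sites-available/railsapp (sketch; assumes Passenger is installed)
        <VirtualHost *:80>
            ServerName railsapp.local
            DocumentRoot /srv/www/railsapp/public   # only the Rails public/ dir is exposed
        </VirtualHost>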

  • Backing up VMs to a tape drive

    - by Aljoscha Vollmerhaus
    I've got myself one of these fancy tape drives, an HP LTO2 with 200/400 GB cartridges. The st driver reports it like this:

        scsi 1:0:0:0: Sequential-Access HP Ultrium 2-SCSI T65D

    I can store and retrieve files like a charm using tar; both tar cf /dev/st0 somedirectory and tar xf /dev/st0 work flawlessly. However, what I really would like to back up are LVM LVs. They contain entire virtual machines with varying partition layouts, so using mount and tar is not an option. I've tried using something like

        dd if=/dev/VG/LV bs=64k of=/dev/st0

    to achieve this, but there seem to be various problems associated with this approach. Firstly, I would like to be able to store more than 1 LV on a single tape. Now I guess I could seek to concatenate the data on the tape, but I think this would not work very well in an automated scenario with many different LVs of various sizes. Secondly, I would like to store a small XML file along with the raw data that contains some information about the VM contained in the LV. I could dump everything to a directory and tar it up; not very desirable, as I would have to set aside huge amounts of scratch space. Is there an easier way to achieve this? Thirdly, from googling around it seems like it would be wise to use something like mbuffer when writing to the tape, to prevent what Wikipedia calls "shoe-shining" the tape. However, I can't get anything useful done with mbuffer. The mbuffer man page suggests this for writing to a tape device:

        mbuffer -t -m 10M -p 80 -f -o $TAPE

    So I've tried this:

        dd if=/dev/VG/LV | mbuffer -t -m 10M -p 80 -f -d 64k -o /dev/st0

    Note the added "-d 64k" to account for the 64k block size of the tape. However, reading data back from a tape written in this way never seems to yield any useful results; dd has been running for ages now, and has managed to transfer only 361M of data from the tape. What's wrong here?
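
    On the first point (several images on one tape), the usual trick, sketched by the editor and not taken from the post, is the non-rewinding device node /dev/nst0 plus mt(1) for positioning: each write ends at a filemark, giving numbered tape files that can hold an LV image and its metadata side by side:

        # write an LV image and its (hypothetical) XML descriptor as separate tape files
        mt -f /dev/nst0 rewind
        dd if=/dev/VG/LV1 bs=64k of=/dev/nst0      # tape file 0
        tar cf /dev/nst0 lv1-info.xml              # tape file 1
        dd if=/dev/VG/LV2 bs=64k of=/dev/nst0      # tape file 2

        # later: skip straight to tape file 2 and read it back
        mt -f /dev/nst0 rewind
        mt -f /dev/nst0 fsf 2
        dd if=/dev/nst0 bs=64k of=/restore/lv2.img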

  • Cron Tips for not running cron jobs on holidays (the Monday of a three-day weekend)

    - by Poul
    We have about a one-hundred-machine setup, with each machine running cron jobs such as starting and stopping services and archiving these services' log files at the end of the day to a centralized repository. One headache we have is the three-day weekend (we're closed on holidays). We don't want the services starting up on those days and connecting to our business partners' machines. We currently handle this by manually commenting out the most critical jobs and letting a bunch of errors happen all day. Not ideal. Basically, if a job has '1-5' set in the day field, we want this to mean 'work days', not 'Monday to Friday'. We have a database that keeps track of which days are indeed 'work days'. So, is it possible to override cron's day-matching algorithm, or is there some other way to easily set up a cron job so that it avoids starting on a Monday holiday? Thanks!
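
    Since cron has no notion of holidays, a common pattern (an editor's sketch; the script name, the flat file, and its format are all invented, with the flat file standing in for the work-day database mentioned above) is a small guard command chained in front of each job:

        #!/bin/sh
        # /usr/local/bin/is-workday (hypothetical): exit 0 on work days,
        # non-zero on holidays, so "is-workday && job" silently skips holidays.
        if grep -q "^$(date +%F)$" /etc/holidays; then
            exit 1    # today appears in the holiday list
        fi
        exit 0

    Crontab entries then keep their 1-5 day field but become, e.g., 0 6 * * 1-5 /usr/local/bin/is-workday && /opt/svc/start.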

  • Why do I have untrusted certificates for Google, Yahoo, Mozilla and others?

    - by jackweirdy
    In the HTTPS/SSL section of chrome://chrome/settings, I see the following: [screenshot omitted: a list of certificates for Google, Yahoo, Mozilla and other sites, each flagged as untrusted]. What does this mean, and is there something wrong? I have a basic understanding of SSL/TLS; I'm not claiming to be completely familiar, but I'm fairly confident I know my way around it. But I don't understand why I have certificates installed on my machine specifically for these sites. From my understanding, I should have the certificates for Certificate Authorities, and any site I visit that uses SSL/TLS should have a certificate signed by one of these trusted CAs for me to trust the site. My worry is that if someone has maliciously installed a certificate for these sites on my machine, they could perform a DNS spoofing attack (or a number of other attacks) to hijack my connection to my email account without me knowing, and, as they've got the private counterpart to the certificate on my machine, decrypt the communication. NB: I'm also aware that CA certificates aren't just within Chromium; they are used system-wide as part of libssl and are stored in /etc/ssl/certs. What I'd like to know is:

        1. Is this correct? (The big red boxes make me think no.)
        2. Is this malicious or benign?
        3. What can I do to resolve this problem? (If indeed it is a problem.)

    Thanks :)
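
    One way to investigate from a shell, as an editor's sketch: compare what a live site actually presents against whatever is sitting in the local store, since a mismatch in issuer or fingerprint is exactly what the attack scenario above would produce:

        # who issued the certificate the live site presents?
        echo | openssl s_client -connect mail.google.com:443 2>/dev/null \
          | openssl x509 -noout -subject -issuer -dates -fingerprint

        # inspect a locally installed certificate file (filename is illustrative)
        openssl x509 -in /etc/ssl/certs/suspect.pem -noout -subject -issuer -fingerprint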

  • Monitor LSI 3ware raid controller on ESXi

    - by aseq
    This concerns a server that runs ESXi (v4.x or 5.x) installed on drives that are configured into a RAID 10 using an LSI 3ware 9750 raid controller. I would like to know if there is a way to monitor the LSI 3ware series of controllers, in particular the 9750, through ESXi, and hopefully also to run the monitoring daemon LSI provides. I know you can set up a cronjob to execute tw_cli through ssh on the ESXi server, but that's not really ideal. I am not using vCenter, by the way. It would be nice to have more than just monitoring working, since the 3ware software has a very useful web client besides tw_cli.

  • Where is xorg.conf in Ubuntu 10.04?

    - by Mikey.B
    Hi Guys, I'm in the middle of trying to set up dual monitors on Ubuntu and would like to back up my xorg.conf... The documentation I've seen thus far says to do the following:

        sudo cp /etc/X11/xorg.conf /etc/X11/xorg.conf_backup

    But I don't see the xorg.conf file anywhere... Am I missing something? Where is this file located?
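
    For context, an editor's note rather than part of the question: Ubuntu 10.04 no longer ships an /etc/X11/xorg.conf by default; X autodetects its configuration at startup, so there may simply be no file to back up. A skeleton can be generated to edit, sketched as:

        # from a text console with X stopped (e.g. sudo service gdm stop):
        sudo Xorg :1 -configure        # writes xorg.conf.new (usually in root's home dir)
        sudo cp /root/xorg.conf.new /etc/X11/xorg.conf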

  • Explanation for kernel error - Eeeek! page_mapcount went negative

    - by Aditya Advani
    The Internet says this is a genuine kernel bug, but does anyone know what triggers it? The server is running CentOS x86_64 with kernel 2.6.27.24. Here is my crash output:

        [root@u15345757 httpdocs]#
        Message from syslogd@ at Thu Aug 6 01:42:22 2009 ...
        u15345757 kernel: [1145736.506380] Eeek! page_mapcount(page) went negative! (-1)

        Message from syslogd@ at Thu Aug 6 01:42:22 2009 ...
        u15345757 kernel: [1145736.517515] page pfn = d0a3

        Message from syslogd@ at Thu Aug 6 01:42:22 2009 ...
        u15345757 kernel: [1145736.523814] page->flags = 10000000000083c

        Message from syslogd@ at Thu Aug 6 01:42:22 2009 ...
        u15345757 kernel: [1145736.532489] page->count = 2

        Message from syslogd@ at Thu Aug 6 01:42:22 2009 ...
        u15345757 kernel: [1145736.538741] page->mapping = ffff88001f01a110

        Message from syslogd@ at Thu Aug 6 01:42:22 2009 ...
        u15345757 kernel: [1145736.547924] vma->vm_ops = 0x0

        Message from syslogd@ at Thu Aug 6 01:42:22 2009 ...
        u15345757 kernel: [1145736.554543] [ cut here ]

        Message from syslogd@ at Thu Aug 6 01:42:23 2009 ...
        u15345757 kernel: [1145736.564528] invalid opcode: 0000 [1] SMP

        Message from syslogd@ at Thu Aug 6 01:42:23 2009 ...
        u15345757 kernel: [1145736.564528] Code: 80 e8 22 51 fd ff 48 8b 85 90 00 00 00 48 85 c0 74 19 48 8b 40 20 48 85 c0 74 10 48 8b 70 58 48 c7 c7 10 7f 7d 80 e8 fd 50 fd ff <0f> 0b eb fe 8b 77 18 41 58 5b 5d 83 e6 01 f7 de 83 c6 04 e9 df

        Broadcast message from root (pts/3) (Thu Aug 6 01:49:29 2009):
        The system is going down for reboot NOW!

  • HAproxy - Redirect issue - URI variables?

    - by Justin
    I'm using haproxy 1.5dev3, and I was wondering if there is any way to grab URI variables from a request in order to reappend the query string to the end of a redirect URL. What I'm trying to do is redirect from:

        http://www.domain.com/page/example.htm?id=1234567

    to:

        http://www.domain.com/frame/newpage.cfm?id=1234567

    redirect prefix doesn't work properly, as it tries to append /page/example.htm to the end of the redirect URL. Can I do some sort of rewrite to accomplish this? It would be awesome if you could use URIs and queries as variables for redirection/pool selection like on an F5. Please help... Thanks!
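
    If a server-side rewrite (rather than a client-visible redirect) is acceptable, haproxy's reqrep directive can re-path the request while carrying the query string through untouched. A sketch from the editor, with the regex a guess at the intent:

        # in the frontend/listen section: rewrite the request line in place;
        # the second capture group carries "?id=1234567" along unchanged
        reqrep ^([^\ :]*)\ /page/example\.htm(\?.*)? \1\ /frame/newpage.cfm\2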

  • TIME_WAIT connections not being cleaned up after timeout period expires

    - by Mark Dawson
    I am stress testing one of my servers by hitting it with a constant stream of new network connections. The tcp_fin_timeout is set to 60, so if I send a constant stream of something like 100 requests per second, I would expect to see a rolling average of 6000 (60 * 100) connections in a TIME_WAIT state. This is happening, but looking in netstat (using -o) to see the timers, I see connections like:

        TIME_WAIT timewait (0.00/0/0)

    where the timeout has expired but the connection is still hanging around. I then eventually run out of connections. Does anyone know why these connections don't get cleaned up? If I stop creating new connections they do eventually disappear, but while I am constantly creating new connections they don't; it seems like the kernel isn't getting a chance to clean them up. Are there some other config options I need to set to remove the connections as soon as they have expired? The server is running Ubuntu and my web server is nginx. It also has iptables with connection tracking; I'm not sure if that would cause these TIME_WAIT connections to live on. Thanks, Mark.
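
    A few read-only checks can narrow this down; an editor's sketch, and since the question already suspects connection tracking, the conntrack table is worth watching alongside the socket table:

        # what the kernel thinks the relevant knobs are
        sysctl net.ipv4.tcp_fin_timeout net.ipv4.tcp_max_tw_buckets

        # count sockets currently in TIME_WAIT
        netstat -ant | awk '$6 == "TIME_WAIT"' | wc -l

        # size of the conntrack table (the /proc path varies by kernel version)
        cat /proc/sys/net/netfilter/nf_conntrack_count 2>/dev/null \
          || cat /proc/sys/net/ipv4/netfilter/ip_conntrack_count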

  • Getting Error while running RED5 server - class path resource [red5.xml] cannot be opened because it does not exist

    - by sunil221
    Hi, I have installed Java version 1.6.0_14 and Ant version 1.8.2 for the RED5 server. When I try to run the RED5 server I get the following error; please help:

        Root: /usr/local/red5
        Deploy type: bootstrap
        Logback selector: org.red5.logging.LoggingContextSelector
        Setting default logging context: default
        11:27:39.838 [main] INFO org.red5.server.Launcher - Red5 Server 1.0.0 RC1 $Rev: 4171 $ (http://code.google.com/p/red5/)
        Red5 Server 1.0.0 RC1 $Rev: 4171 $ (http://code.google.com/p/red5/)
        SLF4J: Class path contains multiple SLF4J bindings.
        SLF4J: Found binding in [jar:file:/usr/local/red5/red5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
        SLF4J: Found binding in [jar:file:/usr/local/red5/lib/logback-classic-0.9.26.jar!/org/slf4j/impl/StaticLoggerBinder.class]
        SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
        11:27:39.994 [main] INFO o.s.c.s.FileSystemXmlApplicationContext - Refreshing org.springframework.context.support.FileSystemXmlApplicationContext@39d85f79: startup date [Mon Dec 21 11:27:39 EST 2009]; root of context hierarchy
        11:27:40.149 [main] INFO o.s.b.f.xml.XmlBeanDefinitionReader - Loading XML bean definitions from class path resource [red5.xml]
        Exception org.springframework.beans.factory.BeanDefinitionStoreException: IOException parsing XML document from class path resource [red5.xml]; nested exception is java.io.FileNotFoundException: class path resource [red5.xml] cannot be opened because it does not exist
        Bootstrap complete
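
    Purely an editor's guess: red5.xml normally lives in the conf/ directory of the install, and the Spring bootstrap only finds it when conf/ is on the classpath, which the stock launcher arranges when run from the install root. Two quick things to try:

        cd /usr/local/red5
        ls conf/red5.xml          # does the file exist at all?
        ./red5.sh                 # launch via the stock script from the install root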

  • iptables gateway

    - by WojonsTech
    I am trying to make an iptables gateway. I ordered 3 dedicated servers from my hosting company, all with dual NICs. One server has been given all the IP addresses and is connected directly to the internet; its other NIC is connected to a switch, where the other servers are also connected. I want to set up iptables so that, for example, traffic arriving for the IP address 50.0.2.4 on my gateway server is forwarded to a private IP address via the second NIC. This way the second server can do whatever it needs and can respond back also. I also want it set up so that if any of the other servers needs to download anything over the internet, it is able to do so using the same IP address that is used for its incoming traffic. Lastly, I would like to be able to set up DNS and other needed networking stuff that I may not be thinking about.
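
    For the "forward in, reply out from the same address" requirement, the textbook shape is a DNAT/SNAT pair on the gateway. A sketch from the editor: eth0 as the public NIC, eth1 as the private one, and 192.168.0.4 as the internal host are all assumptions, not details from the question:

        # let the kernel forward between the two NICs
        sysctl -w net.ipv4.ip_forward=1

        # inbound: map the public address onto the internal host
        iptables -t nat -A PREROUTING -i eth0 -d 50.0.2.4 -j DNAT --to-destination 192.168.0.4

        # outbound: make the internal host's own traffic leave as 50.0.2.4
        iptables -t nat -A POSTROUTING -o eth0 -s 192.168.0.4 -j SNAT --to-source 50.0.2.4

        # allow the forwarded traffic through the FORWARD chain
        iptables -A FORWARD -d 192.168.0.4 -j ACCEPT
        iptables -A FORWARD -s 192.168.0.4 -j ACCEPT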

  • Virtual host "Forbidden: You don't have permission to access / on this server" on Debian

    - by ulduz114
    Before I created a virtual host I could see http://localhost, but after I created the virtual host I can see neither http://localhost nor my virtual host http://test. Here is my virtual host config file:

        <VirtualHost test:80>
            ServerAdmin [email protected]
            ServerName test
            ServerAlias test
            DocumentRoot "/home/javad/Public/test/public"
            <Directory "/home/javad/Public/test/public/">
                Options Indexes FollowSymLinks MultiViews ExecCGI
                DirectoryIndex index.php
                AllowOverride all
                Order allow,deny
                allow from all
            </Directory>
        </VirtualHost>

    So I ran a2ensite test, added 127.0.0.1 test to the /etc/hosts file, and restarted apache2 fine. But after that I cannot access http://test or even http://localhost; I get:

        Forbidden
        You don't have permission to access / on this server.

    When I delete my virtual host setting I can access http://localhost again.
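
    A frequent cause when the DocumentRoot lives under /home, offered as an editor's guess: the Apache user cannot traverse the path because a parent directory lacks execute permission for others. A quick check and fix, sketched:

        # show the permissions of every component on the path
        namei -m /home/javad/Public/test/public

        # every directory on the way down needs o+x for Apache to descend
        chmod o+x /home/javad /home/javad/Public /home/javad/Public/test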
