Search Results

Search found 41598 results on 1664 pages for 'segmentation fault'.

Page 98/1664 | < Previous Page | 94 95 96 97 98 99 100 101 102 103 104 105  | Next Page >

  • How to calculate CPU % based on raw CPU ticks in SNMP

    - by bjeanes
    According to http://net-snmp.sourceforge.net/docs/mibs/ucdavis.html#scalar_notcurrent ssCpuUser, ssCpuSystem, ssCpuIdle, etc are deprecated in favor of the raw variants (ssCpuRawUser, etc). The former values (which don't cover things like nice, wait, kernel, interrupt, etc) returned a percentage value: The percentage of CPU time spent processing user-level code, calculated over the last minute. This object has been deprecated in favour of 'ssCpuRawUser(50)', which can be used to calculate the same metric, but over any desired time period. The raw values return the "raw" number of ticks the CPU spent: The number of 'ticks' (typically 1/100s) spent processing user-level code. On a multi-processor system, the 'ssCpuRaw*' counters are cumulative over all CPUs, so their sum will typically be N*100 (for N processors). My question is: how do you turn the number of ticks into percentage? That is, how do you know how many ticks per second (it's typically — which implies not always — 1/100s, which either means 1 every 100 seconds or that a tick represents 1/100th of a second). I imagine you also need to know how many CPUs there are or you need to fetch all the CPU values to add them all together. I can't seem to find a MIB that gives you an integer value for # of CPUs which makes the former route awkward. The latter route seems unreliable because some of the numbers overlap (sometimes). For example, ssCpuRawWait has the following warning: This object will not be implemented on hosts where the underlying operating system does not measure this particular CPU metric. This time may also be included within the 'ssCpuRawSystem(52)' counter. Some help would be appreciated. Everywhere seems to just say that % is deprecated because it can be derived, but I haven't found anywhere that shows the official standard way to perform this derivation. The second component is that these "ticks" seem to be cumulative instead of over some time period. How do I sample values over some time period? The ultimate information I want is: % of user, system, idle, nice (and ideally steal, though there doesn't seem to be a standard MIB for this) "currently" (over the last 1-60s would probably be sufficient, with a preference for smaller time spans).
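
    One way to read the deprecation note is that the percentages are meant to be derived from deltas: sample the Raw counters twice and divide each counter's change by the change in the sum of all of them. Because numerator and denominator are both sums of ticks over all CPUs, the tick frequency and the CPU count cancel out. A minimal sketch using net-snmp's snmpget; the host, community string and 5-second interval are placeholders, and it assumes the UCD-SNMP-MIB resolves by name (the numeric OIDs under .1.3.6.1.4.1.2021.11 work too).

      # Sample the raw CPU counters twice and derive percentages from the deltas.
      # Counter32 values can wrap; a production version would handle that case.
      HOST=192.0.2.10; COMMUNITY=public; INTERVAL=5          # placeholders
      OIDS="ssCpuRawUser.0 ssCpuRawNice.0 ssCpuRawSystem.0 ssCpuRawIdle.0"
      sample() { snmpget -v2c -c "$COMMUNITY" -Ovq "$HOST" $OIDS; }

      read -r u1 n1 s1 i1 <<< "$(sample | tr '\n' ' ')"
      sleep "$INTERVAL"
      read -r u2 n2 s2 i2 <<< "$(sample | tr '\n' ' ')"

      total=$(( (u2-u1) + (n2-n1) + (s2-s1) + (i2-i1) ))     # ticks elapsed across all CPUs
      echo "user=$(( 100*(u2-u1)/total ))% nice=$(( 100*(n2-n1)/total ))% system=$(( 100*(s2-s1)/total ))% idle=$(( 100*(i2-i1)/total ))%"

    Shrinking INTERVAL gives the "over the last few seconds" view asked for at the end of the question, at the cost of coarser rounding.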

    Read the article

  • swapping or thrashing with vast amounts of unmapped pagecache

    - by Marco
    I'm using kubuntu jaunty (i386 32bit), kernel 2.6.28-13-generic. I've 4Gb of RAM, of which only 3317Mb are seen by the system (I guess because of the 32bit system). I'm seeing that the pagecache utilization is continually growing, up to the point that the system is unusable (after a few days). This also happens when I don't do anything (all user applications closed and the bare minimum of services enabled). If enabled, the system starts to use swap space (using it all in the end). Even if swap is disabled, disk activity becomes continuous, with the system unresponsive. For example, right now the system is working (albeit a tad slow), with only firefox and wing ide running, and I have 2Gb cached with only 45Mb mapped: $ free total used free shared buffers cached Mem: 3346388 3247328 99060 0 8416 2117980 -/+ buffers/cache: 1120932 2225456 Swap: 2144668 519448 1625220 $ cat /proc/meminfo MemTotal: 3346388 kB MemFree: 97128 kB Buffers: 7872 kB Cached: 2120224 kB SwapCached: 413860 kB Active: 2304596 kB Inactive: 865984 kB Active(anon): 2279168 kB Inactive(anon): 830236 kB Active(file): 25428 kB Inactive(file): 35748 kB Unevictable: 32 kB Mlocked: 32 kB HighTotal: 2492940 kB HighFree: 5456 kB LowTotal: 853448 kB LowFree: 91672 kB SwapTotal: 2144668 kB SwapFree: 1625244 kB Dirty: 84 kB Writeback: 0 kB AnonPages: 629304 kB Mapped: 45768 kB Slab: 45600 kB SReclaimable: 21756 kB SUnreclaim: 23844 kB PageTables: 4468 kB NFS_Unstable: 0 kB Bounce: 0 kB WritebackTmp: 0 kB CommitLimit: 3817860 kB Committed_AS: 3735020 kB VmallocTotal: 122880 kB VmallocUsed: 9352 kB VmallocChunk: 66600 kB HugePages_Total: 0 HugePages_Free: 0 HugePages_Rsvd: 0 HugePages_Surp: 0 Hugepagesize: 4096 kB DirectMap4k: 16376 kB DirectMap4M: 888832 kB If I try to drop the caches, little happens: # sync ; echo 3 > /proc/sys/vm/drop_caches ; free total used free shared buffers cached Mem: 3346388 3220580 125808 0 3020 2100600 -/+ buffers/cache: 1116960 2229428 Swap: 2144668 519356 1625312 Right now I have vm.swappiness = 5, but I've also tried 0 and 1 (without noticeable differences). I've also tried vm.vfs_cache_pressure = 50 and 150 (again, no differences). As I said, the pagecache eats all memory even with swapping turned off. What is happening? How can I avoid this? TIA, Marco
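
    One way to narrow this down is to record, over time, which /proc/meminfo fields are actually climbing (Cached and SwapCached versus AnonPages and Slab), since that distinguishes reclaimable page cache from memory the kernel cannot drop. A minimal watcher sketch; the 60-second interval and log path are arbitrary choices.

      # Append a timestamped snapshot of the interesting /proc/meminfo fields every minute.
      while true; do
          {
              date '+%F %T'
              grep -E '^(MemFree|Cached|SwapCached|AnonPages|Mapped|Slab|Committed_AS):' /proc/meminfo
              echo
          } >> /var/log/meminfo-watch.log
          sleep 60
      done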

    Read the article

  • Apache Bad Request "Size of a request header field exceeds server limit" with Kerberos SSO

    - by Aurelin
    I'm setting up an SSO for Active Directory users through a website that runs on an Apache (Apache2 on SLES 11.1), and when testing with Firefox it all works fine. But when I try to open the website in Internet Explorer 8 (Windows 7), all I get is "Bad Request Your browser sent a request that this server could not understand. Size of a request header field exceeds server limit. Authorization: Negotiate [ultra long string]" My vhost.cfg looks like this: <VirtualHost hostname:443> LimitRequestFieldSize 32760 LimitRequestLine 32760 LogLevel debug <Directory "/data/pwtool/sec-data/adbauth"> AuthName "Please login with your AD-credentials (Windows Account)" AuthType Kerberos KrbMethodNegotiate on KrbAuthRealms REALM.TLD KrbServiceName HTTP/hostname Krb5Keytab /data/pwtool/conf/http_hostname.krb5.keytab KrbMethodK5Passwd on KrbLocalUserMapping on Order allow,deny Allow from all </Directory> <Directory "/data/pwtool/sec-data/adbauth"> Require valid-user </Directory> SSLEngine on SSLCipherSuite ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP:+eNULL SSLCertificateFile /etc/apache2/ssl.crt/hostname-server.crt SSLCertificateKeyFile /etc/apache2/ssl.key/hostname-server.key </VirtualHost> I also made sure that the cookies are deleted and tried several smaller values for LimitRequestFieldSize and LimitRequestLine. Another thing that seems weird to me is that even with LogLevel debug I won't get any logs about this. The log's last line is ssl_engine_kernel.c(1879): OpenSSL: Write: SSL negotiation finished successfully Does anyone have an idea about that?

    Read the article

  • Project Server 2007 Task Updates hangs on 'Loading Grid...'

    - by entens
    A strange problem began occurring after applying MOSS2 (KB953334) and the August 2009 cumulative update to our Project server. When a user enters the 'Task Update' screen they are prompted to download a new ActiveX control. Upon refresh, and subsequent access attempts, the user is presented with a blank grid with the caption 'Loading Grid...' We have attempted to fix this issue by updating the 'Trusted Sites' list and changing the security settings according to KB818046. However, nothing seems to definitely fix the problem. Also, when the problem randomly fixes itself, it still occurs when viewing specific projects. Any ideas on a fix?

    Read the article

  • Virtual Machine Manager 2012 is showing 0% CPU usage

    - by Mark Henderson
    When trying to do some science to answer this question, I took a Windows 7 guest on a Server 2008 R2 host being managed by SCVMM 2012 and ran Prime95 on it to just generate some CPU usage. Here's the Guest: The Hyper-V host shows 12%, which is 1/8 cores (which is what is allocated), so that's looking correct: But SCVMM is showing 0%: I have left the stress test running for a long time, thinking that maybe SCVMM averages out over a long time (I thought it was 9 minutes, but I've been known to be wrong; just don't tell my wife). Why is SCVMM showing 0% when everything else seems to disagree?

    Read the article

  • Filesystems for a webserver with SATA and Solid State disk

    - by Jorisslob
    We have just ordered a new webserver with a 120 Gb solid state disk and a SATA disk. I am trying to plan ahead what sort of filesystem to use. This system will be running Linux, with Apache/Tomcat to host Java services. The main service is a system where people can upload reasonably large files (in the order of 100 Mb: images, image stacks and video), which people will be able to annotate and which will be sent to a database server when annotation is complete. Thus far, I plan to put most of the utility programs of the operating system on the SSD and put the large media files there. The SATA disks will hold the less volatile data like apache, tomcat and the servlets. For filesystems I have considered going for the stable EXT3 because I hear that it is best supported. The downside seems to be that it is not the ideal choice for large files. That is why I am leaning towards using XFS for the SSD and EXT3 for the SATA. My questions are: 1) Does this sound like a reasonable setup? 2) What filesystems would you recommend for the SSD and for the SATA? Thanks
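
    If the split described above is kept (XFS on the SSD for the large uploads, EXT3 on the SATA disk for the application stack), the layout ends up as ordinary fstab entries. A hypothetical sketch; the device names and mount points are placeholders, and noatime is just a common choice for cutting needless writes on the SSD, not a requirement.

      # Hypothetical /etc/fstab fragment for the SSD/SATA split discussed above.
      cat <<'EOF' >> /etc/fstab
      # <device>   <mount point>   <fs>   <options>          <dump> <pass>
      /dev/sda2    /srv/media      xfs    noatime            0      2
      /dev/sdb1    /srv/app        ext3   defaults,noatime   0      2
      EOF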

    Read the article

  • Meaning of directories on Unix and Unix like systems

    - by Luke
    I've been using Linux for a couple of years now, but I still haven't figured out what the origin or meaning of some of the directory names are on Unix and Unix-like systems. E.g. what does etc stand for, or var? Where does the opt name come from? And while we're on the topic anyway: can someone give a clear explanation of which directory is best used for what? I sometimes get confused about where certain software is installed, or what the most appropriate directory is to install software into.

    Read the article

  • Dealing with HTTP w00tw00t attacks

    - by Saif Bechan
    I have a server with apache and I recently installed mod_security2 because I get attacked a lot by this: My apache version is apache v2.2.3 and I use mod_security2.c These were the entries from the error log: [Wed Mar 24 02:35:41 2010] [error] [client 88.191.109.38] client sent HTTP/1.1 request without hostname (see RFC2616 section 14.23): /w00tw00t.at.ISC.SANS.DFind:) [Wed Mar 24 02:47:31 2010] [error] [client 202.75.211.90] client sent HTTP/1.1 request without hostname (see RFC2616 section 14.23): /w00tw00t.at.ISC.SANS.DFind:) [Wed Mar 24 02:47:49 2010] [error] [client 95.228.153.177] client sent HTTP/1.1 request without hostname (see RFC2616 section 14.23): /w00tw00t.at.ISC.SANS.DFind:) [Wed Mar 24 02:48:03 2010] [error] [client 88.191.109.38] client sent HTTP/1.1 request without hostname (see RFC2616 section 14.23): /w00tw00t.at.ISC.SANS.DFind:) Here are the errors from the access_log: 202.75.211.90 - - [29/Mar/2010:10:43:15 +0200] "GET /w00tw00t.at.ISC.SANS.DFind:) HTTP/1.1" 400 392 "-" "-" 211.155.228.169 - - [29/Mar/2010:11:40:41 +0200] "GET /w00tw00t.at.ISC.SANS.DFind:) HTTP/1.1" 400 392 "-" "-" 211.155.228.169 - - [29/Mar/2010:12:37:19 +0200] "GET /w00tw00t.at.ISC.SANS.DFind:) HTTP/1.1" 400 392 "-" "-" I tried configuring mod_security2 like this: SecFilterSelective REQUEST_URI "w00tw00t\.at\.ISC\.SANS\.DFind" SecFilterSelective REQUEST_URI "\w00tw00t\.at\.ISC\.SANS" SecFilterSelective REQUEST_URI "w00tw00t\.at\.ISC\.SANS" SecFilterSelective REQUEST_URI "w00tw00t\.at\.ISC\.SANS\.DFind:" SecFilterSelective REQUEST_URI "w00tw00t\.at\.ISC\.SANS\.DFind:\)" The thing in mod_security2 is that SecFilterSelective cannot be used; it gives me errors. Instead I use a rule like this: SecRule REQUEST_URI "w00tw00t\.at\.ISC\.SANS\.DFind" SecRule REQUEST_URI "\w00tw00t\.at\.ISC\.SANS" SecRule REQUEST_URI "w00tw00t\.at\.ISC\.SANS" SecRule REQUEST_URI "w00tw00t\.at\.ISC\.SANS\.DFind:" SecRule REQUEST_URI "w00tw00t\.at\.ISC\.SANS\.DFind:\)" Even this does not work. I don't know what to do anymore. Anyone have any advice? Update 1 I see that nobody can solve this problem using mod_security. So far using ip-tables seems like the best option to do this, but I think the file will become extremely large because the IP changes several times a day. I came up with 2 other solutions; can someone comment on whether they are good or not? The first solution that comes to my mind is excluding these attacks from my apache error logs. This will make it easier for me to spot other urgent errors as they occur, without having to sift through a long log. The second option is better, I think, and that is blocking hosts whose requests are not sent in the correct way. In this example the w00tw00t attack is sent without a hostname, so I think I can block the hosts whose requests are not in the correct form. Update 2 After going through the answers I came to the following conclusions. Having custom logging for apache will consume some unnecessary resources, and if there really is a problem you will probably want to look at the full log without anything missing. It is better to just ignore the hits and concentrate on a better way of analyzing your error logs. Using filters for your logs is a good approach for this. Final thoughts on the subject The attack mentioned above will not reach your machine if you at least have an up-to-date system, so there are basically no worries. It can be hard to filter out all the bogus attacks from the real ones after a while, because both the error logs and access logs get extremely large. Preventing this from happening in any way will cost you resources, and it is good practice not to waste your resources on unimportant stuff. The solution I use now is Linux logwatch. It sends me summaries of the logs, and they are filtered and grouped. This way you can easily separate the important from the unimportant. Thank you all for the help, and I hope this post can be helpful to someone else too.
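
    For completeness, the ip-tables route mentioned in Update 1 does not need per-IP entries: the kernel's string match can drop the scanner's request by payload, regardless of which address it comes from. A hedged sketch; as the final thoughts say, inspecting every port-80 packet has its own cost, so this is a trade-off rather than a recommendation.

      # Drop packets whose payload contains the scanner's URI before Apache sees them.
      # The string match module inspects every packet to port 80, which costs some CPU.
      iptables -I INPUT -p tcp --dport 80 -m string --algo bm \
               --string "w00tw00t.at.ISC.SANS.DFind" -j DROP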

    Read the article

  • How to create an NFS proxy by using kernel server & client?

    - by Martin C. Martin
    I have a file server that exports as NFS. On an Ubuntu machine I mount that, then try to export it as an NFS volume. When I go to export it, I get the message: exportfs: /test/nfs-mount-point does not support NFS export How can I get this to work, or at least get more information as to what the problem is? Exact steps: Ubuntu 12.04 mount -f nfs myfileserver.com:/server-dir /test/nfs-mount-point [Works fine, I can read & write files] /etc/exports contains: /test/nfs-mount-point *(rw,no_subtree_check) sudo /etc/init.d/nfs-kernel-server restart Stopping NFS kernel daemon [ OK ] Unexporting directories for NFS kernel daemon... [ OK ] Exporting directories for NFS kernel daemon... exportfs: /test/nfs-mount-point does not support NFS export [ OK ] Starting NFS kernel daemon [ OK ]
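
    exportfs refuses filesystems it cannot derive a stable filesystem id for, and an NFS mount has no usable UUID of its own, so a commonly suggested experiment is assigning an fsid explicitly; whether re-exporting NFS works at all still depends on the kernel. A sketch, replacing the single-line /etc/exports shown above.

      # Add an explicit fsid so knfsd has a stable identifier for the re-exported mount.
      # This is an experiment, not a guaranteed fix: NFS re-export support is kernel-dependent.
      cat <<'EOF' > /etc/exports
      /test/nfs-mount-point *(rw,no_subtree_check,fsid=1)
      EOF
      exportfs -ra        # re-read /etc/exports and report any refusal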

    Read the article

  • SUSE 12.1 Apache startup after oci8 installation

    - by DKSan
    I have got a virtual server running openSUSE 11.4 with apache, php, oracle instantclient, and oci8 installed through pecl. The steps I took to get it up and running on 11.4 were: # Install instantclient rpm -Uvh oracle-instantclient11.2-basic-11.2.0.2.0.x86_64.rpm rpm -Uvh oracle-instantclient11.2-devel-11.2.0.2.0.x86_64.rpm # Install OCI8 through pecl pecl install oci8 # add oci8 to modules vi /etc/php5/conf.d/oci8.ini extension=oci8.so # add LD_LIBRARY_PATH to apache vi /etc/sysconfig/apache2 # add to bottom of script export LD_LIBRARY_PATH="/usr/lib/oracle/11.2/client64/lib" # restart Apache /etc/init.d/apache2 restart Repeating the same procedure on a fresh installation of OpenSUSE 12.1 results in apache throwing the following message at startup: PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib64/php5/extensions/oci8.so' - libnnz11.so: cannot open shared object file: No such file or directory in Unknown on line 0 I can't find any explanation for why it works on 11.4 but stops working on 12.1. Can someone please point me in the right direction?
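
    The warning means the PHP process can find oci8.so but not the Instant Client library it links against, i.e. the LD_LIBRARY_PATH export in /etc/sysconfig/apache2 is apparently not reaching the PHP environment on 12.1. A sketch of the usual alternative: register the path with the dynamic linker system-wide so no per-service export is needed.

      # Make the Instant Client libraries visible to every process via ldconfig.
      echo "/usr/lib/oracle/11.2/client64/lib" > /etc/ld.so.conf.d/oracle-instantclient.conf
      /sbin/ldconfig
      ldconfig -p | grep libnnz11        # should now list the library
      /etc/init.d/apache2 restart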

    Read the article

  • Disable gdm in Ubuntu 10.04?

    - by Nick Brooks
    The new Ubuntu features a completely unkillable gdm. Is there a way to disable it? It is not enabled in services, the gdm startup script is deleted, and it is removed from 'update.rc', but it still starts up. How do I disable GDM and Graphical User Selection?
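
    On 10.04 gdm is started by Upstart rather than by the init scripts that update-rc.d manages, which would explain why the steps above do not stick. One commonly used workaround, sketched here under that assumption, is booting with the text kernel parameter so the display manager job never takes over; this assumes the stock gdm Upstart job, which checks the kernel command line, is still present.

      # Boot to a console by default. Back up /etc/default/grub first.
      sudo nano /etc/default/grub          # set: GRUB_CMDLINE_LINUX_DEFAULT="text"
      sudo update-grub
      # For the current session only:
      sudo service gdm stop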

    Read the article

  • Gearman too many processes issue

    - by Roman Newaza
    I use Net_Gearman from PECL, Gearmand 1.1.11 and Gearman Manager. Every time I add background job, I can see new worker listed with no Function, nor Id in Ggearman-Monitor: If I add many messages in the bash loop, after some time it becomes very slow. for i in $(seq 0 9999); do php Client.php && echo $i; done Yesterday, the situation was even worse - I had many error messages in Gearmand log regarding Too many open files and once I added --file-descriptors=49152 as an option and swithched to 1.1.11 from 1.0.6, these errors gone. Here is lsof -p $(cat /var/run/gearman/gearmand.pid) output: COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME gearmand 2020 gearman cwd DIR 8,2 4096 2 / gearmand 2020 gearman rtd DIR 8,2 4096 2 / gearmand 2020 gearman txt REG 8,2 3852472 3672962 /opt/sbin/gearmand gearmand 2020 gearman mem REG 8,2 52120 9961752 /lib/x86_64-linux-gnu/libnss_files-2.15.so gearmand 2020 gearman mem REG 8,2 47680 9961756 /lib/x86_64-linux-gnu/libnss_nis-2.15.so gearmand 2020 gearman mem REG 8,2 97248 9961768 /lib/x86_64-linux-gnu/libnsl-2.15.so gearmand 2020 gearman mem REG 8,2 35680 9961750 /lib/x86_64-linux-gnu/libnss_compat-2.15.so gearmand 2020 gearman mem REG 8,2 92720 9964871 /lib/x86_64-linux-gnu/libz.so.1.2.3.4 gearmand 2020 gearman mem REG 8,2 109288 11014600 /usr/lib/x86_64-linux-gnu/libsasl2.so.2.0.25 gearmand 2020 gearman mem REG 8,2 1030512 9961759 /lib/x86_64-linux-gnu/libm-2.15.so gearmand 2020 gearman mem REG 8,2 1930616 9964982 /lib/x86_64-linux-gnu/libcrypto.so.1.0.0 gearmand 2020 gearman mem REG 8,2 382896 9964977 /lib/x86_64-linux-gnu/libssl.so.1.0.0 gearmand 2020 gearman mem REG 8,2 1815224 9961748 /lib/x86_64-linux-gnu/libc-2.15.so gearmand 2020 gearman mem REG 8,2 88384 9964865 /lib/x86_64-linux-gnu/libgcc_s.so.1 gearmand 2020 gearman mem REG 8,2 962656 11014043 /usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.16 gearmand 2020 gearman mem REG 8,2 199600 11016157 /usr/lib/x86_64-linux-gnu/libmemcached.so.11.0.0 gearmand 2020 gearman mem REG 8,2 31752 9961755 /lib/x86_64-linux-gnu/librt-2.15.so gearmand 2020 gearman mem REG 8,2 14768 9961763 /lib/x86_64-linux-gnu/libdl-2.15.so gearmand 2020 gearman mem REG 8,2 414280 9183971 /usr/lib/libboost_program_options.so.1.46.1 gearmand 2020 gearman mem REG 8,2 283832 9183656 /usr/lib/libevent-2.0.so.5.1.4 gearmand 2020 gearman mem REG 8,2 664504 11014432 /usr/lib/x86_64-linux-gnu/libsqlite3.so.0.8.6 gearmand 2020 gearman mem REG 8,2 135366 9961757 /lib/x86_64-linux-gnu/libpthread-2.15.so gearmand 2020 gearman mem REG 8,2 3534240 9175810 /usr/lib/libmysqlclient.so.18.1.0 gearmand 2020 gearman mem REG 8,2 149280 9961760 /lib/x86_64-linux-gnu/ld-2.15.so gearmand 2020 gearman 0u CHR 1,3 0t0 1029 /dev/null gearmand 2020 gearman 1u CHR 1,3 0t0 1029 /dev/null gearmand 2020 gearman 2u CHR 1,3 0t0 1029 /dev/null gearmand 2020 gearman 3w REG 8,2 9381897 3409366 /var/log/gearman-job-server/gearman.log gearmand 2020 gearman 4r FIFO 0,8 0t0 38869143 pipe gearmand 2020 gearman 5w FIFO 0,8 0t0 38869143 pipe gearmand 2020 gearman 6u 0000 0,9 0 6826 anon_inode gearmand 2020 gearman 7u unix 0xffff880230fdf500 0t0 38869144 socket gearmand 2020 gearman 8u unix 0xffff880230fdde40 0t0 38869145 socket gearmand 2020 gearman 9u IPv4 38869146 0t0 TCP localhost:4730 (LISTEN) gearmand 2020 gearman 10r FIFO 0,8 0t0 38869147 pipe gearmand 2020 gearman 11w FIFO 0,8 0t0 38869147 pipe gearmand 2020 gearman 12u 0000 0,9 0 6826 anon_inode gearmand 2020 gearman 13u unix 0xffff880230fde4c0 0t0 38869148 socket gearmand 2020 gearman 14u unix 0xffff880230fdeb40 
0t0 38869149 socket gearmand 2020 gearman 15r FIFO 0,8 0t0 38869150 pipe gearmand 2020 gearman 16w FIFO 0,8 0t0 38869150 pipe gearmand 2020 gearman 17u 0000 0,9 0 6826 anon_inode gearmand 2020 gearman 18u 0000 0,9 0 6826 anon_inode gearmand 2020 gearman 19u unix 0xffff880230fdb400 0t0 38869151 socket gearmand 2020 gearman 20u unix 0xffff880230fdaa40 0t0 38869152 socket gearmand 2020 gearman 21r FIFO 0,8 0t0 38869153 pipe gearmand 2020 gearman 22w FIFO 0,8 0t0 38869153 pipe gearmand 2020 gearman 23u unix 0xffff880203cfce00 0t0 38868290 socket gearmand 2020 gearman 24u unix 0xffff880203cfdb00 0t0 38868291 socket gearmand 2020 gearman 25r FIFO 0,8 0t0 38868292 pipe gearmand 2020 gearman 26w FIFO 0,8 0t0 38868292 pipe gearmand 2020 gearman 27u 0000 0,9 0 6826 anon_inode gearmand 2020 gearman 28u unix 0xffff880203cf9040 0t0 38868293 socket gearmand 2020 gearman 29u unix 0xffff880203cfaa40 0t0 38868294 socket gearmand 2020 gearman 30r FIFO 0,8 0t0 38868295 pipe gearmand 2020 gearman 31w FIFO 0,8 0t0 38868295 pipe gearmand 2020 gearman 32u IPv4 38868324 0t0 TCP localhost:4730->localhost:57954 (ESTABLISHED) gearmand 2020 gearman 33u IPv4 38868325 0t0 TCP localhost:4730->localhost:57955 (ESTABLISHED) gearmand 2020 gearman 34u IPv4 38901247 0t0 TCP localhost:4730->localhost:38594 (ESTABLISHED) gearmand 2020 gearman 35u IPv4 38868327 0t0 TCP localhost:4730->localhost:57957 (ESTABLISHED) gearmand 2020 gearman 36u IPv4 38867483 0t0 TCP localhost:4730->localhost:57959 (ESTABLISHED) gearmand 2020 gearman 37u IPv4 38867484 0t0 TCP localhost:4730->localhost:57958 (ESTABLISHED) gearmand 2020 gearman 38u IPv4 38901248 0t0 TCP localhost:4730->localhost:38595 (CLOSE_WAIT) gearmand 2020 gearman 39u IPv4 38901249 0t0 TCP localhost:4730->localhost:38597 (ESTABLISHED) gearmand 2020 gearman 40u IPv4 38869201 0t0 TCP localhost:4730->localhost:57979 (ESTABLISHED) gearmand 2020 gearman 41u IPv4 38900437 0t0 TCP localhost:4730->localhost:38599 (ESTABLISHED) gearmand 2020 gearman 42u IPv4 38900438 0t0 TCP localhost:4730->localhost:38602 (ESTABLISHED) gearmand 2020 gearman 43u IPv4 38868375 0t0 TCP localhost:4730->localhost:57987 (ESTABLISHED) gearmand 2020 gearman 44u IPv4 38900468 0t0 TCP localhost:4730->localhost:38606 (CLOSE_WAIT) gearmand 2020 gearman 45u IPv4 38868381 0t0 TCP localhost:4730->localhost:57999 (ESTABLISHED) gearmand 2020 gearman 46u IPv4 38868388 0t0 TCP localhost:4730->localhost:58007 (ESTABLISHED) gearmand 2020 gearman 47u IPv4 38868393 0t0 TCP localhost:4730->localhost:58011 (ESTABLISHED) gearmand 2020 gearman 48u IPv4 38903950 0t0 TCP localhost:4730->localhost:38609 (ESTABLISHED) gearmand 2020 gearman 49u IPv4 38870276 0t0 TCP localhost:4730->localhost:58019 (ESTABLISHED) gearmand 2020 gearman 50u IPv4 38903955 0t0 TCP localhost:4730->localhost:38613 (ESTABLISHED) gearmand 2020 gearman 51u IPv4 38900477 0t0 TCP localhost:4730->localhost:38617 (CLOSE_WAIT) gearmand 2020 gearman 52u IPv4 38867630 0t0 TCP localhost:4730->localhost:58031 (ESTABLISHED) gearmand 2020 gearman 53u IPv4 38867633 0t0 TCP localhost:4730->localhost:58035 (ESTABLISHED) gearmand 2020 gearman 54u IPv4 38867636 0t0 TCP localhost:4730->localhost:58039 (ESTABLISHED) gearmand 2020 gearman 55u IPv4 38900536 0t0 TCP localhost:4730->localhost:38619 (ESTABLISHED) gearmand 2020 gearman 56u IPv4 38868419 0t0 TCP localhost:4730->localhost:58047 (ESTABLISHED) gearmand 2020 gearman 57u IPv4 38869263 0t0 TCP localhost:4730->localhost:58051 (ESTABLISHED) gearmand 2020 gearman 58u IPv4 38900537 0t0 TCP localhost:4730->localhost:38621 
(ESTABLISHED) gearmand 2020 gearman 59u IPv4 38869271 0t0 TCP localhost:4730->localhost:58059 (ESTABLISHED) gearmand 2020 gearman 60u IPv4 38900538 0t0 TCP localhost:4730->localhost:38623 (ESTABLISHED) gearmand 2020 gearman 61u IPv4 38870319 0t0 TCP localhost:4730->localhost:58067 (ESTABLISHED) gearmand 2020 gearman 62u IPv4 38900540 0t0 TCP localhost:4730->localhost:38628 (ESTABLISHED) gearmand 2020 gearman 63u IPv4 38869289 0t0 TCP localhost:4730->localhost:58075 (ESTABLISHED) ... gearmand 2020 gearman 2229u IPv4 38903885 0t0 TCP localhost:4730->localhost:38572 (ESTABLISHED) gearmand 2020 gearman 2230u IPv4 38901211 0t0 TCP localhost:4730->localhost:38576 (ESTABLISHED) gearmand 2020 gearman 2234u IPv4 38901237 0t0 TCP localhost:4730->localhost:38588 (ESTABLISHED)
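
    To see what those idle entries correspond to while the bash loop runs, gearmand's plain-text admin interface on port 4730 can be polled: "status" reports queued and running jobs plus available workers per function, and "workers" lists every connection. A sketch; the -q1 flag matches the Debian/Ubuntu netcat flavours, and the 5-second interval is arbitrary.

      # Poll gearmand's admin protocol while the client loop runs, to watch queue depth
      # and connection counts grow (or not).
      while true; do
          date '+%F %T'
          printf 'status\n'  | nc -q1 127.0.0.1 4730
          printf 'workers\n' | nc -q1 127.0.0.1 4730 | wc -l     # rough count of connections
          sleep 5
      done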

    Read the article

  • Cannot open TUN/TAP dev /dev/as0t0: No such file or directory (errno=2)

    - by Mark
    I just attempted to install OpenVPN Access Server on my Debian VPS that uses OpenVZ. It installed fine, however when I try to start it from the administration panel, I get these errors: process started and then immediately exited: ['Sat Sep 22 19:14:33 2012 Cannot open TUN/TAP dev /dev/as0t0: No such file or directory (errno=2)'] service failed to start or returned error status process started and then immediately exited: ['Sat Sep 22 19:14:33 2012 Cannot open TUN/TAP dev /dev/as0t1: No such file or directory (errno=2)'] service failed to start or returned error status process started and then immediately exited: ['Sat Sep 22 19:14:33 2012 Cannot open TUN/TAP dev /dev/as0t2: No such file or directory (errno=2)'] service failed to start or returned error status process started and then immediately exited: ['Sat Sep 22 19:14:33 2012 Cannot open TUN/TAP dev /dev/as0t3: No such file or directory (errno=2)'] service failed to start or returned error status process started and then immediately exited: ['Sat Sep 22 19:14:33 2012 Cannot open TUN/TAP dev /dev/as0t4: No such file or directory (errno=2)'] service failed to start or returned error status process started and then immediately exited: ['Sat Sep 22 19:14:33 2012 Cannot open TUN/TAP dev /dev/as0t5: No such file or directory (errno=2)'] service failed to start or returned error status process started and then immediately exited: ['Sat Sep 22 19:14:33 2012 Cannot open TUN/TAP dev /dev/as0t6: No such file or directory (errno=2)'] service failed to start or returned error status process started and then immediately exited: ['Sat Sep 22 19:14:33 2012 Cannot open TUN/TAP dev /dev/as0t7: No such file or directory (errno=2)'] service failed to start or returned error status Is there a solution for this?
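
    On OpenVZ a container cannot create TUN/TAP devices unless the host node explicitly allows it, so the usual first check is TUN access itself; whether OpenVPN Access Server can then create its as0t* interfaces still depends on the provider's setup. A sketch under that assumption; CTID is a placeholder for the container id, and the vzctl commands have to be run on the host node.

      # On the host node (CTID is a placeholder):
      modprobe tun
      vzctl set CTID --devices c:10:200:rw --capability net_admin:on --save
      vzctl set CTID --devnodes net/tun:rw --save
      # Inside the container, make sure the device node exists:
      mkdir -p /dev/net
      [ -c /dev/net/tun ] || mknod /dev/net/tun c 10 200
      chmod 600 /dev/net/tun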

    Read the article

  • gparted installed on OpenSuse shows all file system types as greyed out except for hfs

    - by cmdematos.com
    I have had this problem before and fixed it, but I don't recall how I did it and I did not record it (sadness :( ). I have all the requisite commands installed on OpenSuse to support gparted's efforts in creating any of the supported file systems. I recall that the problem was that gparted could not find the commands; in any event, all the file systems are greyed out in the context menu except for the legacy hfs entry, which only supports < 2 GB. Even ext2-ext4 are greyed out. How do I fix this?
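
    gparted greys out any filesystem whose userspace tools (mkfs.* and the matching check/resize programs) it cannot find on the PATH, so the fix is usually just installing those packages. A sketch; the package names are the typical openSUSE ones and may differ between releases, and zypper will complain about any it does not know.

      # Install the userspace tools gparted looks for at startup.
      sudo zypper install e2fsprogs xfsprogs dosfstools ntfsprogs ntfs-3g reiserfsprogs jfsutils
      # Relaunch gparted from a shell whose PATH includes the sbin directories:
      sudo env PATH="$PATH:/sbin:/usr/sbin" gparted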

    Read the article

  • How to Route Traffic in Case PPTP Remote Client is on Same Subnet as Server

    - by Marcus Cole
    I have a PPTP server set up on my local home network (192.168.1.0/24, pfSense). Now sometimes when I'm away and want to connect remotely, my client (Windows 7) is also on the same network, because e.g. the hotel has set it up the same way. Thus the connection works, but I can't reach any PC on my home network because everything is routed directly to the client's local router, which is in the same subnet. Is there a way to work around this by messing with a configuration or adapting the Windows routing table, i.e. without modifying either network?

    Read the article

  • Installing ESX 3.5 on HP blade 460c G7

    - by stillStudent
    I was trying to install ESX 3.5 on an HP blade 460c, but it fails with the error 'no network adapter is detected'. Does anybody have a workaround for this? I also tried to install ESX 4.1; it had the same problem, but in the case of 4.1 there is support for adding custom drivers during installation. So I added NIC drivers and the installation completed successfully. The ESX 3.5 installation does not support adding custom drivers while installing. I can't use ESX 4.1 because it does not support 'license server' configuration and only asks for a license file (key). I do not have license keys, I only have a VMware license server. That is the reason I have to stay with ESX 3.5.

    Read the article

  • IIS on localhost is very slow

    - by Nyla Pareska
    I am using IIS7 on Windows Vista on a dual-core CPU. The first hit on a WCF service or an ASP.NET webform sometimes takes way longer than a minute, which is not really acceptable for me. I configured the application to use the Classic .NET application pool and tried playing with the Maximum worker processes, first setting it to 4 but putting it back to 1 as it did not have the expected result. Are there any other things that I can try?

    Read the article

  • Using nginx + wordpress with all wordpress files in a subdirectory

    - by GorillaPatch
    My setup I am running nginx 0.7.67 on Debian Lenny as a webserver, not as a reverse proxy. I am using php5-fpm to handle my PHP requests, which works fine. My aim I would like to have a wordpress installation that is laid out as described here clean wordpress subversion installation. I would like to have a clean wordpress installation without cluttering my server root directory with all the wordpress files. That means that my wordpress installation would be in /wordpress and my themes and plugins inside /wordpress-content. The important point however is that if you navigate to my domain www.example.com then you would be taken directly to the wordpress blog, without having to specify the subdirectory where wordpress lives. I found a how-to at the nginx site installing wordpress but unfortunately this is for moving the entire wordpress directory instead of redirecting the traffic to it. I tried with the following configuration: example.conf in sites-available server { listen 80; server_name www.example.com; access_log /var/log/nginx/www.example.com.access.log main; root /var/www/example/htdocs; location / { try_files $uri $uri/ /wordpress/index.php?q=$uri&$args; } include /etc/nginx/includes/php5-wordpress.conf; include /etc/nginx/includes/deny.conf; } php5-wordpress.conf in includes location /wordpress { try_files $uri $uri/ /wordpress/index.php?q=$uri&$args; } location ~ \.php$ { fastcgi_split_path_info ^(/wordpress)(/.*)$; fastcgi_ignore_client_abort on; fastcgi_pass unix:/var/run/php5-fpm.socket; fastcgi_index index.php; include /etc/nginx/fastcgi_params; } fastcgi_params fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_param QUERY_STRING $query_string; fastcgi_param REQUEST_METHOD $request_method; fastcgi_param CONTENT_TYPE $content_type; fastcgi_param CONTENT_LENGTH $content_length; fastcgi_param REQUEST_URI $request_uri; fastcgi_param DOCUMENT_URI $document_uri; fastcgi_param DOCUMENT_ROOT $document_root; fastcgi_param SERVER_PROTOCOL $server_protocol; fastcgi_param GATEWAY_INTERFACE CGI/1.1; fastcgi_param SERVER_SOFTWARE nginx/$nginx_version; fastcgi_param REMOTE_ADDR $remote_addr; fastcgi_param REMOTE_PORT $remote_port; fastcgi_param SERVER_ADDR $server_addr; fastcgi_param SERVER_PORT $server_port; fastcgi_param SERVER_NAME $server_name; # PHP only, required if PHP was built with --enable-force-cgi-redirect fastcgi_param REDIRECT_STATUS 200; The problem I have is that when I go to the address "http://www.example.com" I get a 403 error, as I disabled directory listing. Instead I would like my wordpress blog to appear. Also if I navigate to "http://www.example.com/wordpress" I get a "file not found" error. However if I comment out the fastcgi_split_path_info line in my php5-wordpress.conf, at least the wordpress installation works inside /wordpress. I need help debugging this behavior, or pointers to where I can find more information. Thanks a lot. Update: Added error log entry for the 403 error.
in the error.log I get the following entry for the 403 error: 2010/12/11 07:54:24 [error] 9496#0: *1 directory index of "/var/www/example/htdocs/" is forbidden, client: XXX.XXX.XXX.XXX, server: www.example.com, request: "GET / HTTP/1.1", host: "www.example.com" Update 2: Added the nginx.conf below: user www-data; worker_processes 1; error_log /var/log/nginx/error.log; pid /var/run/nginx.pid; events { worker_connections 1024; } http { include /etc/nginx/mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] $status ' '"$request" $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log /var/log/nginx/access.log; sendfile on; #tcp_nopush on; keepalive_timeout 65; tcp_nodelay on; gzip on; index index.php index.html; include /etc/nginx/conf.d/*.conf; include /etc/nginx/sites-enabled/*; }
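
    A hedged sketch of one way to restructure the include, based only on the two symptoms above: the 403 comes from the bare / request being answered by the index-less document root, and dropping fastcgi_split_path_info was already observed to make /wordpress work. Treat it as an experiment to try rather than a verified fix; the paths and socket are the ones from the configuration above.

      # Replacement sketch for /etc/nginx/includes/php5-wordpress.conf (nginx 0.7.x syntax).
      cat <<'EOF' > /etc/nginx/includes/php5-wordpress.conf
      location = / {
          rewrite ^ /wordpress/ permanent;      # send the bare domain to the blog
      }
      location /wordpress {
          try_files $uri $uri/ /wordpress/index.php?q=$uri&$args;
      }
      location ~ \.php$ {
          fastcgi_ignore_client_abort on;       # fastcgi_split_path_info intentionally omitted
          fastcgi_pass unix:/var/run/php5-fpm.socket;
          fastcgi_index index.php;
          include /etc/nginx/fastcgi_params;
      }
      EOF
      nginx -t && /etc/init.d/nginx reload      # test the configuration before reloading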

    Read the article

  • Dovecot install: what does this error mean?

    - by jamie
    I have postfix and dovecot installed on CentOS 6 (linode) along with MySQL. The table and user are already set up, postfix installed fine, but dovecot gives me this error in the mail log: Warning: Killed with signal 15 (by pid=9415 uid=0 code=kill) The next few lines say this: Apr 7 16:13:35 dovecot: master: Dovecot v2.0.9 starting up (core dumps disabled) Apr 7 16:13:35 dovecot: config: Warning: NOTE: You can get a new clean config file with: doveconf -n > dovecot-new.conf Apr 7 16:13:35 dovecot: config: Warning: Obsolete setting in /etc/dovecot/dovecot.conf:1: protocols=pop3s is no longer supported. to disable non-ssl pop3, use service pop3-login { inet_listener pop3 { p$ Apr 7 16:13:35 dovecot: config: Warning: Obsolete setting in /etc/dovecot/dovecot.conf:5: ssl_cert_file has been replaced by ssl_cert = <file Apr 7 16:13:35 dovecot: config: Warning: Obsolete setting in /etc/dovecot/dovecot.conf:6: ssl_key_file has been replaced by ssl_key = <file Apr 7 16:13:35 dovecot: config: Warning: Obsolete setting in /etc/dovecot/dovecot.conf:8: namespace private {} has been replaced by namespace { type=private } Apr 7 16:13:35 dovecot: config: Warning: Obsolete setting in /etc/dovecot/dovecot.conf:24: add auth_ prefix to all settings inside auth {} and remove the auth {} section completely Apr 7 16:13:35 dovecot: config: Warning: Obsolete setting in /etc/dovecot/dovecot.conf:25: auth_user has been replaced by service auth { user } I am following directions for the install on CentOS 5, with changes to the dovecot.conf file from different sources specific to CentOS 6. So the dovecot.conf file might not be correct, but I have not yet found a good source for making dovecot install correctly. Can anyone tell me what the error above means? The terminal does not give any OK or FAIL message. When I issue the service dovecot start command, it says: Starting Dovecot Imap: and nothing more.
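
    Two separate things are going on in that log: signal 15 is SIGTERM, so the "Killed with signal 15" line is normally just dovecot being told to stop (for example by a restart), not a crash; the config warnings then spell out their own v2.x replacements. A sketch of the corresponding dovecot.conf changes, to be applied after removing the obsolete lines the warnings point at; the certificate paths are placeholders for whatever ssl_cert_file and ssl_key_file referenced.

      # v2.x equivalents of the settings flagged as obsolete above; certificate paths
      # are placeholders for whatever ssl_cert_file/ssl_key_file pointed at.
      cat <<'EOF' >> /etc/dovecot/dovecot.conf
      protocols = pop3
      service pop3-login {
        inet_listener pop3 {
          # port 0 disables the non-SSL POP3 listener, per the first warning
          port = 0
        }
      }
      ssl_cert = </etc/pki/dovecot/certs/dovecot.pem
      ssl_key = </etc/pki/dovecot/private/dovecot.pem
      EOF
      doveconf -n                # the log itself suggests this for reviewing the result
      service dovecot restart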

    Read the article

  • Logging in user in Windows 2008 server using LogonUser fails on LogonType LOGON32_LOGON_SERVICE

    - by Ofiris
    I am using the LogonUser function to log an account on to a Windows 2008 R2 server on a domain with clustering. When using LOGON32_LOGON_INTERACTIVE as the LogonType, I successfully log in. When using LOGON32_LOGON_SERVICE as the LogonType, the login fails, and Event Viewer says: An account failed to log on. Logon Type: 5 Account For Which Logon Failed: Security ID: NULL SID Account Name: thename Account Domain: thedomain Logon ID: 0x1009371c Logon Type: 5 Failure Information: Failure Reason: The user has not been granted the requested logon type at this machine. Status: 0xc000015b Sub Status: 0x0 I was not sure if this is for superuser or stackoverflow (calling LogonUser from C# code), but I guess it's some Windows server issue. EventID = 4625 Edit: Found that - 0xc000015b The user has not been granted the requested logon type (aka logon right) at this machine Edit: Should be a serverfault question...

    Read the article

  • Problem Activating “Windows Communication Foundation Non-HTTP Activation” feature in Windows 7

    - by Escobar5
    I'm having the following problem. I'm installing SharePoint 2010 Beta, so I need to activate the Windows feature "Windows Communication Foundation Non-HTTP Activation". The problem is I cannot activate it; I get the message: "An error has occurred. Not all features were successfully changed" When I look at the log (C:\Windows\Logs\CBS\CBS.log) I find this error: Process output: [l:186 [186]"SMConfigInstaller[Error]: Failed in calling 'StartService' for service 'NetTcpActivator'. Error code: 0x8007042c Can anyone give me a clue as to what is happening here?

    Read the article

  • ssh tunnel error "ssh_exchange_identification: Connection closed by remote host"

    - by Jacob Ewing
    I'm trying to use an ssh tunnel from my office machine to my home machine, and get an error when I try to use it. What I'm doing is starting one shell like so: ssh -gL 12345:my.home.domain:22 my.home.domain This is giving me a proper shell, no problem. What I normally do then is ssh to my home machine through this office machine, like so: ssh -p 12345 127.0.0.1 This has always worked for me, until last week, when I set up a new system on my home machine (switching from Ubuntu to Debian). Now I get an error. I can still open up my initial ssh connection, but when I try to use that tunnel, I get (on the office machine) this error: ssh_exchange_identification: Connection closed by remote host Also, when that happens, the open shell that I have the tunnelling set up through gets this line spat out at it: channel 3: open failed: connect failed: Connection timed out At which point, I'm at a loss. If any more info is needed, I'll be happy to post it. ============= further to that ============== After fiddling around further, I've found that I'm getting a different response from the server (my home machine that is) when I try to telnet in on the various ports. If I try: telnet my.home.domain 22 I get this back: Trying <my ip address>... Connected to <my domain>. Escape character is '^]'. SSH-2.0-OpenSSH_5.5p1 Debian-6+squeeze2 Which is what I would expect. After setting up the tunnel though, and then telnetting to that, I see this response: Trying 127.0.0.1... Connected to 127.0.0.1. Escape character is '^]'. ============== and further still ================== As per kbulgrien's suggestion, here is the output from the client machine with the -v option: ssh -vp 24600 127.0.0.1 OpenSSH_5.9p1 Debian-5ubuntu1, OpenSSL 1.0.1 14 Mar 2012 debug1: Reading configuration data /etc/ssh/ssh_config debug1: /etc/ssh/ssh_config line 19: Applying options for * debug1: Connecting to 127.0.0.1 [127.0.0.1] port 24600. debug1: Connection established. debug1: identity file /home/jacob/.ssh/id_rsa type -1 debug1: identity file /home/jacob/.ssh/id_rsa-cert type -1 debug1: identity file /home/jacob/.ssh/id_dsa type -1 debug1: identity file /home/jacob/.ssh/id_dsa-cert type -1 debug1: identity file /home/jacob/.ssh/id_ecdsa type -1 debug1: identity file /home/jacob/.ssh/id_ecdsa-cert type -1 ssh_exchange_identification: Connection closed by remote host
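
    The telnet test is the useful clue: the TCP connection opens but the new Debian sshd never sends its banner, which is the usual signature of the server dropping the session before identification (tcp_wrappers, per-source connection limits and the like) rather than a problem with the tunnel itself. A hedged diagnostic sketch to run on the home machine while retrying from the office.

      # Run on the home (Debian) machine while the office end retries through the tunnel.
      grep -H . /etc/hosts.allow /etc/hosts.deny                      # Debian's sshd honours tcp_wrappers
      grep -iE 'maxstartups|maxsessions|listenaddress' /etc/ssh/sshd_config
      tail -f /var/log/auth.log                                       # what sshd logs when the connection dies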

    Read the article

  • How to install mcrypt on RHEL5

    - by wag2639
    We have an RHEL5 server that I'm trying to install PHP-Mcrypt on, and I'm stuck when I try to run ./configure for the mcrypt source files. I was using this guide: http://atlantatechpro.com/howtos/howtoslinux/howtosmhashmcrypt When I try to install (./configure) mcrypt, I get this: checking for libmcrypt - version >= 2.5.0... no Could not run libmcrypt test program, checking why... The test program compiled, but did not run. This usually means that the run-time linker is not finding LIBMCRYPT or finding the wrong version of LIBMCRYPT. If it is not finding LIBMCRYPT, you'll need to set your LD_LIBRARY_PATH environment variable, or edit /etc/ld.so.conf to point to the installed location Also, make sure you have run ldconfig if that is required on your system If you have an old version installed, it is best to remove it, although you may also be able to get things to work by modifying LD_LIBRARY_PATH configure: error: * libmcrypt was not found I also made a file at /etc/ld.so.conf.d/libmcrypt.conf with /usr/local/libmcrypt in it and ran /sbin/ldconfig. I might have screwed things up by trying to reinstall libmcrypt without the configure arguments. Any suggestions on what to do now?
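
    The configure output itself names the two things to verify: where libmcrypt's shared library actually landed, and whether the runtime linker can see it. With a default ./configure && make install, libmcrypt installs into /usr/local/lib, in which case the /usr/local/libmcrypt path used above would be the wrong directory to register. A sketch under that assumption; the mcrypt source path is a placeholder, and ./configure --help will confirm whether the --with-libmcrypt-prefix option exists in your version.

      # Confirm where libmcrypt really is, register that directory, then re-run configure.
      ls /usr/local/lib/libmcrypt.so*                     # adjust if it is somewhere else
      echo "/usr/local/lib" > /etc/ld.so.conf.d/libmcrypt.conf
      /sbin/ldconfig
      ldconfig -p | grep mcrypt                           # should list libmcrypt now
      cd /path/to/mcrypt-source && ./configure --with-libmcrypt-prefix=/usr/local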

    Read the article
