Search Results

Search found 41598 results on 1664 pages for 'segmentation fault'.

  • svnstat script

    - by Kyle Hodgson
    So I'm building out a shell script to check out all of our relevant svn repositories for analysis in svnstat. I've gotten all of this to work manually; now I'm writing up a bash script in Cygwin on my Vista laptop, as I intend to move this to a Linux server at some point.

    Edit: I gave up on this and wrote a simple .bat script. I'll figure out the Linux deployment some other way.

    Edit: added the sleep 30 and svn log commands. I can tell now, with the svn log command, that it's not getting to the svn log ... this time, it did Applications, ran the log, then checked out Database, and froze. I'll put the sleep 30 before and after the log this time. co2.sh:

        #!/bin/bash
        function checkout {
            mkdir $1
            svn checkout svn://dev-server/$1 $1
            svn log --verbose --xml >> svn.log $1
            sleep 30
        }
        cd /cygdrive/c/Users/My\ User/Documents/Repos/wc
        checkout Applications
        checkout Database
        checkout WebServer/www.mysite.com
        checkout WebServer/anotherhost.mysite.com
        checkout WebServer/AnotherApp
        checkout WebServer/thirdhost.mysite.com
        checkout WebServer/fourthhost.mysite.com
        checkout WebServer/WebServices

    It works, for the most part - but for some reason it has a tendency to stop working after a few repositories, usually right after finishing one repository and before starting the next. When it fails, it will not recover on its own. I've tried commenting out the svn line; it goes in and creates all the directories just fine when I do that - so it's not that. I'm looking for direction as well as direct advice. Cygwin has been very stable for me, but I did start using the native rxvt instead of "bash in a cmd.exe window" recently. I don't think that's the problem, as I've left top running on remote systems all night and rxvt didn't seem to mind. Also, I haven't done any bash scripting in Cygwin before, so I suppose this might not be recommended, though I can't see why not. I don't want all of WebServer, hence me only checking out certain folders like that. What I suspect is that something is hanging up the svn checkout. Any ideas here?

    Edit: this time when I hit Ctrl+Z to cancel out, I forgot I was on Windows and typed ps to see if the job was still running; and as you can see, there are lots of svn processes hanging around... strange.
        Kyle Hodgson@KyleHodgson-PC ~/winUser/Documents/Repos $ jobs
        [1]-  Stopped                 bash co2.sh
        [2]+  Stopped                 ./co2.sh
        Kyle Hodgson@KyleHodgson-PC ~/winUser/Documents/Repos $ kill %1
        [1]-  Stopped                 bash co2.sh
        Kyle Hodgson@KyleHodgson-PC ~/winUser/Documents/Repos $
        [1]-  Terminated              bash co2.sh
        Kyle Hodgson@KyleHodgson-PC ~/winUser/Documents/Repos $ ps
              PID  PPID  PGID  WINPID  TTY  UID    STIME COMMAND
             7872     1  7872    2340    0 1000   Jun 29 /usr/bin/svn
             7752     1  6140    7828    1 1000   Jun 29 /usr/bin/svn
             6192     1  5044    2192    1 1000   Jun 30 /usr/bin/svn
             7292     1  7452    1796    1 1000   Jun 30 /usr/bin/svn
             6236     1  7304    7468    2 1000    Jul 2 /usr/bin/svn
             1564     1  5032    7144    2 1000    Jul 2 /usr/bin/svn
             9072     1  3960    6276    3 1000    Jul 3 /usr/bin/svn
             5876     1  5876    5876  con 1000 11:22:10 /usr/bin/rxvt
              924  5876   924   10192    4 1000 11:22:10 /usr/bin/bash
             7212     1  7332    5584    4 1000 13:17:54 /usr/bin/svn
             9412     1  5480    8840    4 1000 15:38:16 /usr/bin/svn
        S    8128   924  8128    9452    4 1000 17:38:05 /usr/bin/bash
             9132  8128  8128    8172    4 1000 17:43:25 /usr/bin/svn
             3512     1  3512    3512  con 1000 17:43:50 /usr/bin/rxvt
        I   10200  3512 10200    6616    5 1000 17:43:51 /usr/bin/bash
             9732     1  9732    9732  con 1000 17:45:55 /usr/bin/rxvt
             3148  9732  3148    8976    6 1000 17:45:55 /usr/bin/bash
             5856  3148  5856     876    6 1000 17:51:00 /usr/bin/vim
             7736   924  7736    8036    4 1000 17:53:26 /usr/bin/ps
        Kyle Hodgson@KyleHodgson-PC ~/winUser/Documents/Repos $ jobs
        [2]+  Stopped                 ./co2.sh

    Here's an strace on the PID of the hung svn program; it's been like this for hours. Looks like it's just doing nothing. I keep suspecting that some interruption on the server is causing this; does svn have a locking mechanism I'm not aware of?

        Kyle Hodgson@KyleHodgson-PC ~/winUser/Documents/Repos $ strace -p 7304
        **********************************************
        Program name: C:\cygwin\bin\svn.exe (pid 7304, ppid 6408)
        App version:  1005.25, api: 0.156
        DLL version:  1005.25, api: 0.156
        DLL build:    2008-06-12 19:34
        OS version:   Windows NT-6.0
        Heap size:    402653184
        Date/Time:    2009-07-06 18:20:11
        **********************************************
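    A hedged guess at the hang: svn blocking on an interactive prompt (credentials, certificate trust) that never shows up in the rxvt session. Below is a sketch of the same loop with svn's --non-interactive flag, so a prompt fails loudly instead of stalling, plus a small log marker per repository; it assumes the same dev-server URL layout as the script above:

        #!/bin/bash
        # co3.sh - like co2.sh, but prompts can't silently hang a checkout
        checkout() {
            mkdir -p "$1"        # -p so nested WebServer/... paths work
            echo "=== $1 ===" >> svn.log
            svn checkout --non-interactive "svn://dev-server/$1" "$1" || return 1
            svn log --verbose --xml --non-interactive "$1" >> svn.log
        }
        cd "/cygdrive/c/Users/My User/Documents/Repos/wc" || exit 1
        for repo in Applications Database WebServer/www.mysite.com \
                    WebServer/anotherhost.mysite.com WebServer/AnotherApp \
                    WebServer/thirdhost.mysite.com WebServer/fourthhost.mysite.com \
                    WebServer/WebServices; do
            checkout "$repo" || echo "FAILED: $repo" >> svn.log
        done

    With the per-repo markers in svn.log, the last "===" line before a freeze identifies exactly which repository the hang belongs to.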

  • Windows 7 Enterprise MAK needs Reactivation?

    - by chris
    I have a VMware image that contains our "standard workstation", where I do a lot of testing. I used a Windows 7 Enterprise MAK key (from MSDN) for activation, because the doc said that MAK keys don't have to be reactivated when the hardware changes. Activation was done with:

        slmgr.vbs /ipk LICENSE-KEY
        slmgr.vbs /ato

    Now, after some testing where the virtual hardware was changed, it says it "needs to activate because the hw has changed". What did I do wrong?
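    A hedged place to start: slmgr.vbs can report the current licensing state, which shows whether the MAK activation actually survived the hardware change, and re-running /ato reactivates with the already-installed key. Keep in mind a MAK carries a finite activation count, so repeated reactivations can draw it down:

        rem Show detailed license status, including activation state
        cscript //nologo %windir%\system32\slmgr.vbs /dlv
        rem Reactivate against Microsoft with the installed MAK
        cscript //nologo %windir%\system32\slmgr.vbs /ato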

  • Passive ftp on Server 2008

    - by xpda
    I have a new Windows 2008 server with IIS7. When I connect to the ftp in active mode, it works fine. In passive mode, it connects, but then times out trying to get the directory listing. I tried disabling both firewalls, but it didn't help. I've tried this with different client machines and different ftp client software, with no change. Any ideas?
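    One hedged guess: in passive mode the server picks a high data port, so IIS7's FTP service needs its passive port range and external address configured (and those ports reachable) before directory listings will work. A sketch with appcmd, using an illustrative port range and placeholder address:

        rem Define the passive (data channel) port range for the FTP service
        %windir%\system32\inetsrv\appcmd set config /section:system.ftpServer/firewallSupport /lowDataChannelPort:5000 /highDataChannelPort:5100 /commit:apphost
        rem Advertise the server's public address to passive-mode clients
        %windir%\system32\inetsrv\appcmd set config /section:system.applicationHost/sites /siteDefaults.ftpServer.firewallSupport.externalIp4Address:"203.0.113.10" /commit:apphost
        net stop ftpsvc && net start ftpsvc

    Active mode working while passive times out on the listing is the classic signature of the data channel being blocked, which is why the firewall-support settings are the first suspect.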

  • 'skb rides the rocket' on Xen VM

    - by Kye
    I've just set up Ubuntu 13.10 server as a VM on my Ubuntu/Xen server, and I'm getting these weird lines in my syslog:

        Nov 12 10:26:32 human kernel: [130782.315333] xennet: skb rides the rocket: 19 slots
        Nov 12 10:26:32 human kernel: [130782.362405] xennet: skb rides the rocket: 20 slots
        Nov 12 10:26:32 human kernel: [130782.408458] xennet: skb rides the rocket: 19 slots
        Nov 12 10:26:32 human kernel: [130782.490260] xennet: skb rides the rocket: 20 slots
        Nov 12 10:26:32 human kernel: [130782.541931] xennet: skb rides the rocket: 19 slots
        Nov 12 10:26:35 human kernel: [130785.226635] xennet: skb rides the rocket: 19 slots
        Nov 12 10:26:35 human kernel: [130785.261026] xennet: skb rides the rocket: 21 slots
        Nov 12 10:26:35 human kernel: [130785.469306] xennet: skb rides the rocket: 19 slots
        Nov 12 10:26:36 human kernel: [130786.552730] xennet: skb rides the rocket: 21 slots
        Nov 12 10:26:38 human kernel: [130788.212747] xennet: skb rides the rocket: 20 slots
        Nov 12 10:26:38 human kernel: [130788.257544] xennet: skb rides the rocket: 19 slots
        Nov 12 10:26:38 human kernel: [130788.903841] xennet: skb rides the rocket: 19 slots

    Unsure of what they mean, and Google has nothing meaningful. Any help is appreciated.
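    For context: the Xen netfront driver logs this when an outgoing packet needs more ring slots (fragments) than the backend accepts, and the packet is dropped. The commonly reported workaround is to disable segmentation offloads on the guest's interface so such oversized skbs aren't built in the first place; eth0 is an assumption here:

        sudo ethtool -K eth0 tso off gso off

    Bulk-transfer throughput may drop somewhat with offloads off, which is the usual trade-off mentioned alongside this workaround.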

  • How do I install PHP with JSON and OAuth on Mac Snow Leopard?

    - by meilas
    I want to use the Dropbox API via this library: http://code.google.com/p/dropbox-php/. I installed MAMP, and then I tried sudo pecl install oauth, but I got the following:

        downloading oauth-1.0.0.tgz ...
        Starting to download oauth-1.0.0.tgz (42,834 bytes)
        ............done: 42,834 bytes
        6 source files, building
        running: phpize
        Configuring for:
        PHP Api Version:         20090626
        Zend Module Api No:      20090626
        Zend Extension Api No:   220090626
        building in /var/tmp/pear-build-root/oauth-1.0.0
        running: /private/var/tmp/apache_mod_php/apache_mod_php-53~1/Build/tmp/pear/temp/oauth/configure
        checking for grep that handles long lines and -e... /usr/bin/grep
        checking for egrep... /usr/bin/grep -E
        checking for a sed that does not truncate output... /opt/local/bin/gsed
        checking for cc... cc
        checking for C compiler default output file name... a.out
        checking whether the C compiler works... yes
        checking whether we are cross compiling... no
        checking for suffix of executables...
        checking for suffix of object files... o
        checking whether we are using the GNU C compiler... yes
        checking whether cc accepts -g... yes
        checking for cc option to accept ISO C89... none needed
        checking how to run the C preprocessor... cc -E
        checking for icc... no
        checking for suncc... no
        checking whether cc understands -c and -o together... yes
        checking for system library directory... lib
        checking if compiler supports -R... no
        checking if compiler supports -Wl,-rpath,... yes
        checking build system type... i686-apple-darwin10.4.0
        checking host system type... i686-apple-darwin10.4.0
        checking target system type... i686-apple-darwin10.4.0
        checking for PHP prefix... /usr
        checking for PHP includes... -I/usr/include/php -I/usr/include/php/main -I/usr/include/php/TSRM -I/usr/include/php/Zend -I/usr/include/php/ext -I/usr/include/php/ext/date/lib
        checking for PHP extension directory... /usr/lib/php/extensions/no-debug-non-zts-20090626
        checking for PHP installed headers prefix... /usr/include/php
        checking if debug is enabled... no
        checking if zts is enabled... no
        checking for re2c... no
        configure: WARNING: You will need re2c 0.13.4 or later if you want to regenerate PHP parsers.
        checking for gawk... gawk
        checking for oauth support... yes, shared
        checking for cURL in default path... found in /usr
        checking for ld used by cc... /usr/libexec/gcc/i686-apple-darwin10/4.2.1/ld
        checking if the linker (/usr/libexec/gcc/i686-apple-darwin10/4.2.1/ld) is GNU ld... no
        checking for /usr/libexec/gcc/i686-apple-darwin10/4.2.1/ld option to reload object files... -r
        checking for BSD-compatible nm... /usr/bin/nm
        checking whether ln -s works... yes
        checking how to recognise dependent libraries... pass_all
        checking for ANSI C header files... yes
        checking for sys/types.h... yes
        checking for sys/stat.h... yes
        checking for stdlib.h... yes
        checking for string.h... yes
        checking for memory.h... yes
        checking for strings.h... yes
        checking for inttypes.h... yes
        checking for stdint.h... yes
        checking for unistd.h... yes
        checking dlfcn.h usability... yes
        checking dlfcn.h presence... yes
        checking for dlfcn.h... yes
        checking the maximum length of command line arguments... 196608
        checking command to parse /usr/bin/nm output from cc object... rm: conftest.dSYM: is a directory
        rm: conftest.dSYM: is a directory
        rm: conftest.dSYM: is a directory
        rm: conftest.dSYM: is a directory
        ok
        checking for objdir... .libs
        checking for ar... ar
        checking for ranlib... ranlib
        checking for strip... strip
        rm: conftest.dSYM: is a directory
        rm: conftest.dSYM: is a directory
        checking if cc static flag works... rm: conftest.dSYM: is a directory
        yes
        checking if cc supports -fno-rtti -fno-exceptions... rm: conftest.dSYM: is a directory
        no
        checking for cc option to produce PIC... -fno-common
        checking if cc PIC flag -fno-common works... rm: conftest.dSYM: is a directory
        yes
        checking if cc supports -c -o file.o... rm: conftest.dSYM: is a directory
        yes
        checking whether the cc linker (/usr/libexec/gcc/i686-apple-darwin10/4.2.1/ld) supports shared libraries... yes
        checking dynamic linker characteristics... darwin10.4.0 dyld
        checking how to hardcode library paths into programs... immediate
        checking whether stripping libraries is possible... yes
        checking if libtool supports shared libraries... yes
        checking whether to build shared libraries... yes
        checking whether to build static libraries... no
        creating libtool
        appending configuration tag "CXX" to libtool
        configure: creating ./config.status
        config.status: creating config.h
        running: make
        /bin/sh /private/var/tmp/pear-build-root/oauth-1.0.0/libtool --mode=compile cc -I. -I/private/var/tmp/apache_mod_php/apache_mod_php-53~1/Build/tmp/pear/temp/oauth -DPHP_ATOM_INC -I/private/var/tmp/pear-build-root/oauth-1.0.0/include -I/private/var/tmp/pear-build-root/oauth-1.0.0/main -I/private/var/tmp/apache_mod_php/apache_mod_php-53~1/Build/tmp/pear/temp/oauth -I/usr/include/php -I/usr/include/php/main -I/usr/include/php/TSRM -I/usr/include/php/Zend -I/usr/include/php/ext -I/usr/include/php/ext/date/lib -DHAVE_CONFIG_H -g -O2 -Wall -g -c /private/var/tmp/apache_mod_php/apache_mod_php-53~1/Build/tmp/pear/temp/oauth/oauth.c -o oauth.lo
        mkdir .libs
        cc -I. "-I/private/var/tmp/apache_mod_php/apache_mod_php-53~1/Build/tmp/pear/temp/oauth" -DPHP_ATOM_INC -I/private/var/tmp/pear-build-root/oauth-1.0.0/include -I/private/var/tmp/pear-build-root/oauth-1.0.0/main "-I/private/var/tmp/apache_mod_php/apache_mod_php-53~1/Build/tmp/pear/temp/oauth" -I/usr/include/php -I/usr/include/php/main -I/usr/include/php/TSRM -I/usr/include/php/Zend -I/usr/include/php/ext -I/usr/include/php/ext/date/lib -DHAVE_CONFIG_H -g -O2 -Wall -g -c "/private/var/tmp/apache_mod_php/apache_mod_php-53~1/Build/tmp/pear/temp/oauth/oauth.c" -fno-common -DPIC -o .libs/oauth.o
        In file included from /private/var/tmp/apache_mod_php/apache_mod_php-53~1/Build/tmp/pear/temp/oauth/php_oauth.h:47,
                         from /private/var/tmp/apache_mod_php/apache_mod_php-53~1/Build/tmp/pear/temp/oauth/oauth.c:14:
        /usr/include/php/ext/pcre/php_pcre.h:29:18: error: pcre.h: No such file or directory
        In file included from /private/var/tmp/apache_mod_php/apache_mod_php-53~1/Build/tmp/pear/temp/oauth/php_oauth.h:47,
                         from /private/var/tmp/apache_mod_php/apache_mod_php-53~1/Build/tmp/pear/temp/oauth/oauth.c:14:
        /usr/include/php/ext/pcre/php_pcre.h:37: error: expected '=', ',', ';', 'asm' or '__attribute__' before '*' token
        /usr/include/php/ext/pcre/php_pcre.h:38: error: expected '=', ',', ';', 'asm' or '__attribute__' before '*' token
        /usr/include/php/ext/pcre/php_pcre.h:44: error: expected specifier-qualifier-list before 'pcre'
        make: *** [oauth.lo] Error 1
        ERROR: `make' failed
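    The build dies because Apple's bundled PHP headers reference pcre.h, which Snow Leopard doesn't ship. A hedged sketch of the usual workaround; it assumes MacPorts (the configure output above already shows /opt/local on the PATH), and note that pecl here builds against the system PHP headers rather than MAMP's:

        sudo port install pcre
        # Put the header where php_pcre.h expects to find it
        sudo ln -s /opt/local/include/pcre.h /usr/include/php/ext/pcre/pcre.h
        sudo pecl install oauth

    After a successful build, extension=oauth.so still has to go into the php.ini of whichever PHP (system or MAMP) will actually run the Dropbox library.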

  • SGE - limit a user to a certain host, using resource quota configuration

    - by pufferfish
    Is it possible to limit a user to a particular host, using the Resource Quota Configuration option in qmon for Sun Grid Engine? I'm thinking of a rule to the effect of:

        {
            ...
            limit users {john} to hostname=compute-1-1.local
        }

    The documentation mentions built-in resource types (slots, arch, mem_total, num_proc, swap_total) and the ability to make custom types.

    Details: SGE 6.1u5 on Rocks.

    Update: The above rule seems to be valid, since using an unknown hostname or mangling the resource name 'hostname' both cause errors.
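    A hedged sketch of the usual RQS idiom for confining a user: instead of granting one host, deny slots everywhere else. The rule name is illustrative; syntax per sge_resource_quota(5):

        {
            name         john_only_compute_1_1
            description  "confine john to compute-1-1.local"
            enabled      TRUE
            limit        users {john} hosts {!compute-1-1.local} to slots=0
        }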

  • Exchange 2007 ignoring Send Connectors (again)

    - by gravyface
    Wow, I'm at a loss here -- I posted this exact same question a while back and it's doing the exact same thing: my Send Connector I've created for "Microsoft Domains" (hotmail.com cost 1) is being ignored again and routed through the "Default" Send Connector (* cost 10). Last time, I had the same cost for both Send Connectors, but I've tried setting the Default connector to 5, 10, 100, etc. and regardless, all mail gets routed through that connector (which smarthosts through Postini). Besides calling an air strike on Redmond, what else can I do? MS is blocking Postini again, need to get this working permanently.
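    Not an answer to the routing mystery, but a hedged way to see what Exchange thinks it's doing, from the Exchange Management Shell (connector details here are taken from the question):

        Get-SendConnector | Format-List Name,AddressSpaces,Enabled,SourceTransportServers
        Get-Queue | Format-List Identity,DeliveryType,NextHopDomain

    One relevant detail: Exchange 2007 picks a connector by the most specific address space match before it ever compares costs, so a hotmail.com connector should beat * for hotmail.com mail regardless of cost. If it doesn't, checking that the connector is enabled and that its source servers include the sending Hub Transport is a reasonable next step.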

  • Redirect as a backup trick? w/o modifying DNS?

    - by acidzombie24
    I specifically looked up how to do something like this (Can you set a backup IP for your server in DNS?) and the answer basically was that you can't. If I, say, specify two IP addresses, could I somehow use an HTTP response header to ignore one temporarily (say 5 mins) and go to the other IP address? Or maybe I can play dead, though I'm unsure how to play dead using nginx. I would then like to come back after my box notices the other box is down, and be some kind of read-only server. I'm sure something like this has been implemented; I am just wondering how I might implement it with two boxes. I'm sure it isn't very difficult? How might I redirect traffic from a backup box to my main server without modifying the DNS?
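    If both boxes can sit behind one proxied address, nginx itself can do the failover with a backup upstream; a minimal sketch (addresses are placeholders):

        upstream app {
            server 10.0.0.1;          # main box
            server 10.0.0.2 backup;   # only tried when the main box is down
        }
        server {
            listen 80;
            location / {
                proxy_pass http://app;
                proxy_next_upstream error timeout;
            }
        }

    The catch is that the proxy itself becomes the single point of failure, so this pattern helps most when the backup box doubles as the proxy, or when a third small box can front the pair.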

  • How to restart php-cgi automatically with spawn-fcgi

    - by mrm8
    I'm running nginx with PHP as FastCGI. It's working just fine; however, php-cgi keeps on exit()ing after serving 500 requests. I tried increasing that value (PHP_FCGI_MAX_REQUESTS), and that worked, but it seems like a workaround. Then I set it to 0, and it hasn't exit()ed yet. But I think there's a reason why php-cgi should be restarted. At the moment, I'm running php-cgi with spawn-fcgi: when the php process exits, spawn-fcgi exits, too. Now, is there a way to automatically restart php (without dirty hacks like while [ 1 ]; do spawn-fcgi; done etc.)?
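    A process supervisor is the usual clean answer. A sketch using supervisord, assuming a spawn-fcgi recent enough to have -n (stay in the foreground so the supervisor can watch it) and the paths/port shown:

        [program:php-cgi]
        command=/usr/bin/spawn-fcgi -n -a 127.0.0.1 -p 9000 -f /usr/bin/php-cgi
        autorestart=true

    daemontools or runit work the same way. With supervision in place, the periodic exits after PHP_FCGI_MAX_REQUESTS become harmless - which is the point of that limit: it caps memory leaked by long-lived PHP workers, so leaving it at a sane value beats setting it to 0.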

  • Two network adapters on Ubuntu Server 9.10 - Can't have both working at once?

    - by Rob
    I'm trying to set up two network adapters in Ubuntu (server edition) 9.10: one for the public internet, the other a private LAN. During the install, I was asked to pick a primary network adapter (eth0 or eth1). I chose eth0, gave the installer the details listed below in the contents of /etc/network/interfaces, and carried on. I've been using this adapter with these settings for the last few days, and everything's been fine.

    Today, I decided it's time to set up the local adapter. I edited /etc/network/interfaces to add the details for eth1 (see below) and restarted networking with sudo /etc/init.d/networking restart. After this, attempting to ping the machine using its external IP address fails, but I can ping its local IP address. If I bring eth1 down using sudo ifdown eth1, I can successfully ping the machine via its external IP address again (but obviously not its internal IP address). Bringing eth1 back up returns us to the original problem state: external IP not working, internal IP working.

    Here's my /etc/network/interfaces (I've removed the external IP information, but these settings are unchanged from when it worked):

        rob@rhea:~$ cat /etc/network/interfaces
        # This file describes the network interfaces available on your system
        # and how to activate them. For more information, see interfaces(5).

        # The loopback network interface
        auto lo
        iface lo inet loopback

        # The primary (public) network interface
        auto eth0
        iface eth0 inet static
            address xxx.xxx.xxx.xxx
            netmask xxx.xxx.xxx.xxx
            network xxx.xxx.xxx.xxx
            broadcast xxx.xxx.xxx.xxx
            gateway xxx.xxx.xxx.xxx

        # The secondary (private) network interface
        auto eth1
        iface eth1 inet static
            address 192.168.99.4
            netmask 255.255.255.0
            network 192.168.99.0
            broadcast 192.168.99.255
            gateway 192.168.99.254

    I then do this:

        rob@rhea:~$ sudo /etc/init.d/networking restart
         * Reconfiguring network interfaces...                          [ OK ]
        rob@rhea:~$ sudo ifup eth0
        ifup: interface eth0 already configured
        rob@rhea:~$ sudo ifup eth1
        ifup: interface eth1 already configured

    Then, from another machine:

        C:\Documents and Settings\Rob>ping [external ip]
        Pinging [external ip] with 32 bytes of data:
        Request timed out.
        Request timed out.
        Request timed out.
        Request timed out.
        Ping statistics for [external ip]:
            Packets: Sent = 4, Received = 0, Lost = 4 (100% loss),

    Back on the Ubuntu server in question:

        rob@rhea:~$ sudo ifdown eth1

    ... and again on the other machine:

        C:\Documents and Settings\Rob>ping [external ip]
        Pinging [external ip] with 32 bytes of data:
        Reply from [external ip]: bytes=32 time<1ms TTL=63
        Reply from [external ip]: bytes=32 time<1ms TTL=63
        Reply from [external ip]: bytes=32 time<1ms TTL=63
        Reply from [external ip]: bytes=32 time<1ms TTL=63
        Ping statistics for [external ip]:
            Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
        Approximate round trip times in milli-seconds:
            Minimum = 0ms, Maximum = 0ms, Average = 0ms

    So... what am I doing wrong?
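    A likely culprit, offered as a guess from the config above: both stanzas declare a gateway, and a machine can only have one default route. When eth1 comes up, its gateway can displace the public one, so replies to external pings leave via the LAN and never make it back. A directly attached private LAN usually needs no gateway line at all:

        # The secondary (private) network interface - no default gateway
        auto eth1
        iface eth1 inet static
            address 192.168.99.4
            netmask 255.255.255.0

    If networks beyond 192.168.99.0/24 must be reachable through 192.168.99.254, a static route scoped to just those networks (e.g. an "up route add -net ... gw 192.168.99.254" line in the stanza) does that without touching the default route.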

  • Parallel prologue and epilogue in Grid Engine

    - by ajdecon
    We have a cluster being used to run MPI jobs for a customer. Previously this cluster used Torque as the scheduler, but we are transitioning to Grid Engine 6.2u5 (for some other features). Unfortunately, we are having trouble duplicating some of our maintenance scripts in the Grid Engine environment. In Torque, we have a prologue.parallel script which is used to carry out an automated health-check on the node. If this script returns a fail condition, Torque will helpfully offline the node and re-queue the job to use a different group of nodes. In Grid Engine, however, the queue "prolog" only runs on the head node of the job. We can manually run our prologue script from the startmpi.sh initialization script, for the mpi parallel environment; but I can't figure out how to detect a fail condition and carry out the same "mark offline and requeue" procedure. Any suggestions?
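    For the head-node part, a hedged sketch of a prolog that mimics the Torque behaviour, leaning on SGE's documented prolog exit-code convention (99 asks the scheduler to reschedule the job; other non-zero codes put the queue instance into an error state). The health-check path is hypothetical, and $QUEUE/$HOSTNAME are assumed to be present in the prolog environment as on typical installations:

        #!/bin/sh
        # prolog: requeue the job and take this node out of service if unhealthy
        if ! /opt/site/bin/node_healthcheck; then     # hypothetical site check
            qmod -d "$QUEUE@$HOSTNAME"                # disable this queue instance
            exit 99                                   # reschedule the job elsewhere
        fi
        exit 0

    The per-slave gap remains: the same check can be invoked from startmpi.sh for each host in the machines file, but a failure detected there can only abort the job rather than requeue it, as far as the standard hooks allow.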

  • PHP Suhosin extension is not loading

    - by wintercounter
    For some reason I have to adjust the suhosin.request.max_vars and suhosin.post.max_vars directives. I'm using ispCP, which ships with the suhosin patch by default, but as I read, I need to install the extension too. I did this with apt-get install php5-suhosin; suhosin.ini appeared in conf.d, and suhosin.so exists in /usr/lib/php5. After the Apache restart, the extension isn't loading. phpinfo() says:

        Scan this dir for additional .ini files: /etc/php5/cgi/conf.d
        additional .ini files parsed: /etc/php5/cgi/conf.d/adodb.ini, /etc/php5/cgi/conf.d/curl.ini, /etc/php5/cgi/conf.d/eAccelerator.ini, /etc/php5/cgi/conf.d/gd.ini, /etc/php5/cgi/conf.d/idn.ini, /etc/php5/cgi/conf.d/imagick.ini, /etc/php5/cgi/conf.d/imap.ini, /etc/php5/cgi/conf.d/mcrypt.ini, /etc/php5/cgi/conf.d/memcache.ini, /etc/php5/cgi/conf.d/mhash.ini, /etc/php5/cgi/conf.d/ming.ini, /etc/php5/cgi/conf.d/mysql.ini, /etc/php5/cgi/conf.d/mysqli.ini, /etc/php5/cgi/conf.d/pdo.ini, /etc/php5/cgi/conf.d/pdo_mysql.ini, /etc/php5/cgi/conf.d/pdo_sqlite.ini, /etc/php5/cgi/conf.d/ps.ini, /etc/php5/cgi/conf.d/pspell.ini, /etc/php5/cgi/conf.d/recode.ini, /etc/php5/cgi/conf.d/snmp.ini, /etc/php5/cgi/conf.d/sqlite.ini, /etc/php5/cgi/conf.d/tidy.ini, /etc/php5/cgi/conf.d/xmlrpc.ini, /etc/php5/cgi/conf.d/xsl.ini

    As you can see, it doesn't load suhosin.ini. What can be the problem?
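    A hedged diagnosis from that listing: the CGI SAPI only scans /etc/php5/cgi/conf.d, and there's no suhosin.ini in it, so the package may have dropped its ini where only another SAPI (or the shared /etc/php5/conf.d) picks it up. Something along these lines would confirm and fix it:

        # Where did the package put its ini, and does the CGI SAPI see it?
        ls -l /etc/php5/conf.d/suhosin.ini
        ls -l /etc/php5/cgi/conf.d/ | grep -i suhosin
        # If the first exists and the second finds nothing, link it in:
        ln -s /etc/php5/conf.d/suhosin.ini /etc/php5/cgi/conf.d/suhosin.ini
        /etc/init.d/apache2 restart

    The ini itself should contain an extension=suhosin.so line; once loaded, the max_vars directives can be adjusted in the same file.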

  • How to push changes from Test server to Live server?

    - by anonymous
    As a beginner, I finally noticed the issue with making changes to the live server I've been working on: now that I have a couple of users on it, I can't keep bringing it down so often. I created an EC2 image of my live server and set up a separate instance on EC2, so now I have two EC2 instances, Stage and Production. I set up GitHub and push changes to Stage, test my code there, and when it's all done and working, push it to the production branch, and everything is good. There is a slight wrinkle here, since I name my files config_stage.js and config_production.js and set up .gitignore on each server, and in my code I read the ENV flags and pick the appropriate config; is this the correct approach?

    My main question is: how do you keep track of non-code changes to the server? For example, I installed HAProxy, Stunnel, Redis, MongoDB and several other things onto the Stage server for testing, and now that it's all working and good, how do I deploy them to Production? Right now, I'm just keeping track of everything I installed and copying configuration files over, which is very tedious, and I'm afraid I may have missed a step somewhere. Is there a better way to port these changes over from my test server to my live server?
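    The fuller answer to "non-code changes" is configuration management (Puppet, Chef, and friends); a lighter stepping stone is to make the server setup itself a script that lives in the same Git repo, so there is no hand-kept list of what was installed. A hedged sketch - package names are illustrative:

        #!/bin/sh
        # provision.sh - run on a fresh or existing Ubuntu instance
        set -e
        apt-get update
        apt-get install -y haproxy stunnel4 redis-server mongodb
        # configs are versioned in the repo and copied into place, never edited live
        cp config/haproxy.cfg /etc/haproxy/haproxy.cfg
        service haproxy restart

    Running the same script on Stage and Production (with the environment chosen by the same ENV convention already used for config_stage.js/config_production.js) replaces the manual checklist, and a missed step becomes a diff in the repo rather than a mystery.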

  • SBS domain name choice

    - by sandymac
    We are about to set up SBS 2011 at my small company (< 10 users). My collaborator wants to name the SBS domain "example.local". I'm of the opinion we should name the SBS domain "corp.example.com" and set up DNS so the "corp" record is an NS record pointing to the SBS server's private IP. FYI: "example.com" isn't the real domain name, and while the website is hosted outside our office, email will be stored on the SBS server in our office, after passing through a spam-filtering smart host hosted elsewhere too.

  • Good book for a software developer doing part-time (Linux) system administration work

    - by Tony Meyer
    In many smaller organisations, developers often end up doing some system administration work (for obvious reasons). A lot of the time they have great developer skills but few system administration skills (perhaps all self-taught), and so have to learn as they go, which is fairly inefficient. Are there canonical (or simply great) books that would help in this situation? More advanced than just using a shell (presumably a developer can do that), but not aimed at someone who hopes to spend many years doing this work. Ideally something fairly generic (although specific to a distribution would be OK), covering databases, networking, general maintenance, etc., not just one specific task. For the most part I'm interested in shell-based work (i.e. no GUI installed), although if there's something outstanding I'm missing, please point it out. (As an analogy: replace "system administration" with C and I'd want K&R; with C++, I'd want Meyers' "Effective C++".)

  • nmap on my webserver shows TCP ports 554 and 7070 open

    - by atc
    I have a webserver that hosts various websites for me. The two services that are accessible outside are SSH and Apache2, running on a non-standard and standard port, respectively. All other ports are closed explicitly via arno-iptables-firewall. The host is running Debian Testing.

    I noticed that a scan of the host using nmap produced different results from different PCs. From my laptop on my home network (behind a BT Homehub), I get the following:

        Not shown: 996 filtered ports
        PORT     STATE SERVICE
        80/tcp   open  http
        554/tcp  open  rtsp
        7070/tcp open  realserver
        9000/tcp open  cslistener

    whereas scanning from a US-based server with nmap 5.00 and a Linux box in Norway running nmap 5.21, I get the following:

        Not shown: 998 filtered ports
        PORT     STATE SERVICE
        80/tcp   open  http
        9000/tcp open  cslistener

    so I hope it's my internal network or ISP that's playing up, but I cannot be sure. Running netstat -l | grep 7070 produces nothing, and similarly for port 554. Can anyone explain the peculiarities I'm seeing?

  • What happens when the USB key or SD card I've installed VMware ESXi on fails?

    - by ewwhite
    An SD (SDHC) card installed in an HP ProLiant DL380p Gen8 server running VMware ESXi just failed. I encountered some rather ominous-looking messages on the vCenter console and in the HP ProLiant ILO event log:

        Lost connectivity to the device ... backing the boot filesystem. As a result, host configuration changes will not be saved to persistent storage.

        Embedded Flash/SD-CARD: Error writing media 0, physical block 848880: Stack Exception.

    VMware advocates the use of USB and SD (SDHC) boot devices for ESXi; it was one of the main reasons the smaller-footprint ESXi was developed (versus the older ESX). I've spent much time highlighting the differences between ESXi's installable and embedded modes to coworkers and clients. However, these failures do seem to happen - in this case, this is my third instance. Luckily, this is a vSphere cluster with SAN storage. What steps should be taken to remediate this failure?

  • Drupal install and permissions

    - by Richard
    So I'm really stuck on this issue. An install process is complaining about write permissions on settings.php and sites/default/files/. However, I've temporarily set these to read/write (chmod 777) and changed the owner/group to "apache", as shown below.

        -bash-4.1$ ls -hal
        total 28K
        drwxrwxrwx. 3 richard richard 4.0K Aug 23 15:03 .
        drwxr-xr-x. 4 richard richard 4.0K Aug 18 14:20 ..
        -rwxrwxrwx. 1 apache  apache  9.3K Mar 23 16:34 default.settings.php
        drwxrwxrwx. 2 apache  apache  4.0K Aug 23 15:03 files
        -rwxrwxrwx. 1 apache  apache     0 Aug 23 15:03 settings.php

    However, the install is still complaining about write permissions. I followed steps one and two of INSTALL.txt, but no luck.

    Update: To further explore the situation, I created sites/default/richard.php with the following code:

        <?php
        error_reporting(E_ALL);
        ini_set('display_errors', '1');
        mkdir('files');
        print("<hr> User is ");
        passthru("whoami");
        passthru("pwd");
        ?>

    Run from the command line (under user "richard"), no problem: the folder is created, everything is a go. Run from the web, I get the following:

        Warning: mkdir(): Permission denied in /var/www/html/sites/default/richard.php on line 9
        User is apache
        /var/www/html/sites/default

    Update 2: Safe mode appears to be off...

        -bash-4.1$ cat /etc/php.ini | grep safe | grep mode | grep -v \;
        safe_mode = Off
        safe_mode_gid = Off
        safe_mode_include_dir =
        safe_mode_exec_dir =
        safe_mode_allowed_env_vars = PHP_
        safe_mode_protected_env_vars = LD_LIBRARY_PATH
        sql.safe_mode = Off
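    One hedged suspect on this kind of system: the trailing dot in drwxrwxrwx. means SELinux labels are in effect, and SELinux can deny Apache writes no matter how permissive the mode bits are. Checking, and relabelling with the Apache-writable type (type name per current RHEL/CentOS policy):

        getenforce                    # "Enforcing" means SELinux is active
        ls -Z sites/default           # show security contexts, not just modes
        # allow Apache to write these paths:
        chcon -R -t httpd_sys_rw_content_t sites/default/files sites/default/settings.php

    If that cures it, the earlier denials should also be visible in /var/log/audit/audit.log.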

  • fsck on LVM snapshots

    - by Alpha01
    I'm trying to do some file system checks using LVM snapshots of our Logical Volumes, to see if any of them have dirty file systems. The problem I have is that our LVM only has one Volume Group, with no available space. I was able to do fscks on some of the logical volumes using a loopback file system. However, my question is: is it possible to create a 200GB loopback file system and save it on the same partition/logical volume that I'll be taking a snapshot of? Is LVM smart enough to not take a snapshot copy of the actual snapshot?

        [root@server z]# vgdisplay
          --- Volume group ---
          VG Name               Web2-Vol
          System ID
          Format                lvm2
          Metadata Areas        1
          Metadata Sequence No  29
          VG Access             read/write
          VG Status             resizable
          MAX LV                0
          Cur LV                6
          Open LV               6
          Max PV                0
          Cur PV                1
          Act PV                1
          VG Size               544.73 GB
          PE Size               4.00 MB
          Total PE              139450
          Alloc PE / Size       139450 / 544.73 GB
          Free  PE / Size       0 / 0
          VG UUID               BrVwNz-h1IO-ZETA-MeIf-1yq7-fHpn-fwMTcV

        [root@server z]# df -h
        Filesystem                             Size  Used Avail Use% Mounted on
        /dev/sda2                              9.7G  3.6G  5.6G  40% /
        /dev/sda1                              251M   29M  210M  12% /boot
        /dev/mapper/Web2--Vol-var               12G  1.1G   11G  10% /var
        /dev/mapper/Web2--Vol-var--spool        12G  184M   12G   2% /var/spool
        /dev/mapper/Web2--Vol-var--lib--mysql   30G   15G   14G  52% /var/lib/mysql
        /dev/mapper/Web2--Vol-usr               13G  3.3G  8.9G  27% /usr
        /dev/mapper/Web2--Vol-z                468G  197G  267G  43% /z
        /dev/mapper/Web2--Vol-tmp              3.0G   76M  2.8G   3% /tmp
        tmpfs                                  7.9G   92K  7.9G   1% /dev/shm

    The logical volume in question is /dev/mapper/Web2--Vol-z. I'm afraid that if I create the loopback file system on /dev/mapper/Web2--Vol-z and take a snapshot of it, the disk usage will be tripled, running out of available disk space.
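    On the direct question: a snapshot records only blocks that change after it is created, so LVM won't "copy the snapshot into itself" wholesale. But parking the snapshot's copy-on-write space on the very volume being snapshotted is circular (each write to the CoW file is itself a change to the origin) and is easy to wedge, so treat the following as a sketch of the mechanics rather than a recommendation; sizes are illustrative:

        # 20G sparse file on /z to back an extra PV
        dd if=/dev/zero of=/z/snap-pv.img bs=1M count=0 seek=20480
        losetup /dev/loop0 /z/snap-pv.img
        pvcreate /dev/loop0
        vgextend Web2-Vol /dev/loop0
        # the snapshot's CoW space comes from the loop PV (the only free space in the VG)
        lvcreate -s -L 18G -n z-snap /dev/Web2-Vol/z
        fsck -n /dev/Web2-Vol/z-snap
        # tear down afterwards
        lvremove /dev/Web2-Vol/z-snap
        vgreduce Web2-Vol /dev/loop0
        losetup -d /dev/loop0

    A less fragile variant is the same trick with the image file on storage outside the VG (an NFS mount or external disk), since the root filesystem's 5.6G of free space is unlikely to absorb much churn.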

  • Windows 7 CHKDSK log - What is "Internal Info"?

    - by Ron Klein
    If I run Disk Scan (CHKDSK) on Windows 7, I get the log in the event viewer. If I look inside it, I can see some kind of a binary dump:

        Internal Info:
        00 4f 05 00 53 4a 05 00 ec 46 09 00 00 00 00 00  .O..SJ...F......
        fa 03 00 00 5c 00 00 00 00 00 00 00 00 00 00 00  ....\...........
        48 93 42 00 50 01 41 00 f8 1f 41 00 00 00 41 00  H.B.P.A...A...A.

    Is there any meaningful information in that field, other than debug info for the programmers who developed this tool?

  • Perfmon: which counter identifies that threads are waiting?

    - by frankadelic
    While load testing an ASP.NET app, we find that the pages are taking 20-30 sec under heavy load. We suspect this is because the pages are waiting for database calls or web services. Is there a particular perfmon counter that can identify this sort of bottleneck on the web servers? CPU, Memory, and Disk are normal. Or must we use a tool other than perfmon to track down this bottleneck?
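    If the pages are stalled on downstream calls, the ASP.NET queue counters usually show it even while CPU and disk look idle. A hedged starting set, sampled from the command line (instance names can differ per application):

        typeperf "\ASP.NET\Requests Queued" "\ASP.NET\Request Wait Time" "\ASP.NET Applications(__Total__)\Requests in Application Queue" -si 5

    Climbing queue numbers combined with low CPU are the classic signature of worker threads blocked on the database or web services; from there, SQL-side tracing or a profiler identifies which call is waiting.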

  • Civic Duty Badge

    - by Campo
    As only @Evan Anderson has this badge, I would like some clarification on how to get it. The Civic Duty badge is described as: "Hit the daily reputation cap on 50 days". Does this imply that on day 50 of being a member of this site you must accumulate 200 rep on that day? Thanks Evan! lol

  • OpenVPN and TomatoVPN

    - by Bill Johnson
    Wondering if someone can help me with the following. I have updated my Linksys router with TomatoVPN and used the following config:

        Interface Type:     TAP
        Protocol:           UDP
        Port:               1195
        Firewall:           Custom
        Authorization Mode: Static Key

    I have then inserted the static key generated in OpenVPN, saved, and started the service. connect.ovpn:

        # Use the following to have your client computer send all traffic through your router
        # (remote gateway)
        remote (entered my DNS/DHCP server's external IP address here)
        port 1195
        dev tap
        secret static.key.txt
        proto udp
        comp-lzo
        route-gateway 192.168.1.1
        redirect-gateway
        float

    I've then placed my static key in a file in the same directory as connect.ovpn (static.key.txt). Now, OpenVPN is installed on a laptop that I use at home. I have plugged the laptop into my home connection and started connect.ovpn. The Local Area Connection is connected as 'Home Network 3', and when I start OpenVPN it is connected as 'Local Area Connection 2', which shows as 'Unidentified Network' with what appears to be no network access. TAP-Win32 Adapter V9 appears to be the adapter's name, and its IP and DNS properties are set to automatic. If I open the OpenVPN GUI, it shows an error message saying "Connecting to connect has failed". Looking at the error message behind this pop-up, one line says:

        TCP/UDP Socket bind failed on local address [undef]:1195 Address already in use [WSAEADDRINUSE]

    Could anyone possibly help me further with this please?
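    That bind failure reads as the client trying to listen on local port 1195 itself - either a second OpenVPN instance is already running, or the port directive is being applied locally as well as remotely. A hedged adjustment for the client side of a static-key setup: give remote the port explicitly and add nobind so the client uses an ephemeral source port (both are standard OpenVPN 2.x directives; the rest is the original config unchanged):

        remote <router external IP> 1195
        nobind
        dev tap
        proto udp
        secret static.key.txt
        comp-lzo
        route-gateway 192.168.1.1
        redirect-gateway
        float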

  • CentOS 5.5 APIC issue on ESX 4.1 & ML115

    - by Adnan
    Hi, I've just installed vSphere 4.1 on an HP ProLiant ML115 G5 Quad-core and am trying to install CentOS 5.5 as a guest system. However, when the guest boots up I get a calibrate_APIC_clock warning and a kernel panic message. I've come across this knowledge base article on the vmware website which suggests moving the guest onto another Intel based host (!). Funnily enough I don't have a collection of spare host servers sitting around, so can anyone suggest another solution? Alternatively, would installing an earlier version of CentOS get around this issue, or would a yum update put me back to square one? How about BIOS settings, could anything be tweaked there? Thanks.
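    Not from the VMware KB, just a commonly suggested diagnostic for APIC-related boot panics on kernels of this era: boot the guest once with APIC support disabled and see whether it comes up. In the guest's /boot/grub/grub.conf (or at the GRUB prompt), append to the kernel line; noapic and nolapic are standard kernel parameters, and the kernel version and root device shown are placeholders:

        kernel /vmlinuz-2.6.18-194.el5 ro root=... noapic nolapic

    If it boots this way, that points at the APIC timing issue rather than the install media, though whether this is a fix or merely a mask is an open question.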

  • Most efficient way to connect an ISAPI DLL to a Windows service

    - by Mike Trader
    I am writing a custom server for a client. They want scalability, so I must use a thread pool and probably an I/O completion port to regulate it. The main requirement is that a Windows service manage the HTTP requests, for a number of reasons. One example: a client session spans many requests, and continuity must be maintained. Another: the ISAPI DLL will be in the IIS address space, so its code will be lean and very carefully implemented, whereas the extensive processing in the Windows service may get unruly over the course of a lengthy development; if the service crashes, it will not take out IIS. Anyway, the remaining decision is how to have these two processes communicate. We have talked about pipes, TCP, global memory, and even a single pipe with multiplexed data a la FastCGI. Would love to hear anyone's experience with a decision like this.
