Search Results


  • Fedora, ssh and sudo

    - by Ricky Robinson
    I have to run a script remotely on several Fedora machines through ssh. Since the script requires root privileges, I do: $ ssh me@remote_host "sudo touch test_sudo" #just a simple example sudo: no tty present and no askpass program specified The remote machines are configured in such a way that the password for sudo is never asked for. For the above error, the most common fix is to allocate a pseudo-terminal with the -t option in ssh: $ ssh -t me@remote_host "sudo touch test_sudo" sudo: no tty present and no askpass program specified Let's try to force this allocation with -t -t: $ ssh -t -t me@remote_host "sudo touch test_sudo" sudo: no tty present and no askpass program specified Nope, it doesn't work. In /etc/sudoers of course I have this line: #Defaults requiretty ... but I can't manually change it on tens of remote machines. Am I missing something here? Is there an easy fix? EDIT: Here is the sudoers file of a host where ssh me@host "sudo stat ." works. Here is the sudoers file of a host where it doesn't work. EDIT 2: Running tty on a host where it works: $ ssh me@host_ok tty not a tty $ ssh -t me@host_ok tty /dev/pts/12 Connection to host_ok closed. $ ssh -t -t me@host_ok tty /dev/pts/12 Connection to host_ok closed. Now on a host where it doesn't work: $ ssh me@host_ko tty not a tty $ ssh -t me@host_ko tty not a tty Connection to host_ko closed. $ ssh -t -t me@host_ko tty not a tty Connection to host_ko closed.
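
    A hedged sketch of one workaround, assuming the failing hosts have an active Defaults requiretty line, that their /etc/sudoers includes the usual #includedir /etc/sudoers.d directive, and that sudo does get a tty when ssh -tt is run from a real interactive terminal (the transcript above suggests some hosts may refuse even that, in which case the drop-in has to be pushed by other means, e.g. configuration management). Host names and the drop-in file name are placeholders:

      for host in host1 host2 host3; do
        # -tt forces a pseudo-terminal even though a remote command is given,
        # which satisfies requiretty for this one interactive bootstrap step.
        ssh -tt "me@${host}" \
          'echo "Defaults:me !requiretty" | sudo tee /etc/sudoers.d/me-notty >/dev/null \
           && sudo chmod 440 /etc/sudoers.d/me-notty \
           && sudo visudo -cf /etc/sudoers.d/me-notty'
      done

    Once the drop-in is in place, the original non-interactive calls (ssh me@host "sudo ...") should no longer need a tty on those hosts.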

  • What cloud backup solution supports a "backup server to cloud" configuration?

    - by Gepeto
    What online backup tool allows you to: A) Back up Windows, Linux and optionally Mac desktops and servers to the cloud B) Do so by first backing up to a central server or appliance C) Allow restoring from that appliance when possible, and if not, go to the cloud For now the best option I have seen is i365 by Seagate, with an appliance between the local computers and the cloud. I know Microsoft also has an i365 plugin for DPM, as well as an Iron Mountain plugin. However, I feel that there must be a simpler way to do this. Can any of the "simpler" solutions like Jungle Disk or anything else going to S3, Mozy, Carbonite, CrashPlan, etc. do this? Thank you

  • How do I get the latest FastCGI and PHP versions to peacefully coexist on IIS 6?

    - by BHelman
    I have been going round and round trying to get any sort of PHP running on IIS 6. I somehow managed to successfully get version 5.1.4 running using the php5isapi.dll file. However, I want to upgrade a website to begin using a Content Management System. I have never dug into CMS before so I'm open to programs that are easy to use. I am currently looking into TomatoCMS and ImpressCMS - but that's beside the point. I have never done an installation with PHP before and I think I'm getting familiar with how it works. However, the current situation is this. Microsoft's Web Platform Installer 2.0 installed FastCGI for me. I need to upgrade to PHP 5.3.1 for a CMS system. So I downloaded the Windows installer and let it go at it. After consulting several other blog articles, I believe I know how it is supposed to work but I am currently not having luck. THE SETUP *.php is a registered extension in IIS 6 for all websites (on Win 2k3). The application that it calls is C:\Windows\system32\inetsrv\fcgiext.dll, like it should be. The fcgiext.ini config has the proper lines: [Types] php=PHP [PHP] ext=C:\program files\PHP\php-cgi.exe And the php.ini file also has the correct configs. All extensions are disabled and I changed the correct things for FastCGI. And everything is registered correctly with the PATH variable. Everything is exactly how it should be. BUT when I launch the "info.php" page on another computer, I get the following error: FastCGI Error The FastCGI Handler was unable to process the request. Error Details: * Section [PHP] not found in config file. * Error Number: 1413 (0x80070585). * Error Description: Invalid index. HTTP Error 500 - Server Error. Internet Information Services (IIS) A quick Google search reveals that I have it all set up correctly as far as the INIs go and the mapping of the php extension. I am completely at a loss. Does anyone have any suggestions? Although the server is hosting three small websites, I don't really care what I have to do to it to get it to work.
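
    One cause commonly reported for this exact pairing of "Section [PHP] not found" with error 1413 (an assumption to check, not something established above) is that on 64-bit Windows Server 2003 there are two copies of fcgiext.ini, one under System32\inetsrv and one under SysWOW64\inetsrv, and a 32-bit worker process reads the SysWOW64 copy rather than the one that was edited. Whichever copy the handler actually reads needs the section layout below; note that Microsoft's documented samples use ExePath as the key name, so that is also worth comparing against the ext= line quoted above:

      [Types]
      php=PHP

      [PHP]
      ExePath=C:\Program Files\PHP\php-cgi.exe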

  • How to send massive mail in Windows [closed]

    - by Magic
    I'm looking for a solution that can send mail in bulk on Windows. I'm not spamming; my company wants to send mail to our users. I don't want to use a third-party SMTP server (like Google Mail), because it will ask for a captcha when sending several mails in a row. Please suggest a solution. EDIT: I only want to send mail from Windows. Massive only means several hundred mails per day. I just want to send our product information to our users.

  • Port-forwarding on livebox to router

    - by Yusuf
    Hello, At home I have two routers, a Livebox and a Netgear. The reason why I need the Livebox is that the phone line cannot be connected to the Netgear router. So I have the Livebox connected to the phone line, the Netgear connected to the Livebox, and all PCs connected to the Netgear. My issue is that for every application or port that I want to give external access to, I have to create an entry in both the Livebox and the Netgear routers; so I would like to know if there's a way to automatically forward all requests to the Netgear router, from which I will then forward to the required IP:port. Thanks in advance.

  • Fighting Spam - What can I do as an: Email Administrator, Domain Owner, or User?

    - by Chris S
    This is a Canonical Question about Fighting Spam. Also related: How to stop people from using my domain to send spam? There are so many techniques and so much to know about fighting spam. What widely used techniques and technologies are available to Administrators, Domain Owners, and End Users to help keep the junk out of our inboxes? We're looking for an answer that covers different tech from various angles. The accepted answer should include a variety of technologies (e.g. SPF/SenderID, DomainKeys/DKIM, Greylisting, DNS RBLs, Reputation Services, Filtering Software [SpamAssassin, etc.]); best practices (e.g. mail on Port 25 should never be allowed to relay, Port 587 should be used; etc.), terminology (e.g. Open Relay, Backscatter, MSA/MTA/MUA, Spam/Ham), and possibly other techniques.
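
    As a small, hedged illustration of two of the items above (the domain and DNSBL zone are just examples), an SPF policy is published as a plain TXT record and can be inspected with dig, and a DNS RBL is queried by reversing the client IP's octets under the list's zone:

      # Look at a domain's published SPF policy (a plain TXT record).
      dig +short TXT example.com

      # Ask a DNSBL about client IP 192.0.2.1: reverse the octets and query
      # under the list's zone; an A answer in 127.0.0.x means "listed".
      dig +short A 1.2.0.192.zen.spamhaus.org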

  • Strange JMeter connection refused on Tomcat

    - by Tommy
    I tried different settings in JMeter and Tomcat. If the number of threads in JMeter is 1~200, then Tomcat is okay. If it is 300, then after serving a few requests, Tomcat starts to output errors. Here is the error shown in JMeter java.net.ConnectException: Connection refused: connect at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.PlainSocketImpl.doConnect(Unknown Source) at java.net.PlainSocketImpl.connectToAddress(Unknown Source) at java.net.PlainSocketImpl.connect(Unknown Source) at java.net.SocksSocketImpl.connect(Unknown Source) at java.net.Socket.connect(Unknown Source) at java.net.Socket.connect(Unknown Source) at sun.net.NetworkClient.doConnect(Unknown Source) at sun.net.www.http.HttpClient.openServer(Unknown Source) at sun.net.www.http.HttpClient.openServer(Unknown Source) at sun.net.www.http.HttpClient.<init>(Unknown Source) at sun.net.www.http.HttpClient.New(Unknown Source) at sun.net.www.http.HttpClient.New(Unknown Source) at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(Unknown Source) at sun.net.www.protocol.http.HttpURLConnection.plainConnect(Unknown Source) at sun.net.www.protocol.http.HttpURLConnection.connect(Unknown Source) at org.apache.jmeter.protocol.http.sampler.HTTPJavaImpl.sample(HTTPJavaImpl.java:483) at org.apache.jmeter.protocol.http.sampler.HTTPSamplerProxy.sample(HTTPSamplerProxy.java:62) at org.apache.jmeter.protocol.http.sampler.HTTPSamplerBase.sample(HTTPSamplerBase.java:1018) at org.apache.jmeter.protocol.http.sampler.HTTPSamplerBase.sample(HTTPSamplerBase.java:1004) at org.apache.jmeter.threads.JMeterThread.process_sampler(JMeterThread.java:411) at org.apache.jmeter.threads.JMeterThread.run(JMeterThread.java:297) at java.lang.Thread.run(Unknown Source) My Tomcat server.xml in Eclipse <!--The connectors can use a shared executor, you can define one or more named thread pools--> <Executor name="tomcatThreadPool" namePrefix="catalina-exec-" maxThreads="2000" minSpareThreads="250" acceptCount="2000"/> <Connector executor="tomcatThreadPool" URIEncoding="UTF-8" connectionTimeout="20000" port="8080" protocol="HTTP/1.1" redirectPort="8443" /> Any idea why this is happening? How do I check that server.xml is correctly used? It is a JSF2 application, if that helps. Thanks in advance.
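
    One thing worth ruling out (a guess, not something established above) is that the refusals come from the operating system rather than from Tomcat's thread pool, e.g. a full accept backlog or ephemeral-port/TIME_WAIT exhaustion when 300 threads hammer one port; as an aside, acceptCount is documented as a Connector attribute rather than an Executor attribute, so it may not be taking effect where it is set above. A quick, hedged way to watch the TCP side while the test runs (Linux commands assumed; adjust for Windows):

      # Count connections to port 8080 by TCP state while JMeter is running;
      # a large, growing TIME_WAIT or SYN_RECV count points at the OS/network
      # layer rather than at Tomcat's maxThreads/acceptCount settings.
      watch -n 1 'netstat -ant | grep ":8080" | awk "{print \$6}" | sort | uniq -c'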

  • fink hangs while compiling Octave on OS X

    - by Mark Bennett
    Disclaimer: I'm totally new to Fink. I'm trying to install Octave (an open-source Matlab clone) on Mountain Lion using Fink, following instructions at http://wiki.octave.org/Octave_for_MacOS_X It's a new installation of Fink, and I've also installed X11 per instructions. I'm using this command (which I believe is correct since everything's 64 bit now): sudo fink install octave-atlas It's hanging after a while, showing this as its last output: ... Setting up xft2-dev (2.2.0-2) ... Clearing dependency_libs of .la files being installed Reading buildlock packages... All buildlocks accounted for. /sw/bin/dpkg-lockwait -i /sw/fink/dists/stable/main/binary-darwin-x86_64/x11/xinitrc_1.5-1_darwin-x86_64.deb (Reading database ... 14871 files and directories currently installed.) Preparing to replace xinitrc 1.5-1 (using .../xinitrc_1.5-1_darwin-x86_64.deb) ... Unpacking replacement xinitrc ... Setting up xinitrc (1.5-1) ... I did notice the process name on the terminal's tab was "sort", so the second time I hit this I tried Control-D (end-of-file), and this did seem to unstick it. I'm wondering if there's some malformed command and sort was trying to read from stdin? Questions: 1: Has anybody else seen this? Google wasn't helpful. 2: Fink outputs a LOT of warnings and errors... is that normal? 3: Wondering if anybody's got Fink to compile Octave on Mountain Lion specifically? And whether they used just "octave" or "octave-atlas". Or if you got it working with MacPorts or Homebrew? 4: Later, Fink failed with "Failed: phase compiling: gnuplot-minimal-4.6.1-1 failed". I haven't started googling that yet... but wondering if anybody's seen that? Also tried MacPorts but got other errors with Octave. And reading online it looks like Homebrew also has issues with Octave on Mountain Lion. 5: Generally looking for anybody who's got Octave running on Mountain Lion with any of the package managers. I've been at this for a couple of days ;-)

  • How do you get the IP addresses of KVM guests that are using a Bridge Network Device from the KVM host

    - by slm
    Does anyone know of a way, using KVM, to find out the IP addresses of KVM guests that are using a bridge interface (br0) through the KVM host? Currently I have br0 set up with a single NIC (eth0) included. The KVM host is a system running CentOS 5.6, in case that matters. The guests are also running CentOS 5.6. BTW, the guests do not just show up in the KVM host's ARP table; I'm looking for better solutions and/or leads.
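
    A hedged sketch of the usual indirect approach, assuming libvirt is managing the guests: pull each guest's MAC address out of its domain XML, then match it against the host's ARP table (the guests only show up there after some traffic, so a broadcast ping or an fping sweep of the subnet may be needed first) or against the DHCP server's lease file if one exists for that segment. Device and naming details are placeholders:

      for vm in $(virsh list --all | awk 'NR>2 && $2 {print $2}'); do
        # First MAC address found in the domain XML.
        mac=$(virsh dumpxml "$vm" | grep -io '[0-9a-f:]\{17\}' | head -1)
        # Look the MAC up in the ARP table (populated only after traffic).
        ip=$(arp -an | grep -i "$mac" | grep -o '[0-9.]\+' | head -1)
        echo "$vm  $mac  ${ip:-unknown}"
      done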

  • Xvnc4 started from xinetd only displays empty gray X screen

    - by Scott Thomason
    Hi. I'm attempting to set up an Ubuntu 10.10 box so that anyone can connect to port 5900 and be greeted by the gdm login manager. To do so, I added a vnc entry in /etc/services and I am starting Xvnc4 using this xinetd config file: service vnc { protocol = tcp socket_type = stream wait = no user = nobody server = /usr/bin/Xvnc server_args = -geometry 1000x700 -depth 24 -broadcast -inetd -once -securitytypes None } This kind of works... I can start multiple sessions all to port 5900, and I get an X screen. The problem is that I only get an empty, gray X screen with no applications started. I know that when you run vncserver from the command line it will look to your ~/.vnc/ directory for your passwd and xstartup files, and I think what I want to do is put "gnome-session" into the xstartup file. However, which xstartup file? The running user is "nobody", who obviously doesn't have a ~/.vnc/ directory. I tried a /root/.vnc/xstartup file and a ~scott/.vnc/xstartup file and it doesn't look like they were even read. I changed the xinetd vnc service so that it would "strace" Xvnc4. I looked through all the "open" lines and didn't get a clue as to what file it was trying to read for xstartup. Can anyone help? I just want a terminal server where the user is presented with a gdm login screen.
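
    The gray screen is what a bare X server shows when nothing ever starts a session on it, and an inetd-spawned Xvnc never reads any per-user xstartup. A commonly used alternative, sketched here as an assumption to try rather than a confirmed fix: have Xvnc request a login screen from the local display manager over XDMCP with -query, after enabling XDMCP in gdm (on this release that normally means Enable=true in the [xdmcp] section of /etc/gdm/custom.conf, followed by a gdm restart):

      service vnc
      {
          protocol     = tcp
          socket_type  = stream
          wait         = no
          user         = nobody
          server       = /usr/bin/Xvnc
          server_args  = -geometry 1000x700 -depth 24 -inetd -once -query localhost -securitytypes None
      }

    With that in place, every connection to port 5900 should get the gdm greeter instead of an empty root window, and no xstartup file is involved at all.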

  • help needed for server hardware configuration

    - by sansknowledge
    Hi, basically I am a software guy who recently got promoted to a managerial role, which requires giving recommendations for a server to run software developed by our company. The software is a workflow management system and the DB is Oracle 11; approximately, the size of daily transactions would be around 40 GB, and it should be connected to ~150 client machines, with the number of client machines growing. Help in terms of CPU, processor, memory, rack and stack or RAID (a concept I have yet to really understand), and OS will be greatly appreciated.

  • Apache has a 2GB file limit on a CIFS network drive?

    - by netvope
    Setup: A Windows and an Ubuntu Server are hosted in VMware ESXi I have a 6GB file on a Windows share The Windows share is mounted on Ubuntu with smbmount A symlink pointing to the 6GB file is created in a public_html directory, which is readable by Apache The problem: wget gets an error Connection closed at byte 2130706432. Retrying. after downloading 2130706432 bytes (exactly 2032 MiB, and it is the same every time) Apache returns 206 Partial Content without showing any errors in the log Same error even if I download from localhost Similar error when Firefox is used instead of wget No error if I md5sum or cp the file on Ubuntu, suggesting that smbmount and the Windows server are OK with 6GB files. No error if Apache serves a 6GB file from the local disk, suggesting that Apache has no problems handling 6GB files. Any ideas why Apache/symlink/smbmount/Windows would cause an error when used together? How can I fix the problem? Software used: VMware ESXi 4 Update 1 Windows Server 2008 R2 Ubuntu 8.04 Server, vmxnet3 Apache 2.2.8 mount.cifs 1.10-3.0.28a
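
    A hedged guess worth testing, since the failure only appears when Apache serves the file from the CIFS mount: Apache 2.2 uses sendfile() and mmap() by default, and both are known to misbehave on some network filesystems. Turning them off just for the directory holding the symlink is cheap to try (the path below is a placeholder for the actual public_html location):

      <Directory "/var/www/public_html">
          EnableSendfile Off
          EnableMMAP Off
      </Directory>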

  • PowWeb and CloudFlare

    - by sonill
    I am using PowWeb as my hosting provider and CloudFlare as a free CDN. It's been a few weeks since my website went down, and it says "website down, no cache version available". And to add to it, I cannot access PowWeb, or any website hosted on PowWeb, from my ISP connection, so I am having trouble solving my CloudFlare issue. I just wanted to know if the problem I am facing is on the CloudFlare side or the PowWeb side. Sometimes I can access my website from my end, but my friend in another country says it's been down for days. Does anyone have any suggestions? Thank you
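
    One hedged way to tell the two sides apart from any machine with internet access: request the page from the origin directly, bypassing CloudFlare, by sending the request to the PowWeb IP with the site's Host header. If that works while the normal URL shows CloudFlare's error page, the origin is up and the problem sits between CloudFlare and PowWeb (or in the DNS/proxy settings); if it fails too, the hosting side is down. The domain and IP below are placeholders:

      # Origin IP as shown in the PowWeb control panel (placeholder here).
      curl -sI http://203.0.113.10/ -H "Host: www.example.com"

      # Same request through the normal, CloudFlare-proxied hostname.
      curl -sI http://www.example.com/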

  • pam_filter usage prevent passwd from working

    - by Henry-Nicolas Tourneur
    Hello everybody, I have PAM+LDAP with SSL running on Debian Lenny, and it works well. I always want to restrict who's able to connect; in the past I used pam_groupdn for that, but I recently got a situation where I have to accept 2 different groups. So I used pam_filter like this: pam_filter |(groupattribute=server)(groupattribute=restricted_server) The problem is that with this statement, passwd doesn't work anymore with LDAP accounts. Any idea why? Please find hereby some links to my config files: Since serverfault.com only allows me to post 1 link, please find hereunder the link to the other conf files: http://pastebin.org/447148 Many thanks in advance :)

  • [tcp] :/: RPCPROG_NFS: RPC: Program not registered

    - by frankcheong
    I tried to share the root filesystem (/) from a Fedora 9 machine to a FreeBSD machine, but when I tried to mount the / folder it complained with "[tcp] nfs_server:/: RPCPROG_NFS: RPC: Program not registered". I followed the steps below to set up the Fedora NFS server:- Add the below line inside /etc/exports: / nfs_client(rw,no_root_squash,sync) Restart the NFS-related services: service portmapper restart service nfslock restart service nfs restart Export the filesystem using the below command:- exportfs -arv On the NFS client, I troubleshot using the below commands:- rpcinfo -p nfs_server
      program vers proto   port  service
      100000    2   tcp     111  rpcbind
      100000    2   udp     111  rpcbind
      100024    1   udp   32816  status
      100024    1   tcp   34173  status
      100011    1   udp     817  rquotad
      100011    2   udp     817  rquotad
      100011    1   tcp     820  rquotad
      100011    2   tcp     820  rquotad
      100003    2   udp    2049  nfs
      100003    3   udp    2049  nfs
      100021    1   udp   32818  nlockmgr
      100021    3   udp   32818  nlockmgr
      100021    4   udp   32818  nlockmgr
      100005    1   udp   32819  mountd
      100005    1   tcp   34174  mountd
      100005    2   udp   32819  mountd
      100005    2   tcp   34174  mountd
      100005    3   udp   32819  mountd
      100005    3   tcp   34174  mountd
    showmount -e nfs_client Exports list on nfs_server: / nfs_client What else did I miss?
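
    One detail that stands out in the rpcinfo output above (a hedged observation worth confirming): program 100003 (nfs) is registered over udp only, while the client error is prefixed with [tcp], i.e. FreeBSD is asking the portmapper for NFS over TCP and not finding it. Two things to try, with the exact option syntax sketched from memory rather than taken from the post:

      # On the FreeBSD client: mount over UDP instead of TCP.
      mount -t nfs -o udp,nfsv3 nfs_server:/ /mnt

      # Or on the Fedora server: make sure nfsd registers TCP as well
      # (check /etc/sysconfig/nfs for anything disabling TCP), then:
      service nfs restart
      rpcinfo -p localhost | grep nfs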

  • snort analysis of wireshark capture

    - by Ben Voigt
    I'm trying to identify trouble users on our network. ntop identifies high traffic and high connection users, but malware doesn't always need high bandwidth to really mess things up. So I am trying to do offline analysis with snort (don't want to burden the router with inline analysis of 20 Mbps traffic). Apparently snort provides a -r option for this purpose, but I can't get the analysis to run. The analysis system is gentoo, amd64, in case that makes any difference. I've already used oinkmaster to download the latest IDS signatures. But when I try to run snort, I keep getting the following error: % snort -V ,,_ -*> Snort! <*- o" )~ Version 2.9.0.3 IPv6 GRE (Build 98) x86_64-linux '''' By Martin Roesch & The Snort Team: http://www.snort.org/snort/snort-team Copyright (C) 1998-2010 Sourcefire, Inc., et al. Using libpcap version 1.1.1 Using PCRE version: 8.11 2010-12-10 Using ZLIB version: 1.2.5 %> snort -v -r jan21-for-snort.cap -c /etc/snort/snort.conf -l ~/snortlog/ (snip) 273 out of 1024 flowbits in use. [ Port Based Pattern Matching Memory ] +- [ Aho-Corasick Summary ] ------------------------------------- | Storage Format : Full-Q | Finite Automaton : DFA | Alphabet Size : 256 Chars | Sizeof State : Variable (1,2,4 bytes) | Instances : 314 | 1 byte states : 304 | 2 byte states : 10 | 4 byte states : 0 | Characters : 69371 | States : 58631 | Transitions : 3471623 | State Density : 23.1% | Patterns : 3020 | Match States : 2934 | Memory (MB) : 29.66 | Patterns : 0.36 | Match Lists : 0.77 | DFA | 1 byte states : 1.37 | 2 byte states : 26.59 | 4 byte states : 0.00 +---------------------------------------------------------------- [ Number of patterns truncated to 20 bytes: 563 ] ERROR: Can't find pcap DAQ! Fatal Error, Quitting.. net-libs/daq is installed, but I don't even want to capture traffic, I just want to process the capture file. What configuration options should I be setting/unsetting in order to do offline analysis instead of real-time capture?
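
    "Can't find pcap DAQ" usually means Snort cannot locate the DAQ modules at all rather than anything specific to live capture; reading a file with -r still goes through the pcap DAQ in read-back mode. A hedged sketch of the usual checks (the module directory path is a guess and varies by distro and build options):

      # See which DAQ modules this snort build can find at all.
      snort --daq-list

      # Point snort at the directory containing daq_pcap.so explicitly and
      # force the pcap module for offline read-back of the capture file.
      snort --daq-dir /usr/lib64/daq --daq pcap \
            -r jan21-for-snort.cap -c /etc/snort/snort.conf -l ~/snortlog/

    If --daq-list shows no pcap module, the net-libs/daq package on the analysis box was likely built without it and needs rebuilding with pcap support.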

  • Sophos Enterprise Console 4.5, Mac Client 7 Not Auto-Populating SEC Info

    - by user65712
    I have Sophos Endpoint Security and Control, which includes Sophos Enterprise Console (SEC). I'm currently running version 4.5 of SEC, which is an older version. I subscribe to Mac updates, and SEC generates a binary Mac installer for me to use on Mac endpoints (Version 7 for Mac, also an older version). However, when I run the installer on Mac endpoints, it installs fine but then never auto-fills out the location of the update server, which is on a network share, and the account credentials used to access it, which I do not know and were generated by Sophos automatically. Previously, I had been able to use the SEC-generated installer to install and run Sophos on a Mac seamlessly; the update location information and account credentials were automatically filled during login, I ran the installer and it was perfectly set up. Now, however, Sophos installs on a Mac but never updates because it doesn't have the update location OR credentials. Has anyone else run across this problem or know why it is happening? Sophos Enterprise Console 4.5.1.0

  • Are there any Microsoft Exchange Clients for iOS and Android that store their local data in an encrypted manner?

    - by Zac B
    I don't feel like this is a product recommendation question, more of a "does this tech even exist and is it feasible" question, but if I'm wrong, feel free to give this question the boot. Context: Our company has a bunch of traveling employees who access the company's Exchange server via their iDevices or Android phones, but because of the data protection laws in the state where our company is based (and the nature of the data our company works with), a recent security audit found that all mobile devices (laptops, phones, etc.) operated by our company need to have all company correspondence and related data encrypted all the time. For laptops, that was easy: BitLocker or TrueCrypt, problem solved. For phones and tablets, however, I'm stumped. Sure, you can put lock screens/passwords on the phones, but the data is still accessible via external extraction, as law enforcement authorities already know. Question: Are there any clients for Microsoft Exchange that run on iOS or Android which store local data encrypted? The people using our mobile devices do a lot of their work while offline, so just giving them OWA access with SSL connection security isn't enough. Are there apps/technologies that present an additional login credential prompt to decrypt locally stored data in the app's storage area on the phone? My gut reaction when I started looking into this was "that doesn't sound like something Apple would allow into the App Store", but I've been wrong before...

  • 1000Base-X layer 2/MAC address details

    - by user69971
    A layer 2 Ethernet frame is sent with a source and destination MAC address. Given a 100Base-TX (copper) trunk between two Cisco switches, I can do a "show interface fa 0/0" on S1 to see the MAC address assigned to the trunking interface, then go to Switch2 and do a "show mac address-table" and find the MAC address of the S1 fa 0/0 interface as a dynamically learned MAC in the table. Given a similar setup with a 1000Base-X (fiber GBIC) trunk, the MAC address shown in "show interface gi 0/0" on S1 does not show up in the MAC address-table of S2. Everything I can find online indicates that 1000Base-X uses largely the same layer 2 format as copper connections. There's some slight alterations - minimum frame size is slightly larger - but the fundamentals of the frame structure appear to be the same, including transmission with a source and destination L2 address. Why doesn't the address of gi 0/0 show up in the MAC address table of the connected switch? The only thing which seems to make sense would be that the GBIC has its own MAC address, almost as if it's acting as a mini 2-port switch or hub, with the switch-assigned MAC address showing up on the interface connection and a different MAC address assigned to the fiber side. If this is the case, is there any way to see the GBIC MAC address on the switch? (I've tried to look up the details in IEEE 802.3z but it doesn't seem to be available without an IEEE membership or purchasing the standard. I can find the base 802.3 PDFs for download, but not 802.3z.)

  • What is the best distro to host a KVM virtualization solution

    - by elventear
    I am looking for a server-oriented distro that we can expect to have decent support but that also offers, as much as possible, some of the latest features that KVM might offer. I am leaning towards Ubuntu LTS 10.04, because, well, it's LTS and more bleeding edge, but I find Ubuntu not serious enough in terms of support (I say this as a heavy Ubuntu user). Given that CentOS 6 is not out yet, I am not sure if going with CentOS 5 would be the best option in terms of getting more features from KVM. Any other distro you would recommend that could meet the criteria of long-term support (at least 4 years)?

  • Netgear VPN endpoint drops connectivity to single IP address

    - by Justin Bowers
    I've recently been having a strange issue with one of the networks I manage. We have about 14 different networks connected together through a Netgear hardware VPN. Everything has been running fine (other than standard connectivity problems) for a few years now, but I've hit a wall with a problem that's just cropped up at one of the VPN endpoint locations. Our primary VPN network is on the 192.168.1.0/24 subnet and our other 13 networks are on the 192.168.2.0/24 - 192.168.14.0/24 subnets. We run a terminal server on the 192.168.1.0/24 network with IP address 192.168.1.100. Starting Thursday of last week, we had a problem with connectivity of the 192.168.2.0/24 network to 192.168.1.100. When troubleshooting the problem, I found that Network 2 (192.168.2.0/24) still had connectivity to the Internet as well as VPN connectivity to Network 1 (192.168.1.0/24). We could ping and connect to any other device other than the server with IP address 192.168.1.100. Also, none of our other networks had an issue accessing 192.168.1.100. I ran a scan on Network 2 after assigning a static IP address to one of the workstations but received no response from 192.168.1.100 (looking for possibly a new device that someone had plugged into Network 2 that had a duplicate IP address with the server). Asking the staff, no one had reported connecting a new device to Network 2 either. I then assigned a secondary IP address of 192.168.1.88 to the server and could ping and connect to the secondary IP address from Network 2, but still couldn't access it via 192.168.1.100. I then just rebooted the Netgear VPN Firewall (FVS318v3) and after it came back up, connectivity to 192.168.1.100 was restored. Beforehand, when checking for devices with a possible duplicate IP address, I did run a check for available wireless access points and stations and found none (our wireless is secured via MAC address access control through a WG102 device). I thought that it may have been a fluke for some reason since everything came back up after a power cycle of the VPN Firewall. Things ran fine for a few days until this afternoon, when the problem happened again. One of our users claimed that they had connectivity problems to the server and after connecting to the computer, I found that I couldn't ping the server address anymore. I could still ping the alternate IP address of the server though, so I went ahead and rebooted the VPN firewall again and connectivity was restored. Unfortunately, I can't find anything in the security or VPN logs of the firewall that helps point me in the right direction, so I thought I would go ahead and ask to see if anyone else has any other insight into why we've started having this problem. I am aware that it could still be a device with a duplicate IP address of the server on Network 2, but every employee states that no such new device has been brought into the network. I know this is a long read, but any help is appreciated! Thanks, Justin
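
    Since the working theory above is a duplicate 192.168.1.100 somewhere on Network 2, one cheap, hedged check is duplicate-address detection with arping, run from a Linux machine on Network 2's local segment the next time the address becomes unreachable. ARP does not cross the VPN, so anything that answers locally there is the misconfigured device rather than the real server; the interface name is a placeholder:

      # Sends ARP probes and exits non-zero if any local host answers for the
      # address; run it while 192.168.1.100 is "unreachable" from Network 2.
      arping -D -I eth0 -c 3 192.168.1.100; echo "exit=$?"

      # A plain ARP query also reveals the responder's MAC, which can be
      # compared against the server's real NIC from a working network.
      arping -I eth0 -c 3 192.168.1.100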

  • nginx starts up before apache

    - by paullb
    I've been fumbling through setting up Redmine on an Ubuntu (12.04) box, and somewhere along the line nginx got set up, and now Apache no longer loads because nginx has already grabbed the port. I tried removing nginx with the command below, but that didn't seem to make any difference. When I restarted the server and pointed my web browser at it, I still got the "Welcome to nginx" message. sudo apt-get purge nginx I have confirmed that nginx is gone because when I run the above now I get as output Package nginx is not installed, so not removed Yet every time I start the machine it is running again. I noticed the following for the running processes (if that is helpful) root 923 0.0 0.0 76784 1280 ? Ss 03:00 0:00 nginx: master process /usr/sbin/nginx www-data 925 0.0 0.0 77092 1704 ? S 03:00 0:00 nginx: worker process www-data 926 0.0 0.1 77092 2204 ? S 03:00 0:00 nginx: worker process www-data 927 0.0 0.0 77092 1704 ? S 03:00 0:00 nginx: worker process www-data 928 0.0 0.0 77092 1704 ? S 03:00 0:00 nginx: worker process Any advice for bringing back apache2 as the "default" (for lack of a better term) web server?
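
    Given that dpkg says the nginx package is not installed yet a master process still starts at boot, a hedged explanation is that this copy of nginx did not come from the "nginx" package itself; on 12.04 "nginx" is a metapackage, so nginx-common/nginx-full (which ship the binary and init script) can remain installed after it is purged, and source or Passenger installs are another possibility. A quick sketch for finding out what is actually running and what starts it:

      # Which binary is the running master, and does any package own it?
      sudo readlink -f /proc/$(pgrep -o -x nginx)/exe
      dpkg -S /usr/sbin/nginx
      dpkg -l 'nginx*'

      # What is wired into the boot sequence?
      ls /etc/init.d/ /etc/init/ | grep -i nginx
      sudo update-rc.d -f nginx remove    # only if an /etc/init.d/nginx script exists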

  • Using Upstart to manage Unicorn w/ rbenv + bundler binstubs w/ ruby-local-exec shebang

    - by codykrieger
    Alright, this is melting my brain. It might have something to do with the fact that I don't understand Upstart as well as I should. Sorry in advance for the long question. I'm trying to use Upstart to manage a Rails app's Unicorn master process. Here is my current /etc/init/app.conf: description "app" start on runlevel [2] stop on runlevel [016] console owner # expect daemon script APP_ROOT=/home/deploy/app PATH=/home/deploy/.rbenv/shims:/home/deploy/.rbenv/bin:$PATH $APP_ROOT/bin/unicorn -c $APP_ROOT/config/unicorn.rb -E production # >> /tmp/upstart.log 2>&1 end script # respawn That works just fine - the Unicorns start up great. What's not great is that the PID detected is not of the Unicorn master, it's of an sh process. That in and of itself isn't so bad, either - if I wasn't using the automagical Unicorn zero-downtime deployment strategy. Because shortly after I send -USR2 to my Unicorn master, a new master spawns up, and the old one dies...and so does the sh process. So Upstart thinks my job has died, and I can no longer restart it with restart or stop it with stop if I want. I've played around with the config file, trying to add -D to the Unicorn line (like this: $APP_ROOT/bin/unicorn -c $APP_ROOT/config/unicorn.rb -E production -D) to daemonize Unicorn, and I added the expect daemon line, but that didn't work either. I've tried expect fork as well. Various combinations of all of those things can cause start and stop to hang, and then Upstart gets really confused about the state of the job. Then I have to restart the machine to fix it. I think Upstart is having problems detecting when/if Unicorn is forking because I'm using rbenv + the ruby-local-exec shebang in my $APP_ROOT/bin/unicorn script. Here it is: #!/usr/bin/env ruby-local-exec # # This file was generated by Bundler. # # The application 'unicorn' is installed as part of a gem, and # this file is here to facilitate running it. # require 'pathname' ENV['BUNDLE_GEMFILE'] ||= File.expand_path("../../Gemfile", Pathname.new(__FILE__).realpath) require 'rubygems' require 'bundler/setup' load Gem.bin_path('unicorn', 'unicorn') Additionally, the ruby-local-exec script looks like this: #!/usr/bin/env bash # # `ruby-local-exec` is a drop-in replacement for the standard Ruby # shebang line: # # #!/usr/bin/env ruby-local-exec # # Use it for scripts inside a project with an `.rbenv-version` # file. When you run the scripts, they'll use the project-specified # Ruby version, regardless of what directory they're run from. Useful # for e.g. running project tasks in cron scripts without needing to # `cd` into the project first. set -e export RBENV_DIR="${1%/*}" exec ruby "$@" So there's an exec in there that I'm worried about. It fires up a Ruby process, which fires up Unicorn, which may or may not daemonize itself, which all happens from an sh process in the first place...which makes me seriously doubt the ability of Upstart to track all of this nonsense. Is what I'm trying to do even possible? From what I understand, the expect stanza in Upstart can only be told (via daemon or fork) to expect a maximum of two forks.
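
    For the PID-tracking half of the problem, a hedged variant of the job above: since bin/unicorn and ruby-local-exec both end in exec, prefixing the command in the script stanza with exec keeps the whole chain on a single PID (the stanza's shell replaces itself with the bash wrapper, which replaces itself with ruby), so Upstart can track the real foreground master without any expect stanza. The zero-downtime USR2 dance still swaps master PIDs, so that part remains outside what Upstart can follow no matter what:

      script
        APP_ROOT=/home/deploy/app
        export PATH=/home/deploy/.rbenv/shims:/home/deploy/.rbenv/bin:$PATH
        exec $APP_ROOT/bin/unicorn -c $APP_ROOT/config/unicorn.rb -E production
      end script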

  • cygwin sshd times out for remote login

    - by reve_etrange
    I have configured SSHD using Cygwin on Windows 7. I have checked and double-checked all of the following points: Port forwarding is correctly configured Windows Firewall is configured to pass port 22 Local login attempts (using Cygwin SSH) succeed sshd_config has UseDNS No Using nmap from remote machine confirms port 22 is accessible /etc/passwd and /etc/group are correctly populated However, remote login attempts time out. This includes from the local network. user@host:~$ ssh -vvv [email protected] OpenSSH_5.5p1 Debian-4ubuntu6, OpenSSL 0.9.8o 01 Jun 2010 debug1: Reading configuration data /home/user/.ssh/config debug1: Applying options for * debug1: Reading configuration data /etc/ssh/ssh_config debug1: Applying options for * debug2: ssh_connect: needpriv 0 debug1: Connecting to the.ip.add.ress [the.ip.add.ress] port 22. debug1: connect to address the.ip.add.ress port 22: Connection timed out ssh: connect to the.ip.add.ress port 22: Connection timed out No messages are logged to /var/log/sshd.log. I suspect that there is a permissions issue with a particular file somewhere, however I have checked the permissions of all my Cygwin binaries, DLLs and the particular files important to Cygwin sshd, including all of: /etc/passwd /etc/group /var /var/log/sshd.log /var/empty Others who have reported this or similar errors appear to have missed one of the points enumerated above. Can anyone point me to a possible solution?
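
    Two hedged checks that help localize this, given that local logins work and remote ones time out: confirm from the Windows side that sshd is listening on all interfaces rather than only loopback, and then see whether a bare TCP connection from outside gets through at all; if it never arrives, the problem is the port forward or upstream filtering rather than Cygwin file permissions:

      # On the Windows 7 box (Cygwin shell): who is listening on 22, and from where?
      netstat -ano | grep ':22 '

      # From the remote machine: does a plain TCP connect succeed?
      nc -vz the.ip.add.ress 22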

  • Creating mdraid device on top of other existing mdraid devices

    - by Dmitriusan
    I'm considering creating something like "hierarchical RAID" and wondering whether it is possible using pure mdraid. Moreover, I'm going to boot from this device. I'm using Ubuntu Server 12.04 LTS with the GRUB2 bootloader. The motivation behind doing that is: I have 4 x 1TB 7200rpm disks. Two are newer and faster (up to 200MB/sec) and the other two are slower (up to 140MB/sec). I want to create a RAID-0 device from them. When creating such a RAID-0 directly from the 4 hard disks, I get a total speed of up to ~480MB/sec. That is roughly 4*120MB/sec, so RAID-0 works at the speed of the slowest device. I have an idea to create a separate RAID-0 md0 device from 500GB partitions of the slower hard disks. Theoretically, this md0 device will have a speed of up to 2*140=280MB/sec. After that, I'm going to add this md0 device to a RAID-0 with the faster disks, finishing with up to 3*200=600MB/sec. The stripe width for this RAID will be 2x bigger than for the underlying RAID with the slow disks. Questions are: is it possible or am I missing something? will that work as expected? can I boot from such a consolidated RAID device? any better ideas? any pitfalls? I don't want to use fakeraid for consolidating the slow disks, for multiple reasons (portability, ability to customize parameters and so on). PS Speed is needed for a home virtualization server and just for experience/fun. Reliability is provided via regular automatic backups to a separate device. PPS I also considered using different stripe widths for hard disks with different speeds in a single RAID, but mdraid does not seem to support that.
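
    Nesting md arrays like this is possible in principle; an md device is an ordinary block device and can be used as a member of another array. A minimal sketch with placeholder device names (slow disks sdc/sdd, fast disks sda/sdb); booting is the fragile part, so the usual practice is to keep /boot on a small RAID-1 or a plain partition that GRUB2 can read, and put only the rest of the system on the nested RAID-0:

      # RAID-0 across partitions on the two slower disks.
      mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdc1 /dev/sdd1

      # Top-level RAID-0 across the two fast disks plus the slow-disk array.
      mdadm --create /dev/md1 --level=0 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/md0

      # Record both arrays so they are assembled from the initramfs at boot.
      mdadm --detail --scan >> /etc/mdadm/mdadm.conf
      update-initramfs -u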
