Search Results

Search found 9 results on 1 page for 'leopd'.


  • APT wedged by kernel version mismatch

    - by Leopd
    Apt is seemingly unable to do anything useful for me, repeatedly giving messages of this form:

        dpkg: dependency problems prevent configuration of linux-server:
         linux-server depends on linux-image-server (= 3.2.0.37.44); however:
          Version of linux-image-server on system is 3.2.0.37.45.
         linux-server depends on linux-headers-server (= 3.2.0.37.44); however:
          Version of linux-headers-server on system is 3.2.0.37.45.
        dpkg: error processing linux-server (--configure):
         dependency problems - leaving unconfigured

    This is basically the same problem as "I cannot install any package (linux-image-server, linux-server dependencies errors)", which was closed as a duplicate of an answer that is totally useless for this situation. None of the advice in that very generic answer about dependencies helps. Explicitly,

        sudo apt-get clean
        sudo apt-get autoclean
        sudo apt-get update

    all have no effect, while

        sudo apt-get -f install
        sudo dpkg --configure -a
        sudo apt-get -u dist-upgrade
        sudo apt-get -o Debug::pkgProblemResolver=yes dist-upgrade

    all give some form of the error message above.

    Read the article

  • Ubuntu 12 crashed and took down network

    - by Leopd
    We recently set up a new Ubuntu 12.04 LTS server on our network. It's not fully configured, so it's not doing much beyond sshd and a default apache2 install. But this evening it appears to have crashed: it wasn't responding to the network or the keyboard. The worst part is that it took down the entire network. My knowledge of the network stack below OSI layer 3 is very limited, so the rest confuses me.

    While this machine was physically connected to the network, no other machine could connect to the outside internet. When things were broken, running arp showed our gateway's IP address (10.0.1.1) listed as "invalid." Unplugging the server from the network fixed the problem, and plugging it back in broke it again. So the crashed server was advertising itself as owning the gateway's IP address?

    There's nothing at all in syslog during the time when it was causing problems. Any ideas about how to figure out what went wrong, or what we can do to prevent it from happening again? I'm hesitant to even put the machine back on the network right now.

    Update: It crashed again, and I ran tcpdump -penn arp (thanks bahamat!) for several minutes and got this (timestamps and duplicate lines removed):

        00:1e:65:f8:dc:24 > ff:ff:ff:ff:ff:ff, ethertype ARP (0x0806), length 60: Request who-has 10.0.1.1 tell 10.0.2.191, length 46
        00:1e:65:f8:dc:24 > ff:ff:ff:ff:ff:ff, ethertype ARP (0x0806), length 60: Request who-has 10.0.1.44 tell 10.0.2.191, length 46
        60:d8:19:d4:71:d6 > ff:ff:ff:ff:ff:ff, ethertype ARP (0x0806), length 60: Request who-has 10.0.1.1 tell 10.0.2.125, length 46
        d4:9a:20:04:e9:78 > ff:ff:ff:ff:ff:ff, ethertype ARP (0x0806), length 42: Request who-has 192.168.1.1 tell 192.168.1.100, length 28

    Update 2: When the network is functioning properly, arping -c4 10.0.1.1 returns this:

        ARPING 10.0.1.1
        60 bytes from c0:c1:c0:77:25:8e (10.0.1.1): index=0 time=267.982 usec
        60 bytes from c0:c1:c0:77:25:8e (10.0.1.1): index=1 time=422.955 usec
        60 bytes from c0:c1:c0:77:25:8e (10.0.1.1): index=2 time=299.215 usec
        60 bytes from c0:c1:c0:77:25:8e (10.0.1.1): index=3 time=366.926 usec
        --- 10.0.1.1 statistics ---
        4 packets transmitted, 4 packets received, 0% unanswered (0 extra)

    When the bad server is plugged in, arping -c4 10.0.1.1 returns:

        ARPING 10.0.1.1
        --- 10.0.1.1 statistics ---
        4 packets transmitted, 0 packets received, 100% unanswered (0 extra)

    Context: 10.0.x.x is the main subnet. 10.0.1.1 is the main internet gateway, 10.0.1.44 is a printer, and 10.0.2.* devices are all laptops / workstations. I have no idea what's using the 192.168.x.x subnet -- your guesses are at least as good as mine. A VM on a workstation? A misconfigured WAP? Somebody re-sharing wifi? A machine that failed to DHCP? The offending Ubuntu server's MAC address ends in cd:80, so it isn't listed in the dump. It should DHCP to 10.0.3.3.

    Thanks for any help. This ARP stuff is all voodoo to me. Packets just go to IP addresses, right? ;)
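
    For what it's worth, a rogue ARP claim like this can also be watched for programmatically. Below is a rough Python sketch using scapy (a third-party package, not something from the question) that flags any ARP reply claiming the gateway's IP from an unexpected MAC; the gateway IP and expected MAC here are taken from the arping output above and are otherwise assumptions.

        from scapy.all import sniff, ARP  # pip install scapy; needs root, like tcpdump

        GATEWAY_IP = "10.0.1.1"             # the gateway from the question
        EXPECTED_MAC = "c0:c1:c0:77:25:8e"  # the healthy gateway MAC per arping

        def check(pkt):
            # op == 2 is an ARP reply ("is-at"); a reply for the gateway's IP
            # from any other MAC is exactly the poisoning suspected above
            if ARP in pkt and pkt[ARP].op == 2:
                if pkt[ARP].psrc == GATEWAY_IP and pkt[ARP].hwsrc.lower() != EXPECTED_MAC:
                    print("rogue ARP reply: %s claims %s" % (pkt[ARP].hwsrc, GATEWAY_IP))

        sniff(filter="arp", prn=check, store=0)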

    Read the article

  • How to configure mysqldump to avoid max_allowed_packet error

    - by Leopd
    Honestly, it baffles me that with a completely default installation of mysql, running mysqldump with default parameters generates a SQL file that can't be imported into another completely default installation of mysql. From what I can gather, it's got something to do with the max_allowed_packet setting and/or the net_buffer_length setting. I've read a bunch about this and tried tweaking it several ways on both the export and import sides, but I keep getting the packet-too-big error on import.

    From everything I've read, here's my best guess:

        mysqldump --net_buffer_length=50000 myschema > giant_file.sql

    because I read here that mysqldump refers to max_allowed_packet as net_buffer_length because ... uhh ... anyway. Then, to import:

        mysql --max_allowed_packet=999999 myschema < giant_file.sql

    But this still doesn't work. How do I export / import the database???
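
    One hedged workaround sketch, written in Python only because shelling out makes the two halves easy to see side by side: stream the dump straight into the target server with the client packet limit raised on both ends. The 512M value and the schema name are placeholders. Note that the receiving server's own max_allowed_packet (set in my.cnf or via SET GLOBAL) still caps what the import may send, which may be why the client flags alone weren't enough.

        import subprocess

        # export with a raised client-side limit...
        dump = subprocess.Popen(
            ["mysqldump", "--max-allowed-packet=512M", "myschema"],
            stdout=subprocess.PIPE,
        )
        # ...and pipe it straight into the import, raised on that side too;
        # the *server's* max_allowed_packet must also be at least this large,
        # or the "packet too big" error comes back regardless
        subprocess.run(["mysql", "--max-allowed-packet=512M", "myschema"],
                       stdin=dump.stdout, check=True)
        dump.stdout.close()
        dump.wait()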

    Read the article

  • Easy way to deploy PHP sites from git

    - by Leopd
    I'm looking for recommendations on how to automate / simplify deployment from a git repository (github) to a hosting service. The hosting service supports FTP (yuck) / SSH / SFTP access. Any good tools out there to give push-button deployment of new revisions? I know it's not a hard script to write, but when you start thinking about things like roll-back and multiple sites, it gets complicated enough that I'd rather not re-invent the wheel.
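
    In case it helps frame the answers, here is roughly the shape of the script being avoided: a minimal Python sketch of snapshot deploys over SSH with rollback via a releases directory and a "current" symlink. The host, paths, and layout are invented placeholders, not a recommendation of any particular tool.

        import subprocess, time

        HOST = "user@example-host"    # placeholder
        APP_ROOT = "/var/www/mysite"  # placeholder; server layout is an assumption

        def run(cmd):
            subprocess.run(cmd, shell=True, check=True)

        def deploy(ref="origin/master"):
            release = time.strftime("%Y%m%d%H%M%S")
            target = "%s/releases/%s" % (APP_ROOT, release)
            run("ssh %s 'mkdir -p %s'" % (HOST, target))
            # ship a clean snapshot of the ref (no .git directory)
            run("git archive %s | ssh %s 'tar -x -C %s'" % (ref, HOST, target))
            # flip the symlink; rollback is re-pointing it at an older release
            run("ssh %s 'ln -sfn %s %s/current'" % (HOST, target, APP_ROOT))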

    Read the article

  • SSL 3.0 warning in Chrome on Ubuntu 10.04LTS

    - by Leopd
    I'm running Apache2 with SSL on Ubuntu 10.04 LTS. Chrome gives me this annoying warning when I inspect the certificate:

        The connection had to be retried using SSL 3.0. This typically means that the server is using very old software and may have other security issues.

    The relevant part of the apache config looks like:

        SSLEngine on
        SSLCertificateFile /etc/ssl/...
        SSLCertificateKeyFile /etc/ssl/...
        SSLCACertificateFile /etc/ssl/...
        SSLProtocol -all +SSLv3 +TLSv1

    I added the last line to try to address this problem, but it's not working. Any advice on properly enabling TLS?
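
    Independent of the Apache side, the negotiated protocol can be verified from any client, which makes it easy to confirm whether a config change actually took. A small Python sketch (the hostname is a placeholder):

        import socket, ssl

        HOST = "example.com"  # placeholder for the real server

        ctx = ssl.create_default_context()
        with socket.create_connection((HOST, 443)) as sock:
            with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
                # prints e.g. "TLSv1.2" once TLS is being negotiated
                print(tls.version())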

    Read the article

  • Catching a python app before it exits

    - by Leopd
    I have a Python app which is supposed to be very long-lived, but sometimes the process just disappears and I don't know why. Nothing gets logged when this happens, so I'm at a bit of a loss. Is there some way, in code, to hook into an exit event, or some other way to get some of my code to run just before the process quits? I'd like to log the state of memory structures to better understand what's going on.
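
    For the normal-shutdown cases, Python's atexit module covers exactly this, and a signal handler can turn SIGTERM into a clean exit so the hook still fires; a minimal sketch (the log filename is a placeholder). Note that SIGKILL and hard crashes, such as the kernel's OOM killer, bypass both, which is worth ruling out if nothing ever gets logged.

        import atexit, logging, signal, sys

        logging.basicConfig(filename="shutdown.log", level=logging.INFO)

        def dump_state():
            # log whatever memory structures matter here
            logging.info("process exiting")

        atexit.register(dump_state)   # runs on normal interpreter exit
        signal.signal(signal.SIGTERM, # convert SIGTERM into a normal exit
                      lambda signum, frame: sys.exit(0))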

    Read the article

  • Django CSRF failure when form posts to a different frame

    - by Leopd
    I'm building a page where I want to have a form that posts to an iframe on the same page. The template looks like this:

        <form action="form-results" method="post" target="resultspane">
          {% csrf_token %}
          <input name="query">
          <input type="submit">
        </form>

        <iframe src="form-results" name="resultspane" width="100%" height="70%">
        </iframe>

    The view behind form-results is getting CSRF errors. Is there something special needed for cross-frame posting?
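
    Cross-frame posting to the same origin shouldn't by itself trip CSRF protection, so one hedged guess is the usual Django culprit: the page wasn't rendered with a RequestContext, leaving {% csrf_token %} empty. A sketch of the rendering view (the view and template names are placeholders):

        from django.shortcuts import render
        from django.views.decorators.csrf import ensure_csrf_cookie

        @ensure_csrf_cookie  # make sure the csrftoken cookie is always set
        def search_page(request):
            # render() uses a RequestContext, so {% csrf_token %} is populated
            return render(request, "search_page.html")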

    Read the article

  • Can django lazy-load fields in a model?

    - by Leopd
    One of my django models has a large TextField which I often don't need to use. Is there a way to tell django to "lazy-load" this field? i.e. not to bother pulling it from the database unless I explicitly ask for it. I'm wasting a lot of memory and bandwidth pulling this TextField into python every time I refer to these objects. The alternative would be to create a new table for the contents of this field, but I'd rather avoid that complexity if I can.
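
    Django's QuerySet.defer() does exactly this: the named column is left out of the initial SELECT and fetched in a separate query on first access. A sketch with placeholder model and field names:

        from django.db import models

        class Article(models.Model):   # hypothetical stand-in for the real model
            title = models.CharField(max_length=200)
            body = models.TextField()  # the large, rarely-needed field

        articles = Article.objects.defer("body")  # SELECT skips the body column
        titles = [a.title for a in articles]      # cheap; body is never loaded
        first_body = articles[0].body             # triggers a separate query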

    Read the article

  • Submit a form and get a JSON response with jQuery

    - by Leopd
    I expect this is easy, but I'm not finding a simple explanation anywhere of how to do this. I have a standard HTML form like this:

        <form name="new_post" action="process_form.json" method="POST">
          <label>Title:</label>
          <input id="post_title" name="post.title" type="text" /><br/>
          <label>Name:</label><br/>
          <input id="post_name" name="post.name" type="text" /><br/>
          <label>Content:</label><br/>
          <textarea cols="40" id="post_content" name="post.content" rows="20"></textarea>
          <input id="new_post_submit" type="submit" value="Create" />
        </form>

    I'd like to have javascript (using jQuery) submit the form to the form's action (process_form.json) and receive a JSON response from the server. Then I'll have a javascript function that runs in response to the JSON response, like:

        function form_success(json) {
            alert('Your form submission worked');
            // process json response
        }

    How do I wire up the form's submit button to call my form_success method when done? It should also override the browser's own navigation, since I don't want to leave the page. Or should I move the button out of the form to do that?
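
    The question is about the client side, but for completeness, here is a hedged sketch of what the server half behind process_form.json might return. Django is purely an assumption here, as are the view and field names; any backend that emits a JSON body with an application/json content type would do.

        import json
        from django.http import HttpResponse

        def process_form(request):
            # echo back the submitted title; real handling would save the post
            title = request.POST.get("post.title", "")
            payload = {"ok": True, "title": title}
            return HttpResponse(json.dumps(payload),
                                content_type="application/json")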

    Read the article
