Search Results

Search found 4390 results on 176 pages for 'git daemon'.

Page 154/176 | < Previous Page | 150 151 152 153 154 155 156 157 158 159 160 161  | Next Page >

  • Print from Linux to Windows networked printer

    - by wonkothenoob
    I want to print from a Debian (Lenny) workstation to a Windows networked printer. I'm not even sure what type of Windows network this is. Our tech support is friendly but doesn't want to get involved with supporting Linux. I need to use the printer for a variety of reasons and am completely stumped because I know nothing about Windows networking. They gave me the URI smb://msprint.ourorg.edu as the "address" of the printer and further confirmed that the domain is "OURORG" and the share is "PHYS-PRI". I've installed CUPS and made sure it's running as a daemon, clicked on the system-config-printer icon (a GUI for CUPS configuration: http://cyberelk.net/tim/software/system-config-printer/), selected the printer as a Windows printer shared via SAMBA, and entered the above URI. A test page just sits in the queue. I attempted to access the share using two other methods.

    Method 1: the "smbclient" CLI:

        $ smbclient -L //msprint.ourorg.edu -U user23
        timeout connecting to 192.168.44.3:445
        timeout connecting to 192.168.44.3:139
        Connection to msprint.ourorg.edu failed (Error NT_STATUS_ACCESS_DENIED)

    Method 2: the GUI tool Smb4K. This shows me four other top-level groupings (I'm assuming they're domains?), one of which is the one our IT department supplied. Clicking them shows a bunch of other machines (with what I assume are NetBIOS names?), including my own. I see all sorts of networked printers belonging to other departments, but none within mine; certainly not the PHYS-PRI one suggested by the IT folks. I realize I'm probably using the wrong terminology for the Windows network, but can anyone help me with this? What steps should I take in debugging this? Do I need to run my machine as a SAMBA server to authenticate to the printer, or should I be able to communicate using CUPS alone?
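
    Since the NT_STATUS_ACCESS_DENIED above suggests missing credentials, here is a hedged sketch of registering the queue directly with CUPS from the command line; the password placeholder and the exact smb URI layout are assumptions to adjust against the CUPS smb backend documentation:

        # Register the share as a CUPS queue (lpadmin ships with CUPS).
        # smb backend URI form: smb://user:pass@WORKGROUP/server/share
        lpadmin -p PHYS-PRI -E \
          -v "smb://user23:PASSWORD@OURORG/msprint.ourorg.edu/PHYS-PRI"
        lp -d PHYS-PRI /etc/hosts   # send a small test job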

    Read the article

  • LLVM Clang 5.0 explicit in copy-initialization error

    - by kevzettler
    I'm trying to compile an open source project on OS X that has only been tested on Linux.

        $ g++ -v
        Configured with: --prefix=/Applications/Xcode.app/Contents/Developer/usr --with-gxx-include-dir=/usr/include/c++/4.2.1
        Apple LLVM version 5.0 (clang-500.2.79) (based on LLVM 3.3svn)
        Target: x86_64-apple-da

    I'm compiling with the following command line options:

        g++ -MMD -Wall -std=c++0x -stdlib=libc++ -Wno-sign-compare -Wno-unused-variable -ftemplate-depth=1024 -I /usr/local/Cellar/boost/1.55.0/include/boost/ -g -O3 -c level.cpp -o obj-opt/level.o

    I am seeing several errors that look like this:

        ./square.h:39:70: error: chosen constructor is explicit in copy-initialization
        int strength = 0, double flamability = 0, map<SquareType, int> constructions = {}, bool ticking = false);

    The project states that the following are requirements for the Linux setup; how can I confirm I'm meeting them? gcc-4.8.2, git, libboost 1.5+ with libboost-serialize, libsfml-dev 2+ (from an Ubuntu PPA that contains libsfml 2), freeglut-dev, libglew-dev.
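
    The diagnostic itself is reproducible in isolation: copy-list-initializing a default argument from {} may not call an explicit constructor, while direct-initialization may. A minimal sketch (the struct is hypothetical, not from the project):

        cat > repro.cpp <<'EOF'
        struct S { explicit S() {} };
        void bad(S s = {});    // clang: chosen constructor is explicit in copy-initialization
        void good(S s = S());  // OK: direct-initialization may use explicit constructors
        EOF
        g++ -std=c++0x -fsyntax-only repro.cpp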

    Read the article

  • Renaming files: Visual Studio vs Version control

    - by Benjol
    The problem with renaming files is that if you want to take advantage of Visual Studio refactoring, you really need to do it from inside Visual Studio. But most (not all*) version control systems also want to be the ones doing the renaming. One solution is to use integrated source control, but this is not always available, and in some cases is pretty clunky. I'd personally be more comfortable using source control separately, outside of Visual Studio, but I'm not sure how to manage this question of file renames. So, for those of you who use Visual Studio: which source control do you use? Do you use a VS integration (which one?), and otherwise, how do you resolve this renaming problem? (* git is smart enough to work it out for itself)
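
    For reference on the asterisked point, a sketch of git's after-the-fact rename detection (file names are hypothetical): the rename can be done entirely inside Visual Studio, and git reconciles it at staging time.

        mv OldName.cs NewName.cs        # or rename via the VS refactoring tools
        git add -A
        git status                      # shows: renamed: OldName.cs -> NewName.cs
        git commit -m "Rename OldName to NewName"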

    Read the article

  • Jailkit not locking down SFTP, working for SSH

    - by doublesharp
    I installed jailkit on my CentOS 5.8 server and configured it according to the online guides I found. These commands were executed as root:

        mkdir /var/jail
        jk_init -j /var/jail extshellplusnet
        jk_init -j /var/jail sftp
        adduser testuser; passwd testuser
        jk_jailuser -j /var/jail testuser

    I then edited /var/jail/etc/passwd to change the login shell for testuser to /bin/bash, giving them a full bash shell via SSH. Next I edited /var/jail/etc/jailkit/jk_lsh.ini to look like the following (not sure if this is correct):

        [testuser]
        paths= /usr/bin, /usr/lib/
        executables= /usr/bin/scp, /usr/lib/openssh/sftp-server, /usr/bin/sftp

    testuser can connect via SSH and is limited to the chroot jail directory. They can also log in via SFTP; however, the entire file system is visible and can be traversed.

    SSH output:

        > ssh testuser@server
        Password:
        Last login: Sat Oct 20 03:26:19 2012 from x.x.x.x
        bash-3.2$ pwd
        /home/testuser

    SFTP output:

        > sftp testuser@server
        Password:
        Connected to server.
        sftp> pwd
        Remote working directory: /var/jail/home/testuser

    What can be done to lock down SFTP access to the jail? FWIW, I mostly used this as a guide: http://digitalpatch.blogspot.com.ar/2010/03/openssh-daemon-hardening-part-3-setup.html
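
    A hedged place to look: OpenSSH only routes SFTP through the user's (jailing) shell when the sftp subsystem is an external program; the in-process internal-sftp bypasses the login shell, and with it jk_chrootsh. A sketch of the check (the sftp-server path is the typical CentOS location):

        grep testuser /etc/passwd                 # real shell should be jk_chrootsh
        grep -i '^Subsystem' /etc/ssh/sshd_config
        # internal-sftp runs inside sshd and skips the login shell; a
        # path-based server is executed via the shell and stays jailed:
        # Subsystem sftp /usr/libexec/openssh/sftp-server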

    Read the article

  • Source control Branching needs

    - by Mükremin
    Hello, we are creating hospital information system software. The project will differ from hospital to hospital and contain different use cases, but many parts will be the same, so we will use the branching mechanism of our source control. If we find a bug in one hospital's branch, how can we know whether the other branches have the same bug? The numbers in the attached picture represent each hospital's software. Do you have a solution for this problem? Which source control (SVN, Git, Hg) would be suitable for it? Thank you!
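
    One common git-based sketch for this layout (branch names are hypothetical): keep the shared code on a base branch, fix bugs there, and merge outward, so git itself can report which branches already contain a given fix.

        git checkout base
        git commit -am "Fix shared billing bug"      # hypothetical fix commit
        git checkout hospital-a && git merge base
        git checkout hospital-b && git merge base
        # which branches already contain the fix commit?
        git branch --contains <commit-of-the-fix>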

    Read the article

  • Redirection of outbound UDP port

    - by pboin
    For my residential service, I changed ISPs to Zoom/Armstrong. Just after that, my NTP daemons stopped working. I dug deep and diagnosed the problem: only unprivileged ports are getting out. When I run 'ntpdate', for example, I go out on a high, unprivileged port and get a response on UDP 123. That's fine. The 'ntpd' daemon, though, expects to go out on 123 and get its reply there as well. This must be a common problem, because it's directly addressed in the NTP troubleshooting guide. Just to see what would happen, I wrote a detailed email to the general support address at Armstrong. They replied almost immediately with a complete technical answer! They have everything <1024 blocked, except for a few ports to support outbound VPN. So, the question: can I use iptables to essentially rewrite my outbound UDP 123 up to 2123 or something like that? If I do, does there need to be a corresponding 2123-to-123 rule to translate the reply? This seems like NAT, but with ports, not addresses. I tried, but can't seem to get iptables to do what I want. I'm not sure if it's my lack of skill, or if I'm trying the wrong solution. True, I could run ntpdate from cron, but that loses all of the adjustment smarts of NTP.
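
    A hedged sketch of the port rewrite in the nat table; connection tracking reverses the mapping on replies automatically, so no explicit 2123-to-123 rule should be needed (the interface name is an assumption):

        # Rewrite the source port of outgoing NTP packets from 123 to 2123;
        # conntrack maps replies arriving on 2123 back to local port 123.
        iptables -t nat -A POSTROUTING -o eth0 -p udp --sport 123 \
          -j MASQUERADE --to-ports 2123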

    Read the article

  • subversion 1.6.x losing changes on check-in

    - by Bernard
    I'm trying to figure out whether this is a known issue with SVN 1.6.x. Developer A modifies a file and commits it. Developer B modifies the same file, tries to commit it, is told the local copy is out of date, and so does an update and then a commit. However, the changes from Developer A are lost: the resulting file only contains the version that Developer B checked in. We can see this in the logs. It seems to happen when the same file is modified in different places. Has anyone else experienced this? We've had it happen 4 or 5 times in the past few weeks and have lost half a day or so each time trying to figure out what was lost. We're starting to lose confidence in SVN. Should we be thinking of moving to Git or Mercurial? Would that sort out this problem?
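
    One hedged way to pin down where the changes vanished (file path and revision numbers are hypothetical): diff the suspect revisions straight out of the repository, which shows whether A's edit ever reached the server or was overwritten at B's commit.

        svn log -v path/to/file.c
        svn diff -r 100:101 path/to/file.c   # A's commit
        svn diff -r 101:102 path/to/file.c   # B's commit: does it revert A's hunk?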

    Read the article

  • Building FFmpeg for Android

    - by varevarao
    I've spent almost a week on this now, trying to get FFmpeg "Angel" to build for Android. I've tried build scripts from all over the internet to no avail. The closest I got was using this. As the author himself says, the script doesn't work for newer versions of FFmpeg due to this bug, which has been dismissed on that ticket with "I found a Makefile that does it." That was disheartening, being the only post in all of the vast Google world that was anywhere close to my problem. So, question time: is there a way to get around the above bug? I'm trying to use the newest FFmpeg API, and "Love" is just giving me "undefined reference" errors when I use av_encode_video2() and av_free_frame(). The code I was working from is in the FFmpeg git repo, under /doc/examples/decoding_encoding.c (the function starting on line 338).
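
    For reference, a minimal cross-compile sketch against a plain NDK toolchain; the NDK path, toolchain version, and API level below are all assumptions that will need adjusting to the installed NDK:

        export NDK=$HOME/android-ndk
        TOOLCHAIN=$NDK/toolchains/arm-linux-androideabi-4.6/prebuilt/linux-x86/bin
        ./configure --target-os=linux --arch=arm --enable-cross-compile \
          --cross-prefix=$TOOLCHAIN/arm-linux-androideabi- \
          --sysroot=$NDK/platforms/android-9/arch-arm
        make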

    Read the article

  • How to configure hostname for `apache22` package on FreeBSD?

    - by Eonil
    I'm configuring a development & test FreeBSD machine in a VM. I installed the apache22 package and restarted, but the daemon did not start, giving this error:

        % apachectl start
        httpd: apr_sockaddr_info_get() failed for test.box
        httpd: Could not reliably determine the server's fully qualified domain name, using 127.0.0.1 for ServerName
        (13)Permission denied: make_sock: could not bind to address [::]:80
        (13)Permission denied: make_sock: could not bind to address 0.0.0.0:80
        no listening sockets available, shutting down
        Unable to open logs

    My hostname is test.box. Because this is a temporary test box, it has no real domain name, but I used a two-level name to avoid a long sshd wait at boot. I searched the web and modified the /etc/hosts file like this (I hadn't touched this file before):

        # This is the original configuration
        #::1        localhost localhost.my.domain
        #127.0.0.1  localhost localhost.my.domain

        # New configuration
        ::1        localhost test.box
        127.0.0.1  localhost test.box
        127.0.0.1  test.box test

    Now apache fails with this error message:

        % apachectl start
        httpd: Could not reliably determine the server's fully qualified domain name, using test.box for ServerName
        (13)Permission denied: make_sock: could not bind to address [::]:80
        (13)Permission denied: make_sock: could not bind to address 0.0.0.0:80
        no listening sockets available, shutting down
        Unable to open logs

    I don't know what's required now. Please let me know the reason for this error and its solution.

    (edit) The permission errors were caused by omitting sudo.
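
    With the bind errors traced to the missing sudo, the remaining FQDN warning can usually be silenced by setting ServerName explicitly; a sketch, assuming the stock config path of the apache22 port:

        echo 'ServerName test.box' | sudo tee -a /usr/local/etc/apache22/httpd.conf
        sudo apachectl restart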

    Read the article

  • SSH connection times out unless I tunnel in from a different server

    - by rm-vanda
    OK, so this just started last week. Whenever we try to connect to our server via ssh (we use sftp as well), the connection times out. However, when you ssh to any other server and then ssh into the machine from there, it works flawlessly. The mind-blowing thing is that sometimes the direct ssh connection succeeds: moments ago I tried it from another machine, and then my own, and it worked, only to time out on the next attempt. Last week, simply restarting the ssh daemon fixed it, but this week, no such luck. I even went in and set /etc/hosts.allow to ALL : ALL, and /etc/hosts.deny is blank. The firewall config hasn't changed, but I disabled the firewall to see if that would help; it did, for a moment, before cutting off again. (ufw is set to "ALLOW", not "LIMIT".) When I SSH in from my phone, it works fine. So it seems the problem is with our ISP/router/gateway; however, I see no log in the router/gateway saying it blocks our connections, and that wouldn't explain why we can SSH into any other server except this one from our network. I truly appreciate any insight anyone may have on this matter.
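
    A hedged pair of diagnostics for the next timeout, to establish which side drops the connection (the interface name is an assumption):

        ssh -vvv user@server                  # client side: how far does the handshake get?
        sudo tcpdump -ni eth0 'tcp port 22'   # server side: do the SYNs even arrive?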

    Read the article

  • How to reduce celeryd memory consumption?

    - by Gringo Suave
    I'm using celery 2.5.1 with django on a micro EC2 instance with 613MB of memory, and as such have to keep memory consumption down. Currently I'm using it only for the scheduler "celery beat", as a web interface to cron, though I hope to use it for more in the future. I've noticed it is the biggest consumer of memory on my micro machine even though I have configured the number of workers to one. I don't have many other options set in settings.py:

        import djcelery
        djcelery.setup_loader()

        BROKER_BACKEND = 'djkombu.transport.DatabaseTransport'
        CELERYBEAT_SCHEDULER = 'djcelery.schedulers.DatabaseScheduler'
        CELERY_RESULT_BACKEND = 'database'
        BROKER_POOL_LIMIT = 2
        CELERYD_CONCURRENCY = 1
        CELERY_DISABLE_RATE_LIMITS = True
        CELERYD_MAX_TASKS_PER_CHILD = 20
        CELERYD_SOFT_TASK_TIME_LIMIT = 5 * 60
        CELERYD_TASK_TIME_LIMIT = 6 * 60

    Here are the details via top:

        PID  USER  NI CPU% VIRT SHR  RES MEM% Command
        1065 wuser 10 0.0  283M 4548 85m 14.3 python manage_prod.py celeryd --beat
        1025 wuser 10 1.0  577M 6368 67m 11.2 python manage_prod.py celeryd --beat
        1071 wuser 10 0.0  578M 2384 62m 10.6 python manage_prod.py celeryd --beat

    That's about 214MB of memory (and not much shared) to run a cron job occasionally. Have I done anything wrong, or can this be reduced about ten-fold somehow? ;)

    Update: here's my upstart config:

        description "Celery Daemon"
        start on (net-device-up and local-filesystems)
        stop on runlevel [016]
        nice 10
        respawn
        respawn limit 5 10
        chdir /home/wuser/wuser/
        env CELERYD_OPTS=--concurrency=1
        exec sudo -u wuser -H /usr/bin/python manage_prod.py celeryd --beat --concurrency=1 --loglevel info --logfile /var/tmp/celeryd.log

    Update 2: I notice there is one root process, one user child process, and two grandchildren from that, so I don't think it's a matter of duplicate startup.

        root  34580 1556  sudo -u wuser -H /usr/bin/python manage_prod.py celeryd
        wuser 577M  67548 +- python manage_prod.py celeryd --beat --concurrency=1
        wuser 578M  63784 +- python manage_prod.py celeryd --beat --concurrency=1
        wuser 271M  76260 +- python manage_prod.py celeryd --beat --concurrency=1

    Read the article

  • Reduce Heroku Compiled Slug Size

    - by etrepat
    I've just updated Rails to v2.3.6 on my app under a bamboo-ree-1.8.7 stack, and the compiled slug size has grown to 40.5MB! Prior to that last git push, the slug size was about 20MB, using Rails v2.3.5. Is it because my slug has both Rails versions installed? I'm probably missing something, but I haven't added any special code/files to my app that would increase the slug size by ~20MB. Can you point me to how I can reduce the slug size? Any help will be greatly appreciated. Thank you very much in advance.
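
    One hedged housekeeping measure: Heroku supports a .slugignore file at the app root, which keeps matched paths out of the compiled slug (worth checking that the bamboo stack honors it; the patterns below are examples only):

        # .slugignore
        *.psd
        doc/
        test/
        spec/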

    Read the article

  • Where to observe application deployed by Jenkins?

    - by James
    A question from a first-time Jenkins user, so I hope you don't mind if it's too silly. I have installed Jenkins on an Ubuntu machine, and it is accessible at localhost:8080. I have successfully configured it to work with Maven2 and Git as well. Next, I created a job/project (a Java/Spring application) and got it to build without error on Jenkins. Now my question is: where do I see this application running? :) Best Regards, James

    Read the article

  • How to redirect a name-based VirtualHost to a different port?

    - by Andra
    I have a Virtuoso SPARQL endpoint installed, which I want to make available through a hostname (e.g. www.virtuosoexample.com). The thing with Virtuoso is that there is no document root: the endpoint is started by the daemon and made available on a port (e.g. localhost:1234/). I know how to set up a virtual host pointing to a document root, but I don't know how to do this for a server with a port number. Any advice would be appreciated. Below is how I would do it with a document root; I tried (naively) changing that to localhost:1234/sparql, but that didn't work.

        <VirtualHost *>
            ServerName www.virtuosoexample.com
            ServerAlias www.virtuosoexample.com
            ErrorLog /var/log/apache2/error.wp-sparql.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog /var/log/apache2/access.wp-sparql.log combined
            DocumentRoot /var/www/endpoint/sparql/
            <Directory /var/www/endpoint/sparql>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>
        </VirtualHost>
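
    The usual answer for fronting a port-bound daemon is a reverse proxy rather than a DocumentRoot; a sketch via mod_proxy (mod_proxy and mod_proxy_http must be enabled):

        <VirtualHost *>
            ServerName www.virtuosoexample.com
            ProxyPass        / http://localhost:1234/
            ProxyPassReverse / http://localhost:1234/
        </VirtualHost>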

    Read the article

  • Convention location for JAR files for a LaunchDaemon on OS X?

    - by Barry Wark
    I'm setting up a Hudson build slave on an OS X machine. I'm using launchd to start the slave with the following plist in /Library/LaunchDaemons/:

        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
        <plist version="1.0">
        <dict>
            <key>KeepAlive</key>
            <true/>
            <key>Label</key>
            <string>org.hudson-ci.jnlpslave</string>
            <key>ProgramArguments</key>
            <array>
                <string>/usr/bin/java</string>
                <string>-jar</string>
                <string>/Users/Shared/Hudson/slave.jar</string>
                <string>-noCertificateCheck</string>
                <string>-jnlpUrl</string>
                <string>file:///Users/Shared/Hudson/slave-agent.jnlp</string>
            </array>
            <key>RunAtLoad</key>
            <true/>
        </dict>
        </plist>

    I'm currently putting the slave.jar and slave-agent.jnlp files in /Users/Shared/Hudson, but this seems like an unnecessarily user-visible location. What's the convention? Where should I be putting these JARs for a daemon?

    Read the article

  • Pre-packaged Rails applications

    - by Craig
    It seems like most Rails applications have similar 'base' functionality. As such, there would be value in having pre-built Rails applications at various functionality points, such as: a basic User model with authentication using Authlogic; #1 plus OpenID integration; #2 plus authorization using declarative_authorization; #3 plus an administration module; #4 plus a Profile model; themes (useful stylesheets and such); a Friendship model; geocoding; ... In addition to the basic MVC stuff, these applications would include testing harnesses, seed data, and git support. One could choose to start from any of these functionality points. Other than the sample applications that ship with the various gems/plugins, are there projects such as these? If not, I would certainly be willing to contribute what I have.
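
    The closest built-in mechanism may be Rails application templates, which script exactly this kind of base setup at generation time; a sketch for Rails 2.3 (the template URL is hypothetical):

        rails myapp -m http://example.com/base_template.rb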

    Read the article

  • Mounting fuse sshfs fails when invoked by Cron on FreeBSD 9.0

    - by Tal
    I have a remote server filesystem that I'm attempting to mount locally on a FreeBSD 9 machine via FUSE sshfs and cron, for a backup routine. I have ssh keys between the boxes set up to allow passwordless login as the root user on the local machine. Cron is set to run the following script (in root's crontab):

        #!/bin/sh
        echo "Mounting Share"
        /usr/local/bin/sshfs -C -o reconnect -o idmap=user -o workaround=all <remote user>@<remote domain>.com: /mnt/remote_server

    As root, I can run this script on the command line without issue, and the share mounts successfully without my being asked for a password. Yet when run by cron, the script fails. The path to sshfs is identical to the value of `which sshfs`. Here is the email root receives from the cron daemon:

        X-Cron-Env: <SHELL=/bin/sh>
        X-Cron-Env: <HOME=/root>
        X-Cron-Env: <PATH=/usr/bin:/bin>
        X-Cron-Env: <LOGNAME=root>
        X-Cron-Env: <USER=root>

        Mounting Share
        fuse: failed to exec mount program: No such file or directory
        fuse: failed to mount file system: No such file or directory

    I'm stumped as to why I'm receiving "No such file or directory" in this instance; it seems odd given that the paths appear to be correct. I've also compared the output of env on the shell with env inserted into the script, and I don't see any environment variables that should cause this trouble. At boot, FUSE reports its version as:

        fuse4bsd: version 0.3.9-pre1, FUSE ABI 7.8

    Help me ServerFault wizards, you're my only hope!
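
    A hedged observation from the headers above: cron's PATH is only /usr/bin:/bin, while the FreeBSD fuse mount helper (mount_fusefs, from the fusefs-kmod port) typically lives under /usr/local/sbin, which would explain "failed to exec mount program". A sketch of the workaround is to widen PATH at the top of the script:

        PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin
        export PATH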

    Read the article

  • Best Practices for a Web App Staging Server (on a budget)

    - by fig-gnuton
    I'd like to set up a staging server for a Rails app. I use git & GitHub, Capistrano, and have a VPS with Apache/Passenger. I'm curious about best practices for a staging setup, covering both the configuration of the staging server and the processes for interacting with it. I know it should be as identical to the production server as possible, but restricting public access to it will limit that, so tips on securing it only for my use would also be great. Another specific question is whether I could just create a virtual host on the VPS, so that the staging server resides alongside the production one. I have a feeling there may be reasons to avoid this, though.
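
    On the access question, one hedged option is HTTP basic auth scoped to the staging vhost only (paths and names below are examples):

        htpasswd -c /etc/apache2/staging.htpasswd myuser

        # inside the staging <VirtualHost>:
        <Location />
            AuthType Basic
            AuthName "Staging"
            AuthUserFile /etc/apache2/staging.htpasswd
            Require valid-user
        </Location>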

    Read the article

  • How to validate Windows VC++ DLL on Unix systems

    - by Guildencrantz
    I have a solution, mostly C# but with a few VC++ projects, that is pushed through our standard release process (Perl and bash scripts on Unix boxes). Currently the initiative is to validate DLL and EXE versions as they pass through the process. All the versioning is set so that the File Version is of the format $Id: $ (between the colon and the second dollar should be a git commit hash), and the Product Version is of the format $Hudson Build: $ (between the colon and the second dollar should be a string representing the Hudson build details). Currently this system works extremely well for the C# projects because the version information is stored as plain strings within the compiled code (you can literally use the Unix strings command and see the version information). The problem is that the VC++ projects do not expose this information as strings (I have used a Windows system to verify that the version information is correctly being set), so I'm not sure how to extract the version on a Unix system. Any suggestions for either A) getting a string representation of the version embedded in the compiled code, or B) a utility/script which can extract this information?
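
    A hedged explanation for the difference: VC++ version resources are stored in the PE file as UTF-16LE, which the default 8-bit strings scan misses. GNU strings can search for 16-bit little-endian strings instead (the file name is hypothetical):

        strings -e l MyProject.dll | grep -F '$Id:'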

    Read the article

  • Correct structure and way of website versioning

    - by Saif Bechan
    Recently I started using git to version my website. It makes it really easy to see how my project develops, and I always have safe backups in different places on the web. My main question is whether it is recommended to version the whole root of the website. I have a basic structure that looks something like this:

        /httpdocs
            /config
            /media
            /application
            index.php
            .htaccess

    1) Should I version the /httpdocs folder itself, or the contents of the folder? 2) Is it recommended to version the media folder? In the media folder I have several images for the overall layout, and some other images for the website. These images can be quite large. I work on them from time to time, so they change, but I hardly ever need an old image again, so isn't this just taking up precious storage space? I would highly appreciate some basic recommendations on this topic.
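
    If the large binaries end up excluded, a .gitignore at the repository root handles it; a sketch (the patterns are examples only):

        # .gitignore
        media/*.psd
        media/originals/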

    Read the article

  • code deployment options

    - by bobinabottle
    We've been looking at automating our server and code deployments. We've already decided on Puppet for our server configurations, but are looking for a more "push"-style tool for code deployments. I'm currently looking at either Capistrano or Fabric, but I'm not sure which is the most mature. We deploy a number of different services, none of which is currently written in Rails or Django, so we don't mind about language. Which would be the best for building custom deployment scripts? Or have I missed another tool out there? We are also considering git pushing with hooks for deployment, but feel it would be limited/hacky for what we want to achieve. Any thoughts or experience would be great to hear. Cheers
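
    For scale, a minimal Fabric (1.x) sketch; the hosts, paths, and service name are assumptions:

        # fabfile.py
        from fabric.api import env, run, sudo

        env.hosts = ['app1.example.com', 'app2.example.com']

        def deploy():
            # pull the latest code and restart the service on each host
            run('cd /srv/myapp && git pull')
            sudo('service myapp restart')

    Run with `fab deploy`; Capistrano expresses the same push-style flow as rake-like tasks in Ruby.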

    Read the article

  • mysqldump isn't able to export a specific database, phpMyAdmin crashes

    - by Devils Child
    I'm experiencing problems with one database on my server (note: all other databases work fine). Once I try to export it with mysqldump I get this error:

        # mysqldump -u root -pXXXXXXXXX databasename > /root/databasename.sql
        mysqldump: Couldn't execute 'show table status like 'apps'': Lost connection to MySQL server during query (2013)

    Also, phpMyAdmin throws an error when selecting this database and immediately logs out. However, the web site which uses this database works fine, and I can execute SELECT statements on the table named "apps" from the MySQL shell. I tried restarting the MySQL daemon as well as REPAIR DATABASE and REPAIR TABLE, but the problem persists. I had this problem before; then it disappeared somehow without me doing anything to resolve it. Now the problem is back and I'm simply unable to create a backup of this database.

    Used software: Debian 6.0.7 x64, MySQL 5.1.66-0.

        mysql> SHOW VARIABLES LIKE "%version%";
        +-------------------------+-------------------+
        | Variable_name           | Value             |
        +-------------------------+-------------------+
        | protocol_version        | 10                |
        | version                 | 5.1.66-0+squeeze1 |
        | version_comment         | (Debian)          |
        | version_compile_machine | x86_64            |
        | version_compile_os      | debian-linux-gnu  |
        +-------------------------+-------------------+
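
    A hedged next diagnostic: "Lost connection during query" on a status call often means the server crashes while reading that table, so it may help to watch the error log while reproducing, and to run a table check (the log path is the Debian default):

        tail -f /var/log/mysql/error.log &
        mysql -u root -p -e "SHOW TABLE STATUS LIKE 'apps'" databasename
        mysqlcheck -u root -p databasename apps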

    Read the article

  • Understanding ulimit -u

    - by tripleee
    I'd like to understand what's going on here:

        linvx$ ( ulimit -u 123; /bin/echo nst )
        nst
        linvx$ ( ulimit -u 122; /bin/echo nst )
        -bash: fork: Resource temporarily unavailable
        Terminated
        linvx$ ( ulimit -u 123; /bin/echo one; /bin/echo two; /bin/echo three )
        one
        two
        three
        linvx$ ( ulimit -u 123; /bin/echo one & /bin/echo two & /bin/echo three )
        -bash: fork: Resource temporarily unavailable
        Terminated
        one

    I speculate that the first 122 processes are consumed by bash itself, and that the remaining ulimit governs how many concurrent processes I am allowed to have. The documentation is not very clear on this. Am I missing something? More importantly, for a real-world deployment, how can I know what sort of ulimit is realistic? It's a long-running daemon which spawns worker threads on demand and reaps them when the load decreases. I've had it spin the server to its death a few times. The most important limit is probably memory, which I have now limited to 200M per process, but I'd like to figure out how I can enforce a limit on the number of children. (The program does allow me to configure a maximum, but how do I know there are no bugs in that part of the code?)
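
    One hedged detail behind the numbers above: ulimit -u limits processes per user, not per shell, so the available head-room depends on everything already running under the account. A quick count of what is currently charged against the limit:

        ps -u "$USER" --no-headers | wc -l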

    Read the article

  • Something like Dropbox for local use

    - by Casper
    I am looking for a solution to sync folder pairs between a NAS and multiple local Macs. Each of the Macs could edit files, and the other Macs should then be synced automatically. Basically my own local version of Dropbox without using cloud storage. I have looked into solutions using rsync; as I understand it, rsync is not really capable of a bi-directional sync. I also do not want to have to invoke the sync process: I would prefer a daemon running in the background, waiting and checking for changes and then syncing them live. The program should also be flexible enough to recognize that sometimes (in the case of laptops) it cannot reach the NAS; it should then just wait for the connection to come back, without bugging me every few minutes. I have looked into Synk, folderwatch, rsync and a few others, but I haven't really found a solution. Isn't there something like Microsoft's "offline folders" for the Mac? Thanks. PS: Just for clarification, I don't want to sync for backup purposes; I want to sync so that all Macs have a local copy of the most recent changes to files.

    Read the article

  • iptables logging not working?

    - by vps_newcomer
    OS: Ubuntu 10.04. Logging daemon: rsyslog. For some reason I'm not getting any iptables logs; even though I don't look through them very often, I'd still like to get this working for the sake of it working XD. Here is my /etc/rsyslog.d/iptables.conf:

        :msg, contains, "[IPTABLES]" -/var/log/iptables.log
        & ~

    My iptables logging prefix is "[IPTABLES]" followed by whatever else (for example, "[IPTABLES] Denied xyz"). The /var/log/iptables.log file is being created, but it's not getting any entries. I can see the logging entries in dmesg, but not in syslog or messages. What's going on?

    EDIT: My iptables logging rules:

        # logging limit
        LoggingLimit=5/min
        LoggingPrefix=IPTABLES

        # logging chain
        iptables -N LOG_REJECT
        iptables -A LOG_REJECT -j LOG

        # join INPUT to LOG_REJECT
        iptables -A INPUT -j LOG_REJECT

        # logging
        iptables -A LOG_REJECT -p tcp -m limit --limit $LoggingLimit -j LOG --log-prefix "$LoggingPrefix Denied TCP: "  #--log-level 7
        iptables -A LOG_REJECT -p udp -m limit --limit $LoggingLimit -j LOG --log-prefix "$LoggingPrefix Denied UDP: "  #--log-level 7
        iptables -A LOG_REJECT -p icmp -m limit --limit $LoggingLimit -j LOG --log-prefix "$LoggingPrefix Denied ICMP: " #--log-level 7

    Update: I found a thread that has the same symptoms I do; apparently it's a kernel bug. I am using a VPS, so could anyone point me to how to upgrade my kernel or apply a workaround? I couldn't find a 2.6.34 kernel listed in apt-cache. Thread: http://www.linode.com/forums/viewtopic.php?t=5533

    Read the article
