Search Results

Search found 130 results on 6 pages for 'kern'.

Page 2 of 6

  • /etc/hosts: What is loghost? (fresh install of Solaris 10 update 9)

    - by cjavapro
    The /etc/hosts file:

        # Internet host table
        ::1          localhost
        127.0.0.1    localhost
        XX.XX.XX.XX  myserver loghost

    What is the purpose of loghost? If it were not for the loghost entry, all the /etc/hosts files on all the servers in this particular network could be identical.

    Edit: I looked at /etc/syslog.conf:

        #ident "@(#)syslog.conf 1.5 98/12/14 SMI" /* SunOS 5.0 */
        #
        # Copyright (c) 1991-1998 by Sun Microsystems, Inc.
        # All rights reserved.
        #
        # syslog configuration file.
        #
        # This file is processed by m4 so be careful to quote (`') names
        # that match m4 reserved words. Also, within ifdef's, arguments
        # containing commas must be quoted.
        #
        *.err;kern.notice;auth.notice              /dev/sysmsg
        *.err;kern.debug;daemon.notice;mail.crit   /var/adm/messages
        *.alert;kern.err;daemon.err                operator
        *.alert                                    root
        *.emerg                                    *
        # if a non-loghost machine chooses to have authentication messages
        # sent to the loghost machine, un-comment out the following line:
        #auth.notice ifdef(`LOGHOST', /var/log/authlog, @loghost)
        mail.debug ifdef(`LOGHOST', /var/log/syslog, @loghost)
        #
        # non-loghost machines will use the following lines to cause "user"
        # log messages to be logged locally.
        #
        ifdef(`LOGHOST', ,
        user.err       /dev/sysmsg
        user.err       /var/adm/messages
        user.alert     `root, operator'
        user.emerg     *
        )

    Very interesting. When shutting down, alerts go to all users, probably through the *.emerg * line. Looking at ifdef, it seems that the first parameter checks whether the current machine is a loghost, the second parameter is what to do if it is, and the third parameter is what to do if it is not.

    Edit: If you want to test a logging rule, you can use svcadm restart system-log to restart the logging service and then logger -p notice "test" to send a test message, where notice can be replaced with any facility.level pair such as user.err, auth.notice, etc.
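    To make the ifdef reading concrete, here is what the mail.debug line becomes after m4 runs (a sketch of the expansion; syslogd defines the LOGHOST macro only when the machine's own address carries the loghost alias):

        # On the machine aliased to loghost, LOGHOST is defined:
        mail.debug      /var/log/syslog
        # On every other machine, the third argument is used instead:
        mail.debug      @loghost

    That is the purpose of the alias: every server can ship an identical syslog.conf, and the single loghost line in /etc/hosts decides which box logs locally and which boxes forward to the central log server.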

    Read the article

  • Strange kernel process threads taking over my AIX box...

    - by Paul
    When I check the running stats of my box I get the following:

        CPU  User%  Kern%  Wait%  Idle%  Physc
         0    37.5   57.4   0.0    5.1   0.01
         2     0.0   18.3   0.0   81.7   0.00
         3     0.0   22.5   0.0   77.5   0.00
         4     0.0   17.0   0.0   83.0   0.00
         5     0.0   20.5   0.0   79.5   0.00
         6     0.0   33.7   0.0   66.3   0.00
         7     0.0    4.4   0.0   95.6   0.00
         8     0.0   19.3   0.0   80.7   0.00
         9     0.0   22.3   0.0   77.7   0.00
        10     0.0   19.2   0.0   80.8   0.00
         1     0.0    1.3   0.0   98.7   0.00
        11     0.0   21.8   0.0   78.2   0.00
        21     0.0   62.9   0.0   37.1   0.00
        12     0.0   21.1   0.0   78.9   0.00
        13     0.0   22.7   0.0   77.3   0.00
        14     0.0   18.1   0.0   81.9   0.00
        15     0.0   21.2   0.0   78.8   0.00
        16     0.0   19.1   0.0   80.9   0.00

    The Kern% seems high to me and I cannot find a reason for this much kernel activity. Doing a deep dive into what user processes are doing, I find nothing with significant CPU utilization, even though TOPAS and SAR both show the same thing: one CPU with 30-60% user and every processor with 5-30% kernel utilization. What is my box doing??? Here is a second sample of CPU % from TOPAS:

        CPU  User%  Kern%  Wait%  Idle%  Physc
         0    67.8   31.4   0.1    0.7   0.14
         2     0.0   18.2   0.0   81.8   0.00
         3     0.0   20.3   0.0   79.7   0.00
         4     0.0   17.3   0.0   82.7   0.00
         5     0.0   20.7   0.0   79.3   0.00
         6     0.0   39.2   0.0   60.8   0.00
         7     0.0    5.0   0.0   95.0   0.00
         8     0.0   17.9   0.0   82.1   0.00
         9     0.0   22.0   0.0   78.0   0.00
        10     0.0   18.0   0.0   82.0   0.00
         1     0.0    0.7   0.0   99.3   0.02
        11     0.0   21.7   0.0   78.3   0.00
        21     0.0   21.7   0.0   78.3   0.00
        12     0.0   17.0   0.0   83.0   0.00
        13     0.0   21.1   0.0   78.9   0.00
        14     0.0   17.8   0.0   82.2   0.00
        15     0.0   21.8   0.0   78.2   0.00
        16     0.0   17.6   0.0   82.4   0.00

    Any ideas to help identify what is running in kernel space would be great....
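    For actually attributing the kernel time, the usual AIX starting points are below (a hedged sketch: flags vary between AIX releases, so verify against your system's man pages before relying on them):

        tprof -ske -x sleep 60    # 60s system profile incl. kernel + kernel extensions
        ps -mo THREAD -p <pid>    # per-thread state and CPU for a suspect process
        truss -c -p <pid>         # system-call counts for a suspect process

    The tprof report breaks CPU time down by kernel routine, which is normally enough to tell whether the Kern% is syscall work driven by some process or something like interrupt handling.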

    Read the article

  • AlertDialog in if-statement doesn't show()

    - by Steffen Kern
    I have the following code:

        public void button_login(View view) {
            // Instantiate an AlertDialog.Builder with its constructor
            AlertDialog.Builder builder = new AlertDialog.Builder(this);
            builder.setPositiveButton(R.string.ok, new DialogInterface.OnClickListener() {
                public void onClick(DialogInterface dialog, int id) {
                    /* User clicked OK button */
                }
            });

            // Preserve EditText values.
            EditText ET_username = (EditText) findViewById(R.id.username);
            EditText ET_password = (EditText) findViewById(R.id.password);
            String str_username = ET_username.toString();
            String str_password = ET_password.toString();

            // Intercept missing username and password.
            if (str_username.length() == 0) {
                builder.setMessage(R.string.hint_username_empty);
                AlertDialog dialog = builder.create();
                dialog.show();
            }
        }

    I have an activity that includes the two EditText views and a button. When I click the button, the method shown above is called. My problem: the AlertDialog doesn't show up! When I move the create() and show() calls to the beginning, like this:

        AlertDialog.Builder builder = new AlertDialog.Builder(this);
        builder.setPositiveButton(R.string.ok, new DialogInterface.OnClickListener() {
            public void onClick(DialogInterface dialog, int id) {
                /* User clicked OK button */
            }
        });
        builder.setMessage(R.string.hint_username_empty);
        AlertDialog dialog = builder.create();
        dialog.show();

        // Preserve EditText values.
        EditText ET_username = (EditText) findViewById(R.id.username);
        EditText ET_password = (EditText) findViewById(R.id.password);
        String str_username = ET_username.toString();
        String str_password = ET_password.toString();

        // Intercept missing username and password.
        if (str_username.length() == 0) {
        }

    then the dialog does show up. Any ideas why it doesn't show up in the first version? Greetz, Steffen
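    An editorial note, since the posted code contains the likely answer: EditText.toString() returns the view object's description (something like android.widget.EditText@44ed4da8), never an empty string, so str_username.length() == 0 is never true and the branch holding show() never runs. The text content comes from getText():

        // Likely fix (an observation on the posted code, not from the original thread):
        String str_username = ET_username.getText().toString();
        String str_password = ET_password.getText().toString();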

    Read the article

  • Emulating old-school sprite flickering (theory and concept)

    - by Jeffrey Kern
    I'm trying to develop an old-school NES-style video game, with sprite flickering and graphical slowdown, and I've been thinking about what type of logic I should use to enable such effects. I have to consider the following restrictions if I want to go old-school NES style:

        - No more than 64 sprites on the screen at a time
        - No more than 8 sprites per scanline, i.e. per line on the Y axis
        - If there is too much action going on the screen, the system freezes the image for a frame to let the processor catch up with the action

    From what I've read, if there were more than 64 sprites on the screen, the developer would only draw high-priority sprites while ignoring low-priority ones. They could also alternate, drawing each even-numbered sprite on opposite frames from the odd-numbered ones.

    The scanline issue is interesting. From my testing, it is impossible to get good speed on the Xbox 360 XNA framework by drawing sprites pixel-by-pixel, like the NES did. This is why in old-school games, if there were too many sprites on a single line, some would appear as if they were cut in half. For the purposes of this project, I'm making scanlines 8 pixels tall and grouping the sprites together per scanline by their Y positioning.

    So, dumbed down, I need to come up with a solution that supports:

        - 64 sprites on screen at once
        - 8 sprites per 'scanline'
        - drawing sprites based on priority
        - alternating between sprites per frame
        - emulated slowdown

    Here is my current theory. First and foremost, a fundamental idea I came up with is addressing sprite priority. Assuming values between 0-255 (0 being low), I can assign sprites priority levels, for instance:

        - 0 to 63 being low
        - 64 to 127 being medium
        - 128 to 191 being high
        - 192 to 255 being maximum

    Within my data files, I can assign each sprite a priority level. When the parent object is created, the sprite would randomly get assigned a number within its designated range. I would then draw sprites in order from high to low, with the end goal of drawing every sprite. Now, when a sprite gets drawn in a frame, I would randomly generate it a new priority value within its initial priority level. However, if a sprite doesn't get drawn in a frame, I could add 32 to its current priority. For example, if the system can only draw sprites down to a priority level of 135, a sprite with an initial priority of 45 could then be drawn after 3 frames of not being drawn (45+32+32+32 = 141). This would, in theory, allow sprites to alternate frames, allow priority levels, and limit sprites to 64 per screen.

    Now, the interesting question is how do I limit sprites to only 8 per scanline? Since I'm sorting the sprites from high priority to low priority, I can iterate through the list until I've drawn 64 sprites. However, I shouldn't just take the first 64 sprites in the list. Before drawing each sprite, I could check how many sprites have already been drawn in its respective scanline via counter variables. For example:

        - Y-values between 0 and 7 belong to scanline 0: scanlineCount[0] = 0
        - Y-values between 8 and 15 belong to scanline 1: scanlineCount[1] = 0
        - etc.

    I would reset these values for every frame drawn. While going down the sprite list, I add 1 to the scanline's respective counter if a sprite gets drawn in that scanline. If it already equals 8, I don't draw that sprite and go on to the sprite with the next-lowest priority. (A sketch of this selection pass follows below.)

    SLOWDOWN. The last thing I need to do is emulate slowdown. My initial idea was that if I'm drawing 64 sprites per frame and there are still more sprites that need to be drawn, I could pause the rendering by 16ms or so. However, in the NES games I've played, sometimes there's slowdown without any sprite flickering going on, whereas other times the game moves beautifully even with some sprite flickering. Perhaps I could give a value to each object that uses sprites on the screen (like the priority values above), and if the combined values of all objects with sprites surpass a threshold, introduce the slowdown?

    IN CONCLUSION... Does everything I wrote actually sound legitimate and workable, or is it a pipe dream? What improvements can you possibly think of for this game programming theory of mine?
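    Written out, the selection pass described above is small. A C# sketch (names are illustrative, and it assumes sprite Y values are already clamped on-screen):

        using System;
        using System.Collections.Generic;

        class Sprite
        {
            public int Y;            // top edge in pixels; assumed on-screen here
            public int BasePriority; // low end of its designated band
            public int Priority;     // current rolled/aged value
        }

        static class SpriteSelector
        {
            const int MaxSprites = 64, MaxPerScanline = 8, ScanlineHeight = 8;
            static readonly Random rng = new Random();

            public static List<Sprite> Select(List<Sprite> sprites, int screenHeight)
            {
                var drawn = new List<Sprite>(MaxSprites);
                var scanlineCount = new int[screenHeight / ScanlineHeight]; // fresh per frame

                sprites.Sort((a, b) => b.Priority.CompareTo(a.Priority)); // high first

                foreach (var s in sprites)
                {
                    if (drawn.Count == MaxSprites) break;
                    int line = s.Y / ScanlineHeight;
                    if (scanlineCount[line] >= MaxPerScanline)
                        continue;             // scanline full: this sprite flickers out
                    scanlineCount[line]++;
                    drawn.Add(s);
                }

                // Re-roll drawn sprites inside their band; age skipped ones by 32,
                // so a 45 climbs 77 -> 109 -> 141 and eventually wins a slot.
                var drawnSet = new HashSet<Sprite>(drawn);
                foreach (var s in sprites)
                    s.Priority = drawnSet.Contains(s)
                        ? s.BasePriority + rng.Next(64)
                        : Math.Min(255, s.Priority + 32);

                return drawn;
            }
        }

    The slowdown rule then has a natural home: if the loop exits with sprites still unexamined, or the aged priorities pile up past some threshold, repeat the previous frame once - which is essentially the 'freeze a frame to catch up' behavior described above.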

    Read the article

  • Adding Chart to WordprocessingML

    - by kern
    I would like to generate an Open XML document containing a chart, using the Open XML SDK 2. I found a SpreadsheetML example, but I can't work out how to add the chart to a .docx... Is there a good source of documentation/examples for the Open XML SDK 2?
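    For the .docx side: a chart lives in its own ChartPart, referenced from a drawing in the document body, so the SpreadsheetML example carries over more than it first appears. A heavily abbreviated sketch with Open XML SDK 2.0 class names (the chart's own markup - plot area, series, axes - is omitted):

        using DocumentFormat.OpenXml.Packaging;
        using C = DocumentFormat.OpenXml.Drawing.Charts;

        using (var doc = WordprocessingDocument.Open("report.docx", true))
        {
            MainDocumentPart main = doc.MainDocumentPart;
            ChartPart chartPart = main.AddNewPart<ChartPart>();
            chartPart.ChartSpace =
                new C.ChartSpace(new C.Chart( /* plot area, series, axes... */ ));
            string relId = main.GetIdOfPart(chartPart);
            // The body then needs a w:p/w:r/w:drawing whose graphic data holds
            // <c:chart r:id="..."/>, i.e. new C.ChartReference() { Id = relId }.
        }

    As for documentation: the Open XML SDK 2.0 Productivity Tool that ships with the SDK can reverse-engineer a hand-made .docx containing a chart into the exact generating C#, which is usually the fastest way to get the omitted markup right.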

    Read the article

  • 'Forward-Compatible' Program Design

    - by Jeffrey Kern
    The majority of the questions I've asked here so far on StackOverflow have been about how to implement individual concepts and techniques towards developing a software-based NES clone via the XNA environment. The small samples that I've thrown together on my PC work relatively great and everything - except I've hit a brick wall: how do I merge all of these samples together? Having proof-of-concept is amazing, except when you need it to go beyond just that. I now have samples strewn about that I'm trying to merge, some of them incomplete. And now I'm stuck in the chicken-and-egg situation where I would like to combine these samples, to make sure they work, but I cannot without test data. And I don't have tools to create test data, because the tools would need to be based off of the individual pieces that need to be put together. In my mind, I'm having nightmares about circular references. For my sample data, I am hoping to save it in XML and write a specification - and then make sample data by hand - but I'm too paranoid of manually creating an XML file full of incorrect data and blaming it on my code, or vice versa. It doesn't help that the end result of my work is graphics-oriented, which makes it interesting to think about how a graphic on the screen can be visualized in XML nodes. I guess my question is this: what design patterns and disciplines exist in the coding world that address this type of concern? I've always relied on brute-force coding and restarting a project with a whole new code base in attempts to further my goals, but I doubt that would be the best way to do so. Within my college career, the majority of my programming was for simple projects that came out of a book, or with a given correct data set and a verifiable result. I don't have that, as my own design documents that I am going by could be terribly wrong.
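    On the XML worry specifically, one established discipline is schema validation: write the specification as an XSD, and a tiny checker then says whether a hand-made file is wrong before any engine code ever runs, which breaks the "is it the data or the code?" deadlock. A sketch (the schema and file names are made up):

        using System;
        using System.Xml;
        using System.Xml.Schema;

        class SampleDataChecker
        {
            static void Main()
            {
                var settings = new XmlReaderSettings { ValidationType = ValidationType.Schema };
                settings.Schemas.Add(null, "level-format.xsd");
                settings.ValidationEventHandler +=
                    (sender, e) => Console.WriteLine(e.Severity + ": " + e.Message);

                using (var reader = XmlReader.Create("sample-level.xml", settings))
                    while (reader.Read()) { }   // any violation fires the handler
            }
        }

    With the checker in place, hand-authored sample data becomes trustworthy test input, and any remaining failure in the merged samples points at the code.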

    Read the article

  • 'Bank Switching' Sprites on old NES applications

    - by Jeffrey Kern
    I'm currently writing in C# what could basically be called my own interpretation of the NES hardware, for an old-school looking game that I'm developing. I've fired up FCE and have been observing how the NES displays and renders graphics.

    In a nutshell, the NES could hold two bitmaps' worth of graphical information, each with dimensions of 128x128. These are called the PPU tables. One was for BG tiles and the other was for sprites. The data had to be in this memory for it to be drawn on-screen. Now, if a game had more graphical data than these two banks, it could write portions of the new information to these banks - overwriting what was there - at the end of each frame, and use it from the next frame onward.

    So, in old games, how did the programmers 'bank switch'? I mean, within the level design, how did they know which graphic set to load? I've noticed that Mega Man 2 bank switches when the screen programmatically scrolls from one portion of the stage to the next. But how did they store this information in the level - which sprites to copy over into the PPU tables, and where to write them? Another example would be hitting pause in MM2: BG tiles get overwritten during pause, and then get restored when the player unpauses. How did they remember which tiles they replaced and how to restore them?

    If I were lazy, I could just make one huge static bitmap and grab values that way. But I'm forcing myself to limit these values to create a more authentic experience. I've read the amazing guide on how M.C. Kids was made, and I'm trying to be barebones about how I program this game. It still boggles my mind how those programmers accomplished what they did with what they had.

    EDIT: The only solution I can think of would be to hold separate tables that state which tiles should be in the PPU at what time (a sketch of this idea follows below), but I think that would be a huge memory resource that the NES wouldn't be able to handle.
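    For what it's worth, the EDIT above is essentially how shipped games did it, and it is cheap: the level stores bank numbers, not tile data. A C# sketch of that idea (names are illustrative):

        // Per-screen metadata: which CHR pages this part of the level uses.
        struct ScreenDef
        {
            public byte[] NametableData;  // 32x30 background tile indices
            public byte SpriteBank;       // index of the 128x128 sprite page
            public byte BackgroundBank;   // index of the 128x128 BG page
        }

        // At a screen transition, "bank switch" by copying the referenced
        // page into the live PPU tables (real mappers just remapped it):
        void EnterScreen(ScreenDef screen, byte[][] chrPages,
                         byte[] ppuSpriteTable, byte[] ppuBgTable)
        {
            Array.Copy(chrPages[screen.SpriteBank], ppuSpriteTable,
                       ppuSpriteTable.Length);
            Array.Copy(chrPages[screen.BackgroundBank], ppuBgTable,
                       ppuBgTable.Length);
        }

    Stored this way, the per-screen cost is a couple of bank bytes rather than a copy of the graphics - which also covers the pause screen: the game doesn't remember what it overwrote, it simply re-runs the same copy with the gameplay bank numbers when the player unpauses.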

    Read the article

  • Creating a Multiwindowed Cocoa Program - Launching Procedure Suggestions?

    - by Jeffrey Kern
    I'm porting an application I developed in Visual Studio 2008 over to Cocoa. I'm currently taking a 'learn-as-you-go' approach to Cocoa, so I can experiment with different ideas and techniques in smaller, simpler projects and eventually combine them into one big application. My program logic is as follows (in a dumbed-down sense); the items in the list are mandated by my boss:

        1. Application is started
           1a. Verify CD program is in drive
        2. Verify license. If found and valid, skip to step 7
        3. Display license agreement
        4. Display serial number prompt
        5. Verify and save serial number
        6. Hide all prior windows
        7. Load main application window
        8. Intercept requests and commands from the main application window, including making a duplicate main application window
        9. Exit program when requested by user

    What would the best bet be for this type of application? From another question I asked, I found out that I should keep the 'main application' window in a separate XIB file from the rest, because I might need to clone and interact with it. I know that since Cocoa and Objective-C are based on C, there is a main() function somewhere. But what would you all suggest as a starting place for an application like this?
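    One common shape for this launch flow (a sketch, not the one canonical Cocoa way; the controller class names and hasValidLicense are made up): main.m keeps the Xcode-generated NSApplicationMain() call untouched, and the application delegate decides at launch which window controller to put up first.

        // AppDelegate.m
        - (void)applicationDidFinishLaunching:(NSNotification *)notification
        {
            if ([self hasValidLicense]) {                        // steps 1-2
                mainController = [[MainWindowController alloc]
                                     initWithWindowNibName:@"MainWindow"];
                [mainController showWindow:self];                // step 7
            } else {
                licenseController = [[LicenseWindowController alloc]
                                        initWithWindowNibName:@"License"];
                [licenseController showWindow:self];             // steps 3-6, then main
            }
        }

    Keeping the main window in its own window-controller/XIB pair also gives the duplicate-window command from step 8 almost for free: each duplicate is just another alloc/initWithWindowNibName: of the same controller class.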

    Read the article

  • 60K+ Sprites on the 360?

    - by Jeffrey Kern
    Hey everyone, just wondering - throwing ideas around in my head - about starting a new XNA project for the 360. I would like it to be retro old-school, emulating scanlines, color palettes and such. As part of this idea, what I would ideally like to do is manually draw each and every pixel of the screen. So, worst-case scenario, I would have to draw about 60K sprites at a 256x240 resolution (I think that's correct). 60K sprites on the screen at a time. So, before I even attempt to code this: would the Xbox 360 even be able to keep up with this? That is a lot of sprites, but they aren't big sprites, and the texture data needed would be almost non-existent. I guess how this project is implemented will make it or break it, but all I was thinking was coming up with a 2D array and mapping which color value would need to be drawn at each point. Of course, this is watered-down talk right now. But what do you all suggest? EDIT: Each sprite would represent one pixel. E.g., a sprite at 0,0. Another at 0,1. Etc.
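    Worth noting before benchmarking 60K draw calls: the approach usually suggested for per-pixel rendering in XNA is one texture updated from a CPU-side color array, not one sprite per pixel. A sketch (XNA 4-style Begin overload; the 4x scale rectangle is arbitrary):

        // CPU-side frame buffer uploaded once per frame, then drawn scaled.
        Texture2D frame;                        // new Texture2D(GraphicsDevice, 256, 240) in LoadContent
        Color[] pixels = new Color[256 * 240];  // the 2D color map, flattened

        protected override void Draw(GameTime gameTime)
        {
            // ... fill pixels[y * 256 + x] from the palette lookup ...
            GraphicsDevice.Textures[0] = null;  // unbind before SetData, or XNA complains
            frame.SetData(pixels);              // one upload instead of ~60K draw calls
            spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.Opaque,
                              SamplerState.PointClamp, null, null); // point = crisp pixels
            spriteBatch.Draw(frame, new Rectangle(0, 0, 1024, 960), Color.White);
            spriteBatch.End();
            base.Draw(gameTime);
        }

    As for the literal question: SpriteBatch batches well, but 60K Draw calls per frame at 60 FPS is well past what it comfortably sustains on the 360, which is why the texture route is the common answer.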

    Read the article

  • NES Programming - Nametables?

    - by Jeffrey Kern
    Hello everyone, I'm wondering how the NES displays its graphical muscle. I've researched stuff online and read through it, but I'm wondering about one last thing: nametables.

    Basically, from what I've read, each 8x8 block in a NES nametable points to a location in the pattern table, which holds graphic memory. In addition, the nametable also has an attribute table which sets a certain color palette for each 16x16 block. They're linked up together like this (assuming 16 8x8 blocks):

    Nametable, with A B C D = pointers to sprite data:

        ABBB
        CDCC
        DDDD
        DDDD

    Attribute table, with 1 2 3 = pointers to color palette data, where < references the value to the left, ^ the value above, and ' the value to the left and above:

        1<2<
        ^'^'
        3<3<
        ^'^'

    So, in the example above, the blocks would be colored as so:

        1A 1B 2B 2B
        1C 1D 2C 2C
        3D 3D 3D 3D
        3D 3D 3D 3D

    Now, if I have this on a fixed screen, it works great, because the NES resolution is 256x240 pixels. But how do these tables get adjusted for scrolling? Nametable 0 can scroll into nametable 1, and if you keep scrolling, nametable 0 will wrap around again. That I get. But what I don't get is how the attribute table wraps around as well. From what I've read online, the 16x16 blocks it assigns attributes to will cause color distortions on the edge tiles of the screen (as seen when you scroll left to right and vice versa in SMB3). My concern is that I understand how to scroll the nametables, but how do you scroll the attribute table? For instance, if I have a green block on the left side of the screen, moving the screen to the right should in theory cause the tiles to the right to be green as well until they move more into frame, at which point they'll revert to their normal colors.
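    The address math answers most of this: the attribute table isn't a separate scrolling surface - it's the last 64 bytes of whichever nametable the scroll is currently reading, so it "scrolls" for free, but only at 16x16 granularity. A C# sketch of the standard PPU calculation (nesdev-documented behavior):

        // The attribute table sits at the end of each 1KB nametable.
        int AttributeAddress(int nametableBase, int coarseX, int coarseY)
        {
            // one byte covers a 4x4-tile (32x32 px) cell: 8 cells per row
            return nametableBase + 0x3C0 + (coarseY / 4) * 8 + (coarseX / 4);
        }

        int PaletteIndex(byte attributeByte, int coarseX, int coarseY)
        {
            // four 2-bit palette numbers per byte, one per 16x16 px quadrant
            int shift = ((coarseY & 2) << 1) | (coarseX & 2); // 0, 2, 4, or 6
            return (attributeByte >> shift) & 0x3;
        }

    The SMB3-style edge distortion falls straight out of the quadrant encoding: tiles can scroll in 8-pixel steps, but a palette can only change per 16x16 quadrant, so a tile entering the screen can briefly share an attribute quadrant (and therefore a palette) with its neighbor until the scroll advances another tile - exactly the green-block behavior described above.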

    Read the article

  • Parsing NSXMLNode Attributes in Cocoa

    - by Jeffrey Kern
    Hello everyone, given the following XML file:

        <?xml version="1.0" encoding="UTF-8"?>
        <application name="foo">
            <movie name="tc" english="tce.swf" chinese="tcc.swf" a="1" b="10" c="20" />
            <movie name="tl" english="tle.swf" chinese="tlc.swf" d="30" e="40" f="50" />
        </application>

    How can I access the attributes ("english", "chinese", "name", "a", "b", etc.) and their associated values on the MOVIE nodes? I currently have in Cocoa the ability to traverse these nodes, but I'm at a loss as to how I can access the data in the MOVIE NSXMLNodes. Is there a way I can dump all of the values from each NSXMLNode into a hashtable and retrieve values that way? I am using NSXMLDocument and NSXMLNodes.
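    Since the traversal already works, the missing piece is that those movie nodes are NSXMLElement instances, and NSXMLElement (unlike plain NSXMLNode) exposes its attributes directly. A sketch, assuming doc is the parsed NSXMLDocument:

        NSError *err = nil;
        NSArray *movies = [doc nodesForXPath:@"//movie" error:&err];
        for (NSXMLElement *movie in movies) {
            // one attribute directly:
            NSString *english = [[movie attributeForName:@"english"] stringValue];

            // or dump them all into a dictionary, as the question suggests:
            NSMutableDictionary *attrs = [NSMutableDictionary dictionary];
            for (NSXMLNode *attr in [movie attributes])
                [attrs setObject:[attr stringValue] forKey:[attr name]];
        }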

    Read the article

  • Arrays/Lists and computing hashvalues (VB, C#)

    - by Jeffrey Kern
    I feel bad asking this question, but I am currently not able to program and test this, as I'm writing this on my cell phone and not on my dev machine :P (Easy rep points if someone answers! XD) Anyway, I've had experience with using hash values from String objects. E.g., if I have StringA and StringB both equal to "foo", they'll both compute the same hash value, because they're set to equal values. Now what if I have a List<T>, with T being a native data type? If I tried to compute the hash values of ListA and ListB, assuming that they'd both be the same size and contain the same information, wouldn't they have equal hash values as well? Assume a sample data set of bytes with a length of 5: {5, 2, 0, 1, 3}.
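    Since the post couldn't be tested at the time, an editorial answer (a fact of the .NET base library, not from the original post): String overrides GetHashCode to be content-based, but List<T> inherits the identity-based Object.GetHashCode, so two lists with equal contents hash differently. A content-based hash has to be computed explicitly:

        var a = new List<byte> { 5, 2, 0, 1, 3 };
        var b = new List<byte> { 5, 2, 0, 1, 3 };
        Console.WriteLine(a.GetHashCode() == b.GetHashCode());          // almost always False
        Console.WriteLine("foo".GetHashCode() == "foo".GetHashCode()); // True

        // one simple content-based hash over the elements:
        int hash = 17;
        foreach (byte x in a)
            hash = hash * 31 + x;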

    Read the article

  • Using CreateOrthographicOffCenter in XNA

    - by Jeffrey Kern
    I'm trying to figure out how to draw graphics in XNA, and someone else suggested this. But before I attempt to use this... If I create and use this camera, and set LEFT,TOP to 0 and WIDTH=256 and HEIGHT=240, anything I render to the screen will use these coordinates? So a box with a width and height of 1, if set to 0,0 will take up space from 0,0 to 1,1?
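    Yes - that is exactly what an off-center orthographic projection does. A sketch with XNA's parameter order spelled out, since the actual signature is (left, right, bottom, top, near, far) rather than left/top/width/height:

        // Passing bottom=240, top=0 flips Y so the origin is the upper-left,
        // matching screen coordinates:
        Matrix projection = Matrix.CreateOrthographicOffCenter(
            0f, 256f,    // left, right
            240f, 0f,    // bottom, top
            0f, 1f);     // near, far

        // Under this projection, a unit quad at the origin covers exactly
        // (0,0) to (1,1) - one "pixel" of the 256x240 virtual screen.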

    Read the article

  • Native arrays and computing hashvalues (VB, C#)

    - by Jeffrey Kern
    I feel bad asking this question, but I am currently not able to program and test this, as I'm writing this on my cell phone and not on my dev machine :P (Easy rep points if someone answers! XD) Anyway, I've had experience with using hash values from String objects. E.g., if I have StringA and StringB both equal to "foo", they'll both compute the same hash value, because they're set to equal values. Now what if I have a List<T>, with T being a native data type? If I tried to compute the hash values of ListA and ListB, assuming that they'd both be the same size and contain the same information, wouldn't they have equal hash values as well?

    Read the article

  • Computation overhead in C# - Using getters/setters vs. modifying arrays directly and casting speeds

    - by Jeffrey Kern
    I was going to write a long-winded post, but I'll boil it down here: I'm trying to emulate the graphical old-school style of the NES via XNA. However, my FPS is SLOW when trying to modify 65K pixels per frame. If I just loop through all 65K pixels and set them to some arbitrary color, I get 64 FPS. With the code I made to look up which colors should be placed where, I get 1 FPS. I think it is because of my object-oriented code. Right now, I have things divided into about six classes, with getters/setters. I'm guessing that I'm making at least 360K getter calls per frame, which I think is a lot of overhead. Each class contains 1D or 2D arrays holding custom enumerations, ints, Colors, Vector2Ds, or bytes. What if I combined all of the classes into just one, and accessed the contents of each array directly? The code would look a mess and ditch the concepts of object-oriented coding, but the speed might be much faster. I'm also not concerned about access violations, as any attempts to get/set the data in the arrays will be done in blocks. E.g., all writing to the arrays will take place before any data is accessed from them. As for casting: I stated that I'm using custom enumerations, ints, Colors, Vector2Ds, and bytes. Which data types are fastest to use and access in the .NET Framework, XNA, Xbox, and C#? I think that constant casting might be a cause of slowdown here. Also, instead of using math to figure out which indexes data should be placed in, I've used precomputed lookup tables so I don't have to do constant multiplication, addition, subtraction, and division per frame. :)
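    One relevant detail: on the Xbox 360, XNA runs on the .NET Compact Framework, whose JIT inlines far less aggressively than the desktop CLR, so property getters really can stay method calls in the hot loop. The shape this usually takes once they're gone (a sketch; field and array names are made up) is locals fetched once, byte indices, and a Color[] palette so nothing is cast per pixel:

        Color[] palette = this.palette;       // e.g. 64-entry NES-style palette
        byte[] indices  = this.frameIndices;  // one palette index per pixel
        Color[] output  = this.framePixels;   // what gets uploaded to the texture

        for (int i = 0; i < output.Length; i++)
            output[i] = palette[indices[i]];  // one array lookup per pixel, no casts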

    Read the article

  • Visually Viewing and Editing a Two-Dimensional Array - VB/C#

    - by Jeffrey Kern
    I've never personally used any of the data controls within Visual Studio. I need to view and edit a two-dimensional byte array, 16x15 objects. Is there any control capable of editing this information? I've tried to access the data with a DataGridView, but am not sure how to use it. It would be great to edit this information via rows and columns, like you can in Excel. Thank you! Times like this I wish I could just use multiple text boxes and assign them each an index value. Oh VB6, how I miss you :P
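    A plain DataGridView handles this without any data binding. A C# sketch (it assumes a dataGridView1 already dropped on the form, and leaves the byte parsing unvalidated):

        byte[,] data = new byte[16, 15];

        dataGridView1.ColumnCount = data.GetLength(1);
        dataGridView1.RowCount = data.GetLength(0);
        for (int r = 0; r < data.GetLength(0); r++)
            for (int c = 0; c < data.GetLength(1); c++)
                dataGridView1[c, r].Value = data[r, c];  // indexer is [column, row]

        // copy an edit back into the array when a cell is left:
        dataGridView1.CellEndEdit += (s, e) =>
            data[e.RowIndex, e.ColumnIndex] =
                byte.Parse(dataGridView1[e.ColumnIndex, e.RowIndex].Value.ToString());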

    Read the article

  • COCOA: Programmatically creating new windows and accessing window objects

    - by Jeffrey Kern
    I'm having an issue with creating new windows in Cocoa. Hypothetically speaking, let's say I have "WindowA", which has a button called "myButton". When you click on "myButton", it runs the following method in its controller class:

        -(void)openFile2:(id)sender {
            myNextWindow = [[TestWindowController alloc] initWithWindowNibName:@"MainMenu"];
            NSString *testString = @"foo";
            [myNextWindow showWindow:self];
            [myNextWindow setButtonText:testString];
        }

    In a nutshell, the code makes a duplicate of "WindowA" and shows it. As you can see, this code also calls a method named setButtonText:, which is this:

        - (void)setButtonText:(NSString *)passedText {
            [myButton setTitle:passedText];
        }

    The problem: when I call this method locally, in the original window (e.g., [self setButtonText:testString]), the button text changes. However, it does not work in the newly created window (e.g., [myNextWindow setButtonText:testString]). When I step through the newly created window in the debugger, the 'myButton' value it shows is 0x0. Do I have to manually assign controllers/delegates to the new window? I think the 'myButton' in the code isn't associated with the 'myButton' in the newly created window. How would I fix this problem?
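    A likely explanation (an inference from the nil outlet, not from the original post): outlets are wired per-nib, and in the nib being loaded, myButton is probably connected to some other object (such as the app delegate) rather than to File's Owner. For the outlet to be filled in on a TestWindowController, File's Owner in that nib must have its class set to TestWindowController and myButton must be connected to File's Owner. The window also wants its own nib rather than MainMenu:

        // "TestWindow" is a hypothetical nib whose File's Owner is
        // TestWindowController, with window and myButton wired to it:
        myNextWindow = [[TestWindowController alloc]
                           initWithWindowNibName:@"TestWindow"];
        [myNextWindow showWindow:self];       // loads the nib, filling the outlets
        [myNextWindow setButtonText:@"foo"];  // myButton is non-nil from here on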

    Read the article

  • Sorting Custom Objects with Parameter in .NET?

    - by Jeffrey Kern
    Let's say I have a custom object of type Foo. Is there any way I can sort a list of these objects, like List(Of Foo).Sort(), and also be able to sort the list with a passable parameter which will influence the sort, like List(Of Foo).Sort(pValue)? I'm guessing I'll need to define two separate sorts, but I am not sure.
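    The usual answer (a C# sketch; Foo and its Value member are hypothetical): no second sort method is needed, because List(Of T)/List<T>.Sort accepts a comparison delegate, so the parameter can simply be captured by a closure:

        List<Foo> items = GetFoos();

        items.Sort(); // uses Foo's own IComparable<Foo> implementation, if any

        int pValue = 42;
        items.Sort((a, b) =>
            Math.Abs(a.Value - pValue).CompareTo(Math.Abs(b.Value - pValue)));
        // e.g. orders by distance from pValue - the parameter shapes the sort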

    Read the article

  • Sorting Custom Objects with Parameter in VB.Net/C#

    - by Jeffrey Kern
    Let's say I have a custom object of type Foo. Is there any way I can sort a list of these objects, like List(Of Foo).Sort(), and also be able to sort the list with a passable parameter which will influence the sort, like List(Of Foo).Sort(pValue)? I'm guessing I'll need to define two separate sorts, but I am not sure. Thanks!

    Read the article

  • OS X Lion - Installing Oracle 10g Standard Edition

    - by Cellze
    I'm trying to install Oracle 10g on OS X Lion. I previously achieved this on Snow Leopard with the following: http://blog.rayapps.com/2009/09/14/how-to-install-oracle-database-10g-on-mac-os-x-snow-leopard/

    The issue I'm having is that the ulimit settings in the oracle user's .bash_profile cannot be modified. I have the following in the .bash_profile:

        export DISPLAY=:0.0
        export ORACLE_BASE=$HOME
        umask 022
        # must match `sysctl kern.maxprocperuid`
        ulimit -Hu 512
        ulimit -Su 512
        # must match `sysctl kern.maxfilesperproc`
        ulimit -Hn 10240
        ulimit -Sn 10240

    Upon applying the .bash_profile settings with . ~/.bash_profile, I get the following error:

        -bash: ulimit: max user processes: cannot modify limit: Invalid argument

    This then results in sqlplus / as sysdba not functioning correctly, dying with a "Segmentation fault: 11". The output of ulimit -a:

        core file size          (blocks, -c) 0
        data seg size           (kbytes, -d) unlimited
        file size               (blocks, -f) unlimited
        max locked memory       (kbytes, -l) unlimited
        max memory size         (kbytes, -m) unlimited
        open files                      (-n) 10240
        pipe size            (512 bytes, -p) 1
        stack size              (kbytes, -s) 8192
        cpu time               (seconds, -t) unlimited
        max user processes              (-u) 512
        virtual memory          (kbytes, -v) unlimited

    If anyone knows how I can apply these ulimit settings to the oracle user I have created, allowing me to run sqlplus and therefore create a DB, that would be great.
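    A hedged line of attack, not a verified fix: hard ulimits can only be set up to the kernel and launchd caps, which is what the "must match sysctl" comments in the profile are pointing at, so inspect and raise those before sourcing the profile:

        # inspect the kernel caps the comments refer to:
        sysctl kern.maxprocperuid kern.maxfilesperproc

        # raise the caps if they are below the profile's values...
        sudo sysctl -w kern.maxproc=1024 kern.maxprocperuid=512
        sudo sysctl -w kern.maxfilesperproc=10240

        # ...and, on Lion, adjust the per-session limits launchd imposes:
        sudo launchctl limit maxproc 512 1024
        sudo launchctl limit maxfiles 10240 10240

    After a fresh login as the oracle user, ulimit -Hu 512 should then fit inside the caps; whether that also cures the sqlplus segfault depends on whether the limits were its actual cause.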

    Read the article

  • Logfiles filling with iptables logging

    - by Peter I
    OS: Debian 6, server version. I have different log files which are filling up:

        user@server:/var/log$ ls -lahS | head
        total 427G
        -rw-r--r-- 1 root root 267G Nov  2 17:29 bandwidth
        -rw-r----- 1 root adm   44G Nov  2 17:29 kern.log
        -rw-r----- 1 root adm   27G Nov  2 17:29 debug
        -rw-r----- 1 root adm   23G Oct 27 06:33 kern.log.1
        -rw-r----- 1 root adm   17G Nov  2 17:29 messages
        -rw-r----- 1 root adm   14G Oct 27 06:33 debug.1
        -rw-r----- 1 root adm   12G Nov  2 17:29 syslog
        -rw-r----- 1 root adm   12G Nov  1 06:26 syslog.1
        -rw-r----- 1 root adm  9.0G Oct 27 06:33 messages.1

    So I looked at the file /etc/iptables.up.rules, which had these lines in it:

        -A FORWARD -o eth0 -j LOG --log-level 7 --log-prefix BANDWIDTH_OUT:
        -A FORWARD -i eth0 -j LOG --log-level 7 --log-prefix BANDWIDTH_IN:
        -A OUTPUT -o eth0 -j LOG --log-level 7 --log-prefix BANDWIDTH_OUT:
        -A INPUT -i eth0 -j LOG --log-level 7 --log-prefix BANDWIDTH_IN:

    Deleting those lines would solve my problem. But how would I edit them without losing their functionality?
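    Two hedged options, using standard iptables features rather than anything from the original setup:

        # 1. Rate-limit the LOG rules: far smaller logs, but note that totals
        #    computed by summing logged packets will then undercount.
        -A FORWARD -o eth0 -m limit --limit 5/min -j LOG --log-level 7 \
           --log-prefix BANDWIDTH_OUT:

        # 2. Drop LOG entirely: a rule with no -j target still keeps exact
        #    per-rule packet and byte counters, with nothing written to disk.
        -A FORWARD -o eth0
        # read the counters with:  iptables -L FORWARD -v -x

    If the bandwidth file is currently produced by parsing the log prefixes, option 2 changes the workflow to reading counters (or a tool like vnstat) instead, but it keeps the accounting exact while emptying kern.log, syslog, and friends.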

    Read the article

  • Random touchpad and keyboard freezes on new installation

    - by ancaleth
    My touchpad and keyboard freeze up on my newly installed Ubuntu 10.10 and remain frozen until I shut down manually. No keys work and the cursor doesn't move - it's like looking at a screenshot. I was using Ubuntu 10.04 via Wubi before on this laptop, where this problem never occurred. (I did not migrate Wubi or upgrade to 10.10; it's a fresh start. 64-bit on a Dell Studio, plenty of RAM, plenty of free space on the partition, etc.) I can't say there is a pattern yet: once it happened during the download of packages with the Update Manager, once while just using Firefox with no other program running. In between these crashes the laptop was booted once, updates were installed, Firefox was used, and there weren't any problems. Both crashes should be in the attached kern.log, and I noticed there were some errors before the last crash (at the end, obviously). It seems the wireless was experiencing problems. This wasn't noticed on the user end, since the touchpad and keyboard were already frozen. kern.log: http://paste.ubuntu.com/552617/ How can the freezes be fixed? Edit: I will try Ctrl+Alt+F1 and then Ctrl+Alt+F7 when the next freeze occurs, to see if it works again afterwards, as suggested here. But the keyboard seemed pretty frozen to me.

    Read the article

  • Syslog/kernlog filling up with "did not claim interface N before use"

    - by Wayne Werner
    As I discovered when asking this question, it appears that demond_nscan is trying to use a device without claiming it. And it does so hundreds of times per second, apparently. This makes kern.log and syslog huge (100 GB+). In this particular case the problem is a direct result of some Lexmark drivers that were installed (found at /usr/local/lexmark/unix_scan_drivers/bin/demond_nscan). Here are a few things I know:

        - The drivers are for an all-in-one printer/scanner device.
        - There was a previous Lexmark printer-only driver that was installed with CUPS.
        - That driver was the one for CUPS systems, and I think it automatically added the printer to the list of printers in CUPS.
        - The issue started spamming kern.log/syslog only after these drivers were installed, using the Lexmark installers.

    While googling around I found this thread; it's not completely related, but it does mention that this might happen when two drivers try to control the same device at the same time. How can I resolve this issue so that either I only have one driver, or the driver claims the device before using it?
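    Until the driver conflict itself is sorted out, a hedged stopgap (it assumes the system runs Debian/Ubuntu's default rsyslog) is to discard that one message before it reaches the disk:

        # /etc/rsyslog.d/10-lexmark-noise.conf   (hypothetical file name)
        :msg, contains, "did not claim interface" ~

        # then reload the daemon:
        sudo service rsyslog restart

    This keeps kern.log and syslog usable while the real fix - removing one of the two competing Lexmark drivers - is worked out; it does nothing about the underlying device contention.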

    Read the article

  • Random touchpad and keyboard freezes on new 10.10 installation

    - by ancaleth
    My touchpad and keyboard freeze up on my newly installed Ubuntu 10.10. I was using Ubuntu 10.04 via Wubi before on this laptop, where this problem never occurred. (I did not migrate Wubi or upgrade to 10.10; it's a fresh start. 64-bit on a Dell Studio, plenty of RAM, plenty of free space on the partition, etc.) I can't say there is a pattern yet: once it happened during the download of packages with the Update Manager, once while just using Firefox with no other program running. I was forced to shut down manually. In between these crashes the laptop was booted once, updates were installed, Firefox was used, and there weren't any problems. Both crashes should be in the attached kern.log, and I noticed there were some errors before the last crash (at the end, obviously). It seems the wireless was experiencing problems. This wasn't noticed on the user end, since the touchpad and keyboard were already frozen. kern.log: http://paste.ubuntu.com/552617/ How can the freezes be fixed?

    Read the article
