Search Results

Search found 5286 results on 212 pages for 'logs'.

Page 12 of 212

  • Tool to collect stacktraces in server logs

    - by hstoerr
    Is there a tool to collect and count all the different stacktraces in a bunch of server logfiles? Sometimes you have just too many stacktraces that repeat over and over, so it can be difficult to spot the distinct problems among them. The idea would be to have a tool that looks for stacktraces, compares them and counts them. It would be nice if the tool could ignore minor differences (e.g. $Proxy150.dispatchCalls() versus $Proxy25.dispatchCalls()).

    Read the article
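
    A minimal sketch of the idea above in Python: it extracts the "at ..." frames from the given logfiles, collapses numbered proxy names, and counts identical traces. The command-line arguments and the $ProxyNN normalisation rule are illustrative assumptions, not something the original question specifies.

        import re
        import sys
        from collections import Counter

        FRAME = re.compile(r"^\s+at\s+\S+")   # matches "   at com.foo.Bar.baz(Bar.java:42)" lines
        PROXY = re.compile(r"\$Proxy\d+")     # $Proxy150, $Proxy25, ... -> $Proxy

        def stacktraces(lines):
            """Yield each run of consecutive frame lines as one stack trace."""
            trace = []
            for line in lines:
                if FRAME.match(line):
                    trace.append(line.strip())
                elif trace:
                    yield tuple(trace)
                    trace = []
            if trace:
                yield tuple(trace)

        def normalize(trace):
            """Erase minor differences so equivalent traces compare equal."""
            return tuple(PROXY.sub("$Proxy", frame) for frame in trace)

        counts = Counter()
        for path in sys.argv[1:]:             # e.g. python count_traces.py server1.log server2.log
            with open(path) as f:
                counts.update(normalize(t) for t in stacktraces(f))

        for trace, n in counts.most_common(20):
            print("%6d occurrences:" % n)
            for frame in trace:
                print("        " + frame)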

  • Chrome logs me out of everything when I exit--tried cookie-related stuff already

    - by GreatBigBore
    I've been using Chrome very successfully for a long time. It has always kept me logged in to all my sites even after exiting the app. Recently it started logging me out of everything when I exit Chrome. I've fooled around with all the various advanced cookie settings, and I've cycled through the options hoping that Chrome just needed a wakeup call or a reset or something. I've also deleted all the cookies in case a corrupted one is confusing Chrome. Nothing works! I see cookies when I log in, but they all go away when I exit Chrome. I've searched all over the place and seen only the standard answers relating to resetting cookies, local data, sessions, that sort of thing. Any Chrome gurus out there, please send a telepathic message to my browser asking it to resume its previous excellent behavior. Alternatively, you could suggest other possible solutions.

    Read the article

  • cron+pam heavily spamming my logs

    - by Lo'oris
    Twice every minute I get this in auth.log:

        May 12 15:21:01 ruptai CRON[25303]: pam_unix(cron:session): session opened for user root by (uid=0)
        May 12 15:21:01 ruptai CRON[25303]: pam_unix(cron:session): session closed for user root

    This never stops: twice a minute, every minute of every day. I have no idea what it is; I would just like to stop it from pointlessly logging this stuff. It has been going on for ages, so I can't recall when it started. The OS is Debian stable. By the way, I've found questions about this on Google, but no answers.

    Read the article
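
    A common fix for the question above on Debian-family systems is to tell PAM to skip the session record for the cron service. This is a hedged sketch only; the exact file name and line placement vary by release, so check your own /etc/pam.d layout before editing:

        # /etc/pam.d/common-session-noninteractive (or /etc/pam.d/cron on some setups).
        # If the calling service is cron, skip the next (pam_unix) line so the
        # twice-a-minute "session opened/closed" entries stop reaching auth.log.
        session [success=1 default=ignore] pam_succeed_if.so service in cron quiet use_uid
        session required                   pam_unix.so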

  • Apache logs other user read permissions

    - by user2344668
    We have several developers who maintain the system, and I want them to be able to read the log files in /var/log/httpd easily without needing root access. I set the read permission for 'other' users, but when I run tail on the log files I get permission denied:

        [root@ourserver httpd]# chmod -R go+r /var/log/httpd
        [root@ourserver httpd]# ls -la
        drwxr--r--  13 root root 4096 Oct 25 03:31 .
        drwxr-xr-x.  6 root root 4096 Oct 20 03:24 ..
        drwxr-xr-x   2 root root 4096 Oct 20 03:24 oursite.com
        drwxr-xr-x   2 root root 4096 Oct 20 03:24 oursite2.com
        -rw-r--r--   1 root root    0 May  7 03:46 access_log
        -rw-r--r--   1 root root 3446 Oct 24 22:05 error_log

        [me@ourserver ~]$ tail -f /var/log/httpd/oursite.com/error.log
        tail: cannot open `/var/log/httpd/oursite/error.log' for reading: Permission denied

    Maybe I'm missing something about how permissions work, but I'm not finding any easy answers.

    Read the article
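
    In the listing above, /var/log/httpd itself is drwxr--r--: group and other have read but not execute, and without the execute (search) bit a directory cannot be traversed, which is why tail fails even though the files are world-readable. A hedged sketch of the usual fix; chmod's capital X adds execute only to directories, not to plain files:

        # make the directories traversable while leaving regular files non-executable
        chmod -R go+rX /var/log/httpd

        # equivalent, more explicit form: add the search bit to directories only
        find /var/log/httpd -type d -exec chmod go+x {} +

    Note that log rotation may recreate files or directories with the old modes, so the rotation configuration's create/ownership settings may need the same treatment.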

  • Search SharePoint ULS Logs with Windows

    - by djeeg
    Hi, I have a standard/new Windows 2008 R2 install with SharePoint 2010, and I am looking for a SharePoint exception that occurred sometime during the last week. So I open Windows Explorer and go to the logs directory (C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\LOGS). In the toolbar I can enter some search text (exception), or I can press Ctrl-F, which puts my cursor in the same search box. First it searches the filenames and comes back with no results; then I click File Contents, and it still comes back with no results. Now I think maybe there are no errors, so I search for something that I know is in the logs (w3wp); still no results. In previous versions of Windows I could usually fix this by making *.log files be read as text, but apparently (http://technet.microsoft.com/en-us/library/cc725753(WS.10).aspx) *.log files should already be read as text. Any idea how to make the search really search the log file contents? I would prefer a solution that does not involve installing any 3rd-party software (e.g. ULSViewer), but registry/PowerShell settings are okay.

    Read the article
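
    Since the question above allows PowerShell, a hedged alternative is to skip Windows Search entirely and grep the ULS logs directly with Select-String; the path and pattern below are taken from the question and can be adjusted:

        # list every ULS log line containing "exception", with file name and line number
        Select-String -Path "C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\LOGS\*.log" -Pattern "exception" |
            Select-Object Filename, LineNumber, Line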

  • Websphere logs report {0} File not found, but application continues to work without issues

    - by Eric
    A WebSphere 6.1 server is running a Struts application that seems to be working fine. In the logs, however, I'm seeing the following error message, which is continually emailed to the support staff.

        com.ibm.ws.webcontainer.webapp.WebAppErrorReport: SRVE0190E: File not found: {0}
            at com.ibm.ws.webcontainer.webapp.WebAppDispatcherContext.sendError(WebAppDispatcherContext.java:536)
            at com.ibm.ws.webcontainer.srt.SRTServletResponse.sendError(SRTServletResponse.java:930)
            at com.ibm.ws.webcontainer.extension.DefaultExtensionProcessor.handleRequest(DefaultExtensionProcessor.java:524)
            at com.ibm.ws.wswebcontainer.extension.DefaultExtensionProcessor.handleRequest(DefaultExtensionProcessor.java:111)
            at com.ibm.ws.webcontainer.webapp.WebApp.handleRequest(WebApp.java:3129)
            at com.ibm.ws.webcontainer.webapp.WebGroup.handleRequest(WebGroup.java:238)
            at com.ibm.ws.webcontainer.WebContainer.handleRequest(WebContainer.java:811)
            at com.ibm.ws.wswebcontainer.WebContainer.handleRequest(WebContainer.java:1433)
            at com.ibm.ws.webcontainer.channel.WCChannelLink.ready(WCChannelLink.java:93)

    I can narrow the issue down to a single Action and JSP, which are too big to show here, but here's the action definition in struts-config.xml:

        <action path="/HappyDefaultThing"
                name="HappyDefaultThingActionForm"
                type="com.foo.webadministration.action.HappyDefaultThingAction"
                validate="true"
                input="/WaAssignDefaultHappyThing.jsp"
                scope="session">
            <forward name="success" path="/WaAssignDefaultHappyThing.jsp"/>
            <forward name="failure" path="/WaAssignDefaultHappyThing.jsp"/>
        </action>

    As far as I can see, nothing is missing and everything necessary is being found, yet the logs say "File not found: {0}". What is "{0}"? The stack trace only shows IBM's code, which I can't see the source of and therefore can't trace. Is this a bug in the WebSphere code? I'd appreciate any help.

    Read the article

  • Tomcat Application Generating too many logs

    - by rohitgu
    Hi, I have an application which runs on a Tomcat 6.0.20 server on Ubuntu Linux. It generates a huge amount of logging in catalina.out; most of it is produced while the application is being used, but it is not produced by the application itself. Some of the log entries it generates are given below:

        Apr 16, 2010 2:55:24 PM org.apache.tomcat.util.digester.Digester startElement
        FINE: startElement(,,mime-type)
        Apr 16, 2010 2:55:24 PM org.apache.tomcat.util.digester.Digester startElement
        FINE: Pushing body text ' '
        Apr 16, 2010 2:55:24 PM org.apache.tomcat.util.digester.Digester startElement
        FINE: New match='web-app/mime-mapping/mime-type'
        Apr 16, 2010 2:55:24 PM org.apache.tomcat.util.digester.Digester startElement
        FINE: Fire begin() for CallParamRule[paramIndex=1, attributeName=null, from stack=false]
        Apr 16, 2010 2:55:24 PM org.apache.tomcat.util.digester.Digester characters
        FINE: characters(audio/x-mpeg)
        Apr 16, 2010 2:55:24 PM org.apache.tomcat.util.digester.Digester endElement
        FINE: endElement(,,mime-type)
        Apr 16, 2010 2:55:24 PM org.apache.tomcat.util.digester.Digester endElement
        FINE: match='web-app/mime-mapping/mime-type'
        Apr 16, 2010 2:55:24 PM org.apache.tomcat.util.digester.Digester endElement
        FINE: bodyText='audio/x-mpeg'
        Apr 16, 2010 2:55:24 PM org.apache.tomcat.util.digester.Digester endElement
        FINE: Fire body() for CallParamRule[paramIndex=1, attributeName=null, from stack=false]
        Apr 16, 2010 2:55:24 PM org.apache.tomcat.util.digester.Digester endElement
        FINE: Popping body text '

    How can I turn them off? This is very important, since this is a production application. Regards, Rohit

    Read the article
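
    The FINE lines in the entry above come from Tomcat's own java.util.logging (JULI) configuration, not from the application, so the usual remedy is to raise the level for the offending loggers in conf/logging.properties. A hedged sketch; the logger names to quieten are assumptions and should be matched against whatever actually appears in catalina.out:

        # $CATALINA_BASE/conf/logging.properties
        # Stop FINE-level digester chatter from being written
        org.apache.tomcat.util.digester.Digester.level = INFO

        # or, more bluntly, keep the console handler (which feeds catalina.out) above FINE
        java.util.logging.ConsoleHandler.level = INFO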

  • Use the output of logs in the execution of a program

    - by myle
    When I try to create a specific object, the program crashes. However, I use a module (mechanize) which logs useful information just before the crash. If I somehow had this information available, I could avoid the crash. Is there any way to use the information which is logged (when I use the function set_debug_redirects) during the normal execution of the program? To be a bit more specific: I try to emulate the login behaviour on a web page. The program crashes because it can't handle a specific "Following HTTP-EQUIV=REFRESH to <omitted_url>" redirect. Given this URL, which is available in the logs but not as part of the exception which is thrown, I could visit the page and complete the login process successfully. Any other suggestions that may solve the problem are welcome. The code so far follows:

        # Imports implied by the snippet (not shown in the original excerpt)
        import sys
        import cookielib
        import mechanize
        from mechanize import HTTPError

        SERVICE_LOGIN_BOX_URL = "https://www.google.com/accounts/ServiceLoginBox?service=adsense&ltmpl=login&ifr=true&rm=hide&fpui=3&nui=15&alwf=true&passive=true&continue=https%3A%2F%2Fwww.google.com%2Fadsense%2Flogin-box-gaiaauth&followup=https%3A%2F%2Fwww.google.com%2Fadsense%2Flogin-box-gaiaauth&hl=en_US"

        def init_browser():
            # Browser
            br = mechanize.Browser()
            # Cookie Jar
            cj = cookielib.LWPCookieJar()
            br.set_cookiejar(cj)
            # Browser options
            br.set_handle_equiv(True)
            br.set_handle_gzip(False)
            br.set_handle_redirect(True)
            br.set_handle_referer(True)
            br.set_handle_robots(True)
            br.set_handle_refresh(mechanize._http.HTTPRefreshProcessor(), max_time=30.0, honor_time=False)
            # Want debugging messages?
            #br.set_debug_http(True)
            br.set_debug_redirects(True)
            #br.set_debug_responses(True)
            return br

        def adsense_login(login, password):
            br = init_browser()
            r = br.open(SERVICE_LOGIN_BOX_URL)
            html = r.read()
            # Select the first (index zero) form
            br.select_form(nr=0)
            br.form['Email'] = login
            br.form['Passwd'] = password
            br.submit()
            req = br.click_link(text='click here to continue')
            try:
                # this is where it crashes
                br.open(req)
            except HTTPError, e:
                sys.exit("post failed: %d: %s" % (e.code, e.msg))
            return br

    Read the article

  • Is there a sample set of web log data available for testing analysis against?

    - by Peter
    Sorry if this isn't strictly speaking a programming question, but I figure my best chance of success would be to ask here. I'm developing some web log file analysis algorithms, but to date I only have access to a fairly small amount of web log data to process. One algorithm I want to use makes some assumptions about 'the shape' of typical web log data, and so I'd like to test it against a larger 'exemplar' - perhaps the logs of a busy site with a good distribution of traffic from different sources etc. Is there a set of such data available somewhere? Thanks for any help.

    Read the article
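
    For the question above, whatever exemplar data set is used, the "shape" in question is usually the Apache Common/Combined Log Format, one request per line. A small parsing sketch in Python with a made-up sample line for illustration:

        import re

        # host ident user [time] "request" status bytes "referer" "user-agent"
        COMBINED = re.compile(
            r'(?P<host>\S+) \S+ (?P<user>\S+) \[(?P<time>[^\]]+)\] '
            r'"(?P<request>[^"]*)" (?P<status>\d{3}) (?P<size>\S+) '
            r'"(?P<referer>[^"]*)" "(?P<agent>[^"]*)"'
        )

        sample = ('192.0.2.10 - - [10/Oct/2010:13:55:36 -0700] "GET /index.html HTTP/1.1" '
                  '200 2326 "http://example.com/start" "Mozilla/5.0"')

        m = COMBINED.match(sample)
        if m:
            print(m.group('host'), m.group('request'), m.group('status'))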

  • I want my logs sent to my mail with logrotate

    - by lericson
    Not strictly a question about programming as such; more of a log-handling question. Anyway: my company has multiple clients, and each of these clients has a set of logs that I'd very much like to have sent to me by e-mail. Another prerequisite is that they're highlighted with simple HTML. All that is very well; I've managed to make a highlighter for the given log types. So what I do is use logrotate's prerotate hook to send the logs as an e-mail message. Example:

        /var/log/a.log /var/log/b.log {
            daily
            missingok
            copytruncate
            prerotate
                /usr/bin/python /home/foo/hilight_logs /var/log/{a,b}.log | /usr/sbin/sendmail -FLog\ mailer [email protected] [email protected]
            endscript
        }

    The problem with this approach is basically that logrotate sucks: it'll run the command once for every log file specified in the stanza, and to my knowledge there's no way to know which of the log files is being handled (which wouldn't really help anyway). Short of repeating the exact same logrotate stanza up to 10 times on different machines, the only thing I can do is get bogged down with log spam every night. And I grew tired of it today, so I ask.

    Read the article
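
    For the entry above, logrotate's sharedscripts directive is the usual way to make a prerotate/postrotate script run once per stanza rather than once per matched log file. A hedged sketch, with the paths and (redacted) addresses copied from the question:

        /var/log/a.log /var/log/b.log {
            daily
            missingok
            copytruncate
            sharedscripts
            prerotate
                /usr/bin/python /home/foo/hilight_logs /var/log/{a,b}.log | /usr/sbin/sendmail -FLog\ mailer [email protected] [email protected]
            endscript
        }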

  • IP Blocking on error logs PHP

    - by Lee
    Is there a way to block error logging for a specific set of IPs? Basically, McAfee carries out a range of tests against the server nightly, and we don't want to record these in our error logs. Is there a good way to prevent this from happening? Hope you can advise!

    Read the article

  • Django logs: any tutorial to log to a file

    - by Algorist
    Hi, I am working on a Django project that I didn't start; the developer who was working on it has left. During the knowledge transfer I was told that all events are logged to the database. I don't find the database interface useful for searching the logs, and sometimes events don't even seem to get logged (I might be wrong). I want to know if there is an easy tutorial that explains how to enable logging to a file in Django with minimal configuration changes. Thank you, Bala

    Read the article
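
    A minimal sketch of file logging with Python's standard logging module, which Django code can call directly; the file path and logger name below are placeholders, and newer Django releases also accept a LOGGING dictionary in settings.py that wraps the same machinery:

        # settings.py (or any module that is imported at startup)
        import logging

        logging.basicConfig(
            filename='/var/log/myproject/django.log',   # hypothetical path; must be writable
            level=logging.INFO,
            format='%(asctime)s %(levelname)s %(name)s: %(message)s',
        )

        # anywhere in a view, model or management command
        logger = logging.getLogger('myproject')
        logger.info('user %s logged in', 'bala')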

  • Azure Diagnostics wrt Custom Logs and honoring scheduledTransferPeriod

    - by kjsteuer
    I have implemented my own TraceListener similar to http://blogs.technet.com/b/meamcs/archive/2013/05/23/diagnostics-of-cloud-services-custom-trace-listener.aspx . One thing I noticed is that the logs show up immediately in my Azure Table Storage. I wonder if this is expected with custom trace listeners, or whether it is because I am in a development environment. My diagnostics.wadcfg:

        <?xml version="1.0" encoding="utf-8"?>
        <DiagnosticMonitorConfiguration configurationChangePollInterval="PT1M" overallQuotaInMB="4096"
                                        xmlns="http://schemas.microsoft.com/ServiceHosting/2010/10/DiagnosticsConfiguration">
          <DiagnosticInfrastructureLogs scheduledTransferLogLevelFilter="Information" />
          <Directories scheduledTransferPeriod="PT1M">
            <IISLogs container="wad-iis-logfiles" />
            <CrashDumps container="wad-crash-dumps" />
          </Directories>
          <Logs bufferQuotaInMB="0" scheduledTransferPeriod="PT30M" scheduledTransferLogLevelFilter="Information" />
        </DiagnosticMonitorConfiguration>

    I have changed my approach a bit. Now I am defining the listener in the web.config of my web role. I notice that when I set autoflush to true in the web.config, everything works, but scheduledTransferPeriod is not honored because the flush method pushes straight to table storage. I would like scheduledTransferPeriod to trigger the flush, or to trigger a flush after a certain number of log entries, as if the buffer were full. Then I can also flush on server shutdown. Is there any method or event on the custom TraceListener where I can listen for the scheduledTransferPeriod?

        <system.diagnostics>
          <!-- http://msdn.microsoft.com/en-us/library/sk36c28t(v=vs.110).aspx
               By default autoflush is false.
               By default useGlobalLock is true. While we try to be threadsafe, we keep this default for now.
               Later, if we would like to increase performance, we can remove this.
               See http://msdn.microsoft.com/en-us/library/system.diagnostics.trace.usegloballock(v=vs.110).aspx -->
          <trace>
            <listeners>
              <add name="TableTraceListener" type="Pos.Services.Implementation.TableTraceListener, Pos.Services.Implementation" />
              <remove name="Default" />
            </listeners>
          </trace>
        </system.diagnostics>

    I have modified the custom trace listener to the following:

        namespace Pos.Services.Implementation
        {
            class TableTraceListener : TraceListener
            {
                #region Fields

                // connection string for azure storage
                readonly string _connectionString;

                // Custom sql storage table for logs.
                // TODO put in config
                readonly string _diagnosticsTable;

                [ThreadStatic]
                static StringBuilder _messageBuffer;

                readonly object _initializationSection = new object();
                bool _isInitialized;
                CloudTableClient _tableStorage;

                readonly object _traceLogAccess = new object();
                readonly List<LogEntry> _traceLog = new List<LogEntry>();

                #endregion

                #region Constructors

                public TableTraceListener()
                    : base("TableTraceListener")
                {
                    _connectionString = RoleEnvironment.GetConfigurationSettingValue("DiagConnection");
                    _diagnosticsTable = RoleEnvironment.GetConfigurationSettingValue("DiagTableName");
                }

                #endregion

                #region Methods

                /// <summary>
                /// Flushes the entries to the storage table
                /// </summary>
                public override void Flush()
                {
                    if (!_isInitialized)
                    {
                        lock (_initializationSection)
                        {
                            if (!_isInitialized)
                            {
                                Initialize();
                            }
                        }
                    }

                    var context = _tableStorage.GetTableServiceContext();
                    context.MergeOption = MergeOption.AppendOnly;

                    lock (_traceLogAccess)
                    {
                        _traceLog.ForEach(entry => context.AddObject(_diagnosticsTable, entry));
                        _traceLog.Clear();
                    }

                    if (context.Entities.Count > 0)
                    {
                        context.BeginSaveChangesWithRetries(SaveChangesOptions.None, (ar) => context.EndSaveChangesWithRetries(ar), null);
                    }
                }

                /// <summary>
                /// Creates the storage table object. This class does not need to be locked because the caller is locked.
                /// </summary>
                private void Initialize()
                {
                    var account = CloudStorageAccount.Parse(_connectionString);
                    _tableStorage = account.CreateCloudTableClient();
                    _tableStorage.GetTableReference(_diagnosticsTable).CreateIfNotExists();
                    _isInitialized = true;
                }

                public override bool IsThreadSafe
                {
                    get { return true; }
                }

                #region Trace and Write Methods

                /// <summary>
                /// Writes the message to a string buffer
                /// </summary>
                /// <param name="message">the Message</param>
                public override void Write(string message)
                {
                    if (_messageBuffer == null)
                        _messageBuffer = new StringBuilder();

                    _messageBuffer.Append(message);
                }

                /// <summary>
                /// Writes the message with a line breaker to a string buffer
                /// </summary>
                /// <param name="message"></param>
                public override void WriteLine(string message)
                {
                    if (_messageBuffer == null)
                        _messageBuffer = new StringBuilder();

                    _messageBuffer.AppendLine(message);
                }

                /// <summary>
                /// Appends the trace information and message
                /// </summary>
                /// <param name="eventCache">the Event Cache</param>
                /// <param name="source">the Source</param>
                /// <param name="eventType">the Event Type</param>
                /// <param name="id">the Id</param>
                /// <param name="message">the Message</param>
                public override void TraceEvent(TraceEventCache eventCache, string source, TraceEventType eventType, int id, string message)
                {
                    base.TraceEvent(eventCache, source, eventType, id, message);
                    AppendEntry(id, eventType, eventCache);
                }

                /// <summary>
                /// Adds the trace information to a collection of LogEntry objects
                /// </summary>
                /// <param name="id">the Id</param>
                /// <param name="eventType">the Event Type</param>
                /// <param name="eventCache">the EventCache</param>
                private void AppendEntry(int id, TraceEventType eventType, TraceEventCache eventCache)
                {
                    if (_messageBuffer == null)
                        _messageBuffer = new StringBuilder();

                    var message = _messageBuffer.ToString();
                    _messageBuffer.Length = 0;

                    if (message.EndsWith(Environment.NewLine))
                        message = message.Substring(0, message.Length - Environment.NewLine.Length);

                    if (message.Length == 0)
                        return;

                    var entry = new LogEntry()
                    {
                        PartitionKey = string.Format("{0:D10}", eventCache.Timestamp >> 30),
                        RowKey = string.Format("{0:D19}", eventCache.Timestamp),
                        EventTickCount = eventCache.Timestamp,
                        Level = (int)eventType,
                        EventId = id,
                        Pid = eventCache.ProcessId,
                        Tid = eventCache.ThreadId,
                        Message = message
                    };

                    lock (_traceLogAccess)
                        _traceLog.Add(entry);
                }

                #endregion

                #endregion
            }
        }

    Read the article

  • Writing Logs to an XML File with .NET

    - by JL
    I am storing logs in an XML file... In a traditional straight-text approach you would typically just have an openFile... then a writeLine method... How is it possible to add a new entry into the XML document structure, the way you would with the text-file approach?

    Read the article
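
    An XML log file cannot be appended to blindly the way a flat text log can, because the closing root element sits at the end of the file; the usual pattern is load, append a child element, rewrite (or write root-less fragments and wrap them at read time). A hedged sketch of the load-append-rewrite idea, in Python only for brevity since the entry shows no .NET code; in .NET the same steps map onto XDocument.Load, adding an XElement to the root, and Save:

        import xml.etree.ElementTree as ET
        from datetime import datetime

        LOG_PATH = 'log.xml'   # hypothetical file already containing <logs>...</logs>

        tree = ET.parse(LOG_PATH)              # load the existing document
        root = tree.getroot()

        entry = ET.SubElement(root, 'entry')   # append one new log entry element
        entry.set('time', datetime.utcnow().isoformat())
        entry.text = 'something happened'

        tree.write(LOG_PATH, encoding='utf-8', xml_declaration=True)   # rewrite the whole file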

  • Creating a second login page that automatically logs in the user

    - by nsilva
    I have a login page as follows:

        <form action="?" method="post" id="frm-useracc-login" name="frm-useracc-login" >
          <div id="login-username-wrap" >
            <div class="login-input-item left">
              <div class="div-search-label left">
                <div id="div-leftheader-wrap">
                  <p class="a-topheader-infotext left"><strong>Username: </strong></p>
                </div>
              </div>
              <div class="login-input-content left div-subrow-style ui-corner-all">
                <input type="text" tabindex="1" name="txt-username" id="txt-username" class="input-txt-med required addr-search-input txt-username left">
              </div>
            </div>
          </div>
          <div id="login-password-wrap" >
            <div class="login-input-item left">
              <div class="div-search-label left">
                <div id="div-leftheader-wrap">
                  <p class="a-topheader-infotext left"><strong>Password: </strong></p>
                </div>
              </div>
              <div class="login-input-content left div-subrow-style ui-corner-all">
                <input type="password" tabindex="1" name="txt-password" id="txt-password" class="input-txt-med required addr-search-input txt-password left">
              </div>
            </div>
          </div>
          <div id="login-btn-bottom" class="centre-div">
            <div id="login-btn-right">
              <button name="btn-login" id="btn-login" class="btn-med ui-button ui-state-default ui-button-text-only ui-corner-all btn-hover-anim btn-row-wrapper left">Login</button>
              <button name="btn-cancel" id="btn-cancel" class="btn-med ui-button ui-state-default ui-button-text-only ui-corner-all btn-hover-anim btn-row-wrapper left">Cancel</button><br /><br />
            </div>
          </div>
        </form>

    And here is my session.controller.php file: Click Here

    Basically, what I want to do is create a second login page that automatically passes the values to the session controller and logs in. For example, if I go to login-guest.php, I would put in the default values for username and password and then have a jQuery click event that automatically logs them in using $("#btn-login").trigger('click');. The problem is that the session controller automatically goes back to login.php if the session has timed out, and I'm not sure how I could go about achieving this. Any help would be much appreciated!

    Read the article

  • Gnome/X logs off immediately after login -- which logfiles are relevant?

    - by joebuntu
    I've been tinkering with fingerprint-gui as well as X/xrandr resolution settings. When I start my machine, it boots up normally. As soon as X and gnome have finished starting, it logs me off automatically and brings me back to the gdm login prompt with the user list. Then I am, however, able to log in using "Ubuntu Desktop Fail-safe". I've checked the list of start-up applications, but everything seems fine there. I can't yet put my finger on what exactly might be responsible for this: X, gnome or some messed up pam.d settings. So far I've checked /var/logs/X11/xorg.0.log, /var/logs/auth.log and ~/.xsession-errors. In addition, I don't quite seem to understand the "interplay" between X, GDM, GNOME, GNOME-policykit, PAM.d and all that. Are there any other relevant log files that could point me to what's broken? Specs: Ubuntu 10.10 Maverick Meerkat IBM/Lenovo Thinkpad R60, ATI Radeon x1400 Mobility all updates installed Linux User 1 year+,

    Read the article

  • Determining URLs updated via a series of commit logs

    - by adamrubin
    I'm working on a project where I need to know programmatically when a URL has been changed by a developer, during or after a deploy. The obvious answer may be to curl the URL one day, save the output, then curl again in x days and do a diff. That won't work in my case, as I'm only looking for changes the developer made: if the site is a blog, new comments, user-submitted photos, etc. would make that curl diff useless. Take a RoR example, using GitHub, and assume I have access to the entire repository and all commit logs between iterations. Is there a way I could see that "/views/people/show.html.erb" was committed, then backtrack from there (maybe by inspecting routes.rb) to come up with the URL I can then hit via a browser?

    Read the article
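
    For the question above, the first half of the problem (which view templates changed between two commits) can be read straight out of git; a rough sketch, with the commit range and the view-to-URL mapping left as assumptions because the real mapping depends on routes.rb:

        import subprocess

        def changed_views(repo_path, old_rev, new_rev):
            """List the app/views templates touched between two commits."""
            out = subprocess.check_output(
                ['git', 'diff', '--name-only', old_rev, new_rev],
                cwd=repo_path,
            ).decode()
            return [f for f in out.splitlines() if f.startswith('app/views/')]

        # e.g. changed_views('/path/to/repo', 'v1.2', 'HEAD') might return
        # ['app/views/people/show.html.erb'], which a separate routes.rb lookup
        # could then map to something like /people/:id before hitting it in a browser.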
