Search Results

Search found 55692 results on 2228 pages for 'error logging'.

Page 12/2228 | < Previous Page | 8 9 10 11 12 13 14 15 16 17 18 19  | Next Page >

  • Declare module name of classes for logging

    - by Space_C0wb0y
    I am currently adding some features to our logging library. One of them is the ability to declare a module name for a class that automatically gets prepended to any log messages written from within that class. If no module name is provided, nothing is prepended. At the moment I am using a trait class with a static function that returns the name:

        template< class T >
        struct ModuleNameTrait
        {
            static std::string Value() { return ""; }
        };

        template< >
        struct ModuleNameTrait< Foo >
        {
            static std::string Value() { return "Foo"; }
        };

    The specialization can be defined using a helper macro. The drawback is that the module name has to be declared outside of the class; I would like this to be possible within the class. I also want to be able to remove all logging code using a preprocessor directive. I know that with SFINAE one can check whether a template argument has a certain member, but since other people who are not as comfortable with templates as I am will have to maintain the code, I am looking for a much simpler solution. If there is none, I will stick with the traits approach. Thanks in advance!
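
    For readers following along, here is a minimal, self-contained sketch of the trait-plus-macro approach described above; the DECLARE_MODULE_NAME macro and the Log function are illustrative names, not part of the library in question:

        #include <iostream>
        #include <string>

        // Default trait: classes without a declared module name get an empty prefix.
        template< class T >
        struct ModuleNameTrait
        {
            static std::string Value() { return ""; }
        };

        // Helper macro that specializes the trait for a class. It must appear at
        // namespace scope, i.e. outside the class, which is the drawback mentioned above.
        #define DECLARE_MODULE_NAME(cls, name) \
            template< > struct ModuleNameTrait< cls > \
            { static std::string Value() { return name; } };

        class Foo {};
        DECLARE_MODULE_NAME(Foo, "Foo")

        class Bar {};  // no module name declared

        // Illustrative log call that prepends "[Module] " only when a name exists.
        template< class T >
        void Log(const std::string& msg)
        {
            const std::string module = ModuleNameTrait< T >::Value();
            if (!module.empty())
                std::cout << "[" << module << "] ";
            std::cout << msg << "\n";
        }

        int main()
        {
            Log< Foo >("connection opened");  // prints "[Foo] connection opened"
            Log< Bar >("connection opened");  // prints "connection opened"
        }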

    Read the article

  • Logging in "Java Library Code" libs for Android applications

    - by K. Claszen
    I follow the advice to implement Android-device-independent (not application-independent!) library code for my Android apps in a separate "Java Library Code" project. This makes testing easier for me, as I can use the standard Maven project layout, Spring test support and continuous build systems the way I am used to. I do not want to mix this into my Android app project, although that might be possible. I now wonder how to implement logging in this library. As the library will be used on the Android device, I would like to go with android.util.Log. I added the following dependency to my project to get the missing Android packages/classes and dependencies:

        <dependency>
            <groupId>com.google.android</groupId>
            <artifactId>android</artifactId>
            <version>2.2.1</version>
            <scope>provided</scope>
        </dependency>

    But this library just contains stub methods (similar to the android.jar inside the Android SDK), so using android.util.Log results in java.lang.RuntimeException: Stub! when running my unit tests. How do I implement logging that works on the Android device but does not fail outside it (I do not expect it to work there, but it must not fail)? Thanks for any advice. Klaus. For now I am going with the workaround of catching the exception outside Android, but I hope there is a better solution:

        try {
            Log.d("me", "hello");
        } catch (RuntimeException re) {
            // ignore failure outside android
        }
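
    One way to keep that workaround out of every call site is a thin logging facade that the library uses everywhere; a sketch, assuming nothing beyond android.util.Log (the class name LibLog and the System.out fallback are illustrative):

        // Thin facade over android.util.Log: on a real device it delegates to the
        // platform logger; in plain JVM unit tests, where the provided android.jar
        // only contains stubs that throw "Stub!", it falls back to standard output.
        public final class LibLog {

            private LibLog() {}

            public static void d(String tag, String msg) {
                try {
                    android.util.Log.d(tag, msg);
                } catch (RuntimeException stub) {
                    System.out.println("D/" + tag + ": " + msg);
                }
            }

            public static void e(String tag, String msg, Throwable t) {
                try {
                    android.util.Log.e(tag, msg, t);
                } catch (RuntimeException stub) {
                    System.out.println("E/" + tag + ": " + msg);
                    if (t != null) {
                        t.printStackTrace(System.out);
                    }
                }
            }
        }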

    Read the article

  • implementing user tracking (logging) in Rails 3

    - by seth.vargo
    Hi, I'm creating a Rails application in which logging individual user actions is vital. Every time a user clicks a URL, I want to log the action along with all parameters. Here is my current implementation:

        class CreateActivityLogs < ActiveRecord::Migration
          def self.up
            create_table :activity_logs do |t|
              t.references :user
              t.string :ip_address
              t.string :referring_url
              t.string :current_url
              t.text :params
              t.text :action
              t.timestamps
            end
          end
        end

        class ActivityLog < ActiveRecord::Base
          belongs_to :user
        end

    In a controller, I'd like to be able to do something like the following:

        ...
        ActivityLog::log @user.id, params, 'did foo with bar'
        ...

    I'd like the ActivityLog::log method to automatically get the IP address, referring URL, and current URL (I know how to do this already) and create a new record in the table. So, my questions are: How do I do this? How do I use ActivityLog without having to create an instance every time I want to log? Is this the best way? Some people have argued for a flat-file log for this kind of logging, but I want admins to be able to see a user's activity in the backend as well, so I thought a database solution might be better.
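
    A sketch of what such a class-level logger could look like (the method and argument names are illustrative, not taken from the post); passing the controller's request object in lets the method pick up the IP address and URLs itself:

        class ActivityLog < ActiveRecord::Base
          belongs_to :user

          # Class method, so controllers never instantiate ActivityLog themselves.
          def self.log(user_id, request, params, action_text)
            create(
              :user_id       => user_id,
              :ip_address    => request.remote_ip,
              :referring_url => request.referer,
              :current_url   => request.url,
              :params        => params.inspect,
              :action        => action_text
            )
          end
        end

        # In a controller action:
        #   ActivityLog.log(@user.id, request, params, 'did foo with bar')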

    Read the article

  • php error reporting - having trouble matching local & web server settings

    - by Andrew Heath
    I'm trying to add a custom error handler to my site, but in doing so have discovered that my webhost's PHP error reporting settings and those of my localhost (default XAMPP) vary considerably. While I thought I was programming to E_STRICT like a good little boy, adding the error handler to my webhost revealed craploads of runtime notices. Example:

        Runtime notice strtotime() [function.strtotime]: It is not safe to rely on the system's timezone settings.
        Please use the date.timezone setting, the TZ environment variable or the date_default_timezone_set()
        function. In case you used any of those methods and you are still getting this warning, you most likely
        misspelled the timezone identifier. We selected 'America/Chicago' for 'CST/-6.0/no DST' instead
        In /home/...

    Clearly this isn't a red-alert, showstopping error. But what bothers me is that it doesn't show up on my localhost. I'd certainly like to improve my code by addressing these sorts of issues if I could see them! I've looked through both php.ini files; my webhost's setting is

        error_reporting = E_ALL & ~E_NOTICE

    whereas mine was

        error_reporting = E_STRICT

    which I had thought was better. However, changing mine to match and rebooting the server doesn't seem to have accomplished anything. Could someone please point me in the right direction?
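
    A small bootstrap snippet along these lines can make a local XAMPP box at least as strict as production and get rid of the timezone notice; a sketch, with the timezone value obviously an assumption to adjust:

        <?php
        // Report everything while developing. Before PHP 5.4, E_STRICT is not part
        // of E_ALL, so it has to be OR-ed in explicitly.
        error_reporting(E_ALL | E_STRICT);
        ini_set('display_errors', '1');   // show notices in the browser locally
        ini_set('log_errors', '1');       // and keep them in the PHP error log too

        // Silences the strtotime() timezone notice quoted above.
        date_default_timezone_set('America/Chicago');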

    Read the article

  • Error handling approach on PHP

    - by Industrial
    Hi everybody, we have a web server that we're about to launch a number of applications onto. They will all share database and memcached servers, but each application has its own MySQL database, and all memcached keys are prefixed per application.

    Possible scenario: if a memcached server in our cluster goes boom, we want someone (the system admin on duty) to be automatically contacted by email, iPhone push notification or any other appropriate means. If we were to install 150 identical applications for our customers on our servers and a memcached server died, all 150 applications would individually find this out and contact our system admin, who is most certainly going to start thinking about getting a new job where he or she isn't woken up by 150 messages sent at 4:15 in the morning.

    Possible solution: one idea is to set up an external server for error handling that gets a $_POST or cURL request and handles storage of the error message depending on the seriousness of the actual error. On receiving the error call it would of course check whether the same memcached server has already been reported as offline, in which case there would be no need to spam the system admin with additional reminders...

    The questions: What's a good approach to handling errors like this? How do the big guys in the industry handle it? Thanks!
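
    For illustration, a sketch of what the reporting call from each application might look like; the endpoint URL and field names are made up for the example, and the de-duplication would live on the collector's side:

        <?php
        // Fire-and-forget incident report to a central collector, which is the one
        // place that decides whether the admin has already been paged about this.
        function report_incident($service, $message)
        {
            $ch = curl_init('https://errors.example.com/report');
            curl_setopt_array($ch, array(
                CURLOPT_POST           => true,
                CURLOPT_POSTFIELDS     => array(
                    'service' => $service,           // e.g. "memcached-03"
                    'message' => $message,
                    'app'     => 'customer-app-042', // which of the 150 apps noticed it
                ),
                CURLOPT_RETURNTRANSFER => true,
                CURLOPT_TIMEOUT        => 2,         // never let monitoring slow the app down
            ));
            $ok = curl_exec($ch);
            curl_close($ch);
            return $ok !== false;
        }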

    Read the article

  • SQL SERVER – Puzzle – Challenge – Error While Converting Money to Decimal

    - by pinaldave
    Earlier I wrote SQL SERVER – Challenge – Puzzle – Usage of FAST Hint and I received some good comments. Here is another question to tease your mind. Run the following script and you will see that it throws an error:

        DECLARE @mymoney MONEY;
        SET @mymoney = 12345.67;
        SELECT CAST(@mymoney AS DECIMAL(5,2)) MoneyInt;
        GO

    The MONEY datatype looks visually similar to DECIMAL, so why does it throw the following error?

        Msg 8115, Level 16, State 8, Line 3
        Arithmetic overflow error converting money to data type numeric.

    Please leave a comment with your explanation and I will post the answer on this blog with due credit. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Error Messages, SQL Puzzle, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology
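
    As a hint for experimenting (not the blog's official answer): DECIMAL(5,2) keeps five digits in total with two of them after the decimal point, so its largest value is 999.99; widening the precision makes the cast succeed:

        DECLARE @mymoney MONEY;
        SET @mymoney = 12345.67;
        -- DECIMAL(7,2) leaves five digits before the decimal point, so this works:
        SELECT CAST(@mymoney AS DECIMAL(7,2)) AS MoneyDecimal;
        -- CAST(@mymoney AS DECIMAL(5,2)) would overflow exactly as shown above.
        GO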

    Read the article

  • Error caused by Dropbox in update manager

    - by Olivier Lalonde
    I am getting the following error message when the update manager runs:

        Apt Authentication issue
        Problem during package list update. The package list update failed with a authentication failure.
        This usually happens behind a network proxy server. Please try to click on the "Run this action now"
        button to correct the problem or update the list manually by running Update Manager and clicking on "Check".

        W: A error occurred during the signature verification. The repository is not updated and the previous
        index files will be used. GPG error: http://linux.dropbox.com lucid Release: The following signatures
        were invalid: NODATA 1 NODATA 2
        W: Failed to fetch http://linux.dropbox.com/ubuntu/dists/lucid/Release
        W: Some index files failed to download, they have been ignored, or old ones used instead.

    This error started to appear recently and for no obvious reason (maybe because I created a private PGP key for myself?). I'm running Dropbox v0.7.11 on Ubuntu Lucid 10.04.
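
    When a single third-party repository reports NODATA like this, one low-risk thing to try is throwing away the cached index files for just that repository and fetching them again; a sketch, assuming the standard apt layout:

        # remove the possibly corrupt cached lists for the Dropbox repository only
        sudo rm /var/lib/apt/lists/linux.dropbox.com_*
        # fetch fresh package lists and signatures
        sudo apt-get update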

    Read the article

  • Wammu, Samsung J700 error GetNextMemory code: 56

    - by Tamas
    I have got an (old) Samsung J700i. When connecting it to Wammu with a USB cable, access was denied at first. Now it is OK... However, when I try to get info out of the phone I get an error message:

        Error while communicating with phone
        Description: Internal phone error.
        Function: GetNextMemory
        Error code: 56

    I am using Ubuntu 12.04 and Wammu 0.36, running on Python 2.7.3, using wxPython 2.8.12.1, python-gammu 1.31.0 and Gammu 1.31.0. How may I access the data on the phone? Thanks, Tamas

    Read the article

  • What layer to introduce human readable error messages?

    - by MrLane
    One of the things that I have never been happy with on any project I have worked on over the years, and have never really been able to resolve, is exactly at which tier in an application human-readable error information should be retrieved for display to a user. A common approach that has worked well has been to return strongly typed/concrete "result objects" from the methods on the public surface of the business tier/API. A method on the interface may be:

        public ClearUserAccountsResult ClearUserAccounts(ClearUserAccountsParam param);

    And the result class implementation:

        public class ClearUserAccountsResult : IResult
        {
            public List<Account> ClearedAccounts { get; private set; }
            public bool Success { get; private set; }   // implements IResult
            public string Message { get; private set; } // implements IResult, human readable
            // Constructor implemented here to set the properties, which are read-only to callers...
        }

    This works great when the API needs to be exposed over WCF, as the result object can be serialized. Again, this is only done on the public surface of the API/business tier. The error message can also be looked up from the database, which means it can be changed and localized. However, this idea of returning human-readable information from the business tier has always seemed suspect to me, partly because what constitutes the public surface of the API may change over time... and the API may need to be reused by other API components in the future that do not need the human-readable string messages (so looking them up from a database would be an expensive waste). I am thinking a better approach is to keep the business objects free from such result objects, keep them simple, and retrieve human-readable error strings somewhere closer to the UI layer, or only in the UI itself. But I have two problems here: 1) The UI may be a remote client (WinForms/WPF/Silverlight) or an ASP.NET web application hosted on another server. In these cases the UI will have to fetch the error strings from the server. 2) Often there are multiple legitimate modes of failure. If the business tier becomes too vague and generic in the way it returns errors, there may not be enough information exposed publicly to tell what the error actually was: i.e. if a method has three legitimate modes of failure but returns a boolean to indicate failure, you cannot work out what the appropriate message to display to the user should be. I have thought about using failure enums as a substitute; they can indicate a specific error that can be tested for and coded against. This is sometimes useful within the business tier itself as a way of passing the specifics of a failure via method returns, rather than just a boolean, but it is not so good for serialization scenarios. Is there a well-worn pattern for this? What do people think? Thanks.
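
    A sketch of the failure-enum alternative described at the end of the post (all type and member names here are illustrative): the business tier reports which failure occurred, and a mapping close to the UI decides how to word it, which also keeps localization out of the business layer:

        public enum ClearUserAccountsFailure
        {
            None = 0,
            AccountLocked,
            NothingToClear,
            PermissionDenied
        }

        public class ClearUserAccountsResult
        {
            public bool Success { get; private set; }
            public ClearUserAccountsFailure Failure { get; private set; }

            public ClearUserAccountsResult(bool success, ClearUserAccountsFailure failure)
            {
                Success = success;
                Failure = failure;
            }
        }

        // Lives near (or in) the UI layer, so wording and localization stay out of the API:
        public static class FailureMessages
        {
            public static string For(ClearUserAccountsFailure failure)
            {
                switch (failure)
                {
                    case ClearUserAccountsFailure.AccountLocked:    return "The account is locked.";
                    case ClearUserAccountsFailure.NothingToClear:   return "There was nothing to clear.";
                    case ClearUserAccountsFailure.PermissionDenied: return "You do not have permission to do this.";
                    default:                                        return string.Empty;
                }
            }
        }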

    Read the article

  • Is goto to improve DRY-ness OK?

    - by Marco Scannadinari
    My code has many checks to detect errors in various cases (many conditions would result in the same error), inside a function returning an error struct. Instead of looking like this:

        err_struct myfunc(...) {
            err_struct error = { .error = false };
            ...
            if(something) {
                error.error = true;
                error.description = "invalid input";
                return error;
            }
            ...
            case 1024:
                error.error = true;
                error.description = "invalid input"; // same error, but different detection scenario
                return error;
                break; // don't comment on this break please (EDIT: pun unintended)
            ...

    Is use of goto in the following context considered better than the previous example?

        err_struct myfunc(...) {
            err_struct error = { .error = false };
            ...
            if(something) goto invalid_input;
            ...
            case 1024:
                goto invalid_input;
                break;

            return error;

        invalid_input:
            error.error = true;
            error.description = "invalid input";
            return error;

    Read the article

  • How to log error queries in mysql?

    - by Kaizoku
    I know that there is general_log, which logs all queries, but I want to find out which query had an error, and get the error message. I have tried running a query with an error on purpose, but it is logged as a normal query and the error is not reported. Any ideas?
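
    For context, a sketch of the relevant my.cnf settings; note that log_error only records server-side problems (startup issues, crashes and the like) and the general log records every statement whether it failed or not, so capturing per-statement errors usually ends up being done in the application or a proxy layer:

        [mysqld]
        # server-level errors, not failed client statements
        log_error        = /var/log/mysql/error.log
        # every statement received, successful or not
        general_log      = 1
        general_log_file = /var/log/mysql/general.log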

    Read the article

  • How to diagnose Internal Server error on Lighttpd?

    - by Tomaszs
    I have lighttpd on CentOS 5 with FastCGI and memcached. Periodically, once every week or two, I get an internal server error 500 and I must restart lighttpd manually to get it to work again. In my lighttpd config I've defined an error log file:

        server.errorlog = "/home/lxadmin/httpd/lighttpd/error.log"

    But when I open it, it has no entries for the last few days, only entries from a month ago. So my question is: how do I diagnose what the issue is, and how do I enable error logging for my configuration?
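
    When FastCGI backends are involved, the backend's stderr output often never reaches server.errorlog; a sketch of directives worth trying (verify them against your lighttpd version, since server.breakagelog requires 1.4.20 or newer):

        server.errorlog    = "/home/lxadmin/httpd/lighttpd/error.log"
        # stderr of spawned FastCGI/CGI processes (PHP fatal errors tend to land here)
        server.breakagelog = "/home/lxadmin/httpd/lighttpd/breakage.log"
        # very verbose per-request tracing -- enable only while reproducing the 500s
        debug.log-request-handling = "enable"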

    Read the article

  • weather-indicator: networking error: HTTP Error 403: Forbidden?

    - by quanta
    Here are the contents of ~/.cache/indicator-weather.log:

        [Fetcher] 2012-11-24 11:45:52,619 - DEBUG - Indicator: getWeather for location 'Hanoi, Ha N?i, Vietnam'
        [Fetcher] 2012-11-24 11:45:52,620 - DEBUG - Indicator: getWeather: updating weather report
        [Fetcher] 2012-11-24 11:45:52,620 - DEBUG - Location: default weather source 'Google' chosen for 'Hanoi'
        [Fetcher] 2012-11-24 11:45:53,019 - ERROR - Indicator: networking error: HTTP Error 403: Forbidden
        [Fetcher] 2012-11-24 11:45:53,020 - DEBUG - Indicator: updateWeather: waiting for 'Cacher' thread to terminate
        [Fetcher] 2012-11-24 11:45:53,020 - ERROR - Indicator: updateWeather: could not get weather, leaving cached data

    Read the article

  • realtime logging

    - by Ion Todirel
    I have an application with a loop, part of a "Scheduler", which runs at all times and is the heart of the application. It is pretty much like a game loop, except that my application is a WPF application and not a game. Naturally the application does logging at many points, but the Scheduler does some sensitive monitoring, and sometimes it's impossible to tell just from the logs what may have gone wrong (and by wrong I don't mean exceptions) or what the current status is. Because the Scheduler's inner loop runs at short intervals, you can't do file-I/O-based logging (or use the Event Viewer) in there. First, you need to watch it in real time, and secondly the log file would grow in size very fast. So I was thinking of ways to show this data to the user in real time. Some things I considered:

    1. Display the data in real time in the UI
    2. Use AllocConsole/WriteConsole to display this information in a console
    3. Use a different console application which would display this information, communicating between the Scheduler and the console app using pipes or other IPC techniques
    4. Use Windows' Performance Monitor and somehow feed it with this information
    5. ETW

    Displaying the data in the UI would have its issues. First, it doesn't integrate with the UI I had in mind for my application, and I don't want to complicate the UI just for this; this diagnostic output would only be needed rarely. Secondly, there is going to be some non-trivial data protection, as the Scheduler has its own thread. A separate console window would probably work, but I'm still worried whether it's too much overhead. Allocating my own console, as this is a Windows app, would probably be better than a separate console application (3), as I don't need to worry about IPC and non-blocking communication. However, a user could close the console I allocated, which would be a problem; with a separate process you don't have to worry about that. Assuming there is an API for Performance Monitor, it wouldn't be integrated well with my app or apparent to the users. Using ETW also doesn't solve anything by itself, it was just a random idea; I would still need to display this information somehow. What do others think? Are there other ways I have missed?
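
    A sketch of option 2, allocating a console from the WPF process; the P/Invoke declaration is the standard kernel32 one, while the wrapper class and formatting are illustrative (and note the caveat above: by default, the user closing this console terminates the process):

        using System;
        using System.Runtime.InteropServices;

        static class DebugConsole
        {
            [DllImport("kernel32.dll", SetLastError = true)]
            private static extern bool AllocConsole();

            private static volatile bool enabled;

            // Call from a hidden menu item or command-line switch when
            // real-time diagnostics are wanted.
            public static void Enable()
            {
                if (!enabled)
                    enabled = AllocConsole();
            }

            // Called from the Scheduler loop; does nothing until Enable() succeeds.
            public static void WriteLine(string message)
            {
                if (enabled)
                    Console.WriteLine("{0:HH:mm:ss.fff} {1}", DateTime.Now, message);
            }
        }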

    Read the article

  • Logging errors in SCSF

    - by WF
    I'm quite new to SCSF, and I'm developing an SCSF WinForms application in C# (using the May 2007 version with Visual Studio 2005 and .NET Framework 2.0; I can't use a newer version). I've implemented a business module. What is the best practice for logging errors? I've configured the Logging Application Block, but how do I use it? Thanks for any answers.
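
    Since the post says the Logging Application Block is already configured, a sketch of calling it from a business module; the "General" category and the surrounding method are illustrative and must match whatever categories the app.config defines:

        using System;
        using Microsoft.Practices.EnterpriseLibrary.Logging;

        public void DoBusinessWork()
        {
            try
            {
                // ... business logic ...
            }
            catch (Exception ex)
            {
                LogEntry entry = new LogEntry();
                entry.Message = ex.ToString();
                entry.Categories.Add("General"); // must exist under categorySources in config
                entry.Severity = System.Diagnostics.TraceEventType.Error;
                Logger.Write(entry);
                throw;
            }
        }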

    Read the article

  • is there in R any standard logging package?

    - by mariotomo
    Not only is googling for R terribly difficult, "log4r" has also been taken for Ruby! I am looking for the standard (if any) logging package for R, and some sample usage. I also don't see one at http://cran.r-project.org/web/packages/ (late edit: it is now in its place on CRAN and there's an R-Forge page for it.)
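
    One package in this space is "logging" (modeled on Python's standard logging module); a minimal usage sketch, assuming it has been installed from CRAN:

        install.packages("logging")   # once
        library(logging)

        basicConfig()                 # root logger at INFO level, writing to the console
        loginfo("script started")
        logwarn("input file %s is empty", "data.csv")
        logerror("giving up")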

    Read the article

  • Logging and Error handling in asp.net

    - by parminder
    Hi experts, I have a website running now. I have to implement some logging routines as well as a handler for unhandled exceptions. I have also been looking at ELMAH, which seems good to me. I need something very light and easy to use. Can someone recommend any other options I could choose from? Thanks, Parminder
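
    While evaluating ELMAH or a full logging framework, the lightest possible safety net for unhandled exceptions is an Application_Error handler in Global.asax; a sketch (where the output ends up is illustrative, here it goes to the configured trace listeners):

        protected void Application_Error(object sender, EventArgs e)
        {
            Exception ex = Server.GetLastError();
            if (ex == null) return;

            System.Diagnostics.Trace.TraceError(
                "Unhandled exception at {0}: {1}", Request.RawUrl, ex);

            // Optionally show a friendly page instead of the yellow screen of death:
            // Server.ClearError();
            // Response.Redirect("~/Error.aspx");
        }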

    Read the article

  • Glassfish v3 logging

    - by George Liolios
    In GlassFish v3, how can an application's stdout and stderr output end up in the GlassFish server log ($GF_HOME/domains/domain1/logs/server.log)? Is there a setting that has to be turned on, or do applications now have to provide their own logging?
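
    For application code, one option that typically needs no extra setup is java.util.logging, which GlassFish routes into server.log; a sketch (the class is illustrative, the logging API is the standard JDK one):

        import java.util.logging.Level;
        import java.util.logging.Logger;

        public class OrderService {

            private static final Logger LOG = Logger.getLogger(OrderService.class.getName());

            public void placeOrder(String id) {
                LOG.info("placing order " + id);   // INFO and above reach server.log by default
                try {
                    // ... business logic ...
                } catch (RuntimeException e) {
                    LOG.log(Level.SEVERE, "order " + id + " failed", e);
                    throw e;
                }
            }
        }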

    Read the article

  • Implementing a logging library in .NET with a database as the storage medium

    - by Dave
    I'm just starting to work on a logging library that everyone can use to keep track of any sort of system information while the user is running our application. The simplest example so far is to track Info, Warnings, and Errors. I want all plugins to be able to use this feature, but since each developer might have a different idea of what's important to report, I want to keep this as generic as possible. In the C++ world, I would normally use something like a stl::pair<string,string> to act as a key value pair structure, and have a stl::list of these to act as a "row" in the log. The log cache would then be a list<list<pair<string,string>>> (ugh!). This way, the developers can use a const string key like INFO, WARNING, ERROR to have a consistent naming for a column in the database (for SELECTing specific types of information). I'd like the database to be able to deal with any number of distinct column names. For example, John might have an INFO row with a column called USER, and Bill might have an INFO row with a column called FILENAME. I want the log viewer to be able to display all information, and if one report doesn't have a value for INFO / FILENAME, those fields should just appear blank. So one option is to use List<List<KeyValuePair<String,String>>, and the another is to have the log library consumer somehow "register" its schema, and then have the database do an ALTER TABLE to handle this situation. Yet another idea is to have a table that's just for key value pairs, with a foreign key that maps the key value pairs back to the original log entry. I obviously don't want logging to bog down the system, so I only lock the log cache to make a copy of the data (and remove the already-copied data), then a background thread will dump the information to the database. My specific questions regarding this are: Do you see any performance issues? In other words, have you ever tried something like this and found that certain things just don't work well in practice? Is there a more .NETish way to implement the key value pairs, other than List<List<KeyValuePair<String,String>>>? Even if there is a way to do #2 better, is the ALTER TABLE idea I proposed above a Bad Thing? Would you recommend multiple databases over a single one? I don't yet have an idea of how frequently the log would get written to, but we ideally would like to have lots of low level information. Perhaps there should be a DB with a fixed schema only for the low level stuff, and then another DB that's more flexible for reporting information back to users.

    Read the article

< Previous Page | 8 9 10 11 12 13 14 15 16 17 18 19  | Next Page >