Search Results

Search found 526 results on 22 pages for 'tracing'.

Page 4/22 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • How can I trace a variable at runtime in C#?

    - by luvieere
    How can I track a variable's values as they change, at runtime, in C#? I'm interested in the same functionality that the debugger provides when I'm tracing a variable through execution steps, except that I need to call upon it from my code. Some sort of key-value observing, but for all kinds of variables (local, class, static, etc.), not only properties. So, basically, receive a notification when a variable's value changes.
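
    There is no built-in notification hook for arbitrary locals, fields or statics, so treat the following only as an illustrative workaround rather than an answer: a minimal C# sketch (all names invented) that wraps a value in a small observable holder whose setter raises an event. It only helps where you can change the code to use the wrapper.

        using System;

        // Hypothetical sketch: an observable value holder. Assignments must go
        // through Value, so this does not cover plain locals you cannot touch.
        public sealed class Traced<T>
        {
            private T _value;
            public event Action<T, T> Changed; // (oldValue, newValue)

            public Traced(T initial) { _value = initial; }

            public T Value
            {
                get { return _value; }
                set
                {
                    var old = _value;
                    _value = value;
                    var handler = Changed;
                    if (handler != null) handler(old, value);
                }
            }
        }

        // Usage:
        // var counter = new Traced<int>(0);
        // counter.Changed += (oldV, newV) => Console.WriteLine("counter: {0} -> {1}", oldV, newV);
        // counter.Value = 42;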

    Read the article

  • Log application changes made to the system

    - by Maxim Veksler
    Hello, Windows 7, 64-bit. I have an application which I don't trust but still need to run. I would like to run the installer of this application, and later on the installed executable, under some kind of "strace" for Windows which will record what this application did to the system. Mainly: What files have been created / edited? What registry changes have been made? To what network hosts did the application try to communicate? Ideally I would also be able to generate an "UNDO" action to undo all the changes. Please don't suggest full virtualization solutions such as VirtualBox, VMware and co., because the application should run in the host system (a "sandbox" approach would OTOH be acceptable, IMHO). Do you know of any such utility I can use? Thank you, Maxim.

    Read the article

  • What is stored in %Windir%\System32\LogFiles\WMI\RtBackup?

    - by Helge Klein
    I occasionally notice in Resource Monitor hard disk activity related to ETL files in the folder C:\Windows\System32\LogFiles\WMI\RtBackup. Which process/service creates these ETL files and what is their purpose? Resource Monitor shows "System" as the process which is correct since ETW traces (that is what ETL files are) are created by the kernel. But I am interested in the process that causes the traces to be created. This happens on Windows 7, by the way.

    Read the article

  • Tracking an IP through a SOCKS5 proxy + RDP?

    - by piro
    Hi all. We were having some issues at work until we found that we are being attacked almost every day. The attacker seems pretty smart - at first he was always using a proxy to hide his IP. By scanning I found that they were SOCKS5 proxies. In the last week we had 11 attacks, and every time I found the IP I scanned it with nmap. I found that ALL of the 11 different IP addresses were RDP servers (port 3389 open and accepting RDP connections; I checked ALL of them myself). So here are the questions: 1. Can we trace his real IP back through a SOCKS5 proxy? 2. Can we trace him if he is using some RDP server to hide his IP? Please do not answer with "Call the owner of the proxy server/RDP..." etc.; we already tried that and it didn't work, which is why I am writing here. Thank you very much.

    Read the article

  • ASP.NET trace level

    - by axk
    This question is related to another question of mine. With trace enabled I can get the following (not very verbose) trace of a page request: [2488] aspx.page: Begin PreInit [2488] aspx.page: End PreInit [2488] aspx.page: Begin Init [2488] aspx.page: End Init [2488] aspx.page: Begin InitComplete [2488] aspx.page: End InitComplete [2488] aspx.page: Begin PreLoad [2488] aspx.page: End PreLoad [2488] aspx.page: Begin Load [2488] aspx.page: End Load [2488] aspx.page: Begin LoadComplete [2488] aspx.page: End LoadComplete [2488] aspx.page: Begin PreRender [2488] aspx.page: End PreRender [2488] aspx.page: Begin PreRenderComplete [2488] aspx.page: End PreRenderComplete [2488] aspx.page: Begin SaveState [2488] aspx.page: End SaveState [2488] aspx.page: Begin SaveStateComplete [2488] aspx.page: End SaveStateComplete [2488] aspx.page: Begin Render [2488] aspx.page: End Render Reflector shows that the System.Web.UI.Page.ProcessRequestMain method, which I suppose does the main part of request processing, has more conditional trace messages. For example: if (context.TraceIsEnabled) { this.Trace.Write("aspx.page", "Begin PreInit"); } if (EtwTrace.IsTraceEnabled(5, 4)) { EtwTrace.Trace(EtwTraceType.ETW_TYPE_PAGE_PRE_INIT_ENTER, this._context.WorkerRequest); } this.PerformPreInit(); if (EtwTrace.IsTraceEnabled(5, 4)) { EtwTrace.Trace(EtwTraceType.ETW_TYPE_PAGE_PRE_INIT_LEAVE, this._context.WorkerRequest); } if (context.TraceIsEnabled) { this.Trace.Write("aspx.page", "End PreInit"); } if (context.TraceIsEnabled) { this.Trace.Write("aspx.page", "Begin Init"); } if (EtwTrace.IsTraceEnabled(5, 4)) { EtwTrace.Trace(EtwTraceType.ETW_TYPE_PAGE_INIT_ENTER, this._context.WorkerRequest); } this.InitRecursive(null); So there are these EtwTrace.Trace messages which I don't see in the trace. Going deeper with Reflector shows that EtwTrace.IsTraceEnabled checks whether the appropriate trace level is set: internal static bool IsTraceEnabled(int level, int flag) { return ((level < _traceLevel) && ((flag & _traceFlags) != 0)); } So the question is: how do I control these _traceLevel and _traceFlags, and where do these trace messages (EtwTrace.Trace) go? The code I'm looking at is from .NET Framework 2.0. @Edit: I guess I should start with the ETW Tracing MSDN entry.

    Read the article

  • Page render time in ASP.NET MVC trace

    - by Pankaj
    Hello everyone, I want to check the render time of each page in an ASP.NET MVC application. I am using ASP.NET tracing. I have overridden the OnActionExecuting and OnActionExecuted methods on the BaseController class. protected override void OnActionExecuting(ActionExecutingContext filterContext) { string controler = filterContext.RouteData.Values["controller"].ToString(); string action = filterContext.RouteData.Values["action"].ToString(); StartTime =System.DateTime.Now; System.Diagnostics.Trace.Write(string.Format("Start '{0}/{1}' on: {2}", controler, action, System.DateTime.Now.UtilToISOFormat())); } protected override void OnActionExecuted(ActionExecutedContext filterContext) { string controler = filterContext.RouteData.Values["controller"].ToString(); string action = filterContext.RouteData.Values["action"].ToString(); var totalTime = System.DateTime.Now - this.StartTime; System.Diagnostics.Trace.Write(totalTime.ToString()); System.Diagnostics.Trace.Write(string.Format("End '{0}/{1}' on: {2}", controler, action, System.DateTime.Now.UtilToISOFormat())); } In the OnActionExecuted method I get the total time. How can I show this time in my http://localhost:51335/Trace.axd report?
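
    One point worth hedging: output written through System.Diagnostics.Trace does not show up in Trace.axd on its own; it has to be routed into ASP.NET page tracing (for example by registering a WebPageTraceListener in web.config), or the timing can be written straight to the ASP.NET TraceContext, as in the sketch below (adapted from the question's code; StartTime is assumed to be a DateTime property on the base controller).

        protected override void OnActionExecuted(ActionExecutedContext filterContext)
        {
            string controller = filterContext.RouteData.Values["controller"].ToString();
            string action = filterContext.RouteData.Values["action"].ToString();
            var totalTime = System.DateTime.Now - this.StartTime;

            // Write to the ASP.NET page trace (HttpContext.Trace), which is what
            // Trace.axd displays, instead of System.Diagnostics.Trace.
            filterContext.HttpContext.Trace.Write("Timing",
                string.Format("'{0}/{1}' took {2} ms", controller, action, totalTime.TotalMilliseconds));
        }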

    Read the article

  • The Return Of __FILE__ And __LINE__ In .NET 4.5

    - by Alois Kraus
    Good things are hard to kill. Two of the most useful predefined compiler macros in C/C++ are __FILE__ and __LINE__, which expand to the compilation unit's file name and the line number at which the macro is encountered by the compiler. After 4.5 versions of .NET we are on par with C/C++ again. It is of course not a compiler-expanded macro but an attribute, yet it serves exactly the same purpose. Now we get CallerLineNumberAttribute == __LINE__, CallerFilePathAttribute == __FILE__, CallerMemberNameAttribute == __FUNCTION__ (MSVC extension). The most important one is CallerMemberNameAttribute, which is very useful for implementing the INotifyPropertyChanged interface without having to hard code the name of the property anymore. Now you can simply decorate your change method with the new CallerMemberName attribute and the property name is inserted as a string directly by the C# compiler at compile time. public string UserName { get { return _userName; } set { _userName=value; RaisePropertyChanged(); // no more RaisePropertyChanged("UserName")! } } protected void RaisePropertyChanged([CallerMemberName] string member = "") { var copy = PropertyChanged; if(copy != null) { copy(this, new PropertyChangedEventArgs(member)); } } Nice and handy. This was obviously the prime reason to implement this feature in the C# 5.0 compiler. You can repurpose this feature for tracing to get your hands on the method name of your caller, along with other information, very cheaply now. All of it is filled in at compile time, which is much faster than other approaches like walking the stack. The example on MSDN shows the usage of these attributes: public static void TraceMessage(string message, [CallerMemberName] string memberName = "", [CallerFilePath] string sourceFilePath = "", [CallerLineNumber] int sourceLineNumber = 0) { Console.WriteLine("Hi {0} {1} {2}({3})", message, memberName, sourceFilePath, sourceLineNumber); } When I think of tracing I usually want an API which allows me to trace method enter and leave, and to trace messages with a severity like Info, Warning, Error. When I print a trace message it is very useful to print out the method and type name as well. So your API must either accept the method and type name as strings or extract them automatically by walking back one stack frame and fetching the information from there. The first glaring deficiency is that there is no CallerTypeAttribute yet, because the C# compiler team was not satisfied with its performance.
    A usable Trace API might therefore look like: enum TraceTypes { None = 0, EnterLeave = 1 << 0, Info = 1 << 1, Warn = 1 << 2, Error = 1 << 3 } class Tracer : IDisposable { string Type; string Method; public Tracer(string type, string method) { Type = type; Method = method; if (IsEnabled(TraceTypes.EnterLeave, Type, Method)) { } } private bool IsEnabled(TraceTypes traceTypes, string Type, string Method) { // Do checking here if tracing is enabled return false; } public void Info(string fmt, params object[] args) { } public void Warn(string fmt, params object[] args) { } public void Error(string fmt, params object[] args) { } public static void Info(string type, string method, string fmt, params object[] args) { } public static void Warn(string type, string method, string fmt, params object[] args) { } public static void Error(string type, string method, string fmt, params object[] args) { } public void Dispose() { // trace method leave } } This minimal trace API is very fast but hard to maintain, since you need to pass in the type and method name as hard coded strings which can change from time to time. But now we have at least CallerMemberName to get rid of the explicit method parameter, right? Not really. Since any acceptably usable trace API should have a method signature like Tracexxx(… string fmt, params object[] args), we are not able to add additional optional parameters after the args array. If we put it before the format string we would need to make it optional as well, which would mean the compiler would need to figure out what our trace message and arguments are (not likely), or we would need to specify everything explicitly just like before. There are ways around this by providing a myriad of overloads which in the end are routed to the very same method, but that is ugly. I am not sure whether nobody inside MS agrees that the above API is reasonable to have, or (more likely) whether the whole talk about using this feature for diagnostic purposes was never a core goal at all but simply a byproduct of making the life of INotifyPropertyChanged implementers easier. A way around this would be to allow, after the params argument array, another set of optional arguments which are always filled in by the compiler, but I do not know whether that would be easy. The thing I miss much more is the missing CallerType attribute. But not in the way you would think of. In the API above I added some filtering based on method and type to stay as fast as possible for types where tracing is not enabled at all. It should be no more expensive than an additional method call and a bool variable check to see whether tracing for this type is enabled at all. The data is tightly bound to the calling type and method and should therefore become part of the static type instance. Since extending the CLR type system for tracing is not something I expect to happen, I have come up with an alternative approach which basically allows me to attach run time data to any existing type object in a super fast way. The key to success is the usage of generics.
    class Tracer<T> : IDisposable { string Method; public Tracer(string method) { if (TraceData<T>.Instance.Enabled.HasFlag(TraceTypes.EnterLeave)) { } } public void Dispose() { if (TraceData<T>.Instance.Enabled.HasFlag(TraceTypes.EnterLeave)) { } } public static void Info(string fmt, params object[] args) { } /// <summary> /// Every type gets its own instance with a fresh set of variables to describe the /// current filter status. /// </summary> /// <typeparam name="UsingType"></typeparam> internal class TraceData<UsingType> { internal static TraceData<UsingType> Instance = new TraceData<UsingType>(); public bool IsInitialized = false; // flag if we need to reinit the trace data in case of reconfigured trace settings at runtime public TraceTypes Enabled = TraceTypes.None; // Enabled trace levels for this type } } We do not need to pass the type as a string or Type object to the trace API. Instead we define a generic API that accepts the using type as a generic parameter. Then we can create a TraceData static instance which, due to the nature of generics, is a fresh instance for every new type parameter. My tests on my home machine have shown that this approach is as fast as a simple bool flag check. If you have an application with many types using tracing, you do not want to bring the app down by simply enabling tracing for one special, rarely used type. The trace filter performance for the types which are not enabled must therefore be the fastest code path. This approach has the nice side effect that if you store the TraceData instances in one global list, you can reconfigure tracing at runtime safely by simply setting the IsInitialized flag to false. A similar effect can be achieved with a global static Dictionary<Type,TraceData> object, but big hash tables have random memory access semantics which is bad for cache locality, and you always need to pay for the lookup which involves hash code generation, an equality check and an indexed array access. The generic version is wicked fast and allows you to add more features to your tracing API with minimal perf overhead. But it is cumbersome to always write the generic type argument explicitly, and worse, if you refactor code and move parts of it to other classes it might be that you can no longer configure tracing correctly. I would therefore like to decorate my type with an attribute [CallerType] class Tracer<T> : IDisposable to tell the compiler to fill in the generic type argument automatically. class Program { static void Main(string[] args) { using (var t = new Tracer()) // equivalent to new Tracer<Program>() { } } } That would be really useful and super fast, since you do not need to pass any type object around but you do have full type information at hand. This change would be breaking if another non-generic type exists in the same namespace where now the generic counterpart would be preferred. But this is an acceptable risk in my opinion, since you can today already get conflicts if two generic types of the same name are defined in different namespaces. This would be only a variation of that issue. When you think about this further you can add more features, like tracing the exception in your Dispose method when the method is left with an exception, using that little trick I wrote about some time ago. You can think of tracing as a super fast and configurable switch to write data to an output destination or to execute alternative actions. With such an infrastructure you can e.g.: Reconfigure tracing at run time. Take a memory dump when a specific method is left with a specific exception. Throw an exception when a specific trace statement is hit (useful for testing error conditions). Execute a passed delegate which e.g. dumps additional state when enabled. Write data to an in-memory ring buffer and dump it when specific events occur (e.g. method is left with an exception, triggered from outside). Write data to an output device. …
    This stuff is really useful to have when your code is in production on a mission critical server and you need to find the root cause of sporadic crashes of your application. It could be a buggy graphics card driver which throws access violations into your application (OK, not anymore with .NET 4, unless you enable a compatibility flag) where you would like to have a minidump, or you have reached after two weeks of operation a state where you need a full memory dump at a specific point in time in the middle of a transaction. On my older machine I get, with this super fast approach, 50 million traces/s when tracing is disabled. When I know that tracing is enabled for this type I can walk the stack by using StackFrameHelper.GetStackFramesInternal to check further whether a specific action or output device is configured for this method, which is about 2-3 times faster than the regular StackTrace class. Even with one String.Format I am down to 3 million traces/s, so performance is not so important anymore since at that point I actually want to do something. The CallerMemberName feature of the C# 5 compiler is nice, but I would have preferred to get direct access to the MethodHandle and not to the stringified version of it. But I really would like to see a CallerType attribute implemented to fill in the generic type argument of the call site, to augment the static CLR type data with run time data.
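
    As a hedged illustration of where this is heading (the classes below are an invented variant, not the author's code): even today the generic type parameter plus [CallerMemberName] on the constructor removes both hard coded strings from the call site.

        using System;
        using System.Runtime.CompilerServices;

        class Tracer<T> : IDisposable
        {
            private readonly string _method;

            // The compiler fills in the calling method name at compile time.
            public Tracer([CallerMemberName] string method = "")
            {
                _method = method;
                Console.WriteLine("Enter {0}.{1}", typeof(T).Name, _method);
            }

            public void Dispose()
            {
                Console.WriteLine("Leave {0}.{1}", typeof(T).Name, _method);
            }
        }

        class OrderProcessor
        {
            public void Process()
            {
                using (new Tracer<OrderProcessor>()) // no method or type strings at the call site
                {
                    // ... work ...
                }
            }
        }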

    Read the article

  • Overview of SOA Diagnostics in 11.1.1.6

    - by ShawnBailey
    What tools are available for diagnosing SOA Suite issues? There are a variety of tools available to help you and Support diagnose SOA Suite issues in 11g, but it can be confusing as to which tool is appropriate for a particular situation and what their relationships are. This blog post will introduce the various tools and attempt to clarify what each is for and how they are related. Let's first list the tools we'll be addressing: RDA: Remote Diagnostic Agent DFW: Diagnostic Framework Selective Tracing DMS: Dynamic Monitoring Service ODL: Oracle Diagnostic Logging ADR: Automatic Diagnostics Repository ADRCI: Automatic Diagnostics Repository Command Interpreter WLDF: WebLogic Diagnostic Framework This overview is not meant to be a comprehensive guide on using all of these tools; however, extensive reference materials are included that will provide many more details on their execution. Another point to note is that all of these tools are applicable for Fusion Middleware as a whole, but specific products may or may not have implemented features to leverage them. A couple of the tools have a WebLogic Scripting Tool or 'WLST' interface. WLST is a command interface for executing pre-built functions and custom scripts against a domain. A detailed WLST tutorial is beyond the scope of this post but you can find general information here. There are more specific resources in the below sections. In this post when we refer to 'Enterprise Manager' or 'EM' we are referring to Enterprise Manager Fusion Middleware Control. RDA (Remote Diagnostic Agent) RDA is a standalone tool that is used to collect both static configuration and dynamic runtime information from the SOA environment. RDA is generally run manually from the command line against a domain or single server. When opening a new Service Request, including an RDA collection can dramatically decrease the back and forth required to collect logs and configuration information for Support. After installing RDA you configure it to use the SOA Suite module as described in the referenced resources. The SOA module includes the Oracle WebLogic Server (WLS) module by default in order to include all of the relevant information for the environment. In addition to this basic configuration there is also an advanced mode where you can set the number of thread dumps for the collections, log files, Incidents, etc. When would you use it? When creating a Service Request or otherwise working with Oracle resources on an issue, capturing environment snapshots to baseline your configuration or to diagnose an issue on your own. How is it related to the other tools? RDA is related to DFW in that it collects the last 10 Incidents from the server by default. In a similar manner, RDA is related to ODL through its collection of the diagnostic logs, and these may contain information from Selective Tracing sessions.
    Examples of what it currently collects (for details please see the links in the Resources section): Diagnostic Logs (ODL), Diagnostic Framework Incidents (DFW), SOA MDS Deployment Descriptors, SOA Repository Summary Statistics, Thread Dumps, Complete Domain Configuration. RDA Resources: Webcast Recording: Using RDA with Oracle SOA Suite 11g Blog Post: Diagnose SOA Suite 11g Issues Using RDA Download RDA How to Collect Analysis Information Using RDA for Oracle SOA Suite 11g Products [ID 1350313.1] How to Collect Analysis Information Using RDA for Oracle SOA Suite and BPEL Process Manager 11g [ID 1352181.1] Getting Started With Remote Diagnostic Agent: Case Study - Oracle WebLogic Server (Video) [ID 1262157.1] top DFW (Diagnostic Framework) DFW provides the ability to collect specific information for a particular problem when that problem occurs. DFW is included with your SOA Suite installation and deployed to the domain. Let's define the components of DFW. Diagnostic Dumps: Specific diagnostic collections that are defined at either the 'system' or product level. Examples would be diagnostic logs or thread dumps. Incident: A collection of Diagnostic Dumps associated with a particular problem. Log Conditions: An Oracle Diagnostic Logging event that DFW is configured to listen for. If the event is identified then an Incident will be created. WLDF Watch: The WebLogic Diagnostic Framework or 'WLDF' is not a component of DFW; however, it can be a source of DFW Incident creation through the use of a 'Watch'. WLDF Notification: A Notification is a component of WLDF and is the link between the Watch and DFW. You can configure multiple Notification types in WLDF and associate them with your Watches. 'FMWDFW-notification' is available to you out of the box to allow for DFW notification of Watch execution. Rule: Defines a WLDF Watch or Log Condition for which we want to associate a set of Diagnostic Dumps. When triggered, the specified dumps will be collected and added to the Incident. Rule Action: Defines the specific Diagnostic Dumps to collect for a particular rule. ADR: Automatic Diagnostics Repository; defined for every server in a domain. This is where Incidents are stored. Now let's walk through a simple flow: Oracle Web Services error message OWS-04086 (SOAP Fault) is generated on managed server 1. The DFW Log Condition for OWS-04086 evaluates to TRUE. DFW creates a new Incident in the ADR for managed server 1. DFW executes the specified Diagnostic Dumps and adds the output to the Incident. In this case we'll grab the diagnostic log and thread dump. We might also want to collect the WSDL binding information and SOA audit trail. When would you use it? When you want to automatically collect Diagnostic Dumps at a particular time using a trigger, or when you want to manually collect the information. In either case it can be readily uploaded to Oracle Support through the Service Request. How is it related to the other tools? DFW generates Incidents, which are collections of Diagnostic Dumps. One of the system level Diagnostic Dumps collects the current server diagnostic log, which is generated by ODL and can contain information from Selective Tracing sessions. Incidents are included in RDA collections by default, and ADRCI is a tool that is used to package an Incident for upload to Oracle Support. In addition, both ODL and DMS can be used to trigger Incident creation through DFW. The conditions and rules for generating Incidents can become quite complicated and the below resources go into more detail.
A simpler approach to leveraging at least the Diagnostic Dumps is through WLST (WebLogic Scripting Tool) where there are commands to do the following: Create an Incident Execute a single Diagnostic Dump Describe a Diagnostic Dump List the available Diagnostic Dumps The WLST option offers greater control in what is generated and when. It can be a great help when collecting information for Support. There are overlaps with RDA, however, DFW is geared towards collecting specific runtime information when an issue occurs while existing Incidents are collected by RDA. There are 3 WLDF Watches configured by default in a SOA Suite 11g domain: Stuck Threads, Unchecked Exception and Deadlock. These Watches are enabled by default and will generate Incidents in ADR. They are configured to reset automatically after 30 seconds so they have the potential to create multiple Incidents if these conditions are consistent. The Incidents generated by these Watches will only contain System level Diagnostic Dumps. These same System level Diagnostic Dumps will be included in any application scoped Incident as well. Starting in 11.1.1.6, SOA Suite is including its own set of application scoped Diagnostic Dumps that can be executed from WLST or through a WLDF Watch or Log Condition. These Diagnostic Dumps can be added to an Incident such as in the earlier example using the error code OWS-04086. soa.config: MDS configuration files and deployed-composites.xml soa.composite: All artifacts related to the deployed composite soa.wsdl: Summary of endpoints configured for the composite soa.edn: EDN configuration summary if applicable soa.db: Summary DB information for the SOA repository soa.env: Coherence cluster configuration summary soa.composite.trail: Partial audit trail information for the running composite The current release of RDA has the option to collect the soa.wsdl and soa.composite Diagnostic Dumps. More Diagnostic Dumps for SOA Suite products are planned for future releases along with enhancements to DFW itself. DFW Resources: Webcast Recording: SOA Diagnostics Sessions: Diagnostic Framework Diagnostic Framework Documentation DFW WLST Command Reference Documentation for SOA Diagnostic Dumps in 11.1.1.6 top Selective Tracing Selective Tracing is a facility available starting in version 11.1.1.4 that allows you to increase the logging level for specific loggers and for a specific context. What this means is that you have greater capability to collect needed diagnostic log information in a production environment with reduced overhead. For example, a Selective Tracing session can be executed that only increases the log level for one composite, only one logger, limited to one server in the cluster and for a preset period of time. In an environment where dozens of composites are deployed this can dramatically reduce the volume and overhead of the logging without sacrificing relevance. Selective Tracing can be administered either from Enterprise Manager or through WLST. WLST provides a bit more flexibility in terms of exactly where the tracing is run. When would you use it? When there is an issue in production or another environment that lends itself to filtering by an available context criteria and increasing the log level globally results in too much overhead or irrelevant information. The information is written to the server diagnostic log and is exportable from Enterprise Manager How is it related to the other tools? Selective Tracing output is written to the server diagnostic log. 
This log can be collected by a system level Diagnostic Dump using DFW or through a default RDA collection. Selective Tracing also heavily leverages ODL fields to determine what to trace and to tag information that is part of a particular tracing session. Available Context Criteria: Application Name Client Address Client Host Composite Name User Name Web Service Name Web Service Port Selective Tracing Resources: Webcast Recording: SOA Diagnostics Session: Using Selective Tracing to Diagnose SOA Suite Issues How to Use Selective Tracing for SOA [ID 1367174.1] Selective Tracing WLST Reference top DMS (Dynamic Monitoring Service) DMS exposes runtime information for monitoring. This information can be monitored in two ways: Through the DMS servlet As exposed MBeans The servlet is deployed by default and can be accessed through http://<host>:<port>/dms/Spy (use administrative credentials to access). The landing page of the servlet shows identical columns of what are known as Noun Types. If you select a Noun Type you will see a table in the right frame that shows the attributes (Sensors) for the Noun Type and the available instances. SOA Suite has several exposed Noun Types that are available for viewing through the Spy servlet. Screenshots of the Spy servlet are available in the Knowledge Base article How to Monitor Runtime SOA Performance With the Dynamic Monitoring Service (DMS). Every Noun instance in the runtime is exposed as an MBean instance. As such they are generally available through an MBean browser and available for monitoring through WLDF. You can configure a WLDF Watch to monitor a particular attribute and fire a notification when the threshold is exceeded. A WLDF Watch can use the out of the box DFW notification type to notify DFW to create an Incident. When would you use it? When you want to monitor a metric or set of metrics either manually or through an automated system. When you want to trigger a WLDF Watch based on a metric exposed through DMS. How is it related to the other tools? DMS metrics can be monitored with WLDF Watches which can in turn notify DFW to create an Incident. DMS Resources: How to Monitor Runtime SOA Performance With the Dynamic Monitoring Service (DMS) [ID 1368291.1] How to Reset a SOA 11g DMS Metric DMS Documentation top ODL (Oracle Diagnostic Logging) ODL is the primary facility for most Fusion Middleware applications to log what they are doing. Whenever you change a logging level through Enterprise Manager it is ultimately exposed through ODL and written to the server diagnostic log. A notable exception to this is WebLogic Server which uses its own log format / file. ODL logs entries in a consistent, structured way using predefined fields and name/value pairs. Here's an example of a SOA Suite entry: [2012-04-25T12:49:28.083-06:00] [AdminServer] [ERROR] [] [oracle.soa.bpel.engine] [tid: [ACTIVE].ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'] [userId: ] [ecid: 0963fdde7e77631c:-31a6431d:136eaa46cda:-8000-00000000000000b4,0] [errid: 41] [WEBSERVICE_PORT.name: BPELProcess2_pt] [APP: soa-infra] [composite_name: TestProject2] [J2EE_MODULE.name: fabric] [WEBSERVICE.name: bpelprocess1_client_ep] [J2EE_APP.name: soa-infra] Error occured while handling a post operation[[ When would you use it? You'll use ODL almost every time you want to identify and diagnose a problem in the environment. The entries are written to the server diagnostic log. How is it related to the other tools? The server diagnostic logs are collected by DFW and RDA. 
    Selective Tracing writes its information to the diagnostic log as well. Additionally, DFW log conditions are triggered by ODL log events. ODL Resources: ODL Documentation top ADR (Automatic Diagnostics Repository) ADR is not a tool in and of itself but is where DFW stores the Incidents it creates. Every server in the domain has an ADR location which can be found under <SERVER_HOME>/adr. This is referred to as the ADR 'Base' location. ADR also has what are known as 'Home' locations. Example: You have a domain called 'myDomain' and an associated managed server called 'myServer'. Your admin server is called 'AdminServer'. Your domain home directory is called 'myDomain' and it contains a 'servers' directory. The 'servers' directory contains a directory for the managed server called 'myServer' and here is where you'll find the 'adr' directory, which is the ADR 'Base' location for myServer. To get to the ADR 'Home' locations we drill through a few levels: diag/ofm/myDomain/ In an 11.1.1.6 SOA Suite domain you will see 2 directories here, 'myServer' and 'soa-infra'. These are the ADR 'Home' locations. 'myServer' is the 'system' ADR home and contains system level Incidents. 'soa-infra' is the name that SOA Suite used to register with DFW, and this ADR home contains SOA Suite related Incidents. Each ADR home location contains a series of directories, one of which is called 'incident'. This is where your Incidents are stored. When would you use it? It's a good idea to check on these locations from time to time to see whether a lot of Incidents are being generated. They can be cleaned out by deleting the Incident directories or through the ADRCI tool. If you know that an Incident is of particular interest for an issue you're working with Oracle, you can simply zip it up and provide it. How does it relate to the other tools? ADR is obviously very important for DFW since it's where the Incidents are stored. Incidents contain Diagnostic Dumps that may relate to diagnostic logs (ODL) and DMS metrics. The most recent 10 Incident directories are collected by RDA by default, and ADRCI relies on the ADR locations to help manage the contents. top ADRCI (Automatic Diagnostics Repository Command Interpreter) ADRCI is a command line tool for packaging and managing Incidents. When would you use it? When purging Incidents from an ADR Home location, or when you want to package an Incident along with an offline RDA collection for upload to Oracle Support. How does it relate to the other tools? ADRCI contains a tool called the Incident Packaging System or IPS. This is used to package an Incident for upload to Oracle Support through a Service Request. Starting in 11.1.1.6, IPS will attempt to collect an offline RDA collection and include it with the Incident package. This will only work if Perl is available on the path; otherwise it will give a warning and package only the Incident files. ADRCI Resources: How to Use the Incident Packaging System (IPS) in SOA 11g [ID 1381259.1] ADRCI Documentation top WLDF (WebLogic Diagnostic Framework) WLDF is functionality available in WebLogic Server since version 9. Starting with FMW 11g a link has been added between WLDF and the pre-existing DFW, the WLDF Watch Notification. Let's take a closer look at the flow: There is a need to monitor the performance of your SOA Suite message processing. A WLDF Watch is created in the WLS console that will trigger if the average message processing time exceeds 2 seconds. This metric is monitored through a DMS MBean instance.
    The out of the box DFW Notification (the Notification is called FMWDFW-notification) is added to the Watch. Under the covers this notification is of type JMX. The Watch is triggered when the threshold is exceeded and fires the Notification. DFW has a listener that picks up the Notification and evaluates it according to its rules, etc. When it comes to automatic Incident creation, WLDF is a key component with capabilities that will grow over time. When would you use it? When you want to monitor the WLS server log or an MBean metric for some condition and fire a notification when the Watch is triggered. How does it relate to the other tools? WLDF is used to automatically trigger Incident creation through DFW using the DFW Notification. WLDF Resources: How to Monitor Runtime SOA Performance With the Dynamic Monitoring Service (DMS) [ID 1368291.1] How To Script the Creation of a SOA WLDF Watch in 11g [ID 1377986.1] WLDF Documentation top

    Read the article

  • C# TraceSource class in multithreaded application

    - by matti
    msdn: "Any public static (Shared in Visual Basic) members of this type are thread safe. Any instance members are not guaranteed to be thread safe." it contains only instance methods. How should I use it in a way that all activity gets recorder by TextWriterTraceListener to a text file. Is one static member which all threads use (by calling) TraceEvent-method safe. (I've kind of asked this question in http://stackoverflow.com/questions/1901086/how-to-instantiate-c-tracesources-to-log-multithreaded-asp-net-2-0-web-applica, but I cannot just believe if somebody just says it's OK despite the documentation).

    Read the article

  • ETW tracing from .NET, user mode and driver

    - by Jack Juiceson
    Hi everyone, We have an application whose parts are in .NET, C++ user mode and C++ drivers. The application is divided into several executables that run on demand and communicate with each other using LPC (the processes run in different sessions (winlogon)). Currently we have a home-written logging service to which the .NET and C++ user-mode parts communicate by sending LPC messages. The driver uses DbgPrint and is not always enabled, as it causes the code to run 30% slower (we have lots of logging). I want to have all the logs written in one place and preferably not write the logger myself (I love log4cpp and log4net). The requirement is to write from all the executables and drivers into one place and to have minimal overhead. I have read that ETW is the way to go; however, I wasn't able to find an already-written logger that uses it, like log4cpp or log4net. So basically my question is: do you know if there is an already-implemented ETW appender for log4cpp or log4net that I can use?
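
    Not a ready-made log4net/log4cpp appender, but as a hedged sketch of the managed side: the EventSource class (System.Diagnostics.Tracing in .NET 4.5, or the Microsoft.Diagnostics.Tracing NuGet package for earlier frameworks) lets the .NET parts emit ETW events that any ETW trace session can collect alongside other providers. The provider name below is made up.

        using System.Diagnostics.Tracing;

        // Minimal managed ETW provider. "MyCompany-MyApp" is an invented name;
        // collect its events with any ETW controller/consumer.
        [EventSource(Name = "MyCompany-MyApp")]
        public sealed class AppEventSource : EventSource
        {
            public static readonly AppEventSource Log = new AppEventSource();

            [Event(1, Level = EventLevel.Informational)]
            public void Info(string message) { WriteEvent(1, message); }

            [Event(2, Level = EventLevel.Error)]
            public void Error(string message) { WriteEvent(2, message); }
        }

        // Usage from any of the user-mode .NET processes:
        // AppEventSource.Log.Info("service started");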

    Read the article

  • Monitoring all events in a class and sub-classes

    - by Basiclife
    Hi, I wonder if someone can help me. I've got a console App which I use to debug various components as I develop them. I'd like to be able to log to the console every time an event is fired, either in the object I've instantiated or in anything it has instantiated [ad infinitum]. (I wouldn't see some of these events normally because they're consumed further down the chain.) Ideally I would be able to log all public and private events, but if only public are possible, I can live with that. I've Googled and all I can find is how to monitor a directory, so I'm not sure whether this is impossible or simply has a name that I don't know. The sort of information I'm after is similar to what's found in an exception - Target Site, Source, Stack Trace, etc... Could I perhaps do this through reflection somehow? If someone could tell me if this is even possible and perhaps point me at some good resources, I'd be very grateful. Many thanks, Basic. To give you an idea of the console App: Sub Main() Container = ContainerGenerate.GenerateContainer() Dim TemplateID As New Guid("5959b961-b347-46bc-b1b6-cba311304f43") Dim Templater = Container.Resolve(Of Interfaces.Mail.IMailGenerator)() Dim MyMessage = Templater.GenerateMail(TemplateID, Nothing, Nothing) Dim MySMTPClient = Container.Resolve(Of SmtpClient)() MySMTPClient.Send(MyMessage) Finish() End Sub
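
    Reflection can get you part of the way. Here is a hedged C# sketch (the question's console app is VB.NET, and all names below are invented) that hooks every public instance event following the standard (sender, EventArgs) pattern on one object and logs when it fires; private events and events on objects created further down the chain would need more work.

        using System;
        using System.Reflection;

        class EventSpy
        {
            private readonly string _eventName;
            private EventSpy(string eventName) { _eventName = eventName; }

            // General handler; parameter contravariance lets it bind to
            // EventHandler<TEventArgs> delegates as well as plain EventHandler.
            private void Log(object sender, EventArgs e)
            {
                Console.WriteLine("Event '{0}' fired by {1} ({2})", _eventName, sender, e.GetType().Name);
            }

            public static void HookAll(object target)
            {
                foreach (EventInfo evt in target.GetType().GetEvents(BindingFlags.Public | BindingFlags.Instance))
                {
                    MethodInfo invoke = evt.EventHandlerType.GetMethod("Invoke");
                    ParameterInfo[] ps = invoke.GetParameters();
                    if (invoke.ReturnType != typeof(void) || ps.Length != 2 ||
                        !typeof(EventArgs).IsAssignableFrom(ps[1].ParameterType))
                        continue; // skip events that don't follow the (sender, args) pattern

                    MethodInfo log = typeof(EventSpy).GetMethod("Log", BindingFlags.NonPublic | BindingFlags.Instance);
                    Delegate handler = Delegate.CreateDelegate(evt.EventHandlerType, new EventSpy(evt.Name), log);
                    evt.AddEventHandler(target, handler);
                }
            }
        }

        // Usage (from C#): EventSpy.HookAll(someObjectYouResolvedFromTheContainer);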

    Read the article

  • How to monitor MySQL query errors, timeouts and logon attempts?

    - by Abel
    While setting up a third party closed source CMS (Sitefinity), I found that the setup doesn't create all the tables and procedures necessary to run it. The software lacks a logging system itself, and it made me wonder: could I trace and monitor failing SQL statements from MySQL? This serves more than just solving my issue with Sitefinity. More often I wonder what's sent to the MySQL server, without wanting to dive into the software products or set up a debugging environment, etc. I tried JetProfiler (performance only) and looked through a few others, but although they monitor a lot, they don't monitor query failures, timeouts or logon attempts. Does anyone know a profiler, tracer or monitoring tool, commercial or free, that can show me this information?

    Read the article

  • Trace large C++ code base?

    - by anon
    Problem: I have just inherited this 200K LOC source code base. There's only a small part of it I need (and I want to rip all else out). What I would like to do is to be able to: 1) run the program a few times 2) have something (here's where you come in) record which lines of code get executed 3) then rip out all the irrelevant lines of code I realize this has "problems" in the form of "different args will take different paths"; but for my needs, it's very specific, and I just want something to get me started on the right track for ripping stuff out (I'll fine-tune those special cases later). Thanks!

    Read the article

  • C++: How to count all instantiated objects at runtime?

    - by nina
    I have a large framework consisting of many C++ classes. Is there a way using any tools at runtime, to trace all the C++ objects that are being constructed and currently exist? For example, at a certain time t1, perhaps the application has objects A1, A2 and B3, but at time t2, it has A1, A4, C2 and so on? This is a cross platform framework but I'm familiar with working in Linux, Solaris and (possibly) Mac OS X.

    Read the article

  • Is there a Spy++ for viewing .NET Framework messages only?

    - by Or A
    Hi, is there any good program for viewing functions / messages that are being executed on the .NET Framework in the background? I'm looking for something similar to what Spy++ does, just for .NET only. I have some weird behavior that I need to understand the cause of, and I can't think of any better alternative. Thanks

    Read the article

  • on Google App Engine 500 Error, it should be 200 instead of 500

    - by Faisal Amjad
    requestToken = function() { var getTokenURI = '/gettoken?userid=' + userid; var httpRequest = makeRequest(getTokenURI, true); httpRequest.onreadystatechange = function() { if (httpRequest.readyState == 4) { if (httpRequest.status == 200) { openChannel(httpRequest.responseText); } else { alert('ERROR: AJAX request status = ' + httpRequest.status); } } } }; function makeRequest(url, async) { var httpRequest; if (window.XMLHttpRequest) { httpRequest = new XMLHttpRequest(); } else if (window.ActiveXObject) { // IE try { httpRequest = new ActiveXObject("Msxml2.XMLHTTP"); } catch (e) { try { httpRequest = new ActiveXObject("Microsoft.XMLHTTP"); } catch (e) { } } } if (!httpRequest) { return false; } httpRequest.open('POST', url, async); httpRequest.send(); return httpRequest; } It runs fine on localhost... but on Google App Engine httpRequest.status equals 500 and it goes into the else branch. WHY? Log on Google App Engine: /getFriendList?userid=d 500 253ms 0kb Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.97 Safari/537.11 175.110.179.86 - - [17/Dec/2012:08:35:33 -0800] "POST /getFriendList?userid=d HTTP/1.1" 500 0 "http://faisalimmsngr.appspot.com/" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.97 Safari/537.11" "faisalimmsngr.appspot.com" ms=254 cpu_ms=110 instance=00c61b117caf2d11ca57d2a2296ccd0b902b038a W 2012-12-17 08:35:33.272 Failed startup of context com.google.apphosting.utils.jetty.RuntimeAppEngineWebAppContext@10ff62a{/,/base/data/home/apps/s~faisalimmsngr/1.363934467542140431} org.mortbay.util.MultiException[java.lang.UnsupportedClassVersionError: adv/web/mid/exam/FriendServlet : Unsupported major.minor version 51.0, java.lang.UnsupportedClassVersionError: adv/web/mid/exam/MessageServlet : Unsupported major.minor version 51.0, java.lang.UnsupportedClassVersionError: adv/web/mid/exam/TokenServlet : Unsupported major.minor version 51.0] at org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:656) at org.mortbay.jetty.servlet.Context.startContext(Context.java:140) at org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1250) at org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:517) at org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:467) at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50) at com.google.apphosting.runtime.jetty.AppVersionHandlerMap.createHandler(AppVersionHandlerMap.java:219) at com.google.apphosting.runtime.jetty.AppVersionHandlerMap.getHandler(AppVersionHandlerMap.java:194) at com.google.apphosting.runtime.jetty.JettyServletEngineAdapter.serviceRequest(JettyServletEngineAdapter.java:134) at com.google.apphosting.runtime.JavaRuntime$RequestRunnable.run(JavaRuntime.java:447) at com.google.tracing.TraceContext$TraceContextRunnable.runInContext(TraceContext.java:454) at com.google.tracing.TraceContext$TraceContextRunnable$1.run(TraceContext.java:461) at com.google.tracing.TraceContext.runInContext(TraceContext.java:703) at com.google.tracing.TraceContext$AbstractTraceContextCallback.runInInheritedContextNoUnref(TraceContext.java:338) at com.google.tracing.TraceContext$AbstractTraceContextCallback.runInInheritedContext(TraceContext.java:330) at com.google.tracing.TraceContext$TraceContextRunnable.run(TraceContext.java:458) at com.google.apphosting.runtime.ThreadGroupPool$PoolEntry.run(ThreadGroupPool.java:251) at java.lang.Thread.run(Thread.java:679)
java.lang.UnsupportedClassVersionError: adv/web/mid/exam/FriendServlet : Unsupported major.minor version 51.0 at com.google.appengine.runtime.Request.process-c04431eac3a1f275(Request.java) at java.lang.ClassLoader.defineClass1(Native Method) at java.lang.ClassLoader.defineClass(ClassLoader.java:634) at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142) at java.net.URLClassLoader.defineClass(URLClassLoader.java:277) at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:616) at java.lang.ClassLoader.loadClass(ClassLoader.java:266) at org.mortbay.util.Loader.loadClass(Loader.java:91) at org.mortbay.util.Loader.loadClass(Loader.java:71) at org.mortbay.jetty.servlet.Holder.doStart(Holder.java:73) at org.mortbay.jetty.servlet.ServletHolder.doStart(ServletHolder.java:242) at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50) at org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:685) at org.mortbay.jetty.servlet.Context.startContext(Context.java:140) at org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1250) at org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:517) at org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:467) at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50) at com.google.tracing.TraceContext$TraceContextRunnable.runInContext(TraceContext.java:454) at com.google.tracing.TraceContext$TraceContextRunnable$1.run(TraceContext.java:461) at com.google.tracing.TraceContext.runInContext(TraceContext.java:703) at com.google.tracing.TraceContext$AbstractTraceContextCallback.runInInheritedContextNoUnref(TraceContext.java:338) at com.google.tracing.TraceContext$AbstractTraceContextCallback.runInInheritedContext(TraceContext.java:330) at com.google.tracing.TraceContext$TraceContextRunnable.run(TraceContext.java:458) at java.lang.Thread.run(Thread.java:679)

    Read the article

  • Tracing what program is making a network connection? (CentOS)

    - by Airjoe
    I was wondering if it is possible to find out which process is trying to make a specific network connection. On a server I support which hosts websites for about 200 users, the iptables firewall keeps blocking, as it should, a connection to 212.117.169.139 on port 80. Firefox reports this as an attack page (and at the least is obvious spam, if not malicious). It seems something on this server is trying to access this site for some reason, and although it's being blocked successfully, the requests seem to be going through every two to sixty seconds and I'd like to be able to find what process or script is doing this so I can handle it appropriately. Besides doing a grep to try and find if this IP is in some file (which probably won't even work because it may be working by hostname or it may be encoded), is there any way to find out some more information? Thanks!

    Read the article

  • What does this error mean in my IIS7 Failed Request Tracing report?

    - by Pure.Krome
    When I attempt to go to any page in my web application (I'm migrating the code from an ASP.NET web site to a web application, and now testing it), I keep getting some 'not authenticated' error(s). So I've turned on FREB and this is what it says... I'm not sure what that means. Secondly, I've also made sure that my site (or at least the default document, which has been set up to be default.aspx) has anonymous on and the rest off. Proof: - C:\Windows\System32\inetsrv>appcmd list config "My Web App/default.aspx" -section:anonymousAuthentication <system.webServer> <security> <authentication> <anonymousAuthentication enabled="true" userName="IUSR" /> </authentication> </security> </system.webServer> C:\Windows\System32\inetsrv>appcmd list config "My Web App" -section:anonymousAuthentication <system.webServer> <security> <authentication> <anonymousAuthentication enabled="true" userName="IUSR" /> </authentication> </security> </system.webServer> Can someone please help?

    Read the article

  • WARNING Retrying Bulk Insert for file:sqlldr due to Communication Error:256

    - by user702295
    WARNING Retrying Bulk Insert for file:sqlldr due to Communication Error:256 I am running my engine on Linux and am receiving an intermittent message "WARNING Retrying bulk insert for file: sqlldr due to communication Error: 256". The engine seems to have completed successfully, but it is not clear if this error caused some of the forecast to not complete. It is also not clear what caused the error. Generally, if you see only the WARNING, it means that subsequent retries of the same load request eventually succeeded, and so the run as a whole is not affected. In order to learn more about what happened, look for .log/.bad files left in the engine's bin directory, or possibly a quote of them within the specific engine log that had the issue. The sqlnet.log file may also have some information about it, and perhaps on the database server side there may be some log/alert regarding what happened. Look at the alert.log. In general it could be that the database server/network was overloaded at the time and the connection was somehow rejected/failed/aborted, either due to a specific setting on concurrent connections/sessions or inadvertently due to a glitch in the network/OS/hardware. If this repeats and becomes more frequent during the run, you should look further into it as mentioned above. You can also track this using either SQL*Trace or java.util.logging. - Globally enable logging by setting the oracle.jdbc.Trace system property: java -Doracle.jdbc.Trace=true - Client Side Tracing: Your SQLNET.ORA file should contain the following lines to produce a client side trace file: trace_level_client = 10 trace_unique_client = on trace_file_client = sqlnet.trc trace_directory_client = <path_to_trace_dir> Server Side Tracing: To enable server side tracing, use the following parameters: trace_level_server = 10 trace_file_server = server.trc trace_directory_server = <path_to_trace_dir> Tracing Levels: The following values can be used for TRACE_LEVEL* parameters: 16 or SUPPORT — WorldWide Customer Support trace information 10 or ADMIN — Administration trace information 4 or USER — User trace information 0 or OFF — no tracing, the default Additional information is readily available via the web.

    Read the article

  • Oracle is Sponsoring LinuxCon Europe 2012

    - by Zeynep Koch
    The architecture in Barcelona is amazing, but you will also be impressed with the Oracle Linux sessions at LinuxCon Europe. Oracle is one of the key sponsors of LinuxCon Europe and we have great sessions to show you why Oracle Linux is best for your "IT architecture"! We also have a booth where you can pick up the latest Oracle Linux and Oracle VM DVD kit and the Virtualization for Dummies booklet. Don't forget to visit us at the technology showcase, Booth #19. Oracle Sessions at LinuxCon Europe 2012: 1. OCFS2: Status and Overview - Lenz Grimmer, Oracle. Wednesday November 7, 2012, 10:40am - 11:25am, Venue: Diamant. OCFS2, Oracle's general-purpose shared-disk cluster file system for Linux, has come a long way since its development started in 2003. Distributed under the GPL and part of the mainline Linux kernel, it is also included in Oracle Linux and plays a vital role in products like Oracle VM, Oracle RAC or E-Business Suite. This presentation will provide a general technical overview as well as an update on the latest developments. Attendees will learn about the features and improvements that set OCFS2 apart from other Linux-based cluster file systems, including: Heartbeat implementation: global vs. local heartbeats; Storage optimizations: extent-based allocations, hole punching, reflinks. 2. Status of Linux Tracing - Elena Zannoni, Oracle. Wednesday November 7, 2012, 11:35am - 12:20pm, Venue: Diamant. There have been many developments recently in the Linux tracing area. The tracing infrastructure in the kernel is getting more robust, with the recent introduction of uprobes to allow the implementation of user space tracing, and new features of perf. There are many tracing tools to choose from, including the newest kid on the block, DTrace for Linux. This talk will take the audience through the main tracing facilities available today, whether more tightly integrated with the kernel code or maintained standalone. 3. MySQL Security Model and Pluggable Authentication - Kristofer Pettersson, Oracle. Wednesday November 7, 2012, 1:50pm - 2:35pm, Venue: Diamant. With increasing security awareness among web and cloud developers, knowing how to secure your database from unauthorized or malicious access has become important. This talk explains the MySQL security model, pluggable authentication, new auditing features and rounds off with some pointers on how to securely integrate your database into your Linux web stack. We look forward to seeing you in Barcelona, Spain on November 5-9, 2012. Register today

    Read the article

  • Configuring trace file size and number in WebCenter Content 11g

    - by Kyle Hatlestad
    Lately I've been doing a lot of debugging using the System Output tracing in WebCenter Content 11g.  This is built-in tracing in the content server which provides a great level of detail on what's happening under the hood.  You can access the settings as well as a view of the tracing by going to Administration -> System Audit Information.  From here, you can select the tracing sections to include.  Some of my personal favorites are searchquery,  systemdatabase, userstorage, and indexer.  Usually I'm trying to find out some information regarding a search, database query, or user information.  Besides debugging, it's also very helpful for performance tuning. [Read More] 

    Read the article

< Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >