Search Results

Search found 102773 results on 4111 pages for 'mcitp sql server 2008'.


  • Overview of SOA Diagnostics in 11.1.1.6

    - by ShawnBailey
    What tools are available for diagnosing SOA Suite issues? There are a variety of tools available to help you and Support diagnose SOA Suite issues in 11g but it can be confusing as to which tool is appropriate for a particular situation and what their relationships are. This blog post will introduce the various tools and attempt to clarify what each is for and how they are related. Let's first list the tools we'll be addressing: RDA: Remote Diagnostic Agent DFW: Diagnostic Framework Selective Tracing DMS: Dynamic Monitoring Service ODL: Oracle Diagnostic Logging ADR: Automatic Diagnostics Repository ADRCI: Automatic Diagnostics Repository Command Interpreter WLDF: WebLogic Diagnostic Framework This overview is not meant to be a comprehensive guide on using all of these tools, however, extensive reference materials are included that will provide many more details on their execution. Another point to note is that all of these tools are applicable to Fusion Middleware as a whole but specific products may or may not have implemented features to leverage them. A couple of the tools have a WebLogic Scripting Tool or 'WLST' interface. WLST is a command interface for executing pre-built functions and custom scripts against a domain. A detailed WLST tutorial is beyond the scope of this post but you can find general information here. There are more specific resources in the below sections. In this post when we refer to 'Enterprise Manager' or 'EM' we are referring to Enterprise Manager Fusion Middleware Control. RDA (Remote Diagnostic Agent) RDA is a standalone tool that is used to collect both static configuration and dynamic runtime information from the SOA environment. RDA is generally run manually from the command line against a domain or single server. When opening a new Service Request, including an RDA collection can dramatically decrease the back and forth required to collect logs and configuration information for Support. After installing RDA you configure it to use the SOA Suite module as described in the referenced resources. The SOA module includes the Oracle WebLogic Server (WLS) module by default in order to include all of the relevant information for the environment. In addition to this basic configuration there is also an advanced mode where you can set the number of thread dumps for the collections, log files, Incidents, etc. When would you use it? When creating a Service Request or otherwise working with Oracle resources on an issue, capturing environment snapshots to baseline your configuration or to diagnose an issue on your own. How is it related to the other tools? RDA is related to DFW in that it collects the last 10 Incidents from the server by default. In a similar manner, RDA is related to ODL through its collection of the diagnostic logs and these may contain information from Selective Tracing sessions.
Examples of what it currently collects: (for details please see the links in the Resources section) Diagnostic Logs (ODL) Diagnostic Framework Incidents (DFW) SOA MDS Deployment Descriptors SOA Repository Summary Statistics Thread Dumps Complete Domain Configuration RDA Resources: Webcast Recording: Using RDA with Oracle SOA Suite 11g Blog Post: Diagnose SOA Suite 11g Issues Using RDA Download RDA How to Collect Analysis Information Using RDA for Oracle SOA Suite 11g Products [ID 1350313.1] How to Collect Analysis Information Using RDA for Oracle SOA Suite and BPEL Process Manager 11g [ID 1352181.1] Getting Started With Remote Diagnostic Agent: Case Study - Oracle WebLogic Server (Video) [ID 1262157.1] top DFW (Diagnostic Framework) DFW provides the ability to collect specific information for a particular problem when that problem occurs. DFW is included with your SOA Suite installation and deployed to the domain. Let's define the components of DFW. Diagnostic Dumps: Specific diagnostic collections that are defined at either the 'system' or product level. Examples would be diagnostic logs or thread dumps. Incident: A collection of Diagnostic Dumps associated with a particular problem. Log Conditions: An Oracle Diagnostic Logging event that DFW is configured to listen for. If the event is identified then an Incident will be created. WLDF Watch: The WebLogic Diagnostic Framework or 'WLDF' is not a component of DFW, however, it can be a source of DFW Incident creation through the use of a 'Watch'. WLDF Notification: A Notification is a component of WLDF and is the link between the Watch and DFW. You can configure multiple Notification types in WLDF and associate them with your Watches. 'FMWDFW-notification' is available to you out of the box to allow for DFW notification of Watch execution. Rule: Defines a WLDF Watch or Log Condition for which we want to associate a set of Diagnostic Dumps. When triggered the specified dumps will be collected and added to the Incident. Rule Action: Defines the specific Diagnostic Dumps to collect for a particular rule. ADR: Automatic Diagnostics Repository; defined for every server in a domain. This is where Incidents are stored. Now let's walk through a simple flow: Oracle Web Services error message OWS-04086 (SOAP Fault) is generated on managed server 1 DFW Log Condition for OWS-04086 evaluates to TRUE DFW creates a new Incident in the ADR for managed server 1 DFW executes the specified Diagnostic Dumps and adds the output to the Incident. In this case we'll grab the diagnostic log and thread dump. We might also want to collect the WSDL binding information and SOA audit trail. When would you use it? When you want to automatically collect Diagnostic Dumps at a particular time using a trigger or when you want to manually collect the information. In either case it can be readily uploaded to Oracle Support through the Service Request. How is it related to the other tools? DFW generates Incidents which are collections of Diagnostic Dumps. One of the system level Diagnostic Dumps collects the current server diagnostic log which is generated by ODL and can contain information from Selective Tracing sessions. Incidents are included in RDA collections by default and ADRCI is a tool that is used to package an Incident for upload to Oracle Support. In addition, both ODL and DMS can be used to trigger Incident creation through DFW. The conditions and rules for generating Incidents can become quite complicated and the below resources go into more detail.
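A rough sketch of what driving DFW by hand from WLST can look like follows; the next paragraph says more about this interface. The command names below appear in the DFW WLST Command Reference, while the connection details, dump names and argument keywords are illustrative assumptions to check against that reference:
    # Connect to the managed server whose Incidents and dumps we want to work with
    # (credentials and URL are placeholders)
    connect('weblogic', 'welcome1', 't3://localhost:7001')
    # Discover which Diagnostic Dumps are registered and what arguments one of them takes
    listDumps()
    describeDump(name='jvm.threads')
    # Run a single dump and write the output to a file instead of creating an Incident
    executeDump(name='jvm.threads', outputFile='/tmp/jvm_threads.txt')
    # Or create an Incident manually so the associated Diagnostic Dumps are collected into ADR
    createIncident(messageId='OWS-04086', description='Manual incident for SOAP fault investigation')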
A simpler approach to leveraging at least the Diagnostic Dumps is through WLST (WebLogic Scripting Tool) where there are commands to do the following: Create an Incident Execute a single Diagnostic Dump Describe a Diagnostic Dump List the available Diagnostic Dumps The WLST option offers greater control in what is generated and when. It can be a great help when collecting information for Support. There are overlaps with RDA, however, DFW is geared towards collecting specific runtime information when an issue occurs while existing Incidents are collected by RDA. There are 3 WLDF Watches configured by default in a SOA Suite 11g domain: Stuck Threads, Unchecked Exception and Deadlock. These Watches are enabled by default and will generate Incidents in ADR. They are configured to reset automatically after 30 seconds so they have the potential to create multiple Incidents if these conditions are consistent. The Incidents generated by these Watches will only contain System level Diagnostic Dumps. These same System level Diagnostic Dumps will be included in any application scoped Incident as well. Starting in 11.1.1.6, SOA Suite is including its own set of application scoped Diagnostic Dumps that can be executed from WLST or through a WLDF Watch or Log Condition. These Diagnostic Dumps can be added to an Incident such as in the earlier example using the error code OWS-04086. soa.config: MDS configuration files and deployed-composites.xml soa.composite: All artifacts related to the deployed composite soa.wsdl: Summary of endpoints configured for the composite soa.edn: EDN configuration summary if applicable soa.db: Summary DB information for the SOA repository soa.env: Coherence cluster configuration summary soa.composite.trail: Partial audit trail information for the running composite The current release of RDA has the option to collect the soa.wsdl and soa.composite Diagnostic Dumps. More Diagnostic Dumps for SOA Suite products are planned for future releases along with enhancements to DFW itself. DFW Resources: Webcast Recording: SOA Diagnostics Sessions: Diagnostic Framework Diagnostic Framework Documentation DFW WLST Command Reference Documentation for SOA Diagnostic Dumps in 11.1.1.6 top Selective Tracing Selective Tracing is a facility available starting in version 11.1.1.4 that allows you to increase the logging level for specific loggers and for a specific context. What this means is that you have greater capability to collect needed diagnostic log information in a production environment with reduced overhead. For example, a Selective Tracing session can be executed that only increases the log level for one composite, only one logger, limited to one server in the cluster and for a preset period of time. In an environment where dozens of composites are deployed this can dramatically reduce the volume and overhead of the logging without sacrificing relevance. Selective Tracing can be administered either from Enterprise Manager or through WLST. WLST provides a bit more flexibility in terms of exactly where the tracing is run. When would you use it? When there is an issue in production or another environment that lends itself to filtering by an available context criteria and increasing the log level globally results in too much overhead or irrelevant information. The information is written to the server diagnostic log and is exportable from Enterprise Manager How is it related to the other tools? Selective Tracing output is written to the server diagnostic log. 
This log can be collected by a system level Diagnostic Dump using DFW or through a default RDA collection. Selective Tracing also heavily leverages ODL fields to determine what to trace and to tag information that is part of a particular tracing session. Available Context Criteria: Application Name Client Address Client Host Composite Name User Name Web Service Name Web Service Port Selective Tracing Resources: Webcast Recording: SOA Diagnostics Session: Using Selective Tracing to Diagnose SOA Suite Issues How to Use Selective Tracing for SOA [ID 1367174.1] Selective Tracing WLST Reference top DMS (Dynamic Monitoring Service) DMS exposes runtime information for monitoring. This information can be monitored in two ways: Through the DMS servlet As exposed MBeans The servlet is deployed by default and can be accessed through http://<host>:<port>/dms/Spy (use administrative credentials to access). The landing page of the servlet shows identical columns of what are known as Noun Types. If you select a Noun Type you will see a table in the right frame that shows the attributes (Sensors) for the Noun Type and the available instances. SOA Suite has several exposed Noun Types that are available for viewing through the Spy servlet. Screenshots of the Spy servlet are available in the Knowledge Base article How to Monitor Runtime SOA Performance With the Dynamic Monitoring Service (DMS). Every Noun instance in the runtime is exposed as an MBean instance. As such they are generally available through an MBean browser and available for monitoring through WLDF. You can configure a WLDF Watch to monitor a particular attribute and fire a notification when the threshold is exceeded. A WLDF Watch can use the out of the box DFW notification type to notify DFW to create an Incident. When would you use it? When you want to monitor a metric or set of metrics either manually or through an automated system. When you want to trigger a WLDF Watch based on a metric exposed through DMS. How is it related to the other tools? DMS metrics can be monitored with WLDF Watches which can in turn notify DFW to create an Incident. DMS Resources: How to Monitor Runtime SOA Performance With the Dynamic Monitoring Service (DMS) [ID 1368291.1] How to Reset a SOA 11g DMS Metric DMS Documentation top ODL (Oracle Diagnostic Logging) ODL is the primary facility for most Fusion Middleware applications to log what they are doing. Whenever you change a logging level through Enterprise Manager it is ultimately exposed through ODL and written to the server diagnostic log. A notable exception to this is WebLogic Server which uses its own log format / file. ODL logs entries in a consistent, structured way using predefined fields and name/value pairs. Here's an example of a SOA Suite entry: [2012-04-25T12:49:28.083-06:00] [AdminServer] [ERROR] [] [oracle.soa.bpel.engine] [tid: [ACTIVE].ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'] [userId: ] [ecid: 0963fdde7e77631c:-31a6431d:136eaa46cda:-8000-00000000000000b4,0] [errid: 41] [WEBSERVICE_PORT.name: BPELProcess2_pt] [APP: soa-infra] [composite_name: TestProject2] [J2EE_MODULE.name: fabric] [WEBSERVICE.name: bpelprocess1_client_ep] [J2EE_APP.name: soa-infra] Error occured while handling a post operation[[ When would you use it? You'll use ODL almost every time you want to identify and diagnose a problem in the environment. The entries are written to the server diagnostic log. How is it related to the other tools? The server diagnostic logs are collected by DFW and RDA. 
Selective Tracing writes its information to the diagnostic log as well. Additionally, DFW log conditions are triggered by ODL log events. ODL Resources: ODL Documentation top ADR (Automatic Diagnostics Repository) ADR is not a tool in and of itself but is where DFW stores the Incidents it creates. Every server in the domain has an ADR location which can be found under <SERVER_HOME>/adr. This is referred to as the ADR 'Base' location. ADR also has what are known as 'Home' locations. Example: You have a domain called 'myDomain' and an associated managed server called 'myServer'. Your admin server is called 'AdminServer'. Your domain home directory is called 'myDomain' and it contains a 'servers' directory. The 'servers' directory contains a directory for the managed server called 'myServer' and here is where you'll find the 'adr' directory which is the ADR 'Base' location for myServer. To get to the ADR 'Home' locations we drill through a few levels: diag/ofm/myDomain/ In an 11.1.1.6 SOA Suite domain you will see 2 directories here, 'myServer' and 'soa-infra'. These are the ADR 'Home' locations. 'myServer' is the 'system' ADR home and contains system level Incidents. 'soa-infra' is the name that SOA Suite used to register with DFW and this ADR home contains SOA Suite related Incidents. Each ADR home location contains a series of directories, one of which is called 'incident'. This is where your Incidents are stored. When would you use it? It's a good idea to check on these locations from time to time to see whether a lot of Incidents are being generated. They can be cleaned out by deleting the Incident directories or through the ADRCI tool. If you know that an Incident is of particular interest for an issue you're working with Oracle you can simply zip it up and provide it. How does it relate to the other tools? ADR is obviously very important for DFW since it's where the Incidents are stored. Incidents contain Diagnostic Dumps that may relate to diagnostic logs (ODL) and DMS metrics. The most recent 10 Incident directories are collected by RDA by default and ADRCI relies on the ADR locations to help manage the contents. top ADRCI (Automatic Diagnostics Repository Command Interpreter) ADRCI is a command line tool for packaging and managing Incidents. When would you use it? When purging Incidents from an ADR Home location or when you want to package an Incident along with an offline RDA collection for upload to Oracle Support. How does it relate to the other tools? ADRCI contains a tool called the Incident Packaging System or IPS. This is used to package an Incident for upload to Oracle Support through a Service Request. Starting in 11.1.1.6 IPS will attempt to collect an offline RDA collection and include it with the Incident package. This will only work if Perl is available on the path, otherwise it will give a warning and package only the Incident files. ADRCI Resources: How to Use the Incident Packaging System (IPS) in SOA 11g [ID 1381259.1] ADRCI Documentation top WLDF (WebLogic Diagnostic Framework) WLDF is functionality available in WebLogic Server since version 9. Starting with FMw 11g a link has been added between WLDF and the pre-existing DFW, the WLDF Watch Notification. Let's take a closer look at the flow: There is a need to monitor the performance of your SOA Suite message processing. A WLDF Watch is created in the WLS console that will trigger if the average message processing time exceeds 2 seconds. This metric is monitored through a DMS MBean instance.
The out of the box DFW Notification (the Notification is called FMWDFW-notification) is added to the Watch. Under the covers this notification is of type JMX. The Watch is triggered when the threshold is exceeded and fires the Notification. DFW has a listener that picks up the Notification and evaluates it according to its rules, etc. When it comes to automatic Incident creation, WLDF is a key component with capabilities that will grow over time. When would you use it? When you want to monitor the WLS server log or an MBean metric for some condition and fire a notification when the Watch is triggered. How does it relate to the other tools? WLDF is used to automatically trigger Incident creation through DFW using the DFW Notification. WLDF Resources: How to Monitor Runtime SOA Performance With the Dynamic Monitoring Service (DMS) [ID 1368291.1] How To Script the Creation of a SOA WLDF Watch in 11g [ID 1377986.1] WLDF Documentation top
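To make the Selective Tracing workflow described earlier more concrete, a WLST session might look roughly like the sketch below. The command names (configureTracingLoggers, startTracing, listActiveTraces, stopTracing) are listed in the Selective Tracing WLST Reference above; the argument keywords and values shown are assumptions to verify against that reference:
    # Enable a set of loggers for selective tracing, scoped by a name pattern (keywords assumed)
    configureTracingLoggers(pattern='oracle.soa.bpel*', action='enable')
    # Start a trace limited to one user at FINE level; its output is tagged in the server diagnostic log
    startTracing(user='soa_tester', level='FINE', desc='Reproduce missing BPEL callback')
    listActiveTraces()
    # ...reproduce the issue, then stop the session ('stopAll' is an assumed keyword;
    # the reference also documents stopping an individual trace by its ID)
    stopTracing(stopAll=1)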

    Read the article

  • Ajax Control Toolkit May 2012 Release

    - by Stephen.Walther
    I’m happy to announce the May 2012 release of the Ajax Control Toolkit. This newest release of the Ajax Control Toolkit includes a new file upload control which displays file upload progress. We’ve also added several significant enhancements to the existing HtmlEditorExtender control such as support for uploading images and Source View. You can download and start using the newest version of the Ajax Control Toolkit by entering the following command in the Library Package Manager console in Visual Studio: Install-Package AjaxControlToolkit Alternatively, you can download the latest version of the Ajax Control Toolkit from CodePlex: http://AjaxControlToolkit.CodePlex.com The New Ajax File Upload Control The most requested new feature for the Ajax Control Toolkit (according to the CodePlex Issue Tracker) has been support for file upload with progress. We worked hard over the last few months to create an entirely new file upload control which displays upload progress. Here is a sample which illustrates how you can use the new AjaxFileUpload control: <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="01_FileUpload.aspx.cs" Inherits="WebApplication1._01_FileUpload" %> <html> <head runat="server"> <title>Simple File Upload</title> </head> <body> <form id="form1" runat="server"> <div> <ajaxToolkit:ToolkitScriptManager runat="server" /> <ajaxToolkit:AjaxFileUpload id="ajaxUpload1" OnUploadComplete="ajaxUpload1_OnUploadComplete" runat="server" /> </div> </form> </body> </html> The page above includes a ToolkitScriptManager control. This control is required to use any of the controls in the Ajax Control Toolkit because this control is responsible for loading all of the scripts required by a control. The page also contains an AjaxFileUpload control. The UploadComplete event is handled in the code-behind for the page: namespace WebApplication1 { public partial class _01_FileUpload : System.Web.UI.Page { protected void ajaxUpload1_OnUploadComplete(object sender, AjaxControlToolkit.AjaxFileUploadEventArgs e) { // Generate file path string filePath = "~/Images/" + e.FileName; // Save upload file to the file system ajaxUpload1.SaveAs(MapPath(filePath)); } } } The UploadComplete handler saves each uploaded file by calling the AjaxFileUpload control’s SaveAs() method with a full file path. Here’s a video which illustrates the process of uploading a file: Warning: in order to write to the Images folder on a production IIS server, you need Write permissions on the Images folder. You need to provide permissions for the IIS Application Pool account to write to the Images folder. To learn more, see: http://learn.iis.net/page.aspx/624/application-pool-identities/ Showing File Upload Progress The new AjaxFileUpload control takes advantage of HTML5 upload progress events (described in the XMLHttpRequest Level 2 standard). This standard is supported by Firefox 8+, Chrome 16+, Safari 5+, and Internet Explorer 10+. In other words, the standard is supported by the most recent versions of all browsers except for Internet Explorer which will support the standard with the release of Internet Explorer 10. The AjaxFileUpload control works with all browsers, even browsers which do not support the new XMLHttpRequest Level 2 standard. If you use the AjaxFileUpload control with a downlevel browser – such as Internet Explorer 9 — then you get a simple throbber image during a file upload instead of a progress indicator. 
Here’s how you specify a throbber image when declaring the AjaxFileUpload control: <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="02_FileUpload.aspx.cs" Inherits="WebApplication1._02_FileUpload" %> <html> <head id="Head1" runat="server"> <title>File Upload with Throbber</title> </head> <body> <form id="form1" runat="server"> <div> <ajaxToolkit:ToolkitScriptManager ID="ToolkitScriptManager1" runat="server" /> <ajaxToolkit:AjaxFileUpload id="ajaxUpload1" OnUploadComplete="ajaxUpload1_OnUploadComplete" ThrobberID="MyThrobber" runat="server" /> <asp:Image id="MyThrobber" ImageUrl="ajax-loader.gif" Style="display:None" runat="server" /> </div> </form> </body> </html> Notice that the page above includes an image with the Id MyThrobber. This image is displayed while files are being uploaded. I use the website http://AjaxLoad.info to generate animated busy wait images. Drag-And-Drop File Upload If you are using an uplevel browser then you can drag-and-drop the files which you want to upload onto the AjaxFileUpload control. The following video illustrates how drag-and-drop works: Remember that drag-and-drop will not work on Internet Explorer 9 or older. Accepting Multiple Files By default, the AjaxFileUpload control enables you to upload multiple files at a time. When you open the file dialog, use the CTRL or SHIFT key to select multiple files. If you want to restrict the number of files that can be uploaded then use the MaximumNumberOfFiles property like this: <ajaxToolkit:AjaxFileUpload id="ajaxUpload1" OnUploadComplete="ajaxUpload1_OnUploadComplete" ThrobberID="throbber" MaximumNumberOfFiles="1" runat="server" /> In the code above, the maximum number of files which can be uploaded is restricted to a single file. Restricting Uploaded File Types You might want to allow only certain types of files to be uploaded. For example, you might want to accept only image uploads. In that case, you can use the AllowedFileTypes property to provide a list of allowed file types like this: <ajaxToolkit:AjaxFileUpload id="ajaxUpload1" OnUploadComplete="ajaxUpload1_OnUploadComplete" ThrobberID="throbber" AllowedFileTypes="jpg,jpeg,gif,png" runat="server" /> The code above prevents any files except jpeg, gif, and png files from being uploaded. Enhancements to the HTMLEditorExtender Over the past months, we spent a considerable amount of time making bug fixes and feature enhancements to the existing HtmlEditorExtender control. I want to focus on two of the most significant enhancements that we made to the control: support for Source View and support for uploading images. Adding Source View Support to the HtmlEditorExtender When you click the Source View tag, the HtmlEditorExtender changes modes and displays the HTML source of the contents contained in the TextBox being extended. You can use Source View to make fine-grain changes to HTML before submitting the HTML to the server. For reasons of backwards compatibility, the Source View tab is disabled by default. 
To enable Source View, you need to declare your HtmlEditorExtender with the DisplaySourceTab property like this: <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="05_SourceView.aspx.cs" Inherits="WebApplication1._05_SourceView" %> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html> <head id="Head1" runat="server"> <title>HtmlEditorExtender with Source View</title> </head> <body> <form id="form1" runat="server"> <div> <ajaxToolkit:ToolkitScriptManager ID="ToolkitScriptManager1" runat="server" /> <asp:TextBox id="txtComments" TextMode="MultiLine" Columns="60" Rows="10" Runat="server" /> <ajaxToolkit:HtmlEditorExtender id="HEE1" TargetControlID="txtComments" DisplaySourceTab="true" runat="server" /> </div> </form> </body> </html> The page above includes a ToolkitScriptManager, TextBox, and HtmlEditorExtender control. The HtmlEditorExtender extends the TextBox so that it supports rich text editing. Notice that the HtmlEditorExtender includes a DisplaySourceTab property. This property causes a button to appear at the bottom of the HtmlEditorExtender which enables you to switch to Source View: Note: when using the HtmlEditorExtender, we recommend that you set the DOCTYPE for the document. Otherwise, you can encounter weird formatting issues. Accepting Image Uploads We also enhanced the HtmlEditorExtender to support image uploads (another very highly requested feature at CodePlex). The following video illustrates the experience of adding an image to the editor: Once again, for backwards compatibility reasons, support for image uploads is disabled by default. Here’s how you can declare the HtmlEditorExtender so that it supports image uploads: <ajaxToolkit:HtmlEditorExtender id="MyHtmlEditorExtender" TargetControlID="txtComments" OnImageUploadComplete="MyHtmlEditorExtender_ImageUploadComplete" DisplaySourceTab="true" runat="server" > <Toolbar> <ajaxToolkit:Bold /> <ajaxToolkit:Italic /> <ajaxToolkit:Underline /> <ajaxToolkit:InsertImage /> </Toolbar> </ajaxToolkit:HtmlEditorExtender> There are two things that you should notice about the code above. First, notice that an InsertImage toolbar button is added to the HtmlEditorExtender toolbar. This HtmlEditorExtender will render toolbar buttons for bold, italic, underline, and insert image. Second, notice that the HtmlEditorExtender includes an event handler for the ImageUploadComplete event. The code for this event handler is below: using System.Web.UI; using AjaxControlToolkit; namespace WebApplication1 { public partial class _06_ImageUpload : System.Web.UI.Page { protected void MyHtmlEditorExtender_ImageUploadComplete(object sender, AjaxFileUploadEventArgs e) { // Generate file path string filePath = "~/Images/" + e.FileName; // Save uploaded file to the file system var ajaxFileUpload = (AjaxFileUpload)sender; ajaxFileUpload.SaveAs(MapPath(filePath)); // Update client with saved image path e.PostedUrl = Page.ResolveUrl(filePath); } } } Within the ImageUploadComplete event handler, you need to do two things: 1) Save the uploaded image (for example, to the file system, a database, or Azure storage) 2) Provide the URL to the saved image so the image can be displayed within the HtmlEditorExtender In the code above, the uploaded image is saved to the ~/Images folder. The path of the saved image is returned to the client by setting the AjaxFileUploadEventArgs PostedUrl property. Not surprisingly, under the covers, the HtmlEditorExtender uses the AjaxFileUpload. 
You can get a direct reference to the AjaxFileUpload control used by an HtmlEditorExtender by using the following code: void Page_Load() { var ajaxFileUpload = MyHtmlEditorExtender.AjaxFileUpload; ajaxFileUpload.AllowedFileTypes = "jpg,jpeg"; } The code above illustrates how you can restrict the types of images that can be uploaded to the HtmlEditorExtender. This code prevents anything but jpeg images from being uploaded. Summary This was the most difficult release of the Ajax Control Toolkit to date. We iterated through several designs for the AjaxFileUpload control – with each iteration, the goal was to make the AjaxFileUpload control easier for developers to use. My hope is that we were able to create a control which Web Forms developers will find very intuitive. I want to thank the developers on the Superexpert.com team for their hard work on this release.

    Read the article

  • FMw Diagnostic Framework : Automatic Capture of Diagnostic Data Upon First Failure!

    - by Daniel Mortimer
    Introduction There is nothing more frustrating than a problem that "cannot be reproduced". Logs and configuration files have been analysed, but there just isn't enough information to establish the root cause. The issue may be closed, but you are left with the feeling that the problem will raise its ugly head again in the future. Trouble is, to resolve such issues you need to capture diagnostic data at the exact time the incident occurs. Step forward Fusion Middleware Diagnostic Framework! Diagnostic Framework monitors WebLogic Managed Servers and delivers "Automatic capture of diagnostic data upon first failure". To quote from Oracle Fusion Middleware Administrator's Guide 11g Release 1 (11.1.1), Chapter 13 Diagnosing Problems: "When a critical error occurs ... the Diagnostic Framework automatically collects diagnostics, such as thread dumps, DMS metric dumps, and WebLogic Diagnostics Framework (WLDF) server image dumps ... The data is stored in a file-based repository and is accessible with command-line utilities." In other words the data collected upon first failure - especially the thread and image dumps - provides a snapshot of the system as, or immediately after, the problem occurs. The table below shows the type of WebLogic Server issues which fall into the scope of Diagnostic Framework. How to Configure Diagnostic Framework? Depending on your Fusion Middleware product choice you may not need to do anything! Diagnostic Framework is automatically installed, configured and initiated for any WebLogic Domain which has the Oracle Java Required Files (JRF) template applied. This template is applied by default whenever you configure WebLogic Managed Servers for products such as Portal / Forms / Reports / Discoverer, Identity Management (OID, OAM, OIM etc.), WebCenter and SOA. Check your WebLogic Domain directory structure. If you have an "adr" sub directory under DOMAIN_HOME/servers/<servername>/ then the JRF template has been applied and Diagnostic Framework will be in play. Should the "adr" sub directory not exist, review the advice given in My Oracle Support article How to Apply FMW (EM) Control and JRF to a WebLogic Domain and Managed Servers [ID 947043.1] If you are working with a standalone WebLogic Server solution and applying Oracle JRF is not acceptable, consider using WLDF - WebLogic Diagnostic Framework. (Fusion Middleware Diagnostic Framework makes use of WLDF under the covers.) A couple of useful links about WLDF are listed below: Configuring and Using the Diagnostics Framework for Oracle WebLogic Server 11g WebLogic Diagnostics Framework-A Very Useful Tool [A nice blog which describes a WLDF use case] How to Get Started With Diagnostic Framework To be frank, the Fusion Middleware Administrator's Guide is the best place to start your learning: Oracle Fusion Middleware Administrator's Guide 11g Release 1 (11.1.1), Chapter 13 Diagnosing Problems. A lot of reading here, but if you are in a hurry and just want to get the right information to Oracle Support to help resolve your issue, check out the next section below. How to Upload Diagnostic Framework Incident Data to Oracle Support Some Background Information There are three interfaces to the Repository: Enterprise Manager Cloud Control (Support Workbench) WLST (Command Line) ADRCI (Command Line) The Enterprise Manager Cloud Control does provide a nice GUI interface to search, view and package diagnostic framework incidents. However, this software is not to be confused with Fusion Middleware (EM) Control.
Cloud Control (formerly known as Grid Control) is part of the Enterprise Manager media package. EM Cloud Control has its own install and configuration story. Therefore, for the benefit of those yet to install and play with Cloud Control, I am going to describe how to use the command line tools. Ideally, you would only need one command line interface, but currently I suggest using both - mainly due to the fact that ADRCI SHOW INCIDENTS does not reveal the description behind the Diagnostic Framework error code. Instructions: Note: WLST and ADRCI are case sensitive when it comes to handling parameter values. If you make a mistake, expect an unfriendly syntax error message. 1) Find the incident Note: The managed server which you are troubleshooting must be up and running. If the managed server is down, ensure the domain's Admin Server is accessible. If you cannot connect to the Admin Server or the Managed Server the example WLST commands will not work. a) Launch WLST with MW_HOME/oracle_common/common/bin/wlst.sh Note: Use the WLST which resides in the "oracle_common" directory (not WL_HOME/common/bin), otherwise you will get a syntax error like the one below: Traceback (innermost last): File "<console>", line 1, in ? NameError: listIncidents b) Connect to the managed server or the admin server e.g. wls:/offline> connect('weblogic','welcome1','t3://localhost:7020') c) Run the command wls:/MyDomain/serverConfig> listIncidents() This will list the incidents for the server to which you have connected. If you have connected to the Admin Server and want to list the incidents for a managed server within the domain, use the command wls:/MyDomain/serverConfig> listIncidents(adrHome='diag\ofm\MyDomain\MyManagedServer', server='MyManagedServer') Example output: Incident Id     Problem Key              Incident Time         1       DFW-99998 [java.lang.NullPointerException] [oracle.error.simulator.ErrorSimulator.createNullPointerException][errorWebApp_1-0-0-0]        Fri Nov 02 10:38:46 GMT 2012  The piece highlighted in bold is the description you do not see when using the ADRCI 'SHOW INCIDENT' command. Make a note of the incident id. You are ready to move to step 2. 2) Package the incident a) Set up the environment - example commands below are for Unix: cd <DOMAIN_HOME>/bin . ./setDomainEnv.sh If you want ADRCI to run a Remote Diagnostic Agent collection (recommended) at generate package time, point ORACLE_HOME at oracle_common: ORACLE_HOME=$MW_HOME/oracle_common; export ORACLE_HOME To prevent ADRCI from running RDA at generate package time, point ORACLE_HOME at the WL_HOME/server/adr directory: ORACLE_HOME=$WL_HOME/server/adr; export ORACLE_HOME b) Launch adrci: $WL_HOME/server/adr/adrci c) Set BASE and HOMEPATH adrci> SET BASE /oracle/middleware/user_projects/domains/mydomain/servers/mymanagedserver/adr adrci> SET HOMEPATH diag/ofm/mydomain/mymanagedserver d) Optionally run SHOW INCIDENTS e.g.
adrci> SHOW INCIDENTS -MODE DETAIL
ADR Home = /oracle/middleware/user_projects/domains/mydomain/servers/mymanagedserver/adr/diag/ofm/mydomain/mymanagedserver:
*************************************************************
INCIDENT INFO RECORD 1
*************************************************************
   INCIDENT_ID                   1
   STATUS                        ready
   CREATE_TIME                   2012-11-02 10:38:46.468000 +00:00
   PROBLEM_ID                    1
   CLOSE_TIME                    <NULL>
   FLOOD_CONTROLLED              none
   ERROR_FACILITY                DFW
   ERROR_NUMBER                  99998
   ERROR_ARG1                    <NULL>
   ERROR_ARG2                    <NULL>
   ERROR_ARG3                    <NULL>
   ERROR_ARG4                    <NULL>
   ERROR_ARG5                    <NULL>
   ERROR_ARG6                    <NULL>
   ERROR_ARG7                    <NULL>
   ERROR_ARG8                    <NULL>
   ERROR_ARG9                    <NULL>
   ERROR_ARG10                   <NULL>
   ERROR_ARG11                   <NULL>
   ERROR_ARG12                   <NULL>
   SIGNALLING_COMPONENT          <NULL>
   SIGNALLING_SUBCOMPONENT       <NULL>
   SUSPECT_COMPONENT             <NULL>
   SUSPECT_SUBCOMPONENT          <NULL>
   ECID                          5162744c6a2eea5e:155ff445:13ac0aae7cb:-8000-0000000000000325
   IMPACTS                       0
1 rows fetched
e) Create a logical package IPS CREATE PACKAGE INCIDENT incident_number e.g. adrci> IPS CREATE PACKAGE INCIDENT 1 Created package 1 based on incident id 1, correlation level typical f) Generate the package IPS GENERATE PACKAGE package_number IN path e.g. adrci> IPS GENERATE PACKAGE 1 IN /tmp Generated package 1 in file /tmp/DFW99998j_20121102113633_COM_1.zip, mode complete Note: If the generate package command hangs, ADRCI may be experiencing an issue when running RDA. To avoid such trouble, exit ADRCI and point the ORACLE_HOME environment variable at WL_HOME/server/adr. 3) Upload the package zip to Oracle Support via your Service Request a) Log into My Oracle Support and locate your Service Request b) Click on "Add Attachments" c) And upload the zip file

    Read the article

  • CodePlex Daily Summary for Tuesday, March 20, 2012

    CodePlex Daily Summary for Tuesday, March 20, 2012Popular ReleasesNearforums - ASP.NET MVC forum engine: Nearforums v8.0: Version 8.0 of Nearforums, the ASP.NET MVC Forum Engine, containing new features: Internationalization Custom authentication provider Access control list for forums and threads Webdeploy package checksum: abc62990189cf0d488ef915d4a55e4b14169bc01BIDS Helper: BIDS Helper 1.6: This beta release is the first to support SQL Server 2012 (in addition to SQL Server 2005, 2008, and 2008 R2). Since it is marked as a beta release, we are looking for bug reports in the next few months as you use BIDS Helper on real projects. In addition to getting all existing BIDS Helper functionality working appropriately in SQL Server 2012 (SSDT), the following features are new... Analysis Services Tabular Smart Diff Tabular Actions Editor Tabular HideMemberIf Tabular Pre-Build ...JavaScript Web Resource Manager for Microsoft Dynamics CRM 2011: JavaScript Web Resource Manager (1.2.1420.191): BUG FIXED : When loading scripts from disk, the import of the web resource didn't do anything When scripts were saved to disk, it wasn't possible to edit them with an editorSQL Monitor - managing sql server performance: SQLMon 4.2 alpha 12: 1. improved process visualizer, now shows how many dead locks, and what are the locked objects 2. fixed some other problems.Json.NET: Json.NET 4.5 Release 1: New feature - Windows 8 Metro build New feature - JsonTextReader automatically reads ISO strings as dates New feature - Added DateFormatHandling to control whether dates are written in the MS format or ISO format, with ISO as the default New feature - Added DateTimeZoneHandling to control reading and writing DateTime time zone details New feature - Added async serialize/deserialize methods to JsonConvert New feature - Added Path to JsonReader/JsonWriter/ErrorContext and exceptions w...SCCM Client Actions Tool: SCCM Client Actions Tool v1.11: SCCM Client Actions Tool v1.11 is the latest version. It comes with following changes since last version: Fixed a bug when ping and cmd.exe kept running in endless loop after action progress was finished. Fixed update checking from Codeplex RSS feed. The tool is downloadable as a ZIP file that contains four files: ClientActionsTool.hta – The tool itself. Cmdkey.exe – command line tool for managing cached credentials. 
This is needed for alternate credentials feature when running the HTA...WebSocket4Net: WebSocket4Net 0.5: Changes in this release fixed the wss's default port bug improved JsonWebSocket supported set client access policy protocol for silverlight fixed a handshake issue in Silverlight fixed a bug that "Host" field in handshake hadn't contained port if the port is not default supported passing in Origin parameter for handshaking supported reacting pings from server side fixed a bug in data sending fixed the bug sending a closing handshake with no message which would cause an excepti...SuperWebSocket, a .NET WebSocket Server: SuperWebSocket 0.5: Changes included in this release: supported closing handshake queue checking improved JSON subprotocol supported sending ping from server to client fixed a bug about sending a closing handshake with no message refactored the code to improve protocol compatibility fixed a bug about sub protocol configuration loading in Mono improved BasicSubProtocol added JsonWebSocketSessionDaun Management Studio: Daun Management Studio 0.1 (Alpha Version): These are these the alpha application packages for Daun Management Studio to manage MongoDB Server. Please visit our official website http://www.daun-project.comSurvey™ - web survey & form engine: Survey™ 2.0: The new stable Survey™ Project 2.0.0.1 version contains many new features like: Technical changes: - Use of Jquery, ASTreeview, Tabs, Tooltips and new menuprovider Features & Bugfixes: Survey list and search function Folder structure for surveys New Menustructure Library list New Library fields User list and search functions Layout options for a survey with CSS, page header and footer New IP filter security feature Enhanced Token Management New Question fields as ID, Alias...RiP-Ripper & PG-Ripper: RiP-Ripper 2.9.28: changes NEW: Added Support for "PixHub.eu" linksSmartNet: V1.0.0.0: DY SmartNet ?????? V1.0callisto: callisto 2.0.21: Added an option to disable local host detection.Javascript .NET: Javascript .NET v0.6: Upgraded to the latest stable branch of v8 (/tags/3.9.18), and switched to using their scons build system. We no longer include v8 source code as part of this project's source code. Simultaneous multithreaded use of v8 now supported (v8 Isolates), although different contexts may not share objects or call each other. 64-bit .Net 4.0 DLL now included. (Download now includes x86 and x64 for both .Net 3.5 and .Net 4.0.)MyRouter (Virtual WiFi Router): MyRouter 1.0.6: This release should be more stable there were a few bug fixes including the x64 issue as well as an error popping up when MyRouter started this was caused by a NULL valueGoogle Books Downloader for Windows: Google Books Downloader-2.0.0.0.: Google Books DownloaderFinestra Virtual Desktops: 2.5.4501: This is a very minor update release. Please see the information about the 2.5 and 2.5.4500 releases for more information on recent changes. This update did not even have an automatic update triggered for it. Adds error checking and reporting to all threads, not only those with message loopsAcDown????? - Anime&Comic Downloader: AcDown????? v3.9.2: ?? ●AcDown??????????、??、??????,????1M,????,????,?????????????????????????。???????????Acfun、????(Bilibili)、??、??、YouTube、??、???、??????、SF????、????????????。??????AcPlay?????,??????、????????????????。 ● AcDown???????????????????????????,???,???????????????????。 ● AcDown???????C#??,????.NET Framework 2.0??。?????"Acfun?????"。 ????32??64? Windows XP/Vista/7/8 ????????????? 
??:????????Windows XP???,?????????.NET Framework 2.0???(x86),?????"?????????"??? ??????????????,??????????: ??"AcDo...ArcGIS Editor for OpenStreetMap: ArcGIS Editor for OSM 2.0 Release Candidate: Your feedback is welcome - and this is your last chance to get your fixes in for this version! Includes installer for both Feature Server extension and Desktop extension, enhanced functionality for the Desktop tools, and enhanced built-in Javascript Editor for the Feature Server component. This release candidate includes fixes to beta 4 that accommodate domain users for setting up the Server Component, and fixes for reporting/uploading references tracked in the revision table. See Code In-P...C.B.R. : Comic Book Reader: CBR 0.6: 20 Issue trackers are closed and a lot of bugs too Localize view is now MVVM and delete is working. Added the unused flag (take care that it goes to true only when displaying screen elements) Backstage - new input/output format choice control for the conversion Backstage - Add display, behaviour and register file type options in the extended options dialog Explorer list view has been transformed to a custom control. New group header, colunms order and size are saved Single insta...New Projects{3S} SQL Smart Security "Protect your T-SQL know-how!": {3S} SQL Smart Security is an add-in which can be installed in Microsoft SQL Server Management Studio (SSMS). It enables software companies to create a secured content for database objects. The add-in brings much higher level of protection in comparison with SQL Server built in WITH ENCRYPTION feature.BETA - Content Slider for SharePoint 2010 / Office 365: SharePoint Banner / SharePoint 2010 Sliding Banner / Content Slider tools in Office 365/ Sliding Content in SharePoint is a general tool which could be used for sliding Banners or any other sliding content to be placed on any Office 365 / SharePoint 2010 / SharePoint Foundation.BF3 Development Server: The main issue of this project is to deliver a test server to all developers working on RCon (Remote-Administration-Console) Tools for Battlefield 3. Actually the only possibility to test the work made is to hire a real Game Server. BizTalk Server 2010 TCP/IP Adapter: This project is migration of existing BizTalk server 2009 TCPIP adapter to BizTalk server 2010. I have made few configuration changes which are making this adapter and installation compatible to BizTalk Server 2010. I have not modified adapter source code.BryhtCommon: It`s for easy to develope WP7 ,it contains some useful method Bug.Net Defect Tracking Components: Bug.Net server-side controls and components to add defect (bug) tracking to your current ASP.NET website.Customize Survey With Client Object Model: Customize OOB Survey/Vote/Poll in Share?Point With Client Object Model Visit http://swatipoint.blogspot.in/2011/12/sharepoint-client-object-modellist.html for more detailsDropboxToDo: A simple todo, synchronizing by Dropbox or, in future, by SkydriveFoggy: Foggy is a WPF dashboard for FogBugz information.Fort Myers High School Website: A website for Fort Myers High School in Fort Myers, Florida. This website will allow for both students and parents to better interact with the school. Developed in ASP.net (C#).Gonte.DataAccess: Data access for NET. It's developed in C#.Gonte.ObjectModel: Metadata about objects. It's developed in C#Gonte.SqlGenerator: Sql Generator It's developed in C#.kLib: This project space is for datastructures and classes, which should always be available. 
Any developer should use these in their projects. Liuyi.Phone.CharmScreen: Liuyi.Phone.CharmScreen Liuyi windows phone appLoU: Lord of Ultima helper suite.Managing Supplies: This WP7 project is able to manage your own suppliesNAV Fixed Assets 2012: Changes related to 'Dossier Fiscal' in Microsoft Dynamics NAV 2009 - New Model 30 - Changes to model 31 and model 32NCAA Tourney DotNetNuke Module: The NCAA Tourney is a DotNetNuke 3.X - 6.X module that allows you to add a NCAA tournament to your portal. You can allow users to record their picks for the tournament and then manage the outcome of the tournament calculating the winner of your tourney based on customizable point system. The module has been designed to be very user friendly and efficient for the end user as well as the administrator of the tournament.nothing here anymore: nothing here anymoreOrchard Dream Store Project: A simple website using Orchard CMS. For a school projectPowerShell Management Library for TEM: A project to provide a PowerShell functionality for managing your Tivoli Endpoint Manager (built upon BigFix technology). You can locally or remotely manage endpoints and relays via these simple and easy to use PowerShell Module.PrismWebBuilder: Web Builder ProjectProjeto Northwind: Northwind - FPUQLCF: QLCFShipwire API: Shipwire makes it easier for consumers of Shipwire's international shipment fulfilment service to integrate their XML API quickly and easily. Current features are: Inventory Service Rate Service (Shipping Costs) Future features are: Order Entry Service Order Tracking ServiceSmith XNA tools: Smith's XNA tools is a set of useful that i make to improve some basics features to XNA and make the Game Design More Easy.ST Recover: ST Recover can read Atari ST floppy disks on a PC under Windows, including special formats as 800 or 900 KB and damaged or desynchronized disks, and produces standard .ST disk image files. Then the image files can be read in ST emulators as WinSTon or Steem.SyncSMS: Windows desktop client for the Android App SyncSMS. This code is not affiliated with SyncSMS in any way,tempzz: tempzzTruxtor: Truxtor modular concept for electronic gadgetsTT SA TEST1: TT SA Test 1VisualQuantCode: Neuroquants is a library in c# for quants It's developed in c#.Weeps: Generate Bass and drum line in type of midi to be guitar's backing track that was playing by userxxtest: xxtest???-????? "???????????": ?????-????????? ???????????? Digital Design 2012. ??? ????. ?????? ?: ?????????????????? ??????? ?????. ??????? ?????????? ????????? ????????. ?????????? ?????????????? ??????? ??????? ? ????? ???????????? Digital Design 2012.????? ???????????? Digital Design 2012. ??? ????. ?????? ?: ?????????????????? ??????? ?????. ??????? ?????????? ????????? ????????. ?????????? ?????????????? ??????? ??????? ? ????? ???????????? Digital Design 2012.

    Read the article

  • Azure Mobile Services: lessons learned

    - by svdoever
    When I first started using Azure Mobile Services I thought of it as a nice way to: authenticate my users - login using Twitter, Google, Facebook, Windows Live create tables, and use the client code to create the columns in the table because that is not possible in the Azure Mobile Services UI run some Javascript code on the table crud actions (Insert, Update, Delete, Read) schedule a Javascript to run every 15 or more minutes I had no idea of the magic that was happening inside… where is the data stored? Is it a kind of big table, are relationships between tables possible? those Javascripts on the table crud actions, is that interpreted, what is that exactly? After working for some time with Azure Mobile Services I became a lot wiser: Those tables are just normal tables in an Azure SQL Server 2012 database Creating the table columns through client code sucks, at least from my Javascript code, because the columns are deduced from the sent JSON data, and a datetime field is sent as string in JSON, so a string type column is created instead of a datetime column You can connect with SQL Management Studio to the Azure SQL Server, and although you can’t manage your columns through the SQL Management Studio UI, it is possible to just run SQL scripts to drop and create tables and indices When you create a table through SQL script, add the table with the same name in the Azure Mobile Services UI to hook it up and be able to access the table through the provided abstraction layer You can also go to the SQL Database through the Azure Mobile Services UI, and from there get in a web based SQL management studio where you can create columns and manage your data The table crud scripts and the scheduler scripts are full blown node.js scripts, introducing a lot of power with great performance The web based script editor is really powerful, I do most of my editing currently in the editor which has syntax highlighting and code completion. While editing the code JsHint is used for script validation. The documentation on Azure Mobile Services is… suboptimal. It is such a pity that there is no way to comment on it so the community could fill in the missing holes, like which node modules are already loaded, and which modules are available on Azure Mobile Services. Soon I was hacking away on Azure Mobile Services, creating my own database tables through script, and abusing the read script of an empty table named query to implement my own set of “services”. The latest updates to Azure Mobile Services described in the following posts added some great new features like creating web APIs, using shared code from your scripts, command line tools for managing Azure Mobile Services (upload and download scripts for example), support for node modules and git support: http://weblogs.asp.net/scottgu/archive/2013/06/14/windows-azure-major-updates-for-mobile-backend-development.aspx http://blogs.msdn.com/b/carlosfigueira/archive/2013/06/14/custom-apis-in-azure-mobile-services.aspx http://blogs.msdn.com/b/carlosfigueira/archive/2013/06/19/custom-api-in-azure-mobile-services-client-sdks.aspx In the meantime I rewrote all my “service-like” table scripts to API scripts, which works like a breeze. Bad thing with the current state of Azure Mobile Services is that the git support is not working if you are a co-administrator of your Azure subscription, and not an administrator (as in my case).
Another bad thing is that Cross-Origin Resource Sharing (CORS) is not supported for the API yet, so no go yet from the browser client for APIs, which is my case. See http://social.msdn.microsoft.com/Forums/windowsazure/en-US/2b79c5ea-d187-4c2b-823a-3f3e0559829d/known-limitations-for-source-control-and-custom-api-features for more on these and other limitations. In his talk at Build 2013 Josh Twist showed that there is a work-around for accessing shared script code from the table scripts as well (another limitation mentioned in the post above). I could not find that code in the Votabl2 code example from the presentation at https://github.com/joshtwist/votabl2, but we can grab it from the presentation when it comes online on Channel9. By the way: you can always express your needs and ideas at http://mobileservices.uservoice.com, that’s the place they are listening to (I hope!).

    Read the article

  • Windows could not start Apache 2 on the local computer

    - by andig
    After installing PHP 5.3, Windows is unable to start Apache 2.2. Apache's error log is empty, no error message on startup: C:\Programme\Apache\bin>httpd -k start C:\Programme\Apache\bin>httpd -k stop The Apache2.2 service is not started. C:\Programme\Apache\bin>httpd -k config Reconfiguring the Apache2.2 service The Apache2.2 service is successfully installed. Testing httpd.conf.... Errors reported here must be corrected before the service can be started. I have no clue where to look for the cause. php5apache2_2.dll is copied to the Apache modules folder. The configuration looks like this: LoadModule php5_module modules/php5apache2_2.dll PHPIniDir "C:/programme/php" Where and how can I start diagnosis? The only hint I have so far is that startup fails as soon as a PHP module is enabled in the configuration. Is there a way to get more details out of the Apache startup process? This is the http.conf: # # This is the main Apache HTTP server configuration file. It contains the # configuration directives that give the server its instructions. # See <URL:http://httpd.apache.org/docs/2.2> for detailed information. # In particular, see # <URL:http://httpd.apache.org/docs/2.2/mod/directives.html> # for a discussion of each configuration directive. # # Do NOT simply read the instructions in here without understanding # what they do. They're here only as hints or reminders. If you are unsure # consult the online docs. You have been warned. # # Configuration and logfile names: If the filenames you specify for many # of the server's control files begin with "/" (or "drive:/" for Win32), the # server will use that explicit path. If the filenames do *not* begin # with "/", the value of ServerRoot is prepended -- so "logs/foo.log" # with ServerRoot set to "C:/Programme/Apache" will be interpreted by the # server as "C:/Programme/Apache/logs/foo.log". # # NOTE: Where filenames are specified, you must use forward slashes # instead of backslashes (e.g., "c:/apache" instead of "c:\apache"). # If a drive letter is omitted, the drive on which httpd.exe is located # will be used by default. It is recommended that you always supply # an explicit drive letter in absolute paths to avoid confusion. # # ServerRoot: The top of the directory tree under which the server's # configuration, error, and log files are kept. # # Do not add a slash at the end of the directory path. If you point # ServerRoot at a non-local disk, be sure to point the LockFile directive # at a local disk. If you wish to share the same ServerRoot for multiple # httpd daemons, you will need to change at least LockFile and PidFile. # ServerRoot "C:/Programme/Apache" # # Listen: Allows you to bind Apache to specific IP addresses and/or # ports, instead of the default. See also the <VirtualHost> # directive. # # Change this to Listen on specific IP addresses as shown below to # prevent Apache from glomming onto all bound IP addresses. # #Listen 12.34.56.78:80 Listen 80 # # Dynamic Shared Object (DSO) Support # # To be able to use the functionality of a module which was built as a DSO you # have to place corresponding `LoadModule' lines at this location so the # directives contained in it are actually available _before_ they are used. # Statically compiled modules (those listed by `httpd -l') do not need # to be loaded here. 
# # Example: # LoadModule foo_module modules/mod_foo.so # LoadModule actions_module modules/mod_actions.so LoadModule alias_module modules/mod_alias.so LoadModule asis_module modules/mod_asis.so LoadModule auth_basic_module modules/mod_auth_basic.so #LoadModule auth_digest_module modules/mod_auth_digest.so #LoadModule authn_alias_module modules/mod_authn_alias.so #LoadModule authn_anon_module modules/mod_authn_anon.so #LoadModule authn_dbd_module modules/mod_authn_dbd.so #LoadModule authn_dbm_module modules/mod_authn_dbm.so LoadModule authn_default_module modules/mod_authn_default.so LoadModule authn_file_module modules/mod_authn_file.so #LoadModule authnz_ldap_module modules/mod_authnz_ldap.so #LoadModule authz_dbm_module modules/mod_authz_dbm.so LoadModule authz_default_module modules/mod_authz_default.so LoadModule authz_groupfile_module modules/mod_authz_groupfile.so LoadModule authz_host_module modules/mod_authz_host.so #LoadModule authz_owner_module modules/mod_authz_owner.so LoadModule authz_user_module modules/mod_authz_user.so LoadModule autoindex_module modules/mod_autoindex.so #LoadModule cache_module modules/mod_cache.so #LoadModule cern_meta_module modules/mod_cern_meta.so LoadModule cgi_module modules/mod_cgi.so #LoadModule charset_lite_module modules/mod_charset_lite.so #LoadModule dav_module modules/mod_dav.so #LoadModule dav_fs_module modules/mod_dav_fs.so #LoadModule dav_lock_module modules/mod_dav_lock.so #LoadModule dbd_module modules/mod_dbd.so #LoadModule deflate_module modules/mod_deflate.so LoadModule dir_module modules/mod_dir.so #LoadModule disk_cache_module modules/mod_disk_cache.so #LoadModule dumpio_module modules/mod_dumpio.so LoadModule env_module modules/mod_env.so #LoadModule expires_module modules/mod_expires.so #LoadModule ext_filter_module modules/mod_ext_filter.so #LoadModule file_cache_module modules/mod_file_cache.so #LoadModule filter_module modules/mod_filter.so #LoadModule headers_module modules/mod_headers.so #LoadModule ident_module modules/mod_ident.so #LoadModule imagemap_module modules/mod_imagemap.so LoadModule include_module modules/mod_include.so #LoadModule info_module modules/mod_info.so LoadModule isapi_module modules/mod_isapi.so #LoadModule ldap_module modules/mod_ldap.so #LoadModule logio_module modules/mod_logio.so LoadModule log_config_module modules/mod_log_config.so #LoadModule log_forensic_module modules/mod_log_forensic.so #LoadModule mem_cache_module modules/mod_mem_cache.so LoadModule mime_module modules/mod_mime.so #LoadModule mime_magic_module modules/mod_mime_magic.so LoadModule negotiation_module modules/mod_negotiation.so #LoadModule proxy_module modules/mod_proxy.so #LoadModule proxy_ajp_module modules/mod_proxy_ajp.so #LoadModule proxy_balancer_module modules/mod_proxy_balancer.so #LoadModule proxy_connect_module modules/mod_proxy_connect.so #LoadModule proxy_ftp_module modules/mod_proxy_ftp.so #LoadModule proxy_http_module modules/mod_proxy_http.so #LoadModule proxy_scgi_module modules/mod_proxy_scgi.so #LoadModule reqtimeout_module modules/mod_reqtimeout.so #LoadModule rewrite_module modules/mod_rewrite.so LoadModule setenvif_module modules/mod_setenvif.so #LoadModule speling_module modules/mod_speling.so #LoadModule ssl_module modules/mod_ssl.so #LoadModule status_module modules/mod_status.so #LoadModule substitute_module modules/mod_substitute.so #LoadModule unique_id_module modules/mod_unique_id.so #LoadModule userdir_module modules/mod_userdir.so #LoadModule usertrack_module modules/mod_usertrack.so #LoadModule 
version_module modules/mod_version.so #LoadModule vhost_alias_module modules/mod_vhost_alias.so #!! LoadModule php5_module modules/php5apache2_2.dll PHPIniDir "C:/programme/php" <IfModule !mpm_netware_module> <IfModule !mpm_winnt_module> # # If you wish httpd to run as a different user or group, you must run # httpd as root initially and it will switch. # # User/Group: The name (or #number) of the user/group to run httpd as. # It is usually good practice to create a dedicated user and group for # running httpd, as with most system services. # User daemon Group daemon </IfModule> </IfModule> # 'Main' server configuration # # The directives in this section set up the values used by the 'main' # server, which responds to any requests that aren't handled by a # <VirtualHost> definition. These values also provide defaults for # any <VirtualHost> containers you may define later in the file. # # All of these directives may appear inside <VirtualHost> containers, # in which case these default settings will be overridden for the # virtual host being defined. # # # ServerAdmin: Your address, where problems with the server should be # e-mailed. This address appears on some server-generated pages, such # as error documents. e.g. [email protected] # ServerAdmin [email protected] # # ServerName gives the name and port that the server uses to identify itself. # This can often be determined automatically, but we recommend you specify # it explicitly to prevent problems during startup. # # If your host doesn't have a registered DNS name, enter its IP address here. # #ServerName localhost:8080 # # DocumentRoot: The directory out of which you will serve your # documents. By default, all requests are taken from this directory, but # symbolic links and aliases may be used to point to other locations. # DocumentRoot "C:/data/htdocs" # # Each directory to which Apache has access can be configured with respect # to which services and features are allowed and/or disabled in that # directory (and its subdirectories). # # First, we configure the "default" to be a very restrictive set of # features. # <Directory /> Options FollowSymLinks AllowOverride None Order deny,allow Deny from all </Directory> # # Note that from this point forward you must specifically allow # particular features to be enabled - so if something's not working as # you might expect, make sure that you have specifically enabled it # below. # # # This should be changed to whatever you set DocumentRoot to. # <Directory "C:/data/htdocs"> # # Possible values for the Options directive are "None", "All", # or any combination of: # Indexes Includes FollowSymLinks SymLinksifOwnerMatch ExecCGI MultiViews # # Note that "MultiViews" must be named *explicitly* --- "Options All" # doesn't give it to you. # # The Options directive is both complicated and important. Please see # http://httpd.apache.org/docs/2.2/mod/core.html#options # for more information. # Options Indexes FollowSymLinks # # AllowOverride controls what directives may be placed in .htaccess files. # It can be "All", "None", or any combination of the keywords: # Options FileInfo AuthConfig Limit # AllowOverride None # # Controls who can get stuff from this server. # Order allow,deny Allow from all </Directory> # # DirectoryIndex: sets the file that Apache will serve if a directory # is requested. # <IfModule dir_module> DirectoryIndex index.html </IfModule> # # The following lines prevent .htaccess and .htpasswd files from being # viewed by Web clients. 
# <FilesMatch "^\.ht"> Order allow,deny Deny from all Satisfy All </FilesMatch> # # ErrorLog: The location of the error log file. # If you do not specify an ErrorLog directive within a <VirtualHost> # container, error messages relating to that virtual host will be # logged here. If you *do* define an error logfile for a <VirtualHost> # container, that host's errors will be logged there and not here. # ErrorLog "logs/error.log" # # LogLevel: Control the number of messages logged to the error_log. # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. # LogLevel debug <IfModule log_config_module> # # The following directives define some format nicknames for use with # a CustomLog directive (see below). # LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined LogFormat "%h %l %u %t \"%r\" %>s %b" common <IfModule logio_module> # You need to enable mod_logio.c to use %I and %O LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %I %O" combinedio </IfModule> # # The location and format of the access logfile (Common Logfile Format). # If you do not define any access logfiles within a <VirtualHost> # container, they will be logged here. Contrariwise, if you *do* # define per-<VirtualHost> access logfiles, transactions will be # logged therein and *not* in this file. # CustomLog "logs/access.log" common # # If you prefer a logfile with access, agent, and referer information # (Combined Logfile Format) you can use the following directive. # #CustomLog "logs/access.log" combined </IfModule> <IfModule alias_module> # # Redirect: Allows you to tell clients about documents that used to # exist in your server's namespace, but do not anymore. The client # will make a new request for the document at its new location. # Example: # Redirect permanent /foo http://localhost/bar # # Alias: Maps web paths into filesystem paths and is used to # access content that does not live under the DocumentRoot. # Example: # Alias /webpath /full/filesystem/path # # If you include a trailing / on /webpath then the server will # require it to be present in the URL. You will also likely # need to provide a <Directory> section to allow access to # the filesystem path. # # ScriptAlias: This controls which directories contain server scripts. # ScriptAliases are essentially the same as Aliases, except that # documents in the target directory are treated as applications and # run by the server when requested rather than as documents sent to the # client. The same rules about trailing "/" apply to ScriptAlias # directives as to Alias. # ScriptAlias /cgi-bin/ "C:/Programme/Apache/cgi-bin/" </IfModule> <IfModule cgid_module> # # ScriptSock: On threaded servers, designate the path to the UNIX # socket used to communicate with the CGI daemon of mod_cgid. # #Scriptsock logs/cgisock </IfModule> # # "C:/Programme/Apache/cgi-bin" should be changed to whatever your ScriptAliased # CGI directory exists, if you have that configured. # <Directory "C:/Programme/Apache/cgi-bin"> AllowOverride None Options None Order allow,deny Allow from all </Directory> # # DefaultType: the default MIME type the server will use for a document # if it cannot otherwise determine one, such as from filename extensions. # If your server contains mostly text or HTML documents, "text/plain" is # a good value. 
If most of your content is binary, such as applications # or images, you may want to use "application/octet-stream" instead to # keep browsers from trying to display binary files as though they are # text. # DefaultType text/plain <IfModule mime_module> # # TypesConfig points to the file containing the list of mappings from # filename extension to MIME-type. # TypesConfig conf/mime.types # # AddType allows you to add to or override the MIME configuration # file specified in TypesConfig for specific file types. # #AddType application/x-gzip .tgz # # AddEncoding allows you to have certain browsers uncompress # information on the fly. Note: Not all browsers support this. # #AddEncoding x-compress .Z #AddEncoding x-gzip .gz .tgz # # If the AddEncoding directives above are commented-out, then you # probably should define those extensions to indicate media types: # AddType application/x-compress .Z AddType application/x-gzip .gz .tgz # # AddHandler allows you to map certain file extensions to "handlers": # actions unrelated to filetype. These can be either built into the server # or added with the Action directive (see below) # # To use CGI scripts outside of ScriptAliased directories: # (You will also need to add "ExecCGI" to the "Options" directive.) # #AddHandler cgi-script .cgi # For type maps (negotiated resources): #AddHandler type-map var # # Filters allow you to process content before it is sent to the client. # # To parse .shtml files for server-side includes (SSI): # (You will also need to add "Includes" to the "Options" directive.) # #AddType text/html .shtml #AddOutputFilter INCLUDES .shtml </IfModule> # # The mod_mime_magic module allows the server to use various hints from the # contents of the file itself to determine its type. The MIMEMagicFile # directive tells the module where the hint definitions are located. # #MIMEMagicFile conf/magic # # Customizable error responses come in three flavors: # 1) plain text 2) local redirects 3) external redirects # # Some examples: #ErrorDocument 500 "The server made a boo boo." #ErrorDocument 404 /missing.html #ErrorDocument 404 "/cgi-bin/missing_handler.pl" #ErrorDocument 402 http://localhost/subscription_info.html # # # EnableMMAP and EnableSendfile: On systems that support it, # memory-mapping or the sendfile syscall is used to deliver # files. This usually improves server performance, but must # be turned off when serving from networked-mounted # filesystems or if support for these functions is otherwise # broken on your system. # #EnableMMAP off #EnableSendfile off # Supplemental configuration # # The configuration files in the conf/extra/ directory can be # included to add extra features or to modify the default configuration of # the server, or you may simply copy their contents here and change as # necessary. 
# Server-pool management (MPM specific) #Include conf/extra/httpd-mpm.conf # Multi-language error messages #Include conf/extra/httpd-multilang-errordoc.conf # Fancy directory listings #Include conf/extra/httpd-autoindex.conf # Language settings #Include conf/extra/httpd-languages.conf # User home directories #Include conf/extra/httpd-userdir.conf # Real-time info on requests and configuration #Include conf/extra/httpd-info.conf # Virtual hosts #Include conf/extra/httpd-vhosts.conf # Local access to the Apache HTTP Server Manual #Include conf/extra/httpd-manual.conf # Distributed authoring and versioning (WebDAV) #Include conf/extra/httpd-dav.conf # Various default settings #Include conf/extra/httpd-default.conf # Secure (SSL/TLS) connections #Include conf/extra/httpd-ssl.conf # # Note: The following must must be present to support # starting without SSL on platforms with no /dev/random equivalent # but a statically compiled-in mod_ssl. # <IfModule ssl_module> SSLRandomSeed startup builtin SSLRandomSeed connect builtin </IfModule> #!! <IfModule mod_php5.c> AddType application/x-httpd-php .php AddType application/x-httpd-php .php5 AddType application/x-httpd-php-source .phps </IfModule>
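    A common cause of silent Apache 2.2 startup failures on Windows when loading php5apache2_2.dll is a build mismatch between httpd.exe and the PHP module (32-bit vs. 64-bit binaries, or differing VC runtime builds). That is not confirmed by the post above, but it is cheap to rule out. Below is a minimal sketch, assuming the Apache paths shown in the question, that reads the PE COFF header of both binaries and compares the Machine field (0x014C = x86, 0x8664 = x64); the file paths are illustrative.

        import java.io.IOException;
        import java.io.RandomAccessFile;

        // Minimal sketch: compares the PE "Machine" field of two Windows binaries
        // (e.g. httpd.exe and php5apache2_2.dll) to spot a 32/64-bit mismatch.
        public class PeArchCheck {

            static int machineOf(String path) throws IOException {
                try (RandomAccessFile f = new RandomAccessFile(path, "r")) {
                    f.seek(0x3C);                                      // e_lfanew: offset of the PE header
                    int peOffset = Integer.reverseBytes(f.readInt()); // stored little-endian
                    f.seek(peOffset);
                    byte[] sig = new byte[4];
                    f.readFully(sig);
                    if (sig[0] != 'P' || sig[1] != 'E' || sig[2] != 0 || sig[3] != 0) {
                        throw new IOException("Not a PE file: " + path);
                    }
                    // COFF Machine field follows the "PE\0\0" signature
                    return Short.toUnsignedInt(Short.reverseBytes(f.readShort()));
                }
            }

            public static void main(String[] args) throws IOException {
                String apache = "C:/Programme/Apache/bin/httpd.exe";               // illustrative path
                String phpDll = "C:/Programme/Apache/modules/php5apache2_2.dll";   // illustrative path
                System.out.printf("%s -> 0x%04X%n", apache, machineOf(apache));
                System.out.printf("%s -> 0x%04X%n", phpDll, machineOf(phpDll));
            }
        }

    If the two values differ, Apache cannot load the module and will typically exit without writing anything useful to its error log, which matches the symptom described.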

    Read the article

  • Library like ENet, but for TCP?

    - by Milo
    I'm not looking to use boost::asio, it is overly complex for my needs. I'm building a game that is cross platform, for desktop, iPhone and Android. I found a library called ENet which is pretty much what I need, but it uses UDP which does not seem to support encryption and a few other things. Given that the game is an event driven card game, TCP seems like the right fit. However, all I have found is WINSOCK / berkley sockets and bost::asio. Here is a sample client server application with ENet: #include <enet/enet.h> #include <stdlib.h> #include <string> #include <iostream> class Host { ENetAddress address; ENetHost * server; ENetHost* client; ENetEvent event; public: Host() :server(NULL) { enet_initialize(); setupServer(); } void setupServer() { if(server) { enet_host_destroy(server); server = NULL; } address.host = ENET_HOST_ANY; /* Bind the server to port 1234. */ address.port = 1721; server = enet_host_create (& address /* the address to bind the server host to */, 32 /* allow up to 32 clients and/or outgoing connections */, 2 /* allow up to 2 channels to be used, 0 and 1 */, 0 /* assume any amount of incoming bandwidth */, 0 /* assume any amount of outgoing bandwidth */); } void daLoop() { while(true) { /* Wait up to 1000 milliseconds for an event. */ while (enet_host_service (server, & event, 5000) > 0) { ENetPacket * packet; switch (event.type) { case ENET_EVENT_TYPE_CONNECT: printf ("A new client connected from %x:%u.\n", event.peer -> address.host, event.peer -> address.port); /* Store any relevant client information here. */ event.peer -> data = "Client information"; /* Create a reliable packet of size 7 containing "packet\0" */ packet = enet_packet_create ("packet", strlen ("packet") + 1, ENET_PACKET_FLAG_RELIABLE); /* Extend the packet so and append the string "foo", so it now */ /* contains "packetfoo\0" */ enet_packet_resize (packet, strlen ("packetfoo") + 1); strcpy ((char*)& packet -> data [strlen ("packet")], "foo"); /* Send the packet to the peer over channel id 0. */ /* One could also broadcast the packet by */ /* enet_host_broadcast (host, 0, packet); */ enet_peer_send (event.peer, 0, packet); /* One could just use enet_host_service() instead. */ enet_host_flush (server); break; case ENET_EVENT_TYPE_RECEIVE: printf ("A packet of length %u containing %s was received from %s on channel %u.\n", event.packet -> dataLength, event.packet -> data, event.peer -> data, event.channelID); /* Clean up the packet now that we're done using it. */ enet_packet_destroy (event.packet); break; case ENET_EVENT_TYPE_DISCONNECT: printf ("%s disconected.\n", event.peer -> data); /* Reset the peer's client information. */ event.peer -> data = NULL; } } } } ~Host() { if(server) { enet_host_destroy(server); server = NULL; } atexit (enet_deinitialize); } }; class Client { ENetAddress address; ENetEvent event; ENetPeer *peer; ENetHost* client; public: Client() :peer(NULL) { enet_initialize(); setupPeer(); } void setupPeer() { client = enet_host_create (NULL /* create a client host */, 1 /* only allow 1 outgoing connection */, 2 /* allow up 2 channels to be used, 0 and 1 */, 57600 / 8 /* 56K modem with 56 Kbps downstream bandwidth */, 14400 / 8 /* 56K modem with 14 Kbps upstream bandwidth */); if (client == NULL) { fprintf (stderr, "An error occurred while trying to create an ENet client host.\n"); exit (EXIT_FAILURE); } /* Connect to some.server.net:1234. */ enet_address_set_host (& address, "192.168.2.13"); address.port = 1721; /* Initiate the connection, allocating the two channels 0 and 1. 
*/ peer = enet_host_connect (client, & address, 2, 0); if (peer == NULL) { fprintf (stderr, "No available peers for initiating an ENet connection.\n"); exit (EXIT_FAILURE); } /* Wait up to 5 seconds for the connection attempt to succeed. */ if (enet_host_service (client, & event, 20000) > 0 && event.type == ENET_EVENT_TYPE_CONNECT) { std::cout << "Connection to some.server.net:1234 succeeded." << std::endl; } else { /* Either the 5 seconds are up or a disconnect event was */ /* received. Reset the peer in the event the 5 seconds */ /* had run out without any significant event. */ enet_peer_reset (peer); puts ("Connection to some.server.net:1234 failed."); } } void daLoop() { ENetPacket* packet; /* Create a reliable packet of size 7 containing "packet\0" */ packet = enet_packet_create ("backet", strlen ("backet") + 1, ENET_PACKET_FLAG_RELIABLE); /* Extend the packet so and append the string "foo", so it now */ /* contains "packetfoo\0" */ enet_packet_resize (packet, strlen ("backetfoo") + 1); strcpy ((char*)& packet -> data [strlen ("backet")], "foo"); /* Send the packet to the peer over channel id 0. */ /* One could also broadcast the packet by */ /* enet_host_broadcast (host, 0, packet); */ enet_peer_send (event.peer, 0, packet); /* One could just use enet_host_service() instead. */ enet_host_flush (client); while(true) { /* Wait up to 1000 milliseconds for an event. */ while (enet_host_service (client, & event, 1000) > 0) { ENetPacket * packet; switch (event.type) { case ENET_EVENT_TYPE_RECEIVE: printf ("A packet of length %u containing %s was received from %s on channel %u.\n", event.packet -> dataLength, event.packet -> data, event.peer -> data, event.channelID); /* Clean up the packet now that we're done using it. */ enet_packet_destroy (event.packet); break; } } } } ~Client() { atexit (enet_deinitialize); } }; int main() { std::string a; std::cin >> a; if(a == "host") { Host host; host.daLoop(); } else { Client c; c.daLoop(); } return 0; } I looked at some socket tutorials and they seemed a bit too low level. I just need something that abstracts away the platform (eg, no WINSOCKS) and that has basic ability to keep track of connected clients and send them messages. Thanks
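    For context on what such a library has to add on top of raw sockets: TCP delivers a byte stream, not discrete packets, so the main job of an ENet-style message layer over TCP is framing. Below is a minimal sketch of length-prefixed messages (written in Java purely to illustrate the idea, not in the C++ of the example above); the host and port mirror the ENet sample and are placeholders.

        import java.io.DataInputStream;
        import java.io.DataOutputStream;
        import java.io.IOException;
        import java.net.Socket;
        import java.nio.charset.StandardCharsets;

        // Sketch of the message framing an "ENet-like over TCP" layer provides:
        // each logical packet is sent as a 4-byte length prefix plus payload,
        // and read back the same way so messages arrive whole.
        public class FramedTcp {

            public static void sendMessage(Socket socket, byte[] payload) throws IOException {
                DataOutputStream out = new DataOutputStream(socket.getOutputStream());
                out.writeInt(payload.length);   // length prefix
                out.write(payload);             // message body
                out.flush();
            }

            public static byte[] readMessage(Socket socket) throws IOException {
                DataInputStream in = new DataInputStream(socket.getInputStream());
                int length = in.readInt();      // blocks until the prefix arrives
                byte[] payload = new byte[length];
                in.readFully(payload);          // blocks until the whole message arrives
                return payload;
            }

            public static void main(String[] args) throws IOException {
                try (Socket s = new Socket("192.168.2.13", 1721)) {   // placeholder endpoint
                    sendMessage(s, "backetfoo".getBytes(StandardCharsets.UTF_8));
                    System.out.println(new String(readMessage(s), StandardCharsets.UTF_8));
                }
            }
        }

    A higher-level TCP library essentially wraps this framing plus connection tracking and callbacks, which is the abstraction the question is asking for.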

    Read the article

  • IIS 7.0 informational HTTP status codes

    - by Samir R. Bhogayta
    1xx - Informational These HTTP status codes indicate a provisional response. The client computer receives one or more 1xx responses before the client computer receives a regular response. IIS 7.0 uses the following informational HTTP status codes: 100 - Continue. 101 - Switching protocols. 2xx - Success These HTTP status codes indicate that the server successfully accepted the request. IIS 7.0 uses the following success HTTP status codes: 200 - OK. The client request has succeeded. 201 - Created. 202 - Accepted. 203 - Nonauthoritative information. 204 - No content. 205 - Reset content. 206 - Partial content. 3xx - Redirection These HTTP status codes indicate that the client browser must take more action to fulfill the request. For example, the client browser may have to request a different page on the server. Or, the client browser may have to repeat the request by using a proxy server. IIS 7.0 uses the following redirection HTTP status codes: 301 - Moved permanently. 302 - Object moved. 304 - Not modified. 307 - Temporary redirect. 4xx - Client error These HTTP status codes indicate that an error occurred and that the client browser appears to be at fault. For example, the client browser may have requested a page that does not exist. Or, the client browser may not have provided valid authentication information. IIS 7.0 uses the following client error HTTP status codes: 400 - Bad request. The request could not be understood by the server due to malformed syntax. The client should not repeat the request without modifications. IIS 7.0 defines the following HTTP status codes that indicate a more specific cause of a 400 error: 400.1 - Invalid Destination Header. 400.2 - Invalid Depth Header. 400.3 - Invalid If Header. 400.4 - Invalid Overwrite Header. 400.5 - Invalid Translate Header. 400.6 - Invalid Request Body. 400.7 - Invalid Content Length. 400.8 - Invalid Timeout. 400.9 - Invalid Lock Token. 401 - Access denied. IIS 7.0 defines several HTTP status codes that indicate a more specific cause of a 401 error. The following specific HTTP status codes are displayed in the client browser but are not displayed in the IIS log: 401.1 - Logon failed. 401.2 - Logon failed due to server configuration. 401.3 - Unauthorized due to ACL on resource. 401.4 - Authorization failed by filter. 401.5 - Authorization failed by ISAPI/CGI application. 403 - Forbidden. IIS 7.0 defines the following HTTP status codes that indicate a more specific cause of a 403 error: 403.1 - Execute access forbidden. 403.2 - Read access forbidden. 403.3 - Write access forbidden. 403.4 - SSL required. 403.5 - SSL 128 required. 403.6 - IP address rejected. 403.7 - Client certificate required. 403.8 - Site access denied. 403.9 - Forbidden: Too many clients are trying to connect to the Web server. 403.10 - Forbidden: Web server is configured to deny Execute access. 403.11 - Forbidden: Password has been changed. 403.12 - Mapper denied access. 403.13 - Client certificate revoked. 403.14 - Directory listing denied. 403.15 - Forbidden: Client access licenses have exceeded limits on the Web server. 403.16 - Client certificate is untrusted or invalid. 403.17 - Client certificate has expired or is not yet valid. 403.18 - Cannot execute requested URL in the current application pool. 403.19 - Cannot execute CGI applications for the client in this application pool. 403.20 - Forbidden: Passport logon failed. 403.21 - Forbidden: Source access denied. 403.22 - Forbidden: Infinite depth is denied. 404 - Not found. 
IIS 7.0 defines the following HTTP status codes that indicate a more specific cause of a 404 error: 404.0 - Not found. 404.1 - Site Not Found. 404.2 - ISAPI or CGI restriction. 404.3 - MIME type restriction. 404.4 - No handler configured. 404.5 - Denied by request filtering configuration. 404.6 - Verb denied. 404.7 - File extension denied. 404.8 - Hidden namespace. 404.9 - File attribute hidden. 404.10 - Request header too long. 404.11 - Request contains double escape sequence. 404.12 - Request contains high-bit characters. 404.13 - Content length too large. 404.14 - Request URL too long. 404.15 - Query string too long. 404.16 - DAV request sent to the static file handler. 404.17 - Dynamic content mapped to the static file handler via a wildcard MIME mapping. 404.18 - Querystring sequence denied. 404.19 - Denied by filtering rule. 405 - Method Not Allowed. 406 - Client browser does not accept the MIME type of the requested page. 408 - Request timed out. 412 - Precondition failed. 5xx - Server error These HTTP status codes indicate that the server cannot complete the request because the server encounters an error. IIS 7.0 uses the following server error HTTP status codes: 500 - Internal server error. IIS 7.0 defines the following HTTP status codes that indicate a more specific cause of a 500 error: 500.0 - Module or ISAPI error occurred. 500.11 - Application is shutting down on the Web server. 500.12 - Application is busy restarting on the Web server. 500.13 - Web server is too busy. 500.15 - Direct requests for Global.asax are not allowed. 500.19 - Configuration data is invalid. 500.21 - Module not recognized. 500.22 - An ASP.NET httpModules configuration does not apply in Managed Pipeline mode. 500.23 - An ASP.NET httpHandlers configuration does not apply in Managed Pipeline mode. 500.24 - An ASP.NET impersonation configuration does not apply in Managed Pipeline mode. 500.50 - A rewrite error occurred during RQ_BEGIN_REQUEST notification handling. A configuration or inbound rule execution error occurred. Note Here is where the distributed rules configuration is read for both inbound and outbound rules. 500.51 - A rewrite error occurred during GL_PRE_BEGIN_REQUEST notification handling. A global configuration or global rule execution error occurred. Note Here is where the global rules configuration is read. 500.52 - A rewrite error occurred during RQ_SEND_RESPONSE notification handling. An outbound rule execution occurred. 500.53 - A rewrite error occurred during RQ_RELEASE_REQUEST_STATE notification handling. An outbound rule execution error occurred. The rule is configured to be executed before the output user cache gets updated. 500.100 - Internal ASP error. 501 - Header values specify a configuration that is not implemented. 502 - Web server received an invalid response while acting as a gateway or proxy. IIS 7.0 defines the following HTTP status codes that indicate a more specific cause of a 502 error: 502.1 - CGI application timeout. 502.2 - Bad gateway. 503 - Service unavailable. IIS 7.0 defines the following HTTP status codes that indicate a more specific cause of a 503 error: 503.0 - Application pool unavailable. 503.2 - Concurrent request limit exceeded.
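    A small sketch of how a client sees these classes in practice: the HTTP status line only carries the three-digit code, while the IIS sub-status (for example the ".2" in 404.2) is recorded in the IIS log rather than sent on the wire. The URL below is a placeholder.

        import java.net.HttpURLConnection;
        import java.net.URL;

        // Classify an HTTP response code into the classes described above.
        public class StatusClassifier {

            static String classify(int code) {
                switch (code / 100) {
                    case 1:  return "Informational (1xx)";
                    case 2:  return "Success (2xx)";
                    case 3:  return "Redirection (3xx)";
                    case 4:  return "Client error (4xx)";
                    case 5:  return "Server error (5xx)";
                    default: return "Unknown";
                }
            }

            public static void main(String[] args) throws Exception {
                HttpURLConnection conn =
                        (HttpURLConnection) new URL("http://localhost/").openConnection();
                int code = conn.getResponseCode();
                System.out.println(code + " -> " + classify(code));
            }
        }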

    Read the article

  • WebLogic JDBC Use of Oracle Wallet for SSL

    - by Steve Felts
    Introduction Secure Sockets Layer (SSL) can be used to secure the connection between the middle tier “client”, WebLogic Server (WLS) in this case, and the Oracle database server.  Data between WLS and database can be encrypted.  The server can be authenticated so you have proof that the database can be trusted by validating a certificate from the server.  The client can be authenticated so that the database only accepts connections from clients that it trusts. Similar to the discussion in an earlier article about using the Oracle wallet for database credentials, the Oracle wallet can also be used with SSL to store the keys and certificates.  By using it correctly, clear text passwords can be eliminated from the JDBC configuration and client/server configuration can be simplified by sharing the wallet across multiple datasources. There is a very good Oracle Technical White Paper on using SSL with the Oracle thin driver at http://www.oracle.com/technetwork/database/enterprise-edition/wp-oracle-jdbc-thin-ssl-130128.pdf [LINK1].  The link http://www.oracle.com/technetwork/middleware/weblogic/index-087556.html [LINK2] describes how to use WebLogic Server with Oracle JDBC Driver SSL. The information in this article is a guide on what steps need to be taken in the variety of available options; use the links above for details. SSL from the driver to the database server is basically turned on by specifying a protocol of “tcps” in the URL.  However, there is a fair amount of setup needed.  Also remember that there is an overhead in performance. Creating the wallets The common use cases are 1. “data encryption and server-only authentication”, requiring just a trust store, or 2. “data encryption and authentication of both tiers” (client and server), requiring a trust store and a key store. It is recommended to use the auto-login wallet type so that clear text passwords are not needed in the datasource configuration to open the wallet.  The store type for an auto-login wallet is “SSO” (Single Sign On), not “JKS” or “PKCS12” as in [LINK2].  The file name is “cwallet.sso”. Wallets are created using the orapki tool.  They need to be created based on the usage (encryption and/or authentication).  This is discussed in detail in [LINK1] in Appendix B or in the Advanced Security Administrator’s Guide of the Database documentation. Database Server Configuration It is necessary to update the sqlnet.ora and listener.ora files with the directory location of the wallet using WALLET_LOCATION.  These files also indicate whether or not SSL_CLIENT_AUTHENTICATION is being used (true or false). The Oracle Listener must also be configured to use the TCPS protocol.  The recommended port is 2484. LISTENER = (ADDRESS_LIST= (ADDRESS=(PROTOCOL=tcps)(HOST=servername)(PORT=2484))) WebLogic Server Classpath The WebLogic Server CLASSPATH must have three additional security files. The files that need to be added to the WLS CLASSPATH are $MW_HOME/modules/com.oracle.osdt_cert_1.0.0.0.jar $MW_HOME/modules/com.oracle.osdt_core_1.0.0.0.jar $MW_HOME/modules/com.oracle.oraclepki_1.0.0.0.jar One way to do this is to add them to PRE_CLASSPATH environment variable for use with the standard WebLogic scripts. Setting the Oracle Security Provider It’s necessary to enable the Oracle PKI provider on the client side.  
This can either be done statically by updating the java.security file under the JRE or dynamically by setting it in a WLS startup class using java.security.Security.insertProviderAt(new oracle.security.pki.OraclePKIProvider (), 3); See the full example of the startup class in [LINK2]. Datasource Configuration When creating a WLS datasource, set the PROTOCOL in the URL to tcps as in the following. jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=host)(PORT=port))(CONNECT_DATA=(SERVICE_NAME=myservice))) For encryption and server authentication, use the datasource connection properties: - javax.net.ssl.trustStore=location of wallet file on the client - javax.net.ssl.trustStoreType=”SSO” For client authentication, use the datasource connection properties: - javax.net.ssl.keyStore=location of wallet file on the client - javax.net.ssl.keyStoreType=”SSO” Note that the driver connection properties for the wallet require a file name, not a directory name. Active GridLink ONS over SSL For completeness, there is another SSL usage for WLS datasources.  The communication with the Oracle Notification Service (ONS) for load balancing information and node up/down events can use SSL also. Create an auto-login wallet and use the wallet on the client and server.  The following is a sample sequence to create a test wallet for use with ONS. orapki wallet create -wallet ons -auto_login -pwd ONS_Wallet orapki wallet add -wallet ons -dn "CN=ons_test,C=US" -keysize 1024 -self_signed -validity 9999 -pwd ONS_Wallet orapki wallet export -wallet ons -dn "CN=ons_test,C=US" -cert ons/cert.txt -pwd ONS_Wallet On the database server side, it’s necessary to define the walletfile directory in the file $CRS_HOME/opmn/conf/ons.config and run onsctl stop/start. When configuring an Active GridLink datasource, the connection to the ONS must be defined.  In addition to the host and port, the wallet file directory must be specified.  By not giving a password, a SSO wallet is assumed. Summary To use SSL with the Oracle thin driver without any clear text passwords, use an SSO Oracle Wallet.  SSL support in the Oracle thin driver is available starting in 10g Release 2.
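    To pull the pieces above together, here is a minimal standalone sketch that uses the thin driver directly (outside WLS) with the TCPS URL format, the SSO wallet connection properties, and the OraclePKIProvider registration described in the article. Host, port, service name, credentials, and the wallet path are placeholders, and the oraclepki, osdt_core, and osdt_cert jars listed above must be on the classpath.

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.util.Properties;

        // Sketch of the datasource settings described above, applied to a plain
        // thin-driver connection. Property names come from the article; values
        // are placeholders.
        public class ThinSslExample {
            public static void main(String[] args) throws Exception {
                // Enable the Oracle PKI provider so the driver can read SSO wallets
                // (the article does this in a WLS startup class).
                java.security.Security.insertProviderAt(
                        new oracle.security.pki.OraclePKIProvider(), 3);

                String url = "jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)"
                           + "(HOST=dbhost)(PORT=2484))"
                           + "(CONNECT_DATA=(SERVICE_NAME=myservice)))";

                Properties props = new Properties();
                props.setProperty("user", "scott");          // placeholder credentials
                props.setProperty("password", "tiger");
                // Server authentication: trust store is the auto-login wallet file.
                props.setProperty("javax.net.ssl.trustStore", "/path/to/wallet/cwallet.sso");
                props.setProperty("javax.net.ssl.trustStoreType", "SSO");
                // For client authentication, the same wallet can also serve as key store:
                // props.setProperty("javax.net.ssl.keyStore", "/path/to/wallet/cwallet.sso");
                // props.setProperty("javax.net.ssl.keyStoreType", "SSO");

                try (Connection conn = DriverManager.getConnection(url, props)) {
                    System.out.println("Connected over TCPS: " + !conn.isClosed());
                }
            }
        }

    Note that, as the article points out, the trustStore/keyStore properties take a wallet file name (cwallet.sso), not a directory.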

    Read the article

  • Disaster Recovery Discovery

    - by Rodney Landrum
    Last weekend I joined several of my IT staff on a mission to perform a DR test in our remote CoLo center in a large South East city of the US. Can I be more obtuse? The goal was simple for me as the sole DBA in a throng of Windows, Storage, Network and SAN admins – restore the databases and make them work. There were 4 applications that back ended to 7 SQL Server databases on 4 different SQL Server instances. We would maintain the original server names, but beyond that it was fair game. We had time to prepare so I was able to script out or otherwise automate the recovery process. I used sp_help_revlogin for three of the servers, a bit of a cheat actually because restoring the Master database on the target DR servers was the specified course of action according to the DR procedures ( the caveat “IF REQUIRED” left it open to interpretation. I really wanted to avoid the step of restoring Master for a number of reasons but mainly because I did not want to deal with issues starting SQL Services afterward. Having to account for the location of TempDB and the version conflicts of the resource DBs were just two of the battles I chose not to fight. Not to mention other system database location problems that might arise and prevent SQL from starting.  I was going to have to restore all of the user databases anyway, so I would not really gain any benefit, outside of logins, for taking the time to restore the source Master database over the newly installed one on the fresh server. What I wanted was the ability to restore the Master database as a user database, call it Master_Mine, from a backup on the source system and then use that restored database to script the SQL Logins and passwords on the DR systems. While I did not attempt this on the trip, the thought stuck in my mind and this past week I succeeded at scripting user accounts and passwords using only a restored copy of the Master database. Granted there were several challenges to overcome.  Also, as is usual for any work like this the usual disclaimers apply:  This is not something that I would imagine Microsoft would condone or support and this was really only an experiment for me to learn if it was even possible. While I have tested the process with success, I do not know that I would use this technique in a documented procedure because future updates for SQL Server will render this technique non-functional. I thought at first, incorrectly of course, that I could use sp_help_revlogin on a restored copy of the master database I named Master_Mine.   Since sp_help_revlogin uses system schema objects, sys.syslogins and sys.server_principals, this was not going to work because all results would come from the main Master database. To test this I added a SQL login via SSMS, backed up Master, restored  it as Master_Mine, and then deleted the login.  Even though the test account I created should presumably still be in the Master_Mine database, I should be able to get to it and script out its creation with its password hash so that I would not need to know the password, but any applications that stored that password would not have to be altered in the DR scenario. They would just work as expected. Once I realized that would not work I began looking deeper.  Knowing that sys.syslogins and sys.server_principals are system views, their underlying code should be available with sp_helptext, right? They were. And this led me to discover the two tables sys.sysxlgns and sys.sysprivs, where the data I needed was stored. 
These tables existed in both the real Master and the restored copy, Master_Mine.  I used this information to tweak the sp_help_revlogin stored procedure to use these tables instead to create the logins cursor used in sp_help_revlogin. For the password hash,  sp_help_revlogin uses the function LoginProperty() which takes a user name and option ‘passwordhash’ to return the hash for the user. Unfortunately, it requires the login to exist in the Master database. This would not work. So another slight modification I had to make was to pull the password hash itself (pwdhash from sys.sysxlgns) into the logins cursor and comment out the section of sp_help_revlogin that uses LoginProperty. Instead, I pass the pwdhash value as the variable @PWD_varbinary to the sp_hexadecimal stored procedure which is also created by and used within the code provided by Microsoft in the link above for sp_help_revlogin. The final challenge: sys.sysxlgns and sys.server_principals are visible only within a Dedicated Administrator Connection (DAC) query window in SSMS or within SQLCDMD.  To open a DAC connection you have to be logged in on the SQL Server itself, via RDP in my case,  and you preface the server name in the query connection with ADMIN:, so that the server connection looks like ADMIN:ServerName. From there you can create the modified stored procedure in the restored copy of a Master database from a source system as whatever name you like, and then run the modified stored procedure. I named my new stored procedure usp_help_revlogin_MyMaster. Upon execution I was happy to see the logins and password hashes that I needed to apply from the source Master database without having to restore over the new Master system database and without the need to access the original server (assuming it was down due to whatever disaster put it in that state). You will note that I am not providing full code samples here of the modifications. I will say that it was a slight bit of work and anyone who needed to do this for whatever reason, could fairly easily roll their own solution with the information provided herein.  My goal, as I said was to prove that this could be done and provide another option if required to ease the burden of getting SQL Servers up and available in an emergency situation where alternatives may be more challenging or otherwise unavailable.  

    Read the article

  • Ubuntu Server 12.04 as a router. Problem with DNS?? Or Routing table?

    - by Lorenzo
    I have a VirtualBox lab made up of 4 Windows 2008 R2 servers (DC/DNS, SQL, SHAREPOINT, EXCHANGE) that are configured with static IP addresses, with NICs attached to the Internal network. Everything works. I had the requirement to run some tests that also access external services available on the internet. To keep things clean and similar to the production environment, I installed another VM with Ubuntu Server 12.04 64-bit and configured it (I hope) to work as a router, as described in this post. This VM has two network interfaces: the first is bridged with the host and is used as the WAN connection, and the other is attached to the Internal Network with its own static IP address on the internal network subnet. But the Windows servers do not reach the internet, while the Linux VM does. I ran a route command; this is the result:
        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        default         10.69.121.1     0.0.0.0         UG    100    0        0 eth0
        10.69.121.0     *               255.255.255.0   U     0      0        0 eth0
        192.168.83.0    *               255.255.255.0   U     0      0        0 eth1
    Can somebody help me with this configuration? :) Thanks! Addendum: I forgot to mention that one of the Windows servers hosts a DNS service, for which I should probably configure a forwarder, but I do not know exactly which server to forward to... :(

    Read the article

  • Proxmox VE cluster not working

    - by JJ56
    I have been using a Proxmox VE 2.0 cluster (2 servers) for months. Today, I had the "bright idea" to install a GUI on one of the servers. I used tasksel, and selected the graphical desktop environment (Gnome). When I rebooted the server, neither of the servers on the cluster could see each other (shows a red light by the server on the sidebar, no statistics). The VMs on each server are working fine, individually. pvecm status on the broken server shows cman_tool: cannot open connection to cman, is it running ?. Running it on the other server outputs a lot of lines (here: http://pastebin.com/HpQfUHTU), but I assume the important bit is expected votes:2 total votes: 1 Trying pvecm delnode (othernode) on either server outputs: cluster not ready - no quorum? Any suggestions as to how I can fix this? Thanks in advance!

    Read the article

  • Xcopy "exited with code 9009"

    - by David Veeneman
    I am getting the following error, which I don't understand: Error 1 The command "xcopy "D:\Users\johndoe\Documents\Visual Studio 2008\Projects\MyProject\MyProject.Modules.Ribbon\bin\Debug\MyProject.Modules.Ribbon.dll" "D:\Users\johndoe\Documents\Visual Studio 2008\Projects\MyProject\MyProject\bin\Debug\Modules\" /Y" exited with code 9009. MyProject.Modules.Ribbon Any suggestions on how to fix this?

    Read the article

  • TCP RST Reset Every 5 Minutes on Windows 2003 sp2

    - by Dan
    Hey, recently I had a web developer come to me and ask why he was receiving connection errors in his app that was accessing a SQL database. So I went through my normal troubleshooting steps to isolate or reproduce the issue. I discovered that if I connected to the database using Query Analyzer and let the connection idle for 5 minutes, it would disconnect. Meaning... I would no longer be able to refresh my tables or any other object/node within the object browser in Query Analyzer; I would have to right-click on the instance and refresh for it to re-establish the connection. Next I went to Wireshark and ran a capture on the client PC's NIC. Sure enough, it was receiving a TCP RST every 5 minutes if the connection idled longer than 5 minutes. I also ran a capture on the SQL Server and saw the TCP RST there as well. Attached below is the capture from the client machine. If someone could please assist, that would be great.
    - I checked all settings within SQL Server 2000 against another server and they all seem to be the same.
    - The issue does not occur if I connect to any other SQL Server 2000 server.
    - The issue does not occur if connecting to SQL on the server itself... so only over the network.
    - I consulted with the network team and this is the response back: There are no firewalls or proxies in between SQL Server and your desktop. The traffic flows like this: Desktop - Access Switch - Distro Switch - Core Switch - Datacenter Switch - SQL Server. None of the switches have security ACLs configured on them. They also stated that NAT was not turned on.
    - The issue does not occur with SQL Server Enterprise Manager.
    - I ran SQL Profiler at the same time and did not see anything out of the ordinary during the RST.
    I have searched high and low on Google for a resolution to this issue. No luck! My questions are: What could be causing this? A wrong sequence number? A setting in a router or switch the network team may have overlooked? A setting within Windows? A setting within SQL Server 2000 that I have overlooked? A better way to use Wireshark to find more answers? The RST is about 10 lines from the bottom.
    No.
Time Source Destination Protocol Info 258 24.390708 x.x.x.99 x.x.x.10 TCP 14488 > 2226 [SYN] Seq=0 Len=0 MSS=1260 259 24.401679 x.x.x.10 x.x.x.99 TCP 2226 > 14488 [SYN, ACK] Seq=0 Ack=1 Win=64240 Len=0 MSS=1460 260 24.401729 x.x.x.99 x.x.x.10 TCP 14488 > 2226 [ACK] Seq=1 Ack=1 Win=65535 [TCP CHECKSUM INCORRECT] Len=0 261 24.402212 x.x.x.99 x.x.x.10 TCP 14488 > 2226 [PSH, ACK] Seq=1 Ack=1 Win=65535 [TCP CHECKSUM INCORRECT] Len=42 262 24.413335 x.x.x.10 x.x.x.99 TCP 2226 > 14488 [PSH, ACK] Seq=1 Ack=43 Win=64198 Len=37 285 24.466512 x.x.x.99 x.x.x.10 TCP 14488 > 2226 [ACK] Seq=43 Ack=38 Win=65498 [TCP CHECKSUM INCORRECT] Len=1260 286 24.466536 x.x.x.99 x.x.x.10 TCP 14488 > 2226 [PSH, ACK] Seq=1303 Ack=38 Win=65498 [TCP CHECKSUM INCORRECT] Len=437 289 24.478168 x.x.x.10 x.x.x.99 TCP 2226 > 14488 [ACK] Seq=38 Ack=1740 Win=64240 Len=0 290 24.480078 x.x.x.10 x.x.x.99 TCP 2226 > 14488 [PSH, ACK] Seq=38 Ack=1740 Win=64240 Len=385 293 24.493629 x.x.x.99 x.x.x.10 TCP 14488 > 2226 [PSH, ACK] Seq=1740 Ack=423 Win=65113 [TCP CHECKSUM INCORRECT] Len=60 294 24.504637 x.x.x.10 x.x.x.99 TCP 2226 > 14488 [PSH, ACK] Seq=423 Ack=1800 Win=64180 Len=17 295 24.533197 x.x.x.99 x.x.x.10 TCP 14488 > 2226 [PSH, ACK] Seq=1800 Ack=440 Win=65096 [TCP CHECKSUM INCORRECT] Len=44 296 24.544098 x.x.x.10 x.x.x.99 TCP 2226 > 14488 [PSH, ACK] Seq=440 Ack=1844 Win=64136 Len=17 297 24.544524 x.x.x.99 x.x.x.10 TCP 14488 > 2226 [PSH, ACK] Seq=1844 Ack=457 Win=65079 [TCP CHECKSUM INCORRECT] Len=58 298 24.558033 x.x.x.10 x.x.x.99 TCP 2226 > 14488 [PSH, ACK] Seq=457 Ack=1902 Win=64078 Len=31 299 24.558493 x.x.x.99 x.x.x.10 TCP 14488 > 2226 [PSH, ACK] Seq=1902 Ack=488 Win=65048 [TCP CHECKSUM INCORRECT] Len=92 300 24.569984 x.x.x.10 x.x.x.99 TCP 2226 > 14488 [PSH, ACK] Seq=488 Ack=1994 Win=63986 Len=70 301 24.577395 x.x.x.99 x.x.x.10 TCP 14488 > 2226 [PSH, ACK] Seq=1994 Ack=558 Win=64978 [TCP CHECKSUM INCORRECT] Len=448 303 24.589834 x.x.x.10 x.x.x.99 TCP 2226 > 14488 [PSH, ACK] Seq=558 Ack=2442 Win=63538 Len=64 304 24.590122 x.x.x.99 x.x.x.10 TCP 14488 > 2226 [FIN, ACK] Seq=2442 Ack=622 Win=64914 [TCP CHECKSUM INCORRECT] Len=0 305 24.601094 x.x.x.10 x.x.x.99 TCP 2226 > 14488 [ACK] Seq=622 Ack=2443 Win=63538 Len=0 306 24.601659 x.x.x.10 x.x.x.99 TCP 2226 > 14488 [FIN, ACK] Seq=622 Ack=2443 Win=63538 Len=0 307 24.601686 x.x.x.99 x.x.x.10 TCP 14488 > 2226 [ACK] Seq=2443 Ack=623 Win=64914 [TCP CHECKSUM INCORRECT] Len=0 321 25.839371 x.x.x.99 x.x.x.10 TCP 14492 > 2226 [SYN] Seq=0 Len=0 MSS=1260 322 25.850291 x.x.x.10 x.x.x.99 TCP 2226 > 14492 [SYN, ACK] Seq=0 Ack=1 Win=64240 Len=0 MSS=1460 323 25.850321 x.x.x.99 x.x.x.10 TCP 14492 > 2226 [ACK] Seq=1 Ack=1 Win=65535 [TCP CHECKSUM INCORRECT] Len=0 324 25.850660 x.x.x.99 x.x.x.10 TCP 14492 > 2226 [PSH, ACK] Seq=1 Ack=1 Win=65535 [TCP CHECKSUM INCORRECT] Len=42 325 25.861573 x.x.x.10 x.x.x.99 TCP 2226 > 14492 [PSH, ACK] Seq=1 Ack=43 Win=64198 Len=37 326 25.863103 x.x.x.99 x.x.x.10 TCP 14492 > 2226 [ACK] Seq=43 Ack=38 Win=65498 [TCP CHECKSUM INCORRECT] Len=1260 327 25.863130 x.x.x.99 x.x.x.10 TCP 14492 > 2226 [PSH, ACK] Seq=1303 Ack=38 Win=65498 [TCP CHECKSUM INCORRECT] Len=463 328 25.874417 x.x.x.10 x.x.x.99 TCP 2226 > 14492 [ACK] Seq=38 Ack=1766 Win=64240 Len=0 329 25.876315 x.x.x.10 x.x.x.99 TCP 2226 > 14492 [PSH, ACK] Seq=38 Ack=1766 Win=64240 Len=385 330 25.876905 x.x.x.99 x.x.x.10 TCP 14492 > 2226 [PSH, ACK] Seq=1766 Ack=423 Win=65113 [TCP CHECKSUM INCORRECT] Len=60 331 25.887773 x.x.x.10 x.x.x.99 TCP 2226 > 14492 [PSH, ACK] Seq=423 Ack=1826 Win=64180 Len=17 332 25.888299 x.x.x.99 
x.x.x.10 TCP 14492 > 2226 [PSH, ACK] Seq=1826 Ack=440 Win=65096 [TCP CHECKSUM INCORRECT] Len=44 333 25.899169 x.x.x.10 x.x.x.99 TCP 2226 > 14492 [PSH, ACK] Seq=440 Ack=1870 Win=64136 Len=17 334 25.899574 x.x.x.99 x.x.x.10 TCP 14492 > 2226 [PSH, ACK] Seq=1870 Ack=457 Win=65079 [TCP CHECKSUM INCORRECT] Len=58 335 25.910618 x.x.x.10 x.x.x.99 TCP 2226 > 14492 [PSH, ACK] Seq=457 Ack=1928 Win=64078 Len=31 336 25.911051 x.x.x.99 x.x.x.10 TCP 14492 > 2226 [PSH, ACK] Seq=1928 Ack=488 Win=65048 [TCP CHECKSUM INCORRECT] Len=92 337 25.922068 x.x.x.10 x.x.x.99 TCP 2226 > 14492 [PSH, ACK] Seq=488 Ack=2020 Win=63986 Len=70 338 25.922500 x.x.x.99 x.x.x.10 TCP 14492 > 2226 [PSH, ACK] Seq=2020 Ack=558 Win=64978 [TCP CHECKSUM INCORRECT] Len=34 339 25.933621 x.x.x.10 x.x.x.99 TCP 2226 > 14492 [PSH, ACK] Seq=558 Ack=2054 Win=63952 Len=29 340 25.941165 x.x.x.99 x.x.x.10 TCP 14492 > 2226 [PSH, ACK] Seq=2054 Ack=587 Win=64949 [TCP CHECKSUM INCORRECT] Len=54 341 25.952164 x.x.x.10 x.x.x.99 TCP 2226 > 14492 [PSH, ACK] Seq=587 Ack=2108 Win=63898 Len=17 342 25.952993 x.x.x.99 x.x.x.10 TCP 14492 > 2226 [PSH, ACK] Seq=2108 Ack=604 Win=64932 [TCP CHECKSUM INCORRECT] Len=72 343 25.963889 x.x.x.10 x.x.x.99 TCP 2226 > 14492 [PSH, ACK] Seq=604 Ack=2180 Win=63826 Len=17 344 25.964366 x.x.x.99 x.x.x.10 TCP 14492 > 2226 [PSH, ACK] Seq=2180 Ack=621 Win=64915 [TCP CHECKSUM INCORRECT] Len=52 345 25.975253 x.x.x.10 x.x.x.99 TCP 2226 > 14492 [PSH, ACK] Seq=621 Ack=2232 Win=63774 Len=17 346 25.975590 x.x.x.99 x.x.x.10 TCP 14492 > 2226 [PSH, ACK] Seq=2232 Ack=638 Win=64898 [TCP CHECKSUM INCORRECT] Len=32 347 25.986588 x.x.x.10 x.x.x.99 TCP 2226 > 14492 [PSH, ACK] Seq=638 Ack=2264 Win=63742 Len=167 348 25.987262 x.x.x.99 x.x.x.10 TCP 14492 > 2226 [PSH, ACK] Seq=2264 Ack=805 Win=64731 [TCP CHECKSUM INCORRECT] Len=512 349 25.998464 x.x.x.10 x.x.x.99 TCP 2226 > 14492 [PSH, ACK] Seq=805 Ack=2776 Win=63230 Len=89 350 25.998861 x.x.x.99 x.x.x.10 TCP 14492 > 2226 [PSH, ACK] Seq=2776 Ack=894 Win=64642 [TCP CHECKSUM INCORRECT] Len=46 351 26.009849 x.x.x.10 x.x.x.99 TCP 2226 > 14492 [PSH, ACK] Seq=894 Ack=2822 Win=63184 Len=17 352 26.010175 x.x.x.99 x.x.x.10 TCP 14492 > 2226 [PSH, ACK] Seq=2822 Ack=911 Win=64625 [TCP CHECKSUM INCORRECT] Len=80 353 26.021220 x.x.x.10 x.x.x.99 TCP 2226 > 14492 [PSH, ACK] Seq=911 Ack=2902 Win=63104 Len=33 354 26.022613 x.x.x.99 x.x.x.10 TCP 14492 > 2226 [PSH, ACK] Seq=2902 Ack=944 Win=64592 [TCP CHECKSUM INCORRECT] Len=498 355 26.034018 x.x.x.10 x.x.x.99 TCP 2226 > 14492 [PSH, ACK] Seq=944 Ack=3400 Win=64240 Len=89 356 26.046501 x.x.x.99 x.x.x.10 TCP 14493 > 2226 [SYN] Seq=0 Len=0 MSS=1260 357 26.057323 x.x.x.10 x.x.x.99 TCP 2226 > 14493 [SYN, ACK] Seq=0 Ack=1 Win=64240 Len=0 MSS=1460 358 26.057355 x.x.x.99 x.x.x.10 TCP 14493 > 2226 [ACK] Seq=1 Ack=1 Win=65535 [TCP CHECKSUM INCORRECT] Len=0 359 26.057661 x.x.x.99 x.x.x.10 TCP 14493 > 2226 [PSH, ACK] Seq=1 Ack=1 Win=65535 [TCP CHECKSUM INCORRECT] Len=42 361 26.068606 x.x.x.10 x.x.x.99 TCP 2226 > 14493 [PSH, ACK] Seq=1 Ack=43 Win=64198 Len=37 362 26.070087 x.x.x.99 x.x.x.10 TCP 14493 > 2226 [ACK] Seq=43 Ack=38 Win=65498 [TCP CHECKSUM INCORRECT] Len=1260 363 26.070113 x.x.x.99 x.x.x.10 TCP 14493 > 2226 [PSH, ACK] Seq=1303 Ack=38 Win=65498 [TCP CHECKSUM INCORRECT] Len=485 364 26.081336 x.x.x.10 x.x.x.99 TCP 2226 > 14493 [ACK] Seq=38 Ack=1788 Win=64240 Len=0 365 26.083330 x.x.x.10 x.x.x.99 TCP 2226 > 14493 [PSH, ACK] Seq=38 Ack=1788 Win=64240 Len=385 366 26.083943 x.x.x.99 x.x.x.10 TCP 14493 > 2226 [PSH, ACK] Seq=1788 Ack=423 Win=65113 [TCP CHECKSUM INCORRECT] Len=46 
368 26.094921 x.x.x.10 x.x.x.99 TCP 2226 > 14493 [PSH, ACK] Seq=423 Ack=1834 Win=64194 Len=17 369 26.095317 x.x.x.99 x.x.x.10 TCP 14493 > 2226 [PSH, ACK] Seq=1834 Ack=440 Win=65096 [TCP CHECKSUM INCORRECT] Len=48 370 26.107553 x.x.x.10 x.x.x.99 TCP 2226 > 14493 [PSH, ACK] Seq=440 Ack=1882 Win=64146 Len=877 371 26.241285 x.x.x.99 x.x.x.10 TCP 14492 > 2226 [ACK] Seq=3400 Ack=1033 Win=64503 [TCP CHECKSUM INCORRECT] Len=0 372 26.241307 x.x.x.99 x.x.x.10 TCP 14493 > 2226 [ACK] Seq=1882 Ack=1317 Win=65535 [TCP CHECKSUM INCORRECT] Len=0 653 55.913838 x.x.x.99 x.x.x.10 TCP [TCP Keep-Alive] 14492 > 2226 [ACK] Seq=3399 Ack=1033 Win=64503 Len=1 654 55.924547 x.x.x.10 x.x.x.99 TCP [TCP Keep-Alive ACK] 2226 > 14492 [ACK] Seq=1033 Ack=3400 Win=64240 Len=0 910 85.887176 x.x.x.99 x.x.x.10 TCP [TCP Keep-Alive] 14492 > 2226 [ACK] Seq=3399 Ack=1033 Win=64503 Len=1 911 85.898010 x.x.x.10 x.x.x.99 TCP [TCP Keep-Alive ACK] 2226 > 14492 [ACK] Seq=1033 Ack=3400 Win=64240 Len=0 1155 115.859520 x.x.x.99 x.x.x.10 TCP [TCP Keep-Alive] 14492 2226 [ACK] Seq=3399 Ack=1033 Win=64503 Len=1 1156 115.870285 x.x.x.10 x.x.x.99 TCP [TCP Keep-Alive ACK] 2226 14492 [ACK] Seq=1033 Ack=3400 Win=64240 Len=0 1395 145.934403 x.x.x.99 x.x.x.10 TCP [TCP Keep-Alive] 14492 2226 [ACK] Seq=3399 Ack=1033 Win=64503 Len=1 1396 145.945938 x.x.x.10 x.x.x.99 TCP [TCP Keep-Alive ACK] 2226 14492 [ACK] Seq=1033 Ack=3400 Win=64240 Len=0 1649 175.906767 x.x.x.99 x.x.x.10 TCP [TCP Keep-Alive] 14492 2226 [ACK] Seq=3399 Ack=1033 Win=64503 Len=1 1650 175.917741 x.x.x.10 x.x.x.99 TCP [TCP Keep-Alive ACK] 2226 14492 [ACK] Seq=1033 Ack=3400 Win=64240 Len=0 1887 205.881080 x.x.x.99 x.x.x.10 TCP [TCP Keep-Alive] 14492 2226 [ACK] Seq=3399 Ack=1033 Win=64503 Len=1 1888 205.891818 x.x.x.10 x.x.x.99 TCP [TCP Keep-Alive ACK] 2226 14492 [ACK] Seq=1033 Ack=3400 Win=64240 Len=0 2112 235.854408 x.x.x.99 x.x.x.10 TCP [TCP Keep-Alive] 14492 2226 [ACK] Seq=3399 Ack=1033 Win=64503 Len=1 2113 235.865482 x.x.x.10 x.x.x.99 TCP [TCP Keep-Alive ACK] 2226 14492 [ACK] Seq=1033 Ack=3400 Win=64240 Len=0 2398 265.928342 x.x.x.99 x.x.x.10 TCP [TCP Keep-Alive] 14492 2226 [ACK] Seq=3399 Ack=1033 Win=64503 Len=1 2399 265.939242 x.x.x.10 x.x.x.99 TCP [TCP Keep-Alive ACK] 2226 14492 [ACK] Seq=1033 Ack=3400 Win=64240 Len=0 2671 295.900714 x.x.x.99 x.x.x.10 TCP [TCP Keep-Alive] 14492 2226 [ACK] Seq=3399 Ack=1033 Win=64503 Len=1 2672 295.911590 x.x.x.10 x.x.x.99 TCP [TCP Keep-Alive ACK] 2226 14492 [ACK] Seq=1033 Ack=3400 Win=64240 Len=0 2880 315.705029 x.x.x.10 x.x.x.99 TCP 2226 14493 [RST] Seq=1317 Len=0 2973 325.975607 x.x.x.99 x.x.x.10 TCP [TCP Keep-Alive] 14492 2226 [ACK] Seq=3399 Ack=1033 Win=64503 Len=1 2974 325.986337 x.x.x.10 x.x.x.99 TCP [TCP Keep-Alive ACK] 2226 14492 [ACK] Seq=1033 Ack=3400 Win=64240 Len=0 2975 326.154327 x.x.x.10 x.x.x.99 TCP [TCP Keep-Alive] 2226 14492 [ACK] Seq=1032 Ack=3400 Win=64240 Len=1 2976 326.154350 x.x.x.99 x.x.x.10 TCP [TCP Keep-Alive ACK] 14492 2226 [ACK] Seq=3400 Ack=1033 Win=64503 [TCP CHECKSUM INCORRECT] Len=0
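    Reading the capture, the session on source port 14492 keeps exchanging [TCP Keep-Alive] probes and stays up, while the session on 14493 goes idle and receives the RST at roughly the five-minute mark. One illustrative workaround (not from the original post, and not a root-cause fix for whatever device or setting is resetting idle sessions) is to enable keep-alive on the client socket so an otherwise idle connection keeps generating traffic. A minimal sketch, with the host and port taken from the capture as placeholders:

        import java.net.InetSocketAddress;
        import java.net.Socket;

        // Enable OS-level TCP keep-alive probes on a client connection so it is
        // not silently idle; the keep-alive interval itself is controlled by the
        // operating system, not by this code.
        public class KeepAliveClient {
            public static void main(String[] args) throws Exception {
                try (Socket s = new Socket()) {
                    s.setKeepAlive(true);
                    s.connect(new InetSocketAddress("x.x.x.10", 2226), 5000);
                    System.out.println("keep-alive enabled: " + s.getKeepAlive());
                    // ... application traffic here ...
                }
            }
        }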

    Read the article

  • CentOS 5.5 : Postfix, Dovecot & MySQL

    - by GruffTech
    I'm hoping someone has seen this issue before because I'm at quite a loss. We're building a new outbound smtp server for our clients that features anti-spam scanning and virus scanning for outbound emails, something we had not previously done. So with CentOS 5.5 x64, Installed and patched completely. Postfix & Dovecot both installed via base repo. [grufftech@outgoing postfix]# rpm -qa | grep postfix postfix-2.3.3-2.1.el5_2 [grufftech@outgoing postfix]# rpm -qa | grep dovecot dovecot-1.0.7-7.el5 [grufftech@outgoing ~]# dovecot --build-options Build options: ioloop=poll notify=inotify ipv6 openssl SQL drivers: mysql postgresql Passdb: checkpassword ldap pam passwd passwd-file shadow sql Userdb: checkpassword ldap passwd prefetch passwd-file sql static /etc/dovecot.conf auth default { mechanisms = plain login digest-md5 cram-md5 passdb sql { args = /etc/dovecot-mysql.conf } userdb sql { args = /etc/dovecot-mysql.conf } userdb prefetch { } user = nobody socket listen { master { path = /var/run/dovecot/auth-master mode = 0660 user = postfix group = postfix } client { path = /var/spool/postfix/private/auth mode = 0660 user = postfix group = postfix } } } All the server is doing is auth for postfix, so no reason to have imap / pop / dict. /etc/dovecot-mysql.conf driver = mysql connect = host=10.0.32.159 dbname=mail user=****** password=******** default_pass_scheme = plain user_query = select 1 password_query = select password from users where username = '%n' and domain = '%d' So drop in my configuration, (which is working on another server identical to this one.) [grufftech@outgoing ~]# /etc/init.d/dovecot start Starting Dovecot Imap: [ OK ] Sweet. Booted up nicely, thats good.... (incoming problem in 3....2....1....) May 21 08:09:01 outgoing dovecot: Dovecot v1.0.7 starting up May 21 08:09:02 outgoing dovecot: auth-worker(default): mysql: Connect failed to 10.0.32.159 (mail): Can't connect to MySQL server on '10.0.32.159' (13) - waiting for 1 seconds before retry well what the crap. went and checked permissions on my MySQL database, and its fine. [grufftech@outgoing ~]# mysql vpopmail -h 10.0.32.159 -u ****** -p Enter password: Welcome to the MySQL monitor. Commands end with ; or \g. Your MySQL connection id is 127828558 Server version: 4.1.22 Type 'help;' or '\h' for help. Type '\c' to clear the buffer. mysql>\q So! My server can talk to my database server. but dovecot, for whatever reason, isn't able to. I've fiddled with it for the last six hours, grabbed slightly-older copies of the RPM (ones that matched our production server exactly) to test those, copied configs, searched google, searched server fault, chatted in IRC, banged my head against the table, I've done it all. Surely I'm doing something wrong or forgetting something, can anyone tell me what the elephant in the room is? This stuff is supposed to work.

    Read the article

  • I am getting this error on each machine after installing ruby and rails, I created one web site and

    - by Santodsh
    D:\PROJECTS\RubyOnRail\webapp\Welcome>ruby script\server
    => Booting WEBrick
    => Rails 2.3.4 application starting on http://0.0.0.0:3000
    => Call with -d to detach
    => Ctrl-C to shutdown server
    [2010-01-31 21:19:34] INFO WEBrick 1.3.1
    [2010-01-31 21:19:34] INFO ruby 1.8.6 (2007-09-24) [i386-mswin32]
    [2010-01-31 21:19:34] INFO WEBrick::HTTPServer#start: pid=6576 port=3000
    /!\ FAILSAFE /!\ Sun Jan 31 21:19:38 +0530 2010
    Status: 500 Internal Server Error
    uninitialized constant Encoding
    c:/ruby/lib/ruby/gems/1.8/gems/activesupport-2.3.4/lib/active_support/dependencies.rb:443:in `load_missing_constant'
    c:/ruby/lib/ruby/gems/1.8/gems/activesupport-2.3.4/lib/active_support/dependencies.rb:80:in `const_missing'
    c:/ruby/lib/ruby/gems/1.8/gems/activesupport-2.3.4/lib/active_support/dependencies.rb:92:in `const_missing'
    c:/ruby/lib/ruby/gems/1.8/gems/sqlite3-0.0.6/lib/sqlite3/encoding.rb:9:in `find'
    c:/ruby/lib/ruby/gems/1.8/gems/sqlite3-0.0.6/lib/sqlite3/database.rb:69:in `initialize'
    c:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/connection_adapters/sqlite3_adapter.rb:13:in `new'
    c:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/connection_adapters/sqlite3_adapter.rb:13:in `sqlite3_connection'
    c:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/connection_adapters/abstract/connection_pool.rb:223:in `send'
    c:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/connection_adapters/abstract/connection_pool.rb:223:in `new_connection'
    c:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/connection_adapters/abstract/connection_pool.rb:245:in `checkout_new_connection'
    c:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/connection_adapters/abstract/connection_pool.rb:188:in `checkout'
    c:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/connection_adapters/abstract/connection_pool.rb:184:in `loop'
    c:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/connection_adapters/abstract/connection_pool.rb:184:in `checkout'
    c:/ruby/lib/ruby/1.8/monitor.rb:242:in `synchronize'
    c:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/connection_adapters/abstract/connection_pool.rb:183:in `checkout'
    c:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/connection_adapters/abstract/connection_pool.rb:98:in `connection'
    c:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/connection_adapters/abstract/connection_pool.rb:326:in `retrieve_connection'
    c:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/connection_adapters/abstract/connection_specification.rb:123:in `retrieve_connection'
    c:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/connection_adapters/abstract/connection_specification.rb:115:in `connection'
    c:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/query_cache.rb:9:in `cache'
    c:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/query_cache.rb:28:in `call'
    c:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/connection_adapters/abstract/connection_pool.rb:361:in `call'
    c:/ruby/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/head.rb:9:in `call'
    c:/ruby/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/methodoverride.rb:24:in `call'
    c:/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.4/lib/action_controller/params_parser.rb:15:in `call'
    c:/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.4/lib/action_controller/session/cookie_store.rb:93:in `call'
    c:/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.4/lib/action_controller/failsafe.rb:26:in `call'
    c:/ruby/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/lock.rb:11:in `call'
    c:/ruby/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/lock.rb:11:in `synchronize'
    c:/ruby/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/lock.rb:11:in `call'
    c:/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.4/lib/action_controller/dispatcher.rb:114:in `call'
    c:/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.4/lib/action_controller/reloader.rb:34:in `run'
    c:/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.4/lib/action_controller/dispatcher.rb:108:in `call'
    c:/ruby/lib/ruby/gems/1.8/gems/rails-2.3.4/lib/rails/rack/static.rb:31:in `call'
    c:/ruby/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/urlmap.rb:46:in `call'
    c:/ruby/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/urlmap.rb:40:in `each'
    c:/ruby/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/urlmap.rb:40:in `call'
    c:/ruby/lib/ruby/gems/1.8/gems/rails-2.3.4/lib/rails/rack/log_tailer.rb:17:in `call'
    c:/ruby/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/content_length.rb:13:in `call'
    c:/ruby/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/handler/webrick.rb:50:in `service'
    c:/ruby/lib/ruby/1.8/webrick/httpserver.rb:104:in `service'
    c:/ruby/lib/ruby/1.8/webrick/httpserver.rb:65:in `run'
    c:/ruby/lib/ruby/1.8/webrick/server.rb:173:in `start_thread'
    c:/ruby/lib/ruby/1.8/webrick/server.rb:162:in `start'
    c:/ruby/lib/ruby/1.8/webrick/server.rb:162:in `start_thread'
    c:/ruby/lib/ruby/1.8/webrick/server.rb:95:in `start'
    c:/ruby/lib/ruby/1.8/webrick/server.rb:92:in `each'
    c:/ruby/lib/ruby/1.8/webrick/server.rb:92:in `start'
    c:/ruby/lib/ruby/1.8/webrick/server.rb:23:in `start'
    c:/ruby/lib/ruby/1.8/webrick/server.rb:82:in `start'
    c:/ruby/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/handler/webrick.rb:14:in `run'
    c:/ruby/lib/ruby/gems/1.8/gems/rails-2.3.4/lib/commands/server.rb:111
    c:/ruby/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require'
    c:/ruby/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `require'
    script/server:3

    A second FAILSAFE entry with an identical backtrace is logged at Sun Jan 31 21:19:39 +0530 2010 for the following request.
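
    The backtrace points at the sqlite3-0.0.6 gem (its encoding.rb references the Encoding class), which targets Ruby 1.9, while the session above runs Ruby 1.8.6, hence "uninitialized constant Encoding". A commonly suggested workaround for Rails 2.3 on Ruby 1.8 was to swap that gem for the older sqlite3-ruby driver; a sketch, with the exact version pin being an assumption:

        gem uninstall sqlite3
        gem install sqlite3-ruby --version 1.2.5
        ruby script\server

    The point is simply to avoid the 1.9-only sqlite3 0.0.x line while staying on the Ruby 1.8 series.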

    Read the article

  • Visual Studio 2010 crashes nearly every time when closing

    - by Andrei Drynov
    Mostly it happens if we open a team project from TFS 2008 or TFS 2010, but crashes can happen at any time. When VS is closing down, it crashes nearly every time. We tried the trial RTM and our MSDN download - same story. Tried on three different PCs - same issue. Tried on 32- and 64-bit Windows 7 and on Windows Server 2008 R2 - crashes. Is it just our luck, or does it happen to you too?

    Read the article

  • System error 1219 has occurred

    - by khebbie
    I am trying to connect to a remote server and deploy a service there through a deploy script. I start by running "net use" and sending the credentials for the server, but here I get system error 1219: "Multiple connections to a server or shared resource by the same user, using more than one user name, are not allowed. Disconnect all previous connections to the server or shared resource and try again." I am not aware of any connections to the server other than this one. I have tried "net use /delete" but was told that no connections were open to the server. What gives?
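
    Error 1219 means Windows already holds a session to that server under a different set of credentials, and it counts sessions per server name, not per share. A sketch of the usual clean-up before mapping with the deploy account (server, share and account names here are placeholders):

        net use
        rem remove every existing connection, including implicit ones left by Explorer or earlier scripts
        net use * /delete
        rem reconnect with a single set of credentials
        net use \\deployserver\deployshare /user:DOMAIN\deployuser

    If something on the machine insists on keeping its own session open, connecting to the server by IP address (or an alternate DNS alias) instead of the hostname is a common way to sidestep the per-name limit.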

    Read the article

  • Where should I download corflags.exe from?

    - by mmiika
    I'm running Windows Server 2008 64-bit as a "workstation" and would like to get corflags.exe. Which SDK do I need to download? Edit: I know about the .NET Framework 2.0 Software Development Kit (SDK) (x64) and the Windows SDK for Windows Server 2008 and .NET Framework 3.5, but I was hoping to find something smaller, as these are quite large downloads. Also, the note about the 2.0 SDK seems to suggest downloading the 3.5 one; should I follow that?

    Read the article

  • PostgreSQL service doesn't start on Windows 7

    - by Mehrdad
    (Not sure if this should be on Stack Overflow or Super User... please move if needed.) When I start the PostgreSQL service on Windows 7 x64, it immediately stops. When I check my log folder (C:\PostgreSQL\9.1\data\pg_log\), I see new but empty log files. The Event Viewer doesn't tell me anything other than the fact that the server did not respond. I've even tried turning off my firewall (I don't have any antivirus or anything else), but nothing helps. The setup works fine when I'm on Windows XP (32-bit) (same computer, different partition). I can't figure out what's wrong, even though I've tried tracing the system calls. Is PostgreSQL compatible with Windows 7 x64 at all? Any ideas what the issue might be?

    More info: This problem also happens at the end of installation -- the service starts, then stops immediately, before the installer can do anything. Installation log:

        Starting the database server...
        Executing cscript //NoLogo "C:\Program Files\PostgreSQL\9.1\installer\server\startserver.vbs" postgresql-x64-9.1
        Script exit code: 0
        Script output: Starting postgresql-x64-9.1
        Service postgresql-x64-9.1 started successfully // <==== NOT REALLY!! It stops!
        startserver.vbs ran to completion
        Script stderr:
        Loading additional SQL modules...
        Executing cscript //NoLogo "C:\Program Files\PostgreSQL\9.1\installer\server\loadmodules.vbs" "postgres" "****" "C:\Program Files\PostgreSQL\9.1" "C:\Program Files\PostgreSQL\9.1\data" 5432
        Script exit code: 2
        Script output: Installing the adminpack module in the postgres database...
        Executing 'C:\Users\HOMEUS~1\AppData\Local\Temp\rad6C20D.bat'...
        psql: could not connect to server: Connection refused (0x0000274D/10061)
        Is the server running on host "localhost" (::1) and accepting TCP/IP connections on port 5432?
        could not connect to server: Connection refused (0x0000274D/10061)
        Is the server running on host "localhost" (127.0.0.1) and accepting TCP/IP connections on port 5432?
        Failed to install the 'adminpack' module in the 'postgres' database
        loadmodules.vbs ran to completion
        Script stderr: Program ended with an error exit code
        Error running cscript //NoLogo "C:\Program Files\PostgreSQL\9.1\installer\server\loadmodules.vbs" "postgres" "****" "C:\Program Files\PostgreSQL\9.1" "C:\Program Files\PostgreSQL\9.1\data" 5432 : Program ended with an error exit code
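
    Since the service dies before it writes anything to pg_log, running the server once in a foreground console usually surfaces the real error on stderr. A rough sketch (the paths are assumptions; use whatever data directory the installer created, and use a normal, non-elevated prompt, since PostgreSQL of that era typically refuses to run under an account with Administrator rights):

        cd "C:\Program Files\PostgreSQL\9.1\bin"
        postgres.exe -D "C:\PostgreSQL\9.1\data"
        rem or, equivalently, through the control utility
        pg_ctl.exe start -D "C:\PostgreSQL\9.1\data"

    Whatever it prints (a permissions problem on the data directory, a shared-memory error, a port conflict) is normally far more specific than the empty log files the service leaves behind.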

    Read the article

  • dhcp-snooping option 82 drops valid dhcp requests on 2610 series Procurve switches

    - by kce
    We are slowly starting to implement dhcp-snooping on our HP ProCurve 2610 series switches, all running the R.11.72 firmware. I'm seeing some strange behavior where dhcp-request or dhcp-renew packets are dropped when originating from "downstream" switches due to "untrusted relay information from client". The full error:

        Received untrusted relay information from client <mac-address> on port <port-number>

    In more detail, we have a 48-port HP 2610 (Switch A) and a 24-port HP 2610 (Switch B). Switch B is "downstream" of Switch A by virtue of a DSL connection to one of Switch A's ports. The DHCP server is connected to Switch A. The relevant bits are as follows:

    Switch A

        dhcp-snooping
        dhcp-snooping authorized-server 192.168.0.254
        dhcp-snooping vlan 1 168
        interface 25
           name "Server"
           dhcp-snooping trust
           exit

    Switch B

        dhcp-snooping
        dhcp-snooping authorized-server 192.168.0.254
        dhcp-snooping vlan 1
        interface Trk1
           dhcp-snooping trust
           exit

    The switches are set to trust BOTH the port the authorized DHCP server is attached to and its IP address. This is all well and good for the clients attached to Switch A, but the clients attached to Switch B get denied due to the "untrusted relay information" error. This is odd for a few reasons: 1) dhcp-relay is not configured on either switch, and 2) the Layer-3 network here is flat, same subnet, so DHCP packets should not have a modified option 82 attribute. dhcp-relay does appear to be enabled by default, however:

        SWITCH A# show dhcp-relay
        DHCP Relay Agent : Enabled
        Option 82 : Disabled
        Response validation : Disabled
        Option 82 handle policy : append
        Remote ID : mac
        Client Requests         Server Responses
        Valid      Dropped      Valid      Dropped
        ---------- ----------   ---------- ----------
        0          0            0          0

        SWITCH B# show dhcp-relay
        DHCP Relay Agent : Enabled
        Option 82 : Disabled
        Response validation : Disabled
        Option 82 handle policy : append
        Remote ID : mac
        Client Requests         Server Responses
        Valid      Dropped      Valid      Dropped
        ---------- ----------   ---------- ----------
        40156      0            0          0

    And interestingly enough, the dhcp-relay agent seems very busy on Switch B, but why? As far as I can tell there is no reason why dhcp requests need a relay with this topology. Furthermore, I can't tell why the upstream switch is dropping legitimate dhcp requests for untrusted relay information when the relay agent in question (on Switch B) isn't modifying the option 82 attributes anyway. Adding no dhcp-snooping option 82 on Switch A allows the dhcp traffic from Switch B to be approved by Switch A, by virtue of just turning off that feature. What are the repercussions of not validating option-82-modified dhcp traffic? If I disable option 82 on all my "upstream" switches, will they pass dhcp traffic from any downstream switch regardless of that traffic's legitimacy? This behavior is client operating system agnostic; I see it with both Windows and Linux clients. Our DHCP servers are either Windows Server 2003 or Windows Server 2008 R2 machines, and I see this behavior regardless of the DHCP servers' operating system. Can anyone shed some light on what's happening here and give me some recommendations on how I should proceed with configuring the option 82 setting? I feel like I just haven't completely grokked dhcp-relaying and option 82 attributes.
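
    An alternative to disabling option 82 validation globally is to extend trust to the inter-switch link, so that option-82-tagged requests coming up from Switch B are accepted the same way the server port already is. A sketch using only the commands already shown above (the port number for the DSL-facing interface on Switch A is an assumption):

        SWITCH A(config)# interface 26
        SWITCH A(eth-26)# dhcp-snooping trust
        SWITCH A(eth-26)# exit

    That keeps option 82 checking in force on ordinary client ports while treating the downstream switch as trusted infrastructure; whether that is acceptable depends on how much you trust everything hanging off Switch B.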

    Read the article

  • Transfer DNS zones from master to slave (MS DNS to BIND9)

    - by Bryan
    Hello, I have a problem with DNS servers. My master DNS server runs on Microsoft DNS Server, and now I want to start a slave DNS server on Linux BIND9. The problem is that the master MS DNS server can't validate the slave DNS server (BIND9) and can't resolve its FQDN. Maybe I missed something... the firewall, DNS configuration and network all look OK. And the second question is: how can I make a full transfer of DNS zones to the slave DNS server, from MS DNS to BIND9? Thanks in advance. Regards, Bryan
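
    For the zone transfer half of the question, the usual pattern is to allow transfers to the slave's address on the Windows side (zone properties, Zone Transfers tab) and then declare the zone as a slave in BIND9. A sketch with placeholder names and addresses (example.com for the zone, 192.0.2.10 for the MS DNS master; the config file path varies by distribution):

        cat >> /etc/bind/named.conf.local <<'EOF'
        zone "example.com" {
            type slave;
            file "/var/cache/bind/db.example.com";
            masters { 192.0.2.10; };
        };
        EOF
        rndc reload
        # verify the master actually permits the transfer before blaming BIND
        dig @192.0.2.10 example.com AXFR

    If the AXFR query is refused, the problem is on the Microsoft side (transfer restrictions or firewalling of TCP port 53), not in the BIND configuration.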

    Read the article

  • Microsoft CRM Dynamics Install

    - by Pino
    Trying to install CRM 4.0 on Windows Small Business Server. We get through the install process (setting up the database, website, AD, etc.), but when we click Install the following error is dumped to the log file. Can anyone point us in the right direction?

        13:49:07| Info| Win32Exception: 1635
        13:49:07| Error| Failed to revoke temporary database access.System.Data.SqlClient.SqlException: Cannot open database "MSCRM_CONFIG" requested by the login. The login failed. Login failed for user 'SCHIP\administrator'.
        at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection)
        at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj)
        at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj)
        at System.Data.SqlClient.SqlInternalConnectionTds.CompleteLogin(Boolean enlistOK)
        at System.Data.SqlClient.SqlInternalConnectionTds.AttemptOneLogin(ServerInfo serverInfo, String newPassword, Boolean ignoreSniOpenTimeout, Int64 timerExpire, SqlConnection owningObject)
        at System.Data.SqlClient.SqlInternalConnectionTds.LoginNoFailover(String host, String newPassword, Boolean redirectedUserInstance, SqlConnection owningObject, SqlConnectionString connectionOptions, Int64 timerStart)
        at System.Data.SqlClient.SqlInternalConnectionTds.OpenLoginEnlist(SqlConnection owningObject, SqlConnectionString connectionOptions, String newPassword, Boolean redirectedUserInstance)
        at System.Data.SqlClient.SqlInternalConnectionTds..ctor(DbConnectionPoolIdentity identity, SqlConnectionString connectionOptions, Object providerInfo, String newPassword, SqlConnection owningObject, Boolean redirectedUserInstance)
        at System.Data.SqlClient.SqlConnectionFactory.CreateConnection(DbConnectionOptions options, Object poolGroupProviderInfo, DbConnectionPool pool, DbConnection owningConnection)
        at System.Data.ProviderBase.DbConnectionFactory.CreateNonPooledConnection(DbConnection owningConnection, DbConnectionPoolGroup poolGroup)
        at System.Data.ProviderBase.DbConnectionFactory.GetConnection(DbConnection owningConnection)
        at System.Data.ProviderBase.DbConnectionClosed.OpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory)
        at System.Data.SqlClient.SqlConnection.Open()
        at Microsoft.Crm.CrmDbConnection.Open()
        at Microsoft.Crm.Setup.Database.SharedDatabaseUtility.DeleteDBUser(String sqlServerName, String databaseName, String user, CrmDBConnectionType connectionType)
        at Microsoft.Crm.Setup.Server.RevokeConfigDBDatabaseAccessAction.Do(IDictionary parameters)
        13:49:07| Error| HResult=80004005 Install exception.System.Exception: Action Microsoft.Crm.Setup.Server.MsiInstallServerAction failed. --- System.ComponentModel.Win32Exception: This update package could not be opened.
        Verify that the update package exists and that you can access it, or contact the application vendor to verify that this is a valid Windows Installer update package
        at Microsoft.Crm.Setup.Common.Utility.MsiUtility.InstallProduct(String packagePath, String commandLine)
        at Microsoft.Crm.Setup.Server.MsiInstallServerAction.Do(IDictionary parameters)
        at Microsoft.Crm.Setup.Common.Action.ExecuteAction(Action action, IDictionary parameters, Boolean undo)
        --- End of inner exception stack trace ---
        at Microsoft.Crm.Setup.Common.Action.ExecuteAction(Action action, IDictionary parameters, Boolean undo)
        at Microsoft.Crm.Setup.Common.Installer.Install(IDictionary stateSaver)
        at Microsoft.Crm.Setup.Common.ComposedInstaller.InternalInstall(IDictionary stateSaver)
        at Microsoft.Crm.Setup.Common.ComposedInstaller.Install(IDictionary stateSaver)
        at Microsoft.Crm.Setup.Server.ServerSetup.Install(IDictionary data)
        at Microsoft.Crm.Setup.Server.ServerSetup.Run()
        13:49:07| Info| Microsoft Dynamics CRM Server install Failed.
        13:49:07| Info| Microsoft Dynamics CRM Server Setup did not complete successfully. Action Microsoft.Crm.Setup.Server.MsiInstallServerAction failed. This update package could not be opened. Verify that the update package exists and that you can access it, or contact the application vendor to verify that this is a valid Windows Installer update package

    Read the article

  • Changes to .htaccess ignored

    - by Bojan Kogoj
    I have a website on the server that contains a .htaccess file. For testing purposes I wanted to replace it with another .htaccess, but the changes are being ignored. Even though I have replaced it with the new .htaccess, or even deleted it from the server root, the website keeps working as if I haven't made any changes. Basically the new .htaccess is being ignored; it's as if the server cached the old one and doesn't care about the new one. Because of that, the testing site won't work, since the old rewrite rules are still in place. All I know about the server is that it's Linux. Is there any way to make the server see the changes? I cannot restart the server.
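
    The quickest way to tell whether Apache is reading the file at all is to break it on purpose: a .htaccess that is actually parsed will turn the site into a 500 error. A sketch, assuming shell access and that the file sits in the served document root:

        echo "DELIBERATELY_INVALID_DIRECTIVE" >> .htaccess
        # reload the page: a 500 error means the file is read; an unchanged page means it is ignored
        grep -Rni "AllowOverride" /etc/apache2 /etc/httpd 2>/dev/null

    If the file is being ignored, the likely causes are AllowOverride None in the vhost, the file sitting in a different directory than the one actually served, or a caching proxy in front of the web server; none of those can be fixed from the .htaccess itself.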

    Read the article

  • WSUS appears to be functioning, however errors are reported in Application Log (DSS Authentication W

    - by Richard Slater
    I re-installed WSUS a few months ago on a new server as part of a server hardware refresh. It is functioning normally, downloading, authorizing and supplying patches to workstations.

    System Specification:
    HP DL360 G5, Quad 2.5 Xeon, 6GB RAM, not virtualized
    WSUS 3.2.7600.3226
    SQL 2005 Express
    Windows 2003 R2 SP2
    WSS
    SSL Enabled

    Every six hours five events are logged to the Application Event Log:

        Source                          | Category     | ID    | Description
        --------------------------------------------------------------------------------------------------------
        Windows Server Update Services  | Web Services | 12052 | The DSS Authentication Web Service is not working.
        Windows Server Update Services  | Web Services | 12042 | The SimpleAuth Web Service is not working
        Windows Server Update Services  | Web Services | 12022 | The Client Web Service is not working
        Windows Server Update Services  | Web Services | 12032 | The Server Synchronization Web Service is not working
        Windows Server Update Services  | Web Services | 12012 | The API Remoting Web Service is not working

    In addition, the following .NET stack trace is logged in C:\Program Files\Update Services\LogFiles\SoftwareDistribution.log; each stack trace is identical except for the names of the services:

        2009-11-27 11:56:52.757 UTC Error WsusService.10 HmtWebServices.CheckApiRemotingWebService ApiRemoting WebService WebException:System.Net.WebException: The request failed with HTTP status 403: Forbidden.
        at System.Web.Services.Protocols.SoapHttpClientProtocol.ReadResponse(SoapClientMessage message, WebResponse response, Stream responseStream, Boolean asyncCall)
        at System.Web.Services.Protocols.SoapHttpClientProtocol.Invoke(String methodName, Object[] parameters)
        at Microsoft.UpdateServices.Internal.ApiRemoting.Ping(Int32 pingLevel)
        at Microsoft.UpdateServices.Internal.HealthMonitoring.HmtWebServices.CheckApiRemotingWebService(EventLoggingType type, HealthEventLogger logger)
        at Microsoft.UpdateServices.Internal.HealthMonitoring.HmtWebServices.CheckApiRemotingWebService(EventLoggingType type, HealthEventLogger logger)
        at Microsoft.UpdateServices.Internal.HealthMonitoring.HealthMonitoringTasks.ExecuteSubtask(HealthMonitoringSubtask subtask, EventLoggingType type, HealthEventLogger logger)
        at Microsoft.UpdateServices.Internal.HealthMonitoring.HmtWebServices.Execute(EventLoggingType type)
        at Microsoft.UpdateServices.Internal.HealthMonitoring.HealthMonitoringTasks.Execute(EventLoggingType type)
        at Microsoft.UpdateServices.Internal.HealthMonitoring.HealthMonitoringThreadManager.Execute(Boolean waitIfNecessary, EventLoggingType loggingType)
        at Microsoft.UpdateServices.Internal.HealthMonitoring.RemotingChannel.PrivateLogEvents()
        at System.Runtime.Remoting.Messaging.StackBuilderSink._PrivateProcessMessage(IntPtr md, Object[] args, Object server, Int32 methodPtr, Boolean fExecuteInContext, Object[]& outArgs)
        at System.Runtime.Remoting.Messaging.StackBuilderSink.SyncProcessMessage(IMessage msg, Int32 methodPtr, Boolean fExecuteInContext)
        at System.Runtime.Remoting.Messaging.ServerObjectTerminatorSink.SyncProcessMessage(IMessage reqMsg)
        at System.Runtime.Remoting.Lifetime.LeaseSink.SyncProcessMessage(IMessage msg)
        at System.Runtime.Remoting.Messaging.ServerContextTerminatorSink.SyncProcessMessage(IMessage reqMsg)
        at System.Runtime.Remoting.Channels.CrossContextChannel.SyncProcessMessageCallback(Object[] args)
        at System.Runtime.Remoting.Channels.ChannelServices.DispatchMessage(IServerChannelSinkStack sinkStack, IMessage msg, IMessage& replyMsg)
        at System.Runtime.Remoting.Channels.SoapServerFormatterSink.ProcessMessage(IServerChannelSinkStack sinkStack, IMessage requestMsg, ITransportHeaders requestHeaders, Stream requestStream, IMessage& responseMsg, ITransportHeaders& responseHeaders, Stream& responseStream)
        at System.Runtime.Remoting.Channels.BinaryServerFormatterSink.ProcessMessage(IServerChannelSinkStack sinkStack, IMessage requestMsg, ITransportHeaders requestHeaders, Stream requestStream, IMessage& responseMsg, ITransportHeaders& responseHeaders, Stream& responseStream)
        at System.Runtime.Remoting.Channels.Ipc.IpcServerTransportSink.ServiceRequest(Object state)
        at System.Runtime.Remoting.Channels.SocketHandler.ProcessRequestNow()
        at System.Runtime.Remoting.Channels.SocketHandler.BeginReadMessageCallback(IAsyncResult ar)
        at System.Runtime.Remoting.Channels.Ipc.IpcPort.AsyncFSCallback(UInt32 errorCode, UInt32 numBytes, NativeOverlapped* pOverlapped)
        at System.Threading._IOCompletionCallback.PerformIOCompletionCallback(UInt32 errorCode, UInt32 numBytes, NativeOverlapped* pOVERLAP)

    So far I have tried the following: ensuring the settings were accurate as per TechNet; checking that there was a suitable binding to 127.0.0.1 in IIS; and going through and checking the settings in IIS as per TechNet. I have discovered that you can run the command wsusutil checkhealth to force the health check to run; wsusutil can be found in C:\Program Files\Update Services\Tools. When this executes it will tell you to check the Application log.
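
    Because this installation has SSL enabled, one hedged reading of the HTTP 403 responses is that "Require SSL" ended up set on virtual directories that the internal health check calls over plain HTTP. Re-running the health check and re-applying the SSL configuration with the server's certificate name is a low-risk first step (the FQDN below is a placeholder):

        cd "C:\Program Files\Update Services\Tools"
        wsusutil.exe checkhealth
        wsusutil.exe configuressl wsus.example.com

    After that it is worth confirming in IIS that SSL is required only on the web services WSUS expects to secure (ApiRemoting30, ClientWebService, DssAuthWebService, ServerSyncWebService and SimpleAuthWebService) and not on the remaining virtual directories; that split is the commonly cited guidance for SSL-enabled WSUS, but the exact list should be checked against the WSUS 3.2 documentation.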

    Read the article

< Previous Page | 1441 1442 1443 1444 1445 1446 1447 1448 1449 1450 1451 1452  | Next Page >