Search Results

Search found 5383 results on 216 pages for 'ffmpeg hosting'.


  • ASP.NET WebAPI Security 5: JavaScript Clients

    - by Your DisplayName here!
    All samples I showed in my last post were in C#. Christian contributed another client sample in some strange language that is supposed to work well in browsers ;)

    JavaScript client scenarios

    There are two fundamental scenarios when it comes to JavaScript clients. The most common is probably that the JS code originates from the same web application that also contains the web APIs. Think of a web page that does some AJAX-style callbacks to an API that belongs to that web app – validation, data access etc. come to mind. Single page apps often fall into that category. The good news here is that this scenario just works. The typical course of events is that the user first logs on to the web application – which will result in an authentication cookie of some sort. That cookie will get round-tripped with your AJAX calls and ASP.NET does its magic to establish a client identity context. Since WebAPI inherits the security context from its (web) host, the client identity is also available here.

    The other fundamental scenario is JavaScript code *not* running in the context of the WebAPI hosting application. This is more or less just like a normal desktop client – either running in the browser, or, if you think of Windows 8 Metro style apps, as a "real" desktop app. In that scenario we do exactly the same as the samples did in my last post – obtain a token, then use it to call the service.

    Obtaining a token from IdentityServer's resource owner credential OAuth2 endpoint could look like this:

        thinktectureIdentityModel.BrokeredAuthentication = function (stsEndpointAddress, scope) {
            this.stsEndpointAddress = stsEndpointAddress;
            this.scope = scope;
        };

        thinktectureIdentityModel.BrokeredAuthentication.prototype = function () {
            getIdpToken = function (un, pw, callback) {
                $.ajax({
                    type: 'POST',
                    cache: false,
                    url: this.stsEndpointAddress,
                    data: { grant_type: "password", username: un, password: pw, scope: this.scope },
                    success: function (result) {
                        callback(result.access_token);
                    },
                    error: function (error) {
                        if (error.status == 401) {
                            alert('Unauthorized');
                        }
                        else {
                            alert('Error calling STS: ' + error.responseText);
                        }
                    }
                });
            };

            createAuthenticationHeader = function (token) {
                var tok = 'IdSrv ' + token;
                return tok;
            };

            return {
                getIdpToken: getIdpToken,
                createAuthenticationHeader: createAuthenticationHeader
            };
        } ();

    Calling the service with the requested token could look like this:

        function getIdentityClaimsFromService() {
            authHeader = authN.createAuthenticationHeader(token);
            $.ajax({
                type: 'GET',
                cache: false,
                url: serviceEndpoint,
                beforeSend: function (req) {
                    req.setRequestHeader('Authorization', authHeader);
                },
                success: function (result) {
                    $.each(result.Claims, function (key, val) {
                        $('#claims').append($('<li>' + val.Value + '</li>'));
                    });
                },
                error: function (error) {
                    alert('Error: ' + error.responseText);
                }
            });
        }

    I updated the github repository; you can play around with the code yourself.
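    The post doesn't show the receiving side, but for context the service has to pick the token out of the Authorization header and turn it into a principal. A minimal sketch of what that could look like as a Web API message handler; the "IdSrv" scheme matches the client code above, while ValidateToken is a hypothetical placeholder for whatever token validation library you use:

        using System.Net.Http;
        using System.Security.Claims;
        using System.Threading;
        using System.Threading.Tasks;

        public class IdSrvTokenHandler : DelegatingHandler
        {
            protected override Task<HttpResponseMessage> SendAsync(
                HttpRequestMessage request, CancellationToken cancellationToken)
            {
                var auth = request.Headers.Authorization;
                if (auth != null && auth.Scheme == "IdSrv")
                {
                    // set the principal for the rest of the pipeline
                    Thread.CurrentPrincipal = ValidateToken(auth.Parameter);
                }
                return base.SendAsync(request, cancellationToken);
            }

            private static ClaimsPrincipal ValidateToken(string token)
            {
                // hypothetical helper: plug in your STS/token handling here
                throw new System.NotImplementedException();
            }
        }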

    Read the article

  • Set Context User Principal for Customized Authentication in SignalR

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2014/05/27/set-context-user-principal-for-customized-authentication-in-signalr.aspx

    Currently I'm working on a single page application project which is built on AngularJS and ASP.NET WebAPI. When I needed to implement some features that require real-time communication and push notifications from the server side, I decided to use SignalR. SignalR is a project currently developed by Microsoft for building web-based, real-time communication applications. You can find it here. With a lot of introductions and guides available, it's not a difficult task to use SignalR with ASP.NET WebAPI and AngularJS. I followed this and this, even though they are based on SignalR 1. But when I tried to implement authentication for my SignalR hubs I struggled for two days, and finally arrived at a solution by myself. It might not be the best one, but it solved all my problems.

    Many articles say that you don't need to worry about authentication in SignalR, since it uses the web application's authentication. For example, if your web application uses forms authentication, SignalR will use the user principal that your web application's authentication module resolved, and check whether that principal exists and is authenticated. But in my solution the ASP.NET WebAPI, which hosts SignalR as well, uses OAuth Bearer authentication. So when the SignalR connection was established, the context user principal was empty, and I needed to authenticate and set the principal myself.

    First I created a class derived from "AuthorizeAttribute", which is responsible for authentication both when a SignalR connection is established and when any hub method is invoked.

        public class QueryStringBearerAuthorizeAttribute : AuthorizeAttribute
        {
            public override bool AuthorizeHubConnection(HubDescriptor hubDescriptor, IRequest request)
            {
            }

            public override bool AuthorizeHubMethodInvocation(IHubIncomingInvokerContext hubIncomingInvokerContext, bool appliesToMethod)
            {
            }
        }

    The "AuthorizeHubConnection" method is invoked whenever a SignalR connection is established. Here I retrieve the Bearer token from the query string, try to decrypt it, and recover the logged-in user's claims.

        public override bool AuthorizeHubConnection(HubDescriptor hubDescriptor, IRequest request)
        {
            var dataProtectionProvider = new DpapiDataProtectionProvider();
            var secureDataFormat = new TicketDataFormat(dataProtectionProvider.Create());
            // authenticate by using bearer token in query string
            var token = request.QueryString.Get(WebApiConfig.AuthenticationType);
            var ticket = secureDataFormat.Unprotect(token);
            if (ticket != null && ticket.Identity != null && ticket.Identity.IsAuthenticated)
            {
                // set the authenticated user principal into environment so that it can be used in the future
                request.Environment["server.User"] = new ClaimsPrincipal(ticket.Identity);
                return true;
            }
            else
            {
                return false;
            }
        }

    In the code above I created a "TicketDataFormat" instance, which must be the same as the one used to generate the Bearer token when the user logged in. Then I retrieve the token from the request query string and unprotect it. If I get a valid ticket whose identity is authenticated, the token is valid, and I put the user principal into the request's environment property so that it can be used shortly afterwards.
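    For reference, the issuing side has to use the same data protection setup, otherwise "Unprotect" will return null. A minimal sketch of what generating a matching token could look like, assuming the same DpapiDataProtectionProvider (from Microsoft.Owin.Security); the claims shown are placeholders:

        // issue a token that the SignalR authorizer above can unprotect
        var dataProtectionProvider = new DpapiDataProtectionProvider();
        var secureDataFormat = new TicketDataFormat(dataProtectionProvider.Create());

        var identity = new ClaimsIdentity("Bearer");
        identity.AddClaim(new Claim(ClaimTypes.Name, "shaun"));  // placeholder claim
        identity.AddClaim(new Claim("Domain", "contoso"));       // placeholder claim

        var ticket = new AuthenticationTicket(identity, new AuthenticationProperties());
        var token = secureDataFormat.Protect(ticket);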
    Since my website is built on AngularJS, the SignalR client is pure JavaScript, and the SignalR JavaScript client does not support setting custom HTTP headers, so I have to pass the Bearer token through the request query string. This is not a restriction of SignalR, but of WebSocket: for security reasons, WebSocket doesn't allow the client to set custom HTTP headers from the browser. Next, I need to implement the authentication logic in the "AuthorizeHubMethodInvocation" method, which is invoked whenever a SignalR hub method is called.

        public override bool AuthorizeHubMethodInvocation(IHubIncomingInvokerContext hubIncomingInvokerContext, bool appliesToMethod)
        {
            var connectionId = hubIncomingInvokerContext.Hub.Context.ConnectionId;
            // check the authenticated user principal from environment
            var environment = hubIncomingInvokerContext.Hub.Context.Request.Environment;
            var principal = environment["server.User"] as ClaimsPrincipal;
            if (principal != null && principal.Identity != null && principal.Identity.IsAuthenticated)
            {
                // create a new HubCallerContext instance with the principal generated from token
                // and replace the current context so that in hubs we can retrieve current user identity
                hubIncomingInvokerContext.Hub.Context = new HubCallerContext(new ServerRequest(environment), connectionId);
                return true;
            }
            else
            {
                return false;
            }
        }

    Since I had passed the user principal into the request environment in the previous method, I can simply check whether it exists and is valid. If so, all I need to do is pass the principal into the context so that the SignalR hub can use it. Since the "User" property on "hubIncomingInvokerContext" is read-only, I have to create a new "ServerRequest" instance with the principal assigned and set it as "hubIncomingInvokerContext.Hub.Context". After that, we can retrieve the principal in my hubs through "Context.User", as below.

        public class DefaultHub : Hub
        {
            public object Initialize(string host, string service, JObject payload)
            {
                var connectionId = Context.ConnectionId;
                ... ...
                var domain = string.Empty;
                var identity = Context.User.Identity as ClaimsIdentity;
                if (identity != null)
                {
                    var claim = identity.FindFirst("Domain");
                    if (claim != null)
                    {
                        domain = claim.Value;
                    }
                }
                ... ...
            }
        }

    Finally, I just need to add my "QueryStringBearerAuthorizeAttribute" into the SignalR pipeline.

        app.Map("/signalr", map =>
        {
            // Setup the CORS middleware to run before SignalR.
            // By default this will allow all origins. You can
            // configure the set of origins and/or http verbs by
            // providing a cors options with a different policy.
            map.UseCors(CorsOptions.AllowAll);
            var hubConfiguration = new HubConfiguration
            {
                // You can enable JSONP by uncommenting line below.
                // JSONP requests are insecure but some older browsers (and some
                // versions of IE) require JSONP to work cross domain
                // EnableJSONP = true
                EnableJavaScriptProxies = false
            };
            // Require authentication for all hubs
            var authorizer = new QueryStringBearerAuthorizeAttribute();
            var module = new AuthorizeModule(authorizer, authorizer);
            GlobalHost.HubPipeline.AddModule(module);
            // Run the SignalR pipeline. We're not using MapSignalR
            // since this branch already runs under the "/signalr" path.
            map.RunSignalR(hubConfiguration);
        });

    On the client side, I should pass the Bearer token through the query string before starting the connection, as below.
        self.connection = $.hubConnection(signalrEndpoint);
        self.proxy = self.connection.createHubProxy(hubName);
        self.proxy.on(notifyEventName, function (event, payload) {
            options.handler(event, payload);
        });
        // add the authentication token to query string
        // we cannot use http headers since the web socket protocol doesn't support them
        self.connection.qs = { Bearer: AuthService.getToken() };
        // connect to the hub
        self.connection.start();

    Hope this helps, Shaun. All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Configuring Oracle HTTP Server 12c for WebLogic Server Domain

    - by Emin Askerov
    Oracle HTTP Server (OHS) 12c 12.1.2, which was released in July 2013 as a part of Oracle Web Tier 12c, is the web server component of Oracle Fusion Middleware. In essence it is Apache HTTP Server 2.2.22 (with critical bug fixes from higher versions) which includes modules developed specifically by Oracle. It provides listener functionality for Oracle WebLogic Server and the framework for hosting static pages, dynamic pages, and applications over the Web. OHS can be easily managed by the Weblogic Management Framework, a set of tools which provides administrative capabilities (start, stop, lifecycle operations, etc.) for Oracle Fusion Middleware products. In other words, all the tools familiar to us (Node Manager, WLST, Administration Console, Fusion Middleware Control, etc.) are presented as part of the Weblogic Management Framework and are used for managing Java and System Components for both the Weblogic Server and Standalone Domain types. You can familiarize yourself with these terms using the related documentation:

    1. Introduction to Oracle HTTP Server: http://docs.oracle.com/middleware/1212/webtier/index.html
    2. Weblogic Management Framework: http://docs.oracle.com/middleware/1212/core/ASCON/terminology.htm#ASCON11260

    In this post I would like to cover a rather simple use case: how to configure OHS as a web proxy in a Weblogic Cluster environment. For example, we have an existing Weblogic Domain where some managed servers have been joined to a cluster and host business applications. We need to configure a web proxy component which will act as the entry point and load balancer for user requests to our cluster. Of course, we could install good old Apache HTTP Server and configure the mod_wl plugin. However, this solution is not optimal from a manageability perspective: we would need to install Apache, install an additional plugin, and then configure it by editing a configuration file, which is not really convenient for FMW administrators and often increases the time needed to perform a simple administrative task. Alternatively, we could use OHS as a System Component within the Weblogic Domain and use the full power of the Weblogic Management Framework in order to configure, manage and monitor it! I like this idea! What about you? I hope after reading this post you will agree with me.

    First of all it is necessary to download the OHS binaries. You can use this link for downloading: http://www.oracle.com/technetwork/java/webtier/downloads/index2-303202.html

    As we will use Fusion Middleware Control for managing OHS instances, it is necessary to extend your domain with the Enterprise Manager, Oracle ADF and JRF templates. This is not the topic of this post, but you can get more information from the documentation or one of my previous posts:

    http://docs.oracle.com/middleware/1212/wls/WLDTR/fmw_templates.htm#sthref64
    https://blogs.oracle.com/imc/entry/the_specifics_of_adf_12c

    Note: you should have a properly configured Node Manager utility for managing OHS instances.

    Let's consider the configuration process step by step:

    1. Shut down all Weblogic instances of the existing domain, including the Admin Server;
    2. Install Oracle HTTP Server. You should use your Fusion Middleware Home Path (e.g. /u01/Oracle/FMW12) for the Installation Location and select the Colocated HTTP Server option as the Installation Type. I will not focus on this topic in this post; all information related to OHS installation can be found here: http://docs.oracle.com/middleware/1212/webtier/WTINS/install_gui.htm#i1082009
    3. Next we need to extend our existing domain with the OHS component.
    In order to do this you should do the following:

    a. Run the Fusion Middleware Configuration Wizard (ORACLE_HOME/oracle_common/common/bin/config.sh);
    b. On step 1, select the Update an existing domain option and point to your Fusion Middleware Home Path;
    c. On step 2, check the Oracle HTTP Server and Oracle Enterprise Manager Plugin for WEBTIER templates;
    d. Go through the other steps without any changes and finish the configuration process.

    4. Start the Admin Server and all managed servers related to your cluster;
    5. Log in to Enterprise Manager FMW Control using the http://<hostname>:<port>/em URL;
    6. Now we will create an OHS instance within our Weblogic Domain infrastructure. Navigate to the Weblogic Domain -> Administration -> Create/Delete OHS menu item;
    7. Enter edit mode by clicking the Changes -> Lock&Edit menu item;
    8. Create a new OHS instance by clicking the Create button;
    9. Define the Instance Name (e.g. DevOSH) and Machine parameters;
    10. Now we need to define the listen port. By default OHS will use port 7777 for incoming HTTP requests. We can change it to any free port number we would like to use. In order to do so, right click on our created OHS instance (left hand panel) and navigate to Administration -> Port Configuration;
    11. Click on the record with port number 7777 and then click the Edit button;
    12. Change the port number value (in our case this will be 8080) and then click the OK button;
    13. Now we need to edit the mod_wl_ohs configuration in order to enable OHS to act as a proxy for WebLogic Server instances/cluster;
    14. In order to do so, right click on our created OHS instance (left panel) and navigate to Administration -> mod_wl_ohs Configuration (a sketch of the resulting configuration file follows this list):

    a. In Weblogic Cluster you should enter the cluster address (define <host:port> for all managed servers which participate in the cluster), e.g.: 192.168.56.2:7004,192.168.56.2:7005
    b. Define the Weblogic Port parameter at which the Oracle WebLogic Server host is listening for connection requests from the module (or from other servers);
    c. Check the Dynamic Server List option. This will dynamically update the cluster list for every request;
    d. In the Location table define the list of endpoint locations which you would like to process. In order to do this, click the Add Row button and define the Location, Weblogic Cluster, Path Trim and Path Prefix parameters (if required);
    e. Click the Apply button in order to save changes.

    15. Activate the changes by clicking the Changes -> Activate Changes menu item;
    16. Finally we will start the configured OHS instance. Right click on the OHS instance tree item under the Web Tier folder and select the Control -> Start Up menu item;
    17. Ensure that the OHS instance is up and running, and then test your environment: run an application deployed to your Weblogic Cluster, accessing it via the OHS web proxy.
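    For readers who like to see what these GUI steps produce, the settings from step 14 end up in the instance's mod_wl_ohs.conf file. A hand-written equivalent could look roughly like the sketch below; the host/port values are the sample ones from step 14a, while the /myapp location and PathTrim are purely illustrative:

        <IfModule weblogic_module>
            WebLogicCluster 192.168.56.2:7004,192.168.56.2:7005
            DynamicServerList ON
        </IfModule>

        # proxy requests under /myapp to the Weblogic cluster
        <Location /myapp>
            SetHandler weblogic-handler
            WebLogicCluster 192.168.56.2:7004,192.168.56.2:7005
            PathTrim /myapp
        </Location>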

    Read the article

  • Moving from Tortoise to TFS

    - by MarkPearl
    The Past

    A few years ago my small software company made the jump from storing code on a shared folder to source code control. At the time we had evaluated a few of the options and settled on Tortoise SVN. The main motivation for going the SVN route was that we found a great plugin for Visual Studio that allowed us to avoid the command prompt for uploading changes (like I said, we are windows programmers… command prompt bad!!) and it was free. Up to now we have been pretty happy with SVN, as it removed many of the worries that I had about how safe my code was on a shared folder and also gave us the opportunity to safely have several developers work on the same project at the same time. The only times we have been unhappy have been when we have had SVN hell days – which pretty much occur when you are doing something out of the norm and suddenly SVN just won't resolve conflicts or something along those lines. This happens once every 4 or 5 months and is not necessarily a problem caused directly by SVN – but a problem augmented by SVN. When you have SVN hell days you want to curse SVN!

    With that in mind I have recently been taking another look at our source code control. I have explored using GIT and was very impressed by it, and have also looked at TFS. From a source code control perspective I don't want to get into a heated discussion on which one is better – but I do want to mention that I wear two hats in my organization – software developer & manager – and with the manager hat on I tend to sway toward the TFS route. So when I was given a coupon to test the DiscountASP.Net Team Foundation Server Service for a year, I thought it was the perfect opportunity to try TFS in a distributed environment and also make the first step towards having an integrated development management system. Some of the things that appeal to me about DiscountASP's offering are the following…

    Basic management / planning facilities, like to-do lists inside Visual Studio
    Daily backup of data on the server – we are developers, not IT managers, and so the more of this I can outsource the better
    Distributed solution – all of us work remotely, and so this was a big one as well

    Registering and Setting Up with DiscountASP.NET

    The whole registration process was simple and intuitive. The web interface is not the most visually impressive one, but it is functional, and a few seconds after I clicked the last submit button an email was sitting in my inbox giving me my control panel username and suggesting that I read the "Getting Started" article. The getting started article was easy to read and understand, so no complaints there either. Next, to set my dev environment to work. With a few references to the getting started article I had completed the whole setup process in a matter of minutes. Ten minutes after initiating the whole thing I was logged into VS2010 and creating my first TFS project. With the service that I signed up for, I have access for 5 users – which is sufficient for my internal needs. So from what I can tell, to set the rest of us up on the system I just need to supply them with their user credentials and url.

    My Concerns Resolved

    1) Security

    So, a few concerns I had about the service. First and foremost – is it secure? I would hate for someone to get access to our code, and the whole idea of putting it up on the internet is a concern for me. Turning to the Knowledge Base on the DiscountASP website, this is one of the first questions I can see answered. According to them it is secure.
    I have extracted their comment below regarding this.

    Our TFS hosting service is secure. We only accept HTTPS connections ensuring that any client-server data transmission is encrypted. At the network level, all of our systems are protected by multiple Juniper firewalls, Tipping Point's Intrusion Detection System (see Tipping Point's case study of our use here), and we also employ DDoS mitigation to add extra layers of security. Additionally, physical access to the servers is tightly restricted. Please see the security section of this Knowledge Base article for further details.

    2) Web Portal Access

    The other big concern I have is regarding web portal access. In the ideal world I would like to be able to give my end users access to a web portal for reporting bugs etc. When I initially read through the FAQ of the site it mentioned that there was web portal access – but from what I can see this is just for "users". Since I am limited to 5 users for the account, it would not be practical to set up external users that we could get feedback from on bugs etc. I would be interested to know whether this is possible – and if so, if someone could post it in the comments it would be much appreciated. If this isn't possible, it is a slight let down, as we rely heavily on end user feedback and it would have been ideal to have gotten this within the service.

    Other than those two items, I didn't have any real concerns that were unresolved.

    So where do I go from here?

    Time passed by from the initial writing of this post, and as work whirred in and out of my inbox I still had not had a proper opportunity to give the service a test run. Recently though things have begun to slow down, and then, surprise surprise, I had another SVN hell day. With that experience I had a new found resolve to get our team on TFS, and so today we are going to start to use the service as a team. I am hoping that I do not have TFS hell days – but if I do, I will be sure to write about them. In short, the verdict is still out on whether this service is going to be invaluable to my business or whether it will create more headaches than it is worth, BUT I am hoping it will be an invaluable service. I will only really be able to determine that in a few months… till then!

    Read the article

  • ODEE Green Field (Windows) Part 4 - Documaker

    - by AndyL-Oracle
    Welcome back! We're nearing completion of our installation of Oracle Documaker Enterprise Edition ("ODEE") in a green field. In my previous post, I covered the installation of SOA Suite for WebLogic. Before that, I covered the installation of WebLogic and the Oracle 11g database – all of which constitute the prerequisites for installing ODEE. Naturally, if your environment already has a WebLogic server and Oracle database, then you can skip all those components and go straight for the heart of the installation of ODEE. The ODEE installation is comprised of two procedures: the first is the installation itself, which means running the installer and answering some questions. This will lay down the files necessary to install into the tiers (e.g. database schemas, WebLogic domains, etcetera). The second procedure is to deploy the configuration files into the various components (e.g. deploy the database schemas, WebLogic domains, SOA composites, etcetera). I will segment my posts accordingly! Let's get started, shall we?

    Unpack the installation files into a temporary directory location. This should extract a zip file. Extract that zip file into the temporary directory location. Navigate to and execute the installer in Disk1/setup.exe. You may have to allow the program to run if User Account Control is enabled. Once the dialog below is displayed, click Next.

    Select your ODEE Home - inside this directory is where all the files will be deployed. For ease of support, I recommend using the default; however, you can put this wherever you want. Click Next.

    Select the database type and database connection type – note that the database name should match the value used for the connection type (e.g. if using SID, the name should be IDMAKER; if using ServiceName, the name should be "idmaker.us.oracle.com"). Verify whether or not you want to enable advanced compression. Note: if you are not licensed for the Oracle 11g Advanced Compression option, do not use this option! Terrible, terrible calamities will befall you if you do! Click Next.

    Enter the Documaker Admin user name (the default "dmkr_admin" is recommended for support purposes) and set the password. Update the System name and ID (must be unique) if you want/need to - since this is a green field install you should be able to use the default System ID. The only time you'd change this is if you were, for some reason, installing a new ODEE system into an existing schema that already had a system. Click Next.

    Enter the Assembly Line user name (the default "dmkr_asline" is recommended) and set the password. Update the Assembly Line name and ID (must be unique) if you want/need to - it's quite possible that at some point you will create another assembly line, in which case you have several methods of doing so. One is to re-run the installer, and in this case you would pick a different assembly line ID and name. Click Next. Note: you can set the DB folder if needed (typically you don't – see the ODEE Installation Guide for specifics).

    Select the appropriate Application Server type - in this case, our green field install is going to use WebLogic - set the username to weblogic (this is required) and specify your chosen password. This credential will be used to access the application server console/control panel. Keep in mind that there are specific criteria on password choices that are required by WebLogic, but are not enforced by the installer (e.g. must contain a number, must be of a certain length, etcetera). Choose a strong password.
    Set the connection information for the JMS server. Note that for the 12.3.x version, the installer creates a separate JVM (a WebLogic managed server) that hosts the JMS server, whereas prior editions placed the JMS server on the AdminServer. You may also specify a separate URL to the JMS server in case you intend to move the JMS resources to a separate/different server (e.g. back to the AdminServer). You'll need to provide a login principal and credentials - for simplicity I usually make this the same as the WebLogic domain user; however, this is not a secure practice! Make your JMS principal different from the WebLogic principal and choose a strong password, then click Next.

    Specify the Hot Folder(s) (comma-delimited if more than one) - this is the directory/directories that is/are monitored by ODEE for jobs to process. Click Next.

    If you will be setting up an SMTP server for ODEE to send emails, you may configure the connection details here. The details required are simple: hostname, port, user/password, and the sender's address (e.g. emails will appear to be sent by the address shown here, so if the recipient clicks "reply", this is where it will go). Click Next.

    If you will be using Oracle WebCenter:Content (formerly known as Oracle UCM) you can enable this option and set the endpoints/credentials here. If you aren't sure, select False - you can always go back and enable this later. I'm almost 76% certain there will be a post sometime in the future that details how to configure ODEE + WCC:C! Click Next.

    If you will be using Oracle UMS for sending MMS/text messages, you can enable it and set the endpoints/credentials here. As with UCM, if you're not sure, don't enable it - you can always set it later. Click Next.

    On this screen you can change the endpoints for the Documaker Web Service (DWS), and the endpoints for approval processing in Documaker Interactive. The deployment process for ODEE will create 3 managed WebLogic servers for hosting various Documaker components (JMS, Interactive, DWS, Dashboard, Documaker Administrator, etcetera) and it will set the ports used for each of these services. In this screen you can change these values if you know how you want to deploy these managed servers - but for now we'll just accept the defaults. Click Next.

    Verify the installation details and click Install. You can save the installation into a response file if you need to, which might be useful if you want to rerun this installation in an unattended fashion (see the sketch at the end of this post). Allow the installation to progress... Click Next. You can save the response file if needed (e.g. in case you forgot to save it earlier!). Click Finish.

    That's it, you're done with the initial installation. Have a look around the ODEE_HOME that you just installed (remember we selected c:\oracle\odee_1?) and look at the files that are laid down. Don't change anything just yet! Stay tuned for the next segment where we complete and verify the installation.
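    As an aside, if you did save a response file, a later unattended run could look something like the sketch below. This assumes the installer accepts the standard Oracle Universal Installer switches, and the response file path is purely illustrative:

        rem re-run the ODEE installer silently using a previously saved response file
        Disk1\setup.exe -silent -responseFile C:\temp\odee_install.rsp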

    Read the article

  • Secure Your Wireless Router: 8 Things You Can Do Right Now

    - by Chris Hoffman
    A security researcher recently discovered a backdoor in many D-Link routers, allowing anyone to access the router without knowing the username or password. This isn't the first router security issue and won't be the last. To protect yourself, you should ensure that your router is configured securely. This is about more than just enabling Wi-Fi encryption and not hosting an open Wi-Fi network.

    Disable Remote Access

    Routers offer a web interface, allowing you to configure them through a browser. The router runs a web server and makes this web page available when you're on the router's local network. However, most routers offer a "remote access" feature that allows you to access this web interface from anywhere in the world. Even if you set a username and password, if you have a D-Link router affected by this vulnerability, anyone would be able to log in without any credentials. If you have remote access disabled, you'd be safe from people remotely accessing your router and tampering with it. To do this, open your router's web interface and look for the "Remote Access," "Remote Administration," or "Remote Management" feature. Ensure it's disabled — it should be disabled by default on most routers, but it's good to check.

    Update the Firmware

    Like our operating systems, web browsers, and every other piece of software we use, router software isn't perfect. The router's firmware — essentially the software running on the router — may have security flaws. Router manufacturers may release firmware updates that fix such security holes, although they quickly discontinue support for most routers and move on to the next models. Unfortunately, most routers don't have an auto-update feature like Windows and our web browsers do — you have to check your router manufacturer's website for a firmware update and install it manually via the router's web interface. Check to be sure your router has the latest available firmware installed.

    Change Default Login Credentials

    Many routers have default login credentials that are fairly obvious, such as the password "admin". If someone gained access to your router's web interface through some sort of vulnerability or just by logging onto your Wi-Fi network, it would be easy to log in and tamper with the router's settings. To avoid this, change the router's password to a non-default password that an attacker couldn't easily guess. Some routers even allow you to change the username you use to log into your router.

    Lock Down Wi-Fi Access

    If someone gains access to your Wi-Fi network, they could attempt to tamper with your router — or just do other bad things like snoop on your local file shares or use your connection to download copyrighted content and get you in trouble. Running an open Wi-Fi network can be dangerous. To prevent this, ensure your router's Wi-Fi is secure. This is pretty simple: set it to use WPA2 encryption and use a reasonably secure passphrase. Don't use the weaker WEP encryption or set an obvious passphrase like "password".

    Disable UPnP

    A variety of UPnP flaws have been found in consumer routers. Tens of millions of consumer routers respond to UPnP requests from the Internet, allowing attackers on the Internet to remotely configure your router. Flash applets in your browser could use UPnP to open ports, making your computer more vulnerable. UPnP is fairly insecure for a variety of reasons. To avoid UPnP-based problems, disable UPnP on your router via its web interface.
    If you use software that needs ports forwarded — such as a BitTorrent client, game server, or communications program — you'll have to forward ports on your router without relying on UPnP.

    Log Out of the Router's Web Interface When You're Done Configuring It

    Cross site scripting (XSS) flaws have been found in some routers. A router with such an XSS flaw could be controlled by a malicious web page, allowing the web page to configure settings while you're logged in. If your router is using its default username and password, it would be easy for the malicious web page to gain access. Even if you changed your router's password, it would be theoretically possible for a website to use your logged-in session to access your router and modify its settings. To prevent this, just log out of your router when you're done configuring it — if you can't do that, you may want to clear your browser cookies. This isn't something to be too paranoid about, but logging out of your router when you're done using it is a quick and easy thing to do.

    Change the Router's Local IP Address

    If you're really paranoid, you may be able to change your router's local IP address. For example, if its default address is 192.168.0.1, you could change it to 192.168.0.150. If the router itself were vulnerable and some sort of malicious script in your web browser attempted to exploit a cross site scripting vulnerability, accessing known-vulnerable routers at their local IP address and tampering with them, the attack would fail. This step isn't completely necessary, especially since it wouldn't protect against local attackers — if someone were on your network or software was running on your PC, they'd be able to determine your router's IP address and connect to it.

    Install Third-Party Firmwares

    If you're really worried about security, you could also install a third-party firmware such as DD-WRT or OpenWRT. You won't find obscure back doors added by the router's manufacturer in these alternative firmwares.

    Consumer routers are shaping up to be a perfect storm of security problems — they're not automatically updated with new security patches, they're connected directly to the Internet, manufacturers quickly stop supporting them, and many consumer routers seem to be full of bad code that leads to UPnP exploits and easy-to-exploit backdoors. It's smart to take some basic precautions.

    Image Credit: Nuscreen on Flickr

    Read the article

  • SharePoint 2010 - Access denied during ApplyWebConfigModifications()

    - by tcoalson
    I have SharePoint 2010 installed on a Windows Server 2008 R2 machine which is also hosting SQL Server 2008 R2. I am attempting to deploy a solution that includes web parts, which works fine in MOSS 2007, into the 2010 environment. The web part feature has a feature receiver that updates the web.config. When I try to activate the feature through the Site Collection Feature GUI, I receive an access denied message. I am logged on to the server and into SharePoint with the app pool account, which is also a member of the domain administrators group, the local administrators group and the SharePoint Farm Admin group. This account is also dbo on SQL Server. This same feature activates fine using the stsadm command. I have dug into this issue at length and here is what I have found:

    Looking at the Microsoft assemblies in Reflector, my error is coming from the SPWebApplication.ApplyWebConfigModifications() method. I can see the trace statements from SPWebConfigFileChanges.RemoveModificationsWebConfigXMLDocument and SPWebConfigFileChanges.ApplyModificationsWebConfigXMLDocument. The next line is a Save(str). Below is the output from the SharePoint logs that pertain to this error:

        Apply web config modifications to web app 5a817a37-7bf6-4d26-be51-207369e38f5b
        02/24/2010 16:05:41.09 w3wp.exe (0x0F64) 0x1034 SharePoint Foundation General 8grn Medium WebConfigModification: Applying web config modifications to web app in server tw-s1-m4400-007 5a817a37-7bf6-4d26-be51-207369e38f5b
        02/24/2010 16:05:41.09 w3wp.exe (0x0F64) 0x1034 SharePoint Foundation Topology 88gw Medium WebConfigModification: Applying web config modifications to file C:\inetpub\wwwroot\wss\VirtualDirectories\2008\web.config 5a817a37-7bf6-4d26-be51-207369e38f5b
        02/24/2010 16:05:41.09 w3wp.exe (0x0F64) 0x1034 SharePoint Foundation Topology 887b Medium Removing web config node - Path configuration/system.web/httpModules Node name add[@name='JivePageController'] 5a817a37-7bf6-4d26-be51-207369e38f5b
        02/24/2010 16:05:41.09 w3wp.exe (0x0F64) 0x1034 SharePoint Foundation Topology 887b Medium Removing web config node - Path configuration/system.web/httpHandlers Node name add[@path='ScriptResource.axd'] 5a817a37-7bf6-4d26-be51-207369e38f5b
        02/24/2010 16:05:41.09 w3wp.exe (0x0F64) 0x1034 SharePoint Foundation Topology 887b Medium Removing web config node - Path configuration/runtime/*[local-name()="assemblyBinding" and namespace-uri()="urn:schemas-microsoft-com:asm.v1"] Node name [local-name()="dependentAssembly"][/@name="System.Web.Extensions.Design"] 5a817a37-7bf6-4d26-be51-207369e38f5b
        02/24/2010 16:05:41.09 w3wp.exe (0x0F64) 0x1034 SharePoint Foundation Topology 887b Medium Removing web config node - Path configuration/runtime/*[local-name()="assemblyBinding" and namespace-uri()="urn:schemas-microsoft-com:asm.v1"] Node name [local-name()="dependentAssembly"][/@name="System.Web.Extensions"] 5a817a37-7bf6-4d26-be51-207369e38f5b
        02/24/2010 16:05:41.09 w3wp.exe (0x0F64) 0x1034 SharePoint Foundation Topology 8gp8 Medium WebConfigModification: Adding web config node - Path - configuration/runtime/*[local-name()="assemblyBinding" and namespace-uri()="urn:schemas-microsoft-com:asm.v1"] Node name - [local-name()="dependentAssembly"][/@name="System.Web.Extensions"] Node value - in web.config file C:\inetpub\wwwroot\wss\VirtualDirectories\2008\web.config 5a817a37-7bf6-4d26-be51-207369e38f5b
        02/24/2010 16:05:41.09 w3wp.exe (0x0F64) 0x1034 SharePoint Foundation Topology 8gp8 Medium WebConfigModification: Adding web config node - Path - configuration/runtime/*[local-name()="assemblyBinding" and namespace-uri()="urn:schemas-microsoft-com:asm.v1"] Node name - [local-name()="dependentAssembly"][/@name="System.Web.Extensions.Design"] Node value - in web.config file C:\inetpub\wwwroot\wss\VirtualDirectories\2008\web.config 5a817a37-7bf6-4d26-be51-207369e38f5b
        02/24/2010 16:05:41.09 w3wp.exe (0x0F64) 0x1034 SharePoint Foundation Topology 8gp8 Medium WebConfigModification: Adding web config node - Path - configuration/system.web/httpHandlers Node name - add[@path='ScriptResource.axd'] Node value - in web.config file C:\inetpub\wwwroot\wss\VirtualDirectories\2008\web.config 5a817a37-7bf6-4d26-be51-207369e38f5b
        02/24/2010 16:05:41.09 w3wp.exe (0x0F64) 0x1034 SharePoint Foundation Topology 8gp8 Medium WebConfigModification: Adding web config node - Path - configuration/system.web/httpModules Node name - add[@name='JivePageController'] Node value - in web.config file C:\inetpub\wwwroot\wss\VirtualDirectories\2008\web.config 5a817a37-7bf6-4d26-be51-207369e38f5b
        02/24/2010 16:05:41.09 w3wp.exe (0x15C4) 0x1444 SharePoint Foundation Topology e5mb Medium WcfReceiveRequest: LocalAddress: 'http://tw-s1-m4400-007.jivedemo.local:32843/15702467ece1408f881abeabac3b5077/MetadataWebService.svc' Channel: 'System.ServiceModel.Channels.ServiceChannel' Action: xxx MessageId: 'urn:uuid:4e859532-ed7f-4937-8b88-68d3af43d589' 9f403ede-2c94-490b-a05c-e169cc5fe58d
        02/24/2010 16:05:41.10 w3wp.exe (0x0F64) 0x1034 SharePoint Foundation Topology f6kh High WebConfigModification: Save of web.config file C:\inetpub\wwwroot\wss\VirtualDirectories\2008\web.config for applying modifications to web app SharePoint - 2008 failed. Error message - Access to the path 'C:\inetpub\wwwroot\wss\VirtualDirectories\2008\web.config' is denied. 5a817a37-7bf6-4d26-be51-207369e38f5b
        02/24/2010 16:05:41.10 w3wp.exe (0x0F64) 0x1034 SharePoint Foundation Topology 8j2o High WebConfigModification: Changes not applied to web application SharePoint - 2008 with Url xxx 5a817a37-7bf6-4d26-be51-207369e38f5b

    Any help would be appreciated!
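    For anyone reproducing this, the usual shape of a feature receiver that applies this kind of change is sketched below; the modification mirrors the JivePageController httpModule visible in the logs, while the owner key and the module's type string are placeholders:

        using Microsoft.SharePoint;
        using Microsoft.SharePoint.Administration;

        public class WebConfigFeatureReceiver : SPFeatureReceiver
        {
            public override void FeatureActivated(SPFeatureReceiverProperties properties)
            {
                var webApp = properties.Feature.Parent as SPWebApplication;
                if (webApp == null) return;

                var mod = new SPWebConfigModification
                {
                    Path = "configuration/system.web/httpModules",
                    Name = "add[@name='JivePageController']",
                    Owner = "JiveWebParts",  // illustrative owner key
                    Sequence = 0,
                    Type = SPWebConfigModification.SPWebConfigModificationType.EnsureChildNode,
                    Value = "<add name='JivePageController' type='Jive.PageController, Jive' />"  // placeholder type
                };

                webApp.WebConfigModifications.Add(mod);
                webApp.Update();
                // this is the call that fails with "access denied" when activated via the GUI
                webApp.WebService.ApplyWebConfigModifications();
            }
        }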

    Read the article

  • Could not load file or assembly 'System.Data.SQLite'

    - by J. Pablo Fernández
    I've installed ELMAH 1.1 .Net 3.5 x64 in my ASP.NET project and now I'm getting this error (whenever I try to see any page):

        Could not load file or assembly 'System.Data.SQLite, Version=1.0.61.0, Culture=neutral, PublicKeyToken=db937bc2d44ff139' or one of its dependencies. An attempt was made to load a program with an incorrect format.
        Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
        Exception Details: System.BadImageFormatException: Could not load file or assembly 'System.Data.SQLite, Version=1.0.61.0, Culture=neutral, PublicKeyToken=db937bc2d44ff139' or one of its dependencies. An attempt was made to load a program with an incorrect format.

    More error details at the bottom. My Active Solution platform is "Any CPU" and I'm running on an x64 Windows 7 on an x64, of course, processor. The reason we are using this version of ELMAH is that 1.0 .Net 3.5 (x86, which is the only platform for which it's compiled) gave us this same error on our x64 Windows server. I've tried compiling for x86 and x64 and I get the same error. I've tried removing all the compiler output (bin and obj). Finally, I made a reference to the SQLite dll directly, something that was not needed for the project to work on the server, and I got this compiler error:

        Error 1 Warning as Error: Assembly generation -- Referenced assembly 'System.Data.SQLite.dll' targets a different processor MyProject

    Any ideas what the problem might be? More error details:

        Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.

        Stack Trace:

        [BadImageFormatException: Could not load file or assembly 'System.Data.SQLite, Version=1.0.61.0, Culture=neutral, PublicKeyToken=db937bc2d44ff139' or one of its dependencies. An attempt was made to load a program with an incorrect format.]
            System.Reflection.Assembly._nLoad(AssemblyName fileName, String codeBase, Evidence assemblySecurity, Assembly locationHint, StackCrawlMark& stackMark, Boolean throwOnFileNotFound, Boolean forIntrospection) +0
            System.Reflection.Assembly.nLoad(AssemblyName fileName, String codeBase, Evidence assemblySecurity, Assembly locationHint, StackCrawlMark& stackMark, Boolean throwOnFileNotFound, Boolean forIntrospection) +43
            System.Reflection.Assembly.InternalLoad(AssemblyName assemblyRef, Evidence assemblySecurity, StackCrawlMark& stackMark, Boolean forIntrospection) +127
            System.Reflection.Assembly.InternalLoad(String assemblyString, Evidence assemblySecurity, StackCrawlMark& stackMark, Boolean forIntrospection) +142
            System.Reflection.Assembly.Load(String assemblyString) +28
            System.Web.Configuration.CompilationSection.LoadAssemblyHelper(String assemblyName, Boolean starDirective) +46

        [ConfigurationErrorsException: Could not load file or assembly 'System.Data.SQLite, Version=1.0.61.0, Culture=neutral, PublicKeyToken=db937bc2d44ff139' or one of its dependencies. An attempt was made to load a program with an incorrect format.]
            System.Web.Configuration.CompilationSection.LoadAssemblyHelper(String assemblyName, Boolean starDirective) +613
            System.Web.Configuration.CompilationSection.LoadAllAssembliesFromAppDomainBinDirectory() +203
            System.Web.Configuration.CompilationSection.LoadAssembly(AssemblyInfo ai) +105
            System.Web.Compilation.BuildManager.GetReferencedAssemblies(CompilationSection compConfig) +178
            System.Web.Compilation.BuildProvidersCompiler..ctor(VirtualPath configPath, Boolean supportLocalization, String outputAssemblyName) +54
            System.Web.Compilation.ApplicationBuildProvider.GetGlobalAsaxBuildResult(Boolean isPrecompiledApp) +232
            System.Web.Compilation.BuildManager.CompileGlobalAsax() +52
            System.Web.Compilation.BuildManager.EnsureTopLevelFilesCompiled() +337

        [HttpException (0x80004005): Could not load file or assembly 'System.Data.SQLite, Version=1.0.61.0, Culture=neutral, PublicKeyToken=db937bc2d44ff139' or one of its dependencies. An attempt was made to load a program with an incorrect format.]
            System.Web.Compilation.BuildManager.ReportTopLevelCompilationException() +58
            System.Web.Compilation.BuildManager.EnsureTopLevelFilesCompiled() +512
            System.Web.Hosting.HostingEnvironment.Initialize(ApplicationManager appManager, IApplicationHost appHost, IConfigMapPathFactory configMapPathFactory, HostingEnvironmentParameters hostingParameters) +729

        [HttpException (0x80004005): Could not load file or assembly 'System.Data.SQLite, Version=1.0.61.0, Culture=neutral, PublicKeyToken=db937bc2d44ff139' or one of its dependencies. An attempt was made to load a program with an incorrect format.]
            System.Web.HttpRuntime.FirstRequestInit(HttpContext context) +8896783
            System.Web.HttpRuntime.EnsureFirstRequestInit(HttpContext context) +85
            System.Web.HttpRuntime.ProcessRequestInternal(HttpWorkerRequest wr) +259
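    Since a BadImageFormatException when loading System.Data.SQLite is almost always a 32/64-bit mismatch between the mixed-mode assembly and the worker process, a quick diagnostic you could drop into any page or handler is a sketch like this (IntPtr.Size is used because Environment.Is64BitProcess only exists from .NET 4 onwards, and this project targets 3.5):

        // log whether the hosting process is running 32- or 64-bit;
        // the System.Data.SQLite build being loaded must match this
        bool is64BitProcess = System.IntPtr.Size == 8;
        System.Diagnostics.Trace.WriteLine("64-bit process: " + is64BitProcess);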

    Read the article

  • HTTP Bad Request error when requesting a WCF service contract

    - by Enrico Campidoglio
    I have a WCF service with the following configuration:

        <system.serviceModel>
          <behaviors>
            <serviceBehaviors>
              <behavior name="MetadataEnabled">
                <serviceDebug includeExceptionDetailInFaults="true" />
                <serviceMetadata httpGetEnabled="true" />
              </behavior>
            </serviceBehaviors>
          </behaviors>
          <services>
            <service behaviorConfiguration="MetadataEnabled" name="MyNamespace.MyService">
              <endpoint name="BasicHttp"
                        address=""
                        binding="basicHttpBinding"
                        contract="MyNamespace.IMyServiceContract" />
              <endpoint name="MetadataHttp"
                        address="contract"
                        binding="mexHttpBinding"
                        contract="IMetadataExchange" />
              <host>
                <baseAddresses>
                  <add baseAddress="http://localhost/myservice" />
                </baseAddresses>
              </host>
            </service>
          </services>
        </system.serviceModel>

    When hosting the service in the WcfSvcHost.exe process, if I browse to the URL http://localhost/myservice/contract where the service metadata is available, I get an HTTP 400 Bad Request error. By inspecting the WCF logs I found out that a System.Xml.XmlException exception is being thrown with the message: "The body of the message cannot be read because it is empty." Here is an extract of the log file:

        <Exception>
          <ExceptionType>
            System.ServiceModel.ProtocolException, System.ServiceModel, Version=3.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089
          </ExceptionType>
          <Message>There is a problem with the XML that was received from the network. See inner exception for more details.</Message>
          <StackTrace>
            at System.ServiceModel.Channels.HttpRequestContext.CreateMessage()
            at System.ServiceModel.Channels.HttpChannelListener.HttpContextReceived(HttpRequestContext context, ItemDequeuedCallback callback)
            at System.ServiceModel.Channels.SharedHttpTransportManager.OnGetContextCore(IAsyncResult result)
            at System.ServiceModel.Channels.SharedHttpTransportManager.OnGetContext(IAsyncResult result)
            at System.ServiceModel.Diagnostics.Utility.AsyncThunk.UnhandledExceptionFrame(IAsyncResult result)
            at System.Net.LazyAsyncResult.Complete(IntPtr userToken)
            at System.Net.LazyAsyncResult.ProtectedInvokeCallback(Object result, IntPtr userToken)
            at System.Net.ListenerAsyncResult.WaitCallback(UInt32 errorCode, UInt32 numBytes, NativeOverlapped* nativeOverlapped)
            at System.Threading._IOCompletionCallback.PerformIOCompletionCallback(UInt32 errorCode, UInt32 numBytes, NativeOverlapped* pOVERLAP)
          </StackTrace>
          <InnerException>
            <ExceptionType>System.Xml.XmlException, System.Xml, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089</ExceptionType>
            <Message>The body of the message cannot be read because it is empty.</Message>
            <StackTrace>
              at System.ServiceModel.Channels.HttpRequestContext.CreateMessage()
              at System.ServiceModel.Channels.HttpChannelListener.HttpContextReceived(HttpRequestContext context, ItemDequeuedCallback callback)
              at System.ServiceModel.Channels.SharedHttpTransportManager.OnGetContextCore(IAsyncResult result)
              at System.ServiceModel.Channels.SharedHttpTransportManager.OnGetContext(IAsyncResult result)
              at System.ServiceModel.Diagnostics.Utility.AsyncThunk.UnhandledExceptionFrame(IAsyncResult result)
              at System.Net.LazyAsyncResult.Complete(IntPtr userToken)
              at System.Net.LazyAsyncResult.ProtectedInvokeCallback(Object result, IntPtr userToken)
              at System.Net.ListenerAsyncResult.WaitCallback(UInt32 errorCode, UInt32 numBytes, NativeOverlapped* nativeOverlapped)
              at System.Threading._IOCompletionCallback.PerformIOCompletionCallback(UInt32 errorCode, UInt32 numBytes, NativeOverlapped* pOVERLAP)
            </StackTrace>
          </InnerException>
        </Exception>

    If I instead browse to the URL:
http://localhost/myservice?wsdl everything works just fine and I get the WSDL contract. At this point, I can also remove the "MetadataHttp" metadata endpoint completely, and it wouldn't make any difference. I'm using .NET 3.5 SP1. Does anyone have an idea of what could be wrong here?
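    For context, this behavior is consistent with the error in the log: a mexHttpBinding endpoint expects a SOAP WS-Transfer Get message, so a browser's plain HTTP GET arrives with an empty SOAP body and is rejected with a 400, while ?wsdl is served by the httpGetEnabled behavior. A sketch of querying the mex endpoint the way it expects, using the address from the configuration above:

        using System;
        using System.ServiceModel;
        using System.ServiceModel.Description;

        class MexProbe
        {
            static void Main()
            {
                // talk WS-Transfer Get to the mex endpoint instead of a plain HTTP GET
                var mexAddress = new EndpointAddress("http://localhost/myservice/contract");
                var mexClient = new MetadataExchangeClient(mexAddress);
                mexClient.ResolveMetadataReferences = true;
                MetadataSet metadata = mexClient.GetMetadata();
                Console.WriteLine("Metadata sections: " + metadata.MetadataSections.Count);
            }
        }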

    Read the article

  • WCF Service, Java JApplet client, transport error 405

    - by Juan Zamudio
    Hi all, I'm having a problem with a WCF service and a Java client. I will try to give as much information as I can, thanks for your time. The endpoint of the server is BasicHttpBinding; I tried hosting the server as a Windows Service and in IIS, but nothing changed. The weird thing is that the client works great if I use a simple class; the moment I switch the class to a JApplet, I get the problem mentioned. I'm using Eclipse as an IDE, and I tried Axis and Metro to generate the stub, with the same bad results.

    Here is an example of the Java class where everything is working:

        public class TestSoaMetro {
            public String TestMethod() {
                String result = null;
                IDigitalSignatureService aa = new DigitalSignatureService().getBasicHttpBindingEndpoint();
                try {
                    result = aa.getData("1", "id002962");
                } catch (IDigitalSignatureServiceGetDataArgumentExceptionFaultFaultMessage e) {
                    // TODO Auto-generated catch block
                    e.printStackTrace();
                } catch (IDigitalSignatureServiceGetDataInvalidOperationExceptionFaultFaultMessage e) {
                    // TODO Auto-generated catch block
                    e.printStackTrace();
                }
                return result;
            }
        }

    Here is the example of the JApplet where I get the error:

        public class TestSoaMetroApplet extends JApplet {
            public void init() {
                Container content = getContentPane();
                content.setBackground(Color.white);
                content.setLayout(new FlowLayout());
                String result = this.TestMethod();
                JLabel label = new JLabel(result);
                content.add(label);
            }

            public String TestMethod() {
                String result = null;
                IDigitalSignatureService aa = null;
                try {
                    aa = new DigitalSignatureService().getBasicHttpBindingEndpoint();
                    result = aa.getData("1", "id002962");
                } catch (IDigitalSignatureServiceGetDataArgumentExceptionFaultFaultMessage e) {
                    // TODO Auto-generated catch block
                    e.printStackTrace();
                } catch (IDigitalSignatureServiceGetDataInvalidOperationExceptionFaultFaultMessage e) {
                    // TODO Auto-generated catch block
                    e.printStackTrace();
                }
                return result;
            }
        }

    The moment the applet loads I get the error; it is the exact same call, so I don't understand why I get the exception when using the applet. I also tried to call this from a Silverlight client and was getting a security exception; this is where I found out about clientaccesspolicy.xml and crossdomain.xml. I added clientaccesspolicy.xml to the service and the Silverlight client works great, so I decided to try crossdomain.xml as well, but nothing: the applet still does not work. I will put the stack trace at the end; thanks all for your time.
    Juan Zamudio

        javax.xml.ws.WebServiceException: org.apache.axis2.AxisFault: Transport error: 405 Error: Method not allowed
            at org.apache.axis2.jaxws.ExceptionFactory.createWebServiceException(ExceptionFactory.java:175)
            at org.apache.axis2.jaxws.ExceptionFactory.makeWebServiceException(ExceptionFactory.java:70)
            at org.apache.axis2.jaxws.ExceptionFactory.makeWebServiceException(ExceptionFactory.java:128)
            at org.apache.axis2.jaxws.core.controller.impl.AxisInvocationController.execute(AxisInvocationController.java:559)
            at org.apache.axis2.jaxws.core.controller.impl.AxisInvocationController.doInvoke(AxisInvocationController.java:118)
            at org.apache.axis2.jaxws.core.controller.impl.InvocationControllerImpl.invoke(InvocationControllerImpl.java:82)
            at org.apache.axis2.jaxws.client.proxy.JAXWSProxyHandler.invokeSEIMethod(JAXWSProxyHandler.java:317)
            at org.apache.axis2.jaxws.client.proxy.JAXWSProxyHandler.invoke(JAXWSProxyHandler.java:159)
            at $Proxy12.getData(Unknown Source)
            at TestSoaMetroApplet.TestMethod(TestSoaMetroApplet.java:28)
            at TestSoaMetroApplet.init(TestSoaMetroApplet.java:19)
            at sun.applet.AppletPanel.run(Unknown Source)
            at java.lang.Thread.run(Unknown Source)
        Caused by: org.apache.axis2.AxisFault: Transport error: 405 Error: Method not allowed
            at org.apache.axis2.transport.http.HTTPSender.handleResponse(HTTPSender.java:295)
            at org.apache.axis2.transport.http.HTTPSender.sendViaPost(HTTPSender.java:190)
            at org.apache.axis2.transport.http.HTTPSender.send(HTTPSender.java:75)
            at org.apache.axis2.transport.http.CommonsHTTPTransportSender.writeMessageWithCommons(CommonsHTTPTransportSender.java:389)
            at org.apache.axis2.transport.http.CommonsHTTPTransportSender.invoke(CommonsHTTPTransportSender.java:222)
            at org.apache.axis2.engine.AxisEngine.send(AxisEngine.java:435)
            at org.apache.axis2.description.OutInAxisOperationClient.send(OutInAxisOperation.java:402)
            at org.apache.axis2.description.OutInAxisOperationClient.executeImpl(OutInAxisOperation.java:229)
            at org.apache.axis2.client.OperationClient.execute(OperationClient.java:165)
            at org.apache.axis2.jaxws.core.controller.impl.AxisInvocationController.execute(AxisInvocationController.java:554)
            ... 9 more
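    Not a confirmed fix, but since the applet path goes through the browser JVM's cross-domain checks, it may be worth confirming the crossdomain.xml is well-formed and served from the root of the service's domain. A typical wide-open policy used for testing looks like this (testing only; tighten the domain attribute for production):

        <?xml version="1.0"?>
        <!DOCTYPE cross-domain-policy SYSTEM "http://www.adobe.com/xml/dtds/cross-domain-policy.dtd">
        <cross-domain-policy>
            <!-- permissive policy for testing only -->
            <allow-access-from domain="*" />
        </cross-domain-policy>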

    Read the article

  • Poor home office network performance and cannot figure out where the issue is

    - by Jeff Willener
    This is the most bizarre issue. I have worked with small to mid size networks for quite a long time and can say I'm comfortable connecting hardware. Where you will start to lose me is with managed switches and firewalls. To start, let me describe my network (sigh, I shouldn't, but I MUST solve this).

    1) Comcast Cable Internet
    2) Motorola SURFboard eXtreme Cable Modem
       a) Model: SB6120
       b) DOCSIS 3.0 and 2.0 support
       c) IPv4 and IPv6 support
    3-A) Cisco Small Business RV220W Wireless N Firewall
       a) Latest firmware
       b) Model: RV220W-A-K9-NA
       c) WAN port to modem (2)
       d) vlan 1: work
       e) vlan 2: everything else
    3-B) D-Link DIR-615 Draft 802.11 N Wireless Router
       a) Latest firmware
       b) WAN port to modem (2)
    4) Servers connected directly to firewall
       a) If firewall 3-A, then vlan 1
       b) CAT5e patch cables
       c) Dell PowerEdge 1400SC w/ 10/100 integrated NIC (Domain Controller, DNS, former DHCP)
       d) Dell PowerEdge 400SC w/ 10/100/1000 integrated NIC (VMWare Server)
    4) Linksys EZXS88W unmanaged Workgroup 10/100 Switch
       a) If firewall 3-A, then vlan 2
       b) 25' CAT5e patch cable to firewall (3-A or 3-B)
       c) Connects xBox 360, Blu-Ray player, PC at TV
    5) Office equipment connected directly to firewall
       a) If firewall 3-A, then vlan 1
       b) ~80' CAT6 or CAT5e patch cable to firewall (3-A or 3-B)
       c) Connects
          1) Dell Latitude laptop 10/100/1000
          2) Dell Inspiron laptop 10/100
          3) Dell Workstation 10/100/1000 (pristine host, VMWare Workstation 7.x with many bridged VMs)
          4) Brother laser printer 10/100
          5) Epson All-In-One Workforce 310 10/100
    5-A) NetGear FS116 unmanaged 10/100 switch
       a) I've had this switch for a long time and never had issues.
    5-B) NetGear GS108 unmanaged 10/100/1000 switch
       a) Bought new for this issue and returned.
    5-C) Linksys SE2500 unmanaged 10/100/1000 switch
       a) Bought new for this issue and returned.
    5-D) TP-Link TL-SG10008D unmanaged 10/100/1000
       a) Bought new for this issue and still have.
    6) VLan 1 wireless connections (on same subnet if 3-B)
       a) Any of those at 5c
       b) HP laptop
    7) VLan 2 wireless connections (on same subnet if 3-B)
       a) IPad, IPod
       b) Compaq laptop
       c) Epson wireless printer

    Shew, without hosting a diagram I hope that paints a good picture.

    The Issue

    The breakdown here is at item 5. No matter what I do, I cannot have a switch at 5 and have to run everything wireless, regardless of router.

    Issues related to using a switch (point 5 above):
    - SpeedTest is good.
    - Poor throughput to other devices, if they can communicate at all.
    - Usually cannot ping other devices even on the same switch, although, when able, ping times are good.
    - Eventual loss of connectivity, which can "sometimes" be restored by unplugging everything for several days - not minutes or hours, but we're talking a week, if at all.
    - Connecting directly to a computer gives a good internet connection; however, throughput to other devices connected to the firewall is at best horrible. Yet printing doesn't seem to be an issue as long as the printers are connected via wireless.
    - I have to force the RV220W to 1000Mb on the respective port if using a Gig switch.

    Issues related to using wireless in place of a switch (point 5 above):
    - Poor throughput to other devices, if they can communicate.
    - SpeedTest is good.

    Bottom line:
    - Internet speeds are awesome. By the way, Comcast went WAY above and beyond to make sure it was not them. They rewired EVERYTHING, which did solve internet drops.
    - Computer to computer connections are garbage.
    - Cannot get a switch at 5 to work, yet the other at 4 has never had an issue.
    - Direct connection, bypassing the switch, is good for DHCP and internet.
    - DNS must be on the server, not the firewall.
    - Cisco insists it's my switches, but as you can see I have used four, and two different cables, with the same result.

    My gut feeling is something is happening with routing. But I'm not smart enough to know that answer. I run a lot of VMs at 5-c-3, could that cause it? What's different compared to my previous house is that I have introduced Gigabit hardware (firewall/switches/computers). Some of my computers might have IPv6 turned on, if I haven't turned it off already. I'm truly at a loss and hope anyone has some crazy idea how to solve this. Bottom line, I need a switch in my office behind the firewall. I've changed everything. The real crux is I will find a working solution and, again, after days it will stop working. So this means I cannot isolate whether it's a computer, since I have to use them. Oh, and a solution is not throwing more money at this. I'm well into $1k already. Yah, lame.

    Read the article

  • How to pre-load all deployed assemblies for an AppDomain

    - by Andras Zoltan
    Given an App Domain, there are many different locations that Fusion (the .Net assembly loader) will probe for a given assembly. Obviously, we take this functionality for granted and, since the probing appears to be embedded within the .Net runtime (the internal Assembly._nLoad method seems to be the entry-point when Reflect-Loading - and I assume that implicit loading is probably covered by the same underlying algorithm), as developers we don't seem to be able to gain access to those search paths. My problem is that I have a component that does a lot of dynamic type resolution, and which needs to be able to ensure that all user-deployed assemblies for a given AppDomain are pre-loaded before it starts its work. Yes, it slows down startup - but the benefits we get from this component totally outweigh this. The basic loading algorithm I've already written is as follows. It deep-scans a set of folders for any .dll (.exes are being excluded at the moment), and uses Assembly.LoadFrom to load the dll if its AssemblyName cannot be found in the set of assemblies already loaded into the AppDomain (this is implemented inefficiently, but it can be optimized later - one possible optimization is sketched below):

    void PreLoad(IEnumerable<string> paths)
    {
        foreach (string p in paths)
        {
            PreLoad(p);
        }
    }

    void PreLoad(string p)
    {
        // all try/catch blocks are elided for brevity
        string[] files = null;
        files = Directory.GetFiles(p, "*.dll", SearchOption.AllDirectories);
        AssemblyName a = null;
        foreach (var s in files)
        {
            a = AssemblyName.GetAssemblyName(s);
            if (!AppDomain.CurrentDomain.GetAssemblies().Any(
                assembly => AssemblyName.ReferenceMatchesDefinition(
                    assembly.GetName(), a)))
                Assembly.LoadFrom(s);
        }
    }

    LoadFrom is used because I've found that using Load() can lead to duplicate assemblies being loaded by Fusion if, when it probes for it, it doesn't find one loaded from where it expects to find it. So, with this in place, all I now have to do is to get a list in precedence order (highest to lowest) of the search paths that Fusion is going to be using when it searches for an assembly. Then I can simply iterate through them. The GAC is irrelevant for this, and I'm not interested in any environment-driven fixed paths that Fusion might use - only those paths that can be gleaned from the AppDomain which contain assemblies expressly deployed for the app. My first iteration of this simply used AppDomain.BaseDirectory. This works for services, form apps and console apps. It doesn't work for an Asp.Net website, however, since there are at least two main locations - the AppDomain.DynamicDirectory (where Asp.Net places its dynamically generated page classes and any assemblies that the Aspx page code references), and then the site's Bin folder - which can be discovered from the AppDomain.SetupInformation.PrivateBinPath property. So I now have working code for the most basic types of apps (Sql Server-hosted AppDomains are another story, since the filesystem is virtualised) - but I came across an interesting issue a couple of days ago where this code simply doesn't work: the nUnit test runner. This uses both Shadow Copying (so my algorithm would need to be discovering and loading the assemblies from the shadow-copy drop folder, not from the bin folder) and it sets up the PrivateBinPath as being relative to the base directory. And of course there are loads of other hosting scenarios that I probably haven't considered; but which must be valid, because otherwise Fusion would choke on loading the assemblies.
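    As for the inefficient duplicate check flagged above, here is a minimal sketch of one way to optimize it (it needs System, System.Collections.Generic, System.Linq and System.Reflection; comparing by AssemblyName.FullName is an assumption on my part and is stricter than ReferenceMatchesDefinition):

    static void PreLoadOptimized(IEnumerable<string> files)
    {
        // Snapshot the loaded assembly identities once per scan,
        // instead of querying the AppDomain for every file.
        var loaded = new HashSet<string>(
            AppDomain.CurrentDomain.GetAssemblies().Select(asm => asm.GetName().FullName),
            StringComparer.OrdinalIgnoreCase);

        foreach (var s in files)
        {
            var candidate = AssemblyName.GetAssemblyName(s);
            // Add returns false when an equal name is already in the set,
            // so each identity is loaded at most once per scan.
            if (loaded.Add(candidate.FullName))
                Assembly.LoadFrom(s);
        }
    }

    The trade-off: the snapshot can go stale if other code loads assemblies mid-scan, but adding each name as it is loaded keeps the set current for the scan itself.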
I want to stop feeling around and introducing hack upon hack to accommodate these new scenarios as they crop up - what I want is, given an AppDomain and its setup information, the ability to produce this list of Folders that I should scan in order to pick up all the DLLs that are going to be loaded; regardless of how the AppDomain is setup. If Fusion can see them as all the same, then so should my code. Of course, I might have to alter the algorithm if .Net changes its internals - that's just a cross I'll have to bear. Equally, I'm happy to consider SQL Server and any other similar environments as edge-cases that remain unsupported for now. Any ideas!?
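    For reference, the kind of helper I am after might start out like the minimal sketch below - explicitly an approximation of Fusion's probing that covers only the hosts described above (application base, private bin paths, dynamic directory, shadow-copy cache). The properties used are the standard AppDomain/AppDomainSetup members, the method name is made up, and it needs System, System.Collections.Generic, System.IO and System.Linq:

    static IEnumerable<string> GuessProbingPaths(AppDomain domain)
    {
        var setup = domain.SetupInformation;
        var paths = new List<string> { setup.ApplicationBase };

        // PrivateBinPath may hold several semicolon-separated folders,
        // each interpreted relative to the application base.
        if (!string.IsNullOrEmpty(setup.PrivateBinPath))
        {
            foreach (var bin in setup.PrivateBinPath.Split(';'))
                paths.Add(Path.Combine(setup.ApplicationBase, bin));
        }

        // Asp.Net-style dynamic compilation output.
        if (!string.IsNullOrEmpty(domain.DynamicDirectory))
            paths.Add(domain.DynamicDirectory);

        // Shadow copying (e.g. the nUnit runner) redirects loads into the cache.
        if (string.Equals(setup.ShadowCopyFiles, "true", StringComparison.OrdinalIgnoreCase)
            && !string.IsNullOrEmpty(setup.CachePath))
            paths.Add(setup.CachePath);

        return paths.Where(Directory.Exists).Distinct(StringComparer.OrdinalIgnoreCase);
    }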

    Read the article

  • Cyrus on CentOS with sasl / pam / ldap

    - by Oscar
    SASL/PAM/LDAP is driving me crazy... that's what I read a lot when googling for problems in this area, and what I experience myself :-S I'm trying to get Cyrus imap working for virtual hosting on CentOS with this authorisation backend and really don't know what's happening. In saslauthd I configured the LDAP search filter to use, but it looks like pam completely ignores it. Here's what I do for testing (done more tests but all with similar results): [root@testserv ~]# imtest -u [email protected] -a [email protected] WARNING: no hostname supplied, assuming localhost S: * OK [CAPABILITY IMAP4 IMAP4rev1 LITERAL+ ID STARTTLS] testserv. Cyrus IMAP4 v2.3.7-Invoca-RPM-2.3.7-7.el5_6.4 server ready C: C01 CAPABILITY S: * CAPABILITY IMAP4 IMAP4rev1 LITERAL+ ID STARTTLS ACL RIGHTS=kxte QUOTA MAILBOX-REFERRALS NAMESPACE UIDPLUS NO_ATOMIC_RENAME UNSELECT CHILDREN MULTIAPPEND BINARY SORT SORT=MODSEQ THREAD=ORDEREDSUBJECT THREAD=REFERENCES ANNOTATEMORE CATENATE CONDSTORE IDLE LISTEXT LIST-SUBSCRIBED X-NETSCAPE URLAUTH S: C01 OK Completed Please enter your password: C: L01 LOGIN [email protected] {6} S: + go ahead C: <omitted> S: L01 NO Login failed: authentication failure Authentication failed. generic failure Security strength factor: 0 C: Q01 LOGOUT * BYE LOGOUT received Q01 OK Completed Connection closed. The LDAP entry does exist (and so does the mailbox in Cyrus): [root@testserv ~]# ldapsearch -WxD cn=Manager,o=mydomain,c=com [email protected] Enter LDAP Password: # extended LDIF # # LDAPv3 # base <> with scope subtree # filter: [email protected] # requesting: ALL # # myuser, accounts, testserv.mydomain.com, mydomain, com dn: uid=myuser,ou=accounts,dc=testserv.mydomain.com,o=mydomain,c=com objectClass: top objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson objectClass: posixAccount objectClass: shadowAccount uidNumber: 16 uid: myuser gidNumber: 5 givenName: My sn: Name mail: [email protected] cn: My Name userPassword:: dYN5ebB0fXhNRn1pZllhRnJX7Uk= shadowLastChange: 15176 homeDirectory: /dev/null # search result search: 2 result: 0 Success # numResponses: 2 # numEntries: 1 This is what I get in /var/log/messages Aug 2 04:00:11 testserv cyrus/imap[12514]: auxpropfunc error invalid parameter supplied Aug 2 04:00:19 testserv saslauthd[5926]: do_auth : auth failure: [[email protected]] [service=imap] [realm=testserv.mydomain.com] [mech=pam] [reason=PAM auth error] ... /var/adm/auth.log Aug 2 04:00:11 testserv cyrus/imap[12514]: auxpropfunc error invalid parameter supplied Aug 2 04:00:11 testserv cyrus/imap[12514]: _sasl_plugin_load failed on sasl_auxprop_plug_init for plugin: ldapdb Aug 2 04:00:19 testserv saslauthd[5926]: DEBUG: auth_pam: pam_authenticate failed: User not known to the underlying authentication module Aug 2 04:00:19 testserv saslauthd[5926]: do_auth : auth failure: [[email protected]] [service=imap] [realm=testserv.mydomain.com] [mech=pam] [reason=PAM auth error] (AFAIK I can ignore the auxprop msg) ... 
    and /var/log/slapd.log:
    Aug 2 04:00:19 testserv slapd[5968]: conn=61 fd=27 ACCEPT from IP=127.0.0.1:51403 (IP=0.0.0.0:389)
    Aug 2 04:00:19 testserv slapd[5968]: conn=61 op=0 BIND dn="" method=128
    Aug 2 04:00:19 testserv slapd[5968]: conn=61 op=0 RESULT tag=97 err=0 text=
    Aug 2 04:00:19 testserv slapd[5968]: conn=61 op=1 SRCH base="o=mydomain,c=com" scope=2 deref=0 filter="([email protected])"
    Aug 2 04:00:19 testserv slapd[5968]: conn=61 op=1 SEARCH RESULT tag=101 err=0 nentries=0 text=
    Aug 2 04:00:19 testserv slapd[5968]: conn=61 op=2 UNBIND
    Aug 2 04:00:19 testserv slapd[5968]: conn=61 fd=27 closed
    These are the settings in /etc/imapd.conf:
    sasl_mech_list: PLAIN LOGIN
    sasl_pwcheck_method: saslauthd
    ## sasl_auxprop_plugin: sasldb
    sasl_auto_transition: no
    and my sasl config:
    [root@testserv ~]# cat /etc/sysconfig/saslauthd
    # Directory in which to place saslauthd's listening socket, pid file, and so
    # on. This directory must already exist.
    SOCKETDIR=/var/run/saslauthd
    # Mechanism to use when checking passwords. Run "saslauthd -v" to get a list
    # of which mechanism your installation was compiled with the ablity to use.
    MECH=pam
    # Additional flags to pass to saslauthd on the command line. See saslauthd(8)
    # for the list of accepted flags.
    FLAGS="-c -r -O /etc/saslauthd.conf"
    [root@testserv ~]# cat /etc/saslauthd.conf
    ldap_servers: ldap://127.0.0.1/
    ldap_search_base: dc=%d,o=mydomain,c=com
    ldap_auth_method: bind
    #ldap_filter: (|(uid=%u)((&(mail=%u@%d)(accountStatus=active)))
    ldap_filter: (&(mail=%u@%d)(accountStatus=active))
    ldap_debug: 1
    ldap_version: 3
    The accountStatus=active is not in ldap yet, but that doesn't make a difference since I don't see it in the filter... that's not the reason for the failure. The weird thing is, I do get an error when I rename or remove /etc/saslauthd.conf, but when the file exists it seems to be happily ignored... The filter in slapd.log seems to be taken from /etc/ldap.conf. Apart from some timers, that only contains:
    host 127.0.0.1
    base o=mydomain,c=com
    pam_login_attribute mail
    Commenting out the pam_login_attribute results in this filter in slapd.log: filter="([email protected])"
    Pam-imap looks like this:
    [root@testserv ~]# cat /etc/pam.d/imap
    auth required pam_ldap.so debug
    account required pam_ldap.so debug
    #auth sufficient pam_unix.so likeauth nullok
    #auth sufficient pam_ldap.so use_first_pass
    #auth required pam_deny.so
    #account sufficient pam_unix.so
    #account sufficient pam_ldap.so
    The commented-out lines are there because I don't have the cyrus admin user in Ldap; that's a Linux user. That works fine when uncommented, but I still need to play around with that a little, and first I want to get imap working. Finally nsswitch:
    [root@testserv ~]# cat /etc/nsswitch.conf
    #
    # /etc/nsswitch.conf
    #
    # An example Name Service Switch config file. This file should be
    # sorted with the most-used services at the beginning.
    #
    # The entry '[NOTFOUND=return]' means that the search for an
    # entry should stop if the search in the previous entry turned
    # up nothing. Note that if the search failed due to some other reason
    # (like no NIS server responding) then the search continues with the
    # next entry.
    #
    # Legal entries are:
    #
    # nisplus or nis+        Use NIS+ (NIS version 3)
    # nis or yp              Use NIS (NIS version 2), also called YP
    # dns                    Use DNS (Domain Name Service)
    # files                  Use the local files
    # db                     Use the local database (.db) files
    # compat                 Use NIS on compat mode
    # hesiod                 Use Hesiod for user lookups
    # [NOTFOUND=return]      Stop searching if not found so far
    #
    # To use db, put the "db" in front of "files" for entries you want to be
    # looked up first in the databases
    #
    # Example:
    #passwd: db files nisplus nis
    #shadow: db files nisplus nis
    #group: db files nisplus nis
    passwd: compat ldap
    group: compat ldap
    shadow: compat ldap
    hosts: files dns
    bootparams: nisplus [NOTFOUND=return] files
    ethers: files
    netmasks: files
    networks: files
    protocols: files
    rpc: files
    services: files
    netgroup: nisplus
    publickey: nisplus
    automount: files nisplus
    aliases: files nisplus
    Any info on where to start looking will be greatly appreciated! Thanks in advance

    Read the article

  • OAuth Consumer request for token from ServiceProvider returns InternalServerError

    - by chridam
    I'm playing around with DevDefined.OAuth - an OAuth consumer and provider implementation for .Net http://code.google.com/p/devdefined-tools/wiki/OAuth and on launching the ExampleConsumerSite project after configuring the service endpoints on my IIS 7 web server, I'm receiving the following error: Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.Exception: Request for uri: http://localhost%3A8080/RequestToken.aspx?oauth%5Fcallback=oob&oauth%5Fnonce=94efde0b-dd45-4cee-8253-7496cef0b877&oauth%5Fconsumer%5Fkey=key&oauth%5Fsignature%5Fmethod=PLAINTEXT&oauth%5Ftimestamp=1252512419&oauth%5Fversion=1.0&oauth%5Ftoken=&oauth%5Fsignature=secret%2526 failed. status code: InternalServerError An error occurred during the parsing of a resource required to service this request. Please review the following specific parse error details and modify your source file appropriately. Source Error: [HttpException]: 'RequestToken' is not allowed here because it does not extend class 'System.Web.UI.Page'. at System.Web.UI.TemplateParser.ProcessError(String message) at System.Web.UI.TemplateParser.ProcessInheritsAttribute(String baseTypeName, String codeFileBaseTypeName, String src, Assembly assembly) at System.Web.UI.TemplateParser.PostProcessMainDirectiveAttributes(IDictionary parseData) [HttpParseException]: 'RequestToken' is not allowed here because it does not extend class 'System.Web.UI.Page'. at System.Web.UI.TemplateParser.ProcessException(Exception ex) at System.Web.UI.TemplateParser.ParseStringInternal(String text, Encoding fileEncoding) at System.Web.UI.TemplateParser.ParseString(String text, VirtualPath virtualPath, Encoding fileEncoding) [HttpParseException]: 'RequestToken' is not allowed here because it does not extend class 'System.Web.UI.Page'. 
    at System.Web.UI.TemplateParser.ParseString(String text, VirtualPath virtualPath, Encoding fileEncoding)
    at System.Web.UI.TemplateParser.ParseReader(StreamReader reader, VirtualPath virtualPath)
    at System.Web.UI.TemplateParser.ParseFile(String physicalPath, VirtualPath virtualPath)
    at System.Web.UI.TemplateParser.ParseInternal()
    at System.Web.UI.TemplateParser.Parse()
    at System.Web.UI.TemplateParser.Parse(ICollection referencedAssemblies, VirtualPath virtualPath)
    at System.Web.Compilation.BaseTemplateBuildProvider.get_CodeCompilerType()
    at System.Web.Compilation.BuildProvider.GetCompilerTypeFromBuildProvider(BuildProvider buildProvider)
    at System.Web.Compilation.BuildProvidersCompiler.ProcessBuildProviders()
    at System.Web.Compilation.BuildProvidersCompiler.PerformBuild()
    at System.Web.Compilation.BuildManager.CompileWebFile(VirtualPath virtualPath)
    at System.Web.Compilation.BuildManager.GetVPathBuildResultInternal(VirtualPath virtualPath, Boolean noBuild, Boolean allowCrossApp, Boolean allowBuildInPrecompile)
    at System.Web.Compilation.BuildManager.GetVPathBuildResultWithNoAssert(HttpContext context, VirtualPath virtualPath, Boolean noBuild, Boolean allowCrossApp, Boolean allowBuildInPrecompile)
    at System.Web.Compilation.BuildManager.GetVirtualPathObjectFactory(VirtualPath virtualPath, HttpContext context, Boolean allowCrossApp, Boolean noAssert)
    at System.Web.Compilation.BuildManager.CreateInstanceFromVirtualPath(VirtualPath virtualPath, Type requiredBaseType, HttpContext context, Boolean allowCrossApp, Boolean noAssert)
    at System.Web.UI.PageHandlerFactory.GetHandlerHelper(HttpContext context, String requestType, VirtualPath virtualPath, String physicalPath)
    at System.Web.UI.PageHandlerFactory.System.Web.IHttpHandlerFactory2.GetHandler(HttpContext context, String requestType, VirtualPath virtualPath, String physicalPath)
    at System.Web.HttpApplication.MapHttpHandler(HttpContext context, String requestType, VirtualPath path, String pathTranslated, Boolean useAppConfig)
    at System.Web.HttpApplication.MapHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute()
    at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously)
    I've noticed the oauth_token GET parameter is empty. On tracing this, the error source is line 12 of the Default.aspx.cs page: IToken requestToken = session.GetRequestToken();

    protected void oauthRequest_Click(object sender, EventArgs e)
    {
        OAuthSession session = CreateSession();
        IToken requestToken = session.GetRequestToken();
        if (string.IsNullOrEmpty(requestToken.Token))
        {
            throw new Exception("The request token was null or empty");
        }
        Session[requestToken.Token] = requestToken;
        string callBackUrl = "http://localhost:" + HttpContext.Current.Request.Url.Port + "/Callback.aspx";
        string authorizationUrl = session.GetUserAuthorizationUrlForToken(requestToken, callBackUrl);
        Response.Redirect(authorizationUrl, true);
    }

    I'm not sure whether this has to do with configuring the service endpoints; I'm running the consumer project from VS2008 and hosting the service on IIS. Please advise.
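    Reading the parse error again, it looks like the provider-side failure may be unrelated to OAuth itself: the class behind RequestToken.aspx does not derive from System.Web.UI.Page, so the page never compiles, which would also explain why no token comes back. A minimal sketch of what the parser expects; only the class name is taken from the error, the rest is my assumption:

    // RequestToken.aspx.cs - hypothetical code-behind. The class named in the
    // @Page directive's Inherits attribute must extend System.Web.UI.Page.
    using System;

    public partial class RequestToken : System.Web.UI.Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            // provider-side request-token handling would go here
        }
    }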

    Read the article

  • MySQL performance problem & failed DIMM

    - by murdoch
    Hi, I have a dedicated MySQL database server which has been having some performance problems recently: under normal load the server runs fine, then suddenly, out of the blue, the performance falls off a cliff. The server isn't using the swap file and there is 12GB of RAM in the server, more than enough for its needs. After I contacted my hosting company's support, they discovered that there is a failed 2GB DIMM in the server and have scheduled to replace it tomorrow morning. My question is: could a failed DIMM result in the performance problems I am seeing, or is this just coincidence? My worry is that they will replace the RAM tomorrow but the problems will persist and I will still be lost for explanations, so I am just trying to think ahead. The reason I ask is that there is plenty of RAM in the server, more than required, and simply missing 2GB shouldn't be a problem; so if this failed DIMM is causing these performance problems, then the OS must be trying to access the failed DIMM and slowing down as a result. Does that sound like a credible explanation? This is what Dell's omreport program says about the RAM; notice one DIMM is "Critical": Memory Information Health : Critical Memory Operating Mode Fail Over State : Inactive Memory Operating Mode Configuration : Optimizer Attributes of Memory Array(s) Attributes : Location Memory Array 1 : System Board or Motherboard Attributes : Use Memory Array 1 : System Memory Attributes : Installed Capacity Memory Array 1 : 12288 MB Attributes : Maximum Capacity Memory Array 1 : 196608 MB Attributes : Slots Available Memory Array 1 : 18 Attributes : Slots Used Memory Array 1 : 6 Attributes : ECC Type Memory Array 1 : Multibit ECC Total of Memory Array(s) Attributes : Total Installed Capacity Value : 12288 MB Attributes : Total Installed Capacity Available to the OS Value : 12004 MB Attributes : Total Maximum Capacity Value : 196608 MB Details of Memory Array 1 Index : 0 Status : Ok Connector Name : DIMM_A1 Type : DDR3-Registered Size : 2048 MB Index : 1 Status : Ok Connector Name : DIMM_A2 Type : DDR3-Registered Size : 2048 MB Index : 2 Status : Ok Connector Name : DIMM_A3 Type : DDR3-Registered Size : 2048 MB Index : 3 Status : Critical Connector Name : DIMM_B1 Type : DDR3-Registered Size : 2048 MB Index : 4 Status : Ok Connector Name : DIMM_B2 Type : DDR3-Registered Size : 2048 MB Index : 5 Status : Ok Connector Name : DIMM_B3 Type : DDR3-Registered Size : 2048 MB The command free -m shows this; the server seems to be using more than 10GB of RAM, which would suggest it is trying to use the DIMM: total used free shared buffers cached Mem: 12004 10766 1238 0 384 4809 -/+ buffers/cache: 5572 6432 Swap: 2047 0 2047 iostat output while the problem is occurring: avg-cpu: %user %nice %system %iowait %steal %idle 52.82 0.00 11.01 0.00 0.00 36.17 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn sda 47.00 0.00 576.00 0 576 sda1 0.00 0.00 0.00 0 0 sda2 1.00 0.00 32.00 0 32 sda3 0.00 0.00 0.00 0 0 sda4 0.00 0.00 0.00 0 0 sda5 46.00 0.00 544.00 0 544 avg-cpu: %user %nice %system %iowait %steal %idle 53.12 0.00 7.81 0.00 0.00 39.06 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn sda 49.00 0.00 592.00 0 592 sda1 0.00 0.00 0.00 0 0 sda2 0.00 0.00 0.00 0 0 sda3 0.00 0.00 0.00 0 0 sda4 0.00 0.00 0.00 0 0 sda5 49.00 0.00 592.00 0 592 avg-cpu: %user %nice %system %iowait %steal %idle 56.09 0.00 7.43 0.37 0.00 36.10 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn sda 232.00 0.00 64520.00 0 64520 sda1 0.00 0.00 0.00 0 0 sda2 159.00 0.00 63728.00 0 63728 sda3
0.00 0.00 0.00 0 0 sda4 0.00 0.00 0.00 0 0 sda5 73.00 0.00 792.00 0 792 avg-cpu: %user %nice %system %iowait %steal %idle 52.18 0.00 9.24 0.06 0.00 38.51 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn sda 49.00 0.00 600.00 0 600 sda1 0.00 0.00 0.00 0 0 sda2 0.00 0.00 0.00 0 0 sda3 0.00 0.00 0.00 0 0 sda4 0.00 0.00 0.00 0 0 sda5 49.00 0.00 600.00 0 600 avg-cpu: %user %nice %system %iowait %steal %idle 54.82 0.00 8.64 0.00 0.00 36.55 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn sda 100.00 0.00 2168.00 0 2168 sda1 0.00 0.00 0.00 0 0 sda2 0.00 0.00 0.00 0 0 sda3 0.00 0.00 0.00 0 0 sda4 0.00 0.00 0.00 0 0 sda5 100.00 0.00 2168.00 0 2168 avg-cpu: %user %nice %system %iowait %steal %idle 54.78 0.00 6.75 0.00 0.00 38.48 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn sda 84.00 0.00 896.00 0 896 sda1 0.00 0.00 0.00 0 0 sda2 0.00 0.00 0.00 0 0 sda3 0.00 0.00 0.00 0 0 sda4 0.00 0.00 0.00 0 0 sda5 84.00 0.00 896.00 0 896 avg-cpu: %user %nice %system %iowait %steal %idle 54.34 0.00 7.31 0.00 0.00 38.35 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn sda 81.00 0.00 840.00 0 840 sda1 0.00 0.00 0.00 0 0 sda2 0.00 0.00 0.00 0 0 sda3 0.00 0.00 0.00 0 0 sda4 0.00 0.00 0.00 0 0 sda5 81.00 0.00 840.00 0 840 avg-cpu: %user %nice %system %iowait %steal %idle 55.18 0.00 5.81 0.44 0.00 38.58 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn sda 317.00 0.00 105632.00 0 105632 sda1 0.00 0.00 0.00 0 0 sda2 224.00 0.00 104672.00 0 104672 sda3 0.00 0.00 0.00 0 0 sda4 0.00 0.00 0.00 0 0 sda5 93.00 0.00 960.00 0 960 avg-cpu: %user %nice %system %iowait %steal %idle 55.38 0.00 7.63 0.00 0.00 36.98 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn sda 74.00 0.00 800.00 0 800 sda1 0.00 0.00 0.00 0 0 sda2 0.00 0.00 0.00 0 0 sda3 0.00 0.00 0.00 0 0 sda4 0.00 0.00 0.00 0 0 sda5 74.00 0.00 800.00 0 800 avg-cpu: %user %nice %system %iowait %steal %idle 56.43 0.00 7.80 0.00 0.00 35.77 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn sda 72.00 0.00 784.00 0 784 sda1 0.00 0.00 0.00 0 0 sda2 0.00 0.00 0.00 0 0 sda3 0.00 0.00 0.00 0 0 sda4 0.00 0.00 0.00 0 0 sda5 72.00 0.00 784.00 0 784 avg-cpu: %user %nice %system %iowait %steal %idle 54.87 0.00 6.49 0.00 0.00 38.64 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn sda 80.20 0.00 855.45 0 864 sda1 0.00 0.00 0.00 0 0 sda2 0.00 0.00 0.00 0 0 sda3 0.00 0.00 0.00 0 0 sda4 0.00 0.00 0.00 0 0 sda5 80.20 0.00 855.45 0 864 avg-cpu: %user %nice %system %iowait %steal %idle 57.22 0.00 5.69 0.00 0.00 37.09 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn sda 33.00 0.00 432.00 0 432 sda1 0.00 0.00 0.00 0 0 sda2 0.00 0.00 0.00 0 0 sda3 0.00 0.00 0.00 0 0 sda4 0.00 0.00 0.00 0 0 sda5 33.00 0.00 432.00 0 432 avg-cpu: %user %nice %system %iowait %steal %idle 56.03 0.00 7.93 0.00 0.00 36.04 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn sda 41.00 0.00 560.00 0 560 sda1 0.00 0.00 0.00 0 0 sda2 2.00 0.00 88.00 0 88 sda3 0.00 0.00 0.00 0 0 sda4 0.00 0.00 0.00 0 0 sda5 39.00 0.00 472.00 0 472 avg-cpu: %user %nice %system %iowait %steal %idle 55.78 0.00 5.13 0.00 0.00 39.09 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn sda 29.00 0.00 392.00 0 392 sda1 0.00 0.00 0.00 0 0 sda2 0.00 0.00 0.00 0 0 sda3 0.00 0.00 0.00 0 0 sda4 0.00 0.00 0.00 0 0 sda5 29.00 0.00 392.00 0 392 avg-cpu: %user %nice %system %iowait %steal %idle 53.68 0.00 8.30 0.06 0.00 37.95 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn sda 78.00 0.00 4280.00 0 4280 sda1 0.00 0.00 0.00 0 0 sda2 0.00 0.00 0.00 0 0 sda3 0.00 0.00 0.00 0 0 sda4 0.00 0.00 0.00 0 0 sda5 78.00 0.00 4280.00 0 4280

    Read the article

  • I'm trying to run some PHP scripts as CLI instead of over HTTP. How do I make them play nice?

    - by gnfti
    Hi everyone. I'm using some PHP scripts from FeedForAll to join together RSS feeds (RSSmesh) and display them as HTML (RSS2HTML). Because I intend to run these scripts fairly intensively and don't want the resulting HTTP requests and bandwidth to count towards my hosting quota, I am in the process of moving to running them on the web host's server in an umbrella PHP "batch" script, and call this script via cron (this is a Linux server, by the way). Here's a (working) sample request over HTTP: http://www.mydomain.com/a/rss2htmlcore/rss2html2.php?XMLFILE=http://www.mydomain.com/a/myapp/xmlcache/feed.xml&TEMPLATE=template.html This will produce the desired HTML output. An example of how I want this to work on the command line: /srv/customers/mycustomer#/mydomain.com/www/a/rss2htmlcore/rss2html2-cli.php /srv/customers/mycustomer#/mydomain.com/www/a/myapp/xmlcache/feed.xml /srv/customers/mycustomer#/mydomain.com/www/a/template.html This is with the correct shebang line added to "rss2html2-cli.php". I could just as well specify the executable ("/usr/local/bin/php") in the request, I doubt it makes a difference because I am able to run another script (that I wrote myself) either way without problems. Now, RSS2HTML and RSSmesh are different in that, for starters, they include secondary files -- for example, both include an XML parser script -- and I suspect that this is where I am getting a bit in over my head. Right now I'm calling exec() from the "umbrella" batch script, like so: exec("/srv/customers/mycustomer#/mydomain.com/www/a/rss2htmlcore/rss2html2-cli.php /srv/customers/mycustomer#/mydomain.com/www/a/myapp/xmlcache/feed.xml /srv/customers/mycustomer#/mydomain.com/www/a/template.html", $output) But no output is being produced. What's the best way to go about this and what "gotchas" should I keep in mind? Is exec() the right way to approach this? It works fine for the other (simple) script but that writes its own output. For this I want to get the output and write it to a file from within the umbrella script if possible. I've also tried output buffering but to no avail. Do I need to pay attention to anything specific with regard to the includes? Right now they're specified in the scripts as include_once("FeedForAll_XMLParser.inc.php"); and the specified files are indeed in the same folder. Further info: -This is a Linux server. -I have no direct access to the shell, so I can't test things directly on a command line, everything is via crontab. -I will admit that support for the FeedForAll scripts leaves a lot to be desired, but I'd like to keep using their scripts if at all possible, if only because I know them and have been using them for a while. I have looked into Simplepie, but the FFA scripts do some things that I've seen no obvious solutions for with Simplepie, like limiting the number of items per individual feed (RSSmesh) or limiting the description length (RSS2HTML). -Yahoo! Pipes is out, they cache their data for too long for my application. Should you want to take a look at the code, here are the scripts as txt files. RSS2HTML2 and RSSmesh are the FeedForAll scripts, FeedForAll_XMLParser... is the included parser. Note that I have not yet amended these to handle $argv etc. I have however in "scraper-universal-rss-cli", which works fine with CLI. If anyone has any thoughts to share on this it would be very much appreciated. Thank you in advance.

    Read the article

  • How to convert html textfield/area data to server-side txt file? [closed]

    - by olijake
    How can I make a script that will convert the text/data in an HTML textfield/textarea and send it to the server, which then saves it as a .txt file for storage? NOTE: I am hosting a website (for testing purposes) using Apache 2.2 on a Windows 7 machine. I downloaded PHP version 5.4.7, but have not yet installed it on my server (not sure if I will need it, but also not sure how to install it). 1st problem: Saving text to server Html page/section with title textfield, text textarea, and submit button. You would enter a title, the text/notes you need in the textfield, then press the submit button to have it store the text in the textarea, as a .txt file on the server called .txt. 2nd problem: Opening text from server Html with list of all txt files OR textfield for entering in title, then submit button to send the title of the requested .txt file to the server, which would then load it up on the page. Here is what I have so far (let me know if there is something that I should change or if something just isn't correct in the index.html code I have right now): <!DOCTYPE HTML> <html> <head> <title>Insert Title</title> <meta http-equiv="Content-Type" content="Text/HTML; charset=UTF-8"/> </head> <body> <form method="post" action="save.INSERT_FILETYPE" name="textfile" enctype="multipart/form-data"> <input type="text" name="title"><br/> <textarea rows="20" cols="100" id="text" name="text"></textarea><br/> <input type="submit" name="submit" value="Submit Text to Server"> </form><br/> <hr style="width: 100%; height: 4px;"><br/> <form method="post" action="open.INSERT_FILETYPE" name="textfile" enctype="multipart/form-data"> <input type="text" name="title"><br/> <input type="submit" name="submit" value="Submit Txt File Request"> </form><br/> <div>Opened text file displays here or goes on another page</div> </body> </html> I plan on using a server-side language/script, but ANYTHING that gets the job done is fine. I already tried looking into using some ASP/jScript/PHP, but have had some trouble implementing it into my server (i.e. getting the modules loaded and telling the server what file types to parse). I know this may be an extremely easy fix, but in that case, hopefully you wouldn't mind helping me out a little :). If it turns out that this is MUCH more complicated than I expect, then feel free to let me know that, so I don't waste my time running in circles. I appreciate any help/assistance that you can provide, Thanks, Jake EDIT: Wrong Apache version. In response to the comments/closing of this thread: My question: "How exactly do I install the PHP module on the apache server? and is this even possible? and is this even recommended?" ^ In case I wasn't clear enough already. To Clarify: I understand the basics of PHP, I just have trouble with INSTALLING PHP on the apache server. (I have used PHP before, but never successfully on apache (so far...)) For my script I wrote something similar to this already (using fopen() and a few other commands): <?php fopen("notes.txt", "r"); file_put_contents("notes.txt",teststring1); ?> I have used javascript for this task before also (although I prefer using PHP and server-side languages): <script language="javascript"> function WriteToFile(){ var fso = new ActiveXObject("Scripting.FileSystemObject"); var s = fso.CreateTextFile("C:\\NewFile.txt", true); var text=document.getElementById("TextArea1").innerText; s.WriteLine(text); s.WriteLine('***********************'); s.Close(); } </script>

    Read the article

  • WCF endpoint exception

    - by Lijo
    Hi Team, I am just trying out various WCF (in .Net 3.0) scenarios. I am using self-hosting. I am getting an exception: "Service 'MyServiceLibrary.NameDecorator' has zero application (non-infrastructure) endpoints. This might be because no configuration file was found for your application, or because no service element matching the service name could be found in the configuration file, or because no endpoints were defined in the service element." I have a config file as follows (which has an endpoint):

    <?xml version="1.0" encoding="utf-8" ?>
    <configuration>
      <system.serviceModel>
        <services>
          <service name="Lijo.Samples.NameDecorator" behaviorConfiguration="WeatherServiceBehavior">
            <host>
              <baseAddresses>
                <add baseAddress="http://localhost:8010/ServiceModelSamples/FreeServiceWorld"/>
              </baseAddresses>
            </host>
            <endpoint address="" binding="wsHttpBinding" contract="Lijo.Samples.IElementaryService" />
            <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" />
          </service>
        </services>
        <behaviors>
          <serviceBehaviors>
            <behavior name="WeatherServiceBehavior">
              <serviceMetadata httpGetEnabled="true"/>
              <serviceDebug includeExceptionDetailInFaults="False"/>
            </behavior>
          </serviceBehaviors>
        </behaviors>
      </system.serviceModel>
    </configuration>

    And a Host as:

    using System.ServiceModel;
    using System.ServiceModel.Dispatcher;
    using System.ServiceModel.Channels;
    using System.ServiceModel.Description;
    using System.Runtime.Serialization;

    namespace MySelfHostConsoleApp
    {
        class Program
        {
            static void Main(string[] args)
            {
                System.ServiceModel.ServiceHost myHost = new ServiceHost(typeof(MyServiceLibrary.NameDecorator));
                myHost.Open();
                Console.ReadLine();
            }
        }
    }

    My Service is as follows:

    using System.ServiceModel;
    using System.Runtime.Serialization;

    namespace MyServiceLibrary
    {
        [ServiceContract(Namespace = "http://Lijo.Samples")]
        public interface IElementaryService
        {
            [OperationContract]
            CompanyLogo GetLogo();
        }

        public class NameDecorator : IElementaryService
        {
            public CompanyLogo GetLogo()
            {
                CircleType circle = new CircleType();
                CompanyLogo logo = new CompanyLogo(circle);
                return logo;
            }
        }

        [DataContract]
        public abstract class IShape
        {
            public abstract string SelfExplain();
        }

        [DataContract(Name = "Circle")]
        public class CircleType : IShape
        {
            public override string SelfExplain()
            {
                return "I am a Circle";
            }
        }

        [DataContract(Name = "Triangle")]
        public class TriangleType : IShape
        {
            public override string SelfExplain()
            {
                return "I am a Triangle";
            }
        }

        [DataContract]
        [KnownType(typeof(CircleType))]
        [KnownType(typeof(TriangleType))]
        public class CompanyLogo
        {
            private IShape m_shapeOfLogo;

            [DataMember]
            public IShape ShapeOfLogo
            {
                get { return m_shapeOfLogo; }
                set { m_shapeOfLogo = value; }
            }

            public CompanyLogo(IShape shape)
            {
                m_shapeOfLogo = shape;
            }
        }
    }

    Could you please help me to understand what I am missing here? Thanks, Lijo
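    A note on the snippets above: ServiceHost looks up the <service> element by the CLR type's full name (MyServiceLibrary.NameDecorator), while the config registers Lijo.Samples.NameDecorator; the Namespace on ServiceContract only changes the XML namespace, not the type name. A minimal sketch that sidesteps the config lookup entirely by wiring the endpoint in code:

    // Add the endpoint programmatically so the host does not depend on
    // finding a matching <service name="..."> element in the config file.
    var host = new ServiceHost(typeof(MyServiceLibrary.NameDecorator),
        new Uri("http://localhost:8010/ServiceModelSamples/FreeServiceWorld"));
    host.AddServiceEndpoint(typeof(MyServiceLibrary.IElementaryService),
        new WSHttpBinding(), "");
    host.Open();

    Alternatively, renaming the service and contract entries in the config to the full CLR type names should let the original host code find them.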

    Read the article

  • Email sent from server with rDNS & SPF being blocked by Hotmail

    - by Canadaka
    I have been unable to send email to users on hotmail or other Microsoft email servers for some time. It's been a major headache trying to find out why and how to fix the issue. The blocked emails are sent from my domain canadaka.net. I use Google Apps to host my regular email service for my @canadaka.net email addresses. I can send email from my desktop or gmail to a hotmail address without any problem, but any email sent from my server on behalf of canadaka.net is blocked, not even arriving in the junk folder. The IP that the emails are being sent from is the same IP that my site is hosted on: 66.199.162.177. This IP is new to me since August 2010; I had a different IP for the previous 3-4 years. This IP is not on any credible spam lists: http://www.anti-abuse.org/multi-rbl-check-results/?host=66.199.162.177 The one list my IP is listed on, spamcannibal.org, seems to be out of my control; it says "no reverse DNS, MX host should have rDNS - RFC1912 2.1". But since I use Google for my email hosting, I don't have control over setting up rDNS for all the MX records. I do have reverse DNS set up for my IP, though; it resolves to "mail.canadaka.net". I have signed up for SNDS and was approved. My IP says "All of the specified IPs have normal status." Sender Score: 100 https://www.senderscore.org/lookup.php?lookup=66.199.162.177&ipLookup.x=55&ipLookup.y=14 My McAfee threat level seems fine. I have a TXT SPF record set up. I am currently using xname.org as my DNS, and they don't have a field for SPF, but their FAQ says to add the SPF info as a TXT entry: v=spf1 a include:_spf.google.com ~all Some "SPF checking" tools I've used detect that my domain has a valid SPF, but others, like Microsoft's SPF wizard, don't; I think this is because it's specifically looking for an SPF-type record rather than the TXT record. "No SPF Record Found. A and MX Records Available". From my home I can run "nslookup -type=TXT canadaka.net" and it returns:
    Server: google-public-dns-a.google.com
    Address: 8.8.8.8
    Non-authoritative answer:
    canadaka.net text = "v=spf1 a include:_spf.google.com ~all"
    One strange thing I found is that I'm unable to ping hotmail.com or msn.com or do a "telnet mail.hotmail.com 25". I am able to ping gmail.com and many other domains I tried. I tried changing my DNS servers to Google's Public DNS and did an ipconfig /flushdns, but that had no effect. I am, however, able to connect with telnet to mx1.hotmail.com. This is what the email headers look like when I send to a Google email server and receive the email with no troubles. You can see that SPF is passing.
    Delivered-To: [email protected]
    Received: by 10.146.168.12 with SMTP id q12cs91243yae; Sun, 27 Feb 2011 18:01:49 -0800 (PST)
    Received: by 10.43.48.7 with SMTP id uu7mr4292541icb.68.1298858509242; Sun, 27 Feb 2011 18:01:49 -0800 (PST)
    Return-Path:
    Received: from canadaka.net ([66.199.162.177]) by mx.google.com with ESMTP id uh9si8493137icb.127.2011.02.27.18.01.45; Sun, 27 Feb 2011 18:01:48 -0800 (PST)
    Received-SPF: pass (google.com: domain of [email protected] designates 66.199.162.177 as permitted sender) client-ip=66.199.162.177;
    Authentication-Results: mx.google.com; spf=pass (google.com: domain of [email protected] designates 66.199.162.177 as permitted sender) [email protected]
    Message-Id: <[email protected]
    Received: from coruscant ([127.0.0.1]:12907) by canadaka.net with [XMail 1.27 ESMTP Server] id for from ; Sun, 27 Feb 2011 18:01:29 -0800
    Date: Sun, 27 Feb 2011 18:01:29 -0800
    Subject: Test
    To: [email protected]
    From: XXXX
    Reply-To: [email protected]
    X-Mailer: PHP/5.2.13
    I can send to gmail and other email services fine. I don't know what I'm doing wrong!
    UPDATE 1: I have been removed from hotmail's IP block and am now able to send emails to hotmail, but they are all going directly to the JUNK folder.
    UPDATE 2: I used Telnet to send a test message to port25.com; it seems my SPF is not being detected:
    Result: neutral (SPF-Result: None)
    canadaka.net. SPF (no records)
    canadaka.net. TXT (no records)
    I do have a TXT record; it's been there for years, though I did change it a week ago. Other sites that allow you to check your SPF detect it, but some others, like Microsoft's wizard, don't. This is what my SPF record in my xname.org DNS file looks like: canadaka.net. 86400 IN TXT "v=spf1 a include:_spf.google.com ~all" I did have a nameserver as my 4th option that doesn't have the TXT records, since it doesn't support them. So I removed it from the list and instead added wtfdns.com as my 4th and 5th nameservers, which does support TXT.

    Read the article

  • Receiving POST data in ASP.NET

    - by grast
    Hi, I want to use ASP for code generation in a C# desktop application. To achieve this, I set up a simple host (derived from System.MarshalByRefObject) that processes a System.Web.Hosting.SimpleWorkerRequest via HttpRuntime.ProcessRequest. This processes the ASPX script specified by the incoming request (using System.Net.HttpListener to wait for requests). The client part is represented by a System.ComponentModel.BackgroundWorker that builds the System.Net.HttpWebRequest and receives the response from the server. A simplified version of my client-side code looks like this:

    private void SendRequest(object sender, DoWorkEventArgs e)
    {
        // create request with GET parameter
        var uri = "http://localhost:9876/test.aspx?getTest=321";
        var request = (HttpWebRequest)WebRequest.Create(uri);

        // append POST parameter
        request.Method = "POST";
        request.ContentType = "application/x-www-form-urlencoded";
        var postData = Encoding.Default.GetBytes("postTest=654");
        var postDataStream = request.GetRequestStream();
        postDataStream.Write(postData, 0, postData.Length);

        // send request, wait for response and store/print content
        using (var response = (HttpWebResponse)request.GetResponse())
        {
            using (var reader = new StreamReader(response.GetResponseStream(), Encoding.UTF8))
            {
                _processsedContent = reader.ReadToEnd();
                Debug.Print(_processsedContent);
            }
        }
    }

    My server-side code looks like this (without exception-handling etc.):

    public void ProcessRequests()
    {
        // HttpListener at http://localhost:9876/
        var listener = SetupListener();
        // SimpleHost created by ApplicationHost.CreateApplicationHost
        var host = SetupHost();
        while (_running)
        {
            var context = listener.GetContext();
            using (var writer = new StreamWriter(context.Response.OutputStream))
            {
                // process ASP script and send response back to client
                host.ProcessRequest(GetPage(context), GetQuery(context), writer);
            }
            context.Response.Close();
        }
    }

    So far all this works fine as long as I just use GET parameters. But when it comes to receiving POST data in my ASPX script, I run into trouble. For testing I use the following script:

    // GET parameters are working:
    var getTest = Request.QueryString["getTest"];
    Response.Write("getTest: " + getTest); // prints "getTest: 321"

    // don't know how to access POST parameters:
    var postTest1 = Request.Form["postTest"]; // Request.Form is empty?!
    Response.Write("postTest1: " + postTest1); // so this prints "postTest1: "
    var postTest2 = Request.Params["postTest"]; // Request.Params is empty?!
    Response.Write("postTest2: " + postTest2); // so this prints "postTest2: "

    It seems that the System.Web.HttpRequest object I'm dealing with in ASP does not contain any information about my POST parameter "postTest". I inspected it in debug mode, and none of the members contained either the parameter name "postTest" or the parameter value "654". I also tried the BinaryRead method of Request, but unfortunately it is empty. This corresponds to Request.InputStream==null and Request.ContentLength==0. And to make things really confusing, the Request.HttpMethod member is set to "GET"?! To isolate the problem I tested the code by using a PHP script instead of the ASPX script. This is very simple: print_r($_GET); // prints all GET variables print_r($_POST); // prints all POST variables And the result is: Array ( [getTest] = 321 ) Array ( [postTest] = 654 ) So with the PHP script it works; I can access the POST data. Why doesn't the ASPX script work? What am I doing wrong? Is there a special accessor or method in the Response object?
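    My current suspicion, sketched below but unverified: SimpleWorkerRequest never forwards an entity body or the POST verb to the hosted runtime, which matches the symptoms above (HttpMethod == "GET", ContentLength == 0). A subclass that overrides the body-related members might be one way through; the class name is mine, but the overridden members are standard HttpWorkerRequest virtuals:

    using System.IO;
    using System.Web;
    using System.Web.Hosting;

    // Hypothetical subclass: pushes a POST body into the hosted request,
    // which a plain SimpleWorkerRequest never does.
    class PostAwareWorkerRequest : SimpleWorkerRequest
    {
        private readonly byte[] body;

        public PostAwareWorkerRequest(string page, string query, TextWriter output, byte[] body)
            : base(page, query, output)
        {
            this.body = body ?? new byte[0];
        }

        public override string GetHttpVerbName()
        {
            return body.Length > 0 ? "POST" : base.GetHttpVerbName();
        }

        public override byte[] GetPreloadedEntityBody()
        {
            return body;
        }

        public override bool IsEntireEntityBodyIsPreloaded()
        {
            return true;
        }

        public override string GetKnownRequestHeader(int index)
        {
            // advertise the body so Request.Form will parse it
            if (index == HeaderContentLength) return body.Length.ToString();
            if (index == HeaderContentType) return "application/x-www-form-urlencoded";
            return base.GetKnownRequestHeader(index);
        }
    }

    HttpRuntime.ProcessRequest would then be handed this worker request, with the body bytes read from the HttpListener context's input stream.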
    Can anyone give a hint, or does anyone know how to solve this? Thanks in advance.

    Read the article

  • Issue with Silverlight interop with JBoss WebService

    - by Hadi Eskandari
    I have a simple JAXWS webservice deployed in JBoss. It runs fine with a java client but I'm trying to connect using a Silverlight 3.0 application. I've changed the webservice to use Soap 1.1: @BindingType(value = "http://schemas.xmlsoap.org/wsdl/soap/http") public class UserSessionBean implements UserSessionRemote { ... } I'm using BasicHttpBinding on the Silverlight client. There are two issues: 1- When I connect from VisualStudio (2008 and 2010) to create the webservice proxies, the following exception is thrown, but the proxy is generated successfully. This also happens when I try to update the existing web service reference (but it also updates fine). com.sun.xml.ws.server.UnsupportedMediaException: Unsupported Content-Type: application/soap+xml; charset=utf-8 Supported ones are: [text/xml] at com.sun.xml.ws.encoding.StreamSOAPCodec.decode(StreamSOAPCodec.java:291) at com.sun.xml.ws.encoding.StreamSOAPCodec.decode(StreamSOAPCodec.java:128) at com.sun.xml.ws.encoding.SOAPBindingCodec.decode(SOAPBindingCodec.java:287) at com.sun.xml.ws.transport.http.HttpAdapter.decodePacket(HttpAdapter.java:276) at com.sun.xml.ws.transport.http.HttpAdapter.access$500(HttpAdapter.java:93) at com.sun.xml.ws.transport.http.HttpAdapter$HttpToolkit.handle(HttpAdapter.java:432) at com.sun.xml.ws.transport.http.HttpAdapter.handle(HttpAdapter.java:244) at com.sun.xml.ws.transport.http.servlet.ServletAdapter.handle(ServletAdapter.java:135) at org.jboss.wsf.stack.metro.RequestHandlerImpl.doPost(RequestHandlerImpl.java:225) at org.jboss.wsf.stack.metro.RequestHandlerImpl.handleHttpRequest(RequestHandlerImpl.java:82) at org.jboss.wsf.common.servlet.AbstractEndpointServlet.service(AbstractEndpointServlet.java:85) at javax.servlet.http.HttpServlet.service(HttpServlet.java:803) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.jboss.web.tomcat.filters.ReplyHeaderFilter.doFilter(ReplyHeaderFilter.java:96) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:230) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:175) at org.jboss.web.tomcat.security.SecurityAssociationValve.invoke(SecurityAssociationValve.java:182) at org.jboss.web.tomcat.security.JaccContextValve.invoke(JaccContextValve.java:84) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102) at org.jboss.web.tomcat.service.jca.CachedConnectionValve.invoke(CachedConnectionValve.java:157) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:262) at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:844) at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:583) at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:446) at java.lang.Thread.run(Thread.java:619) 2- When I use the proxy to fetch some data from the webservice (even methods with primitive types), I get the following error on Silverlight client: "An error occurred while trying to make a request to URI 
'http://localhost:9090/admintool/UserSessionEJB'. This could be due to attempting to access a service in a cross-domain way without a proper cross-domain policy in place, or a policy that is unsuitable for SOAP services. You may need to contact the owner of the service to publish a cross-domain policy file and to ensure it allows SOAP-related HTTP headers to be sent. This error may also be caused by using internal types in the web service proxy without using the InternalsVisibleToAttribute attribute. Please see the inner exception for more details." Setting a breakpoint on my java code, I can see that it is not hit when I run the silverlight client, so it is probably a cross domain issue, but I'm not sure how to handle it (I've already created a crossdomain.xml file and put it beside my HTML page hosting the silverlight client). I appreciate any help!
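    One assumption worth testing, since the breakpoint is never hit: Silverlight does not read a crossdomain.xml placed next to the hosting page; it fetches clientaccesspolicy.xml (falling back to crossdomain.xml) from the root of the service's domain - here that would be http://localhost:9090/clientaccesspolicy.xml. A minimal sketch that writes a deliberately permissive policy (for testing only) into an assumed web root; the path is hypothetical and should point at whatever serves the service domain's root:

    using System.IO;

    class PolicyFileWriter
    {
        static void Main()
        {
            // Permissive test policy: allows any caller, including SOAP headers.
            const string policy =
    @"<?xml version=""1.0"" encoding=""utf-8""?>
    <access-policy>
      <cross-domain-access>
        <policy>
          <allow-from http-request-headers=""SOAPAction"">
            <domain uri=""*""/>
          </allow-from>
          <grant-to>
            <resource path=""/"" include-subpaths=""true""/>
          </grant-to>
        </policy>
      </cross-domain-access>
    </access-policy>";
            // Assumed deployment root for whatever answers http://localhost:9090/
            File.WriteAllText(@"C:\jboss\webroot\clientaccesspolicy.xml", policy);
        }
    }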

    Read the article

  • 500 Internal Server Error with PHP application

    - by James
    I have written a PHP application using Windows and XAMPP. I've been trying to run it on Ubuntu 10.10 with Lighttpd 1.4.26. Parts of the application work fine, but whenever I try to log in, I get a 500 - Internal Server Error page. The only thing that shows up in /var/log/lighttpd/error.log is 2011-02-25 13:43:13: (mod_fastcgi.c.2582) unexpected end-of-file (perhaps the fastcgi process died): pid: 1169 socket: unix:/tmp/php.socket-0 2011-02-25 13:43:13: (mod_fastcgi.c.3367) response not received, request sent: 1596 on socket: unix:/tmp/php.socket-0 for /~denton/customer-facing-portal/index.php?, closing connection If I had any output whatsoever from PHP, this would be a lot easier to debug. Any ideas on how to get some? Here is my /etc/lighttpd/lighttpd.conf file: # Debian lighttpd configuration file # ############ Options you really have to take care of #################### ## modules to load server.modules = ( "mod_alias", "mod_compress", # "mod_rewrite", # "mod_redirect", # "mod_usertrack", # "mod_expire", # "mod_flv_streaming", # "mod_evasive", "mod_setenv" ) ## a static document-root, for virtual-hosting take look at the ## server.virtual-* options server.document-root = "/var/www/" ## where to upload files to, purged daily. server.upload-dirs = ( "/var/cache/lighttpd/uploads" ) ## where to send error-messages to server.errorlog = "/var/log/lighttpd/error.log" ## files to check for if .../ is requested index-file.names = ( "index.php", "index.html", "index.htm", "default.htm", "index.lighttpd.html" ) ## Use the "Content-Type" extended attribute to obtain mime type if possible # mimetype.use-xattr = "enable" ## # which extensions should not be handle via static-file transfer # # .php, .pl, .fcgi are most often handled by mod_fastcgi or mod_cgi static-file.exclude-extensions = ( ".php", ".pl", ".fcgi" ) ######### Options that are good to be but not neccesary to be changed ####### ## Use ipv6 only if available. 
(disabled for while, check #560837) #include_shell "/usr/share/lighttpd/use-ipv6.pl" ## bind to port (default: 80) # server.port = 81 ## bind to localhost only (default: all interfaces) ## server.bind = "localhost" ## error-handler for status 404 #server.error-handler-404 = "/error-handler.html" #server.error-handler-404 = "/error-handler.php" ## to help the rc.scripts server.pid-file = "/var/run/lighttpd.pid" ## ## Format: <errorfile-prefix><status>.html ## -> ..../status-404.html for 'File not found' #server.errorfile-prefix = "/var/www/" ## virtual directory listings dir-listing.encoding = "utf-8" server.dir-listing = "enable" ### only root can use these options # # chroot() to directory (default: no chroot() ) #server.chroot = "/" ## change uid to <uid> (default: don't change) server.username = "www-data" ## change gid to <gid> (default: don't change) server.groupname = "www-data" #### compress module compress.cache-dir = "/var/cache/lighttpd/compress/" compress.filetype = ("text/plain", "text/html", "application/x-javascript", "text/css") #### url handling modules (rewrite, redirect, access) # url.rewrite = ( "^/$" => "/server-status" ) # url.redirect = ( "^/wishlist/(.+)" => "http://www.123.org/$1" ) #### expire module # expire.url = ( "/buggy/" => "access 2 hours", "/asdhas/" => "access plus 1 seconds 2 minutes") #### external configuration files ## mimetype mapping include_shell "/usr/share/lighttpd/create-mime.assign.pl" ## load enabled configuration files, ## read /etc/lighttpd/conf-available/README first include_shell "/usr/share/lighttpd/include-conf-enabled.pl" ## Set environment variables setenv.add-environment = ( "DB_URL__DEMO" => "192.168.1.231", "DB_NAME_DEMO" => "demo", "DB_USER_DEMO" => "user", "DB_PASS_DEMO" => "password", "DB_AGENCY_DEMO" => "demo" ) Here is my /etc/php5/cgi/php.ini file (sans 1641 lines of comments): [PHP] register_long_arrays = Off short_open_tag = Off engine = On short_open_tag = Off asp_tags = Off precision = 14 y2k_compliance = On output_buffering = 4096 zlib.output_compression = Off implicit_flush = Off unserialize_callback_func = serialize_precision = 100 allow_call_time_pass_reference = Off safe_mode = Off safe_mode_gid = Off safe_mode_include_dir = safe_mode_exec_dir = safe_mode_allowed_env_vars = PHP_ safe_mode_protected_env_vars = LD_LIBRARY_PATH disable_functions = disable_classes = expose_php = On max_execution_time = 30 max_input_time = 60 memory_limit = 128M error_reporting = E_ALL & ~E_DEPRECATED & ~E_STRICT display_errors = On display_startup_errors = On log_errors = On log_errors_max_len = 1024 ignore_repeated_errors = Off ignore_repeated_source = Off report_memleaks = On track_errors = On html_errors = On variables_order = "GPCS" request_order = "GP" register_globals = Off register_long_arrays = Off register_argc_argv = Off auto_globals_jit = On post_max_size = 8M magic_quotes_gpc = Off magic_quotes_runtime = Off magic_quotes_sybase = Off auto_prepend_file = auto_append_file = default_mimetype = "text/html" doc_root = user_dir = enable_dl = Off cgi.fix_pathinfo=1 file_uploads = On upload_max_filesize = 2M max_file_uploads = 20 allow_url_fopen = On allow_url_include = Off default_socket_timeout = 60 [Date] date.timezone = "America/Chicago" [filter] [iconv] [intl] [sqlite] [sqlite3] [Pcre] [Pdo] [Pdo_mysql] pdo_mysql.cache_size = 2000 pdo_mysql.default_socket= [Phar] [Syslog] define_syslog_variables = Off [mail function] SMTP = localhost smtp_port = 25 mail.add_x_header = On [SQL] sql.safe_mode = Off [ODBC] odbc.allow_persistent = On 
odbc.check_persistent = On odbc.max_persistent = -1 odbc.max_links = -1 odbc.defaultlrl = 4096 odbc.defaultbinmode = 1 [Interbase] ibase.allow_persistent = 1 ibase.max_persistent = -1 ibase.max_links = -1 ibase.timestampformat = "%Y-%m-%d %H:%M:%S" ibase.dateformat = "%Y-%m-%d" ibase.timeformat = "%H:%M:%S" [MySQL] mysql.allow_local_infile = On mysql.allow_persistent = On mysql.cache_size = 2000 mysql.max_persistent = -1 mysql.max_links = -1 mysql.default_port = mysql.default_socket = mysql.default_host = mysql.default_user = mysql.default_password = mysql.connect_timeout = 60 mysql.trace_mode = Off [MySQLi] mysqli.max_persistent = -1 mysqli.allow_persistent = On mysqli.max_links = -1 mysqli.cache_size = 2000 mysqli.default_port = 3306 mysqli.default_socket = mysqli.default_host = mysqli.default_user = mysqli.default_pw = mysqli.reconnect = Off [mysqlnd] mysqlnd.collect_statistics = On mysqlnd.collect_memory_statistics = Off [OCI8] [PostgresSQL] pgsql.allow_persistent = On pgsql.auto_reset_persistent = Off pgsql.max_persistent = -1 pgsql.max_links = -1 pgsql.ignore_notice = 0 pgsql.log_notice = 0 [Sybase-CT] sybct.allow_persistent = On sybct.max_persistent = -1 sybct.max_links = -1 sybct.min_server_severity = 10 sybct.min_client_severity = 10 [bcmath] bcmath.scale = 0 [browscap] [Session] session.save_handler = files session.use_cookies = 1 session.use_only_cookies = 1 session.name = PHPSESSID session.auto_start = 0 session.cookie_lifetime = 0 session.cookie_path = / session.cookie_domain = session.cookie_httponly = session.serialize_handler = php session.gc_probability = 1 session.gc_divisor = 1000 session.gc_maxlifetime = 1440 session.bug_compat_42 = Off session.bug_compat_warn = Off session.referer_check = session.entropy_length = 0 session.cache_limiter = nocache session.cache_expire = 180 session.use_trans_sid = 0 session.hash_function = 0 session.hash_bits_per_character = 5 url_rewriter.tags = "a=href,area=href,frame=src,input=src,form=fakeentry" [MSSQL] mssql.allow_persistent = On mssql.max_persistent = -1 mssql.max_links = -1 mssql.min_error_severity = 10 mssql.min_message_severity = 10 mssql.compatability_mode = Off mssql.secure_connection = Off [Assertion] [COM] [mbstring] [gd] [exif] [Tidy] tidy.clean_output = Off [soap] soap.wsdl_cache_enabled=1 soap.wsdl_cache_dir="/tmp" soap.wsdl_cache_ttl=86400 soap.wsdl_cache_limit = 5 [sysvshm] [ldap] ldap.max_links = -1 [mcrypt] [dba] Update: here is /etc/lighttpd/conf-enabled/15-fastcgi-php.conf As far as I know, it's just the default config file the Ubuntu package installed. ## FastCGI programs have the same functionality as CGI programs, ## but are considerably faster through lower interpreter startup ## time and socketed communication ## ## Documentation: /usr/share/doc/lighttpd-doc/fastcgi.txt.gz ## http://redmine.lighttpd.net/projects/lighttpd/wiki/Docs:ConfigurationOptions#mod_fastcgi-fastcgi ## Start an FastCGI server for php (needs the php5-cgi package) fastcgi.server += ( ".php" => (( "bin-path" => "/usr/bin/php-cgi", "socket" => "/tmp/php.socket", "max-procs" => 1, "idle-timeout" => 20, "bin-environment" => ( "PHP_FCGI_CHILDREN" => "4", "PHP_FCGI_MAX_REQUESTS" => "10000" ), "bin-copy-environment" => ( "PATH", "SHELL", "USER" ), "broken-scriptfilename" => "enable" )) )

    Read the article

  • PHP framework question

    - by iconiK
    I'm currently working on a browser-based MMO and have chosen the LAMP stack because of the extremely low cost to start with in production (versus Windows + IIS + ASP.NET/C# + SQL Server, even though I have MSDN Universal). However, I will need a PHP framework for this, as it's no easy task. I am not restricted by anything other than the ability to run on Linux, as I will use a dedicated cloud hosting solution (and a VMWare image for development) and can configure it as needed. In no specific order:
    It has to be easily scalable; this is crucial. If the game becomes a steady success it will eventually outgrow the server beyond what the host provides and would have to be moved to several load-balanced servers. It is crucial that this can be done with minimum effort. I do know this might require following strict conventions, so if you know of any for your suggested framework, please explain what would be needed.
    It has to provide modules for all the core tasks: authentication, ACL, database access, MVC, and so on. One or two missing modules are fine, as long as they can easily be written and integrated.
    It should support internationalization. I think there is no excuse for any web framework not to provide means of translating the application and switching between languages without a lot of effort from the programmer.
    It must have very good community support and preferably commercial support as well. Yes, I do know QCodo/QCubed is nice, but it is not mature enough for this task.
    Smooth AJAX support is required. Whether the framework comes with AJAX-capable widgets or has an easy way of adding AJAX is not relevant, as long as AJAX is easily doable. I plan to use jQuery + Dojo or one of them alone - not exactly sure.
    Auto-magically doing stuff when it improves readability and relieves a lot of effort would be especially nice, if it is generally reliable and does not interfere with other requirements. This seems to be the case with CakePHP.
    I have read a lot of comparisons and I know it's a really hot debate. The general answer is "try and see for yourself what suits you". However, I can't say that is easy for this task, and I'm calling on your experience with building applications with similar requirements. So far I'm torn between Zend and CakePHP by the general criteria; however, all well-known frameworks offer the same functionality in some way or another, with different approaches, each with its own advantages and disadvantages. Edits: I am kinda new to MVC; however, I am willing to learn it, and I don't care if a framework is easier for those new to MVC. I have lots of time to learn MVC and any other architectures (or whatever they're called) you recommend. I will use Zend as a utility "framework", even though it's just a collection of libraries (some good ones though, as I have been told). Current PHP contenders are: CakePHP, Kohana, Zend alone.

    Read the article

  • WCF .svc Accessible Over HTTP But Accessing WSDL Causes "Connection Was Reset"

    - by Wolfwyrd
    I have a WCF service which is hosted on IIS6 on a Win2003 SP2 machine. The WCF service is hosted correctly and is visible over HTTP, giving the usual "To test this service, you will need to create a client and use it to call the service" message. However, accessing the .svc?wsdl link causes the connection to be reset. The server itself returns a 200 in the logs for the WSDL request. An example is shown here: the first entry is the WSDL request that got the connection reset; the second is a successful call for the .svc file.

        2010-04-09 11:00:21 W3SVC6 MACHINENAME 10.79.42.115 GET /IntegrationService.svc wsdl 80 - 10.75.33.71 HTTP/1.1 Mozilla/4.0+(compatible;+MSIE+7.0;+Windows+NT+5.1;+.NET+CLR+2.0.50727;+.NET+CLR+1.1.4322;+.NET+CLR+1.0.3705;+InfoPath.1;+.NET+CLR+3.0.04506.30;+MS-RTC+LM+8;+.NET+CLR+3.0.4506.2152;+.NET+CLR+3.5.30729;) - - devsitename.mydevdomain.com 200 0 0 0 696 3827
        2010-04-09 11:04:10 W3SVC6 MACHINENAME 10.79.42.115 GET /IntegrationService.svc - 80 - 10.75.33.71 HTTP/1.1 Mozilla/5.0+(Windows;+U;+Windows+NT+5.1;+en-GB;+rv:1.9.1.9)+Gecko/20100315+Firefox/3.5.9+(.NET+CLR+3.5.30729) - - devsitename.mydevdomain.com 200 0 0 3144 457 265

    My web.config looks like:

        <system.serviceModel>
          <serviceHostingEnvironment>
            <baseAddressPrefixFilters>
              <add prefix="http://devsitename.mydevdomain.com" />
            </baseAddressPrefixFilters>
          </serviceHostingEnvironment>
          <behaviors>
            <serviceBehaviors>
              <behavior name="My.Service.IntegrationServiceBehavior">
                <serviceMetadata httpGetEnabled="true" />
                <serviceDebug includeExceptionDetailInFaults="false" />
              </behavior>
            </serviceBehaviors>
          </behaviors>
          <services>
            <service behaviorConfiguration="My.Service.IntegrationServiceBehavior" name="My.Service.IntegrationService">
              <endpoint address="" binding="wsHttpBinding" contract="My.Service.Interfaces.IIntegrationService" bindingConfiguration="NoSecurityConfig" />
              <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" />
            </service>
          </services>
          <bindings>
            <wsHttpBinding>
              <binding name="NoSecurityConfig">
                <security mode="None" />
              </binding>
            </wsHttpBinding>
          </bindings>
        </system.serviceModel>

    I'm pretty much stumped by this one. It works fine through the local dev server in VS2008 but fails after deployment. For further information, the targeted machine does not have a firewall (it's on the internal network), and the logs show the site thinks it's fine, with 200 OK responses. I've also tried updating the endpoint with the full URL to my service, but it still makes no difference. I have looked into some of the other options - creating a separate WSDL manually and exposing it through the metadata properties (I really don't want to do that due to the maintenance issues). If anyone can offer any thoughts on this, or any other workarounds, it would be greatly appreciated.
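
    A quick way to narrow this down is to hit both URLs from outside IIS and see whether the failure surfaces as an HTTP status or as a transport error. A minimal sketch: only the host name and .svc path come from the logs above; the scheme and timeout are assumptions.

        # Probe the .svc page and its ?wsdl variant to separate an HTTP-level
        # error from a TCP-level reset. Host and path are from the question's
        # logs; everything else is an assumption.
        import urllib.error
        import urllib.request

        BASE = "http://devsitename.mydevdomain.com/IntegrationService.svc"

        for url in (BASE, BASE + "?wsdl"):
            try:
                with urllib.request.urlopen(url, timeout=15) as resp:
                    print(f"{url} -> HTTP {resp.status}, {len(resp.read())} bytes")
            except urllib.error.HTTPError as exc:
                print(f"{url} -> HTTP error {exc.code}")
            except OSError as exc:
                # A connection reset lands here rather than as a status code,
                # which would fit the IIS log showing 200: the response was
                # logged but the connection died on the way out.
                print(f"{url} -> transport failure: {exc}")

    If only the ?wsdl request ends in the transport branch while the .svc page succeeds, the metadata response itself (its size, or how baseAddressPrefixFilters rewrites the addresses inside it) becomes a reasonable suspect.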

    Read the article

  • Core Plot never stops asking for data, hangs device

    - by Ben Collins
    I'm trying to set up a core plot that looks somewhat like the AAPLot example on the core-plot wiki. I have set up my plot like this:

        - (void)initGraph:(CPXYGraph*)graph forDays:(NSUInteger)numDays {
            self.cplhView.hostedLayer = graph;

            graph.paddingLeft = 30.0;
            graph.paddingTop = 20.0;
            graph.paddingRight = 30.0;
            graph.paddingBottom = 20.0;

            CPXYPlotSpace *plotSpace = (CPXYPlotSpace*)graph.defaultPlotSpace;
            plotSpace.xRange = [CPPlotRange plotRangeWithLocation:CPDecimalFromFloat(0) length:CPDecimalFromFloat(numDays)];
            plotSpace.yRange = [CPPlotRange plotRangeWithLocation:CPDecimalFromFloat(0) length:CPDecimalFromFloat(1)];

            CPLineStyle *lineStyle = [CPLineStyle lineStyle];
            lineStyle.lineColor = [CPColor blackColor];
            lineStyle.lineWidth = 2.0f;

            // Axes
            NSLog(@"Setting up axes");
            CPXYAxisSet *xyAxisSet = (id)graph.axisSet;
            CPXYAxis *xAxis = xyAxisSet.xAxis;
            // xAxis.majorIntervalLength = CPDecimalFromFloat(7);
            // xAxis.minorTicksPerInterval = 7;
            CPXYAxis *yAxis = xyAxisSet.yAxis;
            // yAxis.majorIntervalLength = CPDecimalFromFloat(0.1);

            // Line plot with gradient fill
            NSLog(@"Setting up line plot");
            CPScatterPlot *dataSourceLinePlot = [[[CPScatterPlot alloc] initWithFrame:graph.bounds] autorelease];
            dataSourceLinePlot.identifier = @"Data Source Plot";
            dataSourceLinePlot.dataLineStyle = nil;
            dataSourceLinePlot.dataSource = self;
            [graph addPlot:dataSourceLinePlot];

            CPColor *areaColor = [CPColor colorWithComponentRed:1.0 green:1.0 blue:1.0 alpha:0.6];
            CPGradient *areaGradient = [CPGradient gradientWithBeginningColor:areaColor endingColor:[CPColor clearColor]];
            areaGradient.angle = -90.0f;
            CPFill *areaGradientFill = [CPFill fillWithGradient:areaGradient];
            dataSourceLinePlot.areaFill = areaGradientFill;
            dataSourceLinePlot.areaBaseValue = CPDecimalFromString(@"320.0");

            // OHLC plot
            NSLog(@"OHLC Plot");
            CPLineStyle *whiteLineStyle = [CPLineStyle lineStyle];
            whiteLineStyle.lineColor = [CPColor whiteColor];
            whiteLineStyle.lineWidth = 1.0f;
            CPTradingRangePlot *ohlcPlot = [[[CPTradingRangePlot alloc] initWithFrame:graph.bounds] autorelease];
            ohlcPlot.identifier = @"OHLC";
            ohlcPlot.lineStyle = whiteLineStyle;
            ohlcPlot.stickLength = 2.0f;
            ohlcPlot.plotStyle = CPTradingRangePlotStyleOHLC;
            ohlcPlot.dataSource = self;
            NSLog(@"Data source set, adding plot");
            [graph addPlot:ohlcPlot];
        }

    And my delegate methods like this:

        #pragma mark -
        #pragma mark CPPlotDataSource Methods

        - (NSUInteger)numberOfRecordsForPlot:(CPPlot *)plot {
            NSUInteger maxCount = 0;
            NSLog(@"Getting number of records.");
            if (self.data1 && [self.data1 count] > maxCount) {
                maxCount = [self.data1 count];
            }
            if (self.data2 && [self.data2 count] > maxCount) {
                maxCount = [self.data2 count];
            }
            if (self.data3 && [self.data3 count] > maxCount) {
                maxCount = [self.data3 count];
            }
            NSLog(@"%u records", maxCount);
            return maxCount;
        }

        - (NSNumber *)numberForPlot:(CPPlot *)plot field:(NSUInteger)fieldEnum recordIndex:(NSUInteger)index {
            NSLog(@"Getting record @ idx %u", index);
            return [NSNumber numberWithInt:index];
        }

    All the code above is in the view controller for the view hosting the plot, and when initGraph is called, numDays is 30. I realize, of course, that this plot, even if it worked, would look nothing like the AAPLot example. The problem I'm having is that the view is never shown. It finished loading, because viewDidLoad is the method that calls initGraph above, and the NSLog statements indicate that initGraph finishes. What's strange is that I return a value of 54 from numberOfRecordsForPlot, but the plot asks for more than 54 data points.
    In fact, it never stops asking. The NSLog statement in numberForPlot:field:recordIndex: prints forever, going from 0 to 54 and then looping back around and continuing. What's going on? Why won't the plot stop asking for data and draw itself?
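
    One thing worth separating out when reading that log: Core Plot asks the data source for each field of each record, once per plot, so a single draw pass sweeps the indices several times. A toy model of the call pattern (Python purely as illustration; the per-plot field counts are assumptions inferred from the plot types in the code above, not taken from Core Plot's headers):

        # Toy model of the Core Plot data-source call pattern: one
        # numberForPlot:field:recordIndex: call per field, per record,
        # per plot, per draw pass (field counts are assumptions).
        RECORDS = 54
        PLOTS = {
            "Data Source Plot": ["x", "y"],                 # scatter plot
            "OHLC": ["x", "open", "high", "low", "close"],  # trading range
        }

        calls = sum(len(fields) * RECORDS for fields in PLOTS.values())
        print(calls)  # 378: the indices 0..53 get swept seven times per pass

    So repeated 0-to-53 sweeps are expected within one pass; if the sweeps genuinely never terminate rather than repeating a bounded number of times, that would suggest the layer keeps invalidating and redrawing, which is a separate problem to chase.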

    Read the article
