Search Results

Search found 73134 results on 2926 pages for 'javax net ssl sslpeerunverifiedexception grid control'.

  • How to validate a MAC address & IP address on the client side (using JavaScript) & server side (using C#)

    - by amexn
    My team is doing a project related to networking using ASP.NET MVC (C#). I need to validate a MAC address and an IP address on the client side (using JavaScript) and on the server side (using C#), but I haven't found a good solution for either. I searched Google for a good user interface for entering a MAC address that inserts a colon after every two characters, e.g. "XX:XX:XX:XX:XX:XX", using a masked input. Please give a reference or guidance for implementing this. Is there any jQuery plugin for implementing a masked input?
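
    For the server side, a regular-expression check plus IPAddress.TryParse is a common approach. A minimal C# sketch (the class and patterns below are illustrative, not from the original post; verify the MAC pattern against your exact requirements):

        using System.Net;
        using System.Text.RegularExpressions;

        public static class NetworkValidator
        {
            // Six pairs of hex digits separated by colons, e.g. 00:1A:2B:3C:4D:5E
            private static readonly Regex MacPattern =
                new Regex("^([0-9A-Fa-f]{2}:){5}[0-9A-Fa-f]{2}$");

            public static bool IsValidMac(string value)
            {
                return value != null && MacPattern.IsMatch(value);
            }

            public static bool IsValidIp(string value)
            {
                // TryParse accepts IPv4 and IPv6; note it also accepts shorthand
                // numeric forms such as "1", so add stricter checks if you need
                // dotted-quad input only.
                IPAddress address;
                return IPAddress.TryParse(value, out address);
            }
        }

    The same colon-separated pattern can be mirrored in JavaScript on the client, with the server-side check remaining authoritative.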

  • ViewContext.RouteData.Values["action"] is null on server... works fine on local machine

    - by rksprst
    I'm having a weird issue where ViewContext.RouteData.Values["action"] is null on my staging server, but works fine on my dev machine (ASP.NET Development Server). The code is simple:

        public string CheckActiveClass(string actionName)
        {
            string text = "";
            if (ViewContext.RouteData.Values["action"].ToString() == actionName)
            {
                text = "selected";
            }
            return text;
        }

    I get the error on the ViewContext.RouteData.Values["action"] line. The error is:

        Exception Details: System.NullReferenceException: Object reference not set to an instance of an object.

    Any help is appreciated. Thanks in advance.
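
    A defensive variation of the helper, null-checking the route value before calling ToString(), at least avoids the exception while the root cause (e.g. the view being rendered outside a normal routed request) is investigated. A sketch:

        public string CheckActiveClass(string actionName)
        {
            // RouteData may lack an "action" entry when the view is rendered
            // outside a routed request (Server.Transfer, custom error pages, etc.)
            object action = ViewContext.RouteData.Values["action"];
            if (action != null && string.Equals(action.ToString(), actionName,
                    StringComparison.OrdinalIgnoreCase))
            {
                return "selected";
            }
            return "";
        }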

  • Security Level for WebBrowser control in C#

    - by jaywon
    I am trying to migrate an .hta application to a C# executable. Of course, since it's an .hta the code is all HTML and JScript, with calls to local ActiveX objects. I created a C# executable project and am just using the WebBrowser control to display the HTML content. I simply renamed the .hta to an .html and took out the HTA declarations. Everything works great, except that when I make calls to the ActiveX objects, I get a security popup warning about running an ActiveX control on the page. I understand why this is happening, since the WebBrowser control is essentially IE and uses the Internet Options security settings, but is there any way to get the WebBrowser control to bypass security popups, or a way to register the executable or DLLs as trusted without having to change settings in Internet Options? Even a way to do it in a deployment package would work as well.
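
    One alternative worth considering when migrating an HTA is to drop the in-page ActiveX calls entirely and expose a .NET object to the page's script through the WebBrowser control's scripting bridge, which sidesteps the ActiveX prompt. A minimal sketch (ScriptHost and SaveFile are hypothetical names, not from the original post):

        using System.IO;
        using System.Runtime.InteropServices;

        // Must be COM-visible so the page can reach it via window.external
        [ComVisible(true)]
        public class ScriptHost
        {
            // Called from page script as: window.external.SaveFile(path, contents)
            public void SaveFile(string path, string contents)
            {
                File.WriteAllText(path, contents);
            }
        }

        // In the form hosting the control:
        // webBrowser1.ObjectForScripting = new ScriptHost();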

  • Reattaching an object graph to an EntityContext: "cannot track multiple objects with the same key"

    - by dkr88
    Can EF really be this bad? Maybe... Let's say I have a fully loaded, disconnected object graph that looks like this:

        myReport = {Report}
                     {ReportEdit {User: "JohnDoe"}}
                     {ReportEdit {User: "JohnDoe"}}

    Basically a report with 2 edits that were done by the same user. And then I do this:

        EntityContext.Attach(myReport);

        InvalidOperationException: An object with the same key already exists in the
        ObjectStateManager. The ObjectStateManager cannot track multiple objects with
        the same key.

    Why? Because EF is trying to attach the {User: "JohnDoe"} entity TWICE. This will work:

        myReport = {Report}
                     {ReportEdit {User: "JohnDoe"}}

        EntityContext.Attach(myReport);

    No problems here, because the {User: "JohnDoe"} entity only appears in the object graph once. What's more, since you can't control how EF attaches an entity, there is no way to stop it from attaching the entire object graph. So really, if you want to reattach a complex entity that contains more than one reference to the same entity... well, good luck. At least that's how it looks to me. Any comments?

    UPDATE: Added sample code:

        // Load the report
        Report theReport;
        using (var context1 = new TestEntities())
        {
            context1.Reports.MergeOption = MergeOption.NoTracking;
            theReport = (from r in context1.Reports.Include("ReportEdits.User")
                         where r.Id == reportId
                         select r).First();
        }

        // theReport looks like this:
        // {Report[Id=1]}
        //   {ReportEdit[Id=1] {User[Id=1,Name="John Doe"]}
        //   {ReportEdit[Id=2] {User[Id=1,Name="John Doe"]}

        // Try to re-attach the report object graph
        using (var context2 = new TestEntities())
        {
            context2.Attach(theReport); // InvalidOperationException
        }
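
    One workaround people use is to collapse duplicate references to a single instance before attaching, so each key appears in the graph exactly once. A sketch under the assumption that ReportEdit.User is the only duplicated entity (property names follow the post's example):

        using System.Collections.Generic;

        // Hypothetical fix-up before Attach: make every edit share one User instance per key
        var usersByKey = new Dictionary<int, User>();
        foreach (var edit in theReport.ReportEdits)
        {
            User canonical;
            if (usersByKey.TryGetValue(edit.User.Id, out canonical))
            {
                edit.User = canonical; // reuse the instance already seen
            }
            else
            {
                usersByKey[edit.User.Id] = edit.User;
            }
        }
        context2.Attach(theReport); // no duplicate keys left in the graph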

  • How to mask tilde (~) character in C# MVC routing table?

    - by AC
    I'm moving my home-baked web site to MVC and ran into trouble with URL routing. The site already serves several links that contain the tilde (~) character in the path, something like:

        http://.../~files/...
        http://.../~ws/...

    and I want each of them handled by a separate controller (filesController, wsController), so my route table looks like:

        routes.MapRoute(
            "files",
            "~files/{*prms}",
            new { controller = "files", action = "index", prms = "" }
        );

        routes.MapRoute(
            "ws",
            "~ws/{*prms}",
            new { controller = "ws", action = "index", prms = "" }
        );

    But when I try to get the result, I get an error saying "The route URL cannot start with a '/' or '~' character and it cannot contain a '?' character." As I understand it, those characters have a special meaning in ASP.NET, but is it possible to escape them somehow, at least the tilde? Should I parse and route requests like this myself? What's the best practice for handling URLs like this? Thanks!
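
    Since routing rejects patterns that start with '~', one workaround is to translate the tilde prefix before routing inspects the request, for example in Application_BeginRequest, and register tilde-free routes instead. A sketch (the rewritten prefix is illustrative; whether this fits depends on your URL scheme):

        protected void Application_BeginRequest(object sender, EventArgs e)
        {
            string path = HttpContext.Current.Request.Url.AbsolutePath;
            if (path.StartsWith("/~"))
            {
                // e.g. /~files/abc becomes /files/abc, which a normal
                // "files/{*prms}" route can then match
                HttpContext.Current.RewritePath("/" + path.Substring(2), false);
            }
        }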

  • Weblogic domain scale up using EM Grid Control 11gR1

    - by dmitry.nefedkin(at)oracle.com
    As you know, a WebLogic domain consists of a set of servers running independently or in a cluster, sharing distributed resources. In most environments a WebLogic cluster consists of multiple managed servers running simultaneously and working together to provide increased scalability and reliability. These servers can run on the same machine or be located on different machines. It's a common task to increase a cluster's capacity by adding new machines to host new server instances. You can do it by manually installing the WebLogic binaries on the new host and using the pack/unpack commands to add a managed server to it. But with Enterprise Manager Grid Control 11gR1 (EMGC) there is another way: the Fusion Middleware Domain Scale Up procedure. I'm going to show you how it works.

    Here is a picture of my medrec_oradb WebLogic domain, which is registered in EMGC. It contains an admin server and a cluster, MedRecCluster, with a single managed server, MS1. Both the admin and managed servers are on the same host, oel46-vmware, a virtual machine with OEL 4.6 that runs inside our Oracle VM infrastructure. And here are the application deployments; note that a couple of applications are deployed to the cluster.

    First of all I have to prepare a new machine that will host a new managed server of my cluster. I created a new VM with OEL 5.4 using the corresponding Oracle VM template available on the Oracle E-Delivery site for Oracle Linux and Oracle VM, and named it wls1032. The next step is to install the Oracle EM Grid Control 11gR1 agent on this new host. You can download it from the OTN page and install it manually, or you can use the Agent Installation Deployment procedure available in EMGC (Deployments->Agent Installation->Install Agent). Either way, once your agent is up and running on the new machine, you will see it in the EMGC Console in the Targets->Hosts subtab.

    Now we are ready to scale up our WebLogic domain. Click the Deployments tab in Oracle Enterprise Manager Grid Control, and then click Deployment Procedure. Select the Fusion Middleware Domain Scale Up procedure from the list, and click Schedule Deployment. The first page of the FMW Domain Scale Up Wizard is displayed and you can proceed with the deployment process. Select the domain from the list, enter the working directory on the admin server host, and fill in the WebLogic credentials for the administration server console and the OS credentials for the admin server host. Click the Next button. The next step allows you to configure your domain; to add a new managed server to the cluster you should select the cluster in the tree and click the Add Server button. Select the newly added server in the tree, choose the target host, and enter the configuration details of your managed server. You can also add new machine and node manager details. Please note that you cannot change the values in the Domain Location and Fusion Middleware Home fields, so these locations on the target host will be the same as on the admin server host. The working directory on the target host should have enough free space to store the FMW home binaries and domain configuration files; in my experience the working directories should have at least 3 GB of free space. The last thing you should fill in is the OS credentials for the target host. The next step allows you to schedule the execution of the procedure; it is started immediately in my example. The last step is just a review of the configuration for the domain scale up. Click Submit to launch the process.

    You can track the status of the procedure execution by selecting Deployments->Deployment Procedures->Procedure Completion Status in the EMGC Console. As you can see, the procedure consists of many steps, and I'm going to share my experience with the issues that I had at some of them. Please keep in mind that you can always continue the execution from the last successfully completed step by clicking the Retry button.

    The Check OUI Prerequisites step may fail if the target host does not pass the prerequisite checks for a WebLogic Server installation, such as the amount of RAM, Linux packages installed, etc. The Create FMW Clone Archive step may fail if you do not have enough free space in the working directory on the administration server host. The Transfer cloning archive to targets step may fail if the EMGC agents on the admin server host or on the target host are not secured. You should secure the agent by issuing the ./emctl secure agent command from the $AGENT_HOME/bin directory and entering the agent registration password. Both the Transfer cloning archive to targets and Apply Clone at target hosts steps may fail if you do not have enough free space in the working directory on the target host.

    The most complicated issue I had was on the Run Inventory Collection step. The step failed, and I noticed that the agent on the target server had also failed, with the following error in the $AGENT_HOME/sysman/log/emagent.trc log file:

        2010-12-28 11:50:34,310 Thread-2838952848 ERROR upload: Failed to upload file A0000008.xml: Fatal Error.
        Response received: 500|ORA-20603: The timezone of the multiagent target (/Farm_Localhost_MedRec_medrec_oradb/medrec_oradb,weblogic_domain) is not consistent with the timezone (America/Los_Angeles) reported by other agents.
        2010-12-28 11:50:34,310 Thread-2838952848 ERROR upload: 1 Failure(s) in a row or XML error for A0000008.xml, retcode = -6, we give up
        2010-12-28 11:50:35,552 Thread-2838952848 WARN  upload: FxferSend: received fatal error in header from repository: https://oel46-vmware:1159/em/upload
        FATAL_ERROR::500|ORA-20603: The timezone of the multiagent target (/Farm_Localhost_MedRec_medrec_oradb/medrec_oradb,weblogic_domain) is not consistent with the timezone (America/Los_Angeles) reported by other agents.
        2010-12-28 11:50:35,552 Thread-2838952848 ERROR upload: number of fatal error exceeds the limit 3
        2010-12-28 11:50:35,552 Thread-2838952848 ERROR upload: agent will shutdown now
        2010-12-28 11:50:35,552 Thread-2838952848 ERROR : Signalled to Exit with status 55. Too many fatal upload failures
        2010-12-28 11:50:35,552 Thread-2838952848 ERROR upload: 1 Failure(s) in a row or XML error for A0000008.xml, retcode = -6, we give up
        2010-12-28 11:50:35,552 Thread-3044607680 ERROR main: EMAgent abnormal terminating

    I checked the timezone of my domain target inside the EMGC repository:

        select timezone_region
        from mgmt_targets
        where target_type = 'weblogic_domain'
          and display_name = 'medrec_oradb';

        "TIMEZONE_REGION"
        "America/Los_Angeles"

    Then I checked the timezone of my agents and indeed, they differed:

        select target_name, timezone_region
        from mgmt_targets
        where type_display_name = 'Agent';

        "TARGET_NAME"               "TIMEZONE_REGION"
        "oel46-vmware:3872"         "America/Los_Angeles"
        "wls1032.imc.fors.ru:3872"  "America/New_York"

    So I had to change the timezone on the wls1032 host and propagate this change to the agent and to the EMGC repository. Here were the steps:

    1. Issued the system-config-date command on wls1032.imc.fors.ru and set the timezone to "America/Los_Angeles".
    2. Propagated the change to the agent by executing ./emctl resetTZ agent from the $AGENT_HOME/bin directory.
    3. Connected to the EMGC repository as sysman and executed the following PL/SQL block:

        begin
          mgmt_target.set_agent_tzrgn('wls1032.imc.fors.ru:3872', 'America/Los_Angeles');
          commit;
        end;

    After that I had to clear the pending uploads on wls1032.imc.fors.ru:

        rm -r $AGENT_HOME/sysman/emd/state/*
        rm -r $AGENT_HOME/sysman/emd/collection/*
        rm -r $AGENT_HOME/sysman/emd/upload/*
        rm $AGENT_HOME/sysman/emd/lastupld.xml
        rm $AGENT_HOME/sysman/emd/agntstmp.txt
        $AGENT_HOME/bin/emctl start agent
        $AGENT_HOME/bin/emctl clearstate agent

    The last part of this solution was to resync the agent in the EMGC Console by clicking the Agent Resynchronization button (please leave the "Unblock agent on successful completion of agent resynchronization" checkbox checked on the next screen). After that I issued the ./emctl upload command from $AGENT_HOME/bin on the wls1032 host, and my previous error disappeared, but I hit another one:

        EMD upload error: Failed to upload file A0000004.xml: HTTP error.
        Response received: ERROR-400|Data will be rejected for upload from agent 'https://wls1032.imc.fors.ru:3872/emd/main/', max size limit for direct load exceeded [7544731/5242880]

    So the uploaded XML file was 7 MB, and the limit on the OMS was 5 MB. To increase the maximum file size limit to 20 MB I had to connect to the OMS host and execute the following commands from the $OMS_HOME/bin directory:

        ./emctl set property -name em.loader.maxDirectLoadFileSz -value 20971520 -module emoms
        ./emctl stop oms
        ./emctl start oms

    After that I issued the ./emctl upload command from $AGENT_HOME/bin on wls1032 one more time and it completed successfully. The agent uploaded the configuration information to the EMGC repository and I was able to see the results of my WebLogic domain scale-up in the EMGC Console. So, now the WebLogic cluster contains 2 managed servers located on different hosts. This powerful feature of Enterprise Manager Grid Control is part of the WebLogic Server Management Pack Enterprise Edition.

  • ASP .NET: SQL Server Money Type and .NET Currency Type

    - by Rudi Ramey
    MS SQL Server's money data type seems to accept a well-formatted currency value with no problem (example: $52,334.50). From my research, MS SQL Server just ignores the "$" and "," characters. ASP.NET has a parameter object with a Type/DbType property, and Currency is an available option to set as a value. However, when I set the parameter Type or DbType to Currency, it will not accept a value like $52,334.50; I receive the error "Input string was not in a correct format." when I try to update/insert. If I don't include the "$" or "," characters it seems to work fine. Also, if I don't specify the Type or DbType for the parameter, it seems to work fine as well. Is it standard behavior in ASP.NET that a parameter with its type set to Currency still rejects "$" and "," characters? Here's an example of the parameter declaration (in the .aspx page):

        <asp:Parameter Name="ImplementCost" DbType="Currency" />
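
    If you need to accept the formatted string yourself before it reaches the parameter, parsing with NumberStyles.Currency strips the currency symbol and group separators. A sketch (culture chosen for the "$52,334.50" example; adjust as needed):

        using System.Globalization;

        public static class CurrencyParser
        {
            public static decimal ParseCurrency(string input)
            {
                // Accepts "$52,334.50" under a US culture and returns 52334.50m
                return decimal.Parse(input, NumberStyles.Currency,
                    CultureInfo.GetCultureInfo("en-US"));
            }
        }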

  • Dissecting ASP.NET Routing

    The ASP.NET Routing framework allows developers to decouple the URL of a resource from the physical file on the web server. Specifically, the developer defines routing rules, which map URL patterns to a class or ASP.NET page that generates the content. For instance, you could create a URL pattern of the form Categories/CategoryName and map it to the ASP.NET page ShowCategoryDetails.aspx; the ShowCategoryDetails.aspx page would display details about the category CategoryName. With such a mapping, users could view details about the Beverages category by visiting www.yoursite.com/Categories/Beverages. In short, ASP.NET Routing allows for readable, SEO-friendly URLs. ASP.NET Routing was first introduced in ASP.NET 3.5 SP1 and was enhanced further in ASP.NET 4.0. ASP.NET Routing is a key component of ASP.NET MVC, but can also be used with Web Forms. Two previous articles here on 4Guys showed how to get started using ASP.NET Routing: Using ASP.NET Routing Without ASP.NET MVC and URL Routing in ASP.NET 4.0. This article aims to explore ASP.NET Routing in greater depth. We'll explore how ASP.NET Routing works underneath the covers to decode a URL pattern and hand it off to the appropriate class or ASP.NET page. Read on to learn more!
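
    For Web Forms, the ASP.NET 4.0 registration for the Categories/CategoryName pattern described above would look roughly like this (the route name is illustrative):

        using System.Web.Routing;

        public static class RouteConfig
        {
            public static void RegisterRoutes(RouteCollection routes)
            {
                // Maps Categories/Beverages to ShowCategoryDetails.aspx
                routes.MapPageRoute(
                    "category-details",
                    "Categories/{categoryName}",
                    "~/ShowCategoryDetails.aspx");
            }
        }

    The page can then read the requested category via Page.RouteData.Values["categoryName"].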

  • Broken tables in RichTextBox control (word wrap)

    - by Ben Robbins
    We are having problems with the Windows.Forms.RichTextBox control in Visual Studio 2008. We are trying to display text supplied as an RTF file by a third party in a Windows Forms application (.NET 3.5). In this RTF file there are tables, which contain text that spans multiple lines. The RTF file displays correctly when opened with either WordPad or Word 2003. However, when we load the RTF file into the RichTextBox control, or copy & paste the whole text (including the table) into the control, the table does not display correctly: the cells are only single-line, without wrapping. Here are links to images showing the exact problem: correctly displayed in WordPad; incorrectly displayed in the RichTextBox control. I have googled for solutions and third-party .NET RTF controls without success. I found this exact problem asked on another forum without an answer (in fact, that's where the links to the images come from), so I'm hoping Stack Overflow does better ;-) My preferred solution would be to use code or a third-party control that can correctly render the RTF. However, I suspect the problem is that the RichTextBox control only supports a subset of the full RTF spec, so another option would be to modify the RTF directly to remove the unsupported control codes or otherwise fix the RTF file itself (in which case any information as to which control codes need to be removed or modified would be a huge help).

  • Homepage 301 Redirect to SSL Homepage

    - by user33692
    I'm hoping somebody might be able to provide a bit of advice on an issue I am having. I have one site where we implemented a 301 redirect on the homepage from http to https. We have links on the homepage to other parts of the site that are not under SSL (in fact there is only one other page under SSL). When I go to our webmaster account I notice that we are not being provided with any webmaster information (search queries, backlinks) related to our homepage under SSL. I performed a Fetch as Google on the homepage and the information it returned is:

        HTTP/1.1 301 Moved Permanently
        Date: Fri, 08 Nov 2013 17:26:24 GMT
        Server: Apache/2.2.16 (Debian)
        Location: https://mysite.com/
        Vary: Accept-Encoding
        Content-Encoding: gzip
        Content-Length: 242
        Keep-Alive: timeout=15, max=100
        Connection: Keep-Alive
        Content-Type: text/html; charset=iso-8859-1

        <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
        <html><head>
        <title>301 Moved Permanently</title>
        </head><body>
        <h1>Moved Permanently</h1>
        <p>The document has moved <a href="https://mysite.com/">here</a>.</p>
        <hr>
        <address>Apache/2.2.16 (Debian) Server at mysite.com</address>
        </body></html>

    I am worried that Google Fetch is not getting the correct title tags and meta information from our homepage and that this is hurting our search results. Additionally, I am worried that we need to do something specific with the sitemap to ensure that Google is correctly indexing all our pages and can move between the https and http pages without issues. Does anybody have any advice on how we can correctly set this up or be sure that Google is fetching the correct information?

  • MVC2: Best Way to Intercept ViewRequest and Alter ActionResult

    - by Matthew
    I'm building an ASP.NET MVC2 web application that requires some sophisticated authentication and business logic that cannot be achieved using the out-of-the-box forms authentication. I'm new to MVC, so bear with me... My plan was to mark all restricted view methods with one or more custom attributes (that contain additional data). The controller would then override the OnActionExecuting method to intercept requests, analyze the target view's attributes, and do a variety of different things, including re-routing the user to different places. I have the interception and attribute analysis working, but the redirection is not working as expected. I have tried setting the ActionExecutingContext.Result to null and have even tried spooling up controllers via reflection and invoking their action methods. No dice. I was able to achieve it this way:

        protected override void OnActionExecuting(ActionExecutingContext filterContext)
        {
            filterContext.HttpContext.Response.Redirect("/MyView", false);
            base.OnActionExecuting(filterContext);
        }

    This seems like a hack, and there has to be a better way...
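
    A common alternative to calling Response.Redirect in the filter is to assign an ActionResult to filterContext.Result, which short-circuits the action pipeline cleanly. A minimal sketch (the attribute name and the authorization check are illustrative):

        public class CustomAuthAttribute : ActionFilterAttribute
        {
            public override void OnActionExecuting(ActionExecutingContext filterContext)
            {
                bool authorized = false; // stand-in for the real attribute analysis

                if (!authorized)
                {
                    // Assigning Result cancels the action and issues the redirect
                    filterContext.Result = new RedirectResult("~/MyView");
                }
                base.OnActionExecuting(filterContext);
            }
        }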

  • Version control for PHP development

    - by Vinod Mohan
    Hello, I have been hearing a lot about the advantages of using a version control system and would like to try one. I have been doing freelance web development in PHP for the past 2 years; two months back I hired two more programmers to help me, and I will be hiring one more person soon. We maintain 4 websites, all of which are my own, and they are continuously being edited by one of us. I learned PHP by myself and have never worked at any other firm, hence I am new to version control, unit testing and all that. Currently, we have development servers on our workstations. When we edit a particular section of a site, we download the code for that section (say /news/ or /movies/ or /wallpapers/) from the production server to the dev server, make the changes locally and upload them to the production server (no code review / testing). Because of this, our dev server is always out of date compared with our prod server. Occasionally, this also creates problems when one of us forgets to download the latest copy from prod and overwrites the last change. I know this is very, very foolish, but currently our prod server is the only copy that has all the updates and latest changes. Can anyone please suggest which is the best version control system for me? I am more interested in distributed version control, since we don't have a central backup for all our code. I read about Mercurial and Git and found that Mercurial is used in several large open source projects by Mozilla, Sun, Symbian etc. So which one do you think is better for me? Not only version control, if there are any other packages that I can use to make my current setup better, please mention those too :) Thanks and Regards, Vinod Mohan
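
    For a sense of scale, the day-to-day workflow in a distributed system is only a handful of commands. A minimal Mercurial sketch (the paths are illustrative, not from the post):

        hg init /var/www/mysite                                # turn existing code into a repository
        hg clone ssh://server//var/www/mysite ~/work/mysite    # each developer takes a full copy
        # ... edit files locally ...
        hg commit -m "Fix news section layout"
        hg pull -u                                             # fetch and apply teammates' changes
        hg push                                                # publish your changes

    Git follows the same shape with git init/clone/commit/pull/push.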

  • Source control products that support linked/shared files?

    - by Ian Boyd
    We're interested in moving away from our current source control system, to another one that supports the concept of shared or linked files. A shared file means: a file modified in one project is automatically updated in every other project that uses that same file. It does this without a developer having to request it, reverse-integrate it, ask for it, or even want it. We're trying to see if any other commonly used source control systems can meet our needs and include linked or shared files. My limited research shows that:

    - Team Foundation Server doesn't support sharing files
    - Subversion doesn't support sharing files (including Externals)
    - CVS doesn't support sharing files (including Modules)

    Anything else? (Besides our current source control product, obviously.)

    References:
    - Subversion and shared files across repositories/projects?
    - How to share files between CVS projects?
    - Will TFS ever support shared files for projects under source control?

  • Setting useLegacyV2RuntimeActivationPolicy At Runtime

    - by Reed
    Version 4.0 of the .NET Framework included a new CLR which is almost entirely backwards compatible with the 2.0 version of the CLR. However, by default, mixed-mode assemblies targeting .NET 3.5sp1 and earlier will fail to load in a .NET 4 application. Fixing this requires setting useLegacyV2RuntimeActivationPolicy in your app.config for the application. While there are many good reasons for this decision, there are times when this is extremely frustrating, especially when writing a library. As such, there are (rare) times when it would be beneficial to set this in code, at runtime, as well as verify that it's running correctly prior to receiving a FileLoadException.

    Typically, loading a pre-.NET 4 mixed-mode assembly is handled simply by changing your app.config file and including the relevant attribute in the startup element:

        <?xml version="1.0" encoding="utf-8" ?>
        <configuration>
          <startup useLegacyV2RuntimeActivationPolicy="true">
            <supportedRuntime version="v4.0"/>
          </startup>
        </configuration>

    This causes your application to run correctly, and load the older, mixed-mode assembly without issues. For full details on what's happening here and why, I recommend reading Mark Miller's detailed explanation of this attribute and the reasoning behind it.

    Before I show any code, let me say: I strongly recommend using the official approach of using app.config to set this policy. That being said, there are (rare) times when, for one reason or another, changing the application configuration file is less than ideal. While app.config is the supported approach to handling this issue, the CLR Hosting API includes a means of setting this programmatically via the ICLRRuntimeInfo interface. Normally, this is used if you're hosting the CLR in a native application in order to set this, at runtime, prior to loading the assemblies. However, the F# Samples include a nice trick showing how to load this API and bind this policy at runtime. This was required in order to host the Managed DirectX API, which is built against an older version of the CLR.

    This is fairly easy to port to C#. Instead of a direct port, I also added a little addition: by trapping the COM exception received if unable to bind (which will occur if the 2.0 CLR is already bound), I also allow a runtime check of whether this property was set up properly:

        public static class RuntimePolicyHelper
        {
            public static bool LegacyV2RuntimeEnabledSuccessfully { get; private set; }

            static RuntimePolicyHelper()
            {
                ICLRRuntimeInfo clrRuntimeInfo =
                    (ICLRRuntimeInfo)RuntimeEnvironment.GetRuntimeInterfaceAsObject(
                        Guid.Empty,
                        typeof(ICLRRuntimeInfo).GUID);
                try
                {
                    clrRuntimeInfo.BindAsLegacyV2Runtime();
                    LegacyV2RuntimeEnabledSuccessfully = true;
                }
                catch (COMException)
                {
                    // This occurs with an HRESULT meaning
                    // "A different runtime was already bound to the legacy CLR version 2 activation policy."
                    LegacyV2RuntimeEnabledSuccessfully = false;
                }
            }

            [ComImport]
            [InterfaceType(ComInterfaceType.InterfaceIsIUnknown)]
            [Guid("BD39D1D2-BA2F-486A-89B0-B4B0CB466891")]
            private interface ICLRRuntimeInfo
            {
                void xGetVersionString();
                void xGetRuntimeDirectory();
                void xIsLoaded();
                void xIsLoadable();
                void xLoadErrorString();
                void xLoadLibrary();
                void xGetProcAddress();
                void xGetInterface();
                void xSetDefaultStartupFlags();
                void xGetDefaultStartupFlags();

                [MethodImpl(MethodImplOptions.InternalCall, MethodCodeType = MethodCodeType.Runtime)]
                void BindAsLegacyV2Runtime();
            }
        }

    Using this, it's possible to not only set this at runtime, but also verify, prior to loading your mixed-mode assembly, whether this will succeed.

    In my case, this was quite useful. I am working on a library purely for internal use which uses a numerical package that is supplied with both a completely managed and a native solver. The native solver uses a CLR 2 mixed-mode assembly, but is dramatically faster than the pure managed approach. By checking RuntimePolicyHelper.LegacyV2RuntimeEnabledSuccessfully at runtime, I can decide whether to enable the native solver, and only do so if I successfully bound this policy.

    There are some tricks required here. To enable this sort of fallback behavior, you must make these checks in a type that doesn't cause the mixed-mode assembly to be loaded. In my case, this forced me to encapsulate the library I was using entirely in a separate class, perform the check, then pass through the required calls to that class. Otherwise, the library will load before the hosting process gets enabled, which in turn will fail. This code will also, of course, try to enable the runtime policy before the first time you use this class, which typically means just before the first time you check the boolean value. As a result, checking this early on in the application is more likely to allow it to work. Finally, if you're using a library, this has to be called prior to the 2.0 CLR loading. This will cause it to fail if you try to use it to enable this policy in a plugin for most third-party applications that don't have their app.config set up properly, as they will likely have already loaded the 2.0 runtime.

    As an example, take a simple audio player. The code below shows how this can be used to properly, at runtime, only use the "native" API if this will succeed, and fall back (or raise a nicer exception) if this will fail:

        public class AudioPlayer
        {
            private IAudioEngine audioEngine;

            public AudioPlayer()
            {
                if (RuntimePolicyHelper.LegacyV2RuntimeEnabledSuccessfully)
                {
                    // This will load a CLR 2 mixed-mode assembly
                    this.audioEngine = new AudioEngineNative();
                }
                else
                {
                    this.audioEngine = new AudioEngineManaged();
                }
            }

            public void Play(string filename)
            {
                this.audioEngine.Play(filename);
            }
        }

    Now, the warning: this approach works, but I would be very hesitant to use it in public-facing production code, especially for anything other than initializing your own application. While this should work in a library, using it has a very nasty side effect: you change the runtime policy of the executing application in a way that is very hidden and non-obvious.

  • Parent Control Mouse Enter/Leave Events With Child Controls

    - by Paul Williams
    I have a C# .NET 2.0 WinForms app. My app has a control that is a container for two child controls: a label, and some kind of edit control. You can think of it like this, where the outer box is the parent control: +---------------------------------+ | [Label Control] [Edit Control] | +---------------------------------+ I am trying to do something when the mouse enters or leaves the parent control, but I don't care if the mouse moves into one of its children. I want a single flag to represent "the mouse is somewhere inside the parent or children" and "the mouse has moved outside of the parent control bounds". I've tried handling MouseEnter and MouseLeave on the parent and both child controls, but this means the action begins and ends multiple times as the mouse moves across the control. In other words, I get this: Parent.OnMouseEnter (start doing something) Parent.OnMouseLeave (stop) Child.OnMouseEnter (start doing something) Child.OnMouseLeave (stop) Parent.OnMouseEnter (start doing something) Parent.OnMouseLeave (stop) The intermediate OnMouseLeave events cause some undesired effects as whatever I'm doing gets started and then stopped. I want to avoid that. I don't want to capture the mouse as the parent gets the mouse over, because the child controls need their mouse events, and I want menu and other shortcut keys to work. Is there a way to do this inside the .NET framework? Or do I need to use a Windows mouse hook?
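
    One pattern that avoids the intermediate leave/enter pairs is to treat a MouseLeave as tentative and only act when the cursor is actually outside the parent's bounds. A sketch within the framework's own events (the helper class is illustrative, not from the original post):

        using System;
        using System.Drawing;
        using System.Windows.Forms;

        public class HoverTracker
        {
            private readonly Control parent;

            public bool IsInside { get; private set; }

            public HoverTracker(Control parent)
            {
                this.parent = parent;
                Wire(parent);
            }

            private void Wire(Control control)
            {
                control.MouseEnter += (s, e) => IsInside = true;
                control.MouseLeave += (s, e) =>
                {
                    // Only count a real leave if the cursor is outside the parent's
                    // bounds; moving onto a child keeps IsInside true
                    Point p = parent.PointToClient(Cursor.Position);
                    IsInside = parent.ClientRectangle.Contains(p);
                };
                foreach (Control child in control.Controls)
                {
                    Wire(child);
                }
            }
        }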

  • Is using jQuery to call a WCF Data Service from the UI violating the MVC pattern?

    - by Lee Dale
    I'm fairly new to ASP.NET MVC 2 and understand the MVC pattern itself, but my question is: what's the best way to populate dropdown lists in the UI while sticking to the MVC pattern? Should I be going through the controller? Every article I've seen shows how to do it using JavaScript and jQuery. I have a test application that I'm re-writing in MVC2, and my dropdowns work with jQuery, basically calling a WCF Data Service that returns JSON which populates the dropdowns. It seems to me, though, that this bypasses the controller and goes straight to the model, therefore strictly violating the MVC pattern. Or am I missing something obvious here? Your thoughts or best practices would be greatly welcome here. Thanks
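
    If routing through the controller is the goal, the dropdown data can come from a controller action that returns JSON, with jQuery calling that action instead of the WCF Data Service directly. A sketch (the controller and data are illustrative):

        using System.Web.Mvc;

        public class LookupController : Controller
        {
            public ActionResult States()
            {
                var items = new[]
                {
                    new { Value = "NY", Text = "New York" },
                    new { Value = "CA", Text = "California" }
                };
                // AllowGet is required to serve JSON to GET requests
                return Json(items, JsonRequestBehavior.AllowGet);
            }
        }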

  • Asp.Net MVC and ajax async callback execution order

    - by lrb
    I have been sorting through this issue all day and hope someone can help pinpoint my problem. I have created an "asynchronous progress callback" type of functionality in my app using Ajax. When I strip the functionality out into a test application I get the desired results. When I tie the functionality into my single-page application using the same code, I get a sort of blocking issue where all requests are responded to only after the last task has completed. In the test app, all requests are responded to in order. The server reports a "pending" state for all requests until the controller method has completed. Can anyone give me a hint as to what could cause the change in behavior?

    Desired Fiddler request/response:

        GET http://localhost:12028/task/status?_=1383333945335 HTTP/1.1
        X-ProgressBar-TaskId: 892183768
        Accept: */*
        X-Requested-With: XMLHttpRequest
        Referer: http://localhost:12028/
        Accept-Language: en-US
        Accept-Encoding: gzip, deflate
        User-Agent: Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.1; Trident/6.0)
        Connection: Keep-Alive
        DNT: 1
        Host: localhost:12028

        HTTP/1.1 200 OK
        Cache-Control: private
        Content-Type: text/html; charset=utf-8
        Vary: Accept-Encoding
        Server: Microsoft-IIS/8.0
        X-AspNetMvc-Version: 3.0
        X-AspNet-Version: 4.0.30319
        X-SourceFiles: =?UTF-8?B?QzpcUHJvamVjdHNcVEVNUFxQcm9ncmVzc0Jhclx0YXNrXHN0YXR1cw==?=
        X-Powered-By: ASP.NET
        Date: Fri, 01 Nov 2013 21:39:08 GMT
        Content-Length: 25

        Iteration completed...

    Not desired Fiddler request/response:

        GET http://localhost:60171/_Test/status?_=1383341766884 HTTP/1.1
        X-ProgressBar-TaskId: 838217998
        Accept: */*
        X-Requested-With: XMLHttpRequest
        Referer: http://localhost:60171/Report/Index
        Accept-Language: en-US
        Accept-Encoding: gzip, deflate
        User-Agent: Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.1; Trident/6.0)
        Connection: Keep-Alive
        DNT: 1
        Host: localhost:60171
        Pragma: no-cache
        Cookie: ASP.NET_SessionId=rjli2jb0wyjrgxjqjsicdhdi; AspxAutoDetectCookieSupport=1; TTREPORTS_1_0=CC2A501EF499F9F...; __RequestVerificationToken=6klOoK6lSXR51zCVaDNhuaF6Blual0l8_JH1QTW9W6L-3LroNbyi6WvN6qiqv-PjqpCy7oEmNnAd9s0UONASmBQhUu8aechFYq7EXKzu7WSybObivq46djrE1lvkm6hNXgeLNLYmV0ORmGJeLWDyvA2

        HTTP/1.1 200 OK
        Cache-Control: private
        Content-Type: text/html; charset=utf-8
        Vary: Accept-Encoding
        Server: Microsoft-IIS/8.0
        X-AspNetMvc-Version: 4.0
        X-AspNet-Version: 4.0.30319
        X-SourceFiles: =?UTF-8?B?QzpcUHJvamVjdHNcSUxlYXJuLlJlcG9ydHMuV2ViXHRydW5rXElMZWFybi5SZXBvcnRzLldlYlxfVGVzdFxzdGF0dXM=?=
        X-Powered-By: ASP.NET
        Date: Fri, 01 Nov 2013 21:37:48 GMT
        Content-Length: 25

        Iteration completed...

    The only differences in the two request headers, besides the auth tokens, are "Pragma: no-cache" in the request and the ASP.NET MVC version in the response. Thanks.

    Update - code posted (I probably need to indicate that this code originated from an article by Dino Esposito):

        var ilProgressWorker = function () {
            var that = {};
            that._xhr = null;
            that._taskId = 0;
            that._timerId = 0;
            that._progressUrl = "";
            that._abortUrl = "";
            that._interval = 500;
            that._userDefinedProgressCallback = null;
            that._taskCompletedCallback = null;
            that._taskAbortedCallback = null;

            that.createTaskId = function () {
                var _minNumber = 100,
                    _maxNumber = 1000000000;
                return _minNumber + Math.floor(Math.random() * _maxNumber);
            };

            // Set progress callbacks
            that.callback = function (userCallback, completedCallback, abortedCallback) {
                that._userDefinedProgressCallback = userCallback;
                that._taskCompletedCallback = completedCallback;
                that._taskAbortedCallback = abortedCallback;
                return this;
            };

            // Set frequency of refresh
            that.setInterval = function (interval) {
                that._interval = interval;
                return this;
            };

            // Abort the operation
            that.abort = function () {
                // if (_xhr !== null)
                //     _xhr.abort();
                if (that._abortUrl != null && that._abortUrl != "") {
                    $.ajax({
                        url: that._abortUrl,
                        cache: false,
                        headers: { 'X-ProgressBar-TaskId': that._taskId }
                    });
                }
            };

            // INTERNAL FUNCTION
            that._internalProgressCallback = function () {
                that._timerId = window.setTimeout(that._internalProgressCallback, that._interval);
                $.ajax({
                    url: that._progressUrl,
                    cache: false,
                    headers: { 'X-ProgressBar-TaskId': that._taskId },
                    success: function (status) {
                        if (that._userDefinedProgressCallback != null)
                            that._userDefinedProgressCallback(status);
                    },
                    complete: function (data) {
                        var i = 0;
                    }
                });
            };

            // Invoke the URL and monitor its progress
            that.start = function (url, progressUrl, abortUrl) {
                that._taskId = that.createTaskId();
                that._progressUrl = progressUrl;
                that._abortUrl = abortUrl;

                // Place the Ajax call
                _xhr = $.ajax({
                    url: url,
                    cache: false,
                    headers: { 'X-ProgressBar-TaskId': that._taskId },
                    complete: function () {
                        if (_xhr.status != 0)
                            return;
                        if (that._taskAbortedCallback != null)
                            that._taskAbortedCallback();
                        that.end();
                    },
                    success: function (data) {
                        if (that._taskCompletedCallback != null)
                            that._taskCompletedCallback(data);
                        that.end();
                    }
                });

                // Start the progress callback (if any)
                if (that._userDefinedProgressCallback == null || that._progressUrl === "")
                    return this;
                that._timerId = window.setTimeout(that._internalProgressCallback, that._interval);
            };

            // Finalize the task
            that.end = function () {
                that._taskId = 0;
                window.clearTimeout(that._timerId);
            };

            return that;
        };
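
    One commonly cited cause of exactly this serialized behavior in ASP.NET MVC is session-state locking: requests from the same session queue behind whichever request currently holds the session lock. If that is the cause here, marking the polling controller as read-only would let the status requests run concurrently with the long task. A sketch under that assumption (MVC 3 and later; the controller name is illustrative):

        using System.Web.Mvc;
        using System.Web.SessionState;

        [SessionState(SessionStateBehavior.ReadOnly)]
        public class TaskController : Controller
        {
            public ActionResult Status()
            {
                // Reads session but never takes the exclusive write lock,
                // so it is not queued behind the long-running request
                return Content("Iteration completed...");
            }
        }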

  • Using RouteExistingFiles to block access to existing files even if no route exists

    - by Michael Stum
    In ASP.NET MVC 2, I can use routes.RouteExistingFiles = true; to send all requests through the routing system, even if they exist on the file system. Usually, such a request ends up hitting the "{controller}/{action}/{id}" route and throws an exception because the controller cannot be found. I do not want to use that route, though (I have only a few URLs and they are specifically mapped), yet I would still like to prevent access to the file system. Basically, I want to whitelist pages using IgnoreRoute. Is there a built-in way to do this? My current approach is to still have the "{controller}/{action}/{id}" route and generate a 404 when it is hit, but I'm just wondering if something is built in already?
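
    In the absence of a built-in whitelist, a catch-all route registered last (after the specific routes and IgnoreRoute calls) can turn everything unmatched into a 404 without relying on the default route's missing-controller exception. A sketch (route and controller names are illustrative):

        routes.MapRoute(
            "CatchAll",
            "{*anything}",
            new { controller = "Error", action = "NotFound" });

        // MVC 2 has no HttpNotFoundResult, so the action throws a 404 itself
        public class ErrorController : Controller
        {
            public ActionResult NotFound()
            {
                throw new HttpException(404, "Not Found");
            }
        }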

  • Add client-side JavaScript code and ASP.NET validation on an ASP.NET button

    - by Vinni
    Hello guys, I want to run JavaScript code in the "OnClientClick" handler of an ASP.NET button and also have the ASP.NET validation run for that button, but when I mix the two, validation stops working. Please help me out. Below is my code.

    ASPX:

        <asp:Button ID="btnAddToFeatureOffers" runat="server" Text="Add to Feature Offers"
            OnClick="btnAddToFeatureOffers_Click" ValidationGroup="vgAddOffer"
            OnClientClick="add();" />

    JavaScript:

        function add() {
            var selectedOrder = $('#ctl00_MainContent_ddlFeaturedHostingType option:selected')[0].index;
            var offer = $('#<%=txtOrder.ClientID%>').val();
            var a = $("<a>").attr("href", "#").addClass("offer").text("X");
            $("<div>").text(offer).append(a).appendTo($('#resultTable #resultRow td')[selectedOrder - 1]);
        }

  • Alternative to WebBrowser control on Windows CE

    - by Grzegorz
    Welcome. Does anyone know of an alternative to the WebBrowser control from the .NET Compact Framework that can be used on Windows CE? Unfortunately the standard web control from .NET CF 2.0 doesn't work on WinCE. Is there any way to present formatted text in an embedded control in a .NET CF application targeting Windows CE? Regards, Grzegorz

  • Running Activex control and Maintaining security

    - by Shyju
    Hi techies, in my web application I have a part that invokes an ActiveX control. The ActiveX control is available on all the client PCs that access my web application from the web server. But when trying to run this ActiveX control from the browser on a client machine (using WShell), it was not getting invoked, since "Run ActiveX Controls and Plugins" was disabled in my browser. So I changed the browser setting to enabled, and then the ActiveX control gave me the expected output. I'm afraid that this change in browser settings will allow any other website to harm my system. How can I get rid of this problem? Any thoughts? Thanks in advance

  • ASP.NET MVC + MySql Membership Provider, user cannot login

    - by Jason Miesionczek
    Hello, I've been playing around with using MySQL as the membership provider for ASP.NET MVC forms authentication. I've got things configured correctly as far as I can tell, and I can create users via both the register action and the ASP.NET Web Site Administration Tool. However, when I try to log in with one of the users, it does not work; it returns an error as if I had entered a wrong password or the account didn't exist. I have verified in the database that the account does exist. I've followed the instructions here for reference: http://blog.tchami.com/post/ASPNET-MVC-2-and-MySQL-Membership-Provider.aspx

    Here is my web.config:

        <?xml version="1.0"?>
        <!--
          For more information on how to configure your ASP.NET application, please visit
          http://go.microsoft.com/fwlink/?LinkId=152368
        -->
        <configuration>
          <connectionStrings>
            <add name="MySQLConn" connectionString="Server=localhost;Database=intereditor;Uid=<user>;Pwd=<password>;"/>
          </connectionStrings>
          <system.web>
            <compilation debug="true" targetFramework="4.0">
              <assemblies>
                <add assembly="System.Web.Abstractions, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
                <add assembly="System.Web.Routing, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
                <add assembly="System.Web.Mvc, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
              </assemblies>
            </compilation>
            <authentication mode="Forms">
              <forms loginUrl="~/Account/LogOn" timeout="2880" name=".ASPXFORM$" path="/"
                     requireSSL="false" slidingExpiration="true" enableCrossAppRedirects="false" />
            </authentication>
            <membership defaultProvider="MySqlMembershipProvider">
              <providers>
                <clear/>
                <add name="MySqlMembershipProvider"
                     type="MySql.Web.Security.MySQLMembershipProvider, MySql.Web, Version=6.3.4.0, Culture=neutral, PublicKeyToken=c5687fc88969c44d"
                     autogenerateschema="true"
                     connectionStringName="MySQLConn"
                     enablePasswordRetrieval="false"
                     enablePasswordReset="true"
                     requiresQuestionAndAnswer="false"
                     requiresUniqueEmail="false"
                     passwordFormat="Hashed"
                     maxInvalidPasswordAttempts="5"
                     minRequiredPasswordLength="6"
                     minRequiredNonalphanumericCharacters="0"
                     passwordAttemptWindow="10"
                     passwordStrengthRegularExpression=""
                     applicationName="/" />
              </providers>
            </membership>
            <profile defaultProvider="MySqlProfileProvider">
              <providers>
                <clear/>
                <add name="MySqlProfileProvider"
                     type="MySql.Web.Profile.MySQLProfileProvider, MySql.Web, Version=6.3.4.0, Culture=neutral, PublicKeyToken=c5687fc88969c44d"
                     connectionStringName="MySQLConn"
                     applicationName="/" />
              </providers>
            </profile>
            <roleManager enabled="true" defaultProvider="MySqlRoleProvider">
              <providers>
                <clear />
                <add name="MySqlRoleProvider"
                     type="MySql.Web.Security.MySQLRoleProvider, MySql.Web, Version=6.3.4.0, Culture=neutral, PublicKeyToken=c5687fc88969c44d"
                     connectionStringName="MySQLConn"
                     applicationName="/" />
              </providers>
            </roleManager>
            <pages>
              <namespaces>
                <add namespace="System.Web.Mvc" />
                <add namespace="System.Web.Mvc.Ajax" />
                <add namespace="System.Web.Mvc.Html" />
                <add namespace="System.Web.Routing" />
              </namespaces>
            </pages>
          </system.web>
          <system.webServer>
            <validation validateIntegratedModeConfiguration="false"/>
            <modules runAllManagedModulesForAllRequests="true"/>
          </system.webServer>
          <runtime>
            <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
              <dependentAssembly>
                <assemblyIdentity name="System.Web.Mvc" publicKeyToken="31bf3856ad364e35" />
                <bindingRedirect oldVersion="1.0.0.0" newVersion="2.0.0.0" />
              </dependentAssembly>
            </assemblyBinding>
          </runtime>
        </configuration>

    Can anyone please help me identify what is wrong so that users can log in?

    UPDATE: After debugging the login process in the code of the membership provider itself, I discovered that there is a bug in the provider: a discrepancy between the password hash that is stored in the database and the hash that is generated from the entered password. As a workaround for my issue, I changed the password format to 'Encrypted' and added a machine key to my web.config. I am still interested in figuring out the issue with the hashed format in the provider, and will spend some more time debugging it; if I can figure out the problem, I will put together a patch and submit it.

  • Does Security Trimming work with Web Forms Routing?

    - by Slauma
    In my web.config I have configured a SiteMapProvider with securityTrimmingEnabled="true", and on my main master page there is an asp:Menu control bound to an asp:SiteMapDataSource. In addition, I have configured restricted access to all pages in a subfolder "Admin" (using another web.config in this subfolder). If I put a siteMapNode in Web.sitemap...

        <siteMapNode url="~/Admin/Default.aspx" title="Administration" description="" >

    ...only users in the role "Admin" will see the menu item related to that siteMapNode. So this is working fine and as intended. Now I have defined a URL route in Global.asax to map the physical file to a new URL:

        System.Web.Routing.RouteTable.Routes.MapPageRoute(
            "AdminHomeRoute", "Administration/Home", "~/Admin/Default.aspx");

    But when I use this route URL in the SiteMap file...

        <siteMapNode url="Administration/Home" title="Administration" description="" >

    ...it seems that security trimming does not work: the menu item is visible to all users. (Access to the page is still restricted, though, so selecting the menu item as a non-Admin user does not navigate to the restricted page.) Question: Is there any setting I've missed so far to make security trimming work with URL routing in ASP.NET 4.0 Web Forms? Did I do something wrong? Is there any workaround? Thank you for help!
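
    If no setting covers this, one workaround is a custom provider that checks authorization against the physical page behind the route rather than the routed URL. A sketch under the assumption that each routed node carries its physical path in a custom attribute (the attribute name is hypothetical):

        using System.Web;
        using System.Web.Security;

        public class RouteAwareSiteMapProvider : XmlSiteMapProvider
        {
            public override bool IsAccessibleToUser(HttpContext context, SiteMapNode node)
            {
                // e.g. <siteMapNode url="Administration/Home" physicalUrl="~/Admin/Default.aspx" ... >
                string physicalUrl = node["physicalUrl"];
                if (!string.IsNullOrEmpty(physicalUrl))
                {
                    // Apply the subfolder's authorization rules to the routed node
                    return UrlAuthorizationModule.CheckUrlAccessForPrincipal(
                        physicalUrl, context.User, context.Request.HttpMethod);
                }
                return base.IsAccessibleToUser(context, node);
            }
        }

    The custom provider would then replace the default one in the siteMap section of web.config.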
