Search Results

Search found 9696 results on 388 pages for 'proxy authentication'.

Page 348/388 | < Previous Page | 344 345 346 347 348 349 350 351 352 353 354 355  | Next Page >

  • A way to specify a different host in an SSH tunnel from the host in use

    - by Tom
    I am trying to set up an SSH tunnel to access Beanstalk (to bypass an annoying proxy server). I can get this to work, but with one caveat: I have to map my Beanstalk host URL (username.svn.beanstalkapp.com) to 127.0.0.1 in my hosts file (and use the IP in place of the domain when setting up the tunnel).

    The reason (I think) is that I am creating the tunnel using the local SSH instance (on Snow Leopard), and if I use localhost or 127.0.0.1 when talking to Beanstalk, it rejects the authorisation credentials. I believe this is because Beanstalk uses the hostname specified in a request to determine which account the username/password combination should be checked against. If localhost is used, that information is missing (in whatever form Beanstalk requires it) from the requests.

    At the moment I dig the IP for username.svn.beanstalkapp.com, map username.svn.beanstalkapp.com to 127.0.0.1 in my hosts file, then set up the tunnel with:

        ssh -L 8080:ip:443 -p 22 -l tom -N 127.0.0.1

    I can then tell Subversion that the repository is located at https://username.svn.beanstalkapp.com:8080/repo-name. This uses my tunnel, and the username and password are accepted.

    So, my question is whether there is an option when setting up the SSH tunnel which would mean I wouldn't have to use my hosts-file workaround?
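
    For concreteness, the hosts-file half of the workaround described above amounts to a single extra line (the ip placeholder in the ssh command is whatever dig returns for the Beanstalk host):

        # /etc/hosts entry used for the workaround
        127.0.0.1    username.svn.beanstalkapp.com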

    Read the article

  • How can I make an even more random number in ActionScript 2.0

    - by Theo
    I write a piece of software that runs inside banner ads and generates millions of session IDs every day. For a long time I've known that the random number generator in Flash isn't random enough to generate sufficiently unique IDs, so I've employed a number of tricks to get even more random numbers. However, in ActionScript 2.0 it's not easy, and I'm seeing more and more collisions, so I wonder if there is something I've overlooked.

    As far as I can tell, the problem with Math.random() is that it's seeded by the system time, and when you have a sufficient number of simultaneous attempts you're bound to see collisions. In ActionScript 3.0 I use System.totalMemory, but there's no equivalent in ActionScript 2.0. AS3 also has Font.enumerateFonts, and a few other things that differ from system to system. On the server side I also add the IP address to the session ID, but even that isn't enough (for example, many large companies use a single proxy server, which means thousands of people all have the same IP -- and since they tend to look at the same sites, with the same ads, at roughly the same time, there are many session ID collisions).

    What I need isn't something perfectly random, just something random enough to dilute the randomness I get from Math.random(). Think of it this way: there is a certain chance that two people will generate the same random number sequence using only Math.random(), but the chance of two people generating the same sequence and having, say, the exact same list of fonts is significantly lower. I cannot rely on having sufficient script access to use ExternalInterface to get hold of things like the user agent or the URL of the page. I don't need suggestions for how to do it in AS3, or any other system, only AS2 -- using only what's available in the standard APIs.

    The best I've come up with so far is to use the list of microphones (Microphone.names), but I've also tried to do some fingerprinting using the properties in System.capabilities. I'm not sure how much randomness I can get out of that, though, so I'm not using it at the moment. I hope I've overlooked something.
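
    To frame the question, here is a rough sketch of the kind of entropy mixing described above, using only standard AS2 APIs; the function name and the way the pieces are combined are purely illustrative:

        // a sketch only: mix Math.random() with per-machine values available in AS2
        function makeSessionSeed():String {
            var parts:Array = [];
            parts.push(String(Math.random()));
            parts.push(String(getTimer()));                 // milliseconds since the SWF started
            parts.push(Microphone.names.join("|"));         // microphone list varies per machine
            parts.push(System.capabilities.serverString);   // URL-encoded capability fingerprint
            return parts.join("&");
        }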

    Read the article

  • Strange Problem with Webservice and IIS

    - by Rene
    Hello there, I have a problem which confuses me a little bit, and I don't really have any idea what it could be. The system I'm using is Windows Vista, IIS 7.0, VS2008, Windows Software Factory, Entity Framework, WCF. The binding for all web services is wsHttpBinding.

    I'm using a web service hosted in IIS. This web service uses/calls another web service (also installed in IIS). If I use a client calling the first web service (which calls the second web service), it works fine for about 4-10 times. Then (the problem is repeatable, though sometimes it happens after 4 calls and sometimes after 10, it always happens eventually) the service and IIS get stuck. Stuck means the web service isn't callable any more and times out after 1 minute; increasing the timeout doesn't change anything. If I try to restart IIS I get a timeout error, so IIS is also "stuck" (it is not really stuck, but I can't restart it). Only if I kill w3wp.exe is IIS restartable, and the web service will work again (until I again call the service several times).

    The log files (I'm no expert in logging or where to find/enable such logs, so to say: I'm a newbie) -- HTTP logging, Event Viewer, WCF message logging -- don't show any hints about the source of the problem. I don't have this problem when I'm using a web service which doesn't call another service. Calling the web service is done via a service reference (I'm using no proxy classes), but I think this should be no problem. I have no idea what is happening, nor how to solve this problem. Regards Rene

    Read the article

  • wsimport generate a client with cookies

    - by dierre
    I'm generating a client for a SOAP 1.2 service using wsimport from the jaxws-maven-plugin in Maven, with the following execution:

        <groupId>org.jvnet.jax-ws-commons</groupId>
        <artifactId>jaxws-maven-plugin</artifactId>
        <version>2.2</version>
        <executions>
          <execution>
            <goals>
              <goal>wsimport</goal>
            </goals>
            <configuration>
              <sourceDestDir>${project.basedir}/src/main/java</sourceDestDir>
              <wsdlUrls>
                <wsdlUrl>${webservice.url}</wsdlUrl>
              </wsdlUrls>
              <extension>true</extension>
            </configuration>
          </execution>
        </executions>

    The first time the client calls the proxy, the load balancer generates a cookie and sends it back. The client should send it back so the load balancer knows which server is dedicated to a specific client (the idea is that on the first call the client gets a server, the cookie identifies that server, and the load balancer then sends the client to the same server for every subsequent call).

    Now, is there a way to tell the plugin to enable cookie handling automatically?
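
    In case it helps, cookie handling for a JAX-WS client is usually switched on at runtime on the generated port rather than via a wsimport option; a sketch (the generated service and port class names here are hypothetical):

        import javax.xml.ws.BindingProvider;

        MyService port = new MyService_Service().getMyServicePort();   // hypothetical generated names
        ((BindingProvider) port).getRequestContext()
                .put(BindingProvider.SESSION_MAINTAIN_PROPERTY, Boolean.TRUE);   // resend cookies set by the server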

    Read the article

  • Routing trouble for RESTful API - Rails

    - by aressidi
    I'm building out an API for a web app that I've been working on for some time. I've started with the User model. The user portion of the API will allow remote clients to a) retrieve user data, b) update user information and c) create new users. I've gotten all of this to work, but it doesn't seem like it's set up correctly. Here are my questions:

    Should the API endpoint be users or user? What's the best practice? I have to add the action to the end, which I would expect to be picked up instead by the request type, so I don't have to specify it explicitly. How do I get my routes set up properly so as not to have to include the action for protected actions? Let me give some examples:

    GET request for show -- want it to work without the "show":

        curl -u rmbruno:blah http://app.local/api/users/show

    PUT request for update -- want it to work without the "update":

        curl -X put -F 'user[forum_notifications]=true' -u rmbruno:blah http://app.local/api/users/update

    Create -- works with or without 'create', which is what I want for all these actions:

        curl -X post -F 'user[login]=mamafatta' -F 'user[email][email protected]' -F 'user[password]=12345678' http://twye.local/api/users/

    How do I structure routes to not require the action name? Isn't that the common way to do RESTful APIs? Here is my route for the API now:

        map.namespace :api do |route|
          route.resources :users
          route.resources :weight
        end

    I'm using restful authentication, which is handling the HTTP auth in curl. Any guidance on the routes issue and best practice on singular versus plural would be really helpful. Thanks! -A
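
    For comparison, with map.resources the HTTP verb plus the resource path normally selects the action, so the requests above would look roughly like this sketch (the user id 1 is just a placeholder):

        curl -u rmbruno:blah http://app.local/api/users/1                                                # GET  -> show
        curl -X PUT -F 'user[forum_notifications]=true' -u rmbruno:blah http://app.local/api/users/1     # PUT  -> update
        curl -X POST -F 'user[login]=mamafatta' -F 'user[password]=12345678' http://twye.local/api/users # POST -> create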

    Read the article

  • Python 3.0 IDE - Komodo and Eclipse both flaky?

    - by victorhooi
    heya, I'm trying to find a decent IDE that supports Python 3.x and offers code completion, an in-built pydocs viewer, Mercurial integration, and SSH/SFTP support.

    Anyhow, I'm trying Pydev: I open up a .py file, it's in the Pydev perspective, and Run As doesn't offer any options. It does when you start a Pydev project, but I don't want to start a project just to edit one single Python script, lol -- I want to just open a .py file and have It Just Work...

    Plan 2, I try Komodo 6 Alpha 2. I actually quite like Komodo, and it's nice and snappy, offers in-built Mercurial support as well as in-built SSH support (although it lacks SSH HTTP proxy support, which is slightly annoying). However, for some reason, it refuses to pick up Python 3. In Edit - Preferences - Languages there are two options, one for Python and one for Python3, but the Python3 one refuses to work with either the official Python.org binaries or ActiveState's own ActivePython 3. Of course, I can set the "Python" interpreter to the 3.1 binary, but that's an ugly hack and breaks Python 2.x support.

    So, does anybody who uses an IDE for Python have any suggestions on either of these accounts, or can you recommend an alternative IDE for Python 3.0 development? Cheers, Victor

    Read the article

  • SiteMap control based on user roles doesn't work

    - by nCdy
        <siteMapNode roles="*">
          <siteMapNode url="~/Default.aspx" title=" Main" description="Main" roles="*"/>
          <siteMapNode url="~/Items.aspx" title=" Adv" description="Adv" roles="Administrator"/>
          ....

    Any user can see the Adv page. That is the trouble, and the question: why, and how do I hide sitemap nodes from users who are not in the required role? If I call HttpContext.Current.User.IsInRole("Administrator") it does correctly tell me whether the user is in the Administrator role or not.

    web.config:

        <authentication mode="Forms"/>
        <membership defaultProvider="SqlProvider" userIsOnlineTimeWindow="20">
          <providers>
            <add connectionStringName="FlowWebSQL"
                 enablePasswordRetrieval="false"
                 enablePasswordReset="true"
                 requiresQuestionAndAnswer="true"
                 passwordFormat="Hashed"
                 applicationName="/"
                 name="SqlProvider"
                 type="System.Web.Security.SqlMembershipProvider"/>
          </providers>
        </membership>
        <roleManager enabled="true" defaultProvider="SqlProvider">
          <providers>
            <add connectionStringName="FlowWebSQL" name="SqlProvider" type="System.Web.Security.SqlRoleProvider" />
          </providers>
        </roleManager>
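
    One thing worth checking, as a hedged suggestion: roles on siteMapNode elements are only honoured when security trimming is enabled on the site map provider in web.config, along these lines (the provider name and sitemap file name are assumptions):

        <siteMap defaultProvider="XmlSiteMapProvider" enabled="true">
          <providers>
            <add name="XmlSiteMapProvider"
                 type="System.Web.XmlSiteMapProvider"
                 siteMapFile="Web.sitemap"
                 securityTrimmingEnabled="true" />
          </providers>
        </siteMap>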

    Read the article

  • Java Client .class File Protection

    - by Zac
    I am in the requirements phase of building a JEE application that will most likely run on a GlassFish/JBoss backend (doesn't matter for now). I know I shouldn't be thinking about architecture at requirements time, but one can't help but start to imagine how the components would all snap together :-) Here are some hard, non-flexible requirements on the client side:

    (1) The client application will be a Swing box
    (2) The client is free to download, but will use a subscription model (thus requiring a login mechanism with server-side authentication/authorization, etc.)
    (3) Yes, Java is the best platform solution for the problem at hand, for reasons outside the scope of this post
    (4) The client-side .class files need safeguarding against decompiling

    That last (4th) requirement is the basis of this post. I'm not really worried about someone actually decompiling and getting at my source code: in the end, it's just Swing controls driven by some lightweight business logic. I'm worried about a scenario where someone decompiles my code, modifies it to exploit/attack the server, re-compiles, and fires it up. I've envisioned all sorts of nasty solutions, but didn't know if this was a common problem with a common solution for JEE developers. Any thoughts? Not interested in "code obfuscation" techniques! Thanks for any input!

    Read the article

  • Adding a cookie in Global.asax produces a warning in the application log

    - by Ioxp
    In my Global.asax file I have the following:

        System.Web.HttpCookie isAccess = new System.Web.HttpCookie("IsAccess");
        isAccess.Expires = DateTime.Now.AddDays(-1);
        isAccess.Value = "";
        System.Web.HttpContext.Current.Response.Cookies.Add(isAccess);

    Every time this method runs, the following is logged in the application events as a warning:

        Event code: 3005
        Event message: An unhandled exception has occurred.
        Event time: 5/25/2010 12:23:20 PM
        Event time (UTC): 5/25/2010 4:23:20 PM
        Event ID: c515e27a28474eab8d99720c3f5a8e90
        Event sequence: 4148
        Event occurrence: 332
        Event detail code: 0

        Application information:
            Application domain: /LM/W3SVC/2100509645/Root-1-129192259222289896
            Trust level: Full
            Application Virtual Path: /
            Application Path: <PathRemoved>\www\
            Machine name: TIPPER

        Process information:
            Process ID: 6936
            Process name: w3wp.exe
            Account name: NT AUTHORITY\NETWORK SERVICE

        Exception information:
            Exception type: NullReferenceException
            Exception message: Object reference not set to an instance of an object.

        Request information:
            Request URL:
            Request path:
            User host address:
            User:
            Is authenticated: False
            Authentication Type:
            Thread account name: NT AUTHORITY\NETWORK SERVICE

        Thread information:
            Thread ID: 7
            Thread account name: NT AUTHORITY\NETWORK SERVICE
            Is impersonating: False
            Stack trace: at ASP.global_asax.Session_End(Object sender, EventArgs e) in <PathRemoved>\Global.asax:line 113

    Any idea why this code would cause this error?
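
    The stack trace points at Session_End; as a hedged aside, Session_End is raised outside of any request, so HttpContext.Current (and its Response) is typically not available there. A guarded sketch of the same cookie code:

        protected void Session_End(object sender, EventArgs e)
        {
            var context = System.Web.HttpContext.Current;
            if (context == null || context.Response == null)
                return;   // no active request/response to write a cookie to from Session_End

            System.Web.HttpCookie isAccess = new System.Web.HttpCookie("IsAccess");
            isAccess.Expires = DateTime.Now.AddDays(-1);
            isAccess.Value = "";
            context.Response.Cookies.Add(isAccess);
        }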

    Read the article

  • cookie name is truncated in Servlet 3.0 HttpServletRequest (Glassfish V3)

    - by idplmal
    I'm porting our web authentication/authorization middleware for use in containers implementing the new Servlet 3.0 API (Glassfish V3 in this case). The middleware pulls cookies from the HttpServletRequest, filtering on cookies with names of the form "DACS:FEDERATION::JURISDICTION:username". This works fine with the version 2.5 servlet API but is broken in 3.0: the cookie names in 3.0 are being truncated at the first ":" in the name.

    I understand that the Servlet 3.0 implementation defaults to RFC 2109 cookies, which are more restrictive about cookie names than the old Netscape spec (":" is among the characters not allowed in RFC 2109 cookie names). Digging into the Servlet 3.0 source code, it appears that the use of RFC 2109 names can be disabled by setting a system property "org.glassfish.web.rfc2109.cookie_names_enforced" to false. I've tried this to no avail. Besides that, the code that checks cookie names is in the constructor for Cookie, and it would appear that the truncation is occurring elsewhere.

    So -- finally -- the question. Have others bumped into such issues in the Servlet 3.0 API, and have you found a workaround?

    Read the article

  • restrict script inside iframe to run only within pages of same top-level domain?

    - by Justin Grant
    I'd like to enforce a requirement that client script inside a page (which in turn is loaded inside an iframe of another page) will only run when the parent page is on the same top-level domain as the framed page (although it may be on another hostname in that domain). Is this do-able? I assume that the easy solution of looking at top.location.host won't be available due to cross-site scripting limitations, but I'm wondering if other javascript hackery could suffice. Constraints on any potential solution include:

    - I need to be able to run XmlHttpRequest calls inside the child page, and I need to validate that the hostname is in the same domain before I make those calls. (This makes a document.domain solution challenging because, AFAIK, setting document.domain disables the ability to make XmlHttpRequest calls.)
    - I can control client-side script and HTML on both the parent and the child (and I can create new pages if needed), but I can't make any server-side code changes.
    - I can't simulate the above via server-side calls or proxies, because the child page's hostname uses a forms auth system with hostname-scoped cookies that I can't get access to from the parent page, since it's on a different hostname.
    - I don't have enough control over the child-frame site to be able to put both sites behind the same reverse proxy or load balancer (which would let me put both sites on the same hostname).
    - I don't actually need to access any UI inside the iframe -- the iframe is invisible and I'm only using it to run javascript within the security context of a site on a different hostname from the parent page.

    So at this point I'm stumped. Got any ideas? I want to make sure I'm not overlooking an easy solution before giving up.

    Read the article

  • How to suppress/control logging of Wagon-FTP Maven extension?

    - by Vincenzo
    I'm deploying a Maven site by FTP, using Wagon-FTP. It works fine, but the output is full of FTP connection/authentication details, which effectively exposes logins and passwords to everybody (especially if the project is open source and its CI protocols are publicly accessible):

        [...]
        [INFO]
        [INFO] --- maven-site-plugin:3.0-beta-3:deploy (default-deploy) @ rempl ---
        Reply received: 220---------- Welcome to Pure-FTPd [privsep] [TLS] ----------
        220-You are user number 1 of 50 allowed.
        220-Local time is now 09:08. Server port: 21.
        220 You will be disconnected after 15 minutes of inactivity.
        Command sent: USER ****
        Reply received: 331 User **** OK. Password required
        Command sent: PASS ********
        Reply received: 230-User **** has group access to: ***
        230 OK. Current restricted directory is /
        [...]

    Is it possible to suppress this logging? Or configure it... This is the section of my pom.xml where Wagon-FTP is used:

        [...]
        <build>
          <extensions>
            <extension>
              <groupId>org.apache.maven.wagon</groupId>
              <artifactId>wagon-ftp</artifactId>
              <version>1.0-beta-7</version>
            </extension>
          </extensions>
          [...]
        </build>
        [...]

    Read the article

  • C# webservice async callback not getting called on HTTP 407 error.

    - by Ben
    Hi, I am trying to test the use case of a customer behind a proxy that requires login credentials, trying to use our web service from our client. If the request is synchronous, my job is easy: catch the WebException, check for the 407 code, and prompt the user for the login credentials. However, for async requests I seem to be running into a problem: the callback is never getting called! I ran a Wireshark trace and did indeed see that the HTTP 407 error was being passed back, so I am bewildered as to what to do.

    Here is the code that sets up the callback and starts the request:

        TravelService.TravelServiceImplService svc = new TravelService.TravelServiceImplService();
        svc.Url = svcUrl;
        svc.CreateEventCompleted += CbkCreateEventCompleted;
        svc.CreateEventAsync(crReq, req);

    And the code that was generated when I consumed the WSDL:

        public void CreateEventAsync(TravelServiceCreateEventRequest CreateEventRequest, object userState) {
            if ((this.CreateEventOperationCompleted == null)) {
                this.CreateEventOperationCompleted = new System.Threading.SendOrPostCallback(this.OnCreateEventOperationCompleted);
            }
            this.InvokeAsync("CreateEvent", new object[] { CreateEventRequest }, this.CreateEventOperationCompleted, userState);
        }

        private void OnCreateEventOperationCompleted(object arg) {
            if ((this.CreateEventCompleted != null)) {
                System.Web.Services.Protocols.InvokeCompletedEventArgs invokeArgs = ((System.Web.Services.Protocols.InvokeCompletedEventArgs)(arg));
                this.CreateEventCompleted(this, new CreateEventCompletedEventArgs(invokeArgs.Results, invokeArgs.Error, invokeArgs.Cancelled, invokeArgs.UserState));
            }
        }

    Debugging the WS code, I found that even the SoapHttpClientProtocol.InvokeAsync method was not calling its callback. Am I missing some sort of configuration?
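
    For reference, the synchronous 407 check described above looks roughly like this sketch (the synchronous method name is assumed from the async one):

        try
        {
            var result = svc.CreateEvent(crReq);   // assumed synchronous counterpart of CreateEventAsync
        }
        catch (System.Net.WebException ex)
        {
            var response = ex.Response as System.Net.HttpWebResponse;
            if (response != null && response.StatusCode == System.Net.HttpStatusCode.ProxyAuthenticationRequired)
            {
                // 407: prompt the user for proxy credentials and retry
            }
            else
            {
                throw;
            }
        }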

    Read the article

  • How to intercept, parse and compile?

    - by epitka
    This is a problem I've been struggling to solve for a while. I need a way either to replace code in a method with parsed code from a template at compile time (PostSharp comes to mind) or to create a dynamic proxy (LinFu or Castle). So given source code like this:

        [Template]
        private string GetSomething()
        {
            var template = [%=Customer.Name%]
        }

    I need it to be compiled into this:

        private string GetSomething()
        {
            MemoryStream mStream = new MemoryStream();
            StreamWriter writer = new StreamWriter(mStream, System.Text.Encoding.UTF8);
            writer.Write(@"" );
            writer.Write(Customer.Name);
            StreamReader sr = new StreamReader(mStream);
            writer.Flush();
            mStream.Position = 0;
            return sr.ReadToEnd();
        }

    It is not important which technology is used. I tried PostSharp's ImplementMethodAspect but got nowhere (due to lack of experience with it). I also looked into the LinFu framework. If somebody can suggest some other approach or way to do this, I would really appreciate it. My whole project depends on this.

    Read the article

  • NHibernate join on a table twice

    - by Zuber
    Consider the following class structure...

        public class ListViewControl
        {
            public int SystemId {get; set;}
            public List<ControlAction> Actions {get; set;}
            public List<ControlAction> ListViewActions {get; set;}
        }

        public class ControlAction
        {
            public string blahBlah {get; set;}
        }

    I want to load class ListViewControl eagerly using NHibernate. The mapping, using Fluent, is as shown below:

        public UIControlMap()
        {
            Id(x => x.SystemId);
            HasMany(x => x.Actions)
                .KeyColumn("ActionId")
                .Cascade.AllDeleteOrphan()
                .AsBag()
                .Cache.ReadWrite().IncludeAll();
            HasMany(x => x.ListViewActions)
                .KeyColumn("ListViewActionId")
                .Cascade.AllDeleteOrphan()
                .AsBag()
                .Cache.ReadWrite().IncludeAll();
        }

    This is how I am trying to load it eagerly:

        var baseActions = DetachedCriteria.For<ListViewControl>()
            .CreateCriteria("Actions", JoinType.InnerJoin)
            .SetFetchMode("BlahBlah", FetchMode.Eager)
            .SetResultTransformer(new DistinctRootEntityResultTransformer());
        var listViewActions = DetachedCriteria.For<ListViewControl>()
            .CreateCriteria("ListViewActions", JoinType.InnerJoin)
            .SetFetchMode("BlahBlah", FetchMode.Eager)
            .SetResultTransformer(new DistinctRootEntityResultTransformer());
        var listViews = DetachedCriteria.For<ListViewControl>()
            .SetFetchMode("Actions", FetchMode.Eager)
            .SetFetchMode("ListViewActions", FetchMode.Eager)
            .SetResultTransformer(new DistinctRootEntityResultTransformer());

        var result = _session.CreateMultiCriteria()
            .Add("listViewActions", listViewActions)
            .Add("baseActions", baseActions)
            .Add("listViews", listViews)
            .SetResultTransformer(new DistinctRootEntityResultTransformer())
            .GetResult("listViews");

    Now, my problem is that the class ListViewControl gets the correct records in both Actions and ListViewActions, but there are multiple entries of the same record. The number of duplicate entries is equal to the number of joins made to the ControlAction table, in this case two. How can I avoid this? If I remove the SetFetchMode from the listViews query, the actions are loaded lazily through a proxy, which I don't want.

    Read the article

  • What do I name this class whose sole purpose is to report failure?

    - by Blair Holloway
    In our system, we have a number of classes whose construction must happen asynchronously. We wrap the construction process in another class that derives from an IConstructor class:

        class IConstructor {
        public:
            virtual void Update() = 0;
            virtual Status GetStatus() = 0;
            virtual int GetLastError() = 0;
        };

    There's an issue with the design of the current system -- the functions that create the IConstructor-derived classes are often doing additional work which can also fail. At that point, instead of getting a constructor which can be queried for an error, a NULL pointer is returned. Restructuring the code to avoid this is possible, but time-consuming. In the meantime, I decided to create a constructor class which we create and return in case of error, instead of a NULL pointer:

        class FailedConstructor : public IConstructor {
        public:
            virtual void Update() {}
            virtual Status GetStatus() { return STATUS_ERROR; }
            virtual int GetLastError() { return m_errorCode; }
        private:
            int m_errorCode;
        };

    All of the above is the setup for a mundane question: what do I name the FailedConstructor class? In our current system, FailedConstructor would indicate "a class which constructs an instance of Failed", not "a class which represents a failed attempt to construct another class". I feel like it should be named for one of the design patterns, like Proxy or Adapter, but I'm not sure which.

    Read the article

  • Are these settings correct for sending mail through Rails/Gmail?

    - by aressidi
    Hi there, I spent a good deal of time building an email system for my Rails app that uses Gmail to send bulk mail to a list of opt-in users. I've since realized a shortcoming of using Google Apps for my mail, namely a rate limit on the number of emails it will send out (I believe 500). Anyway, I have reached out to my users to see how many received the email, and a lot of them have not, though some have. The list I tried sending to was about 540 users, so I would have expected more "yes, got it" than "nope, still waiting" responses. I have two questions:

    1. Do these settings look correct for outgoing bulk mailing through Gmail? Again, I'm using Google Apps to manage my domain, and I know some people (including myself) have received the mailer. This is in a mail.rb initializer in my app.

        ActionMailer::Base.delivery_method = :sendmail
        ActionMailer::Base.smtp_settings = {
          :address        => "smtp.gmail.com",
          :port           => 25,
          :domain         => "mydomain.com",
          :authentication => :login,
          :user_name      => "[email protected]",
          :password       => "mypass"
        }

    2. Is there any way I can test whether the mail was delivered, or at least that delivery was attempted? I can't tell where in the list the mailer stops mailing!

    The way I generate the list is through a query which then passes the user info to a mailer worker, which sends the emails out via Starling/Workling. Any advice here would be useful. Happy to post code, but want to make sure the method I'm using is sound. Thanks for the help!
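
    As a point of comparison only (not a verdict on the settings above): guides for ActionMailer with Gmail typically use :smtp rather than :sendmail, with TLS on port 587, along these lines -- treat the exact option keys as assumptions to check against your Rails version:

        ActionMailer::Base.delivery_method = :smtp
        ActionMailer::Base.smtp_settings = {
          :address              => "smtp.gmail.com",
          :port                 => 587,
          :domain               => "mydomain.com",
          :authentication       => :plain,
          :user_name            => "[email protected]",
          :password             => "mypass",
          :enable_starttls_auto => true
        }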

    Read the article

  • Silverlight 3 + Java WebService

    - by Heko
    Hello! I have a Silverlight 3 project, and I need to call a Java web service -- the bindings are ok (SOAP 1.1 and basicHttpBinding). ClientConfig file:

        <configuration>
          <system.serviceModel>
            <bindings>
              <basicHttpBinding>
                <binding name="SkyinfoTestInterfaceExport2_SkyinfoTestInterfaceHttpBinding"
                         maxBufferSize="2147483647"
                         maxReceivedMessageSize="2147483647">
                  <security mode="None">
                    <transport>
                      <extendedProtectionPolicy policyEnforcement="Never" />
                    </transport>
                  </security>
                </binding>
              </basicHttpBinding>
            </bindings>
            <client>
              <endpoint address="myAddress"
                        binding="basicHttpBinding"
                        bindingConfiguration="SkyinfoTestInterfaceExport2_SkyinfoTestInterfaceHttpBinding"
                        contract="SkyInfoServiceReference.SkyinfoTestInterface"
                        name="SkyinfoTestInterfaceExport2_SkyinfoTestInterfaceHttpPort" />
            </client>
          </system.serviceModel>
        </configuration>

    When I call a method on the client I get this policy error:

        An error occurred while trying to make a request to URI '...'. This could be due to attempting to access
        a service in a cross-domain way without a proper cross-domain policy in place, or a policy that is
        unsuitable for SOAP services. You may need to contact the owner of the service to publish a cross-domain
        policy file and to ensure it allows SOAP-related HTTP headers to be sent. This error may also be caused
        by using internal types in the web service proxy without using the InternalsVisibleToAttribute attribute.
        Please see the inner exception for more details.

    I know about those two policy XML files, but the Java EE service which I'm trying to call is hosted on an IBM WebSphere Process Server to which I don't have access. Does anybody know how to work around this policy exception?
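
    For reference only (the question notes there is no access to the host), this is roughly the clientaccesspolicy.xml Silverlight looks for at the root of the service's domain; the exact attribute values here are an assumption to verify against the Silverlight documentation:

        <?xml version="1.0" encoding="utf-8"?>
        <access-policy>
          <cross-domain-access>
            <policy>
              <allow-from http-request-headers="SOAPAction">
                <domain uri="*"/>
              </allow-from>
              <grant-to>
                <resource path="/" include-subpaths="true"/>
              </grant-to>
            </policy>
          </cross-domain-access>
        </access-policy>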

    Read the article

  • Destructors not called when native (C++) exception propagates to CLR component

    - by Phil Nash
    We have a large body of native C++ code, compiled into DLLs. Then we have a couple of DLLs containing C++/CLI proxy code to wrap the C++ interfaces. On top of that we have C# code calling into the C++/CLI wrappers. Standard stuff, so far.

    But we have a lot of cases where native C++ exceptions are allowed to propagate to the .NET world, and we rely on .NET's ability to wrap these as System.Exception objects; for the most part this works fine. However, we have been finding that destructors of objects in scope at the point of the throw are not being invoked when the exception propagates! After some research we found that this is a fairly well known issue. However, the solutions/workarounds seem less consistent. We did find that if the native code is compiled with /EHa instead of /EHsc the issue disappears (at least in our test case it did). However, we would much prefer to use /EHsc, as we translate SEH exceptions to C++ exceptions ourselves and we would rather allow the compiler more scope for optimisation. Are there any other workarounds for this issue -- other than wrapping every call across the native-managed boundary in a (native) try-catch-throw (in addition to the C++/CLI layer)?
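
    To make the last workaround concrete, a sketch of the kind of boundary wrapper being referred to (type and method names are placeholders):

        // catch before the exception crosses into managed code, so local destructors
        // run during normal C++ unwinding, then surface the failure to the CLR side
        void Wrapper::DoWork()
        {
            try
            {
                m_native->DoWork();   // native call; unwinding happens inside this try
            }
            catch (const std::exception& e)
            {
                throw gcnew System::Exception(gcnew System::String(e.what()));
            }
        }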

    Read the article

  • CheckForModifications failure in continuous integration using VisualSVN Server and CruiseControl.NET

    - by harun123
    I am using CruiseControl.NET for continuous integration. I've created a repository for my project using VisualSVN Server (which uses Windows authentication). Both servers are hosted on the same system (OS: Microsoft Windows Server 2003 SP2). When I force-build the project using CruiseControl.NET, "Failed task(s): Svn: CheckForModifications" is shown as the message. When I check the build report, it says the following:

        BUILD EXCEPTION
        Error Message: ThoughtWorks.CruiseControl.Core.CruiseControlException: Source control operation failed:
        svn: OPTIONS of 'https://sp-ci.sbsnetwork.local:8443/svn/IntranetPortal/Source':
        Server certificate verification failed: issuer is not trusted (https://sp-ci.sbsnetwork.local:8443).
        Process command: C:\Program Files\VisualSVN Server\bin\svn.exe log sameUrlAbove
            -r "{2010-04-29T08:35:26Z}:{2010-04-29T09:04:02Z}" --verbose --xml
            --username ccnetadmin --password cruise --non-interactive --no-auth-cache
        at ThoughtWorks.CruiseControl.Core.Sourcecontrol.ProcessSourceControl.Execute(ProcessInfo processInfo)
        at ThoughtWorks.CruiseControl.Core.Sourcecontrol.Svn.GetModifications(IIntegrationResult from, IIntegrationResult to)
        at ThoughtWorks.CruiseControl.Core.Sourcecontrol.QuietPeriod.GetModifications(ISourceControl sourceControl, IIntegrationResult lastBuild, IIntegrationResult thisBuild)
        at ThoughtWorks.CruiseControl.Core.IntegrationRunner.GetModifications(IIntegrationResult from, IIntegrationResult to)
        at ThoughtWorks.CruiseControl.Core.IntegrationRunner.Integrate(IntegrationRequest request)

    My sourcecontrol node in ccnet.config is as shown below:

        <sourcecontrol type="svn">
          <executable>C:\Program Files\VisualSVN Server\bin\svn.exe</executable>
          <trunkUrl> check out url </trunkUrl>
          <workingDirectory> C:\ProjectWorkingDirectories\IntranetPortal\Source </workingDirectory>
          <username> ccnetadmin </username>
          <password> cruise </password>
        </sourcecontrol>

    Can anyone suggest how to avoid this error?
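
    A hedged suggestion, since the failure is svn rejecting the server's certificate: the certificate generally has to be accepted by the account that runs CruiseControl.NET (svn caches the acceptance per user), for example by running a one-off command like the one below under that account and accepting the certificate permanently; newer svn clients also offer a --trust-server-cert option for non-interactive use.

        "C:\Program Files\VisualSVN Server\bin\svn.exe" list https://sp-ci.sbsnetwork.local:8443/svn/IntranetPortal/Source
        (answer "p" at the prompt to accept the certificate permanently)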

    Read the article

  • What MS technology to use for HTTP service returning XML?

    - by Borek
    I need to create a service that:

    - accepts HTTP requests (with query string or HTTP POST parameters)
    - does some processing on the requests (checking whether the request is valid, authentication, etc.)
    - reads data from a custom store (another HTTP call in our case)
    - returns the result as custom XML (defined with XSD)

    I'm trying to think of the various MS technologies that could help me and how well they would suit this scenario (a pretty standard one, I guess). The tasks above are relatively separate; this is what comes to mind:

    HTTP front end:
    - ASP.NET Web Forms
    - ASP.NET MVC (seems more appropriate here, as I won't need server controls, view state, etc.)
    - WCF? I don't know much about it or how well it would suit my task.

    Custom logic on the server:
    - This will probably be generic C# code in all cases (sometimes "plugged into" or called from MVC controllers or some equivalent place in other technologies).

    Reading data from internal data stores (as said, this is another HTTP server in our case):
    - Just read the data using something like WebClient
    - (Just theoretically) implement a LINQ provider
    - (Just even more theoretically) implement an EF provider

    Output the data as custom XML:
    - Linq2XML (LINQ to XML; see the short sketch after this list)
    - Serialization? Is it flexible enough? Does WCF provide some tools for this?
    - Some "OXM" -- an object/XML mapper, if there is something like that for .NET

    I may be wrong in many of my assumptions; this is just a quick list that comes to mind after a quick research. Some general notes/questions:

    - Testing is important
    - A solution with a clear domain model would be much preferred over one without
    - Can Entity Framework actually help somewhere in my scenario? If so, where and how?
    - Would WCF be an appropriate technology for this? I don't know much about it.
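
    A minimal LINQ to XML sketch of that last option, building custom XML from an in-memory collection (the element, attribute, and property names are purely illustrative):

        // System.Xml.Linq; "products" is any IEnumerable of an illustrative Product type
        var doc = new XDocument(
            new XElement("Products",
                from p in products
                select new XElement("Product",
                    new XAttribute("Id", p.Id),
                    new XElement("Name", p.Name))));
        string xml = doc.ToString();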

    Read the article

  • backbone.js Model.get() returns undefined, scope using coffeescript + coffee toaster?

    - by benipsen
    I'm writing an app using CoffeeScript with coffee toaster (an awesome NPM module for stitching) that builds my app.js file. Lots of my application classes and templates require info about the current user, so I have an instance of class User (extends Backbone.Model) stored as a property of my main Application class (extends Backbone.Router). As part of the initialization routine I grab the user from the server (which takes care of authentication, roles, account switching, etc.). Here's that CoffeeScript:

        @user = new models.User
        @user.fetch()
        console.log(@user)
        console.log(@user.get('email'))

    The first logging statement outputs the correct Backbone.Model attributes object in the console, just as it should:

        User
          _changing: false
          _escapedAttributes: Object
          _pending: Object
          _previousAttributes: Object
          _silent: Object
          attributes: Object
            account: Object
            created_on: "1983-12-13 00:00:00"
            email: "[email protected]"
            icon: "0"
            id: "1"
            last_login: "2012-06-07 02:31:38"
            name: "Ben Ipsen"
            roles: Object
            __proto__: Object
          changed: Object
          cid: "c0"
          id: "1"
          __proto__: ctor                                   app.js:228

    However, the second returns undefined, despite the model attributes clearly being there in the console when logged. And just to make things even more interesting, typing "window.app.user.get('email')" into the console manually returns the expected value of "[email protected]"... ?

    Just for reference, here's how the initialize method compiles into my app.js file:

        Application.prototype.initialize = function() {
          var isMobile;
          isMobile = navigator.userAgent.match(/(iPhone|iPod|iPad|Android|BlackBerry)/);
          this.helpers = new views.DOMHelpers().initialize().setup_viewport(isMobile);
          this.user = new models.User();
          this.user.fetch();
          console.log(this.user);
          console.log(this.user.get('email'));
          return this;
        };

    I initialize the Application controller in my static HTML like so:

        jQuery(document).ready(function(){
          window.app = new controllers.Application();
        });

    Suggestions please, and thank you!
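
    For what it's worth, Backbone's fetch() is asynchronous, so one common pattern is to read attributes only once the request has completed -- a sketch in the same CoffeeScript style, using fetch's standard success callback:

        @user = new models.User
        @user.fetch
          success: (model, response) =>
            console.log model.get('email')   # attributes are populated by the time this runs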

    Read the article

  • Why am I getting "(304) Not Modified" error on some links when using HttpWebRequest?

    - by Greg
    Hi, any ideas why, for some links that I try to access using HttpWebRequest, I am getting "The remote server returned an error: (304) Not Modified." in the code?

    The code I'm using is from Jeff's post here. Note that the concept of the code is a simple proxy server, so I'm pointing my browser at this locally running piece of code, which gets my browser's request and then proxies it on by creating a new HttpWebRequest, as you'll see in the code. It works great for most sites/links, but for some this error comes up. You will see one key bit in the code where it copies the HTTP header settings from the browser request to its own request out to the site, including this header attribute:

        case "If-Modified-Since":
            request.IfModifiedSince = DateTime.Parse(listenerContext.Request.Headers[key]);
            break;

    I'm not sure if the issue is something to do with how it mimics this aspect of the request and what then happens as the result comes back. I get the issue, for example, from http://en.wikipedia.org/wiki/Main_Page. Thanks
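
    As a hedged aside on the mechanics: HttpWebRequest reports any non-success status, including 304, by throwing a WebException from GetResponse(), so a proxy that forwards If-Modified-Since usually has to treat 304 as a normal answer rather than a failure. A sketch:

        HttpWebResponse response;
        try
        {
            response = (HttpWebResponse)request.GetResponse();
        }
        catch (WebException ex)
        {
            var notModified = ex.Response as HttpWebResponse;
            if (notModified != null && notModified.StatusCode == HttpStatusCode.NotModified)
                response = notModified;   // 304 is a valid reply to relay back to the browser
            else
                throw;
        }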

    Read the article

  • ASP.NET MVC Authorize by Group

    - by Jimmo
    I have what seems like a common issue with SaaS applications, but I have not seen this question on here anywhere. I am using ASP.NET MVC with Forms Authentication. I have implemented a custom membership provider to handle the logic, but have one issue (perhaps the issue is in my mental picture of the system).

    As with many SaaS apps, customers create accounts and use the app in a way that looks like they are the only ones present (they only see their items, users, etc.). In reality, there are generic controllers and views presenting data depending on their account. When calling something like ValidateUser, I have access to their affiliation in the User object -- what I don't have is the context of the request to compare it against. As an example:

    One company called ABC goes to abc.mysite.com. Another company called XYZ goes to xyz.mysite.com.

    When an ABC user calls http://abc.mysite.com/product/edit/12, I have an [Authorize] attribute on the Edit method in the ProductController to make sure he is signed in and has sufficient permission to do so. If that same ABC user tried to access http://xyz.mysite.com/product/edit/12, I would not want to validate him in the context of that call. In the ValidateUser of the MembershipProvider, I have the information about the user, but not about the request: I can tell that the user is from ABC, but I cannot tell at that point in the code that the request is for XYZ. How should I resolve this?
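
    One commonly suggested direction (a sketch only, not a full solution): move the host check out of the membership provider and into a custom authorize attribute, where the request is available. The type and helper names below are hypothetical:

        public class AccountAuthorizeAttribute : AuthorizeAttribute
        {
            protected override bool AuthorizeCore(HttpContextBase httpContext)
            {
                if (!base.AuthorizeCore(httpContext))
                    return false;

                // "abc" from abc.mysite.com
                string subdomain = httpContext.Request.Url.Host.Split('.')[0];

                // hypothetical lookup: does the signed-in user belong to the account for this subdomain?
                return AccountService.UserBelongsTo(httpContext.User.Identity.Name, subdomain);
            }
        }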

    Read the article

  • Redirect uploaded files to another server, using nginx

    - by Serg ikS
    I am creating a web service for scheduled posts to some social network, and need help dealing with file uploads under high traffic.

    Process overview: the user uploads files to SomeServer (not mine). SomeServer then responds with a JSON string. My web app should store that JSON response.

    Opt. 1 -- Save, cURL POST, delete tmp
    The stupid way I made it work: the user uploads files to MyWebApp; MyWebApp cURLs the file on to SomeServer, getting the response.

    Opt. 2 -- JS magic
    The smart way it could be perfect: the user uploads the file directly to SomeServer, from within an iframe; MyWebApp gets the response through JavaScript. But this is(?) impossible due to the 'Same Origin Policy', isn't it?

    Opt. 3 -- nginx proxying?
    The better way for a production server: the user uploads files to MyWebApp; nginx intercepts the file uploads and sends them directly to SomeServer; the JSON response is also intercepted by nginx and processed by MyWebApp.

    Does this make any sense, and what would the nginx config be for, say, a /fileupload location to proxy it to SomeServer?
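
    To make option 3 concrete, a minimal sketch of the kind of location block being asked about -- the upstream host, path, and size limit are placeholders, and the "process the JSON response in MyWebApp" part would still need something extra, since nginx alone only relays the reply:

        location /fileupload {
            proxy_pass           https://someserver.example.com/upload;
            proxy_set_header     Host someserver.example.com;
            client_max_body_size 50m;   # allow reasonably large uploads through the proxy
        }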

    Read the article
