Search Results

Search found 9696 results on 388 pages for 'proxy authentication'.

Page 346/388

  • Autologin for web application

    - by Maulin
    We want an AutoLogin feature that lets a user log in to our web application directly from a link. What is the best way to achieve this? We have the following approaches in mind. 1) Store the user's credentials (username/password) in a cookie and send the cookie for authentication, e.g. http://www.mysite.com/AutoLogin (username/password passed in the cookie), or pass the credentials in the link URL itself: http://www.mysite.com/AutoLogin?userid=<userid>&password=<password>. 2) Generate a random token and store the token and the user's IP in a server-side database. When the user logs in via the link, validate the token and IP on the server, e.g. http://www.mysite.com/AutoLogin?token=<token>. The problem with the first approach is that if a hacker copies the link/cookie from the user's machine to another machine, he can log in. The problem with the second approach is that the IP will be the same for all users of an organization behind a proxy. Which of the two is better from a security perspective? If there is a better solution other than those mentioned above, please let us know.
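
    A minimal sketch of the token generation for approach 2, assuming .NET on the server; the class and method names are illustrative, and in practice the token should be tied to an expiry time rather than only to an IP:

        using System;
        using System.Security.Cryptography;

        public static class AutoLoginTokens
        {
            // Generate an unguessable 256-bit token; store it (or its hash)
            // server-side with an expiry timestamp, keyed to the user account.
            public static string CreateToken()
            {
                byte[] buffer = new byte[32];
                RNGCryptoServiceProvider rng = new RNGCryptoServiceProvider();
                rng.GetBytes(buffer);
                return Convert.ToBase64String(buffer);
            }
        }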

    Read the article

  • Global.asax Event: Application_OnPostAuthenticateRequest

    - by Hemant Kothiyal
    Hi, I am using the Application_OnPostAuthenticateRequest event in global.asax to fetch the roles and permissions of the authenticated user. I have also written a custom principal class to hold the user details, roles, and permissions — information that stays the same for that user. The code is as follows:

        void Application_OnPostAuthenticateRequest(object sender, EventArgs e)
        {
            // Get a reference to the current user
            IPrincipal objIPrincipal = HttpContext.Current.User;

            // If we are dealing with an authenticated forms authentication request
            if ((objIPrincipal.Identity.IsAuthenticated) && (objIPrincipal.Identity.AuthenticationType == "Forms"))
            {
                CustomPrincipal objCustomPrincipal = new CustomPrincipal();
                objCustomPrincipal = objCustomPrincipal.GetCustomPrincipalObject(objIPrincipal.Identity.Name);
                HttpContext.Current.User = objCustomPrincipal;
                CustomIdentity ci = (CustomIdentity)objCustomPrincipal.Identity;
                HttpContext.Current.Cache["CountryID"] = FatchMasterInfo.GetCountryID(ci.CultureId);
                HttpContext.Current.Cache["WeatherLocationID"] = FatchMasterInfo.GetWeatherLocationId(ci.UserId);
                Thread.CurrentPrincipal = objCustomPrincipal;
            }
        }

    My questions are as follows: this event fires for every request, so the code executes on each request — is my approach right? And is it right to populate HttpContext.Current.Cache in this event, or should that move to session start?
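
    One detail worth flagging in the snippet above: HttpContext.Current.Cache is application-wide, not per-user, so a fixed key such as "CountryID" is overwritten by every authenticated request. A sketch of a user-scoped key (the key format here is just an illustration):

        // Hypothetical sketch: scope application-cache keys per user so one
        // user's values do not overwrite another's on concurrent requests.
        string countryKey = "CountryID_" + ci.UserId;
        if (HttpContext.Current.Cache[countryKey] == null)
        {
            HttpContext.Current.Cache[countryKey] = FatchMasterInfo.GetCountryID(ci.CultureId);
        }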

    Read the article

  • REST API error return good practices

    - by Remus Rusanu
    I'm looking for guidance on good practices for returning errors from a REST API. I'm working on a new API, so I can take it in any direction right now. My content type is XML at the moment, but I plan to support JSON in the future. I am now adding some error cases, for instance a client attempting to add a new resource after exceeding his storage quota. I am already handling certain error cases with HTTP status codes (401 for authentication, 403 for authorization, and 404 for plain bad request URIs). I looked over the blessed HTTP error codes, but none of the 400-417 range seems right for reporting application-specific errors. So at first I was tempted to return my application error as 200 OK with a specific XML payload (i.e. "Pay us more and you'll get the storage you need!"), but I stopped to think about it and it seems too SOAPy (/shrug in horror). Besides, it feels like I'm splitting the error responses into distinct cases, since some are status-code driven and others are content driven. So what is the SO crowd's recommendation? Good practices (please explain why!), and also: from a client's point of view, what kind of error handling in the REST API makes life easier for the client code?
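
    One common pattern, sketched below under the assumption of a classic ASP.NET handler (the error schema itself is made up for illustration), is to keep the transport-level signal in the status code and put the application-specific detail in the body:

        // Hedged sketch: respond 403 for the quota case, with an
        // application-level error document the client can parse.
        Response.StatusCode = 403;
        Response.ContentType = "application/xml";
        Response.Write(
            "<error>" +
            "<code>storage-quota-exceeded</code>" +
            "<message>Storage quota exceeded; additional storage required.</message>" +
            "</error>");
        Response.End();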

    Read the article

  • Override ActiveRecord#save, Method Alias? Trying to mixin functionality into save method...

    - by viatropos
    Here's the situation: I have a User model and two modules for authentication: Oauth and Openid. Both of them override ActiveRecord#save, and both have a fair share of implementation logic. Given that I can tell when the user is trying to log in via Oauth vs. Openid, but that both modules have overridden save, how do I "finally" override save so that I can conditionally call one of the modules' implementations of it? Here is the base structure of what I'm describing:

        module UsesOauth
          def self.included(base)
            base.class_eval do
              def save
                puts "Saving with Oauth!"
              end
              def save_with_oauth
                save
              end
            end
          end
        end

        module UsesOpenid
          def self.included(base)
            base.class_eval do
              def save
                puts "Saving with OpenID!"
              end
              def save_with_openid
                save
              end
            end
          end
        end

        module Sequencer
          def save
            if using_oauth?
              save_with_oauth
            elsif using_openid?
              save_with_openid
            else
              super
            end
          end
        end

        class User < ActiveRecord::Base
          include UsesOauth
          include UsesOpenid
          include Sequencer
        end

    I was thinking about using alias_method like so, but that got too complicated because I might have one or two more similar modules. I also tried using those save_with_oauth methods (shown above), which almost works. The only thing missing is that I also need to call ActiveRecord::Base#save (the super method), something like this:

        def save_with_oauth
          # do this and that
          super.save # the rest
        end

    But I'm not allowed to do that in Ruby. Any ideas for a clever solution to this?

    Read the article

  • How should I implement lazy session creation in PHP?

    - by Adam Franco
    By default, PHP's session handling mechanism sets a session cookie header and stores a session even if there is no data in the session. If no data is set in the session, then I don't want a Set-Cookie header sent to the client in the response, and I don't want an empty session record stored on the server. If data is added to $_SESSION, the normal behavior should continue. My goal is to implement lazy session creation of the sort found in Drupal 7 and Pressflow, where no session is stored (and no session cookie header sent) unless data is added to the $_SESSION array during application execution. The point of this behavior is to allow reverse proxies such as Varnish to cache and serve anonymous traffic while letting authenticated requests pass through to Apache/PHP. Varnish (or another proxy server) is configured to pass through any request without cookies, assuming correctly that if a cookie exists then the request is for a particular client. I have ported the session handling code from Pressflow that uses session_set_save_handler() and overrides the write callback to check for data in the $_SESSION array before saving; I will write this up as a library and add an answer here if this is the best/only route to take. My question: while I can implement a fully custom session_set_save_handler() system, is there an easier way to get this lazy session creation behavior in a relatively generic way that would be transparent to most applications?

    Read the article

  • EF + UnitOfWork + SharePoint RunWithElevatedPrivileges

    - by Lorenzo
    In our SharePoint application we have used the UnitOfWork + Repository patterns together with Entity Framework. To avoid pass-through authentication we have developed a piece of code that impersonates a single user before creating the ObjectContext instance, in a similar way to what is described in "Impersonating user with Entity Framework" on this site. The only difference between our code and the referred question is that, to do the impersonation, we are using RunWithElevatedPrivileges to impersonate the application pool identity, as in the following sample:

        SPSecurity.RunWithElevatedPrivileges(delegate()
        {
            using (SPSite site = new SPSite(url))
            {
                _context = new MyDataContext(ConfigSingleton.GetInstance().ConnectionString);
            }
        });

    We did it this way because we expected that creating the ObjectContext under impersonation — given that the repositories receive the impersonated ObjectContext — would satisfy our requirement. Unfortunately it's not so easy. In fact we found that even if the ObjectContext is created under impersonation, the real connection is made just before executing the query, and so it does not use impersonation, which breaks our requirement. I have checked the ObjectContext class to see if there is any event through which we can inject the impersonation, but unfortunately found nothing. Any help?
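
    A hedged sketch of one workaround consistent with the observation above: since the connection is opened at query execution time, run the query itself (not just the context construction) inside the elevated delegate. The Customers entity set here is purely illustrative:

        // Hypothetical sketch: both context creation and query execution stay
        // inside RunWithElevatedPrivileges, so the connection opens elevated.
        List<Customer> results = null;
        SPSecurity.RunWithElevatedPrivileges(delegate()
        {
            using (var context = new MyDataContext(ConfigSingleton.GetInstance().ConnectionString))
            {
                // The real connection is opened here, while still elevated.
                results = context.Customers.ToList();
            }
        });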

    Read the article

  • Simple implementation of an admin/staff panel?

    - by Michael Mao
    Hi all: A new project requires a simple panel (page) for the admin and staff members that: preferably will not use SSL or any digital certificate stuff — a simple login form over http will be fine; has basic authentication which allows only the admin to log in as admin, and any staff member as part of the group "staff" (ideally, the credentials — username/hashed-password pairs — will be stored in MySQL); is simple to configure if there is a package, or simple to code otherwise; somewhere (PHP session?), somehow (include a script at the beginning of each page to check the user group before doing anything?), detects any invalid attempt to access a protected page and redirects the user to the login form; and still keeps quality security — the thing I worry about most. Frankly I have little knowledge of Internet security and how modern CMSes such as WordPress/Joomla implement this. The one thing I have in mind is that I need to use a salt when hashing the password (SHA1?) to make sure a hacker who gets the username and password pair across the net cannot use it to log into the system. That is what the client wants to ensure. But I am really not sure where to start — any ideas? Thanks a lot in advance.
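
    A note on the salting idea, with a small illustrative sketch (C# here purely for illustration; the PHP analogue would be sha1 over salt plus password): a per-user salt protects the stored hashes if the database leaks, but it does not protect credentials in transit — without SSL, whatever the browser sends can be captured and replayed as-is.

        using System;
        using System.Security.Cryptography;
        using System.Text;

        static class PasswordHasher
        {
            // Hash a password with a per-user salt before storing it.
            public static string Hash(string password, string salt)
            {
                using (SHA1 sha1 = SHA1.Create())
                {
                    byte[] digest = sha1.ComputeHash(Encoding.UTF8.GetBytes(salt + password));
                    return Convert.ToBase64String(digest);
                }
            }
        }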

    Read the article

  • JavaEE : "Access to default session denied" when sending mail using smtp.gmail.com

    - by Harry Pham
    I am trying to write an email authentication feature for my website and I have run into an issue: I get java.lang.SecurityException: Access to default session denied when I call Session.getDefaultInstance. Here is my code:

        private static final String SMTP_HOST_NAME = "smtp.gmail.com";
        private static final String SMTP_PORT = "465";
        private static final String emailSubjectTxt = "Email Confirmation";
        private static final String emailFromAddress = "[email protected]";
        private static final String SSL_FACTORY = "javax.net.ssl.SSLSocketFactory";
        ...
        String sendTo = "[email protected]";
        boolean debug = true;

        Properties props = new Properties();
        props.put("mail.smtp.host", SMTP_HOST_NAME);
        props.put("mail.smtp.auth", "true");
        props.put("mail.debug", "true");
        props.put("mail.smtp.port", SMTP_PORT);
        props.put("mail.smtp.socketFactory.port", SMTP_PORT);
        props.put("mail.smtp.socketFactory.class", SSL_FACTORY);
        props.put("mail.smtp.socketFactory.fallback", "false");

        // It dies at the next line
        Session session = Session.getDefaultInstance(props, new javax.mail.Authenticator() {
            @Override
            protected PasswordAuthentication getPasswordAuthentication() {
                return new PasswordAuthentication("myUserName", "myPassword");
            }
        });
        session.setDebug(debug);

        // Set the FROM address
        Message msg = new MimeMessage(session);
        InternetAddress addressFrom = new InternetAddress(emailFromAddress);
        msg.setFrom(addressFrom);

        // Set the TO address
        InternetAddress[] addressTo = new InternetAddress[1];
        addressTo[0] = new InternetAddress(sendTo);
        msg.setRecipients(Message.RecipientType.TO, addressTo);

        // Construct the content of the confirmation email
        String message = "Test Content";

        // Set the subject and content type
        msg.setSubject(emailSubjectTxt);
        msg.setContent(message, "text/plain");
        Transport.send(msg);

    Read the article

  • GoDaddy Subdomain Hosting Issue/Question with Disk Access (C#/ASP.NET 3.5)

    - by Vogel
    This isn't a very complicated scenario really, but as I start to type out the problem I realize how convoluted it can become textually. Let me try to be very clear. First, the setup: I have a C#/ASP.NET web application that is publicly facing on my main domain (www), let's call it www.mysite.com. Nothing fancy, just a front end that connects to SQL to display records. Then I have a second C#/ASP.NET web application, secured using forms authentication, running on a subdomain, let's call it admin.mysite.com. This is a very lightweight CMS for administering the public site. Now, the problem: both of these sites run fine for basic tasks; however, my problem arises when I try to access the file system for uploading. GoDaddy requires subdomains to run as virtual directories under the main application in IIS (so the subdomain actually resolves/redirects to www.mysite.com/admin when you type in admin.mysite.com), and because of this I am unable to write to my website root from the subfolder. Let me explain a little more: the CMS (running as a virtual directory) lets the admin upload photos for display on the main site, the target folder being www.mysite.com/images. When attempting disk access, the root app can write into the virtual directory, but I cannot do the opposite — write to the root from the virtual directory — without getting security violations. If I can only upload to the /admin/ virtual directory, the entire point is moot, because that's a secured folder the public can't see! The only solution I can think of is to upload the files to the /admin/ virtual directory and then call a URL in the root that moves the files from /admin/ back to the root, but that is entirely ghetto. I hope this post makes sense. Anyone else experienced anything like this? The bottom line is that virtual directories seem to have access only to themselves, and not their parent directories, no matter what credentials are used. Thanks!
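
    A hedged sketch of one possible arrangement (every name below is an assumption, and whether this works depends entirely on the host's trust level and NTFS permissions): if the virtual directory's code is allowed to write to an absolute path, storing the parent site's physical images path in configuration avoids relying on path resolution outside the virtual directory:

        // Hypothetical sketch: save an upload into the parent site's /images
        // folder via a configured physical path, assuming the app pool identity
        // has NTFS write access there and the trust level permits it.
        string imagesRoot = ConfigurationManager.AppSettings["ImagesPhysicalPath"];
        string fileName = Path.GetFileName(FileUpload1.FileName);
        FileUpload1.SaveAs(Path.Combine(imagesRoot, fileName));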

    Read the article

  • Factory Girl: Automatically assigning parent objects

    - by Ben Scheirman
    I'm just getting into Factory Girl and I am running into a difficulty that I'm sure should be much easier; I just couldn't twist the documentation into a working example. Assume I have the following models:

        class League < ActiveRecord::Base
          has_many :teams
        end

        class Team < ActiveRecord::Base
          belongs_to :league
          has_many :players
        end

        class Player < ActiveRecord::Base
          belongs_to :team
        end

    What I want to do is this:

        team = Factory.build(:team_with_players)

    and have it build up a bunch of players for me. I tried this:

        Factory.define :team_with_players, :class => :team do |t|
          t.sequence(:name) {|n| "team-#{n}" }
          t.players {|p| 25.times { Factory.build(:player, :team => t) } }
        end

    But this fails on the :team => t section, because t isn't really a Team — it's a Factory::Proxy::Builder. And I have to have a team assigned to a player. In some cases I want to build up a League and have it do a similar thing, creating multiple teams with multiple players. What am I missing?

    Read the article

  • Mocking an object that uses jni using EasyMock

    - by Visage
    So my class under test has code that looks broadly like this:

        public void doSomething(int param) {
            Report report = new Report();
            // ...do some calculations
            report.someMethod(someData);
        }

    My intention was to extract the construction of Report into a protected method and override it to return a mock object, which I could then test to ensure that someMethod had been called with the right data. So far so good. But Report isn't under my control, and to make things worse it uses JNI to load a library at runtime. If I do

        Report report = EasyMock.createMock(Report.class);

    then EasyMock attempts to use reflection to find the class members, but this triggers an attempt to load the JNI library, which fails (the JNI libraries are only available on UNIX). I'm considering two things: a) introduce a ReportWrapper interface with two implementations, one of which delegates calls to a real Report (so basically a proxy), and a second which is basically a mock object; or b) instead of calling someMethod, call a protected method which in turn calls someMethod, and override that in a testing subclass. Either way it seems nasty. Any better ways?

    Read the article

  • Adding custom methods to a subclassed NSManagedObject

    - by CJ
    I have a Core Data model with an entity A, which is abstract. Entities B, C, and D inherit from entity A. Several properties defined on entity A are used by B, C, and D. I would like to leverage this inheritance in my model code. In addition to properties, I am wondering if I can add methods to entity A which are implemented in its sub-entities. For example: I add a method to the interface for entity A which returns a value and takes one argument; I add implementations of this method to A, B, C, and D; I then call executeFetchRequest: to retrieve all instances of B; finally, I call the method on the retrieved objects, which should invoke the implementation in B. I have tried this, but when calling the method I receive: [NSManagedObject methodName:]: unrecognized selector sent to instance. I presume this is because the objects returned by executeFetchRequest: are proxy objects of some sort. Is there any way to leverage inheritance with subclassed NSManagedObjects? I would really like to be able to do this; otherwise my model code would be responsible for determining what type of NSManagedObject it's dealing with and performing special logic according to the type, which is undesirable. Any help is appreciated, thanks in advance.

    Read the article

  • HTTP 500 a problem on CAS Server while setting SSLVerifyClent as "required"

    - by Huiyu.Bird
    I have 3 servers: an Apache server, a JBoss server, and a CAS server for SSO. The Apache server resolves all requests for a domain such as www.request.com; the path of the CAS server is www.request.com/cas, and the JBoss server is www.request.com/jboss (this app has a CAS client). My problem is that if I set SSLVerifyClient require for the NameVirtualHost of www.request.com on my Apache server, I get an HTTP 500 error during the redirect to the JBoss server (http://www.request.com/jboss) after logging in successfully on the CAS login page. Everything works if there is no SSLVerifyClient require. Error log of my Apache server:

        [Mon Apr 19 17:07:25 2010] [error] Re-negotiation handshake failed: Not accepted by client!?

    Error log of my JBoss server:

        2010-04-19 17:29:57,263 ERROR [org.apache.catalina.core.ContainerBase.[jboss.web].[localhost].[/jboss].[jsp]] (ajp-0.0.0.0-8009-1) Servlet.service() for servlet jsp threw exception
        org.jasig.cas.client.validation.TicketValidationException: The CAS server returned no response.
            at org.jasig.cas.client.validation.AbstractUrlBasedTicketValidator.validate(AbstractUrlBasedTicketValidator.java:162)
            at org.jasig.cas.client.validation.AbstractTicketValidationFilter.doFilter(AbstractTicketValidationFilter.java:129)
            at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
            at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
            at org.jasig.cas.client.authentication.AuthenticationFilter.doFilter(AuthenticationFilter.java:103)
            at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
            at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
            at org.jasig.cas.client.session.SingleSignOutFilter.doFilter(SingleSignOutFilter.java:78)
            at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
            at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
            at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:96)
            at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:75)
            at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
            at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
            at org.jboss.web.tomcat.filters.ReplyHeaderFilter.doFilter(ReplyHeaderFilter.java:96)
            at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
            at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
            at org.apache.catalina.core.StandardWrapperValve.in

    Any tips will be highly appreciated. Thanks in advance.

    Read the article

  • Why won't VS2010 RC use my existing types when I add a service reference?

    - by Johan Driessen
    I have a huge problem getting service references in VS2010 RC to use existing assemblies. Even though I have a class library with all the data contracts (classes marked with DataContract and properties with DataMember) that is shared between the service project and the consuming project (which is a class library), when I add a service reference the data contracts are regenerated within the service reference instead of reusing the existing types. When I was using VS2010 Beta 2 this worked fine, and I have existing service references using the very same data contracts. But if I add a new service reference, or even update an old one, it won't use the existing types anymore. I have made a mini test solution, with one service, one data contract type, and one console app as a consumer (all in the same solution), and there it seems to work — but that's no great comfort to me. Is there any way to see why it can't use the existing types? Edit, to clarify: it works to generate the proxy classes with svcutil.exe, pointing at the data contracts DLL, like this:

        svcutil.exe http://localhost/MyService.svc /reference:[Path To DataContracts]\DataContracts.dll /n:*,MyProject.MyServiceReference /ct:System.Collections.Generic.List`1

    The question is: what possible reason could there be for Visual Studio to generate its own data contracts instead of using the existing ones, even though the "reuse types" checkbox is checked and the data contracts assembly is referenced?
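
    For reference, a minimal sketch of the kind of shared contract assumed above — type reuse can only match when both the service and the consumer reference the exact same assembly that defines the type (names here are illustrative):

        using System.Runtime.Serialization;

        // Defined once in the shared DataContracts assembly and referenced by
        // both the service project and the consuming project.
        [DataContract]
        public class CustomerDto
        {
            [DataMember]
            public int Id { get; set; }

            [DataMember]
            public string Name { get; set; }
        }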

    Read the article

  • How to share data between SSRS Security and Data Processing extension?

    - by user2904681
    I've spent a lot of time trying to solve the issue in the title and have not found a solution yet. I use MS SSRS 2012 with custom Security (based on Forms Authentication and ClaimsPrincipal) and Data Processing extensions. At the Data extension level I need to apply a filter programmatically, based on one of the claims, which I have access to only at the Security extension level. Here is the problem: I do not know how to pass the claim from the Security extension to the Data Processing extension code. What I've tried:

        IAuthenticationExtension.LogonUser(string userName, string password, string authority)
        {
            ...
            ClaimsPrincipal claimsPrincipal = CreateClaimsPrincipal(...);
            Thread.CurrentPrincipal = claimsPrincipal;
            HttpContext.Current.User = claimsPrincipal;
            ...
        };

    But it doesn't work. It seems SSRS internally overrides it with either a GenericPrincipal or a FormsIdentity. The possible workarounds I'm thinking about (but haven't checked yet): 1. create an HttpModule which builds an HttpContext with all the required information (downside: fetching the claims would be invoked on every call — a huge operation); 2. write the logged-in users' information required by the Data extension to a custom SQL table and then read it back; 3. try somehow to append it to the cookies during LogOn, then read it each time in IAuthenticationExtension.GetUserInfo and fill the HttpContext. None of them seems like a good solution. I would be grateful for any help/advice/comments.
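
    A hedged sketch of workaround 2, since it is the most self-contained of the three: persist the claim value at logon keyed by user name, then have the Data Processing extension look it up when building the filter. The table and column names are assumptions:

        using System.Data.SqlClient;

        // Hypothetical: called from the Security extension after a successful logon.
        static void SaveClaimForUser(string connectionString, string userName, string claimValue)
        {
            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand(
                "IF EXISTS (SELECT 1 FROM dbo.UserClaims WHERE UserName = @u) " +
                "  UPDATE dbo.UserClaims SET FilterValue = @v WHERE UserName = @u " +
                "ELSE INSERT INTO dbo.UserClaims (UserName, FilterValue) VALUES (@u, @v)", conn))
            {
                cmd.Parameters.AddWithValue("@u", userName);
                cmd.Parameters.AddWithValue("@v", claimValue);
                conn.Open();
                cmd.ExecuteNonQuery();
            }
        }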

    Read the article

  • RIA Services for transmitting non DB object-graph

    - by Mike Gates
    I have been getting into RIA Services because I thought it would simplify dealing with the services layer of the web applications I wish to build. I see lots of examples out there showing how to create DomainService classes which expose and consume entities that have some kind of relational database backing, and therefore have foreign-key relationships. However, I would like to know how to expose and consume normal object graphs — objects that contain references to each other but don't have foreign keys. For example, say I want a service operation called GetFolderInformation(string pathToFolder). I want this to return a custom object called FolderInformation, structured with a string Name and an IEnumerable<FileInformation> Files. I cannot get this to work, because it seems RIA wants to deal with entities that have foreign-key relationships. Why? Why can't the serializer just see my object references and recreate them in the proxy on the other side? Data behind a service layer doesn't necessarily have foreign-key relationships — folder/file, for example. EDIT: I realized I hadn't asked my question! My question is: is there a way to do what I am trying to do?
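
    If it helps, the usual reason RIA Services balks at plain object graphs is that every exposed type must carry a [Key], and graph edges must be declared with [Association]/[Include] even when no database is involved. A hedged sketch of FolderInformation along those lines (member names beyond those in the question are illustrative):

        using System.Collections.Generic;
        using System.ComponentModel.DataAnnotations;
        using System.ServiceModel.DomainServices.Server; // [Include]; assumes the RIA Services SDK

        public class FolderInformation
        {
            [Key]
            public string Path { get; set; }

            public string Name { get; set; }

            [Include]
            [Association("Folder_Files", "Path", "FolderPath")]
            public IEnumerable<FileInformation> Files { get; set; }
        }

        public class FileInformation
        {
            [Key]
            public string FullPath { get; set; }

            // Back-reference key so the association can be matched up client-side.
            public string FolderPath { get; set; }
        }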

    Read the article

  • asp.net mvc and portal like functionality

    - by richard-heesbeen
    Hi, I need to build a site with some portal-like functionality, where a param in the request identifies the portal, like so: http://domain/controller/action/portal. Now, my problem: if a portal doesn't exist there must be a redirect to another site/page, and a user who logged in to one portal but arrives at another portal must be redirected back to that portal's login page. I have something working now, but I feel there must be a central place in the pipeline to handle this. My current solution uses a custom action filter which checks the portal param, sees whether the portal exists, and checks whether the user logged on in that portal (the portal the user logged on to is stored in the authentication cookie). I build my own IIdentity and IPrincipal in the Application_PostAuthenticateRequest event. I have two problems with my current approach: 1: it's not really enforced — I have to add the attributes to all controllers and/or actions. 2: IsAuthenticated on the user isn't really working, and I would like it to; but for that I need access to the route params when I create my IPrincipal/IIdentity, and I can't seem to find a correct place to do that. Hope someone can give me some pointers, Richard.
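
    One central-place option, sketched under the assumption of ASP.NET MVC without global filters: a base controller that every portal controller inherits, so the check cannot be forgotten on individual actions. PortalRepository is a hypothetical lookup, and the redirect targets are placeholders:

        using System.Web.Mvc;

        public abstract class PortalControllerBase : Controller
        {
            protected override void OnAuthorization(AuthorizationContext filterContext)
            {
                base.OnAuthorization(filterContext);

                string portal = filterContext.RouteData.Values["portal"] as string;

                // Unknown portal: send the visitor elsewhere.
                if (portal == null || !PortalRepository.Exists(portal))
                {
                    filterContext.Result = new RedirectResult("http://other.example.com/");
                    return;
                }

                // Logged in to a different portal: back to this portal's login page.
                if (filterContext.HttpContext.User.Identity.IsAuthenticated &&
                    !PortalRepository.IsLoggedInTo(filterContext.HttpContext.User, portal))
                {
                    filterContext.Result = new RedirectToRouteResult(
                        new System.Web.Routing.RouteValueDictionary(
                            new { controller = "Account", action = "LogOn", portal }));
                }
            }
        }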

    Read the article

  • Unable to implement HTTP tunneling correctly in order to enable Java RMI calls over the internet (and under NAT/ISP)

    - by Lokesh Kumar
    In my previous question, "How to Setup RMI Server under (NAT/ISP)", I got my RMI server to start by installing an Apache Tomcat 6.0 server. I have also installed servlet programs into the Tomcat server in order to enable HTTP tunneling. My servlet code: (1) SimplifiedServletHandler.java; (2) ServletForwardCommand.java. These servlets reside inside C:\Program Files\Apache Software Foundation\Tomcat 6.0\webapps\examples\WEB-INF\classes\. One more thing that I have added to my CalculatorClient.java program:

        try {
            RMISocketFactory.setSocketFactory(
                new sun.rmi.transport.proxy.RMIHttpToCGISocketFactory());
        } catch (IOException ignored) {
            System.out.println("Error :- " + ignored.getMessage());
        }

    But when I try to make the client connect with the server (under ISP/NAT) I get the following exception:

        RemoteException java.rmi.UnmarshalException: Error unmarshaling return header;
        nested exception is: java.io.IOException: HTTP request failed

    I don't know the real reason behind this exception, but I think I haven't installed or invoked my servlet programs properly on the server side. Can anybody tell me the actual cause of this error/exception? And if you think it is a servlet problem, please tell me the correct procedure for running my servlet programs inside the Tomcat server.

    Read the article

  • Delphi 6 OleServer.pas Invoke memory leak

    - by Mike Davis
    There's a bug in Delphi 6, for which you can find some references online: when you import a TLB, the order of the parameters in an event invocation is reversed. It is reversed once in the imported header and once in TServerEventDispatch.Invoke. You can find more information about it here: http://cc.embarcadero.com/Item/16496. Somewhat related to this issue, there appears to be a memory leak in TServerEventDispatch.Invoke with a parameter of a Variant of type VarArray (maybe others, but this is the most obvious one I could see). The Invoke code copies the args into a VarArray to be passed to the event handler and then copies the VarArray back to the args after the call; the relevant code is pasted below:

        // Set our array to appropriate length
        SetLength(VarArray, ParamCount);

        // Copy over data
        for I := Low(VarArray) to High(VarArray) do
          VarArray[I] := OleVariant(TDispParams(Params).rgvarg^[I]);

        // Invoke Server proxy class
        if FServer <> nil then
          FServer.InvokeEvent(DispID, VarArray);

        // Copy data back
        for I := Low(VarArray) to High(VarArray) do
          OleVariant(TDispParams(Params).rgvarg^[I]) := VarArray[I];

        // Clean array
        SetLength(VarArray, 0);

    There are some obvious workarounds in my case: if I skip the copying back for a VarArray parameter, it fixes the leak. To preserve the functionality, I thought I should copy the data in the array — rather than the Variant — back to the params, but that can get complicated, since the array can hold other Variants, and it seems that would need to be done recursively. Since a change in OleServer will have a ripple effect, I want to make sure my change here is strictly correct. Can anyone shed some light on exactly why memory is being leaked here? I can't seem to walk the call stack any lower than TServerEventDispatch.Invoke (why is that?). I imagine that in the process of copying the Variant holding the VarArray back to the param list, a reference is added to the array, preventing it from being released as normal — but that's just a rough guess and I can't track down the code to back it up. Maybe someone with a better understanding of all this could shed some light?

    Read the article

  • Publishing artifacts with sources on archiva

    - by Palimondo
    At work I'm dipping my toes into managing project dependencies with Maven. We use Apache Archiva (1.2.1) as a local repository and proxy. I'm adding an artifact for an open source project that is not published on any public repository. I've learned that to publish the sources I should use the Classifier field on the Upload Artifact page. The sources are then listed alongside the jar and pom when I browse the repository. But when I update my Maven dependencies I get only the jar and pom from the repository. I noticed that sources are also missing when Archiva proxies downloads from other public repositories for me. I didn't find any configuration options in Archiva's admin pages for serving the sources... What am I missing? Update: I was missing the fact that artifact sources have to be downloaded explicitly — i.e. the Maven client has to request them, which is controlled by the command-line option -DdownloadSources=true. Maven Integration for Eclipse has a preference setting to always download them, as described in "Resolving artifact sources". Archiva then serves the sources for local artifacts, or proxies the request to remote repositories and caches the sources for future requests.

    Read the article

  • Strange File Upload issue with asp.net site on a web farm

    - by Coov
    I have a basic asp.net file upload page. When I test file uploads from my local machine, it works fine. When I test file uploads from our dev machine, it works fine. When I deploy the site to our production web farm, it behaves strangely: if I access the site from off the network, I can upload file after file without issue; if I access the site from within our network, I can upload the first file just fine, but any subsequent file results in a "bad sequence of commands" error. I'm not sure if this is a web farm issue, a network issue, or something else. It feels like a connection is not being disposed of properly, but it doesn't make sense that everything works fine remotely. Markup:

        <asp:FileUpload ID="FileUpload1" runat="server" Width="350px" />
        <asp:Button ID="btnSubmit" runat="server" Text="Upload" onclick="btnSubmit_Click" />

    Code:

        if (FileUpload1.HasFile)
        {
            FtpWebRequest ftpRequest;
            FtpWebResponse ftpResponse;

            ftpRequest = (FtpWebRequest)FtpWebRequest.Create(new Uri("ftp://ftp.myftpsite.com/" + FileUpload1.FileName));
            ftpRequest.Method = WebRequestMethods.Ftp.UploadFile;
            ftpRequest.Proxy = null;
            ftpRequest.UseBinary = true;
            ftpRequest.Credentials = new NetworkCredential("username", "password");
            ftpRequest.KeepAlive = false;

            byte[] fileContents = new byte[FileUpload1.PostedFile.ContentLength];

            using (Stream fr = FileUpload1.PostedFile.InputStream)
            {
                fr.Read(fileContents, 0, FileUpload1.PostedFile.ContentLength);
            }

            using (Stream writer = ftpRequest.GetRequestStream())
            {
                writer.Write(fileContents, 0, fileContents.Length);
            }

            ftpResponse = (FtpWebResponse)ftpRequest.GetResponse();
            Response.Write(ftpResponse.StatusDescription);
        }
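
    One hedged observation about the code above: the FtpWebResponse is never closed, which can leave the underlying FTP control connection mid-conversation and produce exactly this kind of "bad sequence of commands" error on a subsequent upload. A minimal sketch of that change:

        // Dispose the response so the control connection is released cleanly
        // before the next request.
        using (FtpWebResponse ftpResponse = (FtpWebResponse)ftpRequest.GetResponse())
        {
            Response.Write(ftpResponse.StatusDescription);
        }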

    Read the article

  • Altering lazy-loaded object's private variables

    - by Kevin Pang
    I'm running into an issue with private setters when using NHibernate and lazy loading. Let's say I have a class that looks like this:

        public class User
        {
            public int Foo { get; private set; }
            public IList<User> Friends { get; set; }

            public void SetFirstFriendsFoo()
            {
                // This line works in a unit test but does nothing during a live
                // run with a lazy-loaded Friends list
                Friends[0].Foo = 1;
            }
        }

    The SetFirstFriendsFoo call works perfectly inside a unit test (as it should, since objects of the same type can access each other's private properties). However, when running live with a lazy-loaded Friends list, the SetFirstFriendsFoo call silently fails. I'm guessing the reason is that at run time Friends[0] is no longer of type User but of a proxy class that inherits from User, since the Friends list was lazy-loaded. My question is this: shouldn't this generate a run-time exception? You get compile-time errors if you try to access another class's private properties, but in a situation like this it looks like the app just ignores you and continues along its way.

    Read the article

  • How to do a javascript redirection to a ClickOnce deployment URL?

    - by jerem
    I have a ClickOnce application used to view some documents on a website. When connected, the user sees a list of documents as links of the form http://server/myapp.application?document=docname. This worked fine until I had to integrate the website's authentication/security system into my application. The website uses a ticketing system to grant access to its users: a ticket is generated by a web application and needs to be added to the deployment URL querystring, and at application startup I then have to check that the ticket given in the querystring is valid by making another request to the web application. So the deployment URL becomes something like http://server/myapp.application?document=docname&ticket=ticketnumber. The problem is that the ticket is valid for only 10 seconds, so I have to fetch it only after the user has clicked a link. My first try was to have some JavaScript request the ticket, build the proper deployment URL, and then redirect the user to that URL with "window.location = deploymentUrl;". It works fine in Firefox, but IE does not prompt the user for installation. I guess it is a ClickOnce security constraint, but I am able to do the redirect when testing on localhost, so I hope there is a workaround. I have also added the server to the "trusted sites" list in the IE options. Is it possible to get this working in IE? What are my other options?

    Read the article

  • WMI Query Script as a Job

    - by Kenneth
    I have two scripts. One calls the other with a list of servers as parameters. The second is designed to execute a WMI query. When I run it manually, it does this perfectly. When I try to run it as a job, it hangs forever and I have to remove it. For the sake of space, here is the relevant part of the calling script, ProcessServers.ps1:

        Start-Job -FilePath .\GetServerDetailsLight.ps1 -ArgumentList $sqlsrv,$destdb,$server,$instance

    GetServerDetailsLight.ps1:

        param($sqlsrv,$destdb,$server,$instance)

        $password = get-content C:\SQLPS\auth.txt | convertto-securestring
        $credentials = new-object -typename System.Management.Automation.PSCredential -argumentlist "DOMAIN\MYUSER",$password
        [System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMO')
        $box_id = 0;

        if ($sqlsrv.length -eq 0)
        {
            write-output "No data passed"
            break
        }

        function getinfo {
            param(
                [string]$svr,
                [string]$inst
            )
            "Entered GetInfo with: $svr,$inst"
            $cs = get-wmiobject win32_operatingsystem -computername $svr -credential $credentials -authentication 6 -Verbose -Debug | select Name, Model, Manufacturer, Description, DNSHostName, Domain, DomainRole, PartOfDomain, NumberOfProcessors, SystemType, TotalPhysicalMemory, UserName, Workgroup
            write-output "WMI Results: $cs"
        }

        getinfo $server $instance
        write-output "Complete"

    Executed as a job it will show as 'Running' forever:

        PS C:\sqlps> Start-Job -FilePath .\GetServerDetailsLight.ps1 -ArgumentList DBSERVER,LOGDB,SERVER01,SERVER01

        Id  Name   State    HasMoreData  Location   Command
        --  ----   -----    -----------  --------   -------
        21  Job21  Running  True         localhost  param($sqlsrv,$destdb,...

        GAC   Version     Location
        ---   -------     --------
        True  v2.0.50727  C:\WINDOWS\assembly\GAC_MSIL\Microsoft.SqlServer.Smo\10.0.0.0__89845dcd8080cc91\Microsoft.SqlServer.Smo.dll

        getinfo MSDCHR01 MSDCHR01
        Entered GetInfo with: SERVER01,SERVER01

    The last output I ever get is 'Entered GetInfo with: SERVER01,SERVER01'. If I run it manually like so:

        PS C:\sqlps> .\GetServerDetailsLight.ps1 DBSERVER LOGDB SERVER01 SERVER01

    the WMI query executes just as expected. I am trying to determine why this is, or at least find a useful way to trap errors from within jobs. Thanks!

    Read the article

  • How to safely let users submit custom themes/plugins for a Rails app

    - by Brian Armstrong
    In my Rails app I'd like to let users submit custom "themes" to display data in various ways. I think they can get the data in the view using API calls, and I can create an authentication mechanism for this, plus an authenticated API for saving data — so that part is probably safe. But I'm struggling with the best way to let users upload/submit their own code for a theme. I want this to work sort of like WordPress themes/plugins, where people can upload the thing. But there are some security risks. For example, if I take the uploaded "theme" a user submits and put it in its own directory somewhere inside the Rails app, what are the risks? If the user inserts any executable Rails code in their theme, then even though it's only the view, they have full access at that point to all the models, everyone's data, etc. — even other users' data. So that is not good. I need some way to let the uploaded themes live in a sandbox inside the Rails app, but I haven't seen a good way to do this. Any ideas?

    Read the article
