Search Results

Search found 15403 results on 617 pages for 'request querystring'.


  • Access Control Service v2: Registering Web Identities in your Applications [code]

    - by Your DisplayName here!
    You can download the full solution here. The relevant parts in the sample are:

    Configuration
    I use the standard WIF configuration with passive redirect. This kicks in automatically whenever authorization fails in the application (e.g. when the user tries to get to an area that requires authentication or needs registration).

    Checking and transforming incoming claims
    In the claims authentication manager we have to deal with two situations: users that are authenticated but not registered, and registered (and authenticated) users. Registered users will have claims that come from the application domain; the claims of unregistered users come directly from ACS and get passed through. In both cases a claim for the unique user identifier will be generated. The high-level logic is as follows:

        public override IClaimsPrincipal Authenticate(
            string resourceName, IClaimsPrincipal incomingPrincipal)
        {
            // do nothing if anonymous request
            if (!incomingPrincipal.Identity.IsAuthenticated)
            {
                return base.Authenticate(resourceName, incomingPrincipal);
            }

            string uniqueId = GetUniqueId(incomingPrincipal);

            // check if user is registered
            RegisterModel data;
            if (Repository.TryGetRegisteredUser(uniqueId, out data))
            {
                return CreateRegisteredUserPrincipal(uniqueId, data);
            }

            // authenticated by ACS, but not registered
            // create unique id claim
            incomingPrincipal.Identities[0].Claims.Add(
                new Claim(Constants.ClaimTypes.Id, uniqueId));

            return incomingPrincipal;
        }

    User Registration
    The registration page is handled by a controller with the [Authorize] attribute. That means you need to authenticate before you can register (crazy eh? ;). The controller then fetches some claims from the identity provider (if available) to pre-fill form fields. After successful registration, the user is stored in the local data store and a new session token gets issued. This effectively replaces the ACS claims with application-defined claims without requiring the user to re-sign-in.

    Authorization
    All pages that should be reachable only by registered users check for a special application-defined claim that only registered users have. You can nicely wrap that in a custom attribute in MVC:

        [RegisteredUsersOnly]
        public ActionResult Registered()
        {
            return View(); 
        }

    HTH
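
    The [RegisteredUsersOnly] attribute itself isn't shown in the post; a minimal sketch of how it could be built on MVC's AuthorizeAttribute follows. The claim type Constants.ClaimTypes.Registered used here is an assumption for illustration only — the sample's actual registered-user claim type isn't given.

        using System.Linq;
        using System.Web;
        using System.Web.Mvc;
        using Microsoft.IdentityModel.Claims;

        // Hypothetical sketch: the claim type checked here is assumed, not taken from the article.
        public class RegisteredUsersOnlyAttribute : AuthorizeAttribute
        {
            protected override bool AuthorizeCore(HttpContextBase httpContext)
            {
                // First make sure the user is authenticated at all.
                if (!base.AuthorizeCore(httpContext))
                {
                    return false;
                }

                // Then require the application-defined claim that only registered users receive.
                var principal = httpContext.User as IClaimsPrincipal;
                if (principal == null)
                {
                    return false;
                }

                return principal.Identities[0].Claims
                    .Any(c => c.ClaimType == Constants.ClaimTypes.Registered);
            }
        }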

    Read the article

  • Part 5: Choose the right tool - or - why

    - by volker.eckardt(at)oracle.com
    Consider the following client request: “Please create a report for us to list expenses”. Which Oracle EBS tool would you choose? There are plenty of options available: Oracle Reports, BI Publisher with a PDF or Excel layout, Discoverer, BI Publisher Stand Alone, online PDF generation, Oracle WebADI, plain SQL*Plus as a Concurrent Program, an online review option …

    Assuming you as development lead have to decide, you may decide based on the skill set available in your development team. However, is this a good decision? An important question to influence the decision is the “why” question: why do you need this report, what process is behind it, what exactly do you want to achieve? We often see data created or printed, although it would be much better to get the data into Excel and upload changes via WebADI directly.

    There are more points that should drive your decision: How many such requirements have you got? Has this technique been used in the project already? Are there related reusables you may gain from? How difficult is it to maintain your solution? Can you merge this report with another one, to reduce test and maintenance work?

    In addition, your own development standards should also guide you a bit towards a good decision. In one of my own projects, we discussed such topics in our weekly team meeting. By utilizing the team knowledge best, you may come to a better decision, and additionally, your team supports your decision. Unfortunately, I have rarely seen dedicated team trainings or planned knowledge transfer to support such processes. Often the pressure to deliver on time is too high to leave time for discussion and decision-making. But exactly this can help keep maintenance costs low by limiting the number of alternative solutions for similar requirements. Lastly, design decisions should be documented to allow another person to take them over easily. Decisions should be reviewed and updated regularly, to reflect related procedures or Oracle products and product versions.

    Summary: Oracle EBS offers plenty of alternatives to implement customizations. Create and maintain a decision tree to support the design process. Do not leave the decision to the developer alone. Limit the number of alternative solutions as much as possible; choose the one that is most appropriate also from a future maintenance perspective.

    Read the article

  • Specifying and applying broad changes to a program

    - by Victor Nicollet
    How do you handle incomplete feature requests, when the ones asking for the feature cannot possibly write a complete request? Consider an imaginary situation. You are a tech lead working on a piece of software that revolves around managing profiles (maybe they're contacts in a CRM-type application, or employees in an HR application), with many operations being directly or indirectly performed on those profiles — edit fields, add comments, attach documents, send e-mail... The higher-ups decide that a lock functionality should be added whereby a profile can be locked to prevent anyone else from doing any operations on it until it's unlocked — this feature would be used by security agents to prevent anyone from touching a profile pending a security audit. Obviously, such a feature interacts with many other existing features related to profiles. For example: Can one add a comment to a locked profile? Can one see e-mails that were sent by the system to the owner of a locked profile? Can one see who recently edited a locked profile? If an e-mail was in the process of being sent when the lock happened, is the e-mail sending canceled, delayed or performed as if nothing happened? If I just changed a profile and click the "cancel" link on the confirmation, does the lock prevent the cancel or does it still go through? In all of these cases, how do I tell the user that a lock is in place? Depending on the software, there could be hundreds of such interactions, and each interaction requires a decision — is the lock going to apply and if it does, how will it be displayed to the user? And the higher-ups asking for the feature probably only see a small fraction of these, so you will probably have a lot of questions coming up while you are working on the feature. How would you and your team handle this? Would you expect the higher-ups to come up with a complete description of all cases where the lock should apply (and how), and treat all other cases as if the lock did not exist? Would you try to determine all potential interactions based on existing specifications and code, list them and ask the higher-ups to make a decision on all those where the decision is not obvious? Would you just start working and ask questions as they come up? Would you try to change their minds and settle on a more easily described feature with similar effects? The information about existing features is, as I understand it, in the code — how do you bridge the gap between the decision-makers and that information they cannot access?

    Read the article

  • Is it possible to get a patch included in the current release? If so, how?

    - by Oli
    So a while back I reported a bug in Compiz's Place Window plugin. It's a fairly major regression for people affected by it: mainly those using Gnome-Fallback, judging by the reports. A patch surfaced a short time later. I created a PPA for testing and everybody involved so far is reporting the issues are fixed. It even fixes another bug. I've done testing with a standard Unity desktop and can say (for my testing) no adverse effects were visible. I want to get this pushed to Ubuntu right now for two main reasons: I'm selfish. I don't want to need to update my PPA every time a new version of Compiz is pushed to 12.04. I don't want Ubuntu users seeing their windows flying around because of a silly little bug. I want this patch pushed to Ubuntu's version of Compiz as soon as possible, so we can mark these bugs fixed and move on with our lives. Whose leg do I have to hump to get this pulled into Ubuntu right now? I don't maintain this project and it's an upstream thing but it's fairly integral to Ubuntu. I could go to Compiz but I imagine that if they accept the patch, it'll be months (at least a release) before it's anywhere near Ubuntu. And when I do find the right person, how can I make the process as slick as possible for them? I want them to see my request, go "Yup, that all looks great, done" and that be it. I don't want seventeen rounds of emails addressing aspects of the patch. More importantly, I don't want to waste their time either. And what do I have to provide them? My packaging skills are... lamentable. This was my first attempt at patching a package for redistribution so I've probably made every single packaging error known to man. Will they be happy with the original patch (so they can apply it themselves) or should I repackage things so the diff/changelog is a little cleaner (it took me a few goes and the versioning is all over the place). Note: This question is about Compiz but I'd prefer if answers could address other styles of package too so we have an authoritative and comprehensive thread of how to get things fixed.

    Read the article

  • Taking a Project's Development to the Next Level

    - by user1745022
    I have been looking for some advice for a while on how to handle a project I am working on, but to no avail. I am pretty much on my fourth iteration of improving an "application" I am working on; the first two times were in Excel, the third time in Access, and now in Visual Studio. The field is manufacturing.

    The basic idea is that I am taking read-only data from a massive Sybase server, filtering it and creating much smaller tables in Access daily (using delete and append queries) and then doing a bunch of stuff. More specifically, I use a series of queries to either combine data from multiple tables or group data in specific ways (aggregate functions), and then I place this data into a table (so I can sort and manipulate data using DAO.Recordset and run multiple custom algorithms). This process is then repeated multiple times throughout the database until a set of relevant tables is created. Many times I will create a field in a query with a value such as 1.1 so that when I append it to a table I can store information in the field from the algorithms. So as the process continues, the number of fields for the tables changes.

    The overall application consists of 4 "back-end" databases linked together on a shared drive, with various output (either front-end Access applications or Excel). So my question is: is this how many data-driven applications that solve problems essentially work? Each back-end database is updated with fresh data daily, and updating takes around 10 seconds (for three of them) and 2 minutes (for one).

    Project objectives: I want to / am moving to SQL Server soon. The front end will be a web application (I know basic web development and like the administration flexibility) and Visual Studio will be the IDE with C#/.NET. Should these algorithms be run "inside the database," or as a series of C# functions on each server request? I know you're not supposed to store data in a database unless it is an actual data point, and in Access I have many columns that just hold calculations from algorithms in VBA.

    The truth is, I have seen multiple professional Access applications, and have never seen one that has the complexity or does even close to what mine does (for better or worse). But I know some professional software applications are 1000 times better than mine. So please give me a suggestion of some sort. I have been completely on my own and need some guidance on how to approach this project the right way.
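
    On the question of where the calculations should live, one common pattern is to keep the persisted tables limited to raw data points and compute derived values in a C# service layer per request. A minimal sketch of that split, with invented names (ProductionRecord, ProductionCalculator) purely for illustration — nothing here comes from the actual application:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // Hypothetical types and rules, used only to illustrate computing derived values on demand.
        public class ProductionRecord
        {
            public string MachineId { get; set; }
            public DateTime Timestamp { get; set; }
            public double Quantity { get; set; }
            public double ScrapQuantity { get; set; }
        }

        public class ProductionSummary
        {
            public string MachineId { get; set; }
            public double TotalQuantity { get; set; }
            public double ScrapRate { get; set; }
        }

        public static class ProductionCalculator
        {
            // Derived values (totals, scrap rate) are computed from the raw rows at request time,
            // rather than being stored as extra columns the way the Access version did.
            public static IEnumerable<ProductionSummary> Summarize(IEnumerable<ProductionRecord> records)
            {
                return records
                    .GroupBy(r => r.MachineId)
                    .Select(g => new ProductionSummary
                    {
                        MachineId = g.Key,
                        TotalQuantity = g.Sum(r => r.Quantity),
                        ScrapRate = g.Sum(r => r.Quantity) > 0
                            ? g.Sum(r => r.ScrapQuantity) / g.Sum(r => r.Quantity)
                            : 0
                    });
            }
        }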

    Read the article

  • List of common pages to have in the footer [closed]

    - by user359650
    I would like to post this question as a reference for webmasters wondering what pages they should include in the footer. I will use answers to complete my initial list:

    About us / About MyCompany / MyCompany
    - About / About us: description of the company, its mission, and its vision.
    - History: summary of milestones achieved by the company.
    - The team / Management / Board of directors: depending on the size of the company there may be one or more pages describing the people involved in the company, depending on their position.
    - Awards: list of awards received by the company, if any.
    - In the press / They're talking about us: list of links to external websites, usually highly regarded news websites, which mentioned the company in one of their articles.

    Media
    - Wallpapers: wallpapers with the company logo in different colors and formats that fans can set as the desktop image for their computer.
    - Logos: company logo in different colors and formats that websites/blogs posting about the website can use for illustration purposes.
    - Media kits: documents, usually in PDF format, summarizing the key company figures and facts that journalists can download and read to get a quick overview of the company.

    Misc
    - Contact / Contact us: contact details the company is prepared to disclose, if any (address, email, phone), or a contact form.
    - Careers / Jobs / Join us: list of open vacancies with a contact form to apply.
    - Investors / Partners / Publishers: information and contact forms for companies willing to become investors/partners/publishers, or a login page to access a portal restricted to those who already are.
    - FAQ: list of common questions and answers to guide users and reduce the number of support requests.

    Follow us / Community
    - Facebook / Twitter / Google+: links to the company's pages/accounts on various social networks.

    Legal
    - Terms / Terms of use / Terms & Conditions: rules users must follow when browsing the website.
    - Privacy / Privacy Statement: explanations as to how the company deals with users' personal data and what users can do about it (request information to be deleted...).
    - Cookies: page that is starting to appear on more and more websites due to new regulation (notably in the EU) imposing more transparency and control for users regarding cookies (e.g. the BBC cookie page).

    Any input is welcome.

    PS: if someone with enough rep could add the footer tag that would be great (min. 300 required).

    Read the article

  • How to implement a component based system for items in a web game.

    - by Landstander
    Reading several other questions and answers on using a component based system to define items I want to use one for the items and spells in a web game written in PHP. I'm just stuck on the implementation. I'm going to use a DB schema suggested in this series (part 5 describes the schema); http://t-machine.org/index.php/2007/09/03/entity-systems-are-the-future-of-mmog-development-part-1/ This means I'll have an items table with generic item properties, a table listing all of the components for an item and finally records in each component table used to make up the item. Assuming I can select the first two together in a single query, I'm still going to do N queries for each component type. I'm kind of fine with this because I can cache the data into memcache and check there first before doing any queries. I'll need to build up the items on every request they are used in so the implementation needs to be on the lean side even if they're pulled from memcache. But right there is where I feel confident about implementing a component system for my items ends. I figure I'd need to bring attributes and behaviors into the container from each component it uses. I'm just not sure how to do that effectively and not end up writing a lot of specialized code to deal with each component. For example an AttackComponent might need to know how to filter targets inside of a battle context and also maybe provide an attack behavior. That same item might also have a UsableComponent which allows the item to be used and apply some effect onto a different set of targets filtered differently from the same battle context. Then not every part of an item is an active part, an AttributeBonusComponent might need to only kick in when the item is in an equipped state or when displaying the item details page. Ultimately, how should I bring all of the components together into the container so when I use an item as a weapon I get the correct list of targets? Know when a weapon can also be used as an item? Or to apply the bonuses the item provides to a character object? I feel like I've gone too far down the rabbit hole and I can't grasp onto the simple solution in front of me. (If that makes any sense at all.) Likewise if I were to implement the best answer from here I feel like I'd have a lot of the same questions. How to model multiple "uses" (e.g. weapon) for usable-inventory/object/items (e.g. katana) within a relational database.
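
    One way to avoid writing specialized glue for every component is to have the item container expose component lookups by type and let each context ask only for the capabilities it needs. A rough sketch of that idea follows (shown in C# for brevity; the shape translates directly to PHP). The component names and methods here — AttackComponent, AttributeBonusComponent, FilterTargets — are invented for illustration and are not the poster's schema:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        public interface IItemComponent { }

        public class AttackComponent : IItemComponent
        {
            public int Damage { get; set; }
            // Each component owns its own target-filtering rule for a given battle context.
            public IEnumerable<string> FilterTargets(IEnumerable<string> candidates)
                => candidates; // placeholder rule
        }

        public class AttributeBonusComponent : IItemComponent
        {
            public string Attribute { get; set; }
            public int Bonus { get; set; }
            public bool AppliesWhenEquipped => true;
        }

        public class Item
        {
            private readonly List<IItemComponent> components = new List<IItemComponent>();

            public void Add(IItemComponent component) => components.Add(component);

            // Callers ask for a capability instead of switching on item type.
            public T GetComponent<T>() where T : class, IItemComponent
                => components.OfType<T>().FirstOrDefault();

            public bool Has<T>() where T : class, IItemComponent
                => components.OfType<T>().Any();
        }

    A battle context would then call item.GetComponent<AttackComponent>() and use its target filter, while an equipment screen only looks for AttributeBonusComponent; the container itself never needs to know what each component does.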

    Read the article

  • OSB unit testing, part 1 by Qualogy

    - by JuergenKress
    In my current project, I inherited a lot of OSB components that have been developed by (former) team members, but they all lack unit tests. This is a situation I really dislike, since it makes it much harder to refactor or bug-fix the existing code base. So, for all newly created components (and components I have to bug-fix) I strive to add unit tests. Of course, the unit tests will be created using my favourite testing tool: soapUI!

    Unit of test
    The unit test should be created for the service composition, which in OSB terms is the proxy service combined with its business service. Now, since you do not want to rely on any other services, you should provide mock services for all services invoked from your Component-Under-Test. In a previous article, I wrote about mocking your services in soapUI. While that approach would also be valid here, creating a mock service (and certainly deploying it on a separate web server) does violate one of the core principles of unit testing: to make your unit tests as self-contained as possible, i.e. not depending on any external components. In this article, I will show you how to achieve this by simply providing a mock response inside your unit test.

    Scenario
    The scenario I implement for testing is a simple currency converter; the external request consists of a from and a to currency, and an amount (in currency from). The service will perform an exchange rate lookup using the WebServiceX CurrencyConverter and return a response to the caller consisting of both the source and target currencies and amounts. For the purpose of unit testing, I will implement a mock response for the exchange rate lookup. Read the complete article here.

    SOA & BPM Partner Community
    For regular information on Oracle SOA Suite become a member of the SOA & BPM Partner Community. For registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Read the article

  • Adding Facebook Open Graph Tags to an MVC Application

    - by amaniar
    If you have any kind of share functionality within your application, it’s good practice to add the basic Facebook Open Graph tags to the header of all pages. For an MVC application this can be as simple as adding these tags to the head section of the layout file:

        <head>
            <title>@ViewBag.Title</title>
            <meta property="og:title" content="@ViewBag.FacebookTitle" />
            <meta property="og:type" content="website"/>
            <meta property="og:url" content="@ViewBag.FacebookUrl"/>
            <meta property="og:image" content="@ViewBag.FacebookImage"/>
            <meta property="og:site_name" content="Site Name"/>
            <meta property="og:description" content="@ViewBag.FacebookDescription"/>
        </head>

    These ViewBag properties can then be populated from any action:

        public ActionResult MyAction()
        {
            ViewBag.FacebookDescription = "My Actions Description";
            ViewBag.FacebookUrl = "My Full Url";
            ViewBag.FacebookTitle = "My Actions Title";
            ViewBag.FacebookImage = "My Actions Social Image";
            ....
        }

    You might want to populate these ViewBag properties with default values when the actions don’t populate them. This can be done in two places:

    1. In the layout itself (check the ViewBag properties and set them if they are empty):

        @{
            ViewBag.FacebookTitle = ViewBag.FacebookTitle ?? "My Default Title";
            ViewBag.FacebookUrl = ViewBag.FacebookUrl ?? HttpContext.Current.Request.RawUrl;
            ViewBag.FacebookImage = ViewBag.FacebookImage ?? "http://www.mysite.com/images/logo_main.png";
            ViewBag.FacebookDescription = ViewBag.FacebookDescription ?? "My Default Description";
        }

    2. Create an action filter and add it to all controllers or your base controller:

        public class FacebookActionFilterAttribute : ActionFilterAttribute
        {
            public override void OnActionExecuting(ActionExecutingContext filterContext)
            {
                var viewBag = filterContext.Controller.ViewBag;
                viewBag.FacebookDescription = "My Actions Description";
                viewBag.FacebookUrl = "My Full Url";
                viewBag.FacebookTitle = "My Actions Title";
                viewBag.FacebookImage = "My Actions Social Image";

                base.OnActionExecuting(filterContext);
            }
        }

    Add the attribute to your BaseController:

        [FacebookActionFilter]
        public class HomeController : Controller
        {
            ....
        }

    Read the article

  • Advantages to Multiple Methods over Switch

    - by tandu
    I received a code review from a senior developer today asking "By the way, what is your objection to dispatching functions by way of a switch statement?" I have read in many places about how pumping an argument through switch to call methods is bad OOP, not as extensible, etc. However, I can't really come up with a definitive answer for him. I would like to settle this for myself once and for all. Here are our competing code suggestions (PHP used as an example, but it can apply more universally):

        class Switch {
            public function go($arg) {
                switch ($arg) {
                    case "one":
                        echo "one\n";
                        break;
                    case "two":
                        echo "two\n";
                        break;
                    case "three":
                        echo "three\n";
                        break;
                    default:
                        throw new Exception("Unknown call: $arg");
                        break;
                }
            }
        }

        class Oop {
            public function go_one() {
                echo "one\n";
            }
            public function go_two() {
                echo "two\n";
            }
            public function go_three() {
                echo "three\n";
            }
            public function __call($_, $__) {
                throw new Exception("Unknown call $_ with arguments: " . print_r($__, true));
            }
        }

    Part of his argument was "It (the switch method) has a much cleaner way of handling default cases than what you have in the generic __call() magic method." I disagree about the cleanliness and in fact prefer __call, but I would like to hear what others have to say.

    Arguments I can come up with in support of the Oop scheme:

    - A bit cleaner in terms of the code you have to write (less of it, easier to read, fewer keywords to consider).
    - Not all actions are delegated to a single method. Not much difference in execution here, but at least the text is more compartmentalized.
    - In the same vein, another method can be added anywhere in the class instead of a specific spot.
    - Methods are namespaced, which is nice.
    - Does not apply here, but consider a case where Switch::go() operated on a member rather than a parameter. You would have to change the member first, then call the method. With Oop you can call the methods independently at any time.

    Arguments I can come up with in support of the Switch scheme:

    - For the sake of argument, a cleaner method of dealing with a default (unknown) request.
    - Seems less magical, which might make unfamiliar developers feel more comfortable.

    Anyone have anything to add for either side? I'd like to have a good answer for him.
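
    One more option that sometimes comes up in this debate is a dispatch table: keep the handlers as separate units but look them up through a map, which gives the Oop scheme's separation with the Switch scheme's single, explicit default path. A rough sketch of the idea (written in C# here, since the question notes the pattern applies beyond PHP; the class and handler names are invented for illustration):

        using System;
        using System.Collections.Generic;

        // Illustrative only: a dispatch-table variant, not code from the original question.
        public class Dispatcher
        {
            private readonly Dictionary<string, Action> handlers;

            public Dispatcher()
            {
                handlers = new Dictionary<string, Action>
                {
                    ["one"] = () => Console.WriteLine("one"),
                    ["two"] = () => Console.WriteLine("two"),
                    ["three"] = () => Console.WriteLine("three"),
                };
            }

            public void Go(string arg)
            {
                // A single, explicit place to handle the unknown case,
                // while each handler remains its own small unit.
                if (!handlers.TryGetValue(arg, out var handler))
                {
                    throw new ArgumentException($"Unknown call: {arg}");
                }
                handler();
            }
        }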

    Read the article

  • Operation times out trying to SSH from outside the LAN, i.e. from the internet no connection to the LAN is established

    - by Pelle L
    I run Ubuntu 12.04 and have no success connecting with SSH from the Internet. The router is a TL-MR3420 which is set up to forward requests to one of the NICs on the Ubuntu machine (which has 3 NICs in total). I can SSH from a client on the local network/LAN, so the forwarding mechanism in the router seems to work. If I stop the SSH service on the Ubuntu machine and instead start one on the Windows machine, it works like a charm. I do not use the standard port 22, but that shouldn't be an issue as far as I understand, since it works on the same port on the Windows machine. Since my public IP isn't static I use a dynDNS service, but as said earlier the same setup works from the Windows machine.

    The router is located at 192.168.0.1. The Ubuntu NICs have the following IPs: eth2 192.168.0.100, eth1 192.168.0.101, eth0 192.168.0.102, and I have forwarded the "outside" request to 192.168.0.100.

    In regard to firewall settings on the Ubuntu machine, I have disabled ufw, and the command ufw status gives status: inactive. I don't know if this is relevant information, but the command iptables --list gives:

        Chain INPUT (policy ACCEPT)
        target     prot opt source               destination

        Chain FORWARD (policy ACCEPT)
        target     prot opt source               destination

        Chain OUTPUT (policy ACCEPT)
        target     prot opt source               destination

    I have tried to capture traffic with the help of Wireshark (a tool I'm not too used to) and it seems a few (3?) "requests" actually reach the NIC but ... nothing happens. The syslog does not show any entries during these attempts. Perhaps it could be a routing issue, but I have reached my level of competence and am stuck ... all help and support to get this sorted out is much appreciated. I'm new to Linux so please do not assume I have a configuration that is correct - but as I wrote earlier - if the client that initiates SSH is on the LAN, it all works.

    PS: I have also tried to get VPN (PPP) working from the Internet with no success - once again VPN works on the Windows machine ... so my best guess is that this is related to how the Ubuntu machine handles (IP) traffic and not the TL-MR3420 router or other network issues.

    Read the article

  • SSMS Tools Pack 3.0 is out. Full SSMS 2014 support and improved features.

    - by Mladen Prajdic
    With version 3.0, SSMS 2014 is fully supported. Since this is a new major version you'll eventually need a new license; please check the EULA to see when. As a thank you for your patience with this release, everyone who bought the SSMS Tools Pack after April 1st, the release date of SQL Server 2014, will receive a free upgrade. You won't have to do anything for this to take effect.

    The first thing you'll notice is that the UI has been completely changed. It's more in line with SSMS and looks less web-like. Also, the core has been updated and rewritten in some places to be better suited for future features. The major improvements for this release are:

    Window Connection Coloring
    Something a lot of people have asked me over the last 2 years is whether there's a way to color the tab of the window itself. I'm very glad to say that now there is. In SSMS 2012 and higher the actual query window tab is also colored at the top border with the same color as the already existing strip, making it much easier to see which server your query window is connected to even when a window is not focused. To make it even better, you can now also specify the desired color based on the database name and not just the server name. This makes it useful for production environments where you need to be careful which database you run your queries in.

    Format SQL
    The Format SQL core was rewritten so it'll be easier to improve in future versions. A new improvement is the ability to terminate SQL statements with semicolons. This is available only in SSMS 2012 and up.

    Execution Plan Analyzer
    A big request was to implement the Problems and Solutions tooltip as a window that you can copy the text from. This is now available: you can move the window around and copy text from it. It's a small improvement, but better stuff will come.

    SQL History
    Current Window History has been improved with faster search and now also shows the color of the server/database it was run against. This is very helpful if you change your connection in the same query window, making it clear which server/database you ran the query on. The option to force-save the history has been added. This is a menu item that flushes the execution and tab content history save buffers to disk.

    SQL Snippets
    Added an option to generate a snippet from selected SQL text on the right-click menu.

    Run script on multiple databases
    Configurable database groups that you can save and reuse were added. You can create groups of preselected databases to choose from for each server. This makes repetitive tasks much easier.

    New small team licensing option
    A lot of requests came in for a "1 computer, unlimited VMs" option, so now it's here.

    Hope it serves you well.

    Read the article

  • Silverlight 4 Twitter Client – Part 5

    - by Max
    In this post, we will look at implementing the following in our Twitter client: displaying the profile picture on the home page after logging in. To accomplish this, we need to do the following steps.

    1) First of all, let us create another field in our GlobalVariable static class to hold the profile image URL. So just create a static string field in that:

        public static string profileImage { get; set; }

    2) In the Login.xaml.cs file, before the line where we redirect to the Home page if the login was successful, create another WebClient request to fetch details from this URL:

        http://api.twitter.com/1/users/show/<TwitterUsername>.xml

    I am interested only in the “profile_image_url” node in the returned XML string. You can choose whatever field you need – the reference to the Twitter API documentation is here.

    3) This particular URL does not require any authentication, so there is no need for network credentials, and useDefaultCredentials will be true. So this code would look like:

        WebRequest.RegisterPrefix("http://", System.Net.Browser.WebRequestCreator.ClientHttp);
        WebClient myService = new WebClient();
        myService.AllowReadStreamBuffering = myService.UseDefaultCredentials = true;
        myService.DownloadStringCompleted += new DownloadStringCompletedEventHandler(TimelineRequestCompleted_User);
        myService.DownloadStringAsync(new Uri("http://api.twitter.com/1/users/show/" + TwitterUsername.Text + ".xml"));

    4) To parse the returned XML and get the value of the “profile_image_url” node, the LINQ code will look something like:

        XDocument xdoc;
        xdoc = XDocument.Parse(e.Result);
        var answer = (from status in xdoc.Descendants("user")
                      select status.Element("profile_image_url").Value);
        GlobalVariable.profileImage = answer.First().ToString();

    5) After this, we can then redirect to the Home page, completing our login formalities.

    6) On the home page, add an image tag and give it a name, something like:

        <Image Name="image1" Stretch="Fill" Margin="29,29,0,0" Height="73" VerticalAlignment="Top" HorizontalAlignment="Left" Width="73" />

    7) In the OnNavigatedTo event on the home page, we can set the image source URL as below:

        image1.Source = new BitmapImage(new Uri(GlobalVariable.profileImage, UriKind.Absolute));

    Hope this helps. If you have any queries / suggestions, please feel free to comment below or contact me.

    Read the article

  • Getting a SecurityToken from a RequestSecurityTokenResponse in WIF

    - by Shawn Cicoria
    When you’re working with WIF and WSTrustChannelFactory, when you call the Issue operation you can also request a RequestSecurityTokenResponse as an out parameter. However, what can you do with that object? Well, you could keep it around and use it for subsequent calls with the extension method CreateChannelWithIssuedToken – or can you?

        public static T CreateChannelWithIssuedToken<T>(this ChannelFactory<T> factory, SecurityToken issuedToken);

    As you can see from the method signature, it takes a SecurityToken – but that’s not present on the RequestSecurityTokenResponse class. However, you can, through a little magic, get a GenericXmlSecurityToken by means of the set of extension methods below – just call rstr.GetSecurityTokenFromResponse() and you’ll get a GenericXmlSecurityToken as a return.

        public static class TokenHelper
        {
            /// <summary>
            /// Takes a RequestSecurityTokenResponse, pulls out the GenericXmlSecurityToken usable for further WS-Trust calls
            /// </summary>
            /// <param name="rstr"></param>
            /// <returns></returns>
            public static GenericXmlSecurityToken GetSecurityTokenFromResponse(this RequestSecurityTokenResponse rstr)
            {
                var lifeTime = rstr.Lifetime;
                var appliesTo = rstr.AppliesTo.Uri;
                var tokenXml = rstr.GetSerializedTokenFromResponse();
                var token = GetTokenFromSerializedToken(tokenXml, appliesTo, lifeTime);
                return token;
            }

            /// <summary>
            /// Provides a token as an XML string.
            /// </summary>
            /// <param name="rstr"></param>
            /// <returns></returns>
            public static string GetSerializedTokenFromResponse(this RequestSecurityTokenResponse rstr)
            {
                var serializedRst = new WSFederationSerializer().GetResponseAsString(rstr, new WSTrustSerializationContext());
                return serializedRst;
            }

            /// <summary>
            /// Turns the XML representation of the token back into a GenericXmlSecurityToken.
            /// </summary>
            /// <param name="tokenAsXmlString"></param>
            /// <param name="appliesTo"></param>
            /// <param name="lifetime"></param>
            /// <returns></returns>
            public static GenericXmlSecurityToken GetTokenFromSerializedToken(this string tokenAsXmlString, Uri appliesTo, Lifetime lifetime)
            {
                RequestSecurityTokenResponse rstr2 = new WSFederationSerializer().CreateResponse(
                    new SignInResponseMessage(appliesTo, tokenAsXmlString),
                    new WSTrustSerializationContext());

                return new GenericXmlSecurityToken(
                    rstr2.RequestedSecurityToken.SecurityTokenXml,
                    new BinarySecretSecurityToken(
                        rstr2.RequestedProofToken.ProtectedKey.GetKeyBytes()),
                    lifetime.Created.HasValue ? lifetime.Created.Value : DateTime.MinValue,
                    lifetime.Expires.HasValue ? lifetime.Expires.Value : DateTime.MaxValue,
                    rstr2.RequestedAttachedReference,
                    rstr2.RequestedUnattachedReference,
                    null);
            }
        }
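
    Putting the two together, a call site might look roughly like the sketch below. The endpoint names and the IMyService contract are invented placeholders, not part of the original post:

        // Hypothetical usage sketch: endpoint names and the service contract are assumptions.
        var trustFactory = new WSTrustChannelFactory("issueEndpoint"); // STS endpoint from config
        trustFactory.TrustVersion = TrustVersion.WSTrust13;

        var rst = new RequestSecurityToken
        {
            RequestType = RequestTypes.Issue,
            AppliesTo = new EndpointAddress("https://service.example.com/MyService")
        };

        RequestSecurityTokenResponse rstr;
        var channel = trustFactory.CreateChannel();
        channel.Issue(rst, out rstr);

        // Convert the response into a token usable with CreateChannelWithIssuedToken.
        GenericXmlSecurityToken token = rstr.GetSecurityTokenFromResponse();

        var serviceFactory = new ChannelFactory<IMyService>("serviceEndpoint");
        serviceFactory.ConfigureChannelFactory(); // WIF extension enabling issued-token support
        IMyService proxy = serviceFactory.CreateChannelWithIssuedToken(token);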

    Read the article

  • How to properly document functionality in an agile project?

    - by RoboShop
    So recently, we've just finished the first phase of our project. We used agile with fortnightly sprints. And whilst the application turned out well, we're now turning our eyes on some of the maintenance tasks. One maintenance task is that all of our documentation appears in the form of specs. These specs describe 1 or more stories and generally are a body of work which a few devs could knock over in a week. For development, that works really well - every two weeks, the devs get handed a spec and it's a nice discrete chunk of work that they can just do. From a documentation point of view, this has become a mess. The problem with writing specs that are focused on delivering just-in-time requirements to developers is we haven't placed much emphasis on the big picture. Specs come from all different angles - it could be describing a standard function, it could describing parts of a workflow, it could be describing a particular screen... And now, we have business rules about our application scattered across 120 documents. Looking for any document for a particular business rule or function in particular is quite hard because you don't know which document has this information, and making a change request is equally hard because once again, we are unsure about which spec to make the change. So we have maybe a couple of weeks of lull before it's back to specing out functionality for the next phase but in this time, I'd like to re-visit our processes. I think the way we have worked so far in terms of delivering fortnightly specs works well. But we also need a way to manage our documentation so that our business rules for a given function / workflow are easy to locate / change. I have two ideas. One is we compile all of our specs into a series of master specs broken by a few broad functional areas. The specs describe the sprint, the master spec describe the system. The only problem I can see is 1) Our existing 120 specs are not all neatly defined into broad functional areas. Some will require breaking up, merging etc. which will take a lot of time. 2) We'll be writing specs and updating master specs in each new sprint. Seems like double the work, and then do the devs look at the spec or the master spec? My other suggestion is to concede that our documentation is too big of a mess, and manage that mess going forward. So we go through each spec, assign like keywords to it, and then when we want to search for a function, we search for that keyword. Problems I can see 1) Still the problem of business rules scattered everywhere, keywords just make it easier to find it. anyway, if anyone has any decent ideas or any experience to share about how best to manage documentation, would really appreciate it.

    Read the article

  • What's the best way to cache a growing database table for html generation?

    - by McLeopold
    I've got a database table which will grow in size by about 5000 rows a hour. For a key that I would be querying by, the query will grow in size by about 1 row every hour. I would like a web page to show the latest rows for a key, 50 at a time (this is configurable). I would like to try and implement memcache to keep database activity low for reads. If I run a query and create a cache result for each page of 50 results, that would work until a new entry is added. At that time, the page of latest results gets new result and the oldest results drops off. This cascades down the list of cached pages causing me to update every cache result. It seems like a poor design. I could build the cache pages backwards, then for each page requested I should get the latest 2 pages and truncate to the proper length of 50. I'm not sure if this is good or bad? Ideally, the mechanism I use to insert a new row would also know how to invalidate the proper cache results. Has someone already solved this problem in a widely acceptable way? What's the best method of doing this? EDIT: If my understanding of the MYSQL query cache is correct, it has table level granularity in invalidation. Given the fact that I have about 5000 updates before a query on a key should need to be invalidated, it seems that the database query cache would not be used. MS SQL caches execution plans and frequently accessed data pages, so it may do better in this scenario. My query is not against a single table with TOP N. One version has joins to several tables and another has sub-selects. Also, since I want to cache the html generated table, I'm wondering if a cache at the web server level would be appropriate? Is there really no benefit to any type of caching? Is the best advice really to just allow a website site query to go through all the layers and hit the database every request?
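
    One approach that avoids rewriting every cached page on insert is close to the "build the cache pages backwards" idea above: cache fixed blocks keyed by their position from the oldest row forward, so a filled block never changes, and assemble a page from at most the newest block plus the one before it at request time. A rough sketch, where ICache and LoadRowsFromDatabase are placeholders standing in for the real memcache client and query, not a specific API:

        using System.Collections.Generic;
        using System.Linq;

        // Sketch only: ICache and LoadRowsFromDatabase are invented placeholders.
        public interface ICache
        {
            IReadOnlyList<string> Get(string key);
            void Set(string key, IReadOnlyList<string> rows);
        }

        public class LatestRowsReader
        {
            private readonly ICache cache;
            private readonly int pageSize;

            public LatestRowsReader(ICache cache, int pageSize = 50)
            {
                this.cache = cache;
                this.pageSize = pageSize;
            }

            // Blocks are numbered from the oldest row forward, so a filled block never changes;
            // only the newest (partial) block is affected when rows are inserted.
            public IReadOnlyList<string> GetLatestPage(string key, long totalRowsForKey)
            {
                long newestBlock = totalRowsForKey / pageSize;
                var rows = new List<string>(GetBlock(key, newestBlock));
                if (rows.Count < pageSize && newestBlock > 0)
                {
                    rows.AddRange(GetBlock(key, newestBlock - 1));
                }
                // Ordering and trimming details elided; truncate to one page as described above.
                return rows.Take(pageSize).ToList();
            }

            private IReadOnlyList<string> GetBlock(string key, long blockNumber)
            {
                string cacheKey = $"rows:{key}:{blockNumber}";
                var cached = cache.Get(cacheKey);
                if (cached != null)
                {
                    return cached;
                }
                var fresh = LoadRowsFromDatabase(key, blockNumber); // placeholder for the real query
                cache.Set(cacheKey, fresh);
                return fresh;
            }

            private IReadOnlyList<string> LoadRowsFromDatabase(string key, long blockNumber)
            {
                return new List<string>(); // stand-in; a real implementation queries the table here
            }
        }

    With this layout, the insert path only has to delete or refresh the single cache key for the newest block of that key, instead of cascading an update through every cached page.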

    Read the article

  • How are software projects 'typically' managed/deployed

    - by rguilbault
    My company is evaluating adopting off-the-shelf ALM products to aid in our development lifecycle; we currently use our own homegrown solutions to manage requirements gathering, specification documentation, testing, etc. One of the issues I am having is that we have what we call a pipeline, which consists of particular stops:

        [Source] - [QC] - [Production]

    At the first stop, the developer works out a solution to some requested change and performs individual testing. When that process is complete (and peer review has been performed), our ALM system physically moves the affected programs from the [Source] runtime environment to the [QC] runtime environment. You can think of this as analogous to moving some web pages from the 'test' server to the 'live' server, where QC personnel can bang on the system and complain that the developer has it all wrong ;-) Once QC signs off that the changes are working, the system again moves the code along to the next stage, where additional testing is performed, etc.

    I have been searching the internet for a few days trying to find how the process is accomplished anywhere else -- I have read a bit about builds, automated testing, various ALM products, etc., but nowhere does any of this state how builds interact with initial change requests, what the triggers are, how dependencies are managed, how the various forms of testing are accommodated (e.g. unit testing, integration testing, regression testing), etc.

    Can anyone point me to any resources or attempt to explain (generically) how a change could/should be tracked and moved through the development lifecycle? I'd be very appreciative. To keep things consistent, let's say that we have a project called Calculator, to which we want to add support for the basic trigonometric functions: sine, cosine and tangent. I'm open to reorganizing the company however we need to in order to accomplish due diligence testing, and we can suppose that any tools are available for use (if that helps to illustrate the process).

    To start things off, I think I understand this much:

    - we document the requirements, e.g.: support sine, cosine and tangent functions
    - we create some type of change request/work order to assign to programming
    - coding takes place, commits are made to version control
    - peer review commences
    - programmer marks the work order as completed?
    - ... now what? How does QC do their thing? Would they perform testing before closing the 'work order'?

    Read the article

  • It’s that Time of Year Again… OPN Survey Time!

    - by Kristin Rose
    Whether you’re still carving pumpkins or getting ready to carve a turkey, be sure to carve out your chance to win one of 5 Apple iPads by taking our Oracle PartnerNetwork Survey! On October 12th, Oracle Alliances and Channels sent out invitations asking you to complete the Oracle PartnerNetwork Survey and provide feedback on Oracle’s Enablement, Partner Business Center, OPN Portal, Communications, Benefits, OPN Competency Center, Solutions Catalog and Marketing. So, if you’re feeling as festive as we are, and received an invitation to participate, we hope you take this opportunity to provide us feedback so we can continue to drive meaningful programs and interactions with you, our partners!

    If you didn’t receive an invitation but still want to get the holidays started with your very own iPad, you’re welcome to send us a note at [email protected] with your candid feedback, or request to be added to the list for future surveys. The drawing for the Apple iPads will take place after the survey closes in mid November.

    Thank you in advance for your participation and we look forward to receiving your response. Let the holiday season begin!

    The OPN Communications Team

    Read the article

  • Windows Backup failed with error 0x807800C5

    - by Alexey Ivanov
    I setup backup of my Windows 8 laptop with Windows 7 File Recovery (known as Backup and Restore in Windows 7). Backup of files runs successfully. But if I try to create system image, it fails with error 0x807800C5: Error details on the dialog: The mounted backup volume is inaccessible. Error details in the system log: There was a failure in preparing the backup image of one of the volumes in the backup set. I save the backup to a network location, WD MyBookLive.   Edit: I tried some of the steps suggested in the various thread about this issue: Cleaned up the backup location: Removed MediaID.bin in the backup location. Removed folder <ComputerName> from WindowsImageBackup. Restarted backup resulted in the same error. However, the error dialog shows slightly different error message: The specified backup disk cannot be found. Performed System File Check by running sfc /scannow. It showed no errors. Running backup failed nevertheless. I tried searching Google for error code but I've found no solution so far. Update: I submitted technical support request to Microsoft. The first suggestion was to clean boot, but it didn't help. I pointed out that I had tried all the methods from the same problem on MS Answers, and nothing had helped.

    Read the article

  • SSTP client disconnects shortly after successfully connected to VPN

    - by Eran Betzalel
    I'm successfully authenticating and connecting to a SSTP VPN (on windows 2008) from my windows 7 machine, but for some reason, the connection is disconnected about a 1-2 seconds after it's established. I've done the following: Defined a SSTP VPN on my windows server 2008. Defined the same machine as CA. Issued the needed certificates and published them on the client. I'm currently testing this VPN inside my LAN so all the needed ports are opened. Here are the event log entries when trying to connect: Error Log (Client): The user HOME\User dialed a connection named Home VPN which has terminated. The reason code returned on termination is 829. Error Log (Server-VPN): The user HOME\User connected on port VPN0-0 on 7/27/2012 at 1:57 AM and disconnected on 7/27/2012 at 1:57 AM. The user was active for 0 minutes 0 seconds. 312 bytes were sent and 4528 bytes were received. The reason for disconnecting was user request. What would be the issue? How can I resolve or debug it? UPDATE: I've found an event log (Log=System, Source=RasSstp) message on the windows 7 machine that tries to connect to the VPN: The SSTP-based VPN connection to the remote access server was terminated because of a security check failure. Security settings on the remote access server do not match settings on this computer. Contact the system administrator of the remote access server and relay the following information: SHA1 Certificate Hash: 065D681...520375552F SHA256 Certificate Hash: 18DED363...EEEE28CFD00

    Read the article

  • error:14094410:SSL routines:SSL3_READ_BYTES:sslv3 alert handshake failure(35)

    - by ArunS
    Hello there,

    We have an online shopping site. When I go to the checkout page I am getting an error like this:

        error:14094410:SSL routines:SSL3_READ_BYTES:sslv3 alert handshake failure(35)

    From the Apache error log I can see some attempts to connect to api.paypal.com. Here is the relevant part of my Apache error log:

        About to connect() to api.paypal.com port 443 (#0)
        Trying 66.211.168.123... * connected
        Connected to api.paypal.com (66.211.168.123) port 443 (#0)
        successfully set certificate verify locations:
        CAfile: none
        CApath: /etc/ssl/certs
        error:14094410:SSL routines:SSL3_READ_BYTES:sslv3 alert handshake failure
        Closing connection #0

    When I tried to connect to api.paypal.com using curl I got an error like this:

        curl -iv https://api.paypal.com/
        * About to connect() to api.paypal.com port 443 (#0)
        * Trying 66.211.168.91... connected
        * Connected to api.paypal.com (66.211.168.91) port 443 (#0)
        * successfully set certificate verify locations:
        *   CAfile: none
            CApath: /etc/ssl/certs
        * SSLv3, TLS handshake, Client hello (1):
        * SSLv3, TLS handshake, Server hello (2):
        * SSLv3, TLS handshake, CERT (11):
        * SSLv3, TLS handshake, Request CERT (13):
        * SSLv3, TLS handshake, Server finished (14):
        * SSLv3, TLS handshake, CERT (11):
        * SSLv3, TLS handshake, Client key exchange (16):
        * SSLv3, TLS change cipher, Client hello (1):
        * SSLv3, TLS handshake, Finished (20):
        * SSLv3, TLS alert, Server hello (2):
        * error:14094410:SSL routines:SSL3_READ_BYTES:sslv3 alert handshake failure
        * Closing connection #0
        curl: (35) error:14094410:SSL routines:SSL3_READ_BYTES:sslv3 alert handshake failure

    Can anyone help me figure this out? Thanks in advance.

    Arun S

    Read the article

  • Active directory over SSL Error 81 = ldap_connect(hLdap, NULL);

    - by Kossel
    I have been trying for several days to get AD over SSL (LDAPS) working. I followed exactly this guide. I have Active Directory Certificate Services installed (standalone root CA); I can request certs and install certs, but whenever I want to test the connection using LDP.exe I get this famous error:

        ld = ldap_sslinit("localhost", 636, 1);
        Error 0 = ldap_set_option(hLdap, LDAP_OPT_PROTOCOL_VERSION, 3);
        Error 81 = ldap_connect(hLdap, NULL);
        Server error: <empty>
        Error <0x51>: Fail to connect to localhost.

    I have been searching, and I know there are many things that can cause this error. I tried most of the things I could, and then decided to post it here. I tried to look for errors in the system log, but found nothing :/ (but I could be wrong). Can anyone tell me what else to look at?

    UPDATE: I restarted the AD service and the following error showed up in the event viewer:

        LDAP over Secure Sockets Layer (SSL) will be unavailable at this time because the server was unable to obtain a certificate.
        Additional Data
        Error value: 8009030e No credentials are available in the security package

    Read the article

  • Using dd-wrt Dynamic DNS client with CloudFlare

    - by Roman
    I'm trying to configure Dynamic DNS client on my router with dd-wrt (v24-sp2) firmware so it would dynamically change IP address in one of the DNS records. Unfortunately I encountered a problem… Here is an example request from their ddclient configuration: https://www.cloudflare.com/api.html?a=DIUP&u=<my_login>&tkn=<my_token>&ip=<my_ip>&hosts=<my_record> It works if I use it in browser, but in dd-wrt I get this output: Tue Jan 24 00:36:47 2012: INADYN: Started 'INADYN Advanced version 1.96-ADV' - dynamic DNS updater. Tue Jan 24 00:36:47 2012: I:INADYN: IP address for alias '<my_record>' needs update to '<my_ip>' Tue Jan 24 00:36:48 2012: W:INADYN: Error validating DYNDNS svr answer. Check usr,pass,hostname! (HTTP/1.1 303 See Other Server: cloudflare-nginx Date: Mon, 23 Jan 2012 14:36:48 GMT Content-Type: text/plain Connection: close Expires: Sun, 25 Jan 1981 05:00:00 GMT Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0 Pragma: no-cache Location: https://www.cloudflare.com/api.html?a=DIUP&u=<my_login>&tkn=<my_token>&ip=<my_ip>&hosts=<my_record> Vary: Accept-Encoding Set-Cookie: __cfduid=<id>; expires=Mon, 23-Dec-2019 23:50:00 GMT; path=/; domain=.cloudflare.com Set-Cookie: __cfduid=<id>; expires=Mon, 23-Dec-2019 23:50:00 GMT; path=/; domain=.www.cloudflare.com You must include an `a' paramiter, with a value of DIUP|wl|chl|nul|ban|comm_news|devmode|sec_lvl|ipv46|ob|cache_lvl|fpurge_ts|async|pre_purge|minify|stats|direct|zone_check|zone_ips|zone_errors|zone_agg|zone_search|zone_time|zone_grab|app|rec_se URL from "Location" works perfectly and parameter "a" is included. What's the problem?

    Read the article

  • setsockopt EOPNOTSUPP (Operation not supported)

    - by brant
    When I strace my MySQL process, I keep finding the same error over and over: setsockopt(240, SOL_IP, IP_TOS, [8], 4) = -1 EOPNOTSUPP (Operation not supported) futex(0x87ab944, FUTEX_WAKE_OP_PRIVATE, 1, 1, 0x87ab940, {FUTEX_OP_SET, 0, FUTEX_OP_CMP_GT, 1}) = 1 futex(0x87ab260, FUTEX_WAKE_PRIVATE, 1) = 1 select(13, [10 12], NULL, NULL, NULL) = 1 (in [12]) fcntl64(12, F_SETFL, O_RDWR|O_NONBLOCK) = 0 accept(12, {sa_family=AF_FILE, path="\246\32629iE"...}, [2]) = 803 fcntl64(12, F_SETFL, O_RDWR) = 0 getsockname(803, {sa_family=AF_FILE, path="/var/lib/mysql\1"...}, [28]) = 0 fcntl64(803, F_SETFL, O_RDONLY) = 0 fcntl64(803, F_GETFL) = 0x2 (flags O_RDWR) fcntl64(803, F_SETFL, O_RDWR|O_NONBLOCK) = 0 setsockopt(803, SOL_IP, IP_TOS, [8], 4) = -1 EOPNOTSUPP (Operation not supported) futex(0x87ab944, FUTEX_WAKE_OP_PRIVATE, 1, 1, 0x87ab940, {FUTEX_OP_SET, 0, FUTEX_OP_CMP_GT, 1}) = 1 futex(0x87ab260, FUTEX_WAKE_PRIVATE, 1) = 1 select(13, [10 12], NULL, NULL, NULL) = 1 (in [12]) fcntl64(12, F_SETFL, O_RDWR|O_NONBLOCK) = 0 accept(12, {sa_family=AF_FILE, path="\246\32629iE"...}, [2]) = 240 fcntl64(12, F_SETFL, O_RDWR) = 0 getsockname(240, {sa_family=AF_FILE, path="/var/lib/mysql\1"...}, [28]) = 0 fcntl64(240, F_SETFL, O_RDONLY) = 0 fcntl64(240, F_GETFL) = 0x2 (flags O_RDWR) fcntl64(240, F_SETFL, O_RDWR|O_NONBLOCK) = 0 setsockopt(240, SOL_IP, IP_TOS, [8], 4) = -1 EOPNOTSUPP (Operation not supported) When I look for running mysql processes I don't see anything out of the ordinary. I figured it might be someplace in my code, so I modified .htaccess to spit out a 502 error to prevent it from loading anything. The error still shows up, just less frequently. There have been quite a few threads that talk about this error, but no real answer as to how to solve it. my.conf, as per request: [mysqld] #skip-networking #log-slow-queries #safe-show-database #local-infile = 0 log-slow-queries = /var/log/mysql-slow.log max_connections = 200 query_cache_limit = 128643200 key_buffer_size = 1200144000 low_priority_updates = 1 concurrent_insert = 2 thread_cache_size = 7 query_cache_size = 662144000 table_cache = 1600 table_definition_cache = 1024 long_query_time = 2.5 open_files_limit = 2647 max_connect_errors=999999999

    Read the article

  • Setting up PerformancePoint Services on Sharepoint 2010: connection errors

    - by Rik
    I have tried to setup PerformancePoint Services on SharePoint 2010, but every time I try to use the dashboard designer, I get this error: “An error has occurred attempting to contact the specified SharePoint site” I have tried these steps but it hasn't helped. Any ideas? The event log gives the following information: WebHost failed to process a request. Sender Information: System.ServiceModel.ServiceHostingEnvironment+HostingManager/24724999 Exception: System.ServiceModel.ServiceActivationException: The service '/_vti_bin/client.svc' cannot be activated due to an exception during compilation. The exception message is: This collection already contains an address with scheme http. There can be at most one address per scheme in this collection. Parameter name: item. --- System.ArgumentException: This collection already contains an address with scheme http. There can be at most one address per scheme in this collection. Parameter name: item at System.ServiceModel.UriSchemeKeyedCollection.InsertItem(Int32 index, Uri item) at System.Collections.Generic.SynchronizedCollection`1.Add(T item) at System.ServiceModel.UriSchemeKeyedCollection..ctor(Uri[] addresses) at System.ServiceModel.ServiceHost..ctor(Type serviceType, Uri[] baseAddresses) at System.ServiceModel.Activation.ServiceHostFactory.CreateServiceHost(Type serviceType, Uri[] baseAddresses) at System.ServiceModel.Activation.ServiceHostFactory.CreateServiceHost(String constructorString, Uri[] baseAddresses) at System.ServiceModel.ServiceHostingEnvironment.HostingManager.CreateService(String normalizedVirtualPath) at System.ServiceModel.ServiceHostingEnvironment.HostingManager.ActivateService(String normalizedVirtualPath) at System.ServiceModel.ServiceHostingEnvironment.HostingManager.EnsureServiceAvailable(String normalizedVirtualPath) --- End of inner exception stack trace --- at System.ServiceModel.ServiceHostingEnvironment.HostingManager.EnsureServiceAvailable(String normalizedVirtualPath) at System.ServiceModel.ServiceHostingEnvironment.EnsureServiceAvailableFast(String relativeVirtualPath) Process Name: w3wp Process ID: 2576

    Read the article
