Search Results

Search found 39788 results on 1592 pages for 'action method'.


  • Zend_Router requirements mismatch

    - by elbicho
    Hello all: I have two routes that match a URL with the same apparent pattern; the difference lies in $actionRoute, which should only be matched if its :action variable equals 'myaction'. If I go to /en/mypage/whatever/myaction it goes through $actionRoute as expected. If I go to /en/mypage/whatever/blahblah it gets rejected by $actionRoute and matched by $genRoute. If I go to /en/mypage/whatever it should be matched by $genRoute, but it gets matched by $actionRoute instead, throwing an exception because the action noactionAction() does not exist. I don't know what I'm doing wrong; I'd appreciate your help.

        $genRoute = new Zend_Controller_Router_Route(
            ':lang/mypage/:var1/:var2',
            array(
                'lang'       => '',
                'module'     => 'mymodule',
                'controller' => 'index',
                'action'     => 'index',
                'var1'       => 'noone',
                'var2'       => 'no'
            ),
            array(
                'var1' => '[a-z\-]+?',
                'lang' => '(es|en|fr|de){1}'
            )
        );

        $actionRoute = new Zend_Controller_Router_Route(
            ':lang/mypage/:var1/:action',
            array(
                'lang'       => '',
                'module'     => 'mymodule',
                'controller' => 'index',
                'action'     => 'noaction',
                'var1'       => 'noone',
            ),
            array(
                'action' => '(myaction)+?',
                'var'    => '[a-z\-]+?',
                'lang'   => '(es|en|fr|de){1}',
            )
        );

        $router->addRoute('genroute', $genRoute);
        $router->addRoute('actionroute', $actionRoute);

    Read the article

  • Stop SQL returning the same result twice in a JOIN

    - by nbs189
    I have joined together several tables to get the data I want, but since I am new to SQL I can't figure out how to stop data being returned more than once. Here's the SQL statement:

        SELECT T.url, T.ID, S.status, S.ID, E.action, E.ID, E.timestamp
        FROM tracks T, status S, events E
        WHERE S.ID AND T.ID = E.ID
        ORDER BY E.timestamp DESC

    The data that is returned is something like this:

        +-----+----+--------+----+----------------------+----+-----------+
        | URL | ID | Status | ID | action               | ID | timestamp |
        +-----+----+--------+----+----------------------+----+-----------+
        | T.1 | 4  | hello  | 4  | has uploaded a track | 4  | time      |
        | T.2 | 3  | bye    | 3  | has some news        | 3  | time      |
        | t.1 | 4  | more   | 4  | has some news        | 4  | time      |
        +-----+----+--------+----+----------------------+----+-----------+

    That's a very basic example, but it does outline what happens. If you look at the third row, the URL is repeated when there is a different status. This is what I want to happen:

        +---------------+----+----------------------+-----------+
        | URL or Status | ID | action               | timestamp |
        +---------------+----+----------------------+-----------+
        | T.1           | 4  | has uploaded a track | time      |
        | hello         | 3  | has some news        | time      |
        | bye           | 4  | has some news        | time      |
        +---------------+----+----------------------+-----------+

    Please notice that the url (in this case the mock one is T.1) is shown when the action is "has uploaded a track". This is very important. The action in the events table is inserted on trigger of a status or track insert. If a new track is inserted the action is 'has uploaded a track', and you can guess what it is for a status. The ID and timestamp are also inserted into the events table at this point. Note: there are more tables that go into the query, 3 more in fact, but I have left them out for simplicity.

    Read the article

  • Rails - Dynamic name routes namespace

    - by Kuro
    Hi, using Rails 2.3, I'm trying to configure my routes but I encounter some difficulties. I would like to have something like:

        http://mydomain.com/mycontroller/myaction/myid

    which should respond with controllers in the :front namespace, and

        http://mydomain.com/aname/mycontroller/myaction/myid

    which should respond with controllers in the :custom namespace. I tried something like this, but I'm totally wrong:

        map.namespace :front, :path_prefix => "" do |f|
          f.root :controller => :home, :action => :index
          f.resources :home
          ...
        end

        map.namespace :custom, :path_prefix => "" do |m|
          m.root :controller => :home, :action => :index
          m.resources :home
          ...
          m.match ':sub_url/site/:controller/:action/:id'
          m.match ':sub_url/site/:controller/:action/:id'
          m.match ':sub_url/site/:controller/:action/:id.:format'
          m.match ':sub_url/site/:controller/:action.:format'
        end

    I put the matching instructions in the custom namespace, but I'm not sure that's the right place for them. I think I really don't get the way to customize fields in URL matching, and I don't know how to find documentation about Rails 2.3; most of my research drove me to Rails 3 docs about the topic... Can somebody help me?

    Read the article

  • How to prevent screen locking when lid is closed?

    - by Joe Casadonte
    I have Ubuntu 11.10 with Gnome 3 (no Unity); gnome-screen-saver has been removed and replaced with xscreensaver. The screensaver stuff all works fine -- no complaints there. When I close my laptop lid, even for a second, the screen locks (and the dialog box asking for my password is xscreensaver's). I'd like for this not to happen... Things I've tried/looked at already:

    - xscreensaver settings - the "Lock Screen After" checkbox is not checked (though I've also tried it checked and set to 720 minutes)
    - gconf-editor - apps -> gnome-screensaver -> lock_enabled is not checked
    - System Settings - Power - "When the lid is closed" is set to "Do nothing" for both battery and A/C
    - System Settings - Screen - Lock is "off"
    - gconf-editor - apps -> gnome-power-manager -> buttons -> lid_ac && lid_battery are both set to "nothing"
    - dconf-editor - apps -> org -> gnome -> desktop -> screensaver -> lock_enabled is not checked

    Output from: gsettings list-recursively org.gnome.settings-daemon.plugins.power:

        org.gnome.settings-daemon.plugins.power active true
        org.gnome.settings-daemon.plugins.power button-hibernate 'hibernate'
        org.gnome.settings-daemon.plugins.power button-power 'suspend'
        org.gnome.settings-daemon.plugins.power button-sleep 'suspend'
        org.gnome.settings-daemon.plugins.power button-suspend 'suspend'
        org.gnome.settings-daemon.plugins.power critical-battery-action 'hibernate'
        org.gnome.settings-daemon.plugins.power idle-brightness 30
        org.gnome.settings-daemon.plugins.power idle-dim-ac false
        org.gnome.settings-daemon.plugins.power idle-dim-battery true
        org.gnome.settings-daemon.plugins.power idle-dim-time 10
        org.gnome.settings-daemon.plugins.power lid-close-ac-action 'nothing'
        org.gnome.settings-daemon.plugins.power lid-close-battery-action 'nothing'
        org.gnome.settings-daemon.plugins.power notify-perhaps-recall true
        org.gnome.settings-daemon.plugins.power percentage-action 2
        org.gnome.settings-daemon.plugins.power percentage-critical 3
        org.gnome.settings-daemon.plugins.power percentage-low 10
        org.gnome.settings-daemon.plugins.power priority 1
        org.gnome.settings-daemon.plugins.power sleep-display-ac 600
        org.gnome.settings-daemon.plugins.power sleep-display-battery 600
        org.gnome.settings-daemon.plugins.power sleep-inactive-ac false
        org.gnome.settings-daemon.plugins.power sleep-inactive-ac-timeout 0
        org.gnome.settings-daemon.plugins.power sleep-inactive-ac-type 'suspend'
        org.gnome.settings-daemon.plugins.power sleep-inactive-battery true
        org.gnome.settings-daemon.plugins.power sleep-inactive-battery-timeout 0
        org.gnome.settings-daemon.plugins.power sleep-inactive-battery-type 'suspend'
        org.gnome.settings-daemon.plugins.power time-action 120
        org.gnome.settings-daemon.plugins.power time-critical 300
        org.gnome.settings-daemon.plugins.power time-low 1200
        org.gnome.settings-daemon.plugins.power use-time-for-policy true

    gnome-settings-daemon is running:

        <~> $ ps -ef | grep gnome-settings-daemon
        1000 1719 1645 0 19:37 ? 00:00:01 /usr/lib/gnome-settings-daemon/gnome-settings-daemon
        1000 1726 1 0 19:37 ? 00:00:00 /usr/lib/gnome-settings-daemon/gsd-printer
        1000 1774 1645 0 19:37 ? 00:00:00 /usr/lib/gnome-settings-daemon/gnome-fallback-mount-helper

    Anything else I can check? Thanks!

    Read the article

  • AuthnRequest Settings in OIF / SP

    - by Damien Carru
    In this article, I will list the various OIF/SP settings that affect how an AuthnRequest message is created in OIF in a Federation SSO flow. The AuthnRequest message is used by an SP to start a Federation SSO operation and to indicate to the IdP how the operation should be executed: How the user should be challenged at the IdP Whether or not the user should be challenged at the IdP, even if a session already exists at the IdP for this user Which NameID format should be requested in the SAML Assertion Which binding (Artifact or HTTP-POST) should be requested from the IdP to send the Assertion Which profile should be used by OIF/SP to send the AuthnRequest message Enjoy the reading! Protocols The SAML 2.0, SAML 1.1 and OpenID 2.0 protocols define different message elements and rules that allow an administrator to influence the Federation SSO flows in different manners, when the SP triggers an SSO operation: SAML 2.0 allows extensive customization via the AuthnRequest message SAML 1.1 does not allow any customization, since the specifications do not define an authentication request message OpenID 2.0 allows for some customization, mainly via the OpenID 2.0 extensions such as PAPE or UI SAML 2.0 OIF/SP allows the customization of the SAML 2.0 AuthnRequest message for the following elements: ForceAuthn: Boolean indicating whether or not the IdP should force the user for re-authentication, even if the user has still a valid session By default set to false IsPassive Boolean indicating whether or not the IdP is allowed to interact with the user as part of the Federation SSO operation. If false, the Federation SSO operation might result in a failure with the NoPassive error code, because the IdP will not have been able to identify the user By default set to false RequestedAuthnContext Element indicating how the user should be challenged at the IdP If the SP requests a Federation Authentication Method unknown to the IdP or for which the IdP is not configured, then the Federation SSO flow will result in a failure with the NoAuthnContext error code By default missing NameIDPolicy Element indicating which NameID format the IdP should include in the SAML Assertion If the SP requests a NameID format unknown to the IdP or for which the IdP is not configured, then the Federation SSO flow will result in a failure with the InvalidNameIDPolicy error code If missing, the IdP will generally use the default NameID format configured for this SP partner at the IdP By default missing ProtocolBinding Element indicating which SAML binding should be used by the IdP to redirect the user to the SP with the SAML Assertion Set to Artifact or HTTP-POST By default set to HTTP-POST OIF/SP also allows the administrator to configure the server to: Set which binding should be used by OIF/SP to redirect the user to the IdP with the SAML 2.0 AuthnRequest message: Redirect or HTTP-POST By default set to Redirect Set which binding should be used by OIF/SP to redirect the user to the IdP during logout with SAML 2.0 Logout messages: Redirect or HTTP-POST By default set to Redirect SAML 1.1 The SAML 1.1 specifications do not define a message for the SP to send to the IdP when a Federation SSO operation is started. As such, there is no capability to configure OIF/SP on how to affect the start of the Federation SSO flow. 
OpenID 2.0 OpenID 2.0 defines several extensions that can be used by the SP/RP to affect how the Federation SSO operation will take place: OpenID request: mode: String indicating if the IdP/OP can visually interact with the user checkid_immediate does not allow the IdP/OP to interact with the user checkid_setup allows user interaction By default set to checkid_setup PAPE Extension: max_auth_age : Integer indicating in seconds the maximum amount of time since when the user authenticated at the IdP. If MaxAuthnAge is bigger that the time since when the user last authenticated at the IdP, then the user must be re-challenged. OIF/SP will set this attribute to 0 if the administrator configured ForceAuthn to true, otherwise this attribute won't be set Default missing preferred_auth_policies Contains a Federation Authentication Method Element indicating how the user should be challenged at the IdP By default missing Only specified in the OpenID request if the IdP/OP supports PAPE in XRDS, if OpenID discovery is used. UI Extension Popup mode Boolean indicating the popup mode is enabled for the Federation SSO By default missing Language Preference String containing the preferred language, set based on the browser's language preferences. By default missing Icon: Boolean indicating if the icon feature is enabled. In that case, the IdP/OP would look at the SP/RP XRDS to determine how to retrieve the icon By default missing Only specified in the OpenID request if the IdP/OP supports UI Extenstion in XRDS, if OpenID discovery is used. ForceAuthn and IsPassive WLST Command OIF/SP provides the WLST configureIdPAuthnRequest() command to set: ForceAuthn as a boolean: In a SAML 2.0 AuthnRequest, the ForceAuthn field will be set to true or false In an OpenID 2.0 request, if ForceAuthn in the configuration was set to true, then the max_auth_age field of the PAPE request will be set to 0, otherwise, max_auth_age won't be set IsPassive as a boolean: In a SAML 2.0 AuthnRequest, the IsPassive field will be set to true or false In an OpenID 2.0 request, if IsPassive in the configuration was set to true, then the mode field of the OpenID request will be set to checkid_immediate, otherwise set to checkid_setup Test In this test, OIF/SP is integrated with a remote SAML 2.0 IdP Partner, with the OOTB configuration. 
Based on this setup, when OIF/SP starts a Federation SSO flow, the following SAML 2.0 AuthnRequest would be generated: <samlp:AuthnRequest ProtocolBinding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST" ID="id-E4BOT7lwbYK56lO57dBaqGUFq01WJSjAHiSR60Q4" Version="2.0" IssueInstant="2014-04-01T21:39:14Z" Destination="https://acme.com/saml20/sso">   <saml:Issuer Format="urn:oasis:names:tc:SAML:2.0:nameid-format:entity">https://sp.com/oam/fed</saml:Issuer>   <samlp:NameIDPolicy AllowCreate="true"/></samlp:AuthnRequest> Let's configure OIF/SP for that IdP Partner, so that the SP will require the IdP to re-challenge the user, even if the user is already authenticated: Enter the WLST environment by executing:$IAM_ORACLE_HOME/common/bin/wlst.sh Connect to the WLS Admin server:connect() Navigate to the Domain Runtime branch:domainRuntime() Execute the configureIdPAuthnRequest() command:configureIdPAuthnRequest(partner="AcmeIdP", forceAuthn="true") Exit the WLST environment:exit() After the changes, the following SAML 2.0 AuthnRequest would be generated: <samlp:AuthnRequest ForceAuthn="true" ProtocolBinding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST" ID="id-E4BOT7lwbYK56lO57dBaqGUFq01WJSjAHiSR60Q4" Version="2.0" IssueInstant="2014-04-01T21:39:14Z" Destination="https://acme.com/saml20/sso">   <saml:Issuer Format="urn:oasis:names:tc:SAML:2.0:nameid-format:entity">https://sp.com/oam/fed</saml:Issuer>   <samlp:NameIDPolicy AllowCreate="true"/></samlp:AuthnRequest> To display or delete the ForceAuthn/IsPassive settings, perform the following operatons: Enter the WLST environment by executing:$IAM_ORACLE_HOME/common/bin/wlst.sh Connect to the WLS Admin server:connect() Navigate to the Domain Runtime branch:domainRuntime() Execute the configureIdPAuthnRequest() command: To display the ForceAuthn/IsPassive settings on the partnerconfigureIdPAuthnRequest(partner="AcmeIdP", displayOnly="true") To delete the ForceAuthn/IsPassive settings from the partnerconfigureIdPAuthnRequest(partner="AcmeIdP", delete="true") Exit the WLST environment:exit() Requested Fed Authn Method In my earlier "Fed Authentication Method Requests in OIF / SP" article, I discussed how OIF/SP could be configured to request a specific Federation Authentication Method from the IdP when starting a Federation SSO operation, by setting elements in the SSO request message. 
WLST Command The OIF WLST commands that can be used are: setIdPPartnerProfileRequestAuthnMethod() which will configure the requested Federation Authentication Method in a specific IdP Partner Profile, and accepts the following parameters: partnerProfile: name of the IdP Partner Profile authnMethod: the Federation Authentication Method to request displayOnly: an optional parameter indicating if the method should display the current requested Federation Authentication Method instead of setting it delete: an optional parameter indicating if the method should delete the current requested Federation Authentication Method instead of setting it setIdPPartnerRequestAuthnMethod() which will configure the specified IdP Partner entry with the requested Federation Authentication Method, and accepts the following parameters: partner: name of the IdP Partner authnMethod: the Federation Authentication Method to request displayOnly: an optional parameter indicating if the method should display the current requested Federation Authentication Method instead of setting it delete: an optional parameter indicating if the method should delete the current requested Federation Authentication Method instead of setting it This applies to SAML 2.0 and OpenID 2.0 protocols. See the "Fed Authentication Method Requests in OIF / SP" article for more information. Test In this test, OIF/SP is integrated with a remote SAML 2.0 IdP Partner, with the OOTB configuration. Based on this setup, when OIF/SP starts a Federation SSO flow, the following SAML 2.0 AuthnRequest would be generated: <samlp:AuthnRequest ProtocolBinding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST" ID="id-E4BOT7lwbYK56lO57dBaqGUFq01WJSjAHiSR60Q4" Version="2.0" IssueInstant="2014-04-01T21:39:14Z" Destination="https://acme.com/saml20/sso">   <saml:Issuer Format="urn:oasis:names:tc:SAML:2.0:nameid-format:entity">https://sp.com/oam/fed</saml:Issuer>   <samlp:NameIDPolicy AllowCreate="true"/></samlp:AuthnRequest> Let's configure OIF/SP for that IdP Partner, so that the SP will request the IdP to use a mechanism mapped to the urn:oasis:names:tc:SAML:2.0:ac:classes:X509 Federation Authentication Method to authenticate the user: Enter the WLST environment by executing:$IAM_ORACLE_HOME/common/bin/wlst.sh Connect to the WLS Admin server:connect() Navigate to the Domain Runtime branch:domainRuntime() Execute the setIdPPartnerRequestAuthnMethod() command:setIdPPartnerRequestAuthnMethod("AcmeIdP", "urn:oasis:names:tc:SAML:2.0:ac:classes:X509") Exit the WLST environment:exit() After the changes, the following SAML 2.0 AuthnRequest would be generated: <samlp:AuthnRequest ProtocolBinding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST" ID="id-E4BOT7lwbYK56lO57dBaqGUFq01WJSjAHiSR60Q4" Version="2.0" IssueInstant="2014-04-01T21:39:14Z" Destination="https://acme.com/saml20/sso">   <saml:Issuer Format="urn:oasis:names:tc:SAML:2.0:nameid-format:entity">https://sp.com/oam/fed</saml:Issuer>   <samlp:NameIDPolicy AllowCreate="true"/>   <samlp:RequestedAuthnContext Comparison="minimum">      <saml:AuthnContextClassRef xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion">         urn:oasis:names:tc:SAML:2.0:ac:classes:X509      </saml:AuthnContextClassRef>   </samlp:RequestedAuthnContext></samlp:AuthnRequest> NameID Format The SAML 2.0 protocol allows for the SP to request from the IdP a specific NameID format to be used when the Assertion is issued by the IdP. 
Note: SAML 1.1 and OpenID 2.0 do not provide such a mechanism Configuring OIF The administrator can configure OIF/SP to request a NameID format in the SAML 2.0 AuthnRequest via: The OAM Administration Console, in the IdP Partner entry The OIF WLST setIdPPartnerNameIDFormat() command that will modify the IdP Partner configuration OAM Administration Console To configure the requested NameID format via the OAM Administration Console, perform the following steps: Go to the OAM Administration Console: http(s)://oam-admin-host:oam-admin-port/oamconsole Navigate to Identity Federation -> Service Provider Administration Open the IdP Partner you wish to modify In the Authentication Request NameID Format dropdown box with one of the values None The NameID format will be set Default Email Address The NameID format will be set urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress X.509 Subject The NameID format will be set urn:oasis:names:tc:SAML:1.1:nameid-format:X509SubjectName Windows Name Qualifier The NameID format will be set urn:oasis:names:tc:SAML:1.1:nameid-format:WindowsDomainQualifiedName Kerberos The NameID format will be set urn:oasis:names:tc:SAML:2.0:nameid-format:kerberos Transient The NameID format will be set urn:oasis:names:tc:SAML:2.0:nameid-format:transient Unspecified The NameID format will be set urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified Custom In this case, a field would appear allowing the administrator to indicate the custom NameID format to use The NameID format will be set to the specified format Persistent The NameID format will be set urn:oasis:names:tc:SAML:2.0:nameid-format:persistent I selected Email Address in this example Save WLST Command To configure the requested NameID format via the OIF WLST setIdPPartnerNameIDFormat() command, perform the following steps: Enter the WLST environment by executing:$IAM_ORACLE_HOME/common/bin/wlst.sh Connect to the WLS Admin server:connect() Navigate to the Domain Runtime branch:domainRuntime() Execute the setIdPPartnerNameIDFormat() command:setIdPPartnerNameIDFormat("PARTNER", "FORMAT", customFormat="CUSTOM") Replace PARTNER with the IdP Partner name Replace FORMAT with one of the following: orafed-none The NameID format will be set Default orafed-emailaddress The NameID format will be set urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress orafed-x509 The NameID format will be set urn:oasis:names:tc:SAML:1.1:nameid-format:X509SubjectName orafed-windowsnamequalifier The NameID format will be set urn:oasis:names:tc:SAML:1.1:nameid-format:WindowsDomainQualifiedName orafed-kerberos The NameID format will be set urn:oasis:names:tc:SAML:2.0:nameid-format:kerberos orafed-transient The NameID format will be set urn:oasis:names:tc:SAML:2.0:nameid-format:transient orafed-unspecified The NameID format will be set urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified orafed-custom In this case, a field would appear allowing the administrator to indicate the custom NameID format to use The NameID format will be set to the specified format orafed-persistent The NameID format will be set urn:oasis:names:tc:SAML:2.0:nameid-format:persistent customFormat will need to be set if the FORMAT is set to orafed-custom An example would be:setIdPPartnerNameIDFormat("AcmeIdP", "orafed-emailaddress") Exit the WLST environment:exit() Test In this test, OIF/SP is integrated with a remote SAML 2.0 IdP Partner, with the OOTB configuration. 
Based on this setup, when OIF/SP starts a Federation SSO flow, the following SAML 2.0 AuthnRequest would be generated: <samlp:AuthnRequest ProtocolBinding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST" ID="id-E4BOT7lwbYK56lO57dBaqGUFq01WJSjAHiSR60Q4" Version="2.0" IssueInstant="2014-04-01T21:39:14Z" Destination="https://acme.com/saml20/sso">   <saml:Issuer Format="urn:oasis:names:tc:SAML:2.0:nameid-format:entity">https://sp.com/oam/fed</saml:Issuer> <samlp:NameIDPolicy AllowCreate="true"/></samlp:AuthnRequest> After the changes performed either via the OAM Administration Console or via the OIF WLST setIdPPartnerNameIDFormat() command where Email Address would be requested as the NameID Format, the following SAML 2.0 AuthnRequest would be generated: <samlp:AuthnRequest ForceAuthn="false" IsPassive="false" ProtocolBinding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST" ID="id-E4BOT7lwbYK56lO57dBaqGUFq01WJSjAHiSR60Q4" Version="2.0" IssueInstant="2014-04-01T21:39:14Z" Destination="https://acme.com/saml20/sso">   <saml:Issuer Format="urn:oasis:names:tc:SAML:2.0:nameid-format:entity">https://sp.com/oam/fed</saml:Issuer> <samlp:NameIDPolicy Format="urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress" AllowCreate="true"/></samlp:AuthnRequest> Protocol Binding The SAML 2.0 specifications define a way for the SP to request which binding should be used by the IdP to redirect the user to the SP with the SAML 2.0 Assertion: the ProtocolBinding attribute indicates the binding the IdP should use. It is set to: Either urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST for HTTP-POST Or urn:oasis:names:tc:SAML:2.0:bindings:Artifact for Artifact The SAML 2.0 specifications also define different ways to redirect the user from the SP to the IdP with the SAML 2.0 AuthnRequest message, as the SP can send the message: Either via HTTP Redirect Or HTTP POST (Other bindings can theoretically be used such as Artifact, but these are not used in practice) Configuring OIF OIF can be configured: Via the OAM Administration Console or the OIF WLST configureSAMLBinding() command to set the Assertion Response binding to be used Via the OIF WLST configureSAMLBinding() command to indicate how the SAML AuthnRequest message should be sent Note: the binding for sending the SAML 2.0 AuthnRequest message will also be used to send the SAML 2.0 LogoutRequest and LogoutResponse messages. 
OAM Administration Console To configure the SSO Response/Assertion Binding via the OAM Administration Console, perform the following steps: Go to the OAM Administration Console: http(s)://oam-admin-host:oam-admin-port/oamconsole Navigate to Identity Federation -> Service Provider Administration Open the IdP Partner you wish to modify Check the "HTTP POST SSO Response Binding" box to request the IdP to return the SSO Response via HTTP POST, otherwise uncheck it to request artifact Save WLST Command To configure the SSO Response/Assertion Binding as well as the AuthnRequest Binding via the OIF WLST configureSAMLBinding() command, perform the following steps: Enter the WLST environment by executing:$IAM_ORACLE_HOME/common/bin/wlst.sh Connect to the WLS Admin server:connect() Navigate to the Domain Runtime branch:domainRuntime() Execute the configureSAMLBinding() command:configureSAMLBinding("PARTNER", "PARTNER_TYPE", binding, ssoResponseBinding="httppost") Replace PARTNER with the Partner name Replace PARTNER_TYPE with the Partner type (idp or sp) Replace binding with the binding to be used to send the AuthnRequest and LogoutRequest/LogoutResponse messages (should be httpredirect in most case; default) httppost for HTTP-POST binding httpredirect for HTTP-Redirect binding Specify optionally ssoResponseBinding to indicate how the SSO Assertion should be sent back httppost for HTTP-POST binding artifactfor for Artifact binding An example would be:configureSAMLBinding("AcmeIdP", "idp", "httpredirect", ssoResponseBinding="httppost") Exit the WLST environment:exit() Test In this test, OIF/SP is integrated with a remote SAML 2.0 IdP Partner, with the OOTB configuration which requests HTTP-POST from the IdP to send the SSO Assertion. Based on this setup, when OIF/SP starts a Federation SSO flow, the following SAML 2.0 AuthnRequest would be generated: <samlp:AuthnRequest ProtocolBinding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST" ID="id-E4BOT7lwbYK56lO57dBaqGUFq01WJSjAHiSR60Q4" Version="2.0" IssueInstant="2014-04-01T21:39:14Z" Destination="https://acme.com/saml20/sso">   <saml:Issuer Format="urn:oasis:names:tc:SAML:2.0:nameid-format:entity">https://sp.com/oam/fed</saml:Issuer>   <samlp:NameIDPolicy AllowCreate="true"/></samlp:AuthnRequest> In the next article, I will cover the various crypto configuration properties in OIF that are used to affect the Federation SSO exchanges.Cheers,Damien Carru

    Read the article

  • Async CTP (C# 5): How to make WCF work with Async CTP

    - by javarg
    If you have recently downloaded the new Async CTP you will notice that WCF uses the Async Pattern and the Event-based Async Pattern in order to expose asynchronous operations. In order to make your service compatible with the new Async/Await pattern, try using an extension method similar to the following:

        public static class ServiceExtensions
        {
            public static Task<DateTime> GetDateTimeTaskAsync(this Service1Client client)
            {
                return Task.Factory.FromAsync<DateTime>(
                    client.BeginGetDateTime(null, null),
                    ar => client.EndGetDateTime(ar));
            }
        }

    The previous code snippet adds an extension method to the GetDateTime method of the Service1Client WCF proxy. Then use it like this (remember to bring the extension method's namespace into scope in order to use it):

        var client = new Service1Client();
        var dt = await client.GetDateTimeTaskAsync();

    Replace the proxy's type and operation name with the one you want to await.

    Read the article

  • Improving WIF&rsquo;s Claims-based Authorization - Part 3 (Usage)

    - by Your DisplayName here!
    In the previous posts I showed off some of the additions I made to WIF's authorization infrastructure. I now want to show some samples of how I actually use these extensions. The following code snippets are from Thinktecture.IdentityServer on Codeplex. The following shows the MVC attribute on the WS-Federation controller:

        [ClaimsAuthorize(Constants.Actions.Issue, Constants.Resources.WSFederation)]
        public class WSFederationController : Controller

    or...

        [ClaimsAuthorize(Constants.Actions.Administration, Constants.Resources.RelyingParty)]
        public class RelyingPartiesAdminController : Controller

    In other places I used the imperative approach (e.g. the WRAP endpoint):

        if (!ClaimsAuthorize.CheckAccess(principal, Constants.Actions.Issue, Constants.Resources.WRAP))
        {
            Tracing.Error("User not authorized");
            return new UnauthorizedResult("WRAP", true);
        }

    For the WCF WS-Trust endpoints I decided to use the per-request approach, since the SOAP actions are well defined here. The corresponding authorization manager roughly looks like this:

        public class AuthorizationManager : ClaimsAuthorizationManager
        {
            public override bool CheckAccess(AuthorizationContext context)
            {
                var action = context.Action.First();
                var id = context.Principal.Identities.First();

                // if application authorization request
                if (action.ClaimType.Equals(ClaimsAuthorize.ActionType))
                {
                    return AuthorizeCore(action, context.Resource, context.Principal.Identity as IClaimsIdentity);
                }

                // if ws-trust issue request
                if (action.Value.Equals(WSTrust13Constants.Actions.Issue))
                {
                    return AuthorizeTokenIssuance(new Collection<Claim> { new Claim(ClaimsAuthorize.ResourceType, Constants.Resources.WSTrust) }, id);
                }

                return base.CheckAccess(context);
            }
        }

    You see that it is really easy now to distinguish between per-request and application authorization, which makes the overall design much easier. HTH

    Read the article

  • RIDC Accelerator for Portal

    - by Stefan Krantz
    What is RIDC?Remote IntraDoc Client is a Java enabled API that leverages simple transportation protocols like Socket, HTTP and JAX/WS to execute content service operations in WebCenter Content Server. Each operation by design in the Content Server will execute stateless and return a complete result of the request. Each request object simply specifies the in a Map format (key and value pairs) what service to call and what parameters settings to apply. The result responded with will be built on the same Map format (key and value pairs). The possibilities with RIDC is endless since you can consume any available service (even custom made ones), RIDC can be executed from any Java SE application that has any WebCenter Content Services needs. WebCenter Portal and the example Accelerator RIDC adapter frameworkWebCenter Portal currently integrates and leverages WebCenter Content Services to enable available use cases in the portal today, like Content Presenter and Doc Lib. However the current use cases only covers few of the scenarios that the Content Server has to offer, in addition to the existing use cases it is not rare that the customer requirements requires additional steps and functionality that is provided by WebCenter Content but not part of the use cases from the WebCenter Portal.The good news to this is RIDC, the second good news is that WebCenter Portal already leverages the RIDC and has a connection management framework in place. The million dollar question here is how can I leverage this infrastructure for my custom use cases. Oracle A-Team has during its interactions produced a accelerator adapter framework that will reuse and leverage the existing connections provisioned in the webcenter portal application (works for WebCenter Spaces as well), as well as a very comprehensive design patter to minimize the work involved when exposing functionality. Let me introduce the RIDCCommon framework for accelerating WebCenter Content consumption from WebCenter Portal including Spaces. How do I get started?Through a few easy steps you will be on your way, Extract the zip file RIDCCommon.zip to the WebCenter Portal Application file structure (PortalApp) Open you Portal Application in JDeveloper (PS4/PS5) select to open the project in your application - this will add the project as a member of the application Update the Portal project dependencies to include the new RIDCCommon project Make sure that you WebCenter Content Server connection is marked as primary (a checkbox at the top of the connection properties form) You should by this stage have a similar structure in your JDeveloper Application Project Portal Project PortalWebAssets Project RIDCCommon Since the API is coming with some example operations that has already been exposed as DataControl actions, if you open Data Controls accordion you should see following: How do I implement my own operation? 
Create a new Java Class in for example com.oracle.ateam.portal.ridc.operation call it (GetDocInfoOperation) Extend the abstract class com.oracle.ateam.portal.ridc.operation.RIDCAbstractOperation and implement the interface com.oracle.ateam.portal.ridc.operation.IRIDCOperation The only method you actually are required to implement is execute(RIDCManager, IdcClient, IdcContext) The best practice to set object references for the operation is through the Constructor, example below public GetDocInfoOperation(String dDocName)By leveraging the constructor you can easily force the implementing class to pass right information, you can also overload the Constructor with more or less parameters as required Implement the execute method, the work you supposed to execute here is creating a new request binder and retrieve a response binder with the information in the request binder.In this case the dDocName for which we want the DocInfo Secondly you have to process the response binder by extracting the information you need from the request and restore this information in a simple POJO Java BeanIn the example below we do this in private void processResult(DataBinder responseData) - the new SearchDataObject is a Member of the GetDocInfoOperation so we can return this from a access method. Since the RIDCCommon API leverage template pattern for the operations you are now required to add a method that will enable access to the result after the execution of the operationIn the example below we added the method public SearchDataObject getDataObject() - this method returns the pre processed SearchDataObject from the execute method  This is it, as you can see on the code below you do not need more than 32 lines of very simple code 1: public class GetDocInfoOperation extends RIDCAbstractOperation implements IRIDCOperation { 2: private static final String DOC_INFO_BY_NAME = "DOC_INFO_BY_NAME"; 3: private String dDocName = null; 4: private SearchDataObject sdo = null; 5: 6: public GetDocInfoOperation(String dDocName) { 7: super(); 8: this.dDocName = dDocName; 9: } 10:   11: public boolean execute(RIDCManager manager, IdcClient client, 12: IdcContext userContext) throws Exception { 13: DataBinder dataBinder = createNewRequestBinder(DOC_INFO_BY_NAME); 14: dataBinder.putLocal(DocumentAttributeDef.NAME.getName(), dDocName); 15: 16: DataBinder responseData = getResponseBinder(dataBinder); 17: processResult(responseData); 18: return true; 19: } 20: 21: private void processResult(DataBinder responseData) { 22: DataResultSet rs = responseData.getResultSet("DOC_INFO"); 23: for(DataObject dobj : rs.getRows()) { 24: this.sdo = new SearchDataObject(dobj); 25: } 26: super.setMessage(responseData.getLocal(ATTR_MESSAGE)); 27: } 28: 29: public SearchDataObject getDataObject() { 30: return this.sdo; 31: } 32: } How do I execute my operation? 
In the previous section we described how to create a operation, so by now you should be ready to execute the operation Step one either add a method to the class  com.oracle.ateam.portal.datacontrol.ContentServicesDC or a class of your own choiceRemember the RIDCManager is a very light object and can be created where needed Create a method signature look like this public SearchDataObject getDocInfo(String dDocName) throws Exception In the method body - create a new instance of GetDocInfoOperation and meet the constructor requirements by passing the dDocNameGetDocInfoOperation docInfo = new GetDocInfoOperation(dDocName) Execute the operation via the RIDCManager instance rMgr.executeOperation(docInfo) Return the result by accessing it from the executed operationreturn docInfo.getDataObject() 1: private RIDCManager rMgr = null; 2: private String lastOperationMessage = null; 3:   4: public ContentServicesDC() { 5: super(); 6: this.rMgr = new RIDCManager(); 7: } 8: .... 9: public SearchDataObject getDocInfo(String dDocName) throws Exception { 10: GetDocInfoOperation docInfo = new GetDocInfoOperation(dDocName); 11: boolean boolVal = rMgr.executeOperation(docInfo); 12: lastOperationMessage = docInfo.getMessage(); 13: return docInfo.getDataObject(); 14: }   Get the binaries! The enclosed code in a example that can be used as a reference on how to consume and leverage similar use cases, user has to guarantee appropriate quality and support.  Download link: https://blogs.oracle.com/ATEAM_WEBCENTER/resource/stefan.krantz/RIDCCommon.zip RIDC API Referencehttp://docs.oracle.com/cd/E23943_01/apirefs.1111/e17274/toc.htm

    Read the article

  • Problem when using ajax for refresh captcha with refresh button [closed]

    - by jowan
    But it doesn't work: with METHOD 1 my image just vanishes, and when I try METHOD 2 (which I thought would work) I just get the raw code of the image displayed instead of a new captcha image. I am stuck and confused about which method will actually refresh my own captcha. Is something wrong in my code, or can my method not be used to refresh a captcha? Could anyone tell me exactly how to refresh the captcha? Thanks in advance. jQuery code:

        $('.refresh_captcha').click( function(){
            $.ajax({
                type: 'POST',
                url: 'captcha_mk.php',
                success: function(data){
                    //$('img').attr('src', data);        // METHOD 1 (I tried it and my image is lost)
                    $('div').html('<img src=' + data);   // METHOD 2 (displays the code of the image, not the captcha image)
                }
            });
        });

    Read the article

  • Understanding the static keyword

    - by user985482
    I have some experience developing with Java, JavaScript and PHP. I am reading Microsoft Visual C# 2010 Step by Step, which I feel is a very good introduction to the C# language. I seem to be having problems understanding the static keyword. From what I understand so far, if a class is declared static, all of its methods and variables have to be static. The Main method is always a static method, so in the class where Main exists, all variables and methods you want to call from Main are declared static. Also, I have noticed that in order to call a static method from another class you do not need to create an object of that class; you can use the class name. What are the advantages of declaring static variables and methods? When should I declare static variables and methods?
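
    To make the behaviour described above concrete, here is a minimal C# sketch; the Counter and Program classes and their members are made-up names for illustration only:

        public class Counter
        {
            // One copy shared by every Counter, accessed via the class name.
            public static int TotalCreated;

            // One copy per object, accessed via an instance reference.
            public int Value;

            public Counter()
            {
                TotalCreated++;              // static member: no instance needed
            }

            public void Increment()          // instance method: needs an object
            {
                Value++;
            }

            public static void ResetTotal()  // static method: called on the class itself
            {
                TotalCreated = 0;
            }
        }

        public static class Program
        {
            public static void Main()        // Main is static: it runs before any object exists
            {
                var c = new Counter();
                c.Increment();               // instance call
                Counter.ResetTotal();        // static call, no object required
                System.Console.WriteLine(Counter.TotalCreated);
            }
        }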

    Read the article

  • Saving game data to server [on hold]

    - by Eugene Lim
    What's the best method to save the player's data to the server?

    Method to store the game saves - which one of the following methods should I use?

    - Using a database (e.g. MySQL) to store the game data as blobs?
    - Using the server hard disk to store the saved game data as binary data files?

    Method to send saved game data to the server - which method should I use?

    - socket.IO
    - a WebSocket
    - a web-based scripting language to receive the game data as binary? For example, a PHP script to handle binary data and save it to a file.

    Meta-data: I read that some games store saved-game meta-data in database structures. What kind of meta-data is useful to store?

    Read the article

  • How to create a PeopleCode Application Package/Application Class using PeopleTools Tables

    - by Andreea Vaduva
    This article describes how - in PeopleCode (Release PeopleTools 8.50) - to enable a grid without enabling each static column, using a dynamic Application Class. The goal is to disable the following grid with three columns “Effort Date”, ”Effort Amount” and “Charge Back” , when the Check Box “Finished with task” is selected , without referencing each static column; this PeopleCode could be used dynamically with any grid. If the check box “Finished with task” is cleared, the content of the grid columns is editable (and the buttons “+” and “-“ are available): So, you create an Application Package “CLASS_EXTENSIONS” that contains an Application Class “EWK_ROWSET”. This Application Class is defined with Class extends “ Rowset” and you add two news properties “Enabled” and “Visible”: After creating this Application Class, you use it in two PeopleCode Events : Rowinit and FieldChange : This code is very ‘simple’, you write only one command : ” &ERS2.Enabled = False” → and the entire grid is “Enabled”… and you can use this code with any Grid! So, the complete PeopleCode to create the Application Package is (with explanation in [….]) : ******Package CLASS_EXTENSIONS : [Name of the Package: CLASS_EXTENSIONS] --Beginning of the declaration part------------------------------------------------------------------------------ class EWK_ROWSET extends Rowset; [Definition Class EWK_ROWSET as a subclass of Class Rowset] method EWK_ROWSET(&RS As Rowset); [Constructor is the Method with the same name of the Class] property boolean Visible get set; property boolean Enabled get set; [Definition of the property “Enabled” in read/write] private [Before the word “private”, all the declarations are publics] method SetDisplay(&DisplaySW As boolean, &PropName As string, &ChildSW As boolean); instance boolean &EnSW; instance boolean &VisSW; instance Rowset &NextChildRS; instance Row &NextRow; instance Record &NextRec; instance Field &NextFld; instance integer &RowCnt, &RecCnt, &FldCnt, &ChildRSCnt; instance integer &i, &j, &k; instance CLASS_EXTENSIONS:EWK_ROWSET &ERSChild; [For recursion] Constant &VisibleProperty = "VISIBLE"; Constant &EnabledProperty = "ENABLED"; end-class; --End of the declaration part------------------------------------------------------------------------------ method EWK_ROWSET [The Constructor] /+ &RS as Rowset +/ %Super = &RS; end-method; get Enabled /+ Returns Boolean +/; Return &EnSW; end-get; set Enabled /+ &NewValue as Boolean +/; &EnSW = &NewValue; %This.InsertEnabled=&EnSW; %This.DeleteEnabled=&EnSW; %This.SetDisplay(&EnSW, &EnabledProperty, False); [This method is called when you set this property] end-set; get Visible /+ Returns Boolean +/; Return &VisSW; end-get; set Visible /+ &NewValue as Boolean +/; &VisSW = &NewValue; %This.SetDisplay(&VisSW, &VisibleProperty, False); end-set; method SetDisplay [The most important PeopleCode Method] /+ &DisplaySW as Boolean, +/ /+ &PropName as String, +/ /+ &ChildSW as Boolean +/ [Not used in our example] &RowCnt = %This.ActiveRowCount; &NextRow = %This.GetRow(1); [To know the structure of a line ] &RecCnt = &NextRow.RecordCount; For &i = 1 To &RowCnt [Loop for each Line] &NextRow = %This.GetRow(&i); For &j = 1 To &RecCnt [Loop for each Record] &NextRec = &NextRow.GetRecord(&j); &FldCnt = &NextRec.FieldCount; For &k = 1 To &FldCnt [Loop for each Field/Record] &NextFld = &NextRec.GetField(&k); Evaluate Upper(&PropName) When = &VisibleProperty &NextFld.Visible = &DisplaySW; Break; When = &EnabledProperty; &NextFld.Enabled = &DisplaySW; [Enable 
each Field/Record] Break; When-Other Error "Invalid display property; Must be either VISIBLE or ENABLED" End-Evaluate; End-For; End-For; If &ChildSW = True Then [If recursion] &ChildRSCnt = &NextRow.ChildCount; For &j = 1 To &ChildRSCnt [Loop for each Rowset child] &NextChildRS = &NextRow.GetRowset(&j); &ERSChild = create CLASS_EXTENSIONS:EWK_ROWSET(&NextChildRS); &ERSChild.SetDisplay(&DisplaySW, &PropName, &ChildSW); [For each Rowset child, call Method SetDisplay with the same parameters used with the Rowset parent] End-For; End-If; End-For; end-method; ******End of the Package CLASS_EXTENSIONS:[Name of the Package: CLASS_EXTENSIONS] About the Author: Pascal Thaler joined Oracle University in 2005 where he is a Senior Instructor. His area of expertise is Oracle Peoplesoft Technology and he delivers the following courses: For Developers: PeopleTools Overview, PeopleTools I &II, Batch Application Engine, Language Oriented Object PeopleCode, Administration Security For Administrators : Server Administration & Installation, Database Upgrade & Data Management Tools For Interface Users: Integration Broker (Web Service)

    Read the article

  • Drawing graphics in Java game

    - by wolf
    I am quite new to game development, so here is a question (maybe a stupid one): in my sidescroller I have a bunch of different graphics objects that I need to draw (player, background tiles, creatures, projectiles, etc.). Most tutorials I've read so far show that each object has its own draw method, which is then called from some other method. What if I had one method that does all the drawing? Let's say I keep all my objects in an array or queue (or multiple arrays) and then go through each of them, get an image and draw it. So basically, would it be better (and why) to have each object have its own draw method, or one method that does all the drawing? Or does it matter at all? I feel like the second option is more comfortable, because then all the stuff to do with drawing would be in one place...

    Read the article

  • Hierarchy flattening of interfaces in WCF

    - by nmarun
    Alright, so say I have my service contract interface as below:

        [ServiceContract]
        public interface ILearnWcfService
        {
            [OperationContract(Name = "AddInt")]
            int Add(int arg1, int arg2);
        }

    Say I decided to add another interface with a similar add "feature":

        [ServiceContract]
        public interface ILearnWcfServiceExtend : ILearnWcfService
        {
            [OperationContract(Name = "AddDouble")]
            double Add(double arg1, double arg2);
        }

    My class implementing ILearnWcfServiceExtend ends up as:

        public class LearnWcfService : ILearnWcfServiceExtend
        {
            public int Add(int arg1, int arg2)
            {
                return arg1 + arg2;
            }

            public double Add(double arg1, double arg2)
            {
                return arg1 + arg2;
            }
        }

    Now when I consume this service and look at the proxy that gets generated, here's what I see:

        public interface ILearnWcfServiceExtend
        {
            [System.ServiceModel.OperationContractAttribute(Action="http://tempuri.org/ILearnWcfService/AddInt", ReplyAction="http://tempuri.org/ILearnWcfService/AddIntResponse")]
            int AddInt(int arg1, int arg2);

            [System.ServiceModel.OperationContractAttribute(Action="http://tempuri.org/ILearnWcfServiceExtend/AddDouble", ReplyAction="http://tempuri.org/ILearnWcfServiceExtend/AddDoubleResponse")]
            double AddDouble(double arg1, double arg2);
        }

    Only ILearnWcfServiceExtend gets 'listed' in the proxy class, and not the (base interface) ILearnWcfService interface. But then, to uniquely identify the operations that the service exposes, the Action and ReplyAction properties are set. So in the above example, the AddInt operation has the Action property set to 'http://tempuri.org/ILearnWcfService/AddInt' and the AddDouble operation has the Action property of 'http://tempuri.org/ILearnWcfServiceExtend/AddDouble'. Similarly, the ReplyAction properties are set corresponding to the namespace each operation is declared in. 'http://tempuri.org' is chosen as the default namespace, since the Namespace property on the ServiceContract is not defined. The other thing is the service contract itself: the Add() method. You'll see that in both interfaces the method names are the same. As you might know, this is not allowed in WSDL-based environments, even though the arguments are of different types. This is allowed only if the Name attribute of the OperationContract is set (as done above). This causes a change in the name of the service operation itself in the proxy class. See that their names are changed to AddInt / AddDouble respectively. Lesson learned: the interface hierarchy gets 'flattened' when the WCF service proxy class gets generated.

    Read the article

  • How to store Role Based Access rights in web application?

    - by JonH
    Currently working on a web-based CRM-type system that deals with various modules such as Companies, Contacts, Projects, Sub Projects, etc. A typical CRM-type system (ASP.NET Web Forms, C#, SQL Server backend). We plan to implement role-based security so that basically a user can have one or more roles. Roles would be broken down first by the module type, such as:

    - Company
    - Contact

    And then by the actions for that module. For instance, each module would end up with a table such as this (Role1 example):

        Module   | Create | Edit       | Delete | View
        Company  | Yes    | Owner Only | No     | Yes
        Contact  | Yes    | Yes        | Yes    | Yes

    In the above case Role1 has two module types (Company and Contact). For Company, the person assigned to this role can create companies, can view companies, can only edit records he/she created, and cannot delete. For this same role, for the module Contact this user can create contacts, edit contacts, delete contacts, and view contacts (full rights basically). I am wondering, is it best upon coming into the system to session the user's roles with something like a:

        List<Role> roles;

    where the Role class would have some sort of List<Module> modules; (which can contain Company, Contact, etc.)? Something to the effect of:

        class Role {
            string name;
            string desc;
            List<Module> modules;
        }

    And the module action class would have a set of actions (Create, Edit, Delete, etc.) for each module:

        class ModuleActions {
            List<Action> actions;
        }

    And the action has a value of whether the user can perform the right:

        class Action {
            string right;
        }

    Just a rough idea; I know the action could be an enum and the ModuleAction can probably be eliminated with a List<x, y>. My main question is: what would be the best way to store this information in this type of application? Should I store it in the user Session state (I have a session class where I manage things related to the user)? I generally load this during the initial loading of the application (global.asax); I can simply tack onto this session. Or should this be loaded at the page load event of each module (page load of company, etc.)? I eventually need to be able to hide / unhide various buttons / divs based on the user's role, and that is what got me thinking to load this via session. Any examples or points would be great.
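
    As an illustration only, here is one minimal C# sketch of a rights check over the Role/Module/Action shape described above; the RightLevel enum, the ModuleRights class and the HasRight helper are hypothetical names, not part of the original design:

        using System.Collections.Generic;
        using System.Linq;

        // Hypothetical sketch of the Role -> Module -> Action model from the question.
        public enum RightLevel { No, OwnerOnly, Yes }

        public class ModuleRights
        {
            public string ModuleName;                      // e.g. "Company", "Contact"
            public Dictionary<string, RightLevel> Actions  // "Create", "Edit", "Delete", "View"
                = new Dictionary<string, RightLevel>();
        }

        public class Role
        {
            public string Name;
            public List<ModuleRights> Modules = new List<ModuleRights>();
        }

        public static class AccessCheck
        {
            // Returns the strongest right any of the user's roles grants for a module/action pair.
            public static RightLevel HasRight(IEnumerable<Role> roles, string module, string action)
            {
                var levels = roles
                    .SelectMany(r => r.Modules)
                    .Where(m => m.ModuleName == module)
                    .Select(m => m.Actions.TryGetValue(action, out var level) ? level : RightLevel.No)
                    .ToList();

                if (levels.Contains(RightLevel.Yes)) return RightLevel.Yes;
                if (levels.Contains(RightLevel.OwnerOnly)) return RightLevel.OwnerOnly;
                return RightLevel.No;
            }
        }

    Whether the List<Role> built at login is kept in Session state or re-read on each page load is exactly the trade-off the question raises; the lookup itself is the same either way.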

    Read the article

  • SQL SERVER – Recover the Accidentally Renamed Table

    - by pinaldave
    I have no answer to following question. I saw a desperate email marked as urgent delivered in my mailbox. “I accidentally renamed table in my SSMS. I was scrolling very fast and I made mistakes. It was either because I double clicked or clicked on F2 (shortcut key for renaming). However, I have made the mistake and now I have no idea how to fix this. I am in big trouble. Help me get my original tablename.” I have seen many similar scenarios in my life and they give me a very good opportunity to preach wisdom but when the house is burning, we cannot talk about how we should have conserved the water earlier. The goal at that point is to put off the fire as fast as we can. I decided to answer this email with my best knowledge. If you have renamed the table, I think you pretty much is out of luck. Here are few things which you can do which can give you idea about what your tablename can be if you are lucky. Method 1: (Not Recommended but try your luck) Check your naming convention of your system. I have often seen that many organizations name their index as IX_TableName_Colms or name their keys as FK_TableName1_TableName2_Cols. If your organization is following the same you can get the name from your table, you may refer your keys. Again, note that this is quite possible that your tablename was already renamed and your keys were not updated. This can easily lead you to select incorrect name. I think follow this if you are confident or move to the next method. Method 2: (Not Recommended but try your luck) This method is also based on your orgs naming convention. If you use the name of the table in any columnname (some organizations use tablename in their incremental identity column name), you can get that name from there. Method 3: (Not Recommended but try your luck) If you know where your table was used in your stored procedures, you can script your stored procedure and find the name of the table back. Method 4: (Try your luck) All the best organizations first create a data model of the schema and there is good chance that this table is used there, you should take your chances and refer original document. If your organization is good at managing docs or source code, you will get the name of the table back for sure. Method 5: (It WORKS but try on a development server) There is no sure way to get you the name of the table which you accidentally renamed however, there is one way which will work for sure. You need to take your latest full backup and restore it on your development server (remember not on production or where you have renamed this column). Now restore latest differential file of the full backup. Now restore all the log files one by one making sure that you are restoring before the point of time of you renamed the tablename. Now go to explore and this will give you the name of the table which you have renamed. If you are confident that the same table existed with the same name when the last full backup was made, you do not have to go to all the steps. You can just get the name of the table directly from last backup’s restore. Read the article about Backup Timeline. Wisdom: How can I miss to preach wisdom when I get the opportunity to do so? Here are a few points to remember. Use a different account to explore production environment. Do not use the same account which have all the rights and permissions all the time. Use the account which has read only permissions if there are no modification required. Use policy based management to prevent changes which are accidental. 
    If there were a policy of valid names, an accidental change of the table name would not be possible unless it was an intentional, deliberate change. Have proper auditing of the system in place. You can use DDL triggers, but be careful with their usage (get it reviewed properly first). (Add your suggestion here.) I guess Method 5 will work all the time (using point-in-time restore). Everything else is a matter of luck, and if your luck is bad you will end up with a further incorrect name. Now go back and read the first line of this blog. Out of the five methods, four are just lucky guesses. Method 5 will work, but again it is a lengthy process if the size of the database is huge or if you do not have a full backup. Did I miss anything obvious? Please leave a comment and I will publish your answer with due credit. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • How to write reusable code in node.js

    - by lortabac
    I am trying to understand how to design node.js applications, but it seems there is something I can't grasp about asynchronous programming. Let's say my application needs to access a database. In a synchronous environment I would implement a data access class with a read() method returning an associative array. In node.js, because code is executed asynchronously, this method can't return a value, so after execution it will have to "do" something as a side effect. It will then contain some code that does something other than just reading data. Let's suppose I want to call this method multiple times, each time with a different success callback. Since the callback is included in the method itself, I can't find a clean way to do this without either duplicating the method or specifying all possible callbacks in a long switch statement. What is the proper way to handle this problem? Am I approaching it the wrong way?

    Read the article

  • Should I add parameters to instance methods that use those instance fields as parameters?

    - by john smith optional
    I have an instance method that uses instance fields in its work. I can leave the method without those parameters, since the fields are available to it, or I can add them to the parameter list, thus making my method more "generic" and not reliant on the class. On the other hand, the extra parameters lengthen the parameter list. Which approach is preferable and why? Edit: at the moment I don't know if my method will be public or private. Edit2: clarification: both the method and the fields are instance level.
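    A hedged sketch of the two options the question describes, using hypothetical names; it is only meant to make the trade-off visible, not to settle it.

        public class InvoiceCalculator
        {
            private decimal _rate;
            private decimal _quantity;

            public InvoiceCalculator(decimal rate, decimal quantity)
            {
                _rate = rate;
                _quantity = quantity;
            }

            // Option 1: rely on the instance fields. Shorter call sites, but the method
            // only works against this object's state.
            public decimal Total()
            {
                return _rate * _quantity;
            }

            // Option 2: take the values as parameters. Reusable and easy to test in
            // isolation, but callers must pass state the object already holds, and the
            // method no longer needs the instance at all (it could be static).
            public decimal Total(decimal rate, decimal quantity)
            {
                return rate * quantity;
            }
        }

    If you find yourself preferring the second form, that is often a hint the logic belongs in a static helper or a separate class rather than as an instance method.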

    Read the article

  • Localization in php, best practice or approach?

    - by sree
    I am localizing my PHP application and have a dilemma about choosing the best method to accomplish it. Method 1: Currently I am storing the words to be localized in an array in a PHP file <?php $values = array ( 'welcome' => 'bienvenida' ); ?> and I am using a function to extract and return each word as required. Method 2: Should I instead use a file that stores each string in its own variable? <?php $welcome = 'bienvenida'; ?> My question is: which is the better method in terms of speed and development effort, and why? Edit: I would like to know which of the two methods responds faster and why that would be. Also, any improvement on the above code would be appreciated!
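    As a rough sketch of the lookup function mentioned in Method 1 (the file layout, function name and fallback behaviour are assumptions, not the asker's actual code): the dictionary file is included once per request and then reused, which keeps per-word lookups cheap.

        <?php
        // lang/es.php is assumed to contain:  return array('welcome' => 'bienvenida', ...);
        function translate($key, $lang = 'es')
        {
            static $loaded = array();                 // one dictionary per language, loaded once per request
            if (!isset($loaded[$lang])) {
                $loaded[$lang] = include __DIR__ . '/lang/' . $lang . '.php';
            }
            return isset($loaded[$lang][$key]) ? $loaded[$lang][$key] : $key;  // fall back to the key
        }

        echo translate('welcome');                    // bienvenida
        ?>

    If the project grows, PHP's gettext extension or a framework's i18n component covers the same ground with better tooling for translators.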

    Read the article

  • Preseed Partman: multiple partitions on one disk /tmp /data /usr swap

    - by Moritz
    trying to get preseeding on 12.04 64bit to work with what should be a basic setup: /dev/sda - the only drive being used / - rootfs - 100GB /boot - 1GB /tmp - 10GB /data - should take all available space swap - 10GB - d-i partman-auto/expert_recipe string \ boot-root :: \ 1000 50 1000 ext4 \ $primary{ } $bootable{ } \ method{ format } format{ } \ use_filesystem{ } filesystem{ ext4 } \ mountpoint{ /boot } \ . \ 500 1000 10000 ext4 \ method{ format } format{ } \ use_filesystem{ } filesystem{ ext4 } \ mountpoint{ /tmp } \ . \ 500 5000 100000000 ext4 \ method{ format } format{ } \ use_filesystem{ } filesystem{ ext4 } \ mountpoint{ /data } \ . \ 64 2000 10000 linux-swap \ method{ swap } format{ } \ . \ 500 3000 100000 ext4 \ method{ format } format{ } \ use_filesystem{ } filesystem{ ext4 } \ mountpoint{ / } \ . If I only use the code for /boot, swap and / it works. Also I was wondering whether I have to specify some other recipe name than "boot-root", but trying "thisNameIsNotDefinedInPartman" the result was the same. The error message displayed by the Ubuntu installer is always "no root file system is defined". Thanks for your help, Moritz

    Read the article

  • Is 2 lines of push/pop code for each pre-draw-state too many?

    - by Griffin
    I'm trying to simplify vector graphics management in XNA, currently by incorporating state preservation. 2X lines of push/pop code for X states feels like too many, and it just feels wrong to have 2 lines of code that look identical except for one being push() and the other being pop(). The goal is to eradicate this repetitiveness, and I hoped to do so by creating an interface through which a client can supply refs to the class/struct state he wants restored after the rendering calls. Also note that many beginner-programmers will be using this, so forcing lambda expressions or other advanced C# features to be used in client code is not a good idea. I attempted to accomplish my goal by using Daniel Earwicker's Ptr class: public class Ptr<T> { Func<T> getter; Action<T> setter; public Ptr(Func<T> g, Action<T> s) { getter = g; setter = s; } public T Deref { get { return getter(); } set { setter(value); } } } an extension method: //doesn't work for structs since this is just syntactic sugar public static Ptr<T> GetPtr <T> (this T obj) { return new Ptr<T>( ()=> obj, v=> obj=v ); } and a Push function: //returns a Pop Action for later calling public static Action Push <T> (ref T structure) where T: struct { T pushedValue = structure; //copies the struct data Ptr<T> p = structure.GetPtr(); return new Action( ()=> {p.Deref = pushedValue;} ); } However, this doesn't work, as noted in the code comments. How might I accomplish my goal? Example of code to be refactored: protected override void RenderLocally (GraphicsDevice device) { if (!(bool)isCompiled) {Compile();} //TODO: make sure state settings don't implicitly delete any buffers/resources RasterizerState oldRasterState = device.RasterizerState; DepthFormat oldFormat = device.PresentationParameters.DepthStencilFormat; DepthStencilState oldBufferState = device.DepthStencilState; { //Rendering code } device.RasterizerState = oldRasterState; device.DepthStencilState = oldBufferState; device.PresentationParameters.DepthStencilFormat = oldFormat; }
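    Not the author's eventual solution, just one hedged alternative sketch: capture the device states in a small IDisposable scope so client code writes a single using block instead of matched save/restore pairs, and no lambdas appear in client code. GraphicsDevice, RasterizerState, DepthStencilState and BlendState are the real XNA 4.0 types; DeviceStateScope is a hypothetical helper name.

        using System;
        using Microsoft.Xna.Framework.Graphics;

        // Hypothetical helper: captures the states it cares about on construction
        // and puts them back on Dispose.
        public sealed class DeviceStateScope : IDisposable
        {
            private readonly GraphicsDevice _device;
            private readonly RasterizerState _rasterizer;
            private readonly DepthStencilState _depthStencil;
            private readonly BlendState _blend;

            public DeviceStateScope(GraphicsDevice device)
            {
                _device = device;
                _rasterizer = device.RasterizerState;
                _depthStencil = device.DepthStencilState;
                _blend = device.BlendState;
            }

            public void Dispose()
            {
                _device.RasterizerState = _rasterizer;
                _device.DepthStencilState = _depthStencil;
                _device.BlendState = _blend;
            }
        }

        // Inside a render method, client code becomes one line per scope
        // instead of a push and a pop per state:
        //
        //     using (new DeviceStateScope(device))
        //     {
        //         // rendering code that changes device state freely
        //     }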

    Read the article

  • Are there any concerns with using a static read-only unit of work so that it behaves like a cache?

    - by Rowan Freeman
    Related question: How do I cache data that rarely changes? I'm making an ASP.NET MVC4 application. On every request the security details about the user will need to be checked against the area/controller/action that they are accessing to see if they are allowed to view it. The security information is stored in the database. For example: User Permission UserPermission Action ActionPermission A "Permission" is a token that is applied to an MVC action to indicate that the token is required in order to access the action. Once a user is given the permission (via the UserPermission table) then they have the token and can therefore access the action. I've been looking into how to cache this data (since it rarely changes) so that I'm only querying in-memory data and not hitting a database (which is a considerable performance hit at the moment). I've tried storing things in lists and using a caching provider, but I either run into problems or performance doesn't improve. One problem that I constantly run into is that I'm using lazy loading and dynamic proxies with EntityFramework. This means that even if I ToList() everything and store them somewhere static, the relationships are never populated. For example, User.Permissions is an ICollection but it's always null. I don't want to Include() everything because I'm trying to keep things simple and generic (and easy to modify). One thing I know is that an EntityFramework DbContext is a unit of work that acts with 1st-level caching. That is, for the duration of the unit of work, everything that is accessed is cached in memory. I want to create a read-only DbContext that will exist indefinitely and will only be used to read permission data. Upon testing this it worked perfectly; my page load times went from 200ms+ to 20ms. I can easily force the data to refresh at certain intervals or simply leave it to refresh when the application pool is recycled. Basically it will behave like a cache. Note that the rest of the application will interact with other contexts that exist per request as normal. Is there any disadvantage to this approach? Could I be doing something different?
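    A hedged alternative to keeping one DbContext alive indefinitely: use a short-lived context to materialize the permission data into plain collections (one Include(), proxies and lazy loading switched off), cache the result in a Lazy<T>, and rebuild it on whatever schedule suits you. The entity shapes below are simplified guesses at the question's User/Permission schema, and PermissionCache is a hypothetical class name.

        using System;
        using System.Collections.Generic;
        using System.Data.Entity;   // EF: DbContext, DbSet, lambda Include()
        using System.Linq;

        public class User
        {
            public int Id { get; set; }
            public string Name { get; set; }
            public virtual ICollection<Permission> Permissions { get; set; }
        }

        public class Permission
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        public class SecurityDbContext : DbContext
        {
            public DbSet<User> Users { get; set; }
            public DbSet<Permission> Permissions { get; set; }
        }

        public static class PermissionCache
        {
            // Rebuilt lazily; Refresh() swaps in a new Lazy so the next read reloads from the database.
            private static Lazy<Dictionary<int, HashSet<string>>> _cache =
                new Lazy<Dictionary<int, HashSet<string>>>(Load, isThreadSafe: true);

            public static bool UserHasPermission(int userId, string permission)
            {
                HashSet<string> permissions;
                return _cache.Value.TryGetValue(userId, out permissions)
                       && permissions.Contains(permission);
            }

            public static void Refresh()
            {
                _cache = new Lazy<Dictionary<int, HashSet<string>>>(Load, isThreadSafe: true);
            }

            private static Dictionary<int, HashSet<string>> Load()
            {
                // Short-lived context: proxies and lazy loading off, everything read up front.
                using (var db = new SecurityDbContext())
                {
                    db.Configuration.ProxyCreationEnabled = false;
                    db.Configuration.LazyLoadingEnabled = false;

                    return db.Users
                             .Include(u => u.Permissions)
                             .ToDictionary(
                                 u => u.Id,
                                 u => new HashSet<string>(u.Permissions.Select(p => p.Name)));
                }
            }
        }

    The long-lived read-only context the question describes can also work; its main risks are that a DbContext is not thread-safe by default and that it never sees rows changed through other contexts until it is rebuilt, which is exactly the staleness trade-off any cache makes.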

    Read the article

  • Is static universally "evil" for unit testing and if so why does resharper recommend it?

    - by Vaccano
    I have found that there are only 3 ways to unit test (mock/stub) dependencies that are static in C#.NET: Moles TypeMock JustMock Given that two of these are not free and one has not hit release 1.0, mocking static stuff is not too easy. Does that make static methods and such "evil" (in the unit testing sense)? And if so, why does resharper want me to make anything that can be static, static? (Assuming resharper is not also "evil".) Clarification: I am talking about the scenario when you want to unit test a method and that method calls a static method in a different unit/class. By most definitions of unit testing, if you just let the method under test call the static method in the other unit/class then you are not unit testing, you are integration testing. (Useful, but not a unit test.)
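    The usual no-cost workaround, sketched with hypothetical names: keep the static method, but give the class under test a small instance interface whose production implementation simply forwards to the static call. Tests then substitute a hand-rolled fake (or any ordinary mocking library) without needing Moles, TypeMock or JustMock.

        // The static member we don't control (hypothetical stand-in).
        public static class TaxTables
        {
            public static decimal RateFor(string region) { return region == "EU" ? 0.20m : 0.10m; }
        }

        // Seam: an instance interface the consuming code depends on.
        public interface ITaxRates
        {
            decimal RateFor(string region);
        }

        // Production implementation just forwards to the static call.
        public class StaticTaxRates : ITaxRates
        {
            public decimal RateFor(string region) { return TaxTables.RateFor(region); }
        }

        public class InvoiceService
        {
            private readonly ITaxRates _rates;
            public InvoiceService(ITaxRates rates) { _rates = rates; }

            public decimal GrossTotal(decimal net, string region)
            {
                return net * (1 + _rates.RateFor(region));
            }
        }

        // In a unit test, a hand-rolled fake replaces the static dependency:
        internal class FixedRates : ITaxRates
        {
            public decimal RateFor(string region) { return 0.25m; }
        }

    Note that ReSharper's hint is only about the method not touching instance state; whether the member should also be a seam that other code depends on is a separate design decision.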

    Read the article

  • Is passing the Model around in this way considered bad practice?

    - by Theomax
    If I have a view called, for example, ViewDetails that displays user information in labels and has a Model called ViewDetailsModel, and if I want to allow the user to click a button to edit some of these details, is it considered bad practice if I pass the entire Model in the markup to a controller method which then assigns the values for another model, using the values stored in the Model that was passed in as a parameter to that action method? If so, should there instead be a service method that gets the data required for the edit view? For example: In the ViewDetails view, the user clicks the edit button, which calls an action method in the controller (and passes in the Model object). The action method then uses the data in the Model object to populate another model which will be used for the EditDetails view that will be returned.
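    A hedged sketch of the service-based alternative the question raises: the ViewDetails page posts only the user's id, and the edit action rebuilds its own model through a service instead of round-tripping the whole ViewDetailsModel through the markup. All names below (EditDetailsModel, IUserService, DetailsController) are hypothetical.

        using System.Web.Mvc;

        public class EditDetailsModel
        {
            public int Id { get; set; }
            public string Name { get; set; }
            public string Email { get; set; }
        }

        public interface IUserService
        {
            EditDetailsModel GetEditDetails(int id);   // service shapes the data for the edit view
        }

        public class DetailsController : Controller
        {
            private readonly IUserService _users;

            public DetailsController(IUserService users) { _users = users; }

            // The ViewDetails page posts just the id instead of the whole ViewDetailsModel.
            [HttpPost]
            public ActionResult Edit(int id)
            {
                EditDetailsModel model = _users.GetEditDetails(id);
                return View("EditDetails", model);
            }
        }

    Posting the entire model is not automatically wrong, but it exposes every field to tampering in the request and can be stale by the time it arrives, so reloading from the id is usually the safer default.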

    Read the article

  • The sign of a true manager is delegation (C# style)

    - by MarkPearl
    Today I thought I would write a bit about delegates in C#. Up until recently I have managed to sidestep any real understanding of what delegates do and why they are useful – I mean, I know roughly what they do and have used them a lot, but I have never really got down and dirty with them and mucked about. Recently, however, with my renewed interest in Silverlight, delegates came up again as a possible solution to a particular problem, and suddenly I found myself opening a bland little console application just to see exactly how far I could take delegates with my limited knowledge. So, let’s first look at the MSDN definition of delegates… A delegate declaration defines a reference type that can be used to encapsulate a method with a specific signature. A delegate instance encapsulates a static or an instance method. Delegates are roughly similar to function pointers in C++; however, delegates are type-safe and secure. Well, don’t you love MSDN for such a useful definition. I must give it credit though… later on it really explains it a bit better by saying “A delegate lets you pass a function as a parameter. The type safety of delegates requires the function you pass as a delegate to have the same signature as the delegate declaration.” A little more reading up on delegates mentions that delegates are similar to interfaces in that they enable the separation of specification and implementation. A delegate declares a single method, while an interface declares a group of methods. So enough reading - let’s look at some code and see a basic example of a delegate… Let’s assume we have a console application with a simple delegate called AdjustValue declared like below… class Program { private delegate int AdjustValue(int val); static void Main(string[] args) { } } In a sense, all we have said is that we will be creating one or more methods that follow the same pattern as AdjustValue – i.e. they will take one input value of type int and return an integer. We could then expand our code to have various methods that match the structure of our delegate AdjustValue (remember the structure is int xxx (int xxx)) class Program { private delegate int AdjustValue(int val); private static int Dbl(int val) { return val * 2; } private static int AlwaysOne(int val) { return 1; } static void Main(string[] args) { } } Above I have expanded my project to have two methods, one called Dbl and the other AlwaysOne. Dbl always returns double the input val and AlwaysOne always returns 1. I could now declare a variable and assign it to be one of those functions, like the following… class Program { private delegate int AdjustValue(int val); private static int Dbl(int val) { return val * 2; } private static int AlwaysOne(int val) { return 1; } static void Main(string[] args) { AdjustValue myDelegate; myDelegate = Dbl; Console.WriteLine(myDelegate(1).ToString()); Console.ReadLine(); } } In this instance I have declared an instance of the AdjustValue delegate called myDelegate; I have then told myDelegate to point to the method Dbl, and then called myDelegate(1). What would the result be? Yes, in this instance it would be exactly the same as me calling the following code… static void Main(string[] args) { Console.WriteLine(Dbl(1).ToString()); Console.ReadLine(); } So why all the extra work for delegates when we could just do what we did above and call the method directly? Well… that separation of specification from implementation comes to mind. So, this all seems pretty simple. 
    Let’s take a slightly more complicated variation of the console application. Assume that my project is the same as the previous one except that my Main method is adjusted as follows… static void Main(string[] args) { AdjustValue myDelegate; myDelegate = Dbl; myDelegate = AlwaysOne; Console.WriteLine(myDelegate(1).ToString()); Console.ReadLine(); } What would happen in this scenario? Quite simply “1” would be written to the console, the reason being that myDelegate was last pointing to the AlwaysOne method before it was called. Make sense? In a way, myDelegate is a variable method that can be swapped and changed when needed. Let’s make the code a little more confusing by using a delegate in the declaration of another delegate as shown below… class Program { private delegate int AdjustValue(InputValue val); private delegate int InputValue(); private static int Dbl(InputValue val) { return val()*2; } private static int GetInputVal() { Console.WriteLine("Enter a whole number : "); return Convert.ToInt32(Console.ReadLine()); } static void Main(string[] args) { AdjustValue myDelegate; myDelegate = Dbl; Console.WriteLine(myDelegate(GetInputVal).ToString()); Console.ReadLine(); } } Now it gets really interesting, because it looks like we have passed a method into a function in the main method by declaring… Console.WriteLine(myDelegate(GetInputVal).ToString()); So, what is the output? Well, try to take a guess at what will happen – then copy the code and see if you got it right. Well, that brings me to the end of this short explanation of delegates. Hopefully it made sense!
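    A footnote worth adding, and not part of the original post: since .NET 3.5 the framework ships the generic Func and Action delegate types, so the hand-rolled AdjustValue and InputValue declarations can be replaced with Func<int, int> and Func<int>. The last example rewritten that way looks roughly like this.

        using System;

        class Program
        {
            // Func<int> stands in for InputValue, Func<Func<int>, int> for AdjustValue.
            private static int Dbl(Func<int> input)
            {
                return input() * 2;
            }

            private static int GetInputVal()
            {
                Console.WriteLine("Enter a whole number : ");
                return Convert.ToInt32(Console.ReadLine());
            }

            static void Main(string[] args)
            {
                Func<Func<int>, int> myDelegate = Dbl;
                Console.WriteLine(myDelegate(GetInputVal).ToString());
                Console.ReadLine();
            }
        }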

    Read the article
