Search Results

Search found 54118 results on 2165 pages for 'default value'.


  • Using Transaction Logging to Recover Post-Archived Essbase data

    - by Keith Rosenthal
    Data recovery is typically performed by restoring data from an archive. Data added or removed since the last archive took place can also be recovered by enabling transaction logging in Essbase. Transaction logging works by writing transactions to a log store. The information in the log store can then be recovered by replaying the log store entries in sequence since the last archive took place. The following information is recorded within a transaction log entry: sequence ID, username, start time, end time, and request type. A request type can be one of the following categories: calculations (including the default calculation as well as both server- and client-side calculations); data loads (including data imports as well as data loaded using a load rule); data clears and outline resets; and locking and sending data from SmartView and the Spreadsheet Add-In (changes from Planning web forms are also tracked, since a lock-and-send operation occurs during this process). You can use the Display Transactions command in the EAS console or the query database MAXL command to view the transaction log entries.
    Enabling Transaction Logging
    Transaction logging can be enabled at the Essbase server, application or database level by adding the TRANSACTIONLOGLOCATION essbase.cfg setting. The following is the TRANSACTIONLOGLOCATION syntax:
    TRANSACTIONLOGLOCATION [appname [dbname]] LOGLOCATION NATIVE ENABLE | DISABLE
    Note that you can have multiple TRANSACTIONLOGLOCATION entries in the essbase.cfg file. For example:
    TRANSACTIONLOGLOCATION Hyperion/trlog NATIVE ENABLE
    TRANSACTIONLOGLOCATION Sample Hyperion/trlog NATIVE DISABLE
    The first statement enables transaction logging for all Essbase applications, and the second statement disables transaction logging for the Sample application. As a result, transaction logging will be enabled for all applications except the Sample application. A location on a physical disk other than the disk where ARBORPATH or the disk volumes reside is recommended, to optimize overall Essbase performance.
    Configuring Transaction Log Replay
    Although transaction log entries are stored based on the LOGLOCATION parameter of the TRANSACTIONLOGLOCATION essbase.cfg setting, copies of data load and rules files are stored in the ARBORPATH/app/appname/dbname/Replay directory to optimize the performance of replaying logged transactions. The default is to archive client data loads, but this configuration setting can be used to archive server data loads (including SQL server data loads) or both client and server data loads. To change the type of data to be archived, add the TRANSACTIONLOGDATALOADARCHIVE configuration setting to the essbase.cfg file. Note that you can have multiple TRANSACTIONLOGDATALOADARCHIVE entries in the essbase.cfg file to adjust settings for individual applications and databases.
    Replaying the Transaction Log and Transaction Log Security Considerations
    To replay the transactions, use either the Replay Transactions command in the EAS console or the alter database MAXL command with the replay transactions grammar. Transactions can be replayed either after a specified log time or using a range of transaction sequence IDs. The default when replaying transactions is to use the security settings of the user who originally performed the transaction. However, if that user no longer exists or that user's username was changed, the replay operation will fail. Instead of using the default security setting, add the REPLAYSECURITYOPTION essbase.cfg setting to use the security settings of the administrator who performs the replay operation. REPLAYSECURITYOPTION 2 will explicitly use the security settings of the administrator performing the replay operation. REPLAYSECURITYOPTION 3 will use the administrator security settings if the original user's security settings cannot be used.
    Removing Transaction Logs and Archived Replay Data Load and Rules Files
    Transaction logs and archived replay data load and rules files are not removed automatically; they can only be removed manually. Since these files can consume a considerable amount of space, they should be removed on a periodic basis. The transaction logs should be removed one database at a time instead of all databases simultaneously. The data load and rules files associated with the replayed transactions should be removed in chronological order, from earliest to latest. In addition, do not remove any data load and rules files with a timestamp later than the timestamp of the most recent archive file.
    Partitioned Database Considerations
    For partitioned databases, partition commands such as synchronization commands cannot be replayed. When recovering data, the partition changes must be replayed manually and logged transactions must be replayed in the correct chronological order. If the partitioned database includes any @XREF commands in the calc script, the logged transactions must be selectively replayed in the correct chronological order between the source and target databases.
    References
    For additional information, please see the Oracle EPM System Backup and Recovery Guide. For EPM 11.1.2.2, the link is http://docs.oracle.com/cd/E17236_01/epm.1112/epm_backup_recovery_1112200.pdf
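    As a closing illustration (a hedged sketch only - the application/database names and log path are examples, and the SERVER_CLIENT keyword should be verified against your release's configuration reference), the settings discussed above might be combined in essbase.cfg like this:
    TRANSACTIONLOGLOCATION Sample Basic /Hyperion/trlog/Sample/Basic NATIVE ENABLE
    TRANSACTIONLOGDATALOADARCHIVE Sample Basic SERVER_CLIENT
    And a replay by log time in MAXL would then look something like:
    alter database Sample.Basic replay transactions after '11_20_2013:18:00:00';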

    Read the article

  • The Unintended Consequences of Sound Security Policy

    - by Tanu Sood
    Meet the author: Kevin Moulton, CISSP, CISM, Senior Sales Consulting Manager, Oracle. Kevin has been in the security space for more than 25 years, and with Oracle for 7 years. He manages the East Enterprise Security Sales Consulting Team. He is also a Distinguished Toastmaster. Follow Kevin on Twitter at twitter.com/kevin_moulton, where he sometimes tweets about security, but might also tweet about running, beer, food, baseball, football, good books, or whatever else grabs his attention. Kevin will be a regular contributor to this blog, so stay tuned for more posts from him.
    When I speak to a room of IT administrators, I like to begin by asking them if they have implemented a complex password policy. Generally, they all nod their heads enthusiastically. I ask them if that password policy requires long passwords. More nodding. I ask if that policy requires upper and lower case letters - faster nodding - numbers - even faster - special characters - enthusiastic nodding all around! I then ask them if their policy also includes a requirement for users to regularly change their passwords. Now we have smiles with the nodding! I ask them if the users have different IDs and passwords on the many systems that they have access to. Of course! I then ask them if, when they walk around the building, they see something like this (thanks to Jake Ludington for the nice example). Can these administrators be faulted for their policies? Probably not, but in the end end-users will find a way to get their job done efficiently. Post-It Notes to the rescue!
    I was visiting a business in New York City one day which was a perfect example of this problem. First I walked up to the security desk and told them where I was headed. They asked me if they should call upstairs to have someone escort me. Is that my call? Is that policy? I said that I knew where I was going, so they let me go. Having the conference room number handy, I wandered around the place in search of my destination. As I walked around, unescorted, I noticed the post-it note problem in abundance. Had I been so inclined, I could have logged in on almost any machine and into any number of systems. When I reached my intended conference room, I mentioned my post-it note observation to the two gentlemen with whom I was meeting. One of them said, "You mean like this," and he produced a post-it note full of login IDs and passwords from his breast pocket! I gave him kudos for not hanging the list on his monitor. We then talked for the rest of the meeting about the difficulties faced by the employees due to the security policies. These policies, although well-intended, made life very difficult for the end-users. Most users had access to 8 to 12 systems, and the passwords for each expired at different times. The post-it note solution was understandable.
    Who could remember even half of them? What could this customer have done differently? I am a fan of using a provisioning system, such as Oracle Identity Manager, to manage all of the target systems. With OIM, an email could be automatically sent to all users when it was time to change their password. The end-users would follow a link to change their password on a web page, and then OIM would propagate that password out to all of the systems that the user had access to, even if the login IDs were different. Another option would be an Enterprise Single Sign-On solution. With Oracle eSSO, all of a user's credentials would be stored in a central, encrypted credential store. The end-user would only have to log in to their machine each morning and then, as they moved to each new system, Oracle eSSO would supply the credentials. Good-bye post-it notes! 3M may be disappointed, but your end users will thank you. I hear people say that this post-it note problem is not a big deal, because the only people who would see the passwords are fellow employees. Do you really know who is walking around your building? What are the password policies in your business? How do the end-users respond?

    Read the article

  • Getting a Conexant CX23885 TV Capture Card working

    - by Benny
    I'm new to Linux, and am trying to get my Capture Card working on 11.04. The only command that I know to run to find out any information is lspci, which tells me that I have: 02:00.0 Multimedia video controller: Conexant Systems, Inc. CX23885 PCI Video and Audio Decoder (rev 04). I've looked at using Me TV, but haven't worked out how to configure it for my card, or what I need to do to get it running. I'm not fussed on what software I use to run the Capture Card, but I've currently got only Me TV installed. Edit: When I run tvtime, I get the following errors:
    videoinput: Cannot open capture device /dev/video0: No such file or directory
    mixer: find error: Success
    mixer: Can't open mixer default, mixer volume and mute unavailable.
    mixer: Can't open device default/Line, mixer volume and mute unavailable.
    Segmentation fault

    Read the article

  • Changing the Game: Why Oracle is in the IT Operations Management Business

    - by DanKoloski
    Next week, in Orlando, is the annual Gartner IT Operations Management Summit. Oracle is a premier sponsor of this annual event, which brings together IT executives for several days of high level talks about the state of operational management of enterprise IT. This year, Sushil Kumar, VP Product Strategy and Business Development for Oracle's Systems & Applications Management, will be presenting on the transformation in IT Operations required to support enterprise cloud computing.
    IT Operations transformation is an important subject, because year after year, we hear essentially the same refrain - large enterprises spend an average of two-thirds (67%!) of their IT resources (budget, energy, time, people, etc.) on running the business, with far too little left over to spend on growing and transforming the business (which is what the business actually needs and wants). In the thirtieth year of the distributed computing revolution (give or take, depending on how you count it), it's amazing that we have still not moved the needle on the single biggest component of enterprise IT resource utilization.
    Oracle is in the IT Operations Management business because when management is engineered together with the technology under management, the resulting efficiency gains can be truly staggering. To put it simply - what if you could turn that 67% of IT resources spent on running the business into 50%? Or 40%? Imagine what you could do with those resources. It's now not just possible, but happening.
    This seems like a simple idea, but it is a radical change from "business as usual" in enterprise IT Operations. For the last thirty years, management has been a bolted-on afterthought - we pick and deploy our technology, then figure out how to manage it. This pervasive dysfunction is a broken cycle that guarantees high ongoing operating costs and low agility. If we want to break the cycle, we need to take a more tightly-coupled approach. As a complete applications-to-disk platform provider, Oracle is engineering management together with technology across our stack and hooking that on-premise management up live to My Oracle Support.
    Let's examine the results with just one piece of the Oracle stack - the Oracle Database. Oracle began this journey with Oracle Database 9i many years ago with the introduction of low-impact instrumentation in the database kernel ("tell me what's wrong") and through Database 10g, 11g and 11gR2 has successively added integrated advisory ("tell me how to fix what's wrong") and lifecycle management and automated self-tuning ("fix it for me, and do it on an ongoing basis for all my assets"). When enterprises take advantage of this tight-coupling, the results are game-changing.
    Consider the following (for a full list of public references, visit this link):
    British Telecom improved database provisioning time 1000% (from weeks to minutes), which allows them to provide a new DBaaS service to their internal customers with no additional resources.
    Cerner Corporation saved $9.5 million in CapEx and OpEx and launched a brand-new cloud business at the same time.
    Vodafone Group plc improved response times 50% and reduced maintenance planning times 50-60% while serving 391 million registered mobile customers.
    Or see the recent Database Manageability and Productivity Cost Comparisons: Oracle Database 11g Release 2 vs. SAP Sybase ASE 15.7, Microsoft SQL Server 2008 R2 and IBM DB2 9.7, as conducted by independent analyst firm ORC. In later entries, we'll discuss similar results across other portions of the Oracle stack and how these efficiency gains are required to achieve the agility benefits of Enterprise Cloud. Stay Connected: Twitter | Facebook | YouTube | LinkedIn | Newsletter

    Read the article

  • Expanding on requestaudit - Tracing who is doing what...and for how long

    - by Kyle Hatlestad
    One of the most helpful tracing sections in WebCenter Content (and one that is on by default) is the requestaudit tracing. This tracing section summarizes the top service requests happening in the server along with how they are performing. By default, it has two different rotations. One happens every 2 minutes (listing up to 5 services) and another happens every 60 minutes (listing up to 20 services). These traces provide the total time for all the requests against each service along with the number of requests and their average request time. This information can provide a good start in troubleshooting performance issues or tracking down a particular issue. [Read More]
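    As a rough sketch of where those knobs live (setting names per the WebCenter Content tracing documentation - verify against your version), the two rotations can be tuned with entries along these lines in config.cfg:
    # short rotation: every 2 minutes, top 5 services
    RequestAuditIntervalSeconds1=120
    RequestAuditListDepth1=5
    # long rotation: every 60 minutes, top 20 services
    RequestAuditIntervalSeconds2=3600
    RequestAuditListDepth2=20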

    Read the article

  • Ubuntu 14.04LTS - runtime video card configuration through Radeon driver

    - by RJVB
    How does one configure Radeon video cards when using the open source Radeon driver - power profile, vsync, etc.? When I try the widely documented solution (against overheating) that worked for me under LMDE (confirmed with kernels up to 3.12.6), I get the following error:
    $ sudo cat /sys/class/drm/card0/device/power_profile
    default
    $ sudo sh -c "echo mid > /sys/class/drm/card0/device/power_profile"
    sh: echo: I/O error
    Exit 1
    And when I try suggestions from Arch's ATI wiki, my modifications are simply ignored:
    $ sudo cat /sys/class/drm/card0/device/power_dpm_force_performance_level
    auto
    $ sudo sh -c "echo high > /sys/class/drm/card0/device/power_dpm_force_performance_level"
    $ sudo cat /sys/class/drm/card0/device/power_dpm_force_performance_level
    auto
    Is this something Ubuntu specific, or something introduced with the 3.13 version of the Radeon driver? I'm encountering this on 2 laptops, one with a Radeon HD6290 (integrated GPU), the other with a discrete RV710 card. The RV710 needs a specific power setting to prevent overheating under LMDE; fortunately it doesn't seem to overheat with the Ubuntu default setting.

    Read the article

  • blurry lines between web application context layer, service layer and data access layer in spring

    - by thenaglecode
    I originally asked this question on SO but on advice I have moved the question here... I'll admit I'm a Spring newbie, but you can correct me if I'm wrong: this one-liner looks kinda fishy in a best-practices sort of way: @RepositoryRestResource(collectionResourceRel="people"...) public interface PersonRepository extends PagingAndSortingRepository<Person, Long>. For those who are unaware, this does many things: it is an interface definition that can be registered in an application context as a JPA repository, automagically hooking up all the default CRUD operations within a persistence context (that is externally configured), and it also configures default controller/request-mapping/handler functionality at the namespace "/people" relative to your configured dispatcher servlet mapping. Here's my point: I just crossed three conceptual layers with one line of code! This feels against my separation-of-concerns instincts, but I wanted to hear your opinion. And for the sake of being on a question-and-answer site, I would like to know whether there is a better way of separating these different layers - service, data, controllers - whilst keeping configuration as minimal as possible.
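    For reference, here is roughly what I imagine the more "separated" version would look like - all names are hypothetical and untested on my part, just to make the comparison concrete (org.springframework.* imports omitted for brevity):
    // Data layer: plain Spring Data repository, no REST exposure
    public interface PersonRepository extends PagingAndSortingRepository<Person, Long> {}

    // Service layer: owns the business rules
    @Service
    public class PersonService {
        private final PersonRepository repository;
        @Autowired
        public PersonService(PersonRepository repository) { this.repository = repository; }
        public Page<Person> listPeople(Pageable pageable) { return repository.findAll(pageable); }
    }

    // Web layer: owns the "/people" mapping
    @Controller
    @RequestMapping("/people")
    public class PersonController {
        private final PersonService service;
        @Autowired
        public PersonController(PersonService service) { this.service = service; }
        @RequestMapping(method = RequestMethod.GET)
        @ResponseBody
        public Page<Person> list(Pageable pageable) { return service.listPeople(pageable); }
    }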

    Read the article

  • Silverlight Relay Commands

    - by George Evjen
    I am fairly new at Silverlight development and I usually have an issue that needs research every day. Which I enjoy, since I like the idea of going into a day knowing that I am going to learn something new. The issue that I am currently working on centers around relay commands. I have a pretty good handle on relay commands and how we use them within our applications.
    <Button Command="{Binding ButtonCommand}" CommandParameter="NewRecruit" Content="New Recruit" />
    Here in our xaml we have a button. The button has a Command and a CommandParameter. The command binds to the ButtonCommand that we have in our ViewModel.
    RelayCommand _buttonCommand;
    /// <summary>
    /// Gets the button command.
    /// </summary>
    /// <value>The button command.</value>
    public RelayCommand ButtonCommand
    {
        get
        {
            if (_buttonCommand == null)
            {
                _buttonCommand = new RelayCommand(
                    x => x != null && x.ToString().Length > 0 && CheckCommandAvailable(x.ToString()),
                    x => ExecuteCommand(x.ToString()));
            }
            return _buttonCommand;
        }
    }
    In our relay command we then do some checks with a lambda expression. We check whether the command parameter is null, whether its length is greater than 0, and we have a CheckCommandAvailable method that tells us if the button is even enabled. After we check these three items, we pass the command parameter to an action method. This is all pretty straightforward; the issue that we solved a few days ago centered around having a control that needed to use a relay command while being a nested control using a different DataContext. The example below illustrates how we handled this scenario. In our xaml usercontrol we had to give this control a name.
    <Controls3:RadTileViewItem x:Class="RecruitStatusTileView"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
        xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
        xmlns:Controls1="clr-namespace:Telerik.Windows.Controls;assembly=Telerik.Windows.Controls"
        xmlns:Controls2="clr-namespace:Telerik.Windows.Controls;assembly=Telerik.Windows.Controls.Input"
        xmlns:Controls3="clr-namespace:Telerik.Windows.Controls;assembly=Telerik.Windows.Controls.Navigation"
        mc:Ignorable="d" d:DesignHeight="400" d:DesignWidth="800" Header="{Binding Title,Mode=TwoWay}" MinimizedHeight="100"
        x:Name="StatusView">
    Here we are using a Telerik RadTileViewItem.
    We set the name of this control to "StatusView". In our button control we set our command parameters and commands differently than in the example above.
    <HyperlinkButton Content="{Binding BigBoardButtonText, Mode=TwoWay}" CommandParameter="{Binding 'Position.PositionName'}" Command="{Binding ElementName=StatusView, Path=DataContext.BigBoardCommand, Mode=TwoWay}" />
    This hyperlink button lives in a ListBox control, and this listbox has an ItemsSource of PositionSelectors. The CommandParameter is binding to the Position.PositionName property of that PositionSelectors object. This again is pretty straightforward. What gets a bit tricky is the Command property in the hyperlink. It is binding to the element name we created in the user control (StatusView). Because this hyperlink is in a listbox item template, it doesn't have a direct handle on the DataContext that the RadTileViewItem has, so we have to make sure it does. We do that by binding to the element name StatusView and then setting the path to DataContext.BigBoardCommand. BigBoardCommand is the name of the RelayCommand in the view model.
    private RelayCommand _bigBoardCommand = null;
    /// <summary>
    /// Gets the big board command.
    /// </summary>
    /// <value>The big board command.</value>
    public RelayCommand BigBoardCommand
    {
        get
        {
            if (_bigBoardCommand == null)
            {
                _bigBoardCommand = new RelayCommand(x => true, x => AddToBigBoard(x.ToString()));
            }
            return _bigBoardCommand;
        }
    }
    From there we check for true again and then call the action, passing in the parameter that we had as the command parameter. What we are working on now is a bit trickier than this second example. In the above example we are only creating this TileViewItem with the name "StatusView" once. In another part of our application we are generating multiple TileViewItems, so we cannot set the name in the control, as we can't have multiple controls with the same name. When we run the application we get an error that reads that the value is out of expected range. My searching has led me to think we cannot have multiple controls with the same name. This is today's problem, and I'll post the solution once it is found.
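    In the meantime, for completeness, here is a minimal sketch of the RelayCommand class that the snippets above assume - argument order matching our usage (canExecute first, then execute); your MVVM library's version may differ:
    using System;
    using System.Windows.Input;

    public class RelayCommand : ICommand
    {
        private readonly Func<object, bool> canExecute;
        private readonly Action<object> execute;

        public RelayCommand(Func<object, bool> canExecute, Action<object> execute)
        {
            this.canExecute = canExecute;
            this.execute = execute;
        }

        public event EventHandler CanExecuteChanged;

        public bool CanExecute(object parameter)
        {
            return canExecute == null || canExecute(parameter);
        }

        public void Execute(object parameter)
        {
            execute(parameter);
        }

        // Call this when the conditions affecting CanExecute change,
        // so bound buttons re-evaluate their enabled state.
        public void RaiseCanExecuteChanged()
        {
            var handler = CanExecuteChanged;
            if (handler != null) handler(this, EventArgs.Empty);
        }
    }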

    Read the article

  • Code Analysis Rule Sets in Visual Studio 2010

    - by Anthony Trudeau
    Microsoft Visual Studio 2010 introduces the concept of rule sets when configuring code analysis.  This is a valuable change from Visual Studio 2008 that I didn't even realize I wanted.  Visual Studio 2008 by default selected all rules and then you had to remove rules on an item by item basis. The rule sets fall into logical groups including "Microsoft All Rules", "Microsoft Basic Correctness Rules", "Microsoft Security Rules", et al.  And within the project properties you can select one rule set, multiple rule sets, or you can define your own rule set based upon another. Selecting a single rule set is obviously the easiest option.  The default rule set when you create a new project is the "Microsoft Minimum Recommended Rules".  However, in my opinion the recommended rules are just too permissive.  For that reason you might want to change your rule set to "Microsoft All Rules" until you get around to creating your own rule set; or, alternatively, you can select multiple rule sets, which is an option from the rule set combo box.  The Visual Studio documentation has comprehensive help on what is contained within the rule sets. Creating your own rule set is easy, if not obvious.  You need to start a rule set from an existing rule set.  To get started select a rule set in the combo box within the Code Analysis tab of the project properties.  I selected the "Microsoft All Rules" for my rule set, but you may find it easier to start with the "Microsoft Minimum Recommended Rules" if your rules are on the more permissive side. Once your rule set is selected click the Open button.  This will display a dialog that is similar in composition to the rules selection from Visual Studio 2008.  Browsing through the tree view you can select or deselect individual rules within their categories; and you can indicate that the rules are flagged as errors instead of the default which is a warning.  A nice touch to the form is that you get a help pane when you select an individual rule.  That helped me considerably when I first configured my rule set. Once you have finished selecting your rules click the Save tool button, specify a location and name, and click the Save button on the Save As dialog.  Once you're back on the Code Analysis tab you'll choose the Browse option within the combo box and open the file you just created.
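    Under the hood, the saved rule set is just a small XML file. A hand-edited example might look like the following (rule IDs and the name are illustrative; the schema shown is the Visual Studio 2010 .ruleset format as I understand it, so verify against a file the IDE generates):
    <?xml version="1.0" encoding="utf-8"?>
    <RuleSet Name="My Team Rules" Description="All rules, with a few adjustments" ToolsVersion="10.0">
      <Include Path="allrules.ruleset" Action="Default" />
      <Rules AnalyzerId="Microsoft.Analyzers.ManagedCodeAnalysis" RuleNamespace="Microsoft.Rules.Managed">
        <!-- Promote "validate arguments of public methods" to an error -->
        <Rule Id="CA1062" Action="Error" />
        <!-- Turn off "identifiers should be spelled correctly" entirely -->
        <Rule Id="CA1704" Action="None" />
      </Rules>
    </RuleSet>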

    Read the article

  • How to Delete/Disable gnome panels. No existing solutions working

    - by Alan Peabody
    I would like to remove gnome panel completely. I spend most of my time in a (tmux) terminal or a browser. Synapse and a small hidden AWN panel fit the rest of my needs. I have tried all recommended solutions including this (found it a few places): How to delete Gnome Panel? However it always comes back at log in. I have tried changing the required components panel to avant-whatever as well as to empty. I have tried setting them both as default (right click set as default). Right now I just have the last panel set to transparent and auto hide, but it still tends to be annoying. What do I need to do to get rid of this damn thing? Clarification: Using gconf-editor, gconftool2, and/or Ubuntu tweak to set /desktop/gnome/session/required_component/panel to avant-window-navigator is not working. The setting stays when I reboot, but the empty gnome panel sticks around.

    Read the article

  • WIF-less claim extraction from ACS: SWT

    - by Elton Stoneman
    WIF with SAML is solid and flexible, but unless you need the power, it can be overkill for simple claim assertion, and in the REST world WIF doesn't have support for the latest token formats.  Simple Web Token (SWT) may not be around forever, but while it's here it's a nice easy format which you can manipulate in .NET without having to go down the WIF route. Assuming you have set up a Relying Party in ACS, specifying SWT as the token format: when ACS redirects to your login page, it will POST the SWT in the first form variable. It comes through in the BinarySecurityToken element of a RequestSecurityTokenResponse XML payload; the SWT type is specified with a TokenType of http://schemas.xmlsoap.org/ws/2009/11/swt-token-profile-1.0:
    <t:RequestSecurityTokenResponse xmlns:t="http://schemas.xmlsoap.org/ws/2005/02/trust">
      <t:Lifetime>
        <wsu:Created xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">2012-08-31T07:31:18.655Z</wsu:Created>
        <wsu:Expires xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">2012-08-31T09:11:18.655Z</wsu:Expires>
      </t:Lifetime>
      <wsp:AppliesTo xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy">
        <EndpointReference xmlns="http://www.w3.org/2005/08/addressing">
          <Address>http://localhost/x.y.z</Address>
        </EndpointReference>
      </wsp:AppliesTo>
      <t:RequestedSecurityToken>
        <wsse:BinarySecurityToken wsu:Id="uuid:fc8d3332-d501-4bb0-84ba-d31aa95a1a6c" ValueType="http://schemas.xmlsoap.org/ws/2009/11/swt-token-profile-1.0" EncodingType="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary" xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd" xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd"> [ base64string ] </wsse:BinarySecurityToken>
      </t:RequestedSecurityToken>
      <t:TokenType>http://schemas.xmlsoap.org/ws/2009/11/swt-token-profile-1.0</t:TokenType>
      <t:RequestType>http://schemas.xmlsoap.org/ws/2005/02/trust/Issue</t:RequestType>
      <t:KeyType>http://schemas.xmlsoap.org/ws/2005/05/identity/NoProofKey</t:KeyType>
    </t:RequestSecurityTokenResponse>
    Reading the SWT is as simple as base64-decoding, then URL-decoding the element value:
    var wrappedToken = XDocument.Parse(HttpContext.Current.Request.Form[1]);
    var binaryToken = wrappedToken.Root.Descendants("{http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd}BinarySecurityToken").First();
    var tokenBytes = Convert.FromBase64String(binaryToken.Value);
    var token = Encoding.UTF8.GetString(tokenBytes);
    var tokenType = wrappedToken.Root.Descendants("{http://schemas.xmlsoap.org/ws/2005/02/trust}TokenType").First().Value;
    The decoded token contains the claims as key/value pairs, along with the issuer, audience (ACS realm), expiry date and an HMAC hash, which are in query string format.
    Separate them on the ampersand, and you can write out the claim values in your logged-in page:
    var decoded = HttpUtility.UrlDecode(token);
    foreach (var part in decoded.Split('&'))
    {
        Response.Write("<pre>" + part + "</pre><br/>");
    }
    - which will produce something like this:
    http://schemas.microsoft.com/ws/2008/06/identity/claims/authenticationinstant=2012-08-31T06:57:01.855Z
    http://schemas.microsoft.com/ws/2008/06/identity/claims/authenticationmethod=http://schemas.microsoft.com/ws/2008/06/identity/authenticationmethod/windows
    http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname=XYZ
    http://schemas.xmlsoap.org/ws/2005/05/identity/claims/[email protected]
    http://schemas.xmlsoap.org/ws/2005/05/identity/claims/[email protected]
    http://schemas.microsoft.com/accesscontrolservice/2010/07/claims/identityprovider=http://fs.svc.xyz.com/adfs/services/trust
    Audience=http://localhost/x.y.z
    ExpiresOn=1346402225
    Issuer=https://x-y-z.accesscontrol.windows.net/
    HMACSHA256=oDCeEDDAWEC8x+yBnTaCLnzp4L6jI0Z/xNK95PdZTts=
    The HMAC hash lets you validate the token to ensure it hasn't been tampered with. You'll need the token signing key from ACS; then you can re-sign the token and compare hashes. There's a full implementation of an SWT parser and validator here: How To Request SWT Token From ACS And How To Validate It At The REST WCF Service Hosted In Windows Azure, and a cut-down claim inspector on my github code gallery: ACS Claim Inspector. Interestingly, ACS lets you have a value for your logged-in page which has no relation to the realm for authentication, so you can put this code into a generic claim inspector page, and set that to be your logged-in page for any relying party where you want to check what's being sent through. Particularly handy with ADFS, when you're modifying the claims provided and want to quickly see the results.
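    Going back to the validation step, a simplified sketch of the hash check might look like this - re-compute the HMAC over everything before the trailing &HMACSHA256= pair using the base64-decoded ACS signing key, then compare (no expiry or audience checks here, and key handling is deliberately naive):
    using System;
    using System.Security.Cryptography;
    using System.Text;
    using System.Web;

    public static bool IsValidSwt(string token, string base64SigningKey)
    {
        // The signature is the final key/value pair of the raw (still URL-encoded) token.
        const string marker = "&HMACSHA256=";
        var index = token.LastIndexOf(marker, StringComparison.Ordinal);
        if (index < 0) return false;

        var signedPart = token.Substring(0, index);
        var suppliedSignature = HttpUtility.UrlDecode(token.Substring(index + marker.Length));

        using (var hmac = new HMACSHA256(Convert.FromBase64String(base64SigningKey)))
        {
            var computed = Convert.ToBase64String(
                hmac.ComputeHash(Encoding.ASCII.GetBytes(signedPart)));
            return computed == suppliedSignature;
        }
    }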

    Read the article

  • How do I make Home and End work in PuTTY SSH with fish shell?

    - by endolith
    Years ago, an Ubuntu update broke this and I've never found a solution. I have fish as my default shell, on Ubuntu 12.10. Locally (GNOME Terminal), the Home and End keys work fine in both fish and bash. When I log in by SSH using PuTTY and then run bash, Home and End work fine inside of bash. However, when I log in by SSH using PuTTY, in the default fish shell, pressing the Home key produces [1~ (sometimes erasing the line, sometimes not). When I press End, it produces [4~. How do I get Home and End to work correctly?
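    Based on those sequences, I assume the place to start is binding them explicitly in ~/.config/fish/fish_user_key_bindings.fish, something like the following (an untested guess on my part):
    function fish_user_key_bindings
        bind \e\[1~ beginning-of-line  # Home, as sent by PuTTY
        bind \e\[4~ end-of-line        # End, as sent by PuTTY
    end
    But I'd like to understand the proper fix.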

    Read the article

  • An Epic Question "How to call a method when the page loads"

    - by Arunkumar Ramamoorthy
    Quite often a question comes up in OTN, with different subjects, all meaning "How to call a method when my ADF page loads?". More often than not, people take the approach of an ADF Phase Listener by overriding the before/afterPhase methods. In this blog, we will go through different options for achieving it.
    1. Method Call Activity as the default activity in a taskflow: If the application is built with taskflows, then this is the best-suited approach to take. 1.a. Calling a Data Control method: To call a Data Control method (e.g., a method in AMImpl exposed as a client interface), simply drag and drop the method as the default Method Call Activity, then draw a control flow case from the method to your page. After this, drop the taskflow as a region in the main page. When we run the main page, the Method Call Activity is called first, and then the page is rendered. 1.b. Calling a method in a backing bean: To call a method in the backing bean before page load, we can follow a similar approach as above. Instead of binding the Method Call Activity to an action/method binding in the pagedef, we bind it to the method. Insert a Method Call Activity (and make it the default) from the Component Palette, then double-click it to select a method to bind. This approach can also be used to perform some action in the backing bean along with calling a Data Control method (just add bindings code in the backing bean to execute the DC method).
    2. Using an invokeAction executable: If the application is built with pages and no taskflows are involved, then this option can be taken into consideration. In the page definition of the page, add an invokeAction executable and bind it to the method that needs to be executed.
    3. Using a combination of server and client listeners: If the page does not have any page definition, then this approach can be taken to call a method in a backing bean. A serverListener is added at the document level, which calls the method in the backing bean. Along with this, a clientListener is added with the "load" type (i.e., it is triggered when the page loads), which queues a server event to trigger the method (a rough sketch follows at the end of this list).
    4. Using a Page Phase Listener: This should be the last resort. Care should be taken when using this approach, since the Phase Listener is called for each request sent by the client. Zeeshan Baig's blog covers this scenario.
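    As promised, a rough sketch of option 3 (component IDs, bean name and event name are illustrative; check the exact attributes against the ADF documentation for your release):
    <af:document id="d1">
      <af:serverListener type="pageLoadEvent" method="#{myBean.onPageLoad}"/>
      <af:clientListener type="load" method="invokePageLoadEvent"/>
      <af:resource type="javascript">
        function invokePageLoadEvent(event) {
          // queue the server event so the backing bean method runs after load
          AdfCustomEvent.queue(event.getSource(), "pageLoadEvent", {}, false);
        }
      </af:resource>
      <!-- page content -->
    </af:document>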

    Read the article

  • Ubuntu doesn't give the intended screen resolution

    - by JMCF125
    I have recently created an Ubuntu 12.04.2 64-bit virtual machine in VirtualBox, and I am not very used to Linux (I used Linux Mint for a few weeks some time ago), so please refer to things by their full names, not just "the what-not command". The problem is I can't set the full resolution my computer supports (I think it is 1366 by 768). I have found similar questions and tried most of the respective solutions; they did not work. If I type xrandr in the terminal I get:
    xrandr: Failed to get size of gamma for output default
    Screen 0: minimum 640 x 480, current 1024 x 768, maximum 1024 x 768
    default connected 1024x768+0+0 0mm x 0mm
       1024x768       61.0*
       800x600        61.0
       640x480        60.0
    As you can see, the maximum is too low. And in the screen settings (I mean, with the GUI) only 1024x768 and 800x600 appear. I don't remember exactly which answer it was from those questions, but one command (again, with xrandr) made the resolution I wanted appear, although it gave an error when selected, not even changing to the 1366x768 resolution first and then back to 1024x768.

    Read the article

  • Request Validation in ASP.NET 4.0

    - by Ben Bastiaensen
    Up to ASP.NET 3.5, request validation is enabled by default. In order to disable it for a page, you needed to set the ValidateRequest attribute in the page directive to false. This no longer works by default in ASP.NET 4.0. If you want to use this behaviour, you need to add the following setting in web.config: <httpRuntime requestValidationMode="2.0" />. Of course, you need to check all input in the page for XSS or other malicious input if you set the page's request validation to false.
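    Putting both pieces together - the web.config switch plus a page that opts out (the page name is illustrative):
    <system.web>
      <httpRuntime requestValidationMode="2.0" />
    </system.web>
    <%@ Page Language="C#" ValidateRequest="false" Inherits="MyApp.FeedbackPage" %>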

    Read the article

  • Preventing adult content in a forum

    - by John Doe
    I'm working on a forum that allows images attached to posts and doesn't require registration. Thing is, I'd like to provide a work-safe navigation option in which posts with porn images attached aren't shown. The ideas I've come up with are: making the work-safe option the default and treating all posts with images attached as pornographic, making them visible only if the user unchecks it; or making all posts with images attached not work-safe by default and changing their status to work-safe only after a moderator has approved it - only then would they be visible if the user has the work-safe option checked. Does anyone else have an idea? Also, how do the big web services deal with this (YouTube, Craigslist, even Stack Exchange)? By the way, I don't think that "nudity detector" libraries are accurate; they give plenty of false positives and negatives. Thanks!

    Read the article

  • [Windows 8] An application bar toggle button

    - by Benjamin Roux
    To stay on the application bar theme, here's another useful control, which enables you to create an application bar button that can be toggled between two different contents/styles/commands (used to create a favorite/unfavorite or a play/pause button, for example).
    namespace Indeed.Controls
    {
        public class AppBarToggleButton : Button
        {
            public bool IsChecked
            {
                get { return (bool)GetValue(IsCheckedProperty); }
                set { SetValue(IsCheckedProperty, value); }
            }
            public static readonly DependencyProperty IsCheckedProperty =
                DependencyProperty.Register("IsChecked", typeof(bool), typeof(AppBarToggleButton),
                    new PropertyMetadata(false, (o, e) => (o as AppBarToggleButton).IsCheckedChanged()));

            public string CheckedContent
            {
                get { return (string)GetValue(CheckedContentProperty); }
                set { SetValue(CheckedContentProperty, value); }
            }
            public static readonly DependencyProperty CheckedContentProperty =
                DependencyProperty.Register("CheckedContent", typeof(string), typeof(AppBarToggleButton), null);

            public ICommand CheckedCommand
            {
                get { return (ICommand)GetValue(CheckedCommandProperty); }
                set { SetValue(CheckedCommandProperty, value); }
            }
            public static readonly DependencyProperty CheckedCommandProperty =
                DependencyProperty.Register("CheckedCommand", typeof(ICommand), typeof(AppBarToggleButton), null);

            public Style CheckedStyle
            {
                get { return (Style)GetValue(CheckedStyleProperty); }
                set { SetValue(CheckedStyleProperty, value); }
            }
            public static readonly DependencyProperty CheckedStyleProperty =
                DependencyProperty.Register("CheckedStyle", typeof(Style), typeof(AppBarToggleButton), null);

            public bool AutoToggle
            {
                get { return (bool)GetValue(AutoToggleProperty); }
                set { SetValue(AutoToggleProperty, value); }
            }
            public static readonly DependencyProperty AutoToggleProperty =
                DependencyProperty.Register("AutoToggle", typeof(bool), typeof(AppBarToggleButton), null);

            private object content;
            private ICommand command;
            private Style style;

            private void IsCheckedChanged()
            {
                if (IsChecked)
                {
                    // back up the current content, command and style
                    content = Content;
                    command = Command;
                    style = Style;
                    if (CheckedStyle == null)
                        Content = CheckedContent;
                    else
                        Style = CheckedStyle;
                    Command = CheckedCommand;
                }
                else
                {
                    if (CheckedStyle == null)
                        Content = content;
                    else
                        Style = style;
                    Command = command;
                }
            }

            protected override void OnTapped(Windows.UI.Xaml.Input.TappedRoutedEventArgs e)
            {
                base.OnTapped(e);
                if (AutoToggle)
                    IsChecked = !IsChecked;
            }
        }
    }
    To use it, it's very simple.
    <ic:AppBarToggleButton Style="{StaticResource PlayAppBarButtonStyle}"
        CheckedStyle="{StaticResource PauseAppBarButtonStyle}"
        Command="{Binding Path=PlayCommand}"
        CheckedCommand="{Binding Path=PauseCommand}"
        IsChecked="{Binding Path=IsPlaying}" />
    When the IsPlaying property (in my ViewModel) is true the button becomes a Pause button; when it's false it becomes a Play button. Warning: just make sure that the IsChecked property is set last in your control! If you don't use styles, you can alternatively use Content and CheckedContent. Furthermore, you can set AutoToggle to true if you don't want to control the IsChecked property through binding. With this control and the AppBarPopupButton, you can now create awesome application bars for your apps! Stay tuned for more awesome Windows 8 tricks!

    Read the article

  • Apache - create multiple aliases

    - by mc3mcintyre
    I'm trying to set up two websites on my Apache server. One is www.domain.com and the other is test.domain.com. Currently, my 000-default.conf file reads as follows:
    <VirtualHost www:80>
        # The ServerName directive sets the request scheme, hostname and port that
        # the server uses to identify itself. This is used when creating
        # redirection URLs. In the context of virtual hosts, the ServerName
        # specifies what hostname must appear in the request's Host: header to
        # match this virtual host. For the default virtual host (this file) this
        # value is not decisive as it is used as a last resort host regardless.
        # However, you must set it for any further virtual host explicitly.
        #ServerName www.domain.com
        #ServerAlias www
        ServerAdmin [email protected]
        DocumentRoot /var/www/domain.com/
        # Available loglevels: trace8, ..., trace1, debug, info, notice, warn,
        # error, crit, alert, emerg.
        # It is also possible to configure the loglevel for particular
        # modules, e.g.
        #LogLevel info ssl:warn
        ErrorLog ${APACHE_LOG_DIR}/domain.error.log
        CustomLog ${APACHE_LOG_DIR}/domain.access.log combined
        UseCanonicalName on
        allow from all
        Options +Indexes
        # For most configuration files from conf-available/, which are
        # enabled or disabled at a global level, it is possible to
        # include a line for only one particular virtual host. For example the
        # following line enables the CGI configuration for this host only
        # after it has been globally disabled with "a2disconf".
        #Include conf-available/serve-cgi-bin.conf
    </VirtualHost>
    <VirtualHost test:80>
        DocumentRoot "/var/www/domain.com/test/"
        ServerName test.domain.com
        ServerAdmin [email protected]
        ErrorLog ${APACHE_LOG_DIR}/test.domain.error.log
        CustomLog ${APACHE_LOG_DIR}/test.domain.access.log combined
        UseCanonicalName on
        allow from all
        Options +Indexes
    </VirtualHost>
    # vim: syntax=apache ts=4 sw=4 sts=4 sr noet
    As is, when I use a browser to go to the www location, it shows me a directory listing. However, if I remove the www:80 on line 1 and replace it with *:80, it correctly displays the webpage. I don't understand why. Can anyone help me configure this 000-default.conf file so that www goes to "/var/www/domain.com" and test goes to "/var/www/domain.com/test"? Thank you.
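    For what it's worth, from what I've read the standard name-based setup would look something like the following, with both hosts on *:80 and ServerName doing the matching (which fits my observation that *:80 works) - but I'd like to understand why the www:80 form fails:
    <VirtualHost *:80>
        ServerName www.domain.com
        DocumentRoot /var/www/domain.com
    </VirtualHost>
    <VirtualHost *:80>
        ServerName test.domain.com
        DocumentRoot /var/www/domain.com/test
    </VirtualHost>
    (On Apache versions before 2.4, I gather this also needs a NameVirtualHost *:80 line.)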

    Read the article

  • View matrix question (rotate by 180 degrees)

    - by King Snail
    I am using a third-party rendering API on top of OpenGL code and I cannot get my matrices correct. The API states this: "We're right handed by default, and we treat y as up by convention. Since IwGx's coordinate system has (0,0) as the top left, you typically need a 180 degree rotation around Z in your view matrix. I think the viewer does this by default." In my OpenGL app I have access to the view and projection matrices separately. How can I convert them to fit the criteria used by my third-party rendering API? I don't understand what they mean by rotating 180 degrees around Z - is that in the view matrix itself, or something in the camera before making the view matrix? Any code would be helpful, thanks.
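    For concreteness, my current understanding is that the 180-degree rotation around Z is just this constant matrix pre-multiplied onto my view matrix (column-major, OpenGL-style) - am I on the right track?
    // Rz(180 deg): cos(180) = -1, sin(180) = 0, so the X and Y axes flip sign.
    static const float rotZ180[16] = {
        -1.0f,  0.0f, 0.0f, 0.0f,
         0.0f, -1.0f, 0.0f, 0.0f,
         0.0f,  0.0f, 1.0f, 0.0f,
         0.0f,  0.0f, 0.0f, 1.0f
    };
    // adjustedView = rotZ180 * view, applied in eye space before handing
    // the matrix to the third-party API.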

    Read the article

  • Pace Layering Comes Alive

    - by Tanu Sood
    Rick Beers is Senior Director of Product Management for Oracle Fusion Middleware. Prior to joining Oracle, Rick held a variety of executive operational positions at Corning, Inc. and Bausch & Lomb. With a professional background that includes senior management positions in manufacturing, supply chain and information technology, Rick brings a unique set of experiences to cover the impact that technology can have on business models, processes and organizations. Rick hosts the IT Leaders Editorial on a monthly basis.
    By now, readers of this column are quite familiar with Oracle AppAdvantage, a unified framework of middleware technologies, infrastructure and applications utilizing a pace layered approach to enterprise systems platforms.
    1. Standardize and Consolidate core Enterprise Applications by removing invasive customizations, costly workarounds and the complexity that multiple instances create.
    2. Move business-specific processes and applications to the Differentiate layer, thus creating greater business agility with process extensions and best-of-breed applications managed by cross-application process orchestration.
    3. The Innovate layer contains all the business capabilities required for engagement, collaboration and intuitive decision making. This is the layer where innovation will occur, as people engage one another in a secure yet open and informed way.
    4. Simplify IT by minimizing complexity, improving performance and lowering cost with secure, reliable and managed systems across the entire Enterprise.
    But what hasn't been discussed is the pace layered architecture that Oracle AppAdvantage adopts. What is it, what are its origins and why is it relevant to enterprise-scale applications and technologies? It's actually a fascinating tale that spans the past 20 years, and a basic understanding of it provides a wonderful context for what is evolving as the future of enterprise systems platforms.
    It all begins in 1994 with a book by noted architect Stewart Brand, of 'Whole Earth Catalog' fame. In his 1994 book How Buildings Learn, Brand popularized the term 'Shearing Layers', arguing that any building is actually a hierarchy of pieces, each of which inherently changes at different rates. In 1997 he produced a six-part BBC series adapted from the book, in which Part 6 focuses on Shearing Layers. In this segment Brand begins to introduce the concept of 'pace'. Brand further refined this idea in his subsequent book, The Clock of the Long Now, which began to link the concept of Shearing Layers to computing and introduced the term 'pace layering', where he proposes that: "An imperative emerges: an adaptive [system] has to allow slippage between the differently-paced systems ... otherwise the slow systems block the flow of the quick ones and the quick ones tear up the slow ones with their constant change.
Embedding the systems together may look efficient at first but over time it is the opposite and destructive as well.” In 2000, IBM architects Ian Simmonds and David Ing published a paper entitled A Shearing Layers Approach to Information Systems Development, which applied the concept of Shearing Layers to systems design and development. It argued that at the time systems were still too rigid; that they constrained organizations by their inability to adapt to changes. The findings in the Conclusions section are particularly striking: “Our starting motivation was that enterprises need to become more adaptive, and that an aspect of doing that is having adaptable computer systems. The challenge is then to optimize information systems development for change (high maintenance) rather than stability (low maintenance). Our response is to make it explicit within software engineering the notion of shearing layers, and explore it as the principle that systems should be built to be adaptable in response to the qualitatively different rates of change to which they will be subjected. This allows us to separate functions that should legitimately change relatively slowly and at significant cost from that which should be changeable often, quickly and cheaply.” The problem at the time of course was that this vision of adaptable systems was simply not possible within the confines of 1st generation ERP, which were conceived, designed and developed for standardization and compliance. It wasn’t until the maturity of open, standards based integration, and the middleware innovation that followed, that pace layering became an achievable goal. And Oracle is leading the way. Oracle’s AppAdvantage framework makes pace layering come alive by taking a strategic vision 20 years in the making and transforming it to a reality. It allows enterprises to retain and even optimize their existing ERP systems, while wrapping around those ERP systems three layers of capabilities that inherently adapt as needed, at a pace that’s optimal for the enterprise.

    Read the article

  • monitor multiple work repositories in ODI11g EM

    - by tina.wang
    When you create a domain, by default it lets you specify master/work repository information. This work repository is automatically configured and can be directly monitored in EM. But your master repository may contain multiple work repositories - how do you let EM monitor all of them?
    1) These work repositories must have been registered in your master repository.
    2) In the WebLogic console, generate a generic data source for every work repository, e.g. jdbc/mySecondWork.
    3) In ODI Console, create a new repository connection for every work repository; the master JNDI information is jdbc/odiMasterRepository by default.
    OK, now you can see the work repository status is configured. By the way, there is a bug when the work repository is of execution type.

    Read the article

  • Hide Grub menu and keystroke to reveal

    - by Logan Williams
    How do you make the GRUB menu appear on a key combination, but have Windows boot by default? I'm running Ubuntu 11.10 and GRUB 2. Here is my current /etc/default/grub:
    # If you change this file, run 'update-grub' afterwards to update
    # /boot/grub/grub.cfg.
    # For full documentation of the options in this file, see:
    # info -f grub -n 'Simple configuration'
    GRUB_DEFAULT=0
    GRUB_HIDDEN_TIMEOUT=0
    GRUB_HIDDEN_TIMEOUT_QUIET=true
    GRUB_TIMEOUT=10
    GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
    GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
    GRUB_CMDLINE_LINUX=" quiet vga=769"
    Thanks! And here is my /boot/grub/grub.cfg: http://pastebin.com/HbDBe8xz
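    From what I've read so far, something like this in /etc/default/grub might be the idea - GRUB_DEFAULT pointing at the exact Windows entry title from my grub.cfg (the title below is just a guess), menu hidden by default, and holding Shift during the hidden timeout to reveal it - but I'm not sure it's right:
    GRUB_DEFAULT="Windows 7 (loader) (on /dev/sda1)"
    GRUB_HIDDEN_TIMEOUT=3
    GRUB_HIDDEN_TIMEOUT_QUIET=true
    GRUB_TIMEOUT=10
    # then apply the changes:
    # sudo update-grub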

    Read the article

  • Monitor not detected - low resolution only

    - by Jens
    I just installed Ubuntu 11.10 on my desktop PC, as a clean install, no upgrading. I have a Samsung SyncMaster BX2450 connected to the PC. My problem is that I cannot make Ubuntu recognize my monitor, which is capable of more than 1024x768. I ran a shutdown of lightdm and ran sudo X -configure, but it gave me a "configuration failed". Nothing seems to work - any ideas?
    VESA: GF119 Board - 13100000
    xx@xxx:~$ lspci -nn | grep VGA
    02:00.0 VGA compatible controller [0300]: nVidia Corporation GT520 [GeForce GT520] [10de:1040] (rev a1)
    xx@xxx:~$ xrandr -q
    xrandr: Failed to get size of gamma for output default
    Screen 0: minimum 640 x 480, current 1024 x 768, maximum 1024 x 768
    default connected 1024x768+0+0 0mm x 0mm
       1024x768       61.0*
       800x600        61.0
       640x480        60.0

    Read the article

  • SharePoint Client Object Model: Step One

    - by PeterBrunone
    I almost didn't make it out alive.  I followed the instructions in every piece of sample code and every forum post by someone who had no idea why their client OM code wasn't working, and my code still wouldn't get past the page load.  I kept getting "'Type' is undefined" errors when sp.core.js tried to register the SP namespace. As it turns out, you need the help of the default master page (or one like it) to get the object model loaded. Once I told my sample page to use the default master and modified everything accordingly, it hooked up and ran just fine. Now I can finally get some work done.
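    For pages that can't use the default master, a hedged alternative (standard SharePoint 2010 script-on-demand calls, as far as I can tell) is to pull the scripts in explicitly and wait for sp.js before touching the object model:
    <SharePoint:ScriptLink Name="sp.js" LoadAfterUI="true" Localizable="false" runat="server" />
    <script type="text/javascript">
        ExecuteOrDelayUntilScriptLoaded(initClientOM, "sp.js");
        function initClientOM() {
            var ctx = SP.ClientContext.get_current();
            var web = ctx.get_web();
            ctx.load(web);
            ctx.executeQueryAsync(
                function () { alert(web.get_title()); },                  // success
                function (sender, args) { alert(args.get_message()); });  // failure
        }
    </script>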

    Read the article
