Search Results

Search found 19485 results on 780 pages for 'exchange managed api'.


  • How do I search using the Google Maps API?

    - by Thomas
    Hello all, I'm trying to figure out how to search for nearby businesses in an iPhone app using the Google Maps API. I'm completely new to Javascript, and have waded through some of Google's sample code easily enough. I found how to grab the user's current location. Now I want to search for "restaurants" near that location. I'm just building on the sample code from here. I'll post it below with my changes anyway, just in case. <html> <head> <meta name="viewport" content="initial-scale=1.0, user-scalable=no" /> <meta http-equiv="content-type" content="text/html; charset=UTF-8"/> <title>Google Maps JavaScript API v3 Example: Map Geolocation</title> <script type="text/javascript" src="http://maps.google.com/maps/api/js?sensor=true"></script> <script type="text/javascript" src="http://code.google.com/apis/gears/gears_init.js"></script> <script type="text/javascript"> var currentLocation; var detroit = new google.maps.LatLng(42.328784, -83.040877); var browserSupportFlag = new Boolean(); var map; var infowindow = new google.maps.InfoWindow(); function initialize() { var myOptions = { zoom: 6, mapTypeId: google.maps.MapTypeId.ROADMAP }; map = new google.maps.Map(document.getElementById("map_canvas"), myOptions); map.enableGoogleBar(); // Try W3C Geolocation method (Preferred) if(navigator.geolocation) { browserSupportFlag = true; navigator.geolocation.getCurrentPosition(function(position) { // TRP - Save current location in a variable (currentLocation) currentLocation = new google.maps.LatLng(position.coords.latitude,position.coords.longitude); // TRP - Center the map around current location map.setCenter(currentLocation); }, function() { handleNoGeolocation(browserSupportFlag); }); } else { // Browser doesn't support Geolocation browserSupportFlag = false; handleNoGeolocation(browserSupportFlag); } } function handleNoGeolocation(errorFlag) { if (errorFlag == true) { // TRP - Default location is Detroit, MI currentLocation = detroit; contentString = "Error: The Geolocation service failed."; } else { // TRP - This should never run. It's embedded in a UIWebView, running on iPhone contentString = "Error: Your browser doesn't support geolocation."; } // TRP - Set the map to the default location and display the error message map.setCenter(currentLocation); infowindow.setContent(contentString); infowindow.setPosition(currentLocation); infowindow.open(map); } </script> </head> <body style="margin:0px; padding:0px;" onload="initialize()"> <div id="map_canvas" style="width:100%; height:100%"></div> </body> </html>

    Read the article

  • How do I get a delete trigger working using fluent API in CTP5?

    - by user668472
    I am having trouble getting referential integrity dialled down enough to allow my delete trigger to fire. I have a dependent entity with three FKs. I want it to be deleted when any of the principal entities is deleted. For the principal entities Role and OrgUnit (see below) I can rely on conventions to create the required one-to-many relationships, and cascade delete does what I want, i.e. the Association is removed when either principal is deleted. For Member, however, I have multiple cascade delete paths (not shown here) which SQL Server doesn't like, so I need to use the fluent API to disable cascade deletes. Here is my (simplified) model: public class Association { public int id { get; set; } public int roleid { get; set; } public virtual Role role { get; set; } public int? memberid { get; set; } public virtual Member member { get; set; } public int orgunitid { get; set; } public virtual OrgUnit orgunit { get; set; } } public class Role { public int id { get; set; } public virtual ICollection<Association> associations { get; set; } } public class Member { public int id { get; set; } public virtual ICollection<Association> associations { get; set; } } public class Organization { public int id { get; set; } public virtual ICollection<Association> associations { get; set; } } My first run at the fluent API code looks like this: protected override void OnModelCreating(System.Data.Entity.ModelConfiguration.ModelBuilder modelBuilder) { DbDatabase.SetInitializer<ConfDB_Model>(new ConfDBInitializer()); modelBuilder.Entity<Member>() .HasMany(m=>m.associations) .WithOptional(a=>a.member) .HasForeignKey(a=>a.memberid) .WillCascadeOnDelete(false); } My seed function creates the delete trigger: protected override void Seed(ConfDB_Model context) { context.Database.SqlCommand("CREATE TRIGGER MemberAssocTrigger ON dbo.Members FOR DELETE AS DELETE Associations FROM Associations, deleted WHERE Associations.memberid = deleted.id"); } PROBLEM: When I run this and create a Role, a Member, an OrgUnit, and an Association tying the three together, all is fine. When I delete the Role, the Association gets cascade deleted as I expect. HOWEVER, when I delete the Member I get an exception with a referential integrity error. I have tried setting ON DELETE SET NULL because my memberid FK is nullable, but SQL complains again about multiple cascade paths, so apparently I can cascade nothing in the Member-Association relationship. To get this to work I must add the following code to Seed(): context.Database.SqlCommand("ALTER TABLE dbo.ACLEntries DROP CONSTRAINT member_aclentries"); As you can see, this drops the constraint created by the model builder. QUESTION: this feels like a complete hack. Is there a way using the fluent API for me to say that referential integrity should NOT be checked, or otherwise to get it to relax enough for the Member delete to work and allow the trigger to be fired? Thanks in advance for any help you can offer. Although fluent APIs may be "fluent" I find them far from intuitive.
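    A minimal sketch of the configuration being described, for reference: cascade delete is left on for the required Role relationship and switched off only for the optional Member relationship. The ModelBuilder, WithOptional and WillCascadeOnDelete calls are taken from the question; WithRequired is assumed to exist alongside them in the CTP5-era API. This only illustrates the mapping, it is not an answer to the trigger/constraint question itself.

        protected override void OnModelCreating(System.Data.Entity.ModelConfiguration.ModelBuilder modelBuilder)
        {
            // Required principal: conventions would map this anyway; cascade delete stays on,
            // so deleting a Role removes its Associations.
            modelBuilder.Entity<Role>()
                .HasMany(r => r.associations)
                .WithRequired(a => a.role)
                .HasForeignKey(a => a.roleid);

            // Optional principal: turn cascade delete off to avoid multiple cascade paths,
            // leaving orphan cleanup to the database trigger created in Seed().
            modelBuilder.Entity<Member>()
                .HasMany(m => m.associations)
                .WithOptional(a => a.member)
                .HasForeignKey(a => a.memberid)
                .WillCascadeOnDelete(false);
        }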

    Read the article

  • Javascript and Twitter API rate limitation? (Changing variable values in a loop)

    - by Pablo
    Hello, I have adapted a script from an example at http://github.com/remy/twitterlib. It's a script that queries my Twitter timeline every 10 seconds, to get only the messages that begin with a musical notation. It's already working, but I don't know if it is the best way to do this... The Twitter API has a rate limit of 150 requests per hour per IP (queries from the same user). At the moment, my Twitter API access gets blocked after 25 minutes because of the 10-second frequency between queries. If I set a frequency of 25 seconds between queries, I am below the hourly rate limit, but the first 10 posts are shown very slowly. I think this way I can guarantee to be below the Twitter API rate limit and still show the first 10 posts at normal speed: For the first 10 posts, I would like to set a frequency of 5 seconds between queries. For the rest of the posts, I would like to set a frequency of 25 seconds between queries. I think if I add a loop somewhere in the code along the lines of the previous sentences, changing the "frecuency" value from 5000 to 25000 after the 10th query (or after 50 seconds, which is the same), that's it... Can you help me modify the code below to make it work? Thank you in advance. var Queue = function (delay, callback) { var q = [], timer = null, processed = {}, empty = null, ignoreRT = twitterlib.filter.format('-"RT @"'); function process() { var item = null; if (q.length) { callback(q.shift()); } else { this.stop(); setTimeout(empty, 5000); } return this; } return { push: function (item) { var green = [], i; if (!(item instanceof Array)) { item = [item]; } if (timer == null && q.length == 0) { this.start(); } for (i = 0; i < item.length; i++) { if (!processed[item[i].id] && twitterlib.filter.match(item[i], ignoreRT)) { processed[item[i].id] = true; q.push(item[i]); } } q = q.sort(function (a, b) { return a.id > b.id; }); return this; }, start: function () { if (timer == null) { timer = setInterval(process, delay); } return this; }, stop: function () { clearInterval(timer); timer = null; return this; }, empty: function (fn) { empty = fn; return this; }, q: q, next: process }; }; $.extend($.expr[':'], { below: function (a, i, m) { var y = m[3]; return $(a).offset().top > y; } }); function renderTweet(data) { var html = ''; html += ''; html += twitterlib.ify.clean(data.text); html += ''; since_id = data.id; return html; } function passToQueue(data) { if (data.length) { twitterQueue.push(data.reverse()); } } var frecuency = 10000; // The lapse between each new Queue var since_id = 1; var run = function () { twitterlib .timeline('twitteruser', { filter : "'?'", limit: 10 }, passToQueue) }; var twitterQueue = new Queue(frecuency, function (item) { var tweet = $(renderTweet(item)); var tweetClone = tweet.clone().hide().css({ visibility: 'hidden' }).prependTo('#tweets').slideDown(1000); tweet.css({ top: -200, position: 'absolute' }).prependTo('#tweets').animate({ top: 0 }, 1000, function () { tweetClone.css({ visibility: 'visible' }); $(this).remove(); }); $('#tweets p:below(' + window.innerHeight + ')').remove(); }).empty(run); run();

    Read the article

  • Creating a Build Definition using the TFS 2010 API

    - by Jakob Ehn
    In this post I will show how to create a new build definition in TFS 2010 using the TFS API. When creating a build definition manually, using Team Explorer, the necessary steps are laid out in the New Build Definition Wizard. So, let's see what the code looks like, in the same order. To start off, we need to connect to TFS and get a reference to the IBuildServer object: TfsTeamProjectCollection server = new TfsTeamProjectCollection(new Uri("http://<tfs>:<port>/tfs")); server.EnsureAuthenticated(); IBuildServer buildServer = (IBuildServer) server.GetService(typeof (IBuildServer)); General First, we create an IBuildDefinition object for the team project and set a name and description for it: var buildDefinition = buildServer.CreateBuildDefinition(teamProject); buildDefinition.Name = "TestBuild"; buildDefinition.Description = "description here..."; Trigger Next up, we set the trigger type. For this one, we set it to Individual, which corresponds to the Continuous Integration - Build each check-in trigger option: buildDefinition.ContinuousIntegrationType = ContinuousIntegrationType.Individual; Workspace For the workspace mappings, we create two mappings here, where one is a cloak. Note the use of the $(SourceDir) variable, which is expanded by Team Build into the sources directory when running the build. buildDefinition.Workspace.AddMapping("$/Path/project.sln", "$(SourceDir)", WorkspaceMappingType.Map); buildDefinition.Workspace.AddMapping("$/OtherPath/", "", WorkspaceMappingType.Cloak); Build Defaults In the build defaults, we set the build controller and the drop location. To get a build controller, we can (for example) use the GetBuildController method to get an existing build controller by name: buildDefinition.BuildController = buildServer.GetBuildController(buildController); buildDefinition.DefaultDropLocation = @"\\SERVER\Drop\TestBuild"; Process So far, this was easy. Now we get to the tricky part. TFS 2010 Build is based on Windows Workflow 4.0. The build process is defined in a separate .XAML file called a Build Process Template. By default, every new team project contains two build process templates, called DefaultTemplate and UpgradeTemplate. In this sample, we want to create a build definition using the default template. We use the QueryProcessTemplates method to get a reference to the default template for the current team project:   //Get default template var defaultTemplate = buildServer.QueryProcessTemplates(teamProject).Where(p => p.TemplateType == ProcessTemplateType.Default).First(); buildDefinition.Process = defaultTemplate;   There are several build process parameters that can be set for the default build process template. Only one of these is required, the ProjectsToBuild parameter, which contains the solution(s) and configuration(s) that should be built. To set this info, we use the ProcessParameters property of the IBuildDefinition interface. The format of this property is actually just a serialized dictionary (IDictionary<string, object>) that maps a key (parameter name) to a value, which can be any kind of object. This is rather messy, but fortunately there is a helper class called WorkflowHelpers in the Microsoft.TeamFoundation.Build.Workflow namespace that simplifies working with this persistence format a bit. 
The following code shows how to set the BuildSettings information for a build definition: //Set process parameters var process = WorkflowHelpers.DeserializeProcessParameters(buildDefinition.ProcessParameters); //Set BuildSettings properties BuildSettings settings = new BuildSettings(); settings.ProjectsToBuild = new StringList("$/pathToProject/project.sln"); settings.PlatformConfigurations = new PlatformConfigurationList(); settings.PlatformConfigurations.Add(new PlatformConfiguration("Any CPU", "Debug")); process.Add("BuildSettings", settings); buildDefinition.ProcessParameters = WorkflowHelpers.SerializeProcessParameters(process); The other build process parameters of a build definition can be set using the same approach. Retention Policy This one is easy, we just clear the default settings and set our own: buildDefinition.RetentionPolicyList.Clear(); buildDefinition.AddRetentionPolicy(BuildReason.Triggered, BuildStatus.Succeeded, 10, DeleteOptions.All); buildDefinition.AddRetentionPolicy(BuildReason.Triggered, BuildStatus.Failed, 10, DeleteOptions.All); buildDefinition.AddRetentionPolicy(BuildReason.Triggered, BuildStatus.Stopped, 1, DeleteOptions.All); buildDefinition.AddRetentionPolicy(BuildReason.Triggered, BuildStatus.PartiallySucceeded, 10, DeleteOptions.All); Save It! And we're done, let's save the build definition: buildDefinition.Save(); That's it!
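    To sanity-check the new definition you could also queue a build of it straight away. This is not part of the original post; it is a minimal sketch that assumes the build controller and drop location set above actually exist on your server.

        // Queue a build of the freshly created definition (illustrative only).
        IBuildRequest request = buildDefinition.CreateBuildRequest();
        IQueuedBuild queuedBuild = buildServer.QueueBuild(request);
        Console.WriteLine("Queued build {0} for definition {1}", queuedBuild.Id, buildDefinition.Name);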

    Read the article

  • Oracle HRMS API – Create or Update Employee Phone

    - by PRajkumar
    API --  hr_phone_api.create_or_update_phone   Example -- DECLARE        ln_phone_id                              PER_PHONES.PHONE_ID%TYPE;        ln_object_version_number    PER_PHONES.OBJECT_VERSION_NUMBER%TYPE; BEGIN    -- Create or Update Employee Phone Detail    -- -----------------------------------------------------------     hr_phone_api.create_or_update_phone     (   -- Input data elements         -- -----------------------------         p_date_from                             => TO_DATE('13-JUN-2011'),         p_phone_type                          => 'W1',         p_phone_number                   => '9999999',         p_parent_id                              => 32979,         p_parent_table                         => 'PER_ALL_PEOPLE_F',         p_effective_date                       => TO_DATE('13-JUN-2011'),         -- Output data elements         -- --------------------------------         p_phone_id                              => ln_phone_id,         p_object_version_number    => ln_object_version_number      );    COMMIT; EXCEPTION       WHEN OTHERS THEN                     ROLLBACK;                      dbms_output.put_line(SQLERRM); END; / SHOW ERR;  

    Read the article

  • TFS 2012 API Create Alert Subscriptions

    - by Bob Hardister
    Originally posted on: http://geekswithblogs.net/BobHardister/archive/2013/07/24/tfs-2012-api-create-alert-subscriptions.aspxThere were only a few post on this and I felt like really important information was left out: What the defaults are How to create the filter string Here’s the code to create the subscription. Get the Collection public TfsTeamProjectCollection GetCollection(string collectionUrl) { try { //connect to the TFS collection using the active user TfsTeamProjectCollection tpc = new TfsTeamProjectCollection(new Uri(collectionUrl)); tpc.EnsureAuthenticated(); return tpc; } catch (Exception) { return null; } } Use Impersonation Because my app is used to create “support tickets” as stories in TFS, I use impersonation so the subscription is setup for the “requester.”  That way I can take all the defaults for the subscription delivery preferences. public TfsTeamProjectCollection GetCollectionImpersonation(string collectionUrl, string impersonatingUserAccount) { // see: http://blogs.msdn.com/b/taylaf/archive/2009/12/04/introducing-tfs-impersonation.aspx try { TfsTeamProjectCollection tpc = GetCollection(collectionUrl); if (!(tpc == null)) { //get the TFS identity management service (v2 is 2012 only) IIdentityManagementService2 ims = tpc.GetService<IIdentityManagementService2>(); //look up the user we want to impersonate TeamFoundationIdentity identity = ims.ReadIdentity(IdentitySearchFactor.AccountName, impersonatingUserAccount, MembershipQuery.None, ReadIdentityOptions.None); //create a new connection using the impersonated user account //note: do not ensure authentication because the impersonated user may not have //windows authentication at execution if (!(identity == null)) { TfsTeamProjectCollection itpc = new TfsTeamProjectCollection(tpc.Uri, identity.Descriptor); return itpc; } else { //the user account is not found return null; } } else { return null; } } catch (Exception) { return null; } } Create the Alert Subscription public bool SetWiAlert(string collectionUrl, string projectName, int wiId, string emailAddress, string userAccount) { bool setSuccessful = false; try { //use impersonation so the event service creating the subscription will default to //the correct account: otherwise domain ambiguity could be a problem TfsTeamProjectCollection itpc = GetCollectionImpersonation(collectionUrl, userAccount); if (!(itpc == null)) { IEventService es = itpc.GetService(typeof(IEventService)) as IEventService; DeliveryPreference deliveryPreference = new DeliveryPreference(); //deliveryPreference.Address = emailAddress; deliveryPreference.Schedule = DeliverySchedule.Immediate; deliveryPreference.Type = DeliveryType.EmailHtml; //the following line does not work for two reasons: //string filter = string.Format("\"ID\" = '{0}' AND \"Authorized As\" <> '[Me]'", wiId); //1. the create fails because there is a space between Authorized As //2. 
the explicit query criteria are all incorrect anyway // see uncommented line for what does work: you have to create the subscription mannually // and then get it to view what the filter string needs to be (see following commented code) //this works string filter = string.Format("\"CoreFields/IntegerFields/Field[Name='ID']/NewValue\" = '12175'" + " AND \"CoreFields/StringFields/Field[Name='Authorized As']/NewValue\"" + " <> '@@MyDisplayName@@'", projectName, wiId); string eventName = string.Format("<PT N=\"ALM Ticket for Work Item {0}\"/>", wiId); es.SubscribeEvent("WorkItemChangedEvent", filter, deliveryPreference, eventName); ////use this code to get existing subscriptions: you can look at manually created ////subscriptions to see what the filter string needs to be //IIdentityManagementService2 ims = itpc.GetService<IIdentityManagementService2>(); //TeamFoundationIdentity identity = ims.ReadIdentity(IdentitySearchFactor.AccountName, // userAccount, // MembershipQuery.None, // ReadIdentityOptions.None); //var existingsubscriptions = es.GetEventSubscriptions(identity.Descriptor); setSuccessful = true; return setSuccessful; } else { return setSuccessful; } } catch (Exception) { return setSuccessful; } }
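    For completeness (not shown in the original post), calling the helper above might look something like this. The collection URL, project name, work item id and accounts are all placeholder values, and AlertHelper is just an assumed name for whatever class hosts SetWiAlert.

        // Hypothetical usage of the SetWiAlert method defined above.
        var alerts = new AlertHelper();
        bool subscribed = alerts.SetWiAlert(
            "http://tfsserver:8080/tfs/DefaultCollection",  // collectionUrl
            "MyTeamProject",                                 // projectName
            12175,                                           // wiId - work item to watch
            "jane.doe@example.com",                          // emailAddress (unused by the method as written)
            @"DOMAIN\jane.doe");                             // userAccount to impersonate
        if (!subscribed)
        {
            Console.WriteLine("Alert subscription could not be created.");
        }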

    Read the article

  • google docs + web app

    - by King
    Hi guys, I am trying to create a web app to share docs with all editor features (just like Google Docs). My main requirements for this app are as follows: 1. Should have all editor features (can be done using the OpenOffice API, Google Docs API, or Microsoft Office Web Apps API) 2. Should be shared between multiple users, editable by multiple users, with other sync features (can be done using the Google Docs API or Microsoft Office Web Apps) 3. Can save the document created and edited on my own/custom server address. (Which API can support this? I know OpenOffice can support this) Can you please suggest one API which can be used to do all of the above? Also please point out if I am underestimating any of the APIs above regarding any functionality that I think is not supported. Thanks King

    Read the article

  • Implementation of APIs on different platforms

    - by b-gen-jack-o-neill
    OK, this is basically about any non-default OS API running on any OS. But for my example let's consider the Windows platform and the SDL (Simple DirectMedia Layer) API. Actually this question came to my mind when I was reading about SDL. Originally, I thought that on Windows (and basically any other OS) you must use the OS API to perform certain actions, like writing to the screen, creating a window and so on, because that API knows what kernel calls and system subroutine calls it has to make. But when I read about SDL, it surprised me, because you cannot make the computer do anything more than the OS can, since you cannot access the HW directly, only through the OS API, from console allocation to DirectX. So, my question actually is: how do these non-default OS APIs work? Do they use (wrap) the original system API (like MFC wraps the Win32 API)? Or do they actually have direct access to the Windows kernel? Or is there any third way, in between? Thanks.

    Read the article

  • How does Google implement Microsoft Exchange access?

    - by user358041
    I know with Android 2.x there is the ability to tap into Microsoft Exchange, for at least email, if not calendar and contacts. I would like to see how this was accomplished. Particularly because Microsoft Exchange exposes SOAP web services, and I understand there is no native Android support for SOAP. Since this is open source, shouldn't I be able to find something in the Android source? If so, can you point me in the right direction of where to find it in the ~4Gig (!) source? I want to develop an application that accesses Exchange contacts and calendars, but don't want to reinvent that piece. Any other guidance would be appreciated. Thanks.

    Read the article

  • HTML5 offline cache google font api

    - by Bala Clark
    I'm trying to create an offline HTML5 test application, and am playing with the new google fonts api at the same time. Does anyone have any ideas how to cache the remote fonts? Simply putting the api call in the cache manifest doesn't work, I assume this is because the api actually loads other files (ttf, eot, etc). Any ideas if using the font api offline would be possible? For reference this is the call I am making: http://fonts.googleapis.com/css?family=IM+Fell+English|Molengo|Reenie+Beanie

    Read the article

  • Implement eBay Finding/Feedback API [Java]

    - by zengr
    Hi, I have just started with the ebay Finding API and Feedback API and I need to deploy a basic API implementation on GAE/J. The problems are: How do we start with the local dev environment of the ebay SDK? There is no Java tutorial for the Finding API and feedback. GAE/J + ebay APIs won't cause any complexity right? I am looking for head start in the right direction,I am using Eclipse + GAE plugin.

    Read the article

  • TFS API – Process Template currently applied to the Team Project

    - by Tarun Arora
    Download Demo Solution - here In this blog post I’ll show you how to use the TFS API to get the name of the Process Template that is currently applied to the Team Project. You can also download the demo solution attached, I’ve tested this solution against TFS 2010 and TFS 2011.    1. Connecting to TFS Programmatically I have a blog post that shows you from where to download the VS 2010 SP1 SDK and how to connect to TFS programmatically. private TfsTeamProjectCollection _tfs; private string _selectedTeamProject;   TeamProjectPicker tfsPP = new TeamProjectPicker(TeamProjectPickerMode.SingleProject, false); tfsPP.ShowDialog(); this._tfs = tfsPP.SelectedTeamProjectCollection; this._selectedTeamProject = tfsPP.SelectedProjects[0].Name; 2. Programmatically get the Process Template details of the selected Team Project I’ll be making use of the VersionControlServer service to get the Team Project details and the ICommonStructureService to get the Project Properties. private ProjectProperty[] GetProcessTemplateDetailsForTheSelectedProject() { var vcs = _tfs.GetService<VersionControlServer>(); var ics = _tfs.GetService<ICommonStructureService>(); ProjectProperty[] ProjectProperties = null; var p = vcs.GetTeamProject(_selectedTeamProject); string ProjectName = string.Empty; string ProjectState = String.Empty; int templateId = 0; ProjectProperties = null; ics.GetProjectProperties(p.ArtifactUri.AbsoluteUri, out ProjectName, out ProjectState, out templateId, out ProjectProperties); return ProjectProperties; } 3. What’s the catch? The ProjectProperties will contain a property “Process Template” which as a value has the name of the process template. So, you will be able to use the below line of code to get the name of the process template. var processTemplateName = processTemplateDetails.Where(pt => pt.Name == "Process Template").Select(pt => pt.Value).FirstOrDefault();   However, if the process template does not contain the property “Process Template” then you will need to add it. So, the question becomes how do i add the Name property to the Process Template. Download the Process Template from the Process Template Manager on your local        Once you have downloaded the Process Template to your local machine, navigate to the Classification folder with in the template       From the classification folder open Classification.xml        Add a new property <property name=”Process Template” value=”MSF for CMMI Process Improvement v5.0” />           4. 
Putting it all together… using System; using System.Collections.Generic; using System.ComponentModel; using System.Data; using System.Drawing; using System.Linq; using System.Text; using System.Windows.Forms; using Microsoft.TeamFoundation.Client; using Microsoft.TeamFoundation.VersionControl.Client; using Microsoft.TeamFoundation.Server; using System.Diagnostics; using Microsoft.TeamFoundation.WorkItemTracking.Client; namespace TfsAPIDemoProcessTemplate { public partial class Form1 : Form { public Form1() { InitializeComponent(); } private TfsTeamProjectCollection _tfs; private string _selectedTeamProject; private void btnConnect_Click(object sender, EventArgs e) { TeamProjectPicker tfsPP = new TeamProjectPicker(TeamProjectPickerMode.SingleProject, false); tfsPP.ShowDialog(); this._tfs = tfsPP.SelectedTeamProjectCollection; this._selectedTeamProject = tfsPP.SelectedProjects[0].Name; var processTemplateDetails = GetProcessTemplateDetailsForTheSelectedProject(); listBox1.Items.Clear(); listBox1.Items.Add(String.Format("Team Project Selected => '{0}'", _selectedTeamProject)); listBox1.Items.Add(Environment.NewLine); var processTemplateName = processTemplateDetails.Where(pt => pt.Name == "Process Template") .Select(pt => pt.Value).FirstOrDefault(); if (!string.IsNullOrEmpty(processTemplateName)) { listBox1.Items.Add(Environment.NewLine); listBox1.Items.Add(String.Format("Process Template Name: {0}", processTemplateName)); } else { listBox1.Items.Add(String.Format("The Process Template does not have the 'Name' property set up")); listBox1.Items.Add(String.Format("***TIP: Download the Process Template and in Classification.xml add a new property Name, update the template then you will be able to see the Process Template Name***")); listBox1.Items.Add(String.Format(" - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -")); } } private ProjectProperty[] GetProcessTemplateDetailsForTheSelectedProject() { var vcs = _tfs.GetService<VersionControlServer>(); var ics = _tfs.GetService<ICommonStructureService>(); ProjectProperty[] ProjectProperties = null; var p = vcs.GetTeamProject(_selectedTeamProject); string ProjectName = string.Empty; string ProjectState = String.Empty; int templateId = 0; ProjectProperties = null; ics.GetProjectProperties(p.ArtifactUri.AbsoluteUri, out ProjectName, out ProjectState, out templateId, out ProjectProperties); return ProjectProperties; } } } Thank you for taking the time out and reading this blog post. If you enjoyed the post, remember to subscribe to http://feeds.feedburner.com/TarunArora. Have you come across a better way of doing this, please share your experience here. Questions/Feedback/Suggestions, etc please leave a comment. Thank You! Share this post : CodeProject

    Read the article

  • The UIManager Pattern

    - by Duncan Mills
    One of the most common mistakes that I see when reviewing ADF application code is the sin of storing UI component references, most commonly things like table or tree components, in Session or PageFlow scope. The reasons why this is bad are simple: firstly, these UI object references are not serializable, so they would not survive a session migration between servers; and secondly, there is no guarantee that the framework will re-use the same component tree from request to request, although in practice it generally does do so. So the danger here is that at best you end up with an NPE after your session has migrated, and at worst you end up pinning old generations of the component tree, happily eating up your precious memory. So that's clear: we should never, ever, be storing references to components anywhere other than request scope (or maybe backing bean scope). So double check the scope of those binding attributes that map component references into a managed bean in your applications.  Why is it Such a Common Mistake?  At this point I want to examine why there is this urge to hold onto these references anyway. After all, JSF will obligingly populate your backing beans with the fresh and correct reference when needed.  In most cases, it seems that the rationale is down to a lack of distinction within the application between what is data and what is presentation. I think perhaps a cause of this is the logical separation between the business data behind the ADF data binding (#{bindings}) façade and the UI components themselves. Developers tend to think: OK, this is my data layer behind the bindings object, and everything else is just UI.  Of course that's not the case.  The UI layer itself will have state which is intrinsically linked to the UI presentation rather than the business model, but at the same time should not be tightly bound to a specific instance of any single UI component. So here's the problem.  I think developers try to use the UI components as state-holders for this kind of data, rather than using them to represent that state. An example of this might be something like the selection state of a tabset (panelTabbed): you might be interested in knowing what the currently disclosed tab is. The temptation that leads to the component reference sin is to go and ask the tabset what the selection is.  That of course is fine in context - e.g. a handler within the same request scoped bean that's got the binding to the tabset. However, it leads to problems when you subsequently want the same information outside of the immediate scope.  The simple solution seems to be to chuck that component reference into session scope and then you can simply re-check it in the same way, leading of course to this mistake. Turn it on its Head  So the correct solution to this is to turn the problem on its head. If you are going to be interested in the value or state of some component outside of the immediate request context then it becomes persistent state (persistent in the sense that it extends beyond the lifespan of a single request). So you need to externalize that state outside of the component and have the component reference and manipulate that state as needed rather than owning it. This is what I call the UIManager pattern.  Defining the Pattern The UIManager pattern really is very simple. The premise is that every application should define a session scoped managed bean, appropriately named UIManager, which is specifically responsible for holding this persistent, UI-component-related state.  
The actual makeup of the UIManager class varies depending on the needs of the application and the amount of state that needs to be stored. Generally I'll start off with a Map in which individual flags can be created as required, although you could opt for a more formal set of typed member variables with getters and setters, or indeed a mix. This UIManager class is defined as a session scoped managed bean (#{uiManager}) in the faces-config.xml.  The pattern is to then inject this instance of the class into any other managed bean (usually request scope) that needs it using a managed property.  So typically you'll have something like this:   <managed-bean>     <managed-bean-name>uiManager</managed-bean-name>     <managed-bean-class>oracle.demo.view.state.UIManager</managed-bean-class>     <managed-bean-scope>session</managed-bean-scope>   </managed-bean>  Which is then injected into any backing bean that needs it:    <managed-bean>     <managed-bean-name>mainPageBB</managed-bean-name>     <managed-bean-class>oracle.demo.view.MainBacking</managed-bean-class>     <managed-bean-scope>request</managed-bean-scope>     <managed-property>       <property-name>uiManager</property-name>       <property-class>oracle.demo.view.state.UIManager</property-class>       <value>#{uiManager}</value>     </managed-property>   </managed-bean> In this case the backing bean in question needs a member variable to hold and reference the UIManager: private UIManager _uiManager;  This should be exposed via a getter and setter pair with names that match the managed property name (e.g. setUiManager(UIManager _uiManager), getUiManager()).  This will then give your code within the backing bean full access to the UI state. UI components in the page can, of course, directly reference the uiManager bean in their properties; for example, going back to the tab-set example you might have something like this: <af:panelTabbed>   <af:showDetailItem text="First"                disclosed="#{uiManager.settings['MAIN_TABSET_STATE']['FIRST']}"> ...   </af:showDetailItem>   <af:showDetailItem text="Second"                      disclosed="#{uiManager.settings['MAIN_TABSET_STATE']['SECOND']}">     ...   </af:showDetailItem>   ... </af:panelTabbed> Where in this case the settings member within the UIManager is a Map which contains a Map of Booleans for each tab under the MAIN_TABSET_STATE key. (Just an example; you could choose to store just an identifier for the selected tab or whatever, how you choose to store the state within the UIManager is up to you.) Get into the Habit So we can see that the UIManager pattern is not a great strain to implement for an application and can even be retrofitted to an existing application with ease. The point is, however, that you should always take this approach rather than committing the sin of persistent component references, which will bite you in the future, or scattering shotgunned UI flags across the session, which are hard to maintain.  If you take the approach of always accessing all UI state via the uiManager, or perhaps a pageScope focused variant of it, you'll find your applications much easier to understand and maintain. Do it today!

    Read the article

  • Oracle HRMS API – Create Employee Address

    - by PRajkumar
    API - hr_person_address_api.create_person_address Example --   DECLARE     ln_address_id                           PER_ADDRESSES.ADDRESS_ID%TYPE;     ln_object_version_number    PER_ADDRESSES.OBJECT_VERSION_NUMBER%TYPE; BEGIN    -- Create Employee Address    -- --------------------------------------     hr_person_address_api.create_person_address     (     -- Input data elements           -- ------------------------------           p_effective_date                    => TO_DATE('08-JUN-2011'),           p_person_id                           => 32979,           p_primary_flag                     => 'Y',           p_style                                     => 'US',           p_date_from                           => TO_DATE('08-JUN-2011'),           p_address_line1                   => '50 Main Street',           p_address_line2                   => NULL,           p_town_or_city                     => 'White Plains',           p_region_1                              => 'Westchester',           p_region_2                              => 'NY',           p_postal_code                        => 10601,           p_country                                => 'US',           -- Output data elements           -- --------------------------------           p_address_id                          => ln_address_id,           p_object_version_number   => ln_object_version_number    );    COMMIT; EXCEPTION        WHEN OTHERS THEN                        ROLLBACK;                        dbms_output.put_line(SQLERRM); END; / SHOW ERR;    

    Read the article

  • Twitpic API from iPhone - pic posted but no URL returned?

    - by Jamie Badman
    This is a weird one... With the help of people here, I've got my iPhone app posting to TwitPic successfully - and when I first got it working, I could see an XML result being returned too... But for some reason over the past two days, the API call seems to succeed - the pic appears on TwitPic - but... the response seems to be empty... Anyone have any ideas? Seen anything similar? The code I use to invoke the API call is: ASIFormDataRequest *request = [[[ASIFormDataRequest alloc] initWithURL:url] autorelease]; [request setData:twitpicImage forKey:@"media"]; [request setPostValue:username forKey:@"username"]; [request setPostValue:password forKey:@"password"]; // Initiate the WebService request [request start]; // Need to find out how I can access the result from this call... /* Result structure should be: <?xml version="1.0" encoding="UTF-8"?> <rsp stat="ok"> <mediaid>abc123</mediaid> <mediaurl>http://twitpic.com/abc123</mediaurl> </rsp> */ // Check for errors if ([[request responseHeaders] objectForKey:@"stat"] != @"ok"){ UIAlertView *errorAlert = [[UIAlertView alloc] initWithTitle:@"TwitPic Submission" message:[[request responseHeaders] objectForKey:@"mediaurl"] delegate:nil cancelButtonTitle:@"OK!" otherButtonTitles:nil]; [errorAlert show]; [errorAlert release]; } NSString *twitpicURL = [[request responseHeaders] objectForKey:@"mediaurl"]; UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"TwitPic Submission" message:twitpicURL delegate:nil cancelButtonTitle:@"OK!" otherButtonTitles:nil]; I tried just dumping out [request responseString]... that's empty now also. That WAS showing a response, for sure. As always, any help gratefully received. I'll give back once I'm able! Cheers, Jamie.

    Read the article

  • Multiple dex files define Lcom/google/api/client/auth/oauth/AbstractOAuthGetToken;

    - by Elad Benda
    I have just followed this tutorial: https://developers.google.com/drive/quickstart-android so I don't see a reason for duplicated libs in my project. I have added the drive Client lib via Google plugin for eclipse When I build my android app with this manifest <uses-sdk android:minSdkVersion="15" android:targetSdkVersion="16" /> <uses-permission android:name="android.permission.READ_CALENDAR" /> <uses-permission android:name="android.permission.WRITE_CALENDAR" /> <uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE"/> <uses-permission android:name="android.permission.GET_ACCOUNTS"/> <uses-permission android:name="android.permission.INTERNET" /> <application android:icon="@drawable/todo" android:label="@string/app_name" > <activity android:name=".TodosOverviewActivity" android:label="@string/app_name" > <intent-filter> <action android:name="android.intent.action.MAIN" /> <category android:name="android.intent.category.LAUNCHER" /> </intent-filter> </activity> <activity android:name=".TodoDetailActivity" android:windowSoftInputMode="stateVisible|adjustResize" > <intent-filter> <action android:name="android.intent.action.SEND" /> <category android:name="android.intent.category.DEFAULT" /> <data android:mimeType="image/*" /> </intent-filter> </activity> <provider android:name=".contentprovider.MyTodoContentProvider" android:authorities="de.vogella.android.todos.contentprovider" > </provider> </application> I get the following error: [2013-10-27 00:43:58 - Dex Loader] Unable to execute dex: Multiple dex files define Lcom/google/api/client/auth/oauth/AbstractOAuthGetToken; [2013-10-27 00:43:58 - de.vogella.android.todos] Conversion to Dalvik format failed: Unable to execute dex: Multiple dex files define Lcom/google/api/client/auth/oauth/AbstractOAuthGetToken; how can I fix this?

    Read the article

  • Parse Facebook API data using loop for getting fan page ID#s?

    - by Brandon Lee
    I've been learning how to parse json data returned from the facebook-api. I've figured out how to fetch fan pages from a specific profile id and want to parse them using a loop! Heres the code and example I have below: This is the data I get back from the facebook-api Array ( [0] => Array ( [page_id] => XXXXXX60828 ) [1] => Array ( [page_id] => XXXXXX0750 ) [2] => Array ( [page_id] => XXXXXX91225 ) [3] => Array ( [page_id] => XXXXXX1960343 ) [4] => Array ( [page_id] => XXXXXX60863 ) [5] => Array ( [page_id] => XXXXXX8582 ) ) I need to be able to put this data in a loop and extract the page_id#s out... still getting familiar with json and am having issues figuring this out? How can I get this in a loop using for each and strip out the page id#s?

    Read the article

  • jQuery Google Visualization API graph's data rows do not work

    - by marharépa
    Hi! I'd like to use the Google drawVisualization API. Example: var data = new google.visualization.DataTable(); data.addColumn('string'); data.addColumn('number'); data.addRows([ ['a', 14], ['b', 47], ['c', 80], ['d', 55], ['e', 16], ['f', 90], ['g', 29], ['h', 23], ['i', 58], ['j', 48] ]); My version gets the elements from another Google API, joins them, and then places the variable between ([ and ]) so it looks like the example. var outputGraph = []; for (var i = 0, entry; entry = entries[i]; ++i) { var asd = [ entry.getValueOf('ga:pageTitle'), entry.getValueOf('ga:pageviews') ].join("',"); outputGraph.push(" ['" + asd + "]"); //get the 2 elements and join them to be like ['asd', 2], } // this is fine, the outputGraph is like ['asd', 2], ['asd', 2], ['asd', 2] as seen in the example var outputGraphFine = ("(["+outputGraph+"])"); // I suspect this is what fails the script. var data = new google.visualization.DataTable(); data.addColumn('string', 'Task'); data.addColumn('number', 'Hours per Day'); data.addRows = outputGraphFine; But it doesn't work. Why?

    Read the article

  • Retrieving license type (linux/windows/windows+sqlserver) for an Amazon EC2 instance via the API?

    - by Geir
    I need to calculate the hourly running costs for my Amazon EC2 instances. This varies even between instances with the same hardware configs (instance types) because I use different Amazon images (AMIs): some plain Windows Server and some Windows Server with SQL Server (both of which have additional costs compared with plain Linux instances). The EC2 Java API has a describeInstances() method which returns Instance objects with metadata such as instance id, instance type (m1.small/large...), state (running, stopped...), public IP, etc. This Instance object also has a .getLicense().getPool() which according to the Java API should return "The license pool from which this license was used (ex: 'windows')." I thought this is where it might also give 'windows+sqlserver' or something to that effect. The getLicense() method does, however, return null. I've navigated around the EC2 web console without being able to find this information, but I'm hoping that it is possible - otherwise it would mean that you cannot identify the true hourly cost of a particular instance unless you know which AMI was used to create it in the first place (plain Windows Server or Windows Server with SQL Server). Anyone? Thanks :) /Geir

    Read the article

  • How can I make an event created through Google Calendar's API send an invitation email?

    - by Cebjyre
    I'm trying to create an event through the API and it is mostly working, with the exception that while the new events are being created in the invitees calendars, no emails are being sent. Creating the event from the web interface is pushing the event through, as well as sending the email (except one account that doesn't get any notifications at all, but that's not relevant to my current problem). The event I am trying to push in is: <entry xmlns='http://www.w3.org/2005/Atom' xmlns:gd='http://schemas.google.com/g/2005'> <category scheme='http://schemas.google.com/g/2005#kind' term='http://schemas.google.com/g/2005#event'></category> <title type='text'>test event</title> <content type='text'>content.</content> <gd:transparency value='http://schemas.google.com/g/2005#event.opaque'> </gd:transparency> <gd:eventStatus value='http://schemas.google.com/g/2005#event.confirmed'> </gd:eventStatus> <gd:where valueString='somewhere'></gd:where> <gd:who email="[redacted]" rel='http://schemas.google.com/g/2005#event.attendee' valueString='Me'><gd:attendeeStatus value='http://schemas.google.com/g/2005#event.invited'/></gd:who> <gd:who email="[redacted again]" rel='http://schemas.google.com/g/2005#event.organizer' valueString='Also Me'><gd:attendeeStatus value='http://schemas.google.com/g/2005#event.accepted'/></gd:who> <gd:when startTime='2010-05-18T15:30:00.000+10:00' endTime='2010-05-18T16:00:00.000+10:00'></gd:when> </entry> And when I request event lists I can't see any large difference between events created through the API and through the web interface.

    Read the article

  • Twitter API Rate Limit - Overcoming on an unauthenticated JSON Get with Objective C?

    - by Cian
    I see the rate limit is 150/hr per IP. This'd be fine, but my application is on a mobile phone network (with shared IP addresses). I'd like to query twitter trends, e.g. GET /trends/1/json. This doesn't require authorization, however what if the user first authorized with my application using OAuth, then hit the JSON API? The request is built as follows: - (void) queryTrends:(NSString *) WOEID { NSString *urlString = [NSString stringWithFormat:@"http://api.twitter.com/1/trends/%@.json", WOEID]; NSURL *url = [NSURL URLWithString:urlString]; NSURLRequest *theRequest=[NSURLRequest requestWithURL:url cachePolicy:NSURLRequestUseProtocolCachePolicy timeoutInterval:10.0]; NSURLConnection *theConnection=[[NSURLConnection alloc] initWithRequest:theRequest delegate:self startImmediately:YES]; if (theConnection) { // Create the NSMutableData to hold the received data. theData = [[NSMutableData data] retain]; } else { NSLog(@"Connection failed in Query Trends"); } //NSData *data = [NSData dataWithContentsOfURL:[NSURL URLWithString:urlString]]; } I have no idea how I'd build this request as an authenticated one however, and haven't seen any examples to this effect online. I've read through the twitter OAuth documentation, but I'm still puzzled as to how it should work. I've experimented with OAuth using Ben Gottlieb's prebuild library, and calling this in my first viewDidLoad: OAuthViewController *oAuthVC = [[OAuthViewController alloc] initWithNibName:@"OAuthTwitterDemoViewController" bundle:[NSBundle mainBundle]]; // [self setViewController:aViewController]; [[self navigationController] pushViewController:oAuthVC animated:YES]; This should store all the keys required in the app's preferences, I just need to know how to build the GET request after authorizing! Maybe this just isn't possible? Maybe I'll have to proxy the requests through a server side application? Any insight would be appreciated!

    Read the article

  • Using the JPA Criteria API, can you do a fetch join that results in only one join?

    - by Shaun
    Using JPA 2.0. It seems that by default (no explicit fetch), @OneToOne(fetch = FetchType.EAGER) fields are fetched in 1 + N queries, where N is the number of results containing an Entity that defines the relationship to a distinct related entity. Using the Criteria API, I might try to avoid that as follows: CriteriaBuilder builder = entityManager.getCriteriaBuilder(); CriteriaQuery<MyEntity> query = builder.createQuery(MyEntity.class); Root<MyEntity> root = query.from(MyEntity.class); Join<MyEntity, RelatedEntity> join = root.join("relatedEntity"); root.fetch("relatedEntity"); query.select(root).where(builder.equals(join.get("id"), 3)); The above should ideally be equivalent to the following: SELECT m FROM MyEntity m JOIN FETCH myEntity.relatedEntity r WHERE r.id = 3 However, the criteria query results in the root table needlessly being joined to the related entity table twice; once for the fetch, and once for the where predicate. The resulting SQL looks something like this: SELECT myentity.id, myentity.attribute, relatedentity2.id, relatedentity2.attribute FROM my_entity myentity INNER JOIN related_entity relatedentity1 ON myentity.related_id = relatedentity1.id INNER JOIN related_entity relatedentity2 ON myentity.related_id = relatedentity2.id WHERE relatedentity1.id = 3 Alas, if I only do the fetch, then I don't have an expression to use in the where clause. Am I missing something, or is this a limitation of the Criteria API? If it's the latter, is this being remedied in JPA 2.1 or are there any vendor-specific enhancements? Otherwise, it seems better to just give up compile-time type checking (I realize my example doesn't use the metamodel) and use dynamic JPQL TypedQueries.

    Read the article

  • Currency Conversion in Oracle BI applications

    - by Saurabh Verma
    Authored by Vijay Aggarwal and Hichem Sellami A typical data warehouse contains Star and/or Snowflake schema, made up of Dimensions and Facts. The facts store various numerical information including amounts. Example; Order Amount, Invoice Amount etc. With the true global nature of business now-a-days, the end-users want to view the reports in their own currency or in global/common currency as defined by their business. This presents a unique opportunity in BI to provide the amounts in converted rates either by pre-storing or by doing on-the-fly conversions while displaying the reports to the users. Source Systems OBIA caters to various source systems like EBS, PSFT, Sebl, JDE, Fusion etc. Each source has its own unique and intricate ways of defining and storing currency data, doing currency conversions and presenting to the OLTP users. For example; EBS stores conversion rates between currencies which can be classified by conversion rates, like Corporate rate, Spot rate, Period rate etc. Siebel stores exchange rates by conversion rates like Daily. EBS/Fusion stores the conversion rates for each day, where as PSFT/Siebel store for a range of days. PSFT has Rate Multiplication Factor and Rate Division Factor and we need to calculate the Rate based on them, where as other Source systems store the Currency Exchange Rate directly. OBIA Design The data consolidation from various disparate source systems, poses the challenge to conform various currencies, rate types, exchange rates etc., and designing the best way to present the amounts to the users without affecting the performance. When consolidating the data for reporting in OBIA, we have designed the mechanisms in the Common Dimension, to allow users to report based on their required currencies. OBIA Facts store amounts in various currencies: Document Currency: This is the currency of the actual transaction. For a multinational company, this can be in various currencies. Local Currency: This is the base currency in which the accounting entries are recorded by the business. This is generally defined in the Ledger of the company. Global Currencies: OBIA provides five Global Currencies. Three are used across all modules. The last two are for CRM only. A Global currency is very useful when creating reports where the data is viewed enterprise-wide. Example; a US based multinational would want to see the reports in USD. The company will choose USD as one of the global currencies. OBIA allows users to define up-to five global currencies during the initial implementation. The term Currency Preference is used to designate the set of values: Document Currency, Local Currency, Global Currency 1, Global Currency 2, Global Currency 3; which are shared among all modules. There are four more currency preferences, specific to certain modules: Global Currency 4 (aka CRM Currency) and Global Currency 5 which are used in CRM; and Project Currency and Contract Currency, used in Project Analytics. When choosing Local Currency for Currency preference, the data will show in the currency of the Ledger (or Business Unit) in the prompt. So it is important to select one Ledger or Business Unit when viewing data in Local Currency. More on this can be found in the section: Toggling Currency Preferences in the Dashboard. Design Logic When extracting the fact data, the OOTB mappings extract and load the document amount, and the local amount in target tables. It also loads the exchange rates required to convert the document amount into the corresponding global amounts. 
If the source system only provides the document amount in the transaction, the extract mapping does a lookup to get the Local currency code, and the Local exchange rate. The Load mapping then uses the local currency code and rate to derive the local amount. The load mapping also fetches the Global Currencies and looks up the corresponding exchange rates. The lookup of exchange rates is done via the Exchange Rate Dimension provided as a Common/Conforming Dimension in OBIA. The Exchange Rate Dimension stores the exchange rates between various currencies for a date range and Rate Type. Two physical tables W_EXCH_RATE_G and W_GLOBAL_EXCH_RATE_G are used to provide the lookups and conversions between currencies. The data is loaded from the source system’s Ledger tables. W_EXCH_RATE_G stores the exchange rates between currencies with a date range. On the other hand, W_GLOBAL_EXCH_RATE_G stores the currency conversions between the document currency and the pre-defined five Global Currencies for each day. Based on the requirements, the fact mappings can decide and use one or both tables to do the conversion. Currency design in OBIA also taps into the MLS and Domain architecture, thus allowing the users to map the currencies to a universal Domain during the implementation time. This is especially important for companies deploying and using OBIA with multiple source adapters. Some Gotchas to Look for It is necessary to think through the currencies during the initial implementation. 1) Identify various types of currencies that are used by your business. Understand what will be your Local (or Base) and Documentation currency. Identify various global currencies that your users will want to look at the reports. This will be based on the global nature of your business. Changes to these currencies later in the project, while permitted, but may cause Full data loads and hence lost time. 2) If the user has a multi source system make sure that the Global Currencies and Global Rate Types chosen in Configuration Manager do have the corresponding source specific counterparts. In other words, make sure for every DW specific value chosen for Currency Code or Rate Type, there is a source Domain mapping already done. Technical Section This section will briefly mention the technical scenarios employed in the OBIA adaptors to extract data from each source system. In OBIA, we have two main tables which store the Currency Rate information as explained in previous sections. W_EXCH_RATE_G and W_GLOBAL_EXCH_RATE_G are the two tables. W_EXCH_RATE_G stores all the Currency Conversions present in the source system. It captures data for a Date Range. W_GLOBAL_EXCH_RATE_G has Global Currency Conversions stored at a Daily level. However the challenge here is to store all the 5 Global Currency Exchange Rates in a single record for each From Currency. Let’s voyage further into the Source System Extraction logic for each of these tables and understand the flow briefly. EBS: In EBS, we have Currency Data stored in GL_DAILY_RATES table. As the name indicates GL_DAILY_RATES EBS table has data at a daily level. However in our warehouse we store the data with a Date Range and insert a new range record only when the Exchange Rate changes for a particular From Currency, To Currency and Rate Type. Below are the main logical steps that we employ in this process. (Incremental Flow only) – Cleanup the data in W_EXCH_RATE_G. Delete the records which have Start Date > minimum conversion date Update the End Date of the existing records. 
Compress the daily data from GL_DAILY_RATES table into Range Records. Incremental map uses $$XRATE_UPD_NUM_DAY as an extra parameter. Generate Previous Rate, Previous Date and Next Date for each of the Daily record from the OLTP. Filter out the records which have Conversion Rate same as Previous Rates or if the Conversion Date lies within a single day range. Mark the records as ‘Keep’ and ‘Filter’ and also get the final End Date for the single Range record (Unique Combination of From Date, To Date, Rate and Conversion Date). Filter the records marked as ‘Filter’ in the INFA map. The above steps will load W_EXCH_RATE_GS. Step 0 updates/deletes W_EXCH_RATE_G directly. SIL map will then insert/update the GS data into W_EXCH_RATE_G. These steps convert the daily records in GL_DAILY_RATES to Range records in W_EXCH_RATE_G. We do not need such special logic for loading W_GLOBAL_EXCH_RATE_G. This is a table where we store data at a Daily Granular Level. However we need to pivot the data because the data present in multiple rows in source tables needs to be stored in different columns of the same row in DW. We use GROUP BY and CASE logic to achieve this. Fusion: Fusion has extraction logic very similar to EBS. The only difference is that the Cleanup logic that was mentioned in step 0 above does not use $$XRATE_UPD_NUM_DAY parameter. In Fusion we bring all the Exchange Rates in Incremental as well and do the cleanup. The SIL then takes care of Insert/Updates accordingly. PeopleSoft:PeopleSoft does not have From Date and To Date explicitly in the Source tables. Let’s look at an example. Please note that this is achieved from PS1 onwards only. 1 Jan 2010 – USD to INR – 45 31 Jan 2010 – USD to INR – 46 PSFT stores records in above fashion. This means that Exchange Rate of 45 for USD to INR is applicable for 1 Jan 2010 to 30 Jan 2010. We need to store data in this fashion in DW. Also PSFT has Exchange Rate stored as RATE_MULT and RATE_DIV. We need to do a RATE_MULT/RATE_DIV to get the correct Exchange Rate. We generate From Date and To Date while extracting data from source and this has certain assumptions: If a record gets updated/inserted in the source, it will be extracted in incremental. Also if this updated/inserted record is between other dates, then we also extract the preceding and succeeding records (based on dates) of this record. This is required because we need to generate a range record and we have 3 records whose ranges have changed. Taking the same example as above, if there is a new record which gets inserted on 15 Jan 2010; the new ranges are 1 Jan to 14 Jan, 15 Jan to 30 Jan and 31 Jan to Next available date. Even though 1 Jan record and 31 Jan have not changed, we will still extract them because the range is affected. Similar logic is used for Global Exchange Rate Extraction. We create the Range records and get it into a Temporary table. Then we join to Day Dimension, create individual records and pivot the data to get the 5 Global Exchange Rates for each From Currency, Date and Rate Type. Siebel: Siebel Facts are dependent on Global Exchange Rates heavily and almost none of them really use individual Exchange Rates. In other words, W_GLOBAL_EXCH_RATE_G is the main table used in Siebel from PS1 release onwards. As of January 2002, the Euro Triangulation method for converting between currencies belonging to EMU members is not needed for present and future currency exchanges. 
Siebel: Siebel facts depend heavily on Global Exchange Rates, and almost none of them use individual exchange rates. In other words, W_GLOBAL_EXCH_RATE_G is the main table used for Siebel from the PS1 release onwards. As of January 2002, the Euro triangulation method for converting between currencies belonging to EMU members is no longer needed for present and future currency exchanges. However, the method is still available in Siebel applications, as are the old currencies, so that historical data can be maintained accurately. The following applies only to historical data needing conversion prior to the 2002 switch to the Euro for the EMU member countries: if a country is a member of the European Monetary Union (EMU), its currency should be converted to other currencies through the Euro. This is called triangulation, and it is used whenever either currency being converted has EMU Triangulation checked. Because of this, there are multiple extraction flows for Siebel, i.e. EUR to EMU, EUR to Non-EMU, EUR to DMC and so on, and we load W_EXCH_RATE_G through these multiple flows. This has been kept the same as in previous versions of OBIA. W_GLOBAL_EXCH_RATE_G, being a new table, has no such needs. However, Siebel, like PeopleSoft, does not have From Date and To Date columns in the source tables, so we use extraction logic similar to that explained in the PeopleSoft section.

What if all 5 Global Currencies configured are the same?

As mentioned in previous sections, from PS1 onwards we store Global Exchange Rates in the W_GLOBAL_EXCH_RATE_G table. The extraction logic for this table involves pivoting data from multiple rows into a single row with 5 Global Exchange Rates in 5 columns, using CASE and GROUP BY logic. This approach poses a unique problem when all 5 Global Currencies chosen are the same. For example, if the user configures all 5 Global Currencies as 'USD', the extract logic cannot generate a record for From Currency = USD, because not all source systems have a USD-to-USD conversion record. We have _Generated mappings to take care of this case: they generate a record with a Conversion Rate of 1 for such combinations.
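A minimal sketch of the pivot idea follows, assuming the daily rates have already been extracted into a work table named xrate_daily and that the five configured global currency codes are hard-coded placeholders; both are assumptions for illustration, not the actual OBIA objects or parameters.

```sql
-- Illustrative only: pivot daily rates into one row per From Currency, date
-- and rate type, with the five global currency rates in five columns.
SELECT from_currency,
       conversion_date,
       rate_type,
       MAX(CASE WHEN to_currency = 'USD' THEN exchange_rate END) AS global1_rate,
       MAX(CASE WHEN to_currency = 'EUR' THEN exchange_rate END) AS global2_rate,
       MAX(CASE WHEN to_currency = 'GBP' THEN exchange_rate END) AS global3_rate,
       MAX(CASE WHEN to_currency = 'JPY' THEN exchange_rate END) AS global4_rate,
       MAX(CASE WHEN to_currency = 'AUD' THEN exchange_rate END) AS global5_rate
FROM   xrate_daily
GROUP BY from_currency, conversion_date, rate_type;
```

When a global currency equals the From Currency (for example USD to USD), the source usually has no conversion row at all, which is why the generated mappings described above inject a record with a conversion rate of 1 for that combination.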
Reusable Lookups

Before PS1, we had a mapplet for currency conversions. In PS1, we have only reusable lookups: LKP_W_EXCH_RATE_G and LKP_W_GLOBAL_EXCH_RATE_G. These lookups contain an additional layer of logic so that all the lookup conditions are met when they are used in the various fact mappings. Any user who wants to look up W_EXCH_RATE_G or W_GLOBAL_EXCH_RATE_G must use these lookups; a direct join or lookup on the tables might return wrong data.

Changing Currency preferences in the Dashboard

In the 7.9.6.x series, all amount metrics in OBIA showed the Global1 amount, and customers who wanted another currency preference had to change the metric definitions. Project Analytics, however, has supported currency preferences since the 7.9.6 release, and a tech note was published for customers of other modules on how to add toggling between currency preferences to the solution.

List of Currency Preferences

Starting from the 11.1.1.x release, the BI platform added a new feature to support multiple currencies. A new session variable (PREFERRED_CURRENCY) is populated through a newly introduced currency prompt. This prompt takes its values from the file userpref_currencies_OBIA.xml, which is hosted in the BI Server installation folder under <home>\instances\instance1\config\OracleBIPresentationServicesComponent\coreapplication_obips1\userpref_currencies.xml. This file contains the list of currency preferences, such as "Local Currency" and "Global Currency 1", which customers can also rename to give them more meaningful business names.

There are two options for showing the list of currency preferences to the user in the dashboard: Static and Dynamic. In Static mode, all users see the full list as defined in the user preference currencies file. In Dynamic mode, the list shown in the currency prompt drop-down is the result of a dynamic query specified in the same file. Customers can build some security into the RPD so that the list of currency preferences is based on the user's roles; BI Applications built a subject area, "Dynamic Currency Preference", to run this query and give every user only the list of currency preferences required by his application roles.

Adding Currency to an Amount Field

When the user selects one of the items from the currency prompt, all the amounts on that page are shown in the currency corresponding to that preference. For example, if the user selects "Global Currency 1" from the prompt, all data is shown in Global Currency 1 as specified in the Configuration Manager. If the user selects "Local Currency", all amount fields are shown in the currency of the Business Unit selected in the BU filter of the same page. If no particular Business Unit is selected in that filter and the data selected by the query contains amounts in more than one currency (for example, one BU has USD as its functional currency and another has EUR), then subtotals are not available (USD and EUR amounts cannot be added in one field) and, depending on the setup (see the next paragraph), the user may receive an error.

There are two ways to add the currency field to an amount metric:

1. In the form of a currency code, like USD or EUR. For this, the user needs to add the field "Apps Common Currency Code" to the report. This field is available in every subject area, usually under the "Currency Tag" or "Currency Code" table.

2. In the form of a currency symbol ($ for USD, € for EUR, and so on). For this, the user needs to format the amount metrics in the report as a currency column by specifying the currency tag column in the Column Properties option of the column's Actions drop-down list. Typically this column should be the "BI Common Currency Code" available in every subject area. Select the Column Properties option in the Edit list of the metric; in the Data Format tab, select Custom as "Treat Number As"; then enter the following syntax under Custom Number Format:

[$:currencyTagColumn=Subjectarea.table.column]

where column is the "BI Common Currency Code" column defined to take the currency code value based on the currency preference chosen by the user in the currency preference prompt.
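For example, assuming a report built on a subject area that exposes the "BI Common Currency Code" column under a "Currency Tag" folder (the subject area and folder names below are purely illustrative, and the exact quoting of the column reference may differ in your environment), the custom format string would look something like this:

```
[$:currencyTagColumn="Financials - GL Balance Sheet"."Currency Tag"."BI Common Currency Code"]
```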

    Read the article

  • Debugging unmanaged code while debugging managed code

    - by sc_ray
    Hi, the .NET 3.5 application I am working on consists of a bunch of different solutions. Some of these solutions contain managed code (C#) and others contain unmanaged code (C++). Methods written in C# call into the ones written in C++. I am trying to trace the dependencies between these various functions, so I set breakpoints in the solution containing my C++ functions. One of the C# solutions has the startup project. I run this solution in debug mode expecting the breakpoints in my unmanaged code to be hit, but nothing really happens. Can somebody guide me through the process of debugging mixed applications such as these using the Visual Studio IDE? Thanks

    Read the article

  • How to conditionally exclude features from "FeaturesDlg" in WiX 3.0 from a managed Custom Action (DT

    - by Gerald
    I am trying to put together an installer using WiX 3.0 and I'm unsure about one thing. I would like to use the FeaturesDlg dialog to allow users to select features to install, but I need to be able to conditionally exclude some features from the list based on input received earlier, preferably from a managed Custom Action. I see that if I set the "Display" attribute of a Feature to "hidden" in the .wxs file, it does what I want, but I can't figure out a way to change that attribute at runtime. Any pointers would be great.

    Read the article
