Search Results

Search found 68400 results on 2736 pages for 'industry data model'.

  • WPF DataContext does not refresh the DataGrid using MVVM model

    - by vikram bhatia
    Project Overview: I have a view that binds to a viewmodel containing two ObservableCollections. The viewmodel constructor populates the first ObservableCollection, and the view's DataContext binds to it through a public property called Sites. Later, the second ObservableCollection is populated in the LoadOrders method and exposed for binding through the public property LoadFraudResults. I am using WCF to pull the data from the database, and the data comes back correctly.

    VIEWMODEL SOURCE:

    class ManageFraudOrderViewModel : ViewModelBase
    {
        #region Fields
        private readonly ICollectionView collectionViewSites;
        private readonly ICollectionView collectionView;
        private ObservableCollection<GeneralAdminService.Website> _sites;
        private ObservableCollection<FraudService.OrderQueue> _LoadFraudResults;
        #endregion

        #region Properties
        public ObservableCollection<GeneralAdminService.Website> Sites
        {
            get { return this._sites; }
        }

        public ObservableCollection<FraudService.OrderQueue> LoadFraudResults
        {
            get { return this._LoadFraudResults; }
        }
        #endregion

        public ManageFraudOrderViewModel()
        {
            // Get values from the WCF service
            GeneralAdminService.GeneralAdminServiceClient generalAdminServiceClient = new GeneralAdminServiceClient();
            GeneralAdminService.Website[] websites = generalAdminServiceClient.GetWebsites();

            if (websites.Length > 0)
            {
                _sites = new ObservableCollection<Wqn.Administration.UI.GeneralAdminService.Website>();
                foreach (GeneralAdminService.Website website in websites)
                {
                    _sites.Add((Wqn.Administration.UI.GeneralAdminService.Website)website);
                }
                this.collectionViewSites = CollectionViewSource.GetDefaultView(this._sites);
            }
            generalAdminServiceClient.Close();
        }

        public void LoadOrders(Wqn.Administration.UI.FraudService.Website website)
        {
            // Get values from the WCF service
            FraudServiceClient fraudServiceClient = new FraudServiceClient();
            FraudService.OrderQueue[] OrderQueue = fraudServiceClient.GetFraudOrders(website);

            if (OrderQueue.Length > 0)
            {
                _LoadFraudResults = new ObservableCollection<Wqn.Administration.UI.FraudService.OrderQueue>();
                foreach (FraudService.OrderQueue orderQueue in OrderQueue)
                {
                    _LoadFraudResults.Add(orderQueue);
                }
            }
            this.collectionViewSites = CollectionViewSource.GetDefaultView(this._LoadFraudResults);
            fraudServiceClient.Close();
        }
    }

    VIEW SOURCE:

    public partial class OrderQueueControl : UserControl
    {
        private ManageFraudOrderViewModel manageFraudOrderViewModel;
        private OrderQueue orderQueue;
        private ButtonAction ButtonAction;
        private DispatcherTimer dispatcherTimer;

        public OrderQueueControl()
        {
            LoadOrderQueueForm();
        }

        #region LoadOrderQueueForm
        private void LoadOrderQueueForm()
        {
            // Binds the first ObservableCollection
            manageFraudOrderViewModel = new ManageFraudOrderViewModel();
            this.DataContext = manageFraudOrderViewModel;
        }
        #endregion

        private void cmbWebsite_SelectionChanged(object sender, SelectionChangedEventArgs e)
        {
            BindItemsSource();
        }

        #region BindItemsSource
        private void BindItemsSource()
        {
            using (OverrideCursor cursor = new OverrideCursor(Cursors.Wait))
            {
                if (!string.IsNullOrEmpty(Convert.ToString(cmbWebsite.SelectedItem)))
                {
                    Wqn.Administration.UI.FraudService.Website website =
                        (Wqn.Administration.UI.FraudService.Website)Enum.Parse(typeof(Wqn.Administration.UI.FraudService.Website), cmbWebsite.SelectedItem.ToString());

                    // Binds the second ObservableCollection
                    manageFraudOrderViewModel.LoadOrders(website);
                    this.DataContext = manageFraudOrderViewModel;
                }
            }
        }
        #endregion
    }

    XAML:

    <ComboBox x:Name="cmbWebsite" ItemsSource="{Binding Sites}" Margin="5" Width="100" Height="25" SelectionChanged="cmbWebsite_SelectionChanged" />
    <DataGrid ItemsSource="{Binding Path=LoadFraudResults}" />

    PROBLEM AREA: When I call LoadOrderQueueForm to bind the first ObservableCollection and later BindItemsSource to bind the second, everything works fine for the first binding. But when I call BindItemsSource again to repopulate the ObservableCollection based on the newly selected combo value (via cmbWebsite_SelectionChanged), the ObservableCollection and the LoadFraudResults property in the viewmodel are populated with the new values, yet when I reassign the DataContext to rebind the DataGrid, the DataGrid does not reflect the changed values. In other words, the DataGrid does not change when the DataContext is set the second time in the view's BindItemsSource method: manageFraudOrderViewModel.LoadOrders(website); this.DataContext = manageFraudOrderViewModel; The manageFraudOrderViewModel values are correct, but the DataGrid does not reflect them. Please help; I have been stuck on this for the past two days and the deadline is approaching. Thanks in advance.
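
    A likely cause, judging only from the code above, is that LoadOrders replaces the _LoadFraudResults field with a brand new collection, but the LoadFraudResults property never raises a property-change notification, and re-assigning the same DataContext instance is a no-op for WPF, so the DataGrid keeps pointing at the old collection. Below is a minimal, hedged sketch of one common fix; the OnPropertyChanged helper is assumed to exist on ViewModelBase (if it does not, the viewmodel needs to implement INotifyPropertyChanged itself).

        // Sketch only: notify the binding system when the collection instance is replaced,
        // or keep a single ObservableCollection alive and just refill it.
        public ObservableCollection<FraudService.OrderQueue> LoadFraudResults
        {
            get { return _LoadFraudResults; }
            private set
            {
                _LoadFraudResults = value;
                OnPropertyChanged("LoadFraudResults"); // assumed helper on ViewModelBase
            }
        }

        public void LoadOrders(Wqn.Administration.UI.FraudService.Website website)
        {
            FraudServiceClient fraudServiceClient = new FraudServiceClient();
            try
            {
                FraudService.OrderQueue[] orders = fraudServiceClient.GetFraudOrders(website);

                // Option A: assign through the property so the binding hears about the new instance.
                LoadFraudResults = new ObservableCollection<FraudService.OrderQueue>(orders);

                // Option B (alternative): never replace the instance; clear and refill it instead.
                // _LoadFraudResults.Clear();
                // foreach (FraudService.OrderQueue o in orders) _LoadFraudResults.Add(o);
            }
            finally
            {
                fraudServiceClient.Close();
            }
        }

    With either option there is no need to reset this.DataContext in BindItemsSource; the existing binding to LoadFraudResults picks up the change on its own.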

    Read the article

  • How to penetrate the QA industry after layoffs, next steps...

    - by Erik
    Briefly, my background is in manual black box testing of websites and applications within the Agile/waterfall context. Over the past four years I was a member of two web development firms' small QA teams dedicated to testing the deployment of websites for national/international non-profits, governmental organizations, and for-profit businesses, to name a few: -Brookings Institution -Senate -Tyco Electronics -Blue Cross/Blue Shield -National Geographic -Discover Channel I have a very strong understanding of the: -SDLC -STLC of bugs and website deployment/development -Use Case & Test Case development In March of this year, my last firm downsized and I lost my job as a QA tester. I have been networking and doing a very detailed job search, but have had a very difficult time getting my next job within the QA industry, even with my background as a manual black box QA tester in the website development context. My direct question to all of you: What are some ways I can be more competitive and get hired? Options that could make me competitive: Should I go back to school and learn some more 'hard' skills in website development and client-side technologies, e.g.: -HTML -CSS -JavaScript Learn programming: -PHP -C# -Ruby -SQL -Python -Perl -?? Get certified as a QA tester; there are countless programs to become a Certified Tester. Most, if not all, jobs being advertised now require automated testing experience in: -QTP -Loadrunner -Selenium -etc. Should I learn automated testing skills via a paid course, or teach myself? --Learn scripting languages to understand the automated testing process better? Become a Certified "Project Management Professional" (PMP) to prove to hiring managers that I 'get' the project development life cycle? At the end of the day I need to be competitive and get hired as a QA tester, and I want to build upon my skills within the QA web development field. How should I do this, without reinventing the wheel? Any help in this regard would be fabulous. Thanks! .erik

    Read the article

  • SQL to XML open data made simple

    - by drrwebber
    The perennial question is: how do you easily generate XML from SQL table content? The latest CAM Editor release tackles this head-on by providing a powerful and simple toolset. First, you can visually browse your SQL tables and then drag and drop columns and tables into the XML structure editor. This gives you a code-free method of describing the transformation you require, so you do not need to know about the vagaries of XML and XSD schema syntax. Second, you can map directly into existing industry domain XML exchange structures in the XML visual editor; again there is no need to wrestle with XSD schema, and you have WYSIWYG visual control over what your output will look like. If you do not have a target XML structure and need to build one from scratch, the CAM Editor makes this simple. Switch the SQL viewer into designer mode, then take your existing SQL table and drag and drop it into the XML structure editor. The XML wizard tool will automatically take your SQL column names and definitions, create equivalent XML for you, and insert the mappings. Simply save the structure template, run the Open Data generator menu option, and your XML is built for you. Completely code-free, template-driven development. To see this in action, see our video demonstration links, then download the tools and samples and try it yourself.

    Read the article

  • New whitepaper: Evolution from the Traditional Data Center to Exalogic: An Operational Perspective

    - by Javier Puerta
    IT organizations are struggling with the need to balance the day-to-day concerns of data center management against the business-level requirements to deliver long-term value. This balancing act has proven difficult and inefficient: systems and application management tools are resource-intensive, and traditional infrastructure management architectures have developed over time on a project-by-project basis. These traditional management systems consist of multiple tools that require administrators to waste time performing too many steps to handle routine administrative tasks. Operational efficiency and agility in your enterprise are directly linked to the capabilities provided by the management layer across the entire stack: application, middleware, operating system, compute, network and storage. Only when this end-to-end capability is provided will we experience the full benefit of a scalable, efficient, responsive and secure data center. Managing Exalogic is substantially less complex and error-prone than managing traditional systems built from individually sourced, multi-vendor components, because Exalogic is designed to be administered and maintained as a single, integrated system (Figure 1). It is at the forefront of the industry-wide shift away from costly and inferior one-off platforms toward private clouds and Engineered Systems. Read the full whitepaper "Evolution from the Traditional Data Center to Exalogic: An Operational Perspective". The full document is available for download at the Exadata Partner Community Collaborative Workspace (for community members only - if you get an error message, please register for the Community first).

    Read the article

  • XForms and multiple inputs for same model tag

    - by iHeartGreek
    Hi! I apologize ahead of time if I am not asking this properly; it is hard to put into words what I am asking. I have an XForms model such as: <file> <criteria> <criterion></criterion> </criteria> </file> I want to have multiple input text boxes that each create a new criterion tag, with a user interface such as: <xf:input ref="/file/criteria/criterion" model="select_data"> <xf:label>Select</xf:label> </xf:input> <xf:input ref="/file/criteria/criterion" model="select_data"> <xf:label>Select</xf:label> </xf:input> <xf:input ref="/file/criteria/criterion" model="select_data"> <xf:label>Select</xf:label> </xf:input> And I would like the XML output to look like this (once the user has entered the info): <file> <criteria> <criterion>AAA</criterion> <criterion>BBB</criterion> <criterion>CCC</criterion> </criteria> </file> The way I have it doesn't work, as all three input fields refer to the same criterion tag. How do I differentiate? Thanks! I hope that made some sense!

    BEGIN FIRST EDIT: Thanks for the responses for the basic text box! However, I now need to do this with a listbox, and for the life of me I can't figure out how. I read somewhere to use the xforms:select and xforms:deselect events, but I didn't know where to place them, and the places I tried gave me very weird behaviour. I am currently implementing the following: <xf:select ref="instance('criteria_data')/criteria/criterion" selection="" appearance="compact" > <xf:label>Choose criteria</xf:label> <xf:itemset nodeset="instance('criteria_choices')/choice"> <xf:label ref="@label"></xf:label> <xf:value ref="."></xf:value> </xf:itemset> </xf:select> However, when multiple choices are submitted, all selection values are inserted into the same node, separated by spaces. For example, if AAA, BBB and FFF were selected from the listbox, it would result in the following XML: <criterion>AAA BBB FFF</criterion> How do I change my code so that each selection is in a separate node? I.e. I want it to look like this: <criterion>AAA</criterion> <criterion>BBB</criterion> <criterion>FFF</criterion> Thanks! END FIRST EDIT

    BEGIN SECOND EDIT: For the listboxes (i.e. xf:select appearance="compact") I ended up allowing the spaces to occur in the same node and then transformed that XML using XSL to generate a properly formatted new XML doc (with separate individual nodes). Unfortunately, I did not find a less cumbersome solution for inserting them into separate nodes originally. The selected answer works very well for text boxes, however, hence why I selected it as the answer. END SECOND EDIT

    Read the article

  • juju illegal base64 data at input byte 9

    - by ayr-ton
    After bootstrapping an environment via manual provisioning, juju gives me the following output for juju status: ERROR Unable to connect to environment "manual". Please check your credentials or use 'juju bootstrap' to create a new environment. Error details: illegal base64 data at input byte 9 And running bootstrap again shows me: WARNING ignoring environments.yaml: using bootstrap config in file "/home/ayrton/.juju/environments/manual.jenv" ERROR illegal base64 data at input byte 9 The first bootstrap shows no error, but the status command crashes as above, and the second bootstrap's output is just the base64 error. My juju version is 1.19.4-trusty-amd64, running on trusty 64-bit. The bootstrap environment is a VPS with 1 GB of memory, a 20 GB disk, and precise 64-bit. Please let me know if I can provide any further information.

    Read the article

  • Preventing duplicate Data with ASP.NET AJAX

    - by Yousef_Jadallah
    Sometimes you need to prevent user names, e-mail IDs or other values from being duplicated by a new user during registration or in other cases, so I will show a simple approach that makes the page more user-friendly. Instead of the user filling in all the registration fields, pressing submit, and only then receiving a message as a result of the postback that "THIS USERNAME IS EXIST", Ajax tidies this up by allowing asynchronous querying while the user is still completing the registration form. ASP.NET enables you to create Web services that can be accessed from client script in Web pages by using AJAX technology to make Web service calls. Data is exchanged asynchronously between client and server, typically in JSON format. I've added an article to show you step by step how to use ASP.NET AJAX with Web Services; you can find it here.

    Let's go ahead with the steps:

    1. Create a new project; if you are using VS 2005 you have to create an ASP.NET AJAX-Enabled Web site.

    2. Create your own database containing a user table that has a User_Name field. For testing I've added the SQL Server database that comes with .NET 2008, then created tblUsers. This table and structure are just for our example; you can use your own table to implement this approach.

    3. Add a new item to your project or website and choose Web Service; let's say WebService.cs. In this Web Service file import the System.Data.SqlClient namespace, then add your web method with a string parameter that receives the user name from the script. Finally, don't forget to qualify the Web Service class with the ScriptService attribute ([System.Web.Script.Services.ScriptService]):

    using System; using System.Collections.Generic; using System.Linq; using System.Web; using System.Web.Services; using System.Data.SqlClient; [WebService(Namespace = "http://tempuri.org/")] [WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)] [System.Web.Script.Services.ScriptService] public class WebService : System.Web.Services.WebService { [WebMethod] public int CheckDuplicate(string User_Name) { string strConn = @"Data Source=.\SQLEXPRESS;AttachDbFilename=|DataDirectory|\TestDB.mdf;Integrated Security=True;User Instance=True"; string strQuery = "SELECT COUNT(*) FROM tblUsers WHERE User_Name = @User_Name"; SqlConnection con = new SqlConnection(strConn); SqlCommand cmd = new SqlCommand(strQuery, con); cmd.Parameters.Add("User_Name", User_Name); con.Open(); int RetVal = (int)cmd.ExecuteScalar(); con.Close(); return RetVal; } }

    Our web method here is CheckDuplicate, which accepts a User_Name string as a parameter and returns the number of matching rows: if the name is found in the database the method returns 1, otherwise it returns 0. I've applied the [WebMethod] attribute to CheckDuplicate, and applied the ScriptService attribute to the Web Service class named WebService.

    4. Add this simple registration form: <fieldset> <table id="TblRegistratoin" cellpadding="0" cellspacing="0"> <tr> <td> User Name </td> <td> <asp:TextBox ID="txtUserName" onblur="CallWebMethod();" runat="server"></asp:TextBox> </td> <td> <asp:Label ID="lblDuplicate" runat="server" ForeColor="Red" Text=""></asp:Label> </td> </tr> <tr> <td colspan="3"> <asp:Button ID="btnRegistration" runat="server" Text="Registration" /> </td> </tr> </table> </fieldset>

    The onblur event is added to the TextBox txtUserName. This event fires when the TextBox loses input focus, which means that after the user moves focus out of the TextBox, the CallWebMethod function is fired. CallWebMethod is implemented in step 6.

    5. Add a ScriptManager control to your .aspx file, then reference the Web service by adding an asp:ServiceReference child element to the ScriptManager control and setting its path attribute to point to the Web service. This generates a JavaScript proxy class for calling the specified Web service from client script. <asp:ScriptManager runat="server" ID="scriptManager"> <Services> <asp:ServiceReference Path="WebService.asmx" /> </Services> </asp:ScriptManager>

    6. Define the JavaScript code to call the Web Service:

    <script language="javascript" type="text/javascript">
    // This function calls the Web service method, passing simple type parameters and the callback functions
    function CallWebMethod() {
        var User_Name = document.getElementById('<%=txtUserName.ClientID %>').value;
        WebService.CheckDuplicate(User_Name, OnSucceeded, OnError);
    }

    // This is the callback function invoked if the Web service succeeded
    function OnSucceeded(result) {
        var rsltElement = document.getElementById("lblDuplicate");
        if (result == 1)
            rsltElement.innerHTML = "This User Name is exist";
        else
            rsltElement.innerHTML = "";
    }

    function OnError(error) {
        // Display the error.
        alert("Service Error: " + error.get_message());
    }
    </script>

    This call references the WebService class and the CheckDuplicate web method defined in the service. It passes a User_Name value obtained from the textbox, as well as a callback function named OnSucceeded that is invoked when the asynchronous Web Service call returns. If the Web Service is in a different namespace, you can prefix it before the class name; this general formula may help: NameSpaceName.ClassName.WebMethodName(Parameters, Success callback function, Error callback function); Parameters: you can pass one or many parameters. Success callback function: handles data returned from the service. Error callback function: any errors that occur when the Web Service is called trigger this function. Using the error callback function is optional. Hope these steps help you understand this approach.

    Read the article

  • USB packets - receive wrong data

    - by regorianer
    I have a little Python script which shows me the packets from an EnOcean device and triggers events depending on the packet type. Unfortunately it doesn't work, because I'm getting wrong packets. Parts of the Python script (using pySerial): ser = serial.Serial('/dev/ttyUSB1',57600,bytesize = serial.EIGHTBITS,timeout = 1, parity = serial.PARITY_NONE , rtscts = 0) print 'clearing buffer' s = ser.read(10000) print 'start read' while 1: s = ser.read(1) for character in s: sys.stdout.write(" %s" % character.encode('hex')) print 'end' ser.close()

    Output at baud rate 57600: e0 e0 00 e0 00 e0 e0 e0 e0 e0 00 e0 e0 00 00 00 00 00 00 00 e0 e0 e0 00 00 00 00 e0 e0 e0 00 00 e0 e0 e0 e0 e0 00 e0 00 e0 e0 e0 e0 e0 00 e0 e0 00 00 00 00 00 00 e0 e0 e0 00 00 00 00 e0 e0 e0 00 00 e0 e0 e0

    Output at baud rate 9600: a5 5a 0b 05 10 00 00 00 00 15 c4 56 20 6f a5 5a 0b 05 00 00 00 00 00 15 c4 56 20 5f

    Linux terminal, baud rate 57600:
    $stty -F /dev/ttyUSB1 57600
    $stty < /dev/ttyUSB1
    speed 57600 baud; line = 0; eof = ^A; min = 0; time = 0; -brkint -icrnl -imaxbel -opost -onlcr -isig -icanon -iexten -echo -echoe -echok -echoctl -echoke
    $while (true) do cat -A /dev/ttyUSB1 ; done > myfile
    $hexdump -C myfile
    00000000 4d 2d 60 4d 2d 60 5e 40 4d 2d 60 5e 40 4d 2d 60 |M-M-^@M-^@M-|
    00000010 4d 2d 60 4d 2d 60 4d 2d 60 4d 2d 60 5e 40 4d 2d |M-M-M-M-^@M-|
    00000020 60 4d 2d 60 5e 40 5e 40 5e 40 5e 40 5e 40 5e 40 |M-^@^@^@^@^@^@|
    00000030 5e 40 4d 2d 60 4d 2d 60 4d 2d 60 5e 40 5e 40 5e |^@M-M-M-`^@^@^|
    00000040 40 5e 40 4d 2d 60 4d 2d 60 4d 2d 60 |@^@M-M-M-`|
    0000004c

    Linux terminal, baud rate 9600:
    $hexdump -C myfile2
    00000000 5e 40 5e 55 4d 2d 44 56 30 4d 2d 3f 5e 40 5e 40 |^@^UM-DV0M-?^@^@|
    00000010 5e 55 4d 2d 44 56 20 5f |^UM-DV _|
    00000018

    The specification says: 0x55 sync byte (1st), 0xNNNN data length bytes (2 bytes), 0x07 opt length byte, 0x01 type byte, CRC, data, opt data and again CRC - but I'm not getting this packet structure. The output of the Python script differs from the one I get via the terminal. I also wrote the Python part in C, but the output is the same as with Python. The USB receiver is a BSC-BoR USB receiver/sender and the EnOcean device is a simple button.

    Read the article

  • Parallelism in .NET – Part 4, Imperative Data Parallelism: Aggregation

    - by Reed
    In the article on simple data parallelism, I described how to perform an operation on an entire collection of elements in parallel. Often, this is not adequate, as the parallel operation is going to be performing some form of aggregation. Simple examples of this might include taking the sum of the results of processing a function on each element in the collection, or finding the minimum of the collection given some criteria. This can be done using the techniques described in simple data parallelism; however, special care needs to be taken to synchronize the shared data appropriately. The Task Parallel Library has tools to assist in this synchronization. The main issue with aggregation when parallelizing a routine is that you need to handle synchronization of data, since multiple threads will need to write to a shared portion of data. Suppose, for example, that we wanted to parallelize a simple loop that looked for the minimum value within a dataset:

    double min = double.MaxValue;
    foreach(var item in collection)
    {
        double value = item.PerformComputation();
        min = System.Math.Min(min, value);
    }

    This seems like a good candidate for parallelization, but there is a problem here. If we just wrap this into a call to Parallel.ForEach, we'll introduce a critical race condition, and get the wrong answer. Let's look at what happens here:

    // Buggy code! Do not use!
    double min = double.MaxValue;
    Parallel.ForEach(collection, item =>
    {
        double value = item.PerformComputation();
        min = System.Math.Min(min, value);
    });

    This code has a fatal flaw: min will be checked, then set, by multiple threads simultaneously. Two threads may perform the check at the same time, and set the wrong value for min. Say we get a value of 1 in thread 1, and a value of 2 in thread 2, and these two elements are the first two to run. If both hit the min check line at the same time, both will determine that min should change, to 1 and 2 respectively. If element 1 happens to set the variable first and then element 2 sets the min variable, we'll detect a min value of 2 instead of 1. This can lead to wrong answers. Unfortunately, fixing this, with the Parallel.ForEach call we're using, would require adding locking. We would need to rewrite it like this:

    // Safe, but slow
    double min = double.MaxValue;
    // Make a "lock" object
    object syncObject = new object();
    Parallel.ForEach(collection, item =>
    {
        double value = item.PerformComputation();
        lock(syncObject)
            min = System.Math.Min(min, value);
    });

    This will potentially add a huge amount of overhead to our calculation. Since we can potentially block while waiting on the lock for every single iteration, we will most likely slow this down to where it is actually quite a bit slower than our serial implementation. The problem is the lock statement – any time you use lock(object), you're almost assuring reduced performance in a parallel situation.
    This leads to two observations I'll make: When parallelizing a routine, try to avoid locks. That being said: Always add any and all required synchronization to avoid race conditions. These two observations tend to be opposing forces – we often need to synchronize our algorithms, but we also want to avoid the synchronization when possible. Looking at our routine, there is no way to directly avoid this lock, since each element is potentially being run on a separate thread, and this lock is necessary in order for our routine to function correctly every time. However, this isn't the only way to design this routine to implement this algorithm. Realize that, although our collection may have thousands or even millions of elements, we have a limited number of Processing Elements (PE). Processing Element is the standard term for a hardware element which can process and execute instructions. This typically is a core in your processor, but many modern systems have multiple hardware execution threads per core. The Task Parallel Library will not execute the work for each item in the collection as a separate work item. Instead, when Parallel.ForEach executes, it will partition the collection into larger "chunks" which get processed on different threads via the ThreadPool. This helps reduce the threading overhead, and helps the overall speed. In general, the Parallel class will only use one thread per PE in the system. Given the fact that there are typically fewer threads than work items, we can rethink our algorithm design. We can parallelize our algorithm more effectively by approaching it differently. Because the basic aggregation we are doing here (Min) is commutative, we do not need to perform this in a given order. We knew this to be true already – otherwise, we wouldn't have been able to parallelize this routine in the first place. With this in mind, we can treat each thread's work independently, allowing each thread to serially process many elements with no locking, then, after all the threads are complete, "merge" together the results. This can be accomplished via a different set of overloads in the Parallel class: Parallel.ForEach<TSource,TLocal>. The idea behind these overloads is to allow each thread to begin by initializing some local state (TLocal). The thread will then process an entire set of items in the source collection, providing that state to the delegate which processes an individual item. Finally, at the end, a separate delegate is run which allows you to handle merging that local state into your final results. To rewrite our routine using Parallel.ForEach<TSource,TLocal>, we need to provide three delegates instead of one. The most basic version of this function is declared as:

    public static ParallelLoopResult ForEach<TSource, TLocal>(
        IEnumerable<TSource> source,
        Func<TLocal> localInit,
        Func<TSource, ParallelLoopState, TLocal, TLocal> body,
        Action<TLocal> localFinally
    )

    The first delegate (the localInit argument) is defined as Func<TLocal>. This delegate initializes our local state. It should return some object we can use to track the results of a single thread's operations. The second delegate (the body argument) is where our main processing occurs, although now, instead of being an Action<T>, we actually provide a Func<TSource, ParallelLoopState, TLocal, TLocal> delegate.
    This delegate will receive three arguments: our original element from the collection (TSource), a ParallelLoopState which we can use for early termination, and the instance of our local state we created (TLocal). It should do whatever processing you wish to occur per element, then return the value of the local state after processing is completed. The third delegate (the localFinally argument) is defined as Action<TLocal>. This delegate is passed our local state after it's been processed by all of the elements this thread will handle. This is where you can merge your final results together. This may require synchronization, but now, instead of synchronizing once per element (potentially millions of times), you'll only have to synchronize once per thread, which is an ideal situation. Now that I've explained how this works, let's look at the code:

    // Safe, and fast!
    double min = double.MaxValue;
    // Make a "lock" object
    object syncObject = new object();
    Parallel.ForEach(
        collection,
        // First, we provide a local state initialization delegate.
        () => double.MaxValue,
        // Next, we supply the body, which takes the original item, loop state,
        // and local state, and returns a new local state
        (item, loopState, localState) =>
        {
            double value = item.PerformComputation();
            return System.Math.Min(localState, value);
        },
        // Finally, we provide an Action<TLocal>, to "merge" results together
        localState =>
        {
            // This requires locking, but it's only once per used thread
            lock(syncObject)
                min = System.Math.Min(min, localState);
        }
    );

    Although this is a bit more complicated than the previous version, it is now both thread-safe, and has minimal locking. This same approach can be used by Parallel.For, although now, it's Parallel.For<TLocal>. When working with Parallel.For<TLocal>, you use the same triplet of delegates, with the same purpose and results. Also, many times, you can completely avoid locking by using a method of the Interlocked class to perform the final aggregation in an atomic operation. The MSDN example demonstrating this same technique using Parallel.For uses the Interlocked class instead of a lock, since they are doing a sum operation on a long variable, which is possible via Interlocked.Add. By taking advantage of local state, we can use the Parallel class methods to parallelize algorithms such as aggregation, which, at first, may seem like poor candidates for parallelization. Doing so requires careful consideration, and often requires a slight redesign of the algorithm, but the performance gains can be significant if handled in a way to avoid excessive synchronization.
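
    To make the Interlocked point concrete, here is a small sketch (mine, not from the article) of that sum pattern: each thread accumulates into its own local total, and the only write to shared state is a single atomic Interlocked.Add per thread in the localFinally delegate.

        using System;
        using System.Threading;
        using System.Threading.Tasks;

        class InterlockedSumExample
        {
            static void Main()
            {
                long[] data = new long[1000000];
                for (int i = 0; i < data.Length; i++) data[i] = i;

                long total = 0;
                Parallel.For(0, data.Length,
                    // localInit: each thread starts with its own running total
                    () => 0L,
                    // body: accumulate into the thread-local total; no shared state is touched here
                    (i, loopState, localTotal) => localTotal + data[i],
                    // localFinally: one atomic merge per thread instead of a lock per element
                    localTotal => Interlocked.Add(ref total, localTotal));

                Console.WriteLine(total); // prints 499999500000
            }
        }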

    Read the article

  • Calculating estimated data loss with Always on

    - by blakmk
    Ever wondered how to calculate estimated data loss (time) for Always On? The Always On dashboard shows the metric quite nicely, but there does seem to be a lack of documentation about where the metrics come from. Here's a script that calculates the data loss (lag) so you can set up alerts based on your DR SLAs:

    WITH DR_CTE ( replica_server_name, database_name, last_commit_time) AS
    (
        select ar.replica_server_name, database_name, rs.last_commit_time
        from master.sys.dm_hadr_database_replica_states rs
        inner join master.sys.availability_replicas ar on rs.replica_id = ar.replica_id
        inner join sys.dm_hadr_database_replica_cluster_states dcs on dcs.group_database_id = rs.group_database_id and rs.replica_id = dcs.replica_id
        where replica_server_name != @@servername
    )
    select ar.replica_server_name, dcs.database_name, rs.last_commit_time, DR_CTE.last_commit_time 'DR_commit_time',
           datediff(ss, DR_CTE.last_commit_time, rs.last_commit_time) 'lag_in_seconds'
    from master.sys.dm_hadr_database_replica_states rs
    inner join master.sys.availability_replicas ar on rs.replica_id = ar.replica_id
    inner join sys.dm_hadr_database_replica_cluster_states dcs on dcs.group_database_id = rs.group_database_id and rs.replica_id = dcs.replica_id
    inner join DR_CTE on DR_CTE.database_name = dcs.database_name
    where ar.replica_server_name = @@servername
    order by lag_in_seconds desc
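
    As one way to act on the SLA point above, here is a hedged C# sketch (not part of the original post) that runs a lag query like the one above from a scheduled job and flags any database whose lag exceeds a threshold. The connection string, the threshold, and the idea of loading the query text from a .sql file are all assumptions to adapt to your environment.

        using System;
        using System.Data.SqlClient;
        using System.IO;

        class AgLagAlert
        {
            // Hypothetical values: point at your primary replica and choose your own SLA.
            const string ConnectionString = "Server=MyPrimaryReplica;Database=master;Integrated Security=true";
            const int SlaSeconds = 300;

            static void Main()
            {
                // The script from this post, saved alongside the executable.
                string lagQuery = File.ReadAllText("estimated_data_loss.sql");

                using (SqlConnection conn = new SqlConnection(ConnectionString))
                using (SqlCommand cmd = new SqlCommand(lagQuery, conn))
                {
                    conn.Open();
                    using (SqlDataReader reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            string database = reader["database_name"].ToString();
                            int lagSeconds = reader["lag_in_seconds"] is DBNull
                                ? 0
                                : Convert.ToInt32(reader["lag_in_seconds"]);

                            if (lagSeconds > SlaSeconds)
                            {
                                // Replace with your real alerting channel (email, pager, monitoring API).
                                Console.WriteLine("ALERT: {0} is {1}s behind (SLA {2}s)", database, lagSeconds, SlaSeconds);
                            }
                        }
                    }
                }
            }
        }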

    Read the article

  • Connecting to Microsoft Excel using Oracle Data Integrator

    - by julien.testut
    The posts in this series assume that you have some level of familiarity with ODI. The concepts of Topology, Data Server, Physical and Logical Architecture are used here assuming that you understand them in the context of ODI. If you need more details on these elements, please refer to the ODI Tutorial for a quick introduction, or to the complete ODI documentation for more details. In this post I will describe how a Microsoft Excel spreadsheet can be used in Oracle Data Integrator. Microsoft Excel is one of the many different technologies you can leverage in ODI as a source or as a target. Prepare your Excel spreadsheet: prior to using a Microsoft Excel spreadsheet in ODI, we need to specify a name for the different cell tables we want to use. You can have multiple names in the same spreadsheet. First, open up a Microsoft Excel spreadsheet; we will need to define a named range.

    Read the article

  • Oracle Healthcare Data Warehouse Foundations RELEASED!

    - by Glen McCallum
    Since I joined Oracle I've been working on Oracle Healthcare Data Warehouse Foundations (OHDF). It was officially released earlier this month at HIMSS. But for over 2 months prior to that I had to keep it a secret. It was so tough; I didn't even tell my family when they asked me what I was working on. Anyway, OHDF is an enterprise healthcare data model. Unlike Healthcare Transaction Base, OHDF is in 3rd normal form. It is logical and reasonably easy to understand for anyone with some experience in the healthcare domain. OHDF is emerging as the core of Oracle's healthcare business intelligence applications.

    Read the article

  • OWB 11gR2 &ndash; Flexible and extensible

    - by David Allan
    The Oracle data integration extensibility capabilities are something I love; nothing is more frustrating than a tool or platform that is very constraining. I think extensibility and flexibility are invaluable capabilities in the data integration arena. I liked Uli Bethke's posting on some extensibility capabilities with ODI (see Nesting ODI Substitution Method Calls here); he has some useful guidance on making customizations to existing KMs, and it's nice to learn by example. I thought I'd illustrate the same capabilities with ODI's partner OWB for the OWB community. There is a whole new world of potential. The LKM/IKM/CKM/JKMs are the primary templates that are supported (plus the Oracle Target code template), so there is a lot of potential for customizing and extending the product in this release. Enough waffle... diving in at the deep end from Uli's post: in OWB 11gR2 the table operator has a number of additional properties that let you annotate column usage with ODI-like properties, such as the slowly changing usage, or for your own user-defined purpose as in Uli's post. Below, for the target table SALES_TARGET, we can use the UD5 property; when it is assigned, the code template (knowledge module) that has been modified with Uli's change can do custom things such as creating indices. The code template used by the mapping has the additional step, which is basically the code illustrated in Uli's posting used directly; the ODI 10g substitution references are also supported from within OWB's runtime. Now, to see whether this does what we expect before we execute it, we can check out the generated code, similar to how traditional mapping generation and preview work; you do this by clicking on the 'Inspect Code' button on the execution unit's code template assignment. This creates another tab with the prefix 'Code - <mapping name>' where the generated code is put; scrolling down we find the last step with the indices being created. It looks good, so we are ready to deploy and execute. After executing the mapping we can then use the 'Audit Information' panel (select the mapping in the designer tree and click on View/Audit Information); this gives us a view of the execution where we can drill into the tasks that were executed and inspect the template, the generated code that was executed, and any potential errors. Reflecting back on earlier versions of OWB, these were the kinds of features that were always highly desirable: getting under the hood of the code generation and tweaking bits and pieces - fun and powerful stuff! We can step it up a bit here and explore some further ideas. The example below is a daisy-chained set of execution units where the intermediate table is the target of one unit and the source for another. We want that table to be a global temporary table, so we can tweak the templates. Back in the copy of SQL Control Append (used for demo purposes), we modify the create-target-table step to make the table a global temporary table, with the option of on commit preserve rows. You can get a feel for some of the customizations and changes possible, providing some great flexibility and extensibility for the data integration tools.

    Read the article

  • mysql show databases not showing databases that are in /opt/bitnami/mysql/data directory

    - by hgolov
    Thank you for taking the time to look at my question. I have an EBS-backed EC2 Ubuntu server which is running but unreachable. ** There are, very stupidly, no recent backups ** I made a snapshot of the block device, created a volume, spun up a new instance, and attached the new volume. I see all the data from my site in the /opt/bitnami/mysql/data directory, but when I go into the mysql console, it shows only information_schema and test when I type show databases; How can I 'point' mysql at the correct folder? Thank you!

    Read the article

  • Adding an expression based image in a client report definition file (RDLC)

    - by rajbk
    In previous posts, I showed you how to create a report using Visual Studio 2010 and how to add a hyperlink to the report. In this post, I show you how to add an expression-based image to each row of the report. This is similar to displaying a checkbox column for Boolean values. A sample project is attached to the bottom of this post. To start off, download the project we created earlier from here. The report we created had a "Discontinued" column of type Boolean. We are going to change it to display an "available" icon or "unavailable" icon based on the "Discontinued" row value. Load the project and double click on Products.rdlc. With the report design surface active, you will see the "Report Data" tool window. Right click on the Images folder and select "Add Image..". Add the available_icon.png and discontinued_icon.png images (the sample project at the end of this post has the icon png files). You can see the images we added in the "Report Data" tool window. Drag and drop the available_icon into the "Discontinued" column row (not the header). We get a dialog box which allows us to set the image properties. We will add an expression that specifies the image to display based on the "Discontinued" value from the Product table. Click on the expression (fx) button. Add the following expression: = IIf(Fields!Discontinued.Value = True, "discontinued_icon", "available_icon") Save and exit all dialog boxes. In the report design surface, resize the column header and change the text from "Discontinued" to "In Production". (Optional) Right click on the image cell (not the header), go to "Image Properties.." and offset it by 5pt from the left. (Optional) Change the border color, since it is not set by default for image columns. We are done adding our image column! Compile the application and run it. You will see that the "In Production" column has red 'x' icons for discontinued products. Download the VS 2010 sample project NorthwindReportsImage.zip Other Posts Adding a hyperlink in a client report definition file (RDLC) Rendering an RDLC directly to the Response stream in ASP.NET MVC ASP.NET MVC Paging/Sorting/Filtering using the MVCContrib Grid and Pager Localization in ASP.NET MVC 2 using ModelMetadata Setting up Visual Studio 2010 to step into Microsoft .NET Source Code Running ASP.NET Webforms and ASP.NET MVC side by side Pre-filtering and shaping OData feeds using WCF Data Services and the Entity Framework

    Read the article

  • Web Development Trends: Mobile First, Data-Oriented Development, and Single Page Applications

    - by dwahlin
    I recently had the opportunity to give a keynote talk at an Intel conference about key trends in the world of Web development that I feel teams should be taking into account in their projects. It was a lot of fun, and I had the opportunity to talk with a lot of different people about projects they're working on. There are a million things that could be covered in this type of talk (HTML5, anyone?), but I only had 60 minutes and couldn't possibly cover them all, so I decided to focus on 3 key areas: mobile, data-oriented development, and SPAs. The talk was geared toward introducing people (many of whom weren't Web developers) to topics such as mobile-first development (demos showed a few tools to help here), responsive design techniques, data binding techniques that can simplify code, and Single Page Application (SPA) benefits. Links to code demos shown during the presentation can be found at the end of the slide deck. Web Development Trends - What's New in the World of Web Development by Dan Wahlin

    Read the article

  • Dynamic Data Connections

    - by Tim Dexter
    I have had a long email thread running between Dan and David over at Valspar and myself. They have built some impressive connectivity between their in-house apps and BIP using web services. The crux of their problem has been that they have multiple databases that need the same report executed against them. Not such an unusual request, as I have spoken to two customers in the last month with the same situation. Of course, you could create a report against each data connection and just run or call the appropriate report. Not too bad if you have two or three data connections, but more than that and it becomes a maintenance nightmare having to update queries or layouts. Ideally you want to have just a single report definition on the BIP server and to dynamically set the connection to be used at runtime, based on the user or the system that the user is in. A quick bit of digging and help from Shinji on the development team and I had an answer. Rather embarrassingly, the solution has been around since the Oct 2010 rollup patch last year. Still, I grabbed the latest Jan 2011 patch - check out Note 797057.1 for the latest available patches. Once installed, I used the best web service testing tool I have yet to come across - SoapUI. Just point it at the WSDL and you can check out the available services and their parameters and then test them too. The XML packet has a new dynamic data source entry. You can set your own custom JDBC connection or just specify an existing data source name that's defined on the server. <pub:runReport> <pub:reportRequest> <pub:attributeFormat>xml</pub:attributeFormat> <pub:attributeTemplate>0</pub:attributeTemplate> <pub:byPassCache>true</pub:byPassCache> <pub:dynamicDataSource> <pub:JDBCDataSource> <pub:JDBCDriverClass></pub:JDBCDriverClass> <pub:JDBCDriverType></pub:JDBCDriverType> <pub:JDBCPassword></pub:JDBCPassword> <pub:JDBCURL></pub:JDBCURL> <pub:JDBCUserName></pub:JDBCUserName> <pub:dataSourceName>Conn1</pub:dataSourceName> </pub:JDBCDataSource> </pub:dynamicDataSource> <pub:reportAbsolutePath>/Test/Employee Report/Employee Report.xdo</pub:reportAbsolutePath> </pub:reportRequest> <pub:userID>Administrator</pub:userID> <pub:password>Administrator</pub:password> </pub:runReport> So I have Conn1 and Conn2 defined that are connections to different databases. I can just flip the name, make the WS call and get the appropriate dataset in my report. Just as an example, here's my web service call Java code; it's just a case of bringing in the BIP Java libs to my Java project. publicReportServiceService = new PublicReportServiceService(); PublicReportService publicReportService = publicReportServiceService.getPublicReportService_v11(); String userID = "Administrator"; String password = "Administrator"; ReportRequest rr = new ReportRequest(); rr.setAttributeFormat("xml"); rr.setAttributeTemplate("1"); rr.setByPassCache(true); rr.setReportAbsolutePath("/Test/Employee Report/Employee Report.xdo"); rr.setReportOutputPath("c:\\temp\\output.xml"); BIPDataSource bipds = new BIPDataSource(); JDBCDataSource jds = new JDBCDataSource(); jds.setDataSourceName("Conn1"); bipds.setJDBCDataSource(jds); rr.setDynamicDataSource(bipds); try { publicReportService.runReport(rr, userID, password); } catch (InvalidParametersException e) { e.printStackTrace(); } catch (AccessDeniedException e) { e.printStackTrace(); } catch (OperationFailedException e) { e.printStackTrace(); } } Note, I'm no Java whiz kid or whizzy old bloke, at least not unless I've had a coffee.
    JDeveloper has a nice feature where you point it at the WSDL and it creates everything needed to support your calling code. A couple of things to remember:

    1. When you call the service, remember to set the bypass-the-cache option. Forget it and much scratching of your head and taking my name in vain will ensue.
    2. My demo actually hit the same database but used two users: one accessed the base tables, the other accessed views with the same names. For far too long I thought the connection swapping was not working; I was getting the same results for both users until I realized I was specifying the schema name for the table/view in my query, e.g. select * from EMP.EMPLOYEES. So remember to use a generic query that depends entirely on the connection.

    It's a neat feature if you want to switch connections while defining only a single report and calling it remotely. Now if you want the connection to be set dynamically based on the user when the report is run via the user interface, that's going to be more tricky ... need to think about that one!
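
    Purely as an illustration, here is a minimal sketch of wrapping that call so the data source name becomes a parameter. It assumes the same generated BIP client classes shown above are imported; the helper name runReportAgainst is made up for this sketch.

      // Sketch: run the same report definition against a different server-defined
      // data source (e.g. "Conn1" or "Conn2") by parameterizing the name.
      // Assumes the WSDL-generated BIP client classes from the example above.
      public void runReportAgainst(String dataSourceName) {
          PublicReportServiceService svc = new PublicReportServiceService();
          PublicReportService publicReportService = svc.getPublicReportService_v11();

          ReportRequest rr = new ReportRequest();
          rr.setAttributeFormat("xml");
          rr.setByPassCache(true);   // don't forget this, or stale results will ensue
          rr.setReportAbsolutePath("/Test/Employee Report/Employee Report.xdo");

          JDBCDataSource jds = new JDBCDataSource();
          jds.setDataSourceName(dataSourceName);   // flip between Conn1 and Conn2
          BIPDataSource bipds = new BIPDataSource();
          bipds.setJDBCDataSource(jds);
          rr.setDynamicDataSource(bipds);

          try {
              publicReportService.runReport(rr, "Administrator", "Administrator");
          } catch (Exception e) {
              e.printStackTrace();
          }
      }

    Flip the argument between Conn1 and Conn2 and the single report definition pulls its data from the other database.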

    Read the article

  • Retrieve Performance Data from SOA Infrastructure Database

    - by fip
    My earlier blog posting shows how to enable, retrieve and interpret BPEL engine performance statistics to aid performance troubleshooting. The strength of the BPEL engine statistics in EM is their breakdown per request. But there are some limitations with the BPEL performance statistics mentioned in that posting:

    - The statistics are stored in memory instead of being persisted. To avoid memory overflow, the data are kept in a buffer of limited size; when the entries exceed that limit, old data are flushed out to give way to new statistics. Therefore only the last X entries are kept, and the statistics from 5 hours ago may no longer be there.
    - The BPEL engine performance statistics only include latencies. They do not provide throughput.

    Fortunately, Oracle SOA Suite runs with the SOA Infrastructure database, and a lot of performance data are naturally persisted there. It is at a coarser grain than the in-memory BPEL statistics, but it has its own strengths because it is persisted. Here I would like to offer examples of some basic SQL queries you can run against the infrastructure database of Oracle SOA Suite 11g to acquire performance statistics for a given period of time. You can run them immediately after modifying the date range to match your actual system.

    1. Asynchronous/one-way messages incoming rates

    The following query shows the number of messages sent to one-way/async BPEL processes during a given time period, organized by process name and state:

      select composite_name composite, state, count(*) Count
      from dlv_message
      where receive_date >= to_timestamp('2012-10-24 21:00:00','YYYY-MM-DD HH24:MI:SS')
        and receive_date <= to_timestamp('2012-10-24 21:59:59','YYYY-MM-DD HH24:MI:SS')
      group by composite_name, state
      order by Count;

    2. Throughput of BPEL process instances

    The following query shows the number of synchronous and asynchronous process instances created during a given time period. It lists instances of all states, including unfinished and faulted ones, and the results include all composites across all SOA partitions:

      select state, count(*) Count, composite_name composite, component_name, componenttype
      from cube_instance
      where creation_date >= to_timestamp('2012-10-24 21:00:00','YYYY-MM-DD HH24:MI:SS')
        and creation_date <= to_timestamp('2012-10-24 21:59:59','YYYY-MM-DD HH24:MI:SS')
      group by composite_name, component_name, componenttype
      order by count(*) desc;

    3. Throughput and latencies of BPEL process instances

    This query builds on the previous one and provides more comprehensive information: not only throughput, but also the maximum, minimum and average elapsed time of BPEL process instances.
      select composite_name Composite, component_name Process, componenttype, state, count(*) Count,
             trunc(Max(extract(day from (modify_date-creation_date))*24*60*60
                     + extract(hour from (modify_date-creation_date))*60*60
                     + extract(minute from (modify_date-creation_date))*60
                     + extract(second from (modify_date-creation_date))),4) MaxTime,
             trunc(Min(extract(day from (modify_date-creation_date))*24*60*60
                     + extract(hour from (modify_date-creation_date))*60*60
                     + extract(minute from (modify_date-creation_date))*60
                     + extract(second from (modify_date-creation_date))),4) MinTime,
             trunc(AVG(extract(day from (modify_date-creation_date))*24*60*60
                     + extract(hour from (modify_date-creation_date))*60*60
                     + extract(minute from (modify_date-creation_date))*60
                     + extract(second from (modify_date-creation_date))),4) AvgTime
      from cube_instance
      where creation_date >= to_timestamp('2012-10-24 21:00:00','YYYY-MM-DD HH24:MI:SS')
        and creation_date <= to_timestamp('2012-10-24 21:59:59','YYYY-MM-DD HH24:MI:SS')
      group by composite_name, component_name, componenttype, state
      order by count(*) desc;

    4. Combine all together

    Now let's combine all of these 3 queries together, and parameterize the start and end timestamps to make the script a bit more robust. The following script will prompt for the start and end time before querying against the database:

      accept startTime prompt 'Enter start time (YYYY-MM-DD HH24:MI:SS)'
      accept endTime prompt 'Enter end time (YYYY-MM-DD HH24:MI:SS)'

      Prompt "==== Rejected Messages ====";
      REM 2012-10-24 21:00:00
      REM 2012-10-24 21:59:59
      select count(*), composite_dn
      from rejected_message
      where created_time >= to_timestamp('&&StartTime','YYYY-MM-DD HH24:MI:SS')
        and created_time <= to_timestamp('&&EndTime','YYYY-MM-DD HH24:MI:SS')
      group by composite_dn;

      Prompt " ";
      Prompt "==== Throughput of one-way/asynchronous messages ====";
      select state, count(*) Count, composite_name composite
      from dlv_message
      where receive_date >= to_timestamp('&StartTime','YYYY-MM-DD HH24:MI:SS')
        and receive_date <= to_timestamp('&EndTime','YYYY-MM-DD HH24:MI:SS')
      group by composite_name, state
      order by Count;

      Prompt " ";
      Prompt "==== Throughput and latency of BPEL process instances ====";
      select state, count(*) Count,
             trunc(Max(extract(day from (modify_date-creation_date))*24*60*60
                     + extract(hour from (modify_date-creation_date))*60*60
                     + extract(minute from (modify_date-creation_date))*60
                     + extract(second from (modify_date-creation_date))),4) MaxTime,
             trunc(Min(extract(day from (modify_date-creation_date))*24*60*60
                     + extract(hour from (modify_date-creation_date))*60*60
                     + extract(minute from (modify_date-creation_date))*60
                     + extract(second from (modify_date-creation_date))),4) MinTime,
             trunc(AVG(extract(day from (modify_date-creation_date))*24*60*60
                     + extract(hour from (modify_date-creation_date))*60*60
                     + extract(minute from (modify_date-creation_date))*60
                     + extract(second from (modify_date-creation_date))),4) AvgTime,
             composite_name Composite, component_name Process, componenttype
      from cube_instance
      where creation_date >= to_timestamp('&StartTime','YYYY-MM-DD HH24:MI:SS')
        and creation_date <= to_timestamp('&EndTime','YYYY-MM-DD HH24:MI:SS')
      group by composite_name, component_name, componenttype, state
      order by count(*) desc;
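
    Just as an illustration, the same throughput query (query 2) can be run from Java over JDBC instead of SQL*Plus. The sketch below assumes a made-up connection URL, user and password for the SOA infrastructure schema; replace them with your own values, and make sure the Oracle JDBC driver is on the classpath.

      import java.sql.Connection;
      import java.sql.DriverManager;
      import java.sql.PreparedStatement;
      import java.sql.ResultSet;
      import java.sql.Timestamp;

      public class BpelThroughput {
          public static void main(String[] args) throws Exception {
              // Assumed connection details; point these at your own SOAINFRA schema.
              String url = "jdbc:oracle:thin:@//soahost:1521/orcl";
              try (Connection con = DriverManager.getConnection(url, "DEV_SOAINFRA", "password");
                   PreparedStatement ps = con.prepareStatement(
                       "select state, count(*) cnt, composite_name, component_name, componenttype " +
                       "from cube_instance " +
                       "where creation_date >= ? and creation_date <= ? " +
                       "group by composite_name, component_name, componenttype, state " +
                       "order by count(*) desc")) {
                  // Bind the reporting window instead of hard-coding it in the SQL text.
                  ps.setTimestamp(1, Timestamp.valueOf("2012-10-24 21:00:00"));
                  ps.setTimestamp(2, Timestamp.valueOf("2012-10-24 21:59:59"));
                  try (ResultSet rs = ps.executeQuery()) {
                      while (rs.next()) {
                          System.out.printf("%s %s %s state=%s count=%d%n",
                              rs.getString("composite_name"), rs.getString("component_name"),
                              rs.getString("componenttype"), rs.getString("state"), rs.getLong("cnt"));
                      }
                  }
              }
          }
      }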

    Read the article

  • Why is String Templating Better Than String Concatenation from an Engineering Perspective?

    - by stephen
    I once read (I think it was in "Programming Pearls") that one should use templates instead of building a string through concatenation. For example, consider the template below (using the C# Razor library), stored in a properties file:

      Browser Capabilities
      Type = @Model.Type
      Name = @Model.Browser
      Version = @Model.Version
      Supports Frames = @Model.Frames
      Supports Tables = @Model.Tables
      Supports Cookies = @Model.Cookies
      Supports VBScript = @Model.VBScript
      Supports Java Applets = @Model.JavaApplets
      Supports ActiveX Controls = @Model.ActiveXControls

    and later, in a separate code file:

      private void Button1_Click(object sender, System.EventArgs e)
      {
          string browserInfoTemplate = Properties.Resources.browserInfoTemplate; // see above
          string browserInfo = RazorEngine.Razor.Parse(browserInfoTemplate, browser);
          ...
      }

    From a software engineering perspective, how is this better than the equivalent string concatenation below?

      private void Button1_Click(object sender, System.EventArgs e)
      {
          System.Web.HttpBrowserCapabilities browser = Request.Browser;
          string s = "Browser Capabilities\n"
              + "Type = " + browser.Type + "\n"
              + "Name = " + browser.Browser + "\n"
              + "Version = " + browser.Version + "\n"
              + "Supports Frames = " + browser.Frames + "\n"
              + "Supports Tables = " + browser.Tables + "\n"
              + "Supports Cookies = " + browser.Cookies + "\n"
              + "Supports VBScript = " + browser.VBScript + "\n"
              + "Supports JavaScript = " + browser.EcmaScriptVersion.ToString() + "\n"
              + "Supports Java Applets = " + browser.JavaApplets + "\n"
              + "Supports ActiveX Controls = " + browser.ActiveXControls + "\n"
              ...
      }
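
    For comparison only, here is a minimal Java analogue of the same idea using java.text.MessageFormat as the "template"; the field names and values are placeholders, not taken from the question, and the point is simply that the layout lives in one string that the filling code never has to touch.

      import java.text.MessageFormat;

      public class BrowserInfoDemo {
          // The template could equally be loaded from a resource file; either way the
          // layout can change without editing the code that supplies the values.
          private static final String TEMPLATE =
              "Browser Capabilities\n"
            + "Type = {0}\n"
            + "Name = {1}\n"
            + "Version = {2}\n";

          public static void main(String[] args) {
              // Hypothetical values standing in for a browser-capabilities object.
              String info = MessageFormat.format(TEMPLATE, "IE", "InternetExplorer", "9.0");
              System.out.println(info);
          }
      }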

    Read the article
